
The Rise of AI Cartels – How a Handful of Companies Are Monopolizing AI Development

AI was supposed to level the playing field by democratizing innovation, opening doors for independent developers, and ushering in a golden age of technological progress. Instead, we got an AI oligarchy. What started as a research-driven field full of open experimentation has turned into a high-stakes industry where a handful of corporations hoard the most valuable resources (compute power, proprietary data, and regulatory influence) while everyone else fights for scraps.


The result? AI is no longer a wide-open competitive frontier; it’s a cartel. A handful of companies are consolidating control, dictating the speed, direction, and applications of AI development. This shift isn’t just about business strategy; it’s reshaping innovation, economic power structures, and global governance. And unless something changes, the future of AI won’t be decided by researchers, entrepreneurs, or policymakers; it’ll be dictated by the handful of corporations that own the infrastructure.


What This Article Will Cover:


  • The key players and how they are monopolizing AI

  • The role of compute power, data access, and regulation in AI centralization

  • The impact of AI monopolization on society, competition, and innovation

  • Potential solutions and what comes next


The Power Players – Who Controls AI?


AI used to be the Wild West: an open frontier where academic researchers, indie developers, and scrappy startups pushed the boundaries of what was possible. But that era is fading fast. Now, AI development is increasingly concentrated in the hands of a few corporate giants, you know, the ones with the deep pockets, massive datasets, and enough computing power to bend the industry to their will. These companies don’t just build AI; they control the rules of the game, from who gets access to cutting-edge models to what counts as “ethical” AI. When a handful of players own the infrastructure, they don’t just shape the technology, they decide who gets to compete at all.


The AI Oligopoly: Key Companies Dominating AI Development


The modern AI ecosystem is controlled by a handful of corporations that own, fund, or influence the majority of large-scale AI projects.


OpenAI (Microsoft-backed)—Initially founded as a nonprofit to democratize AI, OpenAI has since pivoted to a for-profit model with deep financial backing from Microsoft. Microsoft invested $10 billion in OpenAI, securing exclusive integration of its models into Azure Cloud and Microsoft products. OpenAI’s GPT models dominate the generative AI space, yet their training data and full methodologies remain opaque despite their initial claims of openness.


Google DeepMind & Google AI—Google’s DeepMind pioneered some of the most advanced AI research (AlphaFold, AlphaZero) and remains at the cutting edge of reinforcement learning and AGI research. Google also owns Gemini AI (formerly Bard) and Tensor Processing Units (TPUs), which give it an exclusive compute power advantage. With Android, YouTube, and Google Search, the company has access to some of the largest datasets in human history—fueling its AI dominance.


Meta AI—Meta has aggressively pursued open-source AI models like LLaMA while still leveraging its unparalleled social media data from Facebook (META), Instagram, and WhatsApp. While its models are marketed as open, the scale of its data resources gives Meta an advantage no independent developer can match.


Anthropic (Amazon & Google-backed)—Anthropic, founded by ex-OpenAI employees, is a rising force in AI safety and LLMs. Google has invested $2 billion, while Amazon committed $4 billion, securing their stakes in its Claude AI model. The company presents itself as an alternative to OpenAI but still relies on big cloud infrastructure for survival.


Amazon AI & Apple’s AI Play—Amazon Web Services (AWS) hosts a significant portion of AI models globally, meaning many AI startups and even competitors are financially dependent on AWS infrastructure. Apple, while not leading in generative AI, is embedding AI into its ecosystem (Siri, Apple Intelligence, on-device AI models). Apple’s advantage is data privacy and deep integration with iOS devices, which will shape AI-powered consumer products.


Chinese Tech Giants: Baidu, Alibaba, Tencent (BAT)—China’s government-backed AI initiatives are centered around Baidu, Alibaba, and Tencent, which receive direct state support. Unlike Western AI firms, China’s AI development is closely aligned with government surveillance and state priorities. AI regulation in China is far stricter, ensuring that only state-aligned companies thrive.


Why/How These Companies Control AI


These tech giants maintain dominance by controlling three key resources:


Exclusive Access to Compute Power – AI training requires massive computational resources (e.g., NVIDIA GPUs, TPUs, custom accelerators). These companies have priority access to the most powerful AI chips, while startups and independent researchers struggle with high costs and GPU shortages.


Ownership of Proprietary Data – AI models are only as good as the data they train on. The largest AI firms have access to billions of user interactions, search queries, and proprietary datasets, allowing them to train better models. Privacy laws like GDPR ironically reinforce monopolies by making it harder for new entrants to collect data.


Regulatory Influence – These firms are deeply involved in shaping AI policies and regulations, ensuring that laws favor incumbents while making compliance expensive for new players. They promote AI ethics initiatives and self-regulation while simultaneously securing first-mover advantages in deployment.
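To make the compute barrier above concrete, here’s a rough back-of-envelope estimate of what a frontier-scale training run costs in rented cloud GPUs. Every number in it (total FLOPs, per-chip throughput, hourly rate, utilization) is an illustrative assumption, not a published figure from any company:

```python
# Back-of-envelope estimate of frontier-model training cost.
# All numbers below are illustrative assumptions, not published figures.

def training_cost_usd(total_flops, flops_per_gpu_per_sec, gpu_hourly_rate,
                      utilization=0.4):
    """Rough cloud bill to train a model needing `total_flops` of compute."""
    # Effective useful FLOPs delivered per GPU per hour at the assumed utilization.
    effective_flops_per_hour = flops_per_gpu_per_sec * 3600 * utilization
    gpu_hours = total_flops / effective_flops_per_hour
    return gpu_hours * gpu_hourly_rate

# Assumed: ~1e25 FLOPs for the run, ~1e15 FLOP/s per accelerator,
# $2.50 per GPU-hour on demand, 40% sustained utilization.
cost = training_cost_usd(1e25, 1e15, 2.50)
print(f"~${cost / 1e6:.0f}M")  # tens of millions of dollars
```

Even with generous assumptions, the sketch lands in the tens of millions of dollars for compute alone, before data, staff, and failed experiments, which is the gap the article is describing between incumbents and everyone else.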



So, what?


AI’s future is being dictated by a few corporations with overwhelming control over infrastructure, research, and policy-making. This raises critical concerns.


Suppression of Competition – The Death of AI Startups?


Startups don’t stand a chance against trillion-dollar behemoths. The AI industry is quickly becoming an exclusive club, where only the wealthiest corporations can afford a seat at the table. Training cutting-edge AI models isn’t just expensive, it’s astronomically expensive. We’re talking massive compute power, NVIDIA GPUs that cost more than luxury cars, and proprietary datasets hoarded like gold. Unless you’re a tech giant with deep pockets, good luck keeping up.


For independent researchers and startups, the barriers to entry are nearly insurmountable. Either they take funding from the very corporations that dominate the industry, or they abandon their ambitions altogether. And even when a promising AI startup does break through? It’s either acquired, crushed, or outspent into oblivion by the dominant players. The result? A tech landscape where innovation slows, competition dies, and AI’s future is dictated by the same handful of companies that already run the show.


AI monopolies don’t just corner the market, they suffocate it. When a few companies control the infrastructure, they don’t just control AI’s development, they control who gets to participate in the future at all.


Compute Power Costs – AI startups can’t afford the massive computational resources required to train models, forcing them to rely on AWS, Microsoft Azure, or Google Cloud—who also happen to be their competitors.


Data Access Restrictions – AI requires vast datasets, but major tech firms own proprietary data and can cut off competitors from using their platforms (e.g., Twitter/X restricting OpenAI’s access to its data).


Acquisition & Absorption – The biggest AI firms buy out promising competitors before they can become threats. OpenAI, Google, and Meta have all acquired AI startups to absorb their technology and talent rather than allowing them to flourish independently.


VC Funding Bias – Venture capital investment is overwhelmingly directed toward incumbents and corporate-backed AI initiatives, making it harder for independent companies to break into the market.


AI Deployment for Profit, Not Public Good 


Will AI serve corporate interests over humanity’s needs? AI is often marketed as a transformative force for education, healthcare, and public services, yet its development is increasingly driven by profit motives rather than societal benefit. The companies leading AI innovation are also publicly traded corporations, meaning their primary obligation is to shareholders, not the public. Instead of focusing on AI applications that could improve access to knowledge, sustainability, or global equity, resources are funneled toward high-profit use cases, such as advertising algorithms, predictive financial models, and enterprise automation. If AI continues down this path, we risk a future where technological advancements primarily serve corporate agendas rather than the broader needs of humanity.


As AI becomes increasingly profit-driven, its applications are shaped by corporate financial incentives rather than societal needs.


Pay-to-Play AI Access – The most powerful AI models (GPT-4, Gemini, Claude) are behind high-cost API access paywalls, restricting accessibility to well-funded enterprises.


Ethical AI vs. Profitable AI – AI companies prioritize scalable revenue models, often at the cost of safety, ethics, and bias mitigation.


Advertising & Monetization Models – AI is increasingly being optimized for engagement rather than truth (e.g., AI-driven social media algorithms that reinforce bias and misinformation).
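To illustrate the pay-to-play dynamic in the list above, here’s a toy cost model for metered API access to a hosted model. The per-token prices and traffic numbers are hypothetical placeholders, not any vendor’s actual rates:

```python
# Toy model of metered LLM API pricing. Prices and volumes are
# hypothetical placeholders, not any provider's actual rates.

def monthly_api_cost(requests_per_day, prompt_tokens, completion_tokens,
                     price_in_per_1k=0.01, price_out_per_1k=0.03, days=30):
    """Monthly bill for a service making `requests_per_day` model calls."""
    # Input and output tokens are typically billed at different per-1k rates.
    per_request = (prompt_tokens / 1000) * price_in_per_1k \
                + (completion_tokens / 1000) * price_out_per_1k
    return requests_per_day * per_request * days

# A modest product: 50k requests/day, 1k-token prompts, 500-token replies.
print(f"${monthly_api_cost(50_000, 1000, 500):,.0f}/month")
```

Under these assumptions the bill runs to tens of thousands of dollars a month for a mid-sized product, trivial for an enterprise, prohibitive for a bootstrapped team, which is exactly the accessibility gap the section describes.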


Ethical Risks, Bias & Transparency Issues


Fewer players controlling AI means less accountability in how models are trained and used, and it becomes harder to hold those players responsible for bias, misinformation, and opaque decision-making. Most cutting-edge AI models, including LLMs like GPT-4 and Gemini, operate as black boxes, meaning even their creators often struggle to explain how or why certain decisions are made. This lack of transparency becomes even more concerning when AI is used in hiring, law enforcement, lending, and healthcare, areas where algorithmic bias can reinforce societal inequalities.


With only a handful of companies developing these models, bias remains unchecked, and independent auditing becomes increasingly difficult. Without transparency and external oversight, AI risks becoming an unregulated force with real-world consequences for marginalized communities, democratic institutions, and global economies.


Ethical AI development is being sidelined in favor of rapid deployment, leaving regulators, researchers, and society struggling to keep up. The fewer companies controlling AI, the less accountability there is for its unintended consequences.


AI Bias Reinforcement – Without transparency in training data, AI models perpetuate and amplify societal biases, disproportionately affecting marginalized groups.


Algorithmic Black Boxes – Closed-source AI models mean even researchers don’t fully understand how these systems make decisions.


Deepfakes & Disinformation at Scale – AI-driven fake news, synthetic media, and propaganda campaigns are growing concerns. Without independent oversight, AI can be weaponized for misinformation at unprecedented scales.


National & Geopolitical Risks 


AI is now a national security asset, and its development is tied to international power struggles. AI has become one of the most strategic assets in global power struggles, rivaling energy resources and military defense systems in importance. Governments recognize that AI supremacy is now a national security issue, and countries like the U.S., China, and the EU are racing to develop, regulate, and control their own AI ecosystems. Nations that lack domestic AI capabilities will become technologically dependent on foreign AI models, creating new vulnerabilities in cybersecurity, intelligence, and economic stability.


The increasing weaponization of AI, whether through automated cyberattacks, deepfake-driven disinformation campaigns, or AI-powered warfare, means that AI dominance isn’t just an economic advantage; it’s a geopolitical necessity. As AI monopolies strengthen, the battle for control over these technologies will define global politics for decades to come.

The question isn’t just who controls AI today, but who will control the future of intelligence itself?

The monopolization of AI isn’t just a business problem; it’s a national security issue and a global power struggle that will define power dynamics in the coming decades.


AI as a Geopolitical Asset – Governments are recognizing AI as a strategic resource, with the U.S., China, and the EU racing to secure dominance.


Dependence on Big Tech for National Security – AI advancements in cybersecurity, defense, and intelligence gathering are increasingly dependent on private corporations. How much control should private AI firms have over national security infrastructure?


The Rise of AI Cold Wars – Restricting AI chip exports, controlling AI-powered cyberwarfare, and manipulating global AI regulations are already shaping a new digital arms race.


The Impact of AI Monopolization


As AI development becomes increasingly concentrated in the hands of a few major corporations, its impact extends far beyond the tech industry. The monopolization of AI is shaping economies, job markets, innovation, ethics, and even global power structures.


The Automation Shockwave – Who Wins and Who Loses?


AI is concentrating power among a small elite while displacing millions from the workforce. One of the most immediate impacts of AI monopolization is its effect on jobs and economic inequality.


AI Is Replacing White-Collar Jobs – Unlike past automation waves that disrupted blue-collar labor, today’s AI advancements are replacing knowledge workers (journalists, customer service reps, paralegals, coders).


AI-Powered Labor Efficiency Is Unevenly Distributed – Large corporations with AI infrastructure will increase productivity and profits, while smaller businesses and workers without AI access will struggle to compete.


Widening Economic Inequality – Just as industrial monopolies widened economic gaps in past revolutions, AI monopolization will further divide the “AI-empowered” from the “AI-replaced.”


Why We Should Be Paying Attention


The concentration of AI power into the hands of a few corporations isn’t just an industry issue—it’s a societal one. The consequences are clear.



Unless governments, independent researchers, and the public push for greater oversight, transparency, and accessibility, AI will follow the path of other monopolized industries, concentrating power in ways that benefit the few while harming the many.


What Can Be Done? Potential Solutions & Alternatives


The monopolization of AI raises urgent concerns, but is there a way to decentralize control and create a more equitable AI ecosystem?


The challenge is that AI development requires high-cost infrastructure, proprietary datasets, and regulatory influence, making it difficult for new players to compete. However, several potential solutions could help prevent AI from becoming fully monopolized.


Breaking Up AI Monopolies – The Antitrust Debate


Antitrust measures might slow down AI monopolization, but they won’t fix the fundamental accessibility gap. Regulating AI like Big Tech or Big Oil (breaking up dominant players, enforcing competition laws) sounds good on paper. But without deeper interventions, the same companies will still control the infrastructure that makes AI possible.


To level the playing field, regulators need to target the real choke points: compute power and proprietary data. Right now, cloud giants like AWS, Azure, and Google Cloud can give their own AI initiatives priority access to computing resources while everyone else struggles with skyrocketing costs. Governments should force cloud providers to offer equitable access to independent researchers and startups because if AI infrastructure stays locked behind corporate walls, smaller players will never stand a chance.


Another key issue? Anti-competitive acquisitions. The biggest AI firms aren’t just dominating the market, they’re buying up anything that could threaten them. OpenAI, for example, has absorbed multiple research labs and talent pools, consolidating its position while effectively neutralizing external competition. Regulators need to block these strategic buyouts before every disruptive startup becomes just another subsidiary of a trillion-dollar giant.


And finally, transparency needs to be enforced. Not as a suggestion, but as a legal requirement. AI companies should be forced to disclose their training data sources, biases, and potential ethical risks. Right now, AI development is happening behind closed doors, with a handful of corporations shaping the future of intelligence without public accountability. Without guardrails, AI won’t democratize technology, it’ll just entrench power in the hands of the few who already have it.


Regulating AI monopolies presents significant challenges, as past attempts to rein in Big Tech through antitrust actions have largely failed due to legal loopholes and aggressive corporate lobbying. AI companies are likely to employ similar tactics, making it difficult for regulators to impose meaningful restrictions. Additionally, the rapid pace of AI development far outstrips the slow-moving nature of government policy, meaning that by the time regulations are enacted, AI firms may have already cemented their dominance or adapted to new legal constraints.


Even if regulators were successful in breaking up AI monopolies, it would not solve the core issue of resource disparity: smaller players would still lack access to high-performance computing, vast datasets, and financial backing, leaving them unable to compete on the same level. Without a comprehensive approach that addresses both monopolistic practices and resource accessibility, regulatory efforts may have limited impact on AI’s growing concentration of power.


Open Source AI – Can Decentralization Work?


Open-source AI lowers barriers to entry but does not address AI’s infrastructure cost problem. Some organizations are pushing for open-source AI models as an alternative to centralized corporate control. The goal? Make AI accessible to independent researchers, smaller firms, and nonprofits.


Examples of Open AI Efforts


  1. Hugging Face – An open platform for AI models, datasets, and research.

  2. Stability AI (Stable Diffusion) – One of the most widely available generative AI models.

  3. EleutherAI – A non-profit research lab developing open-source LLMs.


Challenges & Risks


Open-source AI often gets hyped as the silver bullet for AI monopolization, but in reality, it’s more like a rubber band holding back a tidal wave. Sure, making models open-source sounds great, until you realize the compute costs are still sky-high. Even open models need high-end GPUs, massive cloud resources, and enough electricity to power a small town. Independent developers and small organizations might as well be bringing a slingshot to a tank fight. They can’t afford the infrastructure, forcing them to rent computing power from AWS, Google Cloud, or Microsoft Azure, which, hilariously, means paying the very corporations they’re trying to compete with.


Then there’s the “open” part of open-source AI—which, if we’re being honest, is a double-edged sword. Freely available models can be weaponized for all kinds of malicious purposes, from creating deepfakes and automated cyberattacks to building mass surveillance systems. Without proper safeguards, bad actors can use AI to spread misinformation, manipulate public opinion, and outpace human analysts in cybersecurity. The result? Governments and researchers are already sounding the alarm over AI-generated propaganda, automated fraud, and privacy-shattering surveillance tools that operate in the wild west of regulation.


And let’s not forget how big tech loves to play the open-source card; as long as it benefits them. The pattern is all too familiar: corporations co-opt open research, integrate it into their proprietary systems, and then slam the door on true accessibility. Take Meta’s LLaMA-2, for example. It was built on open-source research, but good luck getting real access to it. Meta gets to reap the benefits of open development while keeping proprietary control over deployment. This isn’t open-source for the public good, it’s open-source for corporate dominance.


Without intervention, open-source AI might just become another tool in big tech’s arsenal, propped up as a symbol of inclusivity and innovation while ensuring the real power and scalability stay right where they’ve always been: at the top.


Public AI Infrastructure – A Tech Commons?


Public AI infrastructure could democratize access but requires massive investment and strong governance. Governments and global organizations could fund and develop publicly accessible AI infrastructure to prevent private firms from controlling AI’s future.


Potential Public AI Initiatives


  1. Government-backed AI research labs (modeled after CERN or NASA).

  2. Publicly available computing resources for independent AI research.

  3. Decentralized AI cloud networks to reduce dependence on corporate servers.


Challenges & Risks


Governments face significant hurdles in developing publicly funded AI infrastructure, primarily due to funding constraints and political resistance. AI research and deployment require billions in investment, and policymakers may struggle to justify such spending when private companies are already leading innovation. Many governments lack the technical expertise to build and maintain cutting-edge AI systems, forcing them to partner with private firms, which can lead to conflicts of interest and further entrench corporate dominance. Without strong political will and long-term commitment, public AI initiatives risk being underfunded, deprioritized, or inefficiently managed, leaving the field open for tech monopolies to dictate AI’s trajectory.


Another challenge is the question of control: who oversees and regulates public AI? In some cases, state-run AI efforts could become highly politicized, shaping AI development to align with government interests rather than public welfare. For example, China tightly regulates AI, using it for state surveillance, censorship, and ideological enforcement. Meanwhile, international cooperation on AI governance remains weak and fragmented, with the U.S., China, and the EU all adopting different AI policies, making global collaboration difficult. Without a unified approach, AI development risks becoming a race for dominance rather than a collective effort to ensure its ethical and equitable use.


Stronger AI Regulations – A Global Approach


Global AI regulation is necessary but difficult to enforce across borders. Since AI monopolies shape economies, politics, and national security, governments must take an active role in AI governance to prevent corporate overreach.


Key Regulatory Proposals


  1. The EU’s AI Act – The world’s first comprehensive AI law, requiring AI risk assessments and accountability measures.

  2. U.S. AI Executive Order – The Biden administration is exploring AI safety policies, but regulation remains fragmented.

  3. China’s Strict AI Controls – China mandates government oversight of all AI models deployed in the country.


Challenges & Risks


AI regulation is increasingly fragmented, with the EU, U.S., and China each taking vastly different approaches, leading to uncertainty for businesses and researchers trying to navigate global compliance. The EU’s AI Act is one of the most stringent, focusing on risk-based regulation and transparency, while the U.S. has taken a more hands-off approach, favoring self-regulation and corporate influence. Meanwhile, China imposes strict government oversight, ensuring that AI aligns with state priorities, particularly in surveillance and censorship.


Adding to the complexity is regulatory capture, where the same AI companies pushing for regulation are also shaping it to serve their own interests, ensuring that new laws favor large incumbents while making it harder for smaller players to compete. At the same time, there’s a risk that over-regulation could stifle innovation, creating excessive compliance burdens that slow progress, restrict experimentation, and prevent startups from entering the AI market. Striking the right balance between accountability and innovation remains one of the greatest challenges in AI governance.



The Path Forward – What’s Realistic?


Each solution has trade-offs, but a multi-pronged approach is likely the best path forward.


  1. Antitrust enforcement to prevent monopolistic behavior

  2. Open-source AI models to promote accessibility

  3. Government-supported AI research to balance private interests

  4. Stronger regulatory frameworks to prevent abuse


Without intervention, AI monopolies will continue shaping technology, economies, and global power structures without public accountability. 


The Future of AI: A Monopoly or a Shared Asset?


AI has hit a critical crossroads and, unsurprisingly, the road ahead is looking more like a corporate toll road than an open highway. What started as a field of discovery, experimentation, and independent breakthroughs has morphed into a high-stakes industry controlled by a handful of corporations. These companies aren’t just leading AI development, they’re locking everyone else out. With unmatched compute power, mountains of proprietary data, and enough regulatory influence to shape policy in their favor, they’ve built an AI ecosystem that primarily serves their own interests.


And if history is any guide, we know exactly where this is going. AI is on track to follow the same monopolization playbook we’ve seen before: concentrate power, crush competition, and prioritize profits over public benefit. The consequences won’t just affect who builds AI, but who gets to use it, who benefits from it, and who gets left behind. If left unchecked, AI won’t democratize innovation, it’ll consolidate it into the hands of the few who already run the show.


  • Innovation: New players struggle to enter the market due to high costs and restrictive access to resources.

  • Labor & Economy: AI-driven automation is accelerating, benefiting corporations while leaving workers vulnerable.

  • Ethics & Transparency: The increasing opacity of AI decision-making makes it harder to address bias, misinformation, and potential harms.

  • Geopolitics: AI is becoming a tool of national security, cyber warfare, and global influence, with tech companies wielding disproportionate power over governments.


Reversing AI’s monopolization won’t be easy, but it is not inevitable. There are practical solutions, from regulatory reforms and antitrust measures to public AI initiatives and open-source development. However, these require public pressure, governmental action, and industry accountability.



📣 Call to Action – Join the Conversation


What do you think?

  • Should AI companies be regulated like Big Tech or Big Oil monopolies?

  • Is open-source AI a viable alternative, or does it create new risks?

  • How can AI development be made more equitable and transparent?




Want more insights? 🚀

Follow Sarah Mancinho on LinkedIn

Subscribe to Tech Revolution on Substack

Visit the Digital Society Review and subscribe to the blog

Check out the Mental Models & Mastery newsletter on LinkedIn

Stay updated with bite-sized trends on my LinkedIn newsletter: Emerging Technology



