The U.S. Department of Justice and the Federal Trade Commission (FTC) have reached an agreement to separately investigate Microsoft, OpenAI and Nvidia for potential anti-competitive behavior.
Meanwhile, with the introduction of the EU AI Act, the European Commission's interest in AI investigations is growing. In March and May 2024, the European Commission sent formal information requests to Google, Facebook, TikTok and Microsoft, asking for information on the risks of generative AI and the measures taken to mitigate them. On July 16, 2024, the UK Competition and Markets Authority (CMA) announced the launch of an investigation into Microsoft's hiring of Inflection AI's core team, to determine whether the recruitment amounts to an "acquisition".
This competitive scrutiny feels so urgent because, on the one hand, competition regulators do not want to be caught off guard by big tech companies again; on the other hand, the conditions needed to develop large AI models seem inherently to push toward "concentration" and "restriction". Both the EU and the US therefore appear to prefer "ex-ante intervention" before tech companies secure a dominant position in the AI market.
### Inherent "anti-competitive" attributes of AI development?
### 1. Barriers to cloud infrastructure and computing power
Some experts believe that an oligopolistic AI market is almost inevitable.
For AI startups, model training is an expensive fixed cost and a major barrier to entering the AI market. These costs are difficult to sustain on investor funding alone.
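As a purely illustrative back-of-envelope sketch (every figure below is a hypothetical assumption, not a number reported in this article), the scale of this fixed cost can be made concrete by multiplying the GPU-hours a single training run consumes by a cloud rental rate:

```python
# Back-of-envelope estimate of a large-model training run's fixed cost.
# All numbers are illustrative assumptions, not figures from this article.

gpu_count = 10_000          # assumed number of accelerators rented
training_days = 90          # assumed length of the training run
price_per_gpu_hour = 2.50   # assumed cloud rental price in USD

gpu_hours = gpu_count * training_days * 24
training_cost = gpu_hours * price_per_gpu_hour

print(f"GPU-hours: {gpu_hours:,}")               # 21,600,000
print(f"Estimated cost: ${training_cost:,.0f}")  # $54,000,000
```

Under these assumed figures, a single training run costs tens of millions of dollars before any revenue is earned, which is exactly the kind of fixed cost that is hard to cover with investor funding alone.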
Only the largest big tech companies (mainly Google, Amazon, Microsoft and Meta, as well as Apple and Nvidia, collectively known as GAMMAN) have the cloud infrastructure and computing power needed to meet the training needs of the largest AI models. GAMMAN's control over key assets limits smaller startups to subordinate roles.
This explains why AI startups are often willing to seek cooperation agreements with big tech companies. The most typical example is OpenAI's partnership with Microsoft, in which OpenAI gains access to computing infrastructure and Microsoft gains access to its latest models in return. Alternatively, AI startups can stay away from the technological frontier and focus on small AI models that perform well on specific tasks, or build applications on top of large models using proprietary data.
### 2. Barriers to accessing copyrighted data
Many high-quality text, audio and image training datasets are protected by copyright, and authors can in principle charge licensing fees for the use of their works. The additional cost reduces the supply of training data, raises the cost of model training and weakens competition among model developers.
The EU AI Act requires model developers to comply with EU copyright law as laid down in the Copyright Directive, in particular Article 4, which grants a copyright exception for text and data mining that extends to commercial use but allows copyright holders to opt out of it. There is some legal uncertainty in the US, with several cases pending. If courts rule that the fair use exception does not apply, AI investors will face the risk of punitive statutory damages. To avoid this, the largest AI companies have already signed data licensing agreements with major media companies. For example, OpenAI has signed agreements with The New York Times, Bertelsmann Media Group and the online platform Reddit.
If countries strictly enforce copyright laws, it will become harder for AI models to obtain training data, and smaller AI developers and startups may lack the financial resources to pay for copyright licenses, to the point of being squeezed out of the market entirely.
### 3. Convenient channels to reach users
AI model developers need commercial channels to generate revenue to pay for the costs of training and running models.
Some startups build their own business models from scratch and are quite successful. For example, OpenAI created a paid GPT app store and charges subscription fees to professional ChatGPT users; ChatGPT reached over 100 million users within a year of its launch.
However, for AI startups with weak or no network effects, it is difficult to build a business model from scratch. A simpler way to generate revenue is to partner with GAMMAN and embed AI models into their mature business models. For example, Google is embedding its own and third-party AI models into its search engine and other services, charging high prices for access to some AI-driven services.
Therefore, startups that have not yet built a business model are also willing to cooperate with GAMMAN by embedding their AI models at the downstream end of the value chain, inside GAMMAN's existing user-facing services; in return, the cooperation runs the other way at the upstream end of the value chain, where GAMMAN grants startups access to computing infrastructure and, possibly, training data.
### Are "coopetition" agreements between GAMMAN and AI startups considered "acquisitions"?
For the above reasons, AI startups that want to maintain a technological edge need to sign coopetition agreements with GAMMAN to overcome the barriers of training costs and customer acquisition. GAMMAN can vertically integrate along the entire AI value chain, while startups mainly cover the input and intermediate parts of the value chain.
Competition authorities are skeptical of these deals and agreements, concerned that coopetition agreements could become a Trojan horse through which GAMMAN exerts influence and reduces competition from AI startups. One important legal question is whether GAMMAN's actions, from strategic investments to poaching founders and technical talent from startups, amount to a new form of acquisition, a "quasi-acquisition", that merely circumvents antitrust oversight.
But several investigations by competition authorities have so far found no conclusive evidence.
Although the European Commission concluded in April 2024 that Microsoft's investment arrangements in OpenAI did not constitute an acquisition, it is still considering whether to open a formal antitrust investigation into the arrangement on the grounds that it could distort the EU internal market. Germany took a similar position: it determined in November 2023 that the arrangement was not subject to German merger control, but retained the possibility of re-examining it if Microsoft increases its influence over OpenAI in the future.
US antitrust enforcement agencies have now also joined the investigation.
### Breakthrough points for generative AI antitrust investigations
If the concept of "quasi-acquisition" proves legally difficult to establish, regulators are likely to look instead for breakthroughs in the control of one or more key inputs that generative AI relies on.
### 1. Data
Unlike hardware, training data is non-rival and can be used by many people simultaneously. However, many high-quality training data sources are subject to copyright and licensing fees.
This is especially true in professional fields or areas where data regulation is stricter (such as healthcare or finance). Pre-training or fine-tuning a model with deep expertise in these areas may require access to large amounts of data that is not widely available and is difficult for new market entrants to collect.
Of course, merely possessing large amounts of data is not illegal. However, antitrust enforcement agencies may pay special attention to companies' control over data to discern whether it may reduce the supply of data, create barriers to access, and hinder the full development of fair competition.
### 2. Talent
Another important input for generative AI is labor expertise. Developing generative models requires a large number of engineers and researchers who must have specific and relatively scarce skills and a deep understanding of machine learning, natural language processing, and computer vision. The talent that companies are able to acquire and retain may play a key role not only in the development path of generative AI but also in the speed of development.
Due to the scarcity of talent, powerful companies may have an incentive to lock in employees, thereby stifling competition from actual or potential competitors. To keep the market competitive and innovative, it is essential that talented individuals with innovative ideas can move freely and, crucially, are not hindered by non-compete clauses.
The UK CMA's July announcement of an investigation into Microsoft's hiring of the core team of Inflection AI (an OpenAI competitor) falls into this category. Likewise, the rule announced by the US FTC on April 23, 2024, comprehensively banning new non-compete agreements for all employees (including senior management), is aimed at promoting the flow of tech talent. However, the rule was quickly challenged in court, and its outlook remains uncertain.
### 3. Computing resources
Generative AI systems typically require large amounts of computing resources. Computing often requires dedicated hardware, such as computers with specialized chips like graphics processing units (GPUs), or access to computing resources through cloud computing services. However, chips are costly to acquire and maintain, cloud services are expensive, and currently only a few companies supply them, which increases the risk of anti-competitive behavior.
Today, some specialized chip markets are already highly concentrated, and demand for server chips may exceed supply. For example, the surge in demand for server chips that can be used to train AI has led to shortages, prompting major cloud server providers such as AWS, Microsoft, Google, and Oracle to "limit the availability of products to customers." Companies in highly concentrated markets are more likely to engage in unfair competitive practices or other antitrust law violations.
In 2022, after more than two months of litigation brought by the US Federal Trade Commission, Nvidia abandoned its acquisition of Arm. The FTC argued that the deal would allow Nvidia to suppress innovative competing technologies and unfairly undermine Nvidia's rivals that depend on Arm's licensing business.