Control over the production and supply of key components for artificial intelligence has become one of the most significant strategic battles of the 21st century. Reuters reported on 29 April that the administration in Washington is considering abolishing the three-tier system for the export of advanced AI chips and replacing it with bilateral agreements with each individual country.
Just five days earlier, Alphabet Inc. announced its first-quarter results, with revenues of USD 90.23 billion and record capital expenditures totalling USD 17.2 billion. At the same time, the company announced investments worth USD 75 billion in global data centres and specialised AI hardware.
These two overlapping signals, the political impetus to revise export rules and the massive allocation of capital by corporations, indicate that control over physical computing resources will continue to determine who dictates the pace of technological progress.
The existing regime divides destinations into three tiers, with the strictest measures targeting countries such as China and Russia, while allies enjoy more favourable conditions. Individual authorisations are required for each shipment to the most tightly regulated countries, slowing deliveries and increasing costs even for civilian and scientific projects.
The 29 April proposal would replace this with bilateral licensing agreements negotiated with each individual country, which reduces bureaucracy but also risks opening complex intermediary chains through which countries considered security threats could obtain restricted hardware.
Changing the bureaucratic framework
Supporters of the reform argue that easier access to chips would strengthen cooperation with partners such as EU member states, Japan and Australia. Critics counter that it could weaken American control over technology flows and push restricted countries towards covert channels for acquiring sophisticated processors.
In any case, changing the bureaucratic framework only reinforces the imperative of domestic production, because no administrative measure can substitute for a physical supply of chips.
The technology giants are not waiting for the final outcome from Brussels or Washington. Alphabet reported a 43% increase in capital expenditures in the first quarter, investing USD 17.2 billion in new data centres, tensor processing units and fibre-optic connections. The company plans to invest a total of USD 75 billion in 2025 to maintain the performance of services such as Google Search and the Gemini AI assistant.
Amazon has poured more than USD 8 billion into a partnership with the startup Anthropic and is integrating the Claude model into Alexa+, while Microsoft is balancing between purchasing Nvidia chips for Azure and developing its own internal accelerators.
In addition to the obvious market motives, these investments serve another function: creating a barrier to entry for medium-sized and smaller players. With the cost of building a single hyperscale data centre exceeding USD 1 billion, non-profit projects and research institutions are increasingly reliant on publicly funded or open-source hardware alternatives to remain competitive.
Reducing risk in the supply chain
The physical availability of AI processors has become a critical link in the supply chain. In Arizona, TSMC has begun construction of a second giga-scale manufacturing complex for its "System on Wafer-X" technology, at a cost exceeding USD 20 billion, while AMD plans to produce and test the fifth generation of its EPYC processors at American facilities.
Such a presence on American soil reduces the risk of supply disruptions due to changes in export rules or increased trade tensions.
The European Commission has pledged several billion euros in subsidies to develop the semiconductor industry and regulate the ethical application of AI, but this is still modest compared to the tens of billions of dollars invested by the US.
Germany and France are considering joint projects for packaging and lithography, but without an additional EUR 20–30 billion over the next four years, the capacity created will not be able to gain a foothold in the global market.
Without a strong investment strategy, Europe could remain a passive buyer of technology, dependent on the decisions of Washington and Beijing in setting global standards.
Establishing a balance in production
In the coming period, the industry is likely to shift towards a model in which smaller providers that cannot keep up with the pace of investment seek partnerships or concentrate on narrowly defined systems. Examples include specialised AI platforms or hardware accelerators built for tasks such as real-time fraud detection.
At the same time, the largest players are constantly refining their chips for specialised tasks, from large language models to real-time video processing, while maintaining general-purpose lines.
As the cost of commercial services rises, more and more researchers and startups will turn to open projects and public initiatives in search of solutions that cost no more than the resources they actually consume.
In this way, the next phase will not only be about building ever larger facilities full of chips but also about striking a balance between centralised giants and decentralised innovators: the winner will be whoever can adapt supply chains the fastest, respond to changes in the rules, and at the same time maintain a high standard of technological precision and flexibility.
How to prevent monopolies from forming?
The current concentration of investment in a few of the largest corporations increases the risk of digital inequality; as data centre capacities and AI services remain expensive, countries and entities without access to capital become dependent on foreign technologies, further widening the gap between technologically developed and less developed regions.
This dynamic could encourage the formation of coalitions of states interested in developing public AI platforms to counterbalance private dominance.
At the same time, the speed at which companies are pushing for vertical integration (from developing chips to building hyperscale data centres) poses new challenges for regulators: how can they prevent the emergence of monopolies that control hardware, software and the distribution of services, without stifling innovation?
The answer to this question could shape not only the technological scene but also economic and geostrategic relations over the next decade.
The shift in export regulations for AI chips and the tech giants' heavy investment in infrastructure reveal a central reality of the modern world: in the race for digital supremacy, the ability to secure and distribute hardware carries as much weight as, if not more than, the software itself.
As the US works to simplify the rules to protect its own interests, companies are pre-emptively building capacities, securing supply and diversifying supply chains.
Those who establish a robust and adaptable network of technologies will set the pace of future technological advancement in a world where a single policy change can alter the game.