
Silicon Redux: How AI Is Becoming the New Chip War and Reshaping Geopolitics

  • Writer: Chockalingam Muthian
  • Sep 28
  • 5 min read

From integrated circuits to intelligent circuits – a historical lens


At first glance, the global scramble for generative AI models and the chips that power them may feel like a brand‑new phenomenon. Yet history offers an instructive parallel. In the 1960s Texas Instruments adapted the integrated circuit, previously a military component, to build what became the Canon Pocketronic, the first handheld calculator. Conceived in 1965 and commercialised in 1970–71, the Pocketronic quickly sold millions of units. By shifting computing from mainframes to personal devices, integrated circuits enabled not just pocket calculators but microprocessors, personal computers and ultimately the digital economy. Semiconductors became a core strategic resource: Washington and Tokyo subsidised fabs, while Cold War export controls kept advanced chips out of adversaries’ hands. The formation of Taiwan’s TSMC in the late 1980s arose directly from this era’s industrial policies.


The geopolitical significance of semiconductors was thus established decades ago: whoever controlled chip production controlled innovation. Supply chains were intentionally globalised to tie allies together and isolate rivals. Asian democracies such as South Korea and Taiwan built their economies around contract manufacturing, while the United States and its allies retained near‑monopolies in chip design and lithography equipment. The result was a delicate balance: technological prosperity could be shared among allies, but disruption of any node, whether through trade wars or conflict, threatened the entire system. As we enter the age of artificial intelligence, a similar dynamic is emerging, albeit amplified by the power of software and data.


AI chips: hardware of intelligence and drivers of competition

Unlike general‑purpose semiconductors, AI accelerators are specialised hardware designed to speed up AI workloads. These chips, such as GPUs, field‑programmable gate arrays (FPGAs) and application‑specific integrated circuits (ASICs), handle massive datasets and execute complex mathematical computations with improved energy efficiency. Nvidia’s CUDA framework and its dominant GPU ecosystem illustrate how software and hardware co‑evolve. The HBKU white paper notes that the AI chip race is spurring tech giants to design custom accelerators, sparking a new quest for supremacy. In other words, control over the “brains” of AI models is as strategic today as control over transistor yields was half a century ago.
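The scale argument behind these accelerators can be made concrete with a back‑of‑envelope calculation. The sketch below is purely illustrative: the layer dimensions are hypothetical, and the CPU and accelerator throughput figures are assumed orders of magnitude, not vendor specifications. It estimates the floating‑point operations in a single dense layer of the kind generative models execute billions of times:

```python
# Back-of-envelope: why AI workloads demand specialised accelerators.
# Multiplying an (m x k) activation matrix by a (k x n) weight matrix
# costs roughly 2*m*k*n floating-point operations (one multiply plus
# one add per term). All figures below are illustrative assumptions.

def matmul_flops(m: int, k: int, n: int) -> int:
    """FLOPs for one (m x k) @ (k x n) dense matrix multiply."""
    return 2 * m * k * n

# One hypothetical transformer feed-forward projection:
# 2048 tokens, hidden size 4096, 4x expansion factor.
flops_per_layer = matmul_flops(2048, 4096, 4 * 4096)

# Rough sustained-throughput assumptions (orders of magnitude only):
# a CPU core near 1e11 FLOP/s vs. an AI accelerator near 1e14 FLOP/s.
cpu_seconds = flops_per_layer / 1e11
accel_seconds = flops_per_layer / 1e14

print(f"{flops_per_layer:.3e} FLOPs for one layer")
print(f"CPU ~{cpu_seconds * 1e3:.0f} ms vs accelerator ~{accel_seconds * 1e3:.2f} ms")
```

Even under these rough assumptions, a single layer demands hundreds of billions of operations, which is why dedicated matrix‑multiply hardware, not general‑purpose cores, has become the contested resource.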


The geopolitical implications are already visible. China has declared its ambition to become a global AI superpower by 2030, and Chinese tech giants such as Huawei and Cambricon are heavily supported by state investment and favourable regulation. The same white paper emphasises that the United States is concerned about China’s rapid advances and has undertaken policies to regain semiconductor manufacturing capabilities. To strengthen resilience, Washington passed the CHIPS and Science Act, while Brussels launched an EU Chips Act; both schemes aim to expand domestic production, reduce reliance on external suppliers and maintain export‑control leverage. The underlying fear is that domination of AI hardware could translate into economic coercion, espionage advantages and military superiority.


A three‑way contest: US, China and the EU

During the Cold War the semiconductor industry was a two‑horse race between the US and Japan. Today’s AI geopolitics features a triangular contest among the US, China and the European Union. The US still leads in chip design and AI research, boasting firms such as Nvidia, Google and OpenAI. But manufacturing has drifted to Taiwan and South Korea, creating vulnerabilities: the Taiwan Strait now functions as both supply‑chain linchpin and geopolitical flashpoint. China, despite US export controls, continues to invest billions in local fabs and to enlist its tech giants. Huawei’s Ascend series of AI chips and Baidu’s Kunlun processors exemplify Beijing’s strategy to indigenise hardware and reduce dependence on Western suppliers. Meanwhile, the EU, long reliant on imported semiconductors, has begun courting investments for advanced fabrication in Germany, France and the Netherlands, viewing digital sovereignty as essential to industrial competitiveness.


Unlike the 1970s, when Japan’s rise in DRAM production triggered a trade war but did not fundamentally challenge US hegemony, the AI contest has broader implications. AI systems influence surveillance, social stability and military command and control. Algorithms trained on massive datasets can make lethal decisions or shape public opinion; thus, the states that control them can project algorithmic power. Export controls on AI chips or foundation models function not only to protect intellectual property but to limit rivals’ capabilities in sensitive areas such as cryptanalysis and autonomous weaponry. In this sense, the new chip war is less about mass‑market electronics and more about the strategic control of intelligence itself.


Beyond great powers: the rest of the world mobilises

Geopolitics today is not solely defined by the US–China rivalry. The HBKU white paper notes that countries such as Saudi Arabia, the United Arab Emirates and India are investing in research, infrastructure and talent to develop AI industries. India’s semiconductor mission, for instance, offers billions of dollars in subsidies to attract fabs and design centres, while also positioning the country as a neutral hub for global supply chains. The Gulf states, flush with petrodollars, are establishing data centres powered by cheap energy to lure AI developers. Such initiatives echo the ambitions of South Korea and Taiwan in the 1970s: leverage government support and cost advantages to climb the value chain. The difference is that today’s entrants must navigate a landscape shaped by intellectual‑property restrictions, export controls and the network effects of established AI ecosystems.


For smaller states, AI presents both opportunity and risk. On one hand, access to AI models and chips can accelerate development in sectors ranging from health care to climate forecasting. On the other, dependence on foreign technology creates vulnerabilities to sanctions or surveillance. Qatar’s national AI strategy, for example, recommends focusing on a single stage of the chip supply chain, attracting manufacturers with low energy costs and building local talent. This strategic hedging mirrors the industrial policies of the semiconductor era: specialise to become indispensable without overreaching.


Techno‑political fault lines: supply chains, standards and values

AI’s geopolitics also intersects with norms and values. While semiconductors in the 1970s were primarily an economic commodity, AI shapes how societies make decisions. Debates about algorithmic bias, privacy and workers’ rights have become geopolitical issues, with liberal democracies championing “trustworthy AI” frameworks and authoritarian regimes promoting surveillance‑oriented models. Export controls on AI chips, by extension, are framed as instruments of national security and human rights. The EU’s AI Act and China’s generative AI regulations embody competing visions of governance: one emphasises accountability and transparency, the other social stability and state oversight.


Supply‑chain resilience is another techno‑political fault line. The HBKU report highlights that high‑performance AI chips rely on manufacturing nodes controlled by a handful of firms and countries. Disruptions ranging from earthquakes in Taiwan to shipping blockages in the Red Sea could stall AI development worldwide. Governments are therefore sponsoring diversification, investing in “friend‑shoring” and building reference AI data centres to attract chip firms. Companies, for their part, are experimenting with open‑source hardware designs and exploring alternatives to GPUs to mitigate dependency on any single vendor. The interplay between industrial policy and private innovation will shape how evenly AI benefits are distributed.


Looking ahead: from silicon to cognition

The analogy between the integrated circuit revolution of the 1960s/70s and the AI era is instructive but not perfect. Semiconductors were a hardware breakthrough that enabled the personal computing boom; AI combines hardware, algorithms and data. Yet both transformations share a key feature: they reconfigure the global balance of power. The Canon Pocketronic’s success illustrated how a technology once reserved for military applications could democratise calculation and spawn new industries. Today, AI promises to automate reasoning, design synthetic organisms and weaponise information. States and corporations are racing not just to produce chips but to embed intelligence into every device and institution.


For readers of this article, the stakes are clear. In the coming decade, AI will influence everything from voter behaviour to military deterrence. The geopolitics of AI will therefore not only hinge on who builds the fastest chips but on who sets the standards and values that guide intelligent systems. Just as the semiconductor era catalysed a reordering of alliances and supply chains, the AI era will force democracies and autocracies alike to rethink sovereignty and cooperation. Understanding this continuum—from integrated circuits powering 1970s calculators to custom AI accelerators driving generative models—is essential for making sense of our rapidly changing world.
