In 2025, there will be a course correction in AI and geopolitics, as world leaders increasingly understand that their national interests are best served through the promise of a more positive and cooperative future.

The post-ChatGPT years in AI discourse could be characterized as somewhere between a gold rush and a moral panic. In 2023, even as investment in AI hit record levels, tech figures including Elon Musk and Steve Wozniak signed an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, while others likened the risks of AI to those of nuclear war and pandemics.

This has understandably clouded the judgment of political leaders, pushing the geopolitical conversation about AI into some disturbing places. At the AI & Geopolitics Project, my research organization at Cambridge University, our analysis shows a clear and growing trend toward AI nationalism.

In 2017, for example, President Xi Jinping announced plans for China to become an AI superpower by 2030. The Chinese “New Generation AI Development Plan” aimed for the country to reach a “world-leading level” of AI innovation by 2025 and become a major AI innovation center by 2030.

The CHIPS and Science Act of 2022, which poured subsidies into US semiconductor manufacturing, and the sweeping export controls on advanced chips to China introduced that same year were a direct response to this, designed to advantage US domestic AI capabilities and curtail China's. In 2024, following an executive order signed by President Biden, the US Treasury Department also published draft rules to ban or restrict US investments in Chinese artificial intelligence.

AI nationalism depicts AI as a battle to be won, rather than an opportunity to be harnessed. Those who favor this approach, however, would do well to draw deeper lessons from the Cold War than the notion of an arms race. At that time, the United States, while pushing to become the most advanced technological nation, used politics, diplomacy, and statecraft to create a positive and aspirational vision for space exploration. Successive US governments also won support at the UN for a treaty that barred nuclear weapons from space, specified that no nation could claim sovereignty over the moon, and ensured that space remained "the province of all mankind."

That same political leadership has been lacking in AI. In 2025, however, we will start to see a shift back toward cooperation and diplomacy.

The AI Action Summit in France in 2025 will be part of this shift. President Macron is already steering his event away from a narrow focus on AI "safety" risks and toward what he calls the more pragmatic "solutions and standards." In a virtual address to the Seoul AI Summit, the French president made clear that he intends to address a much broader range of policy issues, including how to ensure that society actually benefits from AI.

The UN, recognizing that some countries have been excluded from the debate around AI, also released its own plans in 2024 for a more collaborative global approach.

Even the US and China have begun to engage in tentative diplomacy, establishing a bilateral consultation channel on AI in 2024. The impact of these initiatives remains uncertain, but they indicate that, in 2025, the world's AI superpowers are more likely to pursue diplomacy than nationalism.
