One development that has tended to pass under the radar in Donald Trump’s “America First” agenda is the decision to revoke regulations on artificial intelligence (AI). This raises serious concerns about ethical standards and the potential misuse of cutting-edge AI technology. It also confirms that the United States is no longer willing to exercise leadership in containing the more egregious impacts of this revolutionary technology.
AI development transcends borders, affecting international collaboration, ethical standards, and the balance of power in technology. The choices made in one country can have downstream effects for others, influencing how AI is harnessed for good—or misused—worldwide. AI promises rapid advances, but it also poses significant challenges to the composition of the labour force, with some large corporations shedding large sections of their workforce based on a cost-benefit analysis of using AI for tasks where humans previously led. For an AI superpower like the United States, any move to strip away controls over AI is especially significant.
Under the Biden administration, an Executive Order on AI endorsed the goal of “developing and deploying safe, secure, and trustworthy AI systems” while emphasising the need for responsible development of AI consistent with civil rights, equity, and workers’ rights. For its part, the Trump administration has repudiated this approach and foregrounded the commercial opportunities for the United States that flow from exploiting its position as an AI superpower. The administration has privileged deregulation, and Trump has rendered explicit an ambition to preserve US primacy in the global AI race. This was codified in the presidential Executive Order issued just two days after his inauguration, which activated the administration’s commitment to removing regulatory guardrails for the private sector.
Justified as necessary to accelerate innovation and maintain US dominance, Trump’s Executive Order has stoked concerns, particularly in Europe, about ethical oversight and the future of international cooperation. By contrast, the Biden administration had focused on fostering partnerships with allies like Europe and Australia, emphasising shared values and the need for governments to mitigate the inevitable risks associated with AI’s rapid development.
A deregulatory mindset towards these risks demands careful reflection. The inherent dual-use nature of AI means that technologies designed for civilian applications can be weaponised, with the potential to escalate conflicts and undermine global security. States with expansionist designs and ideologies inimical to democratic societies will seek to leverage AI to enable and improve military systems that enhance their coercive capabilities. China is investing significantly in military AI with a view to offsetting key US conventional force advantages in the Indo-Pacific, with obvious implications for US allies.
Furthermore, the rapid pace of AI advancements leaves us grappling with unknowns. Future quantum leaps in technology will yield unforeseen consequences – this makes the precautionary principle a prudent strategy. The Trump administration’s free-for-all approach empowers private actors, but it risks sidelining national security and public welfare in favour of profit-driven innovation.
Without state-based guardrails, private entities will inevitably prioritise short-term gains over ethical considerations, equity, and long-term societal impacts. This unregulated landscape could set AI on a fragmented and dangerous trajectory, with profound consequences for all countries, including Australia.
Like the dominant revolutionary technology of the 20th century, nuclear power, AI can be exploited for nefarious as well as positive ends. While attempts to stop the atomic genie escaping from the bottle failed, it is not too late for countries to reinforce regulatory guardrails around AI. Contrary to popular wisdom, the state remains the single most powerful actor in the international system. In the case of regulating AI, we should all be looking to unleash the power of the state to safeguard the interests of current and future generations.