
Australia’s $17 trillion AI moment

In a world where you can be fired by a bot, countries that get workplace AI governance right will reap economic gains.

Research shows companies implementing strategic human–AI collaboration achieve 20–30 per cent productivity gains (Luke Jones/Unsplash)

The global economy stands on the brink of an unprecedented transformation that artificial intelligence (AI) will drive over the next decade. Goldman Sachs estimates that two-thirds of jobs in Europe and the United States are exposed to some level of AI automation, while McKinsey research suggests AI will generate more than US$17 trillion in annual productivity gains. But policymakers are missing an important insight: how those gains are distributed depends entirely on how AI is implemented in the workplace.

Analysis from the International Monetary Fund has shown that the stakes are significant. Roughly half of AI-exposed jobs will benefit from integration that enhances productivity. The other half face wage cuts and reduced hiring as AI replaces people in the workforce. The difference isn't technical – it's governance. Countries that get workplace AI governance right will capture economic gains while maintaining socioeconomic stability. Those that don't risk the erosion of both public trust in AI and the competitive economic advantage AI provides.

Consider the human dynamics when your boss fires you. They arrange a private meeting, explain the rationale, acknowledge contributions, follow transition procedures. These behavioural norms – institutional rituals around difficult decisions – form the invisible infrastructure that makes power relationships tolerable. But what happens when AI makes that decision?


Recent examples illustrate the risks. Amazon's routing algorithm automatically dismissed delivery drivers for “efficiency violations” with no appeal, context, or human review. China's Ele.me platform uses algorithms that trim delivery windows to the second, forcing couriers to run red lights. Beijing regulators have ordered platforms to rectify these exploitative controls. Across thousands of firms globally, software now hires, fires, sets wages, and redesigns workflows outside the behavioural constraints that corporate culture evolved over decades.

This matters economically. Research shows companies implementing strategic human–AI collaboration achieve 20–30 per cent productivity gains, while firms automating primarily to cut workforces see only short-term benefits. MIT economist Daron Acemoglu calls most current deployments “so-so automation” – cost saving but rarely transformative. The productivity revolution will depend on the economics of complementarities, not substitutions.

Machines excel at pattern recognition across vast datasets but stumble over nuance, ethics and context. Humans excel at coordination, empathy, and the reputational awareness that algorithms cannot replicate. Traditional governance relies on people who understand soft rules – don't sack someone on Christmas Eve, treat outliers as humans not data points. Software doesn't inherit that level of social awareness.

Machines excel at pattern recognition while humans excel at coordination, empathy, and the reputational awareness that algorithms cannot replicate (Markus Spiske/Unsplash)

Australia faces a distinctive strategic choice. While China pursues efficiency-first automation and the United States allows market-driven fragmentation, Australia could pioneer a hybrid governance model that captures AI's economic potential while maintaining public trust. Australia has form in exporting governance models that balance innovation with social protection. Australia’s compulsory superannuation system influenced pension reform across OECD countries. Post-global financial crisis banking regulations became templates for emerging economies. The social licence frameworks embedded in Australia’s mining sector are studied globally. This track record positions Australia to pioneer AI workplace governance that other democracies will adapt – if Australia moves first.

The hybrid governance framework rests on three design principles:

  • Retain human judgment at decision points that have significant social cost,
  • Make algorithmic reasoning transparent and contestable,
  • Build feedback loops so contextual experience continuously trains and updates an AI's behavioural responses.

Rather than spending billions deploying AI to fix problems it isn't well suited to, let humans steer strategy while machines handle the computational heavy lifting. While centralised regulation remains important, decentralised social governance must play a prominent role too.
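For readers who want a concrete picture of what those three principles could look like in practice, the sketch below is a purely illustrative Python fragment – the names, thresholds and structure are invented for this article, not drawn from any real system or vendor. It shows a decision pipeline in which high social-cost calls are routed to a person, every outcome carries a plain-language rationale that can be contested, and contested outcomes are logged as feedback for later retraining.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical illustration only: names, thresholds and structure are
# invented to show the three design principles, not a real system.

@dataclass
class Decision:
    subject_id: str
    action: str          # e.g. "reject_loan", "terminate_contract"
    model_score: float   # the algorithm's confidence
    rationale: str       # plain-language explanation (principle 2)
    social_cost: float   # estimated impact on the affected person

@dataclass
class GovernanceLog:
    contested: List[Decision] = field(default_factory=list)

    def record_contest(self, decision: Decision) -> None:
        # Principle 3: contested outcomes become feedback for retraining.
        self.contested.append(decision)

def decide(decision: Decision,
           human_review: Callable[[Decision], bool],
           log: GovernanceLog,
           social_cost_threshold: float = 0.5) -> bool:
    """Approve or block an algorithmic decision under hybrid governance."""
    # Principle 1: high social-cost decisions require human judgment.
    if decision.social_cost >= social_cost_threshold:
        approved = human_review(decision)
    else:
        approved = decision.model_score >= 0.8  # routine case, automated
    # Principle 2: every outcome carries a contestable rationale.
    print(f"{decision.action} for {decision.subject_id}: "
          f"{'approved' if approved else 'blocked'} – {decision.rationale}")
    if not approved:
        log.record_contest(decision)
    return approved
```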

Early examples validate the approach. Banks use AI for loan screening but require human approval for rejections. The principle works; systematic implementation is the challenge. Countries establishing governance models that balance AI's economic potential with social accountability will maintain skilled workforces and avoid policies that threaten to widen existing inequality gaps.


International dynamics amplify the stakes. A Brookings analysis of 34 national AI strategies shows similar governance approaches clustering across countries, suggesting an interconnected landscape in which no country is truly going it alone on AI governance. Australia's positioning matters: its corporate culture already values social licence to operate, and businesses face pressure to demonstrate environmental, social and governance credentials to global investors.

Start with government procurement contracts worth AU$99.6 billion annually: require human oversight for AI affecting individual rights. Success creates templates for private markets while building public trust. Next, engage the AU$4.1 trillion superannuation sector. Training AI to flag when optimisation clashes with long-term social goals would demonstrate hybrid governance – algorithms that aren’t just maximising profits but are learning boundaries of behaviour.

If Australian institutions prove human–AI collaboration delivers competitive performance alongside social fairness, others will follow. The window is narrow – perhaps five years before alternative patterns cement. Building on OECD AI Principles and the EU's Artificial Intelligence Act, Australia could export governance models as quickly as technology spreads.

In the next two years, government agencies such as Treasury or the Reserve Bank of Australia could run pilot studies to establish the measurable benefits of hybrid governance models. Success metrics should include improved trust in workplace AI, adaptive improvements in participating government departments, and international interest in Australian frameworks. The choice remains urgent – establish governance leadership while competitors are still experimenting, or accept the patterns others will set. Australia's share of AI’s economic gains depends on getting workplace governance right.



