Published daily by the Lowy Institute

AI surveillance and the governance vacuum in the Asia-Pacific

While Europe moves ahead with regulation, China is exporting the infrastructure of digital control.

There is an urgent need for regional dialogue and coordination on AI surveillance governance (Getty Images)

With the EU’s Artificial Intelligence (AI) Act introducing a phased, risk-based framework for regulating AI systems across sectors, the bloc is the first major political actor to impose enforceable limits on biometric surveillance, including a near-total ban on real-time facial recognition in public spaces. Key prohibitions have been in force since February 2025, with rules on general-purpose AI models taking effect in August 2025. While Europe moves ahead with implementation, China is quietly exporting the infrastructure of digital control across the Asia-Pacific. The region is seeing a rapid expansion of powerful surveillance technologies yet remains largely devoid of the governance frameworks needed to regulate their use.

China is the world’s leading exporter of AI-powered surveillance technology. Its firms – Huawei, Hikvision, ZTE, CloudWalk, and others – are deeply embedded in “safe city” and smart city projects across Southeast Asia, Central Asia, Africa, and the Middle East. These systems integrate facial recognition, biometric profiling, and real-time video analytics into urban infrastructure under the banner of public safety and efficiency. However, their architecture also enables political surveillance on a scale and with a capacity previously unavailable to many governments.

AI-powered surveillance technologies, most notably facial recognition, are a textbook example of dual-use technologies.

Unlike in the EU, where such capabilities are now subject to legal constraints, the Asia-Pacific region lacks comprehensive rules governing the deployment of biometric surveillance in public spaces. While democracies such as Japan, South Korea, and Australia are beginning to explore AI ethics and risk classification, these remain early-stage efforts. Meanwhile, across much of Southeast Asia, surveillance infrastructure is expanding rapidly – particularly through partnerships with Chinese tech companies – in environments where institutional checks are weak, public consultation is minimal, and transparency is often low.

AI-powered surveillance technologies, most notably facial recognition, are a textbook example of dual-use technologies: promoted as tools for counterterrorism and crime prevention yet easily repurposed to consolidate political control. Their core function – identifying individuals in real time across public spaces – makes them uniquely effective not only for enhancing security, but for monitoring protests, intimidating dissenters, and suppressing opposition. This logic is vividly demonstrated in China’s domestic deployment of facial recognition in Xinjiang, where the state has constructed one of the world’s most extensive surveillance regimes to monitor and repress the Uyghur minority. Elements of this model are now being exported globally through China’s Digital Silk Road.

Once operational, surveillance systems reshape institutional practices and normalise constant monitoring (Getty Images)

The EU’s AI Act deserves credit for advancing protections against the misuse of AI surveillance. Although structured as a risk-based framework, the Act embeds rights-based safeguards – especially in its treatment of biometric surveillance technologies such as remote facial recognition. Real-time biometric identification in publicly accessible spaces by law enforcement is banned in most circumstances, with narrowly defined exceptions. Other uses are classified as “high-risk” but are permitted under strict regulatory requirements. These rules apply not only to European governments and companies, but also to any firms placing AI systems on the EU market. As such, the Act has the potential to shape international norms around biometric surveillance, particularly in jurisdictions where regulatory alignment with the EU is either desirable or economically necessary.

However, its influence remains limited beyond the EU’s jurisdiction, especially in states not party to multilateral export control regimes such as the Wassenaar Arrangement, which seeks to regulate the trade in dual-use technologies. The result is a fragmented global landscape in which repressive regimes can shop for surveillance tools with few questions asked, while suppliers claim neutrality in the face of foreseeable misuse.

China’s surveillance exports are not just commercial products; they are powerful instruments of norm diffusion.

This governance gap is not accidental. It reflects a deeper asymmetry: while Europe writes AI rules, China builds AI surveillance systems. The EU is investing in normative leadership through regulation; China is investing in technological innovation, standard-setting, and geopolitical alignment. Huawei’s involvement in building surveillance networks in Serbia, Kenya, Pakistan, and Laos – often in conjunction with smart city infrastructure – is a case in point. These are long-term, embedded systems that are difficult to replace and easy to repurpose. They lock governments into Chinese hardware, software, and servicing ecosystems. More insidiously, they generate political inertia. Once operational, surveillance systems reshape institutional practices and normalise constant monitoring.

The Asia-Pacific is particularly exposed. Many of the region’s political systems are semi-authoritarian or hybrid regimes where surveillance tools can be used with little public oversight. In others, the state’s administrative capacity is limited, and Chinese firms offer bundled financing and turnkey solutions that are hard to resist. Even in more established democracies, there is currently no shared vision for how to regulate high-risk AI systems such as facial recognition.

This is not a call to adopt the EU’s model wholesale. The political, economic, and legal contexts of the Asia-Pacific are distinct. But there is an urgent need for regional dialogue and coordination on AI surveillance governance. ASEAN, APEC, and Quad member states should take leadership roles in setting principles for transparency, accountability, and human rights protections in the deployment of biometric and AI-powered surveillance tools. Regional democracies, including Japan, South Korea, and Australia, have both the capacity and the credibility to lead efforts in this area.

The alternative is that norms will be set elsewhere – embedded not in regulation or public oversight, but in technical systems, software defaults, and procurement contracts. China’s surveillance exports are not just commercial products; they are powerful instruments of norm diffusion. And if the Asia-Pacific does not begin defining what acceptable AI surveillance looks like on its own terms, it may soon find that those terms have already been decided.



