In Malaysia, a sex-tape scandal that has engulfed the country in recent months has threatened to destabilise the governing coalition. Although police have determined the tape to be authentic and not a forgery, its veracity is still questioned in some quarters.

The erosion of public trust in traditional media, driven by “fake news” and hyper-partisan outlets, has been well documented. And it’s poised to get much worse. Tech experts warn that as video technology capable of making anyone appear to say (and do) anything becomes increasingly sophisticated and more widely available, its impact on politics and culture could spiral out of control.

In the US and Europe, business leaders and politicians are making efforts to stay ahead of the tech and the manipulation of information, motivated in part by fears of election interference. In the emerging and delicate democracies of Southeast Asia, however, conditions are ripe for chaos. Social media platforms have enormous reach and little oversight, and political scandals can spread like wildfire.

“Deep fake” is the term for video that has been doctored to present real people doing unreal things. By combining two data sets — say, one of the face of the targeted person and the other of the body onto which it will be superimposed — anyone can use deep-learning artificial intelligence to “teach” the target to “say” whatever they please.

A YouTube channel called derpfakes is one example of how convincing this trickery can be, albeit through the more benign project of inserting Nicolas Cage into films and television shows in which he never starred.

Apps and tutorials have sprung up enabling anyone, anywhere to create deep-fake video. A crudely slowed-down clip of senior Democrat Nancy Pelosi showed that even such “cheap fakes” can convince some viewers, while a true deep fake of Facebook founder Mark Zuckerberg, pointedly posted to Facebook-owned Instagram, underlined where the technology is heading.

While machine-learning video currently lies firmly in the “uncanny valley” – almost realistic, yet still striking the viewer as not quite right – it is only a matter of time before it becomes undetectable. Bloomberg technology correspondent Jeremy Kahn has called this kind of content “fake news on steroids, potentially”.

US lawmakers are scrambling to limit the impact ahead of the 2020 elections. Democratic Congressman Adam Schiff, who chairs the House Intelligence Committee, recently sounded the alarm: “Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deep fakes have polluted the 2020 elections.”

Bosses from Google (which owns YouTube), Twitter, and Facebook (which owns Instagram) responded, to varying degrees, to questions posed in a letter from the committee.


A deep fake video of Facebook founder Mark Zuckerberg, posted on Instagram (via bill_posters_uk)

Facebook and Twitter have vowed to cut off users found to be posting deep fakes and to improve efforts to identify artificially produced video. For the social media giants, such action would theoretically root out both election disinformation and the proliferation of deep-faked pornography (which, in true internet fashion, is where much of the technology was pioneered).

Google was less forthcoming. It vaguely referred to the development of automation that would allow YouTube to identify and remove falsified video.

The companies’ responses bode poorly for next year’s US election. Congressman Schiff insists the companies have a responsibility to prevent the weaponising of their platforms. “It’s clear they are far from ready to accomplish that,” he said in a statement.

For Southeast Asia’s fragile democracies, the lack of resolve could lead to disaster.

In Myanmar, Facebook has already been accused by UN human rights experts of playing a role in spreading hate speech against the Rohingya minority. As the 2020 general election approaches, hate speech and fake news are seen as a real threat in a country struggling with ethnic divisions and conflict, and a tenuous grip on democracy. 

The controversy in Malaysia, meanwhile, is one of the first cases in which the mere suspicion of deep fakes has undermined confidence in the authenticity of news reporting. A video featuring political aide Haziq Abdul Aziz, for instance, has been alleged to be a fake. With so much confusion surrounding the long-running scandal, it has become a Rorschach test for how one feels about the accused.

And even before “deep fake” entered the lexicon, Indonesians were in an uproar when a video of Jakarta’s then-governor Basuki Tjahaja Purnama apparently insulting the Koran went viral. Filmed during a campaign event in late 2016, the video shows the ethnic-Chinese, Christian governor saying voters should not be swayed by those “using the Koran as a political tool”. The video was transcribed and uploaded by a university lecturer who omitted a single word, “pakai” (“use”), leaving plenty of room for ambiguity. Both the uploader and the governor were sentenced to prison.

While the Purnama case looks positively primitive given today’s emerging technology, the video was easy material for exploitation among hard-line opposition groups and became a catalyst to justify xenophobic campaigning. The possibilities of who could be targeted next and why are endless — and certainly not confined to the Islamic hard right.

Elsewhere, the infamously nasty fake-news industries of the Philippines and recent revelations of Russia-linked campaigns in Thailand show how widespread, diverse, and out of hand political hoaxes have become.

Southeast Asian democracies are the perfect environment for the abuse of this technology, and there is no one-size-fits-all response where the actors are hidden and the battle lines are fluid. But with the ever-growing use of social media and low digital literacy across the region, there is an urgent need for action – far quicker than the badgering in US congressional hearings will produce.