As some Asian countries gear up for elections in the latter part of 2020, tackling the spread of malicious disinformation is a priority. Even as South Korea battles to control the coronavirus outbreak, its legislative elections look set to proceed on 15 April, while question marks still hang over the timing of a Legislative Council ballot in Hong Kong and general elections in Singapore.

But regardless of when the elections are held, countering disinformation – information that is false and that the person disseminating it knows to be false – will be essential. This challenge was already apparent long before Covid-19 came to dominate the headlines. If anything, the “infodemic” of misinformation surrounding the virus has made the need to counter disinformation even more acute, with some of it posing a public health hazard and some of it facilitating foreign interference.

And like the propaganda of eras gone by, disinformation is far from just words. Visual forms of disinformation deserve particular attention, whatever the differences among Asian political systems.

Social media now carries more non-textual content such as images, video, and emojis. The short-video-sharing app TikTok is now the third most-downloaded social media app around the globe. The photo-sharing platform Instagram has become the sixth most popular social networking platform, while Facebook will soon allow 3D posts on its platform.

Visual content is now widely shared across platforms. On closed instant-messaging platforms such as WhatsApp – which is becoming a primary way in which many people around the world receive news and, by extension, misinformation – it is common to send GIFs, memes, or videos in place of text.

Image search is now a popular way of finding online content. Some estimates suggest at least 50% of searches will be conducted through images or speech.

Bangkok, Thailand (Bryon Lippincott/Flickr)

The pervasiveness of visuals means that disinformation is increasingly produced in the form of images and videos rather than text. The Oxford Internet Institute warned, for instance, that online disinformation campaigns are driven by viral videos, memes, and photos. Much has also been made of the potentially disastrous impact of deepfakes – artificial intelligence–assisted video and audio editing used to create disinformation. From privacy breaches to the undermining of public trust and even national security, the implications of deepfakes are limited only by the imaginations of certain actors.

Visuals have a unique emotive impact on the audience, an aspect that has been exploited by disinformation producers. Visual images reinforce popular antagonism and harness the virulent rage of political supporters online. It is easier to get away with subversive messaging using memes, photos, and videos: research has noted that humans are poor at identifying manipulated photos, and current sentiment analysis software still finds it tricky to interpret the messages behind visual content.

The proliferation of visuals on social media has certainly changed the way disinformation producers work. In Indonesia, the so-called “buzzers” (digital influencers hired to strategically amplify online messages) are increasingly engaged to work on the photo-sharing platform Instagram instead of Facebook and Twitter. In view of Instagram’s increasing popularity, more “click farmers” are also being hired to maximise the appearance of engagement on the platform.


Rather than experimenting with complex new technological tools such as deepfakes, purveyors of disinformation may fall back on existing ones that are easier to deploy. Simpler forms of disinformation that rely on the strategic deployment of visuals are more pervasive, and at times cause more damage. For example, in Indonesia, former Jakarta governor Basuki Tjahaja Purnama (Ahok) was convicted of blasphemy after a clipped video of him accusing his political opponents of using religion as a campaign tool was shared on Facebook in October 2016. In early March 2020, a video of former US vice-president Joe Biden was similarly trimmed to remove key parts of what he said, yet the net effect – making him sound as though he was endorsing US President Donald Trump’s re-election – was still achieved. Technologically sophisticated tools such as deepfakes thus remain peripheral to the larger goal of disinformation.

Despite the challenges, countering disinformation by focusing on visual, rather than textual, narratives is essential in preparing for elections. An agile crisis communication plan must be put in place to anticipate disinformation. The plan should itself include the use of visuals such as memes, GIFs, and videos for effective messaging.

Longer term, efforts must be made in the areas of research and development as well as regulatory solutions to address the prevalence of visually mediated disinformation. Legal or regulatory obligations could be placed on relevant technology and social media companies to deploy digital provenance solutions in devices such as laptops and smartphones. Given the reluctance of device makers to adopt digital authentication without certainty about its affordability, demand, and performance, and the unwillingness of social media companies to lose market share by preventing the upload of unverified content, making such measures enforceable under law could become necessary in future.

Research in the area of digital media forensics must also be intensified. This can be done, for instance, through laboratories that are already actively involved in collaborations with both public agencies and private companies to provide effective solutions. Recruitment of international researchers knowledgeable in media forensics could also be conducted. In addition, research in image analytics – particularly the incorporation of cultural perspectives such as art studies into technical detection – could be stepped up in relevant organisations.

Efforts must also be made to cultivate a younger generation of researchers interested in digital media forensics. Increasing the interest of potential students through hackathons, for instance, would be a worthwhile endeavour in this respect.

While legal and regulatory efforts are underway, measures that tackle the problem of visually mediated disinformation from the cognitive angle need to be implemented as well. Emphasising visual and video literacy – the ability to understand and create visual and video messages – within existing digital literacy efforts has become more pressing than ever.