You’ll have heard the news by now, perhaps even via your Facebook feed. A whistle-blower has revealed that UK firm Cambridge Analytica enabled the development of micro-targeted advertising on Facebook to influence the 2016 US election and other elections. The company used data harvested from Facebook users to build these targeting profiles, and users are understandably outraged.
But Facebook and other data-driven companies have been collecting and sharing data for some time in ways their users may not like, and almost certainly don’t understand. In fact, the business model is driven by harvesting, storing, and selling user data.
Facebook is unlikely to have been as surprised by the Cambridge Analytica revelations as CEO Mark Zuckerberg has claimed. The company knew two years ago that data had likely been used for political purposes, but did little to redress matters once it found out and chose not to engage publicly with the issue until now.
Despite the potential political implications, Facebook is not required to share what it knew about Cambridge Analytica’s use of its data. The social media company (and the developers that feed off its data) is not subject to laws governing political campaigning. Indeed, tech companies have lobbied the US Federal Election Commission to have online political advertising exempted from regulation.
Although the effectiveness of Cambridge Analytica’s model is far from clear, what is perhaps most shocking is that it and others have operated with a minimum of oversight.
How should states manage data-driven tech companies with a global reach in the face of what appears to be a regulatory vacuum, especially if those companies have the capacity to influence security outcomes for states and their citizens?
Tech companies are not bound by territory, making such regulation difficult both conceptually and practically. Theoretically, my (deleted!) Facebook profile could be held on any number of servers around the world, or be in transit between them. Who should govern it? How? Whose laws apply? Why?
Like all multinationals, tech companies are adept at minimising regulation. But because their core asset – data – is so easily deterritorialised and is generated by citizens, tech companies have a more elusive relationship with the states seeking to regulate them.
The United States v. Microsoft Corp. case currently before the US Supreme Court suggests that even American agencies may not be able to bend US-born tech companies to their will. The case concerns US Department of Justice attempts to access data held by Microsoft outside American territory, and turns on the question of whether territory – the location of servers – determines which country’s rules apply.
Similarly, the UK employed a special envoy to the US on intelligence and law enforcement data-sharing because it hasn’t been able to influence US tech companies the way it wants, especially in counterterrorism efforts. And repeated attempts in Europe to bring US tech companies under European law show that such control is neither firm nor absolute.
In developing countries where, for many, Facebook is the internet, the opportunity for regulation is minimal, and the potential impact of a cavalier data company even greater. In Myanmar, Facebook has been accused of aiding alleged genocide. In Cambodia, Facebook is alleged to have facilitated abuses of electoral processes by Prime Minister Hun Sen. And the social media giant’s unannounced, unregulated experiment with user feeds in five countries, including Cambodia and Sri Lanka, suggests flagrant disregard for its role in the fragile media ecosystems of post-conflict states.
States can also benefit in important ways from data harvested from their own citizens and others. For some, the Edward Snowden revelations may have implied that all state security agencies have access to all the data they need, all of the time. But this is not so – tech companies can be important intermediaries.
For example, a significant detail in the Cambridge Analytica story concerns the relationship between Cambridge Analytica and a company with links to British and US defence contractors. A range of US law enforcement, defence, and security agencies have engaged with data developers feeding off social media data, including Facebook’s.
The problem is complex, and the outrage is real. But perhaps we shouldn’t be surprised. Instead, we should be seriously assessing our options for understanding new business models and their relationship with the state.
Tech companies’ white-bread solutions to concerns about privacy, security, and fake news are insufficient; improved media literacy and various fact-checking initiatives are not going to address the real issues at hand. We need to move beyond the traditional-media model of regulation and assess the relationship between states, tech companies, and citizens as an entirely new phenomenon, with new demands and opportunities, rather than as simply a development in the traditional model. Our friends (and our enemies) demand it.
Photo by Flickr user Alessio Jacona.