Brexit Welcomes a New Age of Data Rights

The Florence News (Winter Edition, 2019)

Perhaps a more important resource than even oil, data is now the world's most desired commodity. Each click we make leaves an archive; each piece of broadcast information opens a world of detection and possible third-party tracing. Data privacy carries intrinsic weight, not because of subjective understandings of what belongs in the public or the private sphere. It is not about preference. A call for data ethics is a call for human rights.

Founded in 2013 and dissolved in 2018, the company Cambridge Analytica was brought to international attention by a man named Christopher Wylie. By coming forward about the science behind Cambridge Analytica, he informed the masses that their privacy is often at risk. Wylie, the most prominent whistleblower from within the company, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And build models to exploit what we knew about them and target their inner demons. That was the basis [that] the entire company was built on.”

Alexander Nix, the former CEO of Cambridge Analytica, claims the company had no ties to the Leave.EU campaign, and that its only talks with the campaign were discussions of future collaborations that never came to fruition. Yet Brittany Kaiser, the former head of business development at Cambridge Analytica and a notable whistleblower herself, said that “Leave.EU used data-sets created by Cambridge Analytica to target voters with online political messages to potentially sway public opinion in 2016.”

Many political scholars and engaged voters have begun to question whether a true democratic process is likely when data rights are not protected. Cambridge Analytica is well known for using data mining and data scraping to influence voters, including on behalf of the Leave.EU campaign. Though Nix at first denied Cambridge Analytica's influence on Brexit, further reporting has shown just how intertwined the two were. Brexit provides a very tangible account of who has control of data and the dangerous extent to which that control can be used.

What would it mean to remain so data illiterate for future democratic elections? Is such a thing as true democracy possible without more developed data rights?

Last November, an event called Big Data London, a tech conference, was attended by many of the leading minds in data intelligence. Speaking about data rights, an employee of WPS Analytics said that the United States was “like the wild west.” “The USA, you know, they care more about business; here everyone is scared,” he continued, rolling his eyes. Yet the leaders of the ‘Future of AI’ panel admitted to fearing certain aspects of artificial intelligence. Andy Steed, Mariana Perreira, Steph Locke, Katie Gibbs, Thomas Cronin and Mike Bugembe began the panel by addressing, quite bluntly, the general air of anxiety that surrounds AI. They attempted to dismantle mythic responses, gave examples of instances when AI had helped small businesses, and emphasized machine learning's potential to aid in the fight against climate change. The panel ended with an acknowledgment of the danger of unknown third parties gaining access to data and using it as a tool of manipulation. One of the women on the panel said, “There needs to be a data exchange. If there isn’t some end value, people shouldn’t be sharing their data.”

In 2016 the European Union adopted a set of laws called the General Data Protection Regulation, or, as it is more commonly known, GDPR, which came into force in May 2018. Through the European Data Protection Board, guidance is issued detailing the extent of a European citizen’s rights to privacy. There is a guide, easily accessible from the online Information Commissioner's Office, that points small businesses and others with similar data responsibilities toward their legal obligations: “The guide covers the Data Protection Act 2018 (DPA 2018), and the General Data Protection Regulation (GDPR) as it applies in the UK.”

There are, no doubt, data laws in place in the European Union, but Christopher Wylie, the keynote speaker at Big Data London, speaks of the current lack of a ‘code of ethics’ for those behind big data. He likens the creators of tech (data analysts, software engineers, etc.) to architects. Despite the similarity between the jobs, Wylie says, people working in tech are not considering how their creations could negatively affect the world. “There is no code of ethics,” he says.

The New Yorker quotes a former employee of SCL Group, Cambridge Analytica's parent company, talking about getting to know a population of people and what policies those people long for in their political representatives. The ex-employee says, “It is not so much, let’s make these people do this thing: it is, can we take this thing in such a way that the people who should get it do get it?” This sounds idealistic and sincere enough, but the question remains: who decides the should?

Some 80% of the world’s top data scientists hold jobs within tech giants such as Google and Amazon, which means the everyday, tech-uninformed person cannot be expected to keep those with influence in the industry in check.

It will take time before a comfortable set of rules is in place. With technology advancing as rapidly as it has, the rights that follow must develop with hardly any foresight. As if reaching through the dark, the European Union, whether or not the United Kingdom remains part of it, must learn from the consequences of Leave.EU and Brexit. This political split may yet be the lesson that forces the world to take very seriously our lack of data literacy and protection.