In opposition to data ownership


By Natalie Chyi

Should you be paid for every page you like on Facebook? Read the winning entry in the BARBRI International Privacy Law Blogging Prize, by UCL law student Natalie Chyi

One idea that has taken off in response to the Cambridge Analytica scandal is that of “data ownership” on social media platforms.

Though the concept has been around for some time, it has recently regained traction. The idea is that individuals should automatically “own” any data they generate, on the basis that ownership gives them more control over it. Practically speaking, it proposes the monetisation of personal data, where data subjects can sell their data directly to interested buyers and “get paid for the value our data generates”. Examples of how this might work include being paid for each tweet you post, LinkedIn article you write, or Facebook page that you like.

Though this idea may seem attractive at first, on closer inspection it proves deeply problematic.

Firstly, on a theoretical level, treating personal data as property implies that human rights are conditional. Allowing individuals to sell their personal data implies that people can sell away their fundamental right to privacy, which should clearly not be the case.

Secondly, this type of exchange exacerbates the inherent power asymmetry between user and platform, leaving individuals with more to lose and more susceptible to being taken advantage of. Researchers have found that firms can manipulate consumers’ behaviour by giving them the illusion of control. They coined this the control paradox: when individuals believe they have more control, they are more willing to freely share more information with more people. The option to monetise, combined with the language of “ownership”, could easily lead users to feel that they have reclaimed some power over their data (even though a binary option to sell or not sell hardly gives users more control over what is done with their information), thus encouraging them to overshare. And even when consumers know the exchange clearly favours the platform, many may still partake with a better-than-nothing mentality because they “feel resigned to the inevitability of surveillance and the power of marketers to harvest their data”.

It may also embolden platforms to demand more data or labour from users, who may feel obliged to supply it because they are being paid. One New York Times article suggests:

“Facebook could directly ask users to tag the puppy pictures to train the machines. It could ask translators to upload their translations. Facebook and Google could demand quality information if the value of the transaction were more transparent.”

Here, monetisation would further facilitate data exploitation by requiring users to share more than they previously would have.

Worst of all, depending on the terms of sale, an individual could effectively sign away all their rights to the data they’re selling. This is especially worrying given the opaque nature of the information ecosystem that social media platforms operate in. When most individuals don’t know how their data is used or how it could be used against them, monetisation would only encourage the misuse of information under the justification of “consent”.

Thirdly, as Privacy International notes, data ownership is market-driven and therefore “will only result in the exploitation of people’s economic concerns at the expense of their personal data and fundamental right”. This could be especially damaging for those who are already economically vulnerable. For example, one of the ways Cambridge Analytica obtained data was by harvesting the data of workers on Amazon’s Mechanical Turk for $1-2 per person. It was estimated that data scientist Aleksandr Kogan paid, on average, less than two cents per Facebook profile used.

The fact that personal data is worth so little at the individual level may not incentivise greater data sharing among people who are financially secure, but it is more likely to incentivise those in poverty. This brings with it the possibility of price discrimination: companies may offer to buy data from people of a certain income level or race at higher or lower prices, depending on what they think those demographics would be willing to sell their data for. This raises questions about whose information is considered “valuable”, and along what parameters this is decided.

Lastly, it is practically impossible to have “ownership” over all the personal information that platforms hold, because of the lack of transparency surrounding the information ecosystem. For one, we have no idea of the full extent of who has our data and what data they hold on us. We can’t even access a full copy of our own Facebook data (the downloadable file provides an incomplete dataset), much less the information held by data brokers that most people don’t even know exist.

Another issue concerns the inferred information companies use to profile us, which is increasingly being used to inform judgments about everything from criminality to creditworthiness. These inferences are derived not just from information collected from individuals directly, but also from aggregated data about others thought to be similar to the individual in question (“lookalike audiences”). Claiming ownership over this type of information is difficult because it is generated by companies rather than by the individuals themselves, and may not even have been generated using the consumer’s own personal data. But if these inferences are being used to make decisions about people, then people should have the right to access and control them. We need to look beyond data knowingly provided, and data ownership is too simplistic a model to do this.

Though not perfect, the current system of data protection does more for consumers than a model of data ownership, because it ensures that individuals always retain rights over their data. What we should advocate for are strong legal safeguards and a move towards privacy-protecting technical and operational practices, such as requirements of data minimisation, portability, and interoperability. Ultimately, granting individuals the ability to sell their personal information on social media platforms will not give them more control over their data, nor will it protect them from being exploited. And I think we all deserve more than being ripped off by social media platforms under the illusion of regaining control.

Natalie Chyi is a UCL law graduate who will be starting an LLM in law, technology and entrepreneurship at Cornell Tech in autumn 2018. She is the winner of the BARBRI International Privacy Law Blogging Prize.
