
In opposition to data ownership

Should you be paid for every page you like on Facebook? Read the winning entry to the BARBRI International Privacy Law Blogging Prize, by UCL law student Natalie Chyi

One idea that has taken off in response to the Cambridge Analytica scandal is that of “data ownership” on social media platforms.

Though the concept has been around for some time, it has recently re-gained traction. The idea is that individuals should automatically “own” any data they generate, because this gives them more control over it. Practically speaking, it proposes the monetisation of personal data, where data subjects can sell their data directly to interested buyers and “get paid for the value our data generates”. Examples of how this might work include being paid for each tweet you post, LinkedIn article you write, or Facebook page that you like.

Though this idea may seem attractive at first, upon further thought it becomes quite evident that it has some deeply problematic aspects.

Firstly, on a theoretical level, treating personal data as proprietary implies that human rights are conditional. Allowing individuals to sell their personal data implies that people are capable of selling away their fundamental right to privacy, which should clearly not be the case.

Secondly, this type of exchange exacerbates the inherent power asymmetry between user and platform, leaving individuals with more to lose and more susceptible to being taken advantage of. Researchers have found that firms can manipulate consumers’ behaviour by giving them the illusion of control. They coined this the control paradox — when individuals believe they have more control, they are more willing to freely share more information with more people. The option to monetise, combined with the language of “ownership”, could easily lead users to feel that they have reclaimed some power over their data (despite the fact that a binary option to sell or not to sell hardly gives users more control over what is done with their information), thus encouraging them to overshare. And even when consumers know the exchange clearly favours the platform, many may still partake with a “better than nothing” mentality because they “feel resigned to the inevitability of surveillance and the power of marketers to harvest their data”.

It may also embolden platforms to demand more data or labour from users, who feel obliged to supply because they’re being paid for it. One New York Times article suggests:

“Facebook could directly ask users to tag the puppy pictures to train the machines. It could ask translators to upload their translations. Facebook and Google could demand quality information if the value of the transaction were more transparent.”

Here, monetisation would further facilitate data exploitation by requiring users to share more than they previously would have.

Worst of all, depending on the terms of sale, an individual could effectively sign away all their rights to the data they’re selling. This is especially worrying because of the opaque nature of the informational ecosystem that social media platforms operate in. When most individuals don’t know how their data is used or how it could be used against them, monetisation would only encourage misuse of information under the justification of “consent”.

Thirdly, as Privacy International notes, data ownership is market-driven and therefore “will only result in the exploitation of people’s economic concerns at the expense of their personal data and fundamental right[s]”. This could be especially damaging for those who are already economically vulnerable. For example, one of the ways Cambridge Analytica obtained data was by harvesting it from workers on Amazon’s Mechanical Turk, who were paid $1-2 USD each. On average, data scientist Aleksandr Kogan was estimated to have paid less than two cents for each Facebook profile used.

The fact that personal data is worth so little at the individual level is unlikely to incentivise more data sharing among people who are financially secure, but it may well incentivise those in poverty. And this brings the possibility of price discrimination — companies may offer to buy data from people of a certain income level or race at higher or lower prices, depending on what they think each demographic would be willing to accept. This raises questions about whose information is considered “valuable”, and along what parameters this is decided.

Lastly, it is practically impossible to have “ownership” over all the personal information that platforms hold, due to the lack of transparency surrounding the information ecosystem. For one, we have no idea of the full extent of who has our data or what data they hold on us. We can’t even access a full copy of our Facebook data (the downloadable file provides an incomplete dataset), much less the information held by data brokers that most people don’t even know exist.

Another issue concerns the inferred information companies use to profile us, which is increasingly being used to inform judgments about everything from criminality to creditworthiness. These inferences are derived not just from information collected from individuals directly, but aggregated data about others thought to be similar to the individual in question (“lookalike audiences”) as well. Claiming ownership over this type of information is difficult because it is generated by companies and not the individuals themselves, and may not even have been generated using a consumer’s personal data. But if these inferences are being used to make decisions about people, then they should have the right to access and control this information. We need to look beyond data knowingly provided, and data ownership is too simplistic to do this.

Though not perfect, the current system of data protection does more for consumers than a model of data ownership because it ensures that individuals will always have rights over their data. What needs to be advocated for are strong legal safeguards and a move towards technical and operational practices that are privacy protecting, such as requirements of data minimisation, portability, and interoperability. Ultimately, granting individuals the ability to sell their personal information on social media platforms will not give them more control over their data, nor will it protect them from being exploited. And I think we all deserve more than being ripped off by social media platforms under the illusion of regaining control.

Natalie Chyi is a UCL law graduate who will be starting an LLM in law, technology and entrepreneurship at Cornell Tech in autumn 2018. She is the winner of the BARBRI International Privacy Law Blogging Prize.




3 Comments

Anonymous

Excellent piece. Well done! Best of luck at Cornell!

Brian Gray

Congratulations on a topical piece, Natalie. I recommend you look into some of the projects around self-sovereign identity and personal data monetisation being developed in the cryptosphere.

Your essay looks a little off the pace in assuming platforms like Facebook will be involved in personal data ownership. Decentralised networks will enable users and advertisers to disintermediate platforms like Facebook for mutual profit and consumer control.

Have a look at Basic Attention Token, Wibson, GXS, Ternio and Holochain and you will start to get a flavour for how ownership might empower in a way that mere privacy laws never will.

Anonymous

Paying for data in the manner you describe in the article makes no sense at all –

a) what is the incentive for companies to adopt that model when the current model works well for them (unless the idea is to legislate, which would be frankly bizarre)?

b) that would make the data somewhat worthless, because being paid by the tweet, Facebook comment, like, etc. would incentivise spam posting, and so the data output would be skewed. Any insights gained for advertising or other purposes would be fundamentally flawed, which negates the impetus to collect the data in the first place and brings me back to my point in a).

The points you’ve made in the article are good ones and valid but to be honest I think the idea of monetization of data in the way you describe is more fundamentally flawed.

I mean, there are existing working models which essentially ‘pay you’ for your data — store loyalty cards, for instance (which collect demographics and shopping-habits info under the guise of giving you loyalty vouchers depending on how much you shop/spend), being just one example. But there, the data they collect is verifiable and cannot really be skewed by someone deliberately handing over more data for more reward, so it works. With the type of data Facebook/Twitter etc. collect, much of that is skewable — for instance, if paid by the tweet, what’s to stop me making a bot to autopost things for more reward?
