The hills have eyes, but it would be a ‘scandalous breach of data protection’ if they have ears too
A couple of weeks ago, I texted my sister to ask her if she’d pick up some chilli oil on the way back from work. Within the next few days, I noticed three new adverts on my Facebook feed: one about a face wash for oily skin from Kiehl’s, a lip oil from MAC, and a reminder from Thames Water not to flush oil down the sink.
It sounds coincidental, but then I started noticing things like this more and more. Plenty of ads passed me by without arousing suspicion, but I began collecting screenshots of those that did, including adverts for hand soap and pizza from Zizzi. These two appeared on Instagram and are screenshotted below.
The link between Facebook, its Messenger service and other messaging platforms like WhatsApp made me wonder whether the social media giant, described in a recent Panorama programme as “the most targeted advertising machine in history”, really was screening my messages.
And then something even more uncomfortable came to the fore: the possibility of social media apps not just knowing what you’re texting, but also hearing your conversations too.
After my friend said Facebook sent her an advert for a book she’d been talking about with colleagues at work just days before, my sister and I couldn’t resist: we started talking about car insurance around our phones and, lo and behold, a car insurance advert appeared in the following days. (Neither of us has a car, nor drives.) And, minutes after a discussion with my sister about the fellow I’m A Celebrity campmates’ treatment of Iain Lee, who has battled anxiety and depression, this ad popped up on Facebook.
I’m not the only one who’s experienced this. A video uploaded last year entitled ‘Facebook iPhone Listening into our Conversations for Advertising TEST’ has racked up close to 1.5 million views, and has prompted a number of ‘this happened to me too’ comments. One said: “Family was talking about earthquake preparedness last night, and some earthquake insurance ads show up this morning”; another commented: “I am a teacher and my Facebook ads change depending on whatever I am teaching. It is really creepy, and they are definitely listening.” Legal Cheek has taken steps to contact Facebook and Instagram but is yet to receive a response.
Suitably freaked out, I presented my theory to Jim Killock, executive director of digital rights campaign organisation Open Rights Group. “It would definitely be a scandalous breach of data protection if Facebook could hear your voice, one that would see them fined lots of money,” he says.
But, legally, what the company is doing seems to be inside the line, a line that in the new media age hinges on users agreeing to terms & conditions and privacy notices that no one reads. This system isn’t perfect but it is lawful, explains media law barrister David Hirst. He is as quick as Killock to dismiss my Facebook mole theory as “nonsense”, saying:
“There really is no excuse in 2017 for not obtaining express consent or authorisation from social media users, and it’s almost inconceivable to think companies would be adopting any more extreme uses of personal data from voice or the contents of messages without permission for these purposes.”
Hirst, a barrister at media and privacy law set 5RB, can find no explanation for my oil ads other than “coincidence” or Facebook’s above-board data collection and ad-tailoring policy. Do raise your eyebrows as you read ‘above-board’: in a Guardian long read dedicated to “Facebook’s war against free will”, author Franklin Foer says the blue-logoed company “is always surveilling users”, legally permitted to amass data on them based on their Google searches and the time they spend on webpages that feature one of Facebook’s widgets.
These widgets follow users through the internet and can provide Facebook with a pretty comprehensive account of their browsing history, building a picture of their interests and monetising this data through advertising accordingly.
So while Facebook may not have ears, it certainly has eyes, a Mona Lisa gaze that’s turned us into “walking barcodes” (Panorama‘s words) and is difficult to escape. Opting out depends on getting rid of cookies and installing ad blockers, and requires expertise, careful thought and time, which many of Facebook’s two billion plus users do not have. It’s also made more problematic, in Killock’s words, by the fact that “Facebook’s ad tailoring is based on algorithmic practices we don’t know a lot about”.
This Eye of Providence looks inwardly, too. Ads can be sent specifically to users based on the groups and subjects they’re interested in on social media. The practice is widespread: both sides of the European Union referendum, for example, sent micro-targeted ads during the campaign, while it’s estimated the US election in November earned Facebook $250 million (£187 million) in advertising money.
These ads are known as “dark ads” because of the darkness that shrouds their use and content: only those targeted by them can see them (and can therefore metaphorically ‘shine a light’ by scrutinising them). New Statesman reports, for example, that Vote Leave claims to have spent 98% of its advertising budget on digital adverts.
This darkness has left many feeling cold and calling for more ethically run social media. Scorn was directed at Facebook just weeks ago when it emerged the site had allowed would-be advertisers to target users interested in anti-Semitic topics such as “How to burn Jews”. Facebook’s ad policy is not the only thing it’s taken flak for over the years, either. Its failure to remove offensive content swiftly, its place in the fight against terrorism, fake profiles and ‘catfishing’, cases of stalking, trolling and bullying, and its psychological impact on children and other vulnerable people all feature on a growing list of complaints about the company.
The growing public pressure for a more ethical internet transcends social media and is beginning to be expressed in case law. Vidal-Hall v Google, brought in 2014, was a claim based on the distress the claimants suffered on finding out their personal characteristics formed the basis of the defendant’s targeted ads. The case settled, but is of interest to privacy lawyers because, 5RB writes:
“The judge indicated in a preliminary view that damages for a breach of the [Data Protection Act] could include non-pecuniary damage. The impact of the DPA in privacy law has to date been limited by the requirement that damage for distress could only be recovered if pecuniary damage had been suffered. This development has the potential to make claims under the DPA far more common in the field of privacy law.”
Recently, a class action has been launched against search engine Google. It’s claimed that, over an eight-month period spanning 2011 and 2012, Google unlawfully harvested personal data by bypassing the iPhone’s default privacy setting. Claimant Richard Lloyd, the former executive director of consumer body Which?, says he hopes this claim “will send a strong message to Google and other tech giants in Silicon Valley that we’re not afraid to fight back if our laws are broken”. It seems we’re beginning to heed the warning of famous linguist and philosopher Noam Chomsky:
“It’s dangerous when people are willing to give up their privacy.”
It’s not just claimants demonstrating the world’s growing awareness of privacy law and growing resistance to the internet’s monetisation of its users. A recent three-week exhibition near Leicester Square, ‘THE GLASS ROOM’, which promised to empower visitors to “reclaim your digital self”, drew a footfall of 19,000.
The exhibition’s promo material, ‘We Know You’, eerily adopts the voice of the five companies that have come to be known as GAFAM (Google, Apple, Facebook, Amazon, Microsoft), which it says collectively “now wield an unprecedented level of power and influence” over our lives. It asks:
“How much do these companies know about you?”
Discontent from below seems to have spurred a hard legal response from above: once the EU’s General Data Protection Regulation (GDPR) comes into force, the answer to that question will be “not as much as they could have done”.
This law, which comes into effect on 25 May 2018, will require businesses to protect the data of EU citizens to a standard described by the editor of security and risk management news resource CSO as “quite high”, requiring “most companies to make a large investment to meet and to administer”.