"As of September 28th, 2015 at 01:52pm Eastern Standard Time, I do not give Facebook or any entities associated with Facebook permission to use my pictures, information, or posts, both past and future. By this statement, I give notice to Facebook it is strictly forbidden to disclose, copy, distribute, or take any other action against me based on this profile and/or its contents. The content of this profile is private and confidential information."
Does this look familiar? If you’re a Facebook user, chances are that in the last week you’ve either shared this fake legal notice or seen it in your timeline. It is meaningless: you already own the data you post to Facebook, and no status update can override the terms of service you agreed to when you signed up.
These terms allow Facebook to collect and use all information you post as well as associated information such as what device you’re using, its location and battery life, how long you are logged in for, your IP address and even your activity after you leave to visit other websites.
Facebook is by no means alone; every web service has its own terms and conditions around the storage, ownership, use and reuse of your data and it’s difficult to know what, if any, basic “data rights” the citizen of the web is guaranteed.
Those who shared the above viral hoax shouldn’t feel stupid for what is the social media equivalent of hanging a horseshoe over the front door. In the absence of clear legislation and guidelines on how we can protect our personal and private data – not just on Facebook but across the web and on our devices – we will continue to repost protective spells in our timeline to ward off would-be data snatchers.
Prof Barry O'Sullivan, director of the Insight Centre for Data Analytics at University College Cork, sums up the situation, saying there is "a fundamental tension that exists between data, its uses, and its users". The Insight Centre has drawn a line in the sand with a white paper, its "Magna Carta for Data".
Complex issues
While big data research is clearly beneficial to society in its applications across sectors as diverse as education, healthcare and urban planning, current data protection legislation, says O’Sullivan, is “no longer fit for purpose” because it cannot deal with the complex issues around privacy, ownership and the proper use of big data.
Insight’s 21st-century version of the Magna Carta aims to cover offline data too – everything from supermarket loyalty card information to the footsteps counted by smartphone sensors.
The pressing need for legislation comes on the back of advancements in data analytics technology. Social media data, for example, can be used to predict future behaviour; researchers at the University of Virginia in the US were able to predict crimes such as stalking and theft based on analysis of tweets. The research was funded by the US Army, so Minority Report’s pre-crime division doesn’t seem so outlandish anymore.
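How might such predictions be made? Broadly, the underlying technique is text classification. Below is a minimal, illustrative sketch in Python using the scikit-learn library; this is not the Virginia team's actual method, and the tweets and labels are invented for the example.

```python
# Illustrative only: a toy bag-of-words classifier in the spirit of
# tweet-based prediction research. All training data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "big crowd heading to the stadium tonight",
    "quiet evening, nothing happening downtown",
    "everyone is out drinking after the match",
    "empty streets, shops closed early",
]
labels = [1, 0, 1, 0]  # 1 = elevated incident risk, 0 = baseline (hypothetical)

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(tweets)          # turn text into word-frequency features
model = LogisticRegression().fit(X, labels)   # learn which words correlate with risk

new_tweet = ["huge party spilling onto the street tonight"]
prob = model.predict_proba(vectoriser.transform(new_tweet))[0, 1]
print(f"Predicted incident likelihood: {prob:.2f}")
```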
“There’s an entire economy around data, and the notion of privacy is very different now than it was five or 10 years ago. Even the sharing of modest amounts of de-identified information can reveal a lot about individuals,” says O’Sullivan.
He envisions a system whereby people always, by default, own their own data, granting access to it for specific uses on a case-by-case basis and opting out whenever they want.
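What might that mean in software terms? A minimal sketch, assuming a default-deny consent registry: no use of a person's data is allowed unless they have granted it for that specific purpose, and any grant can be revoked at will. The class and method names here are hypothetical, not part of any proposed standard.

```python
# Sketch of an "own by default" consent model: absence of a grant means "no".
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    grants: dict = field(default_factory=dict)  # (subject, purpose) -> True

    def grant(self, subject: str, purpose: str) -> None:
        self.grants[(subject, purpose)] = True

    def revoke(self, subject: str, purpose: str) -> None:
        # Opting out at any time simply deletes the grant.
        self.grants.pop((subject, purpose), None)

    def may_use(self, subject: str, purpose: str) -> bool:
        # Default-deny: the subject owns their data unless they said otherwise.
        return self.grants.get((subject, purpose), False)

registry = ConsentRegistry()
registry.grant("alice", "music-recommendations")
print(registry.may_use("alice", "music-recommendations"))  # True
print(registry.may_use("alice", "targeted-advertising"))   # False: never granted
registry.revoke("alice", "music-recommendations")
print(registry.may_use("alice", "music-recommendations"))  # False: opted out
```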
What would this look like for you and me? A stamp of approval, perhaps, says O’Sullivan. Imagine the near future where you’re signing up for Spotify or a similar streaming music service. You have the option of wading through the endless terms and conditions or scrolling down and hitting “agree” until those tunes start pumping. Then you spot a sticker saying “Magna Carta for Data compliant”.
“It would give you some sense that these services honour the accepted norms around rights and responsibilities for the use of your data.”
Creating such protections, however, is a double-edged sword. “On the one hand you want to protect people’s privacy and on the other, from the point of view of innovation, you do want people to share their data because it allows new products and services to be developed.”
Services like these are already in place at large companies from Starbucks and McDonald’s to MasterCard and Nestlé. Data on what customers buy, where they buy it and how frequently they do so is used to create targeted ads, personalised offers and better customer support. Ultimately, this boils down to consumer convenience, with the caveat that these corporations know a lot about us.
There are other concerns, such as "social sorting", says Prof Rob Kitchin, a senior researcher at the National Institute for Regional and Spatial Analysis at Maynooth University and a member of the Magna Carta for Data working group.
“By socially sorting, I mean people using your data to make decisions about whether you get that job, apartment or mortgage. It is even used to assess health risks for insurance premiums and this often goes on without people even realising.”
Information brokers
This could be information that is freely available on the web, such as social media updates and publicly shared photographs, or it could be data collected by a third-party application running on your smartphone and then sold on to “information brokers”, who combine it with other sources to build a profile of the individual. These profiles are potentially of great interest to insurance companies, financial institutions and more.
Part of the problem is the lack of informed consent around various services. “In many cases, just downloading the app is considered to be consent that whatever data comes off the app can be used by the provider.
“There is a lot of data streaming off smartphone and tablet apps, way more than people think,” he says, referring to a US journalist who last November found and listed all the data that comes off a smartphone through the Uber app. Shockingly, this included users’ call history, email and text message logs, neighbouring wifi connections, battery temperature, altitude and even the speed at which they were walking; the list goes on.
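To make that concrete, here is a hypothetical example of the kind of telemetry payload an app might stream back to its servers. The field names and values are invented; the categories loosely echo those found in the Uber analysis.

```python
# Hypothetical telemetry payload; every field name and value is invented.
import json

telemetry = {
    "device": {"battery_temp_c": 31.5, "altitude_m": 12.0, "walking_speed_ms": 1.4},
    "network": {"neighbouring_wifi": ["HomeHub-2.4G", "CoffeeShopGuest"]},
    "logs": {"calls": 57, "texts": 132, "emails": 410},  # counts only, for brevity
}
print(json.dumps(telemetry, indent=2))
```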
“Some are suggesting the implementation of what is known as privacy by design. Everything is locked down until you say it is open. Right now, the opposite is the case: everything is open until you say it is locked down and even then it can be quite complicated to lock down everything on your smartphone.”
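The difference between the two defaults is easy to express in code. This sketch, with hypothetical permission names, shows the privacy-by-design approach: every capability starts locked, and each one must be deliberately opened.

```python
# Privacy by design as a default: everything starts locked down.
DEFAULT_PERMISSIONS = {
    "location": False,
    "contacts": False,
    "microphone": False,
    "call_history": False,
}

def open_permission(perms: dict, name: str) -> dict:
    # The user deliberately unlocks one capability; the rest stay shut.
    return dict(perms, **{name: True})

perms = open_permission(DEFAULT_PERMISSIONS, "location")
print(perms)  # only "location" is open; everything else remains locked
```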
The Magna Carta for Data extends far beyond the scope of the above examples. People are inclined to think of data solely in terms of something that is commercially mined, and big data in terms of algorithms, when in fact everything on the internet is data, says Pauline Walley, a barrister who, along with a number of professionals and academics, is proposing a global Digital Bill of Rights.
This data includes the online publication of text, images or video that may be defamatory or unlawfully damaging to another individual. Walley points out that while most discussions are around data privacy and what third party services are doing with your data, there are also issues of defamation and reputation.
“The difficulty is that very often when you talk about having some type of regulation in relation to online postings, internet service providers [ISPs] often respond by saying that it is an encroachment upon free speech. It is only when you look at this closer you realise that ISPs restrict speech every day of the week by enforcing, for example, copyright laws, child pornography laws and the Official Secrets Act.”
Walley argues that there has to be a minimum bill of rights protecting people so they don’t have to go to court to get defamatory or unlawfully damaging content taken down. Habeas corpus and trial by jury are both legal affordances handed down from the Magna Carta and recognised globally; a Magna Carta for Data would offer similar protections for a global, digital society that has ceased to be governed by geographic borders.
“Ordinary punters will not, I feel, trust the mining of big data until they see a minimum bill of rights in relation to issues such as reputation and privacy,” she says.
Currently, the only European legislation governing these issues is the Electronic Commerce Directive, adopted in 2000, before Facebook, Twitter or YouTube had come into existence.
“The directive is hopelessly out of date and it relates specifically to the posting of unlawful content online. ISPs are not liable in a damages claim for publishing defamatory material if they take it down once aware of its unlawful nature; once notified, they are under an obligation to take it down, but not before,” explains Walley.
Qualified immunity
The problem, she says, is that the European Commission and lawmakers gave qualified immunity to ISPs in the E-Commerce Directive in relation to unlawful content, but they didn’t bring into effect a notice-and-takedown procedure. “There should be a human point of contact in each jurisdiction so that when someone presses a complaint button on one of these sites, it is followed up within, say, 12 hours.
“Part of the problem is that when you attempt to get content taken down from somewhere like YouTube there is no landline number or postal address. Meanwhile, an hour is an eternity in internet time.”
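Walley's proposal is, in effect, a service-level agreement on complaints. A minimal sketch, assuming the 12-hour follow-up window she suggests; the class and all names are invented for illustration.

```python
# Sketch of a notice-and-takedown tracker with a 12-hour follow-up window.
from datetime import datetime, timedelta

FOLLOW_UP_WINDOW = timedelta(hours=12)  # the window Walley suggests

class Complaint:
    def __init__(self, url: str, filed_at: datetime):
        self.url = url
        self.filed_at = filed_at
        self.handled = False

    def deadline(self) -> datetime:
        return self.filed_at + FOLLOW_UP_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return not self.handled and now > self.deadline()

c = Complaint("https://example.com/post/123", datetime(2015, 9, 28, 9, 0))
print(c.is_overdue(datetime(2015, 9, 28, 20, 0)))  # False: still within 12 hours
print(c.is_overdue(datetime(2015, 9, 29, 9, 0)))   # True: no follow-up in time
```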
For Walley, these issues are part of a larger global Digital Bill of Human Rights or Magna Carta for Data, and with so many of these internet companies’ head offices based in Ireland, we are well placed to offer such a service.
"The reason this matters so much is because you are who Google says you are," says Walley. How many of you will Google yourself after reading this? Don't forget to read the terms and conditions.