The Privacy Paradox


It’s easy to forget that one of the main reasons tech giants collect masses of our data — other than to sell or abuse it — is to help craft and improve the products and services they offer us.

If they didn’t, the world would be a much duller and much more frustrating place.

There’d be no logging on to find your childhood sweetheart after having a mid-life realisation.

No pulling out your phone at 2am to order a taxi because you’re in the middle of nowhere and don’t have any cash.

No opening your laptop to quickly find and read this article, or using any device, for that matter, without wanting to throw it through a window (ahem, PC).

We see a lot of these capabilities as mere consequences of having an internet. But as anyone over 25 will tell you, they’re not; they’re ultra-refined products of web 2.0.

Web 2.0 describes a shift that emerged in the early 2000s, when we first began moving away from a web created solely by service providers to one co-created by them and us, the people who use their services.

The most obvious outcome of web 2.0 is the vast realm of user-generated content: blogging, podcasting, video, social media, wikis, Reddit. But arguably where it’s been most influential is behind the scenes: in how our personal and behavioural data has been quietly and consistently funnelled into driving product innovation, improving services, and bettering user experiences.

This is known today as big data. And in 2018, it is such an integral part of digital services — and digital services an integral part of our lives — that it has become inextricably woven into the very fabric of how we live.

The problem is, after more than a handful of catastrophic data breaches, several political scandals, and a sprinkling of general privacy concerns, people are beginning to wonder: To continue living in an environment of free and seamless digital services, are we going to have to finally say bye-bye and farewell to our data privacy?

Everything of value has a price

To explore this question, let’s take Google as an example, with its 200-plus services touching every part of our lives, from work and education to entertainment and navigation.

As it’s so pervasive, you won’t be surprised to hear that Google has a shockingly accurate picture of who you are and what you’re doing. Things like your income, how much you weigh, whether you’re about to conceive, where you are at all times, and, not forgetting, the deepest, darkest secrets you thought you ‘deleted’ from your search history.

In its own words, all the “things that make you ‘you'”.

Data is so fundamental to Google’s products and services that we barely even notice it. Like when you make a typo during a search but get the correct results anyway. Or when you’re shopping online without your card and Google automatically fills in the form for you. Or when you arrive at your destination on time thanks to Google Maps routing you through the least congested areas.

This is because, while on the surface it may appear to be a web company, Google is really a big data company. Google’s products wouldn’t be nearly as good, and many simply wouldn’t exist, without its ability to gather up and crunch massive amounts of data.

But this is where things get a little problematic. As companies like Google, Facebook, and Instagram position themselves as digital services that are ‘free’ to use, their users are deceived about the reality that they’re paying (sometimes heavily, ahem, Facebook) with their data. Whether they know it or not, their users are, in fact, working for Silicon Valley, keeping such freemium businesses afloat and fuelling their advertising revenue with every click, tap, purchase, and decision they make.

This is the immense value of your data. And as many of us don’t really have much use for it ourselves, we’re pretty happy to feed it into making our world and its digital environments that little bit better.

The only thing we ask for in return is trust.

Most of us would be happy enough with the data-for-advertising relationship, provided it was based on trust: that we knew when it was happening, had control over what data was used, and could be sure that neither it nor we would ever be exploited.

This is where GDPR comes in. The point of GDPR is to bring privacy regulation up to speed so that people know, understand, and have to explicitly consent to their data being collected and used in specific ways. GDPR, in essence, shifts the balance of power from businesses to individuals, putting control of personal data firmly back in the hands of its owners and creators. A good example is the right to erasure it introduces, which lets individuals demand that their data be deleted.

This will make it much harder for businesses that operate on deception (ahem, Facebook), and other sketchy practices that undermine trust, to keep doing what they do. Mobile advertising giants like Drawbridge have already exited Europe, unsure how their sneaky ad tactics would comply with the rules around consent (not to mention the huge lawsuits against Google and, you guessed it, Facebook).

It’s clear there are going to be a lot of casualties from GDPR. But it’s exactly this kind of strict action we need to start shifting our digital relationships from ones based on selling data and surviving through deception to ones based on providing value and thriving through trust.

Whether it’s personal, governmental, or commercial, trust is the key ingredient in making any relationship successful over the long term. So when it comes to continuing to enjoy our free digital services and fluid user experiences, we don’t need to give up our data privacy.

We just need to give up supporting the companies and services — unless they beat us to it and fail first (ahem, Facebook) — that put a higher price on our data than our trust.

__

Joseph Pennington is a freelance writer and long-term traveller from the North of England. Follow him on LinkedIn or Medium for more articles like this one.