Trust

Mat Honan’s horrific experience, in which hackers gained access to his Apple, Amazon, and Google accounts, is just the latest consequence of a troubling progression.


Path silently uploads and saves users’ address books on its servers.

Google’s Street View cars collect data pulled from unencrypted wifi networks. Google agrees to delete the data, but later admits that it didn’t.

Facebook retains user data after a user account has supposedly been deleted.

Google exploits a flaw in Mobile Safari to keep tracking logged-in users, circumventing the browser’s privacy settings in the process.

LinkedIn, Last.fm, and a host of others are targeted by hackers and lose control of users’ passwords.


There are many more stories in a similar vein: a never-ending stream of news reports that yet another popular website has been found doing something skeevy with its users’ data. Whether it’s proactively doing something questionable with that data or passively letting itself become a ripe target for hackers, so many sites, and so many companies, are ultimately operating in ways that abuse users’ trust.

As users, we trust these online services not to be creepy and not to be irresponsible with our data; in short, we expect them to be open and honest.

Don’t be creepy

There are norms that reasonable people agree on, norms for doing the right thing. Yet companies continue to straddle that line, or blow right past it, circumventing these norms in order to extend their reach. There are plenty of excuses for doing so. “We want to improve the user’s experience” is the most common justification. “We’re doing this for you.” But when did we ask you to bypass browser settings? Or to upload our address book so that contact matching can be done faster? There is a line, albeit a fine one, between constantly barraging users with prompts and explanations (a disjointed, jarring experience if done too often or communicated poorly) and silently operating on users’ personal data without their knowledge or explicit consent.

iOS prompt when an app tries to access photos

Whenever there’s a question (and there should be one) about whether users should be notified, or whether anything that could be perceived as untoward is about to be done “on their behalf,” it’s always best to communicate what’s happening and give users the ability to opt out.

Instapaper contact prompt

Rather than seeming creepy by silently doing things “for their users,” companies that communicate clearly and give users control come across as trustworthy.
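To make that concrete, here’s a minimal sketch, in Python, of what an “ask first, take less” contact-matching flow could look like. It’s a hypothetical illustration, not any particular company’s implementation: nothing is sent until the user explicitly agrees, and only one-way hashes of email addresses ever leave the device. The function names (hash_contact, find_friends) and the consented flag are invented for the example.

```python
import hashlib


def hash_contact(email: str) -> str:
    """One-way hash of a contact identifier; the raw address never leaves the device."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def find_friends(address_book: list[str], consented: bool) -> list[str]:
    """Prepare contact-matching tokens only after the user has explicitly opted in."""
    if not consented:
        # No consent, no upload, not even "anonymized" data.
        return []
    return [hash_contact(email) for email in address_book]


# Usage: the prompt (like Instapaper's) comes first; the upload only happens on "Allow".
tokens = find_friends(["alice@example.com", "Bob@Example.com "], consented=True)
```

Hashed identifiers aren’t a perfect shield, since email addresses are guessable, but the shape of the flow is the point: the user is asked, the prompt explains what will happen, and the service takes only what it needs.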

Don’t be irresponsible

As more and more of our lives move onto the internet, we’re required to give more of our trust to more parties. When we give our credit card info to a company, we have to trust that it will properly secure that info and not hand it out to someone on the phone claiming to be us. When we supply a password in order to “securely log in,” we have to trust that the password is stored securely and not as plain text. We have to trust that the private data we upload isn’t being accessed by company employees. There often isn’t much oversight, as these password leaks show, ensuring that security best practices are followed. There are no New York-style restaurant inspection grades hanging at the top of every website. We simply have to trust that a website is not being negligent with our data and is employing competent engineers to keep it protected.
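A concrete example of what “stored securely” means: password leaks are far more damaging when a site keeps raw password text or fast, unsalted hashes. Below is a minimal sketch of the safer approach using only Python’s standard library; the function names are illustrative, and a real system would more likely use a vetted library (bcrypt, scrypt, or similar) than hand-roll even this much.

```python
import hashlib
import hmac
import os


def hash_password(password: str, iterations: int = 200_000) -> str:
    """Derive a salted, deliberately slow hash to store instead of the raw password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return f"{iterations}${salt.hex()}${digest.hex()}"


def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(digest.hex(), digest_hex)


record = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", record)
assert not verify_password("guess", record)
```

The specific algorithm matters less than the property it buys: if the password table leaks, attackers get salted, slow-to-crack digests instead of the passwords themselves. Users have no way to see which approach a site takes, which is exactly the trust problem.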

Don’t lie

How do we know, when we click a button labeled “Delete my account,” that our account and all the information associated with it is actually deleted? Or that our address book has actually been removed from a company’s servers? There’s no way to know, no way to verify short of examining the source code; we’re forced to trust. We have to trust that a company that wasn’t honest to begin with, that abused our trust, has now changed its ways and will actually do what it says it will do. When a Google or a Path says that it has deleted private data, we have to trust that it did.

Go ahead, trust us

Before the prevalence of web software, companies and their software didn’t require as much trust from users. Data was stored locally on users’ hard drives, which meant the potential audience, the potential set of hackers trying to gain access to that data, was in the dozens, as opposed to what is now essentially the whole world. When you “deleted” something, you could verify that it was wiped from your machine. You could “delete your account” simply by uninstalling the application.

But now, as cloud-based software becomes an ever bigger part of our lives, it requires an ever greater share of our trust. Yet the sites that now host significant portions of our lives have been shown to repeatedly abuse that trust, whether through negligence or through underhanded techniques meant to extend their reach. The scary part is that it usually takes an enterprising hacker, and many aggrieved victims, to bring these abuses to light. Web companies have become more and more opaque in order to keep competitors at bay, but users are left feeling helpless and doubtful. “Consider everything you do online, all of your data, all of your ‘private’ interactions, as though they’re public” has become a popular adage lately because that’s what it has come down to.


Will there be a massive user exodus away from Google, Facebook, Amazon, Apple, and the other companies that have shaken users’ trust? That’s unlikely. Will there be more security breaches resulting in the loss of passwords and other user data, and more shady tactics performed “on behalf” of users? Undoubtedly.

We have to trust that companies that store our lives will become more open, more communicative, and better able to earn the trust that we’ve been obliged to give them.