As UX Designers, it’s easy to blame technical complexity on developers. We work with coders, but code isn’t our job; our job is to provide users with a seamless experience, helping them achieve their goals efficiently. Yet we rarely stop to ask: is that efficiency good for users? Is it moral? What might we be hiding from them?
“As the world around us increases in technological complexity, our understanding of this diminishes.” — James Bridle
From the 2018 Cambridge Analytica scandal to Shoshana Zuboff’s ground-breaking book, The Age of Surveillance Capitalism, we’ve seen an increasing focus on the exploitation of personal data in Big Tech. A lot of attention is given to the decisions made by CEOs, the code written by developers, and the reactions of users. But few people have stopped to consider the role of UX in these technologies, and how it might be used as a tool to exploit users.
To consider the role of seamless interaction in exploitative design, we must first look critically at the relationship between designers and users. Designers are often given the role of ‘user advocate’: someone whose job it is to understand what users want, and design a solution that addresses it. In reality, designers’ jobs go far beyond that — to design a solution, they need to have a full understanding of the mechanisms that underpin it. Designers spend their time working with developers, product managers, legal teams, marketers, technical architects and more to deliver a piece of software. They must know more than the user to do their job, so the relationship between designer and user is always a product of information asymmetry. As designers, we must be aware of this asymmetry to consider the power imbalances we might be perpetuating.
Let’s explore this point further. Digital interactions result from two different views of a system: the view the user experiences (commonly the UI) and the view the development team, and therefore the UX team, has (commonly the full system). The former is described by UX experts as the user’s ‘mental model’, although Jakob Nielsen expands the definition to, “what the user believes about the system at hand.” Typically, UX-ers might address this gap in understanding by changing the UI: how can we make what the user sees conform to their expectations? But our analysis of this gap in understanding must go further:
What mechanisms are in place to question the user’s expectations? When do we have a moral imperative to teach the user more about the system?
From a UX perspective, it’s not desirable for the user’s view of a system to be the same as the designer’s. In fact, for a ‘seamless’ user experience, they have to be different. Since GDPR was introduced, this has become a common source of tension. Each moment a user is offered the opportunity to learn about data collection or data use is also a point at which they might drop out of the user journey; it’s friction in achieving their goal. By treating seamless interaction as the key criterion for what users see, we prevent users from understanding how technology impacts them. As designers, the information asymmetry we benefit from can lead us to produce exploitative designs.
Let’s take an example to illustrate this further. A user looks online to find inspiration for a new haircut — let’s call this their goal. They type “women’s haircuts” into Google, hit enter, and see the results. For the user, the experience is undoubtedly seamless — one click from intention to goal.

But what’s happening underneath? Google’s search implicitly enters the user into a contractual relationship with Alphabet, Google’s parent company. This allows Alphabet’s systems to collect a multitude of inferences about the user — including location, device type, gender and purchasing forecasts — which they can sell to advertisers at a profit. Shoshana Zuboff calls this hidden system the ‘shadow text’ — a view of the system used “as raw material to be […] analysed as means to others’ market ends”. In other words, the inferences that the user doesn’t see are a central source of Alphabet’s advertising profit. Designers have to be aware of these mechanisms in order to contribute to Alphabet’s bottom line; they design solutions that get users to share data. The gap between what the user sees and what the designers know results in extraction and profit.

In this case, the UX Designers at Google know more about the system, more about the profit to be made, and perhaps even more about users, than the users themselves. To come back to the questions I posed: is this good for the user? Is it moral? And what are we hiding from them? In this example, seamless interaction comes at the cost of the user’s autonomy: they don’t get to decide how, why or when their personal data is used. This information asymmetry, created through seamless design, enables the financial exploitation of user data without users’ knowledge or true consent.
Seamless interaction comes at the cost of the user’s autonomy: they don’t get to decide how, why or when their personal data is used.
So, what should we do? Perhaps the first step is to critically rethink the importance of seamless interaction. Humans are complex: we reflect on experiences to make decisions. If, as designers, we focus too much on seamlessness, we rob users of the ability to make informed decisions. If our goal is seamless interaction, then our perfect users are thoughtless consumers of perfectly frictionless systems. In a world where digital services increasingly erode people’s attention spans, UX-ers need to decide: are we going to perpetuate these quick, thoughtless and exploitative interactions? Or can we reshape our approach to UX to redress the balance of power?
Secondly, we can reconsider how we define user goals outside of a commercial context. Could a goal include being informed about the use of personal data? Could a goal even be to stop using your service altogether, if that better serves the user’s true needs?
Finally, UX designers can consider how to rebalance the information asymmetry with users, and reshape how this is reflected in design. Using more open dialogue during user research can allow us to share knowledge about how systems work. In turn, this can force us to have more open conversations about user goals. Our role should involve helping users to understand the different factors at play in their online experience, rather than narrowly focussing on simple goals. Our goal should be to provide users with the ability to make decisions on how the entirety of a system impacts them. By doing so, we can research and design for informed, empowered users.
How does ‘seamless interaction’ drive exploitative design? was originally published in UX Planet on Medium.