Consider the uncle.
A friendly enough figure from your childhood, he’s now part of the constellation of family members with whom you share much and little at once. He tells amusing stories about your parents. He’s an avid fan of the same sports team that you are. You love his children, your cousins. But he also has unpleasant political opinions that you strongly dislike hearing.
He approaches you at a holiday gathering and begins to reminisce about your mother’s infamously rowdy youth. It’s riveting and hilarious. You’re wide-eyed, nodding along, looking directly at him with attention and a slight, involuntary smile on your face. But after a few stories, he digresses into his take on some political controversy, and here is another side of the man: his position seems not merely incorrect but deeply objectionable. Your replies are short, the bare minimum; you don’t maintain eye contact; perhaps you excuse yourself, mentioning that you need to refill your drink. (And perhaps you do.) Whereas the first subject yielded great conversation, the second halts it.
At the next gathering—assuming that he’s normatively socialized—your uncle might be likelier to bend your ear about your mom and cousins than about his opinions on politics. Your subtle, soft signals conveyed to him that you prefer some subjects to others, and both of you get more of what we all seek in social interactions if he respects those preferences. He gets your affection, attention, and appreciation; you’re entertained by stories of mom’s salad days. Best of all, no painful confrontations or laborious, preemptive declarations of acceptable subjects were needed. Fluidly, you came to an understanding that will be iterated on over the course of your lives. He will occasionally test your interest in proximate areas—as you will his—and together you’ll negotiate a conversational arrangement that works fairly well for both of you.
If we can only dream of such a successful resolution with family members, we at least know this process with friends and acquaintances. This “mutual personalization” of relationships is a constant, ubiquitous, and vital part of how we order our lives. We send and receive signals about one another’s attention, interest, and mood unceasingly, often involuntarily. Likewise, we tailor our own attention, expression, and behavior to achieve appropriate concord with interlocutors, and in doing so as individuals we aggregate into groups aligned around shared norms.
Our signals and responses range from the subtle and unconscious to the overt and deliberate, and they’ve evolved with us over the course of millennia. They are sometimes described as part of etiquette; they help us maintain harmonious relationships in different areas of our lives (and at different times). For groups, they constitute community standards and can even become the status quo. A rich set of subtle and multivalent signals allows individuals to preserve themselves even as they meet the demands of others and of groups, for good and ill.
Online, it’s a different story.
Instead of using the rich signaling vocabulary humanity has developed, our digital social relations are governed by very simple data models and UI schemes. There are often just a handful of actions users can take in social software, and most are overt and public. Except in the most advanced systems, the options regularly sum to a single choice: “I want to see everything from my uncle” or “I never want to see anything from my uncle.”
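To make that poverty concrete, here is a minimal sketch, with entirely hypothetical names, of the relational data model such systems imply. The whole of a relationship between two people reduces to a single boolean:

```python
# A hypothetical sketch of the binary "follow" model underlying many
# social networks; real systems add timelines, blocks, mutes, and so on.

follows: set[tuple[str, str]] = set()  # (follower, followee) pairs

def follow(follower: str, followee: str) -> None:
    follows.add((follower, followee))

def unfollow(follower: str, followee: str) -> None:
    follows.discard((follower, followee))

def should_show(viewer: str, author: str) -> bool:
    # "Everything from my uncle" or "nothing from my uncle": no topics,
    # no moods, no contexts, no soft signals.
    return (viewer, author) in follows
```

Nothing in this model can express "stories about mom, yes; politics, no."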
If only your uncle were that simple; if only anyone were.
Containing Contending Multitudes
Humans “contain multitudes,”1 but it’s hard to feel at ease with our multiplicity when any utterance might be met with confrontation or sudden, summary rejection. While we can fault the judgmental, the truth is that we designers have created this situation: by giving hundreds of millions of users a single room in which to discuss football games and funerals, protest marches and gossip, for example, or by making stressfully explicit who “follows” whom. In such spaces, it’s amazing that anything gets said without blowback.
This is bad enough for each of us, but worse is that individual anxieties about judgment, expression, and norms aggregate into group tensions. The impossibility of subtly negotiating multiple communities’ expectations and boundaries results in much of the notoriously intense harassment, moralizing, othering, and shaming we see online. These tribal behaviors are, after all, some of the tools communities use to substantiate themselves for their members’ well-being.
That’s not because there’s anything necessarily wrong with these communities, either. To feel safe and to communicate efficiently, people must have shared norms. Individuals and groups incessantly and instinctively attempt to establish such norms online, largely without stable success. We have few walls, little privacy, less tradition, no soft signaling, and more emboldened—often anonymous—interlopers. The online scrum is, in many ways, a battle for reliable community norms in spaces that hold many partially or fully incompatible people and groups.
In sum: we are living in simple software, and preferences collide. Deprived of gentle means for achieving mutual personalization, we cannot escape undesirable interactions and content without social costs. Painfully, we also become the objectionable other to people with whom we’d have perfectly rewarding, fluid, continually refined relationships in real life. Everyone must take everyone else in full or not at all, and if everyone is either in or out—of social circles, of scenes—community membership becomes a contentious proposition. Belonging becomes binary; total identification with a community is mandatory, and communities must aggressively assert their norms and both protect and police their memberships. They punish non-compliance within and react against the other outside, as threatened communities do.
For people and communities, this has not only social implications but moral ones, as online spaces become zones of culture conflict in which we must judge and be judged. The “chilling effect” on expression is real; some individuals muzzle the selves they suspect aren’t universally palatable, while the brash come to dominate discourse. For systems designers, it is one of many problems that approach the political in nature. Many attempt to address the problem with increasingly legislative policies about what is and isn’t acceptable behavior. But who decides what’s acceptable is itself a political question.
One serious error is to think that there are “good” users and “bad” users, and that we need merely to provide reporting tools to allow the ferreting out and banning of the latter. While there are truly bad actors who must be removed, they cause a minority of clashes. In real-world terms, crime is less common than incompatibility in its many forms. So social software designers shouldn’t aspire to be legislators of what’s “good” but rather framers whose systems allow individuals and communities to determine their own mores. This is a difficult challenge, but a mandatory one; as the Russian novelist Aleksandr Solzhenitsyn wrote:
If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.2
For designers of products with many users, it’s crucial to understand not only the practical relativity of good and evil, but also that humans have many selves, some of which come and go during their lifetimes. A well-designed system—like a well-designed government—mitigates the costs of discordant differences while allowing individuals the maximum degree of freedom to be themselves, even as it encourages communities to form and benefit from their own norms and traditions.
Your uncle isn’t an evil person, after all. But when you must judge him in full, he—like most humans—falls short of perfection. On the other hand, you know some of your opinions must irritate him. Do you want a world in which unanimity of opinion is required even for mere acquaintanceship? Of course not!
Still, seeing his posts bums you out, makes you feel argumentative, sets you off on vexing internal debates with imagined foes about issues you don’t even intend to be thinking about. So: do you unfollow your uncle? Do you care how that makes him feel?
And what if, rather than an uncle, it’s a friend or colleague, or a boss or mentor? And imagine this dilemma repeated for every relationship between every pair of people! How will your community—whatever it is—achieve a safe and reliable composition that lets members “be themselves” without getting aggressive about intruders who don’t share your norms (aggression which may itself be norm-violating)?
Why do these ostensibly social systems make social life harder? And what can be done about it?
Patterns We Copy
Most social software is based on existing software patterns rather than on how we live and coexist. Real-world social dynamics are so complex that we can hardly understand them, let alone imagine how they might be mirrored in, say, a user interface. Even if we were to try to match their complexity—presenting a user with hundreds of sliders, checkboxes, and options for responding to posts or reacting to another user—all we’d accomplish is overburdening her with administrative tasks. Such an interface would never accurately capture her full social sentiments, and regardless, it would be time-consuming and annoying.
Indeed, most of our interfaces require explicit, conscious action, and that in itself is problematic for the replication of our full range of signals, many of which are, again, unconscious or ambiguous. Sometimes the precise mechanism of a signal is its ambiguity—your uncle may wonder, “Does she really need a drink, or is she tired of my talking politics?”—which permits both parties to interpret it in the most personally palatable way. Face-saving is important. User interfaces are not generally well-suited to ambiguous signals, let alone unconscious ones.
But clever designs find ways around this. Consider the issue for a dating app: How can we make finding a partner no more painful than it is in the real world, and hopefully less? How can we mitigate the anxiety of people in a delicate social situation involving approval and rejection?
We can start by considering how people protect feelings in real life. One very common method is lying. Say you ask for a phone number from someone at a bar and get it, but it’s fake. This saves face that night—while you’re intoxicated, with your friends, in public—and allows you to process your feelings however you like the next day: “I must have been too drunk to hear the number right!” Even if you do feel rejected, it’s still less likely to embarrass you than being rejected face-to-face; and besides, what can you do? Indeed, lying is a popular solution: “I’m seeing someone” also works in this case. We lie even to our friends: “Sure, I’d love to do that!” we say face-to-face, and later send the email “Oh my gosh, it turns out we have plans.” And so on.
But lying isn’t really supported in software. We can lie to other people through software—every profile bio attests to that—but lying to software, so that it operates with false ideas of what we want or think, isn’t compatible with achieving utility. A dating app that people lie to about whom they like will not work very well!
Another solution is to use intermediaries: “Pat, can you ask Lee if Jesse likes me?” Long after grade school, forms of this persist. We attempt to validate whether we’re liked (or not) through a third party in part because intermediaries translate and soften signals. But dating services in which you involve your friends as wing-people are rare.
The answer provided by the double-opt-in mechanic common to Tinder and many other services borrows from both of these real-world solutions: have a systematic intermediary depersonalize some of what happens, rendering signals ambiguous. This way, no one can know that they’ve been rejected. Individuals can be more at ease, and the community will have fewer disturbances caused by the social costs of approval and rejection.
In effect, this outsources lying and uses a third party to soften the blow. When you “approve” of a person but never hear back, it is the service’s refusal to distinguish between “people who haven’t seen you” and “people who reject you” that saves you face, as though the service is giving you fake phone numbers. You can only wonder: “Was there just a harmless miss, or was I rejected?” This is an outstanding solution, because it not only restores but actually amplifies the ambiguity of the real-world social process. In truth, it’s hard to approach people and ask for numbers. It’s often the case that we can tell when we’re liked or disliked; and with mobile phones, creeps test phone numbers right away anyway!
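It takes surprisingly little machinery to express. The sketch below, with hypothetical names and a bare in-memory store, captures the essential move: record every choice, but speak only when approval is mutual.

```python
# A minimal sketch of a double-opt-in mechanic (names are hypothetical;
# real services add persistence, notifications, and safety systems).

likes: set[tuple[str, str]] = set()  # (liker, liked) pairs

def swipe(actor: str, target: str, approve: bool) -> str:
    """Record a choice; reveal a match only when approval is mutual."""
    if approve:
        likes.add((actor, target))
        if (target, actor) in likes:
            return "match"  # both approved; only now is anything revealed
    # In every other case the system stays silent: the target can never
    # distinguish "hasn't seen me yet" from "saw me and passed." The
    # silence is the systematic equivalent of the fake phone number.
    return "silence"
```

The design choice worth noticing is what the function refuses to expose: rejection never exists as an observable event.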
So this solution enhances the capacity of individuals to make free choices with reduced fear of social cost. In this sense, these services improve on reality by taking the solutions we use in the real world, abstracting them to consider their consequences, and then figuring out how software can achieve the same consequences with different mechanisms.
Translating Mechanisms
Let’s try to generalize the problem for any social software: How can we enable mutual, painless personalization of social experiences online? What features of evolved real-world individual and community social dynamics can we replicate with current technology?
There are countless possibilities at many levels of design. I’ll mention one abstractly: systems should be able to fluidly recognize and concentrate communities of users with soft borders, permitting less explicit affiliations and departures while still supporting zones where community norms abide. There are systematic and user-interface problems to solve, but solving them would likely reduce the community-defining and community-protecting behaviors that make public spaces online so problematic. Networks in which we can be our bar-selves, work-selves, gossip-selves, activist-selves, parent-selves, critical-selves, and other-selves without interference—city-like networks in which the bar and city hall aren’t the same space, but also aren’t private, rigidly defined, members-only spaces—are hard to imagine visually but will exist someday.
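As one way to picture such soft borders: replace the boolean membership flag with a graded affinity that drifts with behavior. The sketch below is speculative; its names and smoothing constant are illustrative assumptions, not any existing system’s design.

```python
# Soft community borders: membership as a degree, not a bit.
# All names and constants here are illustrative.

from collections import defaultdict

affinity: dict[tuple[str, str], float] = defaultdict(float)

def observe(user: str, community: str, signal: float) -> None:
    """Fold an engagement signal (from -1.0 to 1.0) into a running
    average, so affiliation strengthens or fades gradually."""
    key = (user, community)
    affinity[key] = 0.9 * affinity[key] + 0.1 * signal

def norm_weight(user: str, community: str) -> float:
    """How strongly this community's norms and content apply to this
    user: a gradient, so drifting away never requires a public exit."""
    return max(0.0, min(1.0, affinity[(user, community)]))
```

Affiliation and departure become matters of degree, and no one has to announce either.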
In the meantime, there are other technologies being used to solve these sorts of problems. Among them, machine-learning personalization is the assistive intermediary function par excellence. Best known as what powers Facebook’s “Top Stories” news feed, machine-learning personalization aggregates hundreds of explicit and implicit signals, including some that are subtle or even unconscious. It acts as the intermediary whom we blame or credit and whose role lessens the social cost of our preferences. It continually explores our preferences and refines its model as it (and we) change over time. Meanwhile, it requires little to no administration and is fundamentally diversifying, as it creates maximally individuated software experiences.
It achieves this diversity through a process very much like the one we use in real-world social situations. When a machine-learning system first “meets” you, it must make some truly random guesses, unless there’s inherited contextual information from the start (for example, you’ve connected another service that it can mine for data). As it learns about you, it can increasingly relate you to cohorts (based on vectors of signals). It can also continually introduce test content in the proportion you seem to favor, drawn from proximate or orthogonal cohorts or even at random. This is more or less how humans operate when they meet, of course: some inherited data—perhaps an outfit or an introduction—guides initial explorations, but as we form a mental model of whom we’re dealing with, we get better at guessing whether they’ll enjoy talking about sports or politics or technology or food. If we’re smart and decent, we don’t stereotype; such signals are directional, but not exclusionary. So too with machine learning, which never “finishes” learning about each user or reduces her to a flat, unchanging profile.
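The cold start, cohort matching, and test content just described amount to an explore/exploit loop. Here is a deliberately simplified, epsilon-greedy-style sketch; its proportions, item pools, and names are illustrative assumptions, not any network’s actual algorithm:

```python
import random

def pick_next_item(cohort_items: list[str],
                   adjacent_items: list[str],
                   all_items: list[str],
                   explore: float = 0.1) -> str:
    """Serve the model's best current guess most of the time, but keep
    probing so the model can follow the user as she changes."""
    roll = random.random()
    if roll < explore / 2:
        return random.choice(all_items)       # cold-start-style random guess
    if roll < explore:
        return random.choice(adjacent_items)  # test a proximate cohort
    return random.choice(cohort_items)        # exploit the current model

# Each item's reception (clicks, dwell time, hides) would feed back into
# the cohort model, so the guessing never "finishes."
```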
Indeed, machine-learning personalization of content is possibly the most democratic editorial process yet deployed at scale. In a well-personalized feed, no one’s conception of what’s best matters but yours, and that remains true even if you don’t know what you like or lack the time, ability, or interest to describe all the valences of interests and habits that constitute your full identity. A system with sophisticated machine learning has, in effect, deployed an attentive assistant whose priority is to find out what you care about, which people you want to hear from, what content you find objectionable, and even how your moods and tastes vary with time and context.
But machine-learning personalization has been controversial in the design community, partly because of confusion about how we socialize in reality.
Firehose or Fascism
Critics of machine-learning personalization tend to make one of three claims.
First, some fear that personalization concentrates “control” in the hands of the network owners, who tinker with opaque algorithms whose details we can never know. But network owners don’t want control; they want our use and attention. Personalization can be computationally costly, but companies choose to bear those costs because they must provide users with good experiences—whatever that means to each of us—or we’ll find other networks. Machine-learning personalization doesn’t mean that networks—let alone persons working for the network—decide what you see; it means that you decide what you see. A bad feed, which through omission censors content users want, will eventually drive us away from any network, no matter how popular or powerful.
A second view is that even if we control our feeds, personalization partializes our view of the world, trapping us in “filter bubbles” that deny us access to novel or dissenting views. However, this is mistaken too. Personalization is a constant, daily fact, not a new technological phenomenon. We all adjust our signals, our environments, our social circles, our media intakes to be as we want, and typically we only gainsay the choices of others (especially others whose opinions we disagree with). But no one should cede control of their bookshelves, evenings, television remote, party invitation list, or the like to an imposed conception of “what a person should experience,” dictated by these critics or anyone else.
Furthermore, non-personalized social software is not an option: as networks scale and every user’s graph grows, simple chronological feeds become unmanageable. (Follow a thousand accounts that each post a few times a day and you face several thousand items daily, far more than anyone can read.) We can burden the user with the social and administrative costs, or we can have systems bear those costs for them, as traditions and norms do in the real world. But we cannot prescribe the social and informational diet, as it were, for others, and it’s especially important that designers remember this: we are not arbiters of what’s good; we create so that humans can be empowered to pursue their own ends, not ours.
The third major concern is that machine-learning personalization is difficult, and poor execution results in frustrating software, content, and social experiences. This is absolutely true, and will remain an issue—as it is on Facebook, for example—until machine-learning solutions improve, are standardized, and are commoditized. But this is true of everything in technology, and these problems are soluble.
The Illusion of Control
Machine-learning personalization is just one means of achieving real-world ends in software, of course. But it’s illustrative of how open-minded we should be in evaluating technology. It’s crucial that designers think seriously and pragmatically about consequences rather than mapping their reactions to moralizing narratives. The idea that personalization is about corporate or political control is an emotionally satisfying but inaccurate one. It ignores how humans, human societies, and machine learning all work. It also ignores the problems personalization is trying to solve: to help people navigate an ocean of content and many types of social connections.
If some of our experiences have made us wary of personalization, most of us have also had moments when the opposite was true. How brittle personalization is—how dependent our experiences are on its working well—varies between products and designs. How much personalization interferes with a user’s cognitive model of your software, for example, is something to think about and mitigate.
But the time when there were primarily power users online is over. Most users do not want the “control” of RSS and Twitter lists and blocking, muting, and unfollowing their fellows. Nor do they want our view of what they should read, whom they should know, or how they should act. They want to be empowered to find the information that matters to them, share and interact with the people they choose, and experience the world on their terms. Not only does personalization not thwart information diversity, it helps diverse individuals live and learn as they please. And empowering people with that kind of control should be—for designers who favor democracy—a lifelong goal.