Opinion

A Better Internet Is Waiting for Us

My quest to imagine a different reality.

Social media is broken. It has poisoned the way we communicate with each other and undermined the democratic process. Many of us just want to get away from it, but we can’t imagine a world without it. Though we talk about reforming and regulating it, “fixing” it, those of us who grew up on the internet know there’s no such thing as a social network that lasts forever. Facebook and Twitter are slowly imploding. And before they’re finally dead, we need to think about what the future will be like after social media so we can prepare for what comes next.

I don’t mean brainstorming new apps that could replace outdated ones, the way Facebook did Myspace. I mean what will replace social media the way the internet replaced television, transforming our entire culture?

To find out what comes next, I went on a quest. I was looking for a deeper future than the latest gadget cycle, so I spoke to experts in media history, tech designers, science fiction writers and activists for social justice. I even talked to an entity that is not a person at all.

Collectively, they gave me a glimpse of a future where the greatest tragedy is not the loss of our privacy. It is the loss of an open public sphere. There are many paths beyond the social media hellscape, and all of them begin with reimagining what it means to build public spaces where people seek common ground.

I began on a steep, narrow street in San Francisco’s North Beach, a neighborhood overlooking the Bay, where beatniks used to hang out in the 1950s. It’s miles away from techie-clogged SoMa, where Google employees eat their free lunches and the glowing Twitter sign looms over Market Street.

This is the home of Erika Hall’s design firm Mule. She co-founded it 20 years ago, and she’s watched the web move from the margins to the center of the business world. Back in the early aughts, companies were just trying to figure out how to have an “online presence.” She and her team built websites and digital campaigns for them, using the principles of “user-centered” design to help people navigate the confusing new world of the internet.

“I absolutely believe that you can design interfaces that create more safe spaces to interact, in the same way we know how to design streets that are safer,” she said.

But today, she told me, the issue isn’t technical. It has to do with the way business is being done in Silicon Valley. The problem, as most people know by now, is that tech companies want to grab a ton of private data from their customers without telling anyone why they need it. And this, Ms. Hall says, is bad design for users. It leaves them vulnerable to abuses like the Cambridge Analytica scandal, or to hacks where their data is exposed.

What’s more, companies like Facebook and Twitter lack an incentive to promote better relationships and a better understanding of the news “because they make money through outrage and deception,” Ms. Hall said. Outrage and deception capture our attention, and attention sells ads. “At a business model level, they are ad networks parasitic on human connection.”

There is a lot of pressure on tech companies from the government as well as from activist employees to change what they do with user data. But that doesn’t mean we’re going to see an improvement. We might even see Facebook getting more comfortable with authoritarianism.

“They’ve already shown a willingness to do this — they’ve bent to the demands of other governments,” said Siva Vaidhyanathan, a professor at the University of Virginia and author of a recent book, “Antisocial Media.”

He predicts that we’re about to see a showdown between two powerhouse social media companies — Facebook and WeChat. WeChat has more than one billion users in China and among the Chinese diaspora, and its users have no expectation of privacy. Facebook has 2.4 billion users, dominating every part of the world except China. If Facebook wants to reach inside China’s borders, it might take on WeChat’s values in the name of competition.

As scary as that sounds, none of it is inevitable. We don’t have to lose our digital public spaces to state manipulation. What if future companies designed media to facilitate democracy right from the beginning? Is it possible to create a form of digital communication that promotes consensus-building and civil debate, rather than divisiveness and conspiracy theories?

That’s the question I posed to John Scalzi, a science fiction writer and enthusiastic Twitter pundit. His books often deal with how technology changes the way we live. In “Lock In,” for example, people with full body paralysis are given brain implants that allow them to interact with the world through robots — or even, sometimes, through other people’s bodies. The technology improves lives, but it also makes the perfect murder a lot easier.

Mr. Scalzi is fascinated by the unintended consequences that flow from new discoveries. When he thinks about tomorrow’s technology, he takes the perspectives of real, flawed people who will use it, not the idealized consumers in promotional videos.

He imagines a new wave of digital media companies that will serve the generations of people who have grown up online (soon, that will be most people) and already know that digital information can’t be trusted. They will care about who is giving them the news, where it comes from, and why it’s believable. “They will not be internet optimists in the way that the current generation of tech billionaires wants,” he said with a laugh. They will not, he explained, believe the hype about how every new app makes the world a better place: “They’ll be internet pessimists and realists.”

What would “internet realists” want from their media streams? The opposite of what we have now. Today, platforms like Facebook and Twitter are designed to make users easy to contact. That was the novelty of social media — we could get in touch with people in new and previously unimaginable ways.

It also meant, by default, that any government or advertiser could do the same. Mr. Scalzi thinks we should turn the whole system on its head with “an intense emphasis on the value of curation.” It would be up to you to curate what you want to see. Your online profiles would begin with everything and everyone blocked by default.

Think of it as a more robust, comprehensive version of privacy settings, where news and entertainment would reach you only after you opted into them. This would be the first line of defense against viral falsehoods, as well as mobs of strangers or bots attacking someone they disagree with.
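
To make that inversion concrete, here is a minimal Python sketch of what a blocked-by-default feed might look like. Everything in it is hypothetical, invented purely for illustration; no platform mentioned in this piece works this way.

```python
# A minimal sketch of the "blocked by default" model Mr. Scalzi describes:
# nothing reaches a user's feed unless that user has explicitly opted in
# to its source. All names here are hypothetical.

class CuratedFeed:
    def __init__(self):
        # Start with an empty allowlist: every source is blocked by default.
        self.allowed_sources = set()

    def follow(self, source):
        """Explicitly opt in to a source (a person, publication or service)."""
        self.allowed_sources.add(source)

    def unfollow(self, source):
        self.allowed_sources.discard(source)

    def deliver(self, source, message):
        """Deliver a message only if the user opted in; drop it otherwise."""
        if source in self.allowed_sources:
            return message
        return None  # the default outcome: blocked


feed = CuratedFeed()
feed.follow("trusted_news_outlet")
print(feed.deliver("trusted_news_outlet", "Today's headlines"))  # delivered
print(feed.deliver("unknown_advertiser", "Buy now!"))            # None: blocked
```

The point of the sketch is the starting state: an empty allowlist rather than an open firehose.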

The problem is that you can’t make advertising money from a system where everyone is blocked by default — companies wouldn’t be able to gather and sell your data, and you could avoid seeing ads. New business models would have to replace current ones after the demise of social media. Mr. Scalzi believes that companies will have to figure out ways to make money from helping consumers protect and curate their personal data.

This could take many forms. Media companies might offer a few cheap services with ads, and more expensive ones without. Crowdfunding could create a public broadcasting version of video sharing, kind of an anti-YouTube, where every video is educational and safe for kids. There would also be a rich market for companies that design apps or devices to help people curate the content and people in their social networks. It’s all too easy to imagine an app that uses an algorithm to help “choose” appropriate friends for us, or select our news.

This is where curation might go wrong, says Safiya Umoja Noble, a professor at the University of California, Los Angeles. She’s the author of the groundbreaking work “Algorithms of Oppression” and was one of the first researchers to warn the public about bias in algorithms. She identified how data from social media platforms gets fed into algorithms, amplifying human biases about everything from race to politics.

Ms. Noble found, for example, that a Google image search for “beautiful” turned up predominantly young white women, and searches for news turned up conspiracy theories. Nevertheless, Facebook uses algorithms to suggest stories to us. Advertisers use those algorithms to figure out what we’d like to buy. Search engines use them to figure out the most relevant information for us.

When she thinks about the future, Ms. Noble imagines a counterintuitive and elegantly simple solution to the algorithm problem. She calls it “slow media.” As Ms. Noble said: “Right now, we know billions of items per day are uploaded into Facebook. With that volume of content, it’s impossible for the platform to look at all of it and determine whether it should be there or not.”

Trying to keep up with this torrent, media companies have used algorithms to stop the spread of abusive or misleading information. But so far, they haven’t helped much. Instead of deploying algorithms to curate content at superhuman speeds, what if future public platforms simply set limits on how quickly content circulates?

It would be a much different media experience. “Maybe you’ll submit something and it won’t show up the next minute,” Ms. Noble said. “That might be positive. Maybe we’ll upload things and come back in a week and see if it’s there.”

That slowness would give human moderators or curators time to review content. They could quash dangerous conspiracy theories before they lead to harassment or worse. Or they could behave like old-fashioned newspaper editors, fact-checking content with the people posting it or making sure they have permission to post pictures of someone. “It might help accomplish privacy goals, or give consumers better control,” Ms. Noble said. “It’s a completely different business model.”
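
The mechanics are simple enough to sketch as a thought experiment. In the speculative Python below (all names invented), a post circulates only after a human moderator approves it and a mandatory waiting period has passed:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    submitted_at: float = field(default_factory=time.time)
    approved: bool = False  # set by a human moderator, not an algorithm

class SlowQueue:
    """A speculative 'slow media' pipeline: posts circulate only after
    human review and a mandatory waiting period."""

    def __init__(self, delay_seconds):
        self.delay = delay_seconds  # e.g. a week: 7 * 24 * 60 * 60
        self.pending = []

    def submit(self, post):
        # Accept the post, but do not publish it yet.
        self.pending.append(post)

    def review(self, post, moderator_ok):
        # A human fact-checks the post and approves or rejects it.
        post.approved = moderator_ok

    def publish_ready(self):
        # Publish only posts that are both approved AND old enough.
        now = time.time()
        ready = [p for p in self.pending
                 if p.approved and now - p.submitted_at >= self.delay]
        self.pending = [p for p in self.pending if p not in ready]
        return ready
```

The design choice is the deliberate bottleneck: review capacity, not upload volume, sets the pace of circulation.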

The key to slow media is that it puts humans back in control of the information they share.

Before I chucked algorithms out altogether, I wanted to find out what our future media might look like if we let algorithms take over fully. So I contacted Janelle Shane, an algorithm designer and author of a book about (and named by) A.I., “You Look Like a Thing and I Love You.” She has spent years creating humorous art with OpenAI’s GPT-2 algorithm, a neural network that can predict the next word in text after learning from eight million web pages.

I asked Ms. Shane whether the algorithm could give us some text that might reveal something about the future after social media. She prompted the model by feeding it the terms of service from Second Life, the long-running online virtual world.

To generate its answers, GPT-2 drew on those terms of service along with everything it had learned from humans on the web. “In a sense, GPT-2 is based on just about every code of conduct on the internet, plus everything else on the internet,” Ms. Shane told me. That means GPT-2 is as biased as every bonkers thing you’ve read online. Even if the future of media isn’t here yet, perhaps its imaginary code of conduct would give us clues about what it would be like.
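
Readers can run a version of this experiment themselves, since OpenAI’s GPT-2 weights are publicly available through the Hugging Face transformers library. Ms. Shane’s exact prompt and settings aren’t specified here, so the ones below are stand-ins:

```python
# A rough approximation of Ms. Shane's experiment, using the publicly
# released GPT-2 model via the Hugging Face transformers library.
# The prompt below is a stand-in, not the actual Second Life text she used.

from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Terms of Service. By entering this virtual world, you agree that:"

results = generator(
    prompt,
    max_length=80,           # total tokens, prompt included
    num_return_sequences=3,  # sample a few candidate "rules"
    do_sample=True,          # sample rather than pick the likeliest word
    temperature=0.9,         # a little extra randomness
)
for r in results:
    print(r["generated_text"], "\n")
```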

The algorithm came up with some rules that sounded almost real:

“You may not make money from feature uploads of any kind that upset, ridicule or damage the virtual property.”

“You may not develop artificial or undesired entities for use in Photon Emission Products (PEPs).”

“We retain 14 of your detachable drones to monitor, detect, reproduce and regularly update your virtual questions, software and datalogues.”

“You may not transmit a child virus, via Bluetooth, Group Connection Beans/Sweets, or bee Collision Marketing Eradia virus with your Student or Student Solutions Phone.”

What the neural network seemed to be telling me was that even when we’re all in a distant future of “Photon Emission Products,” “detachable drones” and “Group Connection Beans,” we’re still going to be worried about how we treat one another in public spaces. We’ll want to set up rules that limit undesirable outcomes and protect children.

More important, Ms. Shane’s neural network makes it obvious why media run by algorithms is doomed to fail. It might look as if it’s working — after all, “bee Collision Marketing Eradia virus” almost makes sense. But it’s just a word mush, without real meaning. We need humans to maintain and curate the digital public spaces we actually want.

And even if our algorithms become miraculously intelligent and unbiased, we won’t solve the problem of social media until we change the outdated metaphors we use to think about it.

Twitter and Facebook executives often say that their services are modeled on a “public square.” But the public square is more like 1970s network television, where one person at a time addresses the masses. On social media, the “square” is more like millions of karaoke boxes running in parallel, where groups of people are singing lyrics that none of the other boxes can hear. And many members of the “public” are actually artificial beings controlled by hidden individuals or organizations.

There isn’t a decent real-world analogue for social media, and that makes it difficult for users to understand where public information is coming from, and where their personal information is going.


It doesn’t have to be that way. As Erika Hall pointed out, we have centuries of experience designing real-life spaces where people gather safely. After the social media age is over, we’ll have the opportunity to rebuild our damaged public sphere by creating digital public places that imitate actual town halls, concert venues and pedestrian-friendly sidewalks. These are places where people can socialize or debate with a large community, but they can do it anonymously. If they want to, they can just be faces in the crowd, not data streams loaded with personal information.

That’s because in real life, we have more control over who will come into our private lives, and who will learn intimate details about us. We seek out information, rather than having it jammed into our faces without context or consent. Slow, human-curated media would be a better reflection of how in-person communication works in a functioning democratic society.

But as we’ve already learned from social media, anonymous communication can degenerate quickly. What’s to stop future public spaces from becoming unregulated free-for-alls, with abuse and misinformation that are far worse than anything today?

Looking for ideas, I talked to Mikki Kendall, author of the book “Amazons, Abolitionists, and Activists.” Ms. Kendall has thought a lot about how to deal with troublemakers in online communities. In 2014, she was one of several activists on Black Twitter who noticed suspiciously inflammatory tweets from people claiming to be black feminists. To help figure out who was real and who wasn’t, she and others started tweeting out the fake account names with the tag #yourslipisshowing, created by the activist Shafiqah Hudson. In essence, the curated arena of Black Twitter acted as a check on a public attack by anonymous trolls.

Ms. Kendall believes that a similar mechanism will help people figure out fakes in the future. She predicts that social media will be supplanted by immersive 3-D worlds where the opportunities for misinformation and con artistry will be immeasurable.

“We’re going to have really intricately fake people,” she said. But there will also be ways to get at the truth behind the airbrushing and cat-ear filters. It will hinge on that low-tech practice known as meeting face to face. “You’re going to see people saying, ‘I met so-and-so,’ and that becomes your street cred,” she explained.

People who aren’t willing to meet up in person, no matter how persuasive their online personas, simply won’t be trusted. She imagines a version of what happened with #yourslipisshowing, where people who share virtual spaces will alert one another to possible fakes. If avatars are claiming to be part of a group, but nobody in that group has met them, it would be an instant warning sign.
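
The bones of that reputation system can be sketched in code. The Python below is entirely speculative, with invented names: an avatar accrues trust through attestations from people who have met them face to face, and a claimed group membership that no one inside the group can vouch for raises a flag.

```python
from collections import defaultdict

class TrustLedger:
    """A toy sketch of in-person 'street cred': trust flows from
    attestations by people who have met an avatar face to face."""

    def __init__(self):
        # avatar -> the set of people who attest to meeting them in person
        self.met_in_person = defaultdict(set)

    def attest(self, attester, avatar):
        """Record that `attester` has met `avatar` face to face."""
        self.met_in_person[avatar].add(attester)

    def warning_sign(self, avatar, claimed_group):
        """Flag an avatar claiming membership in a group when nobody
        in that group has actually met them."""
        return not (self.met_in_person[avatar] & set(claimed_group))


ledger = TrustLedger()
ledger.attest("member_a", "new_avatar")
print(ledger.warning_sign("new_avatar", {"member_a", "member_b"}))      # False: vouched for
print(ledger.warning_sign("slick_stranger", {"member_a", "member_b"}))  # True: no one has met them
```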

The legacy of social media will be a world thirsty for new kinds of public experiences. To rebuild the public sphere, we’ll need to use what we’ve learned from billion-dollar social experiments like Facebook, and marginalized communities like Black Twitter. We’ll have to carve out genuinely private spaces too, curated by people we know and trust. Perhaps the one part of Facebook we’ll want to hold on to in this future will be the indispensable phrase in its drop-down menu to describe relationships: “It’s complicated.”

Public life has been irrevocably changed by social media; now it’s time for something else. We need to stop handing off responsibility for maintaining public space to corporations and algorithms — and give it back to human beings. We may need to slow down, but we’ve created democracies out of chaos before. We can do it again.


Annalee Newitz (@Annaleen) is a contributing opinion writer and the author, most recently, of “The Future of Another Timeline.”


