Earlier this month, Twitter did something radical: The social network famous for its 140-character limit doubled it to 280.
Weirdly, the world didn’t end. There was some whining from old-timers, which quickly died out. Mostly, everything was O.K. Some might even say the change was for the better.
There’s a lesson in this for Twitter: It should be bolder in how it manages its network. It can slay more sacred cows. And after a year in which it became blindingly obvious that Twitter was rife with abuse and harassment, and that it has become a haven for propagandists, bots and other manipulators, there’s one sacred cow in particular that deserves to be roasted more thoroughly than your Thanksgiving bird.
It’s time for Twitter to scrap one of its founding principles: the idea that it is an anything-goes paradise, where anyone who signs up for a voice on its platform is immediately and automatically given equal footing with everyone else, and where even the vilest, most hateful and antisocial behavior should be tolerated.
The company is currently remaking its unworkable verification system — the blue check mark it awards to some high-profile accounts, an icon whose precise meaning is unclear, but that confers many privileges. Last week, Twitter removed the icon from several accounts belonging to white supremacists. Now it says it is rethinking the whole system, and looking for ways to better police its network.
It ought to consider a radical, top-to-bottom change like this: Instead of awarding blue checks to people who achieve some arbitrary level of real-world renown, the company should issue badges of status or of shame based on signals about how people actually use, or abuse, Twitter. In other words, Twitter should begin to think of itself, and its users, as a community, and it should look to the community to determine the rights of people on the platform.
Is someone making a positive contribution to the service, for example by posting well-liked content and engaging in meaningful conversations? Is an account repeatedly spreading misinformation? Is it promoting or participating in online mobs, especially mobs directed at people with fewer followers? Did it just sign up two days ago? Is it acting more like a bot than a human? Are most of its tweets anti-Semitic memes? Can the account be validated with other markers of online reputation — a Facebook account or a LinkedIn profile, for instance? And on and on.
Twitter should not just embrace such reputational guidelines; it should make them transparent and meaningful. If you’re new to Twitter, or if you’ve repeatedly flouted its community rules, your rights on the platform would be circumscribed. Perhaps you wouldn’t show up in other people’s timelines or replies or search results. The better you used the service — where “better” is determined, as much as possible, based on how others react to your account — the more status you’d earn, and the more you’d be allowed to do.
“You set up a system that encourages positive behaviors, and discourages others — so all of a sudden people see that the more trusted you are, the more reach you get,” said Anil Dash, a software executive in New York who has spent years creating and managing online communities, and who floated the broad outlines of this plan in a recent interview.
Mr. Dash suggested that instead of a blue check mark, everyone might start out with, say, a gray one. But as you gained trust and rights on the network, your check mark would change color — it’d turn blue, then green, then perhaps gold. Status would be something you could earn and could lose, rather than something you’d be awarded by Twitter from on high.
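To make the mechanics of Mr. Dash’s suggestion concrete, here is a minimal sketch of how such an earn-and-lose badge system could work. Every name, signal and threshold below is a hypothetical illustration, not anything Twitter or Mr. Dash has specified.

```python
# A sketch of a tiered-reputation badge system: trust is earned through
# positive behavior signals and lost through abusive ones, and the badge
# color (and the rights it carries) follows the score automatically.
# All signals, weights, and thresholds here are invented for illustration.

from dataclasses import dataclass

# Badge tiers in ascending order of earned trust.
TIERS = [
    (0,   "gray"),   # new or untrusted accounts start here
    (50,  "blue"),
    (150, "green"),
    (400, "gold"),
]

@dataclass
class Account:
    score: int = 0

    def record(self, signal: str) -> None:
        """Adjust reputation based on an observed behavior signal."""
        deltas = {
            "well_liked_post": +5,
            "meaningful_reply": +3,
            "verified_external_profile": +20,   # e.g. a linked LinkedIn page
            "misinformation_strike": -40,
            "harassment_report_upheld": -60,
        }
        self.score = max(0, self.score + deltas[signal])

    @property
    def badge(self) -> str:
        """Status is earned (and can be lost), not awarded from on high."""
        color = TIERS[0][1]
        for threshold, name in TIERS:
            if self.score >= threshold:
                color = name
        return color

    def can_appear_in_search(self) -> bool:
        # Low-trust accounts would have circumscribed rights on the platform.
        return self.badge != "gray"
```

Under this sketch, a new account starts gray and hidden from search; ten well-received posts would lift it to blue, while a single upheld harassment report would knock it back down — the “positive dynamic” in miniature.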
“It becomes a positive dynamic,” Mr. Dash said.
Twitter declined to discuss how it might change its verification system, and there are many questions about how such a plan might work. Before we get to those, let’s deal with the two most obvious criticisms of this idea. One: Who cares? Two: Isn’t Twitter supposed to be a bastion of free speech?
“Who cares?” is a fair question. Twitter’s user base is tiny compared with Facebook’s or Instagram’s, and its arcane conventions and the generally combative, depressive hellscape that is much of its content deter most normal people. But because it is catnip for journalists (including yours truly), who think of it as a fast feed of news and commentary, Twitter exerts influence beyond its numbers.
As I’ve argued before, Twitter has become the small bowel of the American news landscape — the place where the narratives you see on prime-time cable are first digested and readied for wider consumption. It’s no accident that it is President Trump’s social network of choice. And it’s also no accident that foreign powers are attracted to Twitter. According to its recent congressional testimony, Twitter was a primary target of Russian trolls who sought to influence last year’s presidential election; collectively, trolls created millions of election-related tweets, according to the company, some of which were widely cited across the media.
It is precisely because of Twitter’s wider social importance that even nonusers should demand fixes to how it works. Besides the propaganda problem, at the moment — as Jack Dorsey, Twitter’s co-founder and chief executive, recently acknowledged — Twitter is a hostile place for women, minorities and many others, who are routinely barraged by threats and hate speech.
Twitter now concedes that its system for mitigating some of these problems, the verification badge, has been badly mismanaged. The blue check system started out as a simple way to verify a person’s identity — a kind of trademark for ensuring that a tweet from an account with the name Donald J. Trump had come from the real Donald J. Trump.
But Twitter’s system for giving out the checks was never very transparent or logical. Dozens of Twitter users told me that they’d been denied check marks for reasons that were never explained. (I got mine when I worked at Slate, whose social media team had a connection with Twitter; in this business, it’s all about who you know.)
Twitter has since muddied the meaning of the blue badge. It has come to convey more than just an ID check; Twitter has rolled out special features to verified accounts, and blue check-marked tweets receive special treatment from its algorithms. This has confounded Twitter’s efforts to police the network: On one hand, it was trying to fight abuse; on the other, it was verifying trolls who routinely directed their followers to harass people.
At the core of this problem is confusion over what kind of network Twitter should be. Twitter’s founders always talk about the service as a kind of public square, where everyone should be able to have a more or less unfettered voice. That’s a misguided analogy, because it misses the nuances of the real world.
Even a real public square imposes limits on how people can behave. Sure, the sign-wielding crazy guy is free to stand up on a crate and spout his nonsense — but you’re free to ignore him, and if you do, he’s not allowed to marshal all his acolytes (or to invent new ones) to follow and harass you. More than that, in the real world, we rely on many signals to determine who is worth listening to and who isn’t: body language, ways of speaking, ways of dressing and an overall history, an earned reputation that determines a person’s place in the community.
It wouldn’t be easy to recreate such a system on Twitter. The company faces Wall Street pressure to gain more users; imposing checks on people would most likely frustrate that effort. Twitter would also have to have frank discussions about which behaviors it considered beneficial and which it didn’t — and it would have to explain and defend those decisions to those whose trust it has repeatedly broken.
But what choice does it have? Twitter, as it is now, isn’t working well for anyone.
“Would it get complicated? Of course it would, but that’s the nature of running a business that’s not negligent,” said Sarah Szalavitz, the chief executive of 7 Robot, a digital design agency, who has long criticized Twitter’s efforts to police itself. “At the moment, I think they’re doing harm, and they know it.”