Why doesn’t Twitter remove more white supremacists from its platform? It’s one of the most enduring questions about the company, largely because of Twitter’s contradictory attitudes. On one hand, it has a hateful conduct policy that explicitly bans tweets meant to incite fear of, or violence toward, protected groups. On the other, any number of prominent white nationalists can still be found on the platform, using Twitter’s viral sharing mechanics to grow their audiences and sow division.

In Motherboard today, Joseph Cox and Jason Koebler talk with an employee who says Twitter has declined to implement an algorithmic solution to white nationalism because doing so would disproportionately affect Republicans. They write:

The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued.

There is no indication that this position is an official policy of Twitter, and the company told Motherboard that this “is not [an] accurate characterization of our policies or enforcement—on any level.” But the Twitter employee’s comments highlight the sometimes overlooked debate within the moderation of tech platforms: are moderation issues purely technical and algorithmic, or do societal norms play a greater role than some may acknowledge?

Twitter denied the substance of Motherboard’s report. What strikes me about the article is how it seems to take for granted the idea that an algorithm could effectively identify all white supremacist content, and presents Twitter’s failure to implement such a solution as a mystery. “Twitter has not publicly explained why it has been able to so successfully eradicate ISIS while it continues to struggle with white nationalism,” the authors write. “As a company, Twitter won’t say that it can’t treat white supremacy in the same way as it treated ISIS.”

The biggest reason Twitter can’t treat white supremacy the same way it treats ISIS is laid out neatly in a piece by Kate Klonick in the New Yorker today exploring Facebook’s response to the Christchurch shooting in its immediate aftermath. She writes:

To remove videos or photos, platforms use “hash” technology, which was originally developed to combat the spread of child pornography online. Hashing works like fingerprinting for online content: whenever authorities discover, say, a video depicting sex with a minor, they take a unique set of pixels from it and use that to create a numerical identification tag, or hash. The hash is then placed in a database, and, when a user uploads a new video, a matching system automatically (and almost instantly) screens it against the database and blocks it if it’s a match. Besides child pornography, hash technology is also used to prevent the unauthorized use of copyrighted material, and over the last two and a half years it has been increasingly used to respond to the viral spread of extremist content, such as ISIS-recruitment videos or white-nationalist propaganda, though advocates concerned with the threat of censorship complain that tech companies have been opaque about how posts get added to the database.
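The mechanics Klonick describes are simple enough to sketch in a few lines of Python. Treat this as a toy illustration, not anyone’s production system: the names are mine, and real matchers such as Microsoft’s PhotoDNA use perceptual hashes that survive re-encoding and cropping, whereas the exact SHA-256 fingerprint below matches only byte-identical files.

```python
import hashlib

# Simplified, hypothetical sketch of hash-based content matching.
# Production systems (e.g., Microsoft's PhotoDNA) use perceptual hashes
# that tolerate re-encoding; this exact hash matches identical bytes only.

BLOCKED_HASHES = set()  # the shared industry hash database, in miniature


def fingerprint(content: bytes) -> str:
    """Create the numerical identification tag ("hash") for a piece of content."""
    return hashlib.sha256(content).hexdigest()


def register_known_bad(content: bytes) -> None:
    """Add newly identified extremist material to the database."""
    BLOCKED_HASHES.add(fingerprint(content))


def screen_upload(content: bytes) -> bool:
    """Screen a new upload against the database; True means block it."""
    return fingerprint(content) in BLOCKED_HASHES
```

The weakness is visible right in the sketch: re-encode or crop a video and an exact hash no longer matches, and there is nothing to fingerprint at all when the offending content is an ambiguous sentence.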

Fortunately, there’s a limited amount of terrorist content in circulation, and platforms have been able to stay on top of it by hashing the photos and videos and sharing those hashes with their peers. There is no equivalent hash database for white nationalism, particularly the text-based variety that so often appears on Twitter. A major problem on Twitter is that people are often joking — as anyone who has been suspended for tweeting “I’m going to kill you” at a close friend can tell you. That’s one reason the alt-right has embraced irony as a core strategy: it lets them maintain plausible deniability for as long as possible, while thwarting efforts to ban them.

The idea that Twitter could create a hash of all white supremacist content and remove it with a click is appealing, but it’s likely to remain a fantasy for the foreseeable future. One of the lessons of the Christchurch shooting is how hard it is for platforms to remove a piece of content even when it has already been identified and hashed. Facebook began to make headway against the thousands of people uploading minor variants of the shooting video only when it began matching audio snippets from the video; even then, parts of the shooting could still be found on Facebook and Instagram a month later, as Motherboard reported.

Of course, there are obvious things Twitter could do to live up to its hateful conduct policy. It could follow Facebook’s lead and ban prominent users like David Duke, the former Ku Klux Klan leader, and Faith Goldy, who the New York Times describes as “a proponent of the white genocide conspiracy theory, which contends that liberal elites are plotting to replace white Christians with Jews and nonwhites.” (Goldy has a YouTube account as well.)

It could also move to limit freedom of reach for people who violate its hateful conduct policy: hide their accounts from search results and follow suggestions, remove their tweets from trending pages, or even make their tweets visible only to their followers.
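Mechanically, reach-limiting amounts to a visibility check at the amplification layer. A rough, hypothetical sketch (none of these names reflect Twitter’s actual systems):

```python
from dataclasses import dataclass

# Hypothetical sketch of "freedom of reach" limits: the account keeps its
# tweets, but surfaces that amplify content check a policy flag first.
# All names here are illustrative, not Twitter's real internals.


@dataclass
class Account:
    handle: str
    violated_hateful_conduct: bool = False


def visible_in(surface: str, account: Account) -> bool:
    """Decide whether an account's content may appear on an amplifying surface."""
    amplifiers = {"search", "follow_suggestions", "trending"}
    if account.violated_hateful_conduct and surface in amplifiers:
        return False
    return True  # followers' own timelines remain unaffected
```

The appeal of this approach is that it targets amplification rather than speech: the tweets stay up for followers who opted in, but the platform stops recommending them to everyone else.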

And it could remove tweets that incite violence against elected officials — yes, even when those tweets come from a sitting president. The bar ought to remain high — I’m sympathetic to the idea that some borderline posts should be preserved for their newsworthiness. But Rep. Ilhan Omar deserves more than a consolation phone call from Jack Dorsey for Twitter’s decision to leave the president’s tweet up. There should be no shame in removing tweets on a case-by-case basis when they inspire very real death threats against an elected official. Otherwise, what’s a hateful conduct policy even for?

What Twitter should not do, though, is treat white nationalism the same as terrorism. The latter can be removed reasonably well with algorithms — although even then, as Julian Sanchez noted, they likely trip up innocent bystanders who lack the leverage to pressure Twitter into fixing them. The former, though, is a world-scale problem, made ambiguous from tweet to tweet by the infinite complexity of language. And we’re not going to solve it with math.

Democracy

Facebook broke Canadian privacy law, according to regulator

A day after Facebook said it expected to pay at least $3 billion to US regulators over the Cambridge Analytica data privacy scandal, Canadian regulators said that the company had acted illegally and was refusing to submit to privacy audits:

For these reasons outlined in the report, the regulators intend to take their findings to Canadian federal court to seek an order that would require Facebook to change its privacy practices to account for their findings.

“The stark contradiction between Facebook’s public promises to mend its ways on privacy and its refusal to address the serious problems we’ve identified – or even acknowledge that it broke the law – is extremely concerning,” Therrien said.

New York’s attorney general is investigating Facebook after contact-scraping scandal

If Cambridge Analytica is a $3 billion fine, accidentally collecting email contacts from 1.5 million people is going to cost Facebook, what … $500?

“It is time Facebook is held accountable for how it handles consumers’ personal information,” Attorney General Letitia James said in a statement. “Facebook has repeatedly demonstrated a lack of respect for consumers’ information while at the same time profiting from mining that data.”

Facebook bans personality quizzes after Cambridge Analytica scandal

Or, if not outright bans, then at least heavily discourages personality quizzes. Hopefully you already know your personality and won’t be affected too badly here.

Fake news and public executions: Documents show a Russian company’s plan for quelling protests in Sudan

Tim Lister, Sebastian Shukla and Nima Elbagir document how a Russian firm came to the aid of Sudan’s since-deposed president with a misinformation campaign built for social networks. Its aim was to protect the Kremlin’s influence, they report:

One document from early January, reviewed by CNN, proposes spreading claims that protesters were attacking mosques and hospitals. It also suggested creating an image of demonstrators as “enemies of Islam and traditional values” by planting LGBT flags among them. And it proposed a social media campaign claiming that “Israel supports the protesters.”

The strategy also suggested the government “simulate a dialogue with the opposition and demonstrate the openness of the government” in order to “isolate leaders of the protest and gain time.”

How one country blocks the world on data privacy

A year in, Nicholas Vinocur asks why Ireland hasn’t brought any major cases against Google or Facebook under the General Data Protection Regulation:

Despite its vows to beef up its threadbare regulatory apparatus, Ireland has a long history of catering to the very companies it is supposed to oversee, having wooed top Silicon Valley firms to the Emerald Isle with promises of low taxes, open access to top officials, and help securing funds to build glittering new headquarters.

Now, data-privacy experts and regulators in other countries alike are questioning Ireland’s commitment to policing imminent privacy concerns like Facebook’s reintroduction of facial recognition software and data sharing with its recently purchased subsidiary WhatsApp, and Google’s sharing of information across its burgeoning number of platforms.

Protecting the EU Elections From Misinformation and Expanding Our Fact-Checking Program to New Languages

Facebook is bringing fact-checking to more countries and more languages:

Today we’re announcing the expansion of this program in the EU with five new local fact-checking partners: Ellinika Hoaxes in Greece, FactCheckNI in Northern Ireland, Faktograf in Croatia, Observador in Portugal and Patikrinta 15min in Lithuania. These organizations will review and rate the accuracy of content on Facebook.

Elsewhere

How Amazon automatically tracks and fires warehouse workers for ‘productivity’

Colin Lecher reports that at a single Amazon fulfillment center in 2017 and 2018, hundreds of employees were fired when automated software determined they were not productive enough:

Critics see the system as a machine that only sees numbers, not people. “One of the things that we hear consistently from workers is that they are treated like robots in effect because they’re monitored and supervised by these automated systems,” Mitchell says. “They’re monitored and supervised by robots.”

The system goes so far as to track “time off task,” which the company abbreviates as TOT. If workers break from scanning packages for too long, the system automatically generates warnings and, eventually, the employee can be fired. Some facility workers have said they avoid bathroom breaks to keep their time in line with expectations.
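Based on the report’s description, the escalation logic amounts to a simple threshold loop. Here is a hypothetical reconstruction in Python, with thresholds and names that are mine rather than Amazon’s (the company’s actual quotas and criteria aren’t public):

```python
from dataclasses import dataclass

# Hypothetical reconstruction of the escalation logic the report describes.
# Thresholds and names are illustrative assumptions, not Amazon's values.
TOT_WARNING_MINUTES = 30          # assumed per-shift "time off task" limit
WARNINGS_BEFORE_TERMINATION = 3   # assumed number of automated warnings


@dataclass
class WorkerRecord:
    worker_id: str
    warnings: int = 0


def record_shift(worker: WorkerRecord, tot_minutes: float) -> str:
    """Apply one shift's TOT reading and return the system's automated action."""
    if tot_minutes <= TOT_WARNING_MINUTES:
        return "ok"
    worker.warnings += 1
    if worker.warnings >= WARNINGS_BEFORE_TERMINATION:
        return "flag_for_termination"
    return "automated_warning"
```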

Google is changing how employees can report harassment and discrimination

Another noteworthy piece from Colin Lecher: Googlers now have a dedicated website on which they can report harassment and discrimination.

‘It’s not play if you’re making money’: how Instagram and YouTube disrupted child labor laws

Julia Carrie Wong has a great piece about how parents who feature their children on YouTube get around laws meant to prevent child labor:

Pierce argued that YouTube should institute policies requiring that children featured in monetized videos are entitled to a share of the revenues from the account owner. “If they had that kind of policy, then the money that was earmarked for the child, under California law, would be the child’s money, and if the parent misused it, then the child could sue,” he said.

Pierce also theorized that influencer deals made by parents on behalf of their children could be invalid unless the earnings are owned entirely by the child, because a parent consenting to the use of their child’s image in advertising in order to enrich himself would be “self-dealing and in breach of the covenant of good faith and dealing”.

Snapchat hires CMO to freshen up app marketing like he did McDonald’s burgers

Snap has a new chief marketing officer from McDonald’s.

Snapchat is quietly cracking down on unsanctioned sponsored content

Snap is starting to delete sponsored content that does not advertise itself as such, Kerry Flynn reports:

For one creator, who works on a team of six that manages six Snapchat accounts — that together reach more than 2 million people daily — the move finally provided insight into why their and peers’ sponsored posts on Snapchat have been deleted over the last year. For the last few years, the creator has built and managed accounts on Snapchat and makes money through orchestrating brand deals and placing ads for them within those accounts.

China’s Gen Z Skips the Stores and Shops on Social Media

Daniela Wei and Shelly Banjo report that the smartphone really is the new mall for young people in China:

China’s Gen Z isn’t impressed by glitzy brand names and traditional advertising campaigns. Many are looking beyond the physical stores and e-commerce portals their predecessors preferred. They’re buying goods suggested by social media influencers known as wanghong. And they’re using messaging, short videos, livestreaming, and social media apps as gateways to making those purchases.

This new era in Chinese shopping offers a glimpse into the likely future of retail around the world. More than $413 billion of goods will be sold through social e-commerce in China by 2022, an almost fivefold increase from $90 billion in 2017, according to researcher Frost & Sullivan.

Launches

Snapchat is bringing its Bitmoji avatars into video games

I agree with Ba Blackstock, who may have the best tech name since Brogan BamBrogan:

“It’s kind of a no-brainer to bring Bitmoji into games. Games can be so much more engaging with you…in the game,” Bitmoji co-founder Ba Blackstock tells me. “We’re adding an identity layer to gaming that has the potential to have a transformational effect on the industry.”

Strengthening our approach to deliberate attempts to mislead voters

Twitter created a dedicated reporting feature within the app for tweets that contain misinformation about election dates or how to vote.

Takes

What should the press learn from its use of Russian hacked content?

Kathleen Hall Jamieson says the press should act now to prevent itself from being exploited by the next generation of Wikileaks-style targeted document dumps:

To ensure that past is not prologue, the nation’s news outlets would do well to promulgate policies regarding use of hacked materials that confirm that they will examine stolen, leaked material with care, tell their audiences whether it has been independently verified, and disclose relevant information about its origins. Doing so would not only prevent decision-making on the fly but also would warn aspiring hackers that future theft-and-release will not be rewarded in 2020 and beyond in the ways in which it was in 2016.

Sri Lanka’s Decision to Censor Social Platforms Is Indefensible

Trevor Timm calls on Sri Lanka to end its ban on social networks:

Just one month ago, Stanford’s Jan Rydzak released a working paper looking at India’s attempts in 2016 to shut down portions of the internet, which were carried out with the stated intention of stopping violence. “Bottom line,” Rydzak said about his study’s conclusions, “shutdowns are followed by a clear increase in violent protest and have very ambiguous effects on peaceful demonstrations.”

And finally …

Did Trump Show Printed-Out Tweets to Jack Dorsey? We Asked Digital Forensics Experts

When President Trump met with Twitter CEO Jack Dorsey this week, he brought what Sarah Emerson calls “a conspicuous stack of papers.” And those papers likely contained … printed-out tweets, she reports:

When Motherboard asked Twitter about the printouts, a spokesperson for the company ignored the question.

“Jack had a constructive meeting with the President of the United States [yesterday] at the president’s invitation,” the spokesperson said. “They discussed Twitter’s commitment to protecting the health of the public conversation ahead of the 2020 US elections and efforts underway to respond to the opioid crisis.”

They were definitely printed-out tweets.

Talk to me

Send me tips, comments, questions, and printed-out tweets: [email protected].

