In March, in the aftermath of the Christchurch shooting, I tried to distinguish between internet problems and platform problems. Internet problems arise from the existence of a free and open network that connects most of the world; platform problems arise from features native to the platform. The fact that anti-vaccination zealots can meet online is an internet problem; the fact that Facebook recommended that new mothers join anti-vaccination groups is a platform problem.
The recent rise in white supremacist violence around the world has given us fresh reason to ask which aspects of the problem belong to the entire internet, and which belong to our biggest social networks. It seems apparent that the internet is cultivating loose but potent networks of extremists. But what are the mechanics of this radicalization? And what role could platforms play in discouraging it?
I thought about that question while reading Joe Bernstein’s unsettling piece about Soph, a 14-year-old YouTuber who has gained a measure of fame (and 800,000 subscribers) by preaching a slur-laden gospel of homophobia, Islamophobia, and racism.
Bernstein’s profile paints a picture of a child who gained notoriety as a foul-mouthed 9-year-old broadcasting herself playing video games. The more outrageous her behavior, the more YouTube’s algorithms rewarded her with attention, until she was making $1,700 a month from Patreon subscribers and feeling comfortable enough to make death threats against YouTube’s CEO:
Last month, after YouTube deactivated comments on her videos — the platform disabled comments on all videos with children in response to an outcry over the aforementioned network of exploitation — Soph uploaded a 12-minute video in which she seemed to be daring the platform to suspend her, knowing full well that it wouldn’t.
“Susan, I’ve known your address since last summer,” Soph said, directly addressing YouTube CEO Susan Wojcicki. “I’ve got a Luger and a mitochondrial disease. I don’t care if I live. Why should I care if you live or your children? I just called an Uber. You’ve got about seven minutes to draft up a will. … I’m coming for you, and it ain’t gonna be pretty.”
By mid-afternoon on Tuesday, the video had been removed, and Soph’s channel suspended. In part, channels like Soph’s seem inevitable — offer everyone in the world a microphone, and offer the biggest rewards to those who get our attention in the most novel ways, and some of your creators are going to break bad.
On the other hand, Bernstein’s profile carries with it a poignant sense that Soph is not well. She has an illness, she’s unhappy in school, and she feels alone. There’s nothing wrong with feeling that way, or with seeking comfort in an online audience of friends and strangers. And it’s fair to ask where Soph’s parents are in all this.
But YouTube allows children to start channels as young as 13 — and Soph was apparently active on her channel at the age of 9. As Bernstein writes: “Soph’s popularity raises another, perhaps more difficult question, about whether YouTube has an obligation to protect such users from themselves — and one another.” Put another way: a child who becomes a hero to bigots because of her performances on the platform, and who is recommended to other users by its algorithm, is a platform problem.
Bernstein’s profile of Soph recalls another recent story about a teenager’s embrace of the alt-right. In this month’s issue of the Washingtonian, an anonymous parent recalls the experience of their child’s gradual radicalization after being falsely accused of sexual harassment. Reddit and 4Chan were happy to tell 13-year-old Sam what he wanted to hear, the author writes:
Those online pals were happy to explain that all girls lie—especially about rape. And they had lots more knowledge to impart. They told Sam that Islam is an inherently violent religion and that Jews run global financial networks. (We’re Jewish and don’t know anyone who runs anything, but I guess the evidence was convincing.) They insisted that the wage gap is a fallacy, that feminazis are destroying families, that people need guns to protect themselves from government incursions onto private property. They declared that women who abort their babies should be jailed.
Sam prides himself on questioning conventional wisdom and subjecting claims to intellectual scrutiny. For kids today, that means Googling stuff. One might think these searches would turn up a variety of perspectives, including at least a few compelling counterarguments. One would be wrong. The Google searches flooded his developing brain with endless bias-confirming “proof” to back up whichever specious alt-right standard was being hoisted that week. Each set of results acted like fertilizer sprinkled on weeds: A forest of distortion flourished.
His parents attempt to reason with him, to no avail. Sam becomes a moderator of a right-wing Reddit forum, and his parents begin questioning their own reality:
One weekend morning as we were folding laundry in our room, Sam sat on the edge of our bed and instructed us on how to behave if the FBI ever appeared at our door.
What was posturing and what was real? We suspected the former and doubted the latter, but we had no way to be sure. The situation evolved faster than we could frame the questions, much less figure out the answers. When we did confront Sam—say, if we caught a glimpse of a vile meme on his phone—he assured us that it was meant to be funny and that we didn’t get it. It was either “post-ironic” or referenced multiple other events that created a maze-like series of in-jokes impossible for us to follow.
What finally snaps Sam out of it, in his parent’s telling, is visiting a pro-Trump rally in 2017. There he sees a lone counter-protester holding up a picture of Heather Heyer, the demonstrator murdered at a white supremacist rally in Charlottesville, and Sam marvels at the counter-protester’s bravery. He later tells his parents he feels as if he had been hostage to a cult.
Sam’s story is ultimately a hopeful one, because it shows a path away from right-wing radicalization. It takes time, skillful parenting, and a capacity for self-reflection on the part of young people like Sam. But young people can and very much do shed old identities as they grow up.
That said: not everyone has skillful parents or the capacity for self-reflection. Not everyone is 14 years old and in the middle of an experimental phase — the alleged Christchurch shooter was 28. Some of the extremists who are nurtured on these platforms never come back.
The project of depolarizing the globe, and pushing extremists back to the margins, will require far more than software fixes and policy updates. But as I think about YouTubers like Soph, I hope those platforms are doing the same kind of self-reflection that Sam did. Coming in after the bigot gets 800,000 followers, and removing her hate videos after they have gone viral, should only be the first step. The more urgent question is why Soph found such a rapt audience — and how YouTube helped her build it.
Russian hackers gained access to voter databases in two Florida counties ahead of the 2016 presidential election, Republican Gov. Ron DeSantis said at a news conference Tuesday.
DeSantis said the hackers didn’t manipulate any data and the election results weren’t compromised. He and officials from the Florida Department of Law Enforcement were briefed by the FBI and Department of Homeland Security on Friday.
I noted this story via a link in yesterday’s lead item, but it’s worth placing here — and asking: what’s the point of another 20-year consent decree, given the obvious limitations of the one Facebook is currently under? These decrees seem to create lots of work for lawyers — but few obvious benefits for consumers.
Business Insider conducted a poll on the Chris Hughes op-ed. Rob Price:
Following the publication of Hughes’ essay, INSIDER surveyed 1,072 people’s attitudes towards antitrust action against Facebook through SurveyMonkey Audience. Around 17% said they strongly supported antitrust action, and another 12% and 11% supported or somewhat supported it respectively.
Meanwhile, 28% of respondents neither supported nor opposed antitrust action, 5% somewhat opposed, 4% opposed, and 5% strongly opposed. 17% of respondents didn’t know. The survey’s margin of error is plus or minus 3.12 percentage points.
Eric Schmidt, who is still on Alphabet’s board, is talking up Project Dragonfly, Ryan Gallagher reports:
In an interview with the BBC on Monday, Schmidt said that he wasn’t involved in decisions to build the censored search platform, codenamed Dragonfly. But he insisted that there were “many benefits” to working with China and said he was an advocate of operating in the country because he believed it could “help change China to be more open.”
Ladies and gentlemen, Sen. Elizabeth Warren!
“I love town halls. I’ve done more than 70 since January, and I’m glad to have a television audience be a part of them. Fox News has invited me to do a town hall, but I’m turning them down—here’s why,” she wrote Tuesday morning in a series of tweets. “Fox News is a hate-for-profit racket that gives a megaphone to racists and conspiracists—it’s designed to turn us against each other, risking life & death consequences, to provide cover for the corruption that’s rotting our government and hollowing out our middle class.”
Sarah Perez has a story about how Twitter tripped over its own shoelaces once the General Data Protection Regulation went into effect:
Twitter is finally allowing a number of locked users to regain control of their accounts once again. Around a year after Europe’s new privacy laws (GDPR) rolled out, Twitter began booting users out of their accounts if it suspected the account’s owner was underage — that is, younger than 13. But the process also locked out many users who said they were now old enough to use Twitter’s service legally.
While Twitter’s rules had stated that users under 13 can’t create accounts or post tweets, many underage users did so anyway thanks to lax enforcement of the policy. The GDPR regulations, however, forced Twitter to address the issue.
Cyrus Farivar and David Ingram interview California’s attorney general ahead of the rollout of the state’s new privacy law on January 1st. He’s nervous:
Becerra, whose office will be responsible for enforcing the law when it goes into effect Jan. 1, 2020, said he might not have enough staff to carry out the job, and that as a result the law could collapse under its own weight.
“I don’t think you ever want to give people a reason to believe that you hoodwinked them,” Becerra said in an interview. “Think back to the launch of the Affordable Care Act’s website. That really depressed people’s belief that this was going to work.”
Relevant in light of yesterday’s news about raises for Facebook contractors: Angela Chen interviews anthropologist Mary L. Gray about her new book with Siddharth Suri: Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.
GRAY: I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.
A scary WhatsApp vulnerability, first reported by the Financial Times, has now been fixed:
A vulnerability discovered in Facebook’s WhatsApp messaging app is being exploited to inject commercial spyware onto Android and iOS phones by simply calling the target, reports The Financial Times. The spyware, developed by Israel’s secretive NSO group, can be installed without trace and without the target answering the call, according to security researchers and confirmed by WhatsApp.
Once installed, the spyware can turn on a phone’s camera and mic, scan emails and messages, and collect the user’s location data. WhatsApp is urging its 1.5 billion global users to update the app immediately to close the security hole.
Yesterday I posted a link here to this good Vox piece about the rise of fear-based social networks. Now one of them, Nextdoor, has raised another $123 million.
It seems like Facebook is close to releasing its long-in-development “Clear History” feature, Kurt Wagner reports:
Facebook first announced the privacy feature, called “Clear History,” more than a year ago – a response following the Cambridge Analytica data scandal to demonstrate the company’s commitment to protect users’ personal information. That product, which will allow Facebook users to separate their internet browsing history from their profiles, hasn’t been unveiled, and the company said it has been a more difficult technical challenge to build than expected.
In a blog post Tuesday, Facebook reiterated that Clear History will launch “in the coming months,” and that advertisers should prepare for the fact that de-coupling browsing data from profiles means the company’s targeting won’t be as strong.
Even on a platform that subsists largely on drama between its stars, the James Charles drama has absolutely everyone living for it, Julia Alexander reports:
A week ago, beauty vlogger James Charles was attending the Met Gala on YouTube’s behalf and jetting off to Australia to meet thousands of fans. This week, he’s become YouTube’s latest villain, losing subscribers at a record rate, all because of a 43-minute takedown video by one of YouTube’s original beauty gurus.
The drama is a perfect example of how power dynamics work within the world of YouTube. Since the lengthy video was posted on Friday, Charles has lost more than 2.7 million subscribers and been pushed away by prominent vloggers. Meanwhile, the takedown video’s creator, Tati Westbrook, has gained close to 3.5 million subscribers.
Few of us trust our smart speakers, but we’re buying them anyway, Rani Molla reports. (DM me about this and I’ll tell you a quick story about meeting Rachael Ray and what she had to say on this subject.)
A new smart device survey by Consumers International and the Internet Society highlights this seeming contradiction. Some 63 percent of people find connected devices to be “creepy,” and 75 percent don’t trust the way their data is shared by those devices, according to a survey of people in the United States, Canada, Japan, Australia, France, and the United Kingdom.
That hasn’t stopped them from buying these devices, which — through an array of cameras, microphones, and other sensors — have intimate access to our lives.
Roísín Lanigan examines the phenomenon of “Munchausen by internet,” which “is rattling tight-knit online support groups.”
Around the time Marchand stopped posting in the Facebook group, she was arrested in Colorado for faking terminal cancer on the crowdfunding platform GoFundMe and accepting donations through multiple accounts. It seemed she had faked her illness to the Facebook group, too. At trial, she pleaded guilty and was sentenced to community service. “The entire group was devastated, angry, and in a state of disbelief,” Angelacos says. “Everyone felt they had come to know her so well. There was a huge sense of betrayal.” (Marchand and her lawyer did not respond to requests for comment.)
This was not the first time many of the group’s members had felt this way. As harrowing as the experience can be for those involved, people in online cancer support groups are routinely outed as healthy. It’s difficult to speculate exactly how common this phenomenon is: There have been no large-scale scientific investigations into the internet’s cancer fakers, and the evidence is limited to only those who have actually been suspected or caught. But among the internet’s cancer communities, it’s an often acknowledged problem, albeit still a shocking one. Among 10 people from three groups I spoke with recently, every person recalled someone being outed for faking in their communities at least once, if not more.
One of the core promises of this newsletter is that I will strive to bring you all news of technology-related schemes perpetrated by teenagers. Here’s one from Julie Jargon, which is an absolutely killer byline by the way:
Jalyn began sneaking into her parents’ room to get her phone after they had gone to sleep. She usually fell asleep with it. When her parents would find it in her bedroom the next morning, they would take it away.
That’s when Jalyn turned to burner phones, a term popularized by TV shows like “Breaking Bad” and “The Wire,” which featured drug dealers using cheap phones they would later dispose of, or “burn,” to avoid detection. In today’s teen parlance, a burner phone can be a prepaid cellphone or any out-of-service phone they can still get to work on Wi-Fi.
Good note here in this long, discursive piece on TikTok regarding competition with Facebook. It picks up on a fact reported by Bloomberg: “Over the past three months … 13 percent of all the ads seen by users of Facebook’s Android app were for TikTok.”
We know this incursion is a heavily-funded onslaught that’s working perfectly. It reminds me of how brutally well executed Facebook’s launch of Instagram Stories was, except instead of leveraging a social graph for growth, it’s nonstop app install ads, and this time, Facebook is the one serving the ads.
I’m the first to call Facebook anti-competitive, but you have to give them credit for taking millions of dollars in ad revenue for install ads from a direct competitor.
Makena Kelly reports on a good move from Twitter.
If you search for tweets related to vaccines as of Friday, the first thing you see on Twitter is a post from the United States Department of Health and Human Services pointing you to reliable health information instead of anti-vax misinformation.
Last week, Twitter announced that it would be launching a new tool in search that would prompt users to head to vaccines.org, which is run by officials at HHS. Over the past few months, social media companies like Facebook and Twitter have faced intense pressure from lawmakers and the public to remove anti-vaccination propaganda from their platforms.
You can once again view your profile from the perspective of someone you aren’t friends with on Facebook, Chaim Gartenberg reports:
The news was announced in a tweet today after Facebook had disabled the option back in September 2018 due to a major security flaw that allowed an attacker to steal access tokens for over 50 million accounts by exploiting a related feature that would allow you to see what a Facebook profile would look like to a specific user. In order to address the issue, Facebook was forced to make over 90 million users log back into their accounts to ensure that they were secure.
Twitter has a new developer API, and it could help researchers study the platform, Josh Constine reports:
Today it’s launching Twitter Developer Labs, which app makers can sign up for to experiment with pre-release beta APIs. First up will be re-engineered versions of GET /Tweets and GET /Users APIs. The first functional changes will come next, including real-time streaming access to the Twitter firehose with the expansion of tweet filtering plus impressions and engagement metrics that were previously only available in its expensive enterprise API tiers. Twitter will also be adding newer features like Polls to the API.
Giving developers longer lead-times and more of a voice when it comes to rebuilding its APIs could help Twitter get more app makers paying for its premium API ($339 to $2,899 per month for just one specific API) and enterprise API tiers (even more expensive). It might also stimulate the creation of dev-made analytics, measurement and ads businesses that convince brands to spend more money on Twitter marketing. The Labs program and the first API endpoint changes will roll out in the coming weeks.
My take on this is that most people don’t need dating coaches — they need apps that let people express themselves in more authentic ways! But whatever, Match:
The online dating site is launching a new service, AskMatch, that will connect its paid users to a dating coach for a chat over the phone. The service is launching in New York City this month, with the goal of expanding nationwide by 2020.
“Match’s mission has always been around relationships and bringing people together. We want to go beyond just being an app on your phone,” said Match CEO Hesam Hosseini in an interview with Engadget. Match users will be able to find the option to “Talk to a coach” under the “Discover” area of the app. If selected, Match will connect you to one of its dating experts for a phone conversation. After the phone call, you can update your coach through the app with any progress you’ve made or ask further questions.
Will Oremus doesn’t like Chris Hughes’ nebulous ideas about a government agency devoted to internet regulation:
Hughes fails to make the case for why online speech should be subject to extra government scrutiny, let alone be made the province of a special government agency. If the plan is to establish new legal restrictions on speech, the courts seem unlikely to uphold them. If it’s just to set up optional “guidelines,” it’s hard to imagine how those would differ substantially from platforms’ existing policies.
Setting aside the question of whether federal regulation of online speech is constitutional, it’s worth thinking about whether it would even be desirable. There are assuredly flaws in how the major social networks, including Facebook, moderate speech. But the idea that a government agency would necessarily do better is naive.
Max Read loves his group chats:
To me, the reorientation of Facebook around private groups feels less like the company “building the kind of future we want” and more like its attempt to force itself back into a social life I’d rescued from its feed. Last year, the technology writer Navneet Alang wondered in a column in the Globe and Mail if it would be possible “to save social media from Facebook.” That is, could we extricate from the globe-spanning behemoth that is Facebook, Inc., the many uses and experiences that can make Facebook, the website and app, so enjoyable? The flowering of group chats points us in one direction. In almost all ways, I find the group chat an improvement over the machine-sorted feed. Freed from the pressure to stand out from thousands of other posts, conversations on group chats tend to be comfortably subdued — even appealingly boring — in a way that Facebook status updates or tweets never can be. Because most group chats exist on platforms or apps that don’t rely on advertising money or user engagement to support themselves, they’re only as addicting or exploitative as any social interaction might be.
And finally …
With all this talk of breaking up Facebook, are we missing the more obvious danger to modern life? Mike Shields makes the case:
Ben Silbermann is a lovely man. I met him once in Cannes and he couldn’t have been kinder. But I don’t think even he realizes the Victorian Sunroom monster he’s created. Nearly every day I find myself asking: am I keen to crock pot chilaquiles because I want to? Or because Ben wants me to?
I shouldn’t have to answer that question. And neither should you.
Makes you think.
Talk to me
Send me tips, comments, questions, and de-radicalization strategies: [email protected].
This article is from The Verge