With the news that Meta is ending its third-party fact-checking program, we dig into the future of content moderation. From Community Notes to automated systems, how do you manage trust and safety for a site with two billion daily active users?
You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Zoë Schiffer on Threads @reporterzoe. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Michael Calore: How is everybody feeling about the truth these days?
Zoë Schiffer: Oh, Mike, what is the truth? What is truth? Truth is, it’s up for debate. Yeah.
Michael Calore: It is. Literally.
Zoë Schiffer: We’re in a post-truth era.
Michael Calore: Ain’t that the truth?
Lauren Goode: Yeah. I feel like we’ve been saying that for years, but now it really feels true.
Michael Calore: Well, the truth and how to moderate it online, and specifically how Mark Zuckerberg is thinking about it is what we are here to examine. So I hope the two of you have some time on your hands.
Zoë Schiffer: Let’s do it.
Michael Calore: This is WIRED’s Uncanny Valley, a show about the people, power, and influence of Silicon Valley. I’m Michael Calore, director of Consumer Tech and Culture here at WIRED.
Zoë Schiffer: I’m Zoë Schiffer, WIRED’s director of Business and Industry.
Lauren Goode: I’m Lauren Goode. I’m a senior writer at WIRED.
Michael Calore: Today we’re talking about content moderation because the big news over the past couple of weeks is that Meta is ending its third-party fact-checking program and replacing it with a Community Notes model. Now to be clear, the company is still keeping its automated systems for flagging problematic content, but Mark Zuckerberg has also said that Meta would be refocusing or tweaking those systems to flag what he calls high-severity violations. So we’ll get into all of these changes and talk about their potential impacts.
Lauren Goode: Zoë, how would you describe Community Notes for the uninitiated?
Zoë Schiffer: It’s basically crowdsourced fact-checking. The idea is that you get users to apply to these programs. If they get in, they become part of the Community Notes forum, which means that when a post is flagged, they debate with other people in the forum over whether it should have a note attached to it with a bit more context. The issue is that on X, what we’ve seen from inside these forums is that a lot of the people involved are pretty politically motivated. But in theory, it’s a way to kind of allow the community to moderate content on the platform themselves.
Lauren Goode: Right. The idea of Community Notes may have worked really well about 20 to 30 years ago in the early days of the internet, the World Wide Web. But when you’re looking at the amount of content that is posted online now, it’s not exactly the same. I mean, I think a lot of these Meta changes are just going to encourage more content to surface, just more of everything, which is very good for engagement for Meta, for its bottom line, and possibly or probably bad for humanity.
Zoë Schiffer: It feels like it’s bigger than just Meta. Like you mentioned, it’s kind of following in the footsteps of X in terms of turning to a Community Notes-style fact-checking program. So to me, it gets to this bigger question of: are we in a new era of content moderation, and what does that mean for the world?
Michael Calore: So let’s back up for a moment and clarify what we’re talking about when we talk about content moderation.
Lauren Goode: When I look at content moderation, I see it as part of what is usually a trust and safety team at a tech company or a social media company. Trust and safety has a larger mandate than just content moderation, but content moderation tends to fall into secondary and tertiary levels of trust and safety work. The idea with content moderation is that you’re looking at types of posts that could cause real harm, and that’s either through misinformation or hate speech. It’s not personalized. It’s not someone on Facebook saying they have a cat named Bo when their cat’s name is Boo. Or someone saying, “I won an award,” and they didn’t really win an award, and they’re inflating themselves. This is misinformation about generally accepted truths that affects people on a societal level.
Zoë Schiffer: I think that that’s a good definition. I think of it really like the people and policies that decide which posts stay up and which ones come down. Often the ones that are supposed to come down are ones that violate a platform’s rules. So we talk about the First Amendment and free speech a lot in relation to content moderation, but in reality, none of these companies have to abide by the First Amendment. It’s not a government entity. So it’s really about what policies have we set up as a company, have we said that we’re okay with versus not okay with? Then what systems do we have in place to regulate which posts stay up and come down?
Michael Calore: In the past, the ancient past, it fell on human beings, like moderators in small communities. I’m talking like pre-Facebook internet. You had Listservs, you had forums, you had places where people would hang out and it would be dozens or hundreds of people. You had human moderators sort of making decisions on a case-by-case basis of what sorts of things would be allowed, what sorts of things step over the line of the community guidelines that the community decided on for itself. Of course, there are a lot of sites and platforms and communities that are still employing this type of content moderation. Reddit famously has mods. There are even subreddits for complaining about mods, things like that. But if you belong to a community and there’s 2,000 or 3,000 people in those communities, you show up, there’s a list of rules, things that are allowed, things that are not allowed, and they’re what you would expect. We do not allow name-calling. We don’t allow threats of violence. We don’t allow things that are not within the scope of this community to be posted here. It’s a moderator’s job, often with the help of bots, to go through all of the posts that are made and decide which ones don’t belong and get rid of them.
Zoë Schiffer: This also happens on Wikipedia. I feel like Wikipedia obviously has volunteer editors. There’s a hierarchy. You have to be one of the chosen. But I will say, I know Reddit and Wikipedia have had a lot of issues with content moderation, but these processes seem to work pretty well on those platforms. I would say part of the reason for that is that we have the media literacy to treat what we read on Wikipedia with a certain level of suspicion. So it’s surprisingly accurate a lot of the time. I think a lot of us are used to saying, “Okay, I’ll use Wikipedia as a first order of research, but I’ll obviously do a deeper dive and read the source material when needed.”
Lauren Goode: There are an incredible number of citations on Wikipedia too. So it’s easy then to go to one of the source materials, and then from there use that same level of media literacy to determine if that seems like a valid source. But then there are people contributing, who are looking for those valid sources to cite. I think that’s one of the differences too, not that we’re talking about Google Gen AI in this episode so much, but the difference is people just look at those summarized search results now and say, “That must be true,” even if there is a little citation somewhere. Whereas in Wikipedia, you go and then hopefully you go further from there.
Michael Calore: Yeah. But then in the Facebook era and then in the Twitter era, these platforms, these social platforms evolved that were all owned by one company, and then the company could set a policy that would apply to all of their communities. I think that’s when things really started to change.
Lauren Goode: Well, and it started to change too, just when the number of users on Facebook swelled to over a billion people. I can’t say exactly what date Facebook first deployed an algorithm to try to detect harmful content. But we do know it was around late 2016 when Facebook first said it was partnering with places like the Poynter Institute’s International Fact Checking Network. So at that point, it was using both human moderators and automated systems to try to limit what everyone at the time was concerned about, which was basically fake news.
Zoë Schiffer: Right. I think just to take it back a little bit further even, it’s like a lot of these tech platforms start out with similar values around free speech. They’re like, “We started these companies to maximize speech or to create a global town square,” whatever, but then they all kind of agree like, “Oh wait, we can’t have child sexual exploitation material. We can’t have illegal content. You can’t sell drugs on the platform.” So slowly, slowly they start to limit the amount of speech that can actually happen on the platform.
Michael Calore: So in the most recent modern era, since 2016, what has Meta been doing?
Lauren Goode: Well, 2016 feels like a turning point in a sense because this was when they started partnering with a handful of traditional news organizations and fact-checking sites. The thing is that a lot of the impetus for this was misinformation around the 2016 presidential election, like the fake news that the Pope had endorsed Donald Trump or conspiracy theories around Hillary Clinton. So Meta was, or Facebook at the time, they were really sort of targeting that by leaning on these third-party organizations. Then I think they still had the automated systems in place to try to tackle some of the other hateful or harmful speech that was happening on the platform. What’s interesting about this is that it feels like at the time Facebook did in a sense believe that it had a moral imperative to do this, to try to crack down on that. It was acknowledging that having all of this misinformation on a platform as big as Facebook could have a negative impact on society.
Michael Calore: Right.
Zoë Schiffer: Yeah. I feel like this was the era of Mark Zuckerberg really taking ownership and being like, “We know we’ve messed up. Here’s all the investments that we’re making to make this better.” But it’s also true that each one of these systems had advantages and disadvantages.
Lauren Goode: Yes.
Zoë Schiffer: Meta invested enormously in teams of contract human moderators whose entire job it was to look at the worst that the internet had to offer, at least the worst that Meta’s platforms had to offer, and decide which posts should stay up, which should come down, and occasionally which ones should be escalated for further review. The issue is that these people are not necessarily working at the corporate offices. They’re allegedly often being paid pretty low wages from contracting firms and they’re looking at really traumatizing content all day. Really, really violent images, child sexual abuse material, that sort of thing. Honestly, there’ve been a lot of allegations about how traumatized they’re getting and the fact that the companies they work for allegedly don’t have the mental health resources to really support them. So it’s creating a lot of issues for these people.
Michael Calore: So we’re facing a shift on all of Meta’s platforms to a Community Notes style of fact checking. What does that look like on X? How is it going to look on Instagram? How’s it going to look on Meta, Facebook?
Zoë Schiffer: Yeah, so when someone posts on these platforms, rather than having these fact-checking or news organizations deciding, is this post correct? We’re going to have real people in the community, so users, and it’s supposed to be a large enough user base that you have people who come at this from all different sides of the political spectrum who are having that debate and then posting a note attached to it. On X, at least in terms of big breaking news events, it hasn’t worked that well. But I actually feel like there is potential for this system to work well. It just hasn’t been implemented correctly at X for unsurprising reasons. So when Mark Zuckerberg made this announcement, I felt like all of my group chats were popping off and 100% of the voices were aghast and really anti this decision. I kind of had a moment where I thought, hold on, at least for this part of the change. To be clear, I think a lot of the changes are extremely problematic and will create a lot of real-world harm. I’m genuinely worried about that. But in terms of scrapping the third-party fact-checking system and going to Community Notes, I am willing to entertain the possibility that this could be good if implemented correctly.
Lauren Goode: I think you bring up a good point, Zoë, about how for unsurprising reasons Community Notes might not be working on X. I think what you’re referring to is the fact that Musk is using X as his personal megaphone now, and he has a lot of stans on that platform. I read that at least 50 of Musk’s own posts last year promoted information about the US elections that fact-checkers said were false or misleading, and they amassed something like a billion views and none of these had Community Notes attached. This is according to the Center for Countering Digital Hate, which has been studying this.
Zoë Schiffer: It’s easily gamified on X right now. I’m actually not sure if that’s just because they don’t have enough people in the Community Notes forum or if it’s that… I’ve kind of seen this firsthand and in talking to sources, who allege that Community Notes can be pretty politically motivated. It seems like at least on X, there’s a large contingent of the fact-checkers who are very, very pro-Musk and are looking for stories that paint him in what they see as a biased light. They’re attaching spurious Community Notes on those stories, and then they’re not adequately fact-checking Musk’s own posts. So there’s a lot of problems. But again, to me, this isn’t so much a failure of Community Notes overall as much as it is a failure of X’s version of Community Notes.
Michael Calore: Yeah. Also, I’ll say that I don’t really have a lot of faith that this is going to stop the spread of misinformation or stop the spread of hateful content, simply because we’ve all been there where we see a post that we understand is fake, or that we understand could be considered by most people to be hateful, and it may have some sort of note under it, or maybe people are sounding off in the comments about the fact that it’s fake or that it’s hateful and maybe it’s been flagged. But it’s still there and it’s still being shared. I think that people who are going to share those types of things are going to be inclined to share them even if it has some sort of mechanism attached to it that says, “Our community has flagged this as possibly not good.”
Zoë Schiffer: I think the purpose of this change is not to stop the spread of harmful content. To me, the purpose of this change is definitely to ingratiate Meta with the Trump administration, and it honestly seems like that’s gone over pretty well.
Michael Calore: That’s a big claim, and it’s a big moment, and I agree with you. I think when Mark Zuckerberg talks about this, he says that stopping people from talking about things like transgender issues puts Meta in a position that is out of step with modern discourse. So he says that people talk about these things and we have to let them talk about them on our platforms in a way that is not sort of overseen or regulated just because that’s the way that people talk now.
Lauren Goode: Right, but to Zoë’s point, he also basically admits that he was very put off by what he saw as an overreach by the Biden administration to try to, quote, unquote, censor Facebook during the COVID pandemic. So I think that Zuckerberg is giving all kinds of reasons for why he’s making this decision now, whether it’s “Oh, we’ve scaled, we’re at such a scale that our systems can’t possibly address all of this,” or “this is just the way people talk these days, and we want to give people the ability to say what they want,” or “anyone who quits Facebook at this point is virtue signaling, have at it. We still have plenty of users,” when really it is a political reaction and maneuver. I was talking to someone in tech and asked them, what do you think about the changes that Meta is making to its content moderation policies? They said, “Zuckerberg is kind of like a weather vane. He just moves in the right direction depending on the wind. But the problem is he’s also so powerful now that he generates a lot of wind, so when he tips the weather vane, he’s now moving the wind in a new direction.”
Zoë Schiffer: I found this whole topic so interesting because I was coming at it from a place of maybe this change could be positive. I want to believe that Community Notes could work. At the same time, Mark Zuckerberg’s actions and the way he talked about them made me feel like he’s very opportunistic. He’s company first, and he’s doing what he needs to do to get in good with this incoming administration. It definitely did not seem to come from a principled stance, insofar as those principles are important to him personally.
Michael Calore: Yeah. So how is Zuckerberg’s decision resonating with other people in Silicon Valley? Because I know what the headlines say. The headlines say that Meta is just giving up on content moderation and that kind of anything goes, and we’re in Meta’s AI slop era now.
Lauren Goode: Right, right. Well, and Meta would probably, or Zuckerberg himself might even respond and say, “Well, those are just the elite established media who are reacting that way,” because he talked about that on the Joe Rogan podcast too. That person I spoke to in the Valley made that remark about Zuckerberg being a weather vane. I’ve also talked to people who worked in trust and safety directly who have said, “It’s just a really hard problem to solve.”
Zoë Schiffer: Yeah, I feel like there’s the media which seemed to think this was a very bad idea, and again, there were newsrooms that were involved in the fact checking process, so obviously we’re going to think it’s bad when these programs are killed. There’s the trust and safety world, which I saw saying, “We really disagree with this change,” but interestingly, a lot of them were not pointing to the switch from human moderators and algorithms over to Community Notes as much as they were the fact that Meta is not going to proactively look for lesser violations. That was the one that started to really worry that group of people. Then we had the tech elite, the kind of Silicon Valley billionaires, the venture capitalists, the people who are friends with Elon Musk. These people seem extremely excited. We all listened to Mark Zuckerberg’s interview with Joe Rogan. Rogan seemed thrilled about the change, as did the guys on the All In podcast. While they might’ve disagreed with why Zuck made the change, I think everyone seemed very, very excited that in their view he was coming around.
Lauren Goode: There are so many layers to this, and I think what we’ve talked about a lot so far are his motivations. That’s a little bit harder because all we can do is sort of look at his actions and his policy decisions over the past several years and say, “Okay, this lines up, or this is what he said, this podcast or in this interview, and this is all pretty consistent.” We don’t really know his motivations. We can’t get inside Mark Zuckerberg’s head. Then there’s the practical implementation of what he’s putting forward, and I think everyone can agree content moderation on a platform that has at least 2 billion daily active users around the world, people logging into it every day, is a very hard problem to solve. It is, right? Now, my harsher take on that is that I have 215 billion reasons why it is actually Mark Zuckerberg’s job to solve this. That is his net worth. He wanted the job, he started this company, he wanted voting control, and this is the job. He said on Joe Rogan, “Well, if I just did this all the time, I wouldn’t have time to focus on AI or smart glasses. Things that we understand are very important to the company’s bottom line.” Mark, I just want to say we don’t necessarily need the smart glasses. We need to fix content moderation. We need you to fix content moderation. That is the job.
Michael Calore: Zoë’s snapping her fingers in agreement. I agree with you too. I mean, the thing that makes it a fun place to hang out is the fact that it’s a happy place on the internet. You can go there, you can see things that make you smile, and the more that you see things that don’t make you smile, the harder it gets to keep going back there. Knowing that he’s made these decisions makes it harder for me to decide to go back there. I just know that it’s a different kind of beast now. What we’re going to be feeding it and the types of conversations that are going to be happening there are going to be very different, and I think it’s going to feel very different going into the year. We’re going to take a quick break. When we come back, we’re going to get into Lauren’s 215 billion reasons why content moderation is important, and we’re also going to talk about what might happen on social media at large in the wake of these changes. Stay with us. You’re listening to Uncanny Valley. So we’ve been talking about the recent changes around content moderation and speech moderation at Meta. One of the things that we cannot ignore is that the same week that Mark Zuckerberg announced all these changes coming to Meta’s content moderation policies, he started talking about it publicly. Here he is on the Joe Rogan podcast.
Mark Zuckerberg [Archival audio]: It goes back to our original mission is just give people the power to share and make the world more open and connected.
Michael Calore: I do think it’s important to talk about Zuck’s appearance on the show because he does talk at length about his motivations. We can’t step inside of his head and we can’t say what’s really going on there, but we have a pretty clear idea about why these changes are coming to Meta. So where should we start?
Zoë Schiffer: I mean, he really framed this as a return to his roots. On the podcast he kind of speaks to that ethos and the idea that he never really wanted to be in a position of deciding what’s true and what’s not. He says that in his view, misinformation and hate speech in particular in recent years became very politicized. So it definitely feels like under the Biden administration, Meta was being asked to take down a lot of COVID misinformation, hate speech, other things. Zuck says he was growing increasingly uncomfortable with that, and so this is an opportunity to right the ship in his view.
Lauren Goode: Can we talk a little bit more about what exactly went down between him and the Biden administration? Because it wasn’t just the requests from the Biden administration for him to moderate COVID information. I mean, this goes back to when he was grilled in front of Congress. Right?
Zoë Schiffer: This went all the way up to the Supreme Court in a famous jawboning case recently. But yeah, I mean basically Zuck is saying, “Look, the Biden administration was coming to us. We had people from the administration literally yelling at my employees saying, you have to take this stuff down. So we were putting a lot of investment in place and trying to moderate content at a scale and in a way that we’d never done before.” It’s so interesting because moderating this type of content, moderating misinformation and hate speech, opened Meta up to a lot of criticism from both the right and the left. The right, because conservatives are moderated more on these grounds because they tend to speak in this way more, and the left, because we were still seeing all sorts of content stay up that a lot of people on the left thought should come down. So making this decision, saying, “We’re actually going to moderate a lot less speech than we have in recent memory,” endears Meta to the right and opens it up to criticism from the left. But it’s like in some ways a better position to be in than where they were at previously, where both sides thought they were horrible and messing up.
Lauren Goode: He also made some interesting remarks about the government’s role and how it treats tech companies.
Mark Zuckerberg [Archival audio]: The US government should be defending its companies, not be the tip of the spear attacking its companies.
Lauren Goode: Look, Zuckerberg is clearly a very highly intelligent individual and he’s trying to work through these things. But I thought this was interesting because he said that he thought the US government shouldn’t be adversarial towards its own tech companies because that paves the way for other countries to also penalize their companies.
Mark Zuckerberg [Archival audio]: It’s basically just open season around the rest of the world. The EU, I pulled these numbers, the EU has fined the tech companies more than $30 billion over the last, I think, it was like 10 or 20 years.
Lauren Goode: He pointed to Europe as an example, and all the fines that it has levied at American tech companies in recent years. He also pointed out that we already have plenty of adversaries like Russia and Iran to deal with. But it’s actually not the US government’s singular job to be a booster for American enterprise. It also has a responsibility to protect its citizens, and there are entire agencies that exist, and for good reason, to make sure companies in the private sector aren’t taking advantage of or harming consumers. I think he just, I don’t know, conveniently ignored that.
Zoë Schiffer: You do hear the Silicon Valley billionaire class talk about this a lot. They’re like, “Well, Trump just wants America to win.” That’s really appealing to them because they see their own companies and their venture capital firms as a core part of that winning strategy.
Michael Calore: What I think is interesting is the timing of all of this. We cannot ignore the politics when we look at the timing because if you consider content moderation coming into focus for Meta in 2016, all eyes were on the company during the 2020 election. Again, all eyes were on the company during the 2024 election. So Meta kept its policies in place for fact checking up until the election, the election happened, Trump won the presidency again, and then two months later they’re shutting down the shop. I kind of feel like, “Okay, let’s do the bare minimum until we can get past this point where we don’t really have to worry about it anymore, and then we can stop worrying about it.” That really feels like what happened. Whether or not that’s appeasing Republicans or whether or not that is trying to open up conversation to the winners in this scenario so that they feel more comfortable on our platform because they’re in power now. I really feel like it’s more like, “Okay, we’ve done the thing that we promised we would do, and now we just don’t really have to worry about it anymore.”
Zoë Schiffer: Yeah, I think there’s another factor at play here, which is that previously, I think not every company thought of it this way, but there was this line that content moderation is a product, and it’s a product that in some ways you sell to advertisers. When you have a platform that is ad supported, you need to make sure that Nike isn’t paying millions of dollars and seeing their ads show up next to a violent video, or hate speech, or what have you. With X in particular, when Elon Musk rolled back a lot of the fact-checking programs, the trust and safety policies, we saw advertisers take a real stand and flee. Under a Trump administration, do we think these massive multinational conglomerates, these big corporations, are going to take a principled stand like that? Or are they going to say, “Okay, whatever, it’s an advertising platform, we’ll keep spending money”? I would be surprised if we saw a big advertiser exodus. Maybe they will make those decisions and they’ll just make it quietly. But I almost think that the business calculus for content moderation has changed under a Trump administration.
Lauren Goode: Zoë, you’ve mentioned something before that I find really interesting, which is this idea that if some organization or company were to get content moderation right, they could sell it as a product. What does that look like?
Zoë Schiffer: I mean, I think if you are a place where you’re like, “We have the best algorithms in place, we have the best trust and safety policies, you can be 100% certain that if you spend thousands, hundreds of thousands, millions of dollars on our platform to advertise with us, 100% of the time, your ad will not show up next to anything bad. It will show up next to the tweet from LeBron James, which is all you’ve ever wanted.” I think that that would be a compelling use case for advertisers because advertisers don’t want to spend all of that money and see their ad… It’s embarrassing for them if it runs next to something really vile or really awful. At the same time, if every platform is doing the opposite of what we’re saying, if they’re moderating less content, and advertisers are kind of like, “Okay, this is just what it means to advertise on social media.” I don’t know if they’re going to be pulling ad dollars like they did with Elon or with Facebook in 2020.
Michael Calore: I think we’ll have to see how bad it gets because soon we’ll probably see more AI generated images of things that are fake, fake images of famous people, fake images of political figures. We’re probably going to see more hateful content, things that are hateful towards women, towards transgender people, towards immigrants. We’ll probably see more fake news, things that are intended to rile us up if we feel a certain way about the world, things that are very shareable. There’s probably also going to be a lot of AI profiles, like AI people sharing things, non-player characters in our feeds. However many there are now, which is probably a lot more than we realize, there’s going to be more. So as these things increase, it’s going to start to feel a lot different to hang out at those places for users and for advertisers. I don’t know if advertisers are going to stick around if that becomes like what is shared dominantly on the platforms.
Zoë Schiffer: Well, Mike, that immediately makes me think that advertisers can have whatever stance they want, but ultimately they are paying to have their ad in front of the maximum number of eyeballs. So if users find it bad to hang out on these platforms for all of the reasons you just stated and they start to flee, then advertisers are going to vote with their dollars and they’re also going to exit. So this is going to be, I think, a really interesting phenomenon to watch.
Michael Calore: There’s a lot of people who think that that’s fun. It’s fun to show up and share all the images of Trump in the Superman costume.
Zoë Schiffer: Well, they can stick around.
Michael Calore: One thing that I’m curious about that’s very distressing to me to think about is what the future of the moderation industry is. I didn’t even really think about this at first until our colleague, David Gilbert, wrote a fantastic story for WIRED, which you can read, about the moderation industry and how so many of these companies, basically when they got the Meta contracts in the 2010s, they set up departments and they hired all these people to do the fact checking for Meta. Now that Meta is shutting down the program and not keeping all of those contracts anymore, a lot of these companies are going to be out of the social media content moderation business, which means that this change is going to be much more far-reaching. It’s going to have much bigger impact than just Meta. I’m wondering what that future looks like across all of social media.
Lauren Goode: I mean, to me, this goes back to the idea that we’re in a post-social media era. It’s not just about post-truth, it’s about these apps, and services, and feeds, and containers that came to define how we connected to people and how we shared things over the past, let’s say, 15 years, even though Facebook’s been around since 2004. It’s fundamentally shifting, and I don’t think we have yet landed on what the next social experience online is. I certainly don’t know what it is. I don’t know if it’s… I think a lot of us are using private group chats. I think people are leaning on newsletters instead of other types of distributed information. I think it’s time to acknowledge that social media as we knew it in the 2010s is dead.
Michael Calore: Yeah.
Zoë Schiffer: Yeah. I agree with Lauren on that.
Michael Calore: I think you’re right.
Lauren Goode: Content moderation is a huge part of that.
Zoë Schiffer: To the extent we ever had a global town square, which I kind of roll my eyes at that phrasing, but I certainly think that we do not have one now and we will not have one in the foreseeable future.
Lauren Goode: No, we don’t. We’ve all had these personal feeds, these personal experiences, and it really was all, we talked about this on an earlier podcast, but it really was all about the self and feeding the self. Then there are these collective experiences that are supposed to be happening on social media too. I think people are just continuing to post to their personal feeds as much as they want, but you cannot ignore the collapse of what the collective experience has been.
Michael Calore: To that collective experience, to that global town square, that was absolutely my favorite thing about the old Twitter was that it was people of all different political ideologies all in one place, and it was still fun most of the time. It was still fun. It kind of stopped being fun. Now it’s like the Internet’s getting more tribal. Now there’s a lib zone, there’s a red zone, there’s a libertarian zone. There’s all these different places where you can go and hang out with people who think like you, and I don’t know if that’s necessarily good for us. That doesn’t satisfy my definition of what humanity is, just hanging out with people who feel the same way that you do. So yeah, I’m spending a lot more time on Reddit, weirdly, and not because of this. I just find myself there more often now because it’s a place that feels like a community that doesn’t have any kind of bizarre ideological pull.
Zoë Schiffer: Meanwhile, I’m just in here with ChatGPT talking to my AI friend.
Michael Calore: How’s the parenting going?
Lauren Goode: How’s the sleep training?
Michael Calore: Is your child calling it mommy yet?
Zoë Schiffer: Oh my God, it’s so dark.
Michael Calore: Okay, well, let’s take another break, but stick around because we’ll be right back with more Uncanny Valley. All right, friends, the internet is a big place, and I know it takes us down many strange paths. So I would like to know, what is your last rabbit hole? What’s the last thing that you went deep on on the internet? What brought you there? Where did you go? How did it turn out? Zoë, let’s go to you first.
Zoë Schiffer: The thing that’s been on my mind, which is very closely related to what we were just talking about, is whether to stay on Threads. I honestly feel exhausted even thinking about this. It’s kind of a natural shift that has already happened where I am just spending a lot more time on Bluesky because it’s more fun and more of my peers are there. So it feels like a more vibrant conversation. But over the past two years, I’ve made the decision to leave Twitter. I’ve made the decision to leave Substack in a previous job along with my colleague. Now I’m looking at Threads, and I feel kind of this fatigue of like, am I just going to keep issuing purity tests to these companies that they then fail? Or is it like we do have a real moral imperative to look at where we are spending our time, how we are helping these companies, and whether it makes sense to continue investing our time, energy, and content on platforms that are hostile to our work? But I’m curious what you both think of this. Lauren, I just know you’ve been waiting for this.
Lauren Goode: Rubs hands together, continues the conversation about Meta. I’m not really on Threads anymore. The account still exists, but I haven’t used it in, I don’t know, a few weeks at least. I, too, am all in on Bluesky, and I respect that you’ve gone down this rabbit hole. It’s important to think about. There’s only so much that we can do individually, but that ends up being a collective action if a bunch of people leave something.
Michael Calore: I was never really on Threads, so that’s an easy one for me. But Instagram is a harder one. I’m conflicted about Instagram because I need it for my side hustle, which is music. If you want to book shows, if you want to connect with other like-minded musicians, Instagram is where that’s all happening, and I don’t see that changing. So I haven’t really decided what to do there yet.
Lauren Goode: Especially during the recent LA wildfires. I got a lot of solid information from Instagram about that. Reading the LA Times through Instagram, seeing people’s firsthand accounts, and videos that they were capturing. Let’s assume those were not altered or AI generated. I don’t believe that they were. Even locally, here in San Francisco, seeing what people were doing to collect donations, and food, and clothing items, and stuff like that, just local actions that people were taking that I ended up participating in, it was really useful for that.
Zoë Schiffer: Yeah, I feel like Lauren, you maybe made this calculation on one of our other podcasts where it’s like, to the extent that the platforms are serving us, sure, stay, use it. When we begin serving them more than they’re serving us, then maybe we make a decision about whether to stay or whether to go.
Lauren Goode: That’s exactly it. Do you miss Twitter at all?
Zoë Schiffer: I never had the experience that Mike describes often on here where it really… Mike, I don’t want to put words in your mouth, but it seems like you had a real sense of community on Twitter that was enjoyable, and nice, and a fun place to hang out. I really felt like it was a pretty stressful place for me. That said, I felt like I got a lot of opportunities from having a somewhat sizable Twitter following. So yeah, it’s kind of painful to walk away. I’ve not found another platform where I used it in kind of the manic dedicated way that I did with X and Twitter. It felt like that decision was a bigger decision to just step back from social media.
Michael Calore: It was the place where all my friends were for maybe three or four years, and then it stopped feeling like that, but I stuck around, and now I’m gone.
Zoë Schiffer: Yep. Lauren, what’s your rabbit hole?
Lauren Goode: My rabbit hole is no buy 2025.
Michael Calore: No buy.
Lauren Goode: From AI shopping assistants to no buying.
Michael Calore: Wait, is this the part of the podcast where you gas up Facebook, buy nothing groups?
Lauren Goode: No, but I will say Facebook Marketplace is sometimes the thing that draws me back into that app, if anything at all.
Michael Calore: Yeah.
Lauren Goode: No. So I’ll admit that it started from a place of fervent consumerism, which is that I want to buy a new couch, and I’ve been doing this thing where every time I go over to a friend’s house recently, I sit on their couch and I’m like, “What couch is this? Do you like it? How deep is this? Is this 41 inches deep?” I’m obsessed with couches. So I started looking for couches and I was getting so obsessed with it that I had to take a step back, and then it was like the TikTokers found me, the no buy 2025 girlies appeared in my peripheral vision. They were like, “Don’t buy.”
Zoë Schiffer: Is it like a finance thing? You’re not buying for a budgeting reason or it’s like a movement?
Lauren Goode: It’s both. It’s both for financial reasons, because the girlies are telling… I shouldn’t, sorry. “The girlies” is what we affectionately call the women on TikTok, and they call themselves that. But they tend to be tallying up, “Here’s how much I’ll save in 2025 if I don’t do this thing.” But it’s also part of an anti-consumerism movement, broadly, and it’s not new, but now it has a nice ring to it, right? No buy 2025. It’s part of a new year’s resolution, basically. No new shoes, unless I have completely worn through the other ones. Or I’m not going to buy any new health and beauty products unless I do something throughout the week that gives me a $50 allotment that I can spend on anything. Everyone can create their own rules, but it’s budgeting, but it’s fun budgeting.
Zoë Schiffer: I like it.
Lauren Goode: Yeah.
Michael Calore: That’s great.
Lauren Goode: I’m really enjoying that. Yeah. I’m not sure I’m going to do it, but I’m enjoying the hashtag content. What about you, Mike?
Michael Calore: My rabbit hole is pens. I’m serious.
Zoë Schiffer: You always have the best answers.
Michael Calore: Okay, so I’ve been a reporter pretty much my entire adult life, and one of the first things that I learned from my very first editor is that you always carry a pen with you. So I do. I always carry a pen with me. Lauren knows this. I’m pretty fanatical about my devotion towards Fisher Space Pens. If you know the Bullet Space Pen, it’s very small. It fits in your pocket without bulging or anything, and when you post it, when you pull the cap off and put it on the end, it’s a full-size pen. It can go through the wash. You can leave it in a car that’s super hot, you can freeze it. It’s not going to explode because it has this pressurized ink thing in it. So I traveled at the beginning of January. I went to Las Vegas for CES. God help me. I had my Space Pen in my pocket. I used it, but it kept falling apart. I think it’s so old that it’s just not really holding together anymore. I got tired of picking up all the little pieces off the floor. So lying in bed one night, I was like, “I need a new Space Pen.” I went to my favorite pen shop on the internet, which is JetPens, and I started researching other small pens, and I found five or six that I really like. I bought two of them that are Space Pens.
Lauren Goode: Wow.
Zoë Schiffer: Indulgent.
Michael Calore: I spent probably two hours reading buying guides and pen content on the internet.
Lauren Goode: That’s amazing.
Michael Calore: I got this cool one though. A Kaweco Sport.
Zoë Schiffer: Aww.
Michael Calore: A German pen.
Lauren Goode: Can you hold it in front of the camera?
Michael Calore: Yeah.
Zoë Schiffer: It’s kind of red, square. It’s got an interesting modern looking design.
Lauren Goode: It looks like the kind of pen that you would give to a kid if they were first learning how to write.
Michael Calore: Yes, which is me. I’m a large child. But it’s octagonal. It doesn’t roll, so you put it down and it doesn’t roll.
Zoë Schiffer: That’s cool. That’s nice.
Michael Calore: Right? I wonder if there’s pentok, if there’s a TikTok for pens.
Zoë Schiffer: There probably is. I feel like there’s truly a TikTok community for anything.
Michael Calore: Then I can get really into it, and then TikTok goes away, and then I’ll be on the outside again.
Zoë Schiffer: Mike’s a pen influencer.
Michael Calore: A Penfluencer.
Zoë Schiffer: A Penfluencer
Michael Calore: A Penfluencer. That’s our show for today. We will be back next week with an episode about Marc Andreessen, the venture capitalist, captain of the internet, and very influential figure in Silicon Valley. Thanks for listening to Uncanny Valley. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you’d like to get in touch with any of us for questions, comments, or show suggestions, write to us at uncannyvalley@wired.com. Today’s show is produced by Kyana Moghadam and Gianna Palmer. Amar Lal at Macrosound mixed this episode. Jordan Bell is our executive producer. Condé Nast’s head of global audio is Chris Bannon.