Illustration by Melissa Mathieson

Every Day is April Fool’s Day Now

Deepfakes, Puffy Popes, AI-generated Trump arrest photos, articles written by GPT-4: We're living in a world where fear of being fooled is constant.

The true origin of April Fool’s Day is not only a mystery today—it’s considered unknowable.

Historians have their theories about how the centuries-old holiday started: Most likely, it was something about the moveable timing of Easter, or the trickiness of spring weather, or an inside joke among 16th-century people about the date of the new year. We still celebrate April Fool’s today without knowing why or what it means. It’s a deep-fried meme of a holiday.

And yet, every year, the day bleeds more into the week and month before it. Brands are mostly to blame for this creep, trying to get their marketing rocks off with as much time to spare as possible. For as long as I’ve been a journalist (a while now!), editors have been pep-talking newsrooms about vigilance against bogus pitches or planted news stories on this day. Trapping a reporter in your made-up brand activation seems to be some companies’ white whale, while others have made fooling people a full-time gig; the company MSCHF, for example, started as an art collective and most recently fooled a ton of people into thinking it was making a game where an anime lady asks for your social security number.

April Fool’s Day floods our feeds and inboxes with cringe gags, not CIA psyops. They work because they’re somewhat believable at first glance, but easy to identify as gags if you look for a second longer. Was Pornhub actually sharing your porn viewing habits with Facebook in 2017? Obviously not, but your porn is in fact watching you. Did Google actually announce an augmented-reality, location-based Pokemon game in 2014? No, that’s ridiculous, but two years later Pokemon Go briefly became the most popular game in the world and is still popular today.

It doesn’t take a genius to sort fact from fiction in these barely believable emails and doctored images, but processing the sheer volume of “got your nose”-level jokes on April Fool’s Day is exhausting and mostly a waste of time, which is exactly why editors say it’s a good day to avoid the feeds if possible.

But every day is April Fool’s Day now, requiring a low but constant effort to identify that no, the supreme pontiff is not dripped out in a papal parka. No, those are not real pictures of Donald Trump being arrested. And no, that’s not a real post of a trans activist calling to behead Christians.

It takes only seconds of critical thinking to see that each of these is fake, but as AI-generated shitposting becomes easier, it’s inevitable that one of these will catch you with your guard down, or appeal to some basic emotion you are too eager to believe. Tucker Carlson, for example, read that fake call to behead Christians on his show as if it were real.

Even if you’re trained in recognizing fake imagery and can immediately spot the difference between copy written by a language model and a human (content that’s increasingly sneaking into online articles), doing endless fact-checking and making countless micro-decisions about reality and fraud is mentally draining. Every year, our brains are tasked with processing five percent more information per day than the last. Add to this cognitive load a constant, background-level effort to decide whether that data is a lie.

The disinformation apocalypse is already here, but not in the form of the Russian “dezinformatsiya” we feared. Wading through what’s real and fake online has never been harder, not because each individual deepfake is impossible to distinguish from reality, but because the volume of low-effort gags is outpacing our ability to process them—and it’s about to get worse.

*

Earlier this month, Twitter announced that beginning April 1, the platform would “begin winding down our legacy verified program and removing legacy verified checkmarks.” Until now, legacy verification meant the platform bestowed verification status on people who were at risk of impersonation: celebrities or other “notable” people including professional athletes, politicians, and journalists.

Twitter’s verification system has always been a disaster. For years, requesting verification felt like trying to get an appointment with the Wizard of Oz; it was never good, but it was decided by Twitter and accurate enough (unless it was verifying Nazis). Last year, that changed, and verification turned into a money-making scheme for the platform’s new megalomaniac owner Elon Musk, who announced in October 2022 that users could pay for a verification checkmark—essentially stripping the feature of all meaning beyond “this person had $8 a month to spare.” Worse, the app might give paid users the option to hide their verification status entirely. Verification is already busted, with paid accounts impersonating real people. Once Twitter gets rid of legacy, unpaid verification, it’ll be a free-for-all.

Most of the replies to Twitter’s verification change announcement question the timing: Why would they launch this, a laughable policy change, on April Fool’s Day? 

Hyper-vigilance isn’t confined to one day anymore. Not being tricked requires a level of alertness on the individual user’s part that the platforms are not going to save us from anytime soon. Social media set us up for this: Understanding a layered meme, for example, requires you to be online all of the time, sometimes across several platforms, or you’ll miss the ur-meme, the inception of the joke, and everything after that will be culturally deep-fried. Anyone who’s ever taken more than two days away from social media knows this feeling. This is also the reason for the explainer blog genre, and why “What’s Going on With [Insert Meme]?” headlines are so reliably well-trafficked. We’re all just trying to keep up, and keeping up is essential to participating online.

That’s always been the case, to some degree. Since before the World Wide Web, being part of a network has required members to understand inter-forum drama, how to avoid (or provoke) flame wars, how not to piss off mods, and how to assimilate with one’s fellow posters. But now, instead of a 150-person bulletin board system or an esoteric Tumblr subculture, we have the problem of keeping up at scale.

This month, the two biggest examples of this come from an AI image generator called Midjourney: dramatic AI-generated images of former president Donald Trump’s arrest, and AI-generated images of the Pope wearing a huge white puffer coat. Both of these are believable without even showing you the images; Trump has said he thinks he’s getting arrested soon, and the Pope is a style icon.

The most effective pieces of misinformation—especially when it comes to images of public figures, whose appearances, behaviors, and lore are well-known to most people—exist along the finest possible line of plausibility. It’s totally plausible that the Pope has that much swag; he’s already an absurd figure of opulence, dressing like an anime villain most days of the week. He rolls around in a tiny see-through bulletproof car. The position of Holy Father has a long history of imparting a fashionisto’s burden. Why wouldn’t he have a huge white Moncler puffer coat for when it’s cold outside?

The Trump arrest photos were pretty unbelievable, however, especially considering some of them looked more like Renaissance paintings of the crucifixion than actual photojournalism. They were easily fact-checked: if you were unsure about their veracity, you could check any news outlet and see that the arrest hadn’t happened. (The images’ creator, Eliot Higgins, the founder and creative director of the media forensics organization Bellingcat, which often debunks Russian misinformation, was banned from Midjourney after his creations went viral.)

But the Swag Pope? That thing tricked me and a lot of other people. I saw the image in my Twitter timeline at least six times and didn’t think twice about it beyond “nice” before I saw the visionary Trey Smith post that this was “the only good ai pic i’ve ever seen,” and then our generation’s philosopher Krang T. Nelson point out that “the moment someone uses AI to throw the pope in a white puffer coat and everyone starts clapping and honking like a bunch of seals.” 

The puffy Pope, we all later learned, was made using Midjourney, a generative AI art tool that turns strings of keywords into images. The main giveaway was in the glasses, which melt at the bottom of one of the lenses. It looks like a trick of the light and isn’t something a casual timeline enjoyer would notice on a Saturday scroll-through (as I didn’t), but because so many of us are primed to assume everything slightly weird is AI-generated these days, people eventually caught on to the gag.

The image’s creator is a 31-year-old construction worker from the Chicago area named Pablo Xavier, who posted it to a Facebook group dedicated to making AI images. It was never even meant to fool anyone, but it broke containment and ended up viral on Twitter, where people like me scroll the firehose timeline as fast as possible and context—like opting into an AI art Facebook group—is at a bare minimum. He told BuzzFeed that he spun up the dripped-out Pope while tripping on mushrooms. This makes a lot of sense to me. Wading through the internet these days feels like a perpetual trip, where I’m constantly counting the fingers on other people’s hands and zooming in on textiles a little too closely, asking myself, Is that normal? Is that how people look?

“I don't even know how to put words to it. It really feels like it's unraveling”

Pablo Xavier said the Midjourney prompt came to him “like water: ‘The Pope in Balenciaga puffy coat, Moncler, walking the streets of Rome, Paris,’ stuff like that.” It’s no wonder people bullish on AI have started describing it as akin to spell casting; humans have always mythologized what we don’t understand. Putting AI on the spiritual plane gives it an unknowable quality and exonerates us of its repercussions, positioning it more as our inevitable fate than the result of a series of choices spanning decades. It’s the refrain of every machine learning engineer back to the very first guy who put deepfakes into the world: if I didn’t do it, someone else was going to eventually. I’ve lost count of the number of times I’ve heard this from people making AI-generated nudes, from non-consensual deepfakes to apps that algorithmically undress people.

But no part of this exhausting, endless game of “I, Spy” was fated. We were warned.   

Even people working in the misinformation and media manipulation research space are experiencing a sort of future-facing vertigo. “WTF is even happening? How is this real life?” AI ethics researcher Timnit Gebru tweeted on Wednesday, following the news that more than 30,000 people, some of them signing under fake names and many of them the same people who got us into this mess, signed a letter urging “slow AI,” which has been criticized as contributing to more senseless hype while fixing nothing. “I can't even keep up anymore how are people being this duped? The hype, the letter from a longtermist institute signed by 1k+ people, disinformation from senators, what more can we do? I'm out of ideas.”

Hany Farid, a professor at the University of California, Berkeley who’s been studying manipulated media since long before deepfakes, told me that while he’s used to getting a few calls every week from reporters asking him to take a look at images or videos that seem manipulated, over the past few weeks, he’s gotten dozens of requests a day. “I don't even know how to put words to it. It really feels like it's unraveling,” Farid told me in a phone call. 

When AI generated fakes started cropping up online years ago, he recalled, he warned that this would change the future, and some of his colleagues told him that he was overreacting. “The one thing that has surprised me is that it has gone much, much faster than I expected,” he said. “I always thought, I agree that it is not the biggest problem today. But what's that Wayne Gretzky line? Don't skate to where the puck is, skate to where the puck is going. You’ve got to follow the puck. In this case, I don't think this was hard to predict.”  

This constant threat detection used to be the work of researchers in the field, and now it feels like it’s everyone’s job to be an image forensics analyst, even when we’re just scrolling Twitter. 

“It’s exhausting,” I told Farid. 

“And it's going to backfire,” he said. “If you see fear everywhere... you can't go through the world like that.” Efforts like the Coalition for Content Provenance and Authenticity, which is working on a technical standard for verifying media for publications and content creators, could help, he said, but can’t solve the problem entirely. For the rest of what we see on social media, we’re on our own. 
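
To get a sense of what provenance standards like the coalition’s involve, here is a minimal, hypothetical sketch in Python. It assumes only that C2PA-aware tools embed a manifest labeled “c2pa” in a file’s bytes (the `has_c2pa_marker` helper is mine, not part of any official tooling), and it is presence detection, not verification: actually validating a manifest means checking its cryptographic signatures with software built for the standard.

```python
# Crude illustration only: real C2PA verification validates the cryptographic
# signatures inside the manifest; this merely checks whether a manifest-like
# marker exists in the file at all.

def has_c2pa_marker(path: str) -> bool:
    # C2PA manifest stores are embedded in media files under a "c2pa" label.
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        status = "provenance marker found" if has_c2pa_marker(path) else "no C2PA marker"
        print(f"{path}: {status}")
```

Even a check this shallow is more than any social feed surfaces today, which is Farid’s point: until the platforms adopt something like it, every scroll is a judgment call.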

“It takes a lot of cognitive energy to look for markers of clear authorship and institutional credibility, as well as to be on high alert concerning the ways texts or images may have been manipulated,” Joshua Glick, a visiting associate professor of film and electronic arts at Bard College, told me. He said his students say they feel “inundated” by all the different kinds of media and platforms there are to keep up with. The ease and speed at which manipulated media (and all media) moves today makes it more difficult. “The experience can be overwhelming, and this lowers their vigilance,” he said. “The work of celebrity influencers, amateur media makers, and professional journalists co-exists as a continual stream of ‘content’ that can be difficult to parse. One thing we talk about is the need to have a core set of go-to media outlets you can turn to for trustworthy information, places that value expertise, fact-checking, and a solid code of ethics.”

“We're in a post-truth world now and generative content fits well; everything must either be assessed critically, or laughed at!”

For a few years, researchers tried to show that deepfakes could be spotted by things like weird facial movements and blinking, or, more recently with image generation models, by the inability to make a normal-looking hand—things AI supposedly couldn’t do, ways to catch the computers out on their lies. The nature of these systems, however, means that every time one side releases a new detection strategy, the other learns and improves. Midjourney does hands now. Deepfakes blink. None of it mattered.
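
Blink detection, for what it’s worth, really was measurable for a while. Here is a minimal sketch of the eye-aspect-ratio heuristic that era of research leaned on, assuming dlib’s face detector and its pretrained 68-point landmark model (the model file path is a placeholder, and the helper functions are my own illustration):

```python
import numpy as np
import dlib  # assumes dlib plus its pretrained 68-point landmark model

detector = dlib.get_frontal_face_detector()
# Placeholder path: the landmark model is downloaded separately.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts: np.ndarray) -> float:
    # Ratio of eye height to width: roughly 0.3 when open, near 0 when shut.
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

def eyes_closed_in_frame(gray_frame, threshold: float = 0.2) -> bool:
    # True if the first detected face in this frame has both eyes shut.
    faces = detector(gray_frame)
    if not faces:
        return False
    shape = predictor(gray_frame, faces[0])
    coords = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    left_eye, right_eye = coords[36:42], coords[42:48]  # dlib's eye landmarks
    return (eye_aspect_ratio(left_eye) < threshold
            and eye_aspect_ratio(right_eye) < threshold)
```

Count the closed-eye frames across a clip and a real person should blink every few seconds; early face-swap models, trained mostly on open-eyed photos, almost never did. Then the training data got better, and the tell disappeared.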

“I get this question every day, which is, ‘What can I tell my readers to look for in this imagery?’ And my answer is always ‘nothing,’” Farid said.  “Because no matter what I tell you today, tomorrow it's not going to work. And then you're going to have this false sense of a superpower: Oh, I look for x. But then that doesn't work anymore. So there is no, there's no magic skill here.”

While AI-manipulated imagery has been used for political disinformation—in India’s 2020 elections, and more recently during the war in Ukraine—technologically advanced techniques haven’t brought about the collapse of reality as predicted, per se. But there have been costs. People call things they don’t believe or disagree with “deepfakes,” even when they aren’t. Trust in institutions and legacy news outlets is low. And it takes less than ever to fool us. “The larger threat to democracy remains legacy forms of disinformation,” said Glick. “The misleading recontextualization of images, the slowing down or speeding up of footage, and even the writing of false statements composed in compelling fashion have done more to destabilize the judgment of everyday citizens and erode public trust in the press than elaborate deepfakes.” 

*

Looking back at the predictions experts made when deepfakes first hit the scene in 2017 is sobering. At the time, I reached out to artificial intelligence researcher Alex Champandard for his impressions of the then-new technology, and he said we needed to have a "loud and public debate" about media manipulation. “We need to put our focus on transforming society to be able to deal with this.”

This week—five years and three months after that prescriptive statement—I asked Champandard to reflect on whether we’ve succeeded in having that conversation. “No, I don't think we've done very well,” he said in an email. “If anything, it seems half the population that has completely disengaged from the mainstream talking points, and doubts anything that comes from corporate advertising and government mouthpieces. On the bright side, I don't think that generative content can make things significantly worse! We're in a post-truth world now and generative content fits well; everything must either be assessed critically, or laughed at!” 

Now, everyone seems “bored of the polarization,” he said, and rage-engagement isn’t as effective as it used to be for social media platforms struggling to maintain active users. “I think we're seeing a natural shift towards semi-closed communities where it's easier to trust others and questionable content (from any source, AI or not) can be assessed in a better way,” Champandard said. “These smaller communities have a better information immune-system.”

A month after Motherboard’s first story about deepfakes kicked off five years of discourse, I had the chance to speak to Peter Eckersley, then chief computer scientist for the Electronic Frontier Foundation. (Eckersley died in 2022, and asked for his brain to be cryogenically preserved alongside a message: “scan me.”) As deepfakes started taking off and more people started taking advantage of the open-source technology, he told me that he thought we were “on the cusp” of AI-generative technology becoming even easier to make and more widespread. It was already consumer-level, in that people with some programming experience and a decently beefy gaming PC were posting AI-generated face swaps all over Reddit, but it still wasn’t “easy” to do.

Eckersley predicted that we had about a year or two before that changed, and he was right. Today, everyone’s able to make AI-generated faceswaps with apps on their phones, from just a single selfie. And with models like Midjourney or DALL-E, or the rapidly improving text-to-video models that have come out just this month, you don’t even need the selfie. We can just speak these things into existence using text.
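
That really is the whole workflow now. As a rough sketch, here is what speaking an image into existence looks like through the openai Python package’s image endpoint as it existed when this piece was written (the API key and the prompt are placeholders; Midjourney itself runs through Discord rather than an API like this):

```python
import openai  # the pre-1.0 openai-python client, current as of this writing

openai.api_key = "sk-..."  # placeholder; use your own key

# One sentence of text in, one image out. No selfie, no Photoshop, no skill.
response = openai.Image.create(
    prompt="an elderly pontiff strolling in a huge white puffer jacket, street photo",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # link to the generated image
```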

“There’s a large and growing fraction of machine learning and AI researchers who are worried about the societal implications of their work on many fronts, but are also excited for the enormous potential for good that this technology possesses,” Eckersley said in our phone call in January 2018. “So I think a lot of people are starting to ask, ‘How do we do this the right way?’ It turns out that that’s a very hard question to answer. Or maybe a hard question to answer correctly... How do we put our thumbs on the scale to ensure machine learning is producing a saner and more stable world, rather than one where more systems can be broken into more quickly?”

In the 4th century, Emperor Constantine, tricked by a gaggle of jesters, granted one fool his wish to become king for the day. That jester decreed that every year, this day would be a celebration of absurdity. That’s a story made up by Boston University professor emeritus of history Joseph Boskin in 1983, which he told to a reporter as a joke. It’s as good an origin story as any, though. It’s plausible, ringing just true enough to be believed. If anything, it was a parable for a future he couldn’t have imagined. We’re all the jesters of our own spheres, and viral kings for as long as our tricks will trend.