Meta puts the ‘Dead Internet Theory’ into practice

Meta’s mission statement is to “build the future of human connection and the technology that makes it possible.”

According to Meta, the future of human connection is basically humans connecting with AI. 

The company has already rolled out — and is working to radically expand — tools that enable real users to create fake users on the platform on a massive scale. Meta is hoping to convince its 3 billion users that chatting with, commenting on the posts of, and generally interacting with software that pretends to be human is a normal and desirable thing to do. 

Meta treats the dystopian “Dead Internet Theory” — the belief that most online content, traffic, and user interactions are generated by AI and bots rather than humans — as a business plan instead of a toxic trend to be opposed. 

In the old days, when Meta was called Facebook, the company wrapped every new initiative in the warm metaphorical blanket of “human connection”—connecting people to each other. 

Now, it appears Meta wants users to engage with anyone or anything—real or fake doesn’t matter, as long as they’re “engaging,” which is to say spending time on the platforms and money on the advertised products and services.

In other words, Meta has so many users that the only way to continue its previous rapid growth is to build users out of AI. The good news is that Meta’s “Dead Internet” projects are not going well. 

Meta’s aim to get people talking and interacting with non-human AI has taken several forms. 

The Fake Celebrities Project

In September 2023, Meta launched AI chatbots featuring celebrity likenesses, including Kendall Jenner, MrBeast, Snoop Dogg, Charli D’Amelio, and Paris Hilton. 

Users largely rejected and ignored the chatbots, and Meta ended the program. 

The Fake Influencer Engagement Program

Meta is testing a program called “Creator AI,” which enables influencers to create AI-generated bot versions of themselves. These bots would be designed to look, act, sound, and write like the influencers who made them, and would be trained on the wording of their posts. 

The influencer bots would engage in interactive direct messages and respond to comments on posts, fueling the unhealthy parasocial relationships millions already have with celebrities and influencers on Meta platforms. The other “benefit” is that the influencers could “outsource” fan engagement to a bot. 

(“Here at Meta, we engage with your fans so you don’t have to!”)

And Meta has even started testing a new feature that automatically adds AI images of users (based on their profile pics) privately into their Instagram feeds, presumably to drive demand and acclimate the public to the idea of turning themselves into AI. 

The Fake Users Initiative

Meta launched its AI Studio in the United States in July 2024. It empowers users with no AI skills to create invented fake users, complete with profile pics, voices, and “personalities.” 

The idea is that these computer-generated “users” have profiles that exist just like human-user profiles and can interact with real people on Instagram, Messenger, WhatsApp, and the web. Meta plans to enable these personas to do the same on its “metaverse” virtual reality platforms.

A senior Meta executive recently defended the AI-powered fake user concept. “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, said in a Financial Times article. “They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform . . . . That’s where we see all of this going.”

Hayes added that while “hundreds of thousands” of such characters have already been created by users, most have been kept private (defeating their purpose of driving engagement).

The Fake Experiences Folly

Meta also plans to release its text-to-video generation software to content creators. This will essentially enable users to place themselves into AI-generated videos, where they can be depicted doing things they never did in places they’ve never been.

The Fake Facebook Folks Fiasco

About a year ago, Meta created and managed 28 fake-user accounts on Facebook and Instagram. The profiles contained bios and AI-generated profile pictures and posted AI-generated content (responsibly labeled as both AI and “managed by Meta”) on which any user could comment. Users could also chat with the bots. 

Recently, the public started noticing these accounts and didn’t like what they saw. Social media mobs shamed Meta into deleting the accounts. 

One strain of criticism was that the fake users embodied crude human stereotypes and misrepresented the communities they were pretending to be part of. 

Also, as with most AI-generated content, the output was often dull, generic, corporate-sounding, wrong, and/or offensive. It didn’t get much engagement, which, for Meta, was the entire purpose of the effort. (Another criticism was that users couldn’t block the accounts; Meta blamed a “bug” for the problem.)

AI slop is a problem; Meta sees an opportunity 

All this intentional AI fakery takes place on platforms whose biggest and arguably most harmful feature is bottomless pools of spammy AI slop generated by users without content-creation help from Meta. 

The genre uses bad, often bizarre AI-generated images to elicit a knee-jerk emotional reaction and engagement.

In Facebook posts, these “engagement bait” pictures are accompanied by strange, often nonsensical, and manipulative text elements. The more “successful” posts have religious, military, political, or “general pathos” themes (sad, suffering AI children, for example). 

The posts often include weird words. Posters almost always hashtag celebrity names. Many contain information about unrelated topics, like cars. Many such posts ask, “Why don’t pictures like this ever trend?”

These bizarre posts — anchored in bad AI, bad taste, and bad faith — are rife on Facebook.

You can block AI slop profiles. But they just keep coming — believe me, I tried. Blocking, reporting, criticizing, and ignoring have zero impact on the constant appearance of these posts, as far as I can tell. 

And the apparent reason is that Meta’s algorithm is rewarding them. 

Meta is not only failing to stop these posts, but is essentially paying the “content creators” to make them and using its algorithms to boost them. Spammy AI slop falls perfectly into line with Meta’s apparent conclusion that any garbage is good if it drives engagement. 

The AI content crisis

AI content, in general, is a crisis online for a very simple reason: Social media users, content creators, would-be influencers, advertisers, and marketers don’t quite seem to realize that AI-generated content, for lack of a better term, sucks.

AI-generated text, for example, uses repetitive, generic language that doesn’t flow and doesn’t have a “voice.” Word choices tend to be “off,” and the AI usually can’t tell the difference between what’s important and what’s irrelevant. 

AI-generated images are especially problematic. According to multiple studies, people feel more negatively about AI-generated images than real photos. 

Social networks are filled with AI-generated images. Billions have been created using text-to-image AI tools since 2022, many posted online. 

To quantify: A year ago, some 71% of images shared on social media in the US had been AI-generated. In Canada, that figure was 77%. In addition, 26% of marketers were using AI to create marketing images, and that percentage rose to 39% for marketers posting on social.

According to the 2024 Imperva Bad Bot Report by Thales, bots accounted for 49.6% of all global internet traffic in 2023. One-third (32%) of internet traffic was attributed to malicious bots. And 18% came from “good bots” (search engine crawlers, for example). 

In 2023, only 50.4% of internet traffic was human activity. Now, in the first month of 2025, human traffic is almost certainly a minority of all internet activity. 
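
To put rough numbers on that claim, here’s a back-of-the-envelope extrapolation in Python. The 2023 figures come from the Imperva report cited above; the 2022 figure (47.4% bot traffic) is drawn from the prior year’s report in the same series, and the straight-line growth assumption is mine, not Imperva’s. Treat it as a sketch, not a forecast.

```python
# Back-of-the-envelope extrapolation of bot vs. human traffic share.
# 2023 figures: Imperva Bad Bot Report (cited above). The 2022 figure
# is an assumption taken from the prior year's report in the same
# series; the linear-growth assumption is ours, not Imperva's.

bot_share = {2022: 47.4, 2023: 49.6}  # % of all internet traffic

# Sanity check on the 2023 breakdown: malicious (32%) + good (18%)
# bots roughly equal the 49.6% total (the gap is rounding).
assert abs((32 + 18) - bot_share[2023]) < 1.0

# Year-over-year growth in the bot share, in percentage points.
growth = bot_share[2023] - bot_share[2022]  # ~2.2 points per year

# Extend the trend through early 2025, assuming it continues linearly.
for year in (2024, 2025):
    bot_share[year] = bot_share[year - 1] + growth

for year, share in sorted(bot_share.items()):
    print(f"{year}: bots {share:.1f}% / humans {100.0 - share:.1f}%")
```

Under those assumptions, bots cross the 50% line in 2024 (about 51.8%), which is what makes humans a minority of traffic by early 2025.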

The “Dead Internet Theory” people are not only conspiracy theorists, they’re also ahead of the curve. If the theory holds that a majority of online activity is by AI, bots, and agents, then the theory is now objectively true. 

(The theory offers a host of reasons for that outcome that have not been proven true. Proponents believe bots and AI are intentionally created to manipulate algorithms, boost search results, and control public perception.)

Meta, by contrast, cheerfully boasts about its intentional creation of AI bots, though its motive is mainly to drive engagement. 

Meta’s fake-user initiatives remind me of its failed “metaverse” programs. 

The “metaverse” concept was a dystopian nightmare dreamed up by novelists as a warning to mankind. The “Dead Internet Theory” is a conspiracy theory that attempts to explain how the internet went horribly wrong. But to Meta, both are product roadmaps. 

Meta is proving itself to be an anti-human company that’s working hard to get people away from the real world and trapped for many hours each day, going nowhere, doing nothing, and interacting with no one. 

Meta will fail. The public will reject its dystopian goals.

But the rest of us should learn from its bad example. What the public really wants — something Meta used to understand — is human connection: people connecting to other people. Advertising, articles, posts, comments, and chats made by people rather than bots are becoming harder to find and, as such, more valuable.

Because a “connection” with nobody is no connection at all. 
