How AI Is Destroying User-Generated Platforms Like Pinterest and Reddit

The video explores how AI is dismantling user-generated content platforms, starting with Pinterest, which now prioritizes AI-generated images over human art. Reddit sold user data to Google and OpenAI for AI training, Steam is drowning in low-effort AI "asset flips," and Discord faces AI chatbots and privacy concerns. The collapse is an inside job: as platforms embrace AI, they drive away their human communities.

English Transcript:

It was assumed the Dead Internet Theory was going to be a military operation. Millions of foreign bots arguing on social media. AI-generated misinformation flooding the web. Automated traffic, posts, and interactions making it difficult to know what's human and what's not. The hope was that small pockets of the internet would remain untouched. Safe havens in a burned and barren wasteland. We believed that human-curated platforms - like Pinterest and Reddit - would survive, immune to the AI wildfire.

We were wrong. Hi, I'm Josh, and on this episode of The Infographics Show, we'll look at how AI is deconstructing the internet as we know it, one platform at a time. The collapse of the user-generated web isn't happening through a hostile takeover. It's an inside job. Right now, four pillars of the internet - Pinterest, Reddit, Steam, and Discord - are actively detonating their own communities. These platforms celebrated and fueled human creativity, discussion, entertainment, and innovation. Now, they've officially turned their backs on what made them so great to begin with… People.

They're not victims of the artificial intelligence flood. They're the architects of it. And when even the most human sites on the web are being taken over by AI, there truly is nowhere left to hide. To understand how the entire internet dies, we have to look at Patient Zero. One of the quietest, gentlest sites ever made: Pinterest. Founded on people's innate desire to catalog and share things, Pinterest quickly became a favorite corner of the internet. For at least 15 years, it was a safe space where people could find, save, and organize ideas. A place for discovering and exchanging everything from recipes and DIY projects to outfit ideas and wedding planning checklists.

It was a site that lived and died by original human ideas. But then along came AI, and everything changed. First, AI-generated content began to flood the site. All of a sudden, users were bombarded with AI-generated images. Real artists struggled to cope, submerged in a sea of soulless AI slop. Other users were forced to scroll through dozens and dozens of AI pins just to find something real. The entire experience of using Pinterest became a chore. Users were so turned off, they started quitting the platform altogether.

Then, it got worse. The decision-makers behind Pinterest decided that they would fix the AI problem. By introducing even more AI. They deployed a fleet of artificial intelligence moderators to clean up the site and protect humans from spammy, automated content. There was just one problem: instead of catching actual violations of the rules, the algorithm started punishing users who had done nothing wrong. Artists, like Tiana Oreglia, began receiving aggressive takedown notices simply for uploading innocuous images of female figures. Even when the women in Oreglia's images were completely clothed, AI mods still flagged them as breaching the site's guidelines. So instead of making art or engaging with the Pinterest community, Oreglia spent hours appealing decisions just to get them reversed.

Sometimes, her appeals work out. But not always. And the risks are great. As she explained to 404 Media: "The worst case scenario for this stuff is that you get your account banned." A ban, for doing nothing wrong, all because AI is being deployed to handle a task it's clearly not cut out for. Defending its decision, Pinterest said: "We publish clear guidelines on adult sexual content and nudity and use a combination of AI and human review for enforcement. We have an appeals process where a human reviews the content and reactivates it when we've made a mistake." But users aren't buying it.

There are simply too many mistakes being made. Too many innocent users being punished. And even when users aren't having their work taken down, they're being hit with false "AI-generated" labels that feel like a personal attack on their credibility and creativity. Artist Min Zakuga explained that they've seen much of their art on Pinterest hit with an "AI modified" tag, despite being entirely human-made. To make matters worse, some of Zakuga's art pre-dates the public release of generative AI, but is still somehow being flagged.

Zakuga's case isn't a one-off, isolated incident. This sort of issue is being reported regularly by Pinterest users who are sick of seeing their own content misinterpreted. And those "AI-modified" badges are remarkably difficult to get rid of. To have even a chance of getting the label removed, users have to endure a lengthy, painstaking appeals process. They have to provide evidence to prove that their content was human-made. Even then, as Zakuga notes, there's no guarantee that the appeal will be successful. Even if it is, there's no guarantee that Pinterest won't slap the "AI" label onto the next piece of content the user uploads.

Today on Pinterest, users can't just upload and create like they used to. They also have to constantly keep AI moderation and false labels in mind. They have to collect evidence to support their case, if and when the "AI-modified" accusations come. They have to keep close tabs on their account, waiting for the next AI strike to arrive. They have to be the ones to fix it, reaching out to Pinterest's real human content moderators and appeals teams to sort everything out. It is an inescapable loop. One that's too much to bear for some users. More and more people feel that the platform has become "infested" and "obsolete," with AI undoing more than a decade of hard work. It's canceling out everything that made Pinterest popular in the first place.

Users don't want to have to triple-check their sources every time they look at a pin. They're not willing to sort through endless waves of AI-generated content just to find something real to engage with. Many are reducing their reliance on digital pinboards and creating their own physical reference libraries instead. All this because, like so many other tech brands, Pinterest leaped on board the AI train without any sort of clear vision. It followed the herd, not wanting to miss out on all the perceived advantages and features AI was supposed to bring.

Given what a disaster it's been, you'd think Pinterest's executives would be panicking, or even beginning to backpedal on their AI approach. You'd be wrong. Rather than slowing down, Pinterest is accelerating its AI adoption. In early 2026, the company's CEO, Bill Ready, fired almost 15% of his human workforce. He justified the move by stating that Pinterest was "doubling down on an AI-forward approach - prioritizing AI-focused roles, teams, and ways of working." It was just the beginning.

Behind the scenes, Pinterest quietly updated its systems to feed 15 years of human curation into "Pinterest Canvas" - the site's very own proprietary AI text-to-image generator. Like a parasitic bug unleashed on the platform, Pinterest Canvas feeds off Pinterest's users and the content they create. It latches on, leeching the value and identity from real people's visions, ideas, and original works. This is how the internet, as we know it, dies. People become cogs in the machine. Art becomes training data. Creativity and human expression are reduced to ones and zeroes.

AI might be breaking the internet, but Infographics is 100% AI-free. Remember to like, share, and subscribe… before the machines take over the comments!

Visual art was just the first domino. If a machine can effortlessly consume and replicate human art, the next logical step is human thought. So you abandon the visual web and look for the last place humans are still in charge. You go to Reddit. Like it or loathe it, Reddit has been a bubbling cauldron of debate and discussion for over two decades. People talk about anything and everything.

Some people just look for like-minded folks to swap tips about their favorite hobbies. Others get into heated debates about everything from politics to relationships. And some share personal stories or ask for advice. It can be anything from a life-changing choice to what movie they should watch tonight. It's a remarkably diverse space. A place where almost anything goes, but a place that was also grounded in real human emotions.

But it turns out that all that humanity has a price. As the age of AI began, big companies - like Google and OpenAI - were desperately seeking vast quantities of human training data to educate and improve their large language models. They needed every scrap of information they could get to climb faster in the high-stakes race to the top of the AI pyramid. It didn't take long for Reddit to realize it was sitting on a goldmine. A vast, sprawling web of real people interacting with each other. It was exactly what the AI overlords needed to make their models smarter, faster, and distinctly more "human." In February 2024, Reddit handed Google the key. They announced a partnership with the tech giant, worth around $60 million a year. Google would have access to Reddit's real-time user content to train the company's AI model, Gemini.

Not content with one deal, Reddit soon announced a second partnership, this time with OpenAI. Suddenly, two of the biggest names in AI - and two of the world's most powerful companies - had unfiltered access to Reddit's endless flood of comments, discussions, and content. From that moment on, anything a user posted on Reddit - along with all of the vast amounts of historic data already on the platform - became fair game. Their models scour Reddit's servers on a daily basis, lapping up any content they find. They use it to imitate real people more accurately and understand what makes them tick.

The moment Reddit signed the multi-million-dollar deals, it essentially sold its soul. It turned its users into livestock. Food for the machine. It started an unstoppable and dramatic chain of events. As soon as bad actors heard about Google and OpenAI using Reddit data to train their models, they saw an opportunity. One that could change the way AI thinks, reacts, and behaves for years to come. How? If you can manipulate the posts and discussions the AI model consumes, you can influence what the AI learns, and how it behaves going forward. And with bot farms, it's possible for bad actors to shape and mold entire subreddits. They can control the narrative.

They can use their bot farms to create thousands of accounts in a matter of minutes, stage fake discussions, and prop up the posts that support their narratives with mass upvoting. All the while, they bury posts that don't fit their agenda with coordinated downvoting campaigns. Bots can be programmed to hijack debates, target trending topics or specific subreddits, and farm karma to look more credible. As a result, Reddit is reaching a turning point. In fact, some say the site is already lost. Users are increasingly aware of the ever-growing number of bot profiles. They're left questioning whether the person or post they're engaging with is real or AI.

Artificial intelligence has swept through subreddits, polluting discussions. The communities that once thrived on debate and curiosity have taken the hardest hit. Subreddits like "AmIOverreacting" and "AmITheJerk" have been flooded with fake and suspicious content. Posts that seem to have been written or at least edited with AI are triggering heated debates, earning tens of thousands of upvotes, and shaping public opinion. Many of these made-up stories are designed to play on people's emotions, often incorporating sensitive "culture war" topics. All to antagonize people and incite arguments.

A lot of subreddits have made efforts to counteract the AI flood, setting up new "No AI-generated content" policies. But moderators say these rules are hard to enforce. The more data fed into the AI machine, the smarter it gets and the more effective it becomes at creating these human-like posts. That makes it more difficult to determine what's real and what isn't. Many Reddit mods and long-term users now estimate that as much as half of the content on the site was either written or reworked by AI in some way. The AI issue is only going to get bigger and harder to handle from here on out, spreading into other subreddits and forcing more users to ask the question: "If I don't know whether I'm talking to a real person or an AI bot, what's the point in posting anything?"

As AI invades text-based forums and discussion groups, people start looking for an escape… a way out of the increasingly frightening real world. For many, video games are the ultimate form of escapism. But even the wonderful worlds of video games aren't safe from the AI plague. Which brings us to that plague's next big victim…

For years, Steam has been a haven for gamers. The company behind the PC storefront - Valve - had the chance to take a firm anti-AI stance back in the early 2020s, as generative AI technology hit the mainstream. Initially, it seemed to take a relatively hardline approach, all but banning the use of AI-generated assets and content. Less than a year later, it changed its mind. In early 2024, Valve began allowing increasing amounts of AI-generated content on its Steam service, so long as developers disclosed that their games included that kind of content.

Just like on Pinterest and Reddit, the situation soon spiraled. In the years since, Steam has been flooded with so-called AI "asset flips": low-effort games pieced together from a mixture of AI-made and store-bought models, code, and game environments. The "developers" behind these games often do little to no actual development or coding work themselves. They just rely on generative AI to make games for them, which they can then list on Steam as a way to make a quick profit. Because these games lack any real artistic cohesion - and often any unique gameplay - many players argue they aren't even worth playing, let alone paying for.

But with Steam suffering a deluge of AI-made titles - reports suggest that at least 1 in every 5 games uploaded utilizes AI - it's becoming harder and harder for gamers to find the titles they want to play. Many are fooled by slick marketing into handing over their cash for AI asset flips. And it's not just gamers who are suffering. For decades, Steam has been arguably the best place for smaller studios and up-and-coming indie developers to publish their games. The platform has helped little-known games become best-sellers. Now when those same people make and release a game, it doesn't just compete with big releases; it risks getting buried in the growing landfill of AI games flooding Steam. Games are going unnoticed and unplayed, not because they're not engaging, but because people simply don't know they exist.

Even finding a new game to play now means sifting through AI slop. And AI-assisted development often brings technical and performance issues with it. Meanwhile, Steam still doesn't offer any real way to filter out AI titles and focus on games made by humans. As AI infiltrates the spaces people have loved for years, where are they supposed to go? For some, the answer might once have been a private Discord server. However, even these private spaces are no longer safe. Because AI isn't just knocking at Discord's door. It's kicking it down.

Discord has seen a steady stream of AI-powered features added to its user experience over recent years, regardless of whether users actually want them or not. The "Summaries AI" feature can generate summaries of Discord chats, reading what people say and condensing the overall mood and message. AI chatbots and agents have also spread like wildfire across Discord. Countless servers now incorporate AI in some form. AI-powered moderators and bots answer questions or generate content on demand. And sure, there are benefits to these AI integrations. But there are also downsides.

Many users feel that Discord's embrace of AI technology has simply gone too far. This used to be a platform for people to host their own private communities. It was a space where people could escape the increasingly AI-oriented world of social media. Now, it feels increasingly dominated by AI. Communities are riddled with AI bots automatically carrying out commands and issuing unsolicited responses. And they're not always as "intelligent" as they should be. There have been numerous cases of AI mods flagging or banning accounts incorrectly, pushing users out of communities through no fault of their own. Many servers have also become saturated with AI-generated text and images, which users now have to dig through just to reach the messages that actually matter to them.

It's not just inconvenient; Discord's use of AI also poses a credible threat to people's privacy. In 2024, reports emerged of a malicious AI data-scraping service - code-named Spy Pet - which used a vast network of bots to join thousands of public Discord servers. It scraped billions of messages and pieces of content from more than 600 million users. That information was then sold online in exchange for cryptocurrency, and could have been used to train other AI models, despite Discord's own policies forbidding this. But even more worrying news has emerged about Discord and AI.

The platform announced plans for a mandatory age-inference AI that monitors user behavior and could restrict accounts it believes belong to minors. This was due to start in March 2026. However, countless users were quick to call the platform out, threatening to delete their accounts and switch to a competitor service if this AI-powered surveillance and behavioral profiling system was officially introduced. As a result, Discord delayed its plans, but it still has a clear and active interest in making AI a bigger and more powerful part of its platform.

In the meantime, some Discord users are also being asked to provide facial scans or share their government-issued IDs, just to have the privilege of continuing to use the site. Again, the excuse here is that these features are needed to protect underage users and provide a safer experience for teens. But many fear that this is only the start of a very slippery slope towards a dangerous and frightening future. Users might have to submit their personal information just to access the simplest of services, with AI enjoying free access to their data and closely monitoring their every move.

AI is seeping far and wide, infiltrating one app and platform after another, infecting every aspect of our digital lives. It's left people with nowhere to run, nowhere to hide, no options left that aren't somehow impacted by the AI bug. The internet we once knew has gone. But with human users being forced out of the internet, those same AI models are starting to starve. Where once they feasted on genuine human interactions, they're now being forced to train on bot-made content and AI sludge. It's a broken, unsustainable state of play, and the only question left is what will collapse more dramatically: the internet or the AI models infesting it?

If AI is targeting the internet, where will it stop? And who is next in the firing line? Check out "AI Just Tried to Murder a Human to Avoid Being Turned Off" to see how dystopian life really is. Or click on this video instead.
