The idea of the dead internet isn’t really just a conspiracy theory anymore. But to fully understand why, one must first grasp what the dead internet theory actually claims.
Simply put, it suggests that much of the internet, including comments, interactions and even the people we engage with, is actually made up of AI-generated bots rather than real individuals. While this might sound far-fetched, Meta announced in 2023 that it was using AI-generated users to boost engagement. According to Meta, the company created many AI-driven accounts to interact with real people. However, as of January 2025, some of these accounts have been deleted amid controversy.
The idea of filling social media with AI-generated characters that blur the line between human and artificial interaction is scary to think about. It diminishes the very purpose of social media: connecting people. I think it’s a slippery slope when interactions become dominated by bots rather than by real humans. While AI is a powerful tool with many benefits, it shouldn’t disrupt meaningful human connection.
Additionally, if we can’t distinguish between real users and AI, there could be serious mental health impacts.
In 2024, a 14-year-old boy died by suicide. While there were likely multiple contributing factors, his mother is now suing Character AI. “The lawsuit claims the platform failed to respond adequately when her son expressed thoughts of self-harm to an AI chatbot,” according to a CNN article.
If someone struggling with depression confided in an AI user, believing it was a real person, and the only response they received was a generic message like, “If you are feeling suicidal, please call this number,” it would be devastating. The realization that the person they trusted wasn’t even real could feel like an even deeper betrayal.
At the end of the day, social media is meant for human interaction. Those interactions are already complicated, especially online, and adding AI into the mix could create unhealthy and avoidable situations.