The more social media gets flooded with crappy AI images and engagement bait, the more I find myself drawn to my books and notebooks. To be fair, I’ve loved reading and writing for as long as I can remember, so it’s not like I’m venturing into unknown territory. But, as the interactions I’m having online become shallower and, frankly, shittier, going back to analog technology is a much-needed reprieve. It’s like stepping into a hot sauna after a cold dip– I can feel my mind (and clenched jaw and drawn shoulders) start to relax almost instantly.
I’m not anti-technology or anti-AI. To call me a Luddite would be both a compliment and profoundly inaccurate. I work in the software industry as a product manager. My particular area of expertise is AI systems for marketers. Staying at the cutting edge and figuring out novel ways to apply machine learning to the challenges marketers face is how I pay my mortgage and keep food on the table. I have a frickin’ patent pending on AI technology I helped develop. It would be hypocritical of me to suggest that advanced technology is bad. In many ways, I think it’s great.
But it’s also a double-edged sword. For every avenue companies like mine explore to make tedious marketing work more efficient, a dozen applications of the same technology do more harm than good.
The Dead Internet Theory started as something of a conspiracy theory, but I think some version of it is now our online reality. Hinting at malicious intent, the Dead Internet Theory asserts that, “due to a coordinated and intentional effort, the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity.”
While I’m wary of claiming that there is an active effort to “control” the population, bot activity, artificial content, and inorganic interactions dominate our online environment in a way that minimizes organic human activity. If this doesn’t control the population, it certainly dumbs it down and inhibits our ability to establish empathy for one another.
Twitter– particularly in the post-Musk-acquisition days of X– is a prime example. The platform used to be my go-to for staying in touch with folks in my industry and seeing what they were working on. It lacked the pretense of a platform like LinkedIn and made for more natural interactions with folks whose work you admired. Nowadays, I log in and find myself scrolling through nothing but drama, clickbait, and engagement farming: controversial (or outright upsetting) statements made by accounts whose only goal is garnering enough clicks, replies, and retweets to earn a payout from X’s monetization system, which rewards engagement at any cost.
I can no longer see posts by the people I follow on Twitter. Instead, it’s a near-constant stream of grainy videos with hyperbolic captions, almost always followed by a community note indicating that the post contains a hidden ad for a gambling website or is being used to promote someone’s OnlyFans account. On top of that, the majority of the replies to viral tweets are so generic or incendiary that you have to step back and ask yourself whether a person wrote those words or whether they came from a bot making API calls to an LLM.
And Twitter isn’t the only clusterfuck website where interactions have degraded over time. When I log into Facebook, innumerable posts make their way into my timeline from pages, groups, and accounts I don’t follow, sharing AI-generated images alongside generic captions that have somehow managed to generate tens of thousands of interactions. You’ll see a post from a page devoted to architecture and interior design, for example, where someone shares an image of a kitchen they claim is their own, the countertops seemingly carved from a single tree trunk while everything else appears ultra-modern and tidy. The caption proclaims pride in one’s work in carving such a unique countertop, and the comments alternate between such non-productive replies as “wow beautiful,” “This is AI,” and “impressive.” And yet, no matter how many times folks comment “this is AI” or complain about AI imagery in a group devoted to a specific interest, the process continues. To the unsuspecting or naive, these posts are a marvel of ingenuity (or so it seems– the Dead Internet Theory would suggest that the eager engagement is less a human failure of media literacy than a bot-driven exercise in boosting a post’s virality). To those who correctly identify such posts as fake, they represent a frustrating breach of in-group norms and a distraction from the group’s goals.
Progressively, our online social interactions are being bent toward a world where the chief reason to be online is numerical engagement. Unlike the social media of yesteryear– when such innocuous posts as “feeling excited about going to the mall tomorrow” (on Facebook, every post used to start with your name and “is,” so that post would have appeared on the timeline as “Blake Reichenbach is feeling excited about going to the mall tomorrow”) would elicit responses from friends like “I’m going too!” or a basic “have fun”– today’s online ecosystem no longer prioritizes human interactions that mimic offline ones.
Instead, online social interaction prioritizes two key metrics. First, social media platforms optimize for the amount of time you spend online. They want you to see content that’s going to intrigue you, frustrate you, enrage you, or simply numb your brain so that you don’t realize how much time you spend scrolling. Have you ever seen a video where the top half of the screen shows one video and the bottom half shows something like Minecraft or Subway Surfers? It’s a dopamine bomb that does nothing for your brain other than lull it into a sense of numbness so that you’re unlikely to look away (read The Chaos Machine by Max Fisher for a deep dive on this topic). The longer you spend on apps and websites like Facebook, X, Instagram, YouTube, or TikTok, the more opportunities those companies have to show you ads (their primary revenue generator), collect your personal data, and brag about their usage metrics to shareholders.
Second, individuals posting on these platforms optimize for quantity of engagement rather than quality of engagement. This is particularly frustrating because this type of interaction seems to be rooted primarily in our individual need for validation and the dopamine we get from others’ attention in the form of notifications, rather than in the understandable (if infuriating) economic motivations of the companies themselves. X may be the exception, since Musk has ensured that the worst behavior is financially compensated. Across most social platforms, though, users increasingly adopt tribalistic stances on everything from musical acts (stan culture) to politics (MAGA) and optimize for personal commodification, whether aspiring to be seen as an influencer or directly commodifying their personas through channels like Substack and OnlyFans. It’s not uncommon to see captions along the lines of “will delete if this flops,” indicating that, to the poster, the only reason to share something is external validation– and if a post doesn’t meet their expectations for how much validation is enough, it should be deleted.
This type of shallowness– on the part of both the companies that care only about optimizing for ad revenue and the individuals who care only about validation from their assumed tribes– has quite a few downstream consequences that we are only recently starting to understand at a societal level.
One of my biggest concerns about online media is that it will result (or, perhaps, already has resulted) in worsened literacy skills for the general population. Each time I log into Twitter or Facebook, it seems I’m confronted by a new breed of readers and film watchers who equate their personal tastes with quality or, at times, objective moral value. Even people who have built moderately large audiences online by engaging with literature (“BookTok,” for example, which has become a major marketing influence on publishing as a whole) often fail to speak about books in a way that suggests thoughtful reflection. Too often, books are reduced to an aesthetic in online spaces: posters want to look bookish and charmingly nerdy but offer nothing more meaningful to conversations about books than “this book is good/bad.”
Where things really go down the drain is when influential accounts in bookish spaces make moral claims about books, their authors, and the publishing companies behind them with seemingly no regard for what books are and how they function. It isn’t uncommon to see posts across Twitter or Facebook where readers equate their own comfort with a book’s moral value. Tweets like “I can’t recommend this book because it’s homophobic” or “Author A is fucked up for writing a book about dr*gs and grooming” pop up whenever readers are confronted with books that handle difficult topics. Take the accusation of homophobia as an example: when you look into the book being discussed, what you’ll often find is that there are homophobic characters within it, or that a queer character struggles to navigate a heteronormative world. In the discourse that spins out of books like this (and god, how I’ve come to hate the term discourse), the accounts posting on the topic never seem to ask why characters in a book might be homophobic. What does it say about the world if an author’s inclusion of homophobia around a queer character makes us uncomfortable?
Instead, because the presence of homophobia makes them uncomfortable, such readers treat the book as a bad book (perhaps by virtue of nothing more than lacking the emotional regulation to sit with discomfort). Their discomfort becomes virtue signaling, which feeds the beast of garnering validation from others who wish to be seen atop their moral high horses.
One of the more egregious examples of this in recent years concerns the YA novel The Black Witch by Laurie Forest. In this fantasy novel, there is a racial hierarchy, or caste system, built around the fantasy (let me emphasize: fantasy) races of the novel, such as Selkies, Fae, and so on. A Twitter user and book blogger named Sinyard read the book prior to its release and decided that its handling of race was too “problematic” (another term I can’t stand– perhaps more than “discourse”). As Kat Rosenfield wrote for Vulture,
It was this premise that led Sinyard to slam The Black Witch as “racist, ableist, homophobic, and … written with no marginalized people in mind,” in a review that consisted largely of pull quotes featuring the book’s racist characters saying or doing racist things. Here’s a representative excerpt, an offending sentence juxtaposed with Sinyard’s commentary:
“pg. 163. The Kelts are not a pure race like us. They’re more accepting of intermarriage, and because of this, they’re hopelessly mixed.”
Yes, you just read that with your own two eyes. This is one of the times my jaw dropped in horror and I had to walk away from this book.
What cascaded from the reviewer’s barely literate review, in which quotes from characters who are intentionally depicted as bad people were used to portray the book itself as bad, was an onslaught of pile-ons from other Twitter users who wanted to be seen as every bit as ethical as the original poster. Before the book was even released, users left scathing reviews on Goodreads, and some even called for the publisher to pull the book from its release schedule– all because the book made someone who couldn’t be bothered to think critically uncomfortable.
While not every tweet about a book making someone uncomfortable has resulted in the same level of fallout for authors and publishers as The Black Witch, the pattern has continued in the eight years since the book was published. Users who declare a book bad because it made them uncomfortable– and this crops up on both ends of the political spectrum, with books deemed either too woke (They’re shoving the gays down our throats! Female protagonists! Corporate bad guys!) or not woke enough (Racism exists! Homophobia exists! The trans character didn’t fit my definition of trans!)– often use their discomfort and underlying political beliefs to shield themselves from any criticism.
Disagree with me? Well, you’re just a woke leftie lib cuck or a right-wing racist fascist.
Frankly, a good book should make you uncomfortable. Any work of art is a response to the society in which it is created; it will either reaffirm or subvert dominant beliefs, depending on whether the author has explicit political intentions. In the world of uncritical online discourse, there is seemingly no winning when it comes to reflecting society back to itself. The most vocal readers online seem to want only books that align perfectly with their worldview and never challenge or expand it.
A complete failure to grasp basic literary criticism aside, there’s also an insidious equation of difficulty with discomfort that has gained more visibility online as AI tools that simplify language proliferate.
Twitter users worldwide advocate for using tools like ChatGPT to “modernize” classic works of literature, from Shakespeare to Dickens. They promote tools and approaches that distill entire works into digestible paragraphs, stripping away context, storytelling, worldbuilding, cultural commentary, and so on. In other words, they want literature to have a single, simplified meaning rather than be something readers have to work through and make sense of on their own.
These users– whom I can’t even fully deride as stereotypical tech bros, since you’ll find economics professors and parents among their ranks– advocate for the idea that texts that are difficult to read, whether due to subject matter, language, or age, aren’t valuable to modern audiences in their current forms. They see no reason why someone in 2025 should be piecing together a dense text from the 1800s or earlier, especially not when we have AI technology to simplify and modernize it.
In reality, reading texts that contain unfamiliar elements– new words, new sentence structures, references to historical events you weren’t alive for– is how we improve our reading skills. And the value of having those skills in 2025 is that, surprise, reading is still a good skill to have. In the workplace, being able to understand people whose backgrounds, countries, and experiences differ from your own is key to thriving. Even if you can feed your emails into ChatGPT and ask it to translate them into something you’ll understand, that’s not so easily done in real-time conversations. Plus, it’s, frankly, weak. If you can’t put in the work to develop a degree of mastery over your personal domain because you’re too reliant on technology to simplify it for you, you’re not good at what you do, and you’re only a few years from being completely replaced by a computer (I’ll glue googly eyes on your screen so it still feels human).
Critical thinking, depth in a subject, and divergent thinking are amazing skills to have, and you can only acquire those skills by progressively challenging yourself– by stretching your mind little by little. Tackling difficult mental challenges is like training for progressive overload at the gym. If you don’t push your brain, it will stagnate. If you force yourself to take on bigger challenges and piece together Faulkner or Shakespeare or whatever else is currently foreign to you, your brain adapts. You learn how to extract meaning from dense, complicated texts. Your brain gets stronger and more resilient.
Advocating for the use of technology to make literature easier, more approachable, and sanitized is a direct route to a soft brain that can’t parse contradiction, complexity, or nuance. To put it more bluntly: it’s advocating for making yourself dumber(er).
And yet, our online communities reward polarization, not nuance. Algorithms are designed to keep us online and engaged at all costs, so posts that trade in emotionally driven absolutes and generate controversy get the most airtime. The popularity and success of these posts (and, in the case of Twitter, the financial compensation Premium accounts can earn) reaffirm the notion that these are the right kind of posts to make. “If it bleeds, it leads” has taken on a new meaning online, where the content that evokes the most immediate gut reaction spreads the furthest. If it enrages, it engages.
Despite digital media often dumbing us down and failing to foster even basic literacy skills, modern publishing can’t stay away. Authors are pushed to build their online platforms and presence, to be a brand. There seems to be an assumption that publishers need to be able to sell their authors, not just their authors’ work. Publishing houses find it less intimidating, after all, to buy the work of a person with an established audience and a seemingly built-in customer base, and the most straightforward way to build that base in our current economy is via social media.
But, from what I’ve seen so far, authors are typically poorly equipped to navigate social media and build sizeable followings. There have been more than a few missteps where authors responded to reviews or criticism in what was deemed “the wrong way” and faced harsh backlash because of it. Or they’ll take an unpopular stance, speak in generalities, or crack misguided jokes– just as any individual is wont to do on social media– and face hordes of callout accounts tagging their publisher and trying to pressure publishing houses into dropping the author, akin to what Laurie Forest faced with The Black Witch.
All of this doesn’t even scratch the surface of how negatively these interactions can impact individuals. Don’t forget that there is also significant evidence suggesting that social media plays a role in young people developing body image issues and eating disorders, perpetuating bullying and harassment, and increasing stress.
I don’t know if the answer to these problems is to give up social media and online interactions entirely. I haven’t. But, I do think that we have to do something to fight the enshittification of information superhighways and to preserve a world in which subsequent generations aren’t afraid to read difficult texts, can appreciate art, and value critical thinking and hard work.
As for what we should be doing, it’s hard to say for sure. There are digital rights activists, tech ethicists, and pseudo-Luddites like myself who all have different perspectives on what a better version of online interaction and digital media should look like, and who’s to say that one camp is more right than the others? For me, though, getting offline and spending more time in the analog world has done wonders for treating the symptoms of digital media, even if it can’t cure the root cause of the ailment. Spending more time journaling, reading print, and working through books that are big and complicated and frustrating is like hypertrophy training for the brain. It forces you into a place where your mind has to be engaged– where you can’t think in shallow generalities and have to wade waist-deep into nuance.
Recently, I finished reading Alan Moore’s The Great When and couldn’t help but feel ecstatic. It’s a peculiar book that weaves through time periods and versions of reality. It incorporates Cockney rhyming slang and lengthy passages more concerned with evoking a feeling or a sense of motion than with telling a straightforward narrative. His vocabulary is expansive, simultaneously erudite and playful. For someone like Moore, whose work includes titles such as Watchmen and V for Vendetta, The Great When feels like the victory lap of an author who has accomplished so much within his craft and wants to step back and showcase his robust skillset.
The Great When was the first book I’ve read in a long time that I would consider a moderately difficult read. I have degrees in English and studied critical theory at the University of Oxford. I’m no schlub when it comes to the variety of books I read. But in Moore’s work, I found myself pausing to look up the precise definitions of words whose meanings I could piece together from context clues but doubted my ability to use in sentences of my own. There were passages I had to read and re-read and read again to fully digest– who was that character? Why do I know that name? Is that a significant detail or just a flavor of worldbuilding?
I don’t share this to pat myself on the back for reading a novel– my god, how self-indulgent would that be? I’m a bookstore owner!– but rather to share that, as I read, I found myself coming up against the atrophied parts of my brain that I hadn’t been using… the parts that go numb when I’m scrolling Reels or listening to YouTube videos while playing video games. It was like looking in the mirror and being confronted by the fact that I used to juggle multiple books on multiple topics each week, constantly encountering new ideas and artistic styles. Most hauntingly, I was confronted with the reality that I had stopped encountering new ideas and artistic styles in recent years. Wading back into the depths of big, messy, complicated stories felt like a part of myself coming back to me, and I found myself resenting the lazy lack of intellectual resilience I’d come to accept as normal.
The internet’s evolution from a space of genuine human connection to an algorithmic engagement farm hasn’t just changed how we interact online; it has reshaped how we think, read, and process information. As AI-generated content floods our feeds and platforms optimize for outrage over understanding, we’re collectively losing our appetite for complexity and nuance. The technology that promises to make everything easier– from reading classic literature to parsing difficult ideas– is paradoxically making us less capable of handling intellectual challenges on our own.
Yet there’s a profound satisfaction in rediscovering our capacity for depth, whether through the tactile pleasure of writing in notebooks or the mental workout of wrestling with a challenging text like Moore’s The Great When. These analog experiences aren’t merely exercises in nostalgia; they’re vital practices that strengthen our resistance to the digital world’s push toward shallow thinking and instant gratification. As we navigate this increasingly artificial landscape, perhaps our best defense is to intentionally seek out what’s difficult, what’s real, and what demands our full engagement– even, or especially, when it makes us uncomfortable.