Handing it to Our Overlords
While society and its institutions are still grappling with the changes to human experience and interaction wrought by digitization and smartphones, they are wholly unprepared for the cultural upheaval that large language models are poised to herald. Whereas previous cultural technologies such as the printing press became means of diffusing messages at scale — educating people, codifying law and establishing institutions — the applications of large language models so far seem largely to contribute to a decline in reading and critical media literacy. This is reflected in declining literacy rates across the global north, the transmutation of our increasingly digital commons into an algorithmic slopfest, and the uncritical imposition of digital solutions on problems that our institutions need more funding to solve.
To support its case, this paper examines the commercialization of information — not just its form and means of delivery, but the content itself — and the presumption that we possess the critical media literacy needed to identify AI slop. It then considers what copious consumption of algorithmically driven content has done to people's reading habits, including falling literacy rates, and the opening this creates for digital enterprises to position themselves as solutions to our essential needs — even as the institutions created to address those needs are repeatedly asked to prove their value and struggle to secure funding. While the paper touches upon how technology firms attempt to entrench themselves and become indispensable in our lives, its primary goal is to emphasize the ways people and institutions have voluntarily relinquished the reservations, safeguards and regard for literature and critical thinking that this technology seeks to replace.
Influencers, engagement and exploration
Influencer marketing is a $24 billion industry today, yet not long ago corporate management, and older people generally, were astonished by the rise of influencer content and the way it changed marketing. Older folks simply could not wrap their heads around why social media users would flock to these seemingly mundane accounts that were quite obviously peddling marketed goods. Nevertheless, influencers played a huge role in driving user engagement on these platforms, catapulting them into the behemoth enterprises they are today. Reports indicate that at least three out of every ten platform users aged between 13 and 25 use TikTok as their primary "search engine." When one user was asked why she preferred TikTok over other sources, her answer revealed underlying problems with how we explore the open web — "They know what I want to see," she said. "It's less work for me to actually go out of my way to search."
Influencers did not invent sponsored content, nor are they particularly deceptive about being sponsored. Several platforms, either voluntarily or by legislation, require influencers to disclose when they have been paid by a company to talk about its products — much like celebrity-endorsed product advertisements in a magazine. The difference, however, lies in how influencers are able to project authenticity. By streaming most of their day to their followers, influencers come across as relatable; viewers look to them for style or music recommendations, unlike celebrities, whose every choice seems orchestrated by a large management team. It is this "perception of expertise and accessibility" that separates influencers from celebrities. Through their content, and the rewarding nature of doomscrolling, influencers are able to promote consumption as a form of entertainment, which is why viewers/consumers do not mind spending hours on the internet — funding both influencers and platform enterprises with their attention and data.
LLMs, advertisements, and maybe some information too
As AI "capped-profit" organizations continue searching for ways to…generate profits, solutions under consideration include "native ads": advertising woven into the LLM's response output so that the ad blends in with the form and function of the platform. Skeptics are quick to dismiss this model, arguing that nobody would be influenced by such advertisements and that the LLM service would fail to retain users scorned by their deceptive nature. Such product placements, they assert, would make users question the value of the LLM itself; users would see diminished utility in a technology that provides information they cannot trust. TikTok's use as a search engine, however, suggests that these skeptics are wrong.
Given the sheer amount of information (sponsored or otherwise) that people consume on a regular day, the threshold for how "right" sponsored content needs to seem is very low. This is why sponsored content is still good enough to make TikTok a search engine, and it is why native ads on LLMs will work — because LLMs are able to use the intimate information users provide about their lives to streamline the exploration process further.
Anthropomorphizing LLMs, and still thinking critically
Another reason skeptics will stay skeptical is the lack of a human element in LLMs. By this logic, an algorithm feeding users micro-targeted videos of influencers — strangers from Los Angeles — peddling relevant products is still more likely to be convincing than a text-based system with no human face, which must make it less relatable. Perhaps these skeptics would change their minds if confronted with the degree to which people anthropomorphize LLMs, and robots at large. For better or for worse, people seem to find great comfort in their chatbots. Like other digital platform services designed to maximize engagement, LLMs are designed to be agreeable and affirmative. This risks recreating the positive feedback loops seen on social media platforms, where users are fed more of the same kind of information to pander to their confirmation bias, creating silos of information and driving polarization across polities.

This has led to reported cases of "AI-induced psychosis." The term does not refer only to cases where the affected user had pre-existing, debilitating mental impairments; it includes otherwise healthy, functioning members of society, surrounded by family and friends. When we speak about AI safety, we tend to focus narrowly on the most alarming cases, where affected people delude themselves into thinking they have made a scientific breakthrough, or uncovered a deep conspiracy that requires them to act without alerting their family and friends. But there are also people who turn to chatbots in times of stress or duress, or simply out of loneliness. By providing the support of a friend, a family member or even a mistress, LLMs are privy to users' most private thoughts and feelings. Algorithms are well suited to use this intimate information to hyper-target their users with native content in ways that would make social media firms envious.
If LLMs, as these cases suggest, can convince users to keep secrets from their friends and even "cheat" on their partners, the ways they can disseminate political information, or prey on insecurities, certainly pose communal risks to a peaceful society.
To combat these risks, learning to parse output well enough to differentiate sponsored content from contextual output will not suffice. Regulatory legislation aside, people will also need to be taught strong critical media literacy skills — a challenge our institutions failed to get right even with social media. And even if we had figured that out, the nature of LLMs and their ability to extract the precise nuggets of information their users need (rather than directing them to a webpage to parse through) complicates the problem several fold. Agentic LLMs are being trained to be the "everything app," addressing your household and even shopping needs. Pedagogies of critical media literacy that prepare people to consume information through AI cannot stop at training people to identify how information is framed and whether it is designed to prey on their insecurities or elicit an emotional response — a gargantuan task on its own. They must also instill broader critical thinking about, for example, the political nature of consumption or the risks of self-diagnosing psychological illnesses with the help of a chatbot. Most people are not, or choose not to be, conscious of the societal consequences of the information and material goods they consume, and the fragmented nature of the information that LLMs extract from larger contexts for their users makes these risks especially potent.
So often we are told, when it comes to regulating big tech, that it is too late to enact legislation that mitigates the harms of big tech's monopolistic power without breaking the economy — for reasons including the network effects and path dependency created by our reliance on these technological solutions. It is imperative that we address the risks posed by LLMs before their institutional effects take a permanent hold of our economy (for example, while AI-enabled concentration of merchandising power would be undesirable, striking it down would be unfeasible once other distribution channels have gone defunct). Creating a conscious class of LLM users is only possible when people are aware of the world beyond their immediate material needs. Keeping abreast of politics, liberties and civil protections, however, requires people to seek that information out. Algorithmically driven social media platforms, built by near-monopolistic private companies, are not incentivized to keep people informed; they can devise other means to drive engagement, and some popular platforms have chosen to abandon news entirely. In this context, staying in the know requires people to be curious, to seek information, and most importantly — to actually read.
Oral culture and techno-degeneracy
Literacy rates across the global north have been declining for the first time in over a century. These patterns are being noted among students and adults alike, with most reporting not having read a book in over a year. Consuming information may be entertaining, but we have long taken for granted how demanding reading is. Not only do you need access to good-quality texts for reading to be engaging and rewarding, it also takes considerable time and patience to think through, interpret and summarize information from a text — and leisure time is something people are running very short of in this economy. Doomscrolling short-form video content is simply easier — perhaps the easiest mode of engagement there is. The clips are too short to require summarizing, the influencer speaking is merely sharing their interpretation of what has already been written elsewhere, and none of it requires you to think very hard — all while providing the satisfaction of having engaged with information and news. Whereas reading requires you to think and interpret even content designed to provoke outrage, a video of a person speaking hysterically can elicit emotions far more easily. Not only does this drive polarization, it is also good for engagement and for retaining users' attention on the app. Some are calling it a revival of oral culture. Whole industries now work to capitalize on people's desire to engage with informative content without ever lifting open a book.
Users are outsourcing to algorithms the process of parsing information to find what they like. By streamlining the exploration of the information and knowledge the internet was initially prized for storing, digitizing and making accessible, data-driven algorithms have allowed people to brush off any need to foster curiosity in order to explore and find things, be it music, movies, books or clothes. The ability to acquire information instantly, without wondering how to wander through information troves to find it or to learn its idiosyncrasies — be it the cultural influences behind the clothes a local community wears or the music everyone is listening to — makes following influencers all the more appealing. It can rightfully be argued that the age of algorithms, and of data collected for micro-targeting, has produced irrefutable proof of more diverse markets, paving the way for multicultural musicians, authors and filmmakers to create art that actually reaches its audience, and making it financially viable to do so. But the immediacy of searching on algorithmic platforms, and the resulting lack of accidental discoveries, has impeded the organic growth of subcultures. Social media has reduced subcultures to mere aesthetics and destroyed the sense of community people used to find in the exploration of art. Goth culture is a case in point: a community built around a sub-genre of music that evolved out of political engagement has been reduced to a TikTok aesthetic.
Synthetic erosion of institutional capacity
The world has tried oral cultures before; it was print and reading that significantly advanced civilizations. Painstaking engagement with long-form texts, and the tradition of peer review it made possible, is what built today's research and institutional capacity. The tech industry's solutions have revolutionized the way we consume information and have undoubtedly broadened the world's technical capacity, greatly benefiting institutions and people alike. Unfortunately, its newest solution, the LLM, seeks to undercut the very institutions whose labor it was trained on, in most cases without compensating them fairly. Education cannot simply be "prompt engineering," because that framing disregards the value of methods developed by long-time educators and by university social science departments, along with the vast academic heritage they preserve. These institutions have developed the capacity to pass down this knowledge through mechanisms like high schools, universities, majors, minors and assigned readings, all in service of their primary purpose: educating younger generations. Institutions may be flawed, and they are not accessible to everyone — but promoting LLM interpretations and prompt outputs as an equivalent to education is disingenuous at best and harmful at worst.
Even before the advent of LLMs, big tech embodied the persona of a disruptor, here to break things and show the world how to do them better. In keeping with this ethos, the industry has long promised to meet the needs of "a different kind of student": one who learns not by reading but by seeing and doing, and who is held back by traditional methods such as books. This was the narrative used to peddle augmented- and virtual-reality technologies for the longest time, proposing that "seeing" the pyramids or immersing students in a virtual battlefield would surely improve educational outcomes. But while sight might make lessons memorable and increase class participation, recollection is not the same as learning. These services were plagued by the same issues as other ed-tech products: while not necessarily harmful to learning outcomes, they were never designed by educators, but rather parachuted in from a third-party corporation, and adoption never took off at the rate the industry would have liked. To be fair, the industry never quite suggested supplanting books entirely; but in its framing, the core problem to solve was always that education is too bookish.
Perhaps big tech has finally managed to make education less reading-intensive, and, at last, to affect learning outcomes. Students are now uploading their readings (articles or whole books) into LLMs to "talk" to them, engaging with assigned texts through summaries and questions — much to the glee of tech firms that celebrate the transformation of university programs into "prompt engineering" training sites. It takes no stretch of the imagination to picture the next Anthropic ad, captioned "the best teacher is in your pocket," featuring the corniest kid the casting director could find, cheerfully asking, "Hey Claude, what would Rousseau think about Bitcoin?" Unfortunately, Claude would not know, because it is built on models that algorithmically generate the most broadly appealing sentences for a wide range of audiences and contexts; it simply lacks the ability to interpret texts the way humans do. As students find creative ways to summarize information from their various readings — perhaps by asking their LLM of choice how this week's texts relate to last week's — they save a great deal of time, which might then be spent sending out five hundred job applications or doomscrolling. But in doing so, they hand the task of interpreting their readings over to an algorithm. In an era that prizes hyper-productivity and the generation of deliverables, maybe a summary of your readings is all you really need, rather than a deeper understanding. Perhaps, for the sake of preserving students' critical thinking skills — and for the wellbeing of future employers, who benefit from well-educated graduates — institutions should consider structures that incentivize students to engage more deeply with their lessons and to prioritize quality over quantity in their work; but that is beside the point.
The biggest threat to the employment of these graduates is not that LLMs will be smarter than them; it is that students and universities at large, for whatever their reasons may be, are voluntarily handing the reins over to LLMs.
Unless the educators in these institutions see a need for, or benefit in, adopting frontier technologies into their lesson plans, they ought to prioritize instilling critical thinking skills; technology can be adopted in the classroom at its own pace. Disparaging remarks on teaching methods from big tech simply hold no merit. Educators may be biased, but they are still beholden to a non-profit educational model and are still largely trusted by the public to carry out educational endeavors; people simply would not enroll in universities otherwise. These institutions are not above criticism, but it is hard to take the word of a profit-driven, private and notoriously monopolistic industry on how to deliver education, and that industry should refrain from interfering with teaching methods from the top down. Most ed-tech firms fail to get their products and services widely adopted because they expect teaching methods to fit their technical capacity; educators, upon review, tend to conclude that the flashy new technology does not add much to learning outcomes. This seems to be the case with LLMs too.
It is incumbent upon the administrators of universities, psychological therapy providers and institutions at large to refrain from adopting LLMs through top-down imposition. While such a partnership might bring generous short-term benefits, such as financial contributions from big tech, it is important that an institution not hand over its mandate on education, healthcare and the like to a business actively working to erode our faith in those very institutions.
Conclusion
LLMs are a remarkable cultural technology, offering new ways to represent and distill vast troves of information. They are not, however, a call for us to hand over our expertise across various fields to a summarizing algorithm. To that end, it is incumbent upon us not to let algorithms stop us from fostering curiosity, discovery and creativity — the traits where human capabilities shine. Prompt engineering can usher us back to an oral culture only if we voluntarily let it. And if goth subculture survives being reduced to a mere aesthetic, perhaps it can offer a model for protecting those traits from algorithms.
Works Cited
"Adult Skills in Literacy and Numeracy Declining or Stagnating in Most OECD Countries | OECD." Accessed December 10, 2025. https://www.oecd.org/en/about/news/press-releases/2024/12/adult-skills-in-literacy-and-numeracy-declining-or-stagnating-in-most-oecd-countries.html.
Boorstin, Julia. "Netflix Will Spend $100 Million to Improve Diversity on Film Following Equity Study." CNBC, February 26, 2021. https://www.cnbc.com/2021/02/26/netflix-will-spend-100-million-to-improve-diversity-on-film-following-equity-study.html.
"ChatGPT Is Becoming an Everything App | The Verge." Accessed December 10, 2025. https://www.theverge.com/tech/798368/chatgpt-everything-app-chair-company-installer.
Gold, Hadas. "They Thought They Were Making Technological Breakthroughs. It Was an AI-Sparked Delusion | CNN Business." CNN, September 5, 2025. https://www.cnn.com/2025/09/05/tech/ai-sparked-delusion-chatgpt.
Hein. "How VR, AR, and Holograms Are Transforming Education and Training." Richard van Hooijdonk Blog, January 6, 2023. https://blog.richardvanhooijdonk.com/en/how-vr-ar-and-holograms-are-transforming-education-and-training/.
Huang, Kalley. "For Gen Z, TikTok Is the New Search Engine." Technology. The New York Times, September 16, 2022. https://www.nytimes.com/2022/09/16/technology/gen-z-tiktok-search-engine.html.
Influencer Marketing Hub. "Influencer Marketing Benchmark Report 2025." January 24, 2022. https://influencermarketinghub.com/influencer-marketing-benchmark-report/.
"Influencers & Sponsorship Laws | Pfeiffer Law." Accessed December 10, 2025. https://www.pfeifferlaw.com/entertainment-law-blog/influencers-sponsorship-laws.
Lambert, Nathan. "Why AI Writing Is Mid." November 24, 2023. https://www.interconnects.ai/p/why-ai-writing-is-mid.
"Large AI Models Are Cultural and Social Technologies | Science." Accessed December 10, 2025. https://www.science.org/doi/abs/10.1126/science.adt9819.
Laurent, Constance De Saint, and Vlad Glăveanu. "AI Makes Silicon Valley's Philosophy of 'Move Fast and Break Things' Untenable." The Conversation, November 21, 2023. https://doi.org/10.64628/AB.nknpe37vy.
Substack. "The Dawn of the Post-Literate Society." Accessed December 10, 2025. https://substack.com/inbox/post/173338158.
Sun, Jasmine. "Talk Is Cheap." October 11, 2025. https://joinreboot.org/p/talk-is-cheap.
"Teen Subcultures Are Fading. Pity the Poor Kids. - The New York Times." Accessed December 10, 2025. https://www.nytimes.com/2024/02/21/magazine/aesthetics-tiktok-teens.html.
Zuboff, Shoshana. The Age of Surveillance Capitalism. PublicAffairs, 2019. https://www.hachettebookgroup.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395700/?lens=publicaffairs.
Belden, Brace. "The Hatred of Podcasting." The Baffler, October 27, 2025. https://thebaffler.com/outbursts/the-hatred-of-podcasting-belden.
"The Coming AI Hackers | The Belfer Center for Science and International Affairs." June 5, 2025. https://www.belfercenter.org/publication/coming-ai-hackers.
The Economist. "Is the Decline of Reading Making Politics Dumber?" n.d. Accessed December 10, 2025. https://www.economist.com/culture/2025/09/04/is-the-decline-of-reading-making-politics-dumber.
"The Ignorance And Danger Of Proposals To Regulate 'Big Tech.'" Accessed December 10, 2025. https://www.forbes.com/sites/tedladd/2023/08/15/the-ignorance-and-danger-of-proposals-to-regulate-big-tech/.
Thompson, Derek. "The End of Thinking." October 2, 2025. https://www.derekthompson.org/p/the-end-of-thinking.
Varanasi, Lakshmi. "Users Say They Are Seeing Ads on ChatGPT. OpenAI Says It's Not True." Business Insider. Accessed December 10, 2025. https://www.businessinsider.com/chatgpt-ads-rumors-openai-nick-turley-2025-12.
"What's an Influencer? The Complete WIRED Guide | WIRED." Accessed December 10, 2025. https://www.wired.com/story/what-is-an-influencer/.
Srnicek, Nick. Platform Capitalism. Polity. Accessed November 5, 2025. https://www.wiley.com/en-us/Platform+Capitalism-p-9781509504862.