How we speak and write is not just about communication; it is also an emotional, educational, conceptual, and highly personal fingerprint. The way we expressed ourselves in speech and in writing—before we adopted AI—was unique, with quirks and imperfections that added charm and authenticity.
Academic analysis confirms that AI-generated dialogue differs in style and variability from human speech. Large Language Models (LLMs) exhibit more consistent patterns that tend to compress style variability. AI-favored vocabulary and phrasing are also being mimicked in how humans write work emails or produce content for online social and blogging platforms. These signs of linguistic convergence hold important lessons for engaging with these systems without erasing ourselves. Homogenization is more than stylistic. It affects creativity, persuasion, and even thought itself.
Prompting till we make it
There is much concern about deep fakes: AI-manipulated media (face swaps, altered expressions) that make someone appear to say or do things they never did. AI avatars are similar, but they are not designed to deceive; they are conceived as digital presentations and representations.
In contrast to deep fakes, what appears to be seeping into our daily lives, especially in content production and personal communication, is what we might describe as “shallow fakes” of our voices and of the ways we present ourselves in writing.
Whereas editing tools such as Grammarly are largely cosmetic, smoothing out wrinkles in our writing, GenAI can feel like full-on reconstructive plastic surgery. The individual remains the same, but how they present—and are seen by the world through writing—is altered to meet preconceived “beauty” ideals. GenAI embellishes our messages with adjectives, adverbs, phrases, and cadence that most of us do not use naturally.
READ: Sreedhar Potarazu and Carin Isabel Knoop | Our fatal attraction to AI: Basic (or Python) instinct (February 25, 2026)
Just as with plastic surgery, however, we know when something is not quite right or does not quite fit. Even if we adapt the output to “sound like us” (using AI-modulated speech), GenAI text still has its signatures: acceptable blandness or over-the-top language replete with adjectives and adverbs, paired with short declarative sentences. Marketing copy, emails, LinkedIn posts, social media comments, and even casual posts increasingly echo a shared AI-inflected tone: structured, neutral, safe—and, well—perhaps boring. We glance over them increasingly rapidly, not bothering to read what someone did not bother to write.
No one can predict how long this shared AI-inflected tone will last, especially since it is a seismic shift to go from communicating poorly and increasingly through emojis and text messages to full-blown essays with three-fold transitions.
It is as if slapstick comic Charlie Chaplin from the pre-World War II era suddenly started rapping.
For decades, we have lived by the advice of “fake it till you make it.” While the tools available to “fake it” are proliferating at warp speed, the paradox is that we may be undermining the ability to “make it” as who we really are. GenAI-derived or AI-assisted language sounds polished and projects ephemeral confidence, even when the humans behind it have not yet developed the voice they are imitating.
In this gray area between our “shallow fakes” and “deep reals” there is also a need to stay recognizable and trustworthy, as one’s AI-assisted written voice may contrast with one’s “real life” communication style.
Commoditizing expression
Usage data from OpenAI show that users around the world send over 2.5 billion prompts to ChatGPT every day, totaling hundreds of billions annually, shaping how they phrase questions, requests, and explanations. When so many rely on the same patterns to ask questions, get advice, and craft text, the underlying structural patterns of AI responses begin to shape how people naturally express themselves.
We are well beyond early adopter demographics and a skew towards males; usage is now widespread and balanced across genders and geographies. The main uses are for practical things like “How-tos” and everyday advice (the largest single category), followed by writing help (editing, communication, or emails), and then information seeking, increasingly substituting for traditional search engines.
The usage patterns reflect structured prompts. Research classifies ChatGPT messages into three major buckets: asking questions (about 50%); doing tasks (about 40%, for drafting text, planning, and automation); and the balance for expressive or personal use. Writing is the most common work-related task. Yet, with 70% of chats now non-work-related, we see GenAI creeping into our personal and routine communication spheres, which used to be less formal and formulaic and, in many ways, more personal.
The impact will fall disproportionately on younger users (18–25). They contribute a large share of chats, though they use the tool less for work; older groups, meanwhile, skew more toward task-oriented queries. This also means that younger users’ language and expressions will propagate through and shape the models as they prompt them.
From pupil to tutor
Just as our collective syntax and speech train AI, AI trains and tutors us. LLMs thrive on patterns, not originality. And humans, using our predictive brains, tend to mimic what works and is efficient. We see the polished, AI-curated response and unconsciously adjust our own phrasing. Both Botox and GenAI nudge us toward an unnatural range of expressions. And the more exposed we are to what seem to be two forms of linguistic communication, the more stilted our speech will become.
According to OpenAI’s data on prompts, people increasingly phrase requests with the same syntax, structure, and tone. Common prompts (“Write a concise summary” or “Explain like I’m five”) recur across millions of users. This convergence makes speech more predictable and suggests that our thought patterns may be homogenizing, since predetermined choices lure us toward the easy.
A similar situation occurs in coding. Vibe coding means developing code from the gist of the requirements, a general “feeling” of what we are looking for. Spec coding, by contrast, means providing the AI with specific, explicit requirements for the code it is to generate.
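The contrast can be sketched with two hypothetical prompts for the same task. Both prompt texts below are illustrative inventions, not drawn from any real system or dataset:

```python
# Illustrative contrast between a "vibe" prompt and a "spec" prompt
# for the same coding task. Both strings are hypothetical examples.

vibe_prompt = "Make me something that cleans up this messy CSV of customer data."

spec_prompt = (
    "Write a Python function clean_customers(path: str) -> list[dict] that:\n"
    "1. Reads a CSV with columns: name, email, signup_date.\n"
    "2. Strips whitespace and lowercases emails.\n"
    "3. Drops rows with missing or malformed emails.\n"
    "4. Parses signup_date as ISO 8601, discarding unparseable rows.\n"
    "5. Returns the surviving rows as dictionaries."
)

# Same underlying request, but the spec prompt constrains the model
# far more tightly: explicit requirements rather than a general feeling.
print(len(vibe_prompt), len(spec_prompt))
```

The point mirrors the essay's argument about voice: the more of ourselves we put into the request, the less the output defaults to the model's generic averages.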
Our speech mirrors the model because the model mirrors the behavior of billions of users. And humans are limited in extracting variety from AI, because our prompting skills are still poor.
Communicating at the tail ends: resonating… quietly!
Beyond hyperbole and a lack of originality, we should worry that LLMs also sacrifice nuance for efficiency as part of the learning process.
READ: Sreedhar Potarazu and Carin Isabel Knoop | Getting our needs met: Learning to speak ‘human’ from GenAI (February 11, 2026)
Our quest for efficiency challenges our ability to regulate our discomfort with nuance. The feedforward/recurrent split in our cognition helps us here. Quick, predictive responses are easy; deliberate, reflective phrasing is harder. However, just as with coding, if we were to take the time to be more specific in choreographing our voices, we would not all sound cloned.
Over time, the repeated practice of requesting clarity, brevity, and structure trains our predictive mind to prioritize the familiar, the clear, and the algorithmically neat. Subtle differences or too many counterpoints and alternatives get filtered out. The more complex the world becomes, the more we—and the models we use—seek simplicity, which might not enable us to grasp the complexities of the choices we need to make.
Autotuning our perfect imperfections
As we have written, we may be the OG ChatGPT, but unlike the model, we can choose how to train ourselves: for efficiency or for depth; for speed or for originality. We can learn from how GenAI writes a challenging email to try to do it ourselves next time. We can learn from the discipline of prompting how to improve our requests of colleagues and loved ones.
And we can rejoice that these tools enable many more voices to be heard at all. This is especially true for individuals who do not like to write, have not received the training to do so easily, or are writing in a second or third language. That is why addressing internet access and digital literacy gaps matters to us. In that respect, access to these tools can be a major driver of equity.
But access alone does not guarantee diversity of expression. In a world of converging syntax, choosing and accepting difference may be the last frontier of distinct human thought. As we delved into this challenge, we were reminded of a set of New Yorker articles by Sasha Frere-Jones in 2008 and 2009. He noted that although music once promised infinite variety online, the era of Auto-Tune showed how quickly voices can converge on the same pitch. What matters is that we continue to nurture and connect to the individual behind the (prettier) façade.