AI is not just stealing our attention. It is revealing what we hunger for.
What if we treated our loved ones more like GenAI and stopped expecting GenAI to become friends and family?
Are we talking to AI more than we do to each other? After all, AI is available 24/7 and is willing to listen and respond on command. That predictability and responsiveness can feel comforting, even create a sense of emotional ease. This evolution might sound unsettling, but what if there is something we can learn from how we prompt AI that can help us be better communicators with people? As we wrote in an earlier column, we are the OG ChatGPT. The original interface was always human.
If machines are trainable, after all, so are we. Machines force us to slow down, be clear, and state exactly what we want. They do not assume, interrupt, or judge—in fact, they tend to pander to our brilliance and thoughtfulness.
In our daily conversations, however, we believe we are being clear and assume the other person should know what we want without our saying it directly. Our communications, often the most important ones, become filled with shortcuts, unspoken expectations, and emotional guessing. Many of our hardest conversations fail not because people do not care, but because they respond to what we said rather than what we hoped they would infer.
So, when we no longer feel heard, do not want to wait for our friends to get back to us, or feel powerless in an out-of-control world, we turn to systems that feel more predictable and responsive. As has been widely noted, this shift is becoming visible across society.
Today we ask: What if we slowed down and communicated with the same care we are now learning to use with machines, by being explicit about roles, tasks, context, reasoning, delivery, and completeness? That is the heart of prompt engineering. Yet prompting can just as easily reinforce sloppy thinking if it becomes a shortcut for unclear ideas.
Writing a strong prompt requires the same discipline we expect in serious human communication. It may also serve as a guide to meeting human needs more promptly and honestly.
The pattern of unmet needs
Through our interactions with AI, we learn to improve our prompt engineering, and AI becomes both a mirror and a communication coach.
Vague instructions lead to unpredictable results. Good prompts spell out the role, the task, the context, how the system should reason, what the response should look like, and when it should stop. Models do not infer intent—they only respond to what is made explicit.
When prompts are unclear, the output can sound reasonable but still miss the mark. For example, an advertising team might ask a generative video system to create “a short video of a family enjoying a new SUV on a weekend getaway.” The idea is clear, but many details remain open, so the system fills in the gaps with its own defaults and biases. The same thing happens in human communication when we leave too much unsaid and assume others will know what we mean.
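To make the contrast concrete, here is a minimal sketch in Python, our own illustration rather than a standard template, of how that SUV request might be restated so the role, task, context, reasoning, delivery, and stopping point are explicit instead of being left to the system's defaults. The field names and the creative brief are invented for the example.

```python
# A minimal sketch (not a standard template): restating the vague SUV request
# as an explicit prompt with role, task, context, reasoning, delivery, and stop.

prompt_parts = {
    "Role": "You are a video director for a family-oriented car brand.",
    "Task": "Write a 30-second storyboard for a video of a family using a new SUV on a weekend getaway.",
    "Context": "Two parents, two kids, and a dog; an autumn lake trip; emphasis on cargo space and safety.",
    "Reasoning": "The goal is to make practical buyers feel the SUV fits real family life, not luxury travel.",
    "Delivery": "Return five numbered scenes, one sentence each, in a warm, unhurried tone.",
    "Stop": "Do not add music cues, dialogue, or scenes beyond the five requested.",
}

# Assemble the labeled parts into a single prompt string.
prompt = "\n".join(f"{label}: {text}" for label, text in prompt_parts.items())
print(prompt)  # This string would then be sent to a generative model of your choice.
```

The point is not the particular labels but that each element the rest of this piece describes becomes something the system, or the person, can actually see.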
Role: Who am I to this conversation?
Most communication failures are role confusion problems in disguise.
One of the first lessons is learning to be clear about who we are speaking to. When you tell AI to respond as a manager, a friend, or a critic, you are forced to think about the role before choosing your words.
That same habit helps in real life. We speak differently to a supervisor than to a colleague, and differently to a loved one than to a stranger. Paying attention to the role of the person we are addressing helps us choose the right context, tone, and level of detail.
For example, imagine saying, “I need help,” to three different people: a boss, a close friend, and a team member. The words are the same, but what “help” means is very different in each case.
A boss may need to hear what decision is blocked, a friend may need to understand how you are feeling, and a team member may need a clear task. When we ignore roles, messages feel confusing or disappointing, even when intentions are good. Prompting teaches us to slow down and ask, “Who is this person, in this specific moment, and what do they need from me so that I can get what I need from them?”
Task: To what should I be paying attention?
Clear communication starts with clearly stating the task. When the task is vague, people and machines are forced to guess what the result should look like, using their own assumptions. For example, saying “take a look at this” could mean a quick opinion, detailed edits, or a final decision. When the task is not clear, the response can miss the mark—not because it was done poorly, but because the request itself was unclear.
Context: What else is going on in this interaction that is material?
Most communication problems happen because we assume a context that was never shared. We expect others to understand what feels obvious to us, based on hints, tone, or history, and we leave important background unsaid. When that happens, we fill in the gaps using our own experiences and biases, just like GenAI models make reasonable guesses based on their training.
Modern communication makes this worse. Short texts, emojis, and reactions carry very little context. A simple “👍” might mean “I saw this,” “I agree,” or “I am annoyed.” What one person intends as neutral can easily be read as dismissive.
Context also includes intent. When one person wants understanding and the other wants to fix the problem, the conversation can feel off, even if both are acting reasonably. Naming intent early can help prevent misalignment and bring coherence to our relationships and worlds, so that attention, context, and purpose mutually reinforce each other.
Reasoning: Why am I asking?
Communication often breaks down not because we do not know what we want, but because emotion interferes with how we ask for it and also how others hear it. Psychological work on shame and fear shows that people routinely soften, dilute, or disguise their requests to avoid appearing needy, demanding, or inadequate.
When this happens, reasoning gives way to feelings of threat. Instead of focusing on the problem, people focus on protecting their identity. In both human and machine systems, explaining the why behind a request reduces emotional noise and leads to clearer, more useful outcomes by shifting attention back to the problem rather than the person.
Delivery: How should this land?
How a message is delivered matters as much as what is said. Delivery includes tone, body language, and timing, not just our words. Even a clear request can go wrong if it is sent the wrong way or if it does not specify the kind of response needed. Asking for “feedback” without explaining whether you want a quick reaction or a final decision forces the other person to guess about your expectations. Nonverbal cues matter too. Good communicators adjust to them.
In reading those cues and shaping how a message lands, we still have an advantage over machines, even as they outperform us at sentiment analysis at scale.
Pause and Stop: When is enough enough?
Knowing when to stop communicating is just as important as being clear. Saying too much can confuse the message, weaken the main point, or cause the listener to disengage—whether that listener is a person or an AI system. With generative AI, vague or open-ended prompts often lead the model to keep going, adding content that sounds reasonable but drifts away from what you actually wanted.
The same thing happens with us. When we add every detail, explanation, and caveat, we make it harder for the other person to know what action is expected. Clear communication means knowing when you have said enough and stopping there.
From prompting machines to understanding people: Clarity is care
Many communication problems arise for the same reason AI prompts fail: we are not as clear as we think we are, or we are too rushed. We expect others to fill in the gaps, understand our hints, or know what we mean without us saying it directly. When their response does not match what we had in mind, we feel frustrated or defensive.
Being clear about role, task, context, reasoning, the response we want, and when the conversation is complete helps close that gap.
Communication is more than just words. Tone, body language, timing, and delivery shape how a message is received.
Practicing these skills with AI—by testing prompts, adjusting them, and seeing what comes back—can teach us to be more thoughtful and intentional with ourselves and others.
When we pull toward machines, it is a signal: perhaps of boredom or laziness, but certainly of an unmet need.
So this week, if a conversation disappoints, if needs and expectations go unmet, and if we long for the dulcet tones of GenAI instead of trying harder with our humans, let us consider the following:
- In the lead-up to this current situation, have I made my expectations explicit, or am I assuming others can read between the lines?
- Am I providing enough context and reasoning to guide the response accurately?
- Have I been clear about the response I want, how it should be delivered, and when the conversation is complete?
- If this interaction went poorly, how might I name the misalignment and try again, rather than withdrawing or blaming?
- How might I approach others with empathy when I feel that they have poorly communicated with me?
The future of communication may be highly technical and digital. Still, its practice remains deeply human, and we can make it better if we observe how we talk to machines and what that reveals about how we talk to each other: how careful, clear, and explicit we are with GenAI, and how rarely we extend that same care to one another.

