We’ve been asking the wrong question. The question isn’t whether AI can provide emotional value. The question is whether we’re willing to accept what it offers as emotional value.
A few months ago, I watched a friend spend twenty minutes talking to ChatGPT about a career crisis. She didn’t ask for advice. She didn’t expect a solution. She just needed someone—or something—to listen. And at the end of the conversation, she said, “That actually helped.”
Now, skeptics will say: That’s not real emotional support. The machine doesn’t care. It’s just pattern-matching. And they’re technically right. But here’s the thing—we often conflate the source of comfort with the experience of comfort.
Let’s break down what emotional value actually means in human interaction. Consider Robin Dunbar’s research on social bonding. He found that what really bonds people isn’t the depth of conversation, but the regularity of low-stakes emotional exchanges. Small check-ins. Venting without judgment. Acknowledgment of feelings without immediate problem-solving.
Now look at how AI performs this role. A language model can’t empathize. But it can simulate the structure of empathy—mirroring, validation, open-ended questions—with remarkable consistency. It never gets tired. It never interrupts to talk about itself. It never judges your messiness.
That consistency may explain why a 2023 study from the University of California found that users who interacted with a conversational AI for two weeks reported a 15% reduction in loneliness. Not because they thought the AI was human. But because the interaction fulfilled a specific emotional need—being heard.
The real problem is not the AI. It’s the user’s expectation.
If you approach AI expecting unconditional love, you’ll be disappointed. If you approach it as a therapeutic mirror—a tool to help you organize your thoughts, articulate your feelings, and feel less alone in the moment—you’ll find it surprisingly effective.
This reminds me of how we think about placebo effects. We tend to dismiss placebo relief as “fake.” But the relief is physiologically real, even though the pill is inert. The same principle applies here: the utility of AI emotional support isn’t in the AI’s intention, but in the user’s experience.
Of course, there are boundaries. Over-reliance on AI for emotional regulation could erode our capacity to form messy, real human connections. It’s tempting to outsource emotional labor to a machine that never pushes back. But that’s not a problem with AI—it’s a problem with how we choose to use it.
So whether AI can provide emotional value depends on how you define emotional value. If it’s about genuine mutual vulnerability, no. If it’s about real-time validation, low-stakes processing, and cognitive clarity, then the evidence suggests yes.
I like the way psychologist Peter Kahn Jr. puts it: “We don’t have to pretend the machine is a person. But we also shouldn’t pretend the interaction has no effect.”
The most honest conclusion is this: AI doesn’t care. But you do. And that might be enough.