By Melissa Lunardini
The rise of digital grief support
We are witnessing a shift in how we process one of the most universal human experiences: grief. In recent years, a number of companies have emerged to build grief technology, where users can communicate with AI versions of deceased loved ones or turn to general AI platforms for grief support.
This is not mere curiosity; it is a response to a real lack of human connection and support. The rise of grief AI reveals something uncomfortable about our society: people turn to machines because they are not getting what they need from the people around them.
Why people choose digital over human support
The grief tech industry is growing fast, with MIT Technology Review reporting that "at least half a dozen companies" in China offer AI services for interacting with deceased loved ones. Companies such as Character.ai, Nomi, Replika, StoryFile and HereAfter AI let users create and interact with a "likeness" of someone who has died, while many other users turn to general AI platforms to normalize their grief and seek answers. This digital migration is not happening in a vacuum. It is a direct response to the failures of our current support systems:
- Social discomfort: Our grief-illiterate society struggles with how to respond to loss. Friends and family often disappear within a few weeks, leaving grieving people isolated when they still need support, especially months later.
- Professional barriers: Traditional grief counseling is expensive and often comes with long wait times. Many therapists lack solid grief training, and some report receiving no grief-related education in their programs. This leaves people without accessible, qualified support when they need it most.
- Fear of judgment: People often feel safer sharing intimate grief experiences with AI than with people who may judge them, give unwanted advice, or become uncomfortable with the intensity of their grief.
The Eliza effect
To understand why grief AI succeeds, we have to look back to 1966, when the first AI companion program, called Eliza, was developed. Created by MIT's Joseph Weizenbaum, Eliza simulated conversation using simple pattern matching, specifically imitating a Rogerian psychotherapist practicing person-centered therapy.
Rogerian therapy was perfect for this experiment because it relies heavily on reflecting back what the person says. The AI companion's role was simple: mirror what the person said with questions such as "How do you feel about that?" or "Tell me more about that." Weizenbaum was surprised to find that people formed deep emotional connections with this simple program and trusted it with their most intimate thoughts and feelings. This phenomenon became known as the "Eliza effect."
Eliza did not work because it was advanced; it worked because it embodied the core principles of effective emotional support, something we as a society can learn from (or, in some cases, relearn).
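To make that simplicity concrete, here is a minimal sketch in Python of Eliza-style pattern matching and reflection. The rules and wording are invented for illustration and are far cruder than Weizenbaum's actual program; the point is only how little machinery is needed to produce a "reflective" response.

```python
import re

# A minimal sketch of Eliza-style reflection, NOT Weizenbaum's original program.
# The rules and phrasing here are invented for illustration only.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi miss (.+)", re.IGNORECASE), "Tell me more about {0}."),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

# Swap first-person words for second-person ones so the reflection reads naturally.
PRONOUN_SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def swap_pronouns(text: str) -> str:
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in text.split())

def reflect(message: str) -> str:
    """Mirror the speaker's words back as an open question, Rogerian style."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            fragment = swap_pronouns(match.group(1).rstrip(".!?"))
            return template.format(fragment)
    return "Tell me more about that."  # generic fallback when nothing matches

print(reflect("I feel lost without my mother"))  # -> Why do you feel lost without your mother?
print(reflect("I miss her cooking."))            # -> Tell me more about her cooking.
```

A few dozen lines of mirroring were enough to make people in the 1960s feel heard, which is exactly the point of the Eliza effect.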
What AI and grief bots do well
Modern grief AI succeeds for the same reasons Eliza did, but with far greater capabilities. Here is what AI does well:
- Non-judgment: AI does not recoil from the intensity of grief. It will not tell you to "move on," suggest you should be "over it by now," or change the subject when your pain becomes uncomfortable. It simply witnesses and reflects.
- Unconditional availability: Grief does not keep office hours. It hits at 3 a.m. on a Tuesday, during family gatherings, while you are at work, or on a grocery run. AI works 24/7 and offers immediate support by normalizing common grief experiences, such as "I just saw someone who looked like my mother in the supermarket, am I going crazy?" The AI's response shows effective validation: "You are not going crazy at all. This is actually a common experience when grieving someone close to you. Your brain is wired to recognize familiar patterns, especially the faces of people who were important to you … This is completely normal. Your mind is still processing your loss, and these moments of recognition show how deeply your mother remains with you in your memories and consciousness." Simple, on-demand validation helps grievers feel normal and understood right away.
- Pure focus on the griever: AI does not hijack your story to share its own experiences. It does not offer unsolicited advice about what you should do, and it does not get tired of hearing the same story repeatedly. The attention is entirely yours.
- Validation without agenda: Unlike people, who may rush you to feel better (often for their own comfort), AI validates emotions without trying to fix or change them. It normalizes grief without pathologizing it.
- Privacy and safety: AI provides a confidential space for the "good, bad, and ugly" parts of grief. There is no fear of social judgment, no worry about burdening someone, no concern about saying the "wrong" thing.
- No strings attached: AI does not need emotional reciprocity. It will not eventually need comforting itself, grow tired of your grief, or leave if your healing takes longer than expected.
AI can do it, but people can do it better. Much better.
According to a 2025 article in Harvard Business Review, the #1 use of AI in 2025 is therapy and companionship.
This tells us there is a huge and growing gap in how we show up for each other when life gets hard. Yet no matter how accurate and practical a grief bot may be, almost all of us would rather receive care and understanding from our friends, family, colleagues, and community than chat with an AI.
So what can we learn from AI, and what are the things people can uniquely do that AI never can?
- AI can show up consistently, but people can show up with context: AI is available 24/7 and can offer validating information on the spot. But people bring shared history. You can text "thinking of you" on their loved one's birthday or check in during the holidays.
- AI can follow their lead, but people can read between the lines: AI reflects what people share and asks open questions. But people can sense when "I'm fine" does not actually mean "I'm doing well" and more support is needed.
- AI can welcome repetition, but people can weave stories together: AI can listen to the same story again and again without complaint. But people can notice new details each time or name changes that unfold over time. You can genuinely say: "It has been a while since we last talked about your father. I'd love to hear how he has been coming up for you lately."
- AI can offer virtual presence, but people can offer practical presence: AI provides immediate support through conversation. But people can show up practically by saying: "I'm going to the supermarket on Thursday, what can I pick up for you?"
- AI can acknowledge loss, but people can honor the whole person: AI validates that someone was important. But people can keep their memory alive by sharing stories and saying their name naturally: "I remember how much Sarah loved spicy food. I bet she would have loved this restaurant."
- AI can respond when called upon, but people can anticipate hard grief days: AI responds when someone reaches out. But people can offer proactive support: "I know next week is your first Mother's Day without your mom. I've kept my schedule open just in case."
- AI can offer comfort through words, but people can offer physical presence: AI validates feelings through its responses. But people can sit in shared silence, offer hugs that last as long as needed, or simply say, "I have no words, but I am here."
The opportunity
We are so starved for empathetic responses and presence that we now accept them from tools incapable of real empathy. But what if, instead of surrendering to digital surrogates, we used this as a mirror to see what we are failing to provide each other?
The lesson is not that AI is replacing human connection, but rather that AI shows us (or reminds us) exactly what human connection should look like. Every quality that makes grief AI effective is something people can do better, with the added advantage of real empathy, shared experience, and authentic, personal care.
We are living through a grief literacy crisis. Our discomfort with death and loss has created a society where grieving people feel isolated and misunderstood. But these digital grief companions offer us a blueprint for change.
The question is: will we be open to learning from them?
Melissa Lunardini, Ph.D., is the Chief Clinical Officer at Help Texts, where she oversees the delivery of clinically sound, multilingual grief support worldwide via SMS.