Artificial Death?

Designing Ethical AI & Digital Afterlives

When you think of death, what comes to mind? A final goodbye? Messages gone unreplied? Now imagine this: you open your phone and see a notification from a loved one who passed away years ago. Their voice, their humor, their warmth — it’s all there, just as you remember. What would you feel? Relief? Comfort? Or something else entirely — something uncanny? What if death didn’t have to be the end?


For decades, the science fiction genre has captivated audiences with ideas of digital immortality. Today, AI-powered chatbots, often called “griefbots” or “deadbots,” have turned this idea into reality. These tools reconstruct the voices, personalities, and even memories of the dead, pulling from social media posts, texts, and voice recordings. While this technology has existed in experimental forms for nearly a decade, it’s far from mainstream. As griefbots slowly haunt their way into public awareness, they raise messy, urgent questions for designers, researchers, and ethicists alike: Can we give consent after death? How do we build technology that supports grief without exploiting it? And are we ready for what happens when we try?




Griefbots: The Solution to Our Eternal Dilemma?


Companies like HereAfter AI and Eternime blend machine learning with intimate user data to create interactive legacies. By analyzing years of conversations and personal information provided by loved ones, their algorithms attempt to recreate how someone spoke, joked, or even argued. The results aren’t perfect — responses can feel uncanny or a bit off — but for some, they’re enough to heal the hurt of grieving.


Early adopters often describe their experiences as a mix of comfort and unease. One user described it like this:


“It felt like I was having a real conversation, until the bot said something my partner would’ve never said. That moment broke the illusion.” (Markman, 2024)


These awkward moments highlight a gap familiar to anyone who works in user-centered design: algorithms can mimic, but they can’t fully grasp human nuance. Not yet, anyway.



Why Aren’t Deadbots Mainstream?


Despite their growing presence, griefbots haven’t become a part of everyday life. Their adoption is slowed by three key challenges:



The Uncanny Valley of Grief


Replicating a human isn’t like archiving a photo. When a griefbot misremembers a favorite story or misuses slang, the emotional impact can be devastating. A 2023 study by Google’s PAIR Initiative found that users who experienced “contextual dissonance” — such as a bot referencing events post-dating someone’s death — reported heightened feelings of distress. These aren’t just technical glitches; they’re failures to design with empathy.



Consent and the Invisible User


Who owns a person’s digital footprint after death? Startups like StoryFile offer “AI legacy vaults” where users can pre-consent to how their data will be used posthumously. But consent is messy. Fiorenza Gamba, an anthropologist studying digital mourning, points out that Western notions of individual consent often clash with communal grieving practices in collectivist cultures. In these contexts, relationships, not individuals, dictate how grief is processed (Gamba, 2021).



Inclusivity in the Digital Afterlife


Griefbots rely on data-rich profiles, which means they disproportionately serve people with robust digital footprints. This creates barriers for marginalized communities or oral cultures where storytelling and memory are preserved differently. Designers and ethnographers have a responsibility to ensure these tools honor diverse grieving rituals — like Ghana’s fantasy coffins or Mexico’s Día de los Muertos — without reducing them to mere datasets.



The Paradox of Comfort


A 2022 Microsoft Research project trained a griefbot using a late partner’s texts. While some users found solace, others felt trapped in what they called “algorithmic grief.” Initial relief gave way to frustration when bots couldn’t adapt to changing emotional needs. One participant said, “It’s like the griefbot froze me in time. I wanted to move forward, but it kept pulling me back.”


For designers, this raises essential questions: How do we create tools that support emotional journeys without anchoring users in the past? Startups like Seance AI are experimenting with “fade-out” modes, where griefbots gradually phase out their presence over time. These features, inspired by grief counseling frameworks, are early steps toward building more adaptive technologies.
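Seance AI hasn’t published how its “fade-out” modes work, but the underlying idea — availability that tapers over a set horizon rather than ending abruptly — is simple to express. Here is a minimal, purely hypothetical sketch in Python; the class name, the 180-day horizon, and the 30-minute daily cap are all illustrative assumptions, not details from any real product:

```python
from dataclasses import dataclass


@dataclass
class FadeOutPolicy:
    """Hypothetical sketch of a griefbot fade-out schedule.

    Availability decays linearly from full presence to silence over
    `fade_days`, echoing grief-counseling models of gradual letting-go.
    No real product's mechanism is documented publicly.
    """

    fade_days: int = 180  # assumed horizon; would be tuned per user

    def availability(self, days_since_start: int) -> float:
        """Fraction of full interaction allowed on a given day (1.0 down to 0.0)."""
        remaining = max(self.fade_days - days_since_start, 0)
        return remaining / self.fade_days

    def daily_minutes(self, days_since_start: int, full_minutes: int = 30) -> int:
        """Cap on chat minutes for the day under the fade-out schedule."""
        return round(full_minutes * self.availability(days_since_start))


policy = FadeOutPolicy(fade_days=180)
print(policy.daily_minutes(0))    # full presence at the start
print(policy.daily_minutes(90))   # halfway through the horizon
print(policy.daily_minutes(180))  # the bot has fully faded out
```

A linear taper is the simplest choice; a counselor-informed design might instead use stepped plateaus or let the user pause the schedule — the point is only that the decay, not the cutoff, is the designed experience.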



Key insights for digital practitioners:


Emotional Usability: Move beyond task completion metrics and measure how tools affect closure and healing.


Adaptive Design: Build griefbots that evolve with users, gently encouraging acceptance rather than endless nostalgia.



Ethics Over Innovation


The tech industry’s mantra — “move fast and break things” — doesn’t work when it comes to grief. Spain’s proposed AI Act, which mandates explicit consent for posthumous avatars, offers a potential roadmap. But regulation alone isn’t enough. Designers, ethicists, and anthropologists need to collaborate to ensure these technologies reflect diverse cultural and emotional needs. As Gamba’s research reminds us, Eurocentric design assumptions often alienate non-Western users, creating tools that feel foreign or even harmful (Gamba, 2021).



Conclusion: Designing for the Unseen

As designers and researchers, we’re trained to center the living user. Griefbots challenge us to think differently: How do we design for the absent user — the deceased — while honoring the emotional ecosystems they leave behind?

Would I use a griefbot? Maybe. But I’d need to know who tested it, what ethical frameworks guided its design, and whose grief was excluded in the process.


The future of death-tech depends on balancing innovation with integrity. For every line of code written, there’s a human story waiting to be honored. We owe it to ourselves — and to those we’ve lost — to design with humility, curiosity, and care.




References:


Gamba, F. (2021). Digital Mourning in the Age of AI: Cross-Cultural Perspectives on Posthumous Personhood. Journal of Digital Anthropology.


Google PAIR Initiative. (2023). Contextual Dissonance in AI-Mediated Grief Support. PAIR Research Report.


Markman, J. (2024). AI “Griefbots” Ethical Analysis: Mapping User Concerns About Posthumous Data Consent.


Microsoft Research. (2022). Algorithmic Grief: Longitudinal Study of AI Chatbots in Mourning Practices.

Written by Joseph Markman — exploring the human side of technology.

