Why an Empathy Filter?
A human-centered proof-of-concept exploring safer, clearer AI interactions in high-stakes contexts.
The same question can land very differently depending on who is asking — a customer under pressure, a clinician, a caregiver, an investigator, or someone simply having a hard day. Most AI systems respond as if the user is neutral. People rarely are.
The Empathy Filter explores a simple idea: what if AI adapted to where the human is coming from — especially in regulated, high-impact environments?
This concept can apply across many industries — healthcare and life sciences, customer support, financial services, HR, and safety-critical operations — anywhere clarity and trust matter.
Life sciences example: imagine a trial participant who receives an automated message about a protocol change or a missed visit window. They may be anxious, confused, or worried they did something wrong. A standard AI answer can feel cold or overly technical. With an empathy profile (how they’re feeling, what role they want the AI to play, and how they feel about AI), the response can prioritize reassurance, plain-language clarity, and the next safe step — while keeping boundaries and avoiding clinical advice.
In this demo, the empathy profile is stored in DynamoDB and used to shape tone, pacing, warmth, and depth before the model responds.
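To make that flow concrete, here is a minimal sketch of how a stored profile could be read from DynamoDB and turned into guidance for the model. The table name, key, and attribute names (EmpathyProfiles, userId, feeling, role, ai_comfort) are illustrative assumptions, not the demo's actual schema.

```python
import boto3

# Hypothetical table and attribute names; the real demo's schema may differ.
dynamodb = boto3.resource("dynamodb")
profiles = dynamodb.Table("EmpathyProfiles")

def load_profile(user_id: str) -> dict:
    """Fetch the stored empathy profile, falling back to neutral defaults."""
    item = profiles.get_item(Key={"userId": user_id}).get("Item", {})
    return {
        "feeling": item.get("feeling", "neutral"),        # e.g. "anxious", "frustrated"
        "role": item.get("role", "helpful assistant"),    # e.g. "coach", "plain explainer"
        "ai_comfort": item.get("ai_comfort", "unsure"),   # how they feel about AI
    }

def build_system_prompt(profile: dict) -> str:
    """Translate the profile into tone, pacing, warmth, and depth guidance."""
    return (
        f"The user currently feels {profile['feeling']}. "
        f"They want you to act as a {profile['role']}. "
        f"Their comfort level with AI is {profile['ai_comfort']}. "
        "Adjust tone, pacing, warmth, and depth accordingly. "
        "Stay within boundaries and never give clinical or medical advice."
    )
```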
Built as a learning project on AWS (API Gateway, Lambda, DynamoDB) with Claude on Amazon Bedrock. Please avoid entering any sensitive, personal, or confidential information.
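The request path itself is a small Lambda function behind API Gateway that passes the shaped prompt to Claude on Amazon Bedrock. The sketch below assumes the helpers from the previous example, a specific Claude model ID, and event fields (userId, message) chosen for illustration.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Model ID is an assumption; any Claude model enabled in the account's region works.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def lambda_handler(event, context):
    """API Gateway -> Lambda: shape the reply with the empathy profile, then call Claude."""
    body = json.loads(event.get("body") or "{}")
    user_id = body.get("userId", "anonymous")
    question = body.get("message", "")

    profile = load_profile(user_id)              # helper from the sketch above
    system_prompt = build_system_prompt(profile)

    response = bedrock.converse(
        modelId=MODEL_ID,
        system=[{"text": system_prompt}],
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.4},
    )
    answer = response["output"]["message"]["content"][0]["text"]

    return {"statusCode": 200, "body": json.dumps({"reply": answer})}
```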
Want to see it in action?
This is an early proof-of-concept, not a clinical tool: it does not diagnose, treat, or provide medical advice. It is intended to show how human context and user preferences could reduce confusion, improve clarity, and strengthen trust in future AI experiences.