How a Seattle-based startup uses AI to build trust and safety in clinical trials

An interview with Dr. Grin Lord, the founder and CEO of mpathic, a Seattle-based startup that’s using AI to improve clinical trial monitoring and medical conversations.

My entire career, I've focused on studying how people develop trust and empathy. It started in the early 2000s, when I was a research assistant at a hospital in Seattle. A study there showed that 15 minutes of structured, empathic listening for ER patients admitted after drunk driving accidents had a staggering impact: those who received the listening intervention saw a 50% drop in hospital readmissions for alcohol-related issues.

That powerful interaction scaled nationally and now saves level-one trauma centers about $2 billion a year. But the biggest revelation for me was that there are elements of empathy that can be formulaic. Anyone could be trained to do this, which sparked a big question for our research group: Could machine learning help us scale the expertise of psychologists? By 2008, we had begun exploring that question, building all of our own AI from scratch, and that work eventually spun out of the university as my first startup.

A few years later, I had a realization. I could build an API to inject empathy directly into conversational agents. And that's how I founded mpathic. Our first models focused on empathy and trust, but today, we've expanded far beyond that, with over 200 detections that analyze not just how a conversation is conducted but what is happening within it.

Over time, we’ve evolved from a quality assurance platform to a focus on AI safety. Our team of psychologists helps foundation model builders improve the safety of their models, and then, using our own AI and API, we help organizations implement those models safely in high-stakes medical settings, such as clinical trials run by pharmaceutical companies. We offer services for safe prompt design, benchmarking, and quality oversight of medical conversations. The impact is remarkable: our AI is seven times more accurate than a human at detecting suicide risk in clinical trials.

Google AI is central to this. Customers using our product can run their audio and video through Google's Speech-to-Text model to generate transcripts.
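As a rough illustration of that step, here is a minimal sketch of transcribing a short recording with the Google Cloud Speech-to-Text Python client. The bucket URI, encoding, and language settings are placeholders for illustration, not mpathic's actual pipeline or configuration.

```python
# Minimal sketch: transcribing a recording with Google Cloud Speech-to-Text.
# The bucket URI, encoding, and language code are illustrative placeholders.
from google.cloud import speech


def transcribe_recording(gcs_uri: str) -> str:
    """Return the transcript for an audio file stored in Google Cloud Storage."""
    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_automatic_punctuation=True,
    )
    audio = speech.RecognitionAudio(uri=gcs_uri)

    # Synchronous recognition is fine for short clips; longer recordings
    # would use long_running_recognize instead.
    response = client.recognize(config=config, audio=audio)

    return " ".join(
        result.alternatives[0].transcript for result in response.results
    )


if __name__ == "__main__":
    print(transcribe_recording("gs://example-bucket/visit-recording.wav"))
```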

Internally, Google's tools have become indispensable. I use NotebookLM constantly.

If I need to find human annotators for a project, I can simply query our HR database loaded into NotebookLM: "Find me all our child psychiatrists who speak Russian." When I’m preparing a sales pitch, I drop our template into NotebookLM and ask it to tailor the presentation for a specific customer.

Dr. Grin Lord, founder and CEO of mpathic

For product design, our team uses Opal, a Google experiment that lets us quickly prototype and gain alignment on new features. It’s an incredible tool for rapid, collaborative iteration.

My journey began by observing the simple, structured power of human empathy in a hospital room. Today, by combining that foundational insight with the immense capabilities of partners like Google, we're building a future where every conversation in healthcare is safer, more effective, and fundamentally more human.