This is The Age of AI Series, where we talk to the foremost entrepreneurs and innovators around the planet using ML to transform industries. (Join our special mailing list!)
Today we cover a very futuristic, sci-fi sounding application of AI!
Imagine if, just from hearing your voice, an AI application could diagnose many diseases and accurately understand your emotional states.
Sounds fancy, and I was as skeptical as you probably are, but it turns out it’s possible. Our voice carries a surprising number of biomarkers, which can be tracked to detect issues that are difficult or impossible to catch with other diagnostic methods!
My guest was Dagmar Schuller, the CEO of audEERING (yes, it’s spelled that way).
If you’re as curious as I am, here’s what we cover:
- 03:00 — Background of audEERING and overview of their core products
- 16:06 — audEERING’s go-to-market strategy and reasoning
- 19:07 — How their technology works: a technical overview
- 23:48 — What kind of diseases can be diagnosed using voice AI?
- 30:14 — Using AI to teach autistic children to convey emotions better
- 32:02 — Why European medical tech tends to be safer
- 36:56 — The competitive landscape for medical tech startups in Europe vs USA/Israel, and how that affects strategy
- 42:12 — audEERING’s interesting fundraising story: starting with only €7500 in the bank!
- 50:05 — Dagmar on the worst and best parts of building audEERING so far!
Aman’s 2-Minute Summary and Key Takeaways
How it works:
Our speech is a very complex system: it requires coordination of hundreds of muscles, different parts of the brain, the respiratory system and also our cardiac rhythms (we sound different after running or waking up or drinking alcohol or feeling heartbroken).
Humans can intuitively notice these variations, but AI can go far beyond: by extracting a host of objective “biomarkers,” such as the speech-pause ratio or intermittent hoarseness, it can learn about what’s going on in our mind and body.
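To make the idea of an objective voice biomarker concrete, here is a toy sketch of one of the simplest: a speech-pause ratio computed by frame-level energy thresholding. This is purely illustrative and is not audEERING’s actual method; the function name, frame size, and threshold are my own assumptions, and real systems use far more robust voice-activity detection.

```python
import numpy as np

def speech_pause_ratio(signal, sr, frame_ms=25, energy_thresh=0.02):
    """Estimate the speech-to-pause ratio of a mono audio signal.

    A frame counts as 'speech' when its RMS energy exceeds a fixed
    threshold; everything else counts as a 'pause'. (Toy heuristic.)
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))  # energy per frame
    speech = np.count_nonzero(rms > energy_thresh)
    pause = n_frames - speech
    return speech / max(pause, 1)

# Synthetic demo: 1 s of 'speech' (a tone) followed by 1 s of silence.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.concatenate([0.5 * np.sin(2 * np.pi * 220 * t), np.zeros(sr)])
print(speech_pause_ratio(audio, sr))  # prints 1.0 for this half-and-half clip
```

A clinical pipeline would track how such ratios drift for one speaker over time, rather than compare raw values across people.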
That’s what audEERING is building, in a nutshell. They first built tech for emotion recognition, selling the tech to market research companies.
Where else it’s being used:
Over time, they’ve built platforms that enable a series of other industries, from call centers providing customer support to game developers who want to inject voice commands that emphasize emotions. Another interesting use case was to teach autistic children to notice and convey emotions better.
It’s quite typical for a patented technology like this to simply be licensed to bigger companies, who then do what they want with it. But Dagmar chose instead to build audEERING’s own platforms, a more active path to bringing the tech to market.
So they package and distribute the technology in many ways, from web-based APIs to SDKs that can be used by app developers, and even B2C apps.
We also discuss the fundraising landscape for companies like hers. The European market is heavily regulated, which means medical-tech startups can’t cut corners. It especially bothered her that many investors want to believe ambitious fairy tales instead of actual evidence, so audEERING raised its initial rounds from strategic partners instead of VCs.
Btw they are now open for a Series A, for anyone interested!
(Ethics Policy: These opinions are 100% my own as an independent observer and educator. I don’t own stock in guests’ companies or their competitors, nor do I get paid by them in any form for any reason at the time of publishing, unless specifically stated. Episodes are also not intended to be an automatic endorsement of any company or its products and services.)