Show HN: Realtime, expressive AI personas that you can video call
playground.keyframelabs.com

Hey HN.
Over the last few months, we've (me and @parthradia) homebrewed some very fast, very cheap, and pretty expressive talking head models. Our latest enabled us to finally get a live-streaming API together, which you can try at the playground link above.
We actually fell down this rabbit hole because we were spending a significant amount of time building yet another language learning app. We came to the conclusion that the barrier to learning wasn’t the course content, but the lack of real, conversational practice. We tried prototyping with standard speech-to-speech models (OpenAI Realtime, Gemini Flash, etc.), but found that they weren’t really triggering our “fight or flight” response.
What did trigger it was talking to a person, face to face. We looked for existing realtime avatar APIs to bridge that gap, but they didn’t fit the constraints: they either didn’t cross the uncanny valley, were too slow (<15 fps), or were way too expensive. So we decided to make our own :-)
The model itself runs at less than a cent per minute and at >30fps on commodity hardware (4090s!), which is pretty cool.
But more importantly, we’ve actually found ourselves using it as a speaking partner for learning Spanish, which is a pretty remarkable feeling at times. This has inspired us to look beyond language learning; we are exploring other use cases (e.g., telehealth, mock interviews, refining elevator pitches) where face-to-face interaction measurably elevates the experience (and we’d love to hear from you here!).
With respect to our tech, there’s still low-hanging fruit to pick:
1. It takes roughly 6s to get a response end-to-end (the video gen is fast, but the chain of ASR -> LLM -> TTS adds up)
2. The resolution could be higher
3. The model feels expressive and natural during its own speech, but less so during the user’s turn (early prototypes of the model reacting in realtime to what you’re saying show promise)
While we build out consumer-facing application(s) powered by our model, we’re opening up an API early to see what other developers might build with it. We’d love for you to try it in our playground. Leave a comment below or shoot us a line if you want early access (access@keyframelabs.com)!
Looks really cool. It feels like a response takes about 3 seconds from when the UI switches from "listening" to "thinking" until it plays on my headphones (Bluetooth, so maybe that adds latency). Something feels a bit uncanny when I haven't said anything yet and the AI persona looks dead straight into the camera, smiling at me. What tech stack are you using under the hood?
Thanks for trying it out!
Yeah, that latency makes sense; "listening" includes turn detection and STT, and "thinking" covers the LLM + TTS _and then_ our model, so the pipeline latency stacks up pretty quickly. The video model itself starts streaming out frames <500ms from the TTS generation, but we're still working on reducing latency in the parts of the pipeline we're using off the shelf.
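To give a rough sense of how that stacking works, here's a toy latency budget for one turn (the per-stage numbers are made-up placeholders just to illustrate the accounting, not our measured figures; only the <500ms first-frame number is from above):

    # Toy latency budget for a sequential turn:
    # end-of-speech detection -> STT -> LLM -> TTS -> video.
    # Per-stage milliseconds are illustrative placeholders, not measurements.
    STAGE_MS = {
        "turn_detection":    500,  # deciding the user has finished speaking
        "stt":               300,  # transcribing the final utterance
        "llm_first_token":   700,  # time to first token from the LLM
        "tts_first_audio":   500,  # time to first audio chunk from TTS
        "video_first_frame": 500,  # video model starts streaming <500ms after TTS
    }

    def end_to_end_ms(stages):
        # The stages run one after another, so their latencies simply add up.
        return sum(stages.values())

    if __name__ == "__main__":
        for name, ms in STAGE_MS.items():
            print(f"{name:>18}: {ms:4d} ms")
        print(f"{'total':>18}: {end_to_end_ms(STAGE_MS):4d} ms")  # ~2.5 s to first frame

Shaving any single stage only helps so much, which is why we're looking at overlapping stages (streaming STT/LLM/TTS) rather than just making each one faster.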
We have a high-level blog post about the architecture of the video model here: https://www.keyframelabs.com/blog/persona-1. The WebRTC "agent" stack is LiveKit plus a few backend components hosted on Modal.
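If you haven't used LiveKit before: the "agent" is basically a worker process that joins the same WebRTC room as the browser client and publishes audio/video tracks into it. A bare-bones skeleton with the open-source livekit-agents Python framework looks roughly like this (a generic illustration, not our production code; the track name and resolution are placeholders):

    # Bare-bones LiveKit agent skeleton (generic example, not our production code).
    # Requires the `livekit-agents` package and LIVEKIT_URL / LIVEKIT_API_KEY /
    # LIVEKIT_API_SECRET in the environment.
    from livekit import rtc
    from livekit.agents import JobContext, WorkerOptions, cli

    WIDTH, HEIGHT = 512, 512  # placeholder output resolution for the persona stream

    async def entrypoint(ctx: JobContext):
        await ctx.connect()  # join the room the browser client is connected to

        # Publish a video track; generated talking-head frames get pushed into `source`.
        source = rtc.VideoSource(WIDTH, HEIGHT)
        track = rtc.LocalVideoTrack.create_video_track("persona", source)
        await ctx.room.local_participant.publish_track(
            track, rtc.TrackPublishOptions(source=rtc.TrackSource.SOURCE_CAMERA)
        )

        # A real pipeline would call source.capture_frame(...) for each generated
        # frame, alongside a parallel audio track carrying the TTS output.

    if __name__ == "__main__":
        cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))

Our video model sits behind that agent and streams frames into the published track; the blog post above covers how the model itself works.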