Endel makes AI-powered soundscapes that adapt to your body. Heart rate, time of day, motion — the sound shifts in real time. It's one of the most interesting products at the intersection of AI, music, and wellness. But there's a gap: you can't see why the sound was chosen for you.

I built a demo that fills that gap.

The Problem

Endel's personalization engine is doing real work. But users can't see it. When a soundscape shifts, there's no way to know if it's responding to your biometrics or just cycling through variations. The most common user feedback: "I can't tell if it's actually personalized."

This matters because Endel's entire value proposition depends on the listener believing the sound is for them. The intelligence is real. The visibility isn't.

What I Built

Endel Insights pulls real biometric data from an Oura Ring and translates it into three things:

A human-readable state interpretation. Not raw numbers, but language: "You're running on 6 hours of sleep with an HRV balance of 42 — your nervous system is still catching up."

A soundscape recommendation with visible reasoning. Instead of just playing sound, the system explains why: "Your HRV balance tells me your autonomic nervous system is still in recovery mode. Brown noise with slow harmonic drift supports parasympathetic activation — low frequencies your body can settle into without effort."

A generated soundscape. Brown, pink, or white noise layered with a tonal drone, a harmonic layer, and LFO modulation — every parameter derived from the biometric interpretation. Procedurally generated with the Web Audio API; no audio files.
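To make the "no audio files" claim concrete, here is a minimal sketch of how colored noise can be generated sample-by-sample. The function names, the leak coefficient, and the gain value are illustrative assumptions, not Endel's or the demo's actual code; brown noise as a leaky integration of white noise is a standard technique. The generation itself is pure TypeScript — only the commented-out lines at the end assume a browser AudioContext.

```typescript
// Brown noise sketch (illustrative, not the demo's actual implementation).
// Leaky integration of white noise concentrates energy in low frequencies:
// each sample drifts slightly from the last instead of jumping freely.
function brownNoise(length: number, leak = 0.02): Float32Array {
  const out = new Float32Array(length);
  let last = 0;
  for (let i = 0; i < length; i++) {
    const white = Math.random() * 2 - 1;          // uniform white noise in [-1, 1]
    last = (1 - leak) * last + leak * white;      // one-pole low-pass (the "leak")
    out[i] = last * 3.5;                          // rough gain compensation
  }
  return out;
}

// In the browser, the samples would feed an AudioBufferSourceNode, e.g.:
// const ctx = new AudioContext();
// const samples = brownNoise(ctx.sampleRate * 2);
// const buf = ctx.createBuffer(1, samples.length, ctx.sampleRate);
// buf.copyToChannel(samples, 0);
// const src = ctx.createBufferSource();
// src.buffer = buf;
// src.loop = true;
// src.connect(ctx.destination);
// src.start();
```

Pink and white noise follow the same pattern with different filtering; the drone and harmonic layer would be plain OscillatorNodes, with an LFO (another low-rate oscillator) modulating their gain or pitch.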

The Insight

Explainability isn't just transparency — it's a product feature. When users can see why a soundscape was chosen for them:

  • Trust increases. Users who understand why they're hearing brown noise at 55Hz after a bad night of sleep are more likely to believe it's working.
  • Engagement deepens. The app becomes a mirror for self-knowledge: "Every time my HRV is below 50, it recommends recovery mode."
  • A feedback loop becomes possible. Once users can see the reasoning, they can evaluate it. Over time, this creates a preference model that makes personalization genuinely personal.

Try It Yourself

The demo ships with my Oura Ring data, but if you have your own Oura Ring you can paste your personal access token directly into the interface and see the whole experience regenerate from your biometrics. That moment — when you see your own data interpreted into a soundscape that makes sense — is when the idea clicks.
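Under the hood, a personal access token is all the Oura v2 API needs. A hedged sketch of how the proxy's request could be assembled — the helper name and date parameters are my own illustration, though the endpoint path and Bearer-token auth follow Oura's published v2 API:

```typescript
// Illustrative request builder for the Oura proxy (not the demo's actual code).
interface OuraRequest {
  url: string;
  headers: { Authorization: string };
}

function buildOuraRequest(token: string, startDate: string, endDate: string): OuraRequest {
  // daily_readiness is one of the v2 "usercollection" endpoints; daily_sleep
  // and heartrate follow the same shape.
  const params = new URLSearchParams({ start_date: startDate, end_date: endDate });
  return {
    url: `https://api.ouraring.com/v2/usercollection/daily_readiness?${params}`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// The Express proxy would then do roughly:
// const req = buildOuraRequest(userToken, "2024-01-01", "2024-01-07");
// const res = await fetch(req.url, { headers: req.headers });
```

Keeping token handling behind a proxy means the browser never talks to Oura directly, and a pasted token can be swapped in per-request without any client-side storage.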

Try Endel Insights · View Source

Technical Stack

React + TypeScript frontend with Space Mono and a black-and-white aesthetic. Express backend proxying the Oura Ring API and Claude API. Claude generates the natural-language state interpretation and maps biometric signals to specific audio parameters. The Web Audio API generates the soundscape procedurally — oscillators, noise generators, filters, and LFO modulation. Rule-based fallback when the AI layer is unavailable.
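The rule-based fallback deserves a sketch, because it shows the shape of the biometric-to-audio mapping even without the AI layer. The thresholds and parameter values below are illustrative assumptions, not the demo's actual rules — though the low-HRV-to-brown-noise pairing mirrors the examples earlier in this piece:

```typescript
// Illustrative rule-based fallback (assumed thresholds, not the demo's real mapping).
type NoiseType = "brown" | "pink" | "white";

interface SoundscapeParams {
  noise: NoiseType;
  droneHz: number;   // fundamental pitch of the tonal drone
  lfoRateHz: number; // speed of the slow modulation drift
}

function fallbackParams(hrvBalance: number, sleepHours: number): SoundscapeParams {
  // Low HRV balance or short sleep → recovery mode: low, slow, dark.
  if (hrvBalance < 50 || sleepHours < 6) {
    return { noise: "brown", droneHz: 55, lfoRateHz: 0.05 };
  }
  // Mid-range HRV → neutral focus: pink noise, moderate drone.
  if (hrvBalance < 70) {
    return { noise: "pink", droneHz: 110, lfoRateHz: 0.1 };
  }
  // Recovered → brighter texture, slightly faster movement.
  return { noise: "white", droneHz: 220, lfoRateHz: 0.2 };
}
```

When Claude is available, it produces the same parameter shape plus the natural-language reasoning, so the audio engine doesn't care which layer fed it.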