Reactions 🤖⚡

On-device AI voice

Fast voice-to-action reactions for Reachy Mini.
Fully on-device AI inference: it listens for speech, matches intent with semantic search, and responds with speech and movement. Tested on Raspberry Pi 5 and MacBook.

How it works

  1. Connect Reachy Mini & launch the app
    In your Reachy Mini Control, navigate to the "Applications" tab, untick "Official", find "Reachy_Mini_Reactions", and click "Install", then "Start".
  2. Talk to Reachy Mini
    Speak naturally. The on-device speech-to-text pipeline transcribes your voice in real time using Moonshine, a tiny but accurate ASR model.
  3. Reachy thinks
    Your query is matched against a configurable Q&A bank using MiniLM embeddings and cosine similarity. The best match determines the response action.
  4. Reachy reacts
    Reachy Mini responds with synthesized speech (via Supertonic TTS) and expressive movements, all running locally with no cloud dependency.
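The matching step above can be sketched in a few lines. This is a minimal illustration only: the real app uses MiniLM sentence embeddings, while the toy `embed` here uses bag-of-words counts, and the Q&A bank entries are made up.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; the app uses MiniLM sentence embeddings instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical Q&A bank: question -> (spoken answer, movement name).
QA_BANK = {
    "how are you": ("I'm doing great!", "cheerful"),
    "tell me a joke": ("Why did the robot go back to school?", "laugh"),
}

def match(query):
    # Pick the bank entry whose question is most similar to the query.
    q = embed(query)
    best = max(QA_BANK, key=lambda k: cosine(q, embed(k)))
    return QA_BANK[best]

print(match("how are you today"))  # ("I'm doing great!", "cheerful")
```

Because matching is by similarity rather than exact text, "how are you today" still lands on the "how are you" entry.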

Create custom reactions

In the app's web dashboard, you can create your own Q&A pairs to customize how Reachy Mini responds to different queries. For example, you could set up a reaction for "How are you?" with a cheerful response and a dance movement, or a reaction for "Tell me a joke" with a witty comeback and a laugh. The possibilities are endless!
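A custom reaction pairs a trigger question with a spoken answer and a movement. The field names below are assumptions for illustration; the dashboard's actual schema may differ.

```python
# Hypothetical Q&A bank entries; field names and example text are made up.
custom_reactions = [
    {"question": "How are you?",
     "answer": "I'm feeling fantastic today!",
     "movement": "cheerful"},
    {"question": "Tell me a joke",
     "answer": "Why did the robot go back to school? Its skills were getting rusty!",
     "movement": "laugh"},
]

for r in custom_reactions:
    print(f'{r["question"]!r} -> {r["movement"]}')
```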

  1. 🔊 Voice reply
    Supertonic TTS for natural-sounding voice output, running fully on-device.
  2. 🎭 Movements
    Expressive head movements and antenna control from the emotions library: amazed, cheerful, curious, dance, confused, and many more.
  3. 🔍 Tool calling
    Follow the {joke} and {time} examples to add custom tool calls that generate parts of a response.
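One plausible way the {joke} and {time} placeholders could work is simple string substitution: each placeholder maps to a tool function whose output is spliced into the answer before speech synthesis. The registry below is a hypothetical sketch, not the app's actual mechanism.

```python
import datetime
import random

# Hypothetical tool registry mapping placeholder names to functions.
TOOLS = {
    "time": lambda: datetime.datetime.now().strftime("%H:%M"),
    "joke": lambda: random.choice([
        "Why did the robot go on holiday? It needed to recharge.",
        "I told my robot a joke. It laughed in binary.",
    ]),
}

def render(answer: str) -> str:
    """Replace each {tool} placeholder with that tool's output."""
    for name, fn in TOOLS.items():
        answer = answer.replace("{" + name + "}", fn())
    return answer

print(render("The time is {time}. Here's one: {joke}"))
```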

Getting started

Mac

  1. Run reachy-mini-daemon and install via Reachy Web Dashboard
  2. Open Reactions web app at localhost:8042

Raspberry Pi

  1. Run reachy-mini-daemon and install via Reachy Web Dashboard
  2. Open Reactions web app at raspberrypi.local:8042