Give Claude Code a voice.
Hear when tasks finish, get spoken summaries, clone your own voice. Local-first TTS with 3 backends, zero API calls.
Why Vox?
Your AI assistant works in the terminal. Now it can talk to you.
Task Notifications
Hear when builds finish, tests pass or fail, and deployments complete. Stay productive without watching the terminal.
Spoken Summaries
Claude reads back what it did after a complex task. Get the gist without scanning hundreds of lines.
Voice Cloning
Record a 10-second sample and Vox speaks with your voice. Or clone any voice from an audio file.
100% Local
No API keys, no cloud, no data leaving your machine. Everything runs on-device with local models.
Cross-Platform
macOS, Linux, WSL. The pure-Rust backend works everywhere. Metal & CUDA GPU acceleration available.
One-Command Setup
vox init mcp registers everything in Claude Code. Zero config, instant integration.
3 TTS backends, one CLI
Pick the right engine for your platform and quality needs.
say
System speech synthesis via macOS NSSpeechSynthesizer. Instant startup, zero downloads, dozens of built-in voices.
qwen
MLX-powered neural TTS on Apple Silicon. High-quality, natural-sounding speech with voice cloning support. The model auto-downloads (~1.2 GB).
qwen-native
Cross-platform neural TTS. Pure Rust implementation with Metal & CUDA GPU support. No Python, no dependencies.
See it in action
Simple CLI, powerful results.
$ vox "Build complete. 3 tests passed."
# speaks aloud
$ vox -b qwen-native "Cross-platform TTS."
# uses pure-Rust backend
$ echo "Pipeline done" | vox
# reads from stdin
$ vox clone record myvoice --duration 10
# records 10s of your voice
$ vox clone add patrick \
--audio ~/voice.wav \
--text "Transcription helps quality"
# register from audio file
$ vox -v patrick "Speaking with your voice."
# use cloned voice
$ vox config set backend qwen
# switch default backend
$ vox config set lang fr
# set default language
$ vox config set voice Chelsie
# set default voice
$ vox --list-voices
# show available voices
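The notification pattern above is easy to wrap in a small shell helper. This is a sketch (the `notify` name is hypothetical); it falls back to printing the message with a terminal bell when vox is not on the PATH:

```shell
# Hypothetical wrapper around vox: speak the message when vox is
# installed, otherwise fall back to a terminal bell plus plain text.
notify() {
  if command -v vox >/dev/null 2>&1; then
    vox "$1"
  else
    printf '\a%s\n' "$1"
  fi
}

# Example: announce a finished build without watching the terminal.
notify "Build complete. 3 tests passed."
```

Drop the function into your shell profile and call it at the end of any long-running command.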
Claude Code integration
4 init modes. Pick what fits your workflow.
vox init mcp
Registers vox serve as an MCP server in ~/.claude.json. Exposes 8 tools that Claude can call directly.
vox init cli
Creates a CLAUDE.md with instructions and adds a stop hook that speaks "Terminé" ("Done") when tasks complete.
vox init skill
Generates a /speak slash command in ~/.claude/commands/. Invoke with /speak Hello.
vox init all
Runs all three modes at once. MCP server + CLAUDE.md + slash command. The full experience.
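For illustration, the MCP registration that vox init mcp writes into ~/.claude.json could look roughly like this sketch (the exact field layout is an assumption, not verified output; Claude Code's MCP config uses a mcpServers map of command + args):

```json
{
  "mcpServers": {
    "vox": {
      "command": "vox",
      "args": ["serve"]
    }
  }
}
```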
8 MCP tools exposed
vox_speak
Read text aloud
vox_list_voices
List available voices
vox_clone_add
Add a voice clone
vox_clone_list
List voice clones
vox_clone_remove
Remove a clone
vox_config_show
Show preferences
vox_config_set
Set a preference
vox_stats
Usage statistics
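Under the hood, Claude invokes these tools over MCP's JSON-RPC transport with a tools/call request. A vox_speak call might look like this sketch (the argument name text is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "vox_speak",
    "arguments": { "text": "Build complete. 3 tests passed." }
  }
}
```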
Install in seconds
Rust-powered. No runtime dependencies for the native backend.
Quick Install
macOS, Linux & WSL
curl -fsSL https://raw.githubusercontent.com/rtk-ai/vox/main/install.sh | sh
Via Cargo
From source
cargo install --git https://github.com/rtk-ai/vox
With GPU
Metal or CUDA acceleration
cargo install --git https://github.com/rtk-ai/vox --features metal
# on Linux with an NVIDIA GPU, use --features cuda instead
Then connect to Claude Code
vox init mcp
Registers Vox as an MCP server. Claude Code can now speak, clone voices, and manage preferences natively.
Platform defaults
Vox picks the best backend for your system automatically.
| Platform | Default Backend | GPU Flag | Extra Dependency |
|---|---|---|---|
| macOS | say | --features metal | None |
| Linux / WSL | qwen-native | --features cuda | libasound2-dev |