OpenAI Compatibility
murmr is designed as a drop-in replacement for the OpenAI TTS API. Migrate your existing code with minimal changes and unlock powerful new features.
Quick Migration
The fastest migration path uses VoiceDesign — describe any voice in natural language instead of choosing from preset names:
curl -X POST "https://api.murmr.dev/v1/voices/design" \
-H "Authorization: Bearer YOUR_MURMR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"text": "Hello, world!",
"voice_description": "A warm, professional female voice",
"language": "English"
}' --output output.wav
OpenAI Voice Equivalents
Instead of mapping OpenAI voices 1:1, describe what you want:
- OpenAI alloy → "A neutral, clear voice for general use"
- OpenAI echo → "A warm, resonant male voice"
- OpenAI nova → "A bright, energetic female voice"
- OpenAI shimmer → "A playful, upbeat voice"
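For a mechanical first pass at migrating existing call sites, you can keep your OpenAI voice names and translate them to descriptions at the edges of your code. A minimal sketch (the descriptions are just the suggestions above; tune them to taste):

```javascript
// Map legacy OpenAI voice names to murmr VoiceDesign descriptions.
// These descriptions mirror the suggestions above; adjust as needed.
const OPENAI_VOICE_DESCRIPTIONS = {
  alloy: "A neutral, clear voice for general use",
  echo: "A warm, resonant male voice",
  nova: "A bright, energetic female voice",
  shimmer: "A playful, upbeat voice",
};

function toVoiceDescription(openaiVoice) {
  const description = OPENAI_VOICE_DESCRIPTIONS[openaiVoice];
  if (!description) {
    throw new Error(`No murmr description mapped for voice "${openaiVoice}"`);
  }
  return description;
}
```

Once you have saved voices (see Voice Strategy below), replace this lookup with your saved voice IDs.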
Voice Strategy
Unlike OpenAI's fixed voices, murmr uses VoiceDesign — describe any voice in natural language and generate speech with it. Here's the recommended migration approach:
1. Use the Playground or VoiceDesign API to describe the voice you want: "A warm, professional female voice, calm and clear".
2. Found a voice you love? Save it via the Voices API to get a stable ID like voice_abc123.
3. Use your saved voice IDs with the batch or streaming endpoints for consistent, repeatable output.
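The three steps above can be sketched as plain request payloads. Note this is only a sketch: the /v1/voices save path and its field names are assumptions here, so check the Voices API reference for the exact shapes.

```javascript
const BASE_URL = "https://api.murmr.dev";

// Step 1: design a voice from a natural-language description.
function designPayload(text, voiceDescription) {
  return {
    url: `${BASE_URL}/v1/voices/design`,
    body: { text, voice_description: voiceDescription, language: "English" },
  };
}

// Step 2: save the designed voice to get a stable ID (path assumed).
function savePayload(name, description) {
  return { url: `${BASE_URL}/v1/voices`, body: { name, description } };
}

// Step 3: generate speech with the saved voice ID.
function speechPayload(voiceId, text) {
  return {
    url: `${BASE_URL}/v1/audio/speech`,
    body: { text, voice: voiceId, language: "English" },
  };
}
```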
Key Differences
- Endpoint and auth: /v1/audio/speech with a Bearer token in the Authorization header, same as OpenAI.
- Voices: instead of 6 fixed voices, create any voice with VoiceDesign. Describe exactly what you need.
- Streaming: SSE streaming for low-latency playback (~450ms TTFC), or batch mode for complete audio files.
- Formats: the batch endpoint supports response_format: mp3, opus, aac, flac, wav (default), pcm.
- Real-time: WebSocket API for voice agents and LLM integration. See the Real-time docs.
- Responses: /v1/audio/speech returns 200 with binary audio by default, same as OpenAI. For async processing, pass webhook_url to receive a 202 with a job ID. For low-latency streaming, use /v1/audio/speech/stream (~450ms to first audio).
- Language: murmr adds a language parameter that takes full names ("English", "French", "Japanese") and defaults to "Auto" for automatic detection. See Language Support.
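The format and language options above can be wrapped in a small client-side helper. This is only a convenience sketch for catching typos before a request is sent; the API validates everything server-side:

```javascript
// Formats accepted by the batch endpoint, per the list above.
const BATCH_FORMATS = new Set(["mp3", "opus", "aac", "flac", "wav", "pcm"]);

// Assemble a batch request body with murmr's defaults
// (language "Auto", response_format "wav").
function batchBody(text, voice, { language = "Auto", response_format = "wav" } = {}) {
  if (!BATCH_FORMATS.has(response_format)) {
    throw new Error(`Unsupported response_format: ${response_format}`);
  }
  return { text, voice, language, response_format };
}
```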
OpenAI SDK Compatibility
/v1/audio/speech returns 200 with binary audio by default, so the OpenAI SDK's client.audio.speech.create() works as a drop-in: pass a saved voice ID as the voice parameter.
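As a sketch of what that drop-in call looks like without the SDK, the same request can be built with plain fetch; only the base URL, API key, and voice value change relative to an OpenAI call, while the body shape stays the same:

```javascript
// Build the /v1/audio/speech request the OpenAI SDK would send,
// pointed at murmr instead of OpenAI.
function buildSpeechRequest(apiKey, voiceId, input) {
  return {
    url: "https://api.murmr.dev/v1/audio/speech",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // `input` is the OpenAI field name; murmr also accepts `text`.
      body: JSON.stringify({ input, voice: voiceId, response_format: "mp3" }),
    },
  };
}

// Usage (uncomment to actually call the API):
// const { url, init } = buildSpeechRequest(process.env.MURMR_API_KEY, "voice_abc123", "Hello!");
// const audio = await fetch(url, init).then((r) => r.arrayBuffer());
```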
Using a Saved Voice
Once you've saved a voice, use it with the streaming endpoint for real-time playback or the batch endpoint for bulk generation:
Streaming (recommended for real-time)
curl -X POST "https://api.murmr.dev/v1/audio/speech/stream" \
-H "Authorization: Bearer YOUR_MURMR_KEY" \
-H "Content-Type: application/json" \
-d '{
"text": "Hello!",
"voice": "voice_abc123",
"language": "English"
}'
# → SSE stream with base64 PCM chunks (~450ms to first audio)
Batch (for bulk generation)
# Returns 200 with binary audio
curl -X POST "https://api.murmr.dev/v1/audio/speech" \
-H "Authorization: Bearer YOUR_MURMR_KEY" \
-H "Content-Type: application/json" \
-d '{
"text": "Hello!",
"voice": "voice_abc123",
"response_format": "mp3"
}' --output hello.mp3
Both text and input field names are accepted for OpenAI compatibility.
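The streaming example above returns Server-Sent Events rather than raw bytes. A minimal sketch of decoding the PCM out of an SSE body in Node follows; the "audio" field name in each event payload is an assumption, so check the streaming reference for the exact event schema:

```javascript
// Decode base64 PCM audio chunks out of an SSE response body.
// Events are separated by blank lines; each data line carries a
// JSON payload with a base64-encoded audio field (assumed name).
function decodePcmFromSse(sseText) {
  const buffers = [];
  for (const event of sseText.split("\n\n")) {
    for (const line of event.split("\n")) {
      if (!line.startsWith("data: ")) continue;
      const payload = JSON.parse(line.slice("data: ".length));
      if (payload.audio) buffers.push(Buffer.from(payload.audio, "base64"));
    }
  }
  return Buffer.concat(buffers);
}
```

For real-time playback you would feed each chunk to your audio sink as it arrives instead of concatenating at the end.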
Voice Mapping
Create a replacement voice for each OpenAI voice you use. This is a one-time setup — save the voice ID and use it in production:
// One-time setup: create and save your voices.
// `client` is an initialized murmr SDK client.
const wav = await client.voices.design({
input: 'This is a reference recording for the Nova replacement voice.',
voice_description: 'A warm, friendly female voice, mid-20s, American',
});
const saved = await client.voices.save({
name: 'Nova Replacement',
description: 'A warm, friendly female voice, mid-20s, American',
audio: wav,
ref_text: 'This is a reference recording for the Nova replacement voice.',
});
console.log(`Nova replacement: ${saved.id}`);
Pricing
murmr offers competitive pricing compared to OpenAI TTS, especially at higher volumes. Check the Pricing page for details.