Overview
In addition to a plaintext websocket implementation, Rime also has an implementation that sends and receives events as JSON objects. Like the other implementation, all synthesis arguments are provided as query parameters when establishing the connection. The websocket API buffers inputs until one of the following punctuation characters is reached: `.`, `?`, or `!`. This is most pertinent for the initial messages sent to the API, as synthesis won’t begin until there are sufficient tokens to generate audio with natural prosody. After the first synthesis of any given utterance, enough time has typically elapsed that subsequent audio contains multiple clauses, and the buffering becomes largely invisible.
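For concreteness, here is a minimal sketch of opening the connection from Node with the `ws` package. The endpoint URL and the Bearer-token auth header are assumptions, not confirmed details; the query parameter names mirror the variable parameters documented below.

```ts
import WebSocket from "ws";

// Hypothetical endpoint and auth scheme; check Rime's docs for the
// actual URL and authentication method.
const params = new URLSearchParams({
  speaker: "example_speaker", // placeholder; use a voice from the docs
  modelId: "mistv2",
  audioFormat: "pcm",
  samplingRate: "22050",
});

const ws = new WebSocket(`wss://users.rime.ai/ws2?${params}`, {
  headers: { Authorization: `Bearer ${process.env.RIME_API_KEY}` },
});

ws.on("open", () => {
  // Ending on sentence-final punctuation gives the buffer enough to
  // start synthesizing right away.
  ws.send(JSON.stringify({ text: "Hello there! How are you today? " }));
});
```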
Messages
Send
Text
This is the most common message, which contains text for synthesis. Its schema carries the text itself plus a `contextId`, which may be `null`; when utterances are sent under different contexts, audio for the first is tagged with its context’s ID, and the audio for the second will be tagged with its UUID.
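As a sketch, a text message might look like the following; beyond `text` and `contextId` (from the schema fragment above), no additional fields are assumed.

```ts
// Continuing the connection sketch from the Overview. contextId may be
// null, or a UUID string used to tag this utterance's audio events.
ws.send(
  JSON.stringify({
    text: "I'd like to check on my order status. ",
    contextId: null,
  })
);
```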
Clear
Your client can clear out the accumulated buffer, which is useful in the case of interruptions.
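Assuming control messages name their action in an `operation` field (an assumption for this and the next two sketches), an interruption handler might send:

```ts
// Drop any text still waiting in the buffer, e.g. after a user barge-in.
ws.send(JSON.stringify({ operation: "clear" }));
```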
Flush
This forces whatever buffer exists, if any, to be synthesized, and the generated audio to be sent over.
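Under the same assumed shape:

```ts
// Force synthesis of whatever is buffered, without waiting for
// sentence-final punctuation.
ws.send(JSON.stringify({ operation: "flush" }));
```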
EOS
At times, your client may want to generate audio for whatever remains in the buffer and then have the connection immediately closed.
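With the same caveat about the assumed message shape:

```ts
// Synthesize the remaining buffer, then have the connection closed.
ws.send(JSON.stringify({ operation: "eos" }));
```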
Receive
Chunk
The most common event will be the audio chunk.
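A receive loop might branch on an assumed `type` discriminator; the base64-encoded `data` field below is likewise an assumption for the sketch.

```ts
const audioChunks: Buffer[] = [];

ws.on("message", (raw) => {
  const event = JSON.parse(raw.toString());
  if (event.type === "chunk") {
    // Assumed shape: base64 audio bytes plus the contextId of the
    // utterance this chunk belongs to.
    audioChunks.push(Buffer.from(event.data, "base64"));
  }
});
```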
Timestamps
Word timestamps are provided to better understand what precisely has already been said, in the event of an interruption.
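Continuing the same handler, timestamps might be consumed as below; the parallel-array field names are assumptions.

```ts
if (event.type === "timestamps") {
  // Assumed shape: words with matching start/end times in seconds.
  // Comparing end times against playback position shows where speech
  // was cut off by an interruption.
  const { words, start, end } = event;
  words.forEach((w: string, i: number) =>
    console.log(`${w}: ${start[i]}s - ${end[i]}s`)
  );
}
```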
Error
In the event of a malformed or unexpected input, the server will immediately respond with an error message. The server will not close the connection, and will still accept subsequent well-formed messages. It’s up to the client to decide whether to close the connection upon receiving an error.
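Since the server leaves the connection open, closing is a policy decision for the client; a sketch, with the `message` field name assumed:

```ts
if (event.type === "error") {
  console.error("Rime error:", event.message);
  // The socket is still usable; call ws.close() here only if your
  // application treats synthesis errors as fatal.
}
```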
Variable Parameters
speaker
Must be one of the voices listed in our documentation.
text
The text you’d like spoken. Character limit per request is 500 via the API and 1,000 in the dashboard UI.
modelId
Choose `mistv2` for Rime’s fastest, most accurate, and most customizable model, or `mist` for Rime’s earlier model (default: `mist`).
audioFormat
One of `mp3`, `mulaw`, or `pcm`.
lang
If provided, the language must match the language spoken by the provided speaker. This can be checked in our voices documentation.
pauseBetweenBrackets
When set to true, adds pauses between words enclosed in angle brackets. The number inside the brackets specifies the pause duration in milliseconds.
Example: “Hi. <200> I’d love to have a conversation with you.” adds a 200ms pause between the first and second sentences.
phonemizeBetweenBrackets
When set to true, you can specify the phonemes for a word enclosed in curly brackets.
Example: “{h’El.o} World” will pronounce “Hello” as expected. Learn more about custom pronunciation.
inlineSpeedAlpha
Comma-separated list of speed values applied to words in square brackets. Values < 1.0 speed up speech; values > 1.0 slow it down.
Example: in “This is [slow] and [fast]”, use “3, 0.5” to make “slow” slower and “fast” faster.
samplingRate
The value, if provided, must be between 4000 and 44100. Default: 22050.
speedAlpha
Adjusts the speed of speech. Values lower than 1.0 are faster than the default; values higher than 1.0 are slower.
reduceLatency
Skips text normalization of the input text prior to synthesizing audio. This reduces latency at the cost of possible mispronunciation of digits and abbreviations.
segment
Controls how text is segmented for synthesis. Available options:
- “immediate” - Synthesizes text immediately without waiting for complete sentences
- “never” - Never segments the text, waits for explicit flush or EOS
- “bySentence” (default) - Waits for complete sentences before synthesis
Passing `immediate=true` in query params is equivalent to `segment=immediate`, as sketched below. If a null value is provided, it will default to “bySentence”.
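For illustration, using the hypothetical endpoint from the Overview sketch, these two connection URLs request the same behavior:

```ts
// Dedicated flag vs. the segment option; both yield immediate synthesis.
const byFlag = "wss://users.rime.ai/ws2?immediate=true";
const byOption = "wss://users.rime.ai/ws2?segment=immediate";
```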
If set to `true`, Rime will save any words that are currently OOV (out of vocabulary) for the User or Team to review on the Speech QA dashboard.