Realtime Speech API
MaaS 4o realtime preview
Request Method
Websocket
Request Path
wss://{domain}/v1/realtime?model={modelName}
Request path parameters

| Name | Description | Example |
| --- | --- | --- |
| model | model name | gpt-4o-realtime-preview-2024-10-01 |
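The sketch below shows a minimal client connection in Python using the `websockets` package. The domain, model name, and authentication mechanism are placeholders/assumptions; substitute the values for your deployment.

```python
import asyncio
import json
import websockets  # pip install websockets

# Placeholder values: substitute your deployment's domain and model name.
URL = "wss://{domain}/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01".format(
    domain="example.com"
)

async def main():
    # Authentication is deployment-specific (e.g., an Authorization header or a
    # query parameter); add it here in whatever form your provider requires.
    async with websockets.connect(URL) as ws:
        # The first server event on a new connection is session.created.
        first_event = json.loads(await ws.recv())
        print(first_event["type"])  # expected: "session.created"

asyncio.run(main())
```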
Request Body
When connected to the MaaS 4o realtime preview server, the client can send the following events.
1. Client events
These are events that the OpenAI Realtime WebSocket server will accept from the client.
1.1 session
1.1.1 session.update
Send this event to update the session’s default configuration. The client may send this event at any time to update the session configuration, and any field may be updated at any time, except for "voice". The server will respond with a session.updated event that shows the full effective configuration. Only fields that are present are updated; thus the correct way to clear a field like "instructions" is to pass an empty string.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "session.update". |
| session | object | Realtime session object configuration. |
1.1.1.1 session
| Name | Type | Description |
| --- | --- | --- |
| modalities | array | The set of modalities the model can respond with. To disable audio, set this to ["text"]. |
| instructions | string | The default system instructions (i.e., system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g., "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g., "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior. Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session. |
| voice | string | The voice the model uses to respond. Supported voices are ash, ballad, coral, sage, and verse (also supported but not recommended are alloy, echo, and shimmer; these voices are less expressive). Cannot be changed once the model has responded with audio at least once. |
| input_audio_format | string | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. |
| output_audio_format | string | The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. |
| input_audio_transcription | object | Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through Whisper and should be treated as rough guidance rather than the representation understood by the model. |
| turn_detection | object | Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. |
| tools | array | Tools (functions) available to the model. |
| tool_choice | string | How the model chooses tools. Options are auto, none, required, or specify a function. |
| temperature | number | Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8. |
| max_response_output_tokens | integer or "inf" | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. |
1.1.1.1.1 input_audio_transcription

| Name | Type | Description |
| --- | --- | --- |
| model | string | The model to use for transcription; whisper-1 is the only currently supported model. |
1.1.1.1.2 turn_detection
| Name | Type | Description |
| --- | --- | --- |
| type | string | Type of turn detection; only server_vad is currently supported. |
| threshold | number | Activation threshold for VAD (0.0 to 1.0), defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments. |
| prefix_padding_ms | integer | Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms. |
| silence_duration_ms | integer | Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values, the model will respond more quickly, but may jump in on short pauses from the user. |
1.1.1.1.3 tools

| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of the tool, i.e., function. |
| name | string | The name of the function. |
| description | string | The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything). |
| parameters | object | Parameters of the function in JSON Schema. |
example
{
"event_id": "event_123",
"type": "session.update",
"session": {
"modalities": ["text", "audio"],
"instructions": "Your knowledge cutoff is 2023-10. You are a helpful assistant.",
"voice": "alloy",
"input_audio_format": "pcm16",
"output_audio_format": "pcm16",
"input_audio_transcription": {
"model": "whisper-1"
},
"turn_detection": {
"type": "server_vad",
"threshold": 0.5,
"prefix_padding_ms": 300,
"silence_duration_ms": 500
},
"tools": [
{
"type": "function",
"name": "get_weather",
"description": "Get the current weather for a location, tell the user you are fetching the weather.",
"parameters": {
"type": "object",
"properties": {
"location": { "type": "string" }
},
"required": ["location"]
}
}
],
"tool_choice": "auto",
"temperature": 0.8,
"max_response_output_tokens": "inf"
}
}
1.2 input_audio_buffer
1.2.1 input_audio_buffer.append
Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. In Server VAD mode, the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually. The client may choose how much audio to place in each event, up to a maximum of 15 MiB; for example, streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike other client events, the server will not send a confirmation response to this event.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "input_audio_buffer.append". |
| audio | string | Base64-encoded audio bytes. This must be in the format specified by the input_audio_format field in the session configuration. |
example
{
"event_id": "event_456",
"type": "input_audio_buffer.append",
"audio": "Base64EncodedAudioData"
}
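As a sketch of the chunking described above, the snippet below base64-encodes raw PCM16 audio and streams it in smaller input_audio_buffer.append events. The chunk size and the already-open `ws` connection are assumptions.

```python
import base64
import json

CHUNK_BYTES = 32_000  # assumption: small chunks to keep server VAD responsive

async def stream_audio(ws, pcm16_bytes: bytes):
    """Send raw PCM16 audio to the server as multiple append events."""
    for offset in range(0, len(pcm16_bytes), CHUNK_BYTES):
        chunk = pcm16_bytes[offset:offset + CHUNK_BYTES]
        await ws.send(json.dumps({
            "type": "input_audio_buffer.append",
            "audio": base64.b64encode(chunk).decode("ascii"),
        }))
    # When Server VAD is disabled, commit manually once the utterance is finished.
    await ws.send(json.dumps({"type": "input_audio_buffer.commit"}))
```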
1.2.2 input_audio_buffer.commit
Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event; the server will commit the audio buffer automatically. Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "input_audio_buffer.commit". |
example
{
"event_id": "event_789",
"type": "input_audio_buffer.commit"
}
1.2.3 input_audio_buffer.clear
Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "input_audio_buffer.clear". |
{
"event_id": "event_012",
"type": "input_audio_buffer.clear"
}
1.3 conversation
1.3.1 conversation.item.create
Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.
If successful, the server will respond with a conversation.item.created event, otherwise an error event will be sent.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "conversation.item.create". |
| previous_item_id | string | The ID of the preceding item after which the new item will be inserted. If not set, the new item will be appended to the end of the conversation. If set, it allows an item to be inserted mid-conversation. If the ID cannot be found, an error will be returned and the item will not be added. |
| item | object | The item to add to the conversation. |
1.3.1.1 item
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the item, this can be generated by the client to help manage server-side context, but is not required because the server will generate one if not provided. |
| type | string | The type of the item (message, function_call, function_call_output). |
| status | string | The status of the item (completed, incomplete). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event. |
| role | string | The role of the message sender (user, assistant, system), only applicable for message items. |
| content | array | The content of the message, applicable for message items. Message items with a role of system support only input_text content; message items of role user support input_text and input_audio content; and message items of role assistant support text content. |
| call_id | string | The ID of the function call (for function_call and function_call_output items). If passed on a function_call_output item, the server will check that a function_call item with the same ID exists in the conversation history. |
| name | string | The name of the function being called (for function_call items). |
| arguments | string | The arguments of the function call (for function_call items). |
| output | string | The output of the function call (for function_call_output items). |
1.3.1.1.1 content
| Name | Type | Description |
| --- | --- | --- |
| type | string | The content type (input_text, input_audio, text). |
| text | string | The text content, used for input_text and text content types. |
| audio | string | Base64-encoded audio bytes, used for input_audio content type. |
| transcript | string | The transcript of the audio, used for input_audio content type. |
{
"event_id": "event_345",
"type": "conversation.item.create",
"previous_item_id": null,
"item": {
"id": "msg_001",
"type": "message",
"role": "user",
"content": [\
{\
"type": "input_text",\
"text": "Hello, how are you?"\
}\
]
}
}
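For instance, a client might seed the conversation with prior text turns before requesting a response. The helper below is a sketch; the example turns and the already-open `ws` connection are assumptions.

```python
import json

async def seed_history(ws):
    """Populate conversation history with text items, then ask for a response."""
    turns = [
        ("user", "input_text", "What's the weather like today?"),
        ("assistant", "text", "I can check that for you."),
    ]
    for role, content_type, text in turns:
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": role,
                "content": [{"type": content_type, "text": text}],
            },
        }))
    # Trigger model inference over the seeded context.
    await ws.send(json.dumps({"type": "response.create"}))
```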
1.3.2 conversation.item.truncate
Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.
Truncating audio will delete the server-side text transcript to ensure there is not text in the context that hasn't been heard by the user.
If successful, the server will respond with a conversation.item.truncated event.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "conversation.item.truncate". |
| item_id | string | The ID of the assistant message item to truncate. Only assistant message items can be truncated. |
| content_index | integer | The index of the content part to truncate. Set this to 0. |
| audio_end_ms | integer | Inclusive duration up to which audio is truncated, in milliseconds. If the audio_end_ms is greater than the actual audio duration, the server will respond with an error. |
{
"event_id": "event_678",
"type": "conversation.item.truncate",
"item_id": "msg_002",
"content_index": 0,
"audio_end_ms": 1500
}
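The audio_end_ms value should reflect how much of the assistant audio the user actually heard. A minimal sketch, assuming 24 kHz, 16-bit mono PCM playback (the sample rate is an assumption; use your session's actual output format):

```python
import json

SAMPLE_RATE_HZ = 24_000   # assumption: pcm16 output at 24 kHz
BYTES_PER_SAMPLE = 2      # 16-bit mono

async def truncate_played_audio(ws, item_id: str, bytes_played: int):
    """Truncate an assistant item at the point where local playback stopped."""
    audio_end_ms = int(bytes_played / BYTES_PER_SAMPLE / SAMPLE_RATE_HZ * 1000)
    await ws.send(json.dumps({
        "type": "conversation.item.truncate",
        "item_id": item_id,
        "content_index": 0,
        "audio_end_ms": audio_end_ms,
    }))
```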
1.3.3 conversation.item.delete
Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "conversation.item.delete". |
| item_id | string | The ID of the item to delete. |
{
"event_id": "event_901",
"type": "conversation.item.delete",
"item_id": "msg_003"
}
1.4 response
1.4.1 response.create
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history.
The server will respond with a response.created event, events for Items and content created, and finally a response.done event to indicate the Response is complete.
The response.create event includes inference configuration like instructions and temperature. These fields will override the Session's configuration for this Response only.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "response.create". |
| response | object | The response resource. |
1.4.1.1 response
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the response. |
| object | string | The object type, must be "realtime.response". |
| status | string | The final status of the response (completed, cancelled, failed, incomplete). |
| status_details | object | Additional details about the status. |
| output | array | The list of output items generated by the response. |
| usage | object | Usage statistics for the Response, corresponding to billing. A Realtime API session maintains a conversation context and appends new Items to the Conversation, thus output from previous turns (text and audio tokens) becomes the input for later turns. |
1.4.1.1.1 status_details
| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of error that caused the response to fail, corresponding with the status field (cancelled, incomplete, failed). |
1.4.1.1.2 usage
| Name | Type | Description |
| --- | --- | --- |
| total_tokens | integer | The total number of tokens in the Response, including input and output text and audio tokens. |
| input_tokens | integer | The number of input tokens used in the Response, including text and audio tokens. |
| output_tokens | integer | The number of output tokens sent in the Response, including text and audio tokens. |
| input_token_details | object | Details about the input tokens used in the Response. |
| output_token_details | object | Details about the output tokens used in the Response. |
input_token_details

| Name | Type | Description |
| --- | --- | --- |
| cached_tokens | integer | The number of cached tokens used in the Response. |
| text_tokens | integer | The number of text tokens used in the Response. |
| audio_tokens | integer | The number of audio tokens used in the Response. |
output_token_details
| Name | Type | Description |
| --- | --- | --- |
| text_tokens | integer | The number of text tokens used in the Response. |
| audio_tokens | integer | The number of audio tokens used in the Response. |
{
"event_id": "event_234",
"type": "response.create",
"response": {
"modalities": ["text", "audio"],
"instructions": "Please assist the user.",
"voice": "alloy",
"output_audio_format": "pcm16",
"tools": [\
{\
"type": "function",\
"name": "calculate_sum",\
"description": "Calculates the sum of two numbers.",\
"parameters": {\
"type": "object",\
"properties": {\
"a": { "type": "number" },\
"b": { "type": "number" }\
},\
"required": ["a", "b"]\
}\
}\
],
"tool_choice": "auto",
"temperature": 0.7,
"max_output_tokens": 150
}
}
1.4.2 response.cancel
Send this event to cancel an in-progress response. The server will respond with a response.cancelled event or an error if there is no response to cancel.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | Optional client-generated ID used to identify this event. |
| type | string | The event type, must be "response.cancel". |
{
"event_id": "event_567",
"type": "response.cancel"
}
2. Server events
These are events emitted from the OpenAI Realtime WebSocket server to the client.
2.1 error
Returned when an error occurs, which could be a client problem or a server problem. Most errors are recoverable and the session will stay open; we recommend that implementors monitor and log error messages by default.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "error". |
| error | object | Details of the error. |
2.1.1 error
| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of error (e.g., "invalid_request_error", "server_error"). |
| code | string | Error code, if any. |
| message | string | A human-readable error message. |
| param | string | Parameter related to the error, if any. |
| event_id | string | The event_id of the client event that caused the error, if applicable. |
{
"event_id": "event_890",
"type": "error",
"error": {
"type": "invalid_request_error",
"code": "invalid_event",
"message": "The 'type' field is missing.",
"param": null,
"event_id": "event_567"
}
}
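A minimal receive loop that logs error events and dispatches other server events by type might look like the sketch below; the handler function is illustrative, not part of the API.

```python
import json

async def event_loop(ws):
    """Read server events and route them by the 'type' field."""
    async for raw in ws:
        event = json.loads(raw)
        if event["type"] == "error":
            # Most errors are recoverable; log them and keep the session open.
            print("server error:", event["error"]["message"])
        else:
            handle_event(event)

def handle_event(event: dict) -> None:
    # Illustrative placeholder: replace with your own per-type handling.
    print("event:", event["type"])
```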
2.2 session
2.2.1 session.created
Returned when a Session is created. Emitted automatically when a new connection is established as the first server event. This event will contain the default Session configuration.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "session.created". |
| session | object | Realtime session object configuration. |
2.2.1.1 session
| Name | Type | Description |
| --- | --- | --- |
| modalities | array | The set of modalities the model can respond with. To disable audio, set this to ["text"]. |
| instructions | string | The default system instructions (i.e., system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g., "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g., "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior. Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session. |
| voice | string | The voice the model uses to respond. Supported voices are ash, ballad, coral, sage, and verse (also supported but not recommended are alloy, echo, and shimmer; these voices are less expressive). Cannot be changed once the model has responded with audio at least once. |
| input_audio_format | string | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. |
| output_audio_format | string | The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. |
| input_audio_transcription | object | Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through Whisper and should be treated as rough guidance rather than the representation understood by the model. |
| turn_detection | object | Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. |
| tools | array | Tools (functions) available to the model. |
| tool_choice | string | How the model chooses tools. Options are auto, none, required, or specify a function. |
| temperature | number | Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8. |
| max_response_output_tokens | integer or "inf" | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. |
input_audio_transcription

| Name | Type | Description |
| --- | --- | --- |
| model | string | The model to use for transcription; whisper-1 is the only currently supported model. |
turn_detection
| Name | Type | Description |
| --- | --- | --- |
| type | string | Type of turn detection; only server_vad is currently supported. |
| threshold | number | Activation threshold for VAD (0.0 to 1.0), defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments. |
| prefix_padding_ms | integer | Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms. |
| silence_duration_ms | integer | Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values, the model will respond more quickly, but may jump in on short pauses from the user. |
tools

| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of the tool, i.e., function. |
| name | string | The name of the function. |
| description | string | The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything). |
| parameters | object | Parameters of the function in JSON Schema. |
{
"event_id": "event_1234",
"type": "session.created",
"session": {
"id": "sess_001",
"object": "realtime.session",
"model": "gpt-4o-realtime-preview-2024-10-01",
"modalities": ["text", "audio"],
"instructions": "",
"voice": "alloy",
"input_audio_format": "pcm16",
"output_audio_format": "pcm16",
"input_audio_transcription": null,
"turn_detection": {
"type": "server_vad",
"threshold": 0.5,
"prefix_padding_ms": 300,
"silence_duration_ms": 200
},
"tools": [],
"tool_choice": "auto",
"temperature": 0.8,
"max_response_output_tokens": null
}
}
2.2.2 session.updated
Returned when a session is updated with a session.update event, unless there is an error.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "session.updated". |
| session | object | Realtime session object configuration. |
2.2.2.1 session
| Name | Type | Description |
| --- | --- | --- |
| modalities | array | The set of modalities the model can respond with. To disable audio, set this to ["text"]. |
| instructions | string | The default system instructions (i.e., system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g., "be extremely succinct", "act friendly") and on audio behavior (e.g., "talk quickly"). Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session. |
| voice | string | The voice the model uses to respond. Supported voices are ash, ballad, coral, sage, and verse (also supported but not recommended are alloy, echo, and shimmer; these voices are less expressive). Cannot be changed once the model has responded with audio at least once. |
| input_audio_format | string | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. |
| output_audio_format | string | The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. |
| input_audio_transcription | object | Configuration for input audio transcription, defaults to off and can be set to null to turn off once on. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through Whisper and should be treated as rough guidance rather than the representation understood by the model. |
| turn_detection | object | Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. |
| tools | array | Tools (functions) available to the model. |
| tool_choice | string | How the model chooses tools. Options are auto, none, required, or specify a function. |
| temperature | number | Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8. |
| max_response_output_tokens | integer or "inf" | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. |
input_audio_transcription

| Name | Type | Description |
| --- | --- | --- |
| model | string | The model to use for transcription; whisper-1 is the only currently supported model. |
turn_detection
| Name | Type | Description |
| --- | --- | --- |
| type | string | Type of turn detection; only server_vad is currently supported. |
| threshold | number | Activation threshold for VAD (0.0 to 1.0), defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments. |
| prefix_padding_ms | integer | Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms. |
| silence_duration_ms | integer | Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values, the model will respond more quickly, but may jump in on short pauses from the user. |
tools

| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of the tool, i.e., function. |
| name | string | The name of the function. |
| description | string | The description of the function, including guidance on when and how to call it, and what to tell the user when calling (if anything). |
| parameters | object | Parameters of the function in JSON Schema. |
{
"event_id": "event_5678",
"type": "session.updated",
"session": {
"id": "sess_001",
"object": "realtime.session",
"model": "gpt-4o-realtime-preview-2024-10-01",
"modalities": ["text"],
"instructions": "New instructions",
"voice": "alloy",
"input_audio_format": "pcm16",
"output_audio_format": "pcm16",
"input_audio_transcription": {
"model": "whisper-1"
},
"turn_detection": null,
"tools": [],
"tool_choice": "none",
"temperature": 0.7,
"max_response_output_tokens": 200
}
}
2.3 conversation
2.3.1 conversation.created
Returned when a conversation is created. Emitted right after session creation.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "conversation.created". |
| conversation | object | The conversation resource. |
2.3.1.1 conversation
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the conversation. |
| object | string | The object type, must be "realtime.conversation". |
{
"event_id": "event_9101",
"type": "conversation.created",
"conversation": {
"id": "conv_001",
"object": "realtime.conversation"
}
}
2.3.2 conversation.item.created
Returned when a conversation item is created. There are several scenarios that produce this event:
- The server is generating a Response, which if successful will produce either one or two Items, which will be of type message (role assistant) or type function_call.
- The input audio buffer has been committed, either by the client or the server (in server_vad mode). The server will take the content of the input audio buffer and add it to a new user message Item.
- The client has sent a conversation.item.create event to add a new Item to the Conversation.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "conversation.item.created". |
| previous_item_id | string | The ID of the preceding item in the Conversation context, allows the client to understand the order of the conversation. |
| item | object | The item to add to the conversation. |
2.3.2.1 item
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the item, this can be generated by the client to help manage server-side context, but is not required because the server will generate one if not provided. |
| type | string | The type of the item (message, function_call, function_call_output). |
| status | string | The status of the item (completed, incomplete). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event. |
| role | string | The role of the message sender (user, assistant, system), only applicable for message items. |
| content | array | The content of the message, applicable for message items. Message items with a role of system support only input_text content, message items of role user support input_text and input_audio content, and message items of role assistant support text content. |
| call_id | string | The ID of the function call (for function_call and function_call_output items). If passed on a function_call_output item, the server will check that a function_call item with the same ID exists in the conversation history. |
| name | string | The name of the function being called (for function_call items). |
| arguments | string | The arguments of the function call (for function_call items). |
| output | string | The output of the function call (for function_call_output items). |
content
| Name | Type | Description |
| --- | --- | --- |
| type | string | The content type (input_text, input_audio, text). |
| text | string | The text content, used for input_text and text content types. |
| audio | string | Base64-encoded audio bytes, used for input_audio content type. |
| transcript | string | The transcript of the audio, used for input_audio content type. |
{
"event_id": "event_1920",
"type": "conversation.item.created",
"previous_item_id": "msg_002",
"item": {
"id": "msg_003",
"object": "realtime.item",
"type": "message",
"status": "completed",
"role": "user",
"content": [\
{\
"type": "input_audio",\
"transcript": "hello how are you",\
"audio": "base64encodedaudio=="\
}\
]
}
}
2.3.3 conversation.item.input_audio_transcription.completed
This event is the output of audio transcription for user audio written to the user audio buffer. Transcription begins when the input audio buffer is committed by the client or server (in server_vad mode). Transcription runs asynchronously with Response creation, so this event may come before or after the Response events.
Realtime API models accept audio natively, and thus input transcription is a separate process run on a separate ASR (Automatic Speech Recognition) model, currently always whisper-1. Thus the transcript may diverge somewhat from the model's interpretation, and should be treated as a rough guide.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "conversation.item.input_audio_transcription.completed". |
| item_id | string | The ID of the user message item containing the audio. |
| content_index | integer | The index of the content part containing the audio. |
| transcript | string | The transcribed text. |
{
"event_id": "event_2122",
"type": "conversation.item.input_audio_transcription.completed",
"item_id": "msg_003",
"content_index": 0,
"transcript": "Hello, how are you?"
}
2.3.4 conversation.item.input_audio_transcription.failed
Returned when input audio transcription is configured, and a transcription request for a user message failed. These events are separate from other error events so that the client can identify the related Item.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "conversation.item.input_audio_transcription.failed". |
| item_id | string | The ID of the user message item. |
| content_index | integer | The index of the content part containing the audio. |
| error | object | Details of the transcription error. |
error
| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of error. |
| code | string | Error code, if any. |
| message | string | A human-readable error message. |
| param | string | Parameter related to the error, if any. |
{
"event_id": "event_2324",
"type": "conversation.item.input_audio_transcription.failed",
"item_id": "msg_003",
"content_index": 0,
"error": {
"type": "transcription_error",
"code": "audio_unintelligible",
"message": "The audio could not be transcribed.",
"param": null
}
}
2.3.5 conversation.item.truncated
Returned when an earlier assistant audio message item is truncated by the client with a conversation.item.truncate event. This event is used to synchronize the server's understanding of the audio with the client's playback.
This action will truncate the audio and remove the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "conversation.item.truncated". |
| item_id | string | The ID of the assistant message item that was truncated. |
| content_index | integer | The index of the content part that was truncated. |
| audio_end_ms | integer | The duration up to which the audio was truncated, in milliseconds. |
{
"event_id": "event_2526",
"type": "conversation.item.truncated",
"item_id": "msg_004",
"content_index": 0,
"audio_end_ms": 1500
}
2.3.6 conversation.item.deleted
Returned when an item in the conversation is deleted by the client with a conversation.item.delete event. This event is used to synchronize the server's understanding of the conversation history with the client's view.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "conversation.item.deleted". |
| item_id | string | The ID of the item that was deleted. |
{
"event_id": "event_2728",
"type": "conversation.item.deleted",
"item_id": "msg_005"
}
2.4 input_audio_buffer
2.4.1 input_audio_buffer.committed
Returned when an input audio buffer is committed, either by the client or automatically in server VAD mode. The item_id property is the ID of the user message item that will be created, thus a conversation.item.created event will also be sent to the client.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "input_audio_buffer.committed". |
| previous_item_id | string | The ID of the preceding item after which the new item will be inserted. |
| item_id | string | The ID of the user message item that will be created. |
{
"event_id": "event_1121",
"type": "input_audio_buffer.committed",
"previous_item_id": "msg_001",
"item_id": "msg_002"
}
2.4.2 input_audio_buffer.cleared
Returned when the input audio buffer is cleared by the client with an input_audio_buffer.clear event.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "input_audio_buffer.cleared". |
{
"event_id": "event_1314",
"type": "input_audio_buffer.cleared"
}
2.4.3 input_audio_buffer.speech_started
Sent by the server when in server_vad mode to indicate that speech has been detected in the audio buffer. This can happen any time audio is added to the buffer (unless speech is already detected). The client may want to use this event to interrupt audio playback or provide visual feedback to the user. The client should expect to receive an input_audio_buffer.speech_stopped event when speech stops. The item_id property is the ID of the user message item that will be created when speech stops and will also be included in the input_audio_buffer.speech_stopped event (unless the client manually commits the audio buffer during VAD activation).
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "input_audio_buffer.speech_started". |
| audio_start_ms | integer | Milliseconds from the start of all audio written to the buffer during the session when speech was first detected. This will correspond to the beginning of audio sent to the model, and thus includes the prefix_padding_ms configured in the Session. |
| item_id | string | The ID of the user message item that will be created when speech stops. |
{
"event_id": "event_1516",
"type": "input_audio_buffer.speech_started",
"audio_start_ms": 1000,
"item_id": "msg_003"
}
2.4.4 input_audio_buffer.speech_stopped
Returned in server_vad mode when the server detects the end of speech in the audio buffer. The server will also send a conversation.item.created event with the user message item that is created from the audio buffer.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "input_audio_buffer.speech_stopped". |
| audio_end_ms | integer | Milliseconds since the session started when speech stopped. This will correspond to the end of audio sent to the model, and thus includes the min_silence_duration_ms configured in the Session. |
| item_id | string | The ID of the user message item that will be created. |
{
"event_id": "event_1718",
"type": "input_audio_buffer.speech_stopped",
"audio_end_ms": 2000,
"item_id": "msg_003"
}
2.5 response
2.5.1 response.created
Returned when a new Response is created. The first event of response creation, where the response is in an initial state of in_progress.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.created". |
| response | object | The response resource. |
2.5.1.1 response
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the response. |
| object | string | The object type, must be "realtime.response". |
| status | string | The final status of the response (completed, cancelled, failed, incomplete). |
| status_details | object | Additional details about the status. |
| output | array | The list of output items generated by the response. |
| usage | object | Usage statistics for the Response; this will correspond to billing. A Realtime API session will maintain a conversation context and append new Items to the Conversation, thus output from previous turns (text and audio tokens) will become the input for later turns. |
status_details
| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of error that caused the response to fail, corresponding with the status field (cancelled, incomplete, failed). |
| reason | string | The reason the Response did not complete. For a cancelled Response, one of "turn_detected" (the server VAD detected a new start of speech) or "client_cancelled" (the client sent a cancel event). For an incomplete Response, one of "max_output_tokens" or "content_filter" (the server-side safety filter activated and cut off the response). |
| error | object | A description of the error that caused the response to fail, populated when the status is failed. |
error
| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of error. |
| code | string | Error code, if any. |
usage
| Name | Type | Description |
| --- | --- | --- |
| total_tokens | integer | The total number of tokens in the Response including input and output text and audio tokens. |
| input_tokens | integer | The number of input tokens used in the Response, including text and audio tokens. |
| output_tokens | integer | The number of output tokens sent in the Response, including text and audio tokens. |
| input_token_details | object | Details about the input tokens used in the Response. |
| output_token_details | object | Details about the output tokens used in the Response. |
input_token_details
| Name | Type | Description |
| --- | --- | --- |
| cached_tokens | integer | The number of cached tokens used in the Response. |
| text_tokens | integer | The number of text tokens used in the Response. |
| audio_tokens | integer | The number of audio tokens used in the Response. |
output_token_details
| Name | Type | Description |
| --- | --- | --- |
| text_tokens | integer | The number of text tokens used in the Response. |
| audio_tokens | integer | The number of audio tokens used in the Response. |
{
"event_id": "event_2930",
"type": "response.created",
"response": {
"id": "resp_001",
"object": "realtime.response",
"status": "in_progress",
"status_details": null,
"output": [],
"usage": null
}
}
2.5.2 response.done
Returned when a Response is done streaming. Always emitted, no matter the final state. The Response object included in the response.done event will include all output Items in the Response but will omit the raw audio data.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.done". |
| response | object | The response resource. |
response
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the response. |
| object | string | The object type, must be realtime.response. |
| status | string | The final status of the response (completed, cancelled, failed, incomplete). |
| status_details | object | Additional details about the status. |
| output | array | The list of output items generated by the response. |
| usage | object | Usage statistics for the Response, this will correspond to billing. A Realtime API session will maintain a conversation context and append new items to the conversation, thus output from previous turns (text and audio tokens) will become the input for later turns. |
status_details
| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of error that caused the response to fail, corresponding with the status field (cancelled, incomplete, failed). |
| reason | string | The reason the Response did not complete. For a cancelled Response, one of turn_detected (the server VAD detected a new start of speech) or client_cancelled (the client sent a cancel event). For an incomplete Response, one of max_output_tokens or content_filter (the server-side safety filter activated and cut off the response). |
| error | object | A description of the error that caused the response to fail, populated when the status is failed. |
error
| Name | Type | Description |
| --- | --- | --- |
| type | string | The type of error. |
| code | string | Error code, if any. |
usage
| Name | Type | Description |
| --- | --- | --- |
| total_tokens | integer | The total number of tokens in the Response including input and output text and audio tokens. |
| input_tokens | integer | The number of input tokens used in the Response, including text and audio tokens. |
| output_tokens | integer | The number of output tokens sent in the Response, including text and audio tokens. |
| input_token_details | object | Details about the input tokens used in the Response. |
| output_token_details | object | Details about the output tokens used in the Response. |
input_token_details
| Name | Type | Description |
| --- | --- | --- |
| cached_tokens | integer | The number of cached tokens used in the Response. |
| text_tokens | integer | The number of text tokens used in the Response. |
| audio_tokens | integer | The number of audio tokens used in the Response. |
output_token_details
| Name | Type | Description |
| --- | --- | --- |
| text_tokens | integer | The number of text tokens used in the Response. |
| audio_tokens | integer | The number of audio tokens used in the Response. |
{
"event_id": "event_3132",
"type": "response.done",
"response": {
"id": "resp_001",
"object": "realtime.response",
"status": "completed",
"status_details": null,
"output": [\
{\
"id": "msg_006",\
"object": "realtime.item",\
"type": "message",\
"status": "completed",\
"role": "assistant",\
"content": [\
{\
"type": "text",\
"text": "Sure, how can I assist you today?"\
}\
]\
}\
],
"usage": {
"total_tokens":275,
"input_tokens":127,
"output_tokens":148,
"input_token_details": {
"cached_tokens":384,
"text_tokens":119,
"audio_tokens":8,
"cached_tokens_details": {
"text_tokens": 128,
"audio_tokens": 256
}
},
"output_token_details": {
"text_tokens":36,
"audio_tokens":112
}
}
}
}
2.5.3 response.output_item.added
Returned when a new Item is created during Response generation.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.output_item.added". |
| response_id | string | The ID of the Response to which the item belongs. |
| output_index | integer | The index of the output item in the Response. |
| item | object | The item to add to the conversation. |
2.5.3.1 item
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the item, this can be generated by the client to help manage server-side context, but is not required because the server will generate one if not provided. |
| type | string | The type of the item (message, function_call, function_call_output). |
| status | string | The status of the item (completed, incomplete). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event. |
| role | string | The role of the message sender (user, assistant, system), only applicable for message items. |
| content | array | The content of the message, applicable for message items. Message items with a role of system support only input_text content, message items of role user support input_text and input_audio content, and message items of role assistant support text content. |
| call_id | string | The ID of the function call (for function_call and function_call_output items). If passed on a function_call_output item, the server will check that a function_call item with the same ID exists in the conversation history. |
| name | string | The name of the function being called (for function_call items). |
| arguments | string | The arguments of the function call (for function_call items). |
| output | string | The output of the function call (for function_call_output items). |
2.5.3.1.1 content
| Name | Type | Description |
| --- | --- | --- |
| type | string | The content type (input_text, input_audio, text). |
| text | string | The text content, used for input_text and text content types. |
| audio | string | Base64-encoded audio bytes, used for input_audio content type. |
| transcript | string | The transcript of the audio, used for input_audio content type. |
{
"event_id": "event_3334",
"type": "response.output_item.added",
"response_id": "resp_001",
"output_index": 0,
"item": {
"id": "msg_007",
"object": "realtime.item",
"type": "message",
"status": "in_progress",
"role": "assistant",
"content": []
}
}
2.5.4 response.output_item.done
Returned when an Item is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.output_item.done". |
| response_id | string | The ID of the Response to which the item belongs. |
| output_index | integer | The index of the output item in the Response. |
| item | object | The item to add to the conversation. |
2.5.4.1 item
| Name | Type | Description |
| --- | --- | --- |
| id | string | The unique ID of the item, this can be generated by the client to help manage server-side context, but is not required because the server will generate one if not provided. |
| type | string | The type of the item (message, function_call, function_call_output). |
| status | string | The status of the item (completed, incomplete). These have no effect on the conversation, but are accepted for consistency with the conversation.item.created event. |
| role | string | The role of the message sender (user, assistant, system), only applicable for message items. |
| content | array | The content of the message, applicable for message items. Message items with a role of system support only input_text content, message items of role user support input_text and input_audio content, and message items of role assistant support text content. |
| call_id | string | The ID of the function call (for function_call and function_call_output items). If passed on a function_call_output item, the server will check that a function_call item with the same ID exists in the conversation history. |
| name | string | The name of the function being called (for function_call items). |
| arguments | string | The arguments of the function call (for function_call items). |
| output | string | The output of the function call (for function_call_output items). |
2.5.4.1.1 content
| Name | Type | Description |
| --- | --- | --- |
| type | string | The content type (input_text, input_audio, text). |
| text | string | The text content, used for input_text and text content types. |
| audio | string | Base64-encoded audio bytes, used for input_audio content type. |
| transcript | string | The transcript of the audio, used for input_audio content type. |
{
"event_id": "event_3536",
"type": "response.output_item.done",
"response_id": "resp_001",
"output_index": 0,
"item": {
"id": "msg_007",
"object": "realtime.item",
"type": "message",
"status": "completed",
"role": "assistant",
"content": [\
{\
"type": "text",\
"text": "Sure, I can help with that."\
}\
]
}
}
2.5.5 response.content_part.added
Returned when a new content part is added to an assistant message item during response generation.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.content_part.added". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item to which the content part was added. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
| part | object | The content part that was added. |
2.5.5.1 part
| Name | Type | Description |
| --- | --- | --- |
| type | string | The content type ("text", "audio"). |
| text | string | The text content (if type is "text"). |
| audio | string | Base64-encoded audio data (if type is "audio"). |
| transcript | string | The transcript of the audio (if type is "audio"). |
{
"event_id": "event_3738",
"type": "response.content_part.added",
"response_id": "resp_001",
"item_id": "msg_007",
"output_index": 0,
"content_index": 0,
"part": {
"type": "text",
"text": ""
}
}
2.5.6 response.content_part.done
Returned when a content part is done streaming in an assistant message item. Also emitted when a Response is interrupted, incomplete, or cancelled.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.content_part.done". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
| part | object | The content part that is done. |
2.5.6.1 part
| Name | Type | Description |
| --- | --- | --- |
| type | string | The content type ("text", "audio"). |
| text | string | The text content (if type is "text"). |
| audio | string | Base64-encoded audio data (if type is "audio"). |
| transcript | string | The transcript of the audio (if type is "audio"). |
{
"event_id": "event_3940",
"type": "response.content_part.done",
"response_id": "resp_001",
"item_id": "msg_007",
"output_index": 0,
"content_index": 0,
"part": {
"type": "text",
"text": "Sure, I can help with that."
}
}
2.5.7 response.text.delta
Returned when the text value of a "text" content part is updated.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.text.delta". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
| delta | string | The text delta. |
{
"event_id": "event_4142",
"type": "response.text.delta",
"response_id": "resp_001",
"item_id": "msg_007",
"output_index": 0,
"content_index": 0,
"delta": "Sure, I can h"
}
2.5.8 response.text.done
Returned when the text value of a "text" content part is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.text.done". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
| text | string | The final text content. |
{
"event_id": "event_4344",
"type": "response.text.done",
"response_id": "resp_001",
"item_id": "msg_007",
"output_index": 0,
"content_index": 0,
"text": "Sure, I can help with that."
}
2.5.9 response.audio_transcript.delta
Returned when the model-generated transcription of audio output is updated.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.audio_transcript.delta". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
| delta | string | The transcript delta. |
{
"event_id": "event_4546",
"type": "response.audio_transcript.delta",
"response_id": "resp_001",
"item_id": "msg_008",
"output_index": 0,
"content_index": 0,
"delta": "Hello, how can I a"
}
2.5.10 response.audio_transcript.done
Returned when the model-generated transcription of audio output is done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.audio_transcript.done". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
| transcript | string | The final transcript of the audio. |
{
"event_id": "event_4748",
"type": "response.audio_transcript.done",
"response_id": "resp_001",
"item_id": "msg_008",
"output_index": 0,
"content_index": 0,
"transcript": "Hello, how can I assist you today?"
}
2.5.11 response.audio.delta
Returned when the model-generated audio is updated.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.audio.delta". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
| delta | string | Base64-encoded audio data delta. |
{
"event_id": "event_4950",
"type": "response.audio.delta",
"response_id": "resp_001",
"item_id": "msg_008",
"output_index": 0,
"content_index": 0,
"delta": "Base64EncodedAudioDelta"
}
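Audio deltas arrive as base64 strings and must be decoded and concatenated in arrival order (or fed directly to a player). A minimal sketch, assuming pcm16 output and a helper class that is not part of the API:

```python
import base64

class AudioAccumulator:
    """Collects response.audio.delta payloads into one PCM16 byte buffer."""

    def __init__(self) -> None:
        self.pcm16 = bytearray()

    def on_audio_delta(self, event: dict) -> None:
        # Decode the base64 delta and append it to the running buffer.
        self.pcm16.extend(base64.b64decode(event["delta"]))
```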
2.5.12 response.audio.done
Returned when the model-generated audio is done. Also emitted when a Response is interrupted, incomplete, or cancelled.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.audio.done". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the item. |
| output_index | integer | The index of the output item in the response. |
| content_index | integer | The index of the content part in the item's content array. |
{
"event_id": "event_5152",
"type": "response.audio.done",
"response_id": "resp_001",
"item_id": "msg_008",
"output_index": 0,
"content_index": 0
}
2.5.13 response.function_call_arguments.delta
Returned when the model-generated function call arguments are updated.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.function_call_arguments.delta". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the function call item. |
| output_index | integer | The index of the output item in the response. |
| call_id | string | The ID of the function call. |
| delta | string | The arguments delta as a JSON string. |
{
"event_id": "event_5354",
"type": "response.function_call_arguments.delta",
"response_id": "resp_002",
"item_id": "fc_001",
"output_index": 0,
"call_id": "call_001",
"delta": "{\"location\": \"San\""
}
2.5.14 response.function_call_arguments.done
Returned when the model-generated function call arguments are done streaming. Also emitted when a Response is interrupted, incomplete, or cancelled.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "response.function_call_arguments.done". |
| response_id | string | The ID of the response. |
| item_id | string | The ID of the function call item. |
| output_index | integer | The index of the output item in the response. |
| call_id | string | The ID of the function call. |
| arguments | string | The final arguments as a JSON string. |
{
"event_id": "event_5556",
"type": "response.function_call_arguments.done",
"response_id": "resp_002",
"item_id": "fc_001",
"output_index": 0,
"call_id": "call_001",
"arguments": "{\"location\": \"San Francisco\"}"
}
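Once the final arguments arrive, the client typically runs the function itself, returns the result as a function_call_output item, and requests another response. A sketch, reusing the get_weather tool from the earlier session.update example (the stand-in result value is an assumption):

```python
import json

async def on_function_call_done(ws, event: dict):
    """Handle response.function_call_arguments.done for a get_weather-style tool."""
    args = json.loads(event["arguments"])
    # Stand-in for a real lookup; replace with your own implementation.
    result = {"location": args["location"], "temperature_c": 18}
    await ws.send(json.dumps({
        "type": "conversation.item.create",
        "item": {
            "type": "function_call_output",
            "call_id": event["call_id"],
            "output": json.dumps(result),
        },
    }))
    # Ask the model to continue now that the tool result is in the conversation.
    await ws.send(json.dumps({"type": "response.create"}))
```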
2.5.15 rate_limits.updated
Emitted at the beginning of a Response to indicate the updated rate limits. When a Response is created, some tokens will be "reserved" for the output tokens; the rate limits shown here reflect that reservation, which is then adjusted accordingly once the Response is completed.
| Name | Type | Description |
| --- | --- | --- |
| event_id | string | The unique ID of the server event. |
| type | string | The event type, must be "rate_limits.updated". |
| rate_limits | array | List of rate limit information. |
rate_limits
| Name | Type | Description |
| --- | --- | --- |
| name | string | The name of the rate limit (requests, tokens). |
| limit | integer | The maximum allowed value for the rate limit. |
| remaining | integer | The remaining value before the limit is reached. |
| reset_seconds | number | Seconds until the rate limit resets. |
{
"event_id": "event_5758",
"type": "rate_limits.updated",
"rate_limits": [\
{\
"name": "requests",\
"limit": 1000,\
"remaining": 999,\
"reset_seconds": 60\
},\
{\
"name": "tokens",\
"limit": 50000,\
"remaining": 49950,\
"reset_seconds": 60\
}\
]
}
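Clients can use the remaining and reset_seconds values for simple client-side throttling. A minimal sketch (the backoff policy is an assumption, not prescribed by the API):

```python
import asyncio

async def wait_if_exhausted(event: dict) -> None:
    """Pause sending when any reported rate limit has no remaining budget."""
    for limit in event["rate_limits"]:
        if limit["remaining"] <= 0:
            await asyncio.sleep(limit["reset_seconds"])
```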