Class RealtimeEvent.Server.OutputAudioBuffer

public static class RealtimeEvent.Server.OutputAudioBuffer
Inheritance
object
RealtimeEvent.Server.OutputAudioBuffer

Fields

Cleared

WebRTC Only: Emitted when the output audio buffer is cleared. This happens either in VAD mode when the user has interrupted (input_audio_buffer.speech_started), or when the client has emitted the output_audio_buffer.clear event to manually cut off the current audio response. (https://platform.openai.com/docs/guides/realtime-conversations#client-and-server-events-for-audio-in-webrtc)

public const string Cleared = "output_audio_buffer.cleared"

Field Value

string
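
For illustration, a minimal sketch of acting on this event: the client sends the output_audio_buffer.clear event named in the description above over the WebRTC data channel and treats Cleared as the server's confirmation that playback was flushed. The sendOverDataChannel delegate and the JSON shape of the client event are assumptions made for this sketch, not part of this class.

using System;
using System.Text.Json;
using System.Threading.Tasks;

class OutputAudioCutoff
{
    private readonly TaskCompletionSource _clearedAck = new();

    // Illustrative send path: serialize the client event named in the description above.
    public void RequestCutoff(Action<string> sendOverDataChannel) =>
        sendOverDataChannel(JsonSerializer.Serialize(new { type = "output_audio_buffer.clear" }));

    // Feed each raw server event received on the WebRTC data channel through here.
    public void OnServerEvent(string json)
    {
        using var doc = JsonDocument.Parse(json);
        if (doc.RootElement.GetProperty("type").GetString()
            == RealtimeEvent.Server.OutputAudioBuffer.Cleared)
        {
            _clearedAck.TrySetResult(); // the server has flushed its output audio buffer
        }
    }

    public Task ClearedReceived => _clearedAck.Task;
}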

Started

WebRTC Only: Emitted when the server begins streaming audio to the client. This event is emitted after an audio content part has been added (response.content_part.added) to the response. (https://platform.openai.com/docs/guides/realtime-conversations#client-and-server-events-for-audio-in-webrtc)

public const string Started = "output_audio_buffer.started"

Field Value

string
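
A minimal sketch of consuming this event, assuming server events arrive on the data channel as JSON with a "type" property; the updateSpeakingIndicator callback is an illustrative application-supplied UI hook, not part of the library.

using System;
using System.Text.Json;

// updateSpeakingIndicator is an illustrative UI callback supplied by the application.
void OnServerEvent(string json, Action<bool> updateSpeakingIndicator)
{
    using var doc = JsonDocument.Parse(json);
    if (doc.RootElement.GetProperty("type").GetString()
        == RealtimeEvent.Server.OutputAudioBuffer.Started)
    {
        updateSpeakingIndicator(true); // the server has begun streaming audio
    }
}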

Stopped

WebRTC Only: Emitted when the output audio buffer has been completely drained on the server, and no more audio is forthcoming. This event is emitted after the full response data has been sent to the client (response.done). (https://platform.openai.com/docs/guides/realtime-conversations#client-and-server-events-for-audio-in-webrtc)

public const string Stopped = "output_audio_buffer.stopped"

Field Value

string
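
Taken together, the three constants describe the lifecycle of a WebRTC audio response. The sketch below tracks that lifecycle under two assumptions: the "type" string has already been extracted from the incoming server event (as in the sketches above), and the PlaybackState enum and its transitions are illustrative rather than part of the library. The distinction it captures: Stopped means the buffer drained naturally after response.done, while Cleared means playback was cut short.

enum PlaybackState { Idle, Streaming, Completed, Interrupted }

static class OutputPlayback
{
    // Stopped = buffer drained naturally after response.done;
    // Cleared = playback was cut short (barge-in or a manual output_audio_buffer.clear).
    public static PlaybackState Apply(PlaybackState current, string eventType) =>
        eventType switch
        {
            RealtimeEvent.Server.OutputAudioBuffer.Started => PlaybackState.Streaming,
            RealtimeEvent.Server.OutputAudioBuffer.Stopped => PlaybackState.Completed,
            RealtimeEvent.Server.OutputAudioBuffer.Cleared => PlaybackState.Interrupted,
            _ => current, // unrelated server event: leave the state unchanged
        };
}

Modeling this as a pure state transition keeps the data-channel handler free of UI concerns and makes the interrupted-versus-completed distinction explicit.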