Class RealtimeEvent.Server.OutputAudioBuffer
Namespace: Glitch9.AIDevKit.OpenAI.Realtime
Assembly: Glitch9.AIDevKit.Provider.OpenAI.dll
Syntax
public static class RealtimeEvent.Server.OutputAudioBuffer
Fields
Cleared
WebRTC Only: Emitted when the output audio buffer is cleared. This happens either in VAD mode when the user has interrupted (input_audio_buffer.speech_started), or when the client has emitted the output_audio_buffer.clear event to manually cut off the current audio response. (https://platform.openai.com/docs/guides/realtime-conversations#client-and-server-events-for-audio-in-webrtc)
Declaration
public const string Cleared = "output_audio_buffer.cleared"
Field Value
| Type | Description |
|---|---|
| string | |
Started
WebRTC Only: Emitted when the server begins streaming audio to the client. This event is emitted after an audio content part has been added (response.content_part.added) to the response. (https://platform.openai.com/docs/guides/realtime-conversations#client-and-server-events-for-audio-in-webrtc)
Declaration
public const string Started = "output_audio_buffer.started"
Field Value
| Type | Description |
|---|---|
| string | |
Stopped
WebRTC Only: Emitted when the output audio buffer has been completely drained on the server and no more audio is forthcoming. This event is emitted after the full response data has been sent to the client (response.done). (https://platform.openai.com/docs/guides/realtime-conversations#client-and-server-events-for-audio-in-webrtc)
Declaration
public const string Stopped = "output_audio_buffer.stopped"
Field Value
| Type | Description |
|---|---|
| string | |
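The three constants above are the `type` strings of WebRTC-only server events. A minimal sketch of dispatching on them when an incoming event payload is received; the `Route` method and its comments are illustrative assumptions, not part of the AIDevKit API:

```csharp
// Sketch: route output audio buffer events by their "type" string.
// Only the constant names and values come from this class; the router
// itself is a hypothetical example.
public static class OutputAudioBufferRouter
{
    public static void Route(string eventType)
    {
        switch (eventType)
        {
            case RealtimeEvent.Server.OutputAudioBuffer.Started:
                // "output_audio_buffer.started": the server began streaming
                // audio, after response.content_part.added.
                break;
            case RealtimeEvent.Server.OutputAudioBuffer.Stopped:
                // "output_audio_buffer.stopped": the buffer drained fully,
                // after response.done.
                break;
            case RealtimeEvent.Server.OutputAudioBuffer.Cleared:
                // "output_audio_buffer.cleared": VAD interruption
                // (input_audio_buffer.speech_started) or a client-sent
                // output_audio_buffer.clear cut off the current response.
                break;
        }
    }
}
```

Because these are `const string` fields, they are valid `case` labels and the comparison is a compile-time string match rather than a runtime lookup.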