AI Dev Kit

    Class ChatCompletionRequest

    Inheritance
    object
    RequestBody
    ModelRequest
    CompletionRequestBase
    ChatCompletionRequest
    Inherited Members
    CompletionRequestBase.Prompt
    CompletionRequestBase.SystemInstruction
    CompletionRequestBase.Stream
    CompletionRequestBase.StreamOptions
    CompletionRequestBase.ModelOptions
    CompletionRequestBase.ResponseFormat
    CompletionRequestBase.ReasoningOptions
    CompletionRequestBase.Models
    CompletionRequestBase.Transforms
    CompletionRequestBase.KeepAlive
    CompletionRequestBase.AttachedFiles
    ModelRequest.Model
    ModelRequest.N
    ModelRequest.Metadata
    ModelRequest.User
    Namespace: Glitch9.AIDevKit
    Assembly: .dll
    Syntax
    public class ChatCompletionRequest : CompletionRequestBase

    Properties

    Audio

    Parameters for audio output. Required when audio output is requested with modalities: ["audio"].

    Declaration
    public SpeechOutputOptions Audio { get; set; }
    Property Value
    Type Description
    SpeechOutputOptions
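
    A minimal sketch of requesting audio output, pairing Audio with Modalities as described above. Modality.Text, Modality.Audio, and the parameterless SpeechOutputOptions constructor are assumed names, not confirmed by this page.

    using Glitch9.AIDevKit;

    // Sketch only: the enum members and the SpeechOutputOptions constructor are assumptions.
    var request = new ChatCompletionRequest
    {
        Modalities = Modality.Text | Modality.Audio, // audio output requires "audio" among the modalities
        Audio = new SpeechOutputOptions()            // configure voice/format here if the type exposes them
    };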

    Messages

    Required. The messages in the conversation.

    Declaration
    public List<ChatMessage> Messages { get; set; }
    Property Value
    Type Description
    List<ChatMessage>
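
    A minimal construction sketch, assuming ChatCompletionRequest has a parameterless constructor and that ChatMessage can be built from a role string plus text; this page only documents the property type, so the ChatMessage constructor shown is an assumption.

    using System.Collections.Generic;
    using Glitch9.AIDevKit;

    // Sketch only: the ChatMessage(role, text) constructor is an assumed signature.
    var request = new ChatCompletionRequest
    {
        Messages = new List<ChatMessage>
        {
            new ChatMessage("user", "Summarize the attached notes in three bullet points.")
        }
    };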

    Modalities

    Output types that you would like the model to generate. Most models are capable of generating text, which is the default.

    Declaration
    public Modality? Modalities { get; set; }
    Property Value
    Type Description
    Modality?

    ServiceTier

    Optional. Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:

    If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.

    If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.

    If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.

    If set to 'flex', the request will be processed with the Flex Processing service tier. Learn more.

    When not set, the default behavior is 'auto'. When this parameter is set, the response body will include the service_tier utilized.

    Declaration
    public OpenAIServiceTier? ServiceTier { get; set; }
    Property Value
    Type Description
    OpenAIServiceTier?
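
    A sketch of opting into a tier explicitly; OpenAIServiceTier.Flex below is an assumed enum member name mirroring the 'flex' value described above.

    using Glitch9.AIDevKit;

    // Sketch only: the enum member name is an assumption based on the documented values.
    var request = new ChatCompletionRequest
    {
        ServiceTier = OpenAIServiceTier.Flex // omit (leave null) to get the default 'auto' behavior
    };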

    StartingMessage

    Declaration
    public string StartingMessage { get; set; }
    Property Value
    Type Description
    string

    Summary

    Declaration
    public string Summary { get; set; }
    Property Value
    Type Description
    string

    ToolChoice

    Controls which (if any) Function is called by the model. none means the model will not call a Function and instead generates a message. auto means the model can pick between generating a message or calling a Function. Specifying a particular Function via {"type": "Function", "Function": {"name": "my_function"}} forces the model to call that Function. none is the default when no functions are present. auto is the default if functions are present.

    Declaration
    public ToolCall ToolChoice { get; set; }
    Property Value
    Type Description
    ToolCall

    Tools

    Optional. List of tools in JSON for the model to use if supported.

    Declaration
    public ToolCall[] Tools { get; set; }
    Property Value
    Type Description
    ToolCall[]
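
    A sketch of declaring tools and forcing one of them via ToolChoice. Only the property types (ToolCall and ToolCall[]) come from this page; how a ToolCall describing a Function is populated is not documented here, so the empty initializer is a placeholder.

    using Glitch9.AIDevKit;

    // Sketch only: populate the ToolCall with your Function definition; its members are not documented on this page.
    var weatherTool = new ToolCall();

    var request = new ChatCompletionRequest
    {
        Tools = new[] { weatherTool },
        ToolChoice = weatherTool // forces this Function; leave null for the default auto/none behavior
    };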

    WebSearchOptions

    This tool searches the web for relevant results to use in a response. Learn more about the web search tool.

    Declaration
    public WebSearchOptionsWrapper WebSearchOptions { get; set; }
    Property Value
    Type Description
    WebSearchOptionsWrapper
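
    A sketch of enabling the web search tool; whether WebSearchOptionsWrapper has a parameterless constructor with usable defaults is an assumption.

    using Glitch9.AIDevKit;

    // Sketch only: the wrapper's constructor and defaults are assumptions.
    var request = new ChatCompletionRequest
    {
        WebSearchOptions = new WebSearchOptionsWrapper()
    };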

    Extension Methods

    RequestExtensions.ExecuteAsync(ChatCompletionRequest)
    RequestExtensions.StreamAsync(ChatCompletionRequest, ChatStreamHandler)
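
    A usage sketch for the extension methods above. Only the method names and parameter types come from this page; that both calls are awaitable and how the ChatStreamHandler delivers deltas are assumptions.

    using System.Threading.Tasks;
    using Glitch9.AIDevKit;

    public static class ChatCompletionRequestSample
    {
        public static async Task RunAsync(ChatCompletionRequest request, ChatStreamHandler handler)
        {
            // One-shot completion via RequestExtensions.ExecuteAsync.
            await request.ExecuteAsync();

            // Streaming completion via RequestExtensions.StreamAsync; chunks are delivered to the handler.
            await request.StreamAsync(handler);
        }
    }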