AI Dev Kit

    Namespace Glitch9.AIDevKit

    Classes

    AIDevKitAsset

    AIDevKitSettings

    AIProviders

    AIRequest

    AIRequest.ModelRequestBuilder<TBuilder, TRequest>

    AIResponse

    Base class for all responses from AI models.

    Annotation

    AnnotationConverter

    ApiFile

    ApiKey

    ApproximateLocation

    AudioBase64ContentPart

    AudioContentPart

    AudioRef

    BlockedPromptException

    BrokenResponseException

    ChatChoice

    ChatCompletion

    ChatCompletionChunk

    ChatCompletionRequest

    ChatCompletionRequest.Builder

    ChatCompletionStreamHandler

    ChatCompletionStreamReceiver

    ChatDelta

    ChatMessage

    CompletionRequest

    CompletionRequest.Builder

    CompletionRequestBase

    CompletionRequestBase.CompletionRequestBuilder<TBuilder, TRequest>

    CompletionRequestConverter<TRequest>

    CompletionResult<T>

Base class for content generated via CompletionRequest or ChatCompletionRequest. This class contains the ToolCalls that the AI model wants to invoke.

    Content

    ContentPart

    ContentPartWrapper

An array of content parts with a defined type; each part can be of type Text or Image_Url when passing in images. You can pass multiple images by adding multiple Image_Url content parts. Image input is only supported when using the gpt-4-vision-preview model.

    DeprecatedModelException

    EmptyPromptException

    EmptyResponseException

    ErrorResponse

    ErrorResponseWrapper

    FileBase64ContentPart

    FileContentPart

    FileIdContentPart

    FileLibrary

    ScriptableObject database for storing file data. This database is used to keep track of the files available in the AI library.

    FileLibrary.Repo

    Database for storing file data.

    FileRef

    FunctionCall

    Represents a function call to be used with the AI chat.

    FunctionDeclaration

Structured representation of a function declaration as defined by the OpenAPI 3.0.3 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration represents a block of code that can be used as a Tool by the model and executed by the client.

    FunctionDeclarationConverter

    FunctionResponse

A predicted FunctionResponse returned from the model, containing a string representing the Name along with the arguments and their values.

GENAudioInputTask<TSelf, TOutput>

    GENAudioIsolationTask

    GENAudioOutputTask<TSelf, TPrompt>

    GENCompletionTask<TSelf, TOutput>

    Base class for GENTasks which utilize a CompletionRequest or a ChatCompletionRequest.

    GENImageTask

    Task for generating image(s) from text using supported models (e.g., OpenAI DALL·E, Google Imagen).

    GENImageVariationTask

    Legacy API. Only works with DALL·E 2.

    GENInpaintTask

    Task for editing an existing image based on a text prompt and optional mask (OpenAI or Google Gemini).

    GENModerationTask

Task for moderating content. Audio is not supported yet.

    GENResponseTask

Task for generating text using an LLM. Supports instructions and role-based prompts.

    GENSequence

    GENSoundEffectTask

    Task for generating sound effects based on a text prompt.

    GENSpeechTask

    Task for generating synthetic speech (text-to-speech) using the specified model.

    GENStructTask<T>

Task for generating structured output (e.g., JSON) using an LLM.

    GENTaskExtensions

    Provides fluent, chainable extension methods that create and configure AI generation tasks. These helpers let you start a task directly from the host object (string, AudioClip, Texture2D, etc.) and then continue the configuration via the task's fluent API.

    Typical usage:

    // Create a chat-like text generation
    "Describe a cat playing piano."
        .GENText()
        .SetModel(OpenAIModel.GPT4o)
        .ExecuteAsync();
    
    // Transcribe recorded speech
    audioClip.GENTranscript().ExecuteAsync();

    GENTask<TSelf, TPrompt, TOutput>

    Abstract base class for all generative AI tasks (text, image, audio). Supports text, image, and audio prompts with fluent configuration methods.

    GENTranscriptTask

    Task for converting speech audio into text (speech-to-text).

    GENTranslationTask

    Task for translating speech into English text using the speech translation model.

    GENVideoTask

    GENVoiceChangeTask

    GeneratedAsset<TAsset, TFile>

Base class for generated assets: images, audio, and videos. These assets are 'files' that can be downloaded and used in the application.

    GeneratedAudio

Represents a generated audio clip or a collection of generated audio clips. This class provides implicit conversions to the AudioClip type for easy usage.

    GeneratedImage

    Represents a generated image or a collection of generated images. This class provides implicit conversions to Texture2D and Sprite types for easy usage.

    GeneratedImageExtensions

    GeneratedResult<T>

An AI-generated result may be a single value or multiple values, and you cannot know which in advance. This class therefore represents both cases: a single value or an array of values.
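To illustrate the idea, here is a minimal, hypothetical sketch of such a wrapper. This is not the actual GeneratedResult<T> implementation; the type name SingleOrMany<T> and its members are invented for illustration only:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch only — the real GeneratedResult<T> may differ.
// A wrapper that can hold one value or many, usable either way.
public class SingleOrMany<T>
{
    private readonly List<T> _values;

    public SingleOrMany(params T[] values) => _values = new List<T>(values);

    // Single-value view: the first (often only) generated value.
    public T First => _values.Count > 0 ? _values[0] : default;

    // Multi-value view: all generated values.
    public IReadOnlyList<T> All => _values;

    // Implicit conversion so callers expecting a single T "just work".
    public static implicit operator T(SingleOrMany<T> result) => result.First;
}
```

With this shape, `string text = result;` works when the caller expects one value, while `result.All` exposes every value when the model returned several.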

    GeneratedVideo

Unity does not support creating a VideoClip at runtime. Therefore, GeneratedVideo only stores the URLs of the video files.

    ImageBase64ContentPart

    ImageContentPart

    ImageContentPartConverter

    ImageFileIdContentPart

    ImageMessage

    ImageRef

    ImageUrlContentPart

    InpaintPrompt

    A specialized prompt for inpainting tasks.
    This class is used to pass the instruction and the image to the inpainting model for GENInpaintTask.

    InterruptedResponseException

    InvalidEndpointException

    Thrown when a GenAI provider does not support a specific feature.

    InvalidServiceStatusException

    JsonSchemaFormat

    LocalModelOptionsConverter

    Logprobs

    Log probability information for the choice.

    LogprobsContent

    A list of message content tokens with log probability information.

    Model

    ScriptableObject representation of a generative AI model with metadata, configuration, and pricing information. Supports token limits, ownership, creation time, and dynamic pricing for various content types (text, image, audio).

    ModelFamily

    Defines the family names of various AI models and services.

    ModelFilter

    ModelLibrary

    ScriptableObject database for storing model data. This database is used to keep track of the models available in the AI library.

    ModelLibrary.Repo

    Database for storing model data.

    ModelPrice

    ModelPriceArrayExtensions

    ModelSettings

This class defines a flexible set of parameters that control how text is generated by a language model. All of these parameters are optional, but tuning them allows precise control over randomness, token filtering, sampling behavior, and performance.

    ModelSettings.Builder

    ModerationOptions

    Configuration options for chat moderation.

    MultiResponseStreamHandler

    MultiResponseStreamReceiver

    NotSupportedFeatureException

    OllamaModelOptionsConverter

    ProjectContext

    PromptFeedback

A set of feedback metadata for the prompt specified in GenerateContentRequest.Contents.

    PromptHistory

    ScriptableObject database for storing prompt history data. This database is used to keep track of the prompts sent to the AI and their responses. It is useful for debugging and analyzing the performance of the AI.

    PromptHistory.Repo

    Database for storing prompt history data.

    PromptRecord

    RateLimitExceededException

    ReasoningOptions

    ResponseFormat

    ResponseFormatConverter

Converts a Format (enum) or a ResponseFormat (struct wrapper) in JSON form to a ResponseFormat (enum).

    ResponseFormatExtensions

    ResponseMessage

    SafetyFeedback

    Safety feedback for an entire request.

    This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.

    SafetyRating

A safety rating associated with a GenerateContentCandidate.

    SafetySetting

    Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.

    SingleResponseStreamHandler

    SingleResponseStreamReceiver

    SpeechOutputOptions

    StreamOptions

    StreamingTextEventReceiver

    StrictJsonSchema

    StrictJsonSchemaAttribute

    StrictJsonSchemaConverter

    StructuredOutput<T>

    SystemMessage

    TextContentPart

    TextContentPartConverter

    TextRef

    ToolCall

    ToolMessage

    Transcript

    Usage

    Usage statistics for the completion request.

    UsageConverter

    UsageExtensions

    UsageTypeExtensions

    UserLocation

    UserMessage

    Voice

    VoiceFilter

    VoiceGenderConverter

    VoiceLibrary

    ScriptableObject database for storing voice data used for TTS (Text-to-Speech) and other voice-related tasks.

    VoiceLibrary.Repo

    Database for storing voice data.

    VoiceTypeConverter

    WebSearchOptions

    WebSearchOptionsWrapper

    WebSocketEventReceiver

    Interfaces

    IAIDevKitAssetFilter<T>

    IApiFile

    IChatCompletionStreamHandler

    IChatbot

    IGENTask

    IGeneratedResult

    IStreamingAudioEventReceiver

    IStreamingTextEventReceiver

    IToolCallReceiver

    IWebSocketEventReceiver

    Enums

    AnnotationType

    Api

    ChatRole

    ContentPartType

    HarmBlockThreshold

    Block at and beyond a specified harm probability.

    HarmCategory

    Represents the category of harm that a piece of content may fall into. This is used in moderation tasks to classify content based on its potential harm.

    HarmProbability

    Probability that a prompt or candidate matches a harm category.

    ImageDetail

    ImageFormat

    ImageQuality

The quality of the image that will be generated. HighDefinition creates images with finer details and greater consistency across the image. This parameter is only supported for OpenAIModel.DallE3.

    ImageSize

    ImageStyle

The style of the generated images. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This parameter is only supported for OpenAIModel.DallE3.

    Modality

    "Modality" refers to the type or form of data that a model is designed to process, either as input or output. In AI and machine learning contexts, modality describes the nature of the information being handled — such as text, image, audio, or video.

    For example:

    • A text-to-text model like GPT-4 processes text inputs and generates text outputs.
    • A text-to-image model like DALL·E takes text prompts and produces images.
    • A multimodal model like Gemini can process multiple types of data simultaneously, such as combining text and image inputs.

    The concept of modality helps categorize models based on the kinds of sensory or informational data they handle, and is especially important for understanding the capabilities and limitations of a model.

    ModelFeature

    Formerly known as ModelCapability, this enum represents the features that a model can support. It is used to determine the capabilities of a model and to check if a specific feature is supported.

    OpenAIServiceTier

    ProjectContext.LanguageTone

    ProjectContext.Platform

    ReasoningEffort

OpenAI-style reasoning effort setting; applies to o-series models only.

    ResponseType

    SearchContextSize

    StopReason

    Reason that a ChatCompletion request stopped generating tokens.

    TextFormat

    ToolType

    TranscriptFormat

    UsageType

    VoiceAge

    VoiceCategory

    The category of the voice.

    VoiceGender

    VoiceType

Generated by DocFX