AI DevKit

    Namespace Glitch9.AIDevKit

    Classes

    AIClientException

    AIDevKitManager

    AIDevKitSettings

    AIMethodNotSupportedException

    Thrown when a GenAI provider does not support a specific feature.

    AIProviders

    AIRequestException

    AIResponseException

    AIServerSentException

    AnimationFrame

    Annotation

    AnthropicTypes

    Types used only by the Anthropic API.
    These types live here instead of the Anthropic assembly because they are used by GENTask and the UnityEditor Generator Windows.

    ApiAsset

    ApiClientSettings

    Base class for AI client settings. This class is used to store API keys and other settings related to AI clients.

    ApiClient<TSelf, TSettings>

    ApiClient<TSelf, TSettings>.JsonSerializerSettingsData

    ApiFile

    ApiKey

    AssistantMessage

    AudioBase64ContentPart

    AudioContentData

    AudioContentPart

    BlockedPromptException

    BrokenModelException

    BrokenResponseException

    BrokenVoiceException

    ChatChoice

    ChatCompletion

    Response from a LLM (Large Language Model) for a chat completion request.

    This class contains a list of ChatChoice objects, each representing a message generated by the model in response to a chat prompt.
    ChatMessage can be accessed through the ChatChoice (e.g., Choices[0].Message).
    If the 'n' value used in the request is greater than 1, the response can contain multiple choices; otherwise, it always contains a single choice.

    If this is a streamed response, the ChatChoice objects will contain ChatDelta instead of ChatMessage.
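
    A minimal sketch of reading the generated message, assuming 'completion' is a ChatCompletion instance returned by a chat request. Only the Choices[0].Message access path comes from the description above; the variable names and the logging call are illustrative.

    // 'completion' is assumed to be a ChatCompletion returned by a chat completion request.
    ChatMessage reply = completion.Choices[0].Message;
    Debug.Log(reply); // streamed responses carry a ChatDelta in each choice instead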

    ChatCompletionRequest

    ChatCompletionRequest.Builder

    ChatContent

    -- Class hierarchy --
    Text: ChatCompletion > ChatChoice[] > ChatMessage[] > ChatContent > TextOrChatContentPart > Text
    ChatContentPart: ChatCompletion > ChatChoice[] > ChatMessage[] > ChatContent > TextOrChatContentPart > ChatContentPart[]

    ChatContentPart

    ChatDelta

    ChatMessage

    ChatMessageExtensions

    CompletionRequest

    CompletionRequest.Builder

    CompletionRequest.CompletionRequestBuilder<TBuilder, TRequest>

    CompletionResult<T>

    Base class for content generated via a CompletionRequest or a ChatCompletionRequest. This class contains the ToolCalls that the AI model wants to invoke.

    CompletionTokensDetails

    Output token details for the completion. This includes tokens generated by the model, reasoning tokens, and tokens from predicted outputs.

    DefaultErrorHandler

    DeleteFileTask

    DeleteModelTask

    DeprecatedModelException

    DownloadFileTask

    ElevenLabsAudioIsolationOptions

    ElevenLabsOptions

    ElevenLabsSoundFXOptions

    ElevenLabsSpeechOptions

    ElevenLabsTypes

    Types used only by the ElevenLabs API.
    These types live here instead of the ElevenLabs assembly because they are used by GENTask and the UnityEditor Generator Windows.

    ElevenLabsVoiceChangeOptions

    EmptyPromptException

    EmptyResponseException

    ErrorResponse

    FileBase64ContentPart

    FileContentData

    FileContentPart

    FileIdContentPart

    FileLibrary

    ScriptableObject database for storing file data. This database is used to keep track of the files available in the AI library.

    FileLibrary.Repo

    Database for storing file data.

    FunctionCall

    Represents a function call to be used with the AI chat.

    FunctionDeclaration

    Structured representation of a function declaration as defined by the OpenAPI 3.03 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.

    FunctionDeclarationConverter

    FunctionResponse

    A predicted FunctionResponse returned from the model, containing a string representing the Name together with the arguments and their values.

    GENAudioGenerationTask<TSelf, TOptions, TPrompt>

    GENAudioIsolationTask

    GENAudioRecordingTask<TSelf, TResult>

    GENCodeTask

    Task for generating code snippets or scripts for Unity C#. Added on 2025.05.28.

    GENImageGenerationTask<TTask, TPrompt>

    GENImageTask

    Task for generating image(s) from text using supported models (e.g., OpenAI DALL·E, Google Imagen).

    GENInpaintTask

    Task for editing an existing image based on a text prompt and optional mask (OpenAI or Google Gemini).

    GENModerationTask

    Task for moderating content. Audio input is not supported yet.

    GENPixelAnimationTask

    GENPixelArtGenerationTask<TSelf, TOptions, TPrompt, TResult>

    GENPixelArtTask

    GENPixelArtTaskExtensions

    GENPixelInpaintTask

    GENPixelRotationTask

    GENResponseTask

    Task for generating text using an LLM model. Supports instructions and role-based prompts.

    GENSequence

    GENSoundEffectTask

    Task for generating sound effects based on a text prompt.

    GENSpeechTask

    Task for generating synthetic speech (text-to-speech) using the specified model.

    GENStructTask<T>

    Task for generating structured output (e.g., JSON) using an LLM model.

    GENTask<TSelf, TOptions, TPrompt, TResult>

    Abstract base class for all generative AI tasks (text, image, audio). Supports text, image, and audio prompts with fluent configuration methods.

    GENTextGenerationTask<TSelf, TResult>

    Base class for GENTasks which utilize a CompletionRequest or a ChatCompletionRequest.

    GENTranscriptTask

    Task for converting speech audio into text (speech-to-text).

    GENTranslationTask

    Task for translating speech into English text using the speech translation model.

    GENVideoTask

    GENVoiceChangeTask

    GENVoiceGenerationTask<TSelf, TOptions, TPrompt>

    GeneralTaskExtensions

    GeneratedAudio

    Represents a generated audio clip or a collection of generated audio clips. This class provides implicit conversions to AudioClip and Sprite types for easy usage.

    GeneratedFile<TAsset, TFile>

    Base class for generated assets: images, audio clips, and videos. These assets are 'files' that can be downloaded and used in the application.

    GeneratedImage

    Represents a generated image or a collection of generated images. This class provides implicit conversions to Texture2D and Sprite types for easy usage.
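
    A minimal usage sketch of the implicit conversions described above; 'image' is assumed to be a GeneratedImage returned by an image generation task.

    // Implicit conversions let the generated result be used directly where Unity types are expected.
    Texture2D texture = image;
    Sprite sprite = image;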

    GeneratedResult<T>

    An AI-generated result may contain either a single value or multiple values, so this class represents both cases: a single value or an array of values.

    GeneratedText

    Represents a generated text result from an AI model.

    GeneratedVideo

    Unity does not support creating a VideoClip at runtime, so GeneratedVideo only stores the URLs of the generated video files.

    GenerativeOptions

    GenerativeTaskExtensions

    Provides fluent, chainable extension methods that create and configure AI generation tasks. These helpers let you start a task directly from the host object (string, AudioClip, Texture2D, etc.) and then continue the configuration via the task's fluent API.

    Typical usage:

    // Create a chat-like text generation
    "Describe a cat playing piano."
        .GENText()
        .SetModel(OpenAIModel.GPT4o)
        .ExecuteAsync();
    
    // Transcribe recorded speech
    audioClip.GENTranscript().ExecuteAsync();

    GetModelTask

    GoogleMediaOptions

    GoogleTypes

    Types used only by the Google API.
    These types live here instead of the Google assembly because they are used by GENTask and the UnityEditor Generator Windows.

    GoogleTypes.UploadMetadata

    GroqCloudTypes

    Types used only by the GroqCloud API.
    These types live here instead of the GroqCloud assembly because they are used by GENTask and the UnityEditor Generator Windows.

    ImageBase64ContentPart

    ImageContentData

    ImageContentPart

    ImageFileIdContentPart

    ImageUrlContentPart

    InpaintPrompt

    A specialized prompt for inpainting tasks.
    This class is used to pass the instruction and the image to the inpainting model for GENInpaintTask.

    InterruptedResponseException

    InvalidServiceStatusException

    JsonSchemaFormat

    ListCustomModelsTask

    ListCustomVoicesTask

    ListFilesTask

    ListModelsTask

    ListVoicesTask

    Location

    LogProb

    Logprobs

    Log probability information for the choice.

    LogprobsContent

    A list of message content tokens with log probability information.

    Model

    ScriptableObject representation of a generative AI model with metadata, configuration, and pricing information. Supports token limits, ownership, creation time, and dynamic pricing for various content types (text, image, audio).

    ModelFamily

    Defines the family names of various AI models and services.

    Warning: do not convert this into an enum.
    An enum would be hard to maintain because inserting a new family between existing families would break the existing order.
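
    To illustrate the ordering concern: if the family list were an enum whose integer values are persisted, inserting a new member would shift every later value and remap previously saved data. The family names below are illustrative only.

    // Hypothetical enum layout before and after inserting a new family.
    enum FamilyBefore { OpenAI = 0, Google = 1, Anthropic = 2 }
    enum FamilyAfter { OpenAI = 0, Mistral = 1, Google = 2, Anthropic = 3 } // Google and Anthropic shifted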

    ModelFilter

    ModelLibrary

    ScriptableObject database for storing model data. This database is used to keep track of the models available in the AI library.

    ModelLibrary.Repo

    Database for storing model data.

    ModelNotFoundOnServerException

    ModelPrice

    Moderation

    ModerationOptions

    Configuration options for chat moderation.

    NoEndpointExeption

    NoRequiredComponentException

    NoRequiredParameterException

    NotSupportedModelFeatureException

    OpenAIImageOptions

    OpenAIModelBase

    OpenAIRequest

    OpenAIRequest.OpenAIRequestBuilder<TBuilder, TRequest>

    OpenAITypes

    Types used only by the OpenAI API.
    These types live here instead of the OpenAI assembly because they are used by GENTask and the UnityEditor Generator Windows.

    OpenRouterModel

    PixelAnimation

    PixelArt

    Base class for pixel art images generated by PixelLab. This class provides properties for size, isometric view, usage, and note.

    PixelLabTypes

    Types used only by the PixelLab API.
    These types live here instead of the PixelLab assembly because they are used by GENTask and the UnityEditor Generator Windows.

    PixelLabTypes.Keypoints

    PredefinedVoice

    ProjectContext

    PromptBase

    PromptFeedback

    A set of feedback metadata for the prompt specified in GenerateContentRequest.Contents.

    PromptHistory

    PromptRecord

    PromptTokensDetails

    Input token details for the prompt. This includes audio input tokens and cached tokens.

    RateLimitExceededException

    ResponseFormat

    ResponseFormatConverter

    Converts a Format (enum) or a ResponseFormat (struct wrapper) in JSON to a ResponseFormat (enum).

    ResponseFormatExtensions

    SafetyFeedback

    Safety feedback for an entire request.

    This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.

    SafetyRating

    A safety rating associated with a GenerateContentCandidate.

    SafetySetting

    Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.

    SegmentObject

    ServerSentError

    SpeechOutputData

    SpeechToTextOptions

    StreamOptions

    StrictJsonSchema

    StrictJsonSchemaAttribute

    StrictJsonSchemaConverter

    StructuredOutput<T>

    SystemMessage

    TaskBase<TSelf, TResult>

    TextContentData

    TextContentPart

    TextContentPartConverter

    TextOrChatContentPart

    An array of content parts with a defined type; each part can be of type Text or Image_Url when passing in images. You can pass multiple images by adding multiple Image_Url content parts. Image input is only supported when using the gpt-4-vision-preview model.

    TextPrompt

    A simple text prompt.
    This class is used to pass a simple text prompt to the model for generation tasks.
    It can be used for text generation, image generation, and other tasks that require a text prompt.
    The text can be formatted using Markdown or other formatting options.

    TextToImageOptions

    TextToSpeechOptions

    ToolCall

    ToolMessage

    Transcript

    UploadFileTask

    Usage

    Usage statistics for the completion request.

    UserLocation

    UserMessage

    VerboseTranscript

    Represents a verbose JSON transcription response returned by the model, based on the provided input. Used by OpenAI, GroqCloud, and other compatible services.

    Voice

    VoiceChangerOptions

    VoiceFilter

    VoiceLibrary

    ScriptableObject database for storing voice data used for TTS (Text-to-Speech) and other voice-related tasks.

    VoiceLibrary.Repo

    Database for storing voice data.

    VoiceTypeConverter

    Weighted<T>

    A prompt paired with a specific weight.
    This is useful for tasks that accept multiple prompts with different weights.
    The weight controls how much influence the prompt has on the model's output.

    WordObject

    Interfaces

    IApiAssetFilter<T>

    IApiClient

    IApiFileData

    Interface for file data retrieved from various AI APIs (e.g., /v1/files). This interface defines the properties that all file data should implement.

    IApiModelData

    Interface for model data retrieved from various AI APIs (e.g., /v1/models). This interface defines the properties that all model data should implement. It is used to standardize model data across different AI providers.

    IApiUser

    Attach this interface to your user class to enable AIDevKit features. This interface is used to provide user-specific context and settings.

    IApiVoiceData

    Interface for voice data retrieved from various AI APIs.
    This interface defines the properties that all voice data should implement. It is used to standardize the voice data across different AI providers.

    IChatConversation<TMessage>

    IChatStreamParser

    ICompletionOptions

    IGENAudioIsolationOptions

    IGENAudioOptions

    IGENImageOptions

    IGENOptions

    IGENPixelAnimationOptions

    IGENPixelArtOptions

    IGENPixelInpaintOptions

    IGENPixelRotationOptions

    IGENSoundEffectOptions

    IGENSpeechOptions

    IGENTask

    IGENTextTask

    IGENTranscriptOptions

    IGENVideoOptions

    IGENVoiceChangeOptions

    IGeneratedFile

    IGeneratedResult

    IGeneratedText

    IPrompt

    Interface for all prompt types.
    This interface is used to ensure that all prompt types can be serialized and deserialized correctly. It is also used to ensure that all prompt types can be used in the AIDevKit system.

    IStreamingChatTask<T>

    Enums

    AnnotationType

    AnthropicTypes.ServiceTier

    AnthropicTypes.ToolType

    Api

    ArtStyle

    ChatContentPartType

    ChatRole

    ElevenLabsTypes.InputFormat

    The format of the input audio. Options are 'pcm_s16le_16' or 'other'. For pcm_s16le_16, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than when passing an encoded waveform.

    ElevenLabsTypes.OutputFormat

    Output format of the generated audio, formatted as codec_sample_rate_bitrate. For example, an MP3 with a 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with a 192kbps bitrate requires a subscription to the Creator tier or above. PCM with a 44.1kHz sample rate requires a subscription to the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. Default is mp3_44100_128.

    GameGenre

    GameTheme

    Gender

    GoogleTypes.AspectRatio

    GoogleTypes.PersonGeneration

    GoogleTypes.TaskType

    Task type for embedding content.

    GroqCloudTypes.ReasoningEffort

    This field is only available for qwen3 models. Set to 'none' to disable reasoning. Set to 'default' or null to let Qwen reason.

    GroqCloudTypes.ReasoningFormat

    Specifies how to output reasoning tokens.

    GroqCloudTypes.ServiceTier

    The service tier to use for the request. Defaults to on_demand. auto will automatically select the highest tier available within the rate limits of your organization. flex uses the flex tier, which will succeed or fail quickly.

    HarmBlockThreshold

    Block at and beyond a specified harm probability.

    HarmCategory

    Represents the category of harm that a piece of content may fall into. This is used in moderation tasks to classify content based on its potential harm.

    HarmProbability

    Probability that a prompt or candidate matches a harm category.

    ImageFormat

    LanguageTone

    Modality

    "Modality" refers to the type or form of data that a model is designed to process, either as input or output. In AI and machine learning contexts, modality describes the nature of the information being handled — such as text, image, audio, or video.

    For example:

    • A text-to-text model like GPT-4 processes text inputs and generates text outputs.
    • A text-to-image model like DALL·E takes text prompts and produces images.
    • A multimodal model like Gemini can process multiple types of data simultaneously, such as combining text and image inputs.

    The concept of modality helps categorize models based on the kinds of sensory or informational data they handle, and is especially important for understanding the capabilities and limitations of a model.

    ModelFeature

    Formerly known as ModelCapability, this enum represents the features that a model can support. It is used to determine the capabilities of a model and to check if a specific feature is supported.

    OpenAITypes.ImageDetail

    OpenAITypes.ImageQuality

    The quality of the image that will be generated. HighDefinition creates images with finer details and greater consistency across the image. This param is only supported for OpenAIModel.DallE3.

    OpenAITypes.ImageResolution

    OpenAITypes.ImageStyle

    The style of the generated images. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for OpenAIModel.DallE3.

    OpenAITypes.ReasoningEffort

    OpenAITypes.ServiceTier

    OpenAITypes.UploadPurpose

    PixelLabTypes.CameraView

    PixelLabTypes.Direction

    PixelLabTypes.ImageDetail

    PixelLabTypes.Outline

    PixelLabTypes.Shading

    PixelLabTypes.Size

    Specific size values that are supported for PixelLab Animate Requests.

    PixelLabTypes.SkeletonLabel

    Platform

    RequestIntent

    StopReason

    Reason that a ChatCompletion request stopped generating tokens.

    TextFormat

    ToolType

    TranscriptFormat

    UsageType

    VoiceAge

    VoiceCategory

    The category of the voice.

    VoiceType

    Delegates

    TextProcessor
