Namespace Glitch9.AIDevKit
Classes
AIBehaviour
AIClientException
Base exception class for all AI client-related errors. All exceptions thrown by AI client operations derive from this class.
AIClientExceptionParser
AIClientSettings
Base class for AI client settings. This class is used to store API keys and other settings related to AI clients.
AIClient<TSelf, TSettings>
AIDevKitComponentExtensions
AIDevKitEnterpriseSettings
AIDevKitManager
Central hub exposed by AIDevKit for customization and user context.
AIDevKitSettings
AIDevKitUtility
Utility class exposed by AIDevKit with general helper methods and extensions.
AIFileUploadRequest
Upload a file that can be used across various endpoints. The size of all the files uploaded by one organization can be up to 100 GB. The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.
Please contact us if you need to increase these storage limits. https://help.openai.com/en/
AIProviderException
AIResourceBase
AIResourceCatalogBase<TDatabase, TResource, TSelf>
AllowedTools
AmazonTitanImageEditOptions
AmazonTitanImageOptions
AmazonTitanImageOptionsBase
AmazonTypes
Amazon Bedrock specific types.
AmazonTypes.S3Location
A storage location in an Amazon S3 bucket.
Annotation
AnnotationEnvelope
Non-flattened wrapper for different types of annotations.
ApiPolicy
ApiPopupAttribute
ApiRefAttribute
ApiSpecificAttribute
ApiSpecificPropertyAttribute
ApiUtility
ApproximateLocation
AudioBufferStateChangeEvent
Event raised when the state of an audio buffer changes.
AudioContentData
AudioIsolationRequest
AudioIsolationSettings
AudioPart
AudioPrice
Represents per-second audio pricing, used for text-to-speech and voice change models.
AudioPrompt
A specialized prompt for various audio-related requests, such as voice change, audio isolation, etc.
This class is used to pass the instruction and the audio to the respective audio model for processing.
AudioPurpose
AudioUsage
BaseApiPropertyAttribute
BaseModelInfo
Holds identifying information about the base model that a fine-tuned model was derived from. Used to trace the lineage of custom models back to their original foundation model.
ChatChoice
Represents the final aggregated result of a streaming chat completion from an LLM. This is not the raw 'ChatCompletionChoice' from provider APIs, but a unified domain model that accumulates and structures the complete response after streaming is finished. Returned as Generated<ChatChoice> from various chat APIs across different providers.
ChatCompletionExtensions
ChatCompletionRequest
Request for generating chat completions from an LLM model.
ChatCompletionRequestBase<TSelf, TInput, TAsset>
ChatState
ClickAction
CodeGenerationRequest
Added on 2025.05.28. Request for generating code snippets or scripts for Unity C#.
CodeInterpreter
A tool that runs Python code to help generate a response to a prompt.
CodeInterpreter.FileIdSet
Code interpreter container.
CodeInterpreterOutput
A tool call to run code.
CodeInterpreterOutputImage
CodeInterpreterOutputLogs
CodeInterpreterResult
Be careful. This is not a separate tool call, but a sub-object used within CodeInterpreterCall.
CodeInterpreterSettings
CollectionExtensions
Extension methods for AIDevKit types.
ComparisonFilter
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
CompoundFilter
Combines multiple filters using an 'and' or 'or' operation.
ComputerAction
ComputerUse
A tool that controls a virtual computer.
ComputerUseCall
A tool call to a computer use tool. See the computer use guide for more information.
ComputerUseOutput
The output of a computer tool call.
ComputerUseSafetyCheck
ComputerUseScreenshotInfo
ContainerFileCitation
ContentPart
Base class for different types of content parts in a message. Each content part has a defined type, such as Text, Image(Url/Base64/FileId), Audio(Base64), or File(Base64/FileId).
ContentPart<T>
Base class for different types of content parts in a message. Each content part has a defined type, such as Text, Image(Url/Base64/FileId), Audio(Base64), or File(Base64/FileId).
ContextSizeExceededException
ConversationItem
ConversationItemExtensions
ConversationItemType
CountTokensRequest
CustomTool
A custom tool that processes input using a specified format.
CustomToolCall
A call to a custom tool created by the model.
CustomToolChoice
CustomToolDefinition
CustomToolFormat
Polymorphic format for custom tool input.
CustomToolFormatConverter
Polymorphic converter for CustomToolFormat.
CustomToolOutput
The output of a custom tool call from your code, being sent back to the model.
DefaultEmbeddingService
DefaultImageGenerationService
DefaultModels
DefaultSpeechToTextService
DefaultTextToSpeechService
DefaultVoices
DeleteFileRequest
DeleteModelRequest
DeleteVoiceRequest
DeltaText
Specialized text container for accumulating delta text.
DeltaTextState
DetokenizationRequest
DocumentContentData
DocumentPart
DomainPropertyAttribute
DoubleClickAction
DownloadFileRequest
DragAction
DragAction.Coordinate
ElevenLabsAudioOptions
Provider-specific audio options for ElevenLabs requests.
Pass this via SetSpecificOptions() or the SetElevenLabsFormat() convenience method.
ElevenLabsMusicOptions
ElevenLabs-specific options for music generation requests.
ElevenLabsTypes
Types only used by ElevenLabs API.
These types live here instead of the ElevenLabs assembly
because they are used by GENTask and the UnityEditor Generator windows.
Embedding
Represents an embedding output as an asset structure,
designed to be used with Generated<Embedding>.
This wrapper class allows for future extensibility,
supporting not only float[] but also other embedding types as required by different APIs.
EmbeddingPrompt
EmbeddingRequest
EmbeddingsSettings
EmptyResponseException
FieldRefAttribute
FileCitation
FileContentData
FilePart
FilePathAnnotation
FilePrompt
FileRequest<TSelf, TResult>
FileSearch
A tool that searches for relevant content from uploaded files.
FileSearch.RankingOptions
Ranking options for search.
FileSearchOutput
FileSearchResult
FileSearchSettings
FileSource
FileSourceConversionExtensions
FindAction
FineTunedModel
Represents a fine-tuned (custom-trained) AI model.
Extends Model with fine-tuned-specific semantics;
IsFineTuned always returns true for instances of this class.
FineTunedModelCatalog
ScriptableObject database for storing fine-tuned (custom-trained) model assets. Mirrors ModelCatalog but is typed to FineTunedModel.
FineTunedModelCatalog.Repo
Database for storing fine-tuned model data.
FineTuningFile
A JSONL file is a text file where each line is a valid JSON object. This format is commonly used for training data in machine learning tasks, including fine-tuning.
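The JSONL contract described above (one self-contained JSON object per line) can be sketched in a few lines of Python. The chat-style field names below are illustrative examples, not a guaranteed AIDevKit format; check your provider's fine-tuning documentation for the exact schema it expects.

```python
import json

# Hypothetical fine-tuning examples in a chat-style format
# (field names are illustrative only).
examples = [
    {"messages": [{"role": "user", "content": "Hi"},
                  {"role": "assistant", "content": "Hello!"}]},
    {"messages": [{"role": "user", "content": "Ping"},
                  {"role": "assistant", "content": "Pong"}]},
]

# JSONL: one JSON object per line, newline-separated.
jsonl = "\n".join(json.dumps(e) for e in examples)

# Every line must independently parse as a JSON object.
for line in jsonl.splitlines():
    assert isinstance(json.loads(line), dict)
```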
FineTuningRequest
FreePrice
Represents a free pricing tier with zero cost.
FrequencyPenalty
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
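The mechanism described above can be sketched as a simple logit adjustment: each token's logit is reduced in proportion to how often it has already appeared. This is a minimal Python illustration of the idea, not the provider's actual implementation (which runs server-side during sampling).

```python
from collections import Counter

def apply_frequency_penalty(logits, generated_tokens, penalty):
    """Subtract penalty * (occurrence count so far) from each token's logit.

    Illustrative sketch only; real providers apply this internally
    before sampling the next token."""
    counts = Counter(generated_tokens)
    return {tok: logit - penalty * counts.get(tok, 0)
            for tok, logit in logits.items()}

logits = {"the": 2.0, "cat": 1.0, "sat": 0.5}
adjusted = apply_frequency_penalty(logits, ["the", "the", "cat"], penalty=0.5)
# "the" appeared twice, so its logit drops by 1.0; "sat" is untouched.
```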
Function
Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration represents a block of code that can be used as a Tool by the model and executed by the client.
FunctionCall
A tool call to run a function. See the function calling guide for more information.
FunctionOutput
The output of a function tool call.
FunctionPropertyAttribute
Attribute for marking properties as function parameters in JSON Schema for LLM function calls.
Note: It's a duplicate of JsonSchemaPropertyAttribute for clarity and intent.
FunctionSchemaAttribute
OpenAI styled JSON Schema attribute for annotating classes for LLM function calls.
Note: It's a duplicate of StrictJsonSchemaAttribute for clarity and intent.
FunctionToolChoice
GeneratedImageData
OpenAI style Generated Image Data used by OpenAI and xAI. Represents the Url or the content of an image generated by Image Generation AI.
GeneratedText
Generated<T>
Represents the complete result envelope for content generated by AI services through the Unified API. Encapsulates generated values, metadata, usage statistics, and optional error information.
GenerationRecordOptions
Options that control how a GenerationRecord is created and stored after an AI generation request completes.
GenerationRecordOptions.GenerationMergeOptions
Defines how a new GenerationRecord should be merged into an existing one.
GenerationSettings
Serializable base class for AI generation settings.
GenerativeAudioRequest<TSelf, TInput, TOptions>
GenerativeAudioSettings
GenerativeImageRequest<TRequest, TProviderOptions>
GenerativeRequest<TSelf, TPrompt, TResult, TOptions>
Abstract base class for all generative AI tasks. Provides common properties and methods for handling prompts, models, outputs, and execution.
GenerativeServiceBase
GenerativeStreamExtensions
GenerativeStream<TDelta, TResult>
GenerativeTextRequest<TSelf, TInput, TResult, TEvent>
Base class for text generation tasks using LLM models. Supports instructions, role-based prompts, and attachments.
GenerativeVisualRequest<TSelf, TAsset, TEvent, TOptions>
GetCreditsRequest
Get total credits purchased and used for the authenticated user
GetFileRequest
GetModelRequest
GetSignedUrlRequest
GetVoiceRequest
GoogleDiffusionOptionsBase
Base class for Google diffusion model request options (Imagen, Veo). Contains parameters shared across Google's image and video generation APIs.
GoogleImagenOptions
Provider-specific request options for Google Imagen image generation.
Pass this to ImageGenerationRequest via SetProviderOptions.
GoogleTypes
Types only used by Google API.
These types live here instead of the Google assembly
because they are used by GENTask and the UnityEditor Generator windows.
GoogleTypes.UploadMetadata
GoogleVeoOptions
Provider-specific request options for Google Veo video generation.
Pass this to VideoGenerationRequest via SetProviderOptions.
GrammarCustomToolFormat
A grammar defined by the user.
HostedToolChoice
Only for Responses API.
Indicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools: https://platform.openai.com/docs/guides/tools
Allowed types (2025-09-21):
- file_search
- web_search_preview
- computer_use_preview
- code_interpreter
- image_generation
HostedToolDefinitionBase
HyperParameters
The hyperparameters used for the fine-tuning job.
ImageAnalysis
Represents an image analysis output as an asset structure,
designed to be used with Generated<ImageAnalysis>.
ImageBackgroundRemovalRequest
ImageContentData
ImageEditRequest
ImageEraseRequest
ImageGenerationOutput
An image generation request made by the model.
ImageGenerationRequest
ImageGenerationSettings
ImageGenerationTool
A tool that generates images using a model like gpt-image-1.
ImageGenerationToolSettings
ImageInpaintRequest
ImageOutpaintRequest
ImagePart
ImagePrice
Represents per-image pricing, optionally scoped by resolution and quality tier.
ImagePrompt
A specialized prompt for various image-related requests, such as image inpainting, rotation, animation, etc.
This class is used to pass the instruction and the image to the respective image model for processing.
ImageQualitySwitchAttribute
ImageSearchAndRecolorRequest
ImageSearchAndReplaceRequest
ImageSizeSwitchAttribute
ImageStyleTransferRequest
ImageUsage
InappropriateRequestException
IncompleteDetails
Details on why the response is incomplete. Will be null if the response is not incomplete.
IncompleteResponseException
InterruptedResponseException
InvalidPromptException
ItemReference
JsonSchemaFormat
KeyboardTypeAction
ListFilesRequest
ListModelsRequest
ListVoicesRequest
LocalShell
A tool that allows the model to execute shell commands in a local environment.
LocalShellCall
A call to run a command on the local shell.
LocalShellOutput
The output from a local shell tool call.
Location
LogProb
LogitBias
Optional. Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling.
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. Defaults to null.
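The "bias is added to the logits prior to sampling" behavior described above can be sketched directly. This is a conceptual Python illustration of why a -100 bias acts as a ban: after the additive shift, softmax assigns the token a vanishingly small probability. It is not the provider's actual sampling code.

```python
import math

def apply_logit_bias(logits, bias):
    """Add per-token bias values (clamped to [-100, 100]) to raw logits."""
    return {tok: logit + max(-100.0, min(100.0, bias.get(tok, 0.0)))
            for tok, logit in logits.items()}

def softmax(logits):
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: v / total for t, v in exps.items()}

logits = {"yes": 1.0, "no": 1.0}
# A bias of -100 effectively bans "no" from being sampled.
probs = softmax(apply_logit_bias(logits, {"no": -100.0}))
```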
Logprobs
Whether to return log probabilities of the Output tokens. If true, returns the log probabilities of each Output token returned in the content of message. This option is currently not available on the gpt-4-vision-preview model. Defaults to false.
MalformedRequestException
Exception thrown when a request is malformed or contains invalid parameters. This indicates that the request structure or content does not meet API requirements.
Mcp
Give the model access to additional tools via remote Model Context Protocol (MCP) servers.
McpApprovalRequest
A request for human approval of a tool invocation.
Model > User
McpApprovalResponse
A response to an MCP approval request.
User > Model
McpConnectorDefinition
McpException
McpHttpException
McpListToolsCallOutput
A list of tools available on an MCP server.
Model > User
McpOutput
An invocation of a tool on an MCP server.
This is both a tool call (from model to user) and a tool call output (from user to model).
Model > User, and you should send the corresponding output back to the model.
McpProtocalException
McpServerConnectorNotFoundException
McpServerDefinition
McpServerNotFoundException
McpToolApprovalFilter
Specify which of the MCP server's tools require approval.
McpToolChoice
McpToolDefinition
McpToolExecutionException
McpToolInfo
Information about a tool available on an MCP server.
McpToolPermissionInfo
McpToolRefList
List of allowed tool names or a filter object.
Message
MessageContent
Content hierarchy:
- Text: ChatCompletion > ChatChoice[] > Message[] > MessageContent > StringOrPart > Text
- MessageContentPart: ChatCompletion > ChatChoice[] > Message[] > MessageContent > StringOrPart > MessageContentPart[]
MissingCatalogItemException
Exception thrown when a requested catalog item is not found in the specified catalog. Catalogs contain registered items such as models, tools, or other resources.
Model
ScriptableObject representation of a generative AI model with metadata, configuration, and pricing information. Supports token limits, ownership, creation time, and dynamic pricing for various content types (text, image, audio).
ModelBase
ScriptableObject representation of a generative AI model with metadata, configuration, and pricing information. Supports token limits, ownership, creation time, and dynamic pricing for various content types (text, image, audio).
ModelCatalog
ScriptableObject database for storing model data. This database is used to keep track of the models available in the AI library.
ModelCatalog.Repo
Database for storing model data.
ModelErrorException
ModelExtensions
ModelFamily
Identifies a model family for a provider (for example, GPT, Gemini, or Llama). Keep this as string-based constants, not an enum, to avoid enum-order maintenance issues.
ModelInfo
ModelInterop
ModelNotFoundException
ModelNotReadyException
ModelPolicy
ModelPopupAttribute
ModelPrice
Abstract base class representing the pricing information for a model usage tier. Derived classes represent specific billing units such as per-token, per-image, or per-second.
ModelRefAttribute
ModelRequest<TSelf, TResult>
ModelStreamErrorException
ModelTimeoutException
ModelUtility
Moderation
Represents a moderation output as an asset structure,
designed to be used with Generated<Moderation>.
ModerationPrompt
Not directly used as a prompt, but other prompts can convert to this type for moderation requests.
This class is used to pass the text and optional images to the moderation model for processing.
ModerationRequest
Audio not supported yet.
ModerationSettings
MouseActionBase
MoveAction
MusicGenerationRequest
Request for generating music from a text prompt.
NCount
The number of responses to generate. Must be between 1 and 10.
NetworkSettings
NoInputItemsException
Exception thrown when a request requires at least one input item but none were provided. This is a specialized case of MalformedRequestException for missing input data.
NotRegisteredModelException
Exception thrown when a requested AI model is not registered in the model catalog. This is a specialized case of MissingCatalogItemException for model-specific errors.
OcrDocument
Represents an OCR output as an asset structure,
designed to be used with Generated<OcrDocument>.
OcrLineResult
OcrPrice
Represents per-page pricing, used for OCR and document analysis models.
OcrRequest
OcrUsage
OpenAIDalle3Options
Provider-specific request options for OpenAI DALL-E image generation.
Pass this to ImageGenerationRequest via SetProviderOptions.
OpenAIDiffusionOptionsBase
Base class for OpenAI diffusion model request options (DALL-E, GPT-Image, Sora). Contains parameters shared across OpenAI's image and video generation APIs.
OpenAIGptImageOptions
Provider-specific request options for OpenAI GPT-Image (gpt-image-1) generation.
Supports additional parameters such as background transparency and output compression.
Pass this to ImageGenerationRequest via SetProviderOptions.
OpenAISoraOptions
Provider-specific request options for OpenAI Sora video generation.
Pass this to VideoGenerationRequest via SetProviderOptions.
OpenAITypes
Types only used by OpenAI API.
These types live here instead of the OpenAI assembly
because they are used by GENTask and the UnityEditor Generator windows.
OpenAITypes.ImageCompressionLevel
Maps to output_compression (integer or null, optional, defaults to 100): the compression level (0-100%) for generated images. This parameter is only supported for gpt-image-1 with webp/jpeg output formats.
OpenAITypes.ImageReference
A reference to an image, either by file ID or base64-encoded data.
OpenPageAction
OutpaintSides
OutputExtensions
Extension methods for AIDevKit types.
PerplexityTypes
PresencePenalty
Number between -2.0 and 2.0; defaults to 0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
PressKeyAction
ProjectContext
ProjectContextExtensions
Prompt
Represents a text prompt used in LLM (Large Language Model) AI interactions. Probably the most common type of prompt in AIDevKit.
PromptBase
PromptBase<T>
PromptExtensions
PromptFeedback
A set of the feedback metadata the prompt specified in GenerateContentRequest.Contents.
PromptTemplate
A reference to a predefined prompt template stored on the AI provider's servers.
This allows you to use complex prompt templates without having to include
the full text of the prompt in your request.
Instead, you can simply reference the prompt by its unique identifier and
provide any necessary variables for substitution.
This can help to keep your requests smaller and more manageable,
especially when working with large or complex prompts.
Example Template:
"Write a daily report for ${name} about today's sales. Include top 3 products."
ProviderBridgeAttribute
RateLimitExceededException
RealtimeApiException
Reasoning
ReasoningOptions
RedactedReasoning
Anthropic-specific class. Represents a block of content where the model's internal reasoning or "thinking" has been intentionally hidden (redacted) before being returned to the client.
ReferenceContentData
ReferencePart
RequestPrice
Represents a flat per-request pricing model.
RequestType
RequestUsage
ResponseFormat
ResponseMessage
ResponseRequest
SafetyFeedback
Safety feedback for an entire request.
This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.
SafetyIdentifier
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. https://platform.openai.com/docs/guides/safety-best-practices#safety-identifiers
SafetyRating
A safety rating associated with a GenerateContentCandidate.
SafetySetting
Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.
SafetySettingExtensions
ScreenshotAction
ScrollAction
SearchAction
Seed
Random seed for deterministic sampling (when supported):
- Purpose — Reproduce the same output across runs with identical inputs.
- Scope — Holds only if provider, model/deployment, version, and all params are unchanged.
- null — Lets the service choose a random seed (non-deterministic).
- Range — 0–9,223,372,036,854,775,807 (signed 64-bit long).
- Support — Some models/services ignore seeds; if unsupported, this has no effect.
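The reproducibility contract described above (same seed plus identical inputs yields the same output) can be illustrated with any seeded random generator. This Python sketch uses random.Random as a stand-in for the provider's sampler; it only demonstrates the determinism property, not how any model actually samples.

```python
import random

def sample_tokens(seed, vocab, n):
    """Draw n tokens from vocab using a generator seeded with `seed`.

    A stand-in for seeded model sampling: the same seed reproduces
    the same sequence; a different (or absent) seed does not."""
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["a", "b", "c", "d"]
run1 = sample_tokens(42, vocab, 5)
run2 = sample_tokens(42, vocab, 5)  # identical seed -> identical output
```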
ServerDictionary
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
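The constraints listed above (at most 16 pairs, string keys up to 64 characters, values that are strings up to 512 characters, booleans, or numbers) can be checked client-side before sending a request. This Python sketch is illustrative only; the provider enforces these limits server-side and the exact rules may vary by API.

```python
def validate_metadata(metadata):
    """Client-side check of the documented metadata constraints."""
    if len(metadata) > 16:
        raise ValueError("at most 16 key-value pairs allowed")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid key: {key!r}")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"value too long for key {key!r}")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"unsupported value type for key {key!r}")

validate_metadata({"user_id": "u_123", "beta": True, "retries": 3})  # passes
```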
ShellCommand
ShellCommandCatalog
ShellCommandCatalog.Repo
ShellCommandEntry
SignedUrl
SoundEffectGenerationRequest
Task for generating sound effects based on a text prompt.
SourceCode
Represents a code generation output as an asset structure,
designed to be used with Generated<SourceCode>.
SpeechGenerationOptions
SpeechGenerationRequest
Task for generating synthetic speech (text-to-speech) using the specified model.
SpeechGenerationRequestBase<TSelf, TPrompt, TConfig>
SpeechGenerationSettings
SpeechSpeed
The speed of the model's spoken response as a multiple of the original speed.
1.0 is the default speed. 0.25 is the minimum speed.
1.5 is the maximum speed.
This value can only be changed between model turns, not while a response is in progress.
Because this parameter is a post-processing adjustment applied after the audio is generated,
it is also possible to prompt the model to speak faster or slower instead.
SpeechTranslationRequest
Task for translating speech into English text using the speech translation model.
SpokenLanguagePopupAttribute
StabilityEraseOptions
StabilityImageOptions
StabilityInpaintOptions
StabilityOutpaintOptions
StabilitySearchAndRecolorOptions
StabilitySearchAndReplaceOptions
StabilityStyleTransferOptions
StabilityTypes
StatusChangedEvent<T>
StatusExtensions
StreamOptions
StreamSettings
StreamingAudioConfig
StreamingGenerativeRequest<TSelf, TWire, TResult, TEvent, TOptions>
StrictJsonSchema
OpenAI styled JSON Schema for strict response formatting.
StrictJsonSchemaAttribute
OpenAI styled Strict JSON Schema attribute for annotating classes.
StrictJsonSchemaAttribute > StrictJsonSchema > JsonSchemaFormat(ResponseFormat)
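The "strict" JSON Schema style referenced above can be illustrated as plain data: in OpenAI's strict mode, every property is listed in "required" and "additionalProperties" is false. The field names below follow OpenAI's response_format convention as an assumption; verify against current provider documentation before relying on them.

```python
# A strict schema: all properties required, no extras allowed.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "score": {"type": "number"},
    },
    "required": ["title", "score"],
    "additionalProperties": False,
}

# Wrapped in an OpenAI-style response_format envelope (illustrative).
response_format = {
    "type": "json_schema",
    "json_schema": {"name": "review", "strict": True, "schema": schema},
}
```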
StrictJsonSchemaExtensions
StructuredGeneratorBase<T>
StructuredOutputRequestBase<TSelf, TAsset>
StructuredOutputRequest<T>
Task for generating structured output (e.g., JSON) using an LLM model.
SubscriptionRequiredException
SystemMessage
Temperature
Sampling temperature: controls randomness in output.
- Lower = deterministic
- Higher = creative
Range: 0.0–2.0 (typical: 0.7–1.0).
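The lower-is-deterministic, higher-is-creative behavior described above comes from dividing the logits by the temperature before softmax. This is a minimal Python sketch of that standard mechanism, not AIDevKit or provider code.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.

    Low temperature sharpens the distribution toward the argmax;
    high temperature flattens it toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # near-argmax
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
```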
TextChunk
Represents a chunk of text content in an LLM streaming response. Used for real-time display of AI-generated text as it arrives from the model.
TextContentData
TextCustomToolFormat
Unconstrained free-form text format.
TextGenerationSettings
Unified configuration settings for text generation across different APIs, including base LLM parameters and API-specific options.
TextPart
Text, Refusal, InputText, OutputText content part.
TextResponseOptions
TextSegment
Represents a detokenization output as an asset structure,
designed to be used with Generated<TextSegment>.
ThinkingContentData
ThinkingPart
TimeWindowExtensions
TokenCount
Used to set 'max_tokens', 'max_completion_tokens', 'max_output_tokens', etc. Must be greater than or equal to 1024. Set it to null to disable the limit.
TokenId
Represents a tokenization output as an asset structure,
designed to be used with Generated<TokenId>.
TokenMetrics
Represents a token count output as an asset structure,
designed to be used with Generated<TokenMetrics>
TokenPrice
Represents per-token pricing, distinguishing between input and output token types.
TokenPrompt
TokenUsage
TokenizationRequest
Tool
Base class for all tools, includes type.
ToolCall
ToolCallArguments
Represents tool call arguments as a specialized text chunk. When an LLM decides to invoke a tool/function, the arguments are streamed as JSON text through this chunk type.
ToolCallKey
ToolCallState
ToolChoice
This can be a String or an Object Specifies a tool the model should use. Use to force the model to call a specific tool.
ToolMessage
APIs no longer send these; this type is only used to send tool outputs from the client side.
ToolOutput
ToolOutputTimeoutException
ToolReference
ToolResult<T>
ToolStatusEvent
Event raised when a tool's status changes.
ToolSupport
ToolTypeUtility
TopK
TopP
Transcript
TranscriptionPrice
Represents per-character pricing, used for transcription and character-based billing models.
TranscriptionRequest
Task for converting speech audio into text (speech-to-text).
TranscriptionRequestBase<TSelf>
TranscriptionSettings
TranscriptionUsage
TruncationStrategy
UnhandledToolCallException
UnifiedApiCallerExtensions
Beginner-friendly fluent extension methods that create request objects for generative AI.
These helpers do not send any network calls until you invoke .ExecuteAsync().
- Pattern: host.GENXxx().SetModel(...).ExecuteAsync()
- Thin factories only; they return strongly-typed *Request objects.
- No background work, no I/O, no async until .ExecuteAsync().
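The lazy fluent pattern described above (setters build a request object; nothing runs until execution) can be sketched in Python. All names here are illustrative stand-ins, not the actual AIDevKit C# API, and the "execution" is a plain method rather than an async network call.

```python
class TextRequest:
    """A toy lazy request builder: chaining does no work."""

    def __init__(self):
        self.model = None
        self.prompt = None
        self.executed = False

    def set_model(self, model):
        self.model = model
        return self  # returning self enables fluent chaining

    def set_prompt(self, prompt):
        self.prompt = prompt
        return self

    def execute(self):
        # Only here would a real implementation perform network I/O.
        self.executed = True
        return f"[{self.model}] response to: {self.prompt}"

request = TextRequest().set_model("demo-model").set_prompt("hello")
assert not request.executed  # building the chain did no work
result = request.execute()
```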
UnifiedApiCallerExtensionsAws
UnifiedApiCallerExtensionsEnterprise
Enterprise-only fluent extension methods for advanced image-edit tasks.
These helpers only create request objects; network calls start at .ExecuteAsync().
UnifiedApiRequestBase<TSelf, TResult, TProviderOptions>
Base class for all Fluent API requests. Provides common properties and methods for configuring and executing requests.
UnknownAction
UnknownItem
UnsupportedEndpointException
Exception thrown when a requested endpoint is not supported by the specified API. Different APIs support different sets of endpoints, and this exception indicates an attempt to use an unsupported one.
UploadFileRequest
UploadedFile
Serializable implementation of IUploadedFile used by the SDK. Stores provider file metadata in a Unity-friendly form for runtime and editor workflows.
UploadedFileCatalog
ScriptableObject database for storing file data. This database is used to keep track of the files available in the AI library.
UploadedFileCatalog.Repo
Database for storing file data.
UrlCitation
UrlSearchSource
Usage
Usage metadata returned by AI service providers after a generation request. Contains token usage details for billing and monitoring.
UsageCalculator
UsageInfo
Usage<T>
UserMessage
ValueChangedEvent<T>
VerboseTranscript
Represents a verbose JSON transcription response returned by the model, based on the provided input. Used by OpenAI, GroqCloud, and other compatible services.
VerboseTranscript.Segment
VerboseTranscript.WordObject
VideoContentData
VideoGenerationRequest
VideoGenerationSettings
VideoPart
Voice
VoiceCatalog
ScriptableObject database for storing voice data used for TTS (Text-to-Speech) and other voice-related tasks.
VoiceCatalog.Repo
Database for storing voice data.
VoiceChangeRequest
VoiceChangeSettings
VoiceInfo
Voice data retrieved from various AI APIs. Defines the properties that all voice data should expose, standardizing voice data across different AI providers.
VoiceNotFoundException
VoicePolicy
VoicePopupAttribute
VoiceRequest<TSelf, TResult>
VoiceStyleConverter
VoiceUtility
WaitAction
WebSearch
Search the Internet for sources related to the prompt.
WebSearchAction
WebSearchFilter
Filters for the search.
WebSearchOutput
A tool call to perform a web search action.
This tool call does not have a corresponding output class, as the results are returned via text messages.
WebSearchPreview
This tool searches the web for relevant results to use in a response.
WebSearchPrice
Represents per-search pricing for models that include web search capability.
WebSearchSettings
WebSearchSource
WebSearchUsage
Structs
ApiAccess
ClipSpec
DocumentRef
FileRef
MediaSize
Represents a media size for images and videos with predefined presets for various AI models and social media platforms. Supports DALL-E, GPT Image, Sora, and common social media formats.
OpenAITypes.ImageQuality
The quality of the image that will be generated. HD creates images with finer details and greater consistency across the image. This param is only supported for OpenAIModel.DallE3.
ResponseStatusChangedEvent
ServiceTier
The service tier to use for the request. "auto" lets the system choose the appropriate tier based on context. Different providers may have different tier names and meanings. See provider documentation for details.
ToolCallEvent
Event raised when a tool call is requested by the model. Wraps the ToolCall domain model for event propagation.
ToolOutputEvent
Event raised when a tool produces output.
TranscriptDelta
TruncationType
UsageEvent
Event containing the usage metrics for the current operation and total conversation usage. Can be used to update the UI after each message is sent/received.
Weighted<T>
A wrapper to hold an item along with its associated weight value.
Interfaces
IAIResource
Represents data provided by an AI service provider like Model, Voice, or UploadedFile.
IAnnotationChunk
Base interface for annotation/citation data.
IAnnotationDeltaListener
Interface for listening to annotation delta events.
IAssistantsChatService
IAssistantsOptions
IAudioBufferStateListener
Interface for listening to audio buffer state change events.
IAudioRequestOptions
Provider-specific options for audio generation (TTS / voice) endpoints.
IChatApiStreamHandler
IChatCompletionsApiStreamHandler
IChatCompletionsOptions
IChatEventListener
IChatSettingsUpdater<TRequestSettings>
IComputerUseResult
IContentData
IConversationEventListener
Interface for listening to conversation events.
ICreditInfo
IEmbeddingRequestOptions
Provider-specific options for text/multimodal embedding endpoints.
IEmbeddingService
Service interface for text embedding generation.
IEmbeddingsOptions
IErrorEventListener
Interface for listening to error events.
IFileRequestOptions
Provider-specific options for file upload/management endpoints.
IFileSearchFilter
IFineTuningRequestOptions
Provider-specific options for fine-tuning endpoints.
IFineTuningResult
IGenerationOptions
IGenerativeRequest
Defines the common contract for all Unified API generative requests.
IGenerativeService<TInput, TOutput, TSettings>
Represents a generative AI service that produces a final output from an input.
IGenerativeStreamListener
IGenerativeStreamListener<TEvent, TAsset>
IGenerativeStream<TDelta, TResult>
IGenerativeTextRequest
IGenerativeVisualRequest
Common contract for image and video generation requests. "Visual" explicitly excludes audio; this covers any generation request whose primary output is an image or video frame.
IHostedToolOptions
IImageAnalysisResult
IImageAnalysisTextResult
IImageDeltaListener
Interface for listening to image delta events.
IImageGeneration
Base marker interface for all image generation services.
IImageGenerationOptions
IImageGenerationService
Service interface for standard image generation (REST API).
IImageRequestOptions
Provider-specific options for image generation endpoints.
ILoadablePrompt
IManagementRequestOptions
Provider-specific options for resource management endpoints (e.g. listing, deleting models).
IMcp
IMcpApprovalRequestListener
Interface for listening to MCP approval request events.
IMcpApprovalSender
IModelInfo
Defines the contract for model metadata retrieved from AI provider APIs (e.g. GET /v1/models).
All provider-specific model data classes implement this interface to ensure a consistent
structure for model type, capabilities, token limits, pricing, and benchmark information.
IModeratable
IModerationOptions
IModerationRequestOptions
Provider-specific options for content moderation endpoints.
IOcrRequestOptions
Provider-specific options for OCR (optical character recognition) endpoints.
IPrompt
IProviderBridge
IProviderRequestOptions
Base marker interface for all provider (vendor)-specific request options.
Options exist because not every parameter is supported by every provider. Rather than polluting the shared FluentAPI with provider-specific fields, extra parameters are encapsulated in a dedicated options object and passed alongside the request.
All provider-specific option classes should implement this interface, either directly or through one of the endpoint-scoped sub-interfaces below.
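The pattern described above can be sketched in a few lines. This is a minimal, language-agnostic illustration in Python (the SDK itself is C#); every name here (TextRequest, GroqTextOptions, reasoning_format as a field) is hypothetical and not the actual AIDevKit API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TextRequest:
    """Shared, provider-agnostic request (hypothetical name)."""
    prompt: str
    options: Optional[object] = None  # provider-specific extras, if any

@dataclass
class GroqTextOptions:
    """Hypothetical GroqCloud-specific options object."""
    reasoning_format: str = "raw"  # a parameter only this provider supports

# The extra parameter rides alongside the request instead of living on it:
req = TextRequest(prompt="Hello", options=GroqTextOptions())
```

The shared request type never grows provider-specific fields; each provider contributes its own options class, and requests without extras simply leave `options` unset.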
IRealtimeApiStreamHandler
IRealtimeChatService
IRealtimeEventListener
IRealtimeOptions
IResponseEventListener
IResponseMessageProvider
IResponsesApiStreamHandler
IResponsesOptions
ISpeechToText
Base marker interface for all speech-to-text services.
ISpeechToTextOptions
ISpeechToTextService
Service interface for standard speech-to-text (REST API).
IStreamingAudioListener
IStreamingChatService
IStreamingChatService<TDelta, TResult, TSettings>
Interface for chat-based generative AI services that handle conversational interactions.
IStreamingGenerativeRequest<TEvent, TAsset>
IStreamingGenerativeService<TInput, TDelta, TOutput, TSettings>
Represents a generative AI service that can stream incremental output.
IStreamingImageGenerationService
Service interface for streaming image generation.
IStreamingImageListener
IStreamingSpeechToTextService
Service interface for streaming speech-to-text.
IStreamingTextToSpeechService
Service interface for streaming text-to-speech.
IStreamingTranscriptListener
ITextChunk
Base interface for streaming text data chunks.
ITextDeltaListener
Interface for listening to text delta events.
ITextGenerationOptions
ITextRequestOptions
Provider-specific options for text generation (chat / completion) endpoints.
ITextToSpeech
Base marker interface for all text-to-speech services.
ITextToSpeechOptions
ITextToSpeechService
Service interface for standard text-to-speech (REST API).
IToolCallOutput
IToolOutputListener
Interface for listening to tool output events.
IToolSettings
IToolStatusListener
Interface for listening to tool status events.
ITranscriptionRequestOptions
Provider-specific options for speech-to-text (transcription) endpoints.
IUploadedFile
Represents a normalized uploaded-file resource returned by provider APIs. This interface exposes shared file metadata so higher-level SDK code can remain provider-agnostic.
IUsageEventListener
Interface for listening to usage events.
IUsageProvider
IUserProfile
Attach this interface to your user class to enable AIDevKit features. This interface is used to provide user-specific context and settings.
IVideoGenerationOptions
IVideoRequestOptions
Provider-specific options for video generation endpoints.
Video generation parameters often overlap heavily with image generation options.
IVoiceInfo
Interface for voice data retrieved from various AI APIs. This interface defines the properties that all voice data should implement. It is used to standardize the voice data across different AI providers.
IWebSocketAudioInputService
[DEPRECATED] This interface has been replaced by IWebSocketSpeechToTextService. Use IWebSocketSpeechToTextService.PushInputAudioBuffer() instead. This file will be removed in a future version.
IWebSocketGenerativeService<TInputBuffer, TDelta, TOutput, TSettings>
Represents a generative AI service that maintains a persistent WebSocket connection for bidirectional real-time communication.
IWebSocketImageGenerationService
Service interface for WebSocket-based image generation.
IWebSocketSpeechToTextService
Service interface for WebSocket-based speech-to-text.
IWebSocketTextToSpeechService
Service interface for WebSocket-based text-to-speech.
Enums
AmazonTypes.TitanImageQuality
AnnotationType
Api
Identifies the available AI service providers for API integrations.
This enum represents logical AI backends rather than pure network providers.
Some entries may be cloud-based, self-hosted, local, or experimental.
AudioBufferState
Represents the state of an audio buffer.
CatalogType
ChatRole
CodeInterpreterStatus
CodeReferenceSource
ComparisonType
CompoundType
ConnectionMode
ContentFormat
ContentType
Identifies the content part type used across AI APIs (ChatCompletions, Responses, Assistants, and third-party providers).
ContextPurpose
CustomToolFormatType
ElevenLabsTypes.InputFormat
The format of input audio. Options are "pcm_s16le_16" or "other". For pcm_s16le_16, the input audio must be 16-bit PCM at a 16 kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than when passing an encoded waveform.
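Because pcm_s16le_16 fixes the sample width, rate, and channel count, the raw data rate is fully determined; a quick back-of-the-envelope check:

```python
# "pcm_s16le_16": 16-bit (2-byte) little-endian PCM, 16 kHz, mono.
sample_rate = 16_000      # samples per second
bytes_per_sample = 2      # 16-bit samples
channels = 1              # mono
bytes_per_second = sample_rate * bytes_per_sample * channels
assert bytes_per_second == 32_000   # so 100 ms of audio is 3,200 bytes
```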
ElevenLabsTypes.MusicVocalType
ElevenLabsTypes.OutputFormat
Output format of the generated audio. Formatted as codec_sample_rate_bitrate, so an MP3 with a 22.05 kHz sample rate at 32 kbps is represented as mp3_22050_32. MP3 with a 192 kbps bitrate requires a subscription to the Creator tier or above. PCM with a 44.1 kHz sample rate requires a subscription to the Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs. Default is mp3_44100_128.
ElevenLabsTypes.TimestampsGranularity
The granularity of the timestamps in the transcription. "word" provides word-level timestamps and "character" provides character-level timestamps per word.
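The codec_sample_rate_bitrate convention used by OutputFormat can be unpacked mechanically. A small illustrative parser (not part of the SDK; it assumes bitrate-less values such as pcm_16000 simply omit the third segment):

```python
def parse_output_format(fmt: str):
    """Split an output-format string like 'mp3_22050_32' into its parts.

    Returns (codec, sample_rate_hz, bitrate_kbps); bitrate is None for
    formats that do not carry one (e.g. raw PCM).
    """
    parts = fmt.split("_")
    codec = parts[0]
    sample_rate_hz = int(parts[1])
    bitrate_kbps = int(parts[2]) if len(parts) > 2 else None
    return codec, sample_rate_hz, bitrate_kbps

parse_output_format("mp3_44100_128")  # the documented default
```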
EmbedInputType
Type of input for embedding generation. The RESTEnum uses Marengo-style string values.
EmbedTaskType
Google and Cohere only. Task type for embedding content. Values use the Google format and are converted to the Cohere format via a JsonConverter when the Cohere provider is used.
EmbedTypes
FileResponseFormat
FileSourceType
FinishReason
The reason the model stopped generating tokens. It is also called "finish_reason" in some APIs.
GameGenre
GameTheme
GameType
GenerationStatus
GoogleTypes.AspectRatio
GoogleTypes.PersonGeneration
GoogleTypes.Resolution
HarmBlockThreshold
Block at and beyond a specified harm probability.
HarmCategory
Represents the category of harm that a piece of content may fall into. This is used in moderation tasks to classify content based on its potential harm.
HarmProbability
Probability that a prompt or candidate matches a harm category.
ImageAnalysisType
ImageEditType
IncludeOption
LanguageTone
McpOAuthType
MediaGenerationOp
Modality
Modality defines the data form a model accepts as input and returns as output. Even for similar generation tasks, models can differ in whether they use text, image, or audio data. The SDK uses this value to validate compatibility in model selection, request mapping, and UI filtering.
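The compatibility validation described above amounts to a subset check between a model's declared modalities and a request's. A hypothetical sketch (the SDK's actual validation logic is not shown here; the model example is illustrative):

```python
# Each model advertises the modalities it accepts and produces; a request is
# compatible when its input and output modalities are subsets of the model's.
def is_compatible(model_io, request_io):
    model_in, model_out = model_io
    request_in, request_out = request_io
    return request_in <= model_in and request_out <= model_out

# e.g. a vision LLM: accepts text+image as input, produces text
vision_llm = ({"text", "image"}, {"text"})

is_compatible(vision_llm, ({"text"}, {"text"}))    # plain chat -> True
is_compatible(vision_llm, ({"text"}, {"image"}))   # image output -> False
```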
ModelCapability
Unified model capabilities enum. Combines capabilities across different model types for easier management.
ModelErrorType
ModelType
Types of AI Models. Multi-modal models such as Gemini should be classified under their primary function, typically as Language Models.
OSMask
OpenAITypes.AudioStreamFormat
OpenAITypes.Fidelity
OpenAITypes.ImageBackground
OpenAITypes.ImageDetail
OpenAITypes.ImageModeration
OpenAITypes.ImageStyle
The style of the generated images. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for OpenAIModel.DallE3.
OpenAITypes.MediaAspect
OpenAITypes.SpeechOutputFormat
PerplexityTypes.WebSearchMode
ReasoningEffort
ReasoningFormat
GroqCloud-specific parameter.
ReasoningSummaryLevel
RequireApproval
ResourceKind
ResponseStatus
ResponseVerbosity
SearchContextSize
SearchStatus
StabilityTypes.AspectRatio
StabilityTypes.StylePreset
TextChunkType
Specifies the type of text content in an LLM (Large Language Model) streaming response.
TimeWindow
TokenCountPreset
TokenType
ToolChoiceMode
ToolOutputEventType
Defines the type of tool output event.
ToolStatus
ToolType
TranscriptFormat
UploadPurpose
Represents the intended use case assigned to an uploaded file by a provider API. Different purposes can affect validation rules, retention behavior, and where the file can be consumed.
UsageType
VoiceAge
VoiceCategory
The category of the voice.
VoiceGender
Mainly used as a TTS Voice property.
VoiceStyle
Describes the expressive speaking style of a voice. Style variants allow a single base voice to adopt different tonal characteristics suited to specific content types. Not all providers support styles; check provider documentation for availability.
VoiceType
WebSearchLocationMode
Delegates
RecordMerger
A delegate that merges two GenerationRecord instances into one. Used to combine a base record with a newly generated record, e.g. for appending streaming chunks.