Table of Contents

Namespace Glitch9.AIDevKit.Google

Classes

AttributionSourceId

Identifier for the source contributing to this attribution.

BatchEmbedContentsRequest

Generates multiple embeddings from the model given input text in a synchronous call.

BatchEmbedContentsResponse

Response from calling GenerativeModel.batchEmbedContents.

Blob

Interface for sending an image.

CachedContent

Content that has been preprocessed and can be used in subsequent requests to ModelService. Cached content can only be used with the model it was created for.

CachedContentRequest
Chunk

A Chunk is a subpart of a Document that is treated as an independent unit for the purposes of vector representation and storage. A Corpus can have a maximum of 1 million Chunks. A patch request takes an 'updateMask' query parameter listing the fields to update; currently, only customMetadata and data can be updated.
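
For illustration only, a minimal sketch of a chunk patch, assuming hypothetical member names on UpdateChunkRequest and ChunkData (the actual AIDevKit signatures may differ):

```csharp
// Sketch only: member names are assumptions, not verified AIDevKit signatures.
var update = new UpdateChunkRequest
{
    // Full resource name of the chunk being patched (hypothetical path).
    Name = "corpora/my-corpus/documents/my-doc/chunks/my-chunk",
    Data = new ChunkData { StringValue = "Updated chunk text." },
    // Only 'data' and 'customMetadata' are currently accepted in the update mask.
    UpdateMask = "data"
};
```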

ChunkBatchRequest<T>
ChunkData

Extracted data that represents the Chunk content.

CitationMetadata

Citation metadata that may be found on a GenerateContentCandidate.

CitationSource

A single citation source.

Condition

Filter condition applicable to a single key.

ContentEmbedding

A single content embedding.

ContentFilter

Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.

CorporaQueryRequest
CorporaQueryResponse

Response from corpora.query containing a list of relevant chunks.

Corpus

A Corpus is a collection of Documents. A project can create up to 5 corpora.

CreateChunkRequest

Request to create a Chunk.

DeleteChunkRequest

Request to delete a Chunk.

Document

A Document is a collection of Chunks. A Corpus can have a maximum of 10,000 Documents.

EmbedContentRequest

Generates an embedding from the model given an input Content.

EmbedContentResponse

The response to an EmbedContentRequest.

FineTuningData

A single example for tuning.

FineTuningDataset

Dataset for training or validation.

FineTuningExamples

A set of tuning examples. Can be training or validation data.

FineTuningTask
FunctionCallingConfig

Configuration for specifying function calling behavior.

FunctionLibrary
FunctionResponse

A predicted FunctionResponse returned from the model that contains a string representing the function name with the arguments and their values.

GeminiCandidate

Google Gemini version of ChatChoice. A response candidate generated from the model.

2025-09-24: Updated properties to match the latest Gemini API.

GeminiConfig

Configuration options for model generation and outputs. Not all parameters may be configurable for every model.

Note: Formerly named "GenerationConfig" to match the official Gemini API, but renamed to "GeminiConfig" to avoid confusion with similar classes for other providers.

2025-03-30: Added new properties - responseMimeType, responseSchema, responseModalities
2025-09-24: Added new properties - thinkingConfig
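
As a hedged illustration, generation options might be configured as below; the member names follow the fields listed in the changelog above, but the exact casing and types are assumptions:

```csharp
// Sketch only: member names mirror the fields named above and may not match
// the actual GeminiConfig members exactly.
var config = new GeminiConfig
{
    Temperature = 0.7f,                     // assumed standard sampling parameter
    MaxOutputTokens = 1024,                 // assumed standard output cap
    ResponseMimeType = "application/json",  // request structured JSON output
    ThinkingConfig = new ThinkingConfig { ThinkingBudget = 1024 } // assumed member name
};
```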

GeminiContent

The base structured datatype containing the multi-part content of a message. It is the Gemini-specific form of a Message.

GeminiContentPart

A datatype containing media that is part of a multi-part GeminiContent message.

A GeminiContentPart consists of data which has an associated datatype. A GeminiContentPart can only contain one of the accepted types in GeminiContentPart.Data.

A GeminiContentPart must have a fixed IANA MIME type identifying the type and subtype of the media if the InlineData field is filled with raw bytes.
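
For illustration, a sketch of a multi-part GeminiContent with one text part and one inline-image part; the member names are assumptions modeled on the description above, not verified signatures:

```csharp
// Sketch only: constructors and properties are assumed, not verified.
byte[] pngBytes = System.IO.File.ReadAllBytes("image.png");

var content = new GeminiContent
{
    Role = "user",
    Parts = new[]
    {
        new GeminiContentPart { Text = "Describe this image." },
        new GeminiContentPart
        {
            // InlineData carries raw bytes, so a fixed IANA MIME type is required.
            InlineData = new Blob { MimeType = "image/png", Data = pngBytes } // Data may expect base64 in practice
        }
    }
};
```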

GeminiCountTokensRequest

Params for calling GenerativeModel.countTokens.

GeminiCountTokensResponse

Response from calling GenerativeModel.countTokens.

GeminiTool
GenerateAnswerRequest

Generates a grounded answer from the model given an input GenerateAnswerRequest. Input capabilities differ between models, including tuned models. See the model guide and tuning guide for details.

GenerateAnswerResponse

Response from the model for a grounded answer.

GenerateContentRequest

Generates a response from the model given an input GenerateContentRequest. Input capabilities differ between models, including tuned models. See the model guide and tuning guide for details.

GenerateContentRequestExtensions
GenerateContentResponse

Individual response from GenerativeModel.generateContent and GenerativeModel.generateContentStream. generateContentStream() will return one in each chunk until the stream is done.

GenerateImagesConfig
GenerateTextRequest

Generates a response from the model given an input message.

GenerateTextResponse

The response from the model, including candidate completions.

GenerateVideosConfig
GeneratedImageBlob
GoogleAIClient
GoogleAISettings
GoogleCodeExecution

The code execution tool allows the Gemini model to generate and run Python code. The model can then learn iteratively from the code execution results until it arrives at a final output. You can use code execution to build applications that benefit from code-based reasoning. For example, you can use code execution to solve equations or process text. You can also use the libraries included in the code execution environment to perform more specialized tasks. The code execution tool is available for all languages.

Gemini is only able to execute code in Python. You can still ask Gemini to generate code in another language, but the model can't use the code execution tool to run it.
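
A hedged sketch of attaching the code execution tool to a request; all member names below are assumptions, not verified AIDevKit signatures:

```csharp
// Sketch only: member names are assumptions, not verified AIDevKit signatures.
var request = new GenerateContentRequest
{
    Contents = new[]
    {
        new GeminiContent
        {
            Role = "user",
            Parts = new[] { new GeminiContentPart { Text = "Sum the first 50 primes by running code." } }
        }
    },
    Tools = new[] { new GeminiTool { CodeExecution = new GoogleCodeExecution() } }
};
// Response candidates may then interleave ordinary text parts with executable-code
// parts and their execution-result output.
```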

GoogleFile

A file uploaded to the API.

GoogleFileData

URI based data.

GoogleGeminiRequest
GoogleImage
GoogleMetadata
GoogleModelData
GoogleModelOperation
GooglePermission

The Permission resource grants a user, group, or the rest of the world access to a PaLM API resource (e.g. a TunedModel or Corpus).

A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant permissions that the role contains.

There are three concentric roles. Each role is a superset of the previous role's permitted operations:

- reader can use the resource (e.g. tuned model, corpus) for inference

- writer has reader's permissions and additionally can edit and share

- owner has writer's permissions and additionally can delete
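
As an illustration of the role model above, a permission grant might look like the sketch below; the field names mirror the underlying permissions resource and are assumptions, not verified AIDevKit members:

```csharp
// Sketch only: member names are assumptions, not verified AIDevKit signatures.
var permission = new GooglePermission
{
    GranteeType = GoogleGranteeType.User,  // assumed enum member (user / group / everyone)
    EmailAddress = "teammate@example.com",
    Role = "READER"                        // reader < writer < owner
};
```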

GoogleSearch

Grounding with Google Search connects the Gemini model to real-time web content and works with all available languages. This allows Gemini to provide more accurate answers and cite verifiable sources beyond its knowledge cutoff.
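
Reusing the request sketch shown under GoogleCodeExecution, grounding might be enabled as below; member names are assumptions, not verified signatures:

```csharp
// Sketch only: member names are assumptions, not verified AIDevKit signatures.
request.Tools = new[] { new GeminiTool { GoogleSearch = new GoogleSearch() } };
// Grounded candidates may then expose GroundingMetadata: GroundingChunk entries for the
// web sources, GroundingSupport links between sources and answer segments, and a
// SearchEntryPoint for rendering search suggestions.
```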

GoogleUploadRequest
GoogleUrlContext

The URL context tool lets you provide additional context to the models in the form of URLs. When you include URLs in your request, the model accesses the content from those pages (as long as it's not a URL type listed in the limitations section) to inform and enhance its response.
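
A hedged sketch of the URL context tool, again reusing the request from the GoogleCodeExecution sketch; member names are assumptions:

```csharp
// Sketch only: member names are assumptions, not verified AIDevKit signatures.
// URLs are supplied inside the prompt text; the tool lets the model fetch those pages.
request.Contents[0].Parts[0].Text =
    "Summarize the changelog at https://example.com/releases/1.2.0";
request.Tools = new[] { new GeminiTool { UrlContext = new GoogleUrlContext() } };
// UrlContextMetadata in the response lists, per UrlMetadata entry, which URLs were
// retrieved and their UrlRetrievalStatus.
```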

GoogleVoice

This is not automatically generated; it is hard-coded.

GroundingAttribution

Attribution for a source that contributed to an answer.

GroundingChunk
GroundingMetadata
GroundingPassage

Passage included inline with a grounding configuration.

GroundingPassageId

Identifier for a part within a GroundingPassage.

GroundingPassages

A repeated list of passages.

GroundingSupport
LogprobsResult
MetadataFilter

User-provided filter to limit retrieval based on Chunk- or Document-level metadata values. Example (genre = drama OR genre = action): key = "document.custom_metadata.genre", conditions = [{stringValue = "drama", operation = EQUAL}, {stringValue = "action", operation = EQUAL}]
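
The example above might be constructed as in the sketch below; the member names mirror the example's fields but are assumptions, not verified AIDevKit signatures:

```csharp
// Sketch only: member names are assumptions, not verified AIDevKit signatures.
var filter = new MetadataFilter
{
    Key = "document.custom_metadata.genre",
    Conditions = new[]
    {
        new Condition { StringValue = "drama",  Operation = Operator.Equal },  // genre = drama
        new Condition { StringValue = "action", Operation = Operator.Equal },  // OR genre = action
    }
};
// The conditions within a single filter express the OR in the example above.
```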

MultiSpeakerVoiceConfig

The configuration for the multi-speaker setup.

PrebuiltVoiceConfig
PredictionConfigBase
PredictionRequest

Generate image(s) from a prompt using the Imagen 3 model, or generate a video from a prompt using the Veo model. Both Imagen 3 and Veo models are only available on the Google paid tier.

PredictionRequestConverter
PredictionResponse
RelevantChunk

The information for a chunk relevant to a query.

RetrievalMetadata
SearchEntryPoint
Segment
SemanticRetrieverChunk

Identifier for a Chunk retrieved via Semantic Retriever specified in the GenerateAnswerRequest using SemanticRetrieverConfig.

SemanticRetrieverConfig

Configuration for retrieving grounding content from a Corpus or Document created using the Semantic Retriever API.

SpeakerVoiceConfig

The configuration for a single speaker in a multi-speaker setup.

SpeechConfig
Status

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details.

ThinkingConfig
ToolConfig

The Tool configuration containing parameters for specifying GeminiTool use in the request.

TopCandidates
TransferOwnershipRequest

Transfers ownership of the tuned model. This is the only way to change ownership of the tuned model. The current owner will be downgraded to the writer role.

TunedModel
TunedModelSource

Tuned model as a source for training a new model.

TuningSnapshot
UpdateChunkRequest

Request to update a Chunk.

UrlContextMetadata
UrlMetadata
VideoGenerationReferenceImage
VideoMetadata

Metadata for a video File.

VoiceConfig
Web

Interfaces

IChunkRequest

Interface for all Chunk requests.

Enums

AnswerStyle
FileState

States for the lifecycle of a File.

FunctionCallingMode

Defines the execution behavior for function calling by defining the execution mode.

GoogleFileSource
GoogleGranteeType

Defines types of the grantee of this permission.

MediaResolution
Operator

Defines the valid operators that can be applied to a key-value pair.

State

States for the lifecycle of a Chunk.

UrlRetrievalStatus