Class GenerationConfig
Configuration options for model generation and outputs. Not all parameters may be configurable for every model.
Inheritance
object → GenerationConfig
Namespace: Glitch9.AIDevKit.Google
Assembly: .dll
Syntax
public class GenerationConfig
Properties
CandidateCount
Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1.
Declaration
public int CandidateCount { get; set; }
Property Value
Type | Description |
---|---|
int |
EnableEnhancedCivicAnswers
Optional. Enables enhanced civic answers. It may not be available for all models.
Declaration
public bool? EnableEnhancedCivicAnswers { get; set; }
Property Value
Type | Description |
---|---|
bool? |
FrequencyPenalty
Optional. Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far. A positive penalty discourages the use of tokens in proportion to how often they have already been used: the more a token is used, the harder it becomes for the model to use that token again, which increases the vocabulary of responses. Caution: A negative penalty encourages the model to reuse tokens in proportion to how often they have been used. Small negative values will reduce the vocabulary of a response; larger negative values will cause the model to start repeating a common token until it hits the maxOutputTokens limit.
Declaration
public float? FrequencyPenalty { get; set; }
Property Value
Type | Description |
---|---|
float? |
Logprobs
Optional. Only valid if responseLogprobs=True. This sets the number of top logprobs to return at each decoding step in the Candidate.logprobs_result.
Declaration
public int? Logprobs { get; set; }
Property Value
Type | Description |
---|---|
int? |
MaxTokens
Optional. The maximum number of tokens to include in a candidate. Note: The default value varies by model, see the Model.output_token_limit attribute of the Model returned from the getModel function.
Declaration
public int? MaxTokens { get; set; }
Property Value
Type | Description |
---|---|
int? |
MediaResolution
Optional. If specified, this media resolution will be used.
Declaration
public MediaResolution? MediaResolution { get; set; }
Property Value
Type | Description |
---|---|
MediaResolution? |
PresencePenalty
Optional. Presence penalty applied to the next token's logprobs if the token has already been seen in the response. This penalty is binary (on/off) and does not depend on the number of times the token is used (after the first). Use FrequencyPenalty for a penalty that increases with each use. A positive penalty discourages the use of tokens that have already appeared in the response, increasing the vocabulary. A negative penalty encourages the reuse of tokens that have already appeared, decreasing the vocabulary.
Declaration
public float? PresencePenalty { get; set; }
Property Value
Type | Description |
---|---|
float? |
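The two penalties above are complementary: FrequencyPenalty scales with how often a token has appeared, while PresencePenalty is a flat, one-time penalty per token already seen. A minimal sketch of combining them, assuming `using Glitch9.AIDevKit.Google;` is available:

```csharp
using Glitch9.AIDevKit.Google;

// Discourage verbatim repetition without forbidding reuse outright.
var config = new GenerationConfig
{
    FrequencyPenalty = 0.5f, // positive: repeated tokens become progressively less likely
    PresencePenalty = 0.3f   // positive: any already-used token gets a flat discouragement
};
```

Negative values have the opposite effect and, for FrequencyPenalty, can degenerate into repeating a single common token, so they are best avoided unless deliberately narrowing the vocabulary.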
ResponseLogprobs
Optional. If true, exports the logprobs results in the response.
Declaration
public bool? ResponseLogprobs { get; set; }
Property Value
Type | Description |
---|---|
bool? |
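As noted under Logprobs, the Logprobs count only takes effect when ResponseLogprobs is enabled. A minimal sketch, assuming `using Glitch9.AIDevKit.Google;`:

```csharp
using Glitch9.AIDevKit.Google;

// Request per-step token log probabilities in the response.
var config = new GenerationConfig
{
    ResponseLogprobs = true, // required for Logprobs below to have any effect
    Logprobs = 5             // return the top 5 logprobs at each decoding step
};
```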
ResponseMimeType
Optional. MIME type of the generated candidate text. Supported MIME types are:
- text/plain: (default) Text output.
- application/json: JSON response in the response candidates.
- text/x.enum: ENUM as a string response in the response candidates. Refer to the docs for a list of all supported text MIME types.
Declaration
public string ResponseMimeType { get; set; }
Property Value
Type | Description |
---|---|
string |
ResponseModalities
Optional. The requested modalities of the response. Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response. A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned. An empty list is equivalent to requesting only text.
Declaration
public List<Modality> ResponseModalities { get; set; }
Property Value
Type | Description |
---|---|
List<Modality> |
ResponseSchema
Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays. If set, a compatible responseMimeType must also be set. Compatible MIME types: application/json: Schema for JSON response. Refer to the JSON text generation guide for more details.
Declaration
public JsonSchema ResponseSchema { get; set; }
Property Value
Type | Description |
---|---|
JsonSchema |
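ResponseSchema and ResponseMimeType work together: setting a schema requires a compatible MIME type. A sketch of requesting structured JSON output, assuming `using Glitch9.AIDevKit.Google;`; how a `JsonSchema` instance is actually constructed is not documented here, so `mySchema` is a placeholder:

```csharp
using Glitch9.AIDevKit.Google;

// 'mySchema' is a hypothetical, pre-built JsonSchema describing the
// expected object shape; see the JsonSchema type for its actual members.
JsonSchema mySchema = BuildMySchema();

var config = new GenerationConfig
{
    ResponseMimeType = "application/json", // must be schema-compatible when ResponseSchema is set
    ResponseSchema = mySchema              // constrains candidates to this JSON shape
};
```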
Seed
Optional. Seed used in decoding. If not set, the request uses a randomly generated seed.
Declaration
public int? Seed { get; set; }
Property Value
Type | Description |
---|---|
int? |
SpeechConfig
Optional. The speech generation config.
Declaration
public SpeechConfig SpeechConfig { get; set; }
Property Value
Type | Description |
---|---|
SpeechConfig |
StopSequences
Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.
Declaration
public string[] StopSequences { get; set; }
Property Value
Type | Description |
---|---|
string[] |
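MaxTokens and StopSequences both bound output length: the former as a hard token cap, the latter by cutting generation at a matched string. A minimal sketch, assuming `using Glitch9.AIDevKit.Google;`:

```csharp
using Glitch9.AIDevKit.Google;

var config = new GenerationConfig
{
    MaxTokens = 256,                        // hard cap on candidate length
    StopSequences = new[] { "\n\n", "END" } // up to 5 sequences; generation stops at the first match,
                                            // and the matched sequence is excluded from the response
};
```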
Temperature
Optional. Controls the randomness of the output. Note: The default value varies by model; see the Model.temperature attribute of the Model returned from the getModel function. Values can range over [0.0, 2.0].
Declaration
public float? Temperature { get; set; }
Property Value
Type | Description |
---|---|
float? |
TopK
Optional. The maximum number of tokens to consider when sampling. Models use nucleus sampling or combined top-k and nucleus sampling. Top-k sampling considers the set of the topK most probable tokens. Models running with nucleus sampling don't allow a topK setting. Note: The default value varies by model; see the Model.top_k attribute of the Model returned from the getModel function. An empty topK field in Model indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests.
Declaration
public int? TopK { get; set; }
Property Value
Type | Description |
---|---|
int? |
TopP
Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on their cumulative probability. Note: The default value varies by model; see the Model.top_p attribute of the Model returned from the getModel function.
Declaration
public float? TopP { get; set; }
Property Value
Type | Description |
---|---|
float? |
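Temperature, TopK, TopP, and Seed together control the sampling behavior described above. A minimal sketch of a moderately creative but reproducible configuration, assuming `using Glitch9.AIDevKit.Google;`; the specific values are illustrative, and TopK is ignored by models that use pure nucleus sampling:

```csharp
using Glitch9.AIDevKit.Google;

var config = new GenerationConfig
{
    Temperature = 0.7f, // lower = more deterministic; valid range [0.0, 2.0]
    TopP = 0.95f,       // nucleus sampling: keep tokens whose cumulative probability reaches 0.95
    TopK = 40,          // consider at most the 40 most probable tokens, where supported
    Seed = 42           // fix the decoding seed so repeated requests sample identically
};
```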