Class ModelOptions

Namespace
Glitch9.AIDevKit

This class defines a flexible set of parameters that control how a language model generates text. All options are optional; tuning them gives precise control over randomness, token filtering, sampling behavior, and performance.

public class ModelOptions
Inheritance
object
ModelOptions
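
A typical pattern is to construct an instance and set only the options you need; properties left null fall back to the provider's defaults. A minimal sketch (the property names come from this reference; how the options object is attached to a request depends on the rest of the AIDevKit API):

```csharp
using Glitch9.AIDevKit;

// Configure only what you need; null properties use provider defaults.
var options = new ModelOptions
{
    Temperature = 0.7f, // moderate randomness
    TopP = 0.9f,        // nucleus sampling cutoff
    MaxTokens = 512,    // cap the response length
};
```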

Properties

FrequencyPenalty

Penalizes tokens that occur frequently across the generated content. Range: -2.0 to 2.0 (typical: 0.0–1.0).

public float? FrequencyPenalty { get; set; }

Property Value

float?

LogitBias

Biases specific tokens by ID. Use to influence token selection. Key = token ID (as string), Value = bias (-100 to 100, 0 = no bias).

public Dictionary<string, double> LogitBias { get; set; }

Property Value

Dictionary<string, double>
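
For example, to discourage or encourage specific tokens, map each token ID (as a string) to a bias value. The token IDs below are hypothetical placeholders; real IDs depend on the target model's tokenizer:

```csharp
using System.Collections.Generic;
using Glitch9.AIDevKit;

var options = new ModelOptions
{
    LogitBias = new Dictionary<string, double>
    {
        ["50256"] = -100, // hypothetical token ID: -100 effectively bans the token
        ["1234"]  = 5,    // hypothetical token ID: a small positive bias favors it
    }
};
```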

Logprobs

OpenAI only. Number of log probabilities to return for each output token. When set, the log probabilities of each output token are returned in the message content. This option is currently not available on the gpt-4-vision-preview model. Defaults to 0 (disabled).

public int? Logprobs { get; set; }

Property Value

int?

LowVram

Ollama only. Use VRAM-optimized loading.

public bool? LowVram { get; set; }

Property Value

bool?

MainGpu

Ollama only. ID of the main GPU to prioritize.

public int? MainGpu { get; set; }

Property Value

int?

MaxTokens

Optional. Maximum number of tokens (range: [1, context_length)).

public int? MaxTokens { get; set; }

Property Value

int?

MinP

Minimum probability for token filtering (less common). Range: 0.0–1.0

public float? MinP { get; set; }

Property Value

float?

Mirostat

Ollama only. Enables Mirostat sampling (0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0).

public int? Mirostat { get; set; }

Property Value

int?

MirostatEta

Ollama only. Controls learning rate in Mirostat sampling. Typical: 0.1

public float? MirostatEta { get; set; }

Property Value

float?

MirostatTau

Ollama only. Controls the target surprise level in Mirostat sampling. Typical: 5.0

public float? MirostatTau { get; set; }

Property Value

float?
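
The three Mirostat properties work together. A sketch enabling Mirostat 2.0 on an Ollama-backed model, using the typical values noted above:

```csharp
using Glitch9.AIDevKit;

var options = new ModelOptions
{
    Mirostat = 2,       // enable Mirostat 2.0
    MirostatEta = 0.1f, // learning rate (typical)
    MirostatTau = 5.0f, // target surprise level (typical)
};
```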

NumBatch

Number of tokens to process in a single batch.

public int? NumBatch { get; set; }

Property Value

int?

NumCtx

Ollama only. Number of context tokens (max sequence length).

public int? NumCtx { get; set; }

Property Value

int?

NumGpu

Ollama only. Number of GPUs to use.

public int? NumGpu { get; set; }

Property Value

int?

NumKeep

Ollama only. Number of initial tokens to keep from context when truncating.

public int? NumKeep { get; set; }

Property Value

int?

NumPredict

Ollama only. Maximum number of tokens to predict (like max_tokens).

public int? NumPredict { get; set; }

Property Value

int?

NumThread

Ollama only. Number of CPU threads to use for inference. Typical: number of physical CPU cores.

public int? NumThread { get; set; }

Property Value

int?
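
The Num* properties tune Ollama's resource usage. A sketch of one plausible configuration; the right values depend entirely on your hardware and model:

```csharp
using Glitch9.AIDevKit;

var options = new ModelOptions
{
    NumCtx = 4096,  // context window in tokens
    NumGpu = 1,     // number of GPUs to use
    NumThread = 8,  // match your physical core count
};
```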

Numa

Ollama only. Enable NUMA-aware optimization.

public bool? Numa { get; set; }

Property Value

bool?

PenalizeNewline

Ollama only. Whether to apply penalties to newline tokens.

public bool? PenalizeNewline { get; set; }

Property Value

bool?

PresencePenalty

Penalizes tokens already present in the generated content. Range: -2.0 to 2.0 (typical: 0.0–1.0).

public float? PresencePenalty { get; set; }

Property Value

float?

RepeatLastN

Ollama only. Number of previous tokens to consider for repetition penalty. Typical: 64–256

public int? RepeatLastN { get; set; }

Property Value

int?

RepeatPenalty

Penalizes repetition of recent tokens. Range: 0.0–2.0 (typical: 1.1).

public float? RepeatPenalty { get; set; }

Property Value

float?

Seed

Random seed for reproducibility. Set to the same value for deterministic output.

public int? Seed { get; set; }

Property Value

int?
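
Pairing a fixed seed with a temperature of zero is a common recipe for (near-)reproducible output, though determinism ultimately depends on the provider:

```csharp
using Glitch9.AIDevKit;

var options = new ModelOptions
{
    Seed = 42,        // same seed => same sampling decisions
    Temperature = 0f, // remove sampling randomness
};
```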

Stop

List of strings that, if generated, will stop further generation.

public List<string> Stop { get; set; }

Property Value

List<string>
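
For example, to halt generation at a blank line or a sentinel word (the strings below are arbitrary illustrations):

```csharp
using System.Collections.Generic;
using Glitch9.AIDevKit;

var options = new ModelOptions
{
    // Generation stops as soon as either string is produced.
    Stop = new List<string> { "\n\n", "END" },
};
```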

Temperature

Sampling temperature: controls randomness in output. Lower = deterministic, Higher = creative. Range: 0.0–2.0 (typical: 0.7–1.0).

public float? Temperature { get; set; }

Property Value

float?

TopA

Top-A sampling: filters out tokens whose probability falls below a threshold derived from the probability of the most likely token. Range: 0.0–1.0 (0 disables).

public float? TopA { get; set; }

Property Value

float?

TopK

Top-K sampling: limits the next token selection to the K most probable tokens. Range: 1–100 (typical: 40).

public int? TopK { get; set; }

Property Value

int?

TopLogprobs

Number of top log probabilities to return per token. Range: 0–20 (if supported).

public int? TopLogprobs { get; set; }

Property Value

int?

TopP

Top-P sampling (nucleus sampling): limits the next token selection to a cumulative probability. Range: 0.0–1.0 (typical: 0.9).

public float? TopP { get; set; }

Property Value

float?
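
A common rule of thumb (e.g. in OpenAI's API guidance) is to tune Temperature or TopP, but not both at once. A sketch of the two styles:

```csharp
using Glitch9.AIDevKit;

// Looser, more varied output via temperature alone:
var creative = new ModelOptions { Temperature = 1.2f };

// Tighter, more focused output via nucleus sampling alone:
var focused = new ModelOptions { TopP = 0.5f };
```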

TypicalP

Ollama only. Typical-P sampling, an alternative to top-p. Range: 0.0–1.0

public float? TypicalP { get; set; }

Property Value

float?

UseMlock

Ollama only. Lock the model in RAM (mlock) to prevent it from being swapped out.

public bool? UseMlock { get; set; }

Property Value

bool?

UseMmap

Ollama only. Use memory-mapped files (mmap) to load the model.

public bool? UseMmap { get; set; }

Property Value

bool?

VocabOnly

Ollama only. Load only the vocabulary; do not load the full model.

public bool? VocabOnly { get; set; }

Property Value

bool?