Class LanguageModelRequest<TSelf, TInput, TOutput>

Namespace
Glitch9.AIDevKit

Base class for text generation tasks using LLM models. Supports instructions, role-based prompts, and attachments.

public abstract class LanguageModelRequest<TSelf, TInput, TOutput> : GenerativeRequest<TSelf, TInput, TOutput, string, INoopStreamEvent<TOutput>>, ILanguageModelRequest, IGenerativeRequest, ISequentialRequest where TSelf : LanguageModelRequest<TSelf, TInput, TOutput> where TInput : IPrompt where TOutput : IGeneratedOutput

Type Parameters

TSelf
TInput
TOutput
Inheritance
object
FluentApiRequest<TSelf, TOutput>
GenerativeRequest<TSelf, TInput, TOutput, string, INoopStreamEvent<TOutput>>
LanguageModelRequest<TSelf, TInput, TOutput>
Implements
ILanguageModelRequest
IGenerativeRequest
ISequentialRequest

Constructors

LanguageModelRequest()

protected LanguageModelRequest()

LanguageModelRequest(TInput)

protected LanguageModelRequest(TInput prompt)

Parameters

prompt TInput
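
To make the generic contract concrete, here is a minimal sketch of a hypothetical subclass. ChatRequest, ChatPrompt, and ChatOutput are illustrative names only; the library ships its own derived types, and ChatPrompt and ChatOutput are assumed to implement IPrompt and IGeneratedOutput respectively.

using System.Collections.Generic;

// Illustrative only: a hypothetical concrete subclass.
public sealed class ChatRequest : LanguageModelRequest<ChatRequest, ChatPrompt, ChatOutput>
{
    private readonly List<Message> _messages = new();

    public ChatRequest(ChatPrompt prompt) : base(prompt) { }

    // The two abstract members documented below are all a subclass must supply.
    public override ChatRequest AddMessageRange(IEnumerable<Message> messages)
    {
        _messages.AddRange(messages);
        return this;
    }

    public override List<Message> GetMessages() => _messages;
}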

Properties

ContainerId

Anthropic-specific container ID for request grouping.

public string ContainerId { get; set; }

Property Value

string

FrequencyPenalty

Penalizes frequent repetition of the same token sequence. Range: -2.0 to 2.0 (typical: 0 to 1.0).

public FrequencyPenalty FrequencyPenalty { get; set; }

Property Value

FrequencyPenalty

Instructions

Optional. A system (or developer) message inserted into the model's context.

public string Instructions { get; set; }

Property Value

string

LogitBias

Biases specific tokens by ID. Use to influence token selection. Key = token ID (as string), Value = bias (-100 to 100, 0 = no bias).

public Dictionary<string, double> LogitBias { get; set; }

Property Value

Dictionary<string, double>
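
As a sketch, biasing two token IDs (the IDs and the request variable are placeholders; token IDs are tokenizer-specific):

var bias = new Dictionary<string, double>
{
    ["15339"] = 5.0,    // placeholder ID: nudge this token upward
    ["50256"] = -100.0  // placeholder ID: effectively ban this token
};
request.SetLogitBias(bias); // equivalently: request.LogitBias = bias;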

Logprobs

Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the message content. This option is currently not available on the gpt-4-vision-preview model. Defaults to false.

public Logprobs Logprobs { get; set; }

Property Value

Logprobs

MaxOutputTokens

Optional. An upper bound for the number of tokens that can be generated for a response.

public TokenCount MaxOutputTokens { get; set; }

Property Value

TokenCount

ModelType

public override ModelType ModelType { get; }

Property Value

ModelType

ParallelToolCalls

Optional. Whether to allow the model to run tool calls in parallel. Defaults to true.

public bool? ParallelToolCalls { get; set; }

Property Value

bool?

PresencePenalty

Penalizes tokens already present in the generated content. Range: -2.0 to 2.0 (typical: 0 to 1.0).

public PresencePenalty PresencePenalty { get; set; }

Property Value

PresencePenalty

ReasoningOptions

Optional. Configuration options for reasoning models.

public ReasoningOptions ReasoningOptions { get; set; }

Property Value

ReasoningOptions

ResponseFormat

Defines how the response should be formatted (e.g., text, JSON).

public virtual ResponseFormat ResponseFormat { get; set; }

Property Value

ResponseFormat

SafetySettings

Safety filters that define moderation thresholds for the model output.

public List<SafetySetting> SafetySettings { get; set; }

Property Value

List<SafetySetting>

StartingMessage

Optional. A starting message that seeds the beginning of the model's response.

public string StartingMessage { get; set; }

Property Value

string

StreamOptions

Optional. Options for streaming responses. Only set this when stream is set to true. Defaults to null.

public StreamOptions StreamOptions { get; set; }

Property Value

StreamOptions

Temperature

Optional. The sampling temperature to use, between 0 and 2. Defaults to 1.

public Temperature Temperature { get; set; }

Property Value

Temperature

ToolChoice

Optional. How the model should select which tool (or tools) to use when generating a response.

public ToolChoice ToolChoice { get; set; }

Property Value

ToolChoice

Tools

Optional. An array of tools the model may call while generating a response.

public List<Tool> Tools { get; set; }

Property Value

List<Tool>

TopK

Samples from the topK tokens with the highest probabilities. Range: [1, 1000]. Default: 40.

public TopK TopK { get; set; }

Property Value

TopK

TopLogprobs

Optional. An integer between 0 and 20 specifying the number of most likely tokens to return.

public Logprobs TopLogprobs { get; set; }

Property Value

Logprobs

TopP

Optional. Nucleus sampling parameter. Defaults to 1.

public TopP TopP { get; set; }

Property Value

TopP
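
Taken together, a hedged sketch of tuning these sampling knobs on a concrete request instance (request is a placeholder; the primitive assignments assume Temperature, TopP, TopK, and TokenCount convert implicitly from float/int, which may differ in practice):

request.Temperature = 0.7f;     // 0 to 2; lower is more deterministic
request.TopP = 0.9f;            // nucleus sampling cutoff
request.TopK = 40;              // matches the documented default
request.MaxOutputTokens = 1024; // upper bound on generated tokens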

Methods

AddMessage(params Message[])

Adds context messages for chat-based models.

public TSelf AddMessage(params Message[] messages)

Parameters

messages Message[]

Returns

TSelf

AddMessageRange(IEnumerable<Message>)

Adds a range of context messages for chat-based models.

public abstract TSelf AddMessageRange(IEnumerable<Message> messages)

Parameters

messages IEnumerable<Message>

Returns

TSelf
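
A sketch of seeding chat context before dispatch. How Message is constructed is an assumption here (a role-plus-text shape is shown), and history is a placeholder IEnumerable<Message>; consult the Message reference for its actual constructors or factories:

request
    .AddMessage(
        new Message("user", "What is the capital of France?"), // assumed ctor
        new Message("assistant", "Paris."))                    // assumed ctor
    .AddMessageRange(history);                                 // prior turns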

GetMessages()

Returns the context messages currently attached to this request.

public abstract List<Message> GetMessages()

Returns

List<Message>

SetFrequencyPenalty(FrequencyPenalty)

Sets the frequency penalty parameter.

public TSelf SetFrequencyPenalty(FrequencyPenalty frequencyPenalty)

Parameters

frequencyPenalty FrequencyPenalty

Returns

TSelf

SetInstructions(string)

Sets the instructions for the task. This is a specific command or request for the model to follow.

public TSelf SetInstructions(string instructions)

Parameters

instructions string

Returns

TSelf

SetLogitBias(Dictionary<string, double>)

Sets a custom token bias map.

public TSelf SetLogitBias(Dictionary<string, double> logitBias)

Parameters

logitBias Dictionary<string, double>

Returns

TSelf

SetLogprobs(Logprobs)

Enables returning token-level log probabilities.

public TSelf SetLogprobs(Logprobs logprobs)

Parameters

logprobs Logprobs

Returns

TSelf
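
A sketch of requesting token-level log probabilities, assuming Logprobs converts implicitly from a bool (for the on/off flag) and an int (for TopLogprobs):

request.SetLogprobs(true); // return a logprob for each output token
request.TopLogprobs = 5;   // also return the 5 most likely alternatives per position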

SetMaxOutputTokens(TokenCount)

Sets the maximum number of tokens the model can generate in its response.

public TSelf SetMaxOutputTokens(TokenCount maxTokens)

Parameters

maxTokens TokenCount

Returns

TSelf

SetPresencePenalty(PresencePenalty)

Sets the presence penalty parameter.

public TSelf SetPresencePenalty(PresencePenalty presencePenalty)

Parameters

presencePenalty PresencePenalty

Returns

TSelf

SetReasoning(ReasoningEffort, TokenCount, bool)

OpenRouter-style reasoning configuration.

public TSelf SetReasoning(ReasoningEffort effort, TokenCount budgetTokens, bool exclude)

Parameters

effort ReasoningEffort
budgetTokens TokenCount
exclude bool

Returns

TSelf

SetReasoning(ReasoningEffort, SummaryLevel?)

Configures the reasoning settings for the task. For Anthropic models, use 'SetReasoning(TokenCount budgetTokens)' instead.

public TSelf SetReasoning(ReasoningEffort effort, ReasoningOptions.SummaryLevel? summaryLevel = null)

Parameters

effort ReasoningEffort
summaryLevel ReasoningOptions.SummaryLevel?

Returns

TSelf
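
A sketch for effort-based providers; the enum members shown (High, Detailed) are assumptions about ReasoningEffort and ReasoningOptions.SummaryLevel:

request.SetReasoning(ReasoningEffort.High, ReasoningOptions.SummaryLevel.Detailed);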

SetReasoning(ReasoningFormat)

GroqCloud-specific parameter to set the reasoning output format.

public TSelf SetReasoning(ReasoningFormat format)

Parameters

format ReasoningFormat

Specifies how to output reasoning tokens.

Returns

TSelf

SetReasoning(ReasoningOptions)

Configures the reasoning settings for the task.

public TSelf SetReasoning(ReasoningOptions reasoningOptions)

Parameters

reasoningOptions ReasoningOptions

Returns

TSelf

SetReasoning(TokenCount)

Anthropic-specific parameter to set the reasoning budget in tokens.

public TSelf SetReasoning(TokenCount budgetTokens)

Parameters

budgetTokens TokenCount

Required. Determines how many tokens Claude can use for its internal reasoning process. Larger budgets can enable more thorough analysis for complex problems, improving response quality.

Returns

TSelf
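
A sketch for Claude-family models, assuming TokenCount converts implicitly from an int:

// Allow up to 8,192 tokens of internal reasoning before the visible answer.
request.SetReasoning(8192);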

SetSafetySettings(List<SafetySetting>)

If set, the model will apply the specified safety settings to filter or moderate the generated content.

public TSelf SetSafetySettings(List<SafetySetting> settings)

Parameters

settings List<SafetySetting>

Returns

TSelf

SetStartingMessage(string)

Sets the starting message or initial prompt for the task. This can be used to provide context or a starting point for the model.

public TSelf SetStartingMessage(string startingMessage)

Parameters

startingMessage string

Returns

TSelf
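
Both SetInstructions and SetStartingMessage take plain strings, so a sketch needs no assumptions beyond the placeholder request variable:

request
    .SetInstructions("You are a concise assistant. Answer in one sentence.")
    .SetStartingMessage("In short:"); // the model continues from this opener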

SetTemperature(Temperature)

Sets the sampling temperature parameter.

public TSelf SetTemperature(Temperature temperature)

Parameters

temperature Temperature

Returns

TSelf

SetToolChoice(ToolChoice)

Sets the tool choice strategy for the model.

public TSelf SetToolChoice(ToolChoice toolChoice)

Parameters

toolChoice ToolChoice

Returns

TSelf

SetTools(params Tool[])

Sets the tools available for the model to use during generation.

public TSelf SetTools(params Tool[] tools)

Parameters

tools Tool[]

Returns

TSelf

SetTools(IEnumerable<Tool>)

Sets the tools available for the model to use during generation.

public TSelf SetTools(IEnumerable<Tool> tools)

Parameters

tools IEnumerable<Tool>

Returns

TSelf
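
A sketch of wiring tools, with the caveat that the Tool constructor and the ToolChoice.Auto member shown are assumptions; consult the Tool and ToolChoice references for the real factories:

var weather = new Tool("get_weather", "Returns current weather for a city"); // assumed ctor
request
    .SetTools(weather)
    .SetToolChoice(ToolChoice.Auto); // assumed member: let the model decide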

SetTopK(TopK)

Sets the TopK parameter for sampling.

public TSelf SetTopK(TopK topK)

Parameters

topK TopK

Returns

TSelf

SetTopP(TopP)

Sets the nucleus sampling (TopP) parameter.

public TSelf SetTopP(TopP topP)

Parameters

topP TopP

Returns

TSelf
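
Putting the fluent surface together, a hedged end-to-end sketch. ChatRequest, prompt, and the commented-out dispatch call are placeholders, and the numeric arguments assume implicit conversions from primitives as noted above:

var request = new ChatRequest(prompt)
    .SetInstructions("You are a helpful assistant.")
    .SetTemperature(0.7f)
    .SetTopP(0.9f)
    .SetMaxOutputTokens(512)
    .SetFrequencyPenalty(0.5f)
    .SetPresencePenalty(0.3f);
// Dispatch with whatever terminal method the concrete request type exposes, e.g.:
// var output = await request.ExecuteAsync();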