Class LanguageModelRequest<TSelf, TInput, TOutput>
Base class for text generation tasks using LLM models. Supports instructions, role-based prompts, and attachments.
public abstract class LanguageModelRequest<TSelf, TInput, TOutput> : GenerativeRequest<TSelf, TInput, TOutput, string, INoopStreamEvent<TOutput>>, ILanguageModelRequest, IGenerativeRequest, ISequentialRequest where TSelf : LanguageModelRequest<TSelf, TInput, TOutput> where TInput : IPrompt where TOutput : IGeneratedOutput
Type Parameters
- TSelf
- TInput
- TOutput
- Inheritance
  - object
  - FluentApiRequest<TSelf, TOutput>
  - LanguageModelRequest<TSelf, TInput, TOutput>
- Implements
  - ILanguageModelRequest
  - IGenerativeRequest
  - ISequentialRequest
Constructors
LanguageModelRequest()
protected LanguageModelRequest()
LanguageModelRequest(TInput)
protected LanguageModelRequest(TInput prompt)
Parameters
- prompt (TInput)
Properties
ContainerId
Anthropic-specific container ID for request grouping.
public string ContainerId { get; set; }
Property Value
- string
FrequencyPenalty
Penalizes frequent repetition of the same token sequence. Range: -2.0 to 2.0 (typical: 0 to 1.0).
public FrequencyPenalty FrequencyPenalty { get; set; }
Property Value
- FrequencyPenalty
Instructions
Optional. A system (or developer) message inserted into the model's context.
public string Instructions { get; set; }
Property Value
- string
LogitBias
Biases specific tokens by ID. Use to influence token selection. Key = token ID (as string), Value = bias (-100 to 100, 0 = no bias).
public Dictionary<string, double> LogitBias { get; set; }
Property Value
- Dictionary<string, double>
Logprobs
Whether to return log probabilities of the output tokens. If true, the log probabilities of each output token are returned in the message content. This option is currently not available on the gpt-4-vision-preview model. Defaults to false.
public Logprobs Logprobs { get; set; }
Property Value
- Logprobs
MaxOutputTokens
Optional. An upper bound for the number of tokens that can be generated for a response.
public TokenCount MaxOutputTokens { get; set; }
Property Value
- TokenCount
ModelType
Gets the model type targeted by this request.
public override ModelType ModelType { get; }
Property Value
- ModelType
ParallelToolCalls
Optional. Defaults to true. Whether to allow the model to run tool calls in parallel.
public bool? ParallelToolCalls { get; set; }
Property Value
- bool?
PresencePenalty
Penalizes tokens already present in the generated content. Range: -2.0 to 2.0 (typical: 0 to 1.0).
public PresencePenalty PresencePenalty { get; set; }
Property Value
- PresencePenalty
ReasoningOptions
Optional. Configuration options for reasoning models.
public ReasoningOptions ReasoningOptions { get; set; }
Property Value
- ReasoningOptions
ResponseFormat
Defines how the response should be formatted (e.g., text, JSON).
public virtual ResponseFormat ResponseFormat { get; set; }
Property Value
- ResponseFormat
SafetySettings
Safety filters that define moderation thresholds for the model output.
public List<SafetySetting> SafetySettings { get; set; }
Property Value
- List<SafetySetting>
StartingMessage
Optional. A starting message that seeds the model's response; the model continues from this text.
public string StartingMessage { get; set; }
Property Value
- string
StreamOptions
Optional. Defaults to null. Options for streaming responses. Set this only when stream is true.
public StreamOptions StreamOptions { get; set; }
Property Value
- StreamOptions
Temperature
Optional. Defaults to 1. The sampling temperature to use, between 0 and 2.
public Temperature Temperature { get; set; }
Property Value
- Temperature
ToolChoice
Optional. How the model should select which tool (or tools) to use when generating a response.
public ToolChoice ToolChoice { get; set; }
Property Value
- ToolChoice
Tools
Optional. An array of tools the model may call while generating a response.
public List<Tool> Tools { get; set; }
Property Value
- List<Tool>
TopK
Samples from the topK tokens with the highest probabilities. Range: [1, 1000]. Defaults to 40.
public TopK TopK { get; set; }
Property Value
- TopK
TopLogprobs
Optional. An integer between 0 and 20 specifying the number of most likely tokens to return.
public Logprobs TopLogprobs { get; set; }
Property Value
- Logprobs
TopP
Optional. Defaults to 1. The nucleus sampling parameter.
public TopP TopP { get; set; }
Property Value
- TopP
Methods
AddMessage(params Message[])
Adds context messages for chat-based models.
public TSelf AddMessage(params Message[] messages)
Parameters
- messages (Message[])
Returns
- TSelf
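For chat-style models, prior conversation turns are supplied as messages. A minimal sketch, assuming a hypothetical concrete subclass `ChatRequest` and `Message` factory methods such as `Message.User(...)` and `Message.Assistant(...)` (the actual names may differ in the library):

```csharp
// ChatRequest and the Message factories below are illustrative assumptions.
var request = new ChatRequest("And what is its population?")
    .AddMessage(
        Message.User("What is the capital of France?"),
        Message.Assistant("The capital of France is Paris."));
```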
AddMessageRange(IEnumerable<Message>)
Adds a range of context messages for chat-based models.
public abstract TSelf AddMessageRange(IEnumerable<Message> messages)
Parameters
- messages (IEnumerable<Message>)
Returns
- TSelf
GetMessages()
Gets the messages currently attached to the request.
public abstract List<Message> GetMessages()
Returns
- List<Message>
SetFrequencyPenalty(FrequencyPenalty)
Sets the frequency penalty parameter.
public TSelf SetFrequencyPenalty(FrequencyPenalty frequencyPenalty)
Parameters
- frequencyPenalty (FrequencyPenalty)
Returns
- TSelf
SetInstructions(string)
Sets the instructions for the task. This is a specific command or request for the model to follow.
public TSelf SetInstructions(string instructions)
Parameters
- instructions (string)
Returns
- TSelf
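Because every setter returns TSelf, configuration calls chain fluently. A minimal sketch, assuming a hypothetical concrete subclass `ChatRequest` and an implicit conversion from int to `TokenCount` (the actual construction may differ):

```csharp
// ChatRequest is an illustrative concrete subclass, not a documented type.
var request = new ChatRequest("Summarize this article in three bullet points.")
    .SetInstructions("You are a concise technical editor.")
    .SetMaxOutputTokens(256);
```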
SetLogitBias(Dictionary<string, double>)
Sets a custom token bias map.
public TSelf SetLogitBias(Dictionary<string, double> logitBias)
Parameters
- logitBias (Dictionary<string, double>)
Returns
- TSelf
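Keys are token IDs rendered as strings; values range from -100 (effectively ban a token) to 100 (effectively force it). A sketch, assuming a hypothetical concrete subclass `ChatRequest`; the token ID below is illustrative only and must be looked up with the target model's tokenizer:

```csharp
var bias = new Dictionary<string, double>
{
    // "50256" is an illustrative token ID, not a real lookup.
    ["50256"] = -100  // strongly discourage this token
};
var request = new ChatRequest("Continue the story.")
    .SetLogitBias(bias);
```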
SetLogprobs(Logprobs)
Enables returning token-level log probabilities.
public TSelf SetLogprobs(Logprobs logprobs)
Parameters
- logprobs (Logprobs)
Returns
- TSelf
SetMaxOutputTokens(TokenCount)
Sets the maximum number of tokens the model can generate in its response.
public TSelf SetMaxOutputTokens(TokenCount maxTokens)
Parameters
- maxTokens (TokenCount)
Returns
- TSelf
SetPresencePenalty(PresencePenalty)
Sets the presence penalty parameter.
public TSelf SetPresencePenalty(PresencePenalty presencePenalty)
Parameters
- presencePenalty (PresencePenalty)
Returns
- TSelf
SetReasoning(ReasoningEffort, TokenCount, bool)
OpenRouter-style reasoning configuration.
public TSelf SetReasoning(ReasoningEffort effort, TokenCount budgetTokens, bool exclude)
Parameters
- effort (ReasoningEffort)
- budgetTokens (TokenCount)
- exclude (bool)
Returns
- TSelf
SetReasoning(ReasoningEffort, SummaryLevel?)
Configures the reasoning settings for the task. For Anthropic models, use 'SetReasoning(TokenCount budgetTokens)' instead.
public TSelf SetReasoning(ReasoningEffort effort, ReasoningOptions.SummaryLevel? summaryLevel = null)
Parameters
- effort (ReasoningEffort)
- summaryLevel (ReasoningOptions.SummaryLevel?)
Returns
- TSelf
SetReasoning(ReasoningFormat)
GroqCloud-specific parameter to set the reasoning output format.
public TSelf SetReasoning(ReasoningFormat format)
Parameters
- format (ReasoningFormat): Specifies how reasoning tokens are output.
Returns
- TSelf
SetReasoning(ReasoningOptions)
Configures the reasoning settings for the task.
public TSelf SetReasoning(ReasoningOptions reasoningOptions)
Parameters
reasoningOptionsReasoningOptions
Returns
- TSelf
SetReasoning(TokenCount)
Anthropic-specific parameter to set the reasoning budget in tokens.
public TSelf SetReasoning(TokenCount budgetTokens)
Parameters
- budgetTokens (TokenCount): Required. Determines how many tokens Claude can use for its internal reasoning process. Larger budgets can enable more thorough analysis of complex problems, improving response quality.
Returns
- TSelf
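For Anthropic models, the reasoning budget is expressed directly in tokens. A sketch, assuming a hypothetical concrete subclass `ChatRequest` and an implicit conversion from int to `TokenCount` (the actual construction may differ):

```csharp
// ChatRequest is an illustrative concrete subclass.
var request = new ChatRequest("Prove that the square root of 2 is irrational.")
    .SetReasoning(budgetTokens: 8192);  // tokens reserved for internal reasoning
```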
SetSafetySettings(List<SafetySetting>)
If set, the model will apply the specified safety settings to filter or moderate the generated content.
public TSelf SetSafetySettings(List<SafetySetting> settings)
Parameters
- settings (List<SafetySetting>)
Returns
- TSelf
SetStartingMessage(string)
Sets the starting message or initial prompt for the task. This can be used to provide context or a starting point for the model.
public TSelf SetStartingMessage(string startingMessage)
Parameters
- startingMessage (string)
Returns
- TSelf
SetTemperature(Temperature)
Sets the sampling temperature parameter.
public TSelf SetTemperature(Temperature temperature)
Parameters
- temperature (Temperature)
Returns
- TSelf
SetToolChoice(ToolChoice)
Sets the tool choice strategy for the model.
public TSelf SetToolChoice(ToolChoice toolChoice)
Parameters
- toolChoice (ToolChoice)
Returns
- TSelf
SetTools(params Tool[])
Sets the tools available for the model to use during generation.
public TSelf SetTools(params Tool[] tools)
Parameters
- tools (Tool[])
Returns
- TSelf
SetTools(IEnumerable<Tool>)
Sets the tools available for the model to use during generation.
public TSelf SetTools(IEnumerable<Tool> tools)
Parameters
- tools (IEnumerable<Tool>)
Returns
- TSelf
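Tools and the selection strategy are typically configured together with SetTools and SetToolChoice. A sketch, assuming a hypothetical concrete subclass `ChatRequest`; how `Tool` and `ToolChoice` instances are actually constructed may differ in the library:

```csharp
// weatherTool and timeTool are Tool instances created elsewhere (illustrative).
var request = new ChatRequest("What's the weather in Paris right now?")
    .SetTools(weatherTool, timeTool)
    .SetToolChoice(toolChoice);  // e.g. an "auto" strategy; construction is assumed
```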
SetTopK(TopK)
Sets the TopK parameter for sampling.
public TSelf SetTopK(TopK topK)
Parameters
- topK (TopK)
Returns
- TSelf
SetTopP(TopP)
Sets the nucleus sampling (TopP) parameter.
public TSelf SetTopP(TopP topP)
Parameters
- topP (TopP)
Returns
- TSelf
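The sampling setters combine freely; as a rule of thumb, adjust Temperature or TopP but not both. A sketch, assuming a hypothetical concrete subclass `ChatRequest` and implicit conversions from numeric literals to the value types (Temperature, TopK, FrequencyPenalty), which may differ in the actual library:

```csharp
// ChatRequest is an illustrative concrete subclass.
var request = new ChatRequest("Write a haiku about autumn.")
    .SetTemperature(1.2)        // more varied output (range 0 to 2, default 1)
    .SetTopK(40)                // sample from the 40 most likely tokens
    .SetFrequencyPenalty(0.5);  // discourage verbatim repetition
```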