Class AnthropicMessageRequest
Inheritance
RESTRequestBody → AnthropicMessageRequest
Namespace: Glitch9.AIDevKit.Anthropic
Assembly: .dll
Syntax
public class AnthropicMessageRequest : RESTRequestBody
Properties
Container
Optional. Container identifier for reuse across requests.
Declaration
public string Container { get; set; }
Property Value
Type | Description |
---|---|
string |
MaxTokens
Required. The maximum number of tokens to generate before stopping. Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.
Declaration
public int MaxTokens { get; set; }
Property Value
Type | Description |
---|---|
int |
McpServers
Optional. MCP servers to be used in this request.
Declaration
public List<McpServer> McpServers { get; set; }
Property Value
Type | Description |
---|---|
List<McpServer> |
Messages
Required. Input messages.
Declaration
public List<AnthropicMessage> Messages { get; set; }
Property Value
Type | Description |
---|---|
List<AnthropicMessage> |
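A minimal construction sketch. Setting the documented properties through an object initializer is straightforward; how Model and AnthropicMessage are actually constructed is not covered on this page, so the model-id string conversion and the (role, content) constructor below are assumptions used only to illustrate the shape of a request.

```csharp
using System.Collections.Generic;
using Glitch9.AIDevKit.Anthropic;

var request = new AnthropicMessageRequest
{
    Model = "claude-sonnet-4-20250514",   // assumption: Model converts from a model-id string
    MaxTokens = 1024,                     // required: absolute cap on generated tokens
    Messages = new List<AnthropicMessage>
    {
        // assumption: AnthropicMessage offers a (role, content) constructor
        new AnthropicMessage("user", "Summarize this changelog in one paragraph.")
    }
};
```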
Metadata
Optional. An object describing metadata about the request.
Declaration
public AnthropicMetadata Metadata { get; set; }
Property Value
Type | Description |
---|---|
AnthropicMetadata |
Model
Required. The model that will complete your prompt.
Declaration
public Model Model { get; set; }
Property Value
Type | Description |
---|---|
Model |
ServiceTier
Determines whether to use priority capacity (if available) or standard capacity for this request. https://docs.anthropic.com/en/api/service-tiers
Declaration
public AnthropicTypes.ServiceTier? ServiceTier { get; set; }
Property Value
Type | Description |
---|---|
AnthropicTypes.ServiceTier? |
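Sketch of opting into a specific tier; the enum member below mirrors the API's service_tier values and is an assumption about AnthropicTypes.ServiceTier, not a confirmed member.

```csharp
// Assumption: the enum exposes members matching the API's "auto" / "standard_only" tiers.
request.ServiceTier = AnthropicTypes.ServiceTier.Auto;
```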
StopSequences
Optional. Custom text sequences that will cause the model to stop generating.
Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".
If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
Declaration
public List<string> StopSequences { get; set; }
Property Value
Type | Description |
---|---|
List<string> |
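For example, continuing the request sketch above with an illustrative custom delimiter:

```csharp
// Generation halts as soon as this sequence is produced; the response's
// stop_reason will then be "stop_sequence".
request.StopSequences = new List<string> { "###END###" };
```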
Stream
Optional. Whether to incrementally stream the response using server-sent events. https://docs.anthropic.com/en/docs/build-with-claude/streaming
Declaration
public bool? Stream { get; set; }
Property Value
Type | Description |
---|---|
bool? |
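For example:

```csharp
// Ask for server-sent events instead of a single response body;
// consuming the SSE stream is up to the caller.
request.Stream = true;
```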
System
Optional. System prompt.
A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role.
https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts
Declaration
public string System { get; set; }
Property Value
Type | Description |
---|---|
string |
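For example:

```csharp
// The system prompt applies to the whole conversation, separate from Messages.
request.System = "You are a concise assistant. Answer in one sentence.";
```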
Temperature
Optional. Amount of randomness injected into the response.
Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.
Note that even with temperature of 0.0, the results will not be fully deterministic.
Declaration
public float? Temperature { get; set; }
Property Value
Type | Description |
---|---|
float? |
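For example:

```csharp
// Closer to 0.0 for analytical / multiple-choice tasks:
request.Temperature = 0.2f;
// Closer to 1.0 for creative generation:
// request.Temperature = 0.9f;
```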
Thinking
Optional. Configuration for enabling Claude's extended thinking.
When enabled, responses include thinking content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your max_tokens limit.
See extended thinking for details.
Declaration
public ThinkingConfig Thinking { get; set; }
Property Value
Type | Description |
---|---|
ThinkingConfig |
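Sketch of enabling extended thinking. The ThinkingConfig member used below (a token-budget setter mirroring the API's budget_tokens field) is an assumption about that type, not its confirmed surface.

```csharp
// Assumption: ThinkingConfig exposes a budget in tokens (minimum 1,024,
// counted against MaxTokens).
request.Thinking = new ThinkingConfig { BudgetTokens = 2048 };
```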
ToolChoice
Optional. How the model should use the provided tools. The model can use a specific tool, any available tool, decide by itself, or not use tools at all.
Declaration
public AnthropicToolChoice ToolChoice { get; set; }
Property Value
Type | Description |
---|---|
AnthropicToolChoice |
Tools
Optional. Definitions of tools that the model may use.
If you include tools in your API request, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using tool_result content blocks.
There are two types of tools: client tools and server tools. The behavior described below applies to client tools. For server tools, see their individual documentation as each has its own behavior (e.g., the web search tool).
Each tool definition includes:
- name: Name of the tool.
- description: Optional, but strongly-recommended description of the tool.
- input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.
Declaration
public List<AnthropicTool> Tools { get; set; }
Property Value
Type | Description |
---|---|
List<AnthropicTool> |
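A hedged sketch of a client tool definition. The AnthropicTool members used below (Name, Description, InputSchema as a raw JSON schema string) are assumptions chosen to mirror the API fields above; check the AnthropicTool type for its actual shape.

```csharp
request.Tools = new List<AnthropicTool>
{
    new AnthropicTool
    {
        Name = "get_weather",                                 // assumed member
        Description = "Get the current weather for a city.",  // assumed member
        // assumed member: raw JSON schema for the tool input
        InputSchema = "{\"type\":\"object\",\"properties\":{\"city\":{\"type\":\"string\"}},\"required\":[\"city\"]}"
    }
};
```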
TopK
Optional. Only sample from the top K options for each subsequent token.
Used to remove "long tail" low probability responses.
Recommended for advanced use cases only. You usually only need to use temperature.
Declaration
public int? TopK { get; set; }
Property Value
Type | Description |
---|---|
int? |
TopP
Optional. Use nucleus sampling.
In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p.
You should either alter temperature or top_p, but not both.
Recommended for advanced use cases only. You usually only need to use temperature.
Declaration
public float? TopP { get; set; }
Property Value
Type | Description |
---|---|
float? |
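For example, nucleus sampling in place of temperature (set one or the other, not both):

```csharp
// Advanced: leave Temperature unset when using TopP.
request.TopP = 0.9f;
// TopK similarly trims the long tail of low-probability tokens:
// request.TopK = 40;
```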