Class ModerationResponse
Given some input text, indicates whether the model classifies it as potentially harmful across several categories.
Related guide: https://platform.openai.com/docs/guides/moderation
POST https://api.openai.com/v1/moderations
Namespace: Glitch9.AIDevKit.OpenAI
Assembly: .dll
Syntax
public class ModerationResponse : ModelResponse
Properties
Results
A list of moderation objects.
Declaration
public List<ModerationDetail> Results { get; set; }
Property Value
Type | Description |
---|---|
List<ModerationDetail> | A list of moderation objects. |
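The following is a minimal sketch of inspecting the Results list. Only the Results property documented above is relied on; members of ModerationDetail beyond the inherited ToString(), and the namespace placement of ModerationDetail, are assumptions.

```csharp
using System;
using System.Collections.Generic;
using Glitch9.AIDevKit.OpenAI;

public static class ModerationResultsExample
{
    // Logs each moderation result returned in the response.
    // Only the Results property documented above is used.
    public static void LogResults(ModerationResponse response)
    {
        List<ModerationDetail> results = response.Results;
        if (results == null || results.Count == 0)
        {
            Console.WriteLine("No moderation results returned.");
            return;
        }

        foreach (ModerationDetail detail in results)
        {
            // ModerationDetail members are not documented here,
            // so only the inherited ToString() is relied on.
            Console.WriteLine(detail);
        }
    }
}
```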
Methods
IsFlagged(out List<SafetyRating>)
Determines whether the input was flagged as potentially harmful and, when it was, outputs the associated safety ratings.
Declaration
public bool IsFlagged(out List<SafetyRating> results)
Parameters
Type | Name | Description |
---|---|---|
List<SafetyRating> | results | When this method returns, contains the safety ratings associated with the flagged content. |
Returns
Type | Description |
---|---|
bool | true if the input was flagged as potentially harmful; otherwise, false. |
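A minimal usage sketch of this overload. The using directive for SafetyRating and the exact meaning of the out parameter are assumptions; the flagging criteria themselves are determined by the library and not restated here.

```csharp
using System;
using System.Collections.Generic;
using Glitch9.AIDevKit;        // assumed location of SafetyRating
using Glitch9.AIDevKit.OpenAI;

public static class ModerationFlagCheckExample
{
    // Branches on the IsFlagged(out List<SafetyRating>) result documented above.
    public static void HandleResponse(ModerationResponse response)
    {
        if (response.IsFlagged(out List<SafetyRating> ratings))
        {
            // The out parameter is assumed to describe why the input was flagged.
            Console.WriteLine($"Flagged: {ratings?.Count ?? 0} safety rating(s) returned.");
        }
        else
        {
            Console.WriteLine("Content passed moderation.");
        }
    }
}
```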
ToString()
Returns a string that represents the current object.
Declaration
public override string ToString()
Returns
Type | Description |
---|---|
string | A string that represents the current object. |
Overrides
object.ToString()