
Class ModerationResponse

Given some input text, indicates whether the model classifies it as potentially harmful across several categories.

Related guide: https://platform.openai.com/docs/guides/moderation

POST https://api.openai.com/v1/moderations
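A minimal sketch of calling the endpoint above and mapping the JSON body onto ModerationResponse. The OPENAI_API_KEY environment variable, the Newtonsoft.Json-based deserialization, and the ModerateAsync helper name are assumptions for illustration; the library's own client may expose a dedicated moderation method instead.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class ModerationExample
{
    // Posts the input text to https://api.openai.com/v1/moderations and maps
    // the JSON response onto ModerationResponse. Assumes the class carries the
    // attributes needed for deserialization and that OPENAI_API_KEY is set.
    public static async Task<ModerationResponse> ModerateAsync(string input)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

        var payload = JsonConvert.SerializeObject(new { input });
        using var content = new StringContent(payload, Encoding.UTF8, "application/json");

        using var response = await http.PostAsync("https://api.openai.com/v1/moderations", content);
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<ModerationResponse>(json);
    }
}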

public class ModerationResponse : OpenAIObject
Inheritance
object
OpenAIObject
ModerationResponse

Properties

Results

A list of moderation objects.

public List<ModerationDetail> Results { get; set; }

Property Value

List<ModerationDetail>
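A short sketch of inspecting the results directly. It assumes response holds a ModerationResponse returned by the moderations endpoint; ModerationDetail's members are not documented in this section, so each entry is printed via its string form.

// Assumes 'response' holds a ModerationResponse returned by the moderations endpoint.
foreach (ModerationDetail detail in response.Results)
{
    // ModerationDetail's members are not documented here; print its string form.
    Console.WriteLine(detail);
}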

Methods

IsFlagged(out List<SafetyRating>)

Indicates whether any of the moderation results were flagged as potentially harmful.

public bool IsFlagged(out List<SafetyRating> results)

Parameters

results List<SafetyRating>

Returns

bool

true if any result was flagged; otherwise, false.
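A usage sketch based only on the signature shown. It assumes response holds a ModerationResponse returned by the API; SafetyRating's members are not documented in this section, so each rating is printed via its string form.

// Assumes 'response' holds a ModerationResponse returned by the moderations endpoint.
if (response.IsFlagged(out List<SafetyRating> ratings))
{
    Console.WriteLine($"Input was flagged ({ratings.Count} rating(s)):");
    foreach (var rating in ratings)
    {
        // SafetyRating's members are not documented here, so rely on its string form.
        Console.WriteLine(rating);
    }
}
else
{
    Console.WriteLine("Input passed moderation.");
}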

ToString()

Returns a string that represents the current object.

public override string ToString()

Returns

string

A string that represents the current object.