Represents whether a given text input is potentially harmful.
Name | Type | Description | Notes |
---|---|---|---|
id | String | The unique identifier for the moderation request. | |
model | String | The model used to generate the moderation results. | |
results | List&lt;CreateModerationResponseResultsInner&gt; | A list of moderation objects. | |
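For illustration, the sketch below reads each of these fields from a response object. It assumes the generated class is named `CreateModerationResponse` and exposes conventional getters (`getId`, `getModel`, `getResults`); the actual accessor names may differ in your SDK.

```java
import java.util.List;

public class ModerationResponseExample {

    // Prints a short summary of a moderation response, assuming standard
    // generated getters on CreateModerationResponse (hypothetical names).
    static void printSummary(CreateModerationResponse response) {
        // The unique identifier for the moderation request
        String id = response.getId();
        // The model used to generate the moderation results
        String model = response.getModel();
        // A list of moderation objects
        List<CreateModerationResponseResultsInner> results = response.getResults();

        System.out.printf("Moderation %s (model: %s) returned %d result(s)%n",
                id, model, results.size());
    }
}
```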