In this article, learn about content safety capabilities for models from the model catalog deployed using serverless API deployments.
Content filter defaults
Azure AI uses a default configuration of Azure AI Content Safety content filters to detect harmful content in four categories: hate and fairness, self-harm, sexual, and violence, for models deployed via serverless API deployments. To learn more about content filtering, see Understand harm categories.
The default content filtering configuration for text models filters at the medium severity threshold, blocking any detected content at that level or higher. For image models, the default configuration filters at the low severity threshold, blocking content at that level or higher. For models deployed through Azure AI Foundry Models, you can create configurable filters by selecting the Content filters tab within the Guardrails & controls page of the Azure AI Foundry portal.
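The threshold behavior can be sketched as follows. This is a minimal illustration, not service code: it assumes the four-level severity scale (safe, low, medium, high) that Azure AI Content Safety assigns to detected text content, and the `is_filtered` helper is hypothetical.

```python
# Four-level severity scale used by Azure AI Content Safety for text.
SEVERITY = {"safe": 0, "low": 2, "medium": 4, "high": 6}

def is_filtered(detected_severity: int, threshold: str = "medium") -> bool:
    """Content is filtered when its detected severity meets or
    exceeds the configured threshold ("medium" is the text default)."""
    return detected_severity >= SEVERITY[threshold]
```

For example, with the default text configuration, content detected at medium severity is filtered, while content at low severity passes; lowering the threshold to low (the image default) filters low-severity content as well.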
Content filtering isn’t available for certain model types deployed via serverless API deployments, such as embedding models and time series models.
Content filtering occurs synchronously as the service processes prompts to generate content. You might be billed separately according to Azure AI Content Safety pricing for such use. You can disable content filtering for individual serverless endpoints either:
- When you first deploy a language model
- Later, by selecting the content filtering toggle on the deployment details page
If you use an API other than the Model Inference API to work with a model deployed via a serverless API deployment, content filtering (preview) isn’t enabled unless you implement it separately by using Azure AI Content Safety. To get started with Azure AI Content Safety, see Quickstart: Analyze text content. Without content filtering (preview), you run a higher risk of exposing users to harmful content when working with models deployed via serverless API deployments.
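A sketch of implementing that check yourself, following the Quickstart: Analyze text content pattern. The `moderate` helper and the `CONTENT_SAFETY_ENDPOINT`/`CONTENT_SAFETY_KEY` environment variable names are assumptions for illustration; the SDK calls are from the `azure-ai-contentsafety` Python package.

```python
import os

def moderate(text: str, threshold: int = 4) -> bool:
    """Return True if `text` should be blocked.

    Analyzes the text with Azure AI Content Safety when credentials are
    configured; the environment variable names are assumptions for this
    sketch. A severity at or above `threshold` (4 = medium) blocks.
    """
    endpoint = os.environ.get("CONTENT_SAFETY_ENDPOINT")
    key = os.environ.get("CONTENT_SAFETY_KEY")
    if not (endpoint and key):
        # No content filtering configured: nothing is blocked, which is
        # exactly the higher-risk situation described above.
        return False

    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    # Block if any harm category meets or exceeds the threshold.
    return any(item.severity >= threshold
               for item in result.categories_analysis)
```

You would call `moderate` on each prompt before sending it to the serverless endpoint, and again on each completion before returning it to the user.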
How charges are calculated
For pricing details, see Azure AI Content Safety pricing. Charges are incurred when Azure AI Content Safety validates the prompt or completion. If Azure AI Content Safety blocks the prompt or completion, you’re charged for both the evaluation of the content and the inference calls.
Related content