Azure OpenAI Content Filtering for Secure Business Operations

Azure OpenAI Content Filtering is a game-changer for businesses looking to maintain secure operations. The service runs AI classification models over both prompts and generated responses, filtering out harmful or sensitive content before it reaches users and reducing the risk of data breaches and intellectual property theft.

By integrating OpenAI's advanced natural language processing capabilities with Azure's robust security features, businesses can detect and prevent malicious content from spreading. This proactive approach helps protect sensitive information and maintains a secure work environment.

Azure OpenAI Content Filtering can also help businesses comply with regulatory requirements and industry standards. For example, it can help detect and flag content that may be in violation of GDPR or HIPAA regulations.

Prerequisites

To get started with Azure OpenAI content filtering, you'll need to have a few things in place.

You must have an Azure OpenAI resource and a large language model (LLM) deployment to configure content filters. If you don't have these yet, one of the Azure OpenAI quickstarts will walk you through creating them.

You'll also need an Azure subscription, which you can create for free.

To set up your Azure OpenAI resource, create it in the Azure portal: enter a unique name for the resource, then select your subscription, a resource group, a supported region, and a supported pricing tier.

You'll also need to have Azure CLI and cURL installed on your machine.
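
If you prefer the command line, the resource and model deployment can also be created with the Azure CLI. This is a minimal sketch; the resource name, resource group, region, model, version, and capacity below are placeholders to substitute with your own values.

```bash
# Create an Azure OpenAI resource (name, resource group, and region are placeholders)
az cognitiveservices account create \
  --name my-openai-resource \
  --resource-group my-resource-group \
  --kind OpenAI \
  --sku S0 \
  --location eastus

# Deploy a model to that resource (model name, version, and capacity are examples)
az cognitiveservices account deployment create \
  --name my-openai-resource \
  --resource-group my-resource-group \
  --deployment-name gpt-4o \
  --model-name gpt-4o \
  --model-version "2024-05-13" \
  --model-format OpenAI \
  --sku-name "Standard" \
  --sku-capacity 1
```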

Configuring Content Filters

Configuring content filters is a crucial step in ensuring that your Azure OpenAI model generates safe and responsible content. You can configure content filters for both input prompts and output completions, and adjust the severity threshold to filter content at different levels.

To create a content filter, go to Azure AI Foundry and navigate to your project, then select the Safety + security page and the Content filters tab. From there, select + Create content filter and configure the input filters (for user prompts) and output filters (for model completions).

Content filters can be configured at the hub level in the Azure AI Foundry portal, and can be associated with one or more deployments. You can also create a blocklist as a filter, enabling the Blocklist option on the Input filter and/or Output filter page and selecting one or more blocklists from the dropdown.
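
Filter configurations can also be managed outside the portal as RAI policies on the Cognitive Services management API. The following is only a sketch under assumptions: the api-version, the property names (basePolicyName, mode, contentFilters, severityThreshold, source), and the category names are taken from that management API as commonly documented and should be verified against the current reference before use.

```bash
# Sketch: create a custom content filter (RAI policy) via the management REST API.
# The {placeholders}, api-version, and property names below are assumptions to verify.
TOKEN=$(az account get-access-token --query accessToken --output tsv)

curl --location --request PUT \
  "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiPolicies/MyCustomFilter?api-version=2024-10-01" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "properties": {
      "basePolicyName": "Microsoft.Default",
      "mode": "Default",
      "contentFilters": [
        { "name": "Violence", "blocking": true, "enabled": true, "severityThreshold": "Medium", "source": "Prompt" },
        { "name": "Violence", "blocking": true, "enabled": true, "severityThreshold": "Medium", "source": "Completion" }
      ]
    }
  }'
```

A deployment is then associated with the policy by name (assumed here to be a raiPolicyName property on the deployment resource), which is how a single filter configuration can be attached to one or more deployments.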

Understanding Configurability

Configuring content filters is a crucial step in ensuring that your Azure OpenAI Service models produce safe and responsible outputs. You can configure content filters for inputs (prompts) and outputs (completions) for specific Azure OpenAI models.

The default safety settings, which apply to all models except Azure OpenAI Whisper, provide a responsible baseline: content filtering models, blocklists, prompt transformation, content credentials, and other protections. You can also configure content filters and create custom safety policies tailored to your use case requirements.

There are several filter categories you can configure in addition to the default harm category filters, including Prompt Shields for direct and indirect attacks, Protected material - code and text, and Groundedness. These filters can be applied to user prompts or model completions.

The configurability feature allows you to adjust the settings separately for prompts and completions to filter content for each content category at different severity levels. You can use the sliders to set the severity threshold for each category, with options for Low, Medium, and High severity levels.

Here's a summary of the severity threshold options and what each one filters:

- Low, medium, high: the strictest setting; content detected at severity low, medium, or high is filtered.
- Medium, high: the default setting; content detected at severity medium or high is filtered, and low-severity content is returned.
- High: only content detected at severity high is filtered; low- and medium-severity content is returned.
- No filters / annotate only: nothing is blocked (annotations can still be returned); these options are limited to customers approved for modified content filtering.

Configurable content filters are available for the following Azure OpenAI models: GPT model series, GPT-4 Turbo Vision GA, GPT-4o, GPT-4o mini, DALL-E 2, and DALL-E 3. However, not all models have configurable content filters, and some may require approval to use modified content filtering.

Blocklist as Filter

A blocklist can be used as a filter to prevent unwanted content from being generated. You can apply a blocklist as either an input or output filter, or both.

To enable a blocklist filter, go to the Input filter and/or Output filter page and select the Blocklist option. You can choose from a dropdown list of available blocklists or use the built-in profanity blocklist.

Blocklists can be combined into a single filter, allowing you to create a comprehensive content filtering system.

A blocklist can contain up to 10,000 terms, and newly added terms can take around 5 minutes to take effect.

Here's a step-by-step guide to creating a custom blocklist with the REST API (a sketch of the full cURL request follows the list):

1. Replace {subscriptionId} with your subscription ID.

2. Replace {resourceGroupName} with your resource group name.

3. Replace {accountName} with your resource name.

4. Replace {raiBlocklistName} with a custom name for your list.

5. Replace {token} with the token you got from the "Get your token" step.

6. Optionally, replace the value of the "description" field with a custom description.
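
With those substitutions made, the request is a PUT to the raiBlocklists endpoint of the management API. A minimal sketch follows; the api-version shown is an assumption and may need updating to the current one.

```bash
# "Get your token" step: fetch a bearer token for the management API
TOKEN=$(az account get-access-token --query accessToken --output tsv)

# Create (or update) the blocklist; replace the {placeholders} as described above
curl --location --request PUT \
  "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiBlocklists/{raiBlocklistName}?api-version=2024-10-01" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "properties": {
      "description": "Terms that must not appear in prompts or completions."
    }
  }'
```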

Once you've created a blocklist, you can apply it to a content filter in the Azure OpenAI Studio. To do this, follow these steps:

1. Select Content Filters from the left menu.

2. Select the Blocklists tab next to Content filters tab.

3. Select Create Blocklist.

4. Create a name for your blocklist, add a description, and select Create Blocklist.

5. Select your custom blocklist and select Add new term.

6. Add a term that should be filtered, and select Add term.

7. You can also delete individual terms from your blocklist if needed.

8. Once the blocklist is ready, navigate to the Content filters (Preview) section and create a new customized content filter configuration.

9. Add the blocklist to the configuration and review the settings.

10. Finish the content filtering configuration by clicking on Next.
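
Terms can also be added over the same management API instead of the Studio. The sketch below assumes a raiBlocklistItems sub-resource with pattern and isRegex properties; the item name and the regex pattern are purely illustrative.

```bash
# Add a single term (or regex pattern) to the blocklist as a blocklist item.
# {raiBlocklistItemName} is a name you choose; the pattern and isRegex values are examples.
curl --location --request PUT \
  "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiBlocklists/{raiBlocklistName}/raiBlocklistItems/{raiBlocklistItemName}?api-version=2024-10-01" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "properties": {
      "pattern": "project-codename-.*",
      "isRegex": true
    }
  }'
```

Remember that newly added terms can take a few minutes to start being enforced.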

Content Filtering Techniques

Content filtering is a crucial aspect of Azure OpenAI, and it's essential to understand the techniques used to ensure that the generated content is safe and respectful. You can configure content filters to detect and block various types of harmful content, including violence, hate, sexual, and self-harm categories.

There are several filter categories available in addition to the default harm category filters, including Prompt Shields for direct attacks (jailbreak) and Protected material - code. These filters can be applied to user prompts or model completions.

The default content filtering configuration filters at the medium severity threshold for all four harm categories, for both prompts and completions. This means content detected at severity level medium or high is filtered, while content detected at severity level low or safe is not.
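
You can see this behavior directly when calling a deployment. The request below is a standard chat completions call (the resource name, deployment name, and api-version are placeholders); the content filter evaluates both the prompt and the generated completion.

```bash
# Call a deployment; the content filter runs on the prompt and on the completion.
curl "https://{your-resource}.openai.azure.com/openai/deployments/{your-deployment}/chat/completions?api-version=2024-02-01" \
  --header "api-key: $AZURE_OPENAI_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "messages": [
      { "role": "user", "content": "Draft a reminder about our data-handling policy." }
    ]
  }'
```

If the prompt itself is filtered, the service returns an HTTP 400 error with the code content_filter; if the completion is filtered, the affected choice comes back with finish_reason set to content_filter. Successful responses also carry content filter annotations (a filtered flag and a severity per category) for both the prompt and the completion.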

You can also create custom blocklists to filter specific terms or phrases. Blocklists can be created in the Azure OpenAI Studio or using the Azure OpenAI API. Each blocklist can contain up to 10,000 terms.

Here are the different types of blocklists available:

- Custom blocklists: lists of exact terms or regex patterns that you create and manage yourself.
- Built-in profanity blocklist: a prebuilt, service-managed list of profane terms that you can enable without creating anything.

To create a custom blocklist, you can follow these steps: create a name for your blocklist, add a description, and then add specific terms or phrases to filter. You can also create a regex to filter more complex patterns.

Once you've created your custom blocklist, you can apply it to a content filtering configuration in the Azure OpenAI Studio. This will help ensure that the generated content is safe and respectful for your users.

Frequently Asked Questions

How to disable content filtering in Azure OpenAI?

To disable content filtering in Azure OpenAI, submit the Content Filter Control Form (Azure OpenAI Limited Access Review: Modified Content Filtering). Modified content filtering, including turning filters off, is only available to approved customers.

How is harmful content detected in Azure OpenAI service?

Harmful content in Azure OpenAI Service is detected through an ensemble of classification models that analyze both input prompts and AI-generated content. This robust system helps prevent misuse and ensures a safer experience for users
