
Azure OpenAI's networking capabilities let you connect your AI models to external data and services, giving them access to vast amounts of information.
With Azure OpenAI, this connectivity runs over a secure, managed connection, eliminating manual network configuration and maintenance.
The capability rests on Azure OpenAI's integration with Azure Virtual Network, which provides secure, private connectivity between your resources.
By taking advantage of these capabilities, you open up new possibilities for your AI models, such as accessing real-time data, interacting with users, and calling external APIs.
Resource Configuration
To configure your resources for secure usage, follow all of the steps below, even if you only plan to secure some of your resources.
Start by creating a resource group to hold all the relevant resources, including your Azure OpenAI resources and storage accounts; grouping them together makes them easier to manage.
Restricting access with selected networks and IP rules is not supported, because the services' IP addresses are dynamic; use private endpoints instead.
Network Access
You can disable public network access for your Azure OpenAI resource in the Azure portal. This is a crucial step in securing the resource.
Once public access is disabled, you'll need private endpoint connections so that your client machines, such as those using Azure OpenAI Studio, can still reach the resource.
Azure OpenAI supports virtual networks (VNets) and private endpoints for managing access. For more information, consult the Azure AI services virtual networking guidance.
Disable Public Network Access
After disabling public network access on your Azure OpenAI resource, you must create private endpoint connections to it; otherwise client machines, including those using Azure OpenAI Studio, lose access.
A private endpoint places a network interface for the resource inside your virtual network, so traffic from clients in that network never traverses the public internet.
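As a sketch of what the private endpoint request looks like on the management plane, the body below builds the PUT payload for `.../privateEndpoints/{name}`. All resource IDs and names are hypothetical placeholders, and `"account"` is the group ID commonly used for Cognitive Services resources such as Azure OpenAI.

```python
def private_endpoint_body(location, subnet_id, openai_resource_id):
    """Build the management-API PUT body for a private endpoint that
    connects a subnet to an Azure OpenAI (Cognitive Services) resource."""
    return {
        "location": location,
        "properties": {
            "subnet": {"id": subnet_id},
            "privateLinkServiceConnections": [
                {
                    "name": "openai-plsc",  # hypothetical connection name
                    "properties": {
                        "privateLinkServiceId": openai_resource_id,
                        # "account" is the sub-resource (group ID) used for
                        # Cognitive Services / Azure OpenAI private links.
                        "groupIds": ["account"],
                    },
                }
            ],
        },
    }

body = private_endpoint_body(
    "eastus",
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network"
    "/virtualNetworks/<vnet>/subnets/<subnet>",
    "/subscriptions/<sub>/resourceGroups/<rg>/providers"
    "/Microsoft.CognitiveServices/accounts/<openai>",
)
```

The same endpoint can of course be created in the portal or with the Azure CLI; the payload just makes explicit which pieces (subnet, target resource, group ID) are being wired together.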
You can create a shared private link from your Azure AI Search resource to your Azure OpenAI resource, but this is only supported for search resources on the S2 pricing tier.
Because the Azure OpenAI resource now has no public network access, you need to let Azure AI Search bypass the restriction as a trusted service, based on managed identity.
You can do this by setting networkAcls.bypass to AzureServices through the management API.
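As a minimal sketch, the PATCH body for the trusted-service bypass looks like the following; it would be sent to the account's management-API URL (the api-version in the comment is illustrative):

```python
# PATCH https://management.azure.com/subscriptions/<sub>/resourceGroups/<rg>/
#       providers/Microsoft.CognitiveServices/accounts/<name>?api-version=2023-05-01

def trusted_service_patch():
    """Body that keeps public access denied while letting trusted
    Azure services (such as Azure AI Search) through."""
    return {
        "properties": {
            "networkAcls": {
                "defaultAction": "Deny",    # public traffic stays blocked
                "bypass": "AzureServices",  # trusted Azure services allowed
            }
        }
    }
```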
Azure AI Search must use system-assigned managed identity authentication to call the custom skill web API.
You can verify the traffic from your Azure AI Search by checking the claims in the JSON Web Token (JWT).
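A JWT's claims are just a base64url-encoded JSON segment, so a minimal decoder is enough to inspect them. The token below is synthetic, and comparing the `oid` claim against the search service's managed-identity principal ID is an assumption about which claim you would check:

```python
import base64
import json

def jwt_claims(token):
    """Decode the (unverified) payload segment of a JWT into a claims dict."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def is_from_identity(token, expected_oid):
    """Hypothetical check: does the token's oid match a known principal ID?"""
    return jwt_claims(token).get("oid") == expected_oid

# Build a synthetic token purely for illustration (no real signature).
header = base64.urlsafe_b64encode(b'{"alg":"RS256","typ":"JWT"}').rstrip(b"=")
claims = base64.urlsafe_b64encode(
    json.dumps({"oid": "11111111-2222-3333-4444-555555555555"}).encode()
).rstrip(b"=")
token = (header + b"." + claims + b".sig").decode()
```

In production you would validate the signature with a JWT library rather than trusting a decoded payload; this sketch only shows where the claims live.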
Capabilities Comparison
Azure OpenAI Service gives customers advanced language AI with OpenAI's GPT-3, Codex, and DALL-E models, backed by the security and enterprise promise of Azure.
Microsoft codevelops the APIs with OpenAI, ensuring compatibility and a smooth transition between the two services.
You can access the resource via the Azure AI Foundry portal.
The models have no reliable knowledge of themselves, so don't expect one to tell you its own knowledge cutoff.
To check a model's knowledge cutoff, consult the models page, which carries the authoritative information.
With Azure OpenAI, customers get the same models offered by OpenAI, with the added security and compliance of Microsoft Azure.
Accessing My Data
Azure OpenAI customers can use Azure OpenAI on your data through the Azure AI Foundry portal or through the REST API.
To get started, you need an Azure OpenAI resource; with one in place, you can open the Azure AI Foundry portal and begin using Azure OpenAI on your data.
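When calling the REST API directly, the chat-completions body carries a data-source block pointing at your search index. This is a sketch of that body's shape; field names follow recent API versions and may differ in older ones, and the managed-identity authentication choice is an assumption (key-based auth is also possible):

```python
def on_your_data_body(question, search_endpoint, index_name):
    """Build a chat-completions request body that wires an Azure AI Search
    index into the response as the grounding data source."""
    return {
        "messages": [{"role": "user", "content": question}],
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": search_endpoint,   # e.g. https://<search>.search.windows.net
                    "index_name": index_name,
                    # Assumption: authenticate via the resource's identity.
                    "authentication": {"type": "system_assigned_managed_identity"},
                },
            }
        ],
    }

body = on_your_data_body(
    "What does the onboarding guide say about VPN access?",
    "https://<search>.search.windows.net",
    "<index>",
)
```

The body would be POSTed to `{endpoint}/openai/deployments/{deployment}/chat/completions` with an `api-version` query parameter appropriate to your resource.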
Configure
To configure Azure OpenAI for secure usage, you need to create a resource group to organize all relevant resources. This group should include, but is not limited to, Azure OpenAI resources, Azure AI search resources, and storage accounts.
Use the resource configuration sections to set up network settings, specifically disabling public network access for these resources. You'll also need to configure gateway and client settings to access the Azure OpenAI Service from your on-premises client machines.
To do this, create a virtual network gateway for your virtual network, and add point-to-site configuration with Microsoft Entra ID-based authentication. Download the Azure VPN Client profile configuration package, unzip it, and import the AzureVPN/azurevpnconfig.xml file to your Azure VPN client.
Configure your local machine's hosts file to point resource hostnames to their private IPs in your virtual network. On Windows, this file is located at C:\Windows\System32\drivers\etc\hosts, and on Linux it's at /etc/hosts.
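The hosts-file entries themselves are just IP-then-hostname lines. A small helper can render them; the hostnames and private IPs below are hypothetical examples:

```python
def hosts_entries(mapping):
    """Render hosts-file lines mapping resource hostnames to private IPs."""
    return "\n".join(f"{ip}\t{host}" for host, ip in mapping.items())

# Hypothetical resource hostnames mapped to private-endpoint IPs.
entries = hosts_entries({
    "myopenai.openai.azure.com": "10.0.0.4",
    "mysearch.search.windows.net": "10.0.0.5",
})
print(entries)
```

Appending the rendered lines to the hosts file makes the clients resolve those hostnames to the private endpoints instead of the public addresses.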
GPT-4 and Models
To find out which models are available, you should consult the Azure OpenAI model availability guide.
Consulting this guide will give you a clear understanding of the models you can use with Azure OpenAI.
Where Is GPT-4 Turbo Preview?
GPT-4 Turbo Preview is the gpt-4 (1106-preview) model, which can be deployed by selecting model gpt-4 under Deployments and choosing model version 1106-preview.
To confirm which model is currently associated with a given deployment name, check the model name column on the Azure AI Foundry Management page, under Management > Deployments.
The model itself has no native ability to report which version is answering your question, so this column is the place to confirm.
GPT-4 Turbo Preview is available in certain regions, which you can check on the models page.
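The same deployment-to-model mapping can be read programmatically from the management API's deployment list (`GET .../accounts/<name>/deployments`). This sketch assumes the response shape of that API; the deployment name and sample JSON are hypothetical:

```python
def model_for_deployment(deployments_json, deployment_name):
    """Return (model name, model version) for a deployment, or None.

    Expects the JSON shape returned by the Cognitive Services
    management API's deployment-list operation."""
    for dep in deployments_json.get("value", []):
        if dep.get("name") == deployment_name:
            model = dep["properties"]["model"]
            return model.get("name"), model.get("version")
    return None

# Hypothetical sample of a deployment-list response.
sample = {
    "value": [
        {
            "name": "my-gpt4-turbo",
            "properties": {"model": {"name": "gpt-4", "version": "1106-preview"}},
        }
    ]
}
```

Given that sample, looking up `"my-gpt4-turbo"` yields the `gpt-4` model at version `1106-preview`, matching what the portal's model name column shows.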
Available Models
If you're looking to explore the various models available, it's best to consult the Azure OpenAI model availability guide.
You can find the most up-to-date information on available models there.
Microsoft Launches ChatGPT Service
Microsoft uses its Azure OpenAI service to power GitHub Copilot, a $10-per-month service that suggests lines of code to developers inside their code editor.
Microsoft's Azure OpenAI service is also used in the Power BI app to generate formulae and expressions using GPT-3 natural language models.
The upcoming Microsoft Designer app will utilize DALL-E 2 to generate art from text prompts, showcasing the potential of AI in creative endeavors.
Data Usage Costs
Using Azure OpenAI on your data incurs costs for the underlying services: Azure AI Search, Azure Blob Storage, Azure Web App Service, semantic search, and the OpenAI models themselves.
The "your data" feature itself, however, adds no extra charge in the Azure AI Foundry portal.
Sources
- https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/use-your-data-securely
- https://learn.microsoft.com/en-us/azure/ai-services/openai/faq
- https://www.proserveit.com/blog/introduction-to-microsoft-new-azure-openai-service
- https://www.wiz.io/blog/wiz-ai-spm-extends-support-to-azure-openai-service-models
- https://www.theverge.com/2023/1/17/23558530/microsoft-azure-openai-chatgpt-service-launch