Azure AI Studio is a powerful tool that allows you to build, train, and deploy AI models without extensive coding knowledge.
To get started, you'll need to create an Azure account and set up an Azure AI Studio workspace. This involves selecting the region where you want your workspace to be located and assigning a name to your workspace.
Once you've set up your workspace, you can start building your AI model by selecting a pre-built template or starting from scratch. You can choose from a variety of templates, including text analysis, image classification, and chatbots.
In the next steps, we'll walk you through the process of setting up your Azure AI Studio workspace, building your AI model, and deploying it to production.
Prerequisites
To get started with Azure AI Studio, you'll need to meet some basic prerequisites.
First, you'll need to read the guide on when to use Azure OpenAI fine-tuning.
You'll also need an Azure subscription, which you can create for free.
Make sure your Azure OpenAI resource is located in a region that supports fine-tuning of the Azure OpenAI model. You can check the Model summary table and region availability to see which regions are supported for your model.
You'll need Cognitive Services OpenAI Contributor access for fine-tuning. If you don't already have access to view quota and deploy models in Azure AI Studio, you'll need additional permissions.
Here are the specific requirements for getting started with Azure AI Studio:
- Read the When to use Azure OpenAI fine-tuning guide.
- Have an Azure subscription.
- Have an Azure OpenAI resource located in a supported region.
- Have Cognitive Services OpenAI Contributor access.
- If you can't already view quota and deploy models in Azure AI Studio, obtain the additional permissions required.
Additionally, you might need to upgrade your OpenAI client library to the latest version with pip install openai --upgrade.
Azure AI Studio Setup
To get started with Azure AI Studio, first navigate to the Azure OpenAI Studio at https://oai.azure.com/. Sign in with credentials that have access to your Azure OpenAI resource.
You'll need to select the appropriate directory, Azure subscription, and Azure OpenAI resource during or after the sign-in workflow. This will give you access to the studio's features and tools.
From the Azure OpenAI Studio landing page, you can explore examples for prompt completion, manage your deployments and models, and find learning resources such as documentation and community forums.
Access OpenAI Studio
To access OpenAI Studio, head to https://oai.azure.com/, the official URL for the platform.
Sign in with credentials that have access to your Azure OpenAI resource; the site will guide you through the sign-in workflow.
Select the appropriate directory, Azure subscription, and Azure OpenAI resource during or after sign-in to ensure you're accessing the right resources.
From the Azure OpenAI Studio landing page, you can navigate further to explore examples for prompt completion and manage your deployments and models.
You'll also find learning resources such as documentation and community forums to help you get started and troubleshoot any issues.
The Chat Playground
The Chat Playground is a fantastic tool for configuring and training an AI chatbot. It has three panels: one for setting up the assistant, one for the chat itself, and one for configuration and parameters.
In the assistant setup panel, you can set up the system message, which instructs the behavior of the chatbot. You can choose from several built-in templates using a dropdown, like "Marketing Writing Assistant."
The system message counts against the 4000 token limit, so be mindful of that when selecting a template. You can also modify the built-in system messages before using them, but keep in mind that this will reset the chat window.
You can use the chatbot by typing questions in the textbox at the bottom of the chat session panel and clicking the send button. The model will respond based on the information provided in the system message.
In the configuration panel, there is a "Parameters" tab that holds parameters similar to what we saw in the Completions Playground. The main difference is that it has a "Max Response" setting instead of "Max length", which limits the number of tokens used for any one response in the chat.
Project Creation
Creating a project in Azure AI Studio is a straightforward process. You can start by creating a new project in the Azure AI Hub, which provides a collaborative environment and enterprise-grade security.
To create a project, open the Create a new project wizard, configure the required settings, and wait for provisioning to finish. In the example shown here, the AI project is named Genesis.
After creating your project, you can navigate to the Tools section and select the Playground page to start exploring the features and capabilities of Azure AI Projects.
Create Your Project
You can create a project in Azure AI Studio to build custom AI apps while keeping your work organized and secure. The Azure AI hub resource provides a collaborative environment and enterprise-grade security.
To create a project, navigate to the Create a new project wizard and set up your project with the following settings: project name, description, and tags. Once you've created your project, you can start building your AI app.
Projects act as organizational containers in Azure AI Studio, giving you a structured workflow that keeps your work organized and makes it easier to collaborate with others.
Here are the key steps to create a project in Azure AI Studio:
- Create a new project in the Create a new project wizard
- Set up your project with the required settings
- Wait for your project to be created
- You can then start building your AI app using the Azure AI Studio tools and resources.
Note: cleaning up unused resources is essential to avoid unnecessary costs.
Create .NET Core Application
To create a .NET Core application, start by opening a console window, such as cmd, PowerShell, or Bash.
Run dotnet new console -o azure-openai-quickstart to create a new console app named azure-openai-quickstart. This command creates a simple "Hello World" project with a single C# source file: Program.cs.
Change into the newly created app folder with cd azure-openai-quickstart, then build the application with dotnet build. The build output should contain no warnings or errors.
You'll need an Azure subscription to create and deploy your .NET Core application. You can create one for free.
To get started, you'll also need the current version of the .NET SDK installed, which provides the dotnet command-line tool used above.
You can run these commands from any of the following console environments:
- PowerShell
- Command Prompt (cmd)
- Bash
Project Configuration
In Azure AI Studio, you can configure your project parameters to fine-tune your AI model. This involves setting values for batch size, learning rate multiplier, number of epochs, and seed.
The batch size is the number of training examples used for a single forward and backward pass. You can set it to a specific value, or set it to -1 to have it calculated automatically as 0.2% of the number of examples in the training set, capped at 256.
The learning rate multiplier affects the fine-tuning learning rate, which is the original learning rate used for pre-training multiplied by this value. A larger learning rate tends to perform better with larger batch sizes, but may lead to overfitting.
The parameters available for a fine-tuning job are:
- Batch size
- Learning rate multiplier
- Number of epochs
- Seed
You can choose to leave the default configuration or customize the values to your preference. After you finish making your configurations, select Next.
Environment Variables
When working with project configurations, it's essential to create and assign persistent environment variables for your key and endpoint. This allows you to store sensitive information securely.
Always store your API key in a secure location, such as Azure Key Vault, to prevent it from being exposed directly in your code or publicly online.
The Completions Playground
The Completions Playground is a fantastic tool for exploring what your deployment can do with single-prompt completions, and it's surprisingly easy to use.
To get started, select the deployment you made earlier. If it isn't already selected, just pick it from the dropdown menu.
Unlike the Chat Playground, there is no assistant setup panel or system message here: you type a prompt directly into the text box, select Generate, and the model produces a continuation of your prompt.
The Parameters pane works much like the one described for the Chat Playground, except that it uses a "Max length (tokens)" setting rather than "Max Response" to cap the number of tokens generated for each completion. Adjusting the temperature and max length is the simplest way to tune the style and length of the output.
If you want to try text summarization, you can use the Completions playground. Just sign in to Azure OpenAI Studio, select your subscription and OpenAI resource, and select the Completions playground. Then, enter a prompt for the model and select Generate – and Azure OpenAI will do the rest.
Select the Base Model
Selecting the base model is a crucial step in creating a custom model. You have six options to choose from: babbage-002, davinci-002, gpt-35-turbo (0613), gpt-35-turbo (1106), gpt-35-turbo (0125), and gpt-4 (0613).
Your choice of base model affects both the performance and the cost of your custom model, because each base model is designed to handle different tasks and has its own strengths and weaknesses.
To choose a base model, select it from the Base model type dropdown. Then, click Next to continue the process.
Here are the available base models you can choose from:
- babbage-002
- davinci-002
- gpt-35-turbo (0613)
- gpt-35-turbo (1106)
- gpt-35-turbo (0125)
- gpt-4 (0613)
- Or you can fine-tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
For more information about fine-tuning a previously fine-tuned model, see the Models section.
Configure Task Parameters
Configuring task parameters is a crucial step in fine-tuning your model. You can choose to use the default values or customize them to your preference.
The default values are determined algorithmically based on your training data, so you can select "Default" to use them. Alternatively, you can select "Custom" to display and edit the hyperparameter values.
The batch size is the number of training examples used for a single forward and backward pass. In general, larger batch sizes tend to work better for larger datasets; a larger batch size means that model parameters are updated less frequently, but with lower variance.
Batch size can be set to a specific value, or you can choose to have it calculated automatically. If you set it to -1, the batch size will be calculated as 0.2% of examples in the training set, with a maximum of 256.
The learning rate multiplier is used to determine the fine-tuning learning rate. It's recommended to experiment with values in the range 0.02 to 0.2 to see what produces the best results. Larger learning rates tend to perform better with larger batch sizes.
The number of epochs to train the model for is also a crucial parameter. You can set it to a specific value, or choose to have it determined dynamically based on the input data. If you set it to -1, the number of epochs will be determined dynamically.
Here's a summary of the task parameters:
- Batch size: examples per forward and backward pass; -1 calculates it automatically (0.2% of training examples, up to 256).
- Learning rate multiplier: scales the pre-training learning rate; values between 0.02 and 0.2 are a good starting range.
- Number of epochs: full passes through the training data; -1 determines the value dynamically from the input data.
- Seed: controls reproducibility; set a specific value or let it be generated automatically.
The seed can be used to ensure reproducibility of the job, and can be set to a specific value or generated automatically.
Choose Your Validation Data
When choosing your validation data, you have a few options to consider. You can either select existing prepared validation data or upload new prepared validation data to use when customizing your model.
If your validation data is already uploaded to the service, select Choose dataset. This will allow you to use the existing data for validation.
If you need to upload new validation data, it's recommended to import it from an Azure Blob store for large files. This is because large files can become unstable when uploaded through multipart forms.
To upload new validation data, you can use one of the following options:
- Import from an Azure Blob store
- Upload a new prepared validation data file
Your validation data file must be formatted as a JSONL file, encoded in UTF-8 with a byte-order mark (BOM), and less than 512 MB in size.
If you choose to upload a new validation data file, make sure it meets the file size and formatting requirements. You can also use a shared web location, such as Azure Blob, to import your validation data.
Configure Your Parameters
Configuring your parameters is a crucial step in fine-tuning your model. You can choose to leave the default configuration or customize the values to your preference.
The following parameters are available for configuration:
- batch_size: This determines the number of training examples used to train a single forward and backward pass. Larger batch sizes tend to work better for larger datasets.
- learning_rate_multiplier: This multiplies the original learning rate used for pre-training to determine the fine-tuning learning rate. Larger learning rates tend to perform better with larger batch sizes.
- n_epochs: This specifies the number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
- seed: This controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results.
You can choose to leave these parameters at their default values or customize them to suit your needs. If you set batch_size or n_epochs to -1, the service will calculate the value dynamically based on your input data.
After you finish making your configurations, select Next to proceed with training your model.
Model and Deployment
Azure AI Studio provides a comprehensive platform for designing, evaluating, and implementing generative AI solutions. You can develop unique copilots and generative AI models using the platform.
Azure AI Studio offers a variety of deployment options, including large language models (LLMs), flows, and web applications, suitable for production environments such as websites and applications. You can host models on servers or in the cloud, with APIs or other interfaces provided for user interaction.
To deploy a model, navigate to the Components section in the left pane of your project interface and select the Deployments page. Click on + Create to initiate the creation of a real-time endpoint deployment. You can choose from a list of available models, including gpt-35-turbo, davinci-002, and babbage-002.
Here's a list of models that support fine-tuning:
- babbage-002
- davinci-002
- gpt-35-turbo (0613)
- gpt-35-turbo (1106)
- gpt-35-turbo (0125)
- gpt-4 (0613)
- gpt-4o (2024-08-06)
- gpt-4o-mini (2024-07-18)
You can also fine-tune a previously fine-tuned model, formatted as base-model.ft-{jobid}, or deploy a fine-tuned model to a different region than where the model was originally fine-tuned.
Retrieve API Credentials
To retrieve API credentials, you'll need to find the endpoint and key for your Azure OpenAI resource. You can find the endpoint and key in the Keys & Endpoint section when examining your resource from the Azure portal, or via the Deployments page in Azure AI Studio.
The service endpoint can be found in the Keys & Endpoint section, and an example endpoint is https://docs-test-001.openai.azure.com/. You can use either KEY1 or KEY2 as the API key.
You'll need both the endpoint and API key to authenticate your API calls. The Keys & Endpoint section can be found in the Resource Management section of your resource in the Azure portal.
Having two keys allows you to rotate and regenerate them without causing a service disruption. To authenticate your API calls, you can use either an API key or Microsoft Entra ID.
Here's a summary of the necessary credentials:
- Endpoint: shown in the Keys & Endpoint section, for example https://docs-test-001.openai.azure.com/.
- API key: KEY1 or KEY2 from the same section (or a Microsoft Entra ID token instead).
Choose Your Model
Choosing the right model for your project can be a bit overwhelming, especially with so many options available. The good news is that you have a few models to choose from, including babbage-002, davinci-002, and a range of gpt-35-turbo models.
Some of these models, like gpt-4 (0613) and gpt-4o (2024-08-06), are currently in public preview for fine-tuning. Fine-tuning allows you to customize a model to fit your specific needs, which can be a game-changer for many projects.
You can also fine-tune a previously fine-tuned model, formatted as base-model.ft-{jobid}. This is a great way to build on existing work and create something truly unique.
If you're looking for a more streamlined approach, you can also fine-tune one of the base models, such as babbage-002 or davinci-002. These are completion models rather than chat models, so they are suited to plain text-completion tasks.
Here are some of the models that support fine-tuning:
- babbage-002
- davinci-002
- gpt-35-turbo (0613)
- gpt-35-turbo (1106)
- gpt-35-turbo (0125)
- gpt-4 (0613)
- gpt-4o (2024-08-06)
- gpt-4o-mini (2024-07-18)
It's worth noting that fine-tuning is currently only supported in certain regions, so be sure to check the models page to see which regions are currently supported.
Models
Fine-tuning models is a crucial step in creating your own custom AI model. You can fine-tune models such as babbage-002, davinci-002, and gpt-35-turbo.
There are several models that support fine-tuning, including babbage-002, davinci-002, gpt-35-turbo (0613), gpt-35-turbo (1106), gpt-35-turbo (0125), gpt-4 (0613), gpt-4o (2024-08-06), and gpt-4o-mini (2024-07-18).
You can also fine-tune a previously fine-tuned model, formatted as base-model.ft-{jobid}. Consult the models page to check which regions currently support fine-tuning.
The following models support fine-tuning:
- babbage-002
- davinci-002
- gpt-35-turbo (0613)
- gpt-35-turbo (1106)
- gpt-35-turbo (0125)
- gpt-4 (0613)
- gpt-4o (2024-08-06)
- gpt-4o-mini (2024-07-18)
Fine-tuning for gpt-4 (0613) and gpt-4o (2024-08-06) is currently in public preview.
Analyze Your Custom Model
After deploying your custom model, it's essential to analyze its performance to ensure it meets your expectations. You can do this by reviewing the result file named results.csv, which is attached to each fine-tuning job after it completes.
The result file contains a header row and a row for each training step performed by the fine-tuning job, with columns such as step, train_loss, train_mean_token_accuracy, valid_loss, and validation_mean_token_accuracy.
You can use the result file to analyze the training and validation performance of your custom model, looking for your loss to decrease over time and your accuracy to increase. If you see a divergence between your training and validation data, that may indicate that you are overfitting.
To view the data in your results.csv file as plots in Azure AI Studio, select the link for your trained model and you will see three charts: loss, mean token accuracy, and token accuracy. If you provided validation data, both datasets will appear on the same plot.
You can also analyze your fine-tuned model by downloading a result file named results.csv from the fine-tuned model page under the Details tab. The result file contains the same columns as the custom model's result file, and you can use it to analyze the training and validation performance of your fine-tuned model.
Here are the columns you can expect to see in the result file:
- step
- train_loss
- train_mean_token_accuracy
- valid_loss
- validation_mean_token_accuracy
By analyzing your custom model's performance, you can identify areas for improvement and fine-tune your model further to achieve better results.
Vision
Vision fine-tuning works by including images in your JSONL training files. Just as you can send one or many image inputs to chat completions, you can include those same message types within your training data.
Images can be provided either as publicly accessible URLs or data URIs containing base64 encoded images.
If some of your images get skipped during training, it's most likely because they contain CAPTCHAs, people, or faces.
To reduce the cost of training, you can set the detail parameter for an image to low, which will resize the image to 512 by 512 pixels and represent it with 85 tokens regardless of its size.
Frequently Asked Questions
What is the difference between Azure AI Studio and Copilot Studio?
Azure AI Studio is a cloud-based platform for building, training, and deploying AI models, similar to Copilot Studio but with a more extensive set of capabilities. While Copilot Studio is a low-code tool, Azure AI Studio offers a more comprehensive platform for advanced users.
Is Azure AI Studio free?
Azure AI Studio itself is free to use and explore, although you'll need an Azure account to sign in, and the individual features and services you access and consume are billed at their normal rates.
Where to start with Azure AI?
To get started with Azure AI, read the documentation and explore the AI demos to gain a solid understanding of the foundation and capabilities. Then, dive into hands-on experience with Azure's machine learning tools and various services.
What is Microsoft Azure AI used for?
Microsoft Azure AI enables developers to quickly build intelligent applications with prebuilt and customizable APIs and models. It helps create cutting-edge, market-ready solutions that are responsible and efficient.
What is AI Azure Studio?
Azure AI Studio is a comprehensive platform for developing and deploying generative AI apps and APIs responsibly, enabling fast innovation at scale. It offers prebuilt and customizable models to build copilots and AI solutions using your own data.