Getting Started with Azure OpenAI Function Calling

To start using Azure OpenAI Function Calling, you need to have an Azure subscription. This will give you access to the Azure OpenAI service and allow you to create and manage functions.

First, sign up for an Azure account if you haven't already. This will give you a free trial or a paid subscription depending on your needs.

With an Azure subscription, you can create a new Azure OpenAI resource in the Azure portal. This will give you a unique endpoint to use for function calls.

Once you have an Azure OpenAI resource, deploy a chat model to it and you can start defining function tools in your requests. If you want to host the code that calls the service, an Azure Function App is one convenient option, but note that "function calling" itself is a feature of the chat completions API rather than of Azure Functions.

Using Azure OpenAI Functions

To use chat tools in Azure OpenAI, you define a function tool and include it in the options for a chat completions request. This allows the assistant to invoke defined functions and capabilities in the process of fulfilling a chat completions request.
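As a concrete sketch of what that looks like on the wire, here is a minimal chat completions request body that defines one function tool. The `get_current_weather` function and its parameters are hypothetical; in the .NET client library, the same shape is what `ChatCompletionsOptions` with a function tool definition serializes to.

```python
# A hypothetical "get_current_weather" function tool, described with a
# JSON Schema for its parameters, plus the request body that carries it.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g. Seattle",
                },
            },
            "required": ["location"],
        },
    },
}

request_body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the weather in Seattle?"},
    ],
    "tools": [weather_tool],  # the assistant may invoke this tool
}
```

The model never executes the function itself; it only emits a request to call it, which your code then fulfills.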

You can control the behavior of tool calls using the ToolChoice property on ChatCompletionsOptions. There are three options: Auto, which lets the model decide which tools to call, None, which instructs the model not to use any tools, and a reference to a named function definition or function tool definition, which restricts the response to calling the corresponding tool.

If you choose to use tool calls, the response message will include one or more "tool calls" that must be resolved via "tool messages" on the subsequent request. To provide tool call resolutions, you need to provide all prior historical context, including the original system and user messages, the response from the assistant that included the tool calls, and the tool messages that resolved each of those tools.

Here are the three options for controlling tool calls:

  • ChatCompletionsToolChoice.Auto: The model determines which tools to call.
  • ChatCompletionsToolChoice.None: The model does not use any tools.
  • A reference to a named function definition or function tool definition: The model restricts its response to calling the corresponding tool.
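On the wire, those three options correspond to the following `tool_choice` values (a sketch; the function name is hypothetical, and in the .NET library these map to `ChatCompletionsToolChoice.Auto`, `ChatCompletionsToolChoice.None`, and a named function tool definition):

```python
# The three tool_choice settings as they appear in the request body.
tool_choice_auto = "auto"    # model decides whether/which tools to call
tool_choice_none = "none"    # model must answer without calling tools
tool_choice_named = {        # model must call exactly this tool
    "type": "function",
    "function": {"name": "get_current_weather"},  # hypothetical name
}
```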

Use Functions

To use functions with Azure OpenAI, you need to define a function tool first. This tool can be a custom function or a predefined one that can be invoked during the chat completion process.

You can define a function tool by specifying a named function definition or function tool definition. This will instruct the model to restrict its response to calling the corresponding tool.

To include the new function tool definition in a chat completion request, you'll need to add it to the options. This will allow the assistant to invoke the defined function and fulfill the chat completion request.

The response message from the assistant will include one or more "tool calls" that must be resolved via "tool messages" on the subsequent request. This is similar to a callback for chat completions.

To provide tool call resolutions to the assistant, you'll need to provide all prior historical context, including the original system and user messages, the response from the assistant that included the tool calls, and the tool messages that resolved each of those tools.
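The resolution step above can be sketched as a small helper that replays the history, appends the assistant's tool-calling message, and adds one tool message per tool call. The message shapes follow the chat completions wire format; the `get_current_weather` implementation and call id are hypothetical stand-ins.

```python
import json

def resolve_tool_calls(history, assistant_message, executors):
    """Build the full message list for the follow-up request:
    prior context + assistant tool calls + one tool message per call.
    `executors` maps function names to local callables."""
    messages = list(history)            # original system/user messages
    messages.append(assistant_message)  # assistant msg with tool_calls
    for call in assistant_message["tool_calls"]:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        result = executors[name](**args)   # run the local implementation
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],  # ties resolution to its call
            "content": json.dumps(result),
        })
    return messages

# Hypothetical local implementation of the tool.
def get_current_weather(location):
    return {"location": location, "forecast": "sunny"}

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in Seattle?"},
]
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_current_weather",
                     "arguments": '{"location": "Seattle"}'},
    }],
}
messages = resolve_tool_calls(
    history, assistant_message,
    {"get_current_weather": get_current_weather})
```

Sending `messages` back in the next request gives the model everything it needs to produce its final answer.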

The model will ignore the ChoiceCount when providing tools, and all streamed responses should map to a single, common choice index in the range of [0..(ChoiceCount - 1)].

You can control the behavior of tool calls by using the ToolChoice property on ChatCompletionsOptions. There are three options: Auto, None, and a reference to a named function definition or function tool definition.

Here are the details:

  • ChatCompletionsToolChoice.Auto: instructs the model to determine which tools to call. If tools are selected, a CompletionsFinishReason of ToolCalls will be received on response ChatChoice instances and the corresponding ToolCalls properties will be populated.
  • ChatCompletionsToolChoice.None: instructs the model to not use any tools and instead always generate a message. Note that the model's generated message may still be informed by the provided tools even when they are not or cannot be called.
  • Reference to a named function definition or function tool definition: instructs the model to restrict its response to calling the corresponding tool. Response ChatChoice instances will report a FinishReason of CompletionsFinishReason.Stopped and the corresponding ToolCalls property will be populated.
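A small sketch of inspecting a (simplified, dictionary-shaped) response choice for those outcomes: with Auto, a tool-calling turn reports a finish reason of "tool_calls", while a named tool choice reports "stop" even though the tool calls are populated, so checking both is the safe pattern.

```python
# Decide whether the assistant's turn requires tool resolution.
def wants_tool_calls(choice):
    message = choice["message"]
    return bool(message.get("tool_calls")) or \
        choice["finish_reason"] == "tool_calls"

choice = {
    "finish_reason": "tool_calls",
    "message": {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_current_weather",
                         "arguments": "{}"},
        }],
    },
}
```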

Choosing a GPT Model

When choosing a GPT model for your Azure OpenAI Function, you can select from gpt-35-turbo-0613 and gpt-4-0613.

Both gpt-35-turbo-0613 and gpt-4-0613 let you describe functions in an API call and have the model intelligently choose to output a JSON object containing the arguments needed to call them.

To use this feature well, you'll need to understand how to get the output in the desired JSON format.

Because the arguments come back as structured JSON rather than free-form text, function calling can replace fragile system-message prompting as a way to get structured output, making these models a valuable choice for your Azure OpenAI work.
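Since the model returns its chosen function's arguments as a JSON string, recovering the structured output is a single parse. The argument names here are hypothetical:

```python
import json

# The "arguments" field of a tool call arrives as a JSON *string*,
# so one json.loads call yields the structured data you asked for.
arguments_json = '{"location": "Seattle", "unit": "celsius"}'
arguments = json.loads(arguments_json)
```

Note that the model can emit malformed JSON, so production code should wrap the parse in error handling.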

Implementation Details

To implement Azure OpenAI function calling, you'll need to create an Azure Function with a HTTP trigger. This will allow you to send a request to the Azure Function, which can then call the OpenAI API.

The Azure Function should be configured with the OpenAI endpoint and authentication method. For the non-Azure OpenAI API the default endpoint is `https://api.openai.com`; an Azure OpenAI resource instead exposes an endpoint of the form `https://<resource-name>.openai.azure.com`. Either way, you might store the value in an environment variable such as `OpenAI_API_ENDPOINT`.

The Azure Function will also need to handle the response from the OpenAI API, which can be a JSON object containing the model's output. You can then parse this output and return it to the user in your Azure Function's response.
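The request/response plumbing described above can be sketched as a plain handler function. Everything here is a simplified stand-in: a real Azure Function would wrap this logic in the `azure.functions` `HttpRequest`/`HttpResponse` types, and `call_openai` represents whatever helper you write to reach the OpenAI API.

```python
import json

def handle_request(body, call_openai):
    """Core of a hypothetical HTTP-triggered Azure Function: validate
    the incoming JSON body, forward the prompt to an OpenAI-calling
    helper, and return the model output as a JSON response."""
    prompt = body.get("prompt", "")
    if not prompt:
        return {"status": 400,
                "body": json.dumps({"error": "prompt required"})}
    model_output = call_openai(prompt)  # e.g. a chat completions call
    return {"status": 200,
            "body": json.dumps({"output": model_output})}

# A local stand-in for the OpenAI call lets the handler be exercised
# without network access.
response = handle_request({"prompt": "hello"}, lambda p: p.upper())
```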

Create a Client

To create a client, you'll need to authenticate the client first, which involves creating an instance of the OpenAIClient class. This requires providing a valid endpoint URI to an Azure OpenAI resource, along with a corresponding key credential, token credential, or Azure identity credential.

You can authenticate with Azure Active Directory using the Azure Identity library, but most examples use API key authentication. To use the DefaultAzureCredential provider instead, you'll need to install the Azure.Identity package.

To create the OpenAI client, you'll need to use the Azure SDK, and provide the URL and key extracted from the Azure OpenAI resource. This will allow you to configure the client for use with Azure OpenAI.
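Whichever SDK you use, the client ultimately needs the same two pieces from the portal: the resource endpoint and a key. A sketch of how they combine into a chat completions request, assuming API key authentication; the resource name, deployment name, and api-version value are placeholders you would substitute:

```python
# Values copied from the Azure OpenAI resource in the Azure portal.
endpoint = "https://my-resource.openai.azure.com"  # placeholder
deployment = "my-gpt-deployment"                   # your model deployment
api_version = "2024-02-01"                         # may need updating

# Azure OpenAI routes requests per deployment, with the API version
# passed as a query parameter and the key as an "api-key" header.
url = (f"{endpoint}/openai/deployments/{deployment}"
       f"/chat/completions?api-version={api_version}")
headers = {
    "api-key": "<your-azure-openai-key>",  # or an Azure AD bearer token
    "Content-Type": "application/json",
}
```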

.NET Client Library

The .NET client library is a powerful tool for integrating OpenAI models into your .NET applications. It's an adaptation of OpenAI's REST APIs, providing an idiomatic interface and rich integration with the rest of the Azure SDK ecosystem.

With this library, you can connect to Azure OpenAI resources or the non-Azure OpenAI inference endpoint, making it a great choice for even non-Azure OpenAI development. This flexibility is a major advantage, allowing you to work with OpenAI models in a variety of scenarios.

You can use the client library to create chat completions using models like gpt-4 and gpt-35-turbo, or generate images with dall-e-3. These are just a few examples of the many capabilities of the library.

Here are some specific use cases for the .NET client library:

  • Create chat completions using models like gpt-4 and gpt-35-turbo
  • Generate images with dall-e-3
  • Transcribe or translate audio into text with whisper
  • Create a text embedding for comparisons
  • Create a legacy completion for text using models like text-davinci-002

Azure OpenAI is a managed service that allows developers to deploy, tune, and generate content from OpenAI models on Azure resources. This service is tightly integrated with the .NET client library, making it easy to get started.

Frequently Asked Questions

Is function calling available in Azure OpenAI?

Yes. Function calling is supported in the chat completions API and in the Assistants API, letting you describe function structures and have the required functions returned with their arguments. The feature is part of the Azure OpenAI service.

Cora Stoltenberg

Junior Writer
