Azure Cognitive Search Vector: Getting Started with Vector Search AI



Azure Cognitive Search Vector is a powerful tool that allows you to add a new dimension to your search capabilities - literally. By incorporating vector search, you can enable your users to search for similar items, even if they don't know the exact keyword.

With vector search, you can represent each item as a vector, which can be thought of as a set of coordinates in a multi-dimensional space. This allows for more nuanced and flexible search queries, such as finding similar products or recommending related content.
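For instance, "similar" in this space usually means "close by" under a distance or similarity measure such as cosine similarity. Here's a minimal sketch (the toy 4-dimensional vectors are illustrative; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two toy "embeddings" -- a document vector and a query vector.
doc = [0.1, 0.3, 0.5, 0.1]
query = [0.1, 0.25, 0.55, 0.05]

print(round(cosine_similarity(doc, query), 3))
```

Items whose vectors score close to 1.0 against the query vector are returned as the nearest matches, even when no keywords overlap.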

Azure Cognitive Search Vector provides a range of benefits, including improved search accuracy and relevance, faster search performance, and enhanced user experience.

Prerequisites

To get started with Azure Cognitive Search vector, you'll need to meet some prerequisites. You can use Azure AI Search in any region and on any tier.

First, you'll need a vector index on Azure AI Search. To confirm you have one, check your index definition for a vectorSearch section and searchable vector fields. You can also add a vectorizer to your index for built-in text-to-vector or image-to-vector conversion during queries.
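For reference, here is roughly what a vector field and a vectorSearch section look like in an index definition, sketched as a Python dict mirroring the JSON. Names such as my-vector-profile and contentVector are placeholders, not required values:

```python
# Illustrative index definition fragment with one vector field and a
# vectorSearch section. All names here are placeholders.
index_definition = {
    "name": "my-demo-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {
            "name": "contentVector",
            "type": "Collection(Edm.Single)",
            "dimensions": 1536,  # must match the embedding model's output
            "vectorSearchProfile": "my-vector-profile",
            "searchable": True,
        },
    ],
    "vectorSearch": {
        "profiles": [
            {"name": "my-vector-profile", "algorithm": "my-hnsw-config"}
        ],
        "algorithms": [
            {"name": "my-hnsw-config", "kind": "hnsw"}
        ],
    },
}

# The presence of a vectorSearch section is what confirms a vector index.
print("vectorSearch" in index_definition)
```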



To run the examples, you'll need Visual Studio Code with a REST client and sample data. You can get started with the REST client by following the Quickstart: Azure AI Search using REST guide.

To use Azure Cognitive Search vector, you'll need an index with searchable vector fields on Azure AI Search. You'll also need a deployed embedding model, such as text-embedding-ada-002, text-embedding-3-small, or text-embedding-3-large on Azure OpenAI.

Here are the specific embedding models you can use:

  • text-embedding-ada-002
  • text-embedding-3-small
  • text-embedding-3-large

You'll also need permissions to use the embedding model. If you're using Azure OpenAI, the caller must have Cognitive Services OpenAI User permissions. Alternatively, you can provide an API key.

Finally, make sure you have Visual Studio Code with a REST client to send the query and accept a response. We recommend enabling diagnostic logging on your search service to confirm vector query execution.


Azure AI

In the sample deployment, Azure AI Search ends up with two indexes: vector-index and vector-index-content.

These indexes are used differently, as hinted by their varying sizes in the Azure Portal.

The system-prompt container has files stored in Blob Storage under the /SystemPrompts directory.

Three system prompts are provided in the codebase.


Getting Started


To get started with Azure Cognitive Search Vector, you'll need to be approved for the Azure OpenAI services on your Azure subscription, which usually takes around 24 hours.

Here's a step-by-step guide to getting started:

  1. Clone the repo to your local drive.
  2. Open VS Code and a PowerShell terminal.
  3. Use git to check out the cognitive-search-vector branch, or change branches in VS Code.
  4. Start Docker Desktop if it isn't already running.

If you already know your Azure subscription ID, you can skip the next few steps.

The whole process should take around 10-20 minutes to deploy, and you'll see a web app URL close to the bottom of the output in the terminal once it's complete.

From Pure to Hybrid Retrieval

Using a pure vector approach for search can be limited, but Azure Cognitive Search has taken it to the next level with hybrid retrieval. This means you can now combine vector and traditional keyword scores for even better results.


Hybrid search delivers more accurate and effective search experiences by harnessing both vector and keyword scores, giving you the best of both worlds.

With hybrid search, you can expect to see improved retrieval result quality, making it easier to find what you're looking for. This is especially useful when dealing with complex search queries or large datasets.

Indexing

Indexing is a crucial step in Azure Cognitive Search Vector. You can index data from various sources, including Azure Blob Storage, Azure Cosmos DB, and SQL Server.

Indexing allows you to create a searchable repository of data, making it possible to query and retrieve specific information.


Getting Index

You can get the vector index size through the Azure portal, REST APIs, or Azure SDKs.

Vector indexes consume memory at the service level (almost 93 megabytes in one example), and on disk they take up roughly three times the space of the in-memory vector index.


The physical size of the index on disk, plus the in-memory size of the vector fields, can be obtained by sending a GET Index Statistics request. The response includes usage information at the index level.
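As a sketch, reading those numbers out of the statistics response could look like this (the JSON shape mirrors a GET Index Statistics response body; the values are invented):

```python
# Invented example of a GET /indexes/{name}/stats response body.
stats_response = {
    "documentCount": 10_000,
    "storageSize": 315_000_000,      # physical size on disk, in bytes
    "vectorIndexSize": 105_000_000,  # in-memory size of vector fields, in bytes
}

storage_mb = stats_response["storageSize"] / 1_000_000
vector_mb = stats_response["vectorIndexSize"] / 1_000_000
print(f"disk: {storage_mb:.0f} MB, vector index in memory: {vector_mb:.0f} MB")
```

Note how the invented numbers follow the roughly 3x disk-to-memory ratio described above.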

Here are the methods to get vector index size:

  • Portal
  • REST

Factors Affecting Index

There are three major components that affect the size of your internal vector index: the raw size of the data, the overhead from the selected algorithm, and the overhead from deleting or updating documents within the index.

The raw size of the data is a significant contributor to the overall size of your vector index, as each vector is usually an array of single-precision floating-point numbers.

The selected algorithm also plays a crucial role in determining the size of your vector index, with different algorithms having varying overheads.

Deleting or updating documents within the index can also increase its size, as the index needs to be updated to reflect these changes.

To estimate the total size of your vector index, you combine these three factors: the raw data size plus the algorithm and deleted-document overheads.
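A back-of-the-envelope sketch of that estimate, modeling the algorithm and deleted-document overheads as simple multipliers (the multiplier values here are placeholders, not service-measured figures):

```python
def estimate_vector_index_size(num_docs, dimensions, bytes_per_dim,
                               algorithm_overhead=1.1,
                               deleted_docs_overhead=1.1):
    # Raw size: one vector per document.
    raw = num_docs * dimensions * bytes_per_dim
    # Overheads modeled as multipliers on the raw size (placeholder values).
    return raw * algorithm_overhead * deleted_docs_overhead

# 1 million documents, 1536-dimension vectors of 4-byte Edm.Single values.
size_bytes = estimate_vector_index_size(1_000_000, 1536, 4)
print(f"{size_bytes / 1_000_000_000:.2f} GB")
```

Measure your own index statistics to calibrate the overhead multipliers; they vary by algorithm and update patterns.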


Raw Data


Raw data is the foundation of any indexing system, and understanding its size is crucial for estimating the requirements of your vector index. The raw size of your data is determined by the storage size of one vector, which is calculated by multiplying the number of documents containing that vector field by the dimensions of the vector field and the size of the data type.

The size of one vector is determined by its dimensionality. For example, if you have a vector field with 4 dimensions and each dimension is a 4-byte Collection(Edm.Single) element, the size of one vector would be 16 bytes.

The size of the data type itself is a critical factor in determining the raw size of your data. Depending on the EDM data type used, the size per dimension can be 4 bytes, 2 bytes, or 1 byte.

Here's a quick reference for the size of the different EDM vector data types:

  • Collection(Edm.Single): 4 bytes per dimension
  • Collection(Edm.Half): 2 bytes per dimension
  • Collection(Edm.Int16): 2 bytes per dimension
  • Collection(Edm.SByte): 1 byte per dimension

By understanding the raw size of your data, you can make informed decisions about how to optimize your indexing system for better performance and efficiency.
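Putting the per-type sizes to work, the raw-size calculation is just documents x dimensions x bytes per dimension. A small sketch (the document and dimension counts are examples, and the byte sizes in the table are assumed as described above):

```python
# Bytes per dimension for each EDM vector data type (assumed values,
# matching the per-type sizes described above).
BYTES_PER_DIMENSION = {
    "Collection(Edm.Single)": 4,
    "Collection(Edm.Half)": 2,
    "Collection(Edm.Int16)": 2,
    "Collection(Edm.SByte)": 1,
}

def raw_vector_size(num_docs, dimensions, edm_type):
    # Raw size = documents x dimensions x size of the data type.
    return num_docs * dimensions * BYTES_PER_DIMENSION[edm_type]

# Example: 100,000 documents with 1536-dimension Edm.Single vectors.
print(raw_vector_size(100_000, 1536, "Collection(Edm.Single)"))  # 614400000 bytes
```

Switching the same field to a narrower type such as Collection(Edm.Half) halves the raw size, which is one reason narrow types matter at scale.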

Multiple Fields


Indexing multiple fields can be a powerful way to enhance search functionality. You can set the "vectorQueries.fields" property to multiple vector fields, allowing you to execute a query against each field in the list.

This approach is especially useful when working with multimodal data, such as images and text. For instance, you can use the CLIP model to vectorize both image and text content, and then send multiple queries across these fields in parallel.

To achieve this, you can provide an array of vector queries, each targeting a different field. This is where things can get interesting, as you'll need to ensure that each field contains embeddings from the same embedding model, and that the query is also generated from the same model.

Here's a quick rundown of the key components involved:

  • vectorQueries provides an array of vector queries.
  • vector contains the query embedding; each instance in the array is a separate query.
  • fields specifies which vector field to target.
  • k is the number of nearest-neighbor matches to include in the results.

By using multiple fields, you can create a more comprehensive search experience that takes into account different types of content. This can be particularly useful in applications where users need to search for information across multiple formats, such as images, text, and video.
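A sketch of a request body with one vector query per field (field names and vector values are placeholders; real vectors have hundreds or thousands of dimensions and must come from the same model that produced the indexed embeddings):

```python
# Illustrative request body with two vector queries executed in parallel,
# one per vector field. Vectors are truncated placeholders.
search_request = {
    "count": True,
    "select": "title, description",
    "vectorQueries": [
        {
            "kind": "vector",
            "vector": [0.01, -0.02, 0.03],   # text embedding (placeholder)
            "fields": "textVector",
            "k": 10,
        },
        {
            "kind": "vector",
            "vector": [0.05, 0.00, -0.01],   # image embedding (placeholder)
            "fields": "imageVector",
            "k": 10,
        },
    ],
}

print(len(search_request["vectorQueries"]), "vector queries in this request")
```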

Data Storage


Data storage is a crucial aspect of Azure Cognitive Search Vector, and understanding its implications is vital for efficient management.

Vector indexes are held in memory and consume a significant amount of space, and this in-memory size is what most discussions of vector index size focus on.

The storage size quota for vector indexes is roughly three times the size of the vector index in memory, so if your vectorIndexSize usage is at 100 megabytes (100 million bytes), you would have used at least 300 megabytes of storageSize quota to accommodate your vector indexes.

Storing Embeddings

Storing embeddings requires a search index, which can be done using Azure AI Search. You can store string representations of objects like Customer, Product, and Sales Order in a search index with vector embeddings.

To store embeddings, you can use Azure AI Search, as shown in the demo. This allows you to index objects with their vector representations.

To run a vector field query, the user's text query string must first be converted into a vector representation, which you can do with an embedding library or API in your application code.



For example, you can use the azure-search-vector-samples repository for code samples on generating embeddings. Always use the same embedding models used to generate embeddings in the source documents.

One approach to querying a vector field is integrated vectorization, now generally available, which lets Azure AI Search handle query vectorization for you. This eliminates the need for your application code to connect to a model, generate embeddings, and handle the response.

Alternatively, you can call the deployed embedding model yourself. On a successful call, the "embedding" field in the response body contains the vector representation of the query string.
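Extracting the query vector from such a response might look like this (the response shape is a trimmed-down sketch of an embeddings API response body; the numbers are invented, and a real embedding array has as many entries as the model's dimensions):

```python
# Invented, truncated example of an embeddings response body.
embedding_response = {
    "data": [
        {"embedding": [0.0023, -0.0091, 0.0154], "index": 0}
    ],
    "model": "text-embedding-ada-002",
}

# Pull out the vector; this becomes the "vector" value of a
# vectorQueries entry in the subsequent search request.
query_vector = embedding_response["data"][0]["embedding"]
print(len(query_vector), "dimensions")
```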

Affect Disk Storage

If you're storing vectors in a database, you should know how they affect disk storage.

The size of vectors on disk is roughly three times the size of the vector index in memory. For example, if your vector index size usage is 100 megabytes, you'll have used at least 300 megabytes of storage size quota to accommodate your vector indexes.


This means that disk storage for vectors can be substantial, so it's essential to consider this when planning your data storage.

The raw size of the data, overhead from the selected algorithm, and overhead from deleting or updating documents within the index are the three major components that affect the size of your internal vector index.

To estimate the total size of your vector index, you can use the calculation described earlier, combining these three factors.

Calculating Embeddings

Calculating embeddings is a powerful feature of Azure Cognitive Search Vector. You can calculate embeddings on only marked properties of an object using the EmbeddingFieldAttribute and EmbeddingUtility.

These two classes provide a way to decorate properties of models that you want to be included in the text used to create embeddings. The EmbeddingUtility has utility methods that will return JObjects with only the properties that have been decorated with the [EmbeddingField] attribute.

This is particularly useful if you need to save embeddings for data objects and only want to include specific properties in the semantic search.

Calculating Embeddings on Marked Properties


You can decorate properties of models with the EmbeddingFieldAttribute to include them in the text used to create embeddings.

The EmbeddingUtility class has utility methods that return JObjects with only the properties marked with the EmbeddingFieldAttribute.

This allows you to only include specific properties in the semantic search, making it a useful tool for saving embeddings for data objects.

The EmbeddingFieldAttribute is used in conjunction with the EmbeddingUtility to achieve this.

By using the EmbeddingFieldAttribute and EmbeddingUtility, you can customize the properties included in the embeddings, giving you more control over the search results.
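As a rough illustration of the idea (a hypothetical Python analogue, not the demo's actual C# API), marking properties and serializing only the marked ones looks like this:

```python
import json

# Hypothetical Python analogue of the C# EmbeddingFieldAttribute /
# EmbeddingUtility pair: declare which properties of each model type
# should feed the embedding text.
EMBEDDING_FIELDS = {
    "Customer": ["name", "city"],  # "marked" properties (illustrative)
}

def embedding_text(type_name, obj):
    # Keep only the marked properties, analogous to EmbeddingUtility
    # returning a JObject with only [EmbeddingField]-decorated properties.
    marked = {k: v for k, v in obj.items()
              if k in EMBEDDING_FIELDS.get(type_name, [])}
    return json.dumps(marked, sort_keys=True)

customer = {"name": "Contoso", "city": "Seattle", "internal_id": 42}
print(embedding_text("Customer", customer))
```

Only name and city reach the embedding model here; internal_id is excluded from the semantic search, which is the point of marking properties.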

Using RRF

Using RRF, you can fuse multiple ranked results from different vector queries to get a single, more accurate ranking. This is particularly useful when targeting multiple vector fields or running multiple vector queries in parallel.

Multiple sets are created during query execution if the query targets multiple vector fields or runs multiple vector queries. This means the search engine generates multiple queries to target each vector index separately.


A vector query can only target one internal vector index at a time, so for multiple vector fields and queries, the search engine generates multiple queries to target each index. This results in multiple sets of ranked results.

These ranked results are then fused using RRF to produce a single, more accurate ranking. For more information on how RRF works, see the relevant section on Relevance scoring using Reciprocal Rank Fusion (RRF).
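The fusion step itself is straightforward; here is a minimal RRF sketch using the commonly cited constant k = 60 (the service's internal constant and tie-breaking may differ):

```python
def rrf_fuse(ranked_lists, k=60):
    # Reciprocal Rank Fusion: each document's score is the sum of
    # 1 / (k + rank) over every ranked list it appears in (ranks start at 1).
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Two ranked result sets, e.g. from two vector queries.
text_results = ["doc1", "doc2", "doc3"]
image_results = ["doc3", "doc1", "doc4"]
print(rrf_fuse([text_results, image_results]))
```

Documents that rank well in several lists (doc1 and doc3 here) rise to the top of the fused ranking, even if neither list put them first everywhere.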

Weighting

Weighting is a crucial step in calculating embeddings, as it allows you to control the relative importance of each vector query in a search operation.

You can assign a weight query parameter to specify the relative weight of each vector query. This value is used when combining the results of multiple ranking lists produced by two or more vector queries in the same request.

The default weight is 1.0, and the value must be greater than zero. Weights are used when calculating the reciprocal rank fusion score of each document.


A weight of 0.5, for example, would reduce the importance of a vector query in a request. You can also assign a weight of 2.0 to make a vector query twice as important as another one.

In a hybrid query that combines text and vector queries, the text query has an implicit weight of 1.0, which gives it neutral weight relative to the vector queries. You can't set a weight on the text portion directly, but you can influence its contribution with maxTextRecallSize, which controls how many BM25-ranked results are passed into the fusion step.
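A hybrid request with one text query and two weighted vector queries might look like the following sketch (field names and vector values are placeholders):

```python
# Illustrative hybrid request: one text query plus two weighted vector
# queries. Vectors are truncated placeholders.
hybrid_request = {
    "search": "ocean view hotels",        # text query, implicit weight 1.0
    "vectorQueries": [
        {
            "kind": "vector",
            "vector": [0.01, 0.02, -0.03],
            "fields": "titleVector",
            "k": 10,
            "weight": 2.0,                # twice as important as default
        },
        {
            "kind": "vector",
            "vector": [-0.02, 0.04, 0.01],
            "fields": "contentVector",
            "k": 10,
            "weight": 0.5,                # half as important as default
        },
    ],
}

print([q["weight"] for q in hybrid_request["vectorQueries"]])
```

During fusion, each vector query's reciprocal rank contribution is scaled by its weight, so matches from titleVector count four times as much as matches from contentVector in this sketch.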

Frequently Asked Questions

Does Azure Cognitive Search use a vector database?

Azure Cognitive Search uses vector similarity searches, which are powered by embeddings stored in vector fields, but it doesn't use a traditional vector database. This approach enables more accurate and relevant search results compared to traditional keyword searches.

What is the difference between Azure search and Azure Cognitive Search?

Azure Search is best for simple search needs, while Azure Cognitive Search offers advanced AI capabilities for more complex search requirements. It's ideal for when you need data enrichment and integration with Azure Cognitive Services.

What is the difference between traditional search and vector search?

Traditional search relies on keyword matches, while vector search uses distances in a mathematical space to find similar data, making it more precise and efficient. This difference in approach leads to more accurate and relevant results, especially for complex queries.

What is the difference between semantic search and vector search in Azure?

Semantic search interprets the meaning and intent behind your query, while Vector Search finds the best matches by comparing your query to possible results as numerical vectors. Together, they help you get more accurate and relevant search results.

Claire Beier

Senior Writer
