
Azure Redis is a popular in-memory data store that helps you build fast, scalable applications. It's a NoSQL store that keeps data in a key-value format, which makes it simple to use and quick to read from.
With Azure Redis, you can store and retrieve data quickly, even with large amounts of data. Because the data lives in memory rather than on disk, the server can handle a high volume of requests with low latency.
Azure Redis is designed to be highly available and scalable, making it a great choice for applications that need to handle a large number of users or requests. You can scale it up or out to match the needs of your application.
Getting Started
Azure Cache for Redis is designed to cover key scenarios, including improving application performance and increasing scalability. You can choose from service tiers such as Basic, Standard, and Premium.
To get started, you need to understand the key parameters for creating an Azure Cache for Redis instance. This includes identifying the right service tier and configuration options.
You can create an Azure Cache for Redis instance in just a few clicks. Once created, you can interact with the cache using various tools and languages.
Here are the key tasks you'll need to complete to get started with Azure Redis:

- Pick the service tier and configuration options that fit your workload.
- Create the cache instance in the Azure portal.
- Connect to the cache from your application with a Redis client library, as in the sketch below.
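As a starting point, here's a minimal sketch of connecting from .NET with the StackExchange.Redis client. The hostname and access key are placeholders; substitute the values from your own cache's Overview and Access keys pages.

```csharp
using StackExchange.Redis;

// Placeholder hostname and access key; copy the real values from the Azure portal.
var muxer = ConnectionMultiplexer.Connect(
    "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

IDatabase db = muxer.GetDatabase();

// Basic round trip: write a key, then read it back.
db.StringSet("greeting", "Hello from Azure Cache for Redis");
string value = db.StringGet("greeting");
Console.WriteLine(value);
```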
Azure Redis Service
Azure Redis Service provides a managed data cache for your Azure applications. This means you can easily store and retrieve data with high performance and reliability.
You can get metrics from Azure Redis Cache to visualize the performance of your Redis Caches. This allows you to see how your cache is performing and make adjustments as needed.
Here are some key benefits of using Azure Redis Service:
- Visualize the performance of your Redis Caches.
- Correlate the performance of your Redis Caches with your applications.
Overview
Azure Redis Service is a powerful tool for caching data in your Azure applications. It's a managed service, which means you don't have to worry about the underlying infrastructure.
As noted above, one of the key benefits of Azure Redis Service is the metrics it exposes. You can visualize how your cache is performing over time and correlate that performance with your applications, which is especially useful when you're troubleshooting a performance issue and want to identify the root cause.
Monitoring these metrics from the start helps you spot problems early and make data-driven decisions about tuning and scaling your cache.
Versions
Azure Redis Service offers flexibility in terms of Redis versions.
You can choose between OSS Redis versions 4.0.x and 6.0.x, the versions supported by Azure Cache for Redis.
Redis 5.0 was skipped so the service could jump straight to the latest version, which is a significant upgrade.
In the past, Azure Cache for Redis maintained a single Redis version; it now offers a newer major release plus at least one older stable version.
This means you can pick the version that works best for your application, giving you more control and flexibility.
You can choose a newer version for the latest features and performance, or stick with an older version for stability and compatibility.
Service Tiers
Azure Redis Service offers a range of service tiers to suit different needs and budgets. The tiers are designed to provide varying levels of performance, features, and availability.
The Basic tier is ideal for development, testing, and non-critical workloads, but it has no service-level agreement (SLA).
The Standard tier offers a replicated configuration with two VMs, providing a higher level of reliability than the Basic tier.
The Premium tier is a high-performance option, deployed on more powerful VMs, offering higher throughput, lower latency, and better availability.
The Enterprise tier supports Redis modules, including RediSearch, RedisBloom, RedisJSON, and RedisTimeSeries, and offers even higher availability than the Premium tier.
The Enterprise Flash tier is a cost-effective option, powered by Redis Inc.'s Redis Enterprise software, which extends Redis data storage to nonvolatile memory.
In summary, Azure Redis Service offers the following service tiers:

- Basic: a single VM, no SLA; for development, testing, and non-critical workloads.
- Standard: two replicated VMs for higher reliability.
- Premium: more powerful VMs with higher throughput, lower latency, and better availability.
- Enterprise: Redis Enterprise software with module support and higher availability still.
- Enterprise Flash: Redis Enterprise extended onto nonvolatile memory for cost-effective large caches.
Scaling and Performance
Scaling an Azure Cache for Redis instance can be done in two ways: scaling up, which increases the size of the virtual machine (VM) running the Redis server, or scaling out, which divides the cache instance into more nodes of the same size.
Scaling up adds more memory, virtual CPUs (vCPUs), and network bandwidth; scaling out, also known as horizontal scaling or sharding, adds nodes, and its opposite is scaling in.
To scale out, you can create a new cache that is scaled out using clustering, which is enabled during cache creation from the working pane. This involves configuring the settings for non-TLS port, clustering, and data persistence, and selecting a shard count between 1 and 30.
Scaling out improves scalability by allowing new Redis servers to be added and data to be repartitioned as the size of the cache increases.
Types of Scaling
Scaling and performance go hand in hand, and understanding the different types of scaling is crucial to ensuring your Azure Cache for Redis instance runs smoothly.
There are two main types of scaling: scaling up and scaling out. Scaling up increases the size of the Virtual Machine (VM) running the Redis server, adding more memory, Virtual CPUs (vCPUs), and network bandwidth.
Scaling out, on the other hand, divides the cache instance into more nodes of the same size, increasing memory, vCPUs, and network bandwidth through parallelization. This is also known as horizontal scaling or sharding.
Scaling up is also called vertical scaling, and the opposite of scaling up is scaling down. Scaling out is frequently called clustering in the Redis community.
Here's how the two approaches compare:

- Scaling up (vertical scaling): move to a larger VM with more memory, vCPUs, and network bandwidth; the reverse is scaling down.
- Scaling out (horizontal scaling, sharding, or clustering): add more nodes of the same size; the reverse is scaling in.
Scaling out is only supported on the Premium, Enterprise, and Enterprise Flash tiers, and scale in is only supported on the Premium tier. On the Premium tier, clustering must be enabled first before scaling in or out.
Memory Use
Memory use is a crucial aspect of Redis performance. It's essential to configure Redis to use the right amount of memory for your application.
You can specify the maximum amount of memory a Redis server can use (the maxmemory setting). This is done when you configure the server, not at runtime.
A Redis cache has a finite size that depends on the host computer's resources. This is a good thing, as it prevents the cache from growing indefinitely.
You can configure a key in a Redis cache to have an expiration time, after which it's automatically removed from the cache. This helps prevent old or stale data from filling up the cache.
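For example, here's a minimal sketch of writing a key with an expiration using StackExchange.Redis; it assumes `db` is an `IDatabase` obtained from an existing connection, as in the earlier example.

```csharp
// Store a session token that Redis will remove automatically after 20 minutes.
db.StringSet("session:42", "token-value", expiry: TimeSpan.FromMinutes(20));

// Check the remaining lifetime; null means the key has no expiry (or doesn't exist).
TimeSpan? ttl = db.KeyTimeToLive("session:42");
```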
As memory fills up, Redis can automatically evict keys and their values. This is done by following a number of policies, with LRU (least recently used) being the default.
LRU evicts the least recently used keys first, which can be a good policy if you want to keep the most recently used data in the cache.
Security and Management
Redis has a basic security model based on password authentication; removing the password isn't recommended, because it leaves the server directly exposed to untrusted or unauthenticated clients.
Beyond that, securing a cache means placing your own security layer in front of the server, encrypting data as it flows across the network, and controlling which identities can read or write which data. The sections below look at each of these in turn.
Protect
Protecting your data is crucial. Redis has a limited security model based on password authentication, and it isn't designed to provide comprehensive sign-in security on its own.
You can't rely solely on Redis for security, so it's essential to implement your own security layer in front of the Redis server. All client requests should pass through this additional layer, so that only trusted clients can reach your data.
To restrict access to commands, you can disable them or rename them, and provide only privileged clients with the new names. This gives you more control over who can do what to your data.
If you need to protect data as it flows across the network, OSS Redis expects you to put an SSL proxy in front of it, since it doesn't directly support transport encryption; Azure Cache for Redis already exposes a TLS endpoint (port 6380) for exactly this reason.
Here are some ways to protect the data in your cache:
- Implement an authentication mechanism that requires applications to specify which identities can access data in the cache.
- Specify which operations (read and write) these identities are allowed to perform.
You can also split the cache into partitions by using different cache servers and only grant access to identities for the partitions that they should be allowed to use. Alternatively, you can encrypt the data in each subset by using different keys, and provide the encryption keys only to identities that should have access to each subset.
To protect the data as it flows in and out of the cache, you should consider implementing SSL if the cache is located remotely and requires a TCP or HTTP connection over a public network.
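As an illustration, here's the same kind of connection shown earlier in connection-string form, expressed with `ConfigurationOptions` so each security setting is explicit. The hostname and access key are placeholders.

```csharp
using StackExchange.Redis;

// Placeholder endpoint and access key; both come from the Azure portal.
var options = new ConfigurationOptions
{
    EndPoints = { "mycache.redis.cache.windows.net:6380" },
    Password = "<access-key>",   // password authentication
    Ssl = true,                  // encrypt traffic between client and cache
    AbortOnConnectFail = false
};

var muxer = ConnectionMultiplexer.Connect(options);
IDatabase db = muxer.GetDatabase();
```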
Managing Concurrency
Concurrent access to shared data can lead to issues, just like with any shared data store. Caches are often shared by multiple instances of an application, making concurrency a concern.
To manage concurrency, you can adopt either an optimistic or pessimistic approach. Optimistic concurrency checks if the data has changed before updating it, while pessimistic concurrency locks the data to prevent changes.
Optimistic concurrency is suitable for situations where updates are infrequent or unlikely to collide. This approach is more scalable than pessimistic concurrency.
Pessimistic concurrency, on the other hand, locks the data to prevent changes, but can block other instances that need to process the same data. This approach is recommended only for short-lived operations.
Here are the key differences between optimistic and pessimistic concurrency (a sketch of the optimistic approach follows this list):
- Optimistic concurrency: checks for data changes before updating, suitable for infrequent updates or unlikely collisions.
- Pessimistic concurrency: locks data to prevent changes, suitable for short-lived operations or high-collision scenarios.
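To make the optimistic approach concrete, here's a sketch using a StackExchange.Redis transaction with a precondition: the update only commits if the value still matches what was originally read. (For the pessimistic approach, the same library offers `LockTake`/`LockRelease` helpers.)

```csharp
// Optimistic update: read the value, then only write if nobody changed it meanwhile.
string original = db.StringGet("inventory:widget-count");
int newCount = int.Parse(original) - 1;

ITransaction tran = db.CreateTransaction();
// The transaction executes only if the key still holds the value we read.
tran.AddCondition(Condition.StringEqual("inventory:widget-count", original));
_ = tran.StringSetAsync("inventory:widget-count", newCount);

bool committed = tran.Execute(); // false: another client changed the value first; retry
```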
Manage Expiration
Managing expiration is a crucial aspect of caching, as it ensures that data remains fresh and relevant. You can configure the cache to expire data and reduce the period for which data may be out of date.
Most caching systems enable you to set a default expiration policy when you configure the cache. This policy can be overridden for individual objects when you store them programmatically in the cache.
The expiration period for the cache and its objects should be carefully considered. If it's too short, objects expire too quickly and you lose the benefit of caching; if it's too long, the data risks becoming stale.
Some caches enable you to specify the expiration period as an absolute value or as a sliding value that causes the item to be removed from the cache if it isn't accessed within the specified time. This setting overrides any cache-wide expiration policy, but only for the specified objects.
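Redis itself tracks only an absolute time-to-live, but a sliding window is easy to approximate by resetting the TTL on every read. A minimal sketch, assuming `db` is an `IDatabase` as before:

```csharp
// Approximate a 10-minute sliding expiration: each successful read pushes the expiry back.
RedisValue value = db.StringGet("profile:42");
if (value.HasValue)
{
    db.KeyExpire("profile:42", TimeSpan.FromMinutes(10));
}
```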
Data might fill up in the cache if it remains resident for a long time, causing some items to be forcibly removed in a process known as eviction. Cache services typically evict data on a least-recently-used (LRU) basis, but you can usually override this policy and prevent items from being evicted.
Here are some common eviction policies:
- A most-recently-used policy (in the expectation that the data won't be required again).
- A first-in-first-out policy (oldest data is evicted first).
- An explicit removal policy based on a triggered event (such as the data being modified).
Implementation and Configuration
Azure Cache for Redis can be implemented as a service in an Azure datacenter, providing a caching service that can be accessed from any Azure application.
It's a high-performance caching solution that provides availability, scalability, and security, typically running as a service spread across one or more dedicated machines.
To configure Azure Cache for Redis, you can start by checking out the Azure Cache for Redis documentation or FAQ for more information.
Here are some additional resources to consider when implementing Azure Cache for Redis:
- Azure Cache for Redis documentation
- Azure Cache for Redis FAQ
- Task-based Asynchronous pattern
- Redis documentation
- StackExchange.Redis
- Data partitioning guide
Portal
To scale your Azure Cache for Redis using the Azure portal, you can follow these steps: browse to the cache in the Azure portal and select Scale from the Resource menu. You can scale up or down, but keep in mind that scaling up will automatically update maxmemory-reserved and maxfragmentationmemory-reserved settings in proportion to the cache size.
To scale up or down, choose a pricing tier in the working pane and then choose Select. While the cache scales to the new tier, a Scaling Redis Cache notification is displayed. When scaling completes, the status changes from Scaling to Running.

On the Enterprise tiers, you can instead scale out by increasing the Capacity slider, but this can only be used to increase capacity, not decrease it. Capacity increases in increments of two, reflecting the number of underlying Redis Enterprise nodes being added.
Here's a summary of the scaling options available in the Azure portal:

- Scale up or down: pick a different pricing tier in the working pane.
- Scale out (Enterprise tiers): increase the Capacity slider; capacity can't be decreased this way.
Remember to monitor the Scaling Redis Cache notification while the cache is scaling to the new tier.
Considerations for Implementation
When implementing caching in Azure, consider the architecture of Azure Cache for Redis: it's a high-performance caching solution that provides availability, scalability, and security, and it typically runs as a service spread across one or more dedicated machines.
To ensure fast access, Azure Cache for Redis attempts to keep as much information as it can in memory, providing low latency and high throughput by reducing the need to perform slow I/O operations. This is a key consideration when designing your cache.
Azure Cache for Redis is compatible with many APIs used by client applications. If you have existing applications that already use Redis running on-premises, Azure Cache for Redis provides a quick migration path to caching in the cloud.
To get started with Azure Cache for Redis, be sure to check out the documentation and FAQ sections listed below.
- Azure Cache for Redis documentation
- Azure Cache for Redis FAQ
Decide When to Cache
Caching can be a game-changer for improving performance, scalability, and availability. The more data you have and the larger the number of users, the greater the benefits of caching become.
You should consider caching data that is read frequently but modified infrequently. This is because caching reduces latency and contention associated with handling large volumes of concurrent requests.
For example, if you have a database with a limited number of concurrent connections, caching data in a shared cache can help clients access the data even when connections are exhausted.

In short, data with a higher proportion of read operations than write operations is a good candidate for caching.
However, don't rely solely on the cache as the authoritative store of critical information. Always ensure that changes are saved to a persistent data store, so you don't lose important information if the cache becomes unavailable.
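The cache-aside pattern captures both points: read from the cache first, fall back to the authoritative store on a miss, and let a TTL bound staleness. A minimal sketch, where `GetCustomerFromDatabaseAsync` is a hypothetical data-access helper:

```csharp
public async Task<string> GetCustomerJsonAsync(IDatabase db, string customerId)
{
    string cacheKey = $"customer:{customerId}";

    // 1. Try the cache first.
    RedisValue cached = await db.StringGetAsync(cacheKey);
    if (cached.HasValue)
    {
        return cached;
    }

    // 2. Cache miss: read from the authoritative store (hypothetical helper).
    string customerJson = await GetCustomerFromDatabaseAsync(customerId);

    // 3. Populate the cache with a TTL so stale entries age out.
    await db.StringSetAsync(cacheKey, customerJson, expiry: TimeSpan.FromMinutes(5));
    return customerJson;
}
```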
Here are some metrics to monitor to determine if you need to scale your cache (the sketch after this list shows how to sample some of them directly from the server):
- Redis Server Load
- Memory Usage
- Client connections
- Network Bandwidth
- Internal Defender Scans
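Azure Monitor surfaces these metrics in the portal, but you can also pull some of them straight from the cache with the client's server API. A minimal sketch, assuming the `muxer` connection from the earlier examples:

```csharp
using System.Linq;
using StackExchange.Redis;

// Ask the server for its memory statistics (the Redis INFO command).
IServer server = muxer.GetServer(muxer.GetEndPoints().First());
var memoryInfo = server.Info("memory");

foreach (var entry in memoryInfo.SelectMany(section => section))
{
    if (entry.Key == "used_memory_human")
    {
        Console.WriteLine($"Used memory: {entry.Value}");
    }
}
```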
Building a Custom Redis Cache
Building a custom Redis cache requires advanced configuration, which isn't covered by the Azure Redis cache.
You'll need to create your own Redis servers using Azure Virtual Machines, which can be a complex process. Creating several VMs to act as primary and subordinate nodes is necessary for replication.
Each primary/subordinate pair should be located close together to minimize latency. However, each set of pairs can be running in different Azure datacenters located in different regions.
A minimal clustered replication topology that provides a high degree of availability and scalability comprises at least six VMs organized as three pairs of primary/subordinate servers. This is because a cluster must contain at least three primary nodes.
You'll be responsible for monitoring, managing, and securing the service if you implement your own Redis cache this way.
Storing Complex Values
You can store complex values in Redis by serializing them to a textual format, typically XML or JSON. This is useful for caching object graphs.
To do this, create a new file called GameStat.cs and define a class that represents the complex value you want to store.
For example, a GameStat class might have properties like Id, Sport, DatePlayed, Game, Teams, and Results, where Results is a list of tuples pairing each team with its score.
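The original article defines its own version of this class; as a rough sketch under those assumptions, it might look something like this:

```csharp
using System;

public class GameStat
{
    public string Id { get; set; }
    public string Sport { get; set; }
    public DateTime DatePlayed { get; set; }
    public string Game { get; set; }
    public string[] Teams { get; set; }

    // Each result pairs a team with its score.
    public (string Team, int Score)[] Results { get; set; }
}
```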
You can use a library like System.Text.Json to serialize an instance of the GameStat class to a JSON string, then store that string in Redis with the StringSet method.
Here's an example of how to use System.Text.Json to serialize and deserialize a GameStat object:
```csharp
using System.Text.Json;

// ValueTuple elements are fields, and System.Text.Json skips fields by default,
// so IncludeFields is needed or the Results property serializes as empty objects.
var jsonOptions = new JsonSerializerOptions { IncludeFields = true };

var stat = new GameStat
{
    Id = "1950-world-cup", Sport = "Football", DatePlayed = new DateTime(1950, 7, 16),
    Game = "FIFA World Cup", Teams = new[] { "Uruguay", "Brazil" },
    Results = new[] { ("Uruguay", 2), ("Brazil", 1) }
};

string jsonString = JsonSerializer.Serialize(stat, jsonOptions);   // object -> JSON
bool added = db.StringSet("event:1950-world-cup", jsonString);     // JSON -> Redis

// Round trip: read the string back and rebuild the object.
GameStat loaded = JsonSerializer.Deserialize<GameStat>(db.StringGet("event:1950-world-cup"), jsonOptions);
```
This code creates a GameStat object, serializes it to a JSON string, stores the string in Redis, and then reads it back and deserializes it into a new object.
Frequently Asked Questions
What is the alternative to Redis in Azure?
Within Azure, the managed Redis offering is Azure Cache for Redis itself; if you want an alternative to Redis entirely, consider Azure Cosmos DB, which also offers low-latency, scalable data access.
Is Redis a database or cache?
Redis can be used as both a database and a cache, offering high performance for data storage and retrieval. It's a versatile in-memory data store that excels as a cache or message broker, but also provides database-like functionality.
What is an Azure Redis cache?
Azure Redis Cache is a high-performance, in-memory caching service that enables fast and scalable cloud or hybrid deployments. It provides sub-millisecond latency and handles millions of requests per second.
How long does it take to deploy Azure Redis cache?
Deployment of Azure Redis cache typically takes several minutes to complete. You can monitor the progress on the Azure Cache for Redis Overview pane.
How do I check my Redis cache in Azure?
To check your Redis cache in Azure, sign in to the Azure portal and browse to your Azure Cache for Redis instance. From there, you can view and manage your cache's settings and monitor its metrics.
Sources
- https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-overview
- https://docs.datadoghq.com/integrations/azure_redis_cache/
- https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-how-to-scale
- https://learn.microsoft.com/en-us/azure/architecture/best-practices/caching
- https://azure.github.io/redis-on-azure-workshop/labs/01-explore-azure-cache-for-redis.html