Azure Premium Storage is designed for high-performance workloads, offering low-latency, SSD-backed disks with up to 5,000 IOPS and 200 MB/sec of throughput per disk, and considerably more per VM when multiple disks are striped together. This makes it ideal for applications that require low latency and high throughput.
Premium Storage disks can be provisioned and attached quickly, which speeds up deploying and scaling your applications – especially useful for development and test environments where you want workloads up and running fast.
With Azure Premium Storage, you also get high availability: each disk is kept as three replicas using locally redundant storage, so a single hardware failure doesn't make your data inaccessible or take your application down.
Azure Premium Storage Basics
Azure Premium Storage is designed for high-performance workloads, providing fast and consistent I/O performance.
It's perfect for demanding applications like virtual machines and databases that require low latency. Premium Storage disks offer up to 5,000 IOPS and 200 MB/sec throughput, depending on the disk size.
There are three Premium Storage disk types available: P10, P20, and P30, each with different IOPS and throughput capabilities.
Here's a breakdown of the disk types:

- P10: 128 GB, 500 IOPS, 100 MB/sec throughput
- P20: 512 GB, 2,300 IOPS, 150 MB/sec throughput
- P30: 1 TB, 5,000 IOPS, 200 MB/sec throughput
You can maximize the performance of your "DS" series VMs by attaching multiple Premium Storage disks, up to the network bandwidth limit available to the VM for disk traffic.
Optimize IOPS, Throughput, and Latency
Optimize IOPS, throughput, and latency by understanding the performance factors that influence your application's performance. The main factors include I/O size, VM size, disk size, number of disks, disk caching, multithreading, and queue depth.
To optimize IOPS, use a smaller I/O size: more small I/Os fit within the same bandwidth, yielding a higher IOPS rate. For example, an enterprise OLTP application that must sustain a very high transactions-per-second rate benefits from a smaller I/O size.
To optimize throughput, use a larger I/O size, which yields higher throughput. For instance, in an enterprise data warehousing application processing large amounts of data, a larger I/O size is ideal.
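The tradeoff between the two comes down to the identity Throughput = IOPS × I/O size. A minimal sketch of that arithmetic (the workload numbers are illustrative, not Azure limits):

```python
def throughput_mbps(iops: float, io_size_kb: float) -> float:
    """Throughput achieved at a given IOPS rate and I/O size (1 MB = 1024 KB)."""
    return iops * io_size_kb / 1024.0

# OLTP-style workload: small 8 KB I/Os at 5,000 IOPS -> IOPS-heavy, modest throughput.
oltp_mbps = throughput_mbps(5000, 8)    # 39.0625 MB/s
# Warehouse-style workload: large 512 KB I/Os at only 400 IOPS -> throughput-heavy.
dw_mbps = throughput_mbps(400, 512)     # 200.0 MB/s
print(oltp_mbps, dw_mbps)
```

The same disk bandwidth can therefore be "spent" on many small transactions or a few large scans, which is why I/O size is the first tuning knob to consider.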
VM size also plays a crucial role in optimizing IOPS and throughput. Use a VM size that offers IOPS and throughput limits greater than your application's requirements. For example, in an enterprise OLTP application, use a VM size with an IOPS limit greater than the total IOPS driven by the storage disks attached to it.
Disk size is another important factor in optimizing IOPS and throughput. Use a disk size that offers IOPS and throughput limits greater than your application's requirements. For example, in an enterprise data warehousing application, use a disk size with a throughput limit greater than the total throughput driven by the premium storage disks attached to it.
To summarize the performance factors and how to tune each one:

- I/O size: smaller I/Os favor IOPS; larger I/Os favor throughput.
- VM size: choose one whose IOPS and throughput limits exceed the application's requirements.
- Disk size and number of disks: provision enough disks that their combined limits exceed the application's requirements.
- Disk caching: ReadOnly caching lowers read latency and offloads reads from the disk.
- Multithreading and queue depth: keep enough outstanding I/Os to actually reach the provisioned limits.
By understanding and optimizing these performance factors, you can achieve optimal IOPS, throughput, and latency for your application on Azure premium storage.
Scalability and Limits
The IOPS and throughput limits of each premium disk size are different and independent from the VM scale limits, so make sure to check both.
For example, a DS4 VM can give up to 256 MB/sec throughput, but a single P30 disk has a throughput limit of 200 MB/sec. This means your application is constrained at 200 MB/sec because of the disk limit.
To overcome this limit, you can provision more than one data disk to the VM or resize your disks to P40 or P50.
High-Scale VM Sizes
High-scale VM sizes are designed to meet the needs of applications requiring high compute power and local disk I/O performance. These VMs come with faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the local disk.
The DS and GS series VMs are examples of high-scale VMs that support premium storage. They provide a range of sizes with varying numbers of CPU cores, memory, and storage capacity.
Choosing the right VM size is crucial, as it affects the processing, memory, and storage capacity available for your application. It also impacts the compute and storage cost.
The largest VM size in the DS series is the Standard_DS14, which has 16 CPU cores, 112 GB of memory, and a maximum of 32 data disks. The largest VM size in the GS series is the Standard_GS5, which has 32 CPU cores, 448 GB of memory, and a maximum of 64 data disks.
Here's a comparison of the two largest VM sizes:

- Standard_DS14: 16 CPU cores, 112 GB of memory, up to 32 data disks
- Standard_GS5: 32 CPU cores, 448 GB of memory, up to 64 data disks
By choosing the right VM size, you can ensure your application meets its performance requirements and scales accordingly.
Scale Limits
Scale limits are a crucial aspect of scalability, and understanding them can help you avoid performance bottlenecks.
The maximum IOPS limits per VM and per disk are different and independent of each other. This means that even if you have a high-performance disk, it won't be able to deliver its full potential if the VM it's attached to has lower IOPS limits.
For example, if you're using a P30 disk on a DS1 VM, the disk can deliver up to 5,000 IOPS, but the VM is limited to 3,200 IOPS. In this case, the application performance is constrained by the VM limit at 3,200 IOPS.
To prevent this situation, choose a VM and disk size that both meet application requirements. This ensures that your application can take full advantage of the available resources.
The same reasoning applies on the throughput side. If an application requires a maximum of 250 MB/sec of throughput and you're running a DS4 VM with a single P30 disk, the VM can deliver up to 256 MB/sec, but the P30 disk tops out at 200 MB/sec. To overcome this limit, provision an additional data disk or resize the disk to a P40 or P50.
Each VM size publishes its own IOPS and throughput caps, so check the documented limits for your chosen size against the limits of the disks you plan to attach. By understanding these scale limits, you can design your architecture to meet the performance requirements of your application without hidden bottlenecks.
Snapshot Restrictions
You can take a snapshot of blobs in Premium Storage, but there are some important restrictions to keep in mind.
There can be a maximum of 100 snapshots for a blob, so to take another one beyond that limit you must first delete existing snapshots.
You can only take a blob snapshot once every 10 minutes. Any attempt to take another snapshot within 10 minutes of taking a snapshot will result in an error.
The maximum capacity for snapshots per Premium Storage account is 10 TB. This is the unique data that exists in the snapshots, not including the base blob size.
Here are the key snapshot restrictions summarized:

- Maximum of 100 snapshots per blob
- At most one snapshot per blob every 10 minutes
- Maximum of 10 TB of snapshot data (unique data only, excluding the base blob) per Premium Storage account
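A pre-flight check along these lines can save you from hitting the service errors described above. This is a hypothetical helper, not part of any Azure SDK:

```python
from datetime import datetime, timedelta
from typing import Optional

MAX_SNAPSHOTS_PER_BLOB = 100
MIN_SNAPSHOT_INTERVAL = timedelta(minutes=10)

def can_snapshot(existing_count: int, last_snapshot_at: Optional[datetime],
                 now: datetime) -> bool:
    """Check the two per-blob snapshot restrictions before attempting one."""
    if existing_count >= MAX_SNAPSHOTS_PER_BLOB:
        return False  # 100-snapshot cap reached
    if last_snapshot_at is not None and now - last_snapshot_at < MIN_SNAPSHOT_INTERVAL:
        return False  # still inside the 10-minute cooldown
    return True

now = datetime(2025, 1, 1, 12, 0)
print(can_snapshot(5, datetime(2025, 1, 1, 11, 55), now))  # False: only 5 minutes elapsed
print(can_snapshot(5, datetime(2025, 1, 1, 11, 45), now))  # True: 15 minutes elapsed
print(can_snapshot(100, None, now))                        # False: cap reached
```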
Storage Configuration
When choosing a VM size, consider the number of disks your application needs. Each VM size has a limit on the number of disks you can attach, typically twice the number of cores.
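The "twice the number of cores" rule of thumb is easy to sanity-check against the sizes quoted earlier (it's a heuristic; always confirm the exact limit for a given size in the Azure docs):

```python
def max_data_disks(cores: int) -> int:
    """Rule of thumb from the section: a VM size typically supports
    roughly twice as many data disks as it has CPU cores."""
    return 2 * cores

# Matches the limits quoted earlier for the largest sizes:
print(max_data_disks(16))  # 32 (Standard_DS14: 16 cores, 32 data disks)
print(max_data_disks(32))  # 64 (Standard_GS5: 32 cores, 64 data disks)
```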
To ensure you have enough storage, assess your application's requirements and choose a VM size that can support the number of disks needed. This will help prevent performance issues down the line.
If you're migrating from standard storage to premium storage, you may need fewer premium disks to achieve the same or higher performance for your application.
Disk Sizes
Disk sizes play a crucial role in determining the performance and scalability of your storage configuration. The right disk size can make all the difference in meeting your application's requirements.
You have a range of options to choose from, including P1, P2, P3, P4, P6, P10, P15, P20, P30, P40, P50, P60, P70, and P80. Each disk size has a unique combination of IOPS, bandwidth, and storage capabilities.
The smallest sizes, P1 through P4, are provisioned at 120 IOPS per disk, the P6 at 240 IOPS, and the P10 at 500 IOPS. From there the limits climb quickly: 1,100 IOPS for the P15, 2,300 for the P20, and 5,000 for the P30.
Here's a breakdown of the different disk sizes and their capabilities:

- P1: 4 GiB, 120 IOPS, 25 MB/sec
- P2: 8 GiB, 120 IOPS, 25 MB/sec
- P3: 16 GiB, 120 IOPS, 25 MB/sec
- P4: 32 GiB, 120 IOPS, 25 MB/sec
- P6: 64 GiB, 240 IOPS, 50 MB/sec
- P10: 128 GiB, 500 IOPS, 100 MB/sec
- P15: 256 GiB, 1,100 IOPS, 125 MB/sec
- P20: 512 GiB, 2,300 IOPS, 150 MB/sec
- P30: 1 TiB, 5,000 IOPS, 200 MB/sec
- P40: 2 TiB, 7,500 IOPS, 250 MB/sec
- P50: 4 TiB, 7,500 IOPS, 250 MB/sec
- P60: 8 TiB, 16,000 IOPS, 500 MB/sec
- P70: 16 TiB, 18,000 IOPS, 750 MB/sec
- P80: 32 TiB, 20,000 IOPS, 900 MB/sec
You can mix and match different disk sizes to meet your application's requirements. For example, you could use a single P50 disk or multiple P10 disks to achieve the desired performance and scalability.
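Working out how many disks of one size cover a target is a small ceiling calculation. The P10 figures below (500 IOPS, 100 MB/sec) are assumed values for illustration; check the current disk documentation before relying on them, and remember the VM's own limits still apply:

```python
import math

def disks_needed(target_iops: float, target_mbps: float,
                 disk_iops: float, disk_mbps: float) -> int:
    """Smallest number of identical disks whose combined limits
    cover both an IOPS target and a throughput target."""
    return max(math.ceil(target_iops / disk_iops),
               math.ceil(target_mbps / disk_mbps))

# Cover 2,000 IOPS and 250 MB/s with P10 disks (assumed 500 IOPS / 100 MB/s each):
print(disks_needed(2000, 250, 500, 100))  # 4 -- IOPS is the binding requirement here
```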
Only Supports Blobs
Premium Storage only supports blobs, not Tables, Queues, or the File service. This means you can't reach those services through the account's table, queue, or file endpoints (account.table.core.windows.net, account.queue.core.windows.net, account.file.core.windows.net); any attempt results in a DNS resolution failure. This is a hard limit, not merely an unsupported or unrecommended configuration.
Only Page Blobs are supported, while Block blobs are not supported at all. Any attempt to perform Block blob related operations will result in an error from the storage service itself.
All other operations on a blob container and blobs in that container are fully supported.
Monitoring and Statistics
Monitoring Azure Premium Storage involves tracking key performance indicators to ensure optimal storage performance. The monitor provides a detailed overview of the Premium Storage accounts under a given subscription. The Availability tab shows the availability history for the past 24 hours or 30 days, giving you a clear picture of your storage's uptime, while the Performance tab shows the health status and events over the same periods, helping you identify potential issues.
To monitor Azure Premium Storage, you can choose from various parameter groups, including Configuration, Capacity, Throughput, and Latency. Among the key metrics monitored are Blob Availability, Blob User Data Size, and Blob Ingress.
Blob Request Statistics
Monitoring Blob Request Statistics can be a complex task, especially if you're not familiar with Azure Premium Storage Accounts. The good news is that with the right tools and knowledge, you can easily track and analyze your Blob Request Statistics.
If you've created an Azure Premium Storage Account for Blobs, you'll notice that the Blob Request Statistics tab is displayed alongside the Monitor Information and Overview tabs. This tab provides a wealth of information about your Blob requests.
To get started with monitoring your Blob Request Statistics, you'll want to take a look at the data being displayed in this tab. Specifically, you'll want to check out the STORAGE BLOB SERVICE AVAILABILITY metric, which is displayed next to the Configuration group in the Overview tab.
Monitored Parameters
Monitoring and statistics are crucial for understanding the performance and health of your Azure Premium Storage Account. As noted above, the Availability and Performance tabs show the availability history, health status, and events for the past 24 hours or 30 days, making it easy to spot issues or trends at a glance.
List view enables you to perform bulk admin configurations, which can save you a lot of time and effort.
The key metrics monitored are the ones listed earlier: Blob Availability, Blob User Data Size, and Blob Ingress, alongside the capacity, throughput, and latency counters.
Security and Support
Azure Premium Storage offers robust security features to protect your data.
Data is stored redundantly, with multiple copies maintained via locally redundant storage, to ensure availability and durability.
With Azure Premium Storage, you can encrypt your data at rest using industry-standard AES 256-bit encryption and protect data in transit with HTTPS/TLS.
Azure Premium Storage also provides secure access controls, including user authentication and authorization, to ensure only authorized users can access your data.
Azure Premium Storage offers 24/7 support to help you with any issues or concerns you may have.
ReadOnly
ReadOnly caching is a feature that can significantly boost the performance of your application. By configuring it on premium storage data disks, you can achieve low read latency and get very high read IOPS and throughput.
Reads performed from cache, which is on the VM memory and local SSD, are faster than reads from the data disk, which is on Azure Blob Storage. This is because the cache is closer to the application, reducing the distance data has to travel.
Premium storage doesn't count the reads served from the cache toward the disk IOPS and throughput, allowing your application to achieve higher total IOPS and throughput. This means you can handle more requests and keep your application running smoothly.
To take advantage of ReadOnly caching, you'll need to configure it on your premium storage data disks. This can be done in the Azure portal, where you can set up the caching policy and adjust settings to suit your application's needs.
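Because cache hits don't count against the disk's provisioned limits, the read rate an application sees scales with its cache hit ratio. A simplified model (it ignores the VM's separate cache bandwidth limits, which also exist):

```python
def achievable_read_iops(disk_iops_limit: float, cache_hit_ratio: float) -> float:
    """Effective read IOPS when cache hits are free: only cache misses
    count against the disk's provisioned IOPS limit."""
    if cache_hit_ratio >= 1.0:
        return float("inf")  # working set fully cache-resident
    return disk_iops_limit / (1.0 - cache_hit_ratio)

# P30 disk (5,000 provisioned IOPS) with half of all reads served from cache:
print(achievable_read_iops(5000, 0.5))  # 10000.0
# With no caching, the disk limit is all you get:
print(achievable_read_iops(5000, 0.0))  # 5000.0
```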
No CORS or Storage Analytics Support
Premium Storage does not support CORS (Cross-Origin Resource Sharing), which rules it out for web applications that depend on cross-origin requests from the browser. Any attempt to enable CORS on a Premium Storage account results in an error.
Storage Analytics is also not supported in Premium Storage. This is because it requires storing data in tables within the same storage account, which Premium Storage doesn't support.
You won't be able to perform "Get Blob Service Properties" and "Set Blob Service Properties" on a Premium Storage Account.
Blob Container ACL Limitation
In Premium Storage, a blob container can only have a Private ACL. This means you can't set the ACL to "Blob" or "Container" like you can in other Azure Storage options.
You'll get an error if you try to set the ACL to anything other than Private. This is a limitation of Premium Storage, so be aware of it when setting up your blob containers.
All your blob containers in Premium Storage therefore use a Private ACL, with no other option available, so plan for authenticated access rather than anonymous public access to blobs or containers.
Pricing and Operations
Premium Storage disks were originally offered in three SKUs – P10, P20, and P30 – with maximum disk sizes of 128 GB, 512 GB, and 1 TB respectively (larger sizes, up through P80, were added later).
The pricing for Premium Storage is calculated by rounding up the page blob size to the nearest SKU, so if you have a page blob size of 1 KB, it will be rounded up to 128 GB, which can result in a significant increase in storage costs.
In one scenario, a user created 150 page blobs with each about 1 KB in size, thinking they would only be charged for 150 KB, but ended up with 150 P10 disks in their storage account, burning roughly $50.00 per day.
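The gotcha is the round-up-to-SKU billing rule itself. A minimal sketch of that mapping, using the original three SKU capacities from this section (a hypothetical helper, not an Azure billing API):

```python
# SKU capacities in GB, as described in this section.
SKUS = [("P10", 128), ("P20", 512), ("P30", 1024)]

def billed_sku(page_blob_gb: float) -> str:
    """Premium page blob billing rounds the blob size UP to the nearest SKU."""
    for name, capacity_gb in SKUS:
        if page_blob_gb <= capacity_gb:
            return name
    raise ValueError("blob exceeds the largest SKU in this table")

# Even a 1 KB page blob is billed as a full P10 disk:
tiny_blob_gb = 1 / (1024 * 1024)
print(billed_sku(tiny_blob_gb))  # P10
print(billed_sku(200))           # P20 -- a 200 GB blob rounds up past P10
```

Multiply that by 150 blobs and you get the 150 P10 disks of charges from the scenario above, even though the actual data stored was about 150 KB.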
Premium Storage has a higher storage cost but a lower transaction cost compared to standard general-purpose v2 accounts, making it cost-effective for workloads that execute a large number of transactions.
If your workload executes more than 35 to 40 transactions per second per terabyte (TPS/TB), you may be a good candidate for a premium block blob storage account.
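That rule of thumb is a one-line ratio check. The 35 TPS/TB threshold below comes from the range quoted above; it's a heuristic, so always confirm with the Azure pricing calculator for your region and operation mix:

```python
def is_premium_candidate(transactions_per_sec: float, stored_tb: float,
                         threshold_tps_per_tb: float = 35.0) -> bool:
    """Heuristic from this section: workloads above roughly 35-40 TPS per TB
    tend to be cost-effective on premium block blob storage."""
    return transactions_per_sec / stored_tb > threshold_tps_per_tb

print(is_premium_candidate(500, 10))  # True  (50 TPS/TB)
print(is_premium_candidate(100, 10))  # False (10 TPS/TB)
```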
You can build a rough estimate of cost-effectiveness by modeling your workload's transaction mix against an Azure Data Lake Storage enabled premium block blob storage account.
Note that prices differ per operation and per region, so be sure to use the Azure pricing calculator to compare pricing between standard and premium performance tiers.
Use Cases and Scenarios
Azure premium storage is a great option for various use cases, particularly for analytics. If you have an analytics use case, we highly recommend using Azure Data Lake Storage along with a premium block blob storage account.
In premium scenarios, some Azure Storage partners use premium block blob storage, which can further enhance transaction performance in certain scenarios. This is especially useful for large-scale data storage and processing.
The key advantages of premium block blob storage accounts in these scenarios are low, consistent latency and high sustained transaction rates.
E-commerce Businesses
E-commerce businesses often require scalable data storage solutions to support their internal teams. They may use premium block blob storage accounts to meet the low latency requirements of data warehousing and analytics solutions.
These solutions help teams analyze vast amounts of data related to offers, pricing, ship methods, suppliers, inventory, and logistics. This data is queried, scanned, extracted, and mined for multiple use cases.
By running analytics on this data, merchandising teams receive relevant insights and information to inform their decisions.
Data Processing Pipelines
In some cases, we've seen partners use multiple standard storage accounts to store data from various sources.
Raw data from multiple sources needs to be cleansed and processed so that it becomes useful for downstream consumption in tools such as data dashboards that help users make decisions.
To detect fraud, companies in the financial services industry must process inputs from various sources, identify risks to their customers, and take swift action.
In one case, directory listing calls in a Data Lake Storage enabled premium block blob storage account were much faster and far more consistent than they would have been in a standard general-purpose v2 account, which helped these companies catch and act on potential risks promptly.
Here are some benefits of using a Data Lake Storage enabled premium block blob storage account for data processing pipelines:
- Fast, consistent directory listing operations
- Low read and write latency
- New data made available to downstream processing systems as quickly as possible
Internet of Things (IoT)
The Internet of Things (IoT) has become a significant part of our daily lives. It's used to track car movements, control lights, and monitor our health.
IoT has industrial applications too. Companies use it to enable smart factory projects and improve agricultural output.
On oil rigs, IoT is used for predictive maintenance. This helps prevent equipment failures and reduces downtime.
Premium block blob storage accounts add significant value to these scenarios. They're cost-effective and optimized for workloads that perform a large number of write transactions.
In the mining industry, partners use a Data Lake Storage enabled premium block blob storage account to ingest time series sensor data from multiple equipment types. This satisfies their need for high sample rate ingestion.
Premium block blob storage is cost-effective because it's optimized for workloads that generate a large number of small write transactions, such as tens of thousands per second.
Frequently Asked Questions
What is the difference between Azure storage account standard and premium IOPS?
Standard storage provides best-effort performance with much lower IOPS ceilings, while premium storage delivers provisioned, consistently low-latency performance – up to 20,000 IOPS on the largest premium disk size – making it the better fit for demanding applications.