Azure Blob Storage has several limits and constraints that you should be aware of to ensure smooth operation of your storage.
A standard storage account can hold up to 5 PiB of data by default, and there is no separate size limit per container.
There is likewise no limit on the number of blobs in a single container, and each block blob can be up to 4.75 TiB in size (increasing to 190.7 TiB in preview).
Azure Blob Storage also has a default scalability target of 20,000 requests per second per account.
Upload Limits
Azure Blob Storage does not cap the size of an individual container; capacity is bounded by the storage account limit, which is 5 PiB by default and should be enough for most users' needs.
Scalability targets like these are in place to prevent overloading the system and ensure that all users can access their data efficiently.
There is no fixed maximum number of containers per account, though thoughtful container organization still makes data easier to manage and secure.
Each container can likewise hold an unlimited number of blobs, so you can store a large amount of data within a single container.
Individual write operations are limited instead: each block of a block blob can be up to 100 MiB per request, and each append to an append blob up to 4 MiB, which caps an append blob at roughly 195 GiB.
If you need large random-access blobs, you can use page blobs, which can be up to 8 TiB in size.
Azure Blob Storage also has a default scalability target of 20,000 requests per second per standard account; premium accounts are designed for lower latency and higher transaction rates.
These limits are in place to prevent abuse of the system and ensure that all users can access their data reliably.
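As a rough illustration, the size limits above can be checked client-side before an upload is attempted. This is a minimal Python sketch; the numeric limits are the era-specific ones quoted in this article, not values read from the service, so verify them against current Azure documentation before relying on them.

```python
# Limits as quoted in this article (verify against current Azure docs).
MAX_BLOCK_SIZE = 100 * 1024**2        # 100 MiB per block (Put Block)
MAX_BLOCKS_PER_BLOB = 50_000          # maximum committed blocks per block blob
MAX_BLOCK_BLOB_SIZE = 4.75 * 1024**4  # ~4.75 TiB per block blob

def plan_block_upload(blob_size: int, block_size: int = MAX_BLOCK_SIZE) -> int:
    """Return the number of blocks needed, or raise if the blob won't fit."""
    if block_size > MAX_BLOCK_SIZE:
        raise ValueError(f"block size {block_size} exceeds {MAX_BLOCK_SIZE}")
    if blob_size > MAX_BLOCK_BLOB_SIZE:
        raise ValueError(f"blob size {blob_size} exceeds the block blob limit")
    blocks = -(-blob_size // block_size)  # ceiling division
    if blocks > MAX_BLOCKS_PER_BLOB:
        raise ValueError(f"{blocks} blocks exceeds the block-count limit")
    return blocks

print(plan_block_upload(1 * 1024**3))  # 1 GiB fits in 11 blocks of 100 MiB
```

Checks like this are cheap insurance: failing before the first byte is sent beats discovering the limit halfway through a multi-terabyte upload.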
Block-Level Operations
Block-Level Operations are a crucial part of Azure Blob Storage, and understanding the limits is essential for efficient use. You can upload blocks to a block blob using the Put Block operation.
Each block must be smaller than 100 MiB, although this limit is increasing to 4 GiB in preview. You can stage blocks using the .NET SDK methods StageBlock and StageBlockAsync.
You can also upload blocks from a URL using the Put Block From URL operation. This operation is synchronous: the service fetches the data from the source before responding. The source of the block can be any object range retrievable via a standard GET HTTP request on the given URL.
Here's a summary of the block-level operations:

| Operation | Source of data | Size limit | .NET SDK methods |
| --- | --- | --- | --- |
| Put Blob | You provide the bytes | < 256 MiB (5 GiB in preview) | Upload, UploadAsync |
| Put Block | You provide the bytes | < 100 MiB per block (4 GiB in preview) | StageBlock, StageBlockAsync |
| Put Block From URL | Any URL readable via GET | < 100 MiB per block (4 GiB in preview) | StageBlockFromUri, StageBlockFromUriAsync |
| Copy | Another blob or Azure file | < 4.75 TiB (190.7 TiB in preview) | StartCopyFromUri, StartCopyFromUriAsync |
Put
The Put Blob operation uploads an entire blob in a single request: you provide the bytes directly, which is the most straightforward approach.
The size of the blob is the crucial factor here. It must be smaller than 256 MiB, although this limit is increasing to 5 GiB in preview.
For more information on Put operations, check out the official documentation.
The .NET SDK offers two methods for uploading blobs: Upload and UploadAsync. These methods will automatically use PutBlob for small files, but switch to PutBlock/PutBlockList for larger uploads.
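The switch the SDK makes between a single-shot Put Blob and chunked Put Block uploads boils down to a size check. The sketch below uses the 256 MiB Put Blob limit quoted above as the cutoff; the actual threshold the SDK uses is configurable, so treat this as an illustration, not the SDK's exact logic.

```python
PUT_BLOB_LIMIT = 256 * 1024**2  # single-shot Put Blob limit quoted above

def choose_upload_path(blob_size: int, threshold: int = PUT_BLOB_LIMIT) -> str:
    """Mimic the SDK's decision: one-shot upload for small blobs,
    block-by-block (Put Block + Put Block List) for large ones."""
    return "PutBlob" if blob_size <= threshold else "PutBlock+PutBlockList"

print(choose_upload_path(10 * 1024**2))  # small file -> "PutBlob"
print(choose_upload_path(1 * 1024**3))   # 1 GiB -> "PutBlock+PutBlockList"
```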
Block from URL
Block from URL operations allow you to stage a block from a URL, which is useful when the data you want to upload already lives somewhere reachable over HTTP. The operation is synchronous: the service responds only after it has fetched and staged the block.
You can use any object range retrievable via a standard GET HTTP request on the given URL, which includes public access or pre-signed URLs. This means you can use any accessible object, inside or outside of Azure.
Each block must be smaller than 100 MiB, although this limit is increasing to 4 GiB in preview.
To stage a block from a URL, you can use the .NET SDK methods StageBlockFromUri and StageBlockFromUriAsync. These methods make it easy to upload blocks from a URL.
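Staging a large remote object via Put Block From URL means asking the service to fetch it range by range, and the client has to compute those (offset, length) ranges itself. A minimal sketch of that planning step, using the 100 MiB per-block limit quoted above as the piece size:

```python
BLOCK_LIMIT = 100 * 1024**2  # per-block limit quoted above

def stage_ranges(source_size: int, piece: int = BLOCK_LIMIT):
    """Yield (offset, length) pairs covering a remote object of
    source_size bytes, each small enough to stage as one block."""
    offset = 0
    while offset < source_size:
        length = min(piece, source_size - offset)
        yield offset, length
        offset += length

ranges = list(stage_ranges(250 * 1024**2))  # 250 MiB source object
print(len(ranges))   # 3 pieces: 100 MiB + 100 MiB + 50 MiB
print(ranges[-1])    # last piece is the 50 MiB remainder
```

Each (offset, length) pair would then be passed to one StageBlockFromUri call, with the final commit assembling the pieces in order.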
Here's a quick summary of the key facts about staging blocks from a URL:

- Source: any object range readable via a standard GET request (public or pre-signed URL), inside or outside Azure
- Size: each block must be smaller than 100 MiB (4 GiB in preview)
- .NET SDK methods: StageBlockFromUri, StageBlockFromUriAsync
Put Block
When working with block-level operations, you'll often encounter the term "Put Block." The client divides a large file into blocks of at most 100 MiB each, and each Put Block call uploads one of those blocks.
The blocks are uploaded separately to Azure Blob Storage and later committed as a single blob with Put Block List. The benefits of this approach include improved upload performance and reduced memory usage.
To stage a block, you can use the StageBlock or StageBlockAsync method, which are part of the .NET SDK. These methods allow you to upload blocks in parallel, making the process more efficient.
Keep in mind that each block must be smaller than 100 MiB, although this limit is increasing to 4 GiB in preview. This is a crucial detail to consider when working with large files.
Here's a summary of the key points to keep in mind for Put Block operations:

- Each block must be smaller than 100 MiB (4 GiB in preview), with up to 50,000 committed blocks per blob
- Blocks can be staged in parallel, then committed in order with Put Block List
- .NET SDK methods: StageBlock, StageBlockAsync
By understanding the specifics of Put Block operations, you'll be better equipped to handle large files and optimize your storage solutions.
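The parallel staging pattern described above can be sketched as follows. The `stage_block` stub is a hypothetical stand-in for the SDK's StageBlock call (no network involved); the two details worth noticing are that block IDs must be base64-encoded strings of equal length within a blob, and that the ordered ID list passed to the final commit is what fixes the block order.

```python
import base64
from concurrent.futures import ThreadPoolExecutor

def block_id(index: int) -> str:
    # Azure requires base64 block IDs of equal length within a blob;
    # zero-padding the index keeps them uniform.
    return base64.b64encode(f"{index:08d}".encode()).decode()

staged = {}

def stage_block(bid: str, chunk: bytes) -> None:
    """Hypothetical stand-in for StageBlock (stores locally, no network)."""
    staged[bid] = chunk

def upload_in_blocks(data: bytes, block_size: int) -> list[str]:
    chunks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    ids = [block_id(i) for i in range(len(chunks))]
    # Stage blocks in parallel; arrival order doesn't matter at this stage.
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(stage_block, ids, chunks))
    # The ordered ID list is what Put Block List would commit.
    return ids

ids = upload_in_blocks(b"x" * 1000, block_size=300)
print(len(ids))                                         # 4 blocks: 300+300+300+100
print(b"".join(staged[i] for i in ids) == b"x" * 1000)  # True
```

Because the commit step orders the blocks, out-of-order parallel staging is safe, which is exactly what makes this approach faster than a sequential upload.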
Copy
Copy is an essential block-level operation that allows you to replicate a blob from one location to another.
The source blob for a copy operation can be a block blob, an append blob, a page blob, a snapshot, or even a file in the Azure File service.
Each blob must be smaller than 4.75 TiB, although the maximum size is increasing to 190.7 TiB, which is currently in preview.
Copy operations complete asynchronously, making them a convenient option for large-scale data transfers.
Here's a quick rundown of the types of blobs you can copy:
- Block blob
- Append blob
- Page blob
- Snapshot
- File in the Azure File service
The .NET SDK provides two methods for initiating a copy operation: StartCopyFromUri and StartCopyFromUriAsync.
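Because the copy is asynchronous, the start call returns once the copy is scheduled, and callers typically poll the destination blob's copy status afterwards. The client below is a hypothetical stand-in for the SDK, there only to illustrate that start-then-poll shape; a real caller would use StartCopyFromUri and check the blob's properties, sleeping between polls.

```python
import itertools

class FakeCopyClient:
    """Hypothetical stand-in for a blob client: the copy is scheduled
    instantly but reports 'pending' for a few polls before 'success'."""
    def __init__(self, polls_until_done: int = 3):
        self._status = itertools.chain(["pending"] * polls_until_done,
                                       itertools.repeat("success"))

    def start_copy_from_uri(self, source_url: str) -> str:
        return "copy-id-1"  # the real SDK returns a copy ID immediately

    def get_copy_status(self) -> str:
        return next(self._status)

client = FakeCopyClient()
client.start_copy_from_uri("https://example.com/source-blob")
polls = 0
while client.get_copy_status() != "success":
    polls += 1  # a real caller would sleep between polls
print(polls)  # 3 pending polls before the copy reports success
```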