Data Lake Layers: A Comprehensive Guide to Building a Scalable Architecture

Posted Nov 22, 2024

Building a data lake architecture can be a complex task, but breaking it down into layers can make it more manageable.

The first layer is the raw data layer, where incoming data lands and is kept in its original form, without any processing or filtering.

Data is then moved to the curated data layer, where it's processed and transformed into a more usable format. This layer is where data quality and governance play a crucial role.

The final layer is the governed data layer, where data is refined and made available for consumption by various stakeholders. This layer is where data is validated, secured, and made compliant with regulatory requirements.

Data lakes can be built on various technologies, including Hadoop, Spark, and cloud-based storage solutions.
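
To make the layering concrete, here is a minimal sketch of how the three zones might be laid out as prefixes in an object store, using boto3 to land a file in the raw zone and then promote it toward the curated and governed zones. The bucket name, prefixes, and file are hypothetical illustrations, not details from the article.

```python
import boto3

# Hypothetical bucket and zone prefixes for the three layers.
BUCKET = "example-data-lake"
RAW_PREFIX = "raw/sales/2024/11/22/"       # raw data layer
CURATED_PREFIX = "curated/sales/"          # curated data layer
GOVERNED_PREFIX = "governed/sales/"        # governed data layer

s3 = boto3.client("s3")

# Land an incoming file in the raw zone exactly as received.
s3.upload_file("orders.json", BUCKET, RAW_PREFIX + "orders.json")

# After processing and transformation, promote the result to the curated zone.
s3.copy_object(
    Bucket=BUCKET,
    CopySource={"Bucket": BUCKET, "Key": RAW_PREFIX + "orders.json"},
    Key=CURATED_PREFIX + "orders.json",
)

# Once validated and access-controlled, publish it to the governed zone.
s3.copy_object(
    Bucket=BUCKET,
    CopySource={"Bucket": BUCKET, "Key": CURATED_PREFIX + "orders.json"},
    Key=GOVERNED_PREFIX + "orders.json",
)
```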

What Is Architecture?

Data lake architecture is an approach to data storage that's well suited to businesses with constantly changing or very large datasets.

A data lake is built around a big data repository that uses a schema-on-read approach, meaning large amounts of raw and unstructured data are stored without imposing a structure in advance.

This approach is ideal for handling large datasets because it maintains flexibility to perform further ETL and ELT on the data in the future.

Data lake architecture is not a specific platform, but rather a combination of tools used to build and operationalize this approach to data.

These tools can include event processing tools, ingestion and transformation pipelines, and analytics and query tools.

Data Lake Layers

Data lake layers are the backbone of a data lakehouse, providing a structured approach to data storage and management. A data lakehouse is organized into five such layers.

The ingestion layer is responsible for bringing in data from various sources, while the storage layer holds the raw data. The metadata layer stores metadata, which includes information about the data objects, and has features like data versioning and cloning. The API layer provides a unified interface for accessing and manipulating data, and the consumption layer is where users interact with the data.

Here are the five layers of a data lakehouse:

  • Ingestion layer
  • Storage layer
  • Metadata layer
  • API layer
  • Consumption layer

The data lakehouse is a low-cost solution that can store data in various formats and facilitate a range of data analytics workloads.

It offers centralized and unified data storage that is flexible and efficient, and it also provides a strong foundation for data governance and security.

The five components of a data lakehouse are data ingestion, storage, metadata, API, and the data consumption layer.

The metadata layer is a unified catalog of metadata that enables data governance, auditing, and schema management functionalities.

It also has data management features like ACID transactions, caching, indexing, data versioning, and cloning.
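
One way to picture these features is with an open table format such as Delta Lake, which implements this kind of metadata layer on top of object storage. The following is a minimal PySpark sketch, assuming a cluster with the delta-spark package available and a hypothetical table path; it shows an ACID append followed by reading an earlier version of the table.

```python
from pyspark.sql import SparkSession

# Assumes the delta-spark package is available on the cluster.
spark = (
    SparkSession.builder.appName("metadata-layer-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "s3a://example-data-lake/curated/orders"  # hypothetical table location

# ACID write: either the whole batch becomes visible or none of it does.
df = spark.createDataFrame([(1, "widget", 9.99)], ["order_id", "item", "price"])
df.write.format("delta").mode("append").save(path)

# Data versioning ("time travel"): read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```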

Snowflake has redefined the data lake landscape with its cross-cloud platform, emerging as a top vendor in the field.

It breaks down data silos and enables seamless integration of structured, semi-structured, and unstructured data, powered by an elastic processing engine that eliminates concurrency issues and resource contention.

Snowflake offers advanced features like Snowpark and Snowpipe, which facilitate multilanguage programming and data streaming.

Its efficient storage capabilities include automatic micro-partitioning, encryption at rest and in transit, and compatibility with existing cloud object storage, which reduces the need to move data.

Ingestion

The ingestion layer is the foundation of a data lake, responsible for bringing in data from various sources. It's a critical step that enables further processing and analysis.

Data ingestion can occur in batch mode, where large chunks of data are imported at scheduled intervals, or in real-time, where data is immediately brought into the lake as it's generated. Tools like Apache NiFi, Flume, and traditional ETL tools like Talend and Microsoft SSIS are commonly used for batch ingestion.

Real-time ingestion is crucial for applications that require immediate processing, such as fraud detection or real-time analytics. Apache Kafka and AWS Kinesis are popular tools for handling real-time data ingestion.

The ingestion layer often utilizes multiple protocols, APIs, or connection methods to link with various data sources, ensuring a smooth data flow. This is especially important given the heterogeneous nature of data sources.

Here are some common data sources that the ingestion layer retrieves data from:

  • Relational and NoSQL databases
  • Social media platforms
  • Websites
  • Organization-specific applications that generate data

The ingestion layer also has data streaming capabilities for real-time data processing from streaming data sources like IoT sensors. This allows for immediate processing of data as it's generated, enabling applications that require real-time insights.
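
As a rough illustration of real-time ingestion, the sketch below consumes events from a Kafka topic and writes them in micro-batches to the raw zone of an S3-based lake. The topic, broker address, bucket, and batch size are hypothetical, and the same pattern could be built with AWS Kinesis instead.

```python
import json
from datetime import datetime, timezone

import boto3
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic, broker, and bucket names.
consumer = KafkaConsumer(
    "iot-sensor-events",
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 1000:
        # Flush a micro-batch into the raw zone, partitioned by arrival date.
        key = f"raw/iot/{datetime.now(timezone.utc):%Y/%m/%d}/batch-{message.offset}.json"
        s3.put_object(
            Bucket="example-data-lake",
            Key=key,
            Body="\n".join(json.dumps(e) for e in batch).encode("utf-8"),
        )
        batch = []
```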

Analytical Sandboxes

Analytical sandboxes serve as isolated environments for data exploration, facilitating activities like discovery, machine learning, predictive modeling, and exploratory data analysis.

Both raw and processed data can be ingested into these sandboxes, allowing analysts to choose the type of data that best suits their needs, whether it's raw data for exploratory activities or processed data for more refined analytics.

Data discovery is the initial step where analysts and data scientists explore the data to understand its structure, quality, and potential value. This often involves descriptive statistics and data visualization.

Machine learning and predictive modeling can be applied once a solid understanding of the data is achieved, using a range of ML libraries like TensorFlow, PyTorch, or Scikit-learn.

Exploratory data analysis (EDA) involves statistical graphics, plots, and information tables to analyze the data and understand the variables' relationships, patterns, or anomalies without making any assumptions.

Tools like Jupyter Notebooks, RStudio, or specialized software like Dataiku or Knime are often used within these sandboxes for creating workflows, scripting, and running analyses.
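
As a small illustration of this kind of sandbox work, the sketch below loads a processed extract into pandas for quick data discovery and then fits a scikit-learn model. The file path, feature columns, and target are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical extract pulled from the curated zone into the sandbox
# (reading from S3 paths requires pyarrow and s3fs).
df = pd.read_parquet("s3://example-data-lake/curated/orders/part-0000.parquet")

# Data discovery / EDA: structure, quality, and summary statistics.
print(df.info())
print(df.describe())
print(df.isna().mean())  # share of missing values per column

# Predictive modeling on hypothetical feature and target columns.
X = df[["price", "quantity", "customer_age"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```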

The sandbox environment offers the advantage of testing hypotheses and models without affecting the main data flow, thus encouraging a culture of experimentation and agile analytics within data-driven organizations.

AWS

AWS offers a robust data lake architecture anchored by its highly available and low-latency Amazon S3 storage service. S3 is particularly attractive for those looking to take advantage of AWS's expansive ecosystem.

Amazon S3 integrates with various other AWS services, including Amazon Aurora for relational data, and AWS Lake Formation shows how these pieces can be tied together into a managed data lake architecture.

AWS Glue provides robust data cataloging, making data easily searchable and usable. This is especially useful for intricate data management tasks.

Amazon Athena offers ad hoc querying capabilities, letting users run SQL directly against data stored in S3, while Amazon Redshift serves as the go-to data warehousing solution within the AWS ecosystem.
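
For instance, an ad hoc Athena query can be submitted programmatically with boto3, as sketched below; the region, database, table, and results location are assumptions for illustration.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database, table, and results location.
response = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("query execution id:", response["QueryExecutionId"])
```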

The Data Lake on AWS solution automatically sets up core AWS services to aid in data tagging, searching, sharing, transformation, analysis, and governance, and it includes a user-friendly console for searching and browsing datasets.

AWS provides a comprehensive yet complex set of tools and services for building and managing data lakes. This makes it a versatile choice for organizations with varying needs and expertise levels.

SimilarWeb, a leading market intelligence company, utilizes AWS services to generate insights from hundreds of terabytes of anonymous data. They use S3 as their events storage layer and Amazon Athena for SQL querying.

Browsi – Managing ETL Pipelines for 4 Billion Events

Browsi successfully implemented Upsolver to replace its manually-coded data solutions and Lambda architecture used for ingest, as well as its Spark/EMR implementation used to process data.

The company now uses Upsolver to ingest data from Amazon Kinesis Streams, enforcing partitioning, exactly-once processing, and other data lake best practices.

This has enabled Browsi to output ETL flows to Amazon Athena, which it uses for data science as well as BI reporting via Domo.

End-to-end latency from Kinesis to Athena is now mere minutes, a significant improvement from its previous setup.

A single data engineer is now able to manage this entire process, a testament to the efficiency of Upsolver.

With Upsolver, Browsi can handle 4 billion events, a staggering amount of data that would be challenging to process manually.

This achievement highlights the importance of using the right tools for data processing and management.

By leveraging Upsolver and other AWS services, Browsi has been able to streamline its data pipelines and gain valuable insights from its data.

Contrasting Designs

A data lake and a data warehouse have contrasting designs, with different goals and philosophies. A data warehouse is structured by default, with a focus on resolving queries efficiently, whereas a data lake flips this paradigm, allowing users to apply modeling and schemas when consuming raw data.

Data warehouses require a schema to be imposed upfront, making them less flexible. On the other hand, a data lake allows a schema-on-read approach, enabling greater flexibility in data storage. This makes data lakes ideal for more advanced analytics activities, including real-time analytics and machine learning.

The efficiency and speed of a data lake's analytics depend on the query and processing technologies layered on top of it, whereas a data warehouse's efficiency comes from its architecture and design. Data lakes can therefore be more cost-effective and scalable, but they typically require more management to ensure data quality and security.
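
To make the schema-on-read idea concrete, here is a brief PySpark sketch that applies a schema only at query time while the files in the lake stay as raw JSON; the path and fields are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# The files were landed without any upfront modeling; the schema is
# applied only when the data is read (schema-on-read).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("item", StringType()),
    StructField("price", DoubleType()),
])

orders = spark.read.schema(schema).json("s3a://example-data-lake/raw/sales/")
orders.createOrReplaceTempView("orders")
spark.sql("SELECT item, AVG(price) AS avg_price FROM orders GROUP BY item").show()
```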

Here's a comparison of data warehouses and data lakes:

  • Schema: data warehouses impose a schema on write; data lakes apply a schema on read.
  • Data types: warehouses hold structured data; lakes hold structured, semi-structured, and unstructured data.
  • Workloads: warehouses excel at reporting and BI queries; lakes support advanced analytics, real-time analytics, and machine learning.
  • Cost and scale: lakes rely on low-cost storage that scales out easily; warehouses are typically more expensive to scale.
  • Governance: warehouses are easier to govern out of the box; lakes need more active data quality and security management.

As you can see, data lakes offer more flexibility and scalability, but may require more management and governance. Data warehouses, on the other hand, provide a more structured and efficient way to analyze data, but may be less flexible and more expensive.

Data Storage

Data storage is a crucial aspect of a data lake, and it's where your raw data resides before being transformed and analyzed. This layer is divided into different zones for ease of management and workflow efficiency.

Raw data is initially stored in a raw or landing zone, where it remains in its native format. Storage solutions like Hadoop HDFS, Amazon S3, or Azure Blob Storage are commonly used for this purpose. The raw data store acts as a repository where data is staged before any form of cleansing or transformation.

Data can be stored as raw data without any transformation, allowing client tools to access it directly. Low-cost storage solutions like Amazon S3 and HDFS are used in this layer; HDFS, for example, distributes data across multiple servers in a cluster.

Here are some popular low-cost storage solutions used in data lakehouses:

  • Cloud object storage, which scales on demand and is billed per usage.
  • Hadoop Distributed File System (HDFS), which can store data across multiple servers in a cluster.

Components of a Data Lakehouse

A data lakehouse is made up of five distinct layers, each playing a crucial role in storing and managing data, and together they form the foundation of the architecture.

The ingestion layer is where data is pulled from various sources and delivered to the storage layer. This is the starting point for data storage.

The storage layer is a cost-effective object store, such as Amazon S3, that keeps various types of data, including structured, semi-structured, and unstructured data. This layer is designed to be efficient and cost-effective.

The metadata layer is a unified catalog that provides metadata about all objects in the data lake. This layer is the defining element of the data lakehouse, enabling data indexing, quality enforcement, and ACID transactions.

The API layer provides metadata APIs that allow users to understand what data is required for a particular use case and how to retrieve it. This layer is essential for data consumption.

The consumption layer is where business tools and applications leverage the data stored within the data lake for analytics, BI, and AI purposes. It is the final stage, where the stored data is actually put to use.

Here are the five components of a data lakehouse:

  • Ingestion layer
  • Storage layer
  • Metadata layer
  • API layer
  • Consumption layer

Storage and Processing

Data storage and processing are crucial components of a data lakehouse. The data storage layer is where the ingested data resides and undergoes transformations to make it more accessible and valuable for analysis.

Raw data is initially stored in a raw or landing zone, where it's staged before any form of cleansing or transformation. This zone utilizes storage solutions like Hadoop HDFS, Amazon S3, or Azure Blob Storage.

Data undergoes various transformations after residing in the raw zone, including data cleansing, enrichment, normalization, and structuring. These processes ensure the data is reliable, clean, and suitable for analytics and machine learning models.

After transformations, the data becomes trusted data, which is then moved to a refined or conformed data zone. This is where additional transformation and structuring may occur to prepare the data for specific business use cases.

The storage layer of a data lakehouse consists of low-cost storage solutions such as AWS S3 and HDFS, allowing data to be stored as raw data without any transformation. This enables client tools to access the data directly.

Here are some key components of the data storage and processing layer:

  • Data cleansing: removing or correcting inaccurate records, discrepancies, or inconsistencies in the data.
  • Data enrichment: adding value to the original data set by incorporating additional information or context.
  • Normalization: modifying the data into a common format, ensuring consistency.
  • Structuring: breaking down unstructured or semi-structured data into a structured form suitable for analysis.
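
Here is a minimal pandas sketch of those four steps, moving a raw extract toward the trusted, refined zone; the file paths, column names, and reference table are hypothetical.

```python
import json

import pandas as pd

# Hypothetical raw extract from the landing zone (S3 paths require s3fs).
raw = pd.read_json("s3://example-data-lake/raw/sales/2024/11/22/orders.json", lines=True)

# Data cleansing: remove duplicates and records missing an order ID.
clean = raw.drop_duplicates().dropna(subset=["order_id"])

# Data enrichment: add context by joining a reference table of regions.
regions = pd.read_csv("s3://example-data-lake/reference/regions.csv")
enriched = clean.merge(regions, on="country_code", how="left")

# Normalization: bring timestamps and amounts into a common format.
enriched["order_ts"] = pd.to_datetime(enriched["order_ts"], utc=True)
enriched["amount_usd"] = enriched["amount"] * enriched["fx_rate_to_usd"]

# Structuring: expand a semi-structured JSON column into proper columns.
customer = pd.json_normalize(enriched["customer"].map(json.loads).tolist())
structured = enriched.drop(columns=["customer"]).join(customer.add_prefix("customer_"))

# Write the trusted result to the refined/conformed zone.
structured.to_parquet("s3://example-data-lake/refined/sales/orders.parquet", index=False)
```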

Data lakehouses use low-cost storage solutions, such as cloud object storage and the Hadoop Distributed File System (HDFS), which scale on demand and are billed per usage. This reduces the cost of running multiple separate databases and their associated maintenance.

Azure Storage

Azure Storage is a powerful solution that offers a suite of capabilities for data management. It's designed to meet the needs of enterprises invested in or interested in Azure services.

Enterprise-grade security is a top priority for Azure Storage, providing built-in data encryption and granular access control policies to secure data at rest.

Azure Private Link support allows for secure and private access to data lakes via a private network connection, giving users peace of mind when working with sensitive data.

Integration and versatility are key benefits of Azure Storage, allowing it to seamlessly integrate with operational stores and data warehouses for a cohesive data management strategy.

The platform can handle high workloads, making it ideal for running advanced analyses and storing large volumes of data.
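
As a small example of working with this layer from code, the sketch below uploads a file into Azure Data Lake Storage Gen2 using the azure-storage-file-datalake SDK; the account, file system (container), and path are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical storage account and container (file system) names.
service = DataLakeServiceClient(
    account_url="https://examplelakeaccount.dfs.core.windows.net",
    credential=DefaultAzureCredential(),  # Azure AD / managed identity auth
)

file_system = service.get_file_system_client("datalake")
file_client = file_system.get_file_client("raw/sales/2024/11/22/orders.json")

# Land a local file in the raw zone of the lake.
with open("orders.json", "rb") as data:
    file_client.upload_data(data, overwrite=True)
```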

A Comparison: Data Warehouses vs. Data Lakes vs. Data Lakehouses

Data warehouses have been around for decades, initially designed to support analytics by allowing organizations to query their data for insights, trends, and decision-making. They require a schema - a formal structure for how the data is organized - to be imposed upfront.

Data warehouses can handle thousands of daily queries for tasks like reporting and forecasting business conditions, but schema-on-write makes them less flexible, and they're not well suited to modern data types like weblogs, clickstreams, and social media activity.

Data lakes, on the other hand, are a more recent innovation designed to handle these modern data types in semi-structured or unstructured formats. Unlike data warehouses, data lakes allow a schema-on-read approach, enabling greater flexibility in data storage. This makes them ideal for more advanced analytics activities, including real-time analytics and machine learning.

However, data lakes also have limitations, such as low data quality and challenges in data governance. To address these challenges, data lakehouses emerged as a better solution, combining the best features of data lakes and data warehouses.

Here's a comparison of data warehouses, data lakes, and data lakehouses:

  • Data warehouse: schema-on-write, structured data, strong governance and query performance, but limited flexibility for modern data types.
  • Data lake: schema-on-read storage for structured, semi-structured, and unstructured data at low cost, but prone to low data quality and harder to govern.
  • Data lakehouse: low-cost lake storage plus a metadata layer with ACID transactions, combining the flexibility of a lake with the governance and reliability of a warehouse.

Business Benefits of a Data Lakehouse

Business use of a data lakehouse can significantly improve the reliability of data, reducing the risk of disruptions caused by quality issues in multiple systems. This is particularly beneficial for businesses that rely on data for critical operations.

One of the key advantages of a data lakehouse is that it eliminates data redundancy, serving as a single repository for all data. This simplifies data movement and reduces the need for engineering ETL transfers.

Fresher data is also a major advantage of a data lakehouse. With data available for analysis in a few hours rather than a few days, businesses can make more timely and informed decisions.

In addition to these benefits, a data lakehouse can also help decrease costs by streamlining ETL processes and moving to a single-tier architecture.

Here are some specific benefits of using a data lakehouse:

  • Improved reliability
  • Reduced data redundancy
  • Fresher data
  • Decreased cost

By using a data lakehouse, businesses can adopt AI and machine learning (ML) or take their existing technology to the next level, while still meeting compliance requirements. This is made possible by the ability to automate compliance processes and even anonymize personal data if needed.

API

The API layer plays a crucial role in data access, allowing machine learning libraries like TensorFlow and MLlib to read data directly through the metadata layer.

This layer hosts different types of APIs for data analytics and other related data processing activities. It's an essential component for optimizing data processing.

DataFrame APIs help with optimizations, making data analysis more efficient. This is particularly useful when working with large datasets.

Metadata APIs can be used to understand the required data, ensuring that the right information is being processed.
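
As a brief sketch of what this looks like in practice, the PySpark snippet below uses the DataFrame API so the engine can optimize the query plan, plus a Delta metadata query to inspect a table's history before it is used; the table path is hypothetical and assumes the Delta setup shown earlier.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # assumes Delta Lake is configured

path = "s3a://example-data-lake/curated/orders"  # hypothetical table location

# DataFrame API: declarative operations let the engine apply optimizations
# such as predicate pushdown and column pruning against the storage layer.
orders = spark.read.format("delta").load(path)
recent = orders.filter(F.col("order_ts") >= "2024-11-01").select("order_id", "price")
recent.explain()  # inspect the optimized physical plan

# Metadata API (Delta example): check the table's history and schema first.
spark.sql(f"DESCRIBE HISTORY delta.`{path}`").show(truncate=False)
orders.printSchema()
```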

Consumption

The consumption layer is where data comes alive, and it's crucial for making sense of the data we've stored. This layer sits at the top of our architecture, consuming data from the storage layer and accessing the metadata.

The data consumption layer is home to various data science, machine learning, and BI tools, such as Power BI and Tableau. These tools enable organizations to create and run various analytics jobs.

By using these tools, organizations can gain valuable insights into their data, making informed decisions that drive business forward.

Frequently Asked Questions

How is data structured in a data lake?

Data in a data lake is stored in its native, raw format without a hierarchical structure, using object storage instead of files or folders. This flat architecture allows for flexible and scalable data storage.

What is serving layer in data lake?

The serving layer in a data lake exposes views created by other layers for querying, while also handling orchestration tasks. It acts as a gateway for other systems or users to access and utilize data from the data lake.
