Cortex Data Lake Essentials for Enterprise Security

A data lake is a centralized repository that stores raw, unprocessed data in its native format. It's a game-changer for enterprise security, allowing for faster and more accurate threat detection.

Data lakes are scalable and can handle large volumes of data from various sources, making them ideal for storing and analyzing security data. This scalability is especially important for enterprises with complex security needs.

Having a data lake in place enables organizations to quickly identify and respond to potential security threats. By storing and analyzing security data in a centralized repository, enterprises can reduce the risk of data breaches and cyber attacks.

What is Cortex Data Lake

Cortex Data Lake is a cloud-based, centralized log storage and aggregation solution that provides secure, resilient, and fault-tolerant logging infrastructure. It ensures logging data is up-to-date and available when needed.

Palo Alto Networks developed Cortex Data Lake to alleviate the need for planning and deploying Log Collectors to meet log retention needs. The service can easily complement existing on-premises Log Collectors, allowing businesses to expand operational capacity as they grow or to meet capacity needs for new locations.

Cortex Data Lake takes care of ongoing maintenance and monitoring of the logging infrastructure, freeing up time for businesses to focus on their operations. This allows companies to scale their logging infrastructure without the operational burdens and high costs associated with legacy hardware-based deployments.

Here are some key features of Cortex Data Lake:

  • Provides cloud-based, centralized log storage and aggregation
  • Secure, resilient, and fault-tolerant
  • Ensures logging data is up-to-date and available when needed
  • Provides a scalable logging infrastructure
  • Easy to integrate with existing on-premises Log Collectors
  • Ongoing maintenance and monitoring is handled by Palo Alto Networks

Setup and Configuration

To set up a Cortex Data Lake, you'll need to create a new cluster in the Cortex Data Lake console. This involves specifying the cluster name, location, and number of nodes.

First, create a new cluster by clicking on the "Create Cluster" button. The cluster name should be unique and descriptive, as it will help you identify the cluster in the future.

The cluster location determines the region where your data will be stored. Cortex Data Lake offers multiple regions, so choose the one that best suits your needs.

Next, specify the number of nodes you want for your cluster. The number of nodes will impact the performance and scalability of your cluster.

A minimum of three nodes is recommended for a production-ready cluster. This ensures that your cluster can handle high traffic and data ingestion.

After specifying the cluster details, click on the "Create Cluster" button to initiate the cluster creation process. This may take a few minutes to complete.
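
If you prefer to script this step rather than click through the console, the request might look something like the sketch below. The endpoint URL, field names, and token are hypothetical placeholders rather than a documented API; they simply mirror the name, region, and node-count choices described above.

```python
import requests

# Hypothetical endpoint and payload: field names mirror the console options
# (unique name, storage region, node count), not a documented API.
response = requests.post(
    "https://console.example.com/api/v1/clusters",   # placeholder URL
    headers={"Authorization": "Bearer <api-token>"},  # placeholder credential
    json={
        "name": "prod-security-logs",  # unique, descriptive cluster name
        "region": "americas",          # region where your data will be stored
        "nodes": 3,                    # minimum recommended for production
    },
    timeout=30,
)
response.raise_for_status()
print("Cluster creation initiated:", response.json())
```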

Once the cluster is created, you'll need to configure the data sources and sinks. This involves specifying the data ingestion settings, data processing settings, and data storage settings.

Data ingestion settings determine how data is ingested into the cluster. Cortex Data Lake supports multiple data ingestion protocols, including Kafka and Filebeat.
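
As a concrete illustration of Kafka-based ingestion, here is a minimal producer sketch using the kafka-python library; the broker address and topic name are placeholders. The idea is simply that log events are serialized and published to a topic the cluster consumes.

```python
import json
from kafka import KafkaProducer

# Placeholder broker and topic; substitute your cluster's ingestion endpoint.
producer = KafkaProducer(
    bootstrap_servers="kafka.example.internal:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

# Publish one firewall-style log event to the ingestion topic.
producer.send("security-logs", {"log_type": "traffic", "src": "10.0.0.5", "action": "allow"})
producer.flush()  # block until the event is actually delivered
```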

Data processing settings determine how data is processed within the cluster. You can choose from various processing modes, including batch processing and streaming processing.

Data storage settings determine where data is stored within the cluster. Cortex Data Lake supports various storage options, including S3 and HDFS.
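
For the S3 storage option, persisting a processed batch of logs could look like this boto3 sketch; the bucket name and key layout are hypothetical.

```python
import json
import boto3

s3 = boto3.client("s3")  # credentials come from the usual AWS config chain

batch = [{"log_type": "traffic", "src": "10.0.0.5", "action": "allow"}]

# Hypothetical bucket and date-partitioned key layout for processed log batches.
s3.put_object(
    Bucket="security-log-archive",
    Key="logs/2024/01/15/traffic-batch-0001.json",
    Body=json.dumps(batch).encode("utf-8"),
)
```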

Log Management

Log management is a crucial aspect of Cortex Data Lake. You can set up log sources from various products and services, including Palo Alto Networks Firewalls, Panorama-Managed Firewalls, and Prisma Access.

To start sending logs to Cortex Data Lake, you must configure the XDR Agent, activate your Cortex Data Lake instance, and set up log forwarding. This will allow you to collect and store logs from different sources in one place.

Common log types that you can store in Cortex Data Lake include config, system, traffic, threat, URL filtering, authentication, and User-ID logs; the Log Types section below covers each in detail.

These log types provide valuable insights into network activity and security events. By storing them in Cortex Data Lake, you can analyze and visualize the data to improve your security posture and incident response.
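
Once logs are in the data lake, you can also query them programmatically. Palo Alto Networks publishes a Python SDK, pan-cortex-data-lake, with a SQL-style Query Service; the sketch below follows the SDK's documented pattern, but treat the instance ID, table path, and method details as assumptions to verify against the current documentation.

```python
from pan_cortex_data_lake import Credentials, QueryService

# Credentials() picks up developer tokens configured per the SDK docs.
qs = QueryService(credentials=Credentials())

# The `<instance-id>.firewall.traffic` table path is an assumption; use your own.
sql = "SELECT source_ip, dest_ip, action FROM `INSTANCE_ID.firewall.traffic` LIMIT 10"
query = qs.create_query(query_params={"query": sql})
job_id = query.json()["jobId"]

# Page through the results of the completed query job.
for page in qs.iter_job_results(job_id=job_id):
    print(page.json())
```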

Security Analytics

Security Analytics is a critical component of effective Log Management. Collecting, transforming, and integrating your enterprise's security data is essential to enable Palo Alto Networks solutions.

Radically simplifying security operations is a major benefit of this process. It allows you to focus on driving your business forward, rather than managing complex infrastructure.

The data lake powers Palo Alto Networks offerings, facilitating AI and machine learning with access to rich data at cloud-native scale. This enables you to significantly improve detection accuracy with trillions of multi-source artifacts.

Deploying massive data collection, storage, and analysis infrastructure is complex and costly. It requires planning for space, power, compute, networking, and high availability needs.

Log Types

Log Types are a crucial aspect of Log Management, and understanding what types of logs are available can help you make informed decisions about how to manage and analyze them.

Cortex Data Lake offers a variety of log types, including Cortex XDR Logs, which provide information for all alerts raised in Cortex XDR.

Common Logs include config and system logs, which track changes to the firewall configuration and system events on the firewall, respectively.

Firewall Logs are another important type, and include auth logs, which track authentication events, and EAL logs (enhanced application logs) for Palo Alto Networks apps and services.

Data filtering logs, or file_data, track the security rules that help prevent sensitive information from leaving the area protected by the firewall.

GlobalProtect system logs, LSVPN/satellite events, GlobalProtect portal and gateway logs, and Clientless VPN logs are all part of the GlobalProtect log type.

HIP Match logs, or hipmatch, track the security status of end devices accessing the network.

IP-Tag logs, or iptag, track how and when a source IP address is registered or unregistered on the firewall and what tag the firewall applied to the address.

Stream Control Transmission Protocol logs, or sctp, track events and associations based on logs generated by the firewall while it performs stateful inspection, protocol validation, and filtering of SCTP traffic.

Threat logs track entries generated when traffic matches one of the Security Profiles attached to a security rule on the firewall.

Traffic logs track the start and end of each session.

Tunnel Inspection logs, or tunnel, track entries of non-encrypted tunnel sessions.

URL Filtering logs track entries for traffic that matches the URL Filtering profile attached to a security policy rule.

User-ID logs, or userid, track information about IP address-to-username mappings and Authentication Timestamps.
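
Because every record carries one of these types, a common first step in analysis is simply tallying what you have. Here is a minimal, self-contained sketch; the log_type field name is an assumption for illustration.

```python
def summarize_by_type(records):
    """Count log records per type (traffic, threat, url, userid, ...)."""
    counts = {}
    for record in records:
        log_type = record.get("log_type", "unknown")  # assumed field name
        counts[log_type] = counts.get(log_type, 0) + 1
    return counts

sample = [
    {"log_type": "traffic", "src": "10.0.0.5"},
    {"log_type": "threat", "severity": "high"},
    {"log_type": "traffic", "src": "10.0.0.9"},
]
print(summarize_by_type(sample))  # {'traffic': 2, 'threat': 1}
```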

Start Sending Logs

To start sending logs to Cortex Data Lake, you'll need to configure the XDR Agent. This is a crucial step to ensure that your logs are properly sent to the data lake.

First, you'll need to activate your Cortex Data Lake instance. This will give you access to the necessary tools and settings to start sending logs.

To send log data to Cortex Data Lake, you'll need to use one of the supported products or services:

  • Palo Alto Networks next-generation firewalls
  • Panorama-managed firewalls
  • Prisma Access
  • Cortex XDR

Activation and Ingestion

Activation of Cortex Data Lake involves a straightforward process. Upon purchasing Cortex Data Lake, all firewalls registered to the support account receive a Cortex Data Lake license.

An authentication code is also provided to activate the Cortex Data Lake instance. This is a crucial step in setting up the data lake.

The Ingestion Service components play a vital role in collecting and processing log data. Here's a brief overview of their functionality:

  • Frontend Ingestors (FEI) receive incoming log data, validate and encrypt it, and write it to Kafka.
  • Backend Ingestors (BEI) read that data from Kafka and write it to storage sinks.
  • The Metadata service maintains customer-topic metadata information, which is shared between the FEI and BEI components.

License Activation

License Activation is a crucial step in getting your Cortex Data Lake up and running. You'll receive a Cortex Data Lake license when you purchase the service, which covers all firewalls registered to your support account.

This license allows you to collect log data from next-generation firewalls, Prisma Access, and Cortex XDR.

With the license, you'll also receive an authentication code to activate your Cortex Data Lake instance. This code is essential for setting up your data lake and starting the ingestion process.

Functionality of Ingestion Service Components

The Ingestion Service is made up of several components, each with its own set of responsibilities. The Frontend Ingestors (FEI) are multi-tenant stateless bi-directional gRPC servers.

FEI components handle authentication, authorization, schema validation, encryption, and writing AVRO data to Kafka in compressed format. They write data only to the last topic in the list of topics, which enables failover and scenarios where ingestion must move to a new topic.

The Backend Ingestors (BEI) are Kafka consumers that subscribe to a topic, read a batch of AVRO records from Kafka, decrypt the records, and write them to a sink (BigQuery and Google Cloud Storage) in batches as parallel writes. They then commit offsets back to Kafka.
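
That read-decrypt-write-commit cycle can be pictured with the simplified kafka-python sketch below. The broker, topic, and the decrypt and sink helpers are placeholders standing in for the service's internals; offsets are committed only after the batch is safely written, which is what keeps the pipeline resilient to consumer restarts.

```python
from kafka import KafkaConsumer

def decrypt(payload: bytes) -> bytes:
    return payload  # placeholder: BEI decrypts the AVRO records it reads

def write_to_sink(records) -> None:
    pass  # placeholder for batched, parallel writes to BigQuery or GCS

consumer = KafkaConsumer(
    "customer-logs-topic",                            # placeholder topic
    bootstrap_servers="kafka.example.internal:9092",  # placeholder broker
    enable_auto_commit=False,  # commit manually, only after a durable write
)

while True:
    batch = consumer.poll(timeout_ms=1000, max_records=500)
    for partition, records in batch.items():
        write_to_sink([decrypt(r.value) for r in records])
    if batch:
        consumer.commit()  # advance offsets now that the sink write succeeded
```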

Here's a comparison of the functionality of the FEI and BEI components:

  • FEI: stateless, bi-directional gRPC servers that handle authentication, authorization, schema validation, and encryption, and write compressed AVRO data to Kafka.
  • BEI: Kafka consumers that read batches of AVRO records, decrypt them, write them to sinks (BigQuery and Google Cloud Storage) in parallel, and commit offsets back to Kafka.

The Metadata service maintains customer-topic metadata information, which is shared between FEI and BEI components. This information is a mapping of customer IDs and schema types to a list of clusters and topics.
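
The shape of that mapping can be pictured as a dictionary like the one below; the keys and field names are assumptions based on the description, not the actual schema. Note how the FEI rule of writing only to the last topic in the list follows naturally from this structure.

```python
# Assumed structure: (customer ID, schema type) -> ordered list of cluster/topic pairs.
customer_topic_metadata = {
    ("customer-123", "firewall.traffic"): [
        {"cluster": "kafka-cluster-a", "topic": "cust123-traffic-v1"},  # older topic, BEI drains it
        {"cluster": "kafka-cluster-b", "topic": "cust123-traffic-v2"},  # last entry: FEI writes here
    ],
}

# FEI writes only to the final topic in the list, enabling topic failover.
targets = customer_topic_metadata[("customer-123", "firewall.traffic")]
active = targets[-1]
print(f"FEI writes to {active['topic']} on {active['cluster']}")
```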

Ordr Taps Palo Alto Networks

Ordr has partnered with Palo Alto Networks to integrate the two companies' security platforms. The integration enables the Ordr platform to ingest security data from Palo Alto Networks and use it to inform and enhance its own capabilities.

With that data, Ordr's platform can provide more comprehensive security insights and analytics.

Frequently Asked Questions

What is the new name for Cortex Data Lake?

The new name for Cortex Data Lake is Strata Logging Service. This change reflects an updated approach to data management and analysis.

Is a data lake part of Databricks?

Databricks is not itself a data lake; it's a managed analytics platform built on top of data lake storage (the "lakehouse" approach). It provides a centralized environment for storing, processing, and analyzing large datasets in the cloud.

Which systems can feed data into Cortex Data Lake?

Cortex Data Lake collects data from various security products, including next-generation firewalls, Prisma Access, and Cortex XDR. These systems generate network logs that are stored in the data lake for analysis and threat detection.
