Openshift Logging Essentials for Kubernetes Administrators


As a Kubernetes administrator, you're likely no stranger to the importance of logging. In fact, logging is a crucial aspect of any Kubernetes setup, allowing you to monitor and troubleshoot issues in real-time.

OpenShift Logging is a powerful tool that provides a comprehensive logging and monitoring solution for your Kubernetes cluster. It collects, processes, and stores logs from various sources, giving you a unified view of your cluster's activity.

With OpenShift Logging, you can set up a centralized logging system that aggregates logs from all your pods, services, and other components. This makes it easier to identify and diagnose issues, as well as track performance metrics and trends.

By leveraging OpenShift Logging, you can gain valuable insights into your cluster's behavior and make data-driven decisions to improve its overall performance and reliability.

Installation

To install the OpenShift logging subsystem, you have two options: using the web console or the CLI. You can use the OpenShift Container Platform web console to install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators.


Ensure that you have the necessary persistent storage for Elasticsearch. Each Elasticsearch node requires its own storage volume; if you use local storage, do not use a raw block volume, because Elasticsearch cannot use raw block volumes.

Elasticsearch is a memory-intensive application, so you might need to add more nodes to your cluster if you experience memory issues.

To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the web console, follow these steps:

  • Install the OpenShift Elasticsearch Operator
  • Install the Red Hat OpenShift Logging Operator
  • Create an OpenShift Logging instance
  • Verify the install

Alternatively, you can use the CLI to install the logging subsystem. To do this, follow these steps (a sketch of the corresponding manifests appears after the list):

1. Create a namespace for the OpenShift Elasticsearch Operator.

2. Create a namespace for the Red Hat OpenShift Logging Operator.

3. Install the OpenShift Elasticsearch Operator by creating the following objects in the openshift-operators-redhat namespace:

  • An OperatorGroup object
  • A Subscription object

4. Install the Red Hat OpenShift Logging Operator by creating the following objects in the openshift-logging namespace:

  • An OperatorGroup object
  • A Subscription object

5. Create an OpenShift Logging instance


6. Verify the installation by listing the pods in the openshift-logging project
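
For reference, here is a minimal sketch of what those objects can look like, following the standard Operator Lifecycle Manager pattern. The channel value is an assumption; check it against the documentation for your cluster version.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-operators-redhat    # namespace for the OpenShift Elasticsearch Operator
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-logging             # namespace for the Red Hat OpenShift Logging Operator
      labels:
        openshift.io/cluster-monitoring: "true"
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: cluster-logging
      namespace: openshift-logging
    spec:
      targetNamespaces:
        - openshift-logging
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging
      namespace: openshift-logging
    spec:
      channel: "stable"                   # assumption: pick the channel that matches your version
      name: cluster-logging
      source: redhat-operators
      sourceNamespace: openshift-marketplace

The OpenShift Elasticsearch Operator follows the same OperatorGroup and Subscription pattern in the openshift-operators-redhat namespace. After applying the manifests with $ oc apply -f, running $ oc get pods -n openshift-logging lets you verify the installation.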


Configuration

To configure OpenShift logging, you'll need to create a cluster-wide logging configuration. This involves specifying the logging level, format, and destination.

The logging level determines the verbosity of the logs, with options including debug, info, warning, error, and fatal. For example, setting the logging level to info will provide more detailed logs than setting it to warning.

The logging format can be customized to suit your needs, but it typically includes timestamp, log level, and log message. OpenShift logging uses a JSON format by default.

The destination of the logs is also configurable, with options including file, stdout, and a logging service like Elasticsearch. You can specify the destination in the logging configuration file.

In a production environment, it's recommended to use a logging service like Elasticsearch to store and analyze logs. This allows for better log management and easier troubleshooting.

In OpenShift, the cluster-wide logging configuration lives in the ClusterLogging custom resource in the openshift-logging project, rather than in a flat configuration file; you edit this resource to customize your logging settings.
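
Here is a minimal sketch of such a ClusterLogging custom resource, assuming the default Elasticsearch log store; the storage class name and size are placeholders to adapt to your environment:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance                  # the instance must use this name
      namespace: openshift-logging
    spec:
      managementState: Managed
      logStore:
        type: elasticsearch
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName: gp2     # placeholder: substitute your storage class
            size: 200G
          redundancyPolicy: SingleRedundancy
      visualization:
        type: kibana
        kibana:
          replicas: 1
      collection:
        logs:
          type: fluentd
          fluentd: {}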

Deployment


You can deploy OpenShift Logging using either the web console or the CLI. Both routes install the OpenShift Elasticsearch Operator and the Red Hat OpenShift Logging Operator, create an OpenShift Logging instance, and verify the installation, as described in the Installation section. The CLI route additionally requires you to create the namespaces for the two Operators yourself.

Elasticsearch requires its own storage volume, and it's a memory-intensive application. By default, OpenShift Container Platform installs three Elasticsearch nodes with memory requests and limits of 16 GB.

For a smooth deployment, confirm that you have the necessary persistent storage for Elasticsearch in place, and consider adding more Elasticsearch nodes if you experience memory issues; the sketch below shows where those resource settings live.
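
If you need to tune memory, the ClusterLogging CR accepts resource overrides for the Elasticsearch nodes. A hedged fragment, with illustrative values matching the 16 GB default mentioned above:

    spec:
      logStore:
        elasticsearch:
          nodeCount: 3
          resources:                  # the defaults request and limit 16Gi of memory per node
            limits:
              memory: 16Gi
            requests:
              cpu: 500m               # illustrative value; size to your workload
              memory: 16Gi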

Here is a summary of the steps to deploy OpenShift Logging:

  • Install the OpenShift Elasticsearch Operator
  • Install the Red Hat OpenShift Logging Operator
  • Create an OpenShift Logging instance
  • Verify the installation

By following these steps, you can successfully deploy OpenShift Logging and start collecting, storing, and visualizing logs in your OpenShift cluster.

Components


The OpenShift logging components are made up of three main parts: collection, log store, and visualization. The collection component is responsible for collecting logs from the cluster, formatting them, and forwarding them to the log store. This is currently implemented using Fluentd.

The log store is where the logs are stored, and the default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.

The visualization component is the UI part of the logging system, allowing you to view logs, graphs, charts, and more. This is currently implemented using Kibana.

Here are the main components of the OpenShift logging system:

  • Collection (Fluentd): collects logs from the cluster and forwards them to the log store
  • Log Store (Elasticsearch): stores the collected logs
  • Visualization (Kibana): allows you to view logs and create visualizations
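
A quick way to see these components on a running cluster is to list the pods in the logging project; you should recognize the collector, Elasticsearch, and Kibana pods among them:

    $ oc get pods -n openshift-logging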

You must manually deploy the Event Router, which is not included in the default logging setup.

Data Collection

Data Collection is a crucial aspect of OpenShift Logging. The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Container Platform Logging. You must manually deploy the Event Router.


The logging collector is a daemon set that deploys pods to each OpenShift Container Platform node. It collects container and node logs from various sources, including journald for system logs and /var/log/containers/*.log for container logs. If you configure the log collector to collect audit logs, it gets them from /var/log/audit/audit.log.

The available container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. On its own, this is not always sufficient to uniquely identify the source; for example, if a pod with a given name and project is deleted and a new pod is created with the same name, logs could be attributed to the wrong pod. The logging collector uses Fluentd to collect logs from these sources and forwards them internally or externally, as you configure in OpenShift Container Platform.

Here are the default log sources used by the log collector:

  • journald for all system logs
  • /var/log/containers/*.log for all container logs
  • /var/log/audit/audit.log for audit logs

Collector Features

Fluentd and Vector are the two logging collectors used in OpenShift Container Platform. They have several features in common, including the ability to collect app container logs, infra container logs, infra journal logs, and more.


Both collectors also support various outputs, including Elasticsearch v5-v7, Fluent forward, and Amazon CloudWatch.


Fluentd also supports various authorization and authentication methods, including Elasticsearch certificates and CloudWatch keys.


Both collectors also support various normalization and transformation features, including the ViaQ data model and log level normalization.


About Exporting Fields

Exporting fields is a key feature of the logging system. Exported fields are available for searching from Elasticsearch and Kibana.

The logging system exports fields that are present in log records. This means you can easily access and search these fields.

Exported fields are available for searching from Elasticsearch and Kibana. For more information, see the relevant documentation.
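
As an illustration of what such a record looks like, here is a hand-written example loosely following the ViaQ data model; all values are invented, and the authoritative field list is in the documentation:

    {
      "@timestamp": "2024-01-01T12:00:00.000000+00:00",
      "level": "info",
      "message": "order service started",
      "hostname": "worker-0.example.com",
      "kubernetes": {
        "namespace_name": "my-project",
        "pod_name": "orders-5d9f8c7b6-abcde",
        "container_name": "orders"
      }
    }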

Data Storage

OpenShift Container Platform uses Elasticsearch (ES) by default to store log data. Optionally, you can use the Log Forwarder API to forward logs to an external store.

The logging subsystem Elasticsearch instance is optimized and tested for short-term storage, approximately seven days. If you want to retain logs for longer, forward them to an external, third-party storage system.


Elasticsearch organizes log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards, which are spread across a set of Elasticsearch nodes in an Elasticsearch cluster.

The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. This ensures that log data is evenly distributed across the cluster.

A highly-available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host. This provides redundancy and resilience to failure.

Role-based access control (RBAC) applied on the Elasticsearch indices enables controlled access of logs to developers. Administrators can access all logs, while developers can access only logs in their projects.

All of this is tunable through the ClusterLogging custom resource (CR). You can increase the number of Elasticsearch nodes, specify how many replicas of each shard to create for data redundancy and resilience to failure, and define a retention policy that controls how long each type of log is kept before being deleted.
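
A hedged fragment of a ClusterLogging CR illustrating those three knobs; the retention values are examples, not recommendations:

    spec:
      logStore:
        type: elasticsearch
        retentionPolicy:                 # how long each log type is kept
          application:
            maxAge: 1d
          infra:
            maxAge: 7d
          audit:
            maxAge: 7d
        elasticsearch:
          nodeCount: 3                   # at least three nodes for high availability
          redundancyPolicy: SingleRedundancy   # keep one replica of each shard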

In short, the recommended configuration for a highly-available Elasticsearch environment is at least three nodes on separate hosts, shard replication enabled, and a retention policy sized to your storage.

Data Visualization


OpenShift Container Platform uses Kibana to display log data collected by Fluentd and indexed by Elasticsearch.

Kibana is a browser-based console interface that allows you to query, discover, and visualize your Elasticsearch data.

You can view your log data through various visualizations, including histograms, line graphs, and pie charts.

Kibana provides a convenient way to explore and understand your log data, making it easier to identify trends and patterns and to gain insight into your applications' behavior and performance.
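
On a default installation, Kibana is exposed through a route in the logging project; assuming the default route name, you can find its URL with:

    $ oc get route kibana -n openshift-logging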

Event Management

Event Management is crucial for OpenShift Container Platform logging. The Event Router is a pod that watches OpenShift Container Platform events.

The Event Router collects events from all projects and writes them to STDOUT. This allows for centralized event collection and storage.

Fluentd collects the events written to STDOUT and forwards them into the OpenShift Container Platform Elasticsearch instance. Elasticsearch indexes the events to the infra index.

The Event Router must be manually deployed to collect and store Kubernetes events. You can find more information on this process in the relevant documentation.
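
Once deployed, you can confirm the Event Router is writing events by tailing its logs; this assumes the deployment is named eventrouter, as in the reference deployment:

    $ oc logs deployment/eventrouter -n openshift-logging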

Troubleshooting


Troubleshooting OpenShift logging issues comes down to a handful of specific tasks. Start by viewing the logging status to get a picture of the current state of the system, then view the status of the log store to rule out problems with how logs are being stored. Understanding the logging alerts helps you spot potential issues before they become major problems, while collecting logging data for Red Hat Support is useful when diagnosing complex issues. Finally, there are dedicated troubleshooting steps for critical alerts.

Here are the tasks involved in troubleshooting OpenShift logging (illustrative commands follow the list):

  • Viewing logging status
  • Viewing the status of the log store
  • Understanding logging alerts
  • Collecting logging data for Red Hat Support
  • Troubleshooting for critical alerts
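
A few illustrative commands for these tasks, assuming the default resource names; must-gather can also be pointed at the logging-specific image described in the documentation:

    # Overall logging status
    $ oc describe clusterlogging instance -n openshift-logging

    # Status of the Elasticsearch log store (assumes the default name "elasticsearch")
    $ oc describe elasticsearch elasticsearch -n openshift-logging

    # Collect diagnostic data for a Red Hat Support case
    $ oc adm must-gather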

Updating

Updating OpenShift Container Platform Logging involves updating specific operators. You must update the Elasticsearch Operator and the Cluster Logging Operator.


The Elasticsearch Operator manages the log store, so keeping it current is essential for seamless logging functionality; in general, update the Elasticsearch Operator before updating the Cluster Logging Operator.

Here are the operators you need to update (a quick version check follows the list):

  • Elasticsearch Operator
  • Cluster Logging Operator
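
Before and after updating, you can check which Operator versions are installed by listing the ClusterServiceVersions visible from the logging project:

    $ oc get csv -n openshift-logging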

For more information on updating OpenShift Container Platform logging, see the relevant documentation.

About Uninstalling

When you're ready to uninstall OpenShift Container Platform Logging, start by deleting the ClusterLogging custom resource (CR). This stops log aggregation and is the key action that begins the uninstallation process; afterwards, you can optionally remove the other cluster logging components that remain.
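
Deleting the CR is a one-liner, assuming the default instance name:

    $ oc delete clusterlogging instance -n openshift-logging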

For more information on uninstalling these remaining components, see the documentation on uninstalling OpenShift Container Platform Logging.

Post-Installation Tasks


After installing OpenShift Logging, you'll need to manually create your Kibana index patterns and visualizations to start exploring and visualizing your data in Kibana.

One important thing to keep in mind is that if your cluster network provider enforces network isolation, you'll need to allow network traffic between the two projects that contain the logging subsystem Operators. In OpenShift SDN's default mode, all traffic is allowed, so no action is needed.

Kubernetes

Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google, but is now maintained by the Cloud Native Computing Foundation.

Kubernetes provides a way to package, deploy, and manage applications in containers, which are lightweight and portable. This makes it easier to develop, test, and deploy applications in a consistent and reliable way.

In the context of OpenShift logging, Kubernetes plays a crucial role in managing and scaling the logging infrastructure. By leveraging Kubernetes, OpenShift can automate the deployment and management of logging components, such as Elasticsearch, Fluentd, and Kibana.

Allowing Traffic Between Projects with Network Isolation


Allowing traffic between projects with network isolation is crucial for OpenShift Logging. You need to join the two logging-related projects to allow traffic between them.

OpenShift SDN has three modes: default, multitenant, and network policy mode. In multitenant mode, you must join the two projects.

If you're using OpenShift SDN in multitenant mode, you can join the two projects using the command $ oc adm pod-network join-projects --to=openshift-operators-redhat openshift-logging.

In network policy mode and OVN-Kubernetes, you need to configure the policy to allow traffic to egress from one logging-related project to the other.

In other words, the required action depends on the mode:

  • In multitenant mode, join the two projects with the $ oc adm pod-network join-projects command shown above.
  • In network policy mode, or with OVN-Kubernetes, configure a policy that allows traffic to egress from one logging-related project to the other (a sample policy appears below).

OVN-Kubernetes always uses network policies, so the second approach applies there as well. In all cases, network isolation blocks traffic between pods and services in different projects, so you must explicitly configure a policy that allows traffic between the two logging-related projects.
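
As a hedged sketch, a NetworkPolicy along these lines in the openshift-logging project would admit traffic from openshift-operators-redhat; the policy name is illustrative, and the selector uses the standard kubernetes.io/metadata.name namespace label:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-openshift-operators-redhat   # illustrative name
      namespace: openshift-logging
    spec:
      podSelector: {}                # applies to all pods in openshift-logging
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: openshift-operators-redhat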

Viewing the Cluster Dashboard


The OpenShift Container Platform Logging dashboard is a powerful tool for diagnosing and anticipating problems in your cluster. It contains charts that show details about your Elasticsearch instance at the cluster level.

These charts are based on the data collected by Fluentd, which is deployed to each node in the OpenShift cluster. It collects all node and container logs and writes them to Elasticsearch.

The dashboard itself is available in the OpenShift Container Platform web console, while Kibana provides the centralized web UI where users and administrators can create rich visualizations and dashboards from the aggregated data. This allows administrators to see and search through all logs.

Application owners and developers can also allow access to logs that belong to their projects, giving them a clear view of their application's performance and any issues that may be occurring.

Enabling Vector

Enabling Vector is a straightforward process on OpenShift Container Platform. You'll need to meet specific requirements first.

To enable Vector, you'll need OpenShift Container Platform 4.11 or later, the Logging subsystem for Red Hat OpenShift 5.4 or later, and FIPS (Federal Information Processing Standard) disabled.

Here are the steps to enable Vector (a sketch of the resulting CR follows the list):

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project using the command $ oc -n openshift-logging edit ClusterLogging instance.
  2. Add a logging.openshift.io/preview-vector-collector: enabled annotation to the ClusterLogging custom resource (CR).
  3. Add vector as a collection type to the ClusterLogging custom resource (CR).
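
Putting steps 2 and 3 together, the edited CR ends up looking roughly like this (a sketch built from the annotation and collection type named in the steps above):

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
      annotations:
        logging.openshift.io/preview-vector-collector: enabled   # opt in to Vector
    spec:
      collection:
        logs:
          type: vector   # switch the collector type from fluentd to vector
          vector: {}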

Frequently Asked Questions

How to see logs on OpenShift?

To view logs on OpenShift, navigate to Workloads → Pods, select a project, and click the name of the pod you want to investigate, then click Logs. This will display the logs for the selected pod.
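
From the CLI, the equivalent is oc logs; replace the placeholders with your own pod, container, and project names:

    # Tail the logs of a pod in a given project
    $ oc logs -f <pod-name> -n <project>

    # View the logs of a specific container in a multi-container pod
    $ oc logs <pod-name> -c <container-name> -n <project>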

What is fluentd in OpenShift?

Fluentd is a logging collector in OpenShift that gathers logs from nodes and containers, sending them to Elasticsearch for storage and analysis. It's a key component of the EFK (Elasticsearch, Fluentd, Kibana) logging stack in OpenShift.
