OpenShift Route Configuration Options and Customization


In OpenShift, routes are the entry point for users to access applications. A route gives an application a single endpoint that clients can reach, and it's used to expose the application to the outside world.

A route can be configured to use a specific host name, which can be set using the `host` field in the route configuration. For example, you can set the host name to `example.com` to access your application.

The `path` field in the route configuration is used to specify the URL path that clients will use to access the application. This field can be set to a specific path, such as `/my-app`, to expose a specific part of the application.

You can also pin a route to a specific service port, such as `443` for HTTPS traffic, by setting the `port` field (its `targetPort` subfield) in the route configuration.
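
Putting these fields together, a minimal Route manifest might look like the following sketch (the route name, service name, and target port are illustrative):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: example.com            # hostname clients use to reach the application
  path: /my-app                # only requests under this path are matched
  to:
    kind: Service
    name: my-app               # backing service (assumed to exist)
  port:
    targetPort: 8080           # which service port the route sends traffic to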

Types of Routes

There are two broad categories of OpenShift routes: unsecured (HTTP) routes and secured (HTTPS) routes. Secured routes use TLS, while unsecured routes carry plain HTTP traffic.


HTTP routes, also known as unsecured routes, carry plain HTTP traffic. Because the router can read each request, multiple routes can be served using the same hostname, each with a different path.

Here's a breakdown of path-based routes:

Path-Based

Path-Based routes are a type of route that allows for a path to be specified within the route, which is then compared against a URL to either allow or disallow the traffic.

Multiple routes can be served using the same hostname, each with a different path. Because the router must be able to read the HTTP request to match the path, path-based routing works for unsecured routes and for routes whose TLS is terminated at the router, but not for passthrough TLS (see below).

The path is the only added attribute for a path-based route, and it's used to compare against a URL. The most specific path is chosen as the best match.

For example, a route defined for www.example.com/test matches a request to www.example.com/test but not a request to www.example.com alone. If routes exist for both www.example.com/test and www.example.com, requests to either URL are served, with the more specific /test route handling requests under that path.
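
As a sketch, two routes sharing a hostname but serving different paths might look like this (the route and service names are illustrative):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-test
spec:
  host: www.example.com
  path: /test                # requests under /test go to the app-test service
  to:
    kind: Service
    name: app-test
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-root
spec:
  host: www.example.com      # same hostname, no path: handles everything else
  to:
    kind: Service
    name: app-root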

Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request.

SNI Communication Flow


SNI communication flow is a crucial aspect of OpenShift routes: during the TLS handshake, the client uses Server Name Indication (SNI) to tell the router which hostname it wants, and the router uses that to select the matching route and certificate. Secured routes can use one of three types of TLS termination.

The type of termination is determined by where the encryption is being terminated. There are three termination types: Edge Termination, Passthrough Termination, and Re-encryption Termination.

Edge termination terminates TLS at the router. The router presents the certificate, and traffic travels on to the pod unencrypted over the internal network.

Passthrough termination does not terminate TLS at the router at all: the encrypted traffic is passed straight through to the pod, which serves its own certificate. This approach is useful when you need to keep control of encryption at the application level.

Re-encrypt termination is a variation of edge termination: the router terminates TLS with its certificate and then re-encrypts the connection to the pod, which typically presents a different, internal certificate.

Here's a breakdown of the three termination types:

Edge: TLS ends at the router; router-to-pod traffic travels unencrypted inside the cluster.
Passthrough: TLS ends at the pod; the router never decrypts the traffic and cannot inspect it.
Re-encrypt: TLS ends at the router, and a new TLS connection is opened to the pod.
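
As a sketch, each termination type can be created with the corresponding oc create route subcommand (the service name, hostname, and certificate file are illustrative; the edge case with custom certificates is covered in more detail later in this article):

$ oc create route edge --service=frontend --hostname=www.example.com
$ oc create route passthrough --service=frontend --hostname=www.example.com
$ oc create route reencrypt --service=frontend --hostname=www.example.com --dest-ca-cert=dest-ca.crt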

Route Configuration

Route configuration is a crucial aspect of OpenShift routes. You can set or delete HTTP request and response headers for compliance purposes or other reasons. This can be done for all routes served by an Ingress Controller or for specific routes.


To set or delete HTTP request and response headers, you use the `httpHeaders` field in the route definition. For example, you can set the `Content-Location` HTTP response header to point clients to a specific location for the content they receive.

You specify the actions to be performed on the HTTP headers in the `actions` list within the `httpHeaders` field. The `action` field can have the value `Set` or `Delete`, and for `Set` the nested `set` field carries the value to apply. For instance, to set the `Content-Location` header to `/lang/en-us`, you would use `type: Set` together with `set: value: /lang/en-us`.

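As a compact sketch, the `actions` list has the following shape (the header name and value are taken from the example above; a full route definition appears later in this article):

httpHeaders:
  actions:
    response:                  # actions applied to HTTP response headers
    - name: Content-Location   # the header to act on
      action:
        type: Set              # Set or Delete
        set:
          value: /lang/en-us   # value applied when type is Set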

Enabling

Enabling HTTP Strict Transport Security (HSTS) is a crucial step in ensuring secure interactions with websites.

To enable HSTS on a route, you add the haproxy.router.openshift.io/hsts_header annotation to an edge-terminated or re-encrypt route. You can use the oc annotate command to do this from your terminal.


The haproxy.router.openshift.io/hsts_header value must include the max-age parameter, which measures the length of time, in seconds, that the HSTS policy is in effect. The client updates max-age whenever a response with an HSTS header is received from the host.

The haproxy.router.openshift.io/hsts_header value takes the following parameters:

max-age (required): the length of time, in seconds, that the HSTS policy is in effect; a value of 0 negates the policy.
includeSubDomains (optional): tells the client that the policy also applies to subdomains of the host.
preload (optional): allows the domain to be included in browser HSTS preload lists.

Note that HAProxy-style time units (us, ms, s, m, h, and d, with ms assumed when no unit is given) apply to router timeout annotations such as haproxy.router.openshift.io/timeout; the HSTS max-age value itself is always expressed in seconds.
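
As a sketch, enabling HSTS with oc annotate might look like this (the route name and the one-year max-age are illustrative):

$ oc annotate route my-route --overwrite haproxy.router.openshift.io/hsts_header="max-age=31536000;includeSubDomains;preload"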

Configuration

You can set or delete HTTP headers within an Ingress Controller or Route CR, but you cannot append them. If a header is set with a value, that value must be complete and not require appending in the future.

To modify request and response headers, you can use specific fields in the Ingress Controller or an individual route. Route annotations can also be used to set certain headers. However, if you need to append a header, such as the X-Forwarded-For header, you should use the spec.httpHeaders.forwardedHeaderPolicy field instead of spec.httpHeaders.actions.



To enable HTTP Strict Transport Security (HSTS), you need to add the haproxy.router.openshift.io/hsts_header value to the edge terminated or re-encrypt route. This will add a Strict Transport Security header to HTTPS responses from the site.


To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration. This will create an edge-terminated route using the default certificate.
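
A sketch of such an Ingress object (the name, host, backend service, and port are illustrative; the empty entry in the tls list is what triggers the edge-terminated route with the default ingress certificate):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend       # backing service (assumed to exist)
            port:
              number: 8080
  tls:
  - {}                           # empty TLS entry: use the default certificate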

Header Configuration

Header configuration can be a bit tricky in OpenShift Container Platform, but don't worry, I've got you covered.

You can only set or delete headers within an IngressController or Route CR, you cannot append them. This means if an HTTP header is set with a value, that value must be complete and not require appending in the future.


Special case headers, like proxy, host, strict-transport-security, cookie, and set-cookie, have specific configuration options and restrictions. For example, the proxy header cannot be set or deleted, as it can be used to exploit vulnerable CGI applications.

To enable HTTP Strict Transport Security (HSTS) on a route, you can add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. This will add a Strict Transport Security header to HTTPS responses from the site.

The haproxy.router.openshift.io/hsts_header annotation has specific parameters, such as max-age, includeSubDomains, and preload. For instance, max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0, it negates the policy.

Here is a summary of the special case header configuration options:

proxy: cannot be set or deleted, because it can be used to exploit vulnerable CGI applications.
host: restricted, because the router relies on it to match requests to routes.
strict-transport-security: not managed through httpHeaders actions; use the haproxy.router.openshift.io/hsts_header route annotation instead.
cookie and set-cookie: cannot be set or deleted, because HAProxy uses cookies for session tracking.

Security and Authentication

OpenShift Secured Routes provide a secure way to serve certificates to clients, using various TLS termination methods to decrypt encrypted traffic. This process is called TLS termination, where the encryption is removed before passing traffic to the required service or pod.


Secure routes in OpenShift use Server Name Indication (SNI) to determine which hostname the client is trying to connect to. Non-SNI traffic routed to the secure port (default 443) is assigned a default certificate that likely won't match the hostname, resulting in an authentication error.

Creating routes in Microsoft Azure through public endpoints is subject to certain restrictions, including a list of reserved resource names that cannot be used.

Unsecured

Unsecured routes in OpenShift use plaintext HTTP communication. This means that data transmitted over these routes is not encrypted, making it vulnerable to interception and eavesdropping.

You can create unsecured OpenShift routes through the GUI/web console or CLI (command line interface).
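
For example, a minimal CLI sketch for exposing a service as an unsecured route (the service name and hostname are illustrative):

$ oc expose service frontend --hostname=www.example.com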

Secured

Secured routes in OpenShift provide a secure way to serve certificates to clients, and they offer various TLS termination options, including edge, passthrough, and re-encryption termination.

TLS termination in OpenShift uses SNI (Server Name Indication), which allows clients to indicate the hostname they're trying to connect to during the TLS handshake.


Non-SNI traffic routed to the secure port (default 443) is assigned a default certificate that likely won't match the hostname, resulting in an authentication error.

To create secured routes, you can use the OpenShift Route documentation as a reference.

Here are some key facts about secured routes:

They support three TLS termination types: edge, passthrough, and re-encryption.
The router relies on SNI to know which certificate to present for a given hostname.
Non-SNI traffic on the secure port (default 443) falls back to a default certificate, which usually does not match the requested hostname.
HSTS can be applied only to secure routes, meaning edge-terminated or re-encrypt routes.

If you're creating routes in Microsoft Azure through public endpoints, be aware that resource names are subject to restriction, and certain terms are not allowed.

To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, you need to add a requiredHSTSPolicies record to the Ingress spec. This will ensure that any newly created route is configured with a compliant HSTS policy annotation.
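
As a rough sketch only, a requiredHSTSPolicies record might look like the following; the exact field names here (domainPatterns, maxAge, includeSubDomainsPolicy) are my assumption based on the Ingress config API and should be checked against your cluster's documentation:

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  requiredHSTSPolicies:
  - domainPatterns:                       # domains the policy applies to (illustrative pattern)
    - '*.apps.example.com'
    maxAge:
      smallestMaxAge: 1                   # minimum allowed max-age, in seconds
      largestMaxAge: 31536000             # maximum allowed max-age, in seconds
    includeSubDomainsPolicy: RequireIncludeSubDomains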

You can review the HSTS policy you configured using the following commands:

$ oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}'

$ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

Troubleshooting and Management

To troubleshoot OpenShift routes when an HTTP service is unavailable from outside the cluster, start by checking whether the service is actually running; run that check from the machine where the issue occurs.


Next, verify that DNS is working by checking if the URL resolves to an IP address. If it doesn't resolve, there's an issue with the DNS.

If the URL resolves and the service is still not accessible externally, try connecting with telnet to the resolved IP address on the port your route is exposed on. The connection should succeed if the router is listening on that port.
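
A rough sketch of these checks from a shell (the hostname and port are illustrative, and the dig and telnet utilities are assumed to be available):

$ dig +short www.example.com    # does the route hostname resolve to an IP address?
$ telnet www.example.com 443    # is the router listening on the expected port?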

Application Troubleshooting

Application Troubleshooting is a crucial step in ensuring your OpenShift applications are running smoothly. To start, you'll want to determine if the service is running by running a command on the machine where the issue occurs.

For external users to access the application, the service alone is not enough, because a service only handles traffic inside the cluster; a route, which gives the application a publicly reachable hostname, is also required.

Check if DNS is working by running a command that resolves the given URL to an IP address. If it doesn't resolve, there's an issue with DNS.


If the URL resolves, but the service is still not accessible externally, use telnet to connect to the resolved IP address on the port your service is exposed on. This will help you determine whether the router is listening on that port.

If the oc get route command shows the route as expected, the route itself is probably not the problem. If it suggests something is wrong, run the oc describe route command to get a detailed description of the issue.
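
For example (the route and project names are illustrative):

$ oc get route my-app -n my-project
$ oc describe route my-app -n my-project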

If the router is listening on the specified port, the telnet command connects successfully. If the router is not listening, or nothing is serving that IP address, the connection is refused or times out and you get an error message.

Networking and Configuration

To configure the OpenShift Container Platform Ingress Controller for dual-stack networking, you can create a service YAML file or modify an existing one by setting the ipFamilies and ipFamilyPolicy fields. This allows the Ingress Controller to serve traffic over IPv4/IPv6 to a workload.


You can specify the ipFamilies field as IPv4 or IPv6 for single-stack instances, or both IPv4 and IPv6 for dual-stack instances. The ipFamilyPolicy field should be set to RequireDualStack for dual-stack instances.

To view endpoints, enter the following command: $ oc get endpoints. To view endpointslices, enter the following command: $ oc get endpointslices.

HTTP Strict Transport Security

A crucial point about HTTP Strict Transport Security (HSTS) is that it only works with secure routes.

HSTS, or HTTP Strict Transport Security, is a security enhancement that signals to the browser client that only HTTPS traffic is allowed on the route host.

If you're configuring HSTS, you can enable it per-route or disable it per-route, giving you flexibility in how you implement security measures.

HSTS also allows you to enforce it per-domain, for a set of domains, or use namespace labels in combination with domains.

You should note that HSTS is ineffective on HTTP or passthrough routes, so make sure you're using secure routes like edge-terminated or re-encrypt routes.


Here's a summary of HSTS configurations:

Enable HSTS per-route by adding the haproxy.router.openshift.io/hsts_header annotation to an edge-terminated or re-encrypt route.
Disable HSTS per-route by setting max-age to 0 in that annotation.
Enforce HSTS per-domain, for a set of domains, or for namespaces selected by labels in combination with domains, using requiredHSTSPolicies in the cluster Ingress configuration.
HSTS has no effect on HTTP or passthrough routes.
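
A sketch of disabling HSTS on a single route by setting max-age to 0 (the route name is illustrative):

$ oc annotate route my-route --overwrite haproxy.router.openshift.io/hsts_header="max-age=0"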

Managing Request and Response Headers

Managing request and response headers is a crucial aspect of configuring your OpenShift Container Platform. You can set or delete certain HTTP request and response headers for compliance purposes or other reasons.

To set or delete these headers, you can use the Ingress Controller or an individual route. However, there are certain headers that cannot be set or deleted, such as the proxy and host headers, due to security and configuration issues.

You can set or delete headers using route annotations, but be aware that some headers have specific restrictions and requirements. For example, the strict-transport-security header can only be configured using route annotations.

If you need to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field instead of spec.httpHeaders.actions, because the actions field only supports setting and deleting headers, not appending to them.

The special case headers described in the previous section (proxy, host, strict-transport-security, cookie, and set-cookie) have their own configuration options and restrictions and apply here as well.

To set a specific header, you can create a route definition and save it in a file. For example, to set the Content-Location HTTP response header, you would create a route definition like the following YAML:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-example
spec:
  host: app.example.com
  tls:
    termination: edge
  to:
    kind: Service
    name: app-example
  httpHeaders:
    actions:
      response:
      - name: Content-Location
        action:
          type: Set
          set:
            value: /lang/en-us
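
After saving the definition to a file, you could apply it with a command along these lines (the file name is illustrative):

$ oc apply -f app-example-route.yaml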

Controller for Dual-Stack Networking


The OpenShift Container Platform Ingress Controller is a powerful tool that allows you to configure dual-stack networking for your cluster.

To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. For example, you can specify both IPv4 and IPv6 in the ipFamilies field, and RequireDualStack in the ipFamilyPolicy field.

For a single-stack instance, set ipFamilies to IPv4 or IPv6 and set ipFamilyPolicy to SingleStack; for a dual-stack instance, list both IPv4 and IPv6 in ipFamilies and set ipFamilyPolicy to RequireDualStack. A dual-stack instance is given two different clusterIPs, one for each address family.
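
A minimal sketch of such a service definition (the service name, selector label, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  ipFamilies:                  # both families for a dual-stack instance
  - IPv4
  - IPv6
  ipFamilyPolicy: RequireDualStack
  selector:
    app: example-app           # pods this service targets (assumed label)
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP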

These resources generate corresponding endpoints. The Ingress Controller now watches endpointslices.

To view endpoints, enter the following command: $ oc get endpoints. To view endpointslices, enter the following command: $ oc get endpointslices.

To summarize the options: the ipFamilies field takes IPv4 or IPv6 for a single-stack instance, or both IPv4 and IPv6 for a dual-stack instance, and the ipFamilyPolicy field takes SingleStack for a single-stack instance or RequireDualStack for a dual-stack instance.

Customization and Edge Termination


You can customize your secure route using edge termination by specifying a custom certificate and key pair in PEM-encoded format. This is done by using the oc create route command with the edge termination option.

To create a secure route with edge termination, you must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may also have a separate CA certificate in a PEM-encoded file that completes the certificate chain.

The following command is used to create a secure route with edge termination: $ oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com. This command assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory.

The YAML definition of the secure route using edge termination is similar to the following:

apiVersion: v1
kind: Route
metadata:
  name: frontend
spec:
  host: www.example.com
  to:
    kind: Service
    name: frontend
  tls:
    termination: edge
    key: |-
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
    caCertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----


You can also configure edge termination to allow or redirect insecure traffic. The insecureEdgeTerminationPolicy field controls what happens to insecure (plain HTTP) traffic, and it accepts three values: None (or empty, which disallows traffic on the insecure port), Allow, or Redirect.

To allow insecure traffic alongside HTTPS, set insecureEdgeTerminationPolicy to Allow in the route's tls section. The name field names the object and is limited to 63 characters, and the termination field remains edge for edge termination.

To redirect insecure traffic to HTTPS instead, set insecureEdgeTerminationPolicy to Redirect; clients that request the plain HTTP URL are sent to the HTTPS version of the route.
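
A sketch of the relevant part of such a route (the route and service names are illustrative; only the insecureEdgeTerminationPolicy value differs between the two behaviors):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend               # limited to 63 characters
spec:
  host: www.example.com
  to:
    kind: Service
    name: frontend
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Allow   # use Redirect to send HTTP requests to HTTPS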

Frequently Asked Questions

What is an OpenShift route?

An OpenShift route is a way to expose a service to external clients by assigning a host name, such as www.example.com, to it. This allows clients to access the service by name, while DNS resolution is handled separately.

What is the default route port for OpenShift?

The default listening ports for OpenShift are 443 and 80. These standard ports are used for secure and non-secure connections, respectively.
