
Setting up OpenShift Ingress is a straightforward process that lets you expose your applications to the outside world.
First, you create an Ingress resource, a Kubernetes object that manages external access to your application.
In OpenShift, you can create an Ingress resource using the OpenShift console or the command-line interface (CLI). To do this, you define the Ingress object, specifying the hostname, the path, and the backend service and port that you want to expose.
To create an Ingress resource, you can use the following YAML file:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```
This YAML file defines an Ingress resource that routes requests for `example.com` to the `example-service` on port 80.
Before You Begin
Before you start working with OpenShift Ingress, there are a few things to review.
In Runtime Fabric, the configuration of routes closely follows the setup of Ingress resources, so you should understand the basics of Ingress resources and how Ingress resource templates work in Runtime Fabric.
Here are some key concepts to review before creating routes:
- How Ingress Resources Templates Work in Runtime Fabric
- Example Ingress Resource Templates
- Creating a route through an Ingress object
Configuring Ingress
Configuring Ingress is a crucial step in setting up your OpenShift cluster. You can configure ingress for an application when you deploy it to Runtime Fabric using Runtime Manager.
To deploy an application to Runtime Fabric, navigate to Runtime Manager and follow the documentation. Select Ingress and choose a host from the Host drop-down list.
The Host field is crucial, as it determines which domain will be used for your application. If the hostname uses a wildcard, you'll need to add a subdomain in the Subdomain field.
The Path field is where you specify the URL path to your application's endpoint. You can add multiple endpoints by clicking the + Add Endpoint button.
After you've configured your ingress settings, you can deploy your application. Runtime Manager will create a route to serve traffic to your application as per the template specified.
To enable passthrough TLS termination in an ingress resource template, add the annotation route.openshift.io/termination: passthrough to the template.
Resource Templates
To create an ingress resource template, you must place it in the rtf namespace; otherwise Runtime Fabric will not pick it up.
The template must also set a specific ingressClassName, prefixed with rtf- and set to rtf-openshift, because Runtime Fabric uses the rtf- prefix to recognize the object as a template.
When configuring a route, you can include multiple paths for a host, but Runtime Manager will only display the first path rule for the host.
Here is a list of key requirements for an ingress resource template:
- The template must be placed in the rtf namespace.
- The ingressClassName must be prefixed with rtf- and set to rtf-openshift.
- TLS is optional, but if specified, Runtime Fabric creates a route with edge termination by default.
- Runtime Fabric replaces the app-name placeholder parameter with the actual app name when you deploy the application.
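Putting these requirements together, a minimal template might look like the following sketch (the template name and wildcard hostname are illustrative; the literal app-name service name is the placeholder Runtime Fabric replaces at deploy time):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress-template
  namespace: rtf                   # templates must live in the rtf namespace
spec:
  ingressClassName: rtf-openshift  # the rtf- prefix marks this as a template
  rules:
  - host: "*.example.com"          # illustrative wildcard host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-name         # placeholder replaced with the actual app name
            port:
              number: 80
```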
Create and Apply Resource Template
To create a resource template, you must place it in the rtf namespace. This is a crucial step, as it tells Runtime Fabric to recognize the object as a template.
The template must be in YAML format and include a .yaml extension in the file name so that it can be parsed correctly.
When modifying the template, note that ingressClassName must be prefixed with rtf- and set to rtf-openshift. This is how Runtime Fabric identifies the template and consumes it.
If you need to specify TLS, you can add it to the template, but it's optional. Runtime Fabric will create a route with edge termination by default unless you specify the route.openshift.io/termination annotation.
A template can include multiple paths for a host, but Runtime Manager will only display the first path rule for the host.
Here's a summary of the required fields for a template:
- metadata.namespace: must be rtf.
- ingressClassName: must be prefixed with rtf- and set to rtf-openshift.
- The file must be YAML, with a .yaml extension.
- spec.tls: optional; Runtime Fabric creates a route with edge termination by default unless the route.openshift.io/termination annotation specifies otherwise.
By following these guidelines, you'll be able to create a resource template that meets the requirements for Runtime Fabric.
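Assuming a template file named ingress-template.yaml (the filename is a placeholder for your own file), applying it is a single oc command:

```shell
# Apply the template to the rtf namespace so Runtime Fabric can consume it.
oc apply -f ingress-template.yaml -n rtf

# Confirm the template exists and carries the rtf-openshift class.
oc get ingress -n rtf \
  -o custom-columns=NAME:.metadata.name,CLASS:.spec.ingressClassName
```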
Understanding Profiles
You can specify one of four TLS security profiles for each component: Old, Intermediate, Modern, or Custom.
The Old profile is intended for use with legacy clients or libraries, and it requires a minimum TLS version of 1.0. For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1.
The Intermediate profile is the recommended configuration for the majority of clients, and it requires a minimum TLS version of 1.2. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane.
The Modern profile requires a minimum TLS version of 1.3 and is intended for use with modern clients that have no need for backwards compatibility.
The Custom profile allows you to define the TLS version and ciphers to use, but use caution when using it because invalid configurations can cause problems.
In the IngressController custom resource (CR), you can see the ciphers and minimum TLS version of the configured TLS security profile under Status.Tls Profile, and the profile you configured under Spec.Tls Security Profile.
Mule Application Configuration
To configure ingress for a Mule application, you deploy it to Runtime Fabric using Runtime Manager. Available hosts and paths come from the ingress resource template configured by the Runtime Fabric administrator.
You can also use this procedure to deploy a test application to validate your ingress resource template. If you're using the Mule Maven plugin, the publicUrl in the http block can accept a comma-delimited string of multiple endpoints.
To configure ingress for a Mule application, follow these steps:
- Navigate to Runtime Manager and deploy an application to Runtime Fabric.
- Select Ingress.
- From the Host drop-down list, select a host for the application.
- Enter a URL path to the application’s endpoint in the Path field.
- Preview the endpoint by clicking the generated preview link.
After deploying the application, Runtime Manager creates a route to serve traffic to the application as per the template specified.
The Configuration Asset
The installation program generates an asset with an Ingress resource in the config.openshift.io API group, stored as cluster-ingress-02-config.yml in the manifests/ directory.
This Ingress resource defines the cluster-wide configuration for Ingress.
The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.
The OpenShift API Server Operator uses the domain from the cluster Ingress configuration.
This domain is also used when generating a default host for a Route resource that does not specify an explicit host.
- The Ingress configuration is used for both the Ingress Operator and the OpenShift API Server Operator.
- The domain from the cluster Ingress configuration is used for the default Ingress Controller and default host for a Route resource.
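The cluster Ingress configuration itself is small; a sketch of what cluster-ingress-02-config.yml typically contains (the domain value is an example):

```yaml
# Sketch of the cluster-wide Ingress configuration; the domain is an example value.
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: apps.example.openshift.com  # used for the default Ingress Controller and default Route hosts
```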
Profiles
Profiles are a crucial aspect of Mule application configuration, and understanding how they work can make a huge difference in your application's security and performance.
There are four types of TLS security profiles: Old, Intermediate, Modern, and Custom. The Old profile is intended for use with legacy clients or libraries, while the Intermediate profile is the recommended configuration for most clients.
The Modern profile is designed for modern clients with no need for backwards compatibility. It requires a minimum TLS version of 1.3, which is supported by the HAProxy Ingress Controller image.
Here are the four types of TLS security profiles in a nutshell:
- Old: for legacy clients or libraries; minimum TLS version 1.0 (converted to 1.1 for the Ingress Controller).
- Intermediate: the recommended default for most clients; minimum TLS version 1.2.
- Modern: for modern clients with no need for backwards compatibility; minimum TLS version 1.3.
- Custom: user-defined TLS versions and ciphers; use with caution.
To configure a TLS security profile for an Ingress Controller, you need to edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.
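As a sketch, selecting the predefined Old profile for the default Ingress Controller looks like this (edit the CR with `oc edit IngressController default -n openshift-ingress-operator`):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tlsSecurityProfile:
    type: Old   # one of Old, Intermediate, Modern, or Custom
    old: {}
```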
Configure a Mule Application
To configure a Mule application, you need to deploy it to Runtime Fabric using Runtime Manager. This process involves configuring ingress for the application.
You can use the Runtime Manager documentation to deploy an application to Runtime Fabric. Once you've done that, select Ingress from the menu. To choose a host for the application, select a host from the Host drop-down list.
If the hostname uses a wildcard, you'll need to add a subdomain in the Subdomain field. This field is only available if the hostname uses a wildcard. You'll also need to add a URL path to the application's endpoint in the Path field.
To preview the endpoint, click the generated preview link. If you need to add additional endpoints, click the + Add Endpoint button. Once you're ready, click Deploy application to deploy the application.
After deploying the application, Runtime Manager creates a route to serve traffic to the application as per the template specified.
Publishing Strategy
When working with Mule applications, you'll need to consider how to publish your Ingress Controller. In Kubernetes, there are several endpoint publishing strategies to choose from.
The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service. This strategy is useful when you need to support static port allocations.
A NodePortService is created to publish the deployment, and the specific node ports are dynamically allocated by OpenShift Container Platform. However, you can update the managed service resource directly to achieve integrations with static node ports.
The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed. This strategy has some limitations, such as only allowing one pod replica per node.
To use the HostNetwork endpoint publishing strategy, you must use at least as many nodes as you want replicas. Each pod replica requests ports 80 and 443 on the node host where it is scheduled, so a replica cannot be scheduled to a node if another pod on the same node is using those ports.
Here are the key differences between the NodePortService and HostNetwork endpoint publishing strategies:
- NodePortService: node ports are dynamically allocated by OpenShift Container Platform; static allocations are possible by updating the managed service resource directly.
- HostNetwork: each pod replica requests ports 80 and 443 on the node where it is scheduled, so only one replica is allowed per node, and you need at least as many nodes as replicas.
If you need to configure the default Ingress Controller for your cluster to be internal, you can delete and recreate it. To do this, you'll need to update the IngressController resource with the desired endpoint publishing strategy.
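One way to do this is to force-replace the resource in a single step; a sketch, assuming the default controller should use an internal load balancer:

```shell
# Delete and recreate the default Ingress Controller with an internal scope.
oc replace --force --wait -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF
```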
Deploying Our Application
To deploy our application, we need to create a route to access it. This is done with an OpenShift Route object.
We can deploy an OpenShift Route to our service with edge TLS termination; once created, the route can be validated.
The default certificate from the OpenShift Router can be used for the Route, which is trusted by most clients. However, if you don't have an HTTPS certificate deployed in your OpenShift Router, you'll need to add the -k option to curl to accept untrusted certificates.
We can validate that we can reach the application through the HTTPS Route by using curl.
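As a sketch, assuming a service named hello-openshift in the current project (the service and route names are illustrative), the edge-terminated route and the validation call might look like this:

```shell
# Create an edge-terminated TLS route in front of the service.
oc create route edge hello-openshift-edge --service=hello-openshift

# Look up the hostname the router generated for the route.
ROUTE_HOST=$(oc get route hello-openshift-edge -o jsonpath='{.spec.host}')

# Validate the HTTPS route; add -k only if the router certificate is untrusted.
curl -k "https://${ROUTE_HOST}"
```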
Deploy an Upstream
Deploying an upstream Ingress controller can be a bit tricky, especially when working with OpenShift. You'll need to add the capabilities and UID constraints required for Nginx Ingress via a simple manifest.
To do this, you'll apply the manifest to create a namespace, security context constraints, roles, and role bindings for the Nginx Ingress. This will give the necessary permissions for the Nginx Ingress to run on your cluster.
The manifest will create a namespace, security context constraints, and roles, which will then be applied to your cluster. This process can take a few minutes to complete.
Once the manifest is applied, you can then deploy the stock upstream Nginx deployment. This will create a service account, roles, cluster roles, and a deployment for the Nginx Ingress controller.
The deployment will create a configmap, services, and a deployment for the Nginx Ingress controller. It will also create an IngressClass and a validating webhook configuration for the Nginx Ingress admission.
After the deployment is complete, you should be able to watch your Deployment come online and show Ready. This indicates that the Nginx Ingress controller is up and running.
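A sketch of the two-step flow described above; both manifest filenames are placeholders for the SCC/RBAC prerequisites manifest and the stock upstream deployment, and the namespace and deployment names assume the upstream defaults:

```shell
# Step 1: create the namespace, security context constraints, roles,
# and role bindings the controller needs (placeholder filename).
oc apply -f nginx-ingress-openshift-prereqs.yaml

# Step 2: deploy the stock upstream Nginx Ingress controller (placeholder filename).
oc apply -f nginx-ingress-deploy.yaml

# Watch the Deployment come online and show Ready.
oc -n ingress-nginx rollout status deployment/ingress-nginx-controller
```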
Specifying an Alternative Cluster Domain with AppsDomain

As a cluster administrator, you have the ability to specify an alternative cluster domain for user-created routes by configuring the appsDomain field. This field is optional and allows you to use a custom domain instead of the default one.
To configure the appsDomain field, you simply need to specify an alternative default domain for user-created routes. This can be done by following these steps:
- Configure the appsDomain field by specifying an alternative default domain for user-created routes.
- Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change.
It's worth noting that if you specify an alternative domain, it will override the default cluster domain for the purpose of determining the default host for a new route. This means that you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster.
Wait for the openshift-apiserver to finish rolling updates before exposing the route, as this will ensure that the changes take effect.
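A sketch of the procedure, patching the cluster Ingress configuration with an example company domain (the domain and the hello-openshift service name are illustrative):

```shell
# Set an alternative default domain for user-created routes (example domain).
oc patch ingresses.config/cluster --type=merge \
  --patch '{"spec":{"appsDomain":"apps.acme.example.com"}}'

# Wait for the openshift-apiserver rollout to settle before exposing routes.
oc rollout status -n openshift-apiserver deployment/apiserver

# Expose a service and confirm the generated route host uses the appsDomain.
oc expose service hello-openshift
oc get route hello-openshift -o jsonpath='{.spec.host}'
```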
Creating a Route
Creating a route for your Mule application is a crucial step in exposing it to the world. Once a pod is running, you can expose it as a service by running the command `oc expose pod/hello-openshift`, and then create a route for that service.
To create a route for Ingress Controller sharding, you need to create a project called hello-openshift by running the command `oc new-project hello-openshift`. This will create a new project where you can deploy your Mule application.
You'll also need to create a pod in the project by running the command `oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json`. This will create a pod that runs your Mule application.
To create a route definition for sharding, you'll need to create a YAML file called hello-openshift-route.yaml with the following contents:
```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    type: sharded
  name: hello-openshift-edge
  namespace: hello-openshift
spec:
  subdomain: hello-openshift
  tls:
    termination: edge
  to:
    kind: Service
    name: hello-openshift
```
This YAML file defines a route that uses the subdomain `hello-openshift` and routes traffic to the `hello-openshift` service with edge TLS termination.
Once you've created the YAML file, you can create the route by running the command `oc -n hello-openshift create -f hello-openshift-route.yaml`. This will create a route that exposes your Mule application to the world.
Here's a summary of the steps to create a route for Ingress Controller sharding:
- Create a project called hello-openshift
- Create a pod in the project
- Create a service called hello-openshift
- Create a route definition for sharding
- Create the route by running the command `oc -n hello-openshift create -f hello-openshift-route.yaml`
By following these steps, you can create a route for your Mule application that takes advantage of Ingress Controller sharding.
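The steps above can be sketched end to end as:

```shell
# Create the project and the example pod.
oc new-project hello-openshift
oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json

# Expose the pod as a service.
oc expose pod/hello-openshift

# Create the sharded route from the definition file.
oc -n hello-openshift create -f hello-openshift-route.yaml
```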
Container Platform
OpenShift Container Platform is a powerful tool for creating and managing containerized applications. It provides a robust environment for deploying and scaling applications.
To enable external access to your services, you need to use the Ingress Operator, which implements the IngressController API. This component is responsible for making your services accessible to outside clients.
The Ingress Operator achieves this by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources.
To configure the wildcard policy, you need to have the OpenShift CLI (oc) installed and access to the cluster as a user with the cluster-admin role.
Container Platform Operator
The Ingress Operator is a crucial component of OpenShift Container Platform, responsible for enabling external access to cluster services. It does this by implementing the IngressController API and by deploying and managing HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources.
To configure the wildcard policy, you need to have the OpenShift CLI (oc) installed and access to the cluster as a user with the cluster-admin role. This allows you to configure the Ingress Operator to enable external access to your services.
The available TLS security profiles are Old, Intermediate, Modern, and Custom, as described above.
To configure a TLS security profile for an Ingress Controller, you need to edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.
Load Balancing
Load balancing is a crucial aspect of OpenShift Ingress, ensuring efficient distribution of traffic across multiple servers.
You can configure an Ingress Controller to use an internal load balancer, which is useful for cloud platforms like Azure, where you need at least one public load balancer pointing to your nodes to maintain egress connectivity to the internet.
To use an internal load balancer, you must specify the scope as Internal in the IngressController custom resource (CR). You can also change the scope after the CR is created, but you'll need to delete and recreate the IngressController object.
Here are the options for endpoint publishing strategy types:
- HostNetwork
- NodePortService
- LoadBalancerService
Each of these options has its own implications for load balancing, such as the PROXY protocol, which preserves original client addresses for connections received by the Ingress Controller.
Passthrough Termination
To configure passthrough TLS termination in your ingress resource template, you'll need to add the annotation route.openshift.io/termination: passthrough.
This annotation allows you to bypass the default termination behavior and instead use passthrough TLS termination.
In the template, you'll also need to set specific parameter values.
Here are the steps to follow:
- Add the annotation route.openshift.io/termination: passthrough to the template.
- Set the parameter values in the template.
- In Runtime Manager, include a / in the Path field when configuring ingress for a Mule app.
By following these steps, you can enable passthrough TLS termination for your application endpoint, which will show HTTPS.
Maximum Connections
To optimize load balancing, you can adjust the maximum number of connections for your Ingress Controller. You'll need to patch the existing Ingress Controller to make this change.
The maximum number of connections can be set for HAProxy by updating the Ingress Controller with the following command:
```shell
oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
  -p '{"spec":{"tuningOptions": {"maxConnections": 7500}}}'
```
Be aware that if you set the maximum connections value too high, it may exceed the current operating system limit, preventing HAProxy from starting.
Proxy Protocol
The Proxy Protocol is a feature that enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. This is useful for logging, filtering, and injecting HTTP headers.
In the default configuration, the connections that the Ingress Controller receives only contain the source address associated with the load balancer. To configure the Proxy Protocol, you must use either the HostNetwork or NodePortService endpoint publishing strategy types.
You must configure both OpenShift Container Platform and the external load balancer to either use the Proxy Protocol or to use TCP. This is a requirement for the Proxy Protocol to work correctly.
The Proxy Protocol is unsupported for the default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress VIP. This means you'll need to use a different configuration if you're working with these types of clusters.
To configure the Proxy Protocol, you'll need to edit the Ingress Controller resource. Here's the step-by-step process:
- Edit the Ingress Controller resource: `oc -n openshift-ingress-operator edit ingresscontroller/default`
- Set the PROXY protocol in the endpointPublishingStrategy section of the spec.
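For example, with the HostNetwork strategy the PROXY protocol is enabled by setting the protocol field; a sketch of the relevant spec fragment:

```yaml
# Fragment of the IngressController spec enabling the PROXY protocol
# for the HostNetwork endpoint publishing strategy.
spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      protocol: PROXY
```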
Frequently Asked Questions
What is ingress and egress in OpenShift?
In OpenShift, ingress refers to incoming traffic from outside the network to a pod, while egress refers to outgoing traffic from a pod to an external destination. Understanding the difference between ingress and egress is crucial for securing and optimizing network traffic in your OpenShift environment.
What is ingress in a container?
Ingress in a container is a way to expose HTTP and HTTPS routes from outside the cluster to services within the cluster, controlling traffic routing through defined rules. It acts as a gateway, directing external traffic to specific services within the cluster.
What is the difference between OpenShift ingress and load balancer?
OpenShift ingress and load balancer differ in their location and routing capabilities: an Ingress is a native cluster object that can route to multiple services, while a load balancer sits outside the cluster and routes to a single service.