Getting started with Ansible and OpenShift for cloud-native apps is a straightforward process.
To begin, you'll need to install the OpenShift Ansible collection, which provides the modules for working with OpenShift clusters. The collection is installed with the `ansible-galaxy` command.
The OpenShift Ansible collection includes a set of modules for managing resources in an OpenShift cluster, such as creating and deleting projects, as well as managing applications and services within those clusters.
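As a hedged example, assuming you use the community-maintained `community.okd` collection, the installation looks like this:

```bash
# Install the OpenShift collection (pulls in its kubernetes.core dependency)
ansible-galaxy collection install community.okd

# The modules also need the Kubernetes Python client on the control node
pip install kubernetes
```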
Getting Started
To get started with Ansible on OpenShift, you'll need to have a basic understanding of Ansible and OpenShift, as well as a working knowledge of YAML syntax.
First, ensure you have Ansible installed on your machine. You can do this by running the command `pip install ansible` in your terminal.
Next, create a new project in OpenShift by running the command `oc new-project myproject`. This will create a new namespace for your project.
You can then use Ansible to deploy your application to OpenShift by creating a playbook that uses the collection's `community.okd.k8s` module. For example, a playbook can apply a Deployment manifest to your project instead of running `oc` commands by hand.
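Here is a minimal sketch of such a playbook, assuming the `community.okd.k8s` module from the collection above; the project name and image are placeholders for your own values:

```yaml
# deploy-app.yml - sketch only; adjust the namespace and image for your environment
- name: Deploy a sample application to OpenShift
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create a Deployment in the myproject namespace
      community.okd.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: hello-app
            namespace: myproject
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: hello-app
            template:
              metadata:
                labels:
                  app: hello-app
              spec:
                containers:
                  - name: hello-app
                    image: quay.io/example/hello-app:latest  # placeholder image
```

Run it with `ansible-playbook deploy-app.yml` after an `oc login`; the module picks up your current kubeconfig context by default.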
Introduction
Red Hat Ansible Automation Platform (AAP) is now supported on the Red Hat OpenShift platform, which is a great direction to consider, especially if your organization already has experience with OpenShift and containers.
Installing AAP on your laptop or workstation gives you hands-on experience with the installation process, and with OpenShift and containers in general.
The installation is quite different from the traditional method of unpacking a tarball and running the setup.sh script.
Requirements
To get started with Red Hat OpenShift on your laptop, you'll use Red Hat OpenShift Local, which simplifies the architecture to a single virtual machine. That virtual machine is managed through the `crc` binary, and OpenShift itself is managed through the usual `oc` command-line tool or the web console.
One of the key requirements is to use the default OpenShift Container Runtime preset, which is what Ansible Automation Platform expects.
Here are the key characteristics of Red Hat OpenShift Local to keep in mind:
- It uses the default OpenShift Container Runtime preset
- Its cluster settings are not customizable
You'll also be running a virtual machine, so your laptop needs an operating system that supports virtualization and enough CPU, memory, and disk for the cluster.
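A typical first run of OpenShift Local looks roughly like this; the exact prompts and credentials will differ on your machine:

```bash
# One-time host setup (virtualization checks, networking, cache)
crc setup

# Start the single-node cluster; you'll be asked for your Red Hat pull secret
crc start

# Put the bundled oc binary on your PATH and print the login credentials
eval $(crc oc-env)
crc console --credentials
```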
Deploying Containers
Deploying containers with Ansible on OpenShift involves using a playbook to automate the process. You can deploy a simple container with fixed resource definitions using a playbook found in the appendix, which needs to be updated with your environment specifics.
To deploy the simple container, run the Ansible playbook with the updated file; the output should confirm the resource spec of the running object.
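The appendix playbook isn't reproduced here, but its core is a container spec with fixed requests and limits. A sketch of that fragment, with purely illustrative values, looks like this:

```yaml
# Fragment of the Deployment manifest used by the playbook (illustrative values)
containers:
  - name: tutorial-app
    image: quay.io/example/tutorial-app:latest  # placeholder image
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```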
You can also deploy a container with dynamic references to resources by using the aws_ssm lookup function in your Ansible playbook. This involves creating parameters in your AWS environment and then referencing them in your playbook. The parameters can be created using the AWS CLI, and the playbook can be updated to use the dynamic references.
Here are the steps to deploy a container with dynamic references:
- Create parameters in your AWS environment using the AWS CLI.
- Update your Ansible playbook to use the aws_ssm lookup function and reference the parameters.
- Rerun the Ansible playbook and verify that the resource spec within the running pods has been modified to conform with what's specified in Parameter Store.
Deploying a Container with Dynamic Resource References
To deploy a container with dynamic resource references, you need to ensure the parameters exist in the AWS Parameter Store. You can quickly insert some parameters into your AWS environment using the CLI.
Using the AWS CLI, you can create four parameters: cpu_request, cpu_limit, mem_request, and mem_limit. For example, you can create the cpu_request parameter with the value '250m' using the command `aws ssm put-parameter --name '/tutorial/cpu_request' --value '250m' --type String`.
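The remaining three parameters follow the same pattern; the values shown below are examples, not required ones:

```bash
aws ssm put-parameter --name '/tutorial/cpu_limit'   --value '500m'  --type String
aws ssm put-parameter --name '/tutorial/mem_request' --value '256Mi' --type String
aws ssm put-parameter --name '/tutorial/mem_limit'   --value '512Mi' --type String
```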
You can insert dynamic references into your Ansible playbook using the aws_ssm lookup function. This function allows you to retrieve the values of the parameters from the AWS Parameter Store and use them in your playbook.
To use the aws_ssm lookup function, you need to edit the resource spec of the manifest in your playbook. For example, you can reference the cpu_request parameter with `cpu: "{{ lookup('aws_ssm', '/tutorial/cpu_request') }}"`.
Once you have updated your playbook, re-run it and verify that the resource spec within the running pods has been modified to match what's specified in Parameter Store.
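Putting it all together, the resources block in the manifest ends up looking something like this, with the parameter paths matching the ones created above (the aws_ssm lookup comes from the AWS collections and needs boto3 plus valid AWS credentials on the control node):

```yaml
# Resource spec resolved from AWS Parameter Store when the playbook runs
resources:
  requests:
    cpu: "{{ lookup('aws_ssm', '/tutorial/cpu_request') }}"
    memory: "{{ lookup('aws_ssm', '/tutorial/mem_request') }}"
  limits:
    cpu: "{{ lookup('aws_ssm', '/tutorial/cpu_limit') }}"
    memory: "{{ lookup('aws_ssm', '/tutorial/mem_limit') }}"
```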
Crash Course
Historically, Ansible Automation Platform has been installed on virtual machines (VMs).
To install, manage, and upgrade Ansible Automation Platform on OpenShift instead, you need to become familiar with some basic OpenShift terminology and understand the new landscape.
We certainly won't cover everything, but this crash course will give you a good understanding of the concepts you need to know.
Container Management
Container management is a crucial aspect of working with OpenShift, and it's built on top of Kubernetes foundations.
Each OpenShift cluster is made up of one or more nodes, which are virtual or bare-metal machines that provide the runtime environment.
A typical cluster contains nodes that run on Red Hat Enterprise Linux CoreOS (RHCOS), a container-optimized operating system.
Each node runs one or more pods. A pod is an immutable object that defines and runs one or more containers on a single node.
Pods have their own internal IP address and own their entire port space, and containers within pods can share local storage and networking.
Pods also follow a specific lifecycle, which begins with their configuration, including specific CPU and memory settings to ensure proper performance.
Containers represent a basic unit of an OpenShift application, comprising the application code, dependencies, libraries, and binaries.
Containers consume compute resources from the node they run on, which are measurable quantities that can be requested, allocated, limited, and consumed.
Here's a quick recap of these pod characteristics:
- Pods are configured with specific CPU and Memory to ensure proper performance.
- Containers within pods can share local storage and networking.
- Pods have their own internal IP address and own their entire port space.
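To make the CPU and memory requests and limits mentioned above concrete, here is a minimal, hypothetical pod manifest; the name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                       # hypothetical name
spec:
  containers:
    - name: app
      image: quay.io/example/app:latest   # placeholder image
      resources:
        requests:                         # what the scheduler reserves on a node
          cpu: "250m"
          memory: "128Mi"
        limits:                           # hard ceiling enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```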
Network and Storage
OpenShift builds on Kubernetes concepts for both networking and storage; the two sections below cover what you need to know.
Network Management
Network management is crucial for any cluster, and OpenShift handles it in a way that works well for Ansible Automation Platform components such as the Automation Hub.
Each pod is allocated an internal IP address and owns its entire port space, so it can be treated much like a physical host or virtual machine for port allocation, naming, service discovery, load balancing, application configuration, and migration.
Containers within the same pod share that network space and behave as if they were running on the same host, while clients outside the cluster have no direct access to these internal pod networks.
This setup allows for internal load-balancing and service discovery across pods, making it easy for applications to talk to each other via services.
Routes then map real-world URLs to those services, giving clients outside the environment a way to reach them.
Here's a quick rundown of the key benefits of this network management system:
- Internal IP addresses for pods
- Same network space for pods and containers
- Internal load-balancing and service discovery
- Services accessible via real-world URLs
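As a hedged illustration of these pieces, a Service that load-balances across pods and a Route that exposes it externally might look like this; the names and hostname are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app                       # targets pods carrying this label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-app
spec:
  host: hello-app.apps.example.com       # placeholder real-world URL
  to:
    kind: Service
    name: hello-app
```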
Storage Management
Storage management is handled through two major mechanisms: ephemeral storage and persistent storage.
Ephemeral storage is designed for stateless applications, where pods and containers are transient and their data does not need to be retained.
Stateful applications, by contrast, need persistent storage to retain data. This is where Persistent Volumes come in: pre-provisioned storage that cluster administrators make available to the cluster.
Persistent Volumes hold data that exists beyond the lifecycle of an individual pod, and applications claim a portion of a Persistent Volume through a Persistent Volume Claim.
Many different storage types are supported, including raw block and file storage. Some are read-write-once, while others are read-write-many.
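A minimal sketch of a Persistent Volume Claim requesting read-write-once storage; the name and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                         # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                      # single-node read-write; ReadWriteMany allows shared access
  resources:
    requests:
      storage: 10Gi
```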
Installation
To install the Ansible Automation Platform on OpenShift, you'll need to follow these steps.
First, ensure you're logged in as the administrator account kubeadmin. Navigate to the OpenShift web console and click on Operators -> OperatorHub.
To find the Ansible Automation Platform Operator, type 'aap' in the search field. Select the Ansible Automation Platform Operator and click the blue Install button.
You'll be given installation options, but it's recommended to leave the default options and click Install. Wait for the Installing Operator message to change to Installed operator - ready for use.
You can watch the pods using the command `oc get pods -n ansible-automation-platform`.
By default, the Ansible Automation Platform Operator creates a managed PostgreSQL pod for the Automation Controller. This pod is created in the same namespace as your Ansible Automation Platform deployment.
The Automation Controller requires a few things to be in place before you can log in. You should see the following pods created and running:
- Managed PostgreSQL pod
- Automation Controller pod
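Once those pods are running, the admin password generally lives in a secret the Operator creates; assuming the usual <instance-name>-admin-password naming convention and an instance named controller, you can retrieve it and find the login URL like this:

```bash
# Adjust the instance name and namespace to match your deployment
oc get secret controller-admin-password \
  -n ansible-automation-platform \
  -o jsonpath='{.data.password}' | base64 --decode

# The web UI is exposed as a Route in the same namespace
oc get route -n ansible-automation-platform
```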
When you install the Automation Hub, the Operator likewise creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment, just as it does for the Automation Controller.
Scaling
Scaling your Automation Controller in Ansible OpenShift is a straightforward process. You can scale it up or down by leveraging the Replicas property in the Automation Controller managed by the Operator.
To do this, you need to navigate to the Operators section, select the Ansible Automation Platform operator, and then select the Automation Controller tab. From there, click on the three dots on the right-hand side of the listed Automation Controller and select Edit Automation Controller.
In the YAML, scroll to the spec section and look for the replicas property. This is where you can change the number to scale up or down.
Here's a step-by-step guide to scaling your Automation Controller:
- Navigate to Operators -> Installed Operators
- Select the Ansible Automation Platform operator
- Select the Automation Controller tab
- Click the three dots on the right-hand side of the listed Automation Controller
- Click Edit Automation Controller
- In the YAML, scroll to the spec section and find the replicas property
- Change the number to scale up or down
- Click the blue Save button
- Navigate to Workloads -> Pods and watch the Operator scale your Automation Controller pods up or down
Note that the Operator will generate unique pod names based on the name you gave the Automation Controller, but it will still create only one instance of the PostgreSQL pod.
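For reference, the relevant portion of the Automation Controller resource you edit looks roughly like this; the API version shown is an assumption that may differ between Operator releases, and the name and namespace are placeholders:

```yaml
apiVersion: automationcontroller.ansible.com/v1beta1   # may vary by Operator version
kind: AutomationController
metadata:
  name: controller                       # the name you gave your instance
  namespace: ansible-automation-platform
spec:
  replicas: 3                            # scale the Automation Controller pods here
```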
Frequently Asked Questions
What is OpenShift Ansible?
OpenShift Ansible is not a single product, but rather a combination of two separate platforms: Red Hat OpenShift for containerized application deployment, and Red Hat Ansible Automation Platform for enterprise-wide automation.
Is Ansible part of Red Hat?
Yes, Ansible has been part of Red Hat since its acquisition in October 2015. Ansible is now a core technology within the Red Hat portfolio.