
The OpenShift API is a powerful tool for managing and automating your applications. It's a RESTful API, which means you can interact with it using standard HTTP requests.
To get started with the OpenShift API, you'll need to know the base URL of your cluster's API server, which is specific to your cluster and is typically of the form `https://api.<cluster_domain>:6443` (it is shown as the Cluster API Address in the web console). This is where you'll send your API requests to interact with your OpenShift cluster.
The OpenShift API supports a wide range of operations, including creating and managing projects, users, and applications. You can also use it to inspect and modify the configuration of your applications and projects.
One of the most useful features of the OpenShift API is its support for authentication and authorization. This allows you to securely interact with your OpenShift cluster and ensure that only authorized users can make changes.
Authentication
Authentication is crucial for securing the OpenShift Container Platform API. Requests to the API are authenticated using one of two methods: OAuth access tokens or X.509 client certificates.
An access token is obtained from the cluster's OAuth server and sent with a request as an Authorization: Bearer header, an access_token query parameter, or a websocket subprotocol header.
Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error.
Here are the details of the X.509 client certificate method:
- Requires an HTTPS connection to the API server.
- Verified by the API server against a trusted certificate authority bundle.
- The API server creates and distributes certificates to controllers to authenticate themselves.
If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request.
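To make this concrete, here is a minimal Python sketch of an authenticated request using the requests library. The cluster address, token value, and CA bundle path are placeholders you would replace with your own.

```python
import requests

# Placeholder values: substitute your cluster's API address, a valid OAuth
# access token, and the CA bundle used to verify the API server certificate.
API_SERVER = "https://api.example.openshift.local:6443"
TOKEN = "sha256~REPLACE_WITH_YOUR_TOKEN"

# The access token is sent in an Authorization: Bearer header over HTTPS.
resp = requests.get(
    f"{API_SERVER}/apis",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify="/path/to/ca.crt",
)
print(resp.status_code)  # 200 when the token is accepted; 401 when it is rejected
```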
Adding a Role to a Service Account
Adding a role to a service account requires two requests, where the second request type is dependent on the response of the first.
To begin, you need to make a GET request to the policybindings subresource of the namespace, which returns the PolicyBindingList, listing all the RoleBinding configurations in the namespace.
This list will tell you if the intended role is already present in the namespace, or if you need to create it.
If the role is not listed, you'll need to send a POST request to create the RoleBinding configuration, including the service account as a subject for the role, and the system:serviceaccount:{$PROJECT}:{$SERVICEACCOUNT} user name that distinguishes the service account.
If the role is already listed, you can update the returned RoleBinding configuration with the subject and user name of the service account being added to the role.
To do this, you'll need to send a PUT request to the role name in the rolebindings subresource of the namespace.
Here's a summary of the two possible scenarios:
- The role is not yet listed in the namespace: POST a new RoleBinding configuration to the rolebindings subresource.
- The role is already listed: update the returned RoleBinding configuration and PUT it to the role name in the rolebindings subresource.
In either case, remember to include the service account as a subject for the role, along with the system:serviceaccount:{$PROJECT}:{$SERVICEACCOUNT} user name that identifies the service account.
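As a rough illustration, here is a Python sketch of that flow. All values are placeholders, and the /oapi/v1 paths follow the OpenShift 3.x-era REST API that this workflow describes; newer clusters expose role bindings under different API groups.

```python
import requests

# Placeholders; adapt the address, token, names, and paths to your cluster.
API = "https://api.example.openshift.local:8443"
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}
CA = "/path/to/ca.crt"
PROJECT, SERVICEACCOUNT, ROLE = "my-project", "builder-bot", "edit"

# 1. GET the policybindings subresource to see which roles are already bound.
bindings = requests.get(f"{API}/oapi/v1/namespaces/{PROJECT}/policybindings",
                        headers=HEADERS, verify=CA).json()
print([b["metadata"]["name"] for b in bindings["items"]])

subject = {"kind": "ServiceAccount", "name": SERVICEACCOUNT, "namespace": PROJECT}
username = f"system:serviceaccount:{PROJECT}:{SERVICEACCOUNT}"

# 2a. If the role is not listed, POST a new RoleBinding configuration.
new_binding = {
    "kind": "RoleBinding",
    "apiVersion": "v1",
    "metadata": {"name": ROLE, "namespace": PROJECT},
    "roleRef": {"name": ROLE},
    "subjects": [subject],
    "userNames": [username],
}
requests.post(f"{API}/oapi/v1/namespaces/{PROJECT}/rolebindings",
              headers=HEADERS, json=new_binding, verify=CA)

# 2b. If the role is already listed, add the subject and user name to the
#     returned RoleBinding and PUT it back to the role name instead:
# requests.put(f"{API}/oapi/v1/namespaces/{PROJECT}/rolebindings/{ROLE}",
#              headers=HEADERS, json=updated_binding, verify=CA)
```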
Status.Conditions[]
Status.conditions[] is a crucial part of the API Resource's state, providing details on various aspects of its current status.
The conditions struct is intended for direct use as an array at the field path .status.conditions, with several properties that offer valuable insights into the resource's state.
The type property of each condition is a CamelCase string, with known types including "Available", "Progressing", and "Degraded", allowing for deconfliction across resources.
Here's a breakdown of the condition properties:
- type: the CamelCase name of the condition, such as "Available".
- status: the state of the condition, one of True, False, or Unknown.
- observedGeneration: the .metadata.generation the condition was set based upon.
- lastTransitionTime: when the condition last changed from one status to another.
- reason: a programmatic, CamelCase identifier for the condition's last transition.
- message: a human-readable description of the transition.
The observedGeneration property is particularly useful, as it indicates the .metadata.generation that the condition was set based upon, helping to detect outdated conditions.
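As an illustration (not an official client), here is a short Python sketch that fetches a hypothetical resource and flags conditions whose observedGeneration lags behind .metadata.generation. The resource path, names, and credentials are placeholders.

```python
import requests

API = "https://api.example.openshift.local:6443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
# Hypothetical resource path; any resource exposing .status.conditions works.
URL = f"{API}/apis/apps.openshift.io/v1/namespaces/my-project/deploymentconfigs/frontend"

obj = requests.get(URL, headers=HEADERS, verify="/path/to/ca.crt").json()
generation = obj["metadata"]["generation"]

for cond in obj.get("status", {}).get("conditions", []):
    # A condition is stale when it was set against an older generation.
    stale = cond.get("observedGeneration", generation) < generation
    print(f'{cond["type"]}={cond["status"]} '
          f'(reason={cond.get("reason")}, stale={stale})')
```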
Apis/V1/ApiRequestCounts/{Name}
The Apis/V1/ApiRequestCounts/{Name} endpoint (full path: /apis/apiserver.openshift.io/v1/apirequestcounts/{name}) is used for partially updating a specific APIRequestCount with a PATCH request. The request body contains the APIRequestCount fields being updated.

You'll need to provide the name of the APIRequestCount you want to update in the path parameter. The endpoint will return the updated APIRequestCount schema with a 200 - OK HTTP code.
If the update is successful, the updated APIRequestCount schema will be returned in the response body. If the request is unauthorized, an empty response will be returned with a 401 - Unauthorized HTTP code.
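Here is a minimal Python sketch of such a partial update, using a standard JSON merge patch. The cluster address, token, and the APIRequestCount name are placeholders; spec.numberOfUsersToReport is used as an example field.

```python
import requests

API = "https://api.example.openshift.local:6443"        # placeholder
HEADERS = {
    "Authorization": "Bearer sha256~REPLACE_ME",         # placeholder
    "Content-Type": "application/merge-patch+json",      # JSON merge patch
}

# Partial APIRequestCount: only the fields being changed are sent.
patch = {"spec": {"numberOfUsersToReport": 10}}

resp = requests.patch(
    f"{API}/apis/apiserver.openshift.io/v1/apirequestcounts/pods.v1",  # placeholder name
    headers=HEADERS, json=patch, verify="/path/to/ca.crt",
)
print(resp.status_code)   # 200 on success, 401 if the request is unauthorized
print(resp.json())        # the updated APIRequestCount
```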
Projects
To list the details for a project in OpenShift API, you'll need to make three specific requests. These requests are essential for fetching the necessary configurations for the project namespace.
A GET request to the project name in the projects resource is the first step. This request returns the Project configuration for the project namespace.
The second request is a GET request to the resourcequotas subresource of the project namespace, which returns the ResourceQuotaList configuration. The third request is a GET request to the limitranges subresource of the project namespace, which returns the LimitRangeList configuration.
Here's a summary of the requests needed to list a project in OpenShift API:
- GET request to the project name in the projects resource
- GET request to the resourcequotas subresource of the project namespace
- GET request to the limitranges subresource of the project namespace
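A minimal Python sketch of these three requests follows; all values are placeholders, and the /oapi/v1 project path follows the 3.x-era docs this section is based on (newer clusters expose projects under project.openshift.io/v1).

```python
import requests

API = "https://api.example.openshift.local:8443"         # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}   # placeholder
PROJECT = "my-project"
CA = "/path/to/ca.crt"

# 1. Project configuration for the project namespace.
project = requests.get(f"{API}/oapi/v1/projects/{PROJECT}",
                       headers=HEADERS, verify=CA).json()

# 2. ResourceQuotaList for the namespace.
quotas = requests.get(f"{API}/api/v1/namespaces/{PROJECT}/resourcequotas",
                      headers=HEADERS, verify=CA).json()

# 3. LimitRangeList for the namespace.
limits = requests.get(f"{API}/api/v1/namespaces/{PROJECT}/limitranges",
                      headers=HEADERS, verify=CA).json()

print(project["metadata"]["name"], len(quotas["items"]), len(limits["items"]))
```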
All Projects

Listing all projects in an environment is a straightforward process. It requires a single GET request to the projects resource.
This request returns a ProjectList configuration, which is a collection of project details. You can use this information to manage and organize your projects.
To get the list of projects, you can use a tool like a REST client or a programming language that supports HTTP requests. Simply send a GET request to the projects resource, and you'll receive the list of projects in the ProjectList configuration.
Here's a brief summary of the steps involved in listing all projects in an environment:
- Send a GET request to the projects resource.
- Receive a ProjectList configuration in response.
- Use the ProjectList configuration to manage and organize your projects.
By following these simple steps, you can easily list all projects in an environment and take the next steps in managing your projects.
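For instance, a minimal Python sketch of that single request might look like this; the cluster address, token, and CA path are placeholders, and the /oapi/v1 path again follows the 3.x docs (newer clusters serve the same list under project.openshift.io/v1).

```python
import requests

API = "https://api.example.openshift.local:8443"         # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}   # placeholder

projects = requests.get(f"{API}/oapi/v1/projects",
                        headers=HEADERS, verify="/path/to/ca.crt").json()

for item in projects["items"]:          # ProjectList
    print(item["metadata"]["name"])
```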
Settings
Settings are a crucial part of any project, and understanding how to configure them is essential for success. To list a specific service account configuration, you'll need to make a GET request to the service account name in the serviceaccounts subresource of the namespace.

This request returns the ServiceAccount configuration for the specified service account, providing you with the necessary information to fine-tune your settings.
Here's a step-by-step guide to listing a specific service account configuration:
- Send a GET request to the service account name in the serviceaccounts subresource of the namespace.
- The GET request returns the ServiceAccount configuration for the specified service account.
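A minimal Python sketch of that request, with placeholder cluster details and names, might look like this:

```python
import requests

API = "https://api.example.openshift.local:6443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder

# Core v1 path: the service account name in the serviceaccounts subresource.
sa = requests.get(
    f"{API}/api/v1/namespaces/my-project/serviceaccounts/builder",
    headers=HEADERS, verify="/path/to/ca.crt",
).json()
print(sa["metadata"]["name"], [s["name"] for s in sa.get("secrets", [])])
```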
Secrets
To manage secrets in OpenShift API, you'll need to know how to list and delete them. Listing all secrets in a namespace requires just one request: a GET request to the secrets subresource of the namespace.
This GET request returns a SecretList, which details all secrets in the namespace. I've found this to be a useful feature when working with multiple secrets.
Deleting a secret is a straightforward process. To delete a secret, you'll need to send a DELETE request to the secret name in the secrets subresource of the namespace.
The DELETE request returns a 200 code, confirming that the secret has been deleted. This is a simple yet effective way to manage your secrets in OpenShift API.
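Here is a minimal Python sketch of both operations; the cluster details, namespace, and secret name are placeholders.

```python
import requests

API = "https://api.example.openshift.local:6443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
NS, CA = "my-project", "/path/to/ca.crt"

# List: GET the secrets subresource of the namespace (returns a SecretList).
secrets = requests.get(f"{API}/api/v1/namespaces/{NS}/secrets",
                       headers=HEADERS, verify=CA).json()
print([s["metadata"]["name"] for s in secrets["items"]])

# Delete: DELETE the secret name in the secrets subresource of the namespace.
resp = requests.delete(f"{API}/api/v1/namespaces/{NS}/secrets/my-old-secret",
                       headers=HEADERS, verify=CA)
print(resp.status_code)   # 200 confirms the deletion
```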
Claims

Creating a new persistent volume claim requires a POST request to the persistentvolumeclaims subresource of the namespace. The POST request uses a PersistentVolumeClaim configuration in the request body.
To list all persistent volume claims in an environment, you need to make a GET request to the persistentvolumeclaims resource. This returns the PersistentVolumeClaimList, listing the details of all persistent volume claims in the environment.
You can also list all persistent volume claims in a namespace by making a GET request to the persistentvolumeclaims subresource of the namespace. This also returns the PersistentVolumeClaimList, listing the details of all persistent volume claims in the namespace.
Here's a summary of the requests you can make to manage persistent volume claims:
- Create: POST a PersistentVolumeClaim configuration to the persistentvolumeclaims subresource of the namespace.
- List all in the environment: GET the persistentvolumeclaims resource, which returns a PersistentVolumeClaimList.
- List all in a namespace: GET the persistentvolumeclaims subresource of the namespace, which also returns a PersistentVolumeClaimList.
- Delete: DELETE the persistent volume claim name in the persistentvolumeclaims subresource of the namespace.
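The following Python sketch shows the create and delete requests; the cluster details, namespace, claim name, and storage request are placeholders.

```python
import requests

API = "https://api.example.openshift.local:6443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
NS, CA = "my-project", "/path/to/ca.crt"

# Create: POST a PersistentVolumeClaim configuration to the namespace.
claim = {
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {"name": "data-claim"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "1Gi"}},
    },
}
requests.post(f"{API}/api/v1/namespaces/{NS}/persistentvolumeclaims",
              headers=HEADERS, json=claim, verify=CA)

# Delete: DELETE the claim name in the persistentvolumeclaims subresource.
requests.delete(f"{API}/api/v1/namespaces/{NS}/persistentvolumeclaims/data-claim",
                headers=HEADERS, verify=CA)
```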
All Claims
Listing all persistent volume claims in a namespace is a straightforward process. You can do this with a single GET request to the persistentvolumeclaims subresource of the namespace.

This request returns the PersistentVolumeClaimList, which lists the details of all persistent volume claims in the namespace. You can use this list to keep track of all your persistent volume claims.
To get started, you'll need to send a GET request to the correct endpoint. The endpoint is the persistentvolumeclaims subresource of the namespace.
Here's a step-by-step guide to help you:
- Send a GET request to the persistentvolumeclaims subresource of the namespace.
- The GET request returns the PersistentVolumeClaimList.
By following these simple steps, you can easily list all persistent volume claims in a namespace.
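A minimal Python sketch of the listing request, with placeholder cluster details and namespace, might look like this:

```python
import requests

API = "https://api.example.openshift.local:6443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder

pvcs = requests.get(
    f"{API}/api/v1/namespaces/my-project/persistentvolumeclaims",
    headers=HEADERS, verify="/path/to/ca.crt",
).json()                                   # PersistentVolumeClaimList
for pvc in pvcs["items"]:
    print(pvc["metadata"]["name"], pvc["status"].get("phase"))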
Status by Node and User
The current hour's status by node and user can be accessed through the .status.currentHour.byNode[].byUser field, which contains request details by the top .spec.numberOfUsersToReport users.
This list might not be entirely precise, because the top users are determined by the API servers on a best-effort basis.
Some system users may be explicitly included in the list, so keep that in mind when reviewing the data.
PerUserAPIRequestCount logs a user's requests, providing a record of their activity.
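As an illustration, the Python sketch below fetches an APIRequestCount and walks the .status.currentHour.byNode[].byUser[] records; the cluster details and the object name are placeholders, and the exact field names should be checked against your cluster's schema.

```python
import requests

API = "https://api.example.openshift.local:6443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder

arc = requests.get(
    f"{API}/apis/apiserver.openshift.io/v1/apirequestcounts/pods.v1",  # placeholder name
    headers=HEADERS, verify="/path/to/ca.crt",
).json()

# Walk .status.currentHour.byNode[].byUser[] and print per-user request counts.
status = arc.get("status", {})
for node in status.get("currentHour", {}).get("byNode", []):
    for user in node.get("byUser", []):
        print(node.get("nodeName"), user.get("username"), user.get("requestCount"))
```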
Builds

To create a new buildconfig, you'll need to make a POST request to the buildconfigs subresource of the namespace. This request uses a BuildConfig configuration in the request body.
To list a specific build's configuration, you can make a GET request to the build name in the builds subresource of the namespace. This request returns the Build configuration for the specified build.
Here's a summary of the requests you can make to manage builds:
- Creating a new buildconfig: POST to buildconfigs subresource
- Listing a specific build configuration: GET to build name in builds subresource
- Starting a build: POST to instantiate subresource of BuildConfig name
- Deleting a build: DELETE to build name in builds subresource
Remember to use the correct request method and resource endpoint for each action to ensure successful execution.
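As a rough sketch, here are the create and start requests in Python. All names, the Git URI, and the /oapi/v1 paths are placeholders following the 3.x-era docs; newer clusters serve builds under build.openshift.io/v1.

```python
import requests

API = "https://api.example.openshift.local:8443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
NS, CA = "my-project", "/path/to/ca.crt"

# Create a BuildConfig: POST a BuildConfig configuration to the buildconfigs
# subresource of the namespace.
buildconfig = {
    "kind": "BuildConfig",
    "apiVersion": "v1",
    "metadata": {"name": "my-app"},
    "spec": {
        "source": {"git": {"uri": "https://github.com/example/my-app.git"}},
        "strategy": {"sourceStrategy": {"from": {"kind": "ImageStreamTag",
                                                 "name": "python:latest"}}},
        "output": {"to": {"kind": "ImageStreamTag", "name": "my-app:latest"}},
    },
}
requests.post(f"{API}/oapi/v1/namespaces/{NS}/buildconfigs",
              headers=HEADERS, json=buildconfig, verify=CA)

# Start a build: POST a BuildRequest to the instantiate subresource.
requests.post(f"{API}/oapi/v1/namespaces/{NS}/buildconfigs/my-app/instantiate",
              headers=HEADERS,
              json={"kind": "BuildRequest", "apiVersion": "v1",
                    "metadata": {"name": "my-app"}},
              verify=CA)
```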
Deleting
Deleting a build is a straightforward process, requiring just one request: a DELETE request to the build name in the builds subresource of the namespace, as noted above.
Deleting a persistent volume works the same way: a single DELETE request to the persistent volume name in the persistentvolumes resource is all it takes.
The response to either request is a 200 code, indicating that the resource has been successfully deleted.
Here's a quick rundown of the steps to delete a persistent volume:
- A DELETE request to the persistent volume name in the persistentvolumes resource.
This process is efficient and gets the job done, with a successful deletion confirmed by the 200 response code.
All BuildConfigs
To view all BuildConfigs in an OpenShift environment, you'll need to make a GET request to the buildconfigs resource. This will return a BuildConfigList, listing the details of all BuildConfigs in the environment.
You can also list all BuildConfigs in a specific namespace by making a GET request to the buildconfigs subresource of the namespace. This will also return a BuildConfigList, but this time listing the details of all BuildConfigs within that namespace.
Alternatively, you can list all BuildConfigs in an environment using a GET request to the builds resource. This will return a BuildList, but it's worth noting that this method is not specific to BuildConfigs and will include all builds in the environment.

Here's a summary of the methods to list BuildConfigs and builds:
- A GET request to the buildconfigs resource returns a BuildConfigList for the whole environment.
- A GET request to the buildconfigs subresource of a namespace returns a BuildConfigList for that namespace.
- A GET request to the builds resource returns a BuildList covering all builds in the environment.
Keep in mind that each of these methods will return different information, so you'll need to choose the one that best fits your needs.
Cluster
To list all the images in a cluster, you'll need to send a single GET request to the images resource. This request returns the ImageList, which contains the details of all images in the cluster.
The process is straightforward, requiring only one request to get the job done. You can then use this information to manage your images efficiently.
Here are the specific steps to list all images in a cluster:
- A GET request to the images resource.
Similarly, listing all image streams in a cluster requires a single GET request to the imagestreams resource. This returns the ImageStreamList, which contains the details of all image streams in the cluster.
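Both requests are sketched below in Python; the cluster details are placeholders, and the /oapi/v1 paths follow the 3.x docs (newer clusters serve the same data under image.openshift.io/v1).

```python
import requests

API = "https://api.example.openshift.local:8443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
CA = "/path/to/ca.crt"

# ImageList: every image in the cluster.
images = requests.get(f"{API}/oapi/v1/images", headers=HEADERS, verify=CA).json()

# ImageStreamList: every image stream in the cluster.
streams = requests.get(f"{API}/oapi/v1/imagestreams", headers=HEADERS, verify=CA).json()

print(len(images["items"]), "images,", len(streams["items"]), "image streams")
```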
All in One Cluster
Listing all the images in a cluster is a straightforward process that requires just one request.
You can do this by sending a GET request to the images resource. This request returns the ImageList, which provides the details of all images in the cluster.
To get started, you'll need to make a GET request to the images resource.
Streams in a Cluster

Managing streams in a cluster is a straightforward process. You can list all the image streams in a cluster with a single GET request to the imagestreams resource.
This request returns the ImageStreamList, which includes the details of all image streams in the cluster. You can then use this information to manage your streams as needed.
To list all image streams, you'll need to make a GET request to the imagestreams resource. This is a simple and efficient way to get an overview of your cluster's streams.
For All Pods
Listing all pods in a namespace is a straightforward process that requires just one request. You can achieve this by sending a GET request to the pods subresource of the namespace.
The GET request returns a PodList, which contains the details of all pods in the namespace. This is a quick and efficient way to get an overview of all the pods in a given namespace.
If you need to list all pods in a specific namespace, you can use the following approach. Here are the steps:
- A GET request to the pods subresource of the namespace.
This single request will return a PodList with the details of all pods in the namespace.
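A minimal Python sketch of that request, with placeholder cluster details and namespace, might look like this:

```python
import requests

API = "https://api.example.openshift.local:6443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder

pods = requests.get(f"{API}/api/v1/namespaces/my-project/pods",
                    headers=HEADERS, verify="/path/to/ca.crt").json()  # PodList
for pod in pods["items"]:
    print(pod["metadata"]["name"], pod["status"]["phase"])
```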
Deployment
Deployment is a crucial aspect of the OpenShift API. Rolling back a deployment involves two requests: a POST request to the rollback subresource of the deployment and a PUT request to the deployment name in the deploymentconfigs subresource of the namespace.
The POST request body specifies which elements of the deployment configuration version spec are to be included in the returned deploymentconfig. This request returns a new deploymentconfig that represents the rollback state and can be used verbatim in the request body of the PUT request.
To roll back a deployment, you'll need to specify the elements to include in the rollback state, such as triggers, template, replication meta, and strategy parameters. The Rollback Deployment section below outlines the field descriptions for the rollback request.
You can also scale a deployment by making two requests: a GET request to the scale subresource of the deployment and a PUT request with an updated replicas value to the scale subresource of the deployment.
Idling a DeploymentConfig
Idling a DeploymentConfig can be a useful strategy to reduce costs and optimize resource utilization.
If the application behind a DeploymentConfig is not receiving traffic, it can be idled to stop consuming resources. This is especially useful for applications that are used only occasionally.
Idling a DeploymentConfig doesn't delete the configuration; it scales the application down to zero replicas and automatically unidles it again when network traffic arrives.
To idle a DeploymentConfig, you can use the `oc idle` command against the service that fronts the application, which scales down the associated DeploymentConfig until it is needed again.
Rollback Deployment
To roll back a deployment, you'll need to make two requests: a POST request to the rollback subresource of the deployment, and a PUT request to the deployment name in the deploymentconfigs subresource of the namespace.
The POST request body specifies which elements of the deployment configuration version spec to be included in the returned deploymentconfig. This returned deploymentconfig represents the rollback state and can be used verbatim in the request body of the PUT request.
The PUT request body sends the returned deploymentconfig to the deployment name in the subresource, triggering a new deployment that effectively rolls back the deployment to the version specified in the returned deploymentconfig.
Here are the field descriptions for the rollback request:
- from: a reference to the deployment (the replication controller) whose configuration you want to roll back to.
- includeTriggers: whether to include the old deployment configuration's triggers in the rollback state.
- includeTemplate: whether to include the old pod template spec.
- includeReplicationMeta: whether to include the replica count and selector.
- includeStrategy: whether to include the old deployment strategy.
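As a rough illustration, here is a Python sketch of that two-request flow. The cluster details, namespace, and deployment names are placeholders, the /oapi/v1 paths follow the 3.x-era docs this section is based on, and the exact shape of the DeploymentConfigRollback body should be checked against your cluster's API version.

```python
import requests

API = "https://api.example.openshift.local:8443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
NS, DC, CA = "my-project", "frontend", "/path/to/ca.crt"

# 1. POST a DeploymentConfigRollback to the rollback subresource. The response
#    is a deploymentconfig describing the rollback state.
rollback = {
    "kind": "DeploymentConfigRollback",
    "apiVersion": "v1",
    "spec": {
        "from": {"name": f"{DC}-2"},       # earlier deployment to roll back to
        "includeTemplate": True,
        "includeTriggers": False,
        "includeReplicationMeta": False,
        "includeStrategy": False,
    },
}
rolled_back = requests.post(
    f"{API}/oapi/v1/namespaces/{NS}/deploymentconfigs/{DC}/rollback",
    headers=HEADERS, json=rollback, verify=CA,
).json()

# 2. PUT the returned deploymentconfig back to the deployment name verbatim,
#    which triggers a new deployment that performs the rollback.
requests.put(f"{API}/oapi/v1/namespaces/{NS}/deploymentconfigs/{DC}",
             headers=HEADERS, json=rolled_back, verify=CA)
```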
Scaling a Deployment
Scaling a deployment is a straightforward process that involves two requests. The first is a GET request to the scale subresource of the deployment, which returns the Scale configuration.
The GET request is optional if you can provide an updated Scale configuration for the PUT request. This configuration is used to update the spec, specifically the replicas parameter in the Scale configuration.
To send the configuration, you'll need to make a PUT request to the scale subresource of the deployment. This request includes the updated Scale configuration as the request body.
To give you a better idea, here's a high-level overview of the process:
- A GET request to the scale subresource of the deployment.
- A PUT request with an updated replicas value to the scale subresource of the deployment.
This two-step process allows you to scale your deployment with ease, making it a crucial part of managing your application's resources.
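Here is a minimal Python sketch of the GET-then-PUT scaling flow, assuming placeholder cluster details and the legacy /oapi/v1 path for the deploymentconfigs scale subresource (newer clusters serve it under apps.openshift.io/v1):

```python
import requests

API = "https://api.example.openshift.local:8443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
NS, DC, CA = "my-project", "frontend", "/path/to/ca.crt"
SCALE_URL = f"{API}/oapi/v1/namespaces/{NS}/deploymentconfigs/{DC}/scale"

# 1. GET the current Scale configuration (optional if you can build one yourself).
scale = requests.get(SCALE_URL, headers=HEADERS, verify=CA).json()

# 2. Update spec.replicas and PUT the Scale configuration back.
scale["spec"]["replicas"] = 5
requests.put(SCALE_URL, headers=HEADERS, json=scale, verify=CA)
```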
Scaling a Job

Scaling a job involves two requests: a GET request to retrieve the job configuration and a PUT request to update the parallelism value.
You can make a GET request to the job name in the jobs subresource of the namespace to retrieve the Job configuration. The GET request is optional if you can provide an updated Job configuration in the PUT request.
To update the parallelism parameter, send the updated Job configuration as the request body in a PUT request to the job name in the jobs subresource of the namespace.
Here's a breakdown of the two requests:
- A GET request to the job name in the jobs subresource of the namespace, which returns the Job configuration.
- A PUT request to the same job name with the updated parallelism value in the Job configuration.
Status Conditions
In a deployment, understanding the status conditions is crucial to ensure everything is running smoothly. The .status.conditions field contains details of the current status of this API Resource.
It's like checking the dashboard of your car to see if everything is functioning properly.
Having this information at hand helps you troubleshoot any issues that may arise during the deployment process.
Route
Creating a new route in the OpenShift API is a straightforward process. You'll need to make a POST request to the routes subresource of the namespace, including a Route configuration in the request body.
The fields used to expose a service with a route include labels.app, name, host, port.targetPort, to.name, to.weight, alternateBackends.name, alternateBackends.weight, tls.termination, and wildcardPolicy.
You can use the following fields to customize your route:
- labels.app: a map of string keys and values that can be used to organize and categorize objects
- name: the name of the route
- host: an optional alias/DNS that points to the service
- port.targetPort: the port to be used by the router
- to.name: the name of the service/target used for the route
- to.weight: the weight as an integer between 1 and 256 that specifies the target's relative weight against other targets
- alternateBackends.name: the name of an alternate service/target that is being referred to
- alternateBackends.weight: the weight as an integer between 1 and 256 that specifies the target's relative weight against other targets
- tls.termination: the tls field provides the ability to configure certificates and termination for the route
- wildcardPolicy: the wildcard policy, if any, for the route
You can list a specific route configuration by making a GET request to the route name in the routes subresource of the namespace. This will return the Route configuration for the specified route.
To update a route configuration, you'll need to make two requests: a GET request to the route name, followed by a PATCH request with updated field values.
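To tie the fields together, here is a hedged Python sketch that creates a route using a handful of the fields above. All values and the /oapi/v1 path are placeholders to adapt to your cluster; on newer clusters routes live under route.openshift.io/v1.

```python
import requests

API = "https://api.example.openshift.local:8443"          # placeholder
HEADERS = {"Authorization": "Bearer sha256~REPLACE_ME"}    # placeholder
NS, CA = "my-project", "/path/to/ca.crt"

# Route configuration using the fields described above; values are examples.
route = {
    "kind": "Route",
    "apiVersion": "v1",
    "metadata": {"name": "frontend", "labels": {"app": "frontend"}},
    "spec": {
        "host": "frontend.apps.example.com",
        "port": {"targetPort": 8080},
        "to": {"kind": "Service", "name": "frontend", "weight": 100},
        "tls": {"termination": "edge"},
        "wildcardPolicy": "None",
    },
}

# POST to the routes subresource of the namespace.
resp = requests.post(f"{API}/oapi/v1/namespaces/{NS}/routes",
                     headers=HEADERS, json=route, verify=CA)
print(resp.status_code)
```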
Endpoints
The OpenShift API has several endpoints that allow you to interact with the system.
The API endpoints for Apirequestcounts are /apis/apiserver.openshift.io/v1/apirequestcounts, /apis/apiserver.openshift.io/v1/apirequestcounts/{name}, and /apis/apiserver.openshift.io/v1/apirequestcounts/{name}/status.
You can query these endpoints using various parameters, including pretty, dryRun, and fieldManager.
For more insights, see: Why Are Apis Important

The pretty parameter allows you to specify whether the output should be pretty printed, with a value of true enabling this feature.
The dryRun parameter indicates that modifications should not be persisted, with valid values including All, which processes all dry run stages.
The fieldManager parameter is a name associated with the actor or entity making the changes; the value must be no more than 128 characters long and contain only printable characters.
Frequently Asked Questions
How to get OpenShift API URL?
To get the OpenShift API URL, log in to your Red Hat OpenShift environment and copy the URL under Cluster API Address on the Overview page. This URL is essential for accessing and managing your OpenShift cluster.
How to get OpenShift API token?
To obtain an OpenShift API token, navigate to the Authentication Tokens category and click Generate Token, then select a role that matches your access needs and enter a name for the token. Alternatively, after logging in with the oc CLI you can print your current token with `oc whoami -t`.
Sources
- https://docs.redhat.com/en/documentation/openshift_container_platform/3.5/html-single/using_the_openshift_rest_api/index
- https://github.com/openshift/openshift-restclient-python
- https://docs.okd.io/4.11/rest_api/metadata_apis/apirequestcount-apiserver-openshift-io-v1.html
- https://stackoverflow.com/questions/70040777/using-the-openshift-api-is-there-a-way-to-get-deployments-in-a-project
- https://miminar.fedorapeople.org/_preview/openshift-enterprise/registry-redeploy/rest_api/examples.html