Azure CNI Overlay Networking in AKS Clusters


Azure CNI Overlay networking is a powerful tool for managing network traffic within your AKS clusters. It provides a flexible and scalable way to connect your pods and services.

Azure CNI Overlay Networking is a type of Container Network Interface that uses an overlay network to connect pods in an AKS cluster. This allows for better network isolation and security.

With Azure CNI Overlay Networking, you can create a virtual network that spans multiple nodes in your AKS cluster. This enables you to manage network traffic and security policies at a higher level.

This approach also simplifies network configuration and management, making it easier to deploy and manage your AKS clusters.

Setup and Configuration

To set up an Azure CNI Overlay cluster, you'll need Azure CLI version 2.48.0 or later installed. For Windows, ensure you have the latest aks-preview Azure CLI extension installed.

To create a cluster, use the az aks create command with the --network-plugin-mode argument to specify an overlay cluster. If you don't specify the pod CIDR, AKS will assign a default space: 10.244.0.0/16.
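
As a rough sketch (the resource group, cluster name, and region below are placeholders, and --pod-cidr can be omitted to accept the default), the command looks like this:

    # Create an AKS cluster that uses Azure CNI in overlay mode.
    # Assumes the resource group "myResourceGroup" already exists.
    az aks create \
        --resource-group myResourceGroup \
        --name myOverlayCluster \
        --location eastus \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --pod-cidr 192.168.0.0/16 \
        --generate-ssh-keys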



Azure CNI Overlay clusters can be created with either a /24 or /27 IP range. Using a /27 range is recommended because it conserves IP space for future use and allows more flexibility for processes like blue-green deployments.


Azure CNI Overlay clusters are managed by Azure, and you can use the az aks create command to set one up.

Set Up

To set up an overlay cluster, you'll need to create a cluster with Azure CNI Overlay using the az aks create command.

Use the --network-plugin-mode argument to specify an overlay cluster; this requires Azure CLI version 2.48.0 or later.

For Windows, you'll need the latest aks-preview Azure CLI extension installed.

If the pod CIDR isn't specified, AKS will assign a default space of 10.244.0.0/16.

To set up an overlay cluster with Azure CNI Powered by Cilium, use the argument --network-dataplane cilium to specify the Cilium dataplane.
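
A minimal sketch, reusing the placeholder names from the earlier example, adds just that one argument:

    # Create an overlay cluster that uses the Cilium eBPF dataplane.
    az aks create \
        --resource-group myResourceGroup \
        --name myCiliumCluster \
        --location eastus \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --network-dataplane cilium \
        --generate-ssh-keys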

Beyond these arguments, the rest of the az aks create workflow is the same as for any other AKS cluster.


Setup and Configuration


To set up and configure Azure CNI, you need to consider the version of Kubernetes your cluster is running. If it's on version 1.22 or later, you can update to CNI Overlay. However, this process is non-reversible, so be sure to back up your data first.

To upgrade to CNI Overlay, the cluster can't use the dynamic pod IP allocation feature or have network policies enabled. You'll also need to uninstall Azure Network Policy Manager or Calico if either is installed.

If you're using Windows nodes with Docker as the container runtime, you won't be able to upgrade to CNI Overlay.

To avoid issues with SNATing packets from host network pods, make sure your Windows OS is at least Build 20348.1668.

Upgrading to CNI Overlay will trigger each node pool to be re-imaged simultaneously, so be prepared for some downtime.

Here are the requirements for upgrading to CNI Overlay:

  • The cluster is on Kubernetes version 1.22+.
  • Doesn't use the dynamic pod IP allocation feature.
  • Doesn't have network policies enabled.
  • Doesn't use any Windows node pools with docker as the container runtime.

If you're using a custom azure-ip-masq-agent config, be aware that upgrading to CNI Overlay can break connectivity to additional IP ranges. You may need to delete a leftover ConfigMap named azure-ip-masq-agent-config before running the update command.
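
As a hedged sketch (the cluster name, resource group, and pod CIDR are placeholders, and the ConfigMap is assumed to live in the kube-system namespace), the cleanup and update steps might look like this:

    # Remove a leftover custom ip-masq-agent ConfigMap, if one exists.
    kubectl delete configmap azure-ip-masq-agent-config -n kube-system

    # Upgrade the existing cluster to Azure CNI Overlay (non-reversible).
    az aks update \
        --resource-group myResourceGroup \
        --name myExistingCluster \
        --network-plugin-mode overlay \
        --pod-cidr 192.168.0.0/16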


Security and Networking


In an Azure CNI overlay, network security groups play a crucial role in controlling traffic between pods. To ensure proper cluster functionality, you need to have specific rules in place, including allowing traffic from the node CIDR to the node CIDR on all ports and protocols.

To enable this, you'll need to add the following rules to your subnet network security group:

  • Traffic from the node CIDR to the node CIDR on all ports and protocols
  • Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
  • Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)

If you want to restrict traffic between workloads in the cluster, it's recommended to use network policies instead. This will give you more control over what traffic is allowed and what's not.


Security Groups


In an Azure Kubernetes Service (AKS) cluster with Azure CNI Overlay, network security groups (NSGs) play a crucial role in controlling pod-to-pod traffic.

Pod-to-pod traffic with Azure CNI Overlay isn't encapsulated, so subnet NSG rules are applied. If your subnet NSG contains deny rules that would impact pod CIDR traffic, make sure you have the following rules in place to ensure proper cluster functionality.

Here are the necessary NSG rules:

  • Traffic from the node CIDR to the node CIDR on all ports and protocols
  • Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
  • Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)
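
For illustration, one of these rules could be added to an existing subnet NSG roughly as follows; the NSG name, resource group, and CIDR values (10.224.0.0/16 for nodes, 10.244.0.0/16 for pods) are assumptions, and the node-to-node and pod-to-pod rules follow the same pattern with the corresponding prefixes:

    # Allow traffic from the node CIDR to the pod CIDR on all ports and protocols.
    az network nsg rule create \
        --resource-group myResourceGroup \
        --nsg-name myClusterSubnetNsg \
        --name AllowNodeToPodCidr \
        --priority 200 \
        --direction Inbound \
        --access Allow \
        --protocol '*' \
        --source-address-prefixes 10.224.0.0/16 \
        --destination-address-prefixes 10.244.0.0/16 \
        --destination-port-ranges '*'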

Traffic from a pod to any destination outside of the pod CIDR block uses SNAT to set the source IP to the IP of the node where the pod runs.


Audit

To determine whether Azure CNI networking mode is configured for your AKS clusters, audit each cluster's network configuration for misconfigurations or vulnerabilities.

You can do this by inspecting the cluster's network profile, which records the network plugin and plugin mode the cluster was created with.

Azure CNI is a networking mode that allows for more control over network policies, but it requires careful configuration to ensure security and reliability.

To verify whether Azure CNI Overlay is enabled, check that the network profile shows the network plugin set to azure and the network plugin mode set to overlay.

If you're not sure how to check these settings, the Azure documentation walks through the steps.
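
As a quick sketch (the cluster and resource group names are placeholders), you can read those settings from the cluster's network profile with the Azure CLI:

    # Show the network plugin and plugin mode for a cluster;
    # "azure" with "overlay" indicates Azure CNI Overlay.
    az aks show \
        --resource-group myResourceGroup \
        --name myAKSCluster \
        --query "networkProfile.{plugin:networkPlugin, pluginMode:networkPluginMode}" \
        --output table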


Differences and Comparison


Azure CNI Overlay offers a more scalable solution, supporting up to 5000 nodes and 250 pods/node, compared to Kubenet's 400 nodes and 250 pods/node.

Network configuration with Azure CNI Overlay is simple, requiring no extra configuration for pod networking, whereas Kubenet requires a complex setup involving route tables and UDRs on the cluster subnet.

Azure CNI Overlay provides performance on par with VMs in a VNet, whereas Kubenet adds an extra hop that introduces latency.

Here's a comparison of Azure CNI Overlay and Kubenet:

  Feature          Azure CNI Overlay                        Kubenet
  Cluster scale    Up to 5000 nodes, 250 pods/node          Up to 400 nodes, 250 pods/node
  Configuration    Simple, no extra pod networking setup    Route tables and UDRs on the cluster subnet
  Performance      On par with VMs in a VNet                Extra hop adds latency

VXLAN-Based vs Host

When it comes to evaluating network performance, it's essential to consider the differences between VXLAN-based and Host network implementations.

Azure CNI Overlay and Host Network show similar throughput and CPU usage, reinforcing that Azure CNI Overlay's Pod routing across nodes, built on the native VNET feature, is as efficient as native VNET traffic.

This parity with Host Network throughput is a significant advantage for production-grade workloads in Kubernetes.



In fact, the performance of both implementations is comparable, making Azure CNI Overlay a viable solution for running production-grade workloads in Kubernetes.

This similarity in performance highlights the efficiency of Azure CNI Overlay's native Layer 3 implementation of overlay routing.

Note that actual results may vary based on underlying hardware and node proximity within a datacenter.

Differences Between Kubenet

Kubenet is the default networking model used by Kubernetes, and it's also the default selected by managed cloud offerings like Azure Kubernetes Service. It's a good start, but it's also pretty basic.

Kubenet doesn't enforce any network policies, meaning that pods are reachable by any source, deferring that responsibility to a CNI plug-in. This can be a problem if you need more control over your network.

The cluster scale with Kubenet is limited to 400 nodes and 250 pods/node. This means you'll need to choose a different networking model if you need to support larger clusters.



Kubenet requires complex network configurations, including route tables and UDRs on cluster subnets for pod networking. This can be a challenge if you're not familiar with these concepts.

Azure CNI Overlay, on the other hand, supports larger clusters with up to 5000 nodes and 250 pods/node, requires only simple network configuration, and delivers better performance.

Azure CNI Overlay

Azure CNI Overlay is a networking solution for Azure Kubernetes Service (AKS) that offers improved performance and scalability. It's generally available in AKS and provides significant improvements over other networking options.

Azure CNI Overlay can be powered by Cilium, which brings better resource utilization and more efficient intra-cluster load balancing. This approach also enables Network Policy enforcement by leveraging eBPF instead of iptables. The benefits of using Azure CNI Overlay powered by Cilium include higher pod-to-service throughput and increased observability.

The pod limit is by default set to 30 pods per node, and you can reduce your cluster IP range from a /24 to a /27. Azure CNI Overlay also allows for dual-stack networking, where nodes receive both an IPv4 and an IPv6 address from the Azure virtual network subnet.



Here are some key differences between Overlay networking and the traditional VNet options:

  • Overlay networking: pods get IP addresses from a private CIDR that's logically separate from the VNet hosting the nodes, which conserves VNet address space; traffic leaving the pod CIDR is translated (SNAT) to the node's IP.
  • Traditional VNet configuration: pods receive IP addresses from the VNet itself, which consumes more VNet address space but lets resources outside the cluster reach pods directly.

Add Nodepool to Dedicated Subnet

Adding a new nodepool to a dedicated subnet is a powerful feature of Azure CNI Overlay. This approach lets you control the ingress and egress IPs that hosts use when communicating with targets in the same VNet or in peered VNets.

You can create another nodepool and assign the nodes to a new subnet of the same VNet. This is useful if you want to isolate certain nodes or workloads within your cluster.
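
A minimal sketch of that step, with placeholder resource group, cluster, VNet, and subnet names (and a placeholder subscription ID), might look like this:

    # Add a node pool whose nodes are placed in a dedicated subnet of the same VNet.
    az aks nodepool add \
        --resource-group myResourceGroup \
        --cluster-name myOverlayCluster \
        --name dedicated1 \
        --node-count 3 \
        --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/myDedicatedSubnet"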

With Azure CNI Overlay, you can reuse the private CIDR in different AKS clusters, which extends the IP space available for containerized applications in Azure Kubernetes Service (AKS).

This solution saves a significant amount of VNet IP addresses and enables you to scale your clusters to large sizes.


eBPF Dataplane

The eBPF dataplane is a game-changer in the world of networking. It's powered by Cilium, a next-generation technology that brings better resource utilization to the table.


With Cilium's eBPF dataplane, you can expect more efficient intra-cluster load balancing, a key advantage of Azure CNI Overlay powered by Cilium. This means your cluster will be able to handle traffic more effectively.

The eBPF dataplane also enables Network Policy enforcement by leveraging eBPF over iptables, providing an added layer of security and control. This is a significant improvement over traditional networking methods.

In Azure CNI Overlay powered by Cilium, Azure CNI sets up IP-address management and Pod routing, while Cilium provisions Service routing and Network Policy programming. This partnership brings even more performance benefits to the table.

The higher pod to service throughput achieved with the Cilium eBPF dataplane is a promising improvement, making it an attractive option for those looking to optimize their AKS clusters.


Model Selection

Choosing the right network model for your Azure Kubernetes Service (AKS) cluster can be a bit tricky. You have two options: traditional VNet configuration or Overlay networking.


If you need to scale to a large number of pods, but have limited IP address space in your VNet, Overlay networking is the way to go. It's perfect for clusters with a lot of internal communication.

If most of the pod communication is within the cluster, Overlay networking is a good choice. You don't need advanced AKS features like virtual nodes in this case.

Here's a quick summary of the two options:

  • Choose Overlay networking when you need to scale to a large number of pods with limited VNet IP address space, most pod communication stays within the cluster, and you don't need advanced AKS features such as virtual nodes.
  • Choose the traditional VNet configuration when you have available IP address space, most of the pod communication is to resources outside of the cluster, or resources outside the cluster need to reach pods directly.


Advanced Observability

Azure CNI Overlay offers advanced network observability through a feature in preview. This feature enables hub-and-spoke flow visualization and a user interface, as well as Prometheus and Grafana monitoring.

With this feature, you can gain a deeper understanding of your cluster's network traffic and performance. However, it's worth noting that this feature is currently in preview and incurs an additional cost of $0.025 per node per hour.


To take advantage of this feature, you'll need to enable it separately from Azure CNI Overlay. This will give you access to detailed network flow data and monitoring metrics, helping you identify and troubleshoot issues more efficiently.

Here are some key details about the advanced network observability feature:

  • Enables hub-and-spoke flow visualization and a user interface
  • Includes Prometheus and Grafana monitoring
  • Costs $0.025 per node per hour

Solution

The Azure CNI Overlay solution is a game-changer for AKS clusters, allowing us to deploy cluster nodes into an Azure Virtual Network (VNet) subnet.

Pods are assigned IP addresses from a private CIDR that's logically different from the VNet hosting the nodes, which saves a significant amount of VNet IP addresses.

This solution enables us to scale our clusters to large sizes, and we can even reuse the private CIDR in different AKS clusters, extending the IP space available for containerized applications in AKS.

Here are the key benefits of Azure CNI Overlay:

  • The cluster nodes are deployed into an Azure Virtual Network (VNet) subnet.
  • Pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes.
  • Pod and node traffic within the cluster use an Overlay network.
  • Network Address Translation (NAT) uses the node's IP address to reach resources outside the cluster.

We can also reduce our cluster IP range from a /24 to a /27, which is a huge advantage.

Solving for Performance and Scale


Azure CNI Overlay was introduced to address the challenges of large-scale clusters in AKS.

The "kubenet" plugin, an existing overlay network solution, is limited to no more than 400 nodes or 200 nodes in dual stack clusters.

BYOCNI brings flexibility to AKS, but the additional encapsulation increases latency and instability as the cluster size increases.

To support customers who want to run large clusters with many nodes and pods with no limitations on performance, scale, and IP exhaustion, Azure CNI Overlay was introduced.

The new solution is designed to support large clusters, unlike the "kubenet" plugin.

Azure CNI Overlay is a game-changer for AKS customers who need to scale their clusters without sacrificing performance.

Deploy Dual-Stack AKS

To deploy a dual-stack AKS cluster, you'll need to create an Azure resource group and then use the az aks create command with the --ip-families parameter set to ipv4,ipv6.

You can deploy your AKS clusters in a dual-stack mode when using Overlay networking and a dual-stack Azure virtual network, which enables nodes to receive both an IPv4 and IPv6 address.



The az aks create command requires the --ip-families parameter to be set to ipv4,ipv6 to enable dual-stack mode. You can also use the --pod-cidrs parameter to specify the IP range for pods and the --service-cidrs parameter to specify the IP range for services.

Here are the required parameters for the az aks create command:

  • --ip-families: Takes a comma-separated list of IP families to enable on the cluster (ipv4,ipv6)
  • --pod-cidrs: Takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from
  • --service-cidrs: Takes a comma-separated list of CIDR notation IP ranges to assign service IPs from
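
Putting those parameters together, a hedged sketch of the full deployment (names, region, and CIDR values are placeholders) looks like this:

    # Create a resource group for the dual-stack cluster.
    az group create --name myResourceGroup --location eastus

    # Create a dual-stack Azure CNI Overlay cluster.
    az aks create \
        --resource-group myResourceGroup \
        --name myDualStackCluster \
        --location eastus \
        --network-plugin azure \
        --network-plugin-mode overlay \
        --ip-families ipv4,ipv6 \
        --pod-cidrs 192.168.0.0/16,fd12:3456:789a::/64 \
        --service-cidrs 10.0.0.0/16,fd12:3456:789a:1::/108 \
        --generate-ssh-keys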

By deploying a dual-stack AKS cluster, you can take advantage of both IPv4 and IPv6 networking, which can be useful for applications that require both protocols.

Frequently Asked Questions

What is the maximum number of nodes in Azure CNI overlay?

Azure CNI Overlay supports clusters of up to 5000 nodes, with up to 250 pods per node. The 250 figure is a per-node pod limit, not a limit on the number of nodes.

What is an overlay network in Kubernetes?

An overlay network in Kubernetes is an additional layer of encapsulation for network traffic, allowing pods to communicate with each other securely. It involves wrapping traffic in a protocol like VXLAN, enabling seamless communication across the cluster.

