Azure CNI in Azure Kubernetes Service: A Comprehensive Overview

Azure CNI in Azure Kubernetes Service (AKS) is a game-changer for cloud-native applications. It's a network plugin that lets you manage and configure pod networking in a more efficient and scalable way.

Azure CNI provides a range of benefits, including improved network performance and reduced latency. This is especially important for applications that require high-speed communication and low-latency responses.

Azure CNI is designed to work seamlessly with Kubernetes, allowing you to easily manage and scale your network resources. This integration enables you to take full advantage of Kubernetes' container orchestration capabilities.

With Azure CNI, you can create and manage network policies, IP addresses, and routes with ease. This level of control gives you the flexibility to optimize your network settings for your specific use case.

Installation and Configuration

Install

To install the plugin, you'll need to copy the plugin package from the release share to your Azure VM and extract the contents to the CNI directories.

You can also use the install-cni-plugin.sh script on Linux or install-cni-plugin.ps1 script on Windows, which can be found in the scripts directory.

The plugin package comes with a simple network configuration file that works out of the box.

This means you can get started quickly without needing to customize anything.
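
As a concrete illustration, a manual install on a Linux VM might look like the following. This is only a sketch: the release-share path and package file name are placeholders, and /opt/cni/bin and /etc/cni/net.d are the standard CNI directories.

    # Copy the plugin package from the release share (path and file name are
    # placeholders for your release).
    cp /mnt/release/azure-vnet-cni-linux-amd64.tgz .

    # Extract the plugin binaries into the standard CNI directories.
    sudo mkdir -p /opt/cni/bin /etc/cni/net.d
    sudo tar -xzf azure-vnet-cni-linux-amd64.tgz -C /opt/cni/bin

    # Alternatively, run the provided install script from the scripts directory.
    sudo ./scripts/install-cni-plugin.sh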

Network Configuration

Network Configuration is a crucial aspect of Azure CNI. The default location for configuration files is /etc/cni/net.d for Linux and c:\k\azurecni\ for Windows.

You can create multiple network configuration files to connect containers to multiple networks. These files are processed in lexical order during container creation, and in the reverse-lexical order during container deletion.

The network configuration file uses JSON format. For the network plugin, it supports the following properties: cniVersion, name, type, mode, master, bridge, and logLevel. The type property must always be set to azure-vnet, and cniVersion should be 0.3.0 or 0.3.1.

Here are the properties you can set in your network configuration file:

  • cniVersion: 0.3.0 or 0.3.1
  • name: Unique value for the network
  • type: azure-vnet
  • mode: Optional, but can be set for operational modes
  • master: Optional, but can be set to a host network interface
  • bridge: Optional, but can be set to a bridge name
  • logLevel: Optional, but can be set to info or debug

For the IPAM plugin, you need to include the type property set to azure-vnet-ipam and the environment property set to azure or mas.
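
Putting these pieces together, a minimal network configuration file might look like the following sketch; the name, mode, bridge, and logLevel values are illustrative, and the optional master property is omitted.

    {
      "cniVersion": "0.3.0",
      "name": "azure",
      "type": "azure-vnet",
      "mode": "bridge",
      "bridge": "azure0",
      "logLevel": "info",
      "ipam": {
        "type": "azure-vnet-ipam",
        "environment": "azure"
      }
    }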

Networking Models

Azure CNI offers two IP addressing options for pods: the traditional configuration and Overlay networking. The traditional configuration assigns pods IP addresses directly from the VNet, while Overlay networking assigns pod IPs from a private CIDR that is logically separate from the VNet.

The choice between them is a balance between flexibility and advanced configuration needs. Overlay networking is the way to go if you need to scale to a large number of pods but have limited IP address space in your VNet, most pod communication stays within the cluster, and you don't need advanced AKS features like virtual nodes.

Here are the key differences between the two options:

  • Traditional configuration: every pod receives an IP address from the cluster subnet, so pods are directly reachable from the VNet, but each pod consumes VNet address space.
  • Overlay networking: pods receive IP addresses from a private CIDR separate from the VNet, which conserves VNet address space and scales further, but rules out features that need VNet-routable pod IPs, such as virtual nodes.
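
For example, choosing Overlay networking is a matter of flags at cluster creation time. A sketch with the Azure CLI, where the resource group, cluster name, and pod CIDR are placeholders:

    # Create an AKS cluster that uses Azure CNI in Overlay mode
    # (names and CIDR are illustrative).
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure \
      --network-plugin-mode overlay \
      --pod-cidr 192.168.0.0/16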

Network Flexibility

Azure CNI offers flexibility in designing your AKS network architecture, allowing you to define how pods and services communicate within and outside the cluster.

Azure CNI supports complex networking scenarios such as deploying pods across multiple subnets, enabling peering between VNets, and connecting on-premises networks to AKS clusters.

This flexibility comes from the plugin's networking model, which leverages Azure's virtual network infrastructure and offers seamless integration with Azure resources.

Here are some key capabilities of Azure CNI that enable network flexibility:

  • Native integration with Azure VNets, so pods can receive VNet IP addresses
  • Support for multiple network configuration files, plus dynamic, plugin-specific fields supplied at runtime (capabilities / runtime configuration)
  • An Overlay mode that conserves VNet address space when you need kubenet-like scaling
  • Connectivity across subnets, peered VNets, and on-premises networks

These capabilities allow you to tailor the network configuration to the specific needs of your application while keeping the scalability and performance characteristics of Azure CNI.

Pods per Node

Pods per Node is a crucial aspect of Azure Kubernetes Service (AKS) networking. You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool.

The default and maximum value for Azure CNI Overlay is 250 pods per node. The minimum value is 10, so you can't go lower than that.

To plan for future cluster expansion, ensure your private CIDR is large enough to provide a /24 address space for each new node, since Azure CNI Overlay carves a /24 out of the pod CIDR for every node. This will give you the flexibility to add more nodes as your cluster grows.

The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet. Just make sure it doesn't overlap with the cluster subnet range or directly connected networks like VNet peering, ExpressRoute, or VPN.

Here's a quick rundown of the key facts to keep in mind:

  • Default and maximum pods per node: 250 (Azure CNI Overlay)
  • Minimum pods per node: 10
  • Private CIDR must be large enough for /24 address spaces for new nodes
  • Pod CIDR space can be used on multiple AKS clusters in the same VNet
  • Pod CIDR space must not overlap with cluster subnet range or directly connected networks
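
As a sketch, both cluster creation and node pool addition expose this setting through the --max-pods flag of the Azure CLI (resource names are placeholders):

    # Set the maximum pods per node when creating the cluster
    # (names are illustrative).
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure \
      --max-pods 100

    # Or set it when adding a new node pool to an existing cluster.
    az aks nodepool add \
      --resource-group myResourceGroup \
      --cluster-name myAKSCluster \
      --name newpool \
      --max-pods 100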

Network Planning and Management

Getting the most out of this flexibility requires deliberate network planning, starting with how you allocate IP address space for nodes and pods.

Start with the IP address requirements for your AKS cluster. The same pod CIDR space can be reused across independent AKS clusters in the same VNet, and it's recommended to use a /16 for the pod CIDR to accommodate scaling.

You should also avoid overlapping pod CIDR space with the cluster subnet range and directly connected networks, such as VNet peering, ExpressRoute, or VPN.

Here are some key IP address planning considerations:

  • The pod CIDR space must not overlap with the cluster subnet range.
  • The pod CIDR space must not overlap with directly connected networks (like VNet peering, ExpressRoute, or VPN).
  • Use a /16 for pod CIDR to accommodate scaling.
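
To put the /16 recommendation in perspective: Azure CNI Overlay hands each node a /24 from the pod CIDR, and a /16 contains 2^(24-16) = 256 such /24 blocks. A pod CIDR like 10.244.0.0/16 (an illustrative value) can therefore serve up to 256 nodes, at up to 250 pods each, before it runs out.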

Azure CNI also supports complex networking scenarios, such as deploying pods across multiple subnets and enabling peering between VNets. This allows for a more flexible and extensible way to manage network connectivity in Kubernetes clusters.

Security and Isolation

Security and isolation are crucial for any Azure Kubernetes Service (AKS) cluster. Azure CNI integrates with Azure Network Security Groups (NSGs) to enforce network policies and control traffic flows to and from pods.

NSGs provide layer-4 security controls, enhancing security within the AKS cluster. This integration enables you to implement robust network security policies, controlling traffic between pods, between pods and Azure resources, and even from on-premises networks.

To ensure proper cluster functionality, make sure to include specific NSG rules, such as:

  • Traffic from the node CIDR to the node CIDR on all ports and protocols
  • Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
  • Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)
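
As a sketch, the second rule could be created with the Azure CLI as follows; the NSG name, priority, and CIDR values are placeholders, and the remaining rules follow the same pattern:

    # Allow traffic from the node CIDR to the pod CIDR on all ports and
    # protocols (names, priority, and CIDRs are illustrative).
    az network nsg rule create \
      --resource-group myResourceGroup \
      --nsg-name myClusterSubnetNsg \
      --name AllowNodeToPodAll \
      --priority 200 \
      --direction Inbound \
      --access Allow \
      --protocol '*' \
      --source-address-prefixes 10.240.0.0/16 \
      --destination-address-prefixes 10.244.0.0/16 \
      --destination-port-ranges '*'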

This integration simplifies network configuration while keeping it secure: pods can communicate directly with Azure resources like virtual machines, databases, and other services, and you retain full flexibility in designing your AKS network architecture.

Audit

To determine whether Azure CNI networking mode is configured for your AKS clusters, inspect each cluster's network profile and check which network plugin it uses.

Repeat the check for each AKS cluster provisioned in the selected Azure subscription to ensure full coverage.
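
A minimal sketch of this audit with the Azure CLI, where the resource group and cluster name are placeholders; a result of azure indicates Azure CNI:

    # List the AKS clusters in the current subscription.
    az aks list --query "[].{name:name, resourceGroup:resourceGroup}" -o table

    # Check which network plugin a cluster uses; "azure" means Azure CNI.
    az aks show \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --query "networkProfile.networkPlugin" -o tsv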

Advanced Networking Scenarios

Azure CNI supports complex networking scenarios such as deploying pods across multiple subnets, which gives you more freedom in designing your AKS network architecture.

You can also enable peering between VNets, a great way to connect multiple virtual networks and extend your network reach. This is especially useful for large-scale deployments or when working with multiple teams.

Connecting on-premises networks to AKS clusters is possible as well, making it easier to integrate your cloud and on-premises environments. This can be a game-changer for companies with existing on-premises infrastructure.

Underneath all of this, CNI itself provides a flexible and extensible way to manage network connectivity in Kubernetes clusters.

In the CNI model, each pod gets its own IP address, and network traffic is routed through a set of plugins that define how traffic is handled.

CNI plugins can be used to implement advanced networking features such as network policies, load balancing, and encryption.

These features are crucial for ensuring the security and performance of your AKS cluster.
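
As an illustration of network policies, the following sketch allows only frontend pods to reach backend pods on TCP 8080. It assumes a network policy engine (such as Azure Network Policy Manager or Calico) is enabled on the cluster, and the names, labels, and port are hypothetical:

    # Apply a minimal NetworkPolicy (all names, labels, and the port are
    # illustrative).
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 8080
    EOF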

Frequently Asked Questions

What is the difference between kubenet and Azure CNI?

Kubenet assigns pods IP addresses from an address space that is logically separate from the VNet and relies on NAT for traffic leaving the node, while Azure CNI assigns pods IP addresses from the VNet itself, providing direct communication between pods and services without NAT.

What is the difference between Azure CNI and Calico?

Azure CNI is a networking plugin, while Calico is primarily a network policy engine. As policy engines go, Calico offers more advanced network features and works with both Azure CNI and kubenet, whereas Azure Network Policy Manager has a more basic set of capabilities and supports only Azure CNI.

What are the advantages of Azure CNI?

Azure CNI assigns pods routable VNet IP addresses, so traffic can be routed directly to pods without NAT, enabling seamless connectivity within and outside the cluster. This simplifies network management and improves communication between pods and external endpoints.
