
CNI - Container Network Interface

CNI stands for Container Network Interface, a specification for developing plugins/binaries that configure networking for containerized environments. A container usually runs on a Linux machine, isolated by Linux namespaces, cgroups, and similar mechanisms. Depending on how your container was built/deployed, it may or may not have access/visibility into the host's network namespace; it may instead be isolated in its own network namespace. A network namespace dictates which devices, such as eth or veth interfaces, a container is attached to.
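To make the namespace idea concrete, here is a minimal sketch using plain iproute2 commands (the namespace name is hypothetical): a freshly created network namespace only contains a loopback device, which illustrates that interfaces are scoped to the namespace a container lives in, not to the machine as a whole.

```sh
# Names here are hypothetical; run as root on a Linux host.
ip netns add demo-ns                  # create an isolated network namespace
ip netns exec demo-ns ip addr list    # only "lo" is visible inside it
ip addr list                          # the host namespace still sees eth0, etc.
ip netns del demo-ns                  # clean up
```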

For example, Kubernetes delegates the entire responsibility of assigning IPs and wiring up networking/communication between pods to the CNI plugin.
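As a rough illustration of how that delegation looks on a node, the container runtime typically reads CNI configuration files from /etc/cni/net.d/ and invokes the matching plugin binaries from /opt/cni/bin. The file below is a minimal, hypothetical example (name and subnet are assumptions) using the reference bridge plugin with host-local IPAM:

```sh
# Hypothetical CNI config a runtime could pick up; "bridge" and "host-local"
# are reference plugins from the containernetworking/plugins project.
cat <<'EOF' > /etc/cni/net.d/10-mynet.conf
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
EOF
```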

CNI implementations come in many flavors, each with its own capabilities and limitations:

| Plugin | Features | Limitations | Use Case |
| --- | --- | --- | --- |
| Calico | Advanced security policies, BGP support | Overhead in overlay mode | Large, secure clusters with external routing needs |
| Flannel | Lightweight, easy setup | No network policies | Small to medium clusters with simple requirements |
| Cilium | eBPF-powered, L7 security policies | Steeper learning curve | Microservices needing advanced observability |
| Weave Net | Peer-to-peer mesh, encryption | Overlay latency | Medium clusters requiring encrypted networking |
| Multus | Multi-network interface support | Complex management | Telco/NFV or multi-network Kubernetes setups |
| AWS VPC CNI | VPC-native performance, AWS service integration | AWS-specific, ENI limits | Kubernetes on AWS with direct VPC integration |

Interesting Tooling

While on a Kubernetes node, if you have a container's process ID, you can run nsenter to attach your current session to one of its namespaces and execute commands like ip addr list to check which devices are attached to that namespace/pod.
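A minimal sketch, assuming you already know the PID of the container's main process (found via crictl inspect, ps, or similar on the node):

```sh
# Hypothetical PID of a container's main process on the node.
PID=12345

# Enter only the network namespace (-n/--net) of that process and list its
# devices; this shows the pod's interfaces rather than the host's.
nsenter --target "$PID" --net ip addr list

# Or enter the namespace interactively and poke around.
nsenter --target "$PID" --net bash
```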

Tools like brctl (for managing bridge interfaces) and iproute2 commands such as ip addr add and ip addr list are, very roughly, the kind of primitives a CNI plugin could use to manage IPAM and connectivity for pods.
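As a rough, hand-rolled sketch of what a bridge-style plugin does under the hood (all names and addresses below are hypothetical, and real plugins talk to the kernel via netlink rather than shelling out):

```sh
# Hypothetical names/addresses; roughly the plumbing a bridge-type CNI plugin
# performs for each pod.
ip link add name cni0 type bridge                     # host-side bridge for pods
ip addr add 10.22.0.1/16 dev cni0                     # gateway address on the bridge
ip link set cni0 up

ip netns add pod-ns                                   # stand-in for a pod's network namespace
ip link add veth-host type veth peer name veth-pod    # veth pair: host end + pod end
ip link set veth-pod netns pod-ns                     # move one end into the pod namespace
ip link set veth-host master cni0                     # attach the host end to the bridge
ip link set veth-host up

ip netns exec pod-ns ip addr add 10.22.0.5/16 dev veth-pod   # "IPAM": assign the pod IP
ip netns exec pod-ns ip link set veth-pod up
ip netns exec pod-ns ip link set lo up
ip netns exec pod-ns ip route add default via 10.22.0.1      # default route via the bridge
```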