CNI - Container Network Interface
CNI stands for Container Network Interface: a specification for developing plugins/binaries that configure networking for containerized environments.
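To make that contract concrete, below is a rough sketch of how a container runtime drives a CNI plugin: a JSON network config arrives on stdin while `CNI_*` environment variables describe the operation. The `mynet` name, subnet, and file paths are illustrative; `bridge` and `host-local` are reference plugins from the containernetworking/plugins project.

```sh
# A minimal CNI network config (normally dropped in /etc/cni/net.d/).
# The "type" field names the plugin binary the runtime will execute.
cat > /tmp/10-mynet.conf <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni-demo",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF

# The runtime executes the plugin, passing the config on stdin and the
# operation details through CNI_* environment variables (the netns path
# must exist for a real ADD to succeed).
CNI_COMMAND=ADD \
CNI_CONTAINERID=example-container \
CNI_NETNS=/var/run/netns/demo \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /tmp/10-mynet.conf
```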
A container usually runs on a Linux machine, isolated by Linux namespaces, cgroups, etc.
Depending on how your container was built/deployed, it may or may not have visibility into the host's network namespace - it may be isolated in its own network namespace.
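As a quick illustration of that isolation (the `demo` namespace name is made up), a freshly created network namespace starts with nothing but a loopback device:

```sh
# List the network namespaces currently present on the host.
lsns -t net

# Create an isolated namespace: it starts with only a loopback
# device, and even that is down by default.
sudo ip netns add demo
sudo ip netns exec demo ip addr list
```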
A network namespace dictates which devices, like physical `eth` interfaces or virtual `veth` pairs, a container is attached to.
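Continuing that sketch, a `veth` pair is the usual way a device ends up attached to a container's namespace - one end stays on the host, the other moves inside (interface names below are made up):

```sh
# Create a veth pair - two virtual interfaces joined like a cable.
sudo ip link add veth-host type veth peer name veth-demo

# Move one end into the 'demo' namespace created above; processes in
# that namespace now see only the devices attached to it.
sudo ip link set veth-demo netns demo
sudo ip netns exec demo ip link list   # shows lo and veth-demo only
```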
For example, Kubernetes delegates the entire responsibility of assigning IPs and building the networking/communication between pods to the CNI plugin.
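You can usually see this delegation on a node's disk; the directories below are the conventional defaults, though a cluster may configure different paths:

```sh
# Conventional locations on a node (defaults; clusters can override):
ls /etc/cni/net.d/   # network config files, e.g. 10-calico.conflist
ls /opt/cni/bin/     # plugin binaries: bridge, host-local, loopback, ...
```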
CNI implementations come in lots of flavors, each with its own capabilities and limitations:
Plugin | Features | Limitations | Use-Case |
---|---|---|---|
Calico | Advanced security policies, BGP support | Overhead in overlay mode | Large, secure clusters with external routing needs |
Flannel | Lightweight, easy setup | No network policies | Small to medium clusters with simple requirements |
Cilium | eBPF-powered, L7 security policies | Steeper learning curve | Microservices needing advanced observability |
Weave Net | Peer-to-peer mesh, encryption | Overlay latency | Medium clusters requiring encrypted networking |
Multus | Multi-network interface support | Complex management | Telco/NFV or multi-network Kubernetes setups |
AWS VPC CNI | VPC-native performance, AWS service integration | AWS-specific, ENI limits | Kubernetes on AWS with direct VPC integration |
Interesting Tooling
While on a Kubernetes node, given a process ID, you can run `nsenter` to attach your current session to that process's namespace and execute commands like `ip addr list` to check the devices attached to a certain namespace/pod.
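For example (obtaining the PID is left rough here - `my-app` is a placeholder process name):

```sh
# Grab the PID of a process running inside the target pod
# ('my-app' is a placeholder; inspecting via the runtime works too).
PID=$(pgrep -f my-app | head -n1)

# Attach to that process's network namespace and list its devices.
sudo nsenter -t "$PID" -n ip addr list
```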
Tools like `brctl` (for managing bridge interfaces) and `ip` utilities such as `ip addr add` and `ip addr list` are - very roughly - the kind of tools a CNI plugin could use to manage IPAM for pods.
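Stitched together with the namespace and `veth` commands from earlier, this is - very roughly - the plumbing a simple bridge-based plugin performs (the bridge name and address are illustrative):

```sh
# Create a bridge and attach the host end of the veth pair to it.
sudo brctl addbr cni-demo
sudo brctl addif cni-demo veth-host
sudo ip link set cni-demo up
sudo ip link set veth-host up

# IPAM, by hand: give the "pod" end an address and bring it up.
sudo ip netns exec demo ip addr add 10.22.0.5/16 dev veth-demo
sudo ip netns exec demo ip link set veth-demo up
```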