There are two kinds of IP in Kubernetes: ClusterIP (the virtual IP of a Service) and Pod IP (the real IP of a Pod).
CNI
CNI cares about Pod IP.
The CNI plugin focuses on building up an overlay network, without which Pods can't communicate with each other. Its task is to assign a Pod IP to each Pod when it's scheduled, to build a virtual device for that IP, and to make the IP accessible from every node of the cluster.
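On the node, the per-Pod part of this boils down to a handful of ip commands. Here is a rough sketch of the idea; the device names, the namespace name pod-ns, and the address 10.244.0.137 are made up for illustration, not what Calico literally runs:

# create a veth pair: cali0 stays on the host, eth0 will go into the Pod
ip link add cali0 type veth peer name eth0
ip link set eth0 netns pod-ns
ip link set cali0 up

# assign the Pod IP inside the Pod's network namespace
ip netns exec pod-ns ip addr add 10.244.0.137/32 dev eth0
ip netns exec pod-ns ip link set eth0 up
# (a real plugin also sets up a default route inside the Pod, omitted here)

# host route: traffic to the Pod IP goes down the host end of the veth
ip route add 10.244.0.137/32 dev cali0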
In Calico, the cross-node part is implemented with N host routes (N = the number of cali veth devices) and M routes on the tunl0 tunnel device (M = the number of other nodes in the cluster).
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.130.29.1     0.0.0.0         UG    100    0        0 ens32
10.130.29.0     0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 *
10.244.0.137    0.0.0.0         255.255.255.255 UH    0      0        0 calid3c6b0469a6
10.244.0.138    0.0.0.0         255.255.255.255 UH    0      0        0 calidbc2311f514
10.244.0.140    0.0.0.0         255.255.255.255 UH    0      0        0 califb4eac25ec6
10.244.1.0      10.130.29.81    255.255.255.0   UG    0      0        0 tunl0
10.244.2.0      10.130.29.82    255.255.255.0   UG    0      0        0 tunl0
In this case, 10.244.0.0/16 is the Pod IP CIDR, and 10.130.29.81 is a node in the cluster. Imagine a TCP request to 10.244.1.141: it matches the 7th rule and is sent to 10.130.29.81. On 10.130.29.81, there will be a route rule like this:
10.244.1.141    0.0.0.0         255.255.255.255 UH    0      0        0 cali4eac25ec62b
This finally delivers the request to the correct Pod.
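You can also ask the kernel directly which rule a destination will match. On the first node, ip route get should print something like this (output abridged; the src address depends on the local tunl0 address):

$ ip route get 10.244.1.141
10.244.1.141 via 10.130.29.81 dev tunl0 src 10.244.0.1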
I'm not sure why a daemon is necessary; my guess is that the daemon is there to prevent the route rules it created from being deleted manually.
kube-proxy
kube-proxy's job is rather simple: it just redirects requests from ClusterIP to Pod IP.
kube-proxy has two modes, IPVS and iptables. If your kube-proxy is working in IPVS mode, you can see the redirect rules created by kube-proxy by running the following command on any node in the cluster:
ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 10.130.29.80:6443            Masq    1      6          0
  -> 10.130.29.81:6443            Masq    1      1          0
  -> 10.130.29.82:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.137:53              Masq    1      0          0
  -> 10.244.0.138:53              Masq    1      0          0
...
In this case, you can see the default ClusterIP of CoreDNS, 10.96.0.10, and behind it two real servers with Pod IPs: 10.244.0.137 and 10.244.0.138. Rules like this are exactly what kube-proxy creates and keeps in sync.
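You can cross-check where these addresses come from with kubectl, assuming the DNS Service is named kube-dns in the kube-system namespace (the default): the Service holds the ClusterIP, and its Endpoints hold the Pod IPs that ipvsadm lists as real servers. The output looks roughly like this (abridged):

$ kubectl -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   ...
$ kubectl -n kube-system get endpoints kube-dns
NAME       ENDPOINTS                           AGE
kube-dns   10.244.0.137:53,10.244.0.138:53,...  ...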
P.S. iptables mode is almost the same, but the iptables rules look ugly. I don't want to paste them here. :p
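(If you're curious anyway, kube-proxy's iptables rules live in the nat table, mostly in chains whose names start with KUBE-, so something like this will surface them:)

iptables-save -t nat | grep KUBE- | head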