`# vim calico-etcd.yaml`

```yaml
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  etcd-key: <paste the base64 string generated above>
  etcd-cert: <paste the base64 string generated above>
  etcd-ca: <paste the base64 string generated above>
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://192.168.2.61:2379,https://192.168.2.62:2379,https://192.168.2.63:2379"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
```
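The base64 strings for the Secret can be produced with `base64 -w 0`, as the comment in the manifest suggests. A sketch, where the certificate paths are assumptions — substitute your actual etcd TLS file locations:

```shell
# Hedged sketch: encode each etcd TLS file as a single-line base64 string,
# then paste the output into the matching field of calico-etcd-secrets.
# The paths below are assumptions; use your real etcd certificate locations.
# base64 -w 0 /etc/etcd/ssl/server-key.pem   # -> etcd-key
# base64 -w 0 /etc/etcd/ssl/server.pem       # -> etcd-cert
# base64 -w 0 /etc/etcd/ssl/ca.pem           # -> etcd-ca

# Demonstration of the encoding step on inline sample data:
printf 'sample-pem-data' | base64 -w 0
# → c2FtcGxlLXBlbS1kYXRh
```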
Adjust the Pod CIDR (`CALICO_IPV4POOL_CIDR`) to match your actual network plan; it must be identical to the cluster CIDR configured for the controller-manager in /opt/kubernetes/cfg/kube-controller-manager.conf.
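A quick consistency check can be scripted by extracting the CIDR from both files and comparing. The grep patterns below are a sketch run on inline sample snippets; point them at the real /opt/kubernetes/cfg/kube-controller-manager.conf and calico-etcd.yaml in your deployment:

```shell
# Hedged sketch: extract the Pod CIDR from each config and compare.
# Sample snippets stand in for the real files here.
cm_conf='--cluster-cidr=10.244.0.0/16'
calico_yaml='- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"'

cm_cidr=$(printf '%s\n' "$cm_conf" | grep -o 'cluster-cidr=[0-9./]*' | cut -d= -f2)
calico_cidr=$(printf '%s\n' "$calico_yaml" | grep -o '[0-9.]*/[0-9]*')

echo "controller-manager: $cm_cidr  calico: $calico_cidr"
[ "$cm_cidr" = "$calico_cidr" ] && echo "CIDR values match"
```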
```shell
# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
# kubectl get pods -n kube-system
```
If the Flannel network component was deployed earlier, it must be uninstalled and its artifacts removed first; this has to be done on every node.
```shell
# kubectl delete -f kube-flannel.yaml
# ip link delete cni0
# ip link delete flannel.1
# ip route
default via 192.168.2.2 dev eth0
10.244.1.0/24 via 192.168.2.63 dev eth0
10.244.2.0/24 via 192.168.2.62 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.61
# ip route del 10.244.1.0/24 via 192.168.2.63 dev eth0
# ip route del 10.244.2.0/24 via 192.168.2.62 dev eth0
# ip route
default via 192.168.2.2 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.61
```
```shell
# ./calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 192.168.2.62 | node-to-node mesh | up    | 02:58:05 | Established |
| 192.168.2.63 | node-to-node mesh | up    | 03:08:46 | Established |
+--------------+-------------------+-------+----------+-------------+

# calicoctl get node
NAME
k8s-master-01
k8s-node-01
k8s-node-02
```
Check the IPAM IP address pool:
```shell
# ./calicoctl get ippool
NAME                  CIDR            SELECTOR
default-ipv4-ippool   10.244.0.0/16   all()

# ./calicoctl get ippool -o wide
NAME                  CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR
default-ipv4-ippool   10.244.0.0/16   true   Never      Never       false      all()
```
5. Calico BGP Mode
The rough flow when Pod 1 accesses Pod 2 is as follows:

1. The packet leaves container 1 and arrives at the other end of the veth pair (on the host, a device named with the `cali` prefix);
2. The host forwards the packet to the next hop (gateway) according to its routing rules;
3. The packet arrives at Node 2, where routing rules forward it to the `cali` device, and it reaches container 2.
Routing tables:
```shell
# node1
10.244.36.65 dev cali4f18ce2c9a1 scope link
10.244.169.128/26 via 192.168.31.63 dev ens33 proto bird
10.244.235.192/26 via 192.168.31.61 dev ens33 proto bird

# node2
10.244.169.129 dev calia4d5b2258bb scope link
10.244.36.64/26 via 192.168.31.62 dev ens33 proto bird
10.244.235.192/26 via 192.168.31.61 dev ens33 proto bird
```
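Node 1's forwarding decision can be reproduced directly from its routing table above: the /26 route covering Node 2's pod block points at Node 2's host address. A small sketch on the sample data:

```shell
# Hedged sketch: find Node 1's next hop for a pod on Node 2
# (pod IP 10.244.169.129, covered by the 10.244.169.128/26 route).
routes='10.244.36.65 dev cali4f18ce2c9a1 scope link
10.244.169.128/26 via 192.168.31.63 dev ens33 proto bird
10.244.235.192/26 via 192.168.31.61 dev ens33 proto bird'

# Print the "via" address of the matching /26 route:
printf '%s\n' "$routes" | awk '$1 == "10.244.169.128/26" {print $3}'
# → 192.168.31.63
```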
```shell
# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+---------------+-------+----------+-------------+
| PEER ADDRESS |   PEER TYPE   | STATE |  SINCE   |    INFO     |
+--------------+---------------+-------+----------+-------------+
| 192.168.2.63 | node specific | up    | 04:17:14 | Established |
+--------------+---------------+-------+----------+-------------+
```
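The "node specific" peer type in the status above comes from an explicit BGPPeer resource rather than the default node-to-node mesh. The manifest that produces such a peering is not shown here; a hedged sketch of what it might look like (the resource name, node name, and AS number are assumptions — 64512 is Calico's default AS):

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-node-02        # name is an assumption
spec:
  node: k8s-master-01          # node this peering applies to (assumption)
  peerIP: 192.168.2.63         # the peer address seen in the status output
  asNumber: 64512              # Calico's default AS number
```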
```shell
# calicoctl apply -f ipip.yaml
# calicoctl get ippool -o wide
NAME                  CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR
default-ipv4-ippool   10.244.0.0/16   true   Always     Never       false      all()

# ip route    (a tunl0 interface has been added)
default via 192.168.2.2 dev eth0
10.244.44.192/26 via 192.168.2.63 dev tunl0 proto bird onlink
blackhole 10.244.151.128/26 proto bird
10.244.154.192/26 via 192.168.2.62 dev tunl0 proto bird onlink
169.254.0.0/16 dev eth0 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.61
```
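The contents of the `ipip.yaml` applied above are not shown. Judging from the output (`IPIPMODE` changing from `Never` to `Always` on the default pool), it likely redefines the default IPPool with IPIP enabled; a hedged sketch:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.244.0.0/16
  ipipMode: Always       # enable IPIP encapsulation for all traffic
  vxlanMode: Never
  natOutgoing: true
```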
IPIP diagram:
The rough flow when Pod 1 accesses Pod 2 is as follows:

1. The packet leaves container 1 and arrives at the other end of the veth pair (on the host, a device named with the `cali` prefix);
2. It enters the IP tunnel device (tunl0), where the Linux kernel's IPIP driver encapsulates it in an IP packet on the host network (the new packet's destination is the original packet's next hop, i.e. 192.168.31.63), turning it into a Node 1-to-Node 2 packet;
3. The packet is forwarded at layer 3 by the router to Node 2;
4. When Node 2 receives the packet, its network stack uses the IPIP driver to decapsulate it and recover the original IP packet;
5. Routing rules then forward the packet to the `cali` device, and it reaches container 2.
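One practical consequence of the encapsulation step: IPIP prepends one extra 20-byte IPv4 header, so the tunnel device's MTU sits below the underlay's. A small arithmetic sketch with common default values (1500-byte underlay MTU is an assumption; check your interfaces):

```shell
# Hedged sketch: IPIP adds a single outer IPv4 header (20 bytes),
# so tunl0's MTU is typically the underlay MTU minus 20.
eth_mtu=1500      # assumed underlay MTU
ipip_overhead=20  # one extra IPv4 header
echo "tunl0 MTU: $((eth_mtu - ipip_overhead))"
# → tunl0 MTU: 1480
```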
Routing tables:
```shell
# node1
10.244.36.65 dev cali4f18ce2c9a1 scope link
10.244.169.128/26 via 192.168.31.63 dev tunl0 proto bird onlink

# node2
10.244.169.129 dev calia4d5b2258bb scope link
10.244.36.64/26 via 192.168.31.62 dev tunl0 proto bird onlink
```