[TOC]

# Quick notes

- proxy_arp: when an ARP request crosses subnets, the router returns its own MAC address to the sender of the ARP broadcast, acting as a MAC-address proxy (a benign spoof) so that the hosts can still communicate. 0 disables it, 1 enables it.
  > With proxy_arp enabled (/proc/sys/net/ipv4/conf/[interface]/proxy_arp): if the requested IP address does not belong to a local interface but the host has a route to it, the kernel replies with its own MAC address; if there is no route to the address, it does not reply. (A verification sketch is given at the end of the same-node section below.)
- Matching a container to its host-side veth (the veth pair):
  1. Inside the container, run `cat /sys/class/net/eth0/iflink` to get the ifindex of the peer veth on the host.
  2. On the host, run `ip r | grep [container IP]`.
- The IPIP protocol is IP protocol number 4.
- tcpdump capture: `tcpdump 'ip proto 4'`
- Wireshark filter: `ip.proto == 4`

# Same-node communication

## Background: the two pods

Pod placement:

```shell
$ kubectl get pod -l app=fileserver -owide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE             NOMINATED NODE   READINESS GATES
fileserver-7cb9d7d4d-h99sp   1/1     Running   0          14m   172.26.40.147   192.168.32.127   <none>           <none>
fileserver-7cb9d7d4d-mssdr   1/1     Running   0          14m   172.26.40.146   192.168.32.127   <none>           <none>
```

`fileserver-7cb9d7d4d-mssdr` details:

```shell
# IP addresses
$ kubectl exec -it fileserver-7cb9d7d4d-mssdr -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP
    link/ether 26:05:c7:19:a8:cf brd ff:ff:ff:ff:ff:ff
    inet 172.26.40.146/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2405:c7ff:fe19:a8cf/64 scope link
       valid_lft forever preferred_lft forever

# Routes
$ kubectl exec -it fileserver-7cb9d7d4d-mssdr -- ip r
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link

# Host-side veth of the pair
$ ip r | grep 172.26.40.146
172.26.40.146 dev calie64b9fa939d scope link
```

`fileserver-7cb9d7d4d-h99sp` details:

```shell
# IP addresses
$ kubectl exec -it fileserver-7cb9d7d4d-h99sp -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP
    link/ether 7a:3a:28:54:4e:03 brd ff:ff:ff:ff:ff:ff
    inet 172.26.40.147/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::783a:28ff:fe54:4e03/64 scope link
       valid_lft forever preferred_lft forever

# Routes
$ kubectl exec -it fileserver-7cb9d7d4d-h99sp -- ip r
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link

# Host-side veth of the pair
$ ip r | grep 172.26.40.147
172.26.40.147 dev calic40aae79714 scope link
```

## IPIP

From `fileserver-7cb9d7d4d-mssdr` to `fileserver-7cb9d7d4d-h99sp`, both pods on the same node. Packet flow diagram:

![](https://img.kancloud.cn/7b/0a/7b0aae22386b382d94bf4e0168373867_1307x582.png)

Packet capture verification:

```shell
tcpdump -i calie64b9fa939d -penn
tcpdump -i calic40aae79714 -penn
```

![](https://img.kancloud.cn/a7/ae/a7ae5e64dd64f06b8e28dacb7085a60e_1920x1002.png)

## BGP

From `fileserver-7cb9d7d4d-mssdr` to `fileserver-7cb9d7d4d-h99sp`, both pods on the same node. Packet flow diagram:

![](https://img.kancloud.cn/7b/0a/7b0aae22386b382d94bf4e0168373867_1307x582.png)

Packet capture verification:

```shell
tcpdump -i calie64b9fa939d -penn
tcpdump -i calic40aae79714 -penn
```

![](https://img.kancloud.cn/0a/dd/0add13a632e70dc4f74913da800baee4_1920x842.png)

> The three-way handshake is exactly the same as in the IPIP case; the screenshot shows the captured packets. Because the pod and host ARP caches were already populated during the IPIP test, fewer ARP broadcast packets appear here.
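Tying the proxy_arp and veth-pair notes above to the same-node path just shown, here is a minimal verification sketch run on the host. All pod and interface names (`fileserver-7cb9d7d4d-mssdr`, `calie64b9fa939d`, `calic40aae79714`) are the ones from this example and must be replaced in another environment; expected values are what a typical Calico setup should show, not guaranteed output.

```shell
# 1. Match the container to its host-side veth: the container's iflink equals the host veth's ifindex.
kubectl exec -it fileserver-7cb9d7d4d-mssdr -- cat /sys/class/net/eth0/iflink   # e.g. 11
ip -o link | grep '^11: '                                                       # should show calie64b9fa939d

# 2. The container's gateway 169.254.1.1 is answered by proxy ARP on that veth
#    (expected to be 1 on a Calico-managed cali* interface).
cat /proc/sys/net/ipv4/conf/calie64b9fa939d/proxy_arp
kubectl exec -it fileserver-7cb9d7d4d-mssdr -- ip neigh                         # 169.254.1.1 should resolve to that veth's MAC

# 3. Same-node forwarding is then a plain route lookup on the host, straight to the peer pod's veth.
ip route get 172.26.40.147                                                      # ... dev calic40aae79714
```

If proxy_arp were disabled on the cali interface, the ARP request for 169.254.1.1 would go unanswered and the pod could not reach its default gateway at all.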
# Cross-node communication

## Background: the two pods

Pod placement:

```shell
$ kubectl get pod -owide -l app=fileserver
NAME                          READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
fileserver-595ccd77dd-hh8c7   1/1     Running   0          7s    172.26.40.161    192.168.32.127   <none>           <none>
fileserver-595ccd77dd-k9bzv   1/1     Running   0          8s    172.26.122.151   192.168.32.128   <none>           <none>
```

`fileserver-595ccd77dd-hh8c7` details:

```shell
# IP addresses
$ kubectl exec -it fileserver-595ccd77dd-hh8c7 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 82:de:a5:aa:e4:41 brd ff:ff:ff:ff:ff:ff
    inet 172.26.40.161/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::80de:a5ff:feaa:e441/64 scope link
       valid_lft forever preferred_lft forever

# Routes
$ kubectl exec -it fileserver-595ccd77dd-hh8c7 -- ip r
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link

# Host-side veth of the pair
$ ip r | grep 172.26.40.161
172.26.40.161 dev cali5e8dd2e9d68 scope link
```

`fileserver-595ccd77dd-k9bzv` details:

```shell
# IP addresses
$ kubectl exec -it fileserver-595ccd77dd-k9bzv -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 12:ec:00:00:e6:71 brd ff:ff:ff:ff:ff:ff
    inet 172.26.122.151/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::10ec:ff:fe00:e671/64 scope link
       valid_lft forever preferred_lft forever

# Routes
$ kubectl exec -it fileserver-595ccd77dd-k9bzv -- ip r
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link

# Host-side veth of the pair
$ ip r | grep 172.26.122.151
172.26.122.151 dev cali7b1def0e886 scope link
```

## IPIP

From `fileserver-595ccd77dd-hh8c7` to `fileserver-595ccd77dd-k9bzv`, the pods are on different nodes. Packet flow diagram:

![](https://img.kancloud.cn/89/43/894399fcfaa2aa21cb49b7594d5710c4_1815x610.png)

Packet capture verification:

```shell
# Capture on host 192.168.32.127
tcpdump -i ens33 -penn host 192.168.32.128 and 'ip proto 4'
tcpdump -i tunl0 -penn host 172.26.122.151
tcpdump -i cali5e8dd2e9d68 -penn

# Capture on host 192.168.32.128
tcpdump -i ens33 -penn host 192.168.32.127 and 'ip proto 4'
tcpdump -i tunl0 -penn host 172.26.40.161
tcpdump -i cali7b1def0e886 -penn
```

![](https://img.kancloud.cn/b6/68/b6681dc66d4cb76ed21c9fe0e32bb48a_1920x1019.png)
![](https://img.kancloud.cn/1d/61/1d61e9b467040b012b4bd9e31943726f_1920x1014.png)

## BGP

From `fileserver-595ccd77dd-hh8c7` to `fileserver-595ccd77dd-k9bzv`, the pods are on different nodes. Packet flow diagram:

![](https://img.kancloud.cn/42/4c/424c443378afb0bd68d4d60178b13b96_1804x591.png)

Packet capture verification:

```shell
# Capture on host 192.168.32.127
tcpdump -i ens33 -penn host 172.26.122.151
tcpdump -i cali5e8dd2e9d68 -penn

# Capture on host 192.168.32.128
tcpdump -i ens33 -penn host 172.26.40.161
tcpdump -i cali7b1def0e886 -penn
```

![](https://img.kancloud.cn/a4/1f/a41f3346d8714845c46a9eb55803c2d8_1920x1016.png)
![](https://img.kancloud.cn/65/99/6599e0cadadb2d556f79dd178b1a30ef_1920x967.png)
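Before the summary, a quick sketch of how to tell which mode a cluster is in and how it shows up in the host routing table. The pool name `default-ipv4-ippool`, the /26 block size, and the example routes below are assumptions based on a default Calico install plus the addresses used above.

```shell
# Encapsulation setting of the IP pool (requires calicoctl; pool name assumed).
calicoctl get ippool default-ipv4-ippool -o yaml | grep -i ipipMode
# ipipMode: Always -> IPIP encapsulation; ipipMode: Never -> plain BGP routing

# On 192.168.32.127, the route towards the peer node's pod block tells the same story:
ip route | grep 172.26.122
# IPIP mode (next hop reached through the tunnel device), e.g.:
#   172.26.122.128/26 via 192.168.32.128 dev tunl0 proto bird onlink
# BGP mode (next hop reached directly on the physical NIC), e.g.:
#   172.26.122.128/26 via 192.168.32.128 dev ens33 proto bird
```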
- Same node: whether the pool uses IPIP or BGP, the communication path is the same — the host looks up its routing table and forwards the traffic directly between the two cali interfaces.
- Cross node
  - IPIP: packets pass through the `tunl0` device, which encapsulates them with the host IPs; a capture on the host NIC shows two network layers (outer: source host → destination host; inner: source pod → destination pod), and the link layer carries the source and destination host MAC addresses.
  - BGP: packets do not go through `tunl0`; a capture on the host NIC shows a single network layer carrying the client pod IP and the server pod IP, and the link layer again carries the source and destination host MAC addresses.
  - From the captures, the only difference is at the network layer: IPIP adds an extra host-to-host IP header, while BGP does not.
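The network-layer difference above can be seen directly in a capture on the host NIC. The addresses below are the ones from the cross-node example; the port numbers are made up for illustration and the output lines are abbreviated sketches of what tcpdump typically prints, not exact captures.

```shell
# IPIP mode: tcpdump decodes both the outer (host) and inner (pod) IP headers of the same frame.
tcpdump -i ens33 -nn 'ip proto 4'
#   IP 192.168.32.127 > 192.168.32.128: IP 172.26.40.161.51234 > 172.26.122.151.80: Flags [S], ...

# BGP mode: the same connection shows a single IP header carrying the pod addresses.
tcpdump -i ens33 -nn host 172.26.40.161
#   IP 172.26.40.161.51234 > 172.26.122.151.80: Flags [S], ...
```

The 20 bytes of that outer IP header are also why the pod `eth0` MTU is 1480 in the same-node outputs above but 1500 in the cross-node outputs, which were presumably taken while the pool was in BGP mode.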