Step-by-Step Kubernetes Binary Deployment (Part 2) — flannel Network Configuration (Single Node)


Preface

The previous article walked through building the etcd cluster for a single-node Kubernetes binary deployment. This article continues from there, completing the flannel network configuration that gives the cluster cross-node container communication.

Environment Preparation

First, install docker-ce on both node machines; you can refer to my earlier Docker article, "Unveiling Docker — Basic Theory and Installation Walkthrough". Here I installed it directly with a shell script; note that for registry mirror acceleration it is best to use an accelerator address you have requested yourself from Alibaba Cloud or elsewhere. A minimal sketch follows.
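The sketch below assumes CentOS 7; the mirror URL is a placeholder, so substitute your own accelerator address:

#!/bin/bash
# Install docker-ce from the official repository (assumes CentOS 7)
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
# Point docker at a registry mirror; replace the placeholder address with your own
mkdir -p /etc/docker
cat <<EOF >/etc/docker/daemon.json
{
  "registry-mirrors": ["https://your-id.mirror.aliyuncs.com"]
}
EOF
systemctl enable docker
systemctl restart docker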

Last time I suspended the virtual machines in the lab environment, so at this point it is worth checking that the network can reach the Internet, and then checking the health of the three-node etcd cluster. node01 is used as the example for this verification:

[root@node01 opt]# ping www.baidu.com
# Verify on both node machines that the docker service is running
[root@node01 opt]# systemctl status docker.service
# etcd cluster health check
[root@node01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" cluster-health
member a25c294d3a391c7c is healthy: got healthy result from https://192.168.0.128:2379
member b2db359ffad36ee5 is healthy: got healthy result from https://192.168.0.129:2379
member eddae83baed564ba is healthy: got healthy result from https://192.168.0.130:2379
cluster is healthy

The output cluster is healthy shows that the etcd cluster is currently in good health.

Configuring the flannel Network

On the master node, write the allocated subnet into etcd for flannel to use:

# Write operation (endpoints match the three etcd members verified above)
[root@master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
# The command echoes the value it wrote:
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
# Read the key back to verify
[root@master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" get /coreos.com/network/config
# Output:
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Deploy flannel on the node machines. You will need the software package first; the configuration is the same on both nodes, so node01 is again the example.
Software package download:
Link: https://pan.baidu.com/s/1etCPIGRQ1ZUxcNaCxChaCQ
Extraction code: 65ml

[root@node01 ~]# ls
anaconda-ks.cfg                     initial-setup-ks.cfg  模板  图片  下载  桌面
flannel-v0.10.0-linux-amd64.tar.gz
[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
# These are the extracted files: the flanneld binary and mk-docker-opts.sh, which generates docker's network options from the allocated subnet

Create the Kubernetes working directories on both nodes, then move the two executables into the bin directory:

[root@node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

Next we need a configuration file and a systemd startup unit; a shell script handles both:

vim flannel.sh

#!/bin/bash
# Usage: bash flannel.sh <comma-separated etcd endpoints>
# Defaults to a local, insecure etcd endpoint if no argument is given
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

# flanneld options: the etcd endpoints plus the cluster's TLS certificates.
# The trailing backslashes are heredoc line continuations, so the config
# file ends up with a single FLANNEL_OPTIONS line.
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

# systemd unit: flanneld must start before docker; after startup,
# mk-docker-opts.sh writes the allocated subnet into /run/flannel/subnet.env
# for docker to consume. \$FLANNEL_OPTIONS is escaped so that systemd,
# not this script, expands it from the EnvironmentFile.
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

Run the script, passing the etcd cluster endpoints:

[root@node01 ~]# bash flannel.sh https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379
# Output:
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
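Before wiring docker to flannel, it is worth confirming that flanneld came up cleanly:

[root@node01 ~]# systemctl status flanneld
# flanneld should show "active (running)"; if not, recheck the etcd endpoints
# and certificate paths in /opt/kubernetes/cfg/flanneld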

Now configure docker to use the flannel network:

# Edit the docker systemd unit file
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
# In the [Service] section, load the environment file that flannel generated
EnvironmentFile=/run/flannel/subnet.env
# and add the $DOCKER_NETWORK_OPTIONS parameter to the ExecStart line
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

Take a look at the subnet.env file that mk-docker-opts.sh generated:

[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.56.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.56.1/24 --ip-masq=false --mtu=1450"
# --bip sets the bridge IP, i.e. the container subnet docker uses at startup

Restart the docker service:

[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
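To confirm docker actually picked up the flannel-generated options, you can inspect the merged unit and the running dockerd command line:

# Show the unit lines we changed
[root@node01 ~]# systemctl cat docker | grep -E 'EnvironmentFile|ExecStart'
# The dockerd process should now include the --bip, --ip-masq and --mtu values from subnet.env
[root@node01 ~]# ps -ef | grep [d]ockerd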

Check the flannel network:

[root@node01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
inet 172.17.56.1  netmask 255.255.255.0  broadcast 172.17.56.255
ether 02:42:fb:e2:37:f9  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.0.129  netmask 255.255.255.0  broadcast 192.168.0.255
inet6 fe80::20c:29ff:fe1d:9287  prefixlen 64  scopeid 0x20<link>
ether 00:0c:29:1d:92:87  txqueuelen 1000  (Ethernet)
RX packets 1068818  bytes 1195325321 (1.1 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 461088  bytes 43526519 (41.5 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
# Check that flannel's subnet matches the subnet.env shown earlier
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
inet 172.17.56.0  netmask 255.255.255.255  broadcast 0.0.0.0
inet6 fe80::74a5:98ff:fe3f:4bf7  prefixlen 64  scopeid 0x20<link>
ether 76:a5:98:3f:4b:f7  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 26 overruns 0  carrier 0  collisions 0

The subnet on my node02 is 172.17.91.0/24; from node01, test pinging that subnet's gateway:

[root@node01 ~]# ping 172.17.91.1
PING 172.17.91.1 (172.17.91.1) 56(84) bytes of data.
64 bytes from 172.17.91.1: icmp_seq=1 ttl=64 time=0.436 ms
64 bytes from 172.17.91.1: icmp_seq=2 ttl=64 time=0.343 ms
64 bytes from 172.17.91.1: icmp_seq=3 ttl=64 time=1.19 ms
64 bytes from 172.17.91.1: icmp_seq=4 ttl=64 time=0.439 ms
^C

A successful ping proves that flannel is routing traffic between the nodes.
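Under the hood, flanneld installs a route to each remote node's subnet via the flannel.1 vxlan interface, which is what makes the ping above work. A sketch of the relevant routes on node01 (illustrative excerpt; your subnets will differ):

[root@node01 ~]# ip route
# Illustrative excerpt:
172.17.56.0/24 dev docker0 proto kernel scope link src 172.17.56.1
172.17.91.0/24 via 172.17.91.0 dev flannel.1 onlink

The local docker0 bridge owns 172.17.56.0/24, while traffic for node02's 172.17.91.0/24 is handed to flannel.1 and encapsulated in vxlan.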

Now start a container on each node and test network communication between the two containers:

[root@node01 ~]#  docker run -it centos:7 /bin/bash
# We are dropped straight into the container; install net-tools to get ifconfig
[root@8bf87d48390f /]# yum install -y net-tools
[root@8bf87d48390f /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
inet 172.17.56.2  netmask 255.255.255.0  broadcast 172.17.56.255
ether 02:42:ac:11:38:02  txqueuelen 0  (Ethernet)
RX packets 9511  bytes 7631125 (7.2 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 4561  bytes 249617 (243.7 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
# The second container, started the same way on node02, and its address:
[root@234aac7fad6c /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
inet 172.17.91.2  netmask 255.255.255.0  broadcast 172.17.91.255
ether 02:42:ac:11:5b:02  txqueuelen 0  (Ethernet)
RX packets 9456  bytes 7629047 (7.2 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 4802  bytes 262568 (256.4 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Test whether the two containers can ping each other:

[root@8bf87d48390f /]# ping 172.17.91.2
PING 172.17.91.2 (172.17.91.2) 56(84) bytes of data.
64 bytes from 172.17.91.2: icmp_seq=1 ttl=62 time=0.555 ms
64 bytes from 172.17.91.2: icmp_seq=2 ttl=62 time=0.361 ms
64 bytes from 172.17.91.2: icmp_seq=3 ttl=62 time=0.435 ms

A successful ping shows that containers on the two nodes can now communicate with each other.