Container Cloud Platform No.2: Creating a Highly Available Cluster with kubeadm v1.19.1

This is the second article in the series on building a container platform with Kubernetes. The official v1.19.0 release has just come out, so this post walks through installing a highly available Kubernetes cluster with kubeadm using the latest version.
There are plenty of tools for installing k8s, but for learning purposes it is still worth installing step by step, so that you understand the components running inside the cluster and can troubleshoot more easily later on.

The environment used in this article:
Servers: 3
OS: CentOS 7
I won't draw the topology diagram myself; the one below is copied from the official documentation.
[Topology diagram: stacked HA control plane, from the official Kubernetes documentation]

###Overview
A quick explanation of the diagram: the three servers act as master nodes, keepalived + haproxy load-balance the apiserver, and worker nodes talk to the apiserver through the VIP. As mentioned in the first article, all cluster state lives in the etcd cluster.
Now, let's get to work.

###Configure package repositories
Three repositories are configured here, all switched to mirrors inside China to speed up package downloads.

# OS (CentOS) repository
curl -O http://mirrors.aliyun.com/repo/Centos-7.repo
# Docker repository
curl -O https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's/download.docker.com/mirrors.ustc.edu.cn\/docker-ce/g' docker-ce.repo
# Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
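The two repo files fetched with curl -O above land in the current working directory; for yum to actually use them they need to end up under /etc/yum.repos.d/. A minimal follow-up, assuming the commands were run from your home directory:

mv Centos-7.repo docker-ce.repo /etc/yum.repos.d/
yum clean all && yum makecache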

###Configure system parameters
With the repositories in place, a few kernel and OS parameters need to be set. These are all official recommendations; more tuning will be covered later.

# Disable SELinux for the current boot
# To disable it permanently, edit /etc/sysconfig/selinux
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
# Turn off swap for the current boot
# To disable it permanently, comment out the swap line(s) in /etc/fstab
swapoff -a
# Enable forwarding
# Docker changed its default firewall rules starting with 1.13:
# it disables the FORWARD chain in the iptables filter table,
# which breaks cross-node Pod communication in a Kubernetes cluster
iptables -P FORWARD ACCEPT
# Set the forwarding-related sysctls, otherwise errors may occur later
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system
# Load the IPVS kernel modules
# After a reboot they have to be loaded again
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
lsmod | grep ip_vs
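Two optional follow-ups to make the settings above survive a reboot; these are hedged sketches rather than part of the original steps. The first comments out the swap entry in /etc/fstab, the second lets systemd-modules-load reload the IPVS modules at boot (the file name ipvs.conf is just a choice here):

# Comment out any swap line in /etc/fstab (permanent swap off)
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Have systemd load the IPVS modules on every boot
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF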

###Install kubeadm and related packages

yum install -y kubelet kubeadm kubectl ipvsadm
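This pulls the latest packages from the repository. If you want every node to get exactly the same release as the one used in this article, you can pin the version in the yum command (a sketch; adjust the version string to whatever you are targeting):

yum install -y kubelet-1.19.1 kubeadm-1.19.1 kubectl-1.19.1 ipvsadm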

###Configure docker
The main goals are to speed up pulls of public images via mirrors and to allow pulling from an insecure private registry.
Replace hub.xxx.com with your own private registry address; if you don't have one, delete the insecure-registries line.
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://ci7pm4nx.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com"],
  "insecure-registries": ["hub.xxx.com"]
}

After writing the configuration, restart docker:

systemctl restart docker
systemctl enable docker.service

Check docker info; the output should include:

 Insecure Registries:
  hub.xxx.com
  127.0.0.0/8
 Registry Mirrors:
  https://ci7pm4nx.mirror.aliyuncs.com/
  https://registry.docker-cn.com/
  http://hub-mirror.c.163.com/
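One optional tweak, not part of the original steps: the kubeadm preflight check further down warns that Docker is using the cgroupfs driver while systemd is recommended. If you want to address that warning, the driver can be switched in the same daemon.json (restart docker afterwards, and do it before kubeadm init, since kubeadm detects the Docker cgroup driver during initialization). A hedged sketch, not required for the install to succeed:

{
  "registry-mirrors": ["https://ci7pm4nx.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com"],
  "insecure-registries": ["hub.xxx.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}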

###Start kubelet

systemctl enable --now kubelet

kubelet will now restart every few seconds, because it is stuck in a crash loop waiting for instructions from kubeadm.

##Install and configure haproxy and keepalived (on all three machines)
Install the packages: yum install -y haproxy keepalived

Configure haproxy

Note: create the /var/log/haproxy.log file manually.

[root@k8s-master001 ~]# cat /etc/haproxy/haproxy.cfg
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /var/log/haproxy.log local0
    daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s
listen admin_stats
    mode                    http
    bind                    0.0.0.0:1080
    log                     127.0.0.1 local0 err
    stats refresh           30s
    stats uri               /haproxy-status
    stats realm             Haproxy\ Statistics
    stats auth              admin:admin
    stats hide-version
    stats admin if TRUE
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
    server k8s-master001  10.26.25.20:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server k8s-master002  10.26.25.21:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server k8s-master003  10.26.25.22:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
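Before starting the service, it can be worth checking the file for syntax errors; haproxy has a built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg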

####Start haproxy
systemctl start haproxy
systemctl enable haproxy

###Configure keepalived

[root@k8s-master001 ~]# cat /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_K8S
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass kubernetes
    }
    virtual_ipaddress {
        10.26.25.23
    }
    track_script {
        check_apiserver
    }
}
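The file above is the one for master001. On master002 and master003 the usual pattern is to demote the instance so only one node claims the VIP at a time; the interface, VIP and auth_pass stay the same. A sketch of the lines that differ (the priority values are arbitrary choices here):

vrrp_instance VI_1 {
    state BACKUP
    priority 99            # e.g. 98 on master003
    # everything else identical to master001
}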

Add the keepalived check script:

[root@k8s-master001 ~]# cat /etc/keepalived/check_apiserver.sh
#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
curl --silent --max-time 2 --insecure https://localhost:8443/ -o /dev/null || errorExit "Error GET https://localhost:8443/"
if ip addr | grep -q 10.26.25.23; then
    curl --silent --max-time 2 --insecure https://10.26.25.23:8443/ -o /dev/null || errorExit "Error GET https://10.26.25.23:8443/"
fi
chmod +x /etc/keepalived/check_apiserver.sh

####Start keepalived

systemctl start keepalived
systemctl enable keepalived

You can now open http://<masterIP>:1080/haproxy-status to reach the haproxy management page; the username and password are set in the config file, admin/admin in this article, and you can change them.
At first the apiserver rows will all be red, meaning the service is not up yet; the screenshot here was taken later, so they show green.
[Screenshot: haproxy status page showing the three apiserver backends]
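A quick way to see which node currently holds the VIP is to check the interface named in the keepalived config (ens18 in this setup):

ip addr show ens18 | grep 10.26.25.23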


Next, it's time to initialize the Kubernetes cluster.
###Initialize the first control-plane node, master001

[root@k8s-master001 ~]# kubeadm init --control-plane-endpoint 10.26.25.23:8443 --upload-certs --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr 10.244.0.0/16
W0910 05:09:41.166260   29186 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
........ (part of the output omitted)
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
............ (part of the output omitted)
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 10.26.25.23:8443 --token f28iti.c5fgj45u28332ga7 \
    --discovery-token-ca-cert-hash sha256:81ec8f1d1db0bb8a31d64ae31091726a92b9294bcfa0e2b4309b9d8c5245db41 \
    --control-plane --certificate-key 93f9514164e2ecbd85293a9c671344e06a1aa811faf1069db6f678a1a5e6f38b
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 10.26.25.23:8443 --token f28iti.c5fgj45u28332ga7 \
    --discovery-token-ca-cert-hash sha256:81ec8f1d1db0bb8a31d64ae31091726a92b9294bcfa0e2b4309b9d8c5245db41

Output like the above means the initialization succeeded.
Explanation of the init command:
kubeadm init --control-plane-endpoint 10.26.25.23:8443 --upload-certs --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr 10.244.0.0/16

  • --control-plane-endpoint 10.26.25.23:8443 — 10.26.25.23 is the VIP configured in keepalived
  • --image-repository registry.aliyuncs.com/google_containers — overrides the default image registry; the default k8s.gcr.io cannot be reached from inside China unless you have a way around the firewall
  • --pod-network-cidr 10.244.0.0/16 — defines the pod network range; it must match the network defined in the flannel manifest, otherwise the flannel pods may keep restarting after installation (this comes up again in the flannel section below)

What the initialization does, in brief:

  • downloads the required images
  • creates the certificates
  • generates the yaml/config files for the services
  • starts the static pods

With initialization complete, you can follow the printed instructions to configure the kubectl client and start using Kubernetes, even though there is only one master node for now.

Start using the cluster:

[root@k8s-master001 ~]# mkdir -p $HOME/.kube
[root@k8s-master001 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master001 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master001 ~]# kubectl get no
NAME            STATUS     ROLES    AGE    VERSION
k8s-master001   NotReady   master   105s   v1.19.0

At this point there is only one node in the cluster, with status NotReady, because the network plugin has not been installed yet.
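If you want to confirm that the missing network plugin is the cause, the node's Ready condition spells it out (a quick check, not part of the original steps):

kubectl describe node k8s-master001 | grep -A2 Ready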
Next, install the Flannel network plugin.

###Install Flannel
Download the manifest needed for installation: wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel.yml
Because we are installing the latest Kubernetes release, the rbac API version in the manifest has to be changed to rbac.authorization.k8s.io/v1, the DaemonSet API version to apps/v1, and a selector has to be added. Only part of the file is shown here (see the RBAC sketch after the DaemonSet excerpt below).

    [root@k8s-master001 ~]# cat kube-flannel.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      tier: node
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
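For the RBAC objects mentioned above, only the apiVersion line changes; assuming the object names from the upstream manifest, the header ends up looking like this (a sketch, the rules themselves stay as downloaded):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: flannel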

Next, install Flannel with kubectl, then check whether the flannel pod is running:

    kubectl apply -f kube-flannel.yml
[root@k8s-master001 ~]# kubectl get no
NAME            STATUS   ROLES    AGE     VERSION
k8s-master001   Ready    master   6m35s   v1.19.0
[root@k8s-master001 ~]# kubectl get po -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-9cr5l                1/1     Running   0          6m51s
coredns-6d56c8448f-wsjwx                1/1     Running   0          6m51s
etcd-k8s-master001                      1/1     Running   0          7m
kube-apiserver-k8s-master001            1/1     Running   0          7m
kube-controller-manager-k8s-master001   1/1     Running   0          7m
kube-flannel-ds-nmfwd                   1/1     Running   0          4m36s
kube-proxy-pqrnl                        1/1     Running   0          6m51s
kube-scheduler-k8s-master001            1/1     Running   0          7m

You can see a pod named kube-flannel-ds-nmfwd in the Running state, which means flannel is installed.
Since there is only one node for now, only one flannel pod shows up; once the other two nodes are added, more pods will appear.
Next, continue by adding the remaining master nodes.

###Add the remaining control-plane nodes master002 and master003
Since one control-plane node already exists, the cluster is up, and the remaining machines only need to be joined to it. The join command was printed in the initialization output above.
Because the output is long, some of the less important information is omitted here.
On master002:

    [root@k8s-master002 ~]# kubeadm join 10.26.25.23:8443 --token f28iti.c5fgj45u28332ga7     --discovery-token-ca-cert-hash sha256:81ec8f1d1db0bb8a31d64ae31091726a92b9294bcfa0e2b4309b9d8c5245db41     --control-plane --certificate-key 93f9514164e2ecbd85293a9c671344e06a1aa811faf1069db6f678a1a5e6f38b
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
..............
To start administering your cluster from this node, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.

Output like this means the node was added successfully.
Now check the cluster's node list:

    [root@k8s-master002 ~]# kubectl get no
NAME            STATUS   ROLES    AGE     VERSION
k8s-master001   Ready    master   21m     v1.19.0
k8s-master002   Ready    master   6m5s    v1.19.0

The output now shows two master nodes. Adding master003 works exactly the same way as master002, so it is not repeated here.
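If you only get to master003 after the token or uploaded certificates have expired (the init output above notes the certs are deleted after two hours), the join command can be regenerated on an existing master. A sketch of the two relevant commands:

# prints a fresh "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command
# re-uploads the control-plane certificates and prints a new --certificate-key
kubeadm init phase upload-certs --upload-certs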

Once all three nodes have been added, kubectl shows the full cluster:

    [root@k8s-master003 ~]# kubectl get no
NAME            STATUS   ROLES    AGE   VERSION
k8s-master001   Ready    master   25m   v1.19.0
k8s-master002   Ready    master   10m   v1.19.0
k8s-master003   Ready    master   26s   v1.19.0

Finally, list all the pods that are now running:

    [root@k8s-master003 ~]# kubectl get po -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-9cr5l                1/1     Running   0          27m
coredns-6d56c8448f-wsjwx                1/1     Running   0          27m
etcd-k8s-master001                      1/1     Running   0          27m
etcd-k8s-master002                      1/1     Running   0          8m19s
etcd-k8s-master003                      1/1     Running   0          83s
kube-apiserver-k8s-master001            1/1     Running   0          27m
kube-apiserver-k8s-master002            1/1     Running   0          12m
kube-apiserver-k8s-master003            1/1     Running   0          85s
kube-controller-manager-k8s-master001   1/1     Running   1          27m
kube-controller-manager-k8s-master002   1/1     Running   0          12m
kube-controller-manager-k8s-master003   1/1     Running   0          81s
kube-flannel-ds-2lh42                   1/1     Running   0          2m31s
kube-flannel-ds-nmfwd                   1/1     Running   0          25m
kube-flannel-ds-w276b                   1/1     Running   0          11m
kube-proxy-dzpdz                        1/1     Running   0          2m39s
kube-proxy-hd5tb                        1/1     Running   0          12m
kube-proxy-pqrnl                        1/1     Running   0          27m
kube-scheduler-k8s-master001            1/1     Running   1          27m
kube-scheduler-k8s-master002            1/1     Running   0          12m
kube-scheduler-k8s-master003            1/1     Running   0          76s

As you can see, the core Kubernetes services apiserver, controller-manager, and scheduler each run as three pods now.

With that, the highly available Kubernetes master setup is complete.
Through the haproxy web management page you can see that all three masters are now available.

###Troubleshooting
If initializing a master or joining a node fails, you can reset it with kubeadm reset and then install again.
#####Reset a node

    [root@k8s-node003 haproxy]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0910 05:31:57.345399   20386 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: node k8s-node003 doesn't have kubeadm.alpha.kubernetes.io/cri-socket annotation
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0910 05:31:58.580982   20386 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
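As the output says, reset does not touch the CNI config, iptables/IPVS state, or kubeconfig files. A hedged cleanup sketch for those leftovers, to run before re-installing:

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear
rm -rf $HOME/.kube/config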

That is plenty for one article; the rest will follow in the next part...