Building a Highly Available Kubernetes Cluster with kubeadm

Environment Preparation

Server Overview

I am using five CentOS 7.7 virtual machines; the details are listed in the table below:

OS Version    IP Address        Role     CPU   Memory   Hostname
CentOS-7.7    192.168.243.138   master   >=2   >=2G     m1
CentOS-7.7    192.168.243.136   master   >=2   >=2G     m2
CentOS-7.7    192.168.243.141   master   >=2   >=2G     m3
CentOS-7.7    192.168.243.139   worker   >=2   >=2G     s1
CentOS-7.7    192.168.243.140   worker   >=2   >=2G     s2

Docker must be installed on all five machines beforehand. The installation is straightforward and not covered here; see the official documentation:

  • https://docs.docker.com/engine/install/centos/

System Settings (all nodes)

1. Each node must have a unique hostname, and all nodes must be able to reach each other by hostname. Set the hostname:

# View the current hostname
$ hostname
# Change the hostname
$ hostnamectl set-hostname <your_hostname>

Configure /etc/hosts so that all nodes can reach each other by hostname:

$ vim /etc/hosts
192.168.243.138 m1
192.168.243.136 m2
192.168.243.141 m3
192.168.243.139 s1
192.168.243.140 s2
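
These entries must be identical on every node. A minimal sketch for pushing them to all machines from m1 over SSH (an assumption on my part; it presumes root SSH access and the IPs from the table above):

# Hypothetical helper: append the same host entries on every node
$ HOSTS='192.168.243.138 m1
192.168.243.136 m2
192.168.243.141 m3
192.168.243.139 s1
192.168.243.140 s2'
$ for ip in 192.168.243.138 192.168.243.136 192.168.243.141 192.168.243.139 192.168.243.140; do
      echo "$HOSTS" | ssh root@$ip 'cat >> /etc/hosts'
  done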

2. Install dependency packages:

# Update yum
$ yum update
# Install dependencies
$ yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

3. Disable the firewall and swap, and reset iptables:

# Disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld
# Reset iptables
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable swap
$ swapoff -a
$ sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# Disable selinux
$ setenforce 0
# Stop dnsmasq (otherwise Docker containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq

4. Configure kernel parameters:

# Create the configuration file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
# Apply the configuration
$ sysctl -p /etc/sysctl.d/kubernetes.conf
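
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A small sketch to load it now and on every boot (standard CentOS 7 mechanisms, not part of the original steps):

# Load the bridge netfilter module so the net.bridge.* sysctls become available
$ modprobe br_netfilter
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Re-apply the settings
$ sysctl -p /etc/sysctl.d/kubernetes.conf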

Install Required Tools (all nodes)

Tool overview:

  • kubeadm: the command used to bootstrap the cluster
  • kubelet: the component that runs on every machine in the cluster and manages the lifecycle of pods and containers
  • kubectl: the cluster management CLI (optional; it only needs to be installed on the nodes from which you will manage the cluster)

1. First add the Kubernetes yum repository:

$ bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF'

2. Install the Kubernetes components:

$ yum install -y kubelet kubeadm kubectl
$ systemctl enable --now kubelet.service
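
Without an explicit version, yum installs the latest packages from the repository, which may drift away from the v1.19.0 images used later in this article. If you want to pin the versions (assuming the 1.19.0 packages exist in the mirror), something like:

# Pin kubelet/kubeadm/kubectl to a specific release instead of "latest"
$ yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
$ systemctl enable --now kubelet.service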

Deploying the HA Cluster

Deploy keepalived for apiserver high availability (any two master nodes)

1. Run the following command on two of the master nodes to install keepalived (one MASTER, one BACKUP); I chose m1 and m2:

$ yum install -y keepalived

2. On both machines, create the directory that will hold the keepalived configuration file:

$ mkdir -p /etc/keepalived

3. On m1 (the MASTER role), create the following configuration file:

[root@m1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-master
}
vrrp_script check_apiserver {
    # Path of the health-check script
    script "/etc/keepalived/check-apiserver.sh"
    # Check interval in seconds
    interval 3
    # Reduce the priority by 2 on failure
    weight -2
}
vrrp_instance VI-kube-master {
    state MASTER  # role of this node
    interface ens32  # network interface name
    virtual_router_id 68
    priority 100
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        # the virtual IP
        192.168.243.100
    }
    track_script {
        check_apiserver
    }
}

4. On m2 (the BACKUP role), create the following configuration file:

[root@m2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-backup
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}
vrrp_instance VI-kube-master {
    state BACKUP
    interface ens32
    virtual_router_id 68
    priority 99
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        192.168.243.100
    }
    track_script {
        check_apiserver
    }
}

5. On both m1 and m2, create the keepalived health-check script. It is deliberately simple; extend it to suit your needs:

$ vim /etc/keepalived/check-apiserver.sh
#!/bin/sh
netstat -ntlp |grep 6443 || exit 1
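
The script above only checks that something is listening on port 6443. A slightly more robust variant (my own sketch, not from the original article) also verifies that the local apiserver answers its health endpoint:

#!/bin/sh
# Fail when nothing listens on 6443 or the apiserver does not report healthy
netstat -ntlp | grep 6443 || exit 1
curl -sk --max-time 2 https://127.0.0.1:6443/healthz | grep -q ok || exit 1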

6. After completing the steps above, start keepalived:

# Start the keepalived service on both the master and the backup
$ systemctl enable keepalived && service keepalived start
# Check the status
$ service keepalived status
# View the logs
$ journalctl -f -u keepalived
# Check the virtual IP
$ ip a

Deploy the first Kubernetes master node

In a cluster created with kubeadm, most components run as Docker containers, so kubeadm has to pull the corresponding component images when it initializes the master node. By default kubeadm pulls from Google's k8s.gcr.io, which cannot be reached from mainland China, so the pull will fail.

To deal with this you either need a way around the network restriction, or you manually pull the equivalent images from a domestic registry and re-tag them. I chose the latter. First, list the images kubeadm needs:

[root@m1 ~]# kubeadm config images list
W0830 19:17:13.056761   81487 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/kube-proxy:v1.19.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.9-1
k8s.gcr.io/coredns:1.7.0
[root@m1 ~]# 

I pull the images from Alibaba Cloud's container registry instead. One catch is that the version tags there may not match the ones kubeadm expects, so you have to look them up in the registry yourself:

  • https://cr.console.aliyun.com/cn-hangzhou/instances/images

For example, kubeadm lists v1.19.0 here, while the Aliyun registry only has v1.19.0-rc.1. After finding the corresponding tags, I wrote a small shell script to pull the images and re-tag them, so the work does not have to be repeated by hand:

[root@m1 ~]# vim pullk8s.sh
#!/bin/bash
ALIYUN_KUBE_VERSION=v1.19.0-rc.1
KUBE_VERSION=v1.19.0
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.9-1
DNS_VERSION=1.7.0
username=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(
    kube-proxy-amd64:${ALIYUN_KUBE_VERSION}
    kube-scheduler-amd64:${ALIYUN_KUBE_VERSION}
    kube-controller-manager-amd64:${ALIYUN_KUBE_VERSION}
    kube-apiserver-amd64:${ALIYUN_KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd-amd64:${ETCD_VERSION}
    coredns:${DNS_VERSION}
)
for image in ${images[@]}
do
    docker pull ${username}/${image}
    # Strip the "-amd64" suffix, otherwise kubeadm will not recognize the local image
    new_image=`echo $image|sed 's/-amd64//g'`
    if [[ $new_image == *$ALIYUN_KUBE_VERSION* ]]
    then
        new_kube_image=`echo $new_image|sed "s/$ALIYUN_KUBE_VERSION//g"`
        docker tag ${username}/${image} k8s.gcr.io/${new_kube_image}$KUBE_VERSION
    else
        docker tag ${username}/${image} k8s.gcr.io/${new_image}
    fi
    docker rmi ${username}/${image}
done
[root@m1 ~]# sh pullk8s.sh
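
As an alternative to pulling and re-tagging by hand, kubeadm can be pointed at a mirror registry directly. A sketch, assuming the Aliyun mirror carries tags that match your kubeadm version:

# Pre-pull the control-plane images straight from a mirror registry
$ kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.19.0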

After the script finishes, the local Docker image list should look like this:

[root@m1 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.19.0             b2d80fe68e4f        6 weeks ago         120MB
k8s.gcr.io/kube-controller-manager   v1.19.0             a7cd7b6717e8        6 weeks ago         116MB
k8s.gcr.io/kube-apiserver            v1.19.0             1861e5423d80        6 weeks ago         126MB
k8s.gcr.io/kube-scheduler            v1.19.0             6d4fe43fdd0d        6 weeks ago         48.4MB
k8s.gcr.io/etcd                      3.4.9-1             d4ca8726196c        2 months ago        253MB
k8s.gcr.io/coredns                   1.7.0               bfe3a36ebd25        2 months ago        45.2MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        6 months ago        683kB
[root@m1 ~]# 

Create the kubeadm configuration file used to initialize the master node:

[root@m1 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
# The control-plane endpoint; the IP here is the keepalived virtual IP
controlPlaneEndpoint: "192.168.243.100:6443"
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "172.22.0.0/16"  # the network segment used by pods

Then run the following command to initialize the first master:

[root@m1 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs
W0830 20:05:29.447773   88394 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1] and IPs [10.96.0.1 192.168.243.138 192.168.243.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m1] and IPs [192.168.243.138 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m1] and IPs [192.168.243.138 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 173.517640 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e
[mark-control-plane] Marking the node m1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5l7pv5.5iiq4atzlazq0b7x
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 \
    --control-plane --certificate-key a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140
[root@m1 ~]# 
  • Copy the two kubeadm join commands printed here; they will be needed later when adding the other master nodes and the worker nodes

Then run the following commands on the master node to copy the kubectl configuration file:

[root@m1 ~]# mkdir -p $HOME/.kube
[root@m1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
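
Since we are working as root here, an equivalent shortcut (my own note, not taken from the kubeadm output above) is to point kubectl at the admin kubeconfig directly:

# Alternative for the root user
$ export KUBECONFIG=/etc/kubernetes/admin.conf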

Check the current Pods:

[root@m1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-kg4lf      0/1     Pending   0          9m9s
kube-system   coredns-f9fd979d6-t8xzj      0/1     Pending   0          9m9s
kube-system   etcd-m1                      1/1     Running   0          9m22s
kube-system   kube-apiserver-m1            1/1     Running   1          9m22s
kube-system   kube-controller-manager-m1   1/1     Running   1          9m22s
kube-system   kube-proxy-rjgnw             1/1     Running   0          9m9s
kube-system   kube-scheduler-m1            1/1     Running   1          9m22s
[root@m1 ~]# 

Use curl to hit the health-check endpoint; an "ok" response means everything is fine:

[root@m1 ~]# curl -k https://192.168.243.100:6443/healthz
ok
[root@m1 ~]# 

Deploy the network plugin - Calico

Create a directory to hold the configuration file:

[root@m1 ~]# mkdir -p /etc/kubernetes/addons

Create the calico-rbac-kdd.yaml configuration file in that directory:

[root@m1 ~]# vi /etc/kubernetes/addons/calico-rbac-kdd.yaml
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - patch
  - apiGroups: [""]
    resources:
      - services
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
  - kind: ServiceAccount
    name: calico-node
    namespace: kube-system

Then run the following commands to install Calico:

[root@m1 ~]# kubectl apply -f /etc/kubernetes/addons/calico-rbac-kdd.yaml
[root@m1 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
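
It takes a little while for the Calico pods to become ready. A quick way to watch the rollout (just a convenience, not part of the original steps):

# Wait for the calico-node DaemonSet to finish rolling out
$ kubectl -n kube-system rollout status daemonset/calico-node
# Or simply watch the pods
$ kubectl -n kube-system get pods -w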

Check the status:

[root@m1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bc4fc6f5f-pdjls   1/1     Running   0          2m47s
kube-system   calico-node-tkdmv                          1/1     Running   0          2m47s
kube-system   coredns-f9fd979d6-kg4lf                    1/1     Running   0          23h
kube-system   coredns-f9fd979d6-t8xzj                    1/1     Running   0          23h
kube-system   etcd-m1                                    1/1     Running   1          23h
kube-system   kube-apiserver-m1                          1/1     Running   2          23h
kube-system   kube-controller-manager-m1                 1/1     Running   2          23h
kube-system   kube-proxy-rjgnw                           1/1     Running   1          23h
kube-system   kube-scheduler-m1                          1/1     Running   2          23h
[root@m1 ~]# 

Join the other master nodes to the cluster

Use the kubeadm join command saved earlier to join the cluster, but note that the join commands for master and worker nodes are different; do not mix them up. Run the following on both m2 and m3:

$ kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 \
    --control-plane --certificate-key a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e
  • Tip: the join command for master nodes includes the --control-plane --certificate-key parameters
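
If the token or the uploaded certificates have expired by the time you add a node (tokens last 24 hours, the certificate key only 2 hours, as noted in the init output above), they can be regenerated on m1. A sketch:

# Print a fresh worker join command (creates a new token)
$ kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate key;
# append it to the join command with --control-plane --certificate-key <key>
$ kubeadm init phase upload-certs --upload-certs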

Wait a moment; when the command succeeds it prints output like the following:

[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m3] and IPs [10.96.0.1 192.168.243.141 192.168.243.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m3] and IPs [192.168.243.141 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m3] and IPs [192.168.243.141 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node m3 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.

Then copy the kubectl configuration file as prompted:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, port 6443 should also be listening:

[root@m2 ~]# netstat -lntp |grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      31910/kube-apiserve
[root@m2 ~]# 

A successful kubeadm join run does not necessarily mean the node has joined the cluster successfully. Go back to m1 and check whether the nodes are in the Ready state:

[root@m1 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
m1     Ready      master   24h     v1.19.0
m2     NotReady   master   3m47s   v1.19.0
m3     NotReady   master   3m31s   v1.19.0
[root@m1 ~]# 

As you can see, m2 and m3 are both NotReady, which means they have not joined the cluster successfully. So I checked the logs with:

$ journalctl -f

It turned out to be the usual network problem (the firewall) preventing the pause image from being pulled:

8月 31 20:09:11 m2 kubelet[10122]: W0831 20:09:11.713935   10122 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
8月 31 20:09:12 m2 kubelet[10122]: E0831 20:09:12.442430   10122 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
8月 31 20:09:17 m2 kubelet[10122]: E0831 20:09:17.657880   10122 kuberuntime_manager.go:730] createPodSandbox for pod "calico-node-jksvg_kube-system(5b76b6d7-0bd9-4454-a674-2d2fa4f6f35e)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

So on m2 and m3 I copied the image-pulling script used earlier on m1 and ran it:

$ scp -r m1:/root/pullk8s.sh /root/pullk8s.sh
$ sh /root/pullk8s.sh

After the script finished and a few minutes had passed, checking the nodes on m1 again showed that all of them were Ready:

[root@m1 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m1     Ready    master   24h   v1.19.0
m2     Ready    master   14m   v1.19.0
m3     Ready    master   13m   v1.19.0
[root@m1 ~]# 

Join the worker nodes to the cluster

The steps are basically the same as in the previous section, except that they are run on s1 and s2 and you must use the worker variant of the kubeadm join command. So I'll just summarize briefly:

# Join the cluster with the previously saved join command
$ kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140
# Be patient for a while; you can watch the logs in the meantime
$ journalctl -f
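
If you no longer have the discovery hash, it can be recomputed on any master from the cluster CA certificate. A sketch:

# Recompute the value for --discovery-token-ca-cert-hash
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'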

Once all the worker nodes have joined, the HA Kubernetes cluster setup is complete. The node list now looks like this:

[root@m1 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
m1     Ready    master   24h     v1.19.0
m2     Ready    master   60m     v1.19.0
m3     Ready    master   60m     v1.19.0
s1     Ready    <none>   9m45s   v1.19.0
s2     Ready    <none>   119s    v1.19.0
[root@m1 ~]# 

And the pod list:

[root@m1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bc4fc6f5f-pdjls   1/1     Running   0          73m
kube-system   calico-node-8m8lz                          1/1     Running   0          9m43s
kube-system   calico-node-99xps                          1/1     Running   0          60m
kube-system   calico-node-f48zw                          1/1     Running   0          117s
kube-system   calico-node-jksvg                          1/1     Running   0          60m
kube-system   calico-node-tkdmv                          1/1     Running   0          73m
kube-system   coredns-f9fd979d6-kg4lf                    1/1     Running   0          24h
kube-system   coredns-f9fd979d6-t8xzj                    1/1     Running   0          24h
kube-system   etcd-m1                                    1/1     Running   1          24h
kube-system   kube-apiserver-m1                          1/1     Running   2          24h
kube-system   kube-controller-manager-m1                 1/1     Running   2          24h
kube-system   kube-proxy-22h6p                           1/1     Running   0          9m43s
kube-system   kube-proxy-khskm                           1/1     Running   0          60m
kube-system   kube-proxy-pkrgm                           1/1     Running   0          60m
kube-system   kube-proxy-rjgnw                           1/1     Running   1          24h
kube-system   kube-proxy-t4pxl                           1/1     Running   0          117s
kube-system   kube-scheduler-m1                          1/1     Running   2          24h
[root@m1 ~]# 

Cluster Availability Tests

Create an nginx DaemonSet

m1节点上创建nginx-ds.yml配置文件,内容如Z q ^ 7 E 9 j &下:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Then create the nginx DaemonSet with:

[root@m1 ~]# kubectl create -f nginx-ds.yml
service/nginx-ds created
daemonset.apps/nginx-ds created
[root@m1 ~]# 

Check IP connectivity

After a short wait, check that the Pods are running normally:

[root@m1 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP               NODE   NOMINATED NODE   READINESS GATES
nginx-ds-6nnpm   1/1     Running   0          2m32s   172.22.152.193   s1     <none>           <none>
nginx-ds-bvpqj   1/1     Running   0          2m32s   172.22.78.129    s2     <none>           <none>
[root@m1 ~]# 

On each node, try to ping the Pod IPs:

[root@s1 ~]# ping 172.22.152.193
PING 172.22.152.193 (172.22.152.193) 56(84) bytes of data.
64 bytes from 172.22.152.193: icmp_seq=1 ttl=63 time=0.269 ms
64 bytes from 172.22.152.193: icmp_seq=2 ttl=63 time=0.240 ms
64 bytes from 172.22.152.193: icmp_seq=3 ttl=63 time=0.228 ms
64 bytes from 172.22.152.193: icmp_seq=4 ttl=63 time=0.229 ms
^C
--- 172.22.152.193 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.228/0.241/0.269/0.022 ms
[root@s1 ~]# 

Then check the status of the Service:

[root@m1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d1h
nginx-ds     NodePort    10.105.139.228   <none>        80:31145/TCP   3m21s
[root@m1 ~]# 

Try accessing the service from each node; if it responds normally, the Service IP is reachable as well:

[root@m1 ~]# curl 10.105.139.228:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@m1 ~]# 

Then check the NodePort on each node. The NodePort for nginx-ds is 31145; if it can be accessed as shown below, NodePort works too:

[root@m3 ~]# curl 192.168.243.140:31145
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@m3 ~]# 

Check DNS availability

We need an nginx Pod for this test. First define a pod-nginx.yaml configuration file with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80

Then create the Pod from that configuration:

[root@m1 ~]# kubectl create -f pod-nginx.yaml
pod/nginx created
[root@m1 ~]# 

Enter the Pod with the following command:

[root@m1 ~]# kubectl exec nginx -i -t -- /bin/bash

Check the DNS configuration:

root@nginx:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
root@nginx:/# 

Next, test whether Service names resolve correctly. If nginx-ds resolves to its IP 10.105.139.228 as shown below, DNS is working:

root@nginx:/# ping nginx-ds
PING nginx-ds.default.svc.cluster.local (10.105.139.228): 48 data bytes
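
If you prefer not to exec into an application Pod, a throwaway busybox Pod works just as well for DNS checks (a convenience sketch, not part of the original steps):

# One-off DNS lookup from a temporary busybox pod
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nginx-ds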

High Availability Test

m1节点上执行如下命令k ^ l E A将其关机:

[root@m1 ~]# init 0

Then check whether the virtual IP has floated over to the m2 node:

[root@m2 ~]# ip a |grep 192.168.243.100
    inet 192.168.243.100/32 scope global ens32
[root@m2 ~]# 

Next, check whether kubectl can still interact with the cluster from m2 and m3. If it can, the cluster has a reasonable degree of high availability:

[root@m2 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
m1     NotReady   master   3d    v1.19.0
m2     Ready      master   16m   v1.19.0
m3     Ready      master   13m   v1.19.0
s1     Ready      <none>   2d    v1.19.0
s2     Ready      <none>   47h   v1.19.0
[root@m2 ~]# 

Deploy the Dashboard

The dashboard is a web UI provided by Kubernetes that simplifies operating and managing the cluster. Through the UI you can easily inspect all kinds of information, work with resources such as Pods and Services, and create new resources. The dashboard repository is here:

  • https://github.com/kubernetes/dashboard

Deploying the dashboard is also fairly simple. First define a dashboard-all.yaml configuration file with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30005
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Create the dashboard resources:

[root@m1 ~]# kubectl create -f dashboard-all.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@m1 ~]# 

Check the Deployment status:

[root@m1 ~]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           29s
[root@m1 ~]# 

Check the dashboard pods:

[root@m1 ~]# kubectl --namespace kubernetes-dashboard get pods -o wide |grep dashboard
dashboard-metrics-scraper-7b59f7d4df-q4jqj   1/1     Running   0          5m27s   172.22.152.198   s1     <none>           <none>
kubernetes-dashboard-5dbf55bd9d-nqvjz        1/1     Running   0          5m27s   172.22.202.17    m1     <none>           <none>
[root@m1 ~]# 

Check the dashboard service:

[root@m1 ~]# kubectl get services kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.104.217.178   <none>        443:30005/TCP   5m57s
[root@m1 ~]# 

Check that port 30005 is being listened on:

[root@m1 ~]# netstat -ntlp |grep 30005
tcp        0      0 0.0.0.0:30005      0.0.0.0:*     LISTEN      4085/kube-proxy
[root@m1 ~]# 

Access the dashboard

For cluster security, the dashboard only allows access over https since version 1.7. Because we exposed the service as a NodePort, it can be reached at https://NodeIP:NodePort. For example, with curl:

[root@m1 ~]# curl https://192.168.243.138:30005 -k
<!--
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Kubernetes Dashboard</title>
<link rel="icon"
type="image/png"
href="assets/images/kubernetes-logo.png" />
<meta name="viewport"
content="width=device-width">
<link rel="stylesheet" href="styles.988f26601cdcb14da469.css"></head>
<body>
<kd-root></kd-root>
<script src="runtime.ddfec48137b0abfd678a.js" defer></script><script src="polyfills-es5.d57fe778f4588e63cc5c.js" nomodule defer></script><script src="polyfills.49104fe38e0ae7955ebb.js" defer></script><script src="scripts.391d299173602e261418.js" defer></script><script src="main.b94e335c0d02b12e3a7b.js" defer></script></body>
</html>
[root@m1 ~]# 
  • Because the dashboard's certificate is self-signed, the -k flag is needed here so curl skips certificate verification for the https request

About custom certificates

By default the dashboard's certificate is auto-generated and self-signed, so it is not a trusted certificate. If you have a domain name and a proper certificate for it, you can replace the generated one and access the dashboard securely over that domain.

dashboard-all.yaml中增加dashboard启动参数,可以指定证书文件,其中证书文件是通过secret注进来的。

- --tls-cert-file
- dashboard.cer
- --tls-key-file
- dashboard.key
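
The referenced certificate files have to exist in the kubernetes-dashboard-certs secret that the Deployment mounts at /certs. A sketch of recreating that secret from your own certificate and key (dashboard.cer and dashboard.key are placeholders for your files):

# Recreate the dashboard certs secret from custom certificate files
$ kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
$ kubectl create secret generic kubernetes-dashboard-certs \
    --from-file=dashboard.cer --from-file=dashboard.key \
    -n kubernetes-dashboard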

Log in to the dashboard

The dashboard only supports token-based authentication by default, so if you want to use a KubeConfig file you need to embed a token in it. Here we log in with a token directly.

First create a service account:

[root@m1 ~]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@m1 ~]#

Create the cluster role binding:

[root@m1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@m1 ~]# 

Find the name of the dashboard-admin secret:

[root@m1 ~]# kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}'
dashboard-admin-token-ph7h2
[root@m1 ~]# 

Print the secret's token:

[root@m1 ~]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
[root@m1 ~]# kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6X l u eIkVn= T b k u 2 OaDc n *RYQXgySkFDOGdDMnhX~ 8 } 2 t * p -YXJWbkY2W5 y M wVczSDVKl 6 i o B % JeVJRaE5vQ0ozOG5PanMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy53 M & qpby9z| 5 ? L { Q ?ZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQ{ Q 7 n ^vc2Vjc r v CmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdGj L N ( n Q ; u k9rZW4tcGg3aDIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9-  X D - ! E @ 2zZXJ2aWNlLWFjY291bnQudWl% g s . akIjoiNjA1ZWY3OTAtOWY3OC00NDQzLTgwMDgtOWRiMjU1Y q f 3 | m ? CMjU0M y a O SThkIiw+ ^ t j d sic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.xAO3njShhTRkgNdq45nO7XNy242f8XVsL [ j & O p = g .-W4WBMui-Ts6ahdZECoNegvWjLDCEamB0UW72JeG67f2yjcWohANwfDCHobRYPkOhzrVghkdULbrCCGai_fe60Svwf_apSmlKP3UUdu16M4GxopaTlINZpJY_z5KJ4kLq66Y1rjAA6j9TI4Ue4EazJKKv0dciv6NsP28l7-nvUmhj93QZpKqY3PQ7vvcPXk_sB-jjSSNJ5ObWuGeDBGHgQMRI4F1XTWXJBYClIucsbx : A ) 2 i 8u6MzDA8yop9S7Ci8D00QSa0u3M_rqw-3UHtSxQee41uVVjIAv . B a G J N ( xSfnCEVayw F C G U EKDIbJzG3gc2AjqGqJhkQ
[root@m1 ~]# 

After obtaining the token, open https://192.168.243.138:30005 in a browser. Because the dashboard uses a self-signed certificate, the browser will show a warning; ignore it and click "Advanced" -> "Proceed":
(screenshot)

Then enter the token:
(screenshot)

After logging in successfully, the home page looks like this:
(screenshot)

There is not much more to say about the web UI itself; feel free to explore it on your own.