Deploying a k8s Cluster with kubeadm

Lab environment:
Master node: 192.168.1.10 (master)
Node1: 192.168.1.20 (node01)
Node2: 192.168.1.30 (node02)

Environment preparation:
Set the hostname of each of the three VMs, assign the corresponding IPs, and add all three to /etc/hosts for name resolution. Stop the firewall, flush the iptables rules, and disable SELinux. The clocks of the three machines must also be in sync, and swap must be disabled on all of them.
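The clock-sync requirement can be met with chrony (one option; ntpd works just as well) — run this on all three machines:

```shell
# Keep the clocks of all three machines in sync (chrony is one option; ntpd also works)
yum install -y chrony
systemctl enable --now chronyd
chronyc sources    # verify that a time source is reachable
```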

Here we pin the Kubernetes version to 1.15.0 and the Docker version to 18.09.0:
[root@localhost ~]# yum install -y docker-ce-18.09.0-3.el7 docker-ce-cli-18.09.0-3.el7 containerd.io-1.2.0-3.el7

[root@localhost ~]# hostnamectl set-hostname master //run on the master node
[root@localhost ~]# hostnamectl set-hostname node01 //run on the node01 node
[root@localhost ~]# hostnamectl set-hostname node02 //run on the node02 node
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@master ~]# iptables -F
[root@master ~]# iptables-save
[root@master ~]# vim /etc/selinux/config
SELINUX=disabled

#Disable swap on all three VMs
[root@master ~]# swapoff -a
[root@node01 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
[root@master ~]# free -h

[root@master ~]# vim /etc/hosts
192.168.1.10 master
192.168.1.20 node01
192.168.1.30 node02

#Set up passwordless SSH from the master to the nodes
#Press Enter three times to accept the defaults
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id root@node01
[root@master ~]# ssh-copy-id root@node02

#Make bridged traffic pass through iptables
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf //if this reports "No such file or directory", load the module first:
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf //both node machines need this as well
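Note that `modprobe br_netfilter` does not survive a reboot on stock CentOS 7. One way to persist it (a suggestion, using the standard systemd modules-load mechanism) is:

```shell
# Load br_netfilter automatically at boot so the sysctl settings keep applying
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```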

The base environment is now ready. Let's start with the master node.
[root@master ~]# cd /etc/yum.repos.d/

[root@master yum.repos.d]# vi docker-ce.repo
[docker-ce]
name=docker-ce
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable/
enabled=1
gpgcheck=0

[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0


[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node01:/etc/yum.repos.d/
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node02:/etc/yum.repos.d/
#Check the available Docker versions before installing
[root@master ~]# yum list docker-ce --showduplicates | sort -r
[root@master yum.repos.d]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl enable kubelet   

#Configure a Docker registry mirror (accelerator)
[root@master ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io


[root@master ~]# vi /etc/docker/daemon.json
{"registry-mirrors": ["http://f1361db2.m.daocloud.io"]}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests               //static Pod manifests directory
/etc/sysconfig/kubelet                  //configuration file
/etc/systemd/system/kubelet.service
/usr/bin/kubelet
The preparation is now done and we could start initializing, but because of network restrictions in mainland China we cannot pull images directly from Google's registry. Instead we pull them by hand from mirror repositories on Docker Hub and re-tag them as the k8s.gcr.io names kubeadm expects; a script makes this easy.
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker tag mirrorgooglecontainers/kube-proxy:v1.14.1  k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag mirrorgooglecontainers/etcd:3.3.10  k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1
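The pull/tag/rmi triples above follow a fixed pattern, so they can be generated by a small script instead of typed by hand. This sketch only prints the commands; review the output and pipe it to `sh` to actually run them:

```shell
#!/bin/sh
# Generate the docker pull/tag/rmi commands that fetch the control-plane
# images from Docker Hub mirrors and re-tag them under k8s.gcr.io.
gen_cmds() {
  ver=v1.14.1
  for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
    echo "docker pull mirrorgooglecontainers/$img:$ver"
    echo "docker tag mirrorgooglecontainers/$img:$ver k8s.gcr.io/$img:$ver"
    echo "docker rmi mirrorgooglecontainers/$img:$ver"
  done
  # pause and etcd come from the same mirror but carry their own tags
  for pair in pause:3.1 etcd:3.3.10; do
    echo "docker pull mirrorgooglecontainers/$pair"
    echo "docker tag mirrorgooglecontainers/$pair k8s.gcr.io/$pair"
    echo "docker rmi mirrorgooglecontainers/$pair"
  done
  # coredns lives under its own Docker Hub organization
  echo "docker pull coredns/coredns:1.3.1"
  echo "docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1"
  echo "docker rmi coredns/coredns:1.3.1"
}
gen_cmds           # print the commands for review
# gen_cmds | sh    # uncomment to execute them
```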
I have already downloaded these; all that's needed is to load the corresponding images.
![](https://s1.51cto.com/images/blog/202001/31/15ea2bf8ed379fb9b7e39b8b3f277809.png)
[root@master images]# systemctl enable kubelet
[root@master ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
![](https://s1.51cto.com/images/blog/202001/31/3ef96ff9c2d5f2e016ea0a6a7cfa974d.png)
![](https://s1.51cto.com/images/blog/202001/31/4d0482468b427e05a819e2f7d509cd28.png)
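After a successful `kubeadm init`, the kubeconfig has to be copied into place before `kubectl` works — these are the standard steps kubeadm itself prints at the end of init (shown in the screenshot output):

```shell
# Make the admin kubeconfig available to kubectl for the current user
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```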
**As you can see, the master is in the NotReady state. This is because the flannel add-on is still missing; without a Pod network the Pods cannot communicate with each other.
You can also check the health of the control-plane components.**
![](https://s1.51cto.com/images/blog/202001/31/917fff812a293acd25581ea3980b7c43.png)
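The checks shown in the screenshots can be reproduced with:

```shell
kubectl get nodes   # the master shows NotReady until the Pod network is up
kubectl get cs      # component health: scheduler, controller-manager, etcd
```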
Adding the network component (flannel)
flannel can be obtained from https://github.com/coreos/flannel
![](https://s1.51cto.com/images/blog/202001/31/790957863f23ecfd017932813056920c.png)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
**Seeing lots of objects created is not enough; you also need to confirm that flannel is actually up and running before the deployment truly counts as complete.**
![](https://s1.51cto.com/images/blog/202001/31/05643601871197e8bc429e970af2b43f.png)
![](https://s1.51cto.com/images/blog/202001/31/dd9c50dfe61096645ead9b89fd14d7dc.png)
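A sketch of that flannel status check (the flannel DaemonSet runs in the kube-system namespace):

```shell
kubectl get pods -n kube-system -o wide | grep flannel   # expect one kube-flannel pod per node, in the Running state
kubectl get nodes                                        # the master should switch to Ready once flannel is up
```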
[root@master ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   14m
kube-public   Active   14m
kube-system   Active   14m
That completes the master node. Next comes installing the components on each node and joining them to the cluster.
[root@node01 ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0
[root@node01 ~]# systemctl enable docker kubelet
Before joining the cluster, manually pull two images here as well; this makes the join go faster.
[root@node01 ~]# docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
[root@node01 ~]# docker pull mirrorgooglecontainers/pause:3.1
[root@node01 ~]# docker tag mirrorgooglecontainers/kube-proxy:v1.14.1  k8s.gcr.io/kube-proxy:v1.14.1
[root@node01 ~]# docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
[root@node01 ~]# docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
[root@node01 ~]# docker rmi mirrorgooglecontainers/pause:3.1
[root@node01 ~]# kubeadm join 192.168.1.10:6443 --token njus35.kw3hxkys3urmnuob --discovery-token-ca-cert-hash sha256:05761b73b571c18eebd6972fb70323cd3c4d8e0aa7514efa2680411310424184
![](https://s1.51cto.com/images/blog/202001/31/fe13516f4788917d10a04179bd0fa10a.png)
Wait a moment, then verify on the master node; the wait is for the flannel network to sync to the new node.
![](https://s1.51cto.com/images/blog/202001/31/2ea606191e5700bf75b1a5cb254bc1eb.png)
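On the master, the verification is simply:

```shell
kubectl get nodes   # node01 and node02 appear, turning Ready once flannel has synced
```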
> **How to install a specific kubernetes version: keeping the kubernetes version consistent mainly means downloading matching versions of every component, namely
> kube-proxy, kube-apiserver, kube-controller-manager and kube-scheduler.**
>
List the installed rpm packages:
yum list installed | grep kube
Remove the installed rpm packages:
yum remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 -y
> Install the pinned kubeadm version:
> yum install -y kubelet-1.12.1 kubeadm-1.12.1 kubectl-1.12.1
>
> Enable tab completion for the kubectl command-line tool:
> [root@k8s-master ~]# yum install -y bash-completion
> [root@k8s-master ~]# source /usr/share/bash-completion/bash_completion
> [root@k8s-master ~]# source <(kubectl completion bash)
> [root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
Set the number of spaces the Tab key inserts in vim:
[root@master ~]# vim .vimrc
![](https://s1.51cto.com/images/blog/202001/31/43f6afc0c6fffa5445b9f6d31b639f6a.png)
The settings take effect the next time vim starts (or run :source ~/.vimrc from inside vim).
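The screenshot above shows the .vimrc contents; a typical minimal version (an assumption — adjust the width to taste) looks like:

```
set tabstop=2      " a tab displays as 2 columns
set shiftwidth=2   " auto-indent uses 2 spaces
set expandtab      " insert spaces instead of tab characters
```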