Kubernetes v1.23.3 Binary Deployment

1. Component Versions and Configuration Strategy

1.1 Main Component Versions

Component    Version  Release date
kubernetes   v1.23.3  2022-01-26
etcd         v3.5.2   2022-02-01
cri-o        v1.23.0  2021-12-18
flannel      v0.16.3  2022-01-29
coredns      1.9.0    2022-02-10
cni-plugins  v1.0.1   2021-09-08

1.2 Main Configuration Strategy

kube-apiserver:

  • High availability via a node-local nginx layer-4 transparent proxy;
  • Insecure port 8080 and anonymous access disabled;
  • Serves https requests on secure port 5443;
  • Strict authentication and authorization policies (x509, token, RBAC);
  • Bootstrap token authentication enabled, supporting kubelet TLS bootstrapping;
  • Uses https to access kubelet and etcd, encrypting that traffic;

kube-controller-manager:

  • 3-node high availability;
  • Insecure port disabled; serves https requests on secure port 10257;
  • Uses a kubeconfig to access the apiserver's secure port;
  • Automatically approves kubelet certificate signing requests (CSRs); certificates are rotated automatically before expiry;
  • Each controller uses its own ServiceAccount to access the apiserver;

kube-scheduler:

  • 3-node high availability;
  • Uses a kubeconfig to access the apiserver's secure port;
  • Insecure port disabled; serves https requests on secure port 10259;

kubelet:

  • Uses dynamically created bootstrap tokens (as kubeadm does) rather than statically configuring them in the apiserver;
  • Uses the TLS bootstrap mechanism to automatically generate client and server certificates, rotated before expiry;
  • Main parameters configured in a KubeletConfiguration-type JSON file;
  • Read-only port disabled; serves https requests on secure port 10250, authenticating and authorizing each request and rejecting anonymous or unauthorized access;
  • Uses a kubeconfig to access the apiserver's secure port;

kube-proxy:

  • Uses a kubeconfig to access the apiserver's secure port;
  • Main parameters configured in a KubeProxyConfiguration-type JSON file;
  • Uses the ipvs proxy mode;

Cluster add-ons:

  • DNS: coredns, which offers better functionality and performance;

2. Initializing the System and Global Variables

2.1 Cluster Plan

master nodes:

  • k8s-master-1: 192.168.2.175
  • k8s-master-2: 192.168.2.176
  • k8s-master-3: 192.168.2.177

node nodes:

  • k8s-node-1: 192.168.2.185
  • k8s-node-2: 192.168.2.187
  • k8s-node-3: 192.168.3.62
  • k8s-node-4: 192.168.3.70

control / configuration-generation node:

  • qist: 192.168.0.151

working directory:

  • /opt

The three master machines co-host this document's etcd cluster, master components, and worker components.

Unless otherwise noted, the initialization steps in this document must be executed on all nodes.

2.2 kubelet and cri-o cgroup

  • Cgroup Driver: systemd
    Both kubelet and cri-o are configured to use the systemd cgroup driver.

2.3 Set the Hostname

hostnamectl set-hostname k8s-master-1 # replace k8s-master-1 with the current host's name

Log out and log back in as root; the new hostname is now in effect.

2.4 Establish Node Trust

Perform this step on the qist node only: it lets the root account log in to every node without a password.

ssh-keygen -t rsa
ssh-copy-id root@192.168.2.175
ssh-copy-id root@192.168.2.176
ssh-copy-id root@192.168.2.177
ssh-copy-id root@192.168.2.185
ssh-copy-id root@192.168.2.187
ssh-copy-id root@192.168.3.62
ssh-copy-id root@192.168.3.70
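The repeated ssh-copy-id calls can also be driven by a loop over the node list. In the sketch below, PUSH=echo makes it a dry run that only prints the commands; unset PUSH on the qist node to actually push keys:

```shell
# Push the root SSH key to every node in one loop.
# PUSH=echo makes this a dry run; set PUSH="" to actually copy keys.
PUSH=echo
NODES="192.168.2.175 192.168.2.176 192.168.2.177 192.168.2.185 192.168.2.187 192.168.3.62 192.168.3.70"
count=0
for ip in $NODES; do
  $PUSH ssh-copy-id "root@${ip}"
  count=$((count + 1))
done
echo "key pushed to ${count} nodes"
```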

2.5 Install Dependencies

yum install -y epel-release
yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git
  • This document's kube-proxy runs in ipvs mode; ipvsadm is the ipvs management tool;
  • The etcd cluster machines need synchronized clocks; chrony handles system time synchronization;

2.6 Disable the Firewall

Stop the firewall, clear its rules, and set the default forwarding policy:

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

2.7 Disable the swap Partition

Disable the swap partition, otherwise kubelet fails to start (alternatively, set the kubelet flag --fail-swap-on=false to skip the swap check):

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab 
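The sed expression above comments out every fstab line containing " swap ". A quick way to see it in action on a throwaway copy (the device names below are illustrative, not from this cluster):

```shell
# Demonstrate the swap-commenting sed on a throwaway fstab copy.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo   # only the swap line is now commented out
```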

2.8 Disable SELinux

Disable SELinux, otherwise kubelet may report Permission denied when mounting directories:

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

2.9 Tune Kernel Parameters

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.tcp_slow_start_after_idle=0
net.core.rmem_max=16777216
fs.inotify.max_user_watches=1048576
kernel.softlockup_all_cpu_backtrace=1
kernel.softlockup_panic=1
fs.file-max=2097152
fs.nr_open=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
vm.max_map_count=262144
net.core.netdev_max_backlog=16384
net.ipv4.tcp_wmem=4096 12582912 16777216
net.core.wmem_max=16777216
net.core.somaxconn=32768
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=8096
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
net.ipv4.tcp_rmem=4096 12582912 16777216
vm.swappiness=0
kernel.sysrq=1
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.tcp_max_tw_buckets=5000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_synack_retries=2
net.ipv6.conf.lo.disable_ipv6=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.all.forwarding=0
net.ipv4.ip_local_port_range=1024 65535
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_probes=10
net.ipv4.tcp_keepalive_intvl=30
net.nf_conntrack_max=25000000
net.netfilter.nf_conntrack_max=25000000
net.netfilter.nf_conntrack_tcp_timeout_established=180
net.netfilter.nf_conntrack_tcp_timeout_time_wait=120
net.netfilter.nf_conntrack_tcp_timeout_close_wait=60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait=12
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_orphan_retries=3
fs.may_detach_mounts=1
kernel.pid_max=4194303
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_fin_timeout=1
vm.min_free_kbytes=262144
kernel.msgmnb=65535
kernel.msgmax=65535
kernel.shmmax=68719476736
kernel.shmall=4294967296
kernel.core_uses_pid=1
net.ipv4.neigh.default.gc_thresh1=0
net.ipv4.neigh.default.gc_thresh2=4096
net.ipv4.neigh.default.gc_thresh3=8192
net.netfilter.nf_conntrack_tcp_timeout_close=3
net.ipv4.conf.all.route_localnet=1
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
  • tcp_tw_recycle is left disabled because it conflicts with NAT and can break connectivity;
  • On kernels older than 4.x, add fs.may_detach_mounts=1;

2.10 System Open File Limits

cat >> /etc/security/limits.conf <<EOF
 *       soft    nofile  655350
 *       hard    nofile  655350
 *       soft    nproc   655350
 *       hard    nproc   655350
 *       soft    core    unlimited
 *       hard    core    unlimited
EOF

CentOS 7 additionally requires:

sed -i 's/4096/655350/' /etc/security/limits.d/20-nproc.conf

2.11 Configure Kernel Modules to Auto-Load at Boot

  • Load the ipvs kernel modules
cat > /etc/modules-load.d/k8s-ipvs-modules.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
  • Load netfilter and related modules

On kernels older than 4.x, replace nf_conntrack with nf_conntrack_ipv4:

cat > /etc/modules-load.d/k8s-net-modules.conf <<EOF
br_netfilter
nf_conntrack
EOF
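The conf files above only take effect at the next boot. To load the same modules immediately, a loop like the following can be used; MODPROBE=echo keeps it a dry run, and setting MODPROBE=modprobe on a real node performs the actual loads:

```shell
# Load the modules from the two conf files right away, without rebooting.
# MODPROBE=echo keeps this a dry run; set MODPROBE=modprobe on a real node.
MODPROBE=echo
loaded=""
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh br_netfilter nf_conntrack; do
  $MODPROBE "$m" && loaded="$loaded $m"
done
echo "requested:${loaded}"
```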

2.12 Set the System Timezone

timedatectl set-timezone Asia/Shanghai

2.13 Configure Clock Synchronization

systemctl enable chronyd
systemctl start chronyd

Check the synchronization status:

timedatectl status

Output:

System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
  • System clock synchronized: yes means the clock is synchronized;
  • NTP service: active means the time-sync service is running;
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog 
systemctl restart crond

2.14 Stop Unneeded Services

systemctl stop postfix && systemctl disable postfix

2.15 Create Directories

Create the directories:

  • master component directories
# k8s directories
mkdir -p /apps/k8s/{bin,log,conf,ssl,config}
mkdir -p /apps/work/kubernetes/{manifests,kubelet}
mkdir -p /var/lib/kubelet
mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
mkdir -p /apps/k8s/ssl/{etcd,k8s}
# etcd directories
mkdir -p /apps/etcd/{bin,conf,data,ssl}
# etcd data-dir
mkdir -p /apps/etcd/data/default.etcd 
# etcd wal-dir
mkdir -p /apps/etcd/data/default.etcd/wal
  • node directories
mkdir -p /apps/k8s/{bin,log,conf,ssl}
mkdir -p /apps/work/kubernetes/{manifests,kubelet}
mkdir -p /var/lib/kubelet
mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
  • cri-o directories
mkdir -p /apps/crio/{run,etc,keys}
mkdir -p /apps/crio/containers/oci/hooks.d
mkdir -p /etc/containers
mkdir -p /var/lib/containers/storage
mkdir -p /run/containers/storage
mkdir -p /apps/crio/lib/containers/storage
mkdir -p /apps/crio/run/containers/storage

2.16 Bind-Mount Directories

  • Bind-mount the kubelet and cri-o data directories onto their default paths, for maximum compatibility with dependent components such as CSI plugins
cat >> /etc/fstab <<EOF
/apps/work/kubernetes/kubelet /var/lib/kubelet none defaults,bind,nofail 0 0
/apps/crio/lib/containers/storage /var/lib/containers/storage none defaults,bind,nofail 0 0
/apps/crio/run/containers/storage /run/containers/storage none defaults,bind,nofail 0 0
EOF
  • Verify the mounts are correct
mount -a
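Before relying on mount -a, the new fstab entries can be sanity-checked mechanically: every bind entry should have six fields, the fstype "none", and a "bind" option. A small sketch against a throwaway copy of the three lines above:

```shell
# Mechanical sanity check of the bind-mount fstab entries.
cat > /tmp/fstab.bind <<'EOF'
/apps/work/kubernetes/kubelet /var/lib/kubelet none defaults,bind,nofail 0 0
/apps/crio/lib/containers/storage /var/lib/containers/storage none defaults,bind,nofail 0 0
/apps/crio/run/containers/storage /run/containers/storage none defaults,bind,nofail 0 0
EOF
# Print the line number of any entry that is not a well-formed bind mount.
bad=$(awk 'NF != 6 || $3 != "none" || $4 !~ /bind/ {print NR}' /tmp/fstab.bind)
echo "malformed lines: ${bad:-none}"
```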

Reboot the machine:

sync
reboot

3 Create the CA Root Certificate and Key

To secure communication, kubernetes components use x509 certificates to encrypt and authenticate their traffic.

The CA (Certificate Authority) certificate is a self-signed root certificate used to sign all subsequently created certificates.

The CA certificate is shared by every node in the cluster; create it once and use it to sign all other certificates.

This document uses cfssl, CloudFlare's PKI toolkit, to create all certificates.

Note: unless otherwise specified, all operations in this document are performed on the qist node.

3.1 Install the cfssl Toolkit

mkdir -p /opt/k8s/bin
wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64
mv cfssl_1.4.1_linux_amd64 /opt/k8s/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
mv cfssljson_1.4.1_linux_amd64 /opt/k8s/bin/cfssljson
wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64
mv cfssl-certinfo_1.4.1_linux_amd64 /opt/k8s/bin/cfssl-certinfo
chmod +x /opt/k8s/bin/*
export PATH=/opt/k8s/bin:$PATH

3.2 Create the Configuration File

The CA config file defines the root certificate's usage scenarios (profiles) and concrete parameters (usages, expiry, server auth, client auth, encryption, and so on):

  • Create the etcd and k8s CA directories
mkdir -p /opt/k8s/cfssl/{etcd,k8s}
mkdir -p /opt/k8s/cfssl/pki/{etcd,k8s}
# Create the working directory
mkdir -p /opt/k8s/work
  • Generate the global config file
cd /opt/k8s/work
cat > /opt/k8s/cfssl/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
  • signing: the certificate can sign other certificates (the generated ca.pem contains CA=TRUE);
  • server auth: clients may use this certificate to verify certificates presented by servers;
  • client auth: servers may use this certificate to verify certificates presented by clients;
  • "expiry": "876000h": sets the certificate validity to 100 years;
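As a quick sanity check of that last bullet, 876000 hours does work out to 100 years:

```shell
# 876000h / 24h per day / 365 days per year = 100 years.
hours=876000
years=$((hours / 24 / 365))
echo "${years} years"
```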

3.3 Create Certificate Signing Request Files

  • etcd certificate signing request file
cd /opt/k8s/work
cat > /opt/k8s/cfssl/etcd/etcd-ca-csr.json <<EOF
{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "Qist"
    }
  ],
    "ca": {
        "expiry": "876000h"
    }
}
EOF
  • kubernetes certificate signing request file
cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "Qist"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
  • CN (Common Name): kube-apiserver extracts this field from a certificate as the requesting User Name; browsers use it to verify a website's legitimacy;
  • O (Organization): kube-apiserver extracts this field as the group (Group) the requesting user belongs to;
  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

Note:

  1. The CN, C, ST, L, O, OU combination in each certificate's csr file must be unique, otherwise you may hit the error PEER'S CERTIFICATE HAS AN INVALID SIGNATURE;
  2. Later csr files all use distinct CNs (with identical C, ST, L, O, OU) to keep the certificates distinguishable;
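Note 2 can be illustrated with a minimal check that two CSR files share their subject fields but differ in CN (the /tmp file names here are hypothetical scratch files, not part of the deployment):

```shell
# Two throwaway CSR files sharing C/ST/L/O/OU but with different CNs.
cat > /tmp/csr-a.json <<'EOF'
{
  "CN": "etcd",
  "names": [ { "C": "CN", "ST": "GuangDong", "L": "GuangZhou", "O": "k8s", "OU": "Qist" } ]
}
EOF
cat > /tmp/csr-b.json <<'EOF'
{
  "CN": "kubernetes",
  "names": [ { "C": "CN", "ST": "GuangDong", "L": "GuangZhou", "O": "k8s", "OU": "Qist" } ]
}
EOF
# Extract each file's CN and confirm they differ.
cn_a=$(sed -n 's/.*"CN": *"\([^"]*\)".*/\1/p' /tmp/csr-a.json)
cn_b=$(sed -n 's/.*"CN": *"\([^"]*\)".*/\1/p' /tmp/csr-b.json)
echo "CN of a: ${cn_a}; CN of b: ${cn_b}"
```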

3.4 Generate the CA Certificates and Private Keys

  • Generate the etcd CA certificate and private key
cd /opt/k8s/work
cfssl gencert -initca /opt/k8s/cfssl/etcd/etcd-ca-csr.json | \
      cfssljson -bare /opt/k8s/cfssl/pki/etcd/etcd-ca
root@Qist ~# ls /opt/k8s/cfssl/pki/etcd/*-ca*
/opt/k8s/cfssl/pki/etcd/etcd-ca-key.pem  /opt/k8s/cfssl/pki/etcd/etcd-ca.csr  /opt/k8s/cfssl/pki/etcd/etcd-ca.pem
  • Generate the kubernetes CA certificate and private key
cd /opt/k8s/work
cfssl gencert -initca /opt/k8s/cfssl/k8s/k8s-ca-csr.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-ca
root@Qist ~# ls  /opt/k8s/cfssl/pki/k8s/*-ca*
/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem  /opt/k8s/cfssl/pki/k8s/k8s-ca.csr  /opt/k8s/cfssl/pki/k8s/k8s-ca.pem

3.5 Distribute the CA Certificate Files

  • Distribute the etcd CA certificate
cd /opt/k8s/work
scp -r /opt/k8s/cfssl/pki/etcd/etcd-ca* root@192.168.2.175:/apps/etcd/ssl
scp -r /opt/k8s/cfssl/pki/etcd/etcd-ca* root@192.168.2.176:/apps/etcd/ssl
scp -r /opt/k8s/cfssl/pki/etcd/etcd-ca* root@192.168.2.177:/apps/etcd/ssl
  • Distribute the kubernetes CA certificate
# CA certificate k8s uses when connecting to etcd
scp -r /opt/k8s/cfssl/pki/etcd/etcd-ca.pem root@192.168.2.175:/apps/k8s/ssl/etcd/
scp -r /opt/k8s/cfssl/pki/etcd/etcd-ca.pem root@192.168.2.176:/apps/k8s/ssl/etcd/
scp -r /opt/k8s/cfssl/pki/etcd/etcd-ca.pem root@192.168.2.177:/apps/k8s/ssl/etcd/
# K8S cluster CA certificate
scp -r /opt/k8s/cfssl/pki/k8s/k8s-ca* root@192.168.2.175:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-ca* root@192.168.2.176:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-ca* root@192.168.2.177:/apps/k8s/ssl/k8s

4 Install and Configure kubectl

This section covers installing and configuring kubectl, the kubernetes command-line management tool.

Note:

  1. Unless otherwise specified, all operations in this document are performed on the qist node.
  2. This setup only needs to be done once; the generated kubeconfig file is universal and can be copied to ~/.kube/config on any machine that needs to run kubectl.

4.1 Download and Distribute the kubectl Binary

cd /opt/k8s/work
wget https://dl.k8s.io/v1.23.3/kubernetes-client-linux-amd64.tar.gz # work around any network restrictions yourself
tar -xzvf kubernetes-client-linux-amd64.tar.gz

Distribute to every node that will use kubectl:

cd /opt/k8s/work
cp -pdr kubernetes/client/bin/kubectl /bin/
root@Qist opt# which kubectl
/usr/bin/kubectl
root@Qist opt# /usr/bin/kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

4.2 Create the admin Certificate and Key

kubectl communicates securely with kube-apiserver over https; kube-apiserver authenticates and authorizes the certificate carried by each kubectl request.

Since kubectl will be used for cluster administration, create an admin certificate with the highest privileges here.

Create the certificate signing request:

cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-apiserver-admin.json << EOF
{
  "CN": "admin",
  "hosts": [""], 
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
  • O: system:masters: when kube-apiserver receives a request carrying this certificate, it adds the group (Group) identity system:masters to the request;
  • The predefined ClusterRoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants the highest privileges needed to operate the cluster;
  • This certificate is only used by kubectl as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
      -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
      -config=/opt/k8s/cfssl/ca-config.json \
      -profile=kubernetes \
      /opt/k8s/cfssl/k8s/k8s-apiserver-admin.json | \
     cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-apiserver-admin
ls /opt/k8s/cfssl/pki/k8s/k8s-apiserver-*
root@Qist tmp# ls /opt/k8s/cfssl/pki/k8s/k8s-apiserver-*
/opt/k8s/cfssl/pki/k8s/k8s-apiserver-admin-key.pem  /opt/k8s/cfssl/pki/k8s/k8s-apiserver-admin.csr  /opt/k8s/cfssl/pki/k8s/k8s-apiserver-admin.pem

4.3 Create the kubeconfig File

kubectl uses a kubeconfig file to access the apiserver; the file contains the kube-apiserver address and authentication material (CA certificate and client certificate):

mkdir -p /opt/k8s/kubeconfig
cd /opt/k8s/work
# Set cluster parameters
kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
    --embed-certs=true \
    --server=https://192.168.2.175:5443 \
    --kubeconfig=/opt/k8s/kubeconfig/admin.kubeconfig
# Set client authentication parameters
kubectl config set-credentials admin \
    --client-certificate=/opt/k8s/cfssl/pki/k8s/k8s-apiserver-admin.pem \
    --client-key=/opt/k8s/cfssl/pki/k8s/k8s-apiserver-admin-key.pem \
    --embed-certs=true \
    --kubeconfig=/opt/k8s/kubeconfig/admin.kubeconfig
# Set context parameters
kubectl config set-context kubernetes \
    --cluster=kubernetes \
    --user=admin \
    --namespace=kube-system \
    --kubeconfig=/opt/k8s/kubeconfig/admin.kubeconfig
# Set the default context
kubectl config use-context kubernetes --kubeconfig=/opt/k8s/kubeconfig/admin.kubeconfig
  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
  • --client-certificate and --client-key: the admin certificate and key just generated, used for https communication with kube-apiserver;
  • --embed-certs=true: embeds the contents of ca.pem and admin.pem into the generated kubeconfig file (otherwise only file paths are written, and the certificate files would have to be copied separately whenever the kubeconfig is moved to another machine, which is inconvenient);
  • --server: the kube-apiserver address; here it points to the service on the first node;

4.4 Distribute the kubeconfig File

Distribute to every node that runs kubectl commands:

mkdir -p ~/.kube
cd /opt/k8s/work
cp -pdr /opt/k8s/kubeconfig/admin.kubeconfig ~/.kube/config
# Or use an environment variable instead
export KUBECONFIG=/opt/k8s/kubeconfig/admin.kubeconfig

5 Deploy the etcd Cluster

etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and coordination (such as leader election and distributed locks).

kubernetes uses the etcd cluster to persist all API objects and runtime data.

This section deploys a three-node highly available etcd cluster:

  • download and distribute the etcd binaries;
  • create x509 certificates for the etcd nodes, encrypting traffic between clients (such as etcdctl) and the cluster as well as within the cluster;
  • create the etcd systemd unit file and configure the service parameters;
  • check the cluster's health.

etcd cluster node names and IPs:

  • k8s-master-1: 192.168.2.175
  • k8s-master-2: 192.168.2.176
  • k8s-master-3: 192.168.2.177

Note:

  1. Unless otherwise specified, all operations in this document are performed on the qist node.

5.1 Download and Distribute the etcd Binaries

Download the release tarball from etcd's releases page:

cd /opt/k8s/work
wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
tar -xvf etcd-v3.5.2-linux-amd64.tar.gz

Distribute the binaries to all cluster nodes:

cd /opt/k8s/work
scp -r  etcd-v3.5.2-linux-amd64/etcd* root@192.168.2.175:/apps/etcd/bin
scp -r  etcd-v3.5.2-linux-amd64/etcd* root@192.168.2.176:/apps/etcd/bin
scp -r  etcd-v3.5.2-linux-amd64/etcd* root@192.168.2.177:/apps/etcd/bin

5.2 Create etcd Certificates and Keys

  • Create the etcd server certificate

Create the certificate signing request:
cat > /opt/k8s/cfssl/etcd/etcd-server.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.2.175","192.168.2.176","192.168.2.177",
    "k8s-master-1","k8s-master-2","k8s-master-3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "k8s",
      "OU": "Qist"
    }
  ]
}
EOF

Generate the certificate and private key:

cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/etcd/etcd-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/etcd/etcd-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/etcd/etcd-server.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/etcd/etcd-server
  • Create per-node etcd peer certificates

192.168.2.175 node

cat > /opt/k8s/cfssl/etcd/k8s-master-1.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.2.175",
    "k8s-master-1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "Qist"
    }
  ]
}
EOF

Generate the certificate and private key:

cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/etcd/etcd-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/etcd/etcd-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/etcd/k8s-master-1.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/etcd/etcd-member-k8s-master-1

192.168.2.176 node

cat > /opt/k8s/cfssl/etcd/k8s-master-2.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.2.176",
    "k8s-master-2"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "Qist"
    }
  ]
}
EOF

Generate the certificate and private key:

cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/etcd/etcd-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/etcd/etcd-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/etcd/k8s-master-2.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/etcd/etcd-member-k8s-master-2

192.168.2.177 node

cat > /opt/k8s/cfssl/etcd/k8s-master-3.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.2.177",
    "k8s-master-3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "Qist"
    }
  ]
}
EOF

Generate the certificate and private key:

cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/etcd/etcd-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/etcd/etcd-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/etcd/k8s-master-3.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/etcd/etcd-member-k8s-master-3
  • Create the etcd client certificate

Create the certificate signing request:

cat > /opt/k8s/cfssl/etcd/etcd-client.json << EOF
{
  "CN": "client",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "k8s",
      "OU": "Qist"
    }
  ]
}
EOF

Generate the certificate and private key:

cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/etcd/etcd-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/etcd/etcd-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/etcd/etcd-client.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/etcd/etcd-client

Distribute the generated certificates and keys to each etcd node:

# Distribute the server certificate
scp -r /opt/k8s/cfssl/pki/etcd/etcd-server* root@192.168.2.175:/apps/etcd/ssl
scp -r /opt/k8s/cfssl/pki/etcd/etcd-server* root@192.168.2.176:/apps/etcd/ssl
scp -r /opt/k8s/cfssl/pki/etcd/etcd-server* root@192.168.2.177:/apps/etcd/ssl
# Distribute the 192.168.2.175 node certificate
scp -r /opt/k8s/cfssl/pki/etcd/etcd-member-k8s-master-1* root@192.168.2.175:/apps/etcd/ssl
# Distribute the 192.168.2.176 node certificate
scp -r /opt/k8s/cfssl/pki/etcd/etcd-member-k8s-master-2* root@192.168.2.176:/apps/etcd/ssl
# Distribute the 192.168.2.177 node certificate
scp -r /opt/k8s/cfssl/pki/etcd/etcd-member-k8s-master-3* root@192.168.2.177:/apps/etcd/ssl
# Distribute the client certificate to the K8S master nodes; kube-apiserver uses it to connect to the etcd cluster
scp -r /opt/k8s/cfssl/pki/etcd/etcd-client* root@192.168.2.175:/apps/k8s/ssl/etcd/
scp -r /opt/k8s/cfssl/pki/etcd/etcd-client* root@192.168.2.176:/apps/k8s/ssl/etcd/
scp -r /opt/k8s/cfssl/pki/etcd/etcd-client* root@192.168.2.177:/apps/k8s/ssl/etcd/

5.3 Create the etcd Startup Parameter Files

  • 192.168.2.175 node:
    Execute on k8s-master-1
cat > /apps/etcd/conf/etcd <<EOF
ETCD_OPTS="--name=k8s-master-1 \
           --data-dir=/apps/etcd/data/default.etcd \
           --wal-dir=/apps/etcd/data/default.etcd/wal \
           --listen-peer-urls=https://192.168.2.175:2380 \
           --listen-client-urls=https://192.168.2.175:2379,https://127.0.0.1:2379 \
           --advertise-client-urls=https://192.168.2.175:2379 \
           --initial-advertise-peer-urls=https://192.168.2.175:2380 \
           --initial-cluster=k8s-master-1=https://192.168.2.175:2380,k8s-master-2=https://192.168.2.176:2380,k8s-master-3=https://192.168.2.177:2380 \
           --initial-cluster-token=k8s-cluster \
           --initial-cluster-state=new \
           --heartbeat-interval=6000 \
           --election-timeout=30000 \
           --snapshot-count=5000 \
           --auto-compaction-retention=1 \
           --max-request-bytes=33554432 \
           --quota-backend-bytes=107374182400 \
           --trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem \
           --cert-file=/apps/etcd/ssl/etcd-server.pem \
           --key-file=/apps/etcd/ssl/etcd-server-key.pem \
           --peer-cert-file=/apps/etcd/ssl/etcd-member-k8s-master-1.pem \
           --peer-key-file=/apps/etcd/ssl/etcd-member-k8s-master-1-key.pem \
           --peer-client-cert-auth \
           --cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
           --enable-v2=true \
           --peer-trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem"
EOF
  • 192.168.2.176 node:
    Execute on k8s-master-2
cat > /apps/etcd/conf/etcd <<EOF
ETCD_OPTS="--name=k8s-master-2 \
           --data-dir=/apps/etcd/data/default.etcd \
           --wal-dir=/apps/etcd/data/default.etcd/wal \
           --listen-peer-urls=https://192.168.2.176:2380 \
           --listen-client-urls=https://192.168.2.176:2379,https://127.0.0.1:2379 \
           --advertise-client-urls=https://192.168.2.176:2379 \
           --initial-advertise-peer-urls=https://192.168.2.176:2380 \
           --initial-cluster=k8s-master-1=https://192.168.2.175:2380,k8s-master-2=https://192.168.2.176:2380,k8s-master-3=https://192.168.2.177:2380 \
           --initial-cluster-token=k8s-cluster \
           --initial-cluster-state=new \
           --heartbeat-interval=6000 \
           --election-timeout=30000 \
           --snapshot-count=5000 \
           --auto-compaction-retention=1 \
           --max-request-bytes=33554432 \
           --quota-backend-bytes=107374182400 \
           --trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem \
           --cert-file=/apps/etcd/ssl/etcd-server.pem \
           --key-file=/apps/etcd/ssl/etcd-server-key.pem \
           --peer-cert-file=/apps/etcd/ssl/etcd-member-k8s-master-2.pem \
           --peer-key-file=/apps/etcd/ssl/etcd-member-k8s-master-2-key.pem \
           --peer-client-cert-auth \
           --cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
           --enable-v2=true \
           --peer-trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem"
EOF
  • 192.168.2.177 node:
    Execute on k8s-master-3
cat > /apps/etcd/conf/etcd <<EOF
ETCD_OPTS="--name=k8s-master-3 \
           --data-dir=/apps/etcd/data/default.etcd \
           --wal-dir=/apps/etcd/data/default.etcd/wal \
           --listen-peer-urls=https://192.168.2.177:2380 \
           --listen-client-urls=https://192.168.2.177:2379,https://127.0.0.1:2379 \
           --advertise-client-urls=https://192.168.2.177:2379 \
           --initial-advertise-peer-urls=https://192.168.2.177:2380 \
           --initial-cluster=k8s-master-1=https://192.168.2.175:2380,k8s-master-2=https://192.168.2.176:2380,k8s-master-3=https://192.168.2.177:2380 \
           --initial-cluster-token=k8s-cluster \
           --initial-cluster-state=new \
           --heartbeat-interval=6000 \
           --election-timeout=30000 \
           --snapshot-count=5000 \
           --auto-compaction-retention=1 \
           --max-request-bytes=33554432 \
           --quota-backend-bytes=107374182400 \
           --trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem \
           --cert-file=/apps/etcd/ssl/etcd-server.pem \
           --key-file=/apps/etcd/ssl/etcd-server-key.pem \
           --peer-cert-file=/apps/etcd/ssl/etcd-member-k8s-master-3.pem \
           --peer-key-file=/apps/etcd/ssl/etcd-member-k8s-master-3-key.pem \
           --peer-client-cert-auth \
           --cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
           --enable-v2=true \
           --peer-trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem"
EOF

Create the etcd systemd unit file

Execute on k8s-master-1, k8s-master-2, and k8s-master-3:

cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
User=etcd
Group=etcd
WorkingDirectory=/apps/etcd/data/default.etcd
EnvironmentFile=-/apps/etcd/conf/etcd
ExecStart=/apps/etcd/bin/etcd \$ETCD_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
  • WorkingDirectory and --data-dir: the working and data directory (/apps/etcd/data/default.etcd here); it must be created before starting the service;
  • --wal-dir: the WAL directory; for better performance, put it on an SSD or on a different disk from --data-dir;
  • --name: the node name; when --initial-cluster-state is new, the --name value must appear in the --initial-cluster list;
  • --cert-file and --key-file: the certificate and key the etcd server uses when talking to clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file and --peer-key-file: the certificate and key etcd uses for peer communication;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;
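One other thing worth checking in the unit above: etcd expects --election-timeout to be a small multiple of --heartbeat-interval (etcd's tuning guidance suggests roughly 10x; this deployment uses 5x, which still satisfies the minimum). A quick ratio check:

```shell
# --heartbeat-interval=6000 (ms) and --election-timeout=30000 (ms), as set above.
heartbeat_ms=6000
election_ms=30000
ratio=$((election_ms / heartbeat_ms))
echo "election/heartbeat ratio: ${ratio}"
```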

5.4 Create the etcd Service User

Execute on k8s-master-1, k8s-master-2, and k8s-master-3:

  • Create the etcd user
useradd etcd -s /sbin/nologin -M
  • Give the etcd user ownership of the etcd directories
chown -R etcd:etcd /apps/etcd
[root@k8s-master-3 ~]# ls -la /apps/etcd/
total 4
drwxr-xr-x  7 etcd etcd   64 Feb 10 20:32 .
drwxr-xr-x. 8 root root   85 Aug 26 18:54 ..
drwxr-xr-x  3 etcd etcd  117 Feb 10 20:28 bin
drwxr-xr-x  2 etcd etcd   18 Feb 10 20:33 conf
drwxr-xr-x  3 etcd etcd   26 Aug 26 12:57 data
drwxr-xr-x  2 etcd etcd 4096 Aug 26 12:58 ssl

5.5 Start the etcd Service

Execute on k8s-master-1, k8s-master-2, and k8s-master-3:

# Reload systemd units
systemctl daemon-reload 
# Enable etcd at boot
systemctl enable etcd
# Start (or restart) etcd
systemctl restart etcd
  • The etcd data and working directories must be created before starting;
  • On first start, each etcd process waits for the other nodes to join the cluster, so systemctl start etcd blocks for a while; this is normal;

5.6 Check the Startup Result

Execute on k8s-master-1, k8s-master-2, and k8s-master-3:

systemctl status etcd|grep Active
[root@k8s-master-1 conf]# systemctl status etcd|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:37 CST; 4h 5min ago  
[root@k8s-master-2 ~]# systemctl status etcd|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:36 CST; 4h 4min ago
[root@k8s-master-3 ~]# systemctl status etcd|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:36 CST; 4h 5min ago

Make sure the status is active (running); otherwise check the logs to find out why:

journalctl -u etcd

5.7 Verify Service Status

After the etcd cluster is deployed, run the following on any etcd node:

# Configure environment variables
export ETCDCTL_API=3
export ENDPOINTS=https://192.168.2.175:2379,https://192.168.2.176:2379,https://192.168.2.177:2379
alias etcdctl='/apps/etcd/bin/etcdctl --endpoints=${ENDPOINTS} --cacert=/apps/etcd/ssl/etcd-ca.pem --cert=/apps/etcd/ssl/etcd-client.pem   --key=/apps/etcd/ssl/etcd-client-key.pem'
  • etcd/etcdctl 3.5.2 enables the V3 API by default, so setting ETCDCTL_API=3 before running etcdctl is no longer strictly required;
  • Since K8S 1.13, the v2 etcd API is no longer supported;
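The ENDPOINTS string can also be assembled from the node list instead of typed by hand:

```shell
# Build the comma-separated ENDPOINTS string from the etcd node IPs.
NODES="192.168.2.175 192.168.2.176 192.168.2.177"
ENDPOINTS=""
for ip in $NODES; do
  # Prepend a comma only when ENDPOINTS is already non-empty.
  ENDPOINTS="${ENDPOINTS:+${ENDPOINTS},}https://${ip}:2379"
done
echo "$ENDPOINTS"
```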

Expected output:

[root@k8s-master-1 etcd]# etcdctl endpoint health
https://192.168.2.176:2379 is healthy: successfully committed proposal: took = 18.738902ms
https://192.168.2.175:2379 is healthy: successfully committed proposal: took = 18.252524ms
https://192.168.2.177:2379 is healthy: successfully committed proposal: took = 30.017655ms

When every endpoint reports healthy, the cluster is working normally.

5.8 Check the Current Leader

etcdctl  -w table endpoint status

Output:

+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.2.175:2379 | 1d52723774170a71 |   3.5.2 |   14 MB |     false |      false |        26 |   48406910 |           48406908 |        |
| https://192.168.2.176:2379 | f378cb31036d611a |   3.5.2 |   14 MB |     false |      false |        26 |   48406910 |           48406910 |        |
| https://192.168.2.177:2379 | 81ecb1674fd0ce57 |   3.5.2 |   14 MB |      true |      false |        26 |   48406910 |           48406910 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  • As shown, the current leader is 192.168.2.177.

6 Download and Distribute the kubernetes Server Binary Package

The kubernetes master nodes run the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

kube-apiserver, kube-scheduler, and kube-controller-manager all run in multi-instance mode:

  1. kube-scheduler and kube-controller-manager automatically elect one leader instance while the others block; when the leader dies, a new one is elected, keeping the service available;
  2. kube-apiserver is stateless and can be accessed through a kube-nginx proxy (see "apiserver high availability"), keeping the service available;

Note: unless otherwise specified, all operations in this document are performed on the qist node.

6.1 Download the Latest Binaries

Download and unpack the binary tar file from the CHANGELOG page:

cd /opt/k8s/work
wget https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz  # work around any network restrictions yourself
tar -xzvf kubernetes-server-linux-amd64.tar.gz

Copy the binaries to all master nodes:

cd /opt/k8s/work
scp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubelet} root@192.168.2.175:/apps/k8s/bin/
scp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubelet} root@192.168.2.176:/apps/k8s/bin/
scp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubelet} root@192.168.2.177:/apps/k8s/bin/

将二进制文件拷贝到所有 node 节点:

cd /opt/k8s/work
scp -r kubernetes/server/bin/{kube-proxy,kubelet} root@192.168.2.187:/apps/k8s/bin/
scp -r kubernetes/server/bin/{kube-proxy,kubelet} root@192.168.2.185:/apps/k8s/bin/
scp -r kubernetes/server/bin/{kube-proxy,kubelet} root@192.168.3.62:/apps/k8s/bin/
scp -r kubernetes/server/bin/{kube-proxy,kubelet} root@192.168.3.70:/apps/k8s/bin/
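逐台手敲 scp 容易遗漏节点,也可以用循环先生成分发脚本、人工确认后再执行(以下为示意,节点 IP 为本文的 node 节点列表,脚本名 dist-node-bin.sh 为假设):

```shell
# 把分发命令写入脚本,确认无误后再执行,避免循环直接执行时误操作
NODE_IPS="192.168.2.187 192.168.2.185 192.168.3.62 192.168.3.70"
for ip in ${NODE_IPS}; do
  echo "scp -r kubernetes/server/bin/{kube-proxy,kubelet} root@${ip}:/apps/k8s/bin/"
done > dist-node-bin.sh
# 语法检查
sh -n dist-node-bin.sh && echo "dist-node-bin.sh 生成完成"
```

确认脚本内容后执行 bash dist-node-bin.sh 即可(花括号展开依赖 bash);master 节点的分发同理,替换 IP 列表和二进制列表即可。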

7 apiserver 高可用

注意:如果没有特殊指明,本文档的所有操作均在 qist 节点上执行。

7.1 高可用选型

  • ipvs+keepalived
  • nginx+keepalived
  • haproxy+keepalived
  • 每个节点 kubelet 启动静态 pod 运行 nginx/haproxy

    本文档选择每个节点 kubelet 启动静态 pod 运行 nginx/haproxy

7.2 构建nginx或者haproxy镜像

  • nginx dockerfile
  • haproxy dockerfile

    #构建
    docker build -t imagename .

7.3 生成 kubelet 静态启动 pod yaml

mkdir -p /opt/k8s/work/yaml
cd /opt/k8s/work/yaml
cat >/opt/k8s/work/yaml/kube-ha-proxy.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver-ha-proxy
    tier: control-plane
  annotations:
    prometheus.io/port: "8404"
    prometheus.io/scrape: "true"
  name: kube-apiserver-ha-proxy
  namespace: kube-system
spec:
  containers:
  - args:
    - "CP_HOSTS=192.168.2.175,192.168.2.176,192.168.2.177"
    image: docker.io/juestnow/nginx-proxy:1.21.0
    imagePullPolicy: IfNotPresent
    name: kube-apiserver-ha-proxy
    env:
    - name: CPU_NUM
      value: "4"
    - name: BACKEND_PORT
      value: "5443"
    - name: HOST_PORT
      value: "6443"
    - name: CP_HOSTS
      value: "192.168.2.175,192.168.2.176,192.168.2.177"
  hostNetwork: true
  priorityClassName: system-cluster-critical
status: {}
EOF

参数说明:

  • CPU_NUM:nginx 使用的 cpu 核数
  • BACKEND_PORT:后端 kube-apiserver 监听端口
  • HOST_PORT:负载均衡器监听端口
  • CP_HOSTS:kube-apiserver 服务 IP 地址列表
  • metrics 端口:8404,prometheus 拉取数据使用

已有镜像:

  • nginx 镜像:docker.io/juestnow/nginx-proxy:1.21.0
  • haproxy 镜像:docker.io/juestnow/haproxy-proxy:2.4.0

分发kube-ha-proxy.yaml 到所有节点

cd /opt/k8s/work/yaml/
# server 节点
scp kube-ha-proxy.yaml root@192.168.2.175:/apps/work/kubernetes/manifests/
scp kube-ha-proxy.yaml root@192.168.2.176:/apps/work/kubernetes/manifests/
scp kube-ha-proxy.yaml root@192.168.2.177:/apps/work/kubernetes/manifests/
# node 节点
scp kube-ha-proxy.yaml root@192.168.2.187:/apps/work/kubernetes/manifests/
scp kube-ha-proxy.yaml root@192.168.2.185:/apps/work/kubernetes/manifests/
scp kube-ha-proxy.yaml root@192.168.3.62:/apps/work/kubernetes/manifests/
scp kube-ha-proxy.yaml root@192.168.3.70:/apps/work/kubernetes/manifests/

8 runtime组件

runtime组件:docker containerd cri-o

本文档选择cri-o为runtime组件

8.1 部署 cri-o 组件

cri-o 实现了 kubernetes 的 Container Runtime Interface (CRI) 接口,提供镜像管理、容器管理等容器运行时核心功能,相比 docker 更加简单、健壮和可移植。

containerd 的 cadvisor 接口没有 pod 网络指标,不能直观地监控 pod 的网络使用情况,所以本文选择 cri-o。

注意:

如果没有特殊指明,本文档的所有操作均在 qist 节点上执行。

8.2 下载二进制文件

下载二进制文件:

cd /opt/k8s/work
wget https://storage.googleapis.com/cri-o/artifacts/cri-o.amd64.9b7f5ae815c22a1d754abfbc2890d8d4c10e240d.tar.gz

RELEASES页面

解压压缩包:

tar -xvf cri-o.amd64.9b7f5ae815c22a1d754abfbc2890d8d4c10e240d.tar.gz

8.3 修改配置文件

cri-o 配置文件生成:

cd cri-o/etc
cat > crio.conf  <<EOF
[crio]
root = "/var/lib/containers/storage"
runroot = "/var/run/containers/storage"
log_dir = "/var/log/crio/pods"
version_file = "/var/run/crio/version"
version_file_persist = "/var/lib/crio/version"
[crio.api]
listen = "/var/run/crio/crio.sock"
stream_address = "127.0.0.1"
stream_port = "0"
stream_enable_tls = false
stream_tls_cert = ""
stream_tls_key = ""
stream_tls_ca = ""
grpc_max_send_msg_size = 16777216
grpc_max_recv_msg_size = 16777216
[crio.runtime]
default_ulimits = [
  "nofile=65535:65535",
  "nproc=65535:65535",
  "core=-1:-1"
]
default_runtime = "crun"
no_pivot = false
decryption_keys_path = "/apps/crio/keys/"
conmon = "/apps/crio/bin/conmon"
conmon_cgroup = "system.slice"
conmon_env = [
        "PATH=/apps/crio/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
]
default_env = [
]
selinux = false
seccomp_profile = ""
apparmor_profile = "crio-default"
cgroup_manager = "systemd"
default_capabilities = [
    "CHOWN",
    "MKNOD",
    "DAC_OVERRIDE",
    "NET_ADMIN",
    "NET_RAW",
    "SYS_CHROOT",
    "FSETID",
    "FOWNER",
    "SETGID",
    "SETUID",
    "SETPCAP",
    "NET_BIND_SERVICE",
    "KILL",
]
default_sysctls = [
]
additional_devices = [
]
hooks_dir = [
        "/apps/crio/containers/oci/hooks.d",
]
default_mounts = [
]
pids_limit = 102400
log_size_max = -1
log_to_journald = false
container_exits_dir = "/apps/crio/run/crio/exits"
container_attach_socket_dir = "/var/run/crio"
bind_mount_prefix = ""
read_only = false
log_level = "info"
log_filter = ""
uid_mappings = ""
gid_mappings = ""
ctr_stop_timeout = 30
manage_ns_lifecycle = true
namespaces_dir = "/apps/crio/run"
pinns_path = "/apps/crio/bin/pinns"
[crio.runtime.runtimes.crun]
runtime_path = "/apps/crio/bin/crun"
runtime_type = "oci"
runtime_root = "/apps/crio/run/crun"
allowed_annotations = [
    "io.containers.trace-syscall",
]
[crio.image]
default_transport = "docker://"
global_auth_file = ""
pause_image = "docker.io/juestnow/pause:3.5"
pause_image_auth_file = ""
pause_command = "/pause"
signature_policy = ""
image_volumes = "mkdir"
[crio.network]
network_dir = "/etc/cni/net.d"
plugin_dirs = [
        "/opt/cni/bin",
]
[crio.metrics]
enable_metrics = false
metrics_port = 9090
EOF

参数说明:

  • root:容器镜像存放目录;
  • runroot:容器运行目录;
  • log_dir:容器日志默认存放目录;kubelet 指定了日志目录时,则存放到 kubelet 所指定的目录;
  • default_runtime:指定默认运行时;
  • conmon:conmon 二进制文件的路径,用于监控 OCI 运行时;
  • conmon_env:conmon 运行时的环境变量;
  • hooks_dir:OCI hooks 目录;
  • container_exits_dir:conmon 写入容器退出文件的目录路径;
  • namespaces_dir:跟踪命名空间状态的目录,仅在 manage_ns_lifecycle 为 true 时使用;
  • pinns_path:查找 pinns 二进制文件的路径,管理命名空间生命周期所必需;
  • runtime_path:运行时可执行文件的绝对路径;
  • runtime_root:存放容器的根目录;
  • pause_image:pause 镜像路径;
  • network_dir:cni 配置文件路径;
  • plugin_dirs:cni 二进制文件存放路径;
  • 官网文档
  • default_runtime:使用 crun
  • 运行路径:/apps/crio,请根据自己环境修改

    cri-o 启动其它所需配置文件生成

    
    cd /opt/k8s/work/cri-o
    mkdir containers
    cd containers
    cat > policy.json <<EOF
    {
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports":
        {
            "docker-daemon":
                {
                    "": [{"type":"insecureAcceptAnything"}]
                }
        }
    }
    EOF
    cat >registries.conf <<EOF
    # This is a system-wide configuration file used to
    # keep track of registries for various container backends.
    # It adheres to TOML format and does not support recursive
    # lists of registries.

# The default location for this configuration file is /etc/containers/registries.conf.
#
# The only valid categories are: 'registries.search', 'registries.insecure',
# and 'registries.block'.
[registries.search]
registries = ['registry.access.redhat.com', 'docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.centos.org']

# If you need to access insecure registries, add the registry's fully-qualified name.
# An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
[registries.insecure]
registries = []

# If you need to block pull access from a registry, uncomment the section below
# and add the registries fully-qualified name.
#
# Docker only
[registries.block]
registries = []
EOF

8.4 创建 cri-o systemd unit 文件

cd /opt/k8s/work
cat >crio.service <<EOF
[Unit]
Description=OCI-based implementation of Kubernetes Container Runtime Interface
Documentation=https://github.com/cri-o/cri-o
[Service]
Type=notify
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/apps/crio/bin/crio --config /apps/crio/etc/crio.conf --log-level info
Restart=on-failure
RestartSec=5
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

8.5 分发文件

分发二进制文件及配置文件:

 cd /opt/k8s/work/cri-o
 scp -r {bin,etc} root@192.168.2.175:/apps/crio
 scp -r {bin,etc} root@192.168.2.176:/apps/crio
 scp -r {bin,etc} root@192.168.2.177:/apps/crio
 scp -r {bin,etc} root@192.168.2.185:/apps/crio
 scp -r {bin,etc} root@192.168.2.187:/apps/crio
 scp -r {bin,etc} root@192.168.3.62:/apps/crio
 scp -r {bin,etc} root@192.168.3.70:/apps/crio

分发其它配置文件:

 cd /opt/k8s/work/cri-o
 scp -r containers root@192.168.2.175:/etc/
 scp -r containers root@192.168.2.176:/etc/
 scp -r containers root@192.168.2.177:/etc/
 scp -r containers root@192.168.2.185:/etc/
 scp -r containers root@192.168.2.187:/etc/
 scp -r containers root@192.168.3.62:/etc/
 scp -r containers root@192.168.3.70:/etc/

分发启动文件:

 cd /opt/k8s/work
 scp crio.service root@192.168.2.175:/usr/lib/systemd/system/crio.service
 scp crio.service root@192.168.2.176:/usr/lib/systemd/system/crio.service
 scp crio.service root@192.168.2.177:/usr/lib/systemd/system/crio.service
 scp crio.service root@192.168.2.185:/usr/lib/systemd/system/crio.service
 scp crio.service root@192.168.2.187:/usr/lib/systemd/system/crio.service
 scp crio.service root@192.168.3.62:/usr/lib/systemd/system/crio.service
 scp crio.service root@192.168.3.70:/usr/lib/systemd/system/crio.service

8.6 启动cri-o 服务

 # 全局刷新service
systemctl daemon-reload 
# 设置cri-o开机启动
systemctl enable crio
#重启cri-o
systemctl restart crio

8.7 检查启动结果

所有节点执行

systemctl status crio|grep Active
[root@k8s-master-3 bin]# systemctl status crio|grep Active
   Active: active (running) since Fri 2022-02-11 13:48:39 CST; 3 days ago
[root@k8s-master-2 ~]# systemctl status crio|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:31 CST; 3 days ago
[root@k8s-master-1 ~]# systemctl status crio|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:30 CST; 3 days ago
# 请自行全部节点检查

确保状态为 active (running),否则查看日志,确认原因:

journalctl -u crio

8.8 创建和分发 crictl 配置文件

crictl 是兼容 CRI 容器运行时的命令行工具,提供类似 docker 命令的功能。具体可参考官方文档。

cd /opt/k8s/work
cat << EOF | sudo tee crictl.yaml
runtime-endpoint: "unix:///var/run/crio/crio.sock"
image-endpoint: "unix:///var/run/crio/crio.sock"
timeout: 10
debug: false
pull-image-on-create: true
disable-pull-on-run: false
EOF

分发到所有节点:

cd /opt/k8s/work
scp crictl.yaml root@192.168.2.175:/etc/crictl.yaml
scp crictl.yaml root@192.168.2.176:/etc/crictl.yaml
scp crictl.yaml root@192.168.2.177:/etc/crictl.yaml
scp crictl.yaml root@192.168.2.185:/etc/crictl.yaml
scp crictl.yaml root@192.168.2.187:/etc/crictl.yaml
scp crictl.yaml root@192.168.3.62:/etc/crictl.yaml
scp crictl.yaml root@192.168.3.70:/etc/crictl.yaml

8.9 验证cri-o是否能正常访问

# 查询镜像
crictl images
# pull 镜像
crictl pull docker.io/library/busybox:1.24
# 查看容器运行状态
crictl ps -a

9 cni plugins 部署

cni-plugins:容器运行时配置 pod 网络所需的插件集

9.1 下载二进制文件

下载页面

cd /opt/k8s/work
mkdir -p cni/bin
cd cni
wget https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz

9.2 解压及分发二进制文件

注意:如果没有特殊指明,本文档的所有操作均在 qist 节点上执行。

解压二进制:

cd /opt/k8s/work/cni
tar -xvf cni-plugins-linux-amd64-v1.0.1.tgz  -C bin

分发文件到所有节点:

cd /opt/k8s/work/cni
scp -r bin root@192.168.2.175:/opt/
scp -r bin root@192.168.2.176:/opt/
scp -r bin root@192.168.2.177:/opt/
scp -r bin root@192.168.2.185:/opt/
scp -r bin root@192.168.2.187:/opt/
scp -r bin root@192.168.3.62:/opt/
scp -r bin root@192.168.3.70:/opt/

创建 cni 配置文件目录:

ssh root@192.168.2.175 mkdir -p /etc/cni/net.d
ssh root@192.168.2.176 mkdir -p /etc/cni/net.d
ssh root@192.168.2.177 mkdir -p /etc/cni/net.d
ssh root@192.168.2.185 mkdir -p /etc/cni/net.d
ssh root@192.168.2.187 mkdir -p /etc/cni/net.d
ssh root@192.168.3.62 mkdir -p /etc/cni/net.d
ssh root@192.168.3.70 mkdir -p /etc/cni/net.d

10 部署 kube-apiserver 集群

本文档讲解部署一个三实例 kube-apiserver 集群的步骤。

集群规划:
服务网段:10.66.0.0/16
Pod 网段:10.80.0.0/12
集群域名:cluster.local

注意:如果没有特殊指明,本文档的所有操作均在 qist 节点上执行。

10.1 创建kube-apiserver 证书

创建证书签名请求:

cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-apiserver.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "192.168.2.175","192.168.2.176","192.168.2.177",
    "10.66.0.1", 
    "192.168.2.175","127.0.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"    
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "Qist"
    }
  ]
}
EOF
  • hosts 字段指定授权使用该证书的 IP 和域名列表,这里列出了 master 节点 IP、kubernetes 服务的 IP 和域名;
    • 10.66.0.1:kube-apiserver 的 service ip,一般是 --service-cluster-ip-range 网段的第一个 ip;
    • "192.168.2.175","192.168.2.176","192.168.2.177":master 节点 IP;
    • "192.168.2.175","127.0.0.1":192.168.2.175 为 vip,方便客户端访问;本地 127 IP 用于访问 kube-ha-proxy;
    • "kubernetes.default.svc.cluster.local":全局域名;cluster.local 可以换成其它域。

生成 Kubernetes API Server 证书和私钥

cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/k8s/k8s-apiserver.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-server
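签完证书后,建议用 openssl 确认证书的 SAN 列表确实包含上面 hosts 字段中的所有 IP 和域名。下面先生成一张带 SAN 的临时自签证书演示检查方法(/tmp/demo.pem 等路径和 SAN 均为示例,实际应检查 /opt/k8s/cfssl/pki/k8s/k8s-server.pem;-addext 需要 openssl 1.1.1 及以上):

```shell
# 生成一张带 SAN 的临时自签证书作演示
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.pem \
  -days 1 -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:192.168.2.175,IP:10.66.0.1,DNS:kubernetes.default.svc.cluster.local"
# 查看证书的 SAN 扩展
openssl x509 -in /tmp/demo.pem -noout -ext subjectAltName
```

对实际证书执行第二条命令,确认输出包含全部 master IP、vip 以及 kubernetes 相关域名即可。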

10.2 创建加密配置文件

# 生成 EncryptionConfig 所需的加密 key
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cd /opt/k8s/work
mkdir config
cat > config/encryption-config.yaml << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
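aescbc 要求 secret 为 32 字节(AES-256)密钥的 base64 编码;写入配置前可以验证一下生成的 key 解码后长度正确:

```shell
# 生成 key 并确认 base64 解码后恰为 32 字节
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
echo -n "${ENCRYPTION_KEY}" | base64 -d | wc -c   # 应输出 32
```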

10.3 创建 Kubernetes webhook 证书

创建证书签名请求:

cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/aggregator.json << EOF
{
  "CN": "aggregator",
  "hosts": [""], 
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "Qist"
    }
  ]
}
EOF
  • CN 名称需要位于 kube-apiserver 的 --requestheader-allowed-names 参数中,否则后续访问 metrics 时会提示权限不足。

生成 Kubernetes webhook 证书和私钥

cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/k8s/aggregator.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/k8s/aggregator

10.4 创建 kube-apiserver 配置文件

  • 192.168.2.175节点:
    k8s-master-1 节点上执行
cat >/apps/k8s/conf/kube-apiserver <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
        --bind-address=192.168.2.175 \
        --advertise-address=192.168.2.175 \
        --secure-port=5443 \
        --insecure-port=0 \
        --service-cluster-ip-range=10.66.0.0/16 \
        --service-node-port-range=30000-65535 \
        --etcd-cafile=/apps/k8s/ssl/etcd/etcd-ca.pem \
        --etcd-certfile=/apps/k8s/ssl/etcd/etcd-client.pem \
        --etcd-keyfile=/apps/k8s/ssl/etcd/etcd-client-key.pem \
        --etcd-prefix=/registry \
        --etcd-servers=https://192.168.2.175:2379,https://192.168.2.176:2379,https://192.168.2.177:2379 \
        --client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --tls-cert-file=/apps/k8s/ssl/k8s/k8s-server.pem \
        --tls-private-key-file=/apps/k8s/ssl/k8s/k8s-server-key.pem \
        --kubelet-client-certificate=/apps/k8s/ssl/k8s/k8s-server.pem \
        --kubelet-client-key=/apps/k8s/ssl/k8s/k8s-server-key.pem \
        --service-account-key-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --proxy-client-cert-file=/apps/k8s/ssl/k8s/aggregator.pem \
        --proxy-client-key-file=/apps/k8s/ssl/k8s/aggregator-key.pem \
        --service-account-issuer=https://kubernetes.default.svc.cluster.local \
        --service-account-signing-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
        --requestheader-allowed-names=aggregator \
        --requestheader-group-headers=X-Remote-Group \
        --requestheader-extra-headers-prefix=X-Remote-Extra- \
        --requestheader-username-headers=X-Remote-User \
        --enable-aggregator-routing=true \
        --anonymous-auth=false \
        --experimental-encryption-provider-config=/apps/k8s/config/encryption-config.yaml \
        --enable-admission-plugins=DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,NamespaceExists,NamespaceLifecycle,NodeRestriction,PodNodeSelector,PersistentVolumeClaimResize,PodTolerationRestriction,ResourceQuota,ServiceAccount,StorageObjectInUseProtection,MutatingAdmissionWebhook,ValidatingAdmissionWebhook \
        --disable-admission-plugins=ExtendedResourceToleration,ImagePolicyWebhook,LimitPodHardAntiAffinityTopology,NamespaceAutoProvision,Priority,EventRateLimit,PodSecurityPolicy \
        --cors-allowed-origins=.* \
        --enable-swagger-ui \
        --runtime-config=api/all=true \
        --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
        --authorization-mode=Node,RBAC \
        --allow-privileged=true \
        --apiserver-count=3 \
        --audit-log-maxage=30 \
        --audit-log-maxbackup=3 \
        --audit-log-maxsize=100 \
        --default-not-ready-toleration-seconds=30 \
        --default-unreachable-toleration-seconds=30 \
        --audit-log-truncate-enabled \
        --audit-log-path=/apps/k8s/log/api-server-audit.log \
        --profiling \
        --http2-max-streams-per-connection=10000 \
        --event-ttl=1h \
        --enable-bootstrap-token-auth=true \
        --alsologtostderr=true \
        --log-dir=/apps/k8s/log \
        --v=2 \
        --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
        --endpoint-reconciler-type=lease \
        --max-mutating-requests-inflight=500 \
        --max-requests-inflight=1500 \
        --target-ram-mb=300"
EOF
  • 192.168.2.176节点:
    k8s-master-2 节点上执行
cat >/apps/k8s/conf/kube-apiserver <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
        --bind-address=192.168.2.176 \
        --advertise-address=192.168.2.176 \
        --secure-port=5443 \
        --insecure-port=0 \
        --service-cluster-ip-range=10.66.0.0/16 \
        --service-node-port-range=30000-65535 \
        --etcd-cafile=/apps/k8s/ssl/etcd/etcd-ca.pem \
        --etcd-certfile=/apps/k8s/ssl/etcd/etcd-client.pem \
        --etcd-keyfile=/apps/k8s/ssl/etcd/etcd-client-key.pem \
        --etcd-prefix=/registry \
        --etcd-servers=https://192.168.2.175:2379,https://192.168.2.176:2379,https://192.168.2.177:2379 \
        --client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --tls-cert-file=/apps/k8s/ssl/k8s/k8s-server.pem \
        --tls-private-key-file=/apps/k8s/ssl/k8s/k8s-server-key.pem \
        --kubelet-client-certificate=/apps/k8s/ssl/k8s/k8s-server.pem \
        --kubelet-client-key=/apps/k8s/ssl/k8s/k8s-server-key.pem \
        --service-account-key-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --proxy-client-cert-file=/apps/k8s/ssl/k8s/aggregator.pem \
        --proxy-client-key-file=/apps/k8s/ssl/k8s/aggregator-key.pem \
        --service-account-issuer=https://kubernetes.default.svc.cluster.local \
        --service-account-signing-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
        --requestheader-allowed-names=aggregator \
        --requestheader-group-headers=X-Remote-Group \
        --requestheader-extra-headers-prefix=X-Remote-Extra- \
        --requestheader-username-headers=X-Remote-User \
        --enable-aggregator-routing=true \
        --anonymous-auth=false \
        --experimental-encryption-provider-config=/apps/k8s/config/encryption-config.yaml \
        --enable-admission-plugins=DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,NamespaceExists,NamespaceLifecycle,NodeRestriction,PodNodeSelector,PersistentVolumeClaimResize,PodTolerationRestriction,ResourceQuota,ServiceAccount,StorageObjectInUseProtection,MutatingAdmissionWebhook,ValidatingAdmissionWebhook \
        --disable-admission-plugins=ExtendedResourceToleration,ImagePolicyWebhook,LimitPodHardAntiAffinityTopology,NamespaceAutoProvision,Priority,EventRateLimit,PodSecurityPolicy \
        --cors-allowed-origins=.* \
        --enable-swagger-ui \
        --runtime-config=api/all=true \
        --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
        --authorization-mode=Node,RBAC \
        --allow-privileged=true \
        --apiserver-count=3 \
        --audit-log-maxage=30 \
        --audit-log-maxbackup=3 \
        --audit-log-maxsize=100 \
        --default-not-ready-toleration-seconds=30 \
        --default-unreachable-toleration-seconds=30 \
        --audit-log-truncate-enabled \
        --audit-log-path=/apps/k8s/log/api-server-audit.log \
        --profiling \
        --http2-max-streams-per-connection=10000 \
        --event-ttl=1h \
        --enable-bootstrap-token-auth=true \
        --alsologtostderr=true \
        --log-dir=/apps/k8s/log \
        --v=2 \
        --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
        --endpoint-reconciler-type=lease \
        --max-mutating-requests-inflight=500 \
        --max-requests-inflight=1500 \
        --target-ram-mb=300"
EOF
  • 192.168.2.177节点:
    k8s-master-3 节点上执行
cat >/apps/k8s/conf/kube-apiserver <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
        --bind-address=192.168.2.177 \
        --advertise-address=192.168.2.177 \
        --secure-port=5443 \
        --insecure-port=0 \
        --service-cluster-ip-range=10.66.0.0/16 \
        --service-node-port-range=30000-65535 \
        --etcd-cafile=/apps/k8s/ssl/etcd/etcd-ca.pem \
        --etcd-certfile=/apps/k8s/ssl/etcd/etcd-client.pem \
        --etcd-keyfile=/apps/k8s/ssl/etcd/etcd-client-key.pem \
        --etcd-prefix=/registry \
        --etcd-servers=https://192.168.2.175:2379,https://192.168.2.176:2379,https://192.168.2.177:2379 \
        --client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --tls-cert-file=/apps/k8s/ssl/k8s/k8s-server.pem \
        --tls-private-key-file=/apps/k8s/ssl/k8s/k8s-server-key.pem \
        --kubelet-client-certificate=/apps/k8s/ssl/k8s/k8s-server.pem \
        --kubelet-client-key=/apps/k8s/ssl/k8s/k8s-server-key.pem \
        --service-account-key-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
        --proxy-client-cert-file=/apps/k8s/ssl/k8s/aggregator.pem \
        --proxy-client-key-file=/apps/k8s/ssl/k8s/aggregator-key.pem \
        --service-account-issuer=https://kubernetes.default.svc.cluster.local \
        --service-account-signing-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
        --requestheader-allowed-names=aggregator \
        --requestheader-group-headers=X-Remote-Group \
        --requestheader-extra-headers-prefix=X-Remote-Extra- \
        --requestheader-username-headers=X-Remote-User \
        --enable-aggregator-routing=true \
        --anonymous-auth=false \
        --experimental-encryption-provider-config=/apps/k8s/config/encryption-config.yaml \
        --enable-admission-plugins=DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,NamespaceExists,NamespaceLifecycle,NodeRestriction,PodNodeSelector,PersistentVolumeClaimResize,PodTolerationRestriction,ResourceQuota,ServiceAccount,StorageObjectInUseProtection,MutatingAdmissionWebhook,ValidatingAdmissionWebhook \
        --disable-admission-plugins=ExtendedResourceToleration,ImagePolicyWebhook,LimitPodHardAntiAffinityTopology,NamespaceAutoProvision,Priority,EventRateLimit,PodSecurityPolicy \
        --cors-allowed-origins=.* \
        --enable-swagger-ui \
        --runtime-config=api/all=true \
        --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
        --authorization-mode=Node,RBAC \
        --allow-privileged=true \
        --apiserver-count=3 \
        --audit-log-maxage=30 \
        --audit-log-maxbackup=3 \
        --audit-log-maxsize=100 \
        --default-not-ready-toleration-seconds=30 \
        --default-unreachable-toleration-seconds=30 \
        --audit-log-truncate-enabled \
        --audit-log-path=/apps/k8s/log/api-server-audit.log \
        --profiling \
        --http2-max-streams-per-connection=10000 \
        --event-ttl=1h \
        --enable-bootstrap-token-auth=true \
        --alsologtostderr=true \
        --log-dir=/apps/k8s/log \
        --v=2 \
        --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
        --endpoint-reconciler-type=lease \
        --max-mutating-requests-inflight=500 \
        --max-requests-inflight=1500 \
        --target-ram-mb=300"
EOF
  • --advertise-address:apiserver 对外通告的 IP(kubernetes 服务后端节点 IP);
  • --default-*-toleration-seconds:设置节点异常相关的阈值;
  • --max-*-requests-inflight:请求相关的最大阈值;
  • --etcd-*:访问 etcd 的证书和 etcd 服务器地址;
  • --bind-address:https 监听的 IP,不能为 127.0.0.1,否则外界不能访问它的安全端口 5443;
  • --secure-port:https 监听端口;
  • --insecure-port=0:关闭监听 http 非安全端口(8080);
  • --tls-*-file:指定 apiserver 使用的证书、私钥和 CA 文件;
  • --audit-*:配置审计策略和审计日志文件相关的参数;
  • --client-ca-file:验证 client(kube-controller-manager、kube-scheduler、kubelet、kube-proxy 等)请求所带的证书;
  • --enable-bootstrap-token-auth:启用 kubelet bootstrap 的 token 认证;
  • --requestheader-*:kube-apiserver 的 aggregator layer 相关的配置参数,proxy-client & HPA 需要使用;
  • --requestheader-client-ca-file:用于签名 --proxy-client-cert-file 和 --proxy-client-key-file 指定的证书;在启用了 metric aggregator 时使用;
  • --requestheader-allowed-names:不能为空,值为逗号分割的 --proxy-client-cert-file 证书的 CN 名称,这里设置为 "aggregator";
  • --service-account-key-file:签名 ServiceAccount Token 的公钥文件,kube-controller-manager 的 --service-account-private-key-file 指定私钥文件,两者配对使用;
  • --runtime-config=api/all=true: 启用所有版本的 APIs,如 autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC、--anonymous-auth=false: 开启 Node 和 RBAC 授权模式,拒绝未授权的请求;
  • --enable-admission-plugins:启用一些默认关闭的 plugins;
  • --allow-privileged:允许运行 privileged 权限的容器;
  • --apiserver-count=3:指定 apiserver 实例的数量;
  • --event-ttl:指定 events 的保存时间;
  • --kubelet-*:如果指定,则使用 https 访问 kubelet APIs;需要为证书对应的用户(上面 kubernetes*.pem 证书的用户为 kubernetes)定义 RBAC 规则,否则访问 kubelet API 时提示未授权;
  • --proxy-client-*:apiserver 访问 metrics-server 使用的证书;
  • --service-cluster-ip-range: 指定 Service Cluster IP 地址段;
  • --service-node-port-range: 指定 NodePort 的端口范围;

如果 kube-apiserver 机器没有运行 kube-proxy,则还需要添加 --enable-aggregator-routing=true 参数;

  • 参数详细说明

10.5 分发 kube-apiserver 证书及配置

  • 证书分发
# 分发server 证书
scp -r /opt/k8s/cfssl/pki/k8s/k8s-server* root@192.168.2.175:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-server* root@192.168.2.176:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-server* root@192.168.2.177:/apps/k8s/ssl/k8s
# 分发webhook证书
scp -r /opt/k8s/cfssl/pki/k8s/aggregator* root@192.168.2.175:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/aggregator* root@192.168.2.176:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/aggregator* root@192.168.2.177:/apps/k8s/ssl/k8s
  • 配置分发
cd  /opt/k8s/work
scp -r config root@192.168.2.175:/apps/k8s/
scp -r config root@192.168.2.176:/apps/k8s/
scp -r config root@192.168.2.177:/apps/k8s/

10.6 创建 kube-apiserver systemd unit 文件

k8s-master-1 k8s-master-2 k8s-master-3 节点上执行

cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
Type=notify
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-apiserver
ExecStart=/apps/k8s/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
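注意 unit 文件里 ExecStart 写的是 \$KUBE_APISERVER_OPTS:heredoc 中必须转义为 \$,变量引用才会原样落盘,由 systemd 启动时再从 EnvironmentFile 展开;不转义则在生成文件时就被当前 shell 展开。可用下面的小例子验证这一行为(变量名 FOO 为假设):

```shell
# 演示 heredoc 中 $ 与 \$ 的区别
FOO=runtime-value
cat > /tmp/demo-unit <<EOF
escaped=\$FOO
expanded=$FOO
EOF
cat /tmp/demo-unit
# escaped 行保留字面量 $FOO,expanded 行已被展开为 runtime-value
```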

10.7 启动 kube-apiserver 服务

k8s-master-1 k8s-master-2 k8s-master-3 节点上执行

# 全局刷新service
systemctl daemon-reload 
# 设置kube-apiserver开机启动
systemctl enable kube-apiserver
#重启kube-apiserver
systemctl restart kube-apiserver

10.8 检查启动结果

k8s-master-1 k8s-master-2 k8s-master-3 节点上执行

systemctl status kube-apiserver|grep Active
[root@k8s-master-1 ~]# systemctl status kube-apiserver|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:41 CST; 3 days ago
[root@k8s-master-2 ~]# systemctl status kube-apiserver|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:40 CST; 3 days ago
[root@k8s-master-3 ~]# systemctl status kube-apiserver|grep Active
   Active: active (running) since Mon 2022-02-14 14:39:40 CST; 1h 4min ago

确保状态为 active (running),否则查看日志,确认原因:

journalctl -u kube-apiserver

10.9 验证服务状态

qist 节点上执行
部署完 kube-apiserver 集群后,在任一 qist 节点上执行如下命令:

# 配置环境变量
export KUBECONFIG=/opt/k8s/kubeconfig/admin.kubeconfig
root@Qist work# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                                                                                      ERROR
scheduler            Unhealthy   Get https://127.0.0.1:10259/healthz: dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager   Unhealthy   Get https://127.0.0.1:10257/healthz: dial tcp 127.0.0.1:10257: connect: connection refused
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}
  • scheduler、controller-manager 还没部署,所以报 Unhealthy

kubectl cluster-info
预期输出:

root@Qist work# kubectl cluster-info
Kubernetes control plane is running at https://192.168.2.175:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

正常输出表示集群正常

11 master 节点 kubelet 部署

本文档介绍部署 kubelet 的步骤。

说明:kube-apiserver 高可用采用的是 kubelet 启动静态 pod 的模式,为其它组件提供访问 kube-apiserver api 的入口,所以需要先在 master 节点部署 kubelet。

注意:如果没有特殊指明,本文档的所有操作均在 qist 节点上执行。

11.1 kubelet 生成bootstrap

生成 bootstrap 配置文件:

 export  TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9|cut -c 3-8)
 export  TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
 export  BOOTSTRAP_TOKEN=${TOKEN_ID}.${TOKEN_SECRET}
 cat > /opt/k8s/work/yaml/bootstrap-secret.yaml << EOF
---
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubelet '."
  # Token ID and secret. Required.
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
EOF
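bootstrap token 必须满足 `[a-z0-9]{6}.[a-z0-9]{16}` 的格式,否则 apiserver 会拒绝该 Secret。下面是一个最小的校验脚本(示例,生成命令与上文一致,仅增加格式检查):

```shell
# 按上文方式生成 token-id(6 位)与 token-secret(16 位)
TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9 | cut -c 3-8)
TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
# 校验格式:不符合规范的 token 在 TLS bootstrap 时会认证失败
if echo "${BOOTSTRAP_TOKEN}" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token 格式正确"
else
  echo "token 格式错误: ${BOOTSTRAP_TOKEN}"
fi
```

提交 Secret 前先做一次该检查,可以避免因 token 截断或含非法字符导致 kubelet 反复 bootstrap 失败。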

创建 bootstrap 授权文件:

cat > /opt/k8s/work/yaml/kubelet-bootstrap-rbac.yaml << EOF
---
# 允许 system:bootstrappers 组用户创建 CSR 请求
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
---
# 自动批准 system:bootstrappers 组用户 TLS bootstrapping 首次申请证书的 CSR 请求
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-client-auto-approve-csr
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
---
# 自动批准 system:nodes 组用户更新 kubelet 自身与 apiserver 通讯证书的 CSR 请求
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-client-auto-renew-crt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
# 自动批准 system:nodes 组用户更新 kubelet 10250 api 端口证书的 CSR 请求
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-server-auto-renew-crt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
EOF

生成集群组件授权

cat > /opt/k8s/work/yaml/kube-api-rbac.yaml << EOF
---
# kube-controller-manager 绑定
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: controller-node-clusterrolebing
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-controller-manager
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
---
# 创建kube-scheduler 绑定
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scheduler-node-clusterrolebing
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-scheduler
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-scheduler
---
# 创建kube-controller-manager 到auth-delegator 绑定
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: controller-manager:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
---
#授予 kubernetes 证书访问 kubelet API 的权限
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-system-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:serviceaccount:kube-system:default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-node-clusterbinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-apiserver:kubelet-apis
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
EOF

11.2 创建 kubelet bootstrap kubeconfig

cd /opt/k8s/
  # 设置集群参数
   kubectl config set-cluster kubernetes \
        --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
        --embed-certs=true \
        --server=https://127.0.0.1:6443 \
        --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
  # 设置客户端认证参数
   kubectl config set-credentials system:bootstrap:${TOKEN_ID} \
        --token=${BOOTSTRAP_TOKEN} \
        --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
  # 设置上下文参数
   kubectl config set-context default \
        --cluster=kubernetes \
        --user=system:bootstrap:${TOKEN_ID} \
        --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
 # 设置默认上下文
 kubectl config use-context default --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
  • server:kube-apiserver 以 kubelet 静态 pod 方式启动并由本地负载均衡代理,所以使用 127.0.0.1 访问 api
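上面的 set-credentials 依赖 11.1 中导出的 TOKEN_ID、BOOTSTRAP_TOKEN 环境变量,新开终端后这些变量会丢失,生成的 kubeconfig 会带上空用户名。执行前可先做非空校验(示例函数,名称为假设):

```shell
# 校验 bootstrap 相关变量是否已导出,避免生成残缺的 kubeconfig
check_bootstrap_env() {
  if [ -z "${TOKEN_ID:-}" ] || [ -z "${BOOTSTRAP_TOKEN:-}" ]; then
    echo "缺少 TOKEN_ID/BOOTSTRAP_TOKEN,请先执行 11.1 的导出命令"
    return 1
  fi
  echo "bootstrap 变量已就绪: system:bootstrap:${TOKEN_ID}"
}

# 演示:变量就绪时的输出(此处为演示用的假设取值)
TOKEN_ID=abc123
BOOTSTRAP_TOKEN=abc123.0123456789abcdef
check_bootstrap_env
```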

分发到所有 master 节点:

scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.2.175:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.2.176:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.2.177:/apps/k8s/conf/

11.3 提交bootstrap yaml 到集群

# 提交bootstrap secret 到集群
kubectl apply -f /opt/k8s/work/yaml/bootstrap-secret.yaml
# bootstrap授权
kubectl apply -f /opt/k8s/work/yaml/kubelet-bootstrap-rbac.yaml
# 提交组件授权
kubectl apply -f /opt/k8s/work/yaml/kube-api-rbac.yaml

11.4 生成 kubelet 配置文件

生成 kubelet 启动参数文件:

  • 192.168.2.175 节点:
    k8s-master-1 节点上执行
cat > /apps/k8s/conf/kubelet <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \
              --network-plugin=cni \
              --cni-conf-dir=/etc/cni/net.d \
              --cni-bin-dir=/opt/cni/bin \
              --kubeconfig=/apps/k8s/conf/kubelet.kubeconfig \
              --node-ip=192.168.2.175 \
              --hostname-override=k8s-master-1 \
              --cert-dir=/apps/k8s/ssl \
              --runtime-cgroups=/systemd/system.slice \
              --root-dir=/var/lib/kubelet \
              --log-dir=/apps/k8s/log \
              --alsologtostderr=true \
              --config=/apps/k8s/conf/kubelet.yaml \
              --logtostderr=true \
              --container-runtime=remote \
              --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
              --containerd=unix:///var/run/crio/crio.sock \
              --pod-infra-container-image=docker.io/juestnow/pause:3.5 \
              --v=2 \
              --image-pull-progress-deadline=30s"
EOF
  • 如果设置了 --hostname-override 选项,则 kube-proxy 也需要设置该选项,否则会出现找不到 Node 的情况;
  • --bootstrap-kubeconfig:指向 bootstrap kubeconfig 文件,kubelet 使用该文件中的用户名和 token 向 kube-apiserver 发送 TLS Bootstrapping 请求;
  • K8S approve kubelet 的 csr 请求后,在 --cert-dir 目录创建证书和私钥文件,然后写入 --kubeconfig 文件;
  • --pod-infra-container-image 不使用 redhat 的 pod-infrastructure:latest 镜像,它不能回收容器的僵尸进程;
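三台 master 的 kubelet 启动参数只有 --node-ip 与 --hostname-override 不同,逐台手写容易出错,可在 qist 节点批量生成后再分发。下面是一个最小示例(仅示意差异参数,输出目录 /tmp/kubelet-conf 为演示用假设,实际仍需包含上文的完整参数):

```shell
# 批量生成各 master 节点的 kubelet 启动参数文件,仅替换节点差异项
mkdir -p /tmp/kubelet-conf
i=1
for ip in 192.168.2.175 192.168.2.176 192.168.2.177; do
  cat > /tmp/kubelet-conf/kubelet-k8s-master-${i} <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \\
              --node-ip=${ip} \\
              --hostname-override=k8s-master-${i} \\
              --config=/apps/k8s/conf/kubelet.yaml"
EOF
  i=$((i+1))
done
ls /tmp/kubelet-conf
```

生成后用 scp 分发到对应节点的 /apps/k8s/conf/kubelet 即可。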

kubelet config 配置生成

从 v1.10 开始,部分 kubelet 参数需在配置文件中配置,kubelet --help 会提示:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag

创建 kubelet 参数配置文件模板(配置项可参考下文注释):

cat > /apps/k8s/conf/kubelet.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/apps/work/kubernetes/manifests"
syncFrequency: 30s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 192.168.2.175
port: 10250
readOnlyPort: 0
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
rotateCertificates: true
authentication:
  x509:
    clientCAFile: "/apps/k8s/ssl/k8s/k8s-ca.pem"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 15
eventBurst: 30
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 192.168.2.175
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.66.0.2
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
volumeStatsAggPeriod: 1m0s
kubeletCgroups: "/systemd/system.slice"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
topologyManagerPolicy: none
runtimeRequestTimeout: 2m0s
hairpinMode: hairpin-veth
maxPods: 55
podsPerCore: 0
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 15
kubeAPIBurst: 30
serializeImagePulls: false
evictionHard:
  imagefs.available: 10%
  memory.available: 500Mi
  nodefs.available: 10%
evictionSoft:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
evictionSoftGracePeriod:
  imagefs.available: 2m
  memory.available: 2m
  nodefs.available: 2m
evictionPressureTransitionPeriod: 20s
evictionMinimumReclaim:
  imagefs.available: 500Mi
  memory.available: 0Mi
  nodefs.available: 500Mi
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: false
containerLogMaxSize: 100Mi
containerLogMaxFiles: 10
configMapAndSecretChangeDetectionStrategy: Watch
systemReserved:
  cpu: 1000m
  ephemeral-storage: 1Gi
  memory: 1024Mi
kubeReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 512Mi
systemReservedCgroup: "/systemd/system.slice"
kubeReservedCgroup: "/systemd/system.slice"
enforceNodeAllocatable:
- pods
allowedUnsafeSysctls:
- kernel.msg*
- kernel.shm*
- kernel.sem
- fs.mqueue.*
- net.*
EOF
  • address:kubelet 安全端口(https,10250)监听的地址,不能为 127.0.0.1,否则 kube-apiserver、heapster 等不能调用 kubelet 的 API;
  • readOnlyPort=0:关闭只读端口(默认 10255),等效为未指定;
  • authentication.anonymous.enabled:设置为 false,不允许匿名访问 10250 端口;
  • authentication.x509.clientCAFile:指定签名客户端证书的 CA 证书,开启 HTTPS 证书认证;
  • authentication.webhook.enabled=true:开启 HTTPS bearer token 认证;
  • 对于未通过 x509 证书和 webhook 认证的请求(kube-apiserver 或其他客户端),将被拒绝,提示 Unauthorized;
  • authorization.mode=Webhook:kubelet 使用 SubjectAccessReview API 查询 kube-apiserver,判断某 user、group 是否具有操作资源的权限(RBAC);
  • 需要 root 账户运行;
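systemReserved、kubeReserved 与 evictionHard 共同决定节点的可分配资源:Node Allocatable = 节点容量 - systemReserved - kubeReserved - evictionHard。以上面配置中的内存项为例估算(示例,节点总内存 16384Mi 为假设值):

```shell
# 按 KubeletConfiguration 中的内存预留项估算 Allocatable
capacity_mi=16384        # 节点总内存(假设值)
system_reserved_mi=1024  # systemReserved.memory: 1024Mi
kube_reserved_mi=512     # kubeReserved.memory: 512Mi
eviction_hard_mi=500     # evictionHard memory.available: 500Mi
allocatable_mi=$((capacity_mi - system_reserved_mi - kube_reserved_mi - eviction_hard_mi))
echo "allocatable memory: ${allocatable_mi}Mi"
```

注意本文 enforceNodeAllocatable 仅包含 pods,预留值只参与 Allocatable 计算,不会对 system/kube cgroup 做硬限制。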

kubelet 参数

  • 192.168.2.176 节点:
    k8s-master-2 节点上执行
cat > /apps/k8s/conf/kubelet <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \
              --network-plugin=cni \
              --cni-conf-dir=/etc/cni/net.d \
              --cni-bin-dir=/opt/cni/bin \
              --kubeconfig=/apps/k8s/conf/kubelet.kubeconfig \
              --node-ip=192.168.2.176 \
              --hostname-override=k8s-master-2 \
              --cert-dir=/apps/k8s/ssl \
              --runtime-cgroups=/systemd/system.slice \
              --root-dir=/var/lib/kubelet \
              --log-dir=/apps/k8s/log \
              --alsologtostderr=true \
              --config=/apps/k8s/conf/kubelet.yaml \
              --logtostderr=true \
              --container-runtime=remote \
              --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
              --containerd=unix:///var/run/crio/crio.sock \
              --pod-infra-container-image=docker.io/juestnow/pause:3.5 \
              --v=2 \
              --image-pull-progress-deadline=30s"
EOF

kubelet config 配置生成

cat > /apps/k8s/conf/kubelet.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/apps/work/kubernetes/manifests"
syncFrequency: 30s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 192.168.2.176
port: 10250
readOnlyPort: 0
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
rotateCertificates: true
authentication:
  x509:
    clientCAFile: "/apps/k8s/ssl/k8s/k8s-ca.pem"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 15
eventBurst: 30
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 192.168.2.176
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.66.0.2
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
volumeStatsAggPeriod: 1m0s
kubeletCgroups: "/systemd/system.slice"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
topologyManagerPolicy: none
runtimeRequestTimeout: 2m0s
hairpinMode: hairpin-veth
maxPods: 55
podsPerCore: 0
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 15
kubeAPIBurst: 30
serializeImagePulls: false
evictionHard:
  imagefs.available: 10%
  memory.available: 500Mi
  nodefs.available: 10%
evictionSoft:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
evictionSoftGracePeriod:
  imagefs.available: 2m
  memory.available: 2m
  nodefs.available: 2m
evictionPressureTransitionPeriod: 20s
evictionMinimumReclaim:
  imagefs.available: 500Mi
  memory.available: 0Mi
  nodefs.available: 500Mi
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: false
containerLogMaxSize: 100Mi
containerLogMaxFiles: 10
configMapAndSecretChangeDetectionStrategy: Watch
systemReserved:
  cpu: 1000m
  ephemeral-storage: 1Gi
  memory: 1024Mi
kubeReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 512Mi
systemReservedCgroup: "/systemd/system.slice"
kubeReservedCgroup: "/systemd/system.slice"
enforceNodeAllocatable:
- pods
allowedUnsafeSysctls:
- kernel.msg*
- kernel.shm*
- kernel.sem
- fs.mqueue.*
- net.*
EOF
  • 192.168.2.177 节点:
    k8s-master-3 节点上执行
cat > /apps/k8s/conf/kubelet <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \
              --network-plugin=cni \
              --cni-conf-dir=/etc/cni/net.d \
              --cni-bin-dir=/opt/cni/bin \
              --kubeconfig=/apps/k8s/conf/kubelet.kubeconfig \
              --node-ip=192.168.2.177 \
              --hostname-override=k8s-master-3 \
              --cert-dir=/apps/k8s/ssl \
              --runtime-cgroups=/systemd/system.slice \
              --root-dir=/var/lib/kubelet \
              --log-dir=/apps/k8s/log \
              --alsologtostderr=true \
              --config=/apps/k8s/conf/kubelet.yaml \
              --logtostderr=true \
              --container-runtime=remote \
              --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
              --containerd=unix:///var/run/crio/crio.sock \
              --pod-infra-container-image=docker.io/juestnow/pause:3.5 \
              --v=2 \
              --image-pull-progress-deadline=30s"
EOF

kubelet config 配置生成

cat > /apps/k8s/conf/kubelet.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/apps/work/kubernetes/manifests"
syncFrequency: 30s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 192.168.2.177
port: 10250
readOnlyPort: 0
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
rotateCertificates: true
authentication:
  x509:
    clientCAFile: "/apps/k8s/ssl/k8s/k8s-ca.pem"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 15
eventBurst: 30
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 192.168.2.177
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.66.0.2
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
volumeStatsAggPeriod: 1m0s
kubeletCgroups: "/systemd/system.slice"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
topologyManagerPolicy: none
runtimeRequestTimeout: 2m0s
hairpinMode: hairpin-veth
maxPods: 55
podsPerCore: 0
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 15
kubeAPIBurst: 30
serializeImagePulls: false
evictionHard:
  imagefs.available: 10%
  memory.available: 500Mi
  nodefs.available: 10%
evictionSoft:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
evictionSoftGracePeriod:
  imagefs.available: 2m
  memory.available: 2m
  nodefs.available: 2m
evictionPressureTransitionPeriod: 20s
evictionMinimumReclaim:
  imagefs.available: 500Mi
  memory.available: 0Mi
  nodefs.available: 500Mi
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: false
containerLogMaxSize: 100Mi
containerLogMaxFiles: 10
configMapAndSecretChangeDetectionStrategy: Watch
systemReserved:
  cpu: 1000m
  ephemeral-storage: 1Gi
  memory: 1024Mi
kubeReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 512Mi
systemReservedCgroup: "/systemd/system.slice"
kubeReservedCgroup: "/systemd/system.slice"
enforceNodeAllocatable:
- pods
allowedUnsafeSysctls:
- kernel.msg*
- kernel.shm*
- kernel.sem
- fs.mqueue.*
- net.*
EOF

11.5 kubelet systemd Unit 配置

cd /opt/k8s/work/
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Wants=network-online.target
After=network-online.target
[Service]
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/hugetlb/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/blkio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpuset/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/devices/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/net_cls,net_prio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/perf_event/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpu,cpuacct/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/freezer/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/memory/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/pids/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/systemd/systemd/system.slice
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kubelet
ExecStart=/apps/k8s/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

分发到所有 master 节点:

cd /opt/k8s/work
scp kubelet.service root@192.168.2.175:/usr/lib/systemd/system/
scp kubelet.service root@192.168.2.176:/usr/lib/systemd/system/
scp kubelet.service root@192.168.2.177:/usr/lib/systemd/system/

11.6 启动 kubelet 服务

k8s-master-1 k8s-master-2 k8s-master-3 节点上执行

# 全局刷新service
systemctl daemon-reload 
# 设置kubelet开机启动
systemctl enable kubelet
#重启kubelet
systemctl restart kubelet

11.7 检查启动结果

k8s-master-1 k8s-master-2 k8s-master-3 节点上执行

systemctl status kubelet|grep Active
[root@k8s-master-1 ~]# systemctl status kubelet|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:41 CST; 3 days ago
[root@k8s-master-2 ~]# systemctl status kubelet|grep Active
   Active: active (running) since Fri 2022-02-11 13:49:40 CST; 3 days ago
[root@k8s-master-3 ~]# systemctl status kubelet|grep Active
   Active: active (running) since Mon 2022-02-14 14:39:40 CST; 1h 4min ago

确保状态为 active (running),否则查看日志,确认原因:

journalctl -u kubelet

11.8 检查静态pod 是否启动

[root@k8s-master-3 ~]# crictl ps
f0e4c083dcce1       ad393d6a4d1b1ccbce65c2a4db6064635a8aac883d9dd66a38e14ce925e93b7f                                       2 hours ago         Running             kube-rbac-proxy           14                  f7ba3e70afb06
[root@k8s-master-3 ~]# netstat -tnlp| grep 6443
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1790/nginx: master
tcp6       0      0 :::6443                 :::*                    LISTEN      1790/nginx: master
[root@k8s-master-3 ~]# curl -k https://127.0.0.1:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
# 返回 401 说明负载均衡正常且认证已生效,可继续部署其它插件
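不带客户端证书 curl apiserver 返回 401 属预期(匿名访问已关闭),巡检脚本可以据此判断"负载均衡可达且认证生效"。下面用保存的响应演示解析方式(示例,实际使用时把 resp 换成 curl -sk https://127.0.0.1:6443 的输出):

```shell
# 从 apiserver 的 Status 响应中提取 code 字段并判定
resp='{"kind":"Status","apiVersion":"v1","status":"Failure","reason":"Unauthorized","code":401}'
code=$(echo "$resp" | grep -o '"code":[0-9]*' | cut -d: -f2)
if [ "$code" = "401" ]; then
  echo "apiserver 可达,匿名访问按预期被拒绝"
else
  echo "非预期返回码: $code"
fi
```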

12 部署高可用 kube-controller-manager 集群

本文档介绍部署高可用 kube-controller-manager 集群的步骤。

该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用时,阻塞的节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。

为保证通信安全,本文档先生成 x509 证书和私钥,kube-controller-manager 在如下两种情况下使用该证书:

  1. 与 kube-apiserver 的安全端口通信;
  2. 在安全端口(https,10257)输出 prometheus 格式的 metrics;

注意:如果没有特殊指明,本文档的所有操作均在 qist 节点上执行。

12.1 创建 kube-controller-manager 证书和私钥

创建证书签名请求:

cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-controller-manager.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [""], 
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
            "ST": "$CERT_ST",
            "L": "$CERT_L",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
  • hosts 列表包含所有 kube-controller-manager 节点 IP;
  • CN 和 O 均为 system:kube-controller-manager,kubernetes 内置的 ClusterRoleBinding system:kube-controller-manager 赋予 kube-controller-manager 工作所需的权限。

生成证书和私钥:

cd /opt/k8s/work
cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/k8s/k8s-controller-manager.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-controller-manager
root@Qist work# ll /opt/k8s/cfssl/pki/k8s/k8s-controller-manager*
-rw------- 1 root root 1679 Dec  3  2020 /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-key.pem
-rw-r--r-- 1 root root 1127 Dec  3  2020 /opt/k8s/cfssl/pki/k8s/k8s-controller-manager.csr
-rw-r--r-- 1 root root 1505 Dec  3  2020 /opt/k8s/cfssl/pki/k8s/k8s-controller-manager.pem
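注意上面列表中私钥(*-key.pem)的权限是 600,而证书是 644。分发前可批量校验私钥权限,避免权限过宽的私钥被其它用户读取(示例,使用临时目录演示,实际可把 demo 换成 /opt/k8s/cfssl/pki/k8s):

```shell
# 校验目录下所有私钥文件的权限是否为 600
demo=/tmp/pki-demo
mkdir -p "$demo"
touch "$demo/k8s-controller-manager-key.pem"
chmod 600 "$demo/k8s-controller-manager-key.pem"
for f in "$demo"/*-key.pem; do
  perm=$(stat -c '%a' "$f")
  if [ "$perm" = "600" ]; then
    echo "$(basename "$f") 权限正确(600)"
  else
    echo "$(basename "$f") 权限过宽($perm),建议 chmod 600"
  fi
done
```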

将生成的证书和私钥分发到所有 master 节点:

cd /opt/k8s/work
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-* root@192.168.2.175:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-* root@192.168.2.176:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-* root@192.168.2.177:/apps/k8s/ssl/k8s

12.2 创建和分发 kubeconfig 文件

kube-controller-manager 使用 kubeconfig 文件访问 apiserver,该文件提供了 apiserver 地址、嵌入的 CA 证书和 kube-controller-manager 证书等信息:

cd /opt/k8s/kubeconfig
      kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
      --embed-certs=true \
      --server=https://127.0.0.1:6443 \
      --kubeconfig=kube-controller-manager.kubeconfig
      kubectl config set-credentials system:kube-controller-manager \
      --client-certificate=/opt/k8s/cfssl/pki/k8s/k8s-controller-manager.pem \
      --embed-certs=true \
      --client-key=/opt/k8s/cfssl/pki/k8s/k8s-controller-manager-key.pem \
      --kubeconfig=kube-controller-manager.kubeconfig
      kubectl config set-context kubernetes \
      --cluster=kubernetes \
      --user=system:kube-controller-manager \
      --kubeconfig=kube-controller-manager.kubeconfig
      kubectl config use-context kubernetes --kubeconfig=kube-controller-manager.kubeconfig
  • kube-controller-manager 与 kube-apiserver 混布,故直接通过本机负载均衡地址 127.0.0.1:6443 访问 kube-apiserver;

分发 kubeconfig 到所有 master 节点:

cd /opt/k8s/kubeconfig
scp kube-controller-manager.kubeconfig root@192.168.2.175:/apps/k8s/config/
scp kube-controller-manager.kubeconfig root@192.168.2.176:/apps/k8s/config/
scp kube-controller-manager.kubeconfig root@192.168.2.177:/apps/k8s/config/

12.3 创建 kube-controller-manager 启动配置

cd /opt/k8s/work
cat >kube-controller-manager <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--profiling \
--concurrent-service-syncs=2 \
--concurrent-deployment-syncs=10 \
--concurrent-gc-syncs=30 \
--leader-elect=true \
--bind-address=0.0.0.0 \
--service-cluster-ip-range=10.66.0.0/16 \
--cluster-cidr=10.80.0.0/12 \
--node-cidr-mask-size=24 \
--cluster-name=kubernetes \
--allocate-node-cidrs=true \
--kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--authentication-kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--authorization-kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--use-service-account-credentials=true \
--client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-allowed-names=aggregator \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--node-monitor-grace-period=30s \
--node-monitor-period=5s \
--pod-eviction-timeout=1m0s \
--node-startup-grace-period=20s \
--terminated-pod-gc-threshold=50 \
--alsologtostderr=true \
--cluster-signing-cert-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--cluster-signing-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem  \
--deployment-controller-sync-period=10s \
--experimental-cluster-signing-duration=876000h0m0s \
--root-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--service-account-private-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
--enable-garbage-collector=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/apps/k8s/ssl/k8s/k8s-controller-manager.pem \
--tls-private-key-file=/apps/k8s/ssl/k8s/k8s-controller-manager-key.pem \
--kube-api-qps=100 \
--kube-api-burst=100 \
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
--log-dir=/apps/k8s/log \
--v=2"
EOF
  • --port=0:关闭监听非安全端口(http),此时 --address 参数无效,--bind-address 参数有效;
  • --secure-port=10257:在该安全端口接收 https /metrics 请求;
  • --kubeconfig:指定 kubeconfig 文件路径,kube-controller-manager 使用它连接和验证 kube-apiserver;
  • --authentication-kubeconfig 和 --authorization-kubeconfig:kube-controller-manager 使用它连接 apiserver,对 client 的请求进行认证和授权。kube-controller-manager 不再使用 --tls-ca-file 对请求 https metrics 的 Client 证书进行校验。如果没有配置这两个 kubeconfig 参数,则 client 连接 kube-controller-manager https 端口的请求会被拒绝(提示权限不足)。
  • --cluster-signing-*-file:签名 TLS Bootstrap 创建的证书;
  • --experimental-cluster-signing-duration:指定 TLS Bootstrap 证书的有效期;
  • --root-ca-file:放置到容器 ServiceAccount 中的 CA 证书,用来对 kube-apiserver 的证书进行校验;
  • --service-account-private-key-file:签名 ServiceAccount 中 Token 的私钥文件,必须和 kube-apiserver 的 --service-account-key-file 指定的公钥文件配对使用;
  • --service-cluster-ip-range:指定 Service Cluster IP 网段,必须和 kube-apiserver 中的同名参数一致;
  • --leader-elect=true:集群运行模式,启用选举功能;被选为 leader 的节点负责处理工作,其它节点为阻塞状态;
  • --controllers=*,bootstrapsigner,tokencleaner:启用的控制器列表,tokencleaner 用于自动清理过期的 Bootstrap token;
  • --horizontal-pod-autoscaler-*:custom metrics 相关参数,支持 autoscaling/v2alpha1;
  • --tls-cert-file、--tls-private-key-file:使用 https 输出 metrics 时使用的 Server 证书和私钥;
  • --use-service-account-credentials=true:kube-controller-manager 中各 controller 使用 serviceaccount 访问 kube-apiserver;
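启动参数较多时容易误复制出重复 flag(重复的单值 flag 通常以后出现的取值生效,错误不易察觉),分发前可用脚本检查配置文件中的重复项。下面用一个含重复项的样例文件演示(示例,文件路径为演示用假设,实际检查时把 conf 指向 /apps/k8s/conf/kube-controller-manager):

```shell
# 构造一个含重复 flag 的样例配置
conf=/tmp/kcm-opts.sample
cat > "$conf" <<'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--leader-elect=true \
--requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--v=2"
EOF
# 提取每个 flag 名并列出出现多于一次的项
grep -o -- '--[a-z-]*' "$conf" | sort | uniq -d
```

输出为空表示没有重复 flag;有输出时逐项核对并删除多余行。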

kube-controller-manager参数

分发 kube-controller-manager 配置文件到所有 master 节点:

cd /opt/k8s/work
scp kube-controller-manager root@192.168.2.175:/apps/k8s/conf/
scp kube-controller-manager root@192.168.2.176:/apps/k8s/conf/
scp kube-controller-manager root@192.168.2.177:/apps/k8s/conf/

12.4 创建 kube高可用-controller-man配置文件在哪ager systemd unit 文件

cd /opt/k8s/work
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-controller-manager
ExecStart=/apps/k8s/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

12.5 为各节点创建和分发 kube-controller-mananger systemd unit 文件

分发到所有 master 节点:

cd /opt/k8s/work
scp kube-controller-manager.service root@192.168.2.175:/usr/lib/systemd/system/
scp kube-controller-manager.service root@192.168.2.176:/usr/lib/systemd/system/
scp kube-controller-manager.service root@192.168.2.177:/usr/lib/systemd/system/

12.6 启动 kube-controller-manager 服务

k8s-master-1 k8s-master-2 k8s-master-3 节点上执行

# 全局刷新service
systemctl daemon-reload 
# 设置kube-controller-manager开机启动
systemctl enable kube-controller-manager
#重启kube-controller-manager
systemctl restart kube-controller-manager

12.7 检查服务运行状态

k8s-master-1 k8s-master-2 k8s-master-3 节点上执行

systemctl status kube-controller-manager|grep Active

确保状态为 active (running),否则查看日志,确认原因:

journalctl -u kube-controller-manager

kube-controller-manager 监听 10257 端口,接收 https 请求:

[root@k8s-master-1 conf]# netstat -lnpt | grep kube-cont
tcp6       0      0 :::10257                :::*                    LISTEN      24078/kube-controll

12.8 查看当前的 leader

kubectl -n kube-system get leases kube-controller-manager
NAME                      HOLDER                                              AGE
kube-controller-manager   k8s-master-2_c445a762-adc1-4623-a9b5-4d8ea3d34933   1d

可见,当前的 leader 为 k8s-master-2 节点。

12.9 测试 kube-controller-manager 集群的高可用

停掉一个或两个节点的 kube-controller-manager 服务,观察其它节点的日志,看是否获取了 leader 权限。

13 部署高可用 kube-scheduler 集群

本文档介绍部署高可用 kube-scheduler 集群的步骤。

该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。

为保证通信安全,本文档先生成 x509 证书和私钥,kube-scheduler 在如下两种情况下使用该证书:

  1. 与 kube-apiserver 的安全端口通信;
  2. 安全端口(https,10259) 输出 prometheus 格式的 metrics;

注意:如果没有特殊指明,本文档的所有操作均在 qist 节点上执行。

13.1 创建 kube-scheduler 证书和私钥

创建证书签名请求:

cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-scheduler.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [""], 
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
  • hosts 列表包含所有 kube-scheduler 节点 IP;
  • CN 和 O 均为 system:kube-scheduler,kubernetes 内置的 ClusterRoleBinding system:kube-scheduler 将赋予 kube-scheduler 工作所需的权限;

生成证书和私钥:

cd /opt/k8s/work
cfssl gencert \
    -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
    -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
    -config=/opt/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /opt/k8s/cfssl/k8s/k8s-scheduler.json | \
    cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-scheduler
ls /opt/k8s/cfssl/pki/k8s/k8s-scheduler*pem

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work
scp -r /opt/k8s/cfssl/pki/k8s/k8s-scheduler* root@192.168.2.175:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-scheduler* root@192.168.2.176:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-scheduler* root@192.168.2.177:/apps/k8s/ssl/k8s

13.2 Create and distribute the kubeconfig file

kube-scheduler uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate and the kube-scheduler certificate:

cd /opt/k8s/kubeconfig
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
# Set client credentials
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/opt/k8s/cfssl/pki/k8s/k8s-scheduler.pem \
  --embed-certs=true \
  --client-key=/opt/k8s/cfssl/pki/k8s/k8s-scheduler-key.pem \
  --kubeconfig=kube-scheduler.kubeconfig
# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes:

cd /opt/k8s/kubeconfig
scp kube-scheduler.kubeconfig root@192.168.2.175:/apps/k8s/config/
scp kube-scheduler.kubeconfig root@192.168.2.176:/apps/k8s/config/
scp kube-scheduler.kubeconfig root@192.168.2.177:/apps/k8s/config/

13.3 Create the kube-scheduler configuration file

cd /opt/k8s/work
cat >kube-scheduler <<EOF
KUBE_SCHEDULER_OPTS=" \
                   --logtostderr=true \
                   --bind-address=0.0.0.0 \
                   --leader-elect=true \
                   --kubeconfig=/apps/k8s/config/kube-scheduler.kubeconfig \
                   --authentication-kubeconfig=/apps/k8s/config/kube-scheduler.kubeconfig \
                   --authorization-kubeconfig=/apps/k8s/config/kube-scheduler.kubeconfig \
                   --tls-cert-file=/apps/k8s/ssl/k8s/k8s-scheduler.pem \
                   --tls-private-key-file=/apps/k8s/ssl/k8s/k8s-scheduler-key.pem \
                   --client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
                   --requestheader-allowed-names= \
                   --requestheader-extra-headers-prefix=X-Remote-Extra- \
                   --requestheader-group-headers=X-Remote-Group \
                   --requestheader-username-headers=X-Remote-User \
                   --alsologtostderr=true \
                   --kube-api-qps=100 \
                   --authentication-tolerate-lookup-failure=false \
                   --kube-api-burst=100 \
                   --log-dir=/apps/k8s/log \
                   --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
                   --v=2"
EOF
  • --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: enables leader election; the elected leader does the work while the other nodes block in standby;

kube-scheduler parameter reference

Distribute the kube-scheduler configuration file to all master nodes:

cd /opt/k8s/work
scp kube-scheduler root@192.168.2.175:/apps/k8s/conf/
scp kube-scheduler root@192.168.2.176:/apps/k8s/conf/
scp kube-scheduler root@192.168.2.177:/apps/k8s/conf/

13.4 Create the kube-scheduler systemd unit template file

cd /opt/k8s/work
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-scheduler
ExecStart=/apps/k8s/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

13.5 Create and distribute the kube-scheduler systemd unit file to each node

Distribute to all master nodes:

cd /opt/k8s/work
scp kube-scheduler.service root@192.168.2.175:/usr/lib/systemd/system/
scp kube-scheduler.service root@192.168.2.176:/usr/lib/systemd/system/
scp kube-scheduler.service root@192.168.2.177:/usr/lib/systemd/system/

13.6 Start the kube-scheduler service

Run on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:

# Reload systemd units
systemctl daemon-reload
# Enable kube-scheduler at boot
systemctl enable kube-scheduler
# Restart kube-scheduler
systemctl restart kube-scheduler

13.7 Check the service status

Run on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:

systemctl status kube-scheduler|grep Active

Make sure the status is active (running); otherwise check the logs to find the cause:

journalctl -u kube-scheduler

kube-scheduler listens on port 10259 for https requests:

[root@k8s-master-3 conf]# netstat -tnlp| grep kube-sc
tcp6       0      0 :::10259                :::*                    LISTEN      1887/kube-scheduler

13.8 Check the current leader

kubectl -n kube-system get leases kube-scheduler
root@Qist work# kubectl -n kube-system get leases kube-scheduler
NAME             HOLDER                                              AGE
kube-scheduler   k8s-master-2_383bedd9-26ec-40c3-95e6-182aebe9b1b9   1d

As shown, the current leader is the k8s-master-2 node.

13.9 Test kube-scheduler cluster high availability

Stop the kube-scheduler service on one or two master nodes and check whether another node acquires the leader lease.
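The leader check can be scripted by parsing the lease holder before and after stopping the service. The parsing step is shown offline below, with the kubectl output from 13.8 pasted in as a shell variable (an assumption for demonstration), so it can be tried without a cluster:

```shell
# Offline sketch: extract the holder node from the output of
# `kubectl -n kube-system get leases kube-scheduler`.
# Here the output line is simulated with a literal string.
lease_line='kube-scheduler   k8s-master-2_383bedd9-26ec-40c3-95e6-182aebe9b1b9   1d'
# Column 2 is HOLDER, formatted as <node>_<uuid>; strip the uuid suffix.
holder=$(echo "$lease_line" | awk '{print $2}' | cut -d_ -f1)
echo "current leader: $holder"
```

In a live test, replace the literal string with the real kubectl command and run the pipeline again after stopping the leader's service.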

14 Deploy kubelet on the worker nodes

The configuration reuses the kubelet configuration from the master nodes.

14.1 Distribute the bootstrap kubeconfig

Distribute to all worker nodes:

scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.2.185:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.2.187:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.3.62:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.3.70:/apps/k8s/conf/

14.2 Generate the configuration files

Configuration reference:

# Changes in the kubelet flag file
--node-ip=192.168.2.177 # node IP
--hostname-override=k8s-master-3 # node name
# Changes in kubelet.yaml
address: 192.168.2.177 # node IP
healthzBindAddress: 192.168.2.177 # current node IP
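Editing these per-node values by hand is error-prone; a small templating loop can render one flag fragment per node. This is a hypothetical helper, not part of the original deployment files: the `__NODE_IP__`/`__NODE_NAME__` placeholders and /tmp paths are assumptions for illustration.

```shell
# Hypothetical helper: render per-node kubelet flag fragments from one
# template. Placeholders and /tmp paths are illustrative only.
mkdir -p /tmp/kubelet-conf
cat > /tmp/kubelet-conf/kubelet.tmpl <<'TPL'
--node-ip=__NODE_IP__
--hostname-override=__NODE_NAME__
TPL
# name:ip pairs for the worker nodes
NODES="k8s-node-1:192.168.2.185 k8s-node-2:192.168.2.187 k8s-node-3:192.168.3.62 k8s-node-4:192.168.3.70"
for n in $NODES; do
  name=${n%%:*}; ip=${n##*:}
  sed -e "s/__NODE_IP__/$ip/" -e "s/__NODE_NAME__/$name/" \
    /tmp/kubelet-conf/kubelet.tmpl > "/tmp/kubelet-conf/kubelet-$name"
done
cat /tmp/kubelet-conf/kubelet-k8s-node-1
```

The rendered fragments would then be merged into each node's real flag file before distribution.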

Distribute to all worker nodes:

cd /opt/k8s/work
scp kubelet.service root@192.168.2.185:/usr/lib/systemd/system/
scp kubelet.service root@192.168.2.187:/usr/lib/systemd/system/
scp kubelet.service root@192.168.3.62:/usr/lib/systemd/system/
scp kubelet.service root@192.168.3.70:/usr/lib/systemd/system/

14.3 Start the kubelet service

Run on the k8s-node-1, k8s-node-2, k8s-node-3 and k8s-node-4 nodes:

# Reload systemd units
systemctl daemon-reload
# Enable kubelet at boot
systemctl enable kubelet
# Restart kubelet
systemctl restart kubelet

14.4 Check the startup result

Run on the k8s-node-1, k8s-node-2, k8s-node-3 and k8s-node-4 nodes:

systemctl status kubelet|grep Active
[root@k8s-node-1 ~]#  systemctl status kubelet|grep Active
   Active: active (running) since Fri 2022-02-11 04:48:17 CST; 3 days ago

Make sure the status is active (running); otherwise check the logs to find the cause:

journalctl -u kubelet

14.5 Check whether the static pods have started

[root@k8s-node-1 ~]# crictl ps
e180c4244b490       ad393d6a4d1b1ccbce65c2a4db6064635a8aac883d9dd66a38e14ce925e93b7f                                        3 days ago          Running             kube-rbac-proxy           4                   94f09ded0f736
[root@k8s-node-1 ~]# netstat -tnlp| grep 6443
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1790/nginx: master
tcp6       0      0 :::6443                 :::*                    LISTEN      1790/nginx: master
[root@k8s-node-1 ~]# curl -k https://127.0.0.1:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
# The load balancer is working; continue deploying the remaining add-ons

15 Deploy the kube-proxy component

kube-proxy runs on all worker nodes. It watches the apiserver for changes to services and endpoints and creates routing rules to provide service IPs and load balancing.

This section covers deploying kube-proxy in ipvs mode.

Note: unless stated otherwise, all operations in this section are performed on the qist node, which then distributes files and runs commands remotely.

15.1 Create the kube-proxy certificate

Create the certificate signing request:

cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/kube-proxy.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [""], 
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "system:node-proxier",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
  • CN: sets the certificate's User to system:kube-proxy;
  • The predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs;
  • The certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert \
        -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
        -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
        -config=/opt/k8s/cfssl/ca-config.json \
        -profile=kubernetes \
         /opt/k8s/cfssl/k8s/kube-proxy.json | \
         cfssljson -bare /opt/k8s/cfssl/pki/k8s/kube-proxy
ls /opt/k8s/cfssl/pki/k8s/kube-proxy*

15.2 Create and distribute the kubeconfig file

cd /opt/k8s/kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
  --client-certificate=/opt/k8s/cfssl/pki/k8s/kube-proxy.pem \
  --client-key=/opt/k8s/cfssl/pki/k8s/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig file:

cd /opt/k8s/kubeconfig
scp kube-proxy.kubeconfig root@192.168.2.175:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.2.176:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.2.177:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.2.185:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.2.187:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.3.62:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.3.70:/apps/k8s/conf

15.3 Create the kube-proxy configuration file

Since v1.10, some kube-proxy parameters can be set in a configuration file. You can generate that file with the --write-config-to option, or consult the comments in the source code.

Create the kube-proxy configuration:
Using k8s-master-1 as an example.

Change the following parameter to the corresponding node name, consistent with the kubelet setting:

--hostname-override=k8s-master-1

Run on all nodes:

cat > /apps/k8s/conf/kube-proxy <<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=2 \
--masquerade-all=true \
--proxy-mode=ipvs \
--profiling=true \
--ipvs-min-sync-period=5s \
--ipvs-sync-period=5s \
--ipvs-scheduler=rr \
--conntrack-max-per-core=0 \
--cluster-cidr=10.80.0.0/12 \
--log-dir=/apps/k8s/log \
--metrics-bind-address=0.0.0.0 \
--alsologtostderr=true \
--hostname-override=k8s-master-1 \
--kubeconfig=/apps/k8s/conf/kube-proxy.kubeconfig"
EOF
  • --metrics-bind-address: metrics listen address;
  • --kubeconfig: kubeconfig file used to connect to the apiserver;
  • --cluster-cidr: kube-proxy uses it to tell cluster-internal traffic from external traffic; only when both --cluster-cidr and --masquerade-all are set does kube-proxy SNAT requests to Service IPs;
  • --hostname-override: must match the kubelet value, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules;
  • --proxy-mode=ipvs: use ipvs mode;

kube-proxy parameter reference
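The split that --cluster-cidr controls can be illustrated with a small CIDR-membership check. This is a self-contained bash sketch (the `ip_to_int`/`in_cidr` helpers are illustrative, not kube-proxy code): an IP inside 10.80.0.0/12 is treated as cluster-internal, everything else as external.

```shell
#!/bin/bash
# Illustrative only: mimic the cluster-cidr membership test kube-proxy
# relies on when deciding whether to SNAT traffic to Service IPs.
ip_to_int() {
  # Convert dotted-quad IPv4 to a 32-bit integer
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
in_cidr() {  # usage: in_cidr <ip> <network> <prefix-len>
  local ip net mask
  ip=$(ip_to_int "$1"); net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  (( (ip & mask) == (net & mask) ))
}
in_cidr 10.80.2.15 10.80.0.0 12 && echo "10.80.2.15 is inside the cluster CIDR"
in_cidr 192.168.2.175 10.80.0.0 12 || echo "192.168.2.175 is outside the cluster CIDR"
```

Note that 10.80.0.0/12 spans 10.80.0.0 through 10.95.255.255, so pod IPs such as 10.80.2.x fall inside it while the node IPs do not.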

15.4 Create and distribute the kube-proxy systemd unit file

cd /opt/k8s/work
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-proxy
ExecStart=/apps/k8s/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Distribute the kube-proxy systemd unit file:

cd /opt/k8s/work
scp kube-proxy.service root@192.168.2.175:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.2.176:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.2.177:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.2.185:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.2.187:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.3.62:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.3.70:/usr/lib/systemd/system/

15.5 Start the kube-proxy service

Run on all nodes:

# Reload systemd units
systemctl daemon-reload
# Enable kube-proxy at boot
systemctl enable kube-proxy
# Restart kube-proxy
systemctl restart kube-proxy

15.6 Check the startup result

Run on all nodes:

systemctl status kube-proxy|grep Active

Make sure the status is active (running); otherwise check the logs to find the cause:

journalctl -u kube-proxy

15.7 Check the listening ports

[root@k8s-master-1 conf]# netstat -lnpt|grep kube-prox
tcp6       0      0 :::10249                :::*                    LISTEN      906/kube-proxy
tcp6       0      0 :::10256                :::*                    LISTEN      906/kube-proxy
  • 10249: http prometheus metrics port;
  • 10256: http healthz port;

15.8 Check the ipvs routing rules

Run on any node:

/usr/sbin/ipvsadm -ln

Expected output:

[root@k8s-master-1 conf]# /usr/sbin/ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.66.0.1:443 rr
  -> 192.168.2.175:5443           Masq    1      2          0
  -> 192.168.2.176:5443           Masq    1      4          0
  -> 192.168.2.177:5443           Masq    1      0          0

As shown, all https requests to the kubernetes Service are forwarded to port 5443 on the kube-apiserver nodes.

16 Deploy the flannel plugin

16.1 Generate the deployment yaml

Official source:

  • "Network": "10.80.0.0/12" — change this to the network segment set by the kube-controller-manager cluster-cidr parameter

    
# Generate the yaml
cd /opt/k8s/yaml
cat > kube-flannel.yml <<EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name":"cni0",
      "cniVersion":"0.3.1",
      "plugins":[
        {
          "type":"flannel",
          "delegate":{
            "forceAddress":false,
            "hairpinMode": true,
            "isDefaultGateway":true
          }
        },
        {
          "type":"portmap",
          "capabilities":{
            "portMappings":true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.80.0.0/12",
      "Backend": {
        "Type": "VXLAN"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      priorityClassName: system-node-critical
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni-plugin
          hostPath:
            path: /opt/cni/bin
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
EOF

16.2 Deploy kube-flannel

cd /opt/k8s/yaml
kubectl apply -f kube-flannel.yml

16.3 Check the pod deployment status

 root@Qist work# kubectl get pod  | grep kube-flannel
kube-flannel-ds-amd64-h8nxx            1/1     Running   0    1d
kube-flannel-ds-amd64-psnrb            1/1     Running   0    1d
kube-flannel-ds-amd64-rxnml            1/1     Running   0    1d
kube-flannel-ds-amd64-s7r4b            1/1     Running   0    1d
kube-flannel-ds-amd64-t5lss            1/1     Running   0    1d
kube-flannel-ds-amd64-v79t9            1/1     Running   0    1d
kube-flannel-ds-amd64-z7btq            1/1     Running   0    1d

16.4 Check the node network interfaces

 ip a
 [root@k8s-master-3 ~]# ip a
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether be:1f:8c:e8:b3:10 brd ff:ff:ff:ff:ff:ff
    inet 10.80.2.0/32 brd 10.80.2.0 scope global flannel.1
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 8e:e2:08:d5:95:10 brd ff:ff:ff:ff:ff:ff
    inet 10.80.2.1/24 brd 10.80.2.255 scope global cni0
       valid_lft forever preferred_lft forever

16.5 Check the cluster node status

 kubectl get node
 root@Qist work# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master-1   Ready    <none>      1d   v1.23.3
k8s-master-2   Ready    <none>      1d   v1.23.3
k8s-master-3   Ready    <none>      1d   v1.23.3
k8s-node-1     Ready    <none>      1d   v1.23.3
k8s-node-2     Ready    <none>      1d   v1.23.3
k8s-node-3     Ready    <none>      1d   v1.23.3
k8s-node-4     Ready    <none>      1d   v1.23.3

17 Deploy coredns

17.1 Generate the deployment yaml

  • clusterIP: must match the clusterDNS IP in the kubelet kubelet.yaml;
  • cluster.local: must match the clusterDomain in the kubelet kubelet.yaml;
cd /opt/k8s/yaml
cat > coredns.yaml <<EOF
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods verified
            endpoint_pod_names
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.9.0
        imagePullPolicy: Always
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.66.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

17.2 Deploy coredns

cd /opt/k8s/yaml
kubectl apply -f coredns.yaml

17.3 Check the pod deployment status

root@Qist work# kubectl -n kube-system get pod  | grep coredns
coredns-56b954df48-f97nw               1/1     Running   0                1d1h
coredns-56b954df48-mv6fq               1/1     Running   0                1d1h
coredns-56b954df48-z2ttl               1/1     Running   0                1d1h

17.4 Test DNS

# Test from any K8S cluster node
ssh 192.168.2.175
# Install dig
yum install bind-utils
[root@k8s-master-3 ~]# dig @10.66.0.2 www.qq.com
; <<>> DiG 9.11.26-RedHat-9.11.26-3.el8 <<>> @10.66.0.2 www.qq.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38815
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: f8ed2db7969a272a (echoed)
;; QUESTION SECTION:
;www.qq.com.                    IN      A
;; ANSWER SECTION:
www.qq.com.             30      IN      CNAME   ins-r23tsuuf.ias.tencent-cloud.net.
ins-r23tsuuf.ias.tencent-cloud.net. 30 IN A     121.14.77.201
ins-r23tsuuf.ias.tencent-cloud.net. 30 IN A     121.14.77.221
;; Query time: 61 msec
;; SERVER: 10.66.0.2#53(10.66.0.2)
;; WHEN: Mon Feb 14 20:24:57 CST 2022
;; MSG SIZE  rcvd: 209
Resolution works as expected.
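A DNS health check is easy to script by parsing the dig answer section. Below, the answer from the run above is pasted into a variable so the parsing can be tried offline; in a live check you would capture the real dig output (for example with `dig +short @10.66.0.2 www.qq.com`) instead.

```shell
# Offline sketch: pull resolved A records out of a dig answer section.
# The answer lines are copied verbatim from the run above.
dig_answer='www.qq.com.             30      IN      CNAME   ins-r23tsuuf.ias.tencent-cloud.net.
ins-r23tsuuf.ias.tencent-cloud.net. 30 IN A     121.14.77.201
ins-r23tsuuf.ias.tencent-cloud.net. 30 IN A     121.14.77.221'
# Column 4 is the record type; print column 5 (the address) for A records
echo "$dig_answer" | awk '$4 == "A" {print $5}'
```

An empty result from such a check would indicate that coredns failed to resolve the name.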

References

  1. Kernel parameter reference: https://docs.openshift.com/enterprise/3.2/admin_guide/overcommit.html
  2. CA certificate types
  3. On controller permissions and the use-service-account-credentials flag: https://github.com/kubernetes/kubernetes/issues/48208
  4. kubelet authentication and authorization: https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization