二进制方式部署kubernetes集群

1、部署k8s常见的几种方式

1.1 kubeadm

Kubeadm 是一个 k8s 部署工具,提供 kubeadm init 和 kubeadm join,用于快速部署 Kubernetes 集群。

Kubeadm 降低了部署门槛,但屏蔽了很多细节,遇到问题时很难排查。如果想更可控,推荐使用二进制包部署 Kubernetes 集群:虽然手动部署麻烦点,但期间可以学习到很多工作原理,也利于后期维护。
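
作为对比,kubeadm 方式的大致流程如下(仅为示意,非本文采用的部署方式,具体参数以官方文档为准):

# 在 master 节点初始化控制平面
kubeadm init --apiserver-advertise-address=192.168.54.101 --pod-network-cidr=10.244.0.0/16
# 在各 node 节点加入集群,token 与证书 hash 以 kubeadm init 的输出为准
kubeadm join 192.168.54.101:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>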

1.2 二进制

Kubernetes 系统由一组可执行程序组成,用户可以通过 GitHub 上的 Kubernetes 项目页下载编译好的二进制包,或者下载源代码并编译后进行安装。

从 github 下载发行版的二进制包,手动部署每个组件,组成 Kubernetes 集群。

1.3 kubespray

kubespray 是 Kubernetes incubator 中的项目,目标是提供 Production Ready 的 Kubernetes 部署方案,该项目的基础是通过 Ansible Playbook 来定义系统与 Kubernetes 集群部署的任务。

Kubernetes 需要容器运行时(通过 Container Runtime Interface,CRI 对接)的支持,目前官方支持的容器运行时包括:Docker、containerd、CRI-O 和 frakti,本文以 Docker 作为容器运行环境。

本文以二进制文件方式部署 Kubernetes 集群,并对每个组件的配置进行详细说明。

2、二进制部署环境准备

2.1 软硬件环境准备

软件环境:

软件          版本
操作系统      CentOS Linux release 7.9.2009 (Core)
容器引擎      Docker version 20.10.21, build baeda1f
Kubernetes    v1.20.15

服务器规划:

角色          IP               组件
k8s-master1   192.168.54.101   kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,etcd
k8s-master2   192.168.54.104   kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,etcd
k8s-node1     192.168.54.102   kubelet,kube-proxy,docker,etcd
k8s-node2     192.168.54.103   kubelet,kube-proxy,docker,etcd
虚拟IP        192.168.54.105   (用于多 master 高可用的 VIP)

搭建这套 k8s 高可用集群分两部分实施:先部署一套单 master 架构(3台),再扩容为多 master 架构(4台),顺便再熟悉下 master 扩容流程。

单 master服务器规划:

角色          IP               组件
k8s-master1   192.168.54.101   kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,etcd
k8s-node1     192.168.54.102   kubelet,kube-proxy,docker,etcd
k8s-node2     192.168.54.103   kubelet,kube-proxy,docker,etcd

2.2 操作系统初始化配置(所有节点)

# 关闭系统防火墙
# 临时关闭
systemctl stop firewalld
# 永久关闭
systemctl disable firewalld
# 关闭selinux
# 永久关闭
sed -i 's/enforcing/disabled/' /etc/selinux/config  
# 临时关闭
setenforce 0  
# 关闭swap
# 临时关闭
swapoff -a   
# 永久关闭
sed -ri 's/.*swap.*/#&/' /etc/fstab
# 根据规划设置主机名
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
# 添加hosts
cat >> /etc/hosts << EOF
192.168.54.101 k8s-master1
192.168.54.102 k8s-node1
192.168.54.103 k8s-node2
192.168.54.104 k8s-master2
EOF
# 将桥接的IPV4流量传递到iptables的链
cat > /etc/sysctl.d/k8s.conf << EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
EOF
# 生效
sysctl --system  
# 时间同步
# 使用阿里云时间服务器进行临时同步
yum install -y ntpdate
ntpdate ntp.aliyun.com

以上初始化配置比较基础,一般不会出现问题。

下面在master上安装etcd、docker、kube-apiserver、kube-controller-manager和kube-scheduler服务。

3、部署etcd集群

etcd 服务作为 Kubernetes 集群的主数据库,在安装 Kubernetes 各服务之前需要首先安装和启动。

3.1 etcd简介

Etcd 是一个分布式键值存储系统,Kubernetes 使用 Etcd 进行数据存储,所以先准备一个 Etcd 数据库。为解决 Etcd 单点故障,应采用集群方式部署,这里使用 3 台组建集群,可容忍 1 台机器故障;当然,你也可以使用 5 台组建集群,可容忍 2 台机器故障。

3.2 服务器规划

本文安装 Etcd 的服务规划:

节点名称    IP
etcd-1      192.168.54.101
etcd-2      192.168.54.102
etcd-3      192.168.54.103

说明:为了节省机器,这里与 k8s 节点复用,也可以部署在 k8s 机器之外,只要 apiserver 能连接到就行。

3.3 cfssl证书生成工具准备

cfssl 简介:cfssl 是一个开源的证书管理工具,使用 json 文件生成证书,相比 openssl 更方便使用。找任意一台服务器操作即可,这里用 k8s-master1 节点。

# k8s-master1节点执行
# 创建目录存放cfssl工具
mkdir /software-cfssl
# 下载相关工具
# 这些都是可执行文件
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -P /software-cfssl/
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -P /software-cfssl/
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -P /software-cfssl/
cd /software-cfssl/
chmod +x *
cp cfssl_linux-amd64 /usr/local/bin/cfssl
cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
cp cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
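
安装完成后可以简单验证一下,能正常输出版本号即表示工具可用:

# k8s-master1节点执行
cfssl version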

3.4 自签证书颁发机构(CA)

3.4.1 创建工作目录

# k8s-master1节点执行
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd/

3.4.2 生成自签CA配置

# k8s-master1节点执行
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "YuMingYu",
            "ST": "YuMingYu"
        }
    ]
}
EOF

3.4.3 生成自签CA证书

# k8s-master1节点执行
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

说明:当前目录下会生成 ca.pem 和 ca-key.pem 文件,同时会生成 ca.csr 文件。

查看证书:

# k8s-master1节点执行
[root@k8s-master1 etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

3.5 使用自签CA签发etcd https证书

3.5.1 创建证书申请文件

# k8s-master1节点执行
cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.54.101",
    "192.168.54.102",
    "192.168.54.103",
    "192.168.54.104"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "YuMingYu",
            "ST": "YuMingYu"
        }
    ]
}
EOF

说明:上述文件 hosts 字段中的 ip 为所有 etcd 节点的集群内部通信 ip,一个都不能少;为了方便后期扩容,可以多写几个预留的 ip。

3.5.2 生成证书

# k8s-master1节点执行
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

说明:当前目录下会生成 server.pem 和 server-key.pem。

查看证书:

# k8s-master1节点执行
[root@k8s-master1 etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
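
也可以用前面安装的 cfssl-certinfo 工具查看证书内容,例如确认证书的 hosts(SAN)中是否包含了全部 etcd 节点 IP:

# k8s-master1节点执行
cfssl-certinfo -cert server.pem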

3.6 下载etcd二进制文件

下载地址:

https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

下载后上传到服务器任意位置即可。

3.7 部署etcd集群

从 GitHub 官网( https://github.com/coreos/etcd/releases )下载 etcd 二进制文件,后面会将其中的 etcd 和 etcdctl 文件复制到 /opt/etcd/bin 目录。

以下操作在 k8s-master1 上面操作,为简化操作,待会将 k8s-master1 节点生成的所有文件拷贝到其他节点。

3.7.1 创建工作目录并解压二进制包

# k8s-master1节点执行
mkdir /opt/etcd/{bin,cfg,ssl} -p
# 将安装包放在~目录下
cd ~
tar -xf etcd-v3.4.9-linux-amd64.tar.gz
# etcd,etcdctl为可执行文件
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

3.8 创建etcd配置文件

# k8s-master1节点执行
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.54.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.54.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.54.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.54.101:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.54.101:2380,etcd-2=https://192.168.54.102:2380,etcd-3=https://192.168.54.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

配置说明:

  • ETCD_NAME:节点名称,集群中唯一

  • ETCD_DATA_DIR:数据目录

  • ETCD_LISTEN_PEER_URLS:集群通讯监听地址

  • ETCD_LISTEN_CLIENT_URLS:客户端访问监听地址

  • ETCD_INITIAL_ADVERTISE_PEER_URLS:集群通告地址

  • ETCD_ADVERTISE_CLIENT_URLS:客户端通告地址

  • ETCD_INITIAL_CLUSTER:集群节点地址

  • ETCD_INITIAL_CLUSTER_TOKEN:集群Token

  • ETCD_INITIAL_CLUSTER_STATE:加入集群的状态,new是新集群,existing表示加入已有集群

3.9 systemd管理etcd

# k8s-master1节点执行
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

可以在[Service]下添加一个工作目录(可选):

[Service]
WorkingDirectory=/var/lib/etcd/

其中,WorkingDirectory(/var/lib/etcd/) 表示etcd数据保存的目录,需要在启动etcd服务之前创建。
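
如果添加了该项,需要先在各节点创建对应目录,例如:

# 所有etcd节点执行
mkdir -p /var/lib/etcd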

本文的 etcd 配置文件为 /opt/etcd/cfg/etcd.conf,除上面列出的参数外通常不需要特别设置(详细的参数配置内容参见官方文档),etcd 使用 https://192.168.54.101:2379 地址供客户端连接。

3.10 将master1节点所有生成的文件拷贝到节点2和节点3

# k8s-master1节点执行
#!/bin/bash
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
for i in {2..3}
do
scp -r /opt/etcd/ root@192.168.54.10$i:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.54.10$i:/usr/lib/systemd/system/
done
# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── etcd.service
└── ......

# k8s-node1节点执行
[root@k8s-node1 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-node1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── etcd.service
└── ......

# k8s-node2节点执行
[root@k8s-node2 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-node2 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── etcd.service
└── ......

3.11 修改节点2和节点3中etcd.conf配置文件中的节点名称和当前服务器IP

# k8s-node1节点执行
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.54.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.54.102:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.54.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.54.102:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.54.101:2380,etcd-2=https://192.168.54.102:2380,etcd-3=https://192.168.54.103:2380"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# k8s-node2节点执行
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.54.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.54.103:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.54.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.54.103:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.54.101:2380,etcd-2=https://192.168.54.102:2380,etcd-3=https://192.168.54.103:2380"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
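
如果不想手动编辑,也可以在节点上用 sed 批量替换(仅为示意,以 k8s-node1 为例;k8s-node2 将 etcd-2、192.168.54.102 换成 etcd-3、192.168.54.103 即可):

# k8s-node1节点执行
# 只替换节点名称和本机 IP,ETCD_INITIAL_CLUSTER 一行保持不变
sed -i -e 's/^ETCD_NAME=.*/ETCD_NAME="etcd-2"/' \
       -e '/^ETCD_INITIAL_CLUSTER=/! s/192.168.54.101/192.168.54.102/g' /opt/etcd/cfg/etcd.conf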

3.12 启动etcd并设置开机自启

配置完成后,通过 systemctl start 命令启动 etcd 服务,同时使用 systemctl enable 命令将服务加入开机启动列表中。

说明:etcd 须多个节点同时启动,否则单独执行 systemctl start etcd 会一直阻塞在前台等待其他节点连接,建议通过批量管理工具或者脚本同时启动各节点的 etcd。
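
如果各节点已配置 ssh 免密登录,也可以在 k8s-master1 上用一个小脚本同时拉起三个节点的 etcd(仅为示意):

# k8s-master1节点执行
for ip in 192.168.54.101 192.168.54.102 192.168.54.103; do
  ssh root@$ip "systemctl daemon-reload && systemctl start etcd" &
done
wait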

# k8s-master1、k8s-node1和k8s-node2节点执行
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
systemctl status etcd

3.13 检查etcd集群状态

通过执行 etcdctl endpoint health,可以验证 etcd 集群是否正确启动:

# k8s-master1节点执行
[root@k8s-master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.54.103:2379 |   true | 19.533787ms |       |
| https://192.168.54.101:2379 |   true | 19.229071ms |       |
| https://192.168.54.102:2379 |   true | 23.769337ms |       |
+-----------------------------+--------+-------------+-------+
# k8s-node1节点执行
[root@k8s-node1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.54.102:2379 |   true | 23.682349ms |       |
| https://192.168.54.103:2379 |   true | 23.718213ms |       |
| https://192.168.54.101:2379 |   true | 25.853315ms |       |
+-----------------------------+--------+-------------+-------+
# k8s-node2节点执行
[root@k8s-node2 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.54.103:2379 |   true | 24.056756ms |       |
| https://192.168.54.102:2379 |   true | 24.108094ms |       |
| https://192.168.54.101:2379 |   true | 24.793733ms |       |
+-----------------------------+--------+-------------+-------+

如果输出为以上状态,证明 etcd 集群部署没有问题。
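
也可以进一步查看各节点状态,确认集群已选出 leader(IS LEADER 列,仅为示意):

# k8s-master1节点执行
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379" endpoint status --write-out=table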

3.14 etcd问题排查(日志)

less /var/log/messages
journalctl -u etcd

4、安装Docker(所有节点)

这里使用 Docker 作为容器引擎,也可以换成别的,例如 containerd。注意 k8s 从 1.20 版本开始弃用 dockershim(对 Docker 的内置支持),并在 1.24 版本中将其移除。

4.1 解压二进制包

cd ~
# 与前面环境说明保持一致,这里下载 20.10.21 版本的静态二进制包
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.21.tgz
tar -xf docker-20.10.21.tgz
mv docker/* /usr/bin/

4.2 配置镜像加速

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

4.3 docker.service配置

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd --selinux-enabled=false --insecure-registry=127.0.0.1
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
#TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

4.4 启动并设置开机启动

systemctl daemon-reload
systemctl start docker
systemctl enable docker
systemctl status docker

5、部署master节点

5.1 生成kube-apiserver证书

5.1.1 自签证书颁发机构(CA)

# k8s-master1节点执行
cd ~/TLS/k8s
# k8s-master1节点执行
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
# k8s-master1节点执行
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

生成证书:

# k8s-master1节点执行
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

目录下会生成 ca.pem 和 ca-key.pem,同时还有 ca.csr 文件。

# k8s-master1节点执行
[root@k8s-master1 k8s]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

5.1.2 使用自签CA签发kube-apiserver https证书

创建证书申请文件:

# k8s-master1节点执行
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.54.101",
      "192.168.54.102",
      "192.168.54.103",
      "192.168.54.104",
      "192.168.54.105",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

说明:上述文件中 hosts 字段中的 IP 为所有 Master/LB/VIP 的 IP,一个都不能少;为了方便后期扩容,可以多写几个预留的 IP。

生成证书:

# k8s-master1节点执行
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

说明:当前目录下会生成server.pem 和 server-key.pem 文件,还有server.csr。

# k8s-master1节点执行
[root@k8s-master1 k8s]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem

5.2 下载

下载地址:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md

下载:

https://storage.googleapis.com/kubernetes-release/release/v1.20.15/kubernetes-server-linux-amd64.tar.gz

5.3 解压二进制包

上传刚才下载的 k8s 软件包到服务器上。

将 kube-apiserver、kube-controller-manager 和 kube-scheduler 文件复制到 /opt/kubernetes/bin 目录。

# k8s-master1节点执行
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/kubernetes/bin
/opt/kubernetes/bin
├── kube-apiserver
├── kube-controller-manager
└── kube-scheduler

[root@k8s-master1 ~]# tree /usr/bin/
/usr/bin/
├── ......
├── kubectl
└── ......

5.4 部署kube-apiserver

5.4.1 创建配置文件

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379 \\
--bind-address=192.168.54.101 \\
--secure-port=6443 \\
--advertise-address=192.168.54.101 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

配置文件 /opt/kubernetes/cfg/kube-apiserver.conf 的内容包括了 kube-apiserver 的全部启动参数,主要的配置参数在变量 KUBE_APISERVER_OPTS 中指定。

说明:

  • 上面每行末尾的两个\\:第一个是转义符,第二个是换行符,使用转义符是为了在 EOF 生成的文件中保留换行符\。

  • --logtostderr:启用日志,设置为false表示将日志写入文件,不写入stderr。

  • --v:日志等级。

  • --log-dir:日志目录。

  • --etcd-servers:etcd集群地址,指定etcd服务的URL。

  • --bind-address:监听地址,API Server绑定主机的安全IP地址,设置0.0.0.0表示绑定所有IP地址。

  • --secure-port:https安全端口,API Server绑定主机的安全端口号,默认为6443。

  • --advertise-address:集群通告地址。

  • --allow-privileged:允许容器以特权模式运行。

  • --service-cluster-ip-range:Service虚拟IP地址段,Kubernetes集群中Service的虚拟IP地址范围,以CIDR格式表示,例如10.0.0.0/24,该IP范围不能与物理机的IP地址重合。

  • --enable-admission-plugins:准入控制模块,Kubernetes集群的准入控制设置,各控制模块以插件的形式依次生效。

  • --authorization-mode:认证授权,启用RBAC授权和节点自管理。

  • --enable-bootstrap-token-auth:启用TLS bootstrap机制。

  • --token-auth-file:bootstrap token文件。

  • --service-node-port-range:Service NodePort类型默认分配端口范围,即Kubernetes集群中Service可使用的物理机端口号范围,默认值为30000~32767。

  • --kubelet-client-xxx:apiserver访问kubelet的客户端证书。

  • --tls-xxx-file:apiserver https证书。

  • --service-account-issuer、--service-account-signing-key-file:1.20版本必须添加的参数。

  • --etcd-xxxfile:连接etcd集群的证书。

  • --audit-log-xxx:审计日志。

  • 启用聚合层网关的相关配置:--requestheader-client-ca-file、--proxy-client-cert-file、--proxy-client-key-file、--requestheader-allowed-names、--requestheader-extra-headers-prefix、--requestheader-group-headers、--requestheader-username-headers、--enable-aggregator-routing。

  • --storage-backend:指定etcd的版本,从Kubernetes 1.6开始,默认为etcd 3。注意,在Kubernetes 1.6之前的版本中没有这个参数,kube-apiserver默认使用etcd 2,对于正在运行的1.5或旧版本的Kubernetes集群,etcd提供了数据升级方案,详见etcd文档: https://coreos.com/etcd/docs/latest/upgrades/upgrade_3_0.html

5.4.2 拷贝刚才生成的证书

把刚才生成的证书拷贝到配置文件中的路径:

# k8s-master1节点执行
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

5.4.3 启用TLS bootstrapping机制

TLS Bootstraping:Master 的 apiserver 启用 TLS 认证后,Node 节点的 kubelet 和 kube-proxy 要与 kube-apiserver 进行通信,必须使用 CA 签发的有效证书才可以。当 Node 节点很多时,这种客户端证书颁发需要大量工作,同样也会增加集群扩展复杂度。为了简化流程,Kubernetes 引入了 TLS bootstraping 机制来自动颁发客户端证书:kubelet 会以一个低权限用户自动向 apiserver 申请证书,kubelet 的证书由 apiserver 动态签署。所以强烈建议在 Node 上使用这种方式,目前主要用于 kubelet,kube-proxy 还是由我们统一颁发一个证书。

创建上述配置文件中 token 文件:

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/token.csv << EOF
4136692876ad4b01bb9dd0988480ebba,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

格式:token,用户名,UID,用户组

token也可自行生成替换:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
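
下面是一个用新 token 重写 token.csv 的小示例(仅为示意;如果替换了 token,后面 6.2.3 中 bootstrap.kubeconfig 使用的 TOKEN 也要保持一致):

# k8s-master1节点执行
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF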

5.4.4 systemd管理apiserver

# k8s-master1节点执行
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── etcd.service
└── ......

5.4.5 启动并设置开机启动

# k8s-master1节点执行
systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver
systemctl status kube-apiserver
# k8s-master1节点执行
[root@k8s-master1 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 13:49:42 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 44755 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─44755 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --...

12月 05 13:49:42 k8s-master1 systemd[1]: Started Kubernetes API Server.
12月 05 13:49:42 k8s-master1 kube-apiserver[44755]: E1205 13:49:42.475307   44755 instance.go:392] Could not... api
12月 05 13:49:45 k8s-master1 kube-apiserver[44755]: E1205 13:49:45.062415   44755 controller.go:152] Unable ...Msg:
Hint: Some lines were ellipsized, use -l to show in full.
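
也可以简单确认 6443 安全端口已经在监听(仅为示意):

# k8s-master1节点执行
ss -lntp | grep 6443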

5.5 部署kube-controller-manager

5.5.1 创建配置文件

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF

配置文件 /opt/kubernetes/cfg/kube-controller-manager.conf 的内容包含了 kube-controller-manager 的全部启动参数,主要的配置参数在变量 KUBE_CONTROLLER_MANAGER_OPTS 中指定,对启动参数说明如下:

  • --v:日志级别。

  • --log-dir:日志目录。

  • --logtostderr:设置为false表示将日志写入文件,不写入stderr。

  • --kubeconfig:连接apiserver配置文件,设置与API Server连接的相关配置。

  • --leader-elect:当该组件启动多个时,自动选举(HA)

  • --cluster-signing-cert-file:自动为kubelet颁发证书的CA证书,与apiserver使用的CA保持一致

  • --cluster-signing-key-file:自动为kubelet颁发证书的CA私钥,与apiserver使用的CA保持一致

下面是 5.5.2 中生成的 kube-controller-manager.kubeconfig 文件内容示例:

[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t
    server: https://192.168.54.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-controller-manager
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-controller-manager
  user:
    client-certificate-data: LS0t
    client-key-data: LS0t

5.5.2 生成kubeconfig文件

生成 kube-controller-manager 证书:

# k8s-master1节点执行
# 切换工作目录
cd ~/TLS/k8s

# 创建证书请求文件
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# 生成证书
# k8s-master1节点执行
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

会生成 kube-controller-manager.csr、kube-controller-manager-key.pem 和 kube-controller-manager.pem 文件。

# k8s-master1节点执行
[root@k8s-master1 k8s]# ls kube-controller-manager*
kube-controller-manager.csr       kube-controller-manager-key.pem
kube-controller-manager-csr.json  kube-controller-manager.pem

生成 kubeconfig 文件(以下是 shell 命令,直接在 shell 终端执行):

# k8s-master1节点执行
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# 会生成kube-controller-manager.kubeconfig文件

5.5.3 systemd管理controller-manager

# k8s-master1节点执行
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

kube-controller-manager 服务依赖于 kube-apiserver 服务,systemd 服务配置文件 /usr/lib/systemd/system/kube-controller-manager.service 的 [Unit] 部分可以添加如下内容:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── etcd.service
└── ......

5.5.4 启动并设置开机自启

# k8s-master1节点执行
systemctl daemon-reload
systemctl start kube-controller-manager 
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
# k8s-master1节点执行
[root@k8s-master1 k8s]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 13:55:33 CST; 11s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 46929 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─46929 /opt/kubernetes/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernete...

12月 05 13:55:33 k8s-master1 systemd[1]: Started Kubernetes Controller Manager.
12月 05 13:55:34 k8s-master1 kube-controller-manager[46929]: E1205 13:55:34.773588   46929 core.go:232] faile...ded
Hint: Some lines were ellipsized, use -l to show in full.

5.6 部署 kube-scheduler

5.6.1 创建配置文件

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF

配置文件 /opt/kubernetes/cfg/kube-scheduler.conf 的内容包括了 kube-scheduler 的全部启动参数,主要的配置参数在变量 KUBE_SCHEDULER_OPTS 中指定,对启动参数说明如下:

  • --logtostderr:设置为false表示将日志写入文件,不写入stderr。

  • --log-dir:日志目录。

  • --v:日志级别。

  • --kubeconfig:连接apiserver配置文件,设置与API Server连接的相关配置,可以与kube-controller-manager使用的kubeconfig文件相同。

  • --leader-elect:当该组件启动多个时,自动选举(HA)。

下面是 5.6.2 中生成的 kube-scheduler.kubeconfig 文件内容示例:

[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kube-scheduler.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t
    server: https://192.168.54.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-scheduler
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-scheduler
  user:
    client-certificate-data: LS0t
    client-key-data: LS0t

5.6.2 生成kubeconfig文件

生成 kube-scheduler 证书:

# k8s-master1节点执行
# 切换工作目录
cd ~/TLS/k8s

# 创建证书请求文件
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# k8s-master1节点执行
# 生成证书
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

会生成 kube-scheduler.csr、kube-scheduler-key.pem 和 kube-scheduler.pem 文件。

# k8s-master1节点执行
[root@k8s-master1 k8s]# ls kube-scheduler*
kube-scheduler.csr  kube-scheduler-csr.json  kube-scheduler-key.pem  kube-scheduler.pem

生成 kubeconfig 文件:

# k8s-master1节点执行
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# 会生成 kube-scheduler.kubeconfig文件

5.6.3 systemd管理scheduler

# k8s-master1节点执行
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

kube-scheduler 服务也依赖于 kube-apiserver 服务,systemd 服务配置文件 /usr/lib/systemd/system/kube-scheduler.service 的 [Unit] 部分可以添加如下内容:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
# k8s-master1节点执行
[root@k8s-master1 k8s]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── etcd.service
└── ......

5.6.4 启动并设置开机启动

# k8s-master1节点执行
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler
# k8s-master1节点执行
[root@k8s-master1 k8s]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 14:03:18 CST; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 49798 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─49798 /opt/kubernetes/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --...

12月 05 14:03:18 k8s-master1 systemd[1]: Started Kubernetes Scheduler.

kube-apiserver、kube-controller-manager 和 kube-scheduler 服务配置完成后,执行 systemctl start 命令按顺序启动这 3 个服务;同时,使用 systemctl enable 命令将服务加入开机启动列表中(如果前面已经执行过就无须再执行):

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

通过 systemctl status <service_name> 验证服务的启动状态,running表示启动成功。

至此,Master上所需的服务就全部启动完成了。

5.6.5 查看集群状态

生成 kubectl 连接集群的证书 :

# k8s-master1节点执行
# 切换工作目录
cd ~/TLS/k8s

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# k8s-master1节点执行
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

会生成 admin.csr、admin-key.pem 和 admin.pem。

# k8s-master1节点执行
[root@k8s-master1 k8s]# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

生成 kubeconfig 文件 :

# k8s-master1节点执行
mkdir /root/.kube

KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# 会生成/root/.kube/config文件

通过 kubectl 工具查看当前集群组件状态 :

# k8s-master1节点执行
[root@k8s-master1 k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

如上说明 master 节点组件运行正常。

5.6.6 授权kubelet-bootstrap用户允许请求证书

# k8s-master1节点执行
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
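
可以确认该绑定已创建成功(仅为示意):

# k8s-master1节点执行
kubectl get clusterrolebinding kubelet-bootstrap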

接下来在 Node 上安装 kubelet 和 kube-proxy 服务。

6、部署Work Node

在 Work Node 上需要预先安装好 Docker Daemon 并且正常启动,Docker 的安装和启动详见 Docker 官网( http://www.docker.com )的说明文档。

下面还是在master node上面操作,既当master节点,也当Work Node节点。

Work Node 上主要部署 kubelet 和 kube-proxy 两个组件。

6.1 创建工作目录并拷贝二进制文件

注:在所有 work node 创建工作目录。

# k8s-master1、k8s-node1和k8s-node2节点执行
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

从 master 节点的 kubernetes-server 软件包中,将 kubelet 和 kube-proxy 拷贝到所有 work 节点:

# k8s-master1节点执行
#进入到k8s-server软件包目录
#!/bin/bash 
cd ~/kubernetes/server/bin
for i in {1..3}
do
scp kubelet kube-proxy root@192.168.54.10$i:/opt/kubernetes/bin/
done

6.2 部署kubelet

6.2.1 创建配置文件

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
# kubelet.kubeconfig 文件无需手动创建,kubelet 首次启动并通过 bootstrap 申请到证书后会自动生成

配置文件 /opt/kubernetes/cfg/kubelet.conf 的内容包括了 kubelet 的全部启动参数,主要的配置参数在变量 KUBELET_OPTS 中指定,对启动参数说明如下:

  • --logtostderr:设置为false表示将日志写入文件,不写入stderr。

  • --log-dir:日志目录。

  • --v:日志级别。

  • --hostname-override:显示名称,集群唯一(不可重复),设置本Node的名称。

  • --network-plugin:启用CNI。

  • --kubeconfig:指定一个当前不存在的路径,证书申请通过后会自动生成该文件,后面用于连接 apiserver。

下面是 kubelet 通过 bootstrap 申请证书成功后自动生成的 kubelet.kubeconfig 文件内容示例:

# k8s-master1节点执行
[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t
    server: https://192.168.54.101:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /opt/kubernetes/ssl/kubelet-client-current.pem
    client-key: /opt/kubernetes/ssl/kubelet-client-current.pem

  • --bootstrap-kubeconfig:首次启动时向apiserver申请证书所用的引导配置文件。

  • --config:kubelet配置文件(yaml格式)。

  • --cert-dir:kubelet证书存放目录。

  • --pod-infra-container-image:管理Pod网络命名空间的基础(pause)容器镜像。

6.2.2 配置文件

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

6.2.3 生成kubelet初次加入集群引导kubeconfig文件

# k8s-master1节点执行
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443" # apiserver IP:PORT
TOKEN="4136692876ad4b01bb9dd0988480ebba" # 与token.csv里保持一致  /opt/kubernetes/cfg/token.csv 

# 生成 kubelet bootstrap kubeconfig 配置文件
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# 会生成bootstrap.kubeconfig文件

6.2.4 systemd管理kubelet

# k8s-master1节点执行
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

kubelet 服务依赖于 Docker 服务,systemd 服务配置文件 /usr/lib/systemd/system/kubelet.service 可以添加如下内容:

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet

其中,WorkingDirectory 表示 kubelet 保存数据的目录,需要在启动 kubelet 服务之前创建。
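
如果添加了该项,同样需要先创建目录,例如:

# 所有需要运行kubelet的节点执行
mkdir -p /var/lib/kubelet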

# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kubelet.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet.crt
    ├── kubelet.key
    ├── server-key.pem
    └── server.pem
  
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── etcd.service
├── kubelet.service
└── ......

6.2.5 启动并设置开机启动

# k8s-master1节点执行
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
# k8s-master1节点执行
[root@k8s-master1 bin]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 14:18:27 CST; 4s ago
 Main PID: 55291 (kubelet)
    Tasks: 9
   Memory: 25.8M
   CGroup: /system.slice/kubelet.service
           └─55291 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostnam...

12月 05 14:18:27 k8s-master1 systemd[1]: Started Kubernetes Kubelet.

6.2.6 允许kubelet证书申请并加入集群

# k8s-master1节点执行
# 查看kubelet证书请求
[root@k8s-master1 k8s]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8   28s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# k8s-master1节点执行
# 允许kubelet节点申请
# node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8是上面生成的
[root@k8s-master1 k8s]# kubectl certificate approve node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8
certificatesigningrequest.certificates.k8s.io/node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8 approved
# k8s-master1节点执行
# 查看申请
[root@k8s-master1 k8s]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8   2m31s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
# k8s-master1节点执行
# 查看节点
[root@k8s-master1 k8s]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   62s   v1.20.15

说明:由于网络插件还没有部署,节点会处于未就绪状态(NotReady)。

6.3 部署kube-proxy

6.3.1 创建配置文件

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

配置文件 /opt/kubernetes/cfg/kube-proxy.conf 的内容包括了 kube-proxy 的全部启动参数,主要的配置参数在变量 KUBE_PROXY_OPTS 中指定,对启动参数说明如下:

  • --logtostderr:设置为false表示将日志写入文件,不写入stderr。
  • --log-dir:日志目录。
  • --v:日志级别。

6.3.2 配置参数文件

# k8s-master1节点执行
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.244.0.0/16
EOF

6.3.3 生成kube-proxy证书文件

# k8s-master1节点执行
# 切换工作目录
cd ~/TLS/k8s

# 创建证书请求文件
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# 生成证书
# k8s-master1节点执行
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

会生成 kube-proxy.csr、kube-proxy-key.pem 和 kube-proxy.pem 文件。

# k8s-master1节点执行
[root@k8s-master1 k8s]# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

6.3.4 生成kube-proxy.kubeconfig文件

# k8s-master1节点执行
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# 会生成kube-proxy.kubeconfig文件

6.3.5 systemd管理kube-proxy

# k8s-master1节点执行
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

kube-proxy 服务依赖于 network 服务,systemd 服务配置文件 /usr/lib/systemd/system/kube-proxy.service 的 [Unit] 部分可以添加如下内容:

[Unit]
Description=Kubernetes Proxy
After=network.target
Requires=network.service
# k8s-master1节点执行
[root@k8s-master1 k8s]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kubelet.kubeconfig
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet.crt
    ├── kubelet.key
    ├── server-key.pem
    └── server.pem
    
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── etcd.service
├── kubelet.service
├── kube-proxy.service
└── ......    

6.3.6 启动并设置开机自启

# k8s-master1节点执行
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
[root@k8s-master1 k8s]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 14:36:12 CST; 8s ago
 Main PID: 65578 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           └─65578 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --conf...

12月 05 14:36:12 k8s-master1 systemd[1]: Started Kubernetes Proxy.

6.4 部署网络组件(Calico)

Calico 是一个纯三层的数据中心网络方案,是目前 Kubernetes 主流的网络方案。
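
calico.yaml 可以从 Calico 官方 manifest 获取(仅为示意,具体版本和 URL 以 Calico 官方文档为准):

# k8s-master1节点执行
wget https://docs.projectcalico.org/manifests/calico.yaml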

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl apply -f calico.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system

更改 calico 网段(在执行 kubectl apply 之前编辑 calico.yaml,使 CALICO_IPV4POOL_CIDR 与 kube-controller-manager 的 --cluster-cidr 保持一致):

"ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },
    - name: IP
      value: "autodetect"

    - name: IP6
      value: "autodetect"

    # 此处要进行修改,保证和前面 --cluster-cidr 配置的网段一样
    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"

    - name: CALICO_IPV6POOL_CIDR
      value: "fc00::/48"

    - name: FELIX_IPV6SUPPORT
      value: "true"

等 Calico Pod 都 Running,节点也会准备就绪。

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d55ffb795-ngzgz   1/1     Running   0          34m
calico-node-v9wtk                         1/1     Running   0          34m
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   62m   v1.20.15

6.5 授权apiserver访问kubelet

# k8s-master1节点执行
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml

7、新增加Work Node

7.1 拷贝已部署好的相关文件到新节点

在 k8s-master1 节点将 Work Node 涉及的文件拷贝到新节点 192.168.54.102 和 192.168.54.103:

# k8s-master1节点执行
#!/bin/bash 

for i in {2..3}; do scp -r /opt/kubernetes root@192.168.54.10$i:/opt/; done

for i in {2..3}; do scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.54.10$i:/usr/lib/systemd/system; done

for i in {2..3}; do scp -r /opt/kubernetes/ssl/ca.pem root@192.168.54.10$i:/opt/kubernetes/ssl/; done

7.2 删除kubelet证书和kubeconfig文件

# k8s-node1和k8s-node2节点执行
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

说明:这几个文件是证书申请审批后自动生成的,每个 Node 不同,必须删除。

# k8s-node1节点执行
[root@k8s-node1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-node1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── kubelet.service
├── kube-proxy.service
└── ......      
# k8s-node2节点执行
[root@k8s-node2 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-node2 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── kubelet.service
├── kube-proxy.service
└── ......      

7.3 修改主机名

# k8s-node1和k8s-node2节点执行
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
# 和
--hostname-override=k8s-node2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
# 和
hostnameOverride: k8s-node2
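
也可以直接用 sed 替换(仅为示意,以 k8s-node1 为例;k8s-node2 把 k8s-node1 换成 k8s-node2 即可):

# k8s-node1节点执行
sed -i 's/--hostname-override=k8s-master1/--hostname-override=k8s-node1/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: k8s-master1/hostnameOverride: k8s-node1/' /opt/kubernetes/cfg/kube-proxy-config.yml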

7.4 启动并设置开机自启

# k8s-node1和k8s-node2节点执行
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
systemctl status kubelet kube-proxy
# k8s-node1节点执行
[root@k8s-node1 ~]# systemctl status kubelet kube-proxy
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:13 CST; 7min ago
 Main PID: 18510 (kubelet)
    Tasks: 14
   Memory: 49.8M
   CGroup: /system.slice/kubelet.service
           └─18510 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-node1 --network-plugin=cni --ku...

12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443830   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443843   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443951   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443961   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444157   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444181   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444411   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444427   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444723   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444737   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input

● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:13 CST; 7min ago
 Main PID: 18516 (kube-proxy)
    Tasks: 7
   Memory: 15.5M
   CGroup: /system.slice/kube-proxy.service
           └─18516 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-proxy-config.yml

12月 05 15:36:13 k8s-node1 systemd[1]: Started Kubernetes Proxy.
12月 05 15:36:13 k8s-node1 kube-proxy[18516]: E1205 15:36:13.717137   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:14 k8s-node1 kube-proxy[18516]: E1205 15:36:14.769341   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:16 k8s-node1 kube-proxy[18516]: E1205 15:36:16.907789   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:21 k8s-node1 kube-proxy[18516]: E1205 15:36:21.350694   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:30 k8s-node1 kube-proxy[18516]: E1205 15:36:30.701762   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:47 k8s-node1 kube-proxy[18516]: E1205 15:36:47.323079   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
Hint: Some lines were ellipsized, use -l to show in full.
# k8s-node2节点执行
[root@k8s-node2 ~]# systemctl status kubelet kube-proxy
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:16 CST; 7min ago
 Main PID: 39153 (kubelet)
    Tasks: 14
   Memory: 50.8M
   CGroup: /system.slice/kubelet.service
           └─39153 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-node2 --network-...

12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898329   39153 remote_image.go:113] PullImage "calico/node:v3.13.1" from image servic...
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898365   39153 kuberuntime_image.go:51] Pull image "calico/node:v3.13.1" failed: rpc ...
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898507   39153 kuberuntime_manager.go:829] container &Container{Name:calico-n...e:true,V
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898540   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.978288   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378277   39153 remote_image.go:113] PullImage "calico/node:v3.13.1" from image servic...
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378299   39153 kuberuntime_image.go:51] Pull image "calico/node:v3.13.1" failed: rpc ...
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378414   39153 kuberuntime_manager.go:829] container &Container{Name:calico-n...e:true,V
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378437   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...
12月 05 15:43:18 k8s-node2 kubelet[39153]: E1205 15:43:18.399947   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...

● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:16 CST; 7min ago
 Main PID: 39163 (kube-proxy)
    Tasks: 7
   Memory: 18.2M
   CGroup: /system.slice/kube-proxy.service
           └─39163 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-pro...

12月 05 15:36:16 k8s-node2 systemd[1]: Started Kubernetes Proxy.
12月 05 15:36:17 k8s-node2 kube-proxy[39163]: E1205 15:36:17.031550   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:18 k8s-node2 kube-proxy[39163]: E1205 15:36:18.149111   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:20 k8s-node2 kube-proxy[39163]: E1205 15:36:20.398528   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:25 k8s-node2 kube-proxy[39163]: E1205 15:36:25.009895   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:33 k8s-node2 kube-proxy[39163]: E1205 15:36:33.635518   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:50 k8s-node2 kube-proxy[39163]: E1205 15:36:50.280862   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
Hint: Some lines were ellipsized, use -l to show in full.

7.5 Approve the new nodes' kubelet certificate requests on the master

# k8s-master1节点执行
# 查看证书请求
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-VRB35635LOQVmbSE1f-dH4ZTQ1DBrFjhv5lU_PjSkgU   98s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8   79m    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-dsFFKL7woGXhMoA6_VNw4BbE2R0XQqr3d4wXXrI_7jI   102s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# k8s-master1节点执行
# 同意node加入
[root@k8s-master1 ~]# kubectl certificate approve node-csr-VRB35635LOQVmbSE1f-dH4ZTQ1DBrFjhv5lU_PjSkgU
certificatesigningrequest.certificates.k8s.io/node-csr-VRB35635LOQVmbSE1f-dH4ZTQ1DBrFjhv5lU_PjSkgU approved

[root@k8s-master1 ~]# kubectl certificate approve node-csr-dsFFKL7woGXhMoA6_VNw4BbE2R0XQqr3d4wXXrI_7jI
certificatesigningrequest.certificates.k8s.io/node-csr-dsFFKL7woGXhMoA6_VNw4BbE2R0XQqr3d4wXXrI_7jI approved
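
The "Failed to retrieve node info" messages in the kube-proxy logs above are expected at this point: they stop once the node's CSR is approved and the kubelet registers the node. When several nodes join at the same time it can be convenient to approve every pending request in one pass; a minimal sketch, assuming standard kubectl, awk and xargs:

# Approve all kubelet CSRs that are still Pending (sketch)
kubectl get csr | awk '/Pending/{print $1}' | xargs -r kubectl certificate approve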

7.6 Check the Node status (it takes a little while for the nodes to become Ready because some initialization images are pulled first)

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   81m    v1.20.15
k8s-node1     Ready    <none>   98s    v1.20.15
k8s-node2     Ready    <none>   118s   v1.20.15
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get pods  --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-d55ffb795-ngzgz   1/1     Running   0          62m
kube-system   calico-node-fr5fq                         1/1     Running   0          9m46s
kube-system   calico-node-v9wtk                         1/1     Running   0          62m
kube-system   calico-node-zp6cz                         1/1     Running   0          10m

Note: the remaining nodes follow the same steps. At this point the 3-node cluster is up and running.
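
The ROLES column shows <none> because nodes deployed from binaries carry no role labels by default. If you want kubectl get nodes to display roles, you can add the labels yourself; a minimal sketch (the labels only affect the display and are not required by this deployment):

# Optional: add role labels so that `kubectl get nodes` shows ROLES (sketch)
kubectl label node k8s-master1 node-role.kubernetes.io/master=
kubectl label node k8s-node1 node-role.kubernetes.io/worker=
kubectl label node k8s-node2 node-role.kubernetes.io/worker=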

8、Deploy Dashboard and CoreDNS

8.1 Deploy Dashboard (k8s-master1)

# k8s-master1节点执行
# 部署
[root@k8s-master1 ~]# kubectl apply -f kubernetes-dashboard.yaml
# k8s-master1节点执行
# 查看部署情况
[root@k8s-master1 ~]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-fbkww   1/1     Running   0          7m53s
pod/kubernetes-dashboard-74d688b6bc-tzbjb        1/1     Running   0          7m53s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.75    <none>        8000/TCP        7m53s
service/kubernetes-dashboard        NodePort    10.0.0.3     <none>        443:31856/TCP   7m53s

Create a service account and bind it to the default cluster-admin administrator cluster role.

# k8s-master1节点执行
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master1 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-trzcf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: f615afd7-310d-45b9-aadf-24c44591e613

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjloWmk2alpOY0JPMkFHOFEwcGVEdGdxQjJzMnYtbXU1Xy14ckJfd0FTbEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdHJ6Y2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjYxNWFmZDctMzEwZC00NWI5LWFhZGYtMjRjNDQ1OTFlNjEzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.KO1Lw8rtZxDqgA2NWcUU8yaCjtisuJ-xTGayyAHDM7CWy9rq3GSmRW397ExFTsazu572HvoDSUHcvCUCQFXBMuUUa0qxVqWzuAktUsVleIPl3ch32B9oudCDYcxAlZhc7C_qDa69Id9wEkicQTGPowWnTL0SJGhSvwt1Q_po5EjyUNTrXzAW96yPF6UQ0bb4379m1hKp8FIE05c9kPju9VipkWXmJxYfn9kzXfRpLnVO9Ly-QNuCt-umJGTs2aRfwy_h7bVwBtODlbZTxQrtDc21efXmVXEeXAB4yCgmAbWCXbPDNOpUpwSsVAVyl44JOD4Vnk8DqWt0Ltxa-9evIA

Access URL: https://NodeIP:31856 (31856 is the NodePort shown in the Service output above).

Log in to the Dashboard with the token printed above (if the browser blocks the self-signed HTTPS certificate, Firefox can be used to proceed).
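
The NodePort is assigned when the Service is created, so it differs from deployment to deployment. To look up the port and the login token non-interactively, a minimal sketch; the field paths assume the Dashboard manifest applied above and the dashboard-admin service account created above:

# Print the NodePort of the kubernetes-dashboard Service (sketch)
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
# Print only the dashboard-admin login token (sketch)
kubectl -n kube-system get secret \
  "$(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')" \
  -o jsonpath='{.data.token}' | base64 -d && echo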

(Two Dashboard screenshots from the original article are omitted here; the external image links are broken.)

8.2 Deploy CoreDNS (k8s-master1)

CoreDNS provides name resolution for Services inside the cluster.
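
The clusterIP that coredns.yaml assigns to the kube-dns Service must match the clusterDNS address the kubelets were configured with (10.0.0.2 in this deployment), otherwise Pods are handed a DNS server that does not exist. A quick check before applying the manifest, assuming the field names used by the standard coredns.yaml and the kubelet-config.yml from the earlier steps:

# The DNS Service clusterIP should equal the kubelets' clusterDNS address (sketch)
grep -n "clusterIP" coredns.yaml
grep -n -A1 "clusterDNS" /opt/kubernetes/cfg/kubelet-config.yml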

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl apply -f coredns.yaml 

[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d55ffb795-ngzgz   1/1     Running   1          105m
calico-node-fr5fq                         1/1     Running   1          53m
calico-node-v9wtk                         1/1     Running   1          105m
calico-node-zp6cz                         1/1     Running   1          53m
coredns-6cc56c94bd-jjjq6                  1/1     Running   0          33s

Test that name resolution works (a couple of extra checks are sketched after the transcript):

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # ns
nsenter   nslookup
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # exit
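
Beyond the default kubernetes Service, it is worth confirming that names in other namespaces and external names resolve as well; a minimal sketch using one-shot Pods (external resolution assumes CoreDNS forwards to the node's upstream resolver, which the standard coredns.yaml does):

# Resolve a Service in another namespace (sketch)
kubectl run dns-check --rm -it --image=busybox:1.28.4 --restart=Never -- nslookup kube-dns.kube-system
# Resolve an external name through the upstream forwarder (sketch)
kubectl run dns-check --rm -it --image=busybox:1.28.4 --restart=Never -- nslookup www.baidu.com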

At this point the single-master Kubernetes cluster is complete.

9、Add a master node (k8s-master2) for a high-availability architecture

Note: As a container cluster system, Kubernetes already gives applications a degree of self-healing: health checks plus restart policies recover failed Pods, the scheduler spreads Pods across nodes while keeping the desired replica count, and Pods are re-created on other nodes when a node fails. For the cluster itself, however, high availability has to cover two more layers: the etcd database and the Kubernetes master components. Etcd is already highly available because we built it as a 3-node cluster; this section explains and implements high availability for the master nodes.

The master acts as the control center of the cluster and keeps it healthy by continuously communicating with the kubelet and kube-proxy on every worker node. If the master fails, no cluster management is possible through kubectl or the API. The master runs three services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler are already highly available through their built-in leader election, so master high availability mainly concerns kube-apiserver. Since kube-apiserver serves an HTTP API, making it highly available is much like any web service: put a load balancer in front of it, and it can also be scaled out horizontally.

Note: We now add one new server, k8s-master2, with IP 192.168.54.104. k8s-master2 is set up exactly like the existing k8s-master1, so we only need to copy all the Kubernetes files from k8s-master1, change the server IP and hostname, and start the services.
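
As an illustration of the load-balancer idea (the virtual IP 192.168.54.105 from the server plan is set up in a later step), a minimal sketch of a layer-4 proxy in front of the two apiservers might look like this; it assumes an Nginx build with the stream module and is not part of the commands on this page:

# Sketch only: TCP load balancing across the two kube-apiservers (assumed Nginx stream module)
stream {
    upstream kube-apiserver {
        server 192.168.54.101:6443 max_fails=3 fail_timeout=30s;
        server 192.168.54.104:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 16443;          # the port that kubelet, kube-proxy and kubectl would point at
        proxy_pass kube-apiserver;
    }
}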

9.1 Install Docker on k8s-master2 (commands run on k8s-master1)

# k8s-master1节点执行
#!/bin/bash
scp /usr/bin/docker* root@192.168.54.104:/usr/bin
scp /usr/bin/runc root@192.168.54.104:/usr/bin
scp /usr/bin/containerd* root@192.168.54.104:/usr/bin
scp /usr/lib/systemd/system/docker.service root@192.168.54.104:/usr/lib/systemd/system
scp -r /etc/docker root@192.168.54.104:/etc
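
Before starting the daemon it may be worth confirming that everything arrived; a minimal sketch run on k8s-master2 (the binary Docker installation from the earlier steps places dockerd under /usr/bin, as the scp commands above assume):

# k8s-master2: check that the copied binaries are present and executable (sketch)
ls -l /usr/bin/docker /usr/bin/dockerd /usr/bin/containerd /usr/bin/runc
/usr/bin/dockerd --version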

9.2 Start Docker and enable it at boot (k8s-master2)

# k8s-master2节点执行
systemctl daemon-reload
systemctl start docker
systemctl enable docker
systemctl status docker
# k8s-master2节点执行
[root@k8s-master2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:49:27 CST; 21s ago
     Docs: https://docs.docker.com
 Main PID: 17089 (dockerd)
   CGroup: /system.slice/docker.service
           ├─17089 /usr/bin/dockerd --selinux-enabled=false --insecure-registry=127.0.0.1
           └─17099 containerd --config /var/run/docker/containerd/containerd.toml --log-level info

12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.608063063+08:00" level=info msg="sche...grpc
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.608073300+08:00" level=info msg="ccRe...grpc
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.608080626+08:00" level=info msg="Clie...grpc
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.645248247+08:00" level=info msg="Load...rt."
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.792885392+08:00" level=info msg="Defa...ess"
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.848943865+08:00" level=info msg="Load...ne."
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.890760085+08:00" level=info msg="Dock...03.9
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.891223273+08:00" level=info msg="Daem...ion"
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.909746131+08:00" level=info msg="API ...ock"
12月 05 16:49:27 k8s-master2 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

9.3 Create the etcd certificate directory (k8s-master2)

# k8s-master2节点执行
mkdir -p /opt/etcd/ssl

9.4 Copy files (k8s-master1)

Copy all Kubernetes files and the etcd certificates from k8s-master1 to k8s-master2:

# k8s-master1节点执行
#!/bin/bash
scp -r /opt/kubernetes root@192.168.54.104:/opt
scp -r /opt/etcd/ssl root@192.168.54.104:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.54.104:/usr/lib/systemd/system
scp /usr/bin/kubectl  root@192.168.54.104:/usr/bin
scp -r ~/.kube root@192.168.54.104:~

9.5 Delete the copied kubelet certificates (k8s-master2)

Delete the kubelet certificate and kubeconfig files that were copied over: they were generated automatically on k8s-master1 when its CSR was approved, they are unique to each node, and new ones will be issued for k8s-master2 during its own TLS bootstrap.

# k8s-master2节点执行
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*
# k8s-master2节点执行
[root@k8s-master2 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-master2 ~]# tree /opt/etcd/
/opt/etcd/
└── ssl
    ├── ca-config.json
    ├── ca.csr
    ├── ca-csr.json
    ├── ca-key.pem
    ├── ca.pem
    ├── server.csr
    ├── server-csr.json
    ├── server-key.pem
    └── server.pem
   
[root@k8s-master2 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── kubelet.service
├── kube-proxy.service
└── ......   

9.6 Modify the configuration files and hostname (k8s-master2)

Point the copied configuration at this node: change the apiserver bind/advertise addresses, the server address in the controller-manager, scheduler and kubectl kubeconfigs, and the kubelet/kube-proxy hostname override to k8s-master2 (a scripted version is sketched after the block):

# k8s-master2节点执行
vi /opt/kubernetes/cfg/kube-apiserver.conf 
...
--bind-address=192.168.54.104 \
--advertise-address=192.168.54.104 \
...

vi /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
server: https://192.168.54.104:6443

vi /opt/kubernetes/cfg/kube-scheduler.kubeconfig
server: https://192.168.54.104:6443

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2

vi ~/.kube/config
...
server: https://192.168.54.104:6443
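
The same edits can also be applied non-interactively; a minimal sketch using sed, assuming the copied files still contain k8s-master1's address 192.168.54.101 and hostname:

# k8s-master2: rewrite the copied configuration for this node (sketch)
cd /opt/kubernetes/cfg
sed -i 's/--bind-address=192.168.54.101/--bind-address=192.168.54.104/' kube-apiserver.conf
sed -i 's/--advertise-address=192.168.54.101/--advertise-address=192.168.54.104/' kube-apiserver.conf
sed -i 's#https://192.168.54.101:6443#https://192.168.54.104:6443#' \
    kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ~/.kube/config
sed -i 's/--hostname-override=k8s-master1/--hostname-override=k8s-master2/' kubelet.conf
sed -i 's/hostnameOverride: k8s-master1/hostnameOverride: k8s-master2/' kube-proxy-config.yml

Note that the --etcd-servers list in kube-apiserver.conf must stay unchanged, which is why only the bind/advertise addresses are rewritten above.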

9.7 Start the services and enable them at boot (k8s-master2)

# k8s-master2节点执行
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
[root@k8s-master2 ~]# systemctl status kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:44 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 20411 (kube-apiserver)
    Tasks: 10
   Memory: 317.4M
   CGroup: /system.slice/kube-apiserver.service
           └─20411 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.54...

12月 05 16:58:44 k8s-master2 systemd[1]: Started Kubernetes API Server.
12月 05 16:58:45 k8s-master2 kube-apiserver[20411]: E1205 16:58:45.980478   20411 instance.go:392] Could not construct pre-rendered resp...ot: api
12月 05 16:58:48 k8s-master2 kube-apiserver[20411]: E1205 16:58:48.994575   20411 controller.go:152] Unable to remove old endpoints from...rorMsg:

● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:44 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 20418 (kube-controller)
    Tasks: 7
   Memory: 25.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─20418 /opt/kubernetes/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect=true --ku...

12月 05 16:58:44 k8s-master2 systemd[1]: Started Kubernetes Controller Manager.

● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:44 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 20423 (kube-scheduler)
    Tasks: 9
   Memory: 18.9M
   CGroup: /system.slice/kube-scheduler.service
           └─20423 /opt/kubernetes/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --kubeconfig=/opt/...

12月 05 16:58:44 k8s-master2 systemd[1]: Started Kubernetes Scheduler.

● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:45 CST; 28s ago
 Main PID: 20429 (kubelet)
    Tasks: 8
   Memory: 28.2M
   CGroup: /system.slice/kubelet.service
           └─20429 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-master2 --networ...

12月 05 16:58:45 k8s-master2 systemd[1]: Started Kubernetes Kubelet.

● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:45 CST; 28s ago
 Main PID: 20433 (kube-proxy)
    Tasks: 7
   Memory: 14.7M
   CGroup: /system.slice/kube-proxy.service
           └─20433 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-pro...

12月 05 16:58:45 k8s-master2 systemd[1]: Started Kubernetes Proxy.
12月 05 16:58:45 k8s-master2 kube-proxy[20433]: E1205 16:58:45.262842   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:58:46 k8s-master2 kube-proxy[20433]: E1205 16:58:46.362804   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:58:48 k8s-master2 kube-proxy[20433]: E1205 16:58:48.490551   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:58:52 k8s-master2 kube-proxy[20433]: E1205 16:58:52.634076   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:59:01 k8s-master2 kube-proxy[20433]: E1205 16:59:01.559297   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
Hint: Some lines were ellipsized, use -l to show in full.

9.8 Check the cluster status (k8s-master2)

# k8s-master2节点执行
[root@k8s-master2 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
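
A quick way to confirm that both apiservers have registered themselves is to look at the endpoints of the default kubernetes Service, which each apiserver reconciles with its advertise address; a minimal sketch:

# Both 192.168.54.101:6443 and 192.168.54.104:6443 should be listed (sketch)
kubectl get endpoints kubernetes -n default -o wide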

9.9 Approve the kubelet certificate request (k8s-master1)

# 查看证书请求
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-CMPsAwf8hGyMEC205me9C5KXMkBthr8J1ihv67VLPMo   7m9s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# 同意请求
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl certificate approve node-csr-CMPsAwf8hGyMEC205me9C5KXMkBthr8J1ihv67VLPMo
certificatesigningrequest.certificates.k8s.io/node-csr-CMPsAwf8hGyMEC205me9C5KXMkBthr8J1ihv67VLPMo approved
# 查看Node
# k8s-master1节点执行
[root@k8s-master1 ~]#  kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   167m   v1.20.15
k8s-master2   Ready    <none>   78s    v1.20.15
k8s-node1     Ready    <none>   87m    v1.20.15
k8s-node2     Ready    <none>   88m    v1.20.15
# 查看Node
# k8s-master2节点执行
[root@k8s-master2 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   3h48m   v1.20.15
k8s-master2   Ready    <none>   62m     v1.20.15
k8s-node1     Ready    <none>   149m    v1.20.15
k8s-node2     Ready    <none>   149m    v1.20.15

At this point a two-master Kubernetes cluster has been deployed. Note, though, that the worker nodes and kube-proxy instances still point at k8s-master1's apiserver, so true high availability still requires the load balancer / virtual IP (192.168.54.105 in the server plan) configured in the following steps.

By default the kubelet registers its node with the master automatically. Check each Node's status on the master: Ready means the node has registered successfully and is available. Once all Nodes are Ready, the Kubernetes cluster is up, and you can create Pods, Deployments, Services and other resource objects to deploy containerized applications; a quick end-to-end check is sketched below.
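
As a final end-to-end check, the new cluster can run a small test workload; a minimal sketch (the image and names here are only examples):

# Run a test Deployment, expose it via NodePort, and check the result (sketch)
kubectl create deployment web --image=nginx:1.21 --replicas=2
kubectl expose deployment web --port=80 --type=NodePort
kubectl get pods,svc -o wide
# Then open http://<any NodeIP>:<assigned NodePort> to confirm traffic reaches the Pods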

