Kubernetes Binary Deployment Guide

Contents

2. Environment Preparation
2.1 Host Configuration
2.2 Installing Docker
2.3 Generating Certificates for Encrypted Communication
2.3.1 Generating the CA Certificate (all hosts)
2.3.2 Generating the Server Certificate (all hosts)
2.3.3 Generating the admin Certificate (all hosts)
2.3.4 Generating the proxy Certificate
3. Deploying the Etcd Cluster
3.1 Deploying the Etcd Node on the k8s-master Host
3.2 Deploying the Etcd Nodes on the k8s-node1 and k8s-node2 Hosts
3.3 Checking the Etcd Cluster Status
4. Deploying the Flannel Network
4.1 Writing the Subnet Range into Etcd
4.2 Configuring Flannel
4.3 Starting Flannel
4.4 Testing Whether Flanneld Was Installed Successfully
5. Deploying the Kubernetes Master Components
5.1 Adding kubectl to the Command Environment
5.2 Creating the TLS Bootstrapping Token
5.3 Creating the Kubelet kubeconfig
5.4 Creating the kube-proxy kubeconfig
5.5 Deploying kube-apiserver
5.6 Deploying kube-controller-manager
5.7 Deploying kube-scheduler
5.8 Checking Whether the Components Are Running Normally
6. Deploying the Kubernetes Node Components
6.1 Preparing the Environment
6.2 Deploying kubelet
6.3 Deploying kube-proxy
6.4 Checking Whether the Node Components Were Installed Successfully
6.5 Viewing the Automatically Issued Certificates
7. Creating an Nginx Service with a Deployment


2. Environment Preparation

Download link for the packages used in the binary installation: https://pan.baidu.com/s/1LHnJjn4mbG0dRoDzChVIfg?pwd=uz4m (extraction code: uz4m)

Operating System    IP Address       Hostname      Components

CentOS 7.x          192.168.2.116    k8s-master
CentOS 7.x          192.168.2.117    k8s-node1
CentOS 7.x          192.168.2.118    k8s-node2

Note: every host should have at least 2 CPU cores and 2 GB of memory.

2.1 Host Configuration

Set the hostname on each of the three hosts.

[root@localhost ~]# hostname k8s-master
[root@localhost ~]# bash
[root@k8s-master ~]#

[root@localhost ~]# hostname k8s-node1
[root@localhost ~]# bash
[root@k8s-node1 ~]# 

[root@localhost ~]# hostname k8s-node2
[root@localhost ~]# bash
[root@k8s-node2 ~]# 
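
The hostname command above only takes effect for the current boot. If the change should survive a reboot, a command such as the following (run with the matching name on each host) can be used instead on CentOS 7:

[root@localhost ~]# hostnamectl set-hostname k8s-master        # persists across reboots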

On all three hosts, add name resolution records to the hosts file.

[root@k8s-master ~]# cat << EOF >> /etc/hosts
192.168.2.116 k8s-master
192.168.2.117 k8s-node1
192.168.2.118 k8s-node2
EOF

[root@k8s-master ~]# scp /etc/hosts 192.168.2.117:/etc/
[root@k8s-master ~]# scp /etc/hosts 192.168.2.118:/etc/
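
A quick way to confirm that the new entries resolve on each host, for example:

[root@k8s-master ~]# ping -c 1 k8s-node1
[root@k8s-master ~]# ping -c 1 k8s-node2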

2.2 Installing Docker

Install and configure Docker on all hosts.

[root@k8s-master ~]# yum -y install iptable* wget telnet lsof vim rsync lrzsz net-tools unzip


[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum clean all && yum makecache fast

[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

[root@k8s-master ~]# cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "https://hub-mirror.c.163.com"
  ]
}
EOF

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
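
To verify that the registry mirror configuration was picked up, the mirrors should appear in the Docker info output, for example:

[root@k8s-master ~]# docker info | grep -A 3 "Registry Mirrors"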

Kubernetes generates iptables rules when it creates containers, so the default CentOS firewalld firewall needs to be replaced with iptables. Configure the firewall on all hosts.

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# systemctl start iptables
[root@k8s-master ~]# iptables -F
[root@k8s-master ~]# iptables -I INPUT -s 192.168.2.0/24 -j ACCEPT

Disable SELinux.

[root@k8s-master ~]# sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0
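
A quick sanity check of the SELinux state:

[root@k8s-master ~]# getenforce        # should report Permissive now, and Disabled after the next reboot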

2.3 Generating Certificates for Encrypted Communication

Communication between the Kubernetes components is encrypted with TLS certificates. This lab uses CloudFlare's PKI toolkit CFSSL to generate the Certificate Authority (CA) and the other certificates. (Perform on all hosts.)

Kubernetes tools download link: https://pan.baidu.com/s/16GaKmbCBjWr8ZIAf3QCYNQ?pwd=62fn (extraction code: 62fn)

[root@k8s-master ~]# tar xzf kubernetes-server-linux-amd64.tar.gz 

2.3.1 Generating the CA Certificate (all hosts)

CA certificate tool (cfssl) download link: https://pan.baidu.com/s/1HY_5YXpyFO9OKagyjeq2NA?pwd=zvi3 (extraction code: zvi3)

Run the following to create a location for the certificates and install the certificate generation tools.

[root@k8s-master ~]# cd /usr/local/bin/

[root@k8s-master bin]# rz        # upload the cfssl tools

[root@k8s-master bin]# mv cfssl_linux-amd64 ./cfssl

[root@k8s-master bin]# mv cfssljson_linux-amd64 ./cfssljson

[root@k8s-master bin]# mv cfssl-certinfo_linux-amd64 ./cfssl-certinfo

[root@k8s-master bin]# chmod +x ./*

[root@k8s-master bin]# ll

总用量 18808
-rwxr-xr-x. 1 root root 10376657 7月   9 2020 cfssl
-rwxr-xr-x. 1 root root  6595195 7月   9 2020 cfssl-certinfo
-rwxr-xr-x. 1 root root  2277873 7月   9 2020 cfssljson

[root@k8s-master ~]# cfssl --help

Usage:
Available commands:
	ocsprefresh
	scan
	genkey
	ocspdump
	ocspsign
	ocspserve
	sign
	serve
	gencert
	selfsign
	revoke
	certinfo
	version
	info
	print-defaults
	bundle
	gencrl
Top-level flags:
  -allow_verification_with_non_compliant_keys
    	Allow a SignatureVerifier to use keys which are technically non-compliant with RFC6962.
  -loglevel int
    	Log level (0 = DEBUG, 5 = FATAL) (default 1)

Run the following commands to create the CA configuration file and the CA certificate signing request (CSR) file.

[root@k8s-master ~]# cat << EOF > ca-config.json
> {
>   "signing": {
>     "default": {
>       "expiry": "87600h"
>     },
>     "profiles": {
>       "kubernetes": {
>         "expiry": "87600h",
>         "usages": [
>           "signing",
>           "key encipherment",
>           "server auth",
>           "client auth"
>         ]
>       }
>     }
>   }
> }
> EOF

[root@k8s-master ~]# cat << EOF > ca-csr.json
> {
>   "CN": "kubernetes",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "L": "Beijing",
>       "ST": "Beijing",
>       "O": "k8s",
>       "OU": "System"
>     }
>   ]
> }
> EOF

Run the following to generate the CA certificate.

[root@k8s-master ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2023/08/10 19:44:09 [INFO] generating a new CA key and certificate from CSR
2023/08/10 19:44:09 [INFO] generate received request
2023/08/10 19:44:09 [INFO] received CSR
2023/08/10 19:44:09 [INFO] generating key: rsa-2048
2023/08/10 19:44:09 [INFO] encoded CSR
2023/08/10 19:44:09 [INFO] signed certificate with serial number 232408171082706122668724082483527707664314357277
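
cfssljson -bare ca writes the resulting files into the current directory; a quick check that everything was produced:

[root@k8s-master ~]# ls ca*        # should list ca-config.json, ca.csr, ca-csr.json, ca-key.pem, and ca.pem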

2.3.2 Generating the Server Certificate (all hosts)

Run the following to create the server-csr.json file and generate the Server certificate. The IP addresses configured in the file are those of the hosts that will use the certificate; adjust them to match your environment. 10.10.10.1 is the cluster IP of the built-in kubernetes Service.

[root@k8s-master ~]# vim server-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.2.116",
    "192.168.2.117",
    "192.168.2.118",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}


[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

2023/08/10 19:57:50 [INFO] generate received request
2023/08/10 19:57:50 [INFO] received CSR
2023/08/10 19:57:50 [INFO] generating key: rsa-2048
2023/08/10 19:57:50 [INFO] encoded CSR
2023/08/10 19:57:50 [INFO] signed certificate with serial number 424188719705968634905526760201201991499922096108
2023/08/10 19:57:50 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
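
To confirm that the addresses from server-csr.json ended up in the certificate's Subject Alternative Names, an inspection along these lines can be used:

[root@k8s-master ~]# openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"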

2.3.3 Generating the admin Certificate (all hosts)

Run the following to create the admin-csr.json file and generate the admin certificate.

[root@k8s-master ~]# vim admin-csr.json 

{
  "CN": "admin",
  "hosts": [],
  "key": {
  "algo": "rsa",
  "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}


[root@k8s-master ~]#  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin            // the admin certificate is used by administrators to access the cluster

2023/08/10 20:03:12 [INFO] generate received request
2023/08/10 20:03:12 [INFO] received CSR
2023/08/10 20:03:12 [INFO] generating key: rsa-2048
2023/08/10 20:03:12 [INFO] encoded CSR
2023/08/10 20:03:12 [INFO] signed certificate with serial number 159836210599051633906118237113258532670720286284
2023/08/10 20:03:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

2.3.4 Generating the proxy Certificate

Run the following to create the kube-proxy-csr.json file and generate the certificate.

[root@k8s-master ~]# vim kube-proxy-csr.json 

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}


[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2023/08/10 20:05:09 [INFO] generate received request
2023/08/10 20:05:09 [INFO] received CSR
2023/08/10 20:05:09 [INFO] generating key: rsa-2048
2023/08/10 20:05:10 [INFO] encoded CSR
2023/08/10 20:05:10 [INFO] signed certificate with serial number 59446791205648555156331506972188557314618920013
2023/08/10 20:05:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").


[root@k8s-master ~]# ls | grep -v pem | xargs -i rm {}            // remove the json and csr files, keeping only the .pem certificates

[root@k8s-master ~]# ll
总用量 32
-rw------- 1 root root 1679 8月  10 20:03 admin-key.pem
-rw-r--r-- 1 root root 1399 8月  10 20:03 admin.pem
-rw------- 1 root root 1679 8月  10 19:44 ca-key.pem
-rw-r--r-- 1 root root 1359 8月  10 19:44 ca.pem
-rw------- 1 root root 1679 8月  10 20:05 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 8月  10 20:05 kube-proxy.pem
drwxr-xr-x 4 root root   79 2月  12 2020 kubernetes
-rw------- 1 root root 1679 8月  10 19:57 server-key.pem
-rw-r--r-- 1 root root 1627 8月  10 19:57 server.pem

3. Deploying the Etcd Cluster

Run the following to create the configuration directories.

[root@k8s-master ~]# mkdir /opt/kubernetes
[root@k8s-master ~]# mkdir /opt/kubernetes/{bin,cfg,ssl}

Upload the etcd-v3.3.18-linux-amd64.tar.gz package, then run the following to unpack it and copy the binaries into place.

[root@k8s-master ~]# tar xf etcd-v3.3.18-linux-amd64.tar.gz 
[root@k8s-master ~]# cd etcd-v3.3.18-linux-amd64
[root@k8s-master etcd-v3.3.18-linux-amd64]# mv etcd /opt/kubernetes/bin/
[root@k8s-master etcd-v3.3.18-linux-amd64]# mv etcdctl /opt/kubernetes/bin/

With the configuration directories created and the Etcd package in place, the Etcd cluster can be configured as follows.

3.1 Deploying the Etcd Node on the k8s-master Host

Create the Etcd configuration file.

[root@k8s-master etcd-v3.3.18-linux-amd64]# vim /opt/kubernetes/cfg/etcd

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.116:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.116:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.116:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.116:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the systemd unit file for Etcd.

[root@k8s-master etcd-v3.3.18-linux-amd64]# vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
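
Because a new unit file was just created, reload the systemd configuration before starting the service:

[root@k8s-master ~]# systemctl daemon-reload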

Copy the certificates that Etcd needs at startup.

[root@k8s-master ~]# ls

admin-key.pem  ca-key.pem  etcd-v3.3.18-linux-amd64         kube-proxy-key.pem  kubernetes      server.pem
admin.pem      ca.pem      etcd-v3.3.18-linux-amd64.tar.gz  kube-proxy.pem      server-key.pem

[root@k8s-master ~]# cp ca*.pem server*.pem /opt/kubernetes/ssl/

Start Etcd on the master node. If the start command appears to hang, simply interrupt it with Ctrl+C: the Etcd process is already running but times out while trying to reach the other two members, which have not been started yet. (It is recommended to configure the node machines below first and then start Etcd.)

[root@k8s-master software]# systemctl start etcd
[root@k8s-master software]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Check that Etcd started.

[root@k8s-master software]# ps aux | grep etcd
root      10755  1.0  1.1 10610764 46032 ?      Ssl  14:50   0:01 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.2.116:2380 --listen-client-urls=https://192.168.2.116:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.2.116:2379 --initial-advertise-peer-urls=https://192.168.2.116:2380 --initial-cluster=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-token=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root      10798  0.0  0.0 112828   980 pts/1    S+   14:53   0:00 grep --color=auto etcd

3.2 Deploying the Etcd Nodes on the k8s-node1 and k8s-node2 Hosts

Copy the Etcd configuration to the node hosts, then change the IP addresses to match each host.

[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.2.117:/opt/kubernetes/
root@192.168.2.117's password: 
sending incremental file list
bin/
bin/etcd
bin/etcdctl
bin/default.etcd/
bin/default.etcd/member/
bin/default.etcd/member/snap/
bin/default.etcd/member/snap/db
bin/default.etcd/member/wal/
bin/default.etcd/member/wal/0.tmp
bin/default.etcd/member/wal/0000000000000000-0000000000000000.wal
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem

sent 14,065,864 bytes  received 200 bytes  1,339,625.14 bytes/sec
total size is 168,388,923  speedup is 11.97



[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.2.118:/opt/kubernetes/
root@192.168.2.118's password: 
sending incremental file list
bin/
bin/etcd
bin/etcdctl
bin/default.etcd/
bin/default.etcd/member/
bin/default.etcd/member/snap/
bin/default.etcd/member/snap/db
bin/default.etcd/member/wal/
bin/default.etcd/member/wal/0.tmp
bin/default.etcd/member/wal/0000000000000000-0000000000000000.wal
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem

sent 14,065,864 bytes  received 200 bytes  1,654,831.06 bytes/sec
total size is 168,388,923  speedup is 11.97
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/etcd 

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.117:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.117:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.117:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.117:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node2 ~]# vim /opt/kubernetes/cfg/etcd 
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.118:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.118:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.118:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.118:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Copy the systemd unit file.

[root@k8s-master software]#  scp /usr/lib/systemd/system/etcd.service 192.168.2.117:/usr/lib/systemd/system/
root@192.168.2.117's password: 
etcd.service                                                                                                   100%  994     1.8MB/s   00:00    

[root@k8s-master software]#  scp /usr/lib/systemd/system/etcd.service 192.168.2.118:/usr/lib/systemd/system/
root@192.168.2.118's password: 
etcd.service                                                                                                   100%  994     1.8MB/s   00:00    

Start Etcd on the Node hosts.

[root@k8s-node1 ~]# systemctl start etcd
[root@k8s-node1 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.



[root@k8s-node2 ~]# systemctl start etcd
[root@k8s-node2 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

3.3 Checking the Etcd Cluster Status

Add the Etcd binaries to the global PATH. Do this on all nodes.

[root@k8s-master ~]# vim /etc/profile

export PATH=$PATH:/opt/kubernetes/bin

[root@k8s-master ~]# source /etc/profile

Check the Etcd cluster status.

[root@k8s-master ~]# cd /root/software/ssl/

[root@k8s-master ssl]# etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" cluster-health
member 2e77788f6268c28d is healthy: got healthy result from https://192.168.2.117:2379
member 60b0a20770468ca4 is healthy: got healthy result from https://192.168.2.116:2379
member 980d2d199a3b6f16 is healthy: got healthy result from https://192.168.2.118:2379
cluster is healthy
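
The cluster-health subcommand belongs to etcdctl's v2 API, which is the default in etcd 3.3. An equivalent check with the v3 API (note the different flag names) looks roughly like this:

[root@k8s-master ssl]# ETCDCTL_API=3 etcdctl --cacert=ca.pem --cert=server.pem --key=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" endpoint health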

This completes the Etcd cluster deployment.

4. Deploying the Flannel Network

Flannel is a type of overlay network: it encapsulates the original packet inside another network packet for routing and forwarding, and it currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends. Other mainstream solutions for multi-host container networking include tunnel-based approaches (Weave, Open vSwitch) and routing-based approaches (Calico).

4.1 Writing the Subnet Range into Etcd

On the master node, write the subnet range into Etcd for Flanneld to use.

[root@k8s-master ~]# cd /root/software/ssl/

[root@k8s-master ssl]# etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"} }'

{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"} }

Upload the flannel-v0.12.0-linux-amd64.tar.gz package, unpack the Flannel binaries, and copy them to each Node.

[root@k8s-master ~]# tar xf flannel-v0.12.0-linux-amd64.tar.gz 

[root@k8s-master ~]# scp flanneld mk-docker-opts.sh 192.168.2.117:/opt/kubernetes/bin/

[root@k8s-master ~]# scp flanneld mk-docker-opts.sh 192.168.2.118:/opt/kubernetes/bin/

 

4.2 Configuring Flannel

Edit the flanneld configuration file on both k8s-node1 and k8s-node2. The steps below use k8s-node1 as the example.

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/flanneld


FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"


[root@k8s-node1 ~]# scp /opt/kubernetes/cfg/flanneld 192.168.2.118:/opt/kubernetes/cfg/flanneld 

The authenticity of host '192.168.2.118 (192.168.2.118)' can't be established.
ECDSA key fingerprint is SHA256:Xw4oZiqfBLe+vo6o1blQqSAQlde5FbnrawBscx+/dh0.
ECDSA key fingerprint is MD5:fd:e9:93:a2:fe:a1:f1:15:8d:f2:d8:c9:31:35:8c:85.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.118' (ECDSA) to the list of known hosts.
root@192.168.2.118's password: 
flanneld                                                                                                       100%  251   443.9KB/s   00:00    

Create a flanneld.service unit file on both k8s-node1 and k8s-node2 to manage Flanneld.

[root@k8s-node1 ~]#  cat <<'EOF' >/usr/lib/systemd/system/flanneld.service
> [Unit]
> Description=Flanneld overlay address etcd agent
> After=network-online.target network.target
> Before=docker.service
> [Service]
> Type=notify
> EnvironmentFile=/opt/kubernetes/cfg/flanneld
> ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
> ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
> Restart=on-failure
> [Install]
> WantedBy=multi-user.target
> EOF

[root@k8s-node1 ~]# scp /usr/lib/systemd/system/flanneld.service 192.168.2.118:/usr/lib/systemd/system/
root@192.168.2.118's password: 
flanneld.service                                                                                               100%  398   708.4KB/s   00:00    

On both k8s-node1 and k8s-node2, configure Docker to use the Flannel-assigned subnet by editing the Docker systemd unit file.

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service

EnvironmentFile=/run/flannel/subnet.env	// newly added inside the [Service] block, so that the addresses handed out by the Docker bridge are in the same subnet as the flannel bridge

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS	// add the $DOCKER_NETWORK_OPTIONS variable in place of the original ExecStart, so that Docker picks up the Flannel bridge IP address

#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

4.3 Starting Flannel

Start the Flannel service on k8s-node1 and k8s-node2.

[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld

Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker
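
To check that Flanneld handed a subnet to Docker, the generated environment file can be inspected; the exact values will differ per node:

[root@k8s-node1 ~]# cat /run/flannel/subnet.env        # should define DOCKER_NETWORK_OPTIONS with a --bip inside 172.17.0.0/16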



[root@k8s-node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.84.1  netmask 255.255.255.0  broadcast 172.17.84.255
        ether 02:42:76:ad:ac:bb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.84.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::3058:cff:fe3f:fe1a  prefixlen 64  scopeid 0x20<link>
        ether 32:58:0c:3f:fe:1a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

4.4 Testing Whether Flanneld Was Installed Successfully

From k8s-node2, test connectivity to the Flannel address on k8s-node1 (172.17.84.0, the flannel.1 interface shown above). Output like the following means Flanneld is working.

[root@k8s-node2 ~]# ping 172.17.84.0

PING 172.17.84.0 (172.17.84.0) 56(84) bytes of data.
64 bytes from 172.17.84.0: icmp_seq=1 ttl=64 time=0.515 ms
64 bytes from 172.17.84.0: icmp_seq=2 ttl=64 time=0.206 ms
64 bytes from 172.17.84.0: icmp_seq=3 ttl=64 time=0.226 ms
^C
--- 172.17.84.0 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.206/0.315/0.515/0.142 ms

This completes the Flannel configuration on the Node hosts.

5. Deploying the Kubernetes Master Components

The binaries required for a binary Kubernetes installation are provided for download at https://github.com/kubernetes/kubernetes/releases: choose the desired version and download the binary files from its CHANGELOG page.

On the k8s-master host, perform the following steps in order to deploy the Kubernetes master components.

5.1 Adding kubectl to the Command Environment

Upload the kubernetes-server-linux-amd64.tar.gz package, unpack it, and add kubectl to the command path.

[root@k8s-master ~]# tar xf kubernetes-server-linux-amd64.tar.gz 
[root@k8s-master ~]# cd kubernetes/server/bin/
[root@k8s-master bin]# cp kubectl /opt/kubernetes/bin/

 

5.2 Creating the TLS Bootstrapping Token

Run the following commands to create the TLS Bootstrapping Token.

[root@k8s-master bin]# cd /opt/kubernetes/

[root@k8s-master kubernetes]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

[root@k8s-master kubernetes]#  cat > token.csv <<EOF
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF

5.3 Creating the Kubelet kubeconfig

Run the following commands to create the Kubelet kubeconfig.

[root@k8s-master kubernetes]#  export KUBE_APISERVER="https://192.168.2.116:6443"

(1) Set the cluster parameters

[root@k8s-master kubernetes]# cd /root/software/ssl/

[root@k8s-master ssl]#  kubectl config set-cluster kubernetes \
> --certificate-authority=./ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

(2) Set the client authentication parameters

[root@k8s-master ssl]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig

User "kubelet-bootstrap" set.

(3) Set the context parameters

[root@k8s-master ssl]#  kubectl config set-context default \
> --cluster=kubernetes \
> --user=kubelet-bootstrap \
> --kubeconfig=bootstrap.kubeconfig

Context "default" created.

(4) Set the default context

[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Switched to context "default".
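
The generated bootstrap.kubeconfig can be reviewed before it is distributed to the nodes, for example:

[root@k8s-master ssl]# kubectl config view --kubeconfig=bootstrap.kubeconfig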

5.4 Creating the kube-proxy kubeconfig

Run the following commands to create the kube-proxy kubeconfig.

[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
> --certificate-authority=./ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=kube-proxy.kubeconfig

Cluster "kubernetes" set.

[root@k8s-master ssl]# kubectl config set-credentials kube-proxy \
> --client-certificate=./kube-proxy.pem \
> --client-key=./kube-proxy-key.pem \
> --embed-certs=true \
> --kubeconfig=kube-proxy.kubeconfig

User "kube-proxy" set.

[root@k8s-master ssl]#  kubectl config set-context default \
> --cluster=kubernetes \
> --user=kube-proxy \
> --kubeconfig=kube-proxy.kubeconfig

Context "default" created.

[root@k8s-master ssl]# kubectl config use-context default \
> --kubeconfig=kube-proxy.kubeconfig

Switched to context "default".

5.5 Deploying kube-apiserver

Run the following commands to deploy kube-apiserver.

[root@k8s-master ssl]# cd /root/kubernetes/server/bin/

[root@k8s-master bin]# cp kube-controller-manager kube-scheduler kube-apiserver /opt/kubernetes/bin/

[root@k8s-master bin]# cp /opt/kubernetes/token.csv /opt/kubernetes/cfg/

[root@k8s-master bin]# cd /opt/kubernetes/bin/

Upload master.zip to the current directory and extract it; it provides the apiserver.sh, controller-manager.sh, and scheduler.sh scripts used below.

[root@k8s-master bin]# ./apiserver.sh 192.168.2.116 https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
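
A quick way to confirm that the API server came up and is listening on the secure port:

[root@k8s-master bin]# systemctl status kube-apiserver
[root@k8s-master bin]# netstat -lntp | grep 6443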

5.6 Deploying kube-controller-manager

Run the following command to deploy kube-controller-manager.

[root@k8s-master bin]# sh controller-manager.sh 127.0.0.1

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

5.7 Deploying kube-scheduler

Run the following command to deploy kube-scheduler.

[root@k8s-master bin]# sh scheduler.sh 127.0.0.1

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

5.8 Checking Whether the Components Are Running Normally

Run the following command to check whether the components are running normally.

[root@k8s-master bin]#  kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   

6. Deploying the Kubernetes Node Components

With the Kubernetes master components deployed, the Kubernetes node components can be deployed next. Follow the steps below in order.

6.1 Preparing the Environment

Run the following commands on the k8s-master host to prepare the environment for deploying the node components.

[root@k8s-master ~]# cd /root/software/ssl/

[root@k8s-master ssl]# scp *kubeconfig 192.168.2.117:/opt/kubernetes/cfg/
root@192.168.2.117's password: 
bootstrap.kubeconfig                                                                                           100% 2167     2.6MB/s   00:00    
kube-proxy.kubeconfig                                                                                          100% 6269     8.6MB/s   00:00    

[root@k8s-master ssl]# scp *kubeconfig 192.168.2.118:/opt/kubernetes/cfg/
root@192.168.2.118's password: 
bootstrap.kubeconfig                                                                                           100% 2167     3.1MB/s   00:00    
kube-proxy.kubeconfig                                                                                          100% 6269     7.5MB/s   00:00    


[root@k8s-master ssl]# cd /root/kubernetes/server/bin/

[root@k8s-master bin]#  scp kubelet kube-proxy 192.168.2.117:/opt/kubernetes/bin
root@192.168.2.117's password: 
kubelet                                                                                                        100%  106MB 129.4MB/s   00:00    
kube-proxy                                                                                                     100%   36MB 134.3MB/s   00:00    

[root@k8s-master bin]#  scp kubelet kube-proxy 192.168.2.118:/opt/kubernetes/bin
root@192.168.2.118's password: 
kubelet                                                                                                        100%  106MB 120.3MB/s   00:00    
kube-proxy                                                                                                     100%   36MB 119.5MB/s   00:00    


[root@k8s-master bin]#  kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap

clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created


[root@k8s-master bin]# kubectl describe clusterrolebinding kubelet-bootstrap

Name:         kubelet-bootstrap
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  system:node-bootstrapper
Subjects:
  Kind  Name               Namespace
  ----  ----               ---------
  User  kubelet-bootstrap  

6.2 Deploying kubelet

Run the following to deploy kubelet. Perform these steps on both k8s-node1 and k8s-node2.

[root@k8s-node1 ~]# cd /opt/kubernetes/bin/

Upload node.zip to this directory.

[root@k8s-node1 bin]# unzip node.zip
Archive:  node.zip
  inflating: kubelet.sh              
  inflating: proxy.sh                

[root@k8s-node1 bin]# chmod +x *.sh

[root@k8s-node1 bin]# sh kubelet.sh 192.168.2.117 192.168.2.254

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node2 bin]# unzip node.zip

Archive:  node.zip
  inflating: kubelet.sh              
  inflating: proxy.sh                

[root@k8s-node2 bin]# chmod +x *.sh

[root@k8s-node2 bin]# sh kubelet.sh 192.168.2.118 192.168.2.254

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

6.3 Deploying kube-proxy

Run the following to deploy kube-proxy. Perform these steps on both k8s-node1 and k8s-node2.

[root@k8s-node1 bin]# sh proxy.sh 192.168.2.117

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.


[root@k8s-node2 bin]# sh proxy.sh 192.168.2.118

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

6.4 Checking Whether the Node Components Were Installed Successfully

Run the following command to check whether the node components were installed successfully.

[root@k8s-node2 bin]# ps -ef | grep kube

root       4859      1  1 14:51 ?        00:01:31 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.2.118:2380 --listen-client-urls=https://192.168.2.118:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.2.118:2379 --initial-advertise-peer-urls=https://192.168.2.118:2380 --initial-cluster=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-token=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root       5190      1  0 15:59 ?        00:00:01 /opt/kubernetes/bin/flanneld --ip-masq
root       9001      1  0 16:45 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.2.118 --hostname-override=192.168.2.118 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubrnetes/cfg/bootstrap.kubeconfig --cert-dir=/opt/kubernetes/ssl --cluster-dns=192.168.2.254 --cluster-domain=cluster.local --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root       9236      1  0 16:47 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.2.118 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root       9365   2753  0 16:48 pts/0    00:00:00 grep --color=auto kube

6.5 Viewing the Automatically Issued Certificates

Once the components are deployed, the master node receives the certificate signing requests from the nodes; approving them allows the nodes to join the cluster.

[root@k8s-master bin]# kubectl get csr        // list the pending certificate requests

NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs   8m26s   kubelet-bootstrap   Pending
node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c   8m27s   kubelet-bootstrap   Pending
node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk   5m21s   kubelet-bootstrap   Pending

[root@k8s-master bin]# kubectl certificate approve node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs           // approve the node so it can join the cluster; substitute your own CSR name

certificatesigningrequest.certificates.k8s.io/node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs approved


[root@k8s-master bin]# kubectl certificate approve node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c

certificatesigningrequest.certificates.k8s.io/node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c approved

[root@k8s-master bin]# kubectl certificate approve node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk

certificatesigningrequest.certificates.k8s.io/node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk approved


[root@k8s-master bin]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.2.117   Ready    <none>   2m41s   v1.17.3
192.168.2.118   Ready    <none>   39s     v1.17.3

7. Creating an Nginx Service with a Deployment

Create the Deployment manifest.

[root@k8s-master ~]# vim nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80

Create the nginx-deployment application.

[root@k8s-master ~]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

Check the Deployment status.

[root@k8s-master ~]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           4m49s

List the Pods and view the Deployment details.

[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-fc75999cc-f5lvg   1/1     Running   0          4m52s
nginx-deployment-fc75999cc-fdpsm   1/1     Running   0          4m52s
nginx-deployment-fc75999cc-rmblk   1/1     Running   0          4m52s
[root@k8s-master ~]# kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 18 Aug 2023 16:54:56 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.4
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-fc75999cc (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m42s  deployment-controller  Scaled up replica set nginx-deployment-fc75999cc to 3

Check the Pod status.

[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-fc75999cc-f5lvg   1/1     Running   0          6m8s
nginx-deployment-fc75999cc-fdpsm   1/1     Running   0          6m8s
nginx-deployment-fc75999cc-rmblk   1/1     Running   0          6m8s

View the detailed status of a specific Pod.

[root@k8s-master ~]# kubectl describe pod nginx-deployment-fc75999cc-f5lvg
Name:         nginx-deployment-fc75999cc-f5lvg
Namespace:    default
Node:         192.168.2.117/192.168.2.117
Start Time:   Fri, 18 Aug 2023 16:54:56 +0800
Labels:       app=nginx
              pod-template-hash=fc75999cc
Annotations:  <none>
Status:       Running
IP:           172.17.84.2
IPs:
  IP:           172.17.84.2
Controlled By:  ReplicaSet/nginx-deployment-fc75999cc
Containers:
  nginx:
    Container ID:   docker://f36134e89b059ebeb214d8ebc0ed3625af9e2a4ba8aaf27542fe1f122e832cef
    Image:          nginx:1.19.4
    Image ID:       docker-pullable://nginx@sha256:c3a1592d2b6d275bef4087573355827b200b00ffc2d9849890a4f3aa2128c4ae
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 18 Aug 2023 16:59:34 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-frzl2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-frzl2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-frzl2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason     Age                    From                    Message
  ----     ------     ----                   ----                    -------
  Normal   Scheduled  <unknown>              default-scheduler       Successfully assigned default/nginx-deployment-fc75999cc-f5lvg to 192.168.2.117
  Warning  Failed     4m25s                  kubelet, 192.168.2.117  Failed to pull image "nginx:1.19.4": rpc error: code = Unknown desc = context canceled
  Warning  Failed     4m25s                  kubelet, 192.168.2.117  Error: ErrImagePull
  Normal   BackOff    4m25s                  kubelet, 192.168.2.117  Back-off pulling image "nginx:1.19.4"
  Warning  Failed     4m25s                  kubelet, 192.168.2.117  Error: ImagePullBackOff
  Normal   Pulling    4m14s (x2 over 6m47s)  kubelet, 192.168.2.117  Pulling image "nginx:1.19.4"
  Normal   Pulled     2m12s                  kubelet, 192.168.2.117  Successfully pulled image "nginx:1.19.4"
  Normal   Created    2m12s                  kubelet, 192.168.2.117  Created container nginx
  Normal   Started    2m12s                  kubelet, 192.168.2.117  Started container nginx
[root@k8s-master ~]# kubectl get pod -o wide            # created successfully; all Pods are in the Running state
NAME                               READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
nginx-deployment-fc75999cc-f5lvg   1/1     Running   0          7m30s   172.17.84.2   192.168.2.117   <none>           <none>
nginx-deployment-fc75999cc-fdpsm   1/1     Running   0          7m30s   172.17.34.2   192.168.2.118   <none>           <none>
nginx-deployment-fc75999cc-rmblk   1/1     Running   0          7m30s   172.17.84.3   192.168.2.117   <none>           <none>

Test access to the Pod.

[root@k8s-node1 bin]#  elinks --dump http://172.17.84.3
                               Welcome to nginx!

   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.

   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.

   Thank you for using nginx.

References

   Visible links
   1. http://nginx.org/
   2. http://nginx.com/
