Overview

The name Kubernetes comes from Greek, meaning "helmsman" or "pilot". The abbreviation k8s comes from the eight letters between the "k" and the "s". Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and its services, support, and tools are widely available.

(Figure: container evolution)

As containers have become widespread, Kubernetes has grown to meet the need for container resource management and workload orchestration. It can provide the following (a short declarative-state sketch follows this list):

  • Service discovery and load balancing
    Kubernetes can expose a container using a DNS name or its own IP address. If traffic into a container is high, Kubernetes can load-balance and distribute network traffic so that the deployment stays stable.

  • Storage orchestration
    Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.

  • Automated rollouts and rollbacks
    You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for a deployment, remove existing containers, and adopt all of their resources into the new ones.

  • Automatic bin packing
    You provide Kubernetes with a cluster of nodes on which to run containerized tasks and tell it how much CPU and memory (RAM) each container needs. Kubernetes fits containers onto your nodes to make the best use of your resources.

  • Self-healing
    Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to a user-defined health check, and does not advertise them to clients until they are ready to serve.

  • Secret and configuration management
    Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.
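
For instance, the desired state behind the rollout and self-healing items above is expressed declaratively: you submit a manifest and the control plane drives the actual state toward it. A minimal sketch (the name, image, and replica count are illustrative):

# Declare the desired state: three nginx replicas managed by one Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
EOF
# Editing the image tag and re-applying the same file triggers a controlled rolling update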

Components

(Figure: Kubernetes components)

API Server: the brain

Exposes the Kubernetes HTTP API, allowing end users, different parts of the cluster, and external components to talk to one another. It is the single entry point for every request and the core of the control plane.
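
Everything, kubectl included, is a client of this one endpoint; a quick way to see that (a sketch using only standard kubectl subcommands):

# kubectl get --raw issues a plain GET against kube-apiserver
kubectl get --raw /version
kubectl get --raw /api/v1/namespaces
# cluster-info prints the API server endpoint these calls go through
kubectl cluster-info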

kube-scheduler: the right-hand man

The task scheduler: it watches for newly created Pods that have no node assigned and selects a suitable node for each one to run on.
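
The scheduler's choice can be constrained from the Pod spec; a sketch (the label and pod name are illustrative, vms11 being one of the lab nodes used later in this post):

# Label a node, then restrict scheduling to nodes carrying that label
kubectl label node vms11 disk=ssd
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pinned
spec:
  nodeSelector:
    disk: ssd
  containers:
  - name: nginx
    image: nginx:1.21
EOF
# Check which node the scheduler picked
kubectl get pod pinned -o wide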

kube-controller-manager: the cluster foreman

Watches the shared state of the cluster through the API server and acts as the cluster's controller, handling routine background tasks; scaling and maintenance inside the cluster are carried out by it. As a rule, each resource type has its own controller.
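
The reconciliation loop is easy to observe (a sketch reusing the illustrative web Deployment from the overview): delete a replica and the deployment controller immediately creates a replacement to restore the declared count.

# Actual state drifts (pods die); the controller drives it back to the desired state
kubectl delete pod -l app=web --wait=false
kubectl get pods -l app=web -w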

etcd: the vault

Stores the cluster's data: a consistent, highly available key-value store that backs all cluster data and persists all of the cluster's important state. The etcd project positions it as a reliable distributed key-value store.
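
Since all cluster state lives here, etcd is the component most worth backing up; a snapshot sketch, assuming the default certificate paths of a kubeadm install:

# Take a point-in-time snapshot of the cluster state (run on the master)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-backup.db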

Worker node: the workhorse

  • kubelet: responsible for creating, starting, and stopping the containers behind each Pod, in cooperation with the master. It:
    • watches the Pods assigned to its node
    • mounts the volumes a Pod requires
    • downloads the Pod's secrets
    • runs the Pod's containers through the CRI
    • periodically executes the liveness probes defined for the Pod's containers (see the probe sketch after this list)
    • reports Pod status to the other components of the system
    • reports node status
  • kube-proxy: a simple network proxy and load balancer that implements Services, making Pods on a node reachable from outside
  • Container runtime (driven through the CRI)
    • Docker
    • containerd
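
The liveness probes mentioned in the kubelet item are declared in the Pod spec (a minimal sketch; the path and timings are illustrative); the kubelet runs the probe and restarts the container when it fails:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: probed
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF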

(Figure: Kubernetes parts)

Deployment

Preparation

Because Kubernetes 1.24 and later no longer support using Docker directly (the dockershim was removed), version 1.20.1 is installed here and later upgraded to 1.21.14.

# Using CentOS 7 as an example, with the Huawei mirror source
# Set the default firewall zone to trusted
firewall-cmd --set-default-zone=trusted
firewall-cmd --reload
# Disable swap (kubeadm requires it off); swapoff takes effect immediately
swapoff -a
sed -i '/swap/s|/dev|#/dev|g' /etc/fstab
# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Adjust kernel parameters and apply them
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
# Configure time synchronization
yum install -y ntpdate
ntpdate time.windows.com
hwclock -w
# Configure package repositories
rm -f /etc/yum.repos.d/*
wget -P /etc/yum.repos.d/ ftp://ftp.rhce.cc/k8s/*
# Install Docker
yum makecache fast
yum update
yum remove -y docker docker-common docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2 psmisc net-tools
yum install -y docker-ce-19.03.9-3.el7
# Configure a registry mirror and the systemd cgroup driver
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://37y8py0j.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
# Enable Docker at boot
systemctl enable docker
# Lock the Docker version to prevent accidental upgrades
yum -y install yum-versionlock
yum versionlock add docker-ce
# Reboot the host
systemctl reboot
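
A quick sanity check after the reboot (a sketch; output formats vary slightly by version):

# Swap should report 0 total
free -m
# The bridge/forwarding sysctls should both be 1
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# Docker should report the systemd cgroup driver
docker info | grep -i 'cgroup driver'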

Installation

# Deploy a 1.20.1 cluster
# Run on all nodes:
# install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.20.1-0 kubeadm-1.20.1-0 kubectl-1.20.1-0 --disableexcludes=kubernetes
systemctl enable --now kubelet
yum versionlock add kubeadm kubectl kubelet
yum versionlock status
# Preload the CoreDNS image
wget ftp://ftp.rhce.cc/cka-tool/coredns-1.21.tar
docker load -i coredns-1.21.tar
# Initialize the cluster (on the master)
kubeadm init --kubernetes-version 1.20.1 --pod-network-cidr=10.245.0.0/16 --image-repository registry.aliyuncs.com/google_containers
# Copy the kubeconfig file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Join the cluster from each worker node
kubeadm join 172.16.10.10:6443 --token rd4el2.q4mfotxvwm195p7m --discovery-token-ca-cert-hash sha256:0021ae786417d2a3202b42d580a71218ba17fa563ce7fe2b56d21624ffa389f9
# If this command was not saved, regenerate it with:
kubeadm token create --print-join-command
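# (Sketch) The discovery hash can also be recomputed on the master from the
# cluster CA certificate, as documented for kubeadm:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'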
# Check node status
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 43m v1.20.1
vms11 Ready <none> 43m v1.20.1
vms12 Ready <none> 42m v1.20.1
# Set up kubectl shell completion
yum install -y bash-completion bash-completion-extras
source /usr/share/bash-completion/bash_completion

# Enable kubectl completion for the current shell only
source <(kubectl completion bash)

# Enable it for the current user only
echo 'source <(kubectl completion bash)' >>~/.bashrc

# Enable it globally
echo 'source <(kubectl completion bash)' >/etc/profile.d/k8s.sh && source /etc/profile

# Or generate the completion script into bash-completion's directory
kubectl completion bash >/etc/bash_completion.d/kubectl
# Deploy Calico
wget ftp://ftp.rhce.cc/calico/calico-3.19-img.tar
wget ftp://ftp.rhce.cc/calico/calico.yaml
for i in docker.io/calico/cni:v3.19.1 docker.io/calico/pod2daemon-flexvol:v3.19.1 docker.io/calico/node:v3.19.1 docker.io/calico/kube-controllers:v3.19.1 ; do docker pull $i;done
kubectl apply -f calico.yaml
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 3h38m v1.20.1
vms11 Ready <none> 3h37m v1.20.1
vms12 Ready <none> 3h37m v1.20.1

Administration

Common commands

# Check versions
[root@vms10 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:09:25Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:00:47Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
# Abbreviated version info
[root@vms10 ~]# kubectl version --short
Client Version: v1.20.1
Server Version: v1.20.1
# Cluster info
[root@vms10 ~]# kubectl cluster-info
Kubernetes control plane is running at https://172.16.10.10:6443
KubeDNS is running at https://172.16.10.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# Cluster nodes
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 10m v1.20.1
vms11 Ready <none> 5m44s v1.20.1
vms12 Ready <none> 5m38s v1.20.1
# Back up the install-time configuration
[root@vms10 ~]# kubectl get cm -o yaml -n kube-system kubeadm-config > kubeadm-config.yml
# Reinstall into an equivalent environment
[root@vms10 ~]# kubeadm init --config=kubeadm-config.yml

Removing a node & rejoining

# Put the worker node into maintenance mode and drain it
[root@vms10 ~]# kubectl drain vms12 --delete-emptydir-data --force --ignore-daemonsets
node/vms12 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-87846, kube-system/kube-proxy-7f2jx
evicting pod kube-system/coredns-7f89b7bc75-q24dt
evicting pod kube-system/calico-kube-controllers-7f4f5bf95d-9c95m
pod/calico-kube-controllers-7f4f5bf95d-9c95m evicted
pod/coredns-7f89b7bc75-q24dt evicted
node/vms12 evicted
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 17m v1.20.1
vms11 Ready <none> 12m v1.20.1
vms12 Ready,SchedulingDisabled <none> 12m v1.20.1
# Delete the node
[root@vms10 ~]# kubectl delete node vms12
node "vms12" deleted
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 18m v1.20.1
vms11 Ready <none> 13m v1.20.1
# Wipe the node's configuration (run on the node itself)
[root@vms12 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0520 01:30:17.934650 13451 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@vms12 ~]# rm -rf /etc/cni/net.d/*
# Rejoin the cluster
[root@vms12 ~]# kubeadm join 172.16.10.10:6443 --token rd4el2.q4mfotxvwm195p7m --discovery-token-ca-cert-hash sha256:0021ae786417d2a3202b42d580a71218ba17fa563ce7fe2b56d21624ffa389f9
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 21m v1.20.1
vms11 Ready <none> 16m v1.20.1
vms12 Ready <none> 53s v1.20.1

Upgrading

Order

  1. Upgrade the master node(s) first, then the workers. With multiple masters, upgrade them one at a time before moving on to the worker nodes.
  2. Regardless of the node's role, always upgrade kubeadm first, then run kubeadm upgrade, and only then upgrade kubelet and kubectl.

Example

# Upgrade the master
# Check node status
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 15h v1.20.1
vms11 Ready <none> 15h v1.20.1
vms12 Ready <none> 15h v1.20.1
# Check versions
[root@vms10 ~]# kubectl version --short
Client Version: v1.20.1
Server Version: v1.20.1
# Remove the version lock
[root@vms10 ~]# yum versionlock del kubeadm
Loaded plugins: fastestmirror, langpacks, versionlock
Deleting versionlock for: 0:kubeadm-1.20.1-0.*
versionlock deleted: 1
# List the available versions; the target here is 1.21.14 (skipping minor releases is not recommended)
[root@vms10 ~]# yum list --showduplicates kubeadm
kubeadm.x86_64 1.21.14-0
kubeadm.x86_64 1.22.0-0
# Upgrade kubeadm
[root@vms10 ~]# yum install -y kubeadm-1.21.14-0
# Verify the kubeadm version
[root@vms10 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14", GitCommit:"0f77da5bd4809927e15d1658fb4aa8f13ad890a5", GitTreeState:"clean", BuildDate:"2022-06-15T14:16:13Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
# Review the upgrade plan
[root@vms10 ~]# kubeadm upgrade plan
# kubeadm first checks the current cluster state
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.1
[upgrade/versions] kubeadm version: v1.21.14
I0520 17:54:43.190274 54385 version.go:254] remote version is much newer: v1.27.2; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.20 series: v1.20.15

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.20.1 v1.20.15

Upgrade to the latest version in the v1.20 series:

COMPONENT CURRENT TARGET
kube-apiserver v1.20.1 v1.20.15
kube-controller-manager v1.20.1 v1.20.15
kube-scheduler v1.20.1 v1.20.15
kube-proxy v1.20.1 v1.20.15
CoreDNS 1.7.0 v1.8.0
etcd 3.4.13-0 3.4.13-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.20.15
# This path would take 1.20.1 to 1.20.15, the latest patch release of the 1.20 series
_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.20.1 v1.21.14

Upgrade to the latest stable version:

COMPONENT CURRENT TARGET
kube-apiserver v1.20.1 v1.21.14
kube-controller-manager v1.20.1 v1.21.14
kube-scheduler v1.20.1 v1.21.14
kube-proxy v1.20.1 v1.21.14
CoreDNS 1.7.0 v1.8.0
etcd 3.4.13-0 3.4.13-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.21.14
# Applying with an explicit version number instead upgrades to the specified 1.21.14
_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________

# Put the master into maintenance mode and evict its pods
[root@vms10 ~]# kubectl drain vms10 --ignore-daemonsets
node/vms10 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-tjswd, kube-system/kube-proxy-9cfhk
node/vms10 drained
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready,SchedulingDisabled control-plane,master 16h v1.20.1
vms11 Ready <none> 16h v1.20.1
vms12 Ready <none> 16h v1.20.1
[root@vms10 ~]# kubeadm upgrade apply 1.21.14
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.14". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
# Take the master out of maintenance mode
[root@vms10 ~]# kubectl uncordon vms10
node/vms10 uncordoned
# Re-lock kubeadm
[root@vms10 ~]# yum versionlock add kubeadm
Loaded plugins: fastestmirror, langpacks, versionlock
Adding versionlock on: 0:kubeadm-1.21.14-0
versionlock added: 1
# Upgrade kubelet and kubectl
# Unlock kubelet and kubectl
[root@vms10 ~]# yum versionlock del kubelet kubectl
Loaded plugins: fastestmirror, langpacks, versionlock
Deleting versionlock for: 0:kubectl-1.20.1-0.*
Deleting versionlock for: 0:kubelet-1.20.1-0.*
versionlock deleted: 2
[root@vms10 ~]# yum install -y kubelet-1.21.14-0 kubectl-1.21.14-0
[root@vms10 ~]# systemctl daemon-reload && systemctl restart kubelet
[root@vms10 ~]# kubectl version --short
Client Version: v1.21.14
Server Version: v1.21.14
# Re-lock kubelet and kubectl to prevent accidental upgrades
[root@vms10 ~]# yum versionlock add kubelet kubectl
Loaded plugins: fastestmirror, langpacks, versionlock
Adding versionlock on: 0:kubelet-1.21.14-0
Adding versionlock on: 0:kubectl-1.21.14-0
versionlock added: 2
# Master upgrade complete
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 17h v1.21.14
vms11 Ready <none> 17h v1.20.1
vms12 Ready <none> 16h v1.20.1
# Upgrade the worker nodes
# Create an upgrade script
[root@vms11 ~]# tee ./upgrade-k8s.sh <<-'EOF'
#!/bin/bash
# set the upgrade target
version="1.21.14"
# unlock kubeadm, kubelet, and kubectl
yum versionlock del kubeadm kubelet kubectl
# upgrade kubeadm
yum install -y kubeadm-$version
# drain the node and upgrade its kubelet configuration
# (on a worker the correct command is "kubeadm upgrade node", not "kubeadm upgrade apply")
kubectl drain $(hostname) --ignore-daemonsets
kubeadm upgrade node
kubectl uncordon $(hostname)
# upgrade kubelet and kubectl, then re-lock all three packages
yum install -y kubelet-$version kubectl-$version
systemctl daemon-reload && systemctl restart kubelet
yum versionlock add kubeadm kubelet kubectl
EOF
[root@vms11 ~]# chmod +x upgrade-k8s.sh
[root@vms11 ~]# ./upgrade-k8s.sh
# After all worker nodes are upgraded, verify the result
[root@vms10 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms10 Ready control-plane,master 17h v1.21.14
vms11 Ready <none> 17h v1.21.14
vms12 Ready <none> 16h v1.21.14
# Upgrade complete

Monitoring

# Pre-pull the image on every host
[root@vms10 ~]# docker pull ccr.ccs.tencentyun.com/mirrors/metrics-server:v0.5.0
# Download the manifest
[root@vms10 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml
# Edit the manifest as follows
[root@vms10 ~]# vim components.yaml

    containers:
    - args:
      - --cert-dir=/tmp
      - --secure-port=443
      - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      - --kubelet-use-node-status-port
      - --metric-resolution=15s
      - --kubelet-insecure-tls          # skip verification of the kubelets' serving certificates
      image: ccr.ccs.tencentyun.com/mirrors/metrics-server:v0.5.0   # switched to a domestic mirror

# Deploy
[root@vms10 ~]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
# Check the running pod
[root@vms10 ~]# kubectl get pod -n kube-system |grep metrics-server
metrics-server-c44f75469-ltpgz 0/1 Running 0 34s
# Check cluster resource usage
[root@vms10 ~]# kubectl top nodes
W0520 18:59:54.167920 103768 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
vms10 153m 7% 1669Mi 43%
vms11 96m 4% 787Mi 20%
vms12 86m 4% 767Mi 20%
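
With metrics-server in place, per-pod figures are available the same way:

# Per-pod resource usage across all namespaces
kubectl top pods --all-namespaces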

Dashboard

# Prepare the deployment files on the master node
# Dashboard v2.4 is the release matching Kubernetes 1.21
[root@vms10 ~]# docker pull kubernetesui/dashboard:v2.4.0
v2.4.0: Pulling from kubernetesui/dashboard
5a24d13191c9: Pull complete
476e0d029a85: Pull complete
Digest: sha256:526850ae4ea9aba360e72b6df69fd3126b129d446efe83ac5250282b85f95b7f
Status: Downloaded newer image for kubernetesui/dashboard:v2.4.0
docker.io/kubernetesui/dashboard:v2.4.0
[root@vms10 ~]# docker pull kubernetesui/metrics-scraper:v1.0.7
v1.0.7: Pulling from kubernetesui/metrics-scraper
18dd5eddb60d: Pull complete
1930c20668a8: Pull complete
Digest: sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172
Status: Downloaded newer image for kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/metrics-scraper:v1.0.7
[root@vms10 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# Make the following changes to the file
[root@vms10 ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort              # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001         # added
  selector:
    k8s-app: kubernetes-dashboard
# Create the dashboard
[root@vms10 ~]# kubectl apply -f recommended.yaml
# Create a service account
[root@vms10 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
# Grant the account cluster-admin
[root@vms10 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
# Retrieve the account's token
[root@vms10 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-zm45s
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: b6679e77-4d4c-4fbc-bbb4-c05965cf7af7

Type: kubernetes.io/service-account-token

Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlRqYTQzSUtER0lVa1B1WVAtV1drSkpCSjdseUJ5bmJJbjhTRTZYR28xMjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tem00NXMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjY2NzllNzctNGQ0Yy00ZmJjLWJiYjQtYzA1OTY1Y2Y3YWY3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.hGsLdZ8zg7ZbC47fYj5ZxrFbGBgdabnzQQaaiPe_oHKHucWT0Ewq79s6VXgDlOOCrgWCSwhV_s8M_3h5MD7E53xOapIdZKJ8QPATRKzD6sqS2OQ3l6APbQYDU_He_2apCHNhgTrZJvg5QmPToU-GayphWda9i3JRxtryo2xujASTx5futs06TP0Ue_SP3gzPIxdmsBahVSdBs4t6IB-_UJ-T25bcmxs_GT3TEnMyjoYFWER30Ypktha6-wE3DGW6GX1gIQldhUCtOHHcoW12dD74Mvmwq8gMUUcqTAm4oS_laaYKr3OddzdYGz4Gvv-HkfxSeX3RRB3dnjUvavn3rw
ca.crt: 1066 bytes
namespace: 11 bytes
# Then browse to https://<any-node-IP>:30001 (the NodePort configured above) and log in with this token