k8s Node Setup


1. MySQL node

Set up the NFS server

# Install
apt install nfs-kernel-server
# Create the shared directory
mkdir -p /root/sharedata/mysql
# Edit the exports file
vim /etc/exports
/root/sharedata/mysql *(rw,sync,no_root_squash)
# Apply the config; no NFS restart needed, clients pick up the change immediately
exportfs -rv
# Start the services
systemctl start rpcbind
systemctl start nfs-server
systemctl enable --now nfs-server
# Check the exports
showmount -e
# Explanation
/root/sharedata/mysql # the shared directory
* # any client host may mount the share; a specific host can be given instead
rw # read-write access
no_root_squash # a client accessing the share as root keeps root privileges
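To confirm the export is usable, a client can list and mount it. A minimal sketch, assuming `<server-ip>` is the NFS host and the client has the `nfs-common` package:

```shell
# On a client machine:
apt install nfs-common
showmount -e <server-ip>                              # should list /root/sharedata/mysql
mount -t nfs <server-ip>:/root/sharedata/mysql /mnt   # mount the share
touch /mnt/testfile && ls /mnt                        # verify read-write access
umount /mnt
```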

Config file mysql-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /root/sharedata/mysql
    server: 127.0.0.1

Config file mysql-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Config file mysql-server.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:8
          name: mysql
          env:
            - name: "MYSQL_ROOT_PASSWORD"
              value: '123456'
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim

To make MySQL reachable from outside the cluster, replace the Service section with the following (delete the original Service first, otherwise applying it will fail):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30080 # port 30080 exposed outside k8s
      targetPort: 3306
  selector:
    app: mysql
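With the NodePort Service in place, an external client connects through any node's IP. A sketch, where `<node-ip>` is a placeholder:

```shell
mysql -h <node-ip> -P 30080 -uroot -p
```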

Run, in order:

kubectl create -f mysql-pv.yaml
kubectl create -f mysql-pvc.yaml
kubectl create -f mysql-server.yaml
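After the three creates, the PV should bind to the PVC and the pod should come up; a quick check (a sketch):

```shell
kubectl get pv,pvc    # mysql-pv-volume should show STATUS "Bound"
kubectl get pods -w   # wait for mysql-0 to reach Running
```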

Enter MySQL

kubectl exec -it mysql-0 -- /bin/bash
mysql -uroot -p123456
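Connectivity from inside the cluster can also be checked through the headless Service's DNS name. A sketch, assuming the default namespace:

```shell
kubectl run mysql-client --rm -it --image=mysql:8 -- \
  mysql -h mysql-0.mysql -uroot -p123456 -e "SELECT VERSION();"
```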

2. Spring Boot node

The Dockerfile:

# Use Ubuntu as the base image
FROM arm64v8/ubuntu:latest

# Maintainer info (MAINTAINER is deprecated in favor of LABEL)
LABEL maintainer="ethereal"

# Switch to /usr/local
WORKDIR /usr/local

# Create the jdk directory
RUN mkdir jdk

# Grant permissions on the jdk directory
RUN chmod 777 /usr/local/jdk

# Copy the downloaded JDK tarball into the image. Note the difference
# between ADD and COPY: ADD extracts archives, COPY does not.
ADD jdk-8u341-linux-aarch64.tar /usr/local/jdk

# Set the JAVA_HOME environment variable
ENV JAVA_HOME /usr/local/jdk/jdk1.8.0_341

# Set the Java classpath
ENV CLASSPATH=$JAVA_HOME/bin:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

# Put the java executables on PATH so the java command is available
ENV PATH=.:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

COPY ./target/backend-0.0.1-SNAPSHOT.jar /app/spring-boot-k8s-app.jar

ENTRYPOINT ["java", "-jar", "/app/spring-boot-k8s-app.jar"]
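A note on the base image: instead of hand-installing a JDK into Ubuntu, an official JDK image (e.g. `eclipse-temurin`, which publishes arm64 variants) can shrink the Dockerfile considerably. A sketch, assuming the same jar path:

```dockerfile
FROM eclipse-temurin:8-jre
COPY ./target/backend-0.0.1-SNAPSHOT.jar /app/spring-boot-k8s-app.jar
ENTRYPOINT ["java", "-jar", "/app/spring-boot-k8s-app.jar"]
```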

Push and pull

# build
docker buildx build --platform linux/arm64 -f Dockerfile -t jdk1.8 .
# tag
docker tag <imageID> registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
# push
docker push registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
# pull on the k8s node (--creds takes username:password; [password] is a placeholder)
crictl pull --creds aliyun8516592724:[password] registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
# retag to the name the Deployment expects
ctr -n k8s.io i tag registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1 javabackend.io/springboot:v1

Config file springboot.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-app
spec:
  replicas: 2 # two replicas
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
        - name: springboot-app
          # image: registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1 # the image just pushed to Aliyun
          image: javabackend.io/springboot:v1
          ports:
            - containerPort: 8080 # default Spring Boot port
          resources:
            requests:
              memory: "500Mi"
            limits:
              memory: "3600Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: springboot-app
spec:
  type: NodePort
  selector:
    app: springboot-app # selects the Deployment's pods
  ports:
    - port: 8080 # port inside the cluster
      nodePort: 30090 # port 30090 exposed outside k8s
      targetPort: 8080

Deploy

kubectl apply -f springboot.yaml

CI/CD script

#!/bin/bash
crictl rmi registry.cn-shanghai.aliyuncs.com/miaoa/miaoa:prod
crictl pull --creds Ethereal@1608148795872129:SichaoMiaoA123456 registry.cn-shanghai.aliyuncs.com/miaoa/miaoa:prod
ctr -n k8s.io i tag registry.cn-shanghai.aliyuncs.com/miaoa/miaoa:prod javabackend.io/springboot-main:v1
kubectl get pods | grep springboot-main | awk -F ' ' '{print $1}' | xargs kubectl delete pod
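The last line's pipeline extracts pod names from `kubectl get pods` and feeds them to `kubectl delete pod`, forcing k8s to recreate the pods from the freshly tagged image. The name-extraction step can be seen in isolation on simulated output (hypothetical pod names):

```shell
# Simulated `kubectl get pods` output with hypothetical pod names:
pods='NAME                           READY   STATUS    RESTARTS   AGE
springboot-main-7d4b9f-abcde   1/1     Running   0          5m
mysql-0                        1/1     Running   0          2h'
# The same filter used in the CI/CD script keeps only matching pod names (column 1):
matched=$(echo "$pods" | grep springboot-main | awk '{print $1}')
echo "$matched"
```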

To pull without supplying credentials each time, configure containerd:

containerd config dump # show the current config (`containerd config default` shows the defaults)
mkdir -p /etc/containerd
vim /etc/containerd/config.toml

Change the plugins."io.containerd.grpc.v1.cri".registry section to:

[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = ""

  [plugins."io.containerd.grpc.v1.cri".registry.auths]

  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.cn-shanghai.aliyuncs.com".tls]
      insecure_skip_verify = true
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.cn-shanghai.aliyuncs.com".auth]
      username = "<username>"
      password = "<password>"

  [plugins."io.containerd.grpc.v1.cri".registry.headers]

  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.cn-shanghai.aliyuncs.com"]
      endpoint = ["https://registry.cn-shanghai.aliyuncs.com"]

Once this is configured, these credentials are used for every image repository under registry.cn-shanghai.aliyuncs.com.
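A Kubernetes-native alternative to baking credentials into containerd is a registry Secret referenced from the pod spec. A sketch; the secret name `aliyun-registry` and the placeholder credentials are illustrative:

```shell
kubectl create secret docker-registry aliyun-registry \
  --docker-server=registry.cn-shanghai.aliyuncs.com \
  --docker-username=<username> \
  --docker-password=<password>
```

The Deployment then lists the secret under spec.template.spec.imagePullSecrets (an entry `- name: aliyun-registry`), and kubelet uses it for pulls from that registry.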

Restart containerd

systemctl daemon-reload
systemctl restart containerd

Afterwards, change the containers section of the YAML to:

containers:
  - name: springboot-app
    image: registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
    imagePullPolicy: Always
    ports:
      - containerPort: 8080 # default Spring Boot port
    resources:
      requests:
        memory: "500Mi"
      limits:
        memory: "3600Mi"

and change the CI/CD script to:

#!/bin/bash
deployment_name="springboot-main"
kubectl get pods | grep $deployment_name | awk '{print $1}' | xargs kubectl delete pod

started=0
max_wait_epoch=15
for i in $(seq 1 $max_wait_epoch)
do
    sleep 20
    echo "["$i"] wait for pod start..."
    kubectl get pods | grep $deployment_name
    if kubectl get pods | grep $deployment_name | grep -q Running; then
        echo "pod start success!"
        started=1
        break
    fi
done

# A started flag avoids the edge case where the pod comes up on the final iteration
if [ $started -eq 0 ]; then
    echo "pod start failed!"
    exit 1
fi
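An alternative to the delete-and-poll loop, assuming kubectl ≥ 1.15 and that `springboot-main` is the Deployment name, is a rollout restart, which (with `imagePullPolicy: Always`) re-pulls the image and blocks until the new pods are ready:

```shell
kubectl rollout restart deployment/springboot-main
kubectl rollout status deployment/springboot-main --timeout=300s
```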

3. Redis node

Set up the NFS server

# Create the shared directory
mkdir -p /root/sharedata/redis
# Edit the exports file
vim /etc/exports
/root/sharedata/redis *(rw,sync,no_root_squash)
# Apply the config; no NFS restart needed, clients pick up the change immediately
exportfs -rv

Config file redis-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    path: /root/sharedata/redis
    server: 127.0.0.1

Config file redis.conf

ignore-warnings ARM64-COW-BUG
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
maxmemory 1G
dir /var/lib/redis
port 6379

Config file redis-server.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
    - name: redis-port
      port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - redis
                topologyKey: kubernetes.io/hostname
      containers:
        - name: redis
          image: redis:7
          command:
            - "redis-server"
          args:
            - "/etc/redis/redis.conf"
            - "--protected-mode"
            - "no"
          resources:
            requests:
              cpu: "100m"
              memory: "100Mi"
            limits:
              memory: "1200Mi" # together with redis.conf, caps memory usage
          ports:
            - name: redis
              containerPort: 6379
              protocol: "TCP"
            - name: cluster
              containerPort: 16379
              protocol: "TCP"
          volumeMounts:
            - name: "redis-conf"
              mountPath: "/etc/redis"
            - name: "redis-data"
              mountPath: "/var/lib/redis"
      volumes:
        - name: "redis-conf"
          configMap:
            name: "redis-conf"
            items:
              - key: "redis.conf"
                path: "redis.conf"
  volumeClaimTemplates:
    - metadata:
        name: redis-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 200M

Run, in order:

kubectl create -f redis-pv.yaml 
kubectl create configmap redis-conf --from-file=redis.conf
kubectl create -f redis-server.yaml

Assign slots

# Enter the pod
kubectl exec -it redis-0 -- /bin/bash
# Check the cluster
redis-cli --cluster check 127.0.0.1:6379
# Fix it (assigns the unassigned slots)
redis-cli --cluster fix 127.0.0.1:6379
# Verify
redis-cli
set a 1
get a
del a
get a

4. Python node

The Dockerfile:

FROM ubuntu:20.04
RUN mkdir /sichao_python
COPY ./ /sichao_python
WORKDIR /sichao_python
# DEBIAN_FRONTEND=noninteractive avoids the interactive tzdata prompt during the build
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    python3 python3-pip python3-dev gcc g++ libffi-dev libssl-dev
RUN pip3 install cython
RUN pip3 install cryptography
RUN pip3 install -r requirements.txt

ENTRYPOINT ["gunicorn", "--workers=4", "--bind=0.0.0.0:5000", "miaoA:app"]

Push and pull

# build
docker buildx build --platform linux/arm64 -f Dockerfile -t python-backend .
# tag
docker tag <imageID> registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
# push
docker push registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
# pull on the k8s node (--creds takes username:password; [password] is a placeholder)
crictl pull --creds aliyun8516592724:[password] registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
# retag to the name the Deployment below expects
ctr -n k8s.io i tag registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1 pythonbackend.io/python-recommend:v1

Config file python.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-recommend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-recommend
  template:
    metadata:
      labels:
        app: python-recommend
    spec:
      containers:
        - name: python-recommend
          image: pythonbackend.io/python-recommend:v1
          ports:
            - containerPort: 5000
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: python-recommend
spec:
  type: NodePort
  selector:
    app: python-recommend
  ports:
    - port: 5000
      nodePort: 30150
      targetPort: 5000

Deploy

kubectl apply -f python.yaml

CI/CD script

#!/bin/bash
crictl rmi registry.cn-shanghai.aliyuncs.com/miaoa/miaoa-python:prod
crictl pull --creds Ethereal@1608148795872129:SichaoMiaoA123456 registry.cn-shanghai.aliyuncs.com/miaoa/miaoa-python:prod
ctr -n k8s.io i tag registry.cn-shanghai.aliyuncs.com/miaoa/miaoa-python:prod pythonbackend.io/python-recommend:v1
kubectl get pods | grep python-recommend | awk -F ' ' '{print $1}' | xargs kubectl delete pod

Likewise, for credential-free pulls see section 2 (the Spring Boot node).

5. MongoDB node

Set up the NFS server

# Install
apt install nfs-kernel-server
# Create the shared directory
mkdir -p /root/sharedata/mongodb
# Edit the exports file
vim /etc/exports
/root/sharedata/mongodb *(rw,sync,no_root_squash)
# Apply the config; no NFS restart needed, clients pick up the change immediately
exportfs -rv
# Start the services
systemctl start rpcbind
systemctl start nfs-server
systemctl enable --now nfs-server
# Check the exports
showmount -e
# Explanation
/root/sharedata/mongodb # the shared directory
* # any client host may mount the share; a specific host can be given instead
rw # read-write access
no_root_squash # a client accessing the share as root keeps root privileges

Config file mongodb-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /root/sharedata/mongodb
    server: 127.0.0.1

Config file mongodb-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Config file mongodb-server.yaml

apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  type: NodePort
  ports:
    - port: 27017
      nodePort: 30161 # port exposed outside k8s
      targetPort: 27017
  selector:
    app: mongodb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongodb"
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - image: mongo:6.0.3
          name: mongodb
          env:
            - name: "MONGO_INITDB_ROOT_USERNAME"
              value: 'root'
            - name: "MONGO_INITDB_ROOT_PASSWORD"
              value: '123456'
          ports:
            - containerPort: 27017
              name: mongodb
          volumeMounts:
            - name: mongodb-persistent-storage
              mountPath: /data/db
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
      volumes:
        - name: mongodb-persistent-storage
          persistentVolumeClaim:
            claimName: mongodb-pv-claim

Run, in order:

kubectl create -f mongodb-pv.yaml
kubectl create -f mongodb-pvc.yaml
kubectl create -f mongodb-server.yaml

6. Nginx node

Set up the NFS server

# Install
apt install nfs-kernel-server
# Create the shared directory
mkdir -p /root/sharedata/nginx
# Edit the exports file
vim /etc/exports
/root/sharedata/nginx *(rw,sync,no_root_squash)
# Apply the config; no NFS restart needed, clients pick up the change immediately
exportfs -rv
# Start the services
systemctl start rpcbind
systemctl start nfs-server
systemctl enable --now nfs-server
# Check the exports
showmount -e
# Explanation
/root/sharedata/nginx # the shared directory
* # any client host may mount the share; a specific host can be given instead
rw # read-write access
no_root_squash # a client accessing the share as root keeps root privileges

Config file nginx-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /root/sharedata/nginx
    server: 127.0.0.1

Config file nginx-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Config file nginx-server.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  type: NodePort
  ports:
    - nodePort: 80 # below the default 30000-32767 range; requires the extended port range configured later in this section
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.22.0
          name: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-persistent-storage
              mountPath: /etc/nginx/conf.d
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
      volumes:
        - name: nginx-persistent-storage
          persistentVolumeClaim:
            claimName: nginx-pv-claim

Create the nginx config file

vim /root/sharedata/nginx/default.conf
# contents:
server {
    listen 80;
    listen [::]:80;

    server_name www.ethereal.aaa.com;

    location ^~ / {
        proxy_pass http://ethereal:8080/;
    }
}
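Note that `proxy_pass http://ethereal:8080/` relies on cluster DNS resolving a Service named `ethereal`. To proxy to the Spring Boot Service defined earlier instead, the upstream would be (a sketch):

```nginx
location ^~ / {
    proxy_pass http://springboot-app:8080/;
}
```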

Extend the kube-apiserver NodePort range

vim /etc/kubernetes/manifests/kube-apiserver.yaml
# append to the end of spec.containers.command:
- --service-node-port-range=1-65535
# save, then run (kube-apiserver is a static pod, so kubelet restarts it):
systemctl daemon-reload
systemctl restart kubelet

Run, in order:

kubectl create -f nginx-pv.yaml
kubectl create -f nginx-pvc.yaml
kubectl create -f nginx-server.yaml

7. Sensor node

Config file sensor.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sensor
  name: sensor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sensor
  template:
    metadata:
      labels:
        app: sensor
    spec:
      containers:
        - image: sensor:v1
          name: sensor
          securityContext:
            privileged: true
          volumeMounts:
            - name: sensor-dev
              mountPath: /dev
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
      volumes:
        - name: sensor-dev
          hostPath:
            path: "/dev/"

This manifest maps the host's /dev into the Pod so it can access all devices, and sets the security context to privileged, allowing root-level access to them.

Run:

kubectl create -f sensor.yaml

8. Appendix

Remove the master node taint

kubectl taint nodes --all node-role.kubernetes.io/master-  # older versions
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Common kubectl commands

kubectl get nodes
kubectl get pods
kubectl delete pod <podname>
kubectl describe pod <podname>
kubectl describe node
kubectl logs <podname>
kubectl exec -it <podname> -- /bin/bash

Common docker commands

docker images
docker rmi <imageID>
docker rm <containerID>
docker buildx build --platform linux/arm64 -f Dockerfile -t jdk1.8 .
docker tag <imageID> registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
docker push registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1

Common crictl commands

crictl images
crictl rmi <imageID>
crictl pull --creds aliyun8516592724:[password] registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1
crictl rmi --prune # remove unused images

Common ctr commands

ctr -n k8s.io i rm <imageID>
ctr -n k8s.io i tag registry.cn-shanghai.aliyuncs.com/ethereal-o/docker:v1 javabackend.io/springboot:v1
ctr -n=k8s.io i ls
ctr -n=k8s.io i ls|awk -F ' ' '{print $1}'

Common containerd commands

containerd config dump # show the current config (`containerd config default` shows the defaults)
vim /etc/containerd/config.toml # edit the config
systemctl daemon-reload && systemctl restart containerd # reapply the config

Node auto-start script

#!/bin/bash
deployment_name="springboot-main"
kubectl get pods | grep $deployment_name | awk '{print $1}' | xargs kubectl delete pod

crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 registry.k8s.io/pause:3.8

started=0
max_wait_epoch=15
for i in $(seq 1 $max_wait_epoch)
do
    sleep 20
    echo "["$i"] wait for pod start..."
    kubectl get pods | grep $deployment_name
    if kubectl get pods | grep $deployment_name | grep -q Running; then
        echo "pod start success!"
        # crictl rmi --prune
        started=1
        break
    fi
done

# A started flag avoids the edge case where the pod comes up on the final iteration
if [ $started -eq 0 ]; then
    echo "pod start failed!"
    exit 1
fi

Export a docker image into containerd

# Save the image to a tarball
docker save -o ./image.tar image:v1
# Import it; -n selects the containerd namespace, which must be k8s.io
ctr -n k8s.io image import ./image.tar
# Confirm the import
ctr -n k8s.io image list
# crictl is the CRI tool defined by the Kubernetes community; confirm there too
crictl images

containerd config file

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
path = ""

[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0

[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0

[metrics]
address = ""
grpc_histogram = false

[plugins]

[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"

[plugins."io.containerd.grpc.v1.cri"]
cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
drain_exec_sync_io_timeout = "0s"
enable_cdi = false
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
image_pull_progress_timeout = "1m0s"
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "registry.k8s.io/pause:3.8"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""

[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
ip_pref = ""
max_conf_num = 1
setup_serially = false

[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
ignore_blockio_not_enabled_errors = false
ignore_rdt_not_enabled_errors = false
no_pivot = false
snapshotter = "overlayfs"

[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
privileged_without_host_devices_all_devices_allowed = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
sandbox_mode = ""
snapshotter = ""

[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
privileged_without_host_devices_all_devices_allowed = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
sandbox_mode = "podsandbox"
snapshotter = ""

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false

[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
privileged_without_host_devices_all_devices_allowed = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
sandbox_mode = ""
snapshotter = ""

[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"

[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""

[plugins."io.containerd.grpc.v1.cri".registry.auths]

[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.cn-shanghai.aliyuncs.com".tls]
insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.cn-shanghai.aliyuncs.com".auth]
username = "xxx"
password = "xxx"

[plugins."io.containerd.grpc.v1.cri".registry.headers]

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.cn-shanghai.aliyuncs.com"]
endpoint = ["https://registry.cn-shanghai.aliyuncs.com"]

[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"

[plugins."io.containerd.internal.v1.restart"]
interval = "10s"

[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"

[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"

[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false

[plugins."io.containerd.nri.v1.nri"]
disable = true
disable_connections = false
plugin_config_path = "/etc/nri/conf.d"
plugin_path = "/opt/nri/plugins"
plugin_registration_timeout = "5s"
plugin_request_timeout = "2s"
socket_path = "/var/run/nri/nri.sock"

[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false

[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false

[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]

[plugins."io.containerd.service.v1.tasks-service"]
blockio_config_file = ""
rdt_config_file = ""

[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""

[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false

[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""

[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""

[plugins."io.containerd.transfer.v1.local"]
config_path = ""
max_concurrent_downloads = 3
max_concurrent_uploaded_layers = 3

[[plugins."io.containerd.transfer.v1.local".unpack_config]]
differ = ""
platform = "linux/amd64"
snapshotter = "overlayfs"

[proxy_plugins]

[stream_processors]

[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"

[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.metrics.shimstats" = "2s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"

[ttrpc]
address = ""
gid = 0
uid = 0

9. References

https://blog.csdn.net/mshxuyi/article/details/115102838

https://blog.csdn.net/asufeiya/article/details/119595862

https://blog.csdn.net/sebeefe/article/details/124473706

https://blog.csdn.net/zxc_123_789/article/details/122924616

Configuring Chinese mirror sources and a private image registry for containerd (CSDN blog)

k8s with containerd: how to pull images from a private Harbor registry (CSDN blog)

  • Title: k8s节点搭建
  • Author: Ethereal
  • Created at: 2024-01-31 21:11:48
  • Updated at: 2024-06-05 18:38:18
  • Link: https://ethereal-o.github.io/2024/01/31/k8s节点搭建/
  • License: This work is licensed under CC BY-NC-SA 4.0.