Setting up InfluxDB and a Kafka cluster on k8s


1. Setting up InfluxDB

1.1 Start the NFS server

# install
apt install nfs-kernel-server
# create the shared directory (must match the path exported below and used by the PV)
mkdir -p /root/sharedata/influxdb_dev
# edit the exports file
vim /etc/exports
/root/sharedata/influxdb_dev *(rw,sync,no_root_squash)
# apply the new exports without restarting the NFS server; clients pick the change up immediately
exportfs -rv
# start the services
systemctl start rpcbind
systemctl start nfs-server
systemctl enable --now nfs-server
# check the exports
showmount -e
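
Optionally, verify the export from another machine (for example a k8s worker node) before wiring it into Kubernetes. This is a minimal check, assuming the NFS client utilities are installed and <nfs-server-ip> is replaced with the address of the NFS host:

# on a client node
apt install nfs-common
showmount -e <nfs-server-ip>
# test-mount the export, then unmount
mount -t nfs <nfs-server-ip>:/root/sharedata/influxdb_dev /mnt
umount /mnt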

1.2 Create the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb-dev-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /root/sharedata/influxdb_dev
    server: 127.0.0.1

1.3 Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-dev-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
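
The PV and PVC are applied together with the rest of the manifests in section 3; once applied, the claim should bind to the volume. A quick sanity check:

kubectl get pv influxdb-dev-pv-volume
kubectl get pvc influxdb-dev-pv-claim   # STATUS should show Bound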

1.4 Create the Service and StatefulSet

Note that the InfluxDB image must be version 2.x or later; otherwise there is no web UI.

apiVersion: v1
kind: Service
metadata:
  name: influxdb-dev
spec:
  type: NodePort
  ports:
    - port: 8086
      nodePort: 30121 # port exposed outside the k8s cluster
      targetPort: 8086
  selector:
    app: influxdb-dev
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: influxdb-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb-dev
  serviceName: "influxdb-dev"
  template:
    metadata:
      labels:
        app: influxdb-dev
    spec:
      containers:
        - image: influxdb:2.0.6
          name: influxdb-dev
          ports:
            - containerPort: 8086
              name: influxdb-dev
          volumeMounts:
            - name: influxdb-dev-persistent-storage
              mountPath: /var/lib/influxdb2
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
      volumes:
        - name: influxdb-dev-persistent-storage
          persistentVolumeClaim:
            claimName: influxdb-dev-pv-claim
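
After the manifests are applied (section 3), check that the pod is running and that the HTTP API answers on the NodePort. A minimal check, assuming <node-ip> is replaced with the address of one of your cluster nodes:

kubectl get pods -l app=influxdb-dev
# InfluxDB 2.x exposes a health endpoint; it should report status "pass"
curl http://<node-ip>:30121/health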

2. Setting up the Kafka cluster

2.1 Start the NFS server

# install
apt install nfs-kernel-server
# create the shared directories (must match the paths exported below and used by the PVs)
mkdir -p /root/sharedata/zookeeper_dev/1
mkdir -p /root/sharedata/zookeeper_dev/2
mkdir -p /root/sharedata/zookeeper_dev/3
mkdir -p /root/sharedata/kafka_dev/1
mkdir -p /root/sharedata/kafka_dev/2
mkdir -p /root/sharedata/kafka_dev/3
# edit the exports file
vim /etc/exports
/root/sharedata/zookeeper_dev/1 *(rw,sync,no_root_squash)
/root/sharedata/zookeeper_dev/2 *(rw,sync,no_root_squash)
/root/sharedata/zookeeper_dev/3 *(rw,sync,no_root_squash)
/root/sharedata/kafka_dev/1 *(rw,sync,no_root_squash)
/root/sharedata/kafka_dev/2 *(rw,sync,no_root_squash)
/root/sharedata/kafka_dev/3 *(rw,sync,no_root_squash)
# apply the new exports without restarting the NFS server; clients pick the change up immediately
exportfs -rv
# start the services
systemctl start rpcbind
systemctl start nfs-server
systemctl enable --now nfs-server
# check the exports
showmount -e
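
Equivalently, the six mkdir calls above can be collapsed into a single loop:

for i in 1 2 3; do
  mkdir -p /root/sharedata/zookeeper_dev/$i /root/sharedata/kafka_dev/$i
done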

2.2 Create the ZooKeeper PVs

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk01
  labels:
    app: zk
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 127.0.0.1
    path: "/root/sharedata/zookeeper_dev/1"
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk02
  labels:
    app: zk
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 127.0.0.1
    path: "/root/sharedata/zookeeper_dev/2"
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk03
  labels:
    app: zk
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 127.0.0.1
    path: "/root/sharedata/zookeeper_dev/3"
  persistentVolumeReclaimPolicy: Recycle
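
Once applied (section 3), the three volumes should be visible; they stay Available until the StatefulSet's claims bind to them:

kubectl get pv -l app=zk
# after the StatefulSet in 2.3 is created, its claims (datadir-zk-0..2) should bind to these volumes
kubectl get pvc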

2.3 Create the ZooKeeper Services and StatefulSet

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
    - name: server
      port: 2888
    - name: leader-election
      port: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 30131
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: "zk-hs"
  replicas: 3 # by default is 1
  selector:
    matchLabels:
      app: zk # has to match .spec.template.metadata.labels
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: zk
          imagePullPolicy: Always
          image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "4800Mi"
          command:
            - sh
            - -c
            # --heap sets the JVM heap size; keep it below the container memory limit
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=4G \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
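
After the ensemble comes up (section 3), you can confirm that each replica got a distinct ZooKeeper server id and a stable DNS name under the zk-hs headless service. A small sketch, assuming the start-zookeeper script writes the id to the conventional location under the data directory:

# each pod should print a different id (1, 2, 3)
for i in 0 1 2; do kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
# each pod resolves as zk-<n>.zk-hs.<namespace>.svc.cluster.local
for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done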

2.4 Create the Kafka Services and Deployments

apiVersion: v1
kind: Service
metadata:
  name: kafka-service-1
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-service-1
      targetPort: 9092
      nodePort: 30143
      protocol: TCP
  selector:
    app: kafka-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-2
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-service-2
      targetPort: 9092
      nodePort: 30144
      protocol: TCP
  selector:
    app: kafka-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-3
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-service-3
      targetPort: 9092
      nodePort: 30145
      protocol: TCP
  selector:
    app: kafka-3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-1
  template:
    metadata:
      labels:
        app: kafka-1
    spec:
      containers:
        - name: kafka-1
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zk-0.zk-hs:2181,zk-1.zk-hs:2181,zk-2.zk-hs:2181 # addresses of the ZooKeeper ensemble Kafka connects to
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_CREATE_TOPICS
              value: mytopic:2:1
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://0.0.0.0:9092
            - name: KAFKA_ADVERTISED_PORT
              value: "30143"
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx512m -Xms512m" # JVM heap size
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/kafka
      volumes:
        - name: datadir
          nfs:
            server: 127.0.0.1
            path: "/root/sharedata/kafka_dev/1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-2
  template:
    metadata:
      labels:
        app: kafka-2
    spec:
      containers:
        - name: kafka-2
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zk-0.zk-hs:2181,zk-1.zk-hs:2181,zk-2.zk-hs:2181
            - name: KAFKA_BROKER_ID
              value: "2"
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://0.0.0.0:9092
            - name: KAFKA_ADVERTISED_PORT
              value: "30144"
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx512m -Xms512m" # JVM heap size
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/kafka
      volumes:
        - name: datadir
          nfs:
            server: 127.0.0.1
            path: "/root/sharedata/kafka_dev/2"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-3
  template:
    metadata:
      labels:
        app: kafka-3
    spec:
      containers:
        - name: kafka-3
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zk-0.zk-hs:2181,zk-1.zk-hs:2181,zk-2.zk-hs:2181
            - name: KAFKA_BROKER_ID
              value: "3"
            - name: KAFKA_LISTENERS
              value: PLAINTEXT://0.0.0.0:9092
            - name: KAFKA_ADVERTISED_PORT
              value: "30145"
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx512m -Xms512m" # JVM heap size
          resources:
            requests:
              memory: "100Mi"
            limits:
              memory: "1200Mi"
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/kafka
      volumes:
        - name: datadir
          nfs:
            server: 127.0.0.1
            path: "/root/sharedata/kafka_dev/3"

Note that KAFKA_ADVERTISED_HOST_NAME may not match the IP that clients actually use: status.hostIP resolves to the node's private IP, whereas external access generally requires the public IP.

- name: KAFKA_ADVERTISED_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP

The fix is to replace it with the following:

- name: KAFKA_ADVERTISED_HOST_NAME
  value: "101.133.164.151"

3. Deploy

kubectl apply -f influxdb-pv.yaml
kubectl apply -f influxdb-pvc.yaml
kubectl apply -f influxdb-server.yaml
kubectl apply -f zookeeper-pv.yaml
kubectl apply -f zookeeper-server.yaml
kubectl apply -f kafka-server.yaml
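
After everything is applied, a quick overview of the resulting resources:

kubectl get pv,pvc
kubectl get pods -o wide
kubectl get svc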

4. References

K8S部署InfluxDB - zhangsi-lzq - 博客园 (cnblogs.com)

【全网最全最详细】K8s部署Mysql 8主从复制+读写分离_k8s部署mysql8.0读写分离-CSDN博客

K8S环境快速部署Kafka(K8S外部可访问)-腾讯云开发者社区-腾讯云 (tencent.com)

k8s部署有状态(StatefulSet)zk-kafka集群_k8s statefulset-CSDN博客

k8s部署Kafka集群-腾讯云开发者社区-腾讯云 (tencent.com)

K8s - 安装部署Kafka、Zookeeper集群教程(支持从K8s外部访问) - 蜂蜜log - 博客园 (cnblogs.com)

  • Title: k8s搭建influxdb和kafka集群
  • Author: Ethereal
  • Created at: 2024-01-08 01:00:56
  • Updated at: 2024-02-01 22:54:26
  • Link: https://ethereal-o.github.io/2024/01/08/k8s搭建influxdb和kafka集群/
  • License: This work is licensed under CC BY-NC-SA 4.0.