argo/airflow Deployment


1. argo

1.1 Prepare the offline Helm deployment files

wget https://github.com/argoproj/argo-helm/releases/download/argo-workflows-0.45.8/argo-workflows-0.45.8.tgz
tar xvf argo-workflows-0.45.8.tgz
vim values.yaml # change the server Service from ClusterIP to NodePort; configure persistence (for both logs and workflows)
server:
  serviceType: NodePort
  # -- Service port for server
  servicePort: 2746
  # -- Service node port
  serviceNodePort: 32746
persistence:
  connectionPool:
    maxIdleConns: 100
    maxOpenConns: 0
  # save the entire workflow into etcd and DB
  nodeStatusOffLoad: false
  # enable archiving of old workflows
  archive: false
  postgresql:
    host: postgres.service.com
    port: 32635
    database: argo_workflows
    tableName: argo_workflows
    # the database secrets must be in the same namespace of the controller
    userNameSecret:
      name: argo-postgres-config
      key: username
    passwordSecret:
      name: argo-postgres-config
      key: password
  # postgresql:
  #   host: localhost
  #   port: 5432
  #   database: postgres
  #   tableName: argo_workflows
  #   # the database secrets must be in the same namespace of the controller
  #   userNameSecret:
  #     name: argo-postgres-config
  #     key: username
  #   passwordSecret:
  #     name: argo-postgres-config
  #     key: password
  #   ssl: true
  #   # sslMode must be one of: disable, require, verify-ca, verify-full
  #   # you can find more information about those ssl options here: https://godoc.org/github.com/lib/pq
  #   sslMode: require
# -- Archive the main container logs as an artifact
artifactRepository:
  archiveLogs: true
  # -- Store artifact in a S3-compliant object store
  # @default -- See [values.yaml]
  s3:
    # # Note the `key` attribute is not the actual secret, it's the PATH to
    # # the contents in the associated secret, as defined by the `name` attribute.
    accessKeySecret:
      name: argo-s3-config
      key: accesskey
    secretKeySecret:
      name: argo-s3-config
      key: secretkey
    # sessionTokenSecret:
    #   name: "{{ .Release.Name }}-minio"
    #   key: sessionToken
    # # insecure will disable TLS. Primarily used for minio installs not configured with TLS
    insecure: true
    bucket: argo_bucket
    endpoint: s3.service.com:80
    region: US
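Before installing, the edited values can be sanity-checked by rendering the chart locally; the chart directory name comes from the tarball extracted above, and the release/namespace names follow the install command used later in this post:

```shell
# Render the chart with the modified values.yaml and confirm the server
# Service became a NodePort on 32746 (no cluster access required).
helm template argo-workflows ./argo-workflows -n argo-workflows \
  | grep -B2 -A2 'nodePort: 32746'
```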

Image list:

quay.io/argoproj/workflow-controller:v3.6.4
quay.io/argoproj/argoexec:v3.6.4
quay.io/argoproj/argocli:v3.6.4

Package the offline image files with the following script:

#!/bin/bash
set -e

# first cd into the script directory
script_dir=$(cd "$(dirname "$0")" && pwd)

cd "$script_dir"
rm -rf images && mkdir images && cd images

images_list=(
  # k8s
  "quay.io/argoproj/workflow-controller:v3.6.4"
  "quay.io/argoproj/argoexec:v3.6.4"
  "quay.io/argoproj/argocli:v3.6.4"
)

images_list="${images_list[*]}"

for img in $images_list; do
  echo -e "\e[94m -> Preparing $img... \e[39m"
  ./bin/ctr -n k8s.io images pull --platform linux/amd64 "$img" --hosts-dir "$script_dir"
done

eval "ctr -n k8s.io images export --platform linux/amd64 ../containerd_images.tar ${images_list}"
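On the air-gapped node, the exported tarball can then be loaded back into containerd's `k8s.io` namespace so kubelet can find the images; a sketch, assuming `ctr` is available on that host and the tarball has been copied over:

```shell
# Import the offline image bundle, then verify the Argo images are present.
ctr -n k8s.io images import containerd_images.tar
ctr -n k8s.io images ls -q | grep argoproj
```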

1.2 Deploy

helm install argo-workflows /disk2/shared/build_offline_origin/argo/argo-workflows -n argo-workflows

1.3 Create the role binding

kubectl create rolebinding argo-binding --role argo-argo-workflows-workflow --serviceaccount argo:default -n argo

1.4 Get the login token

kubectl exec -it argo-argo-workflows-server-667cddff87-5hg5m -n argo -- argo auth token
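The printed token can be pasted into the web UI login box, or used directly against the server's REST API; in this sketch the node address is a placeholder and port 32746 matches the NodePort configured in values.yaml:

```shell
# Call the Argo server API with the Bearer token from the command above.
ARGO_TOKEN="Bearer <token printed above>"
curl -s -H "Authorization: $ARGO_TOKEN" \
  "http://<node-ip>:32746/api/v1/workflows/argo"
```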

1.5 Prerequisites

  • A persistent PostgreSQL database has already been created

  • S3 storage has already been created

  • CoreDNS has already been modified (so that names like postgres.service.com and s3.service.com resolve)

  • secret.yaml has already been created

apiVersion: v1
kind: Secret
metadata:
  name: argo-postgres-config
  namespace: argo
type: Opaque
stringData:
  username: postgres
  password: E6^a3)zVD48mMNYaA)bF@wPv
---
apiVersion: v1
kind: Secret
metadata:
  name: argo-s3-config
  namespace: argo
type: Opaque
stringData:
  accesskey: ENL7QVDGNNYNNEX3X3VS
  secretkey: vaUjPhUkR8yLAdqVD6FRnXGVNrxBNDs9bMWFb6Kb
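Apply the manifest with `kubectl apply -f secret.yaml`. As a side note, Kubernetes stores `stringData` values base64-encoded under `data`; the stored form can be previewed locally, e.g. for the `username` field above:

```shell
# Preview how Kubernetes will store the username field of
# argo-postgres-config (stringData is base64-encoded into data).
printf '%s' 'postgres' | base64
```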

1.6 References

Service Accounts - Argo Workflows - The workflow engine for Kubernetes

Argo Workflows Quick Start Guide (Chinese) - Tencent Cloud Developer Community

Access Token - Argo Workflows - The workflow engine for Kubernetes

argo workflows — Configuring persistence for a Helm-installed Argo Workflows - Juejin

Workflow Archive - Argo Workflows - The workflow engine for Kubernetes

2. airflow

2.1 Prepare the offline Helm deployment files

wget https://github.com/airflow-helm/charts/releases/download/airflow-8.9.0/airflow-8.9.0.tgz
tar xvf airflow-8.9.0.tgz
vim values.yaml # change the web and flower Services from ClusterIP to NodePort
web:
  service:
    annotations: {}
    sessionAffinity: "None"
    sessionAffinityConfig: {}
    type: NodePort

# change the PVC settings of postgresql
postgresql:
  persistence:
    ## if postgres will use Persistent Volume Claims to store data
    ## - [WARNING] if false, data will be LOST as postgres Pods restart
    ##
    enabled: true

    ## the name of the StorageClass used by the PVC
    ##
    storageClass: "local-path"
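Note that the `local-path` StorageClass referenced above is not part of the chart; it must already exist in the cluster (e.g. from Rancher's local-path-provisioner). A quick pre-flight check:

```shell
# Fail early if the StorageClass the postgres PVC depends on is missing.
kubectl get storageclass local-path
```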

Image list:

apache/airflow:2.8.4-python3.9
registry.k8s.io/git-sync/git-sync:v3.6.9
ghcr.io/airflow-helm/pgbouncer:1.22.1-patch.0
ghcr.io/airflow-helm/postgresql-bitnami:11.22-patch.0
docker.io/bitnami/redis:6.2.14-debian-12-r17

Package the offline image files with the following script:

#!/bin/bash
set -e

# first cd into the script directory
script_dir=$(cd "$(dirname "$0")" && pwd)

cd "$script_dir"
rm -rf images && mkdir images && cd images

images_list=(
  # k8s
  "apache/airflow:2.8.4-python3.9"
  "registry.k8s.io/git-sync/git-sync:v3.6.9"
  "ghcr.io/airflow-helm/pgbouncer:1.22.1-patch.0"
  "ghcr.io/airflow-helm/postgresql-bitnami:11.22-patch.0"
  "docker.io/bitnami/redis:6.2.14-debian-12-r17"
)

images_list="${images_list[*]}"

for img in $images_list; do
  echo -e "\e[94m -> Preparing $img... \e[39m"
  ./bin/ctr -n k8s.io images pull --platform linux/amd64 "$img" --hosts-dir "$script_dir"
done

eval "ctr -n k8s.io images export --platform linux/amd64 ../containerd_images.tar ${images_list}"

2.2 Deploy

helm install airflow /disk2/shared/build_offline_origin/airflow/airflow -n airflow
Default Airflow Webserver login:
* Username: admin
* Password: admin
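Since the web Service was switched to NodePort, the externally reachable port can be looked up after the install; the Service name below assumes a release named `airflow` (the chart prefixes Services with the release name), so adjust it to your own release:

```shell
# Print the NodePort assigned to the Airflow webserver Service.
kubectl get svc airflow-web -n airflow \
  -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
```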
  • Title: argo/airflow Deployment
  • Author: Ethereal
  • Created at: 2025-03-10 18:32:21
  • Updated at: 2025-03-10 18:41:16
  • Link: https://ethereal-o.github.io/2025/03/10/argo-airflow部署/
  • License: This work is licensed under CC BY-NC-SA 4.0.