diff --git a/docs/docs/administrator_guide/studio/k8s_guide/_category_.json b/docs/docs/administrator_guide/studio/k8s_guide/_category_.json
deleted file mode 100644
index 1fea42224d..0000000000
--- a/docs/docs/administrator_guide/studio/k8s_guide/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
- "label": "k8s 集成手册",
- "position": 4
-}
diff --git a/docs/docs/administrator_guide/studio/k8s_guide/dinky_k8s_quick_start.mdx b/docs/docs/administrator_guide/studio/k8s_guide/dinky_k8s_quick_start.mdx
deleted file mode 100644
index 782078a95a..0000000000
--- a/docs/docs/administrator_guide/studio/k8s_guide/dinky_k8s_quick_start.mdx
+++ /dev/null
@@ -1,556 +0,0 @@
----
-sidebar_position: 1
-id: dinky_k8s_quick_start
-title: Quick k8s Integration with Dinky
----
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
-## k8s Environment Initialization
-**1. Flink k8s integration documentation links**
-
-
-
-
-
-[https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/resource-providers/native_kubernetes/)
-
-
-
-
-[https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/deployment/resource-providers/native_kubernetes/)
-
-
-
-
-[https://nightlies.apache.org/flink/flink-docs-release-1.15/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.15/zh/docs/deployment/resource-providers/native_kubernetes/)
-
-
-
-
-[https://nightlies.apache.org/flink/flink-docs-release-1.16/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.16/zh/docs/deployment/resource-providers/native_kubernetes/)
-
-
-
-
-**2. Run the following commands to grant RBAC permissions**
-```shell
-# Create the namespace
-kubectl create namespace dinky
-# Grant permissions within the namespace
-kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=dinky:default
-```
-
-:::tip
-The NAMESPACE variable below is the k8s namespace in which your Flink clusters will run:
-```shell
-kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=${NAMESPACE}:default
-```
-:::
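-
-To confirm the binding took effect, a quick check with kubectl auth can-i can be run against the dinky namespace created above:
-```shell
-# Verify that the default service account in the dinky namespace may create pods
-kubectl auth can-i create pods --as=system:serviceaccount:dinky:default -n dinky
-```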
-
----
-## Image Building
-### Building a Custom Image
-#### Setting Up a Private Image Registry
-##### Configure the Registry
-1. Pull the registry image
-```shell
-docker pull registry
-```
-2. Create the image storage directory
-```shell
-mkdir /docker/registry -p
-```
-3. Run the registry
-```shell
-docker run -itd -v /docker/registry/:/docker/registry -p 5000:5000 --restart=always --name registry registry:latest
-```
-Parameter notes:
-* -itd: allocate a pseudo-terminal for interactive use and run the container in the background
-* -v: map a container directory to a host directory, used to store images
-* -p: map a container port to a host port, used to expose the service
-* --restart=always: set the restart policy
-* --name registry: name the container registry
-* registry:latest: the container image to run
-
-##### Configure the Client
-Point the Docker client at the registry address and port:
-```shell
-vim /etc/docker/daemon.json
-
-# Add the following content
-{
- "insecure-registries": ["192.168.0.10:5000"]
-}
-```
-192.168.0.10 is the IP of the machine hosting the private image registry.
-
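-Docker must be restarted for the daemon.json change to take effect. A sanity check against the registry's catalog API (assuming the registry from above at 192.168.0.10:5000) might look like:
-```shell
-systemctl restart docker
-# List repositories in the private registry to confirm it is reachable
-curl http://192.168.0.10:5000/v2/_catalog
-```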
-
-#### Dockerfile Template
-`Dinky_HOME/config/DinkyFlinkDockerfile` provides a Dockerfile built around Flink 1.15. To use a different version, adjust it accordingly.
-
-```shell
-ARG FLINK_VERSION=1.15.4
-ARG FLINK_BIG_VERSION=1.15
-
-FROM flink:${FLINK_VERSION}-scala_2.12
-
-ARG FLINK_VERSION
-ARG FLINK_BIG_VERSION
-ENV PYTHON_HOME /opt/miniconda3
-
-USER root
-RUN wget "https://s3.jcloud.sjtu.edu.cn/899a892efef34b1b944a19981040f55b-oss01/anaconda/miniconda/Miniconda3-py38_4.9.2-Linux-x86_64.sh" -O "miniconda.sh" && chmod +x miniconda.sh
-RUN ./miniconda.sh -b -p $PYTHON_HOME && chown -R flink $PYTHON_HOME && ls $PYTHON_HOME
-
-RUN mkdir /opt/dinky
-ADD app /opt/dinky
-ADD plugins /opt/flink/lib
-
-ENV HADOOP_VERSION 3.3.4
-ENV HADOOP_HOME=/opt/hadoop
-ADD hadoop-${HADOOP_VERSION}.tar.gz /opt
-RUN ln -s /opt/hadoop-${HADOOP_VERSION} ${HADOOP_HOME}
-ENV HADOOP_CLASSPATH=${HADOOP_HOME}/etc/hadoop:${HADOOP_HOME}/share/hadoop/common/lib/*:${HADOOP_HOME}/share/hadoop/common/*:${HADOOP_HOME}/share/hadoop/hdfs:${HADOOP_HOME}/share/hadoop/hdfs/lib/*:${HADOOP_HOME}/share/hadoop/hdfs/*:${HADOOP_HOME}/share/hadoop/yarn/lib/*:${HADOOP_HOME}/share/hadoop/yarn/*:${HADOOP_HOME}/share/hadoop/mapreduce/lib/*:${HADOOP_HOME}/share/hadoop/mapreduce/*:${HADOOP_HOME}/contrib/capacity-scheduler/*.jar
-ENV HADOOP_CONF_DIR=${HADOOP_HOME}/conf
-ENV PATH=${PATH}:${HADOOP_CLASSPATH}:${HADOOP_CONF_DIR}:${HADOOP_HOME}/bin
-
-USER flink
-RUN rm -rf ${FLINK_HOME}/lib/flink-table-planner-loader-1.15.4.jar
-ENV PATH $PYTHON_HOME/bin:$PATH
-RUN pip install "apache-flink==${FLINK_VERSION}" -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
-```
-Files under the app directory:
-- dinky-app-1.15-0.8.0-jar-with-dependencies.jar
-
-Files under the plugins directory:
-- flink-table-common-1.15.4.jar
-- flink-table-planner_2.12-1.15.4.jar
-
-
-#### Build the Image and Push It to the Private Registry
-```shell
-docker build -t dinky-flink:0.8.0-1.15.4 . --no-cache
-
-docker tag dinky-flink:0.8.0-1.15.4 192.168.0.10:5000/dinky-flink:0.8.0-1.15.4
-
-docker push 192.168.0.10:5000/dinky-flink:0.8.0-1.15.4
-```
-
-### Building the Image from the UI
-`Registration Center -> Cluster Management -> Cluster Configuration -> Create -> Test`
-![add_k8s_conf.png](http://www.aiwenmo.com/dinky/dev/docs/k8s/add_k8s_conf.png)
-> After filling in the information, click Test. After roughly 3-5 minutes a success result appears; at that point run `docker images` to see the newly built image.
-
-![docker_images.png](http://www.aiwenmo.com/dinky/dev/docs/k8s/docker_images.png)
-Parameter details
-* instance: the Docker instance, either local (unix:///var/run/docker.sock) or remote (tcp://remoteIp:2375); remote instances currently do not support the **COPY** instruction, so plan around it
-* registry.url: the registry address, e.g. Alibaba Cloud, docker.io, or Harbor
-* (registry-username registry-password): the registry login credentials
-* image-namespace: the image namespace
-* image-storehouse: the image repository
-* image-dinkyVersion: the image version
-* Dinky remote address: the address the k8s containers use to communicate with Dinky
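-
-For the remote tcp://remoteIp:2375 form, the Docker daemon on that host must expose its API over TCP. One way to do this on a systemd host (a sketch; the endpoint is unauthenticated, so restrict it to trusted networks):
-```shell
-mkdir -p /etc/systemd/system/docker.service.d
-cat <<EOF > /etc/systemd/system/docker.service.d/override.conf
-[Service]
-ExecStart=
-ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
-EOF
-systemctl daemon-reload && systemctl restart docker
-```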
-
-## Configure k8s Pods to Access the Host's Dinky Services
-```shell
-cd /opt/dinky/k8s
-cat <<EOF > dinky-endpoints.yaml
-apiVersion: v1
-kind: Endpoints
-metadata:
- name: dinky-endpoint
- namespace: dinky
-subsets:
- - addresses:
- - ip: 192.168.0.10
- ports:
- - name: port3306
- port: 3306
- - name: port8888
- port: 8888
-EOF
-
-cat <<EOF > dinky-service.yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: dinky-endpoint
- namespace: dinky
-spec:
- ports:
- - name: port3306
- protocol: TCP
- port: 3306
- - name: port8888
- protocol: TCP
- port: 8888
-EOF
-
-# Create the Endpoints resource
-kubectl create -f dinky-endpoints.yaml
-
-# Create the Service
-kubectl create -f dinky-service.yaml
-
-# View the Endpoints
-kubectl get endpoints dinky-endpoint -n dinky
-
-# View the Service
-kubectl get svc dinky-endpoint -n dinky
-```
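-
-To verify that pods can reach the host through this Service, a throwaway busybox pod can probe the Dinky web port (a sketch using the Service's cluster DNS name):
-```shell
-kubectl run -it --rm nettest --image=busybox --restart=Never -n dinky \
-  -- sh -c 'wget -q -O- http://dinky-endpoint.dinky.svc.cluster.local:8888 >/dev/null && echo reachable'
-```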
-
-## Configure k8s Pods to Access Hadoop Services
-```shell
-cd /opt/dinky/k8s
-cat <<EOF > dinky_emr_external_name.yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: dinky-namenode-0-externalname
- namespace: dinky
-spec:
- type: ExternalName
- externalName: my-hdfs-namenode-0.my-hdfs-namenode.my-hdfs.svc.cluster.local
- ports:
- - name: namenode-0
- port: 8020
- targetPort: 8020
----
-apiVersion: v1
-kind: Service
-metadata:
- name: dinky-namenode-1-externalname
- namespace: dinky
-spec:
- type: ExternalName
- externalName: my-hdfs-namenode-1.my-hdfs-namenode.my-hdfs.svc.cluster.local
- ports:
- - name: namenode-1
- port: 8020
- targetPort: 8020
----
-apiVersion: v1
-kind: Service
-metadata:
- name: dinky-zookeeper-0-externalname
- namespace: dinky
-spec:
- type: ExternalName
- externalName: my-hdfs-zookeeper-0.my-hdfs-zookeeper-headless.my-hdfs.svc.cluster.local
- ports:
- - name: zookeeper-0
- port: 2181
- targetPort: 2181
----
-apiVersion: v1
-kind: Service
-metadata:
- name: dinky-zookeeper-1-externalname
- namespace: dinky
-spec:
- type: ExternalName
- externalName: my-hdfs-zookeeper-1.my-hdfs-zookeeper-headless.my-hdfs.svc.cluster.local
- ports:
- - name: zookeeper-1
- port: 2181
- targetPort: 2181
----
-apiVersion: v1
-kind: Service
-metadata:
- name: dinky-zookeeper-2-externalname
- namespace: dinky
-spec:
- type: ExternalName
- externalName: my-hdfs-zookeeper-2.my-hdfs-zookeeper-headless.my-hdfs.svc.cluster.local
- ports:
- - name: zookeeper-2
- port: 2181
- targetPort: 2181
-EOF
-```
-```shell
-# Create the services
-kubectl create -f dinky_emr_external_name.yaml
-
-# View the services
-kubectl get svc -n dinky
-```
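-
-A quick way to confirm the ExternalName aliases resolve from inside the cluster is a DNS lookup from a throwaway pod (a sketch using busybox's nslookup):
-```shell
-kubectl run -it --rm dnstest --image=busybox --restart=Never -n dinky \
-  -- nslookup dinky-namenode-0-externalname.dinky.svc.cluster.local
-```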
-
-## Create the Hadoop ConfigMap
-```shell
-cd /opt/dinky/k8s
-cat <<EOF > hadoop-configmap.yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: hadoop-configmap
-data:
-  core-site.xml: |
-    <?xml version="1.0" encoding="UTF-8"?>
-    <configuration>
-      <property>
-        <name>hadoop.proxyuser.dcos.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.dcos.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.ec2-user.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.ec2-user.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.hive.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.hive.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.httpfs.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.httpfs.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.hue.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.hue.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.livy.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.livy.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.root.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.root.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.sqoop.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.sqoop.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.zeppelin.groups</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.proxyuser.zeppelin.hosts</name>
-        <value>*</value>
-      </property>
-      <property>
-        <name>hadoop.tmp.dir</name>
-        <value>/usr/local/hadoop/tmp</value>
-      </property>
-      <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://default</value>
-      </property>
-      <property>
-        <name>ha.zookeeper.quorum</name>
-        <value>my-hdfs-zookeeper-1.my-hdfs-zookeeper-headless.my-hdfs.svc.cluster.local:2181,my-hdfs-zookeeper-2.my-hdfs-zookeeper-headless.my-hdfs.svc.cluster.local:2181,my-hdfs-zookeeper-0.my-hdfs-zookeeper-headless.my-hdfs.svc.cluster.local:2181</value>
-      </property>
-      <property>
-        <name>ha.zookeeper.parent-znode</name>
-        <value>/hadoop-ha/hdfs-k8s</value>
-      </property>
-      <property>
-        <name>fs.trash.interval</name>
-        <value>1440</value>
-      </property>
-      <property>
-        <name>fs.trash.checkpoint.interval</name>
-        <value>0</value>
-      </property>
-      <property>
-        <name>fs.permissions.umask-mode</name>
-        <value>037</value>
-      </property>
-    </configuration>
-  hdfs-site.xml: |
-    <?xml version="1.0" encoding="UTF-8"?>
-    <configuration>
-      <property>
-        <name>dfs.nameservices</name>
-        <value>default</value>
-      </property>
-      <property>
-        <name>dfs.ha.namenodes.default</name>
-        <value>nn0,nn1</value>
-      </property>
-      <property>
-        <name>dfs.namenode.rpc-address.default.nn0</name>
-        <value>my-hdfs-namenode-0.my-hdfs-namenode.my-hdfs.svc.cluster.local:8020</value>
-      </property>
-      <property>
-        <name>dfs.namenode.rpc-address.default.nn1</name>
-        <value>my-hdfs-namenode-1.my-hdfs-namenode.my-hdfs.svc.cluster.local:8020</value>
-      </property>
-      <property>
-        <name>dfs.namenode.shared.edits.dir</name>
-        <value>qjournal://my-hdfs-journalnode-1.my-hdfs-journalnode.my-hdfs.svc.cluster.local:8485;my-hdfs-journalnode-2.my-hdfs-journalnode.my-hdfs.svc.cluster.local:8485;my-hdfs-journalnode-0.my-hdfs-journalnode.my-hdfs.svc.cluster.local:8485/default</value>
-      </property>
-      <property>
-        <name>dfs.ha.automatic-failover.enabled</name>
-        <value>true</value>
-      </property>
-      <property>
-        <name>dfs.ha.fencing.methods</name>
-        <value>
-          sshfence
-          shell(/bin/true)
-        </value>
-      </property>
-      <property>
-        <name>dfs.ha.fencing.ssh.private-key-files</name>
-        <value>/etc/security/ssh/id_rsa</value>
-      </property>
-      <property>
-        <name>dfs.journalnode.edits.dir</name>
-        <value>/hadoop/dfs/journal</value>
-      </property>
-      <property>
-        <name>dfs.client.failover.proxy.provider.default</name>
-        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
-      </property>
-      <property>
-        <name>dfs.namenode.name.dir</name>
-        <value>file:///hadoop/dfs/name</value>
-      </property>
-      <property>
-        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
-        <value>false</value>
-      </property>
-      <property>
-        <name>dfs.namenode.rpc-bind-host</name>
-        <value>0.0.0.0</value>
-      </property>
-      <property>
-        <name>dfs.namenode.http-bind-host</name>
-        <value>0.0.0.0</value>
-      </property>
-      <property>
-        <name>dfs.datanode.handler.count</name>
-        <value>10</value>
-      </property>
-      <property>
-        <name>dfs.datanode.max.xcievers</name>
-        <value>8192</value>
-      </property>
-      <property>
-        <name>dfs.datanode.max.transfer.threads</name>
-        <value>8192</value>
-      </property>
-      <property>
-        <name>dfs.datanode.data.dir</name>
-        <value>/hadoop/dfs/data/0</value>
-      </property>
-      <property>
-        <name>dfs.client.use.datanode.hostname</name>
-        <value>true</value>
-      </property>
-      <property>
-        <name>dfs.datanode.use.datanode.hostname</name>
-        <value>false</value>
-      </property>
-      <property>
-        <name>dfs.replication</name>
-        <value>3</value>
-      </property>
-    </configuration>
-EOF
-```
-Create the ConfigMap:
-```shell
-kubectl create -f /opt/dinky/k8s/hadoop-configmap.yaml -n dinky
-```
-View the ConfigMap:
-```shell
-kubectl get configmap -n dinky
-```
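-
-To confirm that both XML files landed as keys in the ConfigMap, inspect its data section:
-```shell
-# core-site.xml and hdfs-site.xml should be listed under Data
-kubectl describe configmap hadoop-configmap -n dinky
-```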
-
-### Create the Pod Template
-```shell
-cd /opt/dinky/k8s
-cat <<EOF > pod-template.yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: flink-pod-template
-spec:
- initContainers:
- - name: init-volume-mount
- image: 192.168.0.10:5000/dinky-flink:0.8.0-1.15.4
- command: [ 'sh', '-c', 'chown -R flink:flink /opt/flink/log']
- volumeMounts:
- - name: flink-logs
- mountPath: /opt/flink/log
- containers:
- - name: flink-main-container
- volumeMounts:
- - name: flink-logs
- mountPath: /opt/flink/log
- volumes:
- - name: flink-logs
- hostPath:
- path: /data/logs/flink/
- type: Directory
-EOF
-```
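-
-Because the volume is a `hostPath` with `type: Directory`, the directory must already exist on every node that may schedule Flink pods, or the pods will fail to start. Create it up front, for example:
-```shell
-# Run on every k8s node that can host Flink pods
-mkdir -p /data/logs/flink
-```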
-
-## Create a k8s Cluster Configuration
-
-![add_k8s_conf_001.png](./img/add_k8s_conf_001.jpg)
-![add_k8s_conf_002.png](./img/add_k8s_conf_002.jpg)
-![add_k8s_conf_003.png](./img/add_k8s_conf_003.jpg)
-![add_k8s_conf_004.png](./img/add_k8s_conf_004.jpg)
-
-The configuration parameters are as follows:
-```shell
-Dinky remote address: 172.23.31.0:8888
-# k8s namespace
-kubernetes.namespace: dinky
-# Use the private image address
-kubernetes.container.image: 192.168.0.10:5000/dinky-flink:0.8.0-1.15.4
-kubernetes.rest-service.exposed.type: NodePort
-kubernetes.jobmanager.cpu: 1
-kubernetes.taskmanager.cpu: 1
-kubernetes.pod-template-file: /opt/dinky/k8s/pod-template.yaml
-# Name of the Hadoop ConfigMap created above
-kubernetes.hadoop.conf.config-map.name: hadoop-configmap
-Flink configuration file path: /opt/flink/conf/
-jobmanager.memory.process.size: 1G
-taskmanager.memory.process.size: 2G
-taskmanager.numberOfTaskSlots: 2
-state.savepoints.dir: hdfs:///flink/savepoints
-state.checkpoints.dir: hdfs:///flink/checkpoints
-Jar file path: local:///opt/dinky/dinky-app-1.15-0.8.0-jar-with-dependencies.jar
-```
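-
-For reference, the same settings expressed as a hand-rolled native Flink CLI submission (a sketch for cross-checking values only, not how Dinky submits) would look roughly like:
-```shell
-./bin/flink run-application \
-    --target kubernetes-application \
-    -Dkubernetes.namespace=dinky \
-    -Dkubernetes.container.image=192.168.0.10:5000/dinky-flink:0.8.0-1.15.4 \
-    -Dkubernetes.rest-service.exposed.type=NodePort \
-    -Dkubernetes.pod-template-file=/opt/dinky/k8s/pod-template.yaml \
-    -Dkubernetes.hadoop.conf.config-map.name=hadoop-configmap \
-    local:///opt/dinky/dinky-app-1.15-0.8.0-jar-with-dependencies.jar
-```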
diff --git a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_001.jpg b/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_001.jpg
deleted file mode 100644
index 2dfcb96b0f..0000000000
Binary files a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_001.jpg and /dev/null differ
diff --git a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_002.jpg b/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_002.jpg
deleted file mode 100644
index 2e52cff612..0000000000
Binary files a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_002.jpg and /dev/null differ
diff --git a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_003.jpg b/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_003.jpg
deleted file mode 100644
index df370a7763..0000000000
Binary files a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_003.jpg and /dev/null differ
diff --git a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_004.jpg b/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_004.jpg
deleted file mode 100644
index 9ee65116b0..0000000000
Binary files a/docs/docs/administrator_guide/studio/k8s_guide/img/add_k8s_conf_004.jpg and /dev/null differ
diff --git a/docs/docs/administrator_guide/studio/k8s_guide/remote_link_k8s.md b/docs/docs/administrator_guide/studio/k8s_guide/remote_link_k8s.md
deleted file mode 100644
index 1fff33bdd2..0000000000
--- a/docs/docs/administrator_guide/studio/k8s_guide/remote_link_k8s.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-sidebar_position: 2
-id: remote_link_k8s
-title: Remotely Connect to a k8s Cluster
----
-
-## File Preparation
-1. Obtain the .kube/config file and place it on the local machine
-
-> Running cat ~/.kube/config will usually print the contents of this file.
->
-> For Kubernetes clusters installed with kubeadm or kubeadm-based tools (such as kuboard-spray), run cat /etc/kubernetes/admin.conf on a control-plane node.
->
-> For k3s, run cat /etc/rancher/k3s/k3s.yaml
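-
-Once the file is on the machine running Dinky, it can be verified before wiring it into Dinky (a sketch; /path/to/config is a placeholder for wherever the file was saved):
-```shell
-# A successful call confirms both network reachability and credentials
-kubectl --kubeconfig=/path/to/config get nodes
-```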
-
-:::warning Caveats
-
-Some Kubernetes clusters (for example Amazon EKS) are not yet supported via kubeconfig, because their kubeconfig files differ from those of kubeadm-installed clusters.
-
-The content above is adapted from [Kuboard](https://kuboard.cn/)
-:::
diff --git a/docs/docs/data_integration_guide/dinky_k8s_quick_start.mdx b/docs/docs/data_integration_guide/dinky_k8s_quick_start.mdx
new file mode 100644
index 0000000000..615006c4ac
--- /dev/null
+++ b/docs/docs/data_integration_guide/dinky_k8s_quick_start.mdx
@@ -0,0 +1,198 @@
+---
+sidebar_position: 1
+id: dinky_k8s_quick_start
+title: K8s Integration
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Dinky supports the following Flink on k8s run modes:
+
+- Native-Kubernetes Application
+- Native-Kubernetes Session
+- Kubernetes Operator Application (based on the official operator)
+
+> Dinky does not have to be deployed inside the Kubernetes cluster or on a Kubernetes node, but network connectivity between Dinky and the Kubernetes cluster is essential so that the two can interact.
+If you submit with the ClusterIP exposed type, you must additionally ensure that the Kubernetes internal network can reach Dinky.
+
+
+## k8s Environment Preparation
+**Parts of this can be cross-checked against the Flink k8s integration docs:**
+
+
+
+
+
+[https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/resource-providers/native_kubernetes/)
+
+
+
+
+[https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/deployment/resource-providers/native_kubernetes/)
+
+
+
+
+[https://nightlies.apache.org/flink/flink-docs-release-1.15/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.15/zh/docs/deployment/resource-providers/native_kubernetes/)
+
+
+
+
+[https://nightlies.apache.org/flink/flink-docs-release-1.16/zh/docs/deployment/resource-providers/native_kubernetes/](https://nightlies.apache.org/flink/flink-docs-release-1.16/zh/docs/deployment/resource-providers/native_kubernetes/)
+
+
+
+
+
+If your k8s cluster already has a namespace and permission configuration in place, you can skip this step.
+```shell
+# Create the namespace
+kubectl create namespace dinky
+# Grant permissions within the namespace
+kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=dinky:default
+```
+
+:::tip
+The commands above create a namespace named dinky and bind the edit role to its default service account. Adjust this to your own needs; see the official docs: [https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes/#rbac](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/resource-providers/native_kubernetes/#rbac)
+:::
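+
+A slightly tighter setup, following the RBAC example in the Flink docs linked above, is a dedicated service account instead of default (a sketch; the name flink-service-account is an arbitrary choice):
+```shell
+# Create a dedicated service account and give it edit rights in the dinky namespace
+kubectl create serviceaccount flink-service-account -n dinky
+kubectl create clusterrolebinding flink-role-binding-flink --clusterrole=edit --serviceaccount=dinky:flink-service-account
+```
+If you use this, set the K8s submission account in the cluster configuration below to flink-service-account instead of default.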
+
+---
+## Image Building
+The rest of this tutorial uses Flink 1.15 as the example. If you use another version, simply change the version numbers accordingly.
+### Building the Image Manually
+#### Base Image Dockerfile Template
+**First, prepare the following jars in the local extends directory:**
+- commons-cli-1.3.1.jar
+- dinky-app-1.15-1.0.0-SNAPSHOT-jar-with-dependencies.jar
+- flink-table-planner_2.12-1.15.4.jar
+
+> These are only the required base dependencies; if your Flink jobs need others, add them as well.
+
+**Write the Dockerfile**
+```shell
+# Flink version
+ARG FLINK_VERSION=1.15.4
+
+# Official Flink image tag
+FROM flink:${FLINK_VERSION}-scala_2.12
+
+# Add the jars under the local extends directory to Flink's lib directory
+ADD extends /opt/flink/lib
+
+# Remove the planner-loader jar; it is replaced by the non-loader planner added above
+RUN rm -rf ${FLINK_HOME}/lib/flink-table-planner-loader-*.jar
+```
+
+**Build the image and push it to a private registry**
+```shell
+# Build the Dinky app image
+docker build -t dinky-flink:1.0.0-1.15.4 . --no-cache
+# Push to a private registry; adjust these parameters as needed
+docker tag dinky-flink:1.0.0-1.15.4 192.168.0.10:5000/dinky-flink:1.0.0-1.15.4
+docker push 192.168.0.10:5000/dinky-flink:1.0.0-1.15.4
+```
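+
+A quick check that the extends jars actually landed in the image (using the tag built above):
+```shell
+# The three jars should appear alongside Flink's own libraries
+docker run --rm dinky-flink:1.0.0-1.15.4 ls /opt/flink/lib
+```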
+
+#### Other Dockerfile Templates
+##### Python Support
+
+```shell
+ARG FLINK_VERSION=1.15.4
+
+FROM flink:${FLINK_VERSION}-scala_2.12
+
+ARG FLINK_VERSION
+ENV PYTHON_HOME /opt/miniconda3
+
+USER root
+RUN wget "https://s3.jcloud.sjtu.edu.cn/899a892efef34b1b944a19981040f55b-oss01/anaconda/miniconda/Miniconda3-py38_4.9.2-Linux-x86_64.sh" -O "miniconda.sh" && chmod +x miniconda.sh
+RUN ./miniconda.sh -b -p $PYTHON_HOME && chown -R flink $PYTHON_HOME && ls $PYTHON_HOME
+
+USER flink
+RUN rm -rf ${FLINK_HOME}/lib/flink-table-planner-loader-*.jar
+# Add the jars under the local extends directory to Flink's lib directory
+ADD extends /opt/flink/lib
+ENV PATH $PYTHON_HOME/bin:$PATH
+RUN pip install "apache-flink==${FLINK_VERSION}" -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
+```
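+
+To verify the Python environment inside the image (a sketch; the tag dinky-flink-py:1.0.0-1.15.4 is an assumed name for an image built from this template):
+```shell
+docker run --rm dinky-flink-py:1.0.0-1.15.4 python -c "import pyflink; print(pyflink.__file__)"
+```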
+
+##### Hadoop Support
+Download the Hadoop distribution `hadoop-3.3.4.tar.gz` into the current directory beforehand; the version can be changed as needed.
+```shell
+ARG FLINK_VERSION=1.15.4
+
+FROM flink:${FLINK_VERSION}-scala_2.12
+
+ARG FLINK_VERSION
+
+ENV HADOOP_VERSION 3.3.4
+ENV HADOOP_HOME=/opt/hadoop
+ADD hadoop-${HADOOP_VERSION}.tar.gz /opt
+RUN ln -s /opt/hadoop-${HADOOP_VERSION} ${HADOOP_HOME}
+ENV HADOOP_CLASSPATH=${HADOOP_HOME}/etc/hadoop:${HADOOP_HOME}/share/hadoop/common/lib/*:${HADOOP_HOME}/share/hadoop/common/*:${HADOOP_HOME}/share/hadoop/hdfs:${HADOOP_HOME}/share/hadoop/hdfs/lib/*:${HADOOP_HOME}/share/hadoop/hdfs/*:${HADOOP_HOME}/share/hadoop/yarn/lib/*:${HADOOP_HOME}/share/hadoop/yarn/*:${HADOOP_HOME}/share/hadoop/mapreduce/lib/*:${HADOOP_HOME}/share/hadoop/mapreduce/*:${HADOOP_HOME}/contrib/capacity-scheduler/*.jar
+ENV HADOOP_CONF_DIR=${HADOOP_HOME}/conf
+ENV PATH=${PATH}:${HADOOP_CLASSPATH}:${HADOOP_CONF_DIR}:${HADOOP_HOME}/bin
+
+USER flink
+RUN rm -rf ${FLINK_HOME}/lib/flink-table-planner-loader-*.jar
+# Add the jars under the local extends directory to Flink's lib directory
+ADD extends /opt/flink/lib
+```
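+
+A sanity check that the Hadoop distribution unpacked correctly (a sketch; dinky-flink-hadoop:1.0.0-1.15.4 is an assumed tag for an image built from this template):
+```shell
+docker run --rm dinky-flink-hadoop:1.0.0-1.15.4 /opt/hadoop/bin/hadoop version
+```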
+
+### Building the Image from the UI
+> Coming soon, please stay tuned.
+
+## Configure Kubernetes Cluster Information
+On the **Registration Center** page, click **Cluster ==> Cluster Configuration ==> Create** to open the cluster creation page.
+![screenshot](http://www.aiwenmo.com/dinky/docs/test/screenshot20231225.png)
+Choose `Kubernetes Native` or `Kubernetes Operator` as the type. Kubernetes Native is currently the recommended approach; the operator integration is still in beta.
+
+#### Fill in the Cluster Information
+
+##### Kubernetes Configuration
+
+| Parameter | Description | Required | Default | Example |
+|--------|--------------------------------------------------------|:----:|:---:|:----------:|
+| Exposed port type | NodePort and ClusterIP are supported | Yes | None | NodePort |
+| Kubernetes namespace | The Kubernetes namespace the cluster runs in | Yes | None | dinky |
+| K8s submission account | The account used to submit jobs to the cluster | Yes | None | default |
+| Flink image address | The address of the image built in the previous step | Yes | None | dinky-flink:1.0.0-1.15.4 |
+| JobManager CPU | JobManager resource configuration | No | None | 1 |
+| TaskManager CPU | TaskManager resource configuration | No | None | 1 |
+| Flink configuration file path | Specify the folder only; Dinky reads the configuration files in it and uses them as Flink defaults | No | None | /opt/flink/conf |
+
+> If you need other configuration options, click the Add Configuration button; once done, click Save.
+
+##### Kubernetes Connection and Pod Configuration
+| Parameter | Description | Required | Default | Example |
+|--------|--------------------------------------------------------|:----:|:---:|:----------:|
+| K8s KubeConfig | The cluster's KubeConfig content; if left empty, the `~/.kube/config` file is used | No | None | None |
+| Default Pod Template | The default Pod template | No | None | None |
+| JM Pod Template | The JobManager Pod template | No | None | None |
+| TM Pod Template | The TaskManager Pod template | No | None | None |
+
+##### FlinkSQL Submission Options (Required for Application Mode) - Common Configuration
+
+| Parameter | Description | Required | Default | Example |
+|--------|----------------------------------------------------------------------------------------------------------|:----:|:---:|:--------------:|
+| Jar file path | Path of the dinky-app jar inside the image; required when this cluster configuration is used to submit Application-mode jobs | No | None | local:///opt/flink/dinky-app-1.16-1.0.0-jar-with-dependencies.jar |
+> Due to a Flink limitation, k8s mode can only load jars that are inside the image, so the address must start with local://. To submit custom jars, see the jar submission section.
+
+
+##### Flink Preset Configuration (High Priority) - Common Configuration
+
+| Parameter | Description | Required | Default | Example |
+|-----------------|----|:----:|:---:|:--:|
+| JobManager memory | JobManager memory size | No | None | 1g |
+| TaskManager memory | TaskManager memory size | No | None | 1g |
+| TaskManager heap memory | TaskManager heap memory size | No | None | 1g |
+| Slots | Number of task slots | No | None | 2 |
+| Savepoint path | The SavePoint directory | No | None | hdfs:///flink/savepoint |
+| Checkpoint path | The CheckPoint directory | No | None | hdfs:///flink/checkpoint |
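+
+For reference, these fields plausibly map to standard Flink configuration keys; a flink-conf.yaml equivalent of the example values above (the exact mapping is an assumption) would be:
+```shell
+jobmanager.memory.process.size: 1g
+taskmanager.memory.process.size: 1g
+taskmanager.memory.task.heap.size: 1g
+taskmanager.numberOfTaskSlots: 2
+state.savepoints.dir: hdfs:///flink/savepoint
+state.checkpoints.dir: hdfs:///flink/checkpoint
+```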
+
+## Start a Session Cluster (Optional)
+Besides manually deploying a session cluster yourself, Dinky provides a shortcut for deploying a Kubernetes session cluster: once the Kubernetes cluster configuration above is complete, click the Start button to submit a session cluster to the specified Kubernetes cluster.
+![screenshot](http://www.aiwenmo.com/dinky/docs/test/20231225194322.png)
+
+At this point all preparation is complete, and jobs can be submitted in `kubernetes session` or `kubernetes application` mode.
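+
+One way to confirm the session cluster came up is to look for its pods and REST service in the target namespace (a sketch, assuming the dinky namespace from earlier):
+```shell
+# JobManager/TaskManager pods and the cluster's rest service should appear here
+kubectl get pods,svc -n dinky
+```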
+
+## Submit a kubernetes application Job
+Go to the Data Studio page, create a new Flink SQL task, set the cluster type to `kubernetes application`, select the cluster configured above, and click Submit.
+![screenshot](http://www.aiwenmo.com/dinky/docs/test/20231225194949.png)
+## Submit a kubernetes session Job
+Go to the Data Studio page, create a new Flink SQL task, set the cluster type to `kubernetes session`, select the cluster configured above, and click Submit.
+The screenshot is the same as above.
\ No newline at end of file