
Setting Up a Sandbox Environment for a Data/Streaming Lakehouse with ZooKage on K3d

Posted at 2025-01-03

Don't you often find yourself wanting to try Hadoop, Spark, Ozone, Trino, and friends in combination, right on your own machine (well, "often" may be a stretch)?

At times like that, standing all of them up in a desktop environment such as Docker Desktop (or Rancher Desktop) is still a bit of a hassle.

So this post is about using ZooKage to spin up such an environment quickly.

ZooKage is a tool that makes it easy to build a sandbox with the following components on a desktop environment such as Docker Desktop:

  • Apache Hadoop(HDFS, YARN, MapReduce)
  • Apache HBase
  • Apache Hive
  • Apache Ozone
  • Apache Spark
  • Apache Tez
  • Apache ZooKeeper
  • Trino

The environment itself can be built with either Docker Compose or Kubernetes.
When built on Kubernetes, it assumes the Kubernetes bundled with Docker Desktop (or Rancher Desktop). Concretely, it assumes running on a single node: the PersistentVolumes use hostPath, so it cannot be operated across multiple nodes.

Example: zookage/kubernetes/base/common/package/hadoop/persistentvolume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookage-package-hadoop
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /opt/zookage/hadoop    # ★ here
  storageClassName: zookage-package-hadoop
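Once the sandbox is deployed, you can confirm the hostPath binding from the PV objects themselves (a quick sketch, assuming kubectl's context points at the sandbox cluster):

```shell
# Columns: PV name, storage class, and hostPath (empty for non-hostPath PVs)
cols="NAME:.metadata.name,CLASS:.spec.storageClassName,HOSTPATH:.spec.hostPath.path"
kubectl get pv -o "custom-columns=${cols}" || true   # tolerate running without a cluster
```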

My own desktop is rather underpowered, though, so I set the environment up with K3d on a spare Docker host instance instead.

Setting up the environment

The steps below assume a proxy environment; skip the proxy-related parts if you don't need them.

Prerequisites

  • ZooKage v0.2.6
  • K3d v5.7.5

Creating the Kubernetes cluster

As noted at the top, hostPath is used, so run on a single node.

# Create a registry, since you may end up wanting to modify images
$ k3d registry create myregistry.localhost --port 5000

# Create the cluster
$ k3d cluster create zookage \
  --servers 1 \
  --registry-use k3d-myregistry.localhost:5000 \
  -e "HTTP_PROXY=proxy.internal:8080@all:*" \
  -e "HTTPS_PROXY=proxy.internal:8080@all:*" \
  -e "NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc,192.168.10.0/24,.svc,.cluster.local,(and so on),.localhost@all:*"

# Wait for everything to come up
$ k get po -A --watch --output-watch-events
  • If you don't need a proxy, remove the -e options
  • For HTTP_PROXY passed via -e, note that the http:// scheme prefix is not needed
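Before moving on, it's worth sanity-checking that the cluster really is single-node (a quick check; assumes kubectl's context points at the new cluster):

```shell
# Sanity checks: registry and cluster exist, and there is exactly one node
k3d registry list || true
k3d cluster list || true
nodes=$(kubectl get nodes --no-headers 2>/dev/null | wc -l | tr -d ' ')
echo "node count: ${nodes}"   # hostPath PVs require this to be 1
```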

Building the environment with ZooKage

Just follow Quick Start | ZooKage.

$ git clone --branch v0.2.6 git@github.com:zookage/zookage.git
$ cd zookage

Select the components you need by editing zookage/kubernetes/kustomization.yaml, as described in Configuration | ZooKage.

Concretely, comment out the components you don't need under resources, as shown below.
This example excludes HBase and MapReduce.

namespace: zookage
commonLabels:
  owner: zookage
resources:
- base/client
- base/common
# - base/hbase
- base/hdfs
- base/hive
# - base/mapreduce
- base/ozone
- base/spark
- base/tez
- base/trino
- base/yarn
- base/zookeeper 

The versions to use can be selected under images in kustomization.yaml, but there are dependency constraints on the combinations, so consult Supported versions in Configuration | ZooKage.

After that, run the following and wait about five minutes for each component to come up.

$ ./bin/up
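Instead of watching the pods by hand, you can block until everything reports Ready (the namespace is zookage per the kustomization; the timeout below is an arbitrary budget):

```shell
ns=zookage
# Block until every pod in the namespace is Ready (10-minute budget, adjust freely)
kubectl -n "$ns" wait --for=condition=Ready pod --all --timeout=600s || true
```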

Dealing with the HTTP proxy

As-is, the proxy gets in the way when accessing things through the client pod (the aws command, for example), so a few adjustments are needed.

In each component's settings under zookage/config, turn every host reference into an FQDN by appending .zookage.svc.cluster.local so that cluster DNS can resolve it.

You also need to add the proxy environment variables in kubernetes/base/client/node/statefulset.yaml.
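A quick way to locate the spots to edit is to grep the config tree for the short service hostnames (a sketch; the patterns below are a few examples, not the full list):

```shell
suffix=".zookage.svc.cluster.local"
# Candidate lines still using short service hostnames (examples, not exhaustive)
grep -rnE 'zookeeper-server|hive-metastore|ozone-s3g|trino-coordinator' \
  kubernetes/base/common/config 2>/dev/null || true
echo "append ${suffix} to each short hostname found above"
```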

The diff of the changes

Note that these changes assume the Kubernetes setup; for Docker Compose the changes would differ.

$ git diff 
diff --git a/kubernetes/base/client/node/statefulset.yaml b/kubernetes/base/client/node/statefulset.yaml
index c81611a..71e3f7c 100644
--- a/kubernetes/base/client/node/statefulset.yaml
+++ b/kubernetes/base/client/node/statefulset.yaml
@@ -35,6 +35,12 @@ spec:
         - configMapRef:
             name: zookeeper-env
         env:
+        - name: HTTP_PROXY
+          value: http://proxy.internal:8080
+        - name: HTTPS_PROXY
+          value: http://proxy.internal:8080
+        - name: NO_PROXY
+          value: localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc,192.168.10.0/24,.svc,.cluster.local,.local,(and so on),.localhost
         - name: HADOOP_USER_NAME
           value: hdfs
         - name: HADOOP_CLASSPATH
diff --git a/kubernetes/base/common/config/hadoop/core-site.xml b/kubernetes/base/common/config/hadoop/core-site.xml
index d4c3269..72bdc6a 100644
--- a/kubernetes/base/common/config/hadoop/core-site.xml
+++ b/kubernetes/base/common/config/hadoop/core-site.xml
@@ -39,11 +39,11 @@
 
   <property>
     <name>ha.zookeeper.quorum</name>
-    <value>zookeeper-server-0.zookeeper-server:2181,zookeeper-server-1.zookeeper-server:2181,zookeeper-server-2.zookeeper-server:2181</value>
+    <value>zookeeper-server-0.zookeeper-server.zookage.svc.cluster.local:2181,zookeeper-server-1.zookeeper-server.zookage.svc.cluster.local:2181,zookeeper-server-2.zookeeper-server.zookage.svc.cluster.local:2181</value>
   </property>
   <property>
     <name>hadoop.zk.address</name>
-    <value>zookeeper-server-0.zookeeper-server:2181,zookeeper-server-1.zookeeper-server:2181,zookeeper-server-2.zookeeper-server:2181</value>
+    <value>zookeeper-server-0.zookeeper-server.zookage.svc.cluster.local:2181,zookeeper-server-1.zookeeper-server.zookage.svc.cluster.local:2181,zookeeper-server-2.zookeeper-server.zookage.svc.cluster.local:2181</value>
   </property>
 
   <property>
@@ -52,7 +52,7 @@
   </property>
   <property>
     <name>fs.s3a.endpoint</name>
-    <value>ozone-s3g:9878</value>
+    <value>ozone-s3g.zookage.svc.cluster.local:9878</value>
   </property>
   <property>
     <name>fs.s3a.path.style.access</name>
diff --git a/kubernetes/base/common/config/hadoop/hdfs-site.xml b/kubernetes/base/common/config/hadoop/hdfs-site.xml
index 8ee78ea..1500818 100644
--- a/kubernetes/base/common/config/hadoop/hdfs-site.xml
+++ b/kubernetes/base/common/config/hadoop/hdfs-site.xml
@@ -5,11 +5,11 @@
   </property>
   <property>
     <name>dfs.namenode.rpc-address.zookage.namenode-0</name>
-    <value>hdfs-namenode-0.hdfs-namenode:8020</value>
+    <value>hdfs-namenode-0.hdfs-namenode.zookage.svc.cluster.local:8020</value>
   </property>
   <property>
     <name>dfs.namenode.http-address.zookage.namenode-0</name>
-    <value>hdfs-namenode-0.hdfs-namenode:9870</value>
+    <value>hdfs-namenode-0.hdfs-namenode.zookage.svc.cluster.local:9870</value>
   </property>
   <property>
     <name>dfs.client.failover.proxy.provider.zookage</name>
@@ -38,7 +38,7 @@
   </property>
   <property>
     <name>dfs.namenode.shared.edits.dir</name>
-    <value>qjournal://hdfs-journalnode-0.hdfs-journalnode:8485;hdfs-journalnode-1.hdfs-journalnode:8485;hdfs-journalnode-2.hdfs-journalnode:8485/zookage</value>
+    <value>qjournal://hdfs-journalnode-0.hdfs-journalnode.zookage.svc.cluster.local:8485;hdfs-journalnode-1.hdfs-journalnode.zookage.svc.cluster.local:8485;hdfs-journalnode-2.hdfs-journalnode.zookage.svc.cluster.local:8485/zookage</value>
   </property>
   <property>
     <name>dfs.journalnode.edits.dir</name>
@@ -46,11 +46,11 @@
   </property>
   <property>
     <name>dfs.namenode.rpc-address.zookage.namenode-1</name>
-    <value>hdfs-namenode-1.hdfs-namenode:8020</value>
+    <value>hdfs-namenode-1.hdfs-namenode.zookage.svc.cluster.local:8020</value>
   </property>
   <property>
     <name>dfs.namenode.http-address.zookage.namenode-1</name>
-    <value>hdfs-namenode-1.hdfs-namenode:9870</value>
+    <value>hdfs-namenode-1.hdfs-namenode.zookage.svc.cluster.local:9870</value>
   </property>
   ### For HA ### -->
 
diff --git a/kubernetes/base/common/config/hadoop/yarn-site.xml b/kubernetes/base/common/config/hadoop/yarn-site.xml
index dc4ccee..2862ae5 100644
--- a/kubernetes/base/common/config/hadoop/yarn-site.xml
+++ b/kubernetes/base/common/config/hadoop/yarn-site.xml
@@ -2,7 +2,7 @@
   <!-- ### For non-HA ### -->
   <property>
     <name>yarn.resourcemanager.hostname</name>
-    <value>yarn-resourcemanager-0.yarn-resourcemanager</value>
+    <value>yarn-resourcemanager-0.yarn-resourcemanager.zookage.svc.cluster.local</value>
   </property>
   <!-- ### For non-HA ### -->
 
@@ -25,11 +25,11 @@
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.address.resourcemanager-0</name>
-    <value>yarn-resourcemanager-0.yarn-resourcemanager:8088</value>
+    <value>yarn-resourcemanager-0.yarn-resourcemanager.zookage.svc.cluster.local:8088</value>
   </property>
   <property>
     <name>yarn.resourcemanager.webapp.address.resourcemanager-1</name>
-    <value>yarn-resourcemanager-1.yarn-resourcemanager:8088</value>
+    <value>yarn-resourcemanager-1.yarn-resourcemanager.zookage.svc.cluster.local:8088</value>
   </property>
   ### For HA ### -->
 
diff --git a/kubernetes/base/common/config/hive/beeline-hs2-connection.xml b/kubernetes/base/common/config/hive/beeline-hs2-connection.xml
index 85a32eb..a1b4bec 100644
--- a/kubernetes/base/common/config/hive/beeline-hs2-connection.xml
+++ b/kubernetes/base/common/config/hive/beeline-hs2-connection.xml
@@ -1,7 +1,7 @@
 <configuration>
   <property>
     <name>beeline.hs2.connection.hosts</name>
-    <value>hive-hiveserver2:10000</value>
+    <value>hive-hiveserver2.zookage.svc.cluster.local:10000</value>
   </property>
   <property>
     <name>beeline.hs2.connection.user</name>
diff --git a/kubernetes/base/common/config/hive/hive-site.xml b/kubernetes/base/common/config/hive/hive-site.xml
index cb6a693..cee9d64 100644
--- a/kubernetes/base/common/config/hive/hive-site.xml
+++ b/kubernetes/base/common/config/hive/hive-site.xml
@@ -5,7 +5,7 @@
   </property>
   <property>
     <name>hive.metastore.uris</name>
-    <value>thrift://hive-metastore-0.hive-metastore:9083</value>
+    <value>thrift://hive-metastore-0.hive-metastore.zookage.svc.cluster.local:9083</value>
   </property>
   <property>
     <name>datanucleus.autoStartMechanismMode</name>
diff --git a/kubernetes/base/common/config/ozone/ozone-site.xml b/kubernetes/base/common/config/ozone/ozone-site.xml
index a7a45a2..cc29f69 100644
--- a/kubernetes/base/common/config/ozone/ozone-site.xml
+++ b/kubernetes/base/common/config/ozone/ozone-site.xml
@@ -6,15 +6,15 @@
 
   <property>
     <name>ozone.scm.block.client.address</name>
-    <value>ozone-scm-0.ozone-scm</value>
+    <value>ozone-scm-0.ozone-scm.zookage.svc.cluster.local</value>
   </property>
   <property>
     <name>ozone.scm.client.address</name>
-    <value>ozone-scm-0.ozone-scm</value>
+    <value>ozone-scm-0.ozone-scm.zookage.svc.cluster.local</value>
   </property>
   <property>
     <name>ozone.scm.names</name>
-    <value>ozone-scm-0.ozone-scm</value>
+    <value>ozone-scm-0.ozone-scm.zookage.svc.cluster.local</value>
   </property>
   <property>
     <name>ozone.scm.db.dirs</name>
@@ -27,7 +27,7 @@
 
   <property>
     <name>ozone.om.address</name>
-    <value>ozone-om-0.ozone-om:9862</value>
+    <value>ozone-om-0.ozone-om.zookage.svc.cluster.local:9862</value>
   </property>
   <property>
     <name>ozone.om.db.dirs</name>
@@ -53,7 +53,7 @@
 
   <property>
     <name>ozone.recon.address</name>
-    <value>ozone-recon-0.ozone-recon:9891</value>
+    <value>ozone-recon-0.ozone-recon.zookage.svc.cluster.local:9891</value>
   </property>
   <property>
     <name>ozone.recon.db.dir</name>
diff --git a/kubernetes/base/common/config/spark/spark-defaults.conf b/kubernetes/base/common/config/spark/spark-defaults.conf
index b474e90..baed3a0 100644
--- a/kubernetes/base/common/config/spark/spark-defaults.conf
+++ b/kubernetes/base/common/config/spark/spark-defaults.conf
@@ -13,7 +13,7 @@ spark.yarn.maxAppAttempts 2
 spark.eventLog.enabled true
 spark.eventLog.dir hdfs://zookage/user/spark/applicationHistory
 spark.history.fs.logDirectory hdfs://zookage/user/spark/applicationHistory
-spark.yarn.historyServer.address spark-historyserver-0.spark-historyserver:18080
+spark.yarn.historyServer.address spark-historyserver-0.spark-historyserver.zookage.svc.cluster.local:18080
 
 spark.hadoop.fs.s3a.fast.upload.buffer bytebuffer
 
diff --git a/kubernetes/base/common/config/trino/catalog/hive.properties b/kubernetes/base/common/config/trino/catalog/hive.properties
index a80be17..a9100cb 100644
--- a/kubernetes/base/common/config/trino/catalog/hive.properties
+++ b/kubernetes/base/common/config/trino/catalog/hive.properties
@@ -1,10 +1,10 @@
 connector.name=hive
-hive.metastore.uri=thrift://hive-metastore-0.hive-metastore:9083
+hive.metastore.uri=thrift://hive-metastore-0.hive-metastore.zookage.svc.cluster.local:9083
 hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
 hive.non-managed-table-writes-enabled=true
 
 hive.s3.aws-access-key=dummy
 hive.s3.aws-secret-key=dummy
-hive.s3.endpoint=ozone-s3g:9878
+hive.s3.endpoint=ozone-s3g.zookage.svc.cluster.local:9878
 hive.s3.path-style-access=true
 hive.s3.ssl.enabled=false
diff --git a/kubernetes/base/common/config/trino/catalog/iceberg.properties b/kubernetes/base/common/config/trino/catalog/iceberg.properties
index 0dde365..13eb7e3 100644
--- a/kubernetes/base/common/config/trino/catalog/iceberg.properties
+++ b/kubernetes/base/common/config/trino/catalog/iceberg.properties
@@ -1,9 +1,9 @@
 connector.name=iceberg
-hive.metastore.uri=thrift://hive-metastore-0.hive-metastore:9083
+hive.metastore.uri=thrift://hive-metastore-0.hive-metastore.zookage.svc.cluster.local:9083
 hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
 
 hive.s3.aws-access-key=dummy
 hive.s3.aws-secret-key=dummy
-hive.s3.endpoint=ozone-s3g:9878
+hive.s3.endpoint=ozone-s3g.zookage.svc.cluster.local:9878
 hive.s3.path-style-access=true
 hive.s3.ssl.enabled=false
diff --git a/kubernetes/base/common/config/trino/cli.properties b/kubernetes/base/common/config/trino/cli.properties
index ab14bbf..a70148a 100644
--- a/kubernetes/base/common/config/trino/cli.properties
+++ b/kubernetes/base/common/config/trino/cli.properties
@@ -1,2 +1,2 @@
 catalog=hive
-server=http://trino-coordinator-0.trino-coordinator:8080
+server=http://trino-coordinator-0.trino-coordinator.zookage.svc.cluster.local:8080
diff --git a/kubernetes/base/common/config/trino/config-coordinator.properties b/kubernetes/base/common/config/trino/config-coordinator.properties
index 3df8759..c85004e 100644
--- a/kubernetes/base/common/config/trino/config-coordinator.properties
+++ b/kubernetes/base/common/config/trino/config-coordinator.properties
@@ -1,7 +1,7 @@
 coordinator=true
 node-scheduler.include-coordinator=false
 http-server.http.port=8080
-discovery.uri=http://trino-coordinator-0.trino-coordinator:8080
+discovery.uri=http://trino-coordinator-0.trino-coordinator.zookage.svc.cluster.local:8080
 
 web-ui.authentication.type=fixed
 web-ui.user=admin
diff --git a/kubernetes/base/common/config/trino/config-worker.properties b/kubernetes/base/common/config/trino/config-worker.properties
index a0ea8b8..521acdf 100644
--- a/kubernetes/base/common/config/trino/config-worker.properties
+++ b/kubernetes/base/common/config/trino/config-worker.properties
@@ -1,3 +1,3 @@
 coordinator=false
 http-server.http.port=8080
-discovery.uri=http://trino-coordinator-0.trino-coordinator:8080
+discovery.uri=http://trino-coordinator-0.trino-coordinator.zookage.svc.cluster.local:8080
diff --git a/kubernetes/base/common/config/zookeeper/zoo.cfg b/kubernetes/base/common/config/zookeeper/zoo.cfg
index c2a08f3..2aa5aa4 100644
--- a/kubernetes/base/common/config/zookeeper/zoo.cfg
+++ b/kubernetes/base/common/config/zookeeper/zoo.cfg
@@ -6,6 +6,6 @@ clientPort=2181
 maxCnxns=60
 leaderConnectDelayDuringRetryMs=20000
 electionPortBindRetry=50
-server.1=zookeeper-server-0.zookeeper-server:2888:3888
-server.2=zookeeper-server-1.zookeeper-server:2888:3888
-server.3=zookeeper-server-2.zookeeper-server:2888:3888
+server.1=zookeeper-server-0.zookeeper-server.zookage.svc.cluster.local:2888:3888
+server.2=zookeeper-server-1.zookeeper-server.zookage.svc.cluster.local:2888:3888
+server.3=zookeeper-server-2.zookeeper-server.zookage.svc.cluster.local:2888:3888

Working with each component

As described in Tools | ZooKage, everything can be used from the client-node pod.

# Enter the client pod
$ k -n zookage exec -ti client-node-0 -- bash
zookage@client-node-0:~$ 

The examples below were run from the client pod.

## HDFS
$ hdfs dfs -ls /
Found 3 items
drwxr-xr-x   - hdfs supergroup          0 2025-01-01 08:39 /apps
drwxrwxrwt   - hdfs supergroup          0 2025-01-01 08:41 /tmp
drwxr-xr-x   - hdfs supergroup          0 2025-01-01 08:39 /user
## Hive
$ beeline
Connecting to jdbc:hive2://hive-hiveserver2.zookage.svc.cluster.local:10000/default;password=dummy;user=zookage
Connected to: Apache Hive (version 4.0.1)
Driver: Hive JDBC (version 4.0.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 4.0.1 by Apache Hive
0: jdbc:hive2://hive-hiveserver2.zookage.svc.> show databases ;
INFO  : Compiling command(queryId=hive_20250103022254_a56d931a-7485-4ef8-8ad8-3f35b69969f3): show databases
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Created Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hive_20250103022254_a56d931a-7485-4ef8-8ad8-3f35b69969f3); Time taken: 1.326 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hive_20250103022254_a56d931a-7485-4ef8-8ad8-3f35b69969f3): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20250103022254_a56d931a-7485-4ef8-8ad8-3f35b69969f3); Time taken: 0.061 seconds
+----------------+
| database_name  |
+----------------+
| default        |
+----------------+
1 row selected (1.677 seconds)
## Spark
$ spark-shell
Spark context Web UI available at http://client-node-0.client-node.zookage.svc.cluster.local:4040
Spark context available as 'sc' (master = yarn, app id = application_1735720785148_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.5.1
      /_/

Using Scala version 2.12.18 (OpenJDK 64-Bit Server VM, Java 1.8.0_275)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
## Ozone
$ aws s3 mb --endpoint http://ozone-s3g.zookage.svc.cluster.local:9878 s3://test2
$ aws s3api --endpoint http://ozone-s3g.zookage.svc.cluster.local:9878 create-bucket --bucket=iceberg-warehouse

$ aws s3 ls --endpoint http://ozone-s3g.zookage.svc.cluster.local:9878 s3://

### Optional: not needed if you only use S3 buckets
$ ozone sh volume create zookage
$ ozone sh bucket create /zookage/hoge
$ ozone sh bucket ls zookage
$ ozone sh bucket link /zookage/ice /s3v/hoge
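One thing the aws commands above assume: the CLI needs some credentials configured, even though the S3 gateway of this unsecured sandbox accepts placeholder values (the same dummy keys appear in the Trino catalog configs). A minimal setup sketch:

```shell
# Placeholder credentials; the unsecured Ozone S3 gateway accepts any values
export AWS_ACCESS_KEY_ID=dummy
export AWS_SECRET_ACCESS_KEY=dummy
endpoint="http://ozone-s3g.zookage.svc.cluster.local:9878"
aws s3 ls --endpoint "$endpoint" s3:// || true   # tolerate running outside the pod
```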
## Trino
$ trino
trino> show catalogs ;
 Catalog 
---------
 hive    
 iceberg 
 system  
 tpcds   
 tpch    
(5 rows)

Query 20250103_014138_00000_sr57h, FINISHED, 1 node
Splits: 7 total, 7 done (100.00%)
0.15 [0 rows, 0B] [0 rows/s, 0B/s]

trino> show schemas from iceberg ;
       Schema       
--------------------
 default            
 information_schema 
(2 rows)

Query 20250103_014153_00001_sr57h, FINISHED, 2 nodes
Splits: 7 total, 7 done (100.00%)
0.19 [2 rows, 35B] [10 rows/s, 189B/s]

Accessing the UIs

The services are not exposed, so the various UIs are not reachable as-is.
As described in Tools | ZooKage, port-forwarding with ./bin/kubectl port-forward client-node-0 5005:8000 works fine.

Here, instead, I follow Exposing Services - k3d to make them reachable from outside the cluster the k3d way.

Exposing the ports

I expose each service on the same port number it uses internally, but change them as needed.

The commands per component are as follows.

# HDFS <http://localhost:9870/>
k3d node edit k3d-zookage-serverlb --port-add 9870:9870

# RM <http://localhost:8088/cluster>
# YARN UI v2: <http://localhost:8088/ui2/>
k3d node edit k3d-zookage-serverlb --port-add 8088:8088

# YARN Timeline Server <http://localhost:8188/applicationhistory>
k3d node edit k3d-zookage-serverlb --port-add 8188:8188

# Tez UI <http://localhost:9999/tez-ui/>
k3d node edit k3d-zookage-serverlb --port-add 9999:9999

# HS2 <http://localhost:10002/>
k3d node edit k3d-zookage-serverlb --port-add 10002:10002

# Trino <http://localhost:8080/ui/>
k3d node edit k3d-zookage-serverlb --port-add 8080:8080

# Ozone Manager <http://localhost:9874/#!/>
k3d node edit k3d-zookage-serverlb --port-add 9874:9874

# Ozone SCM <http://localhost:9876/#!/>
k3d node edit k3d-zookage-serverlb --port-add 9876:9876

# Ozone Recon <http://localhost:9888/#/Overview>
k3d node edit k3d-zookage-serverlb --port-add 9888:9888

# Spark History Server
k3d node edit k3d-zookage-serverlb --port-add 18080:18080
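The repeated k3d node edit calls above can be collapsed into a loop (note that each --port-add restarts the serverlb container, so expect a short pause per port):

```shell
# Expose every UI port on the k3d loadbalancer (host port == container port)
ports="9870 8088 8188 9999 10002 8080 9874 9876 9888 18080"
for p in $ports; do
  k3d node edit k3d-zookage-serverlb --port-add "${p}:${p}" || true
done
```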

After running the above, the UIs become reachable from outside the cluster.

Wrapping up

With that, we have an environment that is easy to tinker with.

From here, you could play with Apache Iceberg, add the Flink Kubernetes Operator and run Flink against S3/HDFS while exercising checkpoints and savepoints, or try out a CDC setup with Debezium, all against a stack reasonably close to production.
