
Building a Kubernetes Cluster on VMware Workstation as a Simulated On-Premises Environment (RHEL9) - (3) Installing Harbor -

Posted at 2025-05-13

Introduction

Notes on installing Harbor as a private registry on a Kubernetes cluster. The deployment environment is set up to simulate an on-premises installation.

For the deployment test, a simple home-grown API service (the test app) is run as a Pod.

Software Used

  • VMware Workstation 17 Pro (Windows 11 / X86_64)
  • RHEL 9.5 (VM)
  • Dnsmasq (2.79)
  • HA-Proxy (1.8.27)
  • Kubernetes (v1.32)
  • CRI-O (v1.32)
  • Calico (v3.29.3)
  • MetalLB (v0.14.9)
  • Helm (v3.17.3)
  • Harbor (2.13.0)
  • harbor-helm (1.17.0)

For creating the test application (reference)

  • go (1.23.6)
  • gin-gonic (v1.10.0)
  • Podman (5.2.2)
  • Visual Studio Code (1.100.0)
  • Visual Studio Code拡張: Go (0.46.1)
  • Visual Studio Code拡張: Remote Development (0.26.0)
  • Visual Studio Code拡張: Remote Explorer (0.5.0)
  • Visual Studio Code拡張: Dev Containers (0.413.0)
  • Visual Studio Code拡張: YAML (1.18.0)
  • Container: ubi9/ubi-minimal (latest)
  • Container: ubi9/ubi (latest)
  • Container: ubi9/go-toolset (1.23)

※ The go version should probably be matched (minor version included) to the go version supported by the ubi9/go-toolset image used for the container build.

Kubernetes Cluster Configuration

This part uses the Kubernetes cluster built in the previous installments, together with Ceph-backed persistent volumes (PVs) and the PrivateCA.
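
As a quick sanity check, it may be worth confirming up front that the rook-ceph-block StorageClass from the Ceph part is present, since the values.yaml edits below reference it by name:

[mng ~]$ kubectl get storageclass rook-ceph-block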

Configuring Helm values.yaml

[mng ~]$ git clone --branch 1.17.0 https://github.com/goharbor/harbor-helm.git

[mng ~]$ cd harbor-helm

[mng harbor-helm]$ vi values.yaml

[mng harbor-helm]$ git diff values.yaml
diff --git a/values.yaml b/values.yaml
index 232f1bc..b2ff990 100644
--- a/values.yaml
+++ b/values.yaml
@@ -1,21 +1,21 @@
 expose:
   # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
   # and fill the information in the corresponding section
-  type: ingress
+  type: loadBalancer
   tls:
     # Enable TLS or not.
     # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
     # Note: if the "expose.type" is "ingress" and TLS is disabled,
     # the port must be included in the command when pulling/pushing images.
     # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
-    enabled: true
+    enabled: false
     # The source of the tls certificate. Set as "auto", "secret"
     # or "none" and fill the information in the corresponding section
     # 1) auto: generate the tls certificate automatically
     # 2) secret: read the tls certificate from the specified secret.
     # The tls certificate can be generated manually or by cert manager
     # 3) none: configure no tls certificate for the ingress. If the default
     # tls certificate is configured in the ingress controller, choose this option
     certSource: auto
     auto:
       # The common name used to generate the certificate, it's necessary
@@ -102,21 +102,21 @@ expose:
 #
 # Format: protocol://domain[:port]. Usually:
 # 1) if "expose.type" is "ingress", the "domain" should be
 # the value of "expose.ingress.hosts.core"
 # 2) if "expose.type" is "clusterIP", the "domain" should be
 # the value of "expose.clusterIP.name"
 # 3) if "expose.type" is "nodePort", the "domain" should be
 # the IP address of k8s node
 #
 # If Harbor is deployed behind the proxy, set it as the URL of proxy
-externalURL: https://core.harbor.domain
+externalURL: http://harbor.test.k8s.local

 # The persistence is enabled by default and a default StorageClass
 # is needed in the k8s cluster to provision volumes dynamically.
 # Specify another StorageClass in the "storageClass" or set "existingClaim"
 # if you already have existing persistent volumes to use
 #
 # For storing images and charts, you can also use "azure", "gcs", "s3",
 # "swift" or "oss". Set it in the "imageChartStorage" section
 persistence:
   enabled: true
@@ -126,57 +126,57 @@ persistence:
   # and redis components, i.e. they are never deleted automatically)
   resourcePolicy: "keep"
   persistentVolumeClaim:
     registry:
       # Use the existing PVC which must be created manually before bound,
       # and specify the "subPath" if the PVC is shared with other components
       existingClaim: ""
       # Specify the "storageClass" used to provision the volume. Or the default
       # StorageClass will be used (the default).
       # Set it to "-" to disable dynamic provisioning
-      storageClass: ""
+      storageClass: "rook-ceph-block"
       subPath: ""
       accessMode: ReadWriteOnce
-      size: 5Gi
+      size: 1Gi
       annotations: {}
     jobservice:
       jobLog:
         existingClaim: ""
-        storageClass: ""
+        storageClass: "rook-ceph-block"
         subPath: ""
         accessMode: ReadWriteOnce
-        size: 1Gi
+        size: 500Mi
         annotations: {}
     # If external database is used, the following settings for database will
     # be ignored
     database:
       existingClaim: ""
-      storageClass: ""
+      storageClass: "rook-ceph-block"
       subPath: ""
       accessMode: ReadWriteOnce
-      size: 1Gi
+      size: 500Mi
       annotations: {}
     # If external Redis is used, the following settings for Redis will
     # be ignored
     redis:
       existingClaim: ""
-      storageClass: ""
+      storageClass: "rook-ceph-block"
       subPath: ""
       accessMode: ReadWriteOnce
-      size: 1Gi
+      size: 500Mi
       annotations: {}
     trivy:
       existingClaim: ""
-      storageClass: ""
+      storageClass: "rook-ceph-block"
       subPath: ""
       accessMode: ReadWriteOnce
-      size: 5Gi
+      size: 1Gi
       annotations: {}
   # Define which storage backend is used for registry to store
   # images and charts. Refer to
   # https://github.com/distribution/distribution/blob/release/2.8/docs/configuration.md#storage
   # for the detail.
   imageChartStorage:
     # Specify whether to disable `redirect` for images and chart storage, for
     # backends which not supported it (such as using minio for `s3` storage type), please disable
     # it. To disable redirects, simply set `disableredirect` to `true` instead.
     # Refer to
@@ -325,21 +325,21 @@ internalTLS:
     # secret name for trivy's tls certs
     secretName: ""
     # Content of trivy's TLS key file, only available when `certSource` is "manual"
     crt: ""
     # Content of trivy's TLS key file, only available when `certSource` is "manual"
     key: ""

 ipFamily:
   # ipv6Enabled set to true if ipv6 is enabled in cluster, currently it affected the nginx related component
   ipv6:
-    enabled: true
+    enabled: false
   # ipv4Enabled set to true if ipv4 is enabled in cluster, currently it affected the nginx related component
   ipv4:
     enabled: true

 imagePullPolicy: IfNotPresent

 # Use this set to assign a list of default pullSecrets
 imagePullSecrets:
 #  - name: docker-registry-secret
 #  - name: internal-registry-secret
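
Before installing, the chart can be rendered locally to catch mistakes in the edited values.yaml; a sketch using the standard helm template dry run, with the same release name and namespace as the install step below:

[mng ~]$ helm template harbor ./harbor-helm -n harbor -f harbor-helm/values.yaml > /dev/null && echo render OK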

Installation

Install
[mng ~]$ helm repo add harbor https://helm.goharbor.io

[mng ~]$ helm repo update

[mng ~]$ kubectl create namespace harbor

[mng ~]$ helm install harbor ./harbor-helm -n harbor -f harbor-helm/values.yaml
NAME: harbor
LAST DEPLOYED: Mon May 04 01:01:33 2025
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at http://harbor.test.k8s.local
For more details, please visit https://github.com/goharbor/harbor
Verify
[mng ~]$ kubectl get all -n harbor -o wide
NAME                                     READY   STATUS    RESTARTS      AGE   IP               NODE          NOMINATED NODE   READINESS GATES
pod/harbor-core-dc65fd6b9-2qph7          1/1     Running   0             51m   172.20.194.104   k8s-worker1   <none>           <none>
pod/harbor-database-0                    1/1     Running   0             51m   172.20.194.92    k8s-worker1   <none>           <none>
pod/harbor-jobservice-5d89fdb89f-xcnxn   1/1     Running   4 (50m ago)   51m   172.23.229.146   k8s-worker0   <none>           <none>
pod/harbor-nginx-d9969d466-pthw9         1/1     Running   0             51m   172.20.194.93    k8s-worker1   <none>           <none>
pod/harbor-portal-55798bf78b-94dd9       1/1     Running   0             51m   172.23.229.166   k8s-worker0   <none>           <none>
pod/harbor-redis-0                       1/1     Running   0             51m   172.20.194.91    k8s-worker1   <none>           <none>
pod/harbor-registry-55bcdc8d6-spt55      2/2     Running   0             51m   172.23.229.147   k8s-worker0   <none>           <none>
pod/harbor-trivy-0                       1/1     Running   0             51m   172.23.229.167   k8s-worker0   <none>           <none>

NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE   SELECTOR
service/harbor              LoadBalancer   10.110.217.115   10.0.0.63     80:30826/TCP        51m   app=harbor,component=nginx,release=harbor
service/harbor-core         ClusterIP      10.101.60.60     <none>        80/TCP              51m   app=harbor,component=core,release=harbor
service/harbor-database     ClusterIP      10.104.178.231   <none>        5432/TCP            51m   app=harbor,component=database,release=harbor
service/harbor-jobservice   ClusterIP      10.103.187.162   <none>        80/TCP              51m   app=harbor,component=jobservice,release=harbor
service/harbor-portal       ClusterIP      10.101.129.9     <none>        80/TCP              51m   app=harbor,component=portal,release=harbor
service/harbor-redis        ClusterIP      10.96.152.7      <none>        6379/TCP            51m   app=harbor,component=redis,release=harbor
service/harbor-registry     ClusterIP      10.106.174.43    <none>        5000/TCP,8080/TCP   51m   app=harbor,component=registry,release=harbor
service/harbor-trivy        ClusterIP      10.96.42.46      <none>        8080/TCP            51m   app=harbor,component=trivy,release=harbor

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES                                                                 SELECTOR
deployment.apps/harbor-core         1/1     1            1           51m   core                   goharbor/harbor-core:v2.13.0                                           app=harbor,component=core,release=harbor
deployment.apps/harbor-jobservice   1/1     1            1           51m   jobservice             goharbor/harbor-jobservice:v2.13.0                                     app=harbor,component=jobservice,release=harbor
deployment.apps/harbor-nginx        1/1     1            1           51m   nginx                  goharbor/nginx-photon:v2.13.0                                          app=harbor,component=nginx,release=harbor
deployment.apps/harbor-portal       1/1     1            1           51m   portal                 goharbor/harbor-portal:v2.13.0                                         app=harbor,component=portal,release=harbor
deployment.apps/harbor-registry     1/1     1            1           51m   registry,registryctl   goharbor/registry-photon:v2.13.0,goharbor/harbor-registryctl:v2.13.0   app=harbor,component=registry,release=harbor

NAME                                           DESIRED   CURRENT   READY   AGE   CONTAINERS             IMAGES                                                                 SELECTOR
replicaset.apps/harbor-core-dc65fd6b9          1         1         1       51m   core                   goharbor/harbor-core:v2.13.0                                           app=harbor,component=core,pod-template-hash=dc65fd6b9,release=harbor
replicaset.apps/harbor-jobservice-5d89fdb89f   1         1         1       51m   jobservice             goharbor/harbor-jobservice:v2.13.0                                     app=harbor,component=jobservice,pod-template-hash=5d89fdb89f,release=harbor
replicaset.apps/harbor-nginx-d9969d466         1         1         1       51m   nginx                  goharbor/nginx-photon:v2.13.0                                          app=harbor,component=nginx,pod-template-hash=d9969d466,release=harbor
replicaset.apps/harbor-portal-55798bf78b       1         1         1       51m   portal                 goharbor/harbor-portal:v2.13.0                                         app=harbor,component=portal,pod-template-hash=55798bf78b,release=harbor
replicaset.apps/harbor-registry-55bcdc8d6      1         1         1       51m   registry,registryctl   goharbor/registry-photon:v2.13.0,goharbor/harbor-registryctl:v2.13.0   app=harbor,component=registry,pod-template-hash=55bcdc8d6,release=harbor

NAME                               READY   AGE   CONTAINERS   IMAGES
statefulset.apps/harbor-database   1/1     51m   database     goharbor/harbor-db:v2.13.0
statefulset.apps/harbor-redis      1/1     51m   redis        goharbor/redis-photon:v2.13.0
statefulset.apps/harbor-trivy      1/1     51m   trivy        goharbor/trivy-adapter-photon:v2.13.0
Check PVCs
[mng ~]$ kubectl get pvc -n harbor -o wide
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE   VOLUMEMODE
data-harbor-redis-0               Bound    pvc-c31276b4-e6ca-4d4b-9dee-3ea072804ff9   500Mi      RWO            rook-ceph-block   <unset>                 50m   Filesystem
data-harbor-trivy-0               Bound    pvc-07d89dc7-6013-4efd-bbe2-01582e3a6d70   1Gi        RWO            rook-ceph-block   <unset>                 50m   Filesystem
database-data-harbor-database-0   Bound    pvc-fdc55a03-25cf-4e09-9540-dd6831b49628   500Mi      RWO            rook-ceph-block   <unset>                 50m   Filesystem
harbor-jobservice                 Bound    pvc-b92e64c4-30cf-4abf-92e9-50965515966e   500Mi      RWO            rook-ceph-block   <unset>                 50m   Filesystem
harbor-registry                   Bound    pvc-5738bcb4-7867-4db5-b131-1f0cb2f5153d   1Gi        RWO            rook-ceph-block   <unset>                 50m   Filesystem
Check the VIP
[mng ~]$ kubectl get svc -n harbor
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE ★
harbor              LoadBalancer   10.110.217.115   10.0.0.63     80:30826/TCP        12m
harbor-core         ClusterIP      10.101.60.60     <none>        80/TCP              12m
harbor-database     ClusterIP      10.104.178.231   <none>        5432/TCP            12m
harbor-jobservice   ClusterIP      10.103.187.162   <none>        80/TCP              12m
harbor-portal       ClusterIP      10.101.129.9     <none>        80/TCP              12m
harbor-redis        ClusterIP      10.96.152.7      <none>        6379/TCP            12m
harbor-registry     ClusterIP      10.106.174.43    <none>        5000/TCP,8080/TCP   12m
harbor-trivy        ClusterIP      10.96.42.46      <none>        8080/TCP            12m

MetalLB has assigned 10.0.0.63 as the VIP.
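
Which pool the VIP came from can be checked on the MetalLB side; a sketch, assuming MetalLB was installed into the metallb-system namespace as in the earlier part:

[mng ~]$ kubectl get ipaddresspools.metallb.io -n metallb-system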

Updating the Dnsmasq (DNS Service) Configuration

Assign a hostname to the Web UI VIP. This work is done on the lb.test.k8s.local node.

Edit the configuration file
[lb ~]$ sudo vi /etc/dnsmasq.d/k8s.conf
k8s.conf
# VIP by MetalLB
...
address=/harbor.test.k8s.local/10.0.0.63   # Harbor Web UI
...
Restart the dnsmasq service
[lb ~]$ sudo systemctl restart dnsmasq
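
To confirm the record is now being served, query the DNS server directly; a sketch, assuming dig is available on the management host:

[mng ~]$ dig +short harbor.test.k8s.local @lb.test.k8s.local
10.0.0.63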

TLS Configuration for Web UI Access

Creating a Server Certificate for Harbor

Issue a server certificate for the Web UI.

Server certificate for the Web UI
[mng ~]$ mkdir tls
[mng ~]$ cd tls

# Create the certificate configuration
[mng tls]$ vi openssl.cnf

# Create the private key
[mng tls]$ openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:P-256 -out harbor.key

# Create the CSR
[mng tls]$ openssl req -new -key harbor.key -out harbor.csr -config openssl.cnf

# Create the certificate
[mng tls]$ openssl x509 -req -in harbor.csr -CA private_ca.crt -CAkey private_ca.key -CAcreateserial -out harbor.crt -days 365 -sha256 -extfile openssl.cnf -extensions server-cert
Signature ok
subject=C = JP, O = k8s, OU = test, CN = harbor
Getting CA Private Key

[mng tls]$  openssl x509 -in harbor.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
...
        Issuer: CN = private_ca
...
        Subject: C = JP, O = k8s, OU = test, CN = harbor
...
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Key Agreement
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:harbor.test.k8s.local, DNS:harbor.harbor.svc.cluster.local, DNS:harbor, IP Address:10.0.0.63, IP Address:10.110.217.115, IP Address:127.0.0.1
...

The SANs (alt_name) are specified to match the Service (MetalLB) information.

openssl.cnf
[server-cert]
keyUsage = critical, digitalSignature, keyEncipherment, keyAgreement
extendedKeyUsage = serverAuth
subjectAltName = @alt_name

[req]
distinguished_name = dn
prompt = no

[dn]
C = JP
O = k8s
OU = test
CN = harbor

[alt_name]
DNS.1 = harbor.test.k8s.local # Hostname for external access
DNS.2 = harbor.harbor.svc.cluster.local # In-cluster hostname
DNS.3 = harbor # In-cluster hostname (same namespace)
IP.1 = 10.0.0.63 # IP for external access
IP.2 = 10.110.217.115 # ClusterIP
IP.3 = 127.0.0.1 # For access from localhost
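
Rather than copying them by hand, the ClusterIP and external IP used in alt_name can be read straight from the Service with standard kubectl jsonpath:

[mng tls]$ kubectl get svc harbor -n harbor -o jsonpath='{.spec.clusterIP} {.status.loadBalancer.ingress[0].ip}{"\n"}'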

Creating a Secret for the Web UI Server Certificate

Create
[mng tls]$ kubectl create secret tls harbor-tls-cert --cert=harbor.crt --key=harbor.key -n harbor
secret/harbor-tls-cert created
Verify
[mng tls]$ kubectl get secret -n harbor harbor-tls-cert -o yaml
apiVersion: v1
data:
  tls.crt: xxxx...
...
  tls.key: yyyy...
...
kind: Secret
metadata:
  creationTimestamp: "2025-05-11T03:09:45Z"
  name: harbor-tls-cert
  namespace: harbor
  resourceVersion: "472992"
  uid: 21afac23-3a45-4fa2-9e45-95ab86736891
type: kubernetes.io/tls
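
As a cross-check, the certificate can be decoded back out of the Secret to confirm the SANs; openssl's -ext option is available on RHEL 9:

[mng tls]$ kubectl get secret harbor-tls-cert -n harbor -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -ext subjectAltName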

Updating Helm values.yaml

[mng ~]$ cd harbor-helm

[mng harbor-helm]$ vi values.yaml

[mng harbor-helm]$ git diff values.yaml
diff --git a/values.yaml b/values.yaml
index 232f1bc..f71979e 100644
--- a/values.yaml
+++ b/values.yaml
@@ -1,7 +1,7 @@
 expose:
   # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
   # and fill the information in the corresponding section
-  type: ingress
+  type: loadBalancer
   tls:
     # Enable TLS or not.
     # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
@@ -16,7 +16,7 @@ expose:
     # The tls certificate can be generated manually or by cert manager
     # 3) none: configure no tls certificate for the ingress. If the default
     # tls certificate is configured in the ingress controller, choose this option
-    certSource: auto
+    certSource: secret
     auto:
       # The common name used to generate the certificate, it's necessary
       # when the type isn't "ingress"
@@ -25,7 +25,7 @@ expose:
       # The name of secret which contains keys named:
       # "tls.crt" - the certificate
       # "tls.key" - the private key
-      secretName: ""
+      secretName: "harbor-tls-cert"
...
Apply the updated configuration
[mng ~]$ helm upgrade harbor ./harbor-helm -n harbor -f harbor-helm/values.yaml
Release "harbor" has been upgraded. Happy Helming!
NAME: harbor
LAST DEPLOYED: Mon May 11 03:23:34 2025
NAMESPACE: harbor
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at http://harbor.test.k8s.local
For more details, please visit https://github.com/goharbor/harbor
Check the Service configuration
[mng ~]$ kubectl get svc -n harbor
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
harbor              LoadBalancer   10.110.217.115   10.0.0.63     80:30826/TCP,443:30520/TCP   11h ★
harbor-core         ClusterIP      10.101.60.60     <none>        80/TCP                       11h
harbor-database     ClusterIP      10.104.178.231   <none>        5432/TCP                     11h
harbor-jobservice   ClusterIP      10.103.187.162   <none>        80/TCP                       11h
harbor-portal       ClusterIP      10.101.129.9     <none>        80/TCP                       11h
harbor-redis        ClusterIP      10.96.152.7      <none>        6379/TCP                     11h
harbor-registry     ClusterIP      10.106.174.43    <none>        5000/TCP,8080/TCP            11h
harbor-trivy        ClusterIP      10.96.42.46      <none>        8080/TCP                     11h

※ MetalLB now also exposes 10.0.0.63:443.

Verify HTTPS access
[mng ~]$ curl -v https://harbor.test.k8s.local/ --cacert tls/private_ca.crt
...
* Connected to harbor.test.k8s.local (10.0.0.63) port 443 (#0)
...
* successfully set certificate verify locations:
...
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: C=JP; O=k8s; OU=test; CN=harbor
...
*  subjectAltName: host "harbor.test.k8s.local" matched cert's "harbor.test.k8s.local"
*  issuer: CN=private_ca
*  SSL certificate verify ok.
...
> GET / HTTP/1.1
> Host: harbor.test.k8s.local
> User-Agent: curl/7.61.1
> Accept: */*
>
...
< HTTP/1.1 200 OK
< Server: nginx
...
<
<!DOCTYPE html>
<html>
...
</html>

Incidentally, a redirect also appears to be configured for plain HTTP access.

[mng ~]$ curl -v http://harbor.test.k8s.local/
...
* Connected to harbor.test.k8s.local (10.0.0.63) port 80 (#0)
...
> GET / HTTP/1.1
> Host: harbor.test.k8s.local
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.26.2
...
< Location: https://harbor.test.k8s.local/
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.26.2</center>
</body>
</html>

Verify Web UI Access

User: admin
Password: the value of the harborAdminPassword field in values.yaml
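
If harborAdminPassword was left at the chart default, the effective value can be recovered from Helm (helm get values --all includes computed defaults):

[mng ~]$ helm get values harbor -n harbor --all | grep harborAdminPassword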

(Screenshot: Harbor Web UI login page, harbar_ui1.png)

Application Deployment Test

Run a simple API service as a Pod to test the deployment.

Preparation

Register the PrivateCA Certificate on the Node Systems

TLS communication with the private registry (Harbor) built here is performed by the container runtime (CRI-O), so the PrivateCA certificate must be registered on each worker node's host system (outside the Kubernetes cluster). It is also registered on the control-plane nodes, just in case.

Registering the PrivateCA certificate (k8s-worker0 example)
[mng ~]$ scp tls/private_ca.crt k8s-worker0.test.k8s.local:/tmp/

[mng ~]$ ssh k8s-worker0.test.k8s.local "sudo -S cp /tmp/private_ca.crt /etc/pki/ca-trust/source/anchors/test_k8s_local_ca.crt && sudo -S update-ca-trust extract && curl -v https://harbor.test.k8s.local"
...
* Connected to harbor.test.k8s.local (10.0.0.63) port 443 ...
...
* SSL connection using TLSv1.3 ...
...
HTTP/1.1 200 OK
...

※ The registered PrivateCA certificate is merged into /etc/pki/tls/certs/ca-bundle.crt on each node.
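
One way to confirm the CA really landed in the bundle is to verify it against the merged file on the node (a sketch; paths as used above):

[mng ~]$ ssh k8s-worker0.test.k8s.local "openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt /tmp/private_ca.crt"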

Perform the above on every one of the following nodes (a loop sketch follows the list).

  • k8s-worker0.test.k8s.local
  • k8s-worker1.test.k8s.local
  • k8s-worker2.test.k8s.local
  • k8s-master0.test.k8s.local
  • k8s-master1.test.k8s.local
  • k8s-master2.test.k8s.local
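
A loop sketch for rolling the same steps across all the nodes (it assumes sudo works the same way as in the k8s-worker0 example above):

[mng ~]$ for n in k8s-worker0 k8s-worker1 k8s-worker2 k8s-master0 k8s-master1 k8s-master2; do
>   scp tls/private_ca.crt ${n}.test.k8s.local:/tmp/
>   ssh ${n}.test.k8s.local "sudo cp /tmp/private_ca.crt /etc/pki/ca-trust/source/anchors/test_k8s_local_ca.crt && sudo update-ca-trust extract"
> done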

Creating the Test Application

[mng ~]$ wget https://go.dev/dl/go1.23.6.linux-amd64.tar.gz

[mng ~]$ sudo rm -rf /usr/local/go && tar -C /usr/local -xzf go1.23.6.linux-amd64.tar.gz

[mng ~]$ export PATH=$PATH:/usr/local/go/bin

[mng ~]$ go version
go version go1.23.6 linux/amd64
[mng ~]$ mkdir k8s-test-app
[mng ~]$ cd k8s-test-app

[mng k8s-test-app]$ go mod init test.k8s.local/test-app

[mng k8s-test-app]$ vi main.go

[mng k8s-test-app]$ go mod tidy

[mng k8s-test-app]$ go build

[mng k8s-test-app]$ ls
go.mod  go.sum  main.go  test-app
main.go
package main

import (
    "github.com/gin-gonic/gin"
    "net/http"
)

func main() {
    r := gin.Default()

    r.GET("/ping", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{"message": "pong"})
    })

    r.Run(":8080")
}
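
The binary can be smoke-tested locally before containerizing; gin listens on :8080 as set in main.go:

[mng k8s-test-app]$ ./test-app &
[mng k8s-test-app]$ curl http://localhost:8080/ping
{"message":"pong"}
[mng k8s-test-app]$ kill %1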

Building the Test Application Container Image

[mng ~]$ sudo dnf install -y podman
[mng ~]$ podman --version
podman version 5.2.2

[mng ~]$ cd k8s-test-app

[mng k8s-test-app]$ podman build -t k8s-test-app:latest -f Containerfile .

[mng k8s-test-app]$ podman images
REPOSITORY                                  TAG         IMAGE ID      CREATED         SIZE
localhost/k8s-test-app                      latest      1aba92ed3d43  7 seconds ago   229 MB
<none>                                      <none>      ff57c1ac1fe0  25 minutes ago  1.53 GB
<none>                                      <none>      3932eb3d7723  29 minutes ago  1.38 GB
<none>                                      <none>      384b325ddbb4  34 minutes ago  1.13 GB
registry.access.redhat.com/ubi9/go-toolset  1.23        d8663eae6e1a  5 days ago      1.13 GB
registry.access.redhat.com/ubi9             latest      18ac20acd5ec  13 days ago     217 MB
Containerfile
# Stage 1: Build
FROM registry.access.redhat.com/ubi9/go-toolset:1.23 AS builder

USER root
WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o app

# Stage 2: Runtime
FROM registry.access.redhat.com/ubi9

WORKDIR /app
COPY --from=builder /app/app .

EXPOSE 8080
CMD ["./app"]
[mng k8s-test-app]$ podman run -d --name my-k8s-test-app -p 8080:8080 k8s-test-app:latest

[mng k8s-test-app]$ podman ps
CONTAINER ID  IMAGE                          COMMAND     CREATED         STATUS         PORTS                             NAMES
dd9355810bc5  localhost/k8s-test-app:latest  ./app       16 seconds ago  Up 16 seconds  0.0.0.0:8080->8080/tcp, 8080/tcp  my-k8s-test-app
[mng k8s-test-app]$ curl http://localhost:8080/ping |jq .
{
  "message": "pong"
}
[mng k8s-test-app]$ podman stop my-k8s-test-app

[mng k8s-test-app]$ podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Pushing the Container Image to the Registry (Harbor)

Create a New Project

※ It can also be created from the Web UI.

Create
[mng k8s-test-app]$ curl -u admin:<password> -X POST  --cacert ../tls/private_ca.crt "https://harbor.test.k8s.local/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{
    "project_name": "k8s-test-app",
    "public": false
  }'
Verify
[mng k8s-test-app]$ curl -u admin:Harbor12345 -X GET --cacert ../tls/private_ca.crt "https://harbor.test.k8s.local/api/v2.0/projects/k8s-test-app" | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   389  100   389    0     0  14961      0 --:--:-- --:--:-- --:--:-- 15560
{
  "creation_time": "2025-05-04T13:30:08.963Z",
  "current_user_role_id": 1,
  "current_user_role_ids": [
    1
  ],
  "cve_allowlist": {
    "creation_time": "0001-01-01T00:00:00.000Z",
    "id": 2,
    "items": [],
    "project_id": 2,
    "update_time": "0001-01-01T00:00:00.000Z"
  },
  "metadata": {
    "public": "false"
  },
  "name": "k8s-test-app",
  "owner_id": 1,
  "owner_name": "admin",
  "project_id": 2,
  "repo_count": 0,
  "update_time": "2025-05-04T13:30:08.963Z"
}

Registering the PrivateCA Certificate for Podman

[mng k8s-test-app]$ sudo mkdir -p /etc/containers/certs.d/harbor.test.k8s.local
[mng k8s-test-app]$ sudo cp ../tls/private_ca.crt /etc/containers/certs.d/harbor.test.k8s.local
[mng k8s-test-app]$ ls /etc/containers/certs.d/harbor.test.k8s.local
private_ca.crt

Push the Container Image

Tag
[mng k8s-test-app]$ podman tag k8s-test-app:latest harbor.test.k8s.local/k8s-test-app/k8s-test-app:latest
Log in
[mng k8s-test-app]$ podman login harbor.test.k8s.local
Username: admin
Password:
Login Succeeded!
Push
[mng k8s-test-app]$ podman push k8s-test-app harbor.test.k8s.local/k8s-test-app/k8s-test-app:latest
Getting image source signatures
Copying blob d13b1160182f done   |
Copying blob 58ff4fd5ef9c done   |
Copying config 1aba92ed3d done   |
Writing manifest to image destination
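
That the image actually arrived can also be confirmed through the Harbor API, in the same style as the project check earlier:

[mng k8s-test-app]$ curl -s -u admin:<password> --cacert ../tls/private_ca.crt "https://harbor.test.k8s.local/api/v2.0/projects/k8s-test-app/repositories" | jq .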

Creating a Secret with the Registry Access Credentials

Create
[mng k8s-test-app]$ kubectl create secret docker-registry harbor-creds --docker-server=harbor.test.k8s.local --docker-username=admin --docker-password='password' --docker-email=admin@test.k8s.local
Verify
[mng k8s-test-app]$ kubectl get secret
NAME                 TYPE                             DATA   AGE
ceph-delete-bucket   Opaque                           2      4d19h
harbor-creds         kubernetes.io/dockerconfigjson   1      11s

[mng k8s-test-app]$ kubectl get secret harbor-creds -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRo...
kind: Secret
metadata:
  creationTimestamp: "2025-05-13T02:44:01Z"
  name: harbor-creds
  namespace: default
  resourceVersion: "589568"
  uid: 764b6c0d-c39a-4a8a-ade0-4e68d934595e
type: kubernetes.io/dockerconfigjson

[mng k8s-test-app]$ echo eyJhdXRo... | base64 -d | jq .
{
  "auths": {
    "harbor.test.k8s.local": {
      "username": "admin",
      "password": "password",
      "email": "admin@test.k8s.local",
      "auth": "xxxxxx="
    }
  }
}

Deploying the Test Application Pods to the Cluster

Deploy
[mng k8s-test-app]$ kubectl apply -f k8s-test-app-deploy.yaml
k8s-test-app-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-test-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-test-app
  template:
    metadata:
      labels:
        app: k8s-test-app
    spec:
      imagePullSecrets:
      - name: harbor-creds
      containers:
      - name: k8s-test-app
        image: harbor.test.k8s.local/k8s-test-app/k8s-test-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
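
Before inspecting the pods, kubectl rollout status is a convenient way to wait until the Deployment (including the image pulls from Harbor) has settled:

[mng k8s-test-app]$ kubectl rollout status deployment/k8s-test-app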

Verify the deployment
[mng k8s-test-app]$ kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
k8s-test-app-f6689b7d6-lcxvn   1/1     Running   0          11m   172.20.194.119   k8s-worker1   <none>           <none>
k8s-test-app-f6689b7d6-vk42g   1/1     Running   0          10m   172.30.126.48    k8s-worker2   <none>           <none>
k8s-test-app-f6689b7d6-vk5fv   1/1     Running   0          11m   172.23.229.161   k8s-worker0   <none>           <none>
Verify shell access
[mng k8s-test-app]$ kubectl exec -it k8s-test-app-f6689b7d6-lcxvn -- bash
[root@k8s-test-app-f6689b7d6-lcxvn etc]# cat /etc/redhat-release
Red Hat Enterprise Linux release 9.5 (Plow)

[root@k8s-test-app-f6689b7d6-lcxvn app]# pwd
/app
[root@k8s-test-app-f6689b7d6-lcxvn app]# ls -la
total 11364
drwxr-xr-x. 2 root root       17 May 04 13:14 .
dr-xr-xr-x. 1 root root       40 May 04 02:53 ..
-rwxr-xr-x. 1 root root 11632648 May 04 12:49 app

[root@k8s-test-app-f6689b7d6-lcxvn etc]# exit

Creating a Service (MetalLB) for the Test Application

Create
[mng k8s-test-app]$ kubectl apply -f k8s-test-app-svc.yaml
k8s-test-app-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-test-app
spec:
  type: LoadBalancer
  selector:
    app: k8s-test-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Verify
[mng k8s-test-app]$ kubectl get svc -o wide
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
k8s-test-app   LoadBalancer   10.100.133.21   10.0.0.64     80:32278/TCP   14s   app=k8s-test-app
...

※ MetalLB has assigned 10.0.0.64 as the VIP.
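
If a hostname is preferred over the raw VIP, the same dnsmasq pattern used for Harbor applies; the name k8s-test-app.test.k8s.local here is a hypothetical example, not something configured elsewhere in this article:

[lb ~]$ echo 'address=/k8s-test-app.test.k8s.local/10.0.0.64' | sudo tee -a /etc/dnsmasq.d/k8s.conf
[lb ~]$ sudo systemctl restart dnsmasq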

Running the Tests

Verify access from an external client (management host)

[mng k8s-test-app]$ curl http://10.0.0.64/ping | jq .
{
  "message": "pong"
}

Verify access from another Pod

Run a test Pod
[mng k8s-test-app]$ kubectl run testpod --image=registry.access.redhat.com/ubi9/ubi-minimal --restart=Never -it -- sh

# Verify connectivity
sh-5.1# curl http://k8s-test-app/ping
{"message":"pong"}

sh-5.1# exit
Delete the test Pod
[mng k8s-test-app]$ kubectl get pods
NAME                           READY   STATUS      RESTARTS   AGE
testpod                        0/1     Completed   0          4m37s
...

[mng k8s-test-app]$ kubectl delete pod testpod
pod "testpod" deleted