Introduction
A memo on installing the Ingress NGINX Controller into a Kubernetes cluster. The deployment environment is assumed to be on-premises.
Requests to the backend application Pods are routed based on the API path. Health check (Probe) settings that gate forwarding are also added on the backend application Pod side.
For VirtualHost-based routing, both routing to the backend applications above and routing to the private registry (Harbor) are tried.
The recommended TLS settings (cipher suites, etc.) described below are also applied.
In addition, the following settings are added:
- HTTP headers added by the proxy
- Individual Nginx parameter settings
Software used
- VMware Workstation 17 Pro (Windows 11 / X86_64)
- RHEL 9.5 (VM)
- Dnsmasq (2.79)
- HA-Proxy (1.8.27)
- Kubernetes (v1.32)
- CRI-O (v1.32)
- Calico (v3.29.3)
- MetalLB (v0.14.9)
- Ingress NGINX Controller (1.12.2)
- Helm (v3.17.3)
※ Note the supported Kubernetes version range.
Kubernetes cluster layout
The environment built previously is reused.
Ingress routing layout
| Endpoint (VirtualHost) | API path | Destination |
|---|---|---|
| ingress.test.k8s.local | /ping | test-app-app application |
| | /s3-buckets | test-app-appgw application |
| | /json-from-cephfs | test-app-appgw application |
| | other | 404-NotFound service |
| ingress-harbor.test.k8s.local | all | private registry (Harbor) |
Install the Ingress NGINX Controller
[mng ~]$ git clone --branch release-1.12 https://github.com/kubernetes/ingress-nginx.git
/home/hoge/ingress-nginx/charts/ingress-nginx
├── Chart.yaml
├── templates
│ ├──...
│
└── values.yaml
[mng ~]$ cd ingress-nginx/charts/ingress-nginx
[mng ingress-nginx]$ vi values.yaml
[mng ingress-nginx]$ git diff values.yaml
diff --git a/charts/ingress-nginx/values.yaml b/charts/ingress-nginx/values.yaml
index c1d3e68e7..1324eb53a 100644
--- a/charts/ingress-nginx/values.yaml
+++ b/charts/ingress-nginx/values.yaml
@@ -221,7 +221,7 @@ controller:
# name: secret-resource
# -- Use a `DaemonSet` or `Deployment`
- kind: Deployment
+ kind: DaemonSet
# -- Annotations to be added to the controller Deployment or DaemonSet
##
annotations: {}
@@ -532,7 +532,7 @@ controller:
ipFamilies:
- IPv4
# -- Enable the HTTP listener on both controller services or not.
- enableHttp: true
+ enableHttp: false
# -- Enable the HTTPS listener on both controller services or not.
enableHttps: true
ports:
※ The Service for the Ingress Controller is exposed through a VIP assigned by MetalLB (L2 mode), which provides a redundant external access path. MetalLB answers ARP for the VIP only on nodes where an Ingress Controller Pod is running, so a DaemonSet is used instead of a Deployment to place a Pod on every worker node and allow any of them to receive traffic.
※ The backend applications expose their APIs over HTTPS only, so HTTP is disabled.
[mng ingress-nginx]$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
[mng ingress-nginx]$ helm repo update
[mng ingress-nginx]$ helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml
[mng ingress-nginx]$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
...
ingress-nginx ingress-nginx 1 2025-05-03 06:09:06.626617369 +0900 JST deployed ingress-nginx-4.12.2 1.12.2
...
[mng ingress-nginx]$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-controller-bpbdw 1/1 Running 0 3m55s
pod/ingress-nginx-controller-gp5vv 1/1 Running 0 3m55s
pod/ingress-nginx-controller-l7q97 1/1 Running 0 3m55s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.105.10.187 10.0.0.66 443:30932/TCP 3m57s
service/ingress-nginx-controller-admission ClusterIP 10.107.97.224 <none> 443/TCP 3m57s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/ingress-nginx-controller 3 3 3 3 3 kubernetes.io/os=linux 3m56s
[mng ingress-nginx]$ kubectl get daemonset -n ingress-nginx
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ingress-nginx-controller 3 3 3 3 3 kubernetes.io/os=linux 4m27s
Install the 404-NotFound service
As the default routing target of the Ingress (for unknown/undefined API paths), install a sorry-service Pod that always responds with 404 (Not Found).
/home/hoge/dummy-404
├── dummy-404-deploy-ingress.yaml
└── dummy-404-svc-ingress.yaml
[mng ~]$ mkdir dummy-404
[mng ~]$ cd dummy-404
[mng dummy-404]$ vi dummy-404-deploy-ingress.yaml
[mng dummy-404]$ vi dummy-404-svc-ingress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: dummy-404
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: dummy-404
template:
metadata:
labels:
app: dummy-404
spec:
containers:
- name: dummy-404
image: registry.access.redhat.com/ubi9-minimal
livenessProbe:
tcpSocket:
port: 8443
initialDelaySeconds: 5
periodSeconds: 20
timeoutSeconds: 3
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 8443
initialDelaySeconds: 3
periodSeconds: 10
failureThreshold: 2
ports:
- containerPort: 8443
command: ["/bin/sh"]
args:
- -c
- |
microdnf install -y nginx openssl && \
openssl req -x509 -nodes -days 365 -subj "/CN=dummy-404" -newkey rsa:2048 -keyout /etc/nginx/tls/tls.key -out /etc/nginx/tls/tls.crt && \
ls -l /etc/nginx/tls/ && \
openssl x509 -in /etc/nginx/tls/tls.crt -text && \
mkdir -p /etc/nginx/conf.d && \
echo 'server { listen 8443 ssl; ssl_certificate /etc/nginx/tls/tls.crt; ssl_certificate_key /etc/nginx/tls/tls.key; return 404; }' > /etc/nginx/conf.d/default.conf && \
ls -l /etc/nginx/conf.d/ && \
cat /etc/nginx/conf.d/default.conf && \
nginx -g 'daemon off;'
volumeMounts:
- name: dummy-tls
mountPath: /etc/nginx/tls
volumes:
- name: dummy-tls
emptyDir: {}
※ Note that this simplified example needs Internet access to package repositories at startup.
※ Because the Pod generates a self-signed certificate at startup and always returns 404 (Not Found), the health checks are done at the TCP layer (tcpSocket).
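The certificate-generation step baked into the container command can be tried on the host first (a sketch; assumes openssl is installed locally):

```shell
# Reproduce the Pod's startup step on the host: a throwaway self-signed cert.
workdir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -subj "/CN=dummy-404" \
  -newkey rsa:2048 -keyout "$workdir/tls.key" -out "$workdir/tls.crt" 2>/dev/null
# Inspect the subject to confirm it matches what the Pod will serve.
openssl x509 -in "$workdir/tls.crt" -noout -subject
```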
apiVersion: v1
kind: Service
metadata:
name: dummy-404
namespace: default
spec:
type: ClusterIP
selector:
app: dummy-404
ports:
- name: https
port: 443
targetPort: 8443
[mng dummy-404]$ kubectl apply -f dummy-404-deploy-ingress.yaml
[mng dummy-404]$ kubectl apply -f dummy-404-svc-ingress.yaml
[mng dummy-404]$ kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
dummy-404-6c4c9d44f4-fpj9k 0/1 Running 0 27s
...
[mng dummy-404]$ kubectl get svc -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dummy-404 ClusterIP 10.98.180.77 <none> 443/TCP 49s
※ The ClusterIP 10.98.180.77 has been assigned.
[mng dummy-404]$ kubectl run testpod --image=registry.access.redhat.com/ubi9/ubi-minimal --restart=Never -it -- sh
sh-5.1# curl -k -v https://10.98.180.77
* Trying 10.98.180.77:443...
* Connected to 10.98.180.77 (10.98.180.77) port 443 (#0)
...
* Server certificate:
* subject: CN=dummy-404
...
> GET / HTTP/1.1
> Host: 10.98.180.77
> User-Agent: curl/7.76.1
> Accept: */*
>
...
< HTTP/1.1 404 Not Found
< Server: nginx/1.20.1
...
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.20.1</center>
</body>
</html>
Add Probe settings to the k8s-test-app application
/home/hoge/k8s-test-app
├── Containerfile
├── go.mod
├── go.sum
├── k8s-test-app-api-key.yaml
├── k8s-test-app-deploy.yaml
├── k8s-test-app-svc.yaml
├── k8s-test-app-tls-secret.yaml
├── main.go
├── openssl.cnf
├── test-app
├── tls.crt
├── tls.csr
└── tls.key
Add health check (Probe) settings to the k8s-test-app application's Deployment.
The PrivateCA certificate was already added to the node system earlier, so no change is needed there. The API-KEY is also set on the Probes.
apiVersion: apps/v1
kind: Deployment
metadata:
name: k8s-test-app
spec:
replicas: 3
selector:
matchLabels:
app: k8s-test-app
template:
metadata:
labels:
app: k8s-test-app
spec:
imagePullSecrets:
- name: harbor-creds
initContainers:
- name: k8s-test-app-init
image: busybox
command:
- sh
- -c
- |
echo "[INIT] Creating log directory...";
if mkdir -p /mnt/cephfs/k8s-test-app/log; then
chmod -R 777 /mnt/cephfs/k8s-test-app;
else
echo "[ERROR] Failed to create /mnt/cephfs/k8s-test-app/log" >&2;
exit 1;
fi
volumeMounts:
- name: history-vol
mountPath: /mnt/cephfs
containers:
- name: k8s-test-app
image: harbor.test.k8s.local/k8s-test-app/k8s-test-app:latest
imagePullPolicy: Always
ports:
- containerPort: 8443
startupProbe: # ★ added
httpGet:
path: /ping
port: 8443
scheme: HTTPS
httpHeaders:
- name: Authorization
value: "TestApp xxxx..." # ★ set the API-KEY
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
readinessProbe: # ★ added
httpGet:
path: /ping
port: 8443
scheme: HTTPS
httpHeaders:
- name: Authorization
value: "TestApp xxxx..." # ★ set the API-KEY
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
livenessProbe: # ★ added
httpGet:
path: /ping
port: 8443
scheme: HTTPS
httpHeaders:
- name: Authorization
value: "TestApp xxxx..." # ★ set the API-KEY
initialDelaySeconds: 10
periodSeconds: 20
timeoutSeconds: 3
failureThreshold: 3
volumeMounts:
- name: certs
mountPath: /app/certs
readOnly: true
- name: api-key
mountPath: /app/secret
readOnly: true
- name: history-vol
mountPath: /mnt/cephfs
volumes:
- name: certs
secret:
secretName: k8s-test-app-tls
- name: api-key
secret:
secretName: k8s-test-app-api-key
- name: history-vol
persistentVolumeClaim:
claimName: cephfs-pvc
※ Embedding the API-KEY directly in the manifest should be avoided, but it will do for now...
[mng k8s-test-app]$ kubectl apply -f k8s-test-app-deploy.yaml
[mng k8s-test-app]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
...
k8s-test-app-6cbdf5d455-6l7fv 1/1 Running 0 81s
k8s-test-app-6cbdf5d455-flpsx 1/1 Running 0 60s
k8s-test-app-6cbdf5d455-h27w2 1/1 Running 0 38s
...
[mng k8s-test-app]$ kubectl get endpoints -o wide
NAME ENDPOINTS AGE
...
k8s-test-app 172.20.194.74:8443,172.23.229.183:8443,172.30.126.4:8443 4d18h
...
[mng k8s-test-app]$ kubectl describe pod k8s-test-app-6cbdf5d455-6l7fv
Name: k8s-test-app-6cbdf5d455-6l7fv
Namespace: default
...
Status: Running
...
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
...
Normal Scheduled 2m34s default-scheduler Successfully assigned default/k8s-test-app-6cbdf5d455-6l7fv to k8s-worker1
Normal Pulling 2m33s kubelet Pulling image "busybox"
Normal Pulled 2m28s kubelet Successfully pulled image "busybox" in 4.099s (4.099s including waiting). Image size: 4527719 bytes.
Normal Created 2m28s kubelet Created container: k8s-test-app-init
Normal Started 2m28s kubelet Started container k8s-test-app-init
Normal Pulling 2m27s kubelet Pulling image "harbor.test.k8s.local/k8s-test-app/k8s-test-app:latest"
Normal Pulled 2m27s kubelet Successfully pulled image "harbor.test.k8s.local/k8s-test-app/k8s-test-app:latest" in 704ms (704ms including waiting). Image size: 228874413 bytes.
Normal Created 2m27s kubelet Created container: k8s-test-app
Normal Started 2m27s kubelet Started container k8s-test-app
[mng k8s-test-app]$ kubectl rollout restart deployment/k8s-test-app
(Bonus) The Probe's User-Agent
It appears to be the following:
User-Agent: kube-probe/1.32
The source IP also appears to be the IP of the same node (below is the IP of the k8s-worker02 node):
[GIN] 2025/05/04 - 23:03:44 | 200 | 68.138866ms | 10.0.0.157 | GET "/ping"
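For reference, the client IP field can be pulled out of such a GIN access-log line with a one-liner like this (a sketch; the field positions are those of the default GIN log format shown above):

```shell
# Extract the 4th '|'-separated field (the client IP) from a GIN access-log line.
line='[GIN] 2025/05/04 - 23:03:44 | 200 | 68.138866ms | 10.0.0.157 | GET "/ping"'
ip=$(printf '%s\n' "$line" | awk -F'|' '{gsub(/ /, "", $4); print $4}')
echo "$ip"   # → 10.0.0.157
```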
Add a Service as the Ingress forwarding backend (for the k8s-test-app application)
apiVersion: v1
kind: Service
metadata:
name: k8s-test-app-ingress
namespace: default
spec:
type: ClusterIP
selector:
app: k8s-test-app
ports:
- name: https
port: 443
targetPort: 8443
[mng k8s-test-app]$ kubectl apply -f k8s-test-app-svc-ingress.yaml
[mng k8s-test-app]$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
SELECTOR
...
k8s-test-app-ingress ClusterIP 10.98.224.199 <none> 443/TCP 3m36s app=k8s-test-app
...
Add health checks to the k8s-test-appgw application
Configure Probes in the same way.
/home/hoge/k8s-test-appgw
├── Containerfile
├── go.mod
├── go.sum
├── k8s-test-appgw-helm
│ ├── charts
│ ├── Chart.yaml
│ ├── templates
│ │ ├── configmap.yaml
│ │ ├── deployment.yaml
│ │ ├── _helpers.tpl
│ │ ├── ingress-service.yaml
│ │ ├── secret.yaml
│ │ ├── service.yaml
│ │ ├── tests
│ │ └── tls-secret.yaml
│ └── values.yaml
├── main.go
├── openssl.cnf
├── test-appgw
├── tls.crt
├── tls.csr
├── tls.key
└── values-cert-update.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: k8s-test-appgw
labels:
app: k8s-test-appgw
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: k8s-test-appgw
template:
metadata:
labels:
app: k8s-test-appgw
spec:
imagePullSecrets:
- name: harbor-creds
initContainers:
- name: k8s-test-appgw-init
image: busybox
command:
- sh
- -c
- |
DIR=/mnt/cephfs/k8s-test-appgw
FILE="$DIR/data.json"
if [ ! -f "$FILE" ]; then
if mkdir -p "$DIR"; then
echo '{"message": "Hello, world"}' > "$FILE"
chmod 644 "$FILE"
echo "[INIT] data.json created at $FILE"
else
echo "[ERROR] Failed to create directory $DIR" >&2
exit 1
fi
else
echo "[INIT] $FILE already exists. Skipping creation."
fi
echo "[INIT] Creating log directory...";
LOGDIR="$DIR/log"
if mkdir -p "$LOGDIR"; then
chmod -R 777 "$DIR";
else
echo "[ERROR] Failed to create " "$LOGDIR" >&2;
exit 1;
fi
volumeMounts:
- name: history-vol
mountPath: /mnt/cephfs
containers:
- name: app
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8443
startupProbe: # ★ added
httpGet:
path: /ping # also checks reachability to the upstream k8s-test-app application
port: 8443
scheme: HTTPS
httpHeaders:
- name: Authorization
value: "TestApp xxxx..." # ★ set the API-KEY
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
readinessProbe: # ★ added
httpGet:
path: /ping # also checks reachability to the upstream k8s-test-app application
port: 8443
scheme: HTTPS
httpHeaders:
- name: Authorization
value: "TestApp xxxx..." # ★ set the API-KEY
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
livenessProbe: # ★ added
httpGet:
path: /ping # also checks reachability to the upstream k8s-test-app application
port: 8443
scheme: HTTPS
httpHeaders:
- name: Authorization
value: "TestApp xxxx..." # ★ set the API-KEY
initialDelaySeconds: 10
periodSeconds: 20
timeoutSeconds: 3
failureThreshold: 3
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: config-vol
mountPath: /app/config
readOnly: true
- name: secret-vol
mountPath: /app/secret
readOnly: true
- name: tls-vol
mountPath: /app/certs
readOnly: true
- name: history-vol
mountPath: /mnt/cephfs
- name: ca
mountPath: /app/ca/ca.crt
subPath: ca.crt
volumes:
- name: config-vol
configMap:
name: k8s-test-appgw-config
- name: secret-vol
secret:
secretName: k8s-test-appgw-secret
- name: tls-vol
secret:
secretName: k8s-test-appgw-tls
- name: history-vol
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim }}
- name: ca
configMap:
name: rgw-ca-cert
items:
- key: ca.crt
path: ca.crt
apiVersion: v1
kind: Service
metadata:
name: k8s-test-appgw-ingress
labels:
app: k8s-test-appgw
spec:
type: ClusterIP
ports:
- port: {{ .Values.service.port }}
targetPort: 8443
selector:
app: k8s-test-appgw
[mng ~]$ cd k8s-test-appgw
[mng k8s-test-appgw]$ helm upgrade k8s-test-appgw ./k8s-test-appgw-helm --namespace default --reuse-values
[mng k8s-test-appgw]$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
k8s-test-appgw default 2 2025-05-04 01:28:25.488826309 +0900 JST deployed k8s-test-appgw-0.1.0 1.0.0
[mng k8s-test-appgw]$ helm history k8s-test-appgw -n default
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Sat May 03 20:59:46 2025 superseded k8s-test-appgw-0.1.0 1.0.0 Install complete
2 Sun May 04 01:28:25 2025 deployed k8s-test-appgw-0.1.0 1.0.0 Upgrade complete
[mng k8s-test-appgw]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
...
k8s-test-appgw-579f8d95c9-fpww2 1/1 Running 0 86s
k8s-test-appgw-579f8d95c9-kbgdm 1/1 Running 0 2m11s
k8s-test-appgw-579f8d95c9-rll59 1/1 Running 0 51s
...
[mng k8s-test-appgw]$ kubectl get endpoints -o wide
NAME ENDPOINTS AGE
...
k8s-test-app-ingress 172.20.194.123:8443,172.23.229.170:8443,172.30.126.56:8443 11m
k8s-test-appgw-ingress 172.20.194.71:8443,172.23.229.148:8443,172.30.126.31:8443 2m43s
...
[mng k8s-test-appgw]$ kubectl describe pod k8s-test-appgw-579f8d95c9-fpww2
Name: k8s-test-appgw-579f8d95c9-fpww2
Namespace: default
...
Status: Running
...
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m2s default-scheduler Successfully assigned default/k8s-test-appgw-579f8d95c9-fpww2 to k8s-worker2
Normal Pulling 2m59s kubelet Pulling image "busybox"
Normal Pulled 2m55s kubelet Successfully pulled image "busybox" in 4.101s (4.101s including waiting). Image size: 4527719 bytes.
Normal Created 2m55s kubelet Created container: k8s-test-appgw-init
Normal Started 2m55s kubelet Started container k8s-test-appgw-init
Normal Pulling 2m55s kubelet Pulling image "harbor.test.k8s.local/k8s-test-app/k8s-test-appgw:latest"
Normal Pulled 2m54s kubelet Successfully pulled image "harbor.test.k8s.local/k8s-test-app/k8s-test-appgw:latest" in 202ms (202ms including waiting). Image size: 235415213 bytes.
Normal Created 2m54s kubelet Created container: app
Normal Started 2m54s kubelet Started container app
[mng k8s-test-appgw]$ kubectl rollout restart deployment/k8s-test-appgw
Create and deploy the Ingress resource
/home/hoge/ingress-nginx-work
├── dhparam.pem
├── ingress-nginx-configmap-add-headers.yaml
├── ingress-nginx-configmap-proxy-set-headers.yaml
├── ingress-nginx-ingress.yaml
├── ingress-nginx-secret-tls-dh-param.yaml
├── ingress-nginx-secret-tls.yaml
├── ingress-nginx-svc-harbor-alias.yaml
├── openssl.cnf
├── tls.crt
├── tls.csr
└── tls.key
[mng ~]$ mkdir ingress-nginx-work
[mng ~]$ cd ingress-nginx-work
[mng ingress-nginx-work] vi ingress-nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8s-test-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
ingressClassName: nginx
tls:
- hosts:
- ingress.test.k8s.local
- ingress-harbor.test.k8s.local
secretName: k8s-test-ingress-tls
defaultBackend:
service:
name: dummy-404 # responds with 404 Not Found
port:
number: 443
rules:
- host: ingress.test.k8s.local # forwards to the backend applications
http:
paths:
- path: /ping
pathType: Exact
backend:
service:
name: k8s-test-app-ingress # k8s-test-app application
port:
number: 443
- path: /s3-buckets
pathType: Exact
backend:
service:
name: k8s-test-appgw-ingress # k8s-test-appgw application
port:
number: 443
- path: /json-from-cephfs
pathType: Exact
backend:
service:
name: k8s-test-appgw-ingress # k8s-test-appgw application
port:
number: 443
- path: /
pathType: Prefix
backend:
service:
name: dummy-404 # responds with 404 Not Found
port:
number: 443
- host: ingress-harbor.test.k8s.local # forwards to the private registry
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: harbor-ingress-alias
port:
number: 443
※ Fine-grained routing conditions (even something like the HTTP method) apparently cannot be specified.
※ Within a single Ingress, HTTPS/HTTP apparently cannot be chosen per backend (so the sorry server also needed TLS).
※ Judging from the documentation, the default security restrictions around nginx.ingress.kubernetes.io/configuration-snippet seem to be getting gradually stricter, so I would rather avoid it (preferring to keep the upstream defaults as the baseline best practice). For that reason, finer-grained settings such as CORS preflight handling are left pending for now.
[mng ingress-nginx-work]$ kubectl apply -f ingress-nginx-ingress.yaml
[mng ingress-nginx-work]$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
k8s-test-ingress nginx ingress.test.k8s.local 10.0.0.66 80, 443 107s
Create an alias Service resource for the private registry (Harbor)
[mng ingress-nginx-work] vi ingress-nginx-svc-harbor-alias.yaml
apiVersion: v1
kind: Service
metadata:
name: harbor-ingress-alias
namespace: default
spec:
type: ExternalName
externalName: harbor.harbor.svc.cluster.local # Harbor's FQDN (the in-cluster name is fine)
ports:
- port: 443
targetPort: 443
protocol: TCP
[mng ingress-nginx-work]$ kubectl apply -f ingress-nginx-svc-harbor-alias.yaml
[mng ingress-nginx-work]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
harbor-ingress-alias ExternalName <none> harbor.harbor.svc.cluster.local 443/TCP 30m
...
Create the server certificate and Secret for the Ingress endpoints
Create the server certificate
Issue a server certificate for the Ingress VIP.
# Check the VIP address
[mng ingress-nginx-work]$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.105.10.187 10.0.0.66 443:30932/TCP 56m
# Create the certificate configuration
[mng ingress-nginx-work]$ vi openssl.cnf
# Create the private key
[mng ingress-nginx-work]$ openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:P-256 -out tls.key
# Create the CSR
[mng ingress-nginx-work]$ openssl req -new -key tls.key -out tls.csr -config openssl.cnf
# Create the certificate
[mng ingress-nginx-work]$ openssl x509 -req -in tls.csr -CA ../tls/private_ca.crt -CAkey ../tls/private_ca.key -CAcreateserial -out tls.crt -days 365 -sha256 -extfile openssl.cnf -extensions server-cert
※ This produces tls.crt (certificate) and tls.key (private key).
Set the SANs (alt_name) to match the Service (MetalLB) information.
[server-cert]
keyUsage = critical, digitalSignature, keyEncipherment, keyAgreement
extendedKeyUsage = serverAuth
subjectAltName = @alt_name
[req]
distinguished_name = dn
prompt = no
[dn]
C = JP
O = k8s
OU = test
CN = ingress
[alt_name]
DNS.1 = ingress.test.k8s.local # for the backend applications
DNS.2 = ingress-harbor.test.k8s.local # for the private registry (Harbor)
DNS.3 = ingress-nginx-controller.ingress-nginx.svc.cluster.local # Ingress Controller Service name
DNS.4 = ingress-nginx-controller # Ingress Controller Service name
IP.1 = 10.0.0.66 # VIP
IP.2 = 10.105.10.187 # ClusterIP
IP.3 = 127.0.0.1 # Loopback
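The intended SAN list can be sanity-checked without the private CA by issuing a throwaway self-signed certificate carrying the same extensions (a sketch; it uses `-addext`, available in OpenSSL 1.1.1+, instead of the openssl.cnf above, and only a subset of the SAN entries for brevity):

```shell
dir=$(mktemp -d)
# Self-signed stand-in with the same subject and (partial) SAN list.
openssl req -x509 -nodes -days 1 -subj "/C=JP/O=k8s/OU=test/CN=ingress" \
  -addext "subjectAltName=DNS:ingress.test.k8s.local,DNS:ingress-harbor.test.k8s.local,IP:10.0.0.66" \
  -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
  -keyout "$dir/tls.key" -out "$dir/tls.crt" 2>/dev/null
# Print the SAN extension to verify every expected entry is present.
openssl x509 -in "$dir/tls.crt" -noout -ext subjectAltName
```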
Create the Secret for the TLS server certificate
[mng ingress-nginx-work]$ kubectl create secret tls k8s-test-ingress-tls --cert=tls.crt --key=tls.key --namespace=default --dry-run=client -o yaml > ingress-nginx-secret-tls.yaml
apiVersion: v1
data:
tls.crt: LS0tL...=
tls.key: LS0...==
kind: Secret
metadata:
creationTimestamp: null
name: k8s-test-ingress-tls
namespace: default
type: kubernetes.io/tls
[mng ingress-nginx-work]$ kubectl apply -f ingress-nginx-secret-tls.yaml
[mng ingress-nginx-work]$ kubectl get secret
NAME TYPE DATA AGE
...
k8s-test-ingress-tls kubernetes.io/tls 2 46s
...
[mng ~]$ curl -v --cacert ./tls/private_ca.crt -H 'Authorization: TestApp xxx...' https://ingress.test.k8s.local/ping |jq .
Trying 10.0.0.66:443...
* Connected to ingress.test.k8s.local (10.0.0.66) port 443 (#0)
...
* Server certificate:
* subject: C=JP; O=k8s; OU=test; CN=ingress
...
* subjectAltName: host "ingress.test.k8s.local" matched cert's "ingress.test.k8s.local"
* issuer: CN=private_ca
* SSL certificate verify ok.
...
< HTTP/2 200
< content-type: application/json; charset=utf-8
...
{
"message": "pong"
}
[mng ~]$ curl -v --cacert ../tls/private_ca.crt -H 'Authorization: TestApp xxx...' https://ingress.test.k8s.local/s3-buckets |jq .
Trying 10.0.0.66:443...
* Connected to ingress.test.k8s.local (10.0.0.66) port 443 (#0)
...
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* Server certificate:
* subject: C=JP; O=k8s; OU=test; CN=ingress
...
* subjectAltName: host "ingress.test.k8s.local" matched cert's "ingress.test.k8s.local"
* issuer: CN=private_ca
* SSL certificate verify ok.
...
< HTTP/2 200
< content-type: application/json; charset=utf-8
...
[
{
"CreationDate": "2025-05-09T06:13:39.058Z",
"Name": "bucket-for-testuser1"
}
]
[mng ~]$ curl -v --cacert ../tls/private_ca.crt -H 'Authorization: TestApp xxx...' https://ingress.test.k8s.local/json-from-cephfs |jq .
Trying 10.0.0.66:443...
* Connected to ingress.test.k8s.local (10.0.0.66) port 443 (#0)
...
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* Server certificate:
* subject: C=JP; O=k8s; OU=test; CN=ingress
...
* subjectAltName: host "ingress.test.k8s.local" matched cert's "ingress.test.k8s.local"
* issuer: CN=private_ca
* SSL certificate verify ok.
...
< HTTP/2 200
< content-type: application/json; charset=utf-8
...
{
"message": "Hello, world"
}
[mng ~]$ kubectl logs -n ingress-nginx ingress-nginx-controller-gp5vv
...
10.0.0.156 - - [04/May/2025:22:07:22 +0000] "GET /ping HTTP/2.0" 200 18 "-" "curl/7.76.1" 74 0.013 [default-k8s-test-app-ingress-443] [] 172.20.194.123:8443 18 0.013 200 3be385d45eb0016d67c59a493742b1e0
10.0.0.156 - - [04/May/2025:22:13:23 +0000] "GET /s3-buckets HTTP/2.0" 200 75 "-" "curl/7.76.1" 78 0.096 [default-k8s-test-appgw-ingress-443] [] 172.23.229.148:8443 75 0.096 200 a9b5a1ca16a8df581f1705204281ae51
10.0.0.156 - - [04/May/2025:22:16:08 +0000] "GET /json-from-cephfs HTTP/2.0" 200 26 "-" "curl/7.76.1" 83 0.027 [default-k8s-test-appgw-ingress-443] [] 172.20.194.71:8443 26 0.027 200 63a2ce537f3d7a46628fd182e7d7e0a5
...
※ Confirms that the requests are coming in via the Ingress.
Make the X-Forwarded-For header set by the Ingress Controller carry the real client IP
=== Request Headers ===
2025/05/04 23:07:37 X-Request-Id: 20edbf198e6b024800f9b65426bbbdeb
2025/05/04 23:07:37 X-Real-Ip: 10.0.0.156
2025/05/04 23:07:37 X-Forwarded-For: 10.0.0.156
2025/05/04 23:07:37 X-Forwarded-Host: ingress.test.k8s.local
2025/05/04 23:07:37 X-Scheme: https
2025/05/04 23:07:37 User-Agent: curl/7.76.1
2025/05/04 23:07:37 X-Forwarded-Port: 443
2025/05/04 23:07:37 X-Forwarded-Proto: https
2025/05/04 23:07:37 X-Forwarded-Scheme: https
2025/05/04 23:07:37 Accept: */*
2025/05/04 23:07:37 Authorization: TestApp xxx...
[GIN] 2025/05/04 - 23:07:37 | 200 | 10.895582ms | 10.0.0.156 | GET "/json-from-cephfs"
When the default Helm values.yaml is applied as-is, X-Real-Ip and X-Forwarded-For contain not the external client's IP but the IP of the worker node holding the VIP (in the example above, 10.0.0.156, the k8s-worker02.test.k8s.local node).
To let the backend applications see the real client IP in X-Forwarded-For, change the Ingress Controller Service's externalTrafficPolicy from the default "Cluster" to "Local".
- externalTrafficPolicy: Cluster (default)
  When forwarding to an Ingress Controller Pod, the Ingress Controller Service (kube-proxy) rewrites the client's source IP to the worker node's IP. This guarantees the return path, since the target Ingress Controller Pod may be on a different worker node. As a result, the values the Ingress Controller puts into X-Real-Ip and X-Forwarded-For also become the worker node's IP.
- externalTrafficPolicy: Local
  The Ingress Controller Service forwards a packet only to an Ingress Controller Pod on the node the packet first arrived at, preserving the client's source IP (the return path is that same worker node, so there is no problem).
Note that this applies only between the Ingress Controller Service and its Pods; it does not affect the Services and Pods of the backend applications the Ingress then forwards to (those are load-balanced by kube-proxy as usual).
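For reference, the same change could also be applied directly to the live Service with a strategic-merge patch instead of editing the chart values (a sketch with a hypothetical file name; since the object is managed by Helm, changing values.yaml as below is the cleaner route):

```yaml
# etp-local-patch.yaml (hypothetical file name)
# Apply with: kubectl -n ingress-nginx patch svc ingress-nginx-controller --patch-file etp-local-patch.yaml
spec:
  externalTrafficPolicy: Local
```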
[mng ~]$ cd ingress-nginx/charts/ingress-nginx
[mng ingress-nginx]$ vi values.yaml
[mng ingress-nginx]$ git diff values.yaml
diff --git a/charts/ingress-nginx/values.yaml b/charts/ingress-nginx/values.yaml
...
@@ -514,7 +514,7 @@ controller:
# -- External traffic policy of the external controller service. Set to "Local" to preserve source IP on providers supporting it.
# Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
- externalTrafficPolicy: ""
+ externalTrafficPolicy: "Local"
# -- Session affinity of the external controller service. Must be either "None" or "ClientIP" if set. Defaults to "None".
# Ref: https://kubernetes.io/docs/reference/networking/virtual-ips/#session-affinity
sessionAffinity: ""
...
[mng ingress-nginx]$ kubectl get svc -n ingress-nginx ingress-nginx-controller -o yaml
apiVersion: v1
kind: Service
metadata:
...
name: ingress-nginx-controller
namespace: ingress-nginx
...
spec:
...
externalTrafficPolicy: Local # ★ changed
internalTrafficPolicy: Cluster
...
type: LoadBalancer
...
Apply the recommended TLS parameter settings
[mng ~]$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-dtlls 1/1 Running 1 13h
ingress-nginx-controller-jdz4l 1/1 Running 1 13h
ingress-nginx-controller-nd5l9 1/1 Running 1 13h
[mng ~]$ kubectl exec -it ingress-nginx-controller-dtlls -n ingress-nginx -- nginx -V
nginx version: nginx/1.25.5
built by gcc 14.2.0 (Alpine 14.2.0)
built with OpenSSL 3.3.3 11 Feb 2025
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx...
Obtain the recommended TLS parameters from the following site:
# generated 2025-05-18, Mozilla Guideline v5.7, nginx 1.25.5 (UNSUPPORTED; end-of-life), OpenSSL 3.3.3, intermediate config, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.5&config=intermediate&openssl=3.3.3&ocsp=false&guideline=5.7
http {
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
ssl_certificate /path/to/signed_cert_plus_intermediates;
ssl_certificate_key /path/to/private_key;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
}
# intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ecdh_curve X25519:prime256v1:secp384r1;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
# see also ssl_session_ticket_key alternative to stateful session cache
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
# curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam
ssl_dhparam "/path/to/dhparam";
# HSTS
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
}
※ For the following parameters, set the values above as-is under "controller.config" in the Helm values.yaml:
- ssl-protocols
- ssl-ecdh-curve
- ssl-ciphers
- ssl-session-timeout
- ssl-session-cache
Create the Secret for ssl-dh-param
[mng ~]$ cd ingress-nginx-work
[mng ingress-nginx-work]$ openssl dhparam -out dhparam.pem 4096
Generating DH parameters, 4096 bit long safe prime
...
[mng ingress-nginx-work]$ echo "dhparam.pem: \"$(base64 -w 0 dhparam.pem)\""
dhparam.pem: "LS0tL......."
...
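The base64 step can be verified with a round-trip before pasting the value into the Secret (a sketch using placeholder content instead of the real dhparam.pem, since generating 4096-bit DH parameters takes a while):

```shell
work=$(mktemp -d)
printf 'placeholder-dh-params\n' > "$work/dhparam.pem"   # stand-in for the real PEM
b64=$(base64 -w 0 "$work/dhparam.pem")
# Decoding the encoded value must reproduce the original file content.
echo "$b64" | base64 -d
```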
[mng ingress-nginx-work]$ vi ingress-nginx-secret-tls-dh-param.yaml
apiVersion: v1
kind: Secret
metadata:
name: dhparam-secret
namespace: ingress-nginx # the namespace is set on the Ingress Controller side
type: Opaque
data:
dhparam.pem: "LS0tLS1......"
[mng ingress-nginx-work]$ kubectl apply -f ingress-nginx-secret-tls-dh-param.yaml
[mng ingress-nginx-work]$ kubectl get secret -n ingress-nginx
NAME TYPE DATA AGE
dhparam-secret Opaque 1 20s
...
※ Set the name of the Secret created above in "controller.config.ssl-dh-param" in the Helm values.yaml.
Create the ConfigMap for the Strict-Transport-Security header
Create a ConfigMap defining the headers to add to responses. Here, the Strict-Transport-Security header is set as one of them.
[mng ingress-nginx-work]$ vi ingress-nginx-configmap-add-headers.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-add-headers
namespace: ingress-nginx # the namespace is set on the Ingress Controller side
data:
Strict-Transport-Security: 'max-age=63072000; includeSubDomains; preload'
[mng ingress-nginx-work]$ kubectl apply -f ingress-nginx-configmap-add-headers.yaml
[mng ingress-nginx-work]$ kubectl get cm -n ingress-nginx
NAME DATA AGE
ingress-add-headers 1 15s
Create the ConfigMap for headers added when the Ingress forwards to backends
[mng ingress-nginx-work]$ vi ingress-nginx-configmap-proxy-set-headers.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-proxy-set-headers
namespace: ingress-nginx # the namespace is set on the Ingress Controller side
data:
X-TLS-SNI: $ssl_server_name;
X-Server-Addr: $server_addr;
X-SSL-Protocol: $ssl_protocol;
X-SSL-Cipher: $ssl_cipher;
[mng ingress-nginx-work]$ kubectl apply -f ingress-nginx-configmap-proxy-set-headers.yaml
[mng ingress-nginx-work]$ kubectl get cm -n ingress-nginx
NAME DATA AGE
...
ingress-proxy-set-headers 4 7s
...
Configure the Helm values.yaml
[mng ~]$ cd ingress-nginx/charts/ingress-nginx
[mng ingress-nginx]$ vi values.yaml
[mng ingress-nginx]$ git diff values.yaml
diff --git a/charts/ingress-nginx/values.yaml b/charts/ingress-nginx/values.yaml
index 405b22212..042b82493 100644
--- a/charts/ingress-nginx/values.yaml
+++ b/charts/ingress-nginx/values.yaml
@@ -53,7 +53,24 @@ controller:
https: 443
# -- Global configuration passed to the ConfigMap consumed by the controller. Values may contain Helm templates.
# Ref.: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
- config: {}
+ config:
+ ssl-protocols: 'TLSv1.2 TLSv1.3'
+ ssl-ecdh-curve: 'X25519:prime256v1:secp384r1'
+ ssl-ciphers: 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305'
+ ssl-session-timeout: '1d'
+ ssl-session-cache: 'shared:MozSSL:10m'
+ ssl-dh-param: ingress-nginx/dhparam-secret
+ add-headers: ingress-nginx/ingress-add-headers
+ proxy-set-headers: ingress-nginx/ingress-proxy-set-headers
+
+ proxy-request-buffering: "off"
+ client-max-body-size: 50M
+ client-body-timeout: 600
+ client-header-timeout: 300
+ send-timeout: 600
+ proxy-send-timeout: 600
+ proxy-read-timeout: 600
+
# -- Annotations to be added to the controller config configuration configmap.
configAnnotations: {}
# -- Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers
※ Note that entries referencing a ConfigMap or Secret must be prefixed with the namespace they live in (e.g. ingress-nginx), or the Ingress Controller cannot resolve them:
- ssl-dh-param
- add-headers
- proxy-set-headers
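As a quick local sanity check, the cipher string set in ssl-ciphers above (which follows the Mozilla "intermediate" recommendation also reflected in the shared:MozSSL session cache name) can be fed to openssl ciphers; each suite the local OpenSSL build recognizes is printed on its own line:

```shell
# Same string as the ssl-ciphers value in values.yaml above.
ciphers='ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305'
# -v prints one recognized suite per line with protocol and key-exchange details;
# unknown names are silently dropped, so compare the output against the list.
openssl ciphers -v "$ciphers"
```

If a suite is missing from the output, the local OpenSSL build does not support it (e.g. the ChaCha20-Poly1305 suites require OpenSSL 1.1.0 or later).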
[mng charts]$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-sgpsk 1/1 Terminating 0 9m50s
ingress-nginx-controller-xj775 1/1 Terminating 0 9m50s
ingress-nginx-controller-zkctn 1/1 Terminating 0 9m50s
# Check the logs (ConfigMap)
[mng charts]$ kubectl logs -n ingress-nginx ingress-nginx-controller-sgpsk |grep ConfigMap
I0518 15:15:47.631935 2 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c19ff325-b031-4c95-9982-2e2f269edcbf", APIVersion:"v1", ResourceVersion:"969616", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
W0518 15:15:49.012828 2 nginx.go:558] Error reading ConfigMap "ingress-proxy-set-headers" from local store: no object matching key "ingress-proxy-set-headers" in local store
W0518 15:15:49.012880 2 nginx.go:568] Error reading ConfigMap "ingress-add-headers" from local store: no object matching key "ingress-add-headers" in local store
# Check the logs (Secret)
[mng charts]$ kubectl logs -n ingress-nginx ingress-nginx-controller-sgpsk |grep Secret
W0518 15:15:49.012891 2 nginx.go:580] Error reading Secret "dhparam-secret" from local store: no object matching key "dhparam-secret" in local store
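The dhparam-secret warning above goes away once a Secret with that name exists in the ingress-nginx namespace. A sketch of generating the DH parameters locally (the dhparam.pem key name follows the ingress-nginx custom-dh-param example; 2048 bits here only to keep generation quick, 4096 is the usual recommendation):

```shell
# Generate DH parameters (slow at 4096 bits; 2048 used here for speed only).
openssl dhparam -out dhparam.pem 2048 2>/dev/null
# Base64-encode on a single line (-A), as required when embedding the value
# directly in a Secret manifest.
openssl base64 -A -in dhparam.pem -out dhparam.b64
```

The Secret itself can then be created with, e.g., kubectl create secret generic dhparam-secret -n ingress-nginx --from-file=dhparam.pem, so that the ingress-nginx/dhparam-secret reference in values.yaml resolves.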
Reinstall with Helm
[mng charts]$ helm uninstall ingress-nginx -n ingress-nginx
[mng charts]$ helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f ./ingress-nginx/values.yaml
Update the Ingress definition
The ssl-prefer-server-ciphers setting is applied via an annotation on the Ingress resource.
[mng ~]$ cd ingress-nginx-work
[mng ingress-nginx-work]$ vi ingress-nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8s-test-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: "false" # ★ added
spec:
ingressClassName: nginx
tls:
- hosts:
- ingress.test.k8s.local
- ingress-harbor.test.k8s.local
secretName: k8s-test-ingress-tls
defaultBackend:
service:
name: dummy-404
port:
number: 443
rules:
- host: ingress.test.k8s.local
http:
paths:
- path: /ping
pathType: Exact
backend:
service:
name: k8s-test-app-ingress
port:
number: 443
- path: /s3-buckets
pathType: Exact
backend:
service:
name: k8s-test-appgw-ingress
port:
number: 443
- path: /json-from-cephfs
pathType: Exact
backend:
service:
name: k8s-test-appgw-ingress
port:
number: 443
- path: /
pathType: Prefix
backend:
service:
name: dummy-404
port:
number: 443
- host: ingress-harbor.test.k8s.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: harbor-ingress-alias
port:
number: 443
[mng ingress-nginx-work]$ kubectl apply -f ingress-nginx-ingress.yaml
Verify the settings took effect
[mng charts]$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-drd4q 1/1 Running 0 11m
ingress-nginx-controller-fxz7k 1/1 Running 0 11m
ingress-nginx-controller-z4p2p 1/1 Running 0 11m
[mng charts]$ kubectl exec -it ingress-nginx-controller-drd4q -n ingress-nginx -- bash
ingress-nginx-controller-drd4q:/etc/nginx$ pwd
/etc/nginx
ingress-nginx-controller-drd4q:/etc/nginx$ ls -l
total 108
-rw-r--r-- 1 www-data www-data 1077 Apr 23 10:54 fastcgi.conf
-rw-r--r-- 1 www-data www-data 1077 Apr 23 10:54 fastcgi.conf.default
-rw-r--r-- 1 www-data www-data 1007 Apr 23 10:54 fastcgi_params
-rw-r--r-- 1 www-data www-data 1007 Apr 23 10:54 fastcgi_params.default
-rw-r--r-- 1 www-data www-data 2837 Apr 23 10:54 koi-utf
-rw-r--r-- 1 www-data www-data 2223 Apr 23 10:54 koi-win
drwxr-xr-x 1 www-data www-data 22 Apr 18 15:26 lua
-rw-r--r-- 1 www-data www-data 5349 Apr 23 10:54 mime.types
-rw-r--r-- 1 www-data www-data 5349 Apr 23 10:54 mime.types.default
drwxr-xr-x 2 www-data www-data 53 Apr 23 10:53 modsecurity
drwxr-xr-x 2 www-data www-data 4096 Apr 23 10:55 modules
-rw-r--r-- 1 www-data www-data 29732 Apr 18 15:26 nginx.conf
-rw-r--r-- 1 www-data www-data 2656 Apr 23 10:54 nginx.conf.default
-rw-r--r-- 1 www-data www-data 2 Apr 29 11:10 opentracing.json
drwxr-xr-x 8 www-data www-data 4096 Apr 23 10:55 owasp-modsecurity-crs
-rw-r--r-- 1 www-data www-data 636 Apr 23 10:54 scgi_params
-rw-r--r-- 1 www-data www-data 636 Apr 23 10:54 scgi_params.default
drwxr-xr-x 2 www-data www-data 24 Apr 29 11:10 template
-rw-r--r-- 1 www-data www-data 664 Apr 23 10:54 uwsgi_params
-rw-r--r-- 1 www-data www-data 664 Apr 23 10:54 uwsgi_params.default
-rw-r--r-- 1 www-data www-data 3610 Apr 23 10:54 win-utf
# Here you can confirm whether the settings were applied (this is the ordinary nginx config file generated by the controller)
ingress-nginx-controller-drd4q:/etc/nginx$ less nginx.conf
[mng ~]$ curl -v --cacert ./tls/private_ca.crt -H 'Authorization: TestApp xxx...' https://ingress.test.k8s.local/json-from-cephfs | jq .
* Connected to ingress.test.k8s.local (10.0.0.66) port 443 (#0)
...
* Server certificate:
* subject: C=JP; O=k8s; OU=test; CN=ingress
...
* subjectAltName: host "ingress.test.k8s.local" matched cert's "ingress.test.k8s.local"
...
< HTTP/2 200
...
< strict-transport-security: max-age=63072000; includeSubDomains; preload ★★★
{
"message": "Hello, world"
}
[mng ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
...
k8s-test-appgw-78c6998fbb-48cq4 1/1 Running 3 16h
k8s-test-appgw-78c6998fbb-d4ckq 1/1 Running 3 16h
k8s-test-appgw-78c6998fbb-mrz9n 1/1 Running 3 16h
...
[mng ~]$ kubectl logs k8s-test-appgw-78c6998fbb-48cq4
...
=== Request Headers ===
2025/05/05 15:54:17 X-Forwarded-Port: 443
2025/05/05 15:54:17 X-Forwarded-Proto: https
2025/05/05 15:54:17 X-Scheme: https
2025/05/05 15:54:17 X-Request-Id: 89ebd4258a1c97c0aab4b2e4e1646d35
2025/05/05 15:54:17 User-Agent: curl/7.76.1
2025/05/05 15:54:17 X-Forwarded-Host: ingress.test.k8s.local
2025/05/05 15:54:17 X-Ssl-Protocol: TLSv1.3; ★★★
2025/05/05 15:54:17 X-Server-Addr: 172.20.194.73; ★★★
2025/05/05 15:54:17 X-Tls-Sni: ingress.test.k8s.local; ★★★
2025/05/05 15:54:17 Accept: */*
2025/05/05 15:54:17 Authorization: TestApp xxx...
2025/05/05 15:54:17 X-Real-Ip: 10.0.0.190
2025/05/05 15:54:17 X-Forwarded-For: 10.0.0.190
2025/05/05 15:54:17 X-Forwarded-Scheme: https
2025/05/05 15:54:17 X-Ssl-Cipher: TLS_AES_256_GCM_SHA384; ★★★
[GIN] 2025/05/05 - 15:54:17 | 200 | 7.657546ms | 10.0.0.190 | GET "/json-from-cephfs"
...