Continued from "Kubernetes on Ubuntu, Part 11 (Hashicorp Vault)".
To take a closer look at sidecar containers, let's try out Istio.
First, create the istio-system namespace.
@masternode1:~/kubernetes-examples/service-mesh/istio$ kubectl create namespace istio-system
namespace/istio-system created
Next, add the Istio Helm repository and install the Istio CRDs via the istio/base chart.
@masternode1:~/kubernetes-examples/service-mesh/istio$ helm repo add istio https://istio-release.storage.googleapis.com/charts
"istio" has been added to your repositories
@masternode1:~/kubernetes-examples/service-mesh/istio$ helm install istio-base istio/base -n istio-system
NAME: istio-base
LAST DEPLOYED: Sat Jul 5 21:48:17 2025
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Istio base successfully installed!
To learn more about the release, try:
$ helm status istio-base -n istio-system
$ helm get all istio-base -n istio-system
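As an optional sanity check (not part of the run above), the CRDs registered by the base chart can be listed:
$ kubectl get crds | grep 'istio.io'
This should show entries such as virtualservices.networking.istio.io and destinationrules.networking.istio.io.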
@masternode1:~/kubernetes-examples/service-mesh/istio$ helm install istiod istio/istiod -n istio-system --wait
NAME: istiod
LAST DEPLOYED: Sat Jul 5 21:54:24 2025
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
"istiod" successfully installed!
To learn more about the release, try:
$ helm status istiod -n istio-system
$ helm get all istiod -n istio-system
Next steps:
* Deploy a Gateway: https://istio.io/latest/docs/setup/additional-setup/gateway/
* Try out our tasks to get started on common configurations:
* https://istio.io/latest/docs/tasks/traffic-management
* https://istio.io/latest/docs/tasks/security/
* https://istio.io/latest/docs/tasks/policy-enforcement/
* Review the list of actively supported releases, CVE publications and our hardening guide:
* https://istio.io/latest/docs/releases/supported-releases/
* https://istio.io/latest/news/security/
* https://istio.io/latest/docs/ops/best-practices/security/
For further documentation see https://istio.io website
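The notes above suggest deploying a Gateway as a next step. That is outside the scope of this post, but for reference the same Helm repo also ships an istio/gateway chart, which could be installed along these lines (the release and namespace names here are just examples):
$ helm install istio-ingress istio/gateway -n istio-ingress --create-namespace --wait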
Check that istiod is up and running.
@masternode1:~/kubernetes-examples/service-mesh/istio$ kubectl get all -n istio-system
NAME                          READY   STATUS    RESTARTS   AGE
pod/istiod-674559ccfd-vftqx   1/1     Running   0          68s

NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                 AGE
service/istiod   ClusterIP   10.107.14.98   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   68s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istiod   1/1     1            1           68s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/istiod-674559ccfd   1         1         1       68s

NAME                                         REFERENCE           TARGETS              MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istiod   Deployment/istiod   cpu: <unknown>/80%   1         5         1          68s
@masternode1:~/kubernetes-examples/service-mesh/istio$ kubectl label namespace default istio-injection=enabled --overwrite
namespace/default labeled
@masternode1:~/kubernetes-examples/service-mesh/istio$ kubectl get namespace -L istio-injection
NAME              STATUS   AGE     ISTIO-INJECTION
default           Active   14d     enabled
dev               Active   7d17h
istio-system      Active   18m
kube-node-lease   Active   14d
kube-public       Active   14d
kube-system       Active   14d
vault             Active   4d
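Automatic injection is performed by istiod's mutating admission webhook, which rewrites Pod specs created in namespaces carrying the istio-injection=enabled label. As an extra check (not part of the original run), you can confirm the webhook is registered; the default installation typically names it istio-sidecar-injector:
$ kubectl get mutatingwebhookconfigurations | grep -i istio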
Now let's create a Deployment and see what happens.
@masternode1:~/kubernetes-examples$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
deployment.apps/nginx-deployment created
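As an aside, injection can also be opted out of per workload even in a labeled namespace; a minimal sketch, using the documented sidecar.istio.io/inject label on the pod template, would look like this (not applied here, since we do want the sidecar):
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: "false"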
Check whether the sidecar was injected.
@masternode1:~/kubernetes-examples$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7cc47cc8c9-kj6f5   2/2     Running   0          115s
nginx-deployment-7cc47cc8c9-qjd5t   2/2     Running   0          115s
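Both pods report 2/2 containers ready. Before diving into the full describe output below, a quick way to see which containers make up a pod is a jsonpath query (pod name taken from this run); it should print nginxdeployment and istio-proxy:
$ kubectl get pod nginx-deployment-7cc47cc8c9-qjd5t -o jsonpath='{.spec.containers[*].name}'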
@masternode1:~/kubernetes-examples$ kubectl describe pod nginx-deployment-7cc47cc8c9-qjd5t
Name: nginx-deployment-7cc47cc8c9-qjd5t
Namespace: default
Priority: 0
Service Account: default
Node: workernode1/10.0.2.7
Start Time: Sat, 05 Jul 2025 22:05:50 +0900
Labels: app=nginxdeployment
pod-template-hash=7cc47cc8c9
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=nginxdeployment
service.istio.io/canonical-revision=latest
Annotations: istio.io/rev: default
kubectl.kubernetes.io/default-container: nginxdeployment
kubectl.kubernetes.io/default-logs-container: nginxdeployment
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
sidecar.istio.io/status:
{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-env...
Status: Running
IP: 10.32.0.7
IPs:
IP: 10.32.0.7
Controlled By: ReplicaSet/nginx-deployment-7cc47cc8c9
Init Containers:
istio-init:
Container ID: containerd://039a4169ba9e5914f3115202d0a8d86926302b6c1a07fe985881635a42d6c3db
Image: docker.io/istio/proxyv2:1.26.2
Image ID: docker.io/istio/proxyv2@sha256:921c5ce2c5122facd9a25a7f974aaa4d3679cee38cb3f809e10618384fc3ce7f
Port: <none>
Host Port: <none>
Args:
istio-iptables
-p
15001
-z
15006
-u
1337
-m
REDIRECT
-i
*
-x
-b
*
-d
15090,15021,15020
--log_output_level=default:info
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 05 Jul 2025 22:06:02 +0900
Finished: Sat, 05 Jul 2025 22:06:02 +0900
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zx724 (ro)
Containers:
nginxdeployment:
Container ID: containerd://c3116ad65cd2e057fb6bb6f33c89004d1e4ef424575bcc05788696a50f6dd71a
Image: nginx:latest
Image ID: docker.io/library/nginx@sha256:93230cd54060f497430c7a120e2347894846a81b6a5dd2110f7362c5423b4abc
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 05 Jul 2025 22:06:05 +0900
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zx724 (ro)
istio-proxy:
Container ID: containerd://439bc22c2ccb363d3ec8c7ed55f3af636a9ff0aef71adda2447c8414caa48f0d
Image: docker.io/istio/proxyv2:1.26.2
Image ID: docker.io/istio/proxyv2@sha256:921c5ce2c5122facd9a25a7f974aaa4d3679cee38cb3f809e10618384fc3ce7f
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--log_output_level=default:info
State: Running
Started: Sat, 05 Jul 2025 22:06:06 +0900
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Readiness: http-get http://:15021/healthz/ready delay=0s timeout=3s period=15s #success=1 #failure=4
Startup: http-get http://:15021/healthz/ready delay=0s timeout=3s period=1s #success=1 #failure=600
Environment:
PILOT_CERT_PROVIDER: istiod
CA_ADDR: istiod.istio-system.svc:15012
POD_NAME: nginx-deployment-7cc47cc8c9-qjd5t (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
HOST_IP: (v1:status.hostIP)
ISTIO_CPU_LIMIT: 2 (limits.cpu)
PROXY_CONFIG: {}
ISTIO_META_POD_PORTS: [
{"containerPort":80,"protocol":"TCP"}
]
ISTIO_META_APP_CONTAINERS: nginxdeployment
GOMEMLIMIT: 1073741824 (limits.memory)
GOMAXPROCS: 2 (limits.cpu)
ISTIO_META_CLUSTER_ID: Kubernetes
ISTIO_META_NODE_NAME: (v1:spec.nodeName)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
ISTIO_META_WORKLOAD_NAME: nginx-deployment
ISTIO_META_OWNER: kubernetes://apis/apps/v1/namespaces/default/deployments/nginx-deployment
ISTIO_META_MESH_ID: cluster.local
TRUST_DOMAIN: cluster.local
Mounts:
/etc/istio/pod from istio-podinfo (rw)
/etc/istio/proxy from istio-envoy (rw)
/var/lib/istio/data from istio-data (rw)
/var/run/secrets/credential-uds from credential-socket (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zx724 (ro)
/var/run/secrets/tokens from istio-token (rw)
/var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
/var/run/secrets/workload-spiffe-uds from workload-socket (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
workload-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
credential-socket:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
workload-certs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
metadata.annotations -> annotations
istio-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 43200
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
kube-api-access-zx724:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m2s default-scheduler Successfully assigned default/nginx-deployment-7cc47cc8c9-qjd5t to workernode1
Normal Pulling 4m2s kubelet Pulling image "docker.io/istio/proxyv2:1.26.2"
Normal Pulled 3m50s kubelet Successfully pulled image "docker.io/istio/proxyv2:1.26.2" in 1.697s (11.597s including waiting). Image size: 92614371 bytes.
Normal Created 3m50s kubelet Created container: istio-init
Normal Started 3m50s kubelet Started container istio-init
Normal Pulling 3m49s kubelet Pulling image "nginx:latest"
Normal Pulled 3m47s kubelet Successfully pulled image "nginx:latest" in 1.615s (2.486s including waiting). Image size: 72225394 bytes.
Normal Created 3m47s kubelet Created container: nginxdeployment
Normal Started 3m47s kubelet Started container nginxdeployment
Normal Pulled 3m47s kubelet Container image "docker.io/istio/proxyv2:1.26.2" already present on machine
Normal Created 3m46s kubelet Created container: istio-proxy
Normal Started 3m46s kubelet Started container istio-proxy
Warning Unhealthy 3m46s kubelet Startup probe failed: Get "http://10.32.0.7:15021/healthz/ready": dial tcp 10.32.0.7:15021: connect: connection refused
You can see that the istio-proxy sidecar (along with the istio-init init container) has been injected into the Pod. The single startup-probe warning in the events appears to be just the probe firing before Envoy was listening on port 15021; it cleared once the proxy came up, as the Ready: True status shows.
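As one more optional check (not part of the original run), the injected proxy's own logs can be inspected directly; Envoy startup and the connection to istiod show up there:
$ kubectl logs nginx-deployment-7cc47cc8c9-qjd5t -c istio-proxy --tail=20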
To be continued.