Steps to Build Prometheus and Grafana on Minikube from Scratch

Thanks to Helm and Operators, the resource definitions we have to write for Kubernetes have shrunk to almost nothing (a good thing).
To see whether any concerns surface when assuming production operation, this article deliberately builds Prometheus and Grafana from scratch.
The test environment is minikube. If you use a cloud provider or similar instead, adjust the resource definitions accordingly.

Environment

Hardware information for the MacBook Pro used:

$ system_profiler SPHardwareDataType
Model Name: MacBook Pro
Model Identifier: MacBookPro14,3
Processor Name: Intel Core i7
Processor Speed: 2.9 GHz
Number of Processors: 1
Total Number of Cores: 4
Memory: 16 GB

Versions of minikube and kubectl:

$ minikube version
minikube version: v1.4.0

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T07:10:00Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

The following images are used on minikube.

Image                              Version
prom/prometheus                    v2.11.1
grafana/grafana                    6.2.5
busybox                            latest
prom/node-exporter                 v0.18.1
quay.io/coreos/kube-state-metrics  v1.8.0

Configuring minikube

Ingress is used as the load balancer.
On minikube, the ingress addon has to be enabled first.

$ minikube addons enable ingress

Prometheus and Grafana consume a fair amount of resources, so increase the VM resources via minikube config before starting.

$ minikube config set memory 8192
$ minikube config set cpus 4
$ minikube config set disk-size 40g

Starting minikube

Start minikube with flags that enable webhook token authentication and authorization on the kubelet, so that Prometheus can scrape the kubelet and cAdvisor metrics endpoints.

$ minikube start --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook

Adding hosts entries

Append the FQDNs used in this article to /etc/hosts (writing to /etc/hosts requires root):

$ sudo sh -c "echo $(minikube ip) k8s.3tier.webapp alertmanager.minikube prometheus.minikube grafana.minikube >> /etc/hosts"

Creating the namespace

Create a new namespace, monitoring.

monitoring-namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
  labels:
    name: monitoring

Installing Prometheus

The manifests are based on the sample in the kubernetes repository (cluster/addons/prometheus).
Define a ServiceAccount, ClusterRole, and ClusterRoleBinding.

prometheus-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - nodes/metrics
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring

Create a PersistentVolume.

prometheus-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: prometheus-pv
  labels:
    k8s-3tier-webapp: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: prometheus
  hostPath:
    path: /data/pv002

Define the scrape configuration.
It is created as a ConfigMap and mounted into the container at startup.

prometheus-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  prometheus.yml: |
    scrape_configs:
    - job_name: prometheus
      static_configs:
      - targets:
        - localhost:9090

    - job_name: kubernetes-apiservers
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: endpoints
        api_server: https://192.168.99.100:8443
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: keep
        regex: default;kubernetes;https
        source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_service_name
        - __meta_kubernetes_endpoint_port_name
      scheme: https

    - job_name: kubernetes-nodes-kubelet
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
        api_server: https://192.168.99.100:8443
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      scheme: https

    - job_name: kubernetes-nodes-cadvisor
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
        api_server: https://192.168.99.100:8443
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __metrics_path__
        replacement: /metrics/cadvisor
      scheme: https

    - job_name: kubernetes-service-endpoints
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scrape
      - action: replace
        regex: (https?)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scheme
        target_label: __scheme__
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_service_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_service_name
        target_label: kubernetes_name

    - job_name: kubernetes-services
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module:
        - http_2xx
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_probe
      - source_labels:
        - __address__
        target_label: __param_target
      - replacement: blackbox
        target_label: __address__
      - source_labels:
        - __param_target
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: kubernetes_name

    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: kubernetes_pod_name
    alerting:
      alertmanagers:
      - kubernetes_sd_configs:
        - role: pod
          api_server: https://192.168.99.100:8443
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          regex: kube-system
          action: keep
        - source_labels: [__meta_kubernetes_pod_label_k8s_app]
          regex: alertmanager
          action: keep
        - source_labels: [__meta_kubernetes_pod_container_port_number]
          regex:
          action: drop
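
The `__address__` rewrite used in the kubernetes-service-endpoints and kubernetes-pods jobs deserves a note: Prometheus joins the source labels with `;`, so the regex `([^:]+)(?::\d+)?;(\d+)` sees `host[:port];annotation-port`, and the `$1:$2` replacement swaps in the port from the `prometheus.io/port` annotation. The same rewrite can be reproduced locally with sed (sed -E has no non-capturing groups, so the annotation port becomes group 3 here):

```shell
# host:port from service discovery, joined by ';' with the
# prometheus.io/port annotation value (9100 in this example).
echo '10.244.0.5:8080;9100' | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)/\1:\3/'
# -> 10.244.0.5:9100

# The discovered port is optional, so an address without one also works.
echo '10.244.0.5;9100' | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)/\1:\3/'
# -> 10.244.0.5:9100
```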

Define the StatefulSet.
The only changes from the sample are the namespace, labels, and selector.

prometheus-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v2.11.1
spec:
  serviceName: "prometheus"
  replicas: 1
  podManagementPolicy: "Parallel"
  updateStrategy:
   type: "RollingUpdate"
  selector:
    matchLabels:
      k8s-3tier-webapp: prometheus
  template:
    metadata:
      labels:
        k8s-3tier-webapp: prometheus
    spec:
      serviceAccountName: prometheus
      initContainers:
      - name: "init-chown-data"
        image: "busybox:latest"
        imagePullPolicy: "IfNotPresent"
        command: ["chown", "-R", "65534:65534", "/data"]
        volumeMounts:
        - name: prometheus-persistent-storage
          mountPath: /data
          subPath: ""
      containers:
        - name: prometheus-server
          image: "prom/prometheus:v2.11.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --config.file=/etc/config/prometheus.yml
            - --storage.tsdb.path=/data
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
          ports:
            - containerPort: 9090
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30

          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: prometheus-persistent-storage
              mountPath: /data
              subPath: ""
      terminationGracePeriodSeconds: 300
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-config
  volumeClaimTemplates:
  - metadata:
      name: prometheus-persistent-storage
    spec:
      storageClassName: prometheus
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "16Gi"
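
One production concern with the args above: Prometheus keeps 15 days of data by default, while the volumeClaimTemplates only request 16Gi, so a busy cluster can fill the volume. A retention cap can be added to the container args; this is only a sketch with illustrative values (flags as of Prometheus v2.11; older versions used --storage.tsdb.retention, and the size-based flag is still marked experimental):

```yaml
# Sketch: bound disk usage so the 16Gi volume does not fill up.
args:
  - --config.file=/etc/config/prometheus.yml
  - --storage.tsdb.path=/data
  - --storage.tsdb.retention.time=15d    # default is 15d; shorten as needed
  - --storage.tsdb.retention.size=14GB   # experimental; leave headroom below 16Gi
```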

To allow web access, define a Service and an Ingress.

prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
spec:
  type: ClusterIP
  selector:
    k8s-3tier-webapp: prometheus
  ports:
  - protocol: TCP
    port: 9090
prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
spec:
  rules:
  - host: prometheus.minikube
    http:
      paths:
      - path:
        backend:
          serviceName: prometheus
          servicePort: 9090

Apply the manifests above to minikube with kubectl apply.

Installing node-exporter

As the name suggests, this exporter collects node metrics.
https://github.com/prometheus/node_exporter

To expose the node's /proc and /sys, create PVs and PVCs.

node-exporter-pv-proc.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: node-exporter-pv-proc
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
    name: node-exporter-hostpath-proc
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: node-exporter-proc
  hostPath:
    path: /data/pv003
node-exporter-pvc-proc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: node-exporter-pvc-proc
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  storageClassName: node-exporter-proc
  selector:
    matchLabels:
      name: node-exporter-hostpath-proc
node-exporter-pv-sys.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: node-exporter-pv-sys
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
    name: node-exporter-hostpath-sys
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: node-exporter-sys
  hostPath:
    path: /data/pv004
node-exporter-pvc-sys.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: node-exporter-pvc-sys
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  storageClassName: node-exporter-sys
  selector:
    matchLabels:
      name: node-exporter-hostpath-sys

Again based on the sample, define a DaemonSet and a Service.

node-exporter-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-3tier-webapp: node-exporter
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        k8s-3tier-webapp: node-exporter
    spec:
      containers:
        - name: prometheus-node-exporter
          image: "prom/node-exporter:v0.18.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          ports:
            - name: metrics
              containerPort: 9100
              hostPort: 9100
          volumeMounts:
            - name: node-exporter-persistent-storage-proc
              mountPath: /host/proc
              readOnly:  true
            - name: node-exporter-persistent-storage-sys
              mountPath: /host/sys
              readOnly: true
          resources:
            limits:
              memory: 50Mi
            requests:
              cpu: 100m
              memory: 50Mi
      hostNetwork: true
      hostPID: true
      volumes:
      - name: node-exporter-persistent-storage-proc
        persistentVolumeClaim:
          claimName: node-exporter-pvc-proc
      - name: node-exporter-persistent-storage-sys
        persistentVolumeClaim:
          claimName: node-exporter-pvc-sys
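
Note that the PV/PVC pair above is backed by the hostPath directories /data/pv003 and /data/pv004 rather than the node's actual /proc and /sys; the more common pattern mounts those paths into the DaemonSet directly as hostPath volumes, with no PV/PVC objects at all. A minimal sketch of that variant (replacing the volumes section above, keeping the same volume names so the volumeMounts still match):

```yaml
# Alternative volumes section: mount the host's /proc and /sys directly.
volumes:
- name: node-exporter-persistent-storage-proc
  hostPath:
    path: /proc
- name: node-exporter-persistent-storage-sys
  hostPath:
    path: /sys
```
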
node-exporter-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-3tier-webapp: node-exporter
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 9100
      protocol: TCP
      targetPort: 9100
  selector:
    k8s-3tier-webapp: node-exporter

Apply the manifests above to minikube with kubectl apply.

Installing kube-state-metrics

Next, collect Kubernetes object metrics with kube-state-metrics.
As before, the manifests are based on the upstream sample.
Define a ServiceAccount, ClusterRole, and ClusterRoleBinding.

kube-state-metrics-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs:
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  - ingresses
  verbs:
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - list
  - watch
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - list
  - watch
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  - validatingwebhookconfigurations
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring

Define a Deployment and a Service.

kube-state-metrics-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-3tier-webapp: kube-state-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-3tier-webapp: kube-state-metrics
  template:
    metadata:
      labels:
        k8s-3tier-webapp: kube-state-metrics
    spec:
      containers:
      - image: quay.io/coreos/kube-state-metrics:v1.8.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
        name: kube-state-metrics
        ports:
        - containerPort: 8080
          name: http-metrics
        - containerPort: 8081
          name: telemetry
        readinessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 5
          timeoutSeconds: 5
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: kube-state-metrics
kube-state-metrics-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitoring
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  clusterIP: None
  ports:
    - name: http-metrics
      port: 8080
      targetPort: http-metrics
    - name: telemetry
      port: 8081
      targetPort: telemetry
  selector:
    k8s-3tier-webapp: kube-state-metrics

Apply the manifests above to minikube with kubectl apply.

Verifying Prometheus

Check that the targets are being scraped.
Also confirm that node-exporter and kube-state-metrics are among them: port 9100 is node-exporter, and ports 8080 and 8081 are kube-state-metrics.

http://prometheus.minikube/targets
(screenshot: the Prometheus Targets page)

Installing Grafana

Create a PV and a PVC.

grafana-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
    name: grafana-hostpath
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: grafana
  hostPath:
    path: /data/pv005
grafana-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-pvc
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
  storageClassName: grafana
  selector:
    matchLabels:
      name: grafana-hostpath

Since Grafana also serves a web UI, define a Deployment, Service, and Ingress.

grafana-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:6.2.5
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
        securityContext:
          runAsUser: 0
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-pvc
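
runAsUser: 0 in the Deployment above is a shortcut to make the hostPath-backed /var/lib/grafana writable. A less privileged alternative keeps the image's own user and lets Kubernetes adjust volume ownership via fsGroup; a sketch, assuming 472, the uid/gid the grafana image has used since 5.1:

```yaml
# Pod-level securityContext instead of running the container as root.
securityContext:
  runAsUser: 472
  fsGroup: 472
```
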
grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  type: ClusterIP
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
grafana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  rules:
  - host: grafana.minikube
    http:
      paths:
      - path:
        backend:
          serviceName: grafana
          servicePort: 3000
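
As an aside, the Prometheus data source that is added through the UI below can also be provisioned from a file: Grafana 6 reads data source definitions from /etc/grafana/provisioning/datasources at startup. A minimal sketch, assuming such a file is mounted there (for example from a ConfigMap, which is not shown):

```yaml
# datasource.yaml (Grafana provisioning file, not a Kubernetes manifest)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.monitoring.svc:9090
    isDefault: true
```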

Apply the manifests above to minikube with kubectl apply.

Verifying Grafana

Access Grafana from a browser, log in with the default username and password, and the Home screen is displayed.
http://grafana.minikube
(screenshot: the Grafana Home screen)

Connecting Prometheus and Grafana

Select "Add data source".
Under "Choose data source type", select "Prometheus" and fill in the following screen:

URL : http://prometheus.minikube
(screenshot: the Prometheus data source settings)

Click "Save & Test" and confirm that the result says the data source is working.

Using Grafana dashboards

Sample dashboards are published at
https://grafana.com/grafana/dashboards
This time we import the following dashboard:
https://grafana.com/grafana/dashboards/8685

Select "Import" from the + button on the left.
(screenshot: the Import menu)

Enter "8685" in the ID field, change the uid if needed, set the data source to "Prometheus", and press "Import".
(screenshot: the import settings screen)

The dashboard is displayed! (some panels show N/A)
(screenshot: the imported dashboard)

Build results

The resources created in the monitoring namespace:

$ kubectl -n monitoring get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/grafana-78d5dfd56f-h7r5n              1/1     Running   0          25h
pod/kube-state-metrics-679945478f-nqxv2   1/1     Running   0          26h
pod/node-exporter-6wrhd                   1/1     Running   0          24h
pod/prometheus-0                          1/1     Running   0          26h

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/grafana              ClusterIP   10.96.64.84     <none>        3000/TCP            25h
service/kube-state-metrics   ClusterIP   None            <none>        8080/TCP,8081/TCP   26h
service/node-exporter        ClusterIP   None            <none>        9100/TCP            24h
service/prometheus           ClusterIP   10.106.206.57   <none>        9090/TCP            26h

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-exporter   1         1         1       1            1           <none>          24h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana              1/1     1            1           25h
deployment.apps/kube-state-metrics   1/1     1            1           26h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-78d5dfd56f              1         1         1       25h
replicaset.apps/kube-state-metrics-679945478f   1         1         1       26h

NAME                          READY   AGE
statefulset.apps/prometheus   1/1     26h

$ kubectl -n monitoring get pv
NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                   STORAGECLASS         REASON   AGE
grafana-pv              20Gi       RWX            Retain           Bound    monitoring/grafana-pvc                                  grafana                       25h
node-exporter-pv-proc   20Gi       RWO            Delete           Bound    monitoring/node-exporter-pvc-proc                       node-exporter-proc            25h
node-exporter-pv-sys    20Gi       RWO            Delete           Bound    monitoring/node-exporter-pvc-sys                        node-exporter-sys             25h
prometheus-pv           20Gi       RWO            Retain           Bound    monitoring/prometheus-persistent-storage-prometheus-0   prometheus                    26h

$ kubectl -n monitoring get pvc
NAME                                         STATUS   VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS         AGE
grafana-pvc                                  Bound    grafana-pv              20Gi       RWX            grafana              25h
node-exporter-pvc-proc                       Bound    node-exporter-pv-proc   20Gi       RWO            node-exporter-proc   25h
node-exporter-pvc-sys                        Bound    node-exporter-pv-sys    20Gi       RWO            node-exporter-sys    25h
prometheus-persistent-storage-prometheus-0   Bound    prometheus-pv           20Gi       RWO            prometheus           26h

$ kubectl -n monitoring get ing
NAME         HOSTS                 ADDRESS     PORTS   AGE
grafana      grafana.minikube      10.0.2.15   80      25h
prometheus   prometheus.minikube   10.0.2.15   80      26h

The manifests are stored in the following repository:
https://github.com/yurake/k8s-3tier-webapp/tree/master/monitoring

References

https://github.com/prometheus/prometheus
https://grafana.com/grafana/dashboards
https://github.com/prometheus/node_exporter
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus
