
Monitoring room temperature and humidity with Prometheus + Grafana + a custom Exporter on Kubernetes

Posted at 2019-10-11

As the title says, I deployed Prometheus, Grafana, and a custom Exporter on Kubernetes to monitor the temperature and humidity of my home (three rooms). This post is a memo of that setup.
I will skip explanations of Prometheus, Grafana, and Kubernetes themselves.

Prerequisites

Configuration

  • RX200S6 - Ubuntu Server 18.04 (master) on KVM
    • CPU: 4 cores
    • RAM: 2GB
    • Disk: 50GB
  • RX200S6 - Ubuntu Server 18.04 (worker) on KVM × 3
    • CPU: 4 cores
    • RAM: 4GB
    • Disk: 50GB
  • Raspberry Pi 3 Model B - Raspbian 10.1 (worker) × 3
    • CPU: 4 cores
    • RAM: 1GB
    • Disk: 30GB

As also noted in the prerequisites, this assumes you have already completed a Kubernetes cluster setup with kubeadm.
Flannel is used as the CNI plugin.

~$ kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
kubernetes-master     Ready    master   20d   v1.15.3
kubernetes-worker-1   Ready    worker   20d   v1.15.3
kubernetes-worker-2   Ready    worker   20d   v1.15.3
kubernetes-worker-3   Ready    worker   20d   v1.15.3
kubernetes-worker-4   Ready    worker   20d   v1.15.3
kubernetes-worker-5   Ready    worker   20d   v1.15.3
kubernetes-worker-6   Ready    worker   20d   v1.15.3

Master

~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
~$ kubelet --version
Kubernetes v1.15.3
~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:41:55Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
~$ docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:57:28 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:19:38 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Worker

kubernetes-worker-[1~3]

~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
~$ kubelet --version
Kubernetes v1.15.3
~$ docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:57:28 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:19:38 2019
  OS/Arch:          linux/amd64
  Experimental:     false

kubernetes-worker-[4~6]

~ $ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm"}
~ $ kubelet --version
Kubernetes v1.15.3
~ $ docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df
 Built:             Wed Sep  4 17:02:31 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:21:03 2019
  OS/Arch:          linux/arm
  Experimental:     false

Creating a Namespace

Create a Namespace in which to deploy the monitoring resources.

namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    name: monitoring
~ $ kubectl create -f namespace.yml

ClusterRole, ClusterRoleBinding

Here we create a ClusterRole and a ClusterRoleBinding to grant the default ServiceAccount
of the monitoring Namespace access (get, list, watch) to cluster resources.

cluster-role.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
~ $ kubectl create -f cluster-role.yml

Deploying Prometheus

prometheus-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      volumes:
        - name: prom-config
          configMap:
            name: prometheus-config
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
          - name: prom-config
            mountPath: /etc/prometheus
        ports:
        - containerPort: 9090

Kubernetes service discovery for Prometheus is configured via a ConfigMap.

prometheus-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  labels:
    name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |-
    # my global config
    global:
      scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

    # Alertmanager configuration
    alerting:
      alertmanagers:
      - static_configs:
        - targets:
          # - alertmanager:9093

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'

        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.

        static_configs:
        - targets: ['localhost:9090']

      - job_name: kubernetes-apiservers
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - action: keep
          regex: default;kubernetes;https
          source_labels:
          - __meta_kubernetes_namespace
          - __meta_kubernetes_service_name
          - __meta_kubernetes_endpoint_port_name
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      
      - job_name: kubernetes-service-endpoints
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - action: keep
          regex: true
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_scrape
        - action: replace
          regex: (https?)
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_scheme
          target_label: __scheme__
        - action: replace
          regex: (.+)
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_path
          target_label: __metrics_path__
        - action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          source_labels:
          - __address__
          - __meta_kubernetes_service_annotation_prometheus_io_port
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - action: replace
          source_labels:
          - __meta_kubernetes_namespace
          target_label: kubernetes_namespace
        - action: replace
          source_labels:
          - __meta_kubernetes_service_name
          target_label: kubernetes_name
      
      - job_name: kubernetes-services
        kubernetes_sd_configs:
        - role: service
        metrics_path: /probe
        params:
          module:
          - http_2xx
        relabel_configs:
        - action: keep
          regex: true
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_probe
        - source_labels:
          - __address__
          target_label: __param_target
        - replacement: blackbox
          target_label: __address__
        - source_labels:
          - __param_target
          target_label: instance
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels:
          - __meta_kubernetes_namespace
          target_label: kubernetes_namespace
        - source_labels:
          - __meta_kubernetes_service_name
          target_label: kubernetes_name
      
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - action: keep
          regex: true
          source_labels:
          - __meta_kubernetes_pod_annotation_prometheus_io_scrape
        - action: replace
          regex: (.+)
          source_labels:
          - __meta_kubernetes_pod_annotation_prometheus_io_path
          target_label: __metrics_path__
        - action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          source_labels:
          - __address__
          - __meta_kubernetes_pod_annotation_prometheus_io_port
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - action: replace
          source_labels:
          - __meta_kubernetes_namespace
          target_label: kubernetes_namespace
        - action: replace
          source_labels:
          - __meta_kubernetes_pod_name
          target_label: kubernetes_pod_name
        - action: replace
          source_labels:
          - __meta_kubernetes_pod_node_name
          target_label: kubernetes_pod_node_name
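The `__address__` rewrite above is the trickiest relabel rule: Prometheus joins the `source_labels` values with `;`, matches the fully anchored regex against the result, and substitutes `$1:$2`. A quick way to sanity-check it outside the cluster (a standalone sketch; the Pod address below is made up, and the port is the `prometheus.io/port: '9430'` annotation used later in this article):

```python
import re

# Prometheus joins source_labels with ';' before matching, and the
# regex is fully anchored. The replacement $1:$2 becomes \1:\2 here.
pattern = re.compile(r'([^:]+)(?::\d+)?;(\d+)')

# e.g. __address__ = '10.244.1.5:80', prometheus.io/port = '9430'
joined = '10.244.1.5:80;9430'
print(pattern.fullmatch(joined).expand(r'\1:\2'))  # 10.244.1.5:9430
```

The optional non-capturing group `(?::\d+)?` is what strips any port already present in `__address__` so the annotation's port wins.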
prometheus-service.yml
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30090
~ $ kubectl create -f prometheus-configmap.yml
~ $ kubectl create -f prometheus-deployment.yml
~ $ kubectl create -f prometheus-service.yml

Deploying Grafana

grafana-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana-deployment
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
grafana-service.yml
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: monitoring
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30100
~ $ kubectl create -f grafana-deployment.yml
~ $ kubectl create -f grafana-service.yml

Since I also wanted to monitor the cluster itself, I referred to the following article:
Prometheus+GrafanaでKubernetesクラスターを監視する ~Binaryファイルから起動+yamlファイルから構築

Deploying the custom Exporter

Now we deploy the custom Exporter (am2320_exporter), which exposes temperature and humidity readings from an AM2320 sensor.
We want one Pod on each node that has an AM2320 sensor attached, and kubernetes-worker-[1~3] should be excluded from scheduling,
so nodeAffinity is used to ensure Pods are created only on kubernetes-worker-[4~6].

tmp-and-hum-daemonset.yml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: am2320-exporter
  namespace: monitoring
  labels:
    name: am2320-exporter
spec:
  template:
    metadata:
      labels:
        app: am2320-exporter
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9430'
        prometheus.io/path: /metrics
    spec:
      containers:
      - name: am2320-exporter
        image: yudaishimanaka/am2320-exporter-armv7l:latest
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        ports:
        - containerPort: 9430
      hostNetwork: true
      hostPID: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - kubernetes-worker-4
                  - kubernetes-worker-5
                  - kubernetes-worker-6
~ $ kubectl create -f tmp-and-hum-daemonset.yml
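For reference, the exposition format an exporter has to serve on `/metrics` is plain text and very simple. The following is a minimal stand-in sketch, not the actual am2320_exporter (the real one reads the AM2320 sensor over I2C): it serves the same two metric names on port 9430 using only the Python standard library, with hard-coded sample readings.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(temperature, humidity):
    """Render the two gauges in Prometheus text exposition format."""
    return (
        "# HELP am2320_temperature_gauge Temperature in Celsius\n"
        "# TYPE am2320_temperature_gauge gauge\n"
        f"am2320_temperature_gauge {temperature}\n"
        "# HELP am2320_humidity_gauge Relative humidity in percent\n"
        "# TYPE am2320_humidity_gauge gauge\n"
        f"am2320_humidity_gauge {humidity}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        # A real exporter would read the AM2320 over I2C here
        # instead of using fixed sample values.
        body = render_metrics(23.5, 48.2).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 9430), MetricsHandler).serve_forever()
```

Anything that answers GET /metrics in this format is scrapeable by Prometheus, which is why the annotation-based service discovery above needs nothing more than the port and path.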

Configuring the Grafana dashboard

Before configuring the dashboard, confirm that Prometheus service discovery is working correctly.
Open http://<master-ip>:30090/service-discovery; if you see something like the following, service discovery is working. If nothing is displayed, review your ConfigMap.
Note: what is displayed depends on the prometheus.yml settings in the ConfigMap and on how many resources are deployed in the cluster.
Selection_043.png
On the Targets page, am2320_exporter shows up as well.
Selection_044.png
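The same check can be scripted against Prometheus's HTTP API (`GET /api/v1/targets`). A small sketch that summarizes the response; the helper only parses JSON, and `<master-ip>` is a placeholder you would substitute:

```python
import json
import urllib.request

def target_health(body):
    """Map each active target's scrape URL to its health ('up'/'down')."""
    data = json.loads(body)
    return {t["scrapeUrl"]: t["health"] for t in data["data"]["activeTargets"]}

if __name__ == "__main__":
    url = "http://<master-ip>:30090/api/v1/targets"  # substitute your master IP
    with urllib.request.urlopen(url) as resp:
        print(target_health(resp.read()))
```

If the am2320_exporter Pods are being scraped, their :9430/metrics URLs should appear here with health "up".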

Log in to the Grafana dashboard at:
http://<master-ip>:30100/login
The default username and password are admin / admin.

From the sidebar, select Configuration -> Data Sources -> Add data source -> Prometheus, enter http://<master-ip>:30090 as the HTTP URL, and save with Save & Test.
Selection_047.png
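If you prefer not to click through the UI, Grafana also accepts the same data source via its HTTP API (`POST /api/datasources`). A sketch of building that request, assuming the default admin/admin credentials and the NodePorts above; the request is only constructed here, not sent:

```python
import base64
import json
import urllib.request

payload = {
    "name": "Prometheus",
    "type": "prometheus",
    "url": "http://<master-ip>:30090",  # substitute your master IP
    "access": "proxy",
}

def build_request(grafana_url, payload, user="admin", password="admin"):
    """Build the POST /api/datasources request with basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{grafana_url}/api/datasources",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("http://<master-ip>:30100", payload)
    print(urllib.request.urlopen(req).status)
```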

Creating graphs

As for the graphs: am2320_exporter exposes only two metrics, am2320_temperature_gauge and am2320_humidity_gauge, so configure them however suits your needs.
I used Grafana's Variables feature and set things up as follows.
Selection_049.png
Selection_050.png
Selection_051.png

The final graph
Selection_048.png

Closing thoughts

This project was inspired by the LED-blinking example in kubeedge/examples.
I hope it is useful to anyone who wants to try something similar.
