Overview
- k8s test environment setup: installing EFK (Elasticsearch, Fluentd, Kibana)
Setup steps
Environment
- Rancher: v2.4.8
- kubernetes(Client): v1.19.1
- kubernetes(Server): v1.18.8
- ECK(Elastic Cloud on Kubernetes): v1.2.1
- Elasticsearch: v7.9.2
- Kibana: v7.9.2
- Fluentd: v1.9.3
Installing ECK
- Work location: client PC
- Elasticsearch and Kibana are deployed via ECK (Elastic Cloud on Kubernetes)
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html
Installing the ECK operator
* The namespace is created automatically
$ kubectl apply -f https://download.elastic.co/downloads/eck/1.2.1/all-in-one.yaml
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
namespace/elastic-system created
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
rolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
## Verify ##
$ kubectl get crd | grep -i elastic
apmservers.apm.k8s.elastic.co 2020-09-27T15:32:37Z
beats.beat.k8s.elastic.co 2020-09-27T15:32:37Z
elasticsearches.elasticsearch.k8s.elastic.co 2020-09-27T15:32:37Z
enterprisesearches.enterprisesearch.k8s.elastic.co 2020-09-27T15:32:37Z
kibanas.kibana.k8s.elastic.co 2020-09-27T15:32:37Z
$ kubectl get statefulset -n elastic-system
NAME READY AGE
elastic-operator 1/1 62s
$ kubectl get pod -n elastic-system
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 61s
$ kubectl get svc -n elastic-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elastic-webhook-server ClusterIP 10.43.201.47 <none> 443/TCP 62s
Installing Elasticsearch
- Create the manifest
elasticsearch.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: test-elastic
  namespace: elastic-system
spec:
  version: 7.9.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    podTemplate:
      spec:
        volumes:
        - name: elasticsearch-data
          emptyDir: {}
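The manifest above keeps index data in an `emptyDir` volume, so everything is lost whenever the Pod is rescheduled; that is fine for a test environment. For anything longer-lived, ECK supports persistent storage via `volumeClaimTemplates` instead of the `emptyDir` podTemplate. A hedged sketch (the StorageClass name `standard` and the 5Gi size are placeholder assumptions for your cluster):

```yaml
# Sketch: request a PVC per Elasticsearch node instead of emptyDir.
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data   # ECK expects the claim to use this name
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: standard   # placeholder; use a class your cluster provides
```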
- Deploy
$ kubectl apply -f elasticsearch.yaml
elasticsearch.elasticsearch.k8s.elastic.co/test-elastic created
## Verify ##
$ kubectl get elasticsearch -n elastic-system
NAME HEALTH NODES VERSION PHASE AGE
test-elastic green 1 7.9.2 Ready 3m5s
$ kubectl get statefulset -n elastic-system
NAME READY AGE
..........
test-elastic-es-default 1/1 13m
..........
$ kubectl get pod -n elastic-system
NAME READY STATUS RESTARTS AGE
..........
test-elastic-es-default-0 1/1 Running 0 2m5s
..........
$ kubectl get svc -n elastic-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
..........
test-elastic-es-default ClusterIP None <none> 9200/TCP 5m22s
test-elastic-es-http ClusterIP 10.43.136.65 <none> 9200/TCP 5m23s
test-elastic-es-transport ClusterIP None <none> 9300/TCP 5m23s
..........
Installing Kibana
- Create the manifest
kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: test-kibana
  namespace: elastic-system
spec:
  version: 7.9.2
  count: 1
  elasticsearchRef:
    name: test-elastic
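The Kibana resource also accepts a `podTemplate`, the same way the Elasticsearch resource does, which is useful for pinning resource limits. A hedged sketch (the limit values are illustrative assumptions, not recommendations):

```yaml
# Sketch: the same Kibana spec with resource limits added via podTemplate.
spec:
  version: 7.9.2
  count: 1
  elasticsearchRef:
    name: test-elastic
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi   # illustrative values; size for your workload
            cpu: 1
```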
- Deploy
$ kubectl apply -f kibana.yaml
kibana.kibana.k8s.elastic.co/test-kibana created
## Verify ##
$ kubectl get kibana -n elastic-system
NAME HEALTH NODES VERSION AGE
test-kibana green 1 7.9.2 2m56s
$ kubectl get deploy -n elastic-system
NAME READY UP-TO-DATE AVAILABLE AGE
test-kibana-kb 1/1 1 1 2m25s
$ kubectl get pod -n elastic-system
NAME READY STATUS RESTARTS AGE
..........
test-kibana-kb-c478fcc7c-rgm6z 1/1 Running 0 10m
..........
$ kubectl get svc -n elastic-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
..........
test-kibana-kb-http ClusterIP 10.43.222.22 <none> 5601/TCP 2m40s
..........
Network configuration
- Nginx Ingress configuration
- Accessing Kibana over HTTPS requires the Nginx Ingress SSL passthrough feature
- If the Ingress controller was installed without passthrough enabled, run the following:
$ helm upgrade nginx-ingress ingress-nginx/ingress-nginx -n ingress-system --set "controller.extraArgs.enable-ssl-passthrough="
- Add an Ingress for access to Kibana
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: elastic-system
  annotations:
    # enable SSL passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: kibana.test.local
    http:
      paths:
      - path: /
        backend:
          serviceName: test-kibana-kb-http
          servicePort: 5601
- Deploy
$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/kibana-ingress created
## Verify ##
$ kubectl get ingress -n elastic-system
NAME CLASS HOSTS ADDRESS PORTS AGE
kibana-ingress <none> kibana.test.local 192.168.245.111 80 105s
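The manifest above uses `networking.k8s.io/v1beta1`, which matches the v1.18 server here but was removed in Kubernetes 1.22. On a newer cluster the same Ingress would look roughly like this sketch, with `pathType` required and the backend restructured (you may also need an `ingressClassName`, depending on how the controller is installed):

```yaml
# Sketch: the equivalent Ingress for Kubernetes 1.19+ (networking.k8s.io/v1).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: elastic-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: kibana.test.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-kibana-kb-http
            port:
              number: 5601
```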
- Add a hosts entry
Map "kibana.test.local" to the Ingress IP:
$ cat /etc/hosts
........
192.168.245.111 kibana.test.local
........
Accessing Kibana
- Retrieve the password
$ kubectl get secret test-elastic-es-elastic-user -o=jsonpath='{.data.elastic}' -n elastic-system | base64 --decode; echo
35XJBcQ7i1dov0H3Q727u1aU
- Access Kibana
URL : https://kibana.test.local
Username: elastic
Password: the password retrieved above
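The `kubectl` one-liner above just reads the `elastic` key out of the Secret and base64-decodes it, since Kubernetes stores Secret data base64-encoded. A minimal sketch of that decode step, using a stand-in value rather than a real secret:

```shell
# Illustration only; the stand-in value below is NOT a real secret. In the
# cluster, the value comes from the test-elastic-es-elastic-user Secret.
password="35XJBcQ7i1dov0H3Q727u1aU"

# Kubernetes stores Secret data base64-encoded, like this...
encoded=$(printf '%s' "$password" | base64)

# ...so the trailing `| base64 --decode` in the kubectl one-liner simply
# reverses that encoding.
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"   # prints the original password
```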
Configuring Fluentd
- Work location: client PC
- Fluentd configuration reference links
https://github.com/fluent/fluentd-kubernetes-daemonset
https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch-rbac.yaml
https://github.com/joshuarobinson/elasticsearch_k8s_examples/blob/master/fluentd-daemonset-elasticsearch.yaml
Installing Fluentd
- Create the installation manifest
fluentd-daemonset-elasticsearch.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: elastic-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: elastic-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: elastic-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_UID
          value: "0"
        - name: FLUENTD_SYSTEMD_CONF
          value: "disable"
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "test-elastic-es-http"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "https"
        - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
          value: "false"
        - name: FLUENT_ELASTICSEARCH_SSL_VERSION
          value: "TLSv1_2"
        - name: FLUENT_ELASTICSEARCH_USER
          value: "elastic"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: test-elastic-es-elastic-user
              key: elastic
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
- Deploy
$ kubectl apply -f fluentd-daemonset-elasticsearch.yaml
## Verify ##
$ kubectl get pod -n elastic-system
NAME READY STATUS RESTARTS AGE
......
fluentd-22crn 1/1 Running 0 4m24s
fluentd-pcxqt 1/1 Running 0 4m24s
fluentd-w9fvg 1/1 Running 0 4m24s
......
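With this setup Fluentd writes to date-stamped indices with the image's default `logstash-` prefix. The fluentd-kubernetes-daemonset image reads additional environment variables from its bundled `fluent.conf`; for example, the index prefix can be overridden. A hedged sketch to add to the `env` list of the fluentd container above (verify the variable name against the `fluent.conf` of the image version you actually pull):

```yaml
        # Write k8s-YYYY.MM.DD indices instead of the default logstash-... ones.
        - name: FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX
          value: "k8s"
```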