
Installing Kubernetes with Ansible

Last updated at Posted at 2019-03-22

I tried installing Kubernetes with Ansible, following the references below.
I wanted to do everything with Ansible, but I ended up running the cluster initialization and join steps by hand.
Clearly a skill issue :cry:

In the end, Ansible handles the initial setup and the installation of the required packages, and the cluster itself is assembled manually at the end.

There is also kubespray, which is built on Ansible, so I'd like to try that separately.

For the Kubernetes installation itself, I referred to:
https://qiita.com/nagase/items/15726e37057e7cc3b8cd
https://ops.jig-saw.com/techblog/virtualbox_centos_kubernetes/

Environment

Here is the environment used this time.
The Ansible version is 2.7.9.

# ansible --version
ansible 2.7.9
  config file = /root/k8s/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.5 (default, Apr 10 2018, 17:08:37) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

The Kubernetes cluster consists of one Master and two Workers.
The OS is CentOS 7.6, running on ESXi at home.

# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

Building the cluster with Ansible (partly manual)

First, prepare the inventory (hosts) file.

[master]
192.168.0.90

[worker]
192.168.0.91
192.168.0.92

[all:vars]
ansible_user = root
ansible_ssh_pass = xxxxxxxx
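
One note: this inventory stores the root password in plain text, which is fine for a home lab but not much else. Encrypting the value with ansible-vault is the usual fix; an illustrative sketch (the output would go in a group_vars YAML file, since inline vault values need YAML rather than INI):

```shell
# Encrypt the password once and paste the result into group_vars/all.yml,
# then supply the vault password at run time (illustrative commands)
ansible-vault encrypt_string 'xxxxxxxx' --name 'ansible_ssh_pass'
ansible-playbook -i hosts presetup.yml --ask-vault-pass
```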

Create a Playbook that performs the following on all nodes:

・Stop firewalld and disable it at boot
・Disable SELinux
・Load br_netfilter and set net.bridge.bridge-nf-call-iptables to 1
・Generate /etc/hosts
・Disable swap

presetup.yml
---

- hosts: all
  connection: ssh
  gather_facts: True

  tasks:
   - name: disable SELinux
     command: setenforce 0

   - name: disable SELinux on reboot
     selinux:
       state: disabled

   - name: modprobe br_netfilter
     command: modprobe br_netfilter

   - name: ensure net.bridge.bridge-nf-call-iptables is set to 1
     sysctl:
      name: net.bridge.bridge-nf-call-iptables
      value: 1
      state: present

   - name: disable swap
     command: swapoff -a

   - name: Generate /etc/hosts file
     template:
       src: hosts.j2
       dest: /etc/hosts

   - name: disable firewalld
     systemd:
       name: firewalld
       state: stopped
       enabled: no
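
One caveat with the Playbook above: `setenforce 0` and `swapoff -a` are one-shot commands, so swap comes back after a reboot. A sketch of an extra task that also comments swap entries out of /etc/fstab (assuming a standard fstab layout; untested here):

```yaml
   - name: keep swap disabled across reboots
     replace:
       path: /etc/fstab
       regexp: '^([^#].*\sswap\s.*)$'
       replace: '# \1'
```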

Run it.

# ansible-playbook -i hosts presetup.yml

PLAY [all] ***********************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [192.168.0.90]
ok: [192.168.0.91]
ok: [192.168.0.92]

TASK [disable SELinux] ***********************************************************************************************************************
changed: [192.168.0.90]
changed: [192.168.0.91]
changed: [192.168.0.92]

TASK [disable SELinux on reboot] *************************************************************************************************************
 [WARNING]: SELinux state change will take effect next reboot

ok: [192.168.0.91]
ok: [192.168.0.90]
ok: [192.168.0.92]

TASK [modprobe br_netfilter] *****************************************************************************************************************
changed: [192.168.0.91]
changed: [192.168.0.90]
changed: [192.168.0.92]

TASK [ensure net.bridge.bridge-nf-call-iptables is set to 1] *********************************************************************************
ok: [192.168.0.90]
ok: [192.168.0.91]
ok: [192.168.0.92]

TASK [disable swap] **************************************************************************************************************************
changed: [192.168.0.91]
changed: [192.168.0.90]
changed: [192.168.0.92]

TASK [Generate /etc/hosts file] **************************************************************************************************************
changed: [192.168.0.90]
changed: [192.168.0.91]
changed: [192.168.0.92]

TASK [disable firewalld] *********************************************************************************************************************
changed: [192.168.0.91]
changed: [192.168.0.92]
changed: [192.168.0.90]

PLAY RECAP ***********************************************************************************************************************************
192.168.0.90               : ok=8    changed=5    unreachable=0    failed=0
192.168.0.91               : ok=8    changed=5    unreachable=0    failed=0
192.168.0.92               : ok=8    changed=5    unreachable=0    failed=0

Prepare a template like this to generate the hosts file.

hosts.j2
# {{ ansible_managed }}
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

{% for item in play_hosts %}
{{ hostvars[item]['ansible_default_ipv4']['address'] }}  {{ hostvars[item]['ansible_hostname'] }}
{% endfor %}
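
For reference, with the three nodes in this article the template should render to roughly the following (an illustration based on the hostnames and IPs that appear in the logs, not captured output):

```
# Ansible managed
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.90  idol-master
192.168.0.91  idol-worker1
192.168.0.92  idol-worker2
```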

Next, write and run a Playbook that installs the Docker and Kubernetes packages each machine needs.

kube-dependencies.yml
---

- hosts: all
  connection: ssh
  gather_facts: True

  tasks:
  - name: install packages
    yum:
      name: "{{ packages }}"
    vars:
      packages:
        - yum-utils
        - device-mapper-persistent-data
        - lvm2

  - name: get repo file
    get_url:
      url: https://download.docker.com/linux/centos/docker-ce.repo
      dest: /etc/yum.repos.d/docker-ce.repo
      mode: 0644

  - name: install docker
    yum:
      name: "{{ packages }}"
    vars:
      packages:
        - docker-ce
        - docker-ce-cli
        - containerd.io

  - name: start docker
    systemd:
      name: docker
      state: started
      enabled: yes

  - name: add kubernetes repo
    yum_repository:
      name: kubernetes
      description: kubernetes repo
      baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
      gpgcheck: no
      enabled: yes

  - name: install kubelet kubeadm
    yum:
      name: "{{ packages }}"
      state: present
    vars:
      packages:
        - kubelet
        - kubeadm

  - name: start kubelet
    systemd:
      name: kubelet
      state: started
      enabled: yes

- hosts: master
  tasks:
   - name: install kubectl
     yum:
       name: kubectl
       state: present
       allow_downgrade: yes
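
Note that these tasks install whatever docker/kubelet/kubeadm versions the repos currently serve, which is why the kubeadm preflight check later warns about an unvalidated Docker version. If reproducibility matters, versions can be pinned; a sketch (the exact version strings are assumptions):

```yaml
  - name: install pinned kubelet and kubeadm (version strings are assumptions)
    yum:
      name:
        - kubelet-1.13.4
        - kubeadm-1.13.4
      state: present
```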

Run the playbook above.

# ansible-playbook -i hosts kube-dependencies.yml

PLAY [all] ***********************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [192.168.0.91]
ok: [192.168.0.92]
ok: [192.168.0.90]

TASK [install packages] **********************************************************************************************************************
ok: [192.168.0.90]
ok: [192.168.0.91]
ok: [192.168.0.92]

TASK [get repo file] *************************************************************************************************************************
changed: [192.168.0.90]
changed: [192.168.0.91]
changed: [192.168.0.92]

TASK [install docker] ************************************************************************************************************************
changed: [192.168.0.90]
changed: [192.168.0.91]
changed: [192.168.0.92]

TASK [start docker] **************************************************************************************************************************
changed: [192.168.0.90]
changed: [192.168.0.92]
changed: [192.168.0.91]

TASK [add kubernetes repo] *******************************************************************************************************************
changed: [192.168.0.90]
changed: [192.168.0.91]
changed: [192.168.0.92]

TASK [install kubelet kubeadm] ***************************************************************************************************************
changed: [192.168.0.92]
changed: [192.168.0.90]
changed: [192.168.0.91]

TASK [start kubelet] *************************************************************************************************************************
changed: [192.168.0.90]
changed: [192.168.0.92]
changed: [192.168.0.91]

PLAY [master] ********************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [192.168.0.90]

TASK [install kubectl] ***********************************************************************************************************************
ok: [192.168.0.90]

PLAY RECAP ***********************************************************************************************************************************
192.168.0.90               : ok=10   changed=6    unreachable=0    failed=0
192.168.0.91               : ok=8    changed=6    unreachable=0    failed=0
192.168.0.92               : ok=8    changed=6    unreachable=0    failed=0

From here, I applied the remaining settings by hand.
Run kubeadm init on the Master to initialize the cluster.

# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [idol-master localhost] and IPs [192.168.0.90 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [idol-master localhost] and IPs [192.168.0.90 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [idol-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.90]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.502506 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "idol-master" as an annotation
[mark-control-plane] Marking the node idol-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node idol-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5qvuvz.a79cqaoebyh3nvou
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.90:6443 --token 5qvuvz.a79cqaoebyh3nvou --discovery-token-ca-cert-hash sha256:afce842b5bd9d33cd64e1d5c8257470e24e6e27a7ebf412027d7f466fdbe0d38

Copy the required kubeconfig into place.

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

Deploy flannel to finish setting up the Master node.

[root@idol-master ~]# kubectl get node
NAME          STATUS     ROLES    AGE     VERSION
idol-master   NotReady   master   2m30s   v1.13.4
[root@idol-master ~]#
[root@idol-master ~]#
[root@idol-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@idol-master ~]# kubectl get node
NAME          STATUS     ROLES    AGE     VERSION
idol-master   NotReady   master   2m51s   v1.13.4
[root@idol-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-2bwgs              1/1     Running   0          2m45s
kube-system   coredns-86c58d9df4-8n8mc              1/1     Running   0          2m45s
kube-system   etcd-idol-master                      1/1     Running   0          102s
kube-system   kube-apiserver-idol-master            1/1     Running   0          101s
kube-system   kube-controller-manager-idol-master   1/1     Running   0          2m5s
kube-system   kube-flannel-ds-amd64-hbx84           1/1     Running   0          18s
kube-system   kube-proxy-lxhsq                      1/1     Running   0          2m45s
kube-system   kube-scheduler-idol-master            1/1     Running   0          114s
[root@idol-master ~]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
idol-master   Ready    master   3m10s   v1.13.4
[root@idol-master ~]#

The Master node is now ready.
Next, run the kubeadm join command printed earlier on each Worker to add them to the cluster.

[root@idol-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
idol-master    Ready    master   12m    v1.13.4
idol-worker1   Ready    <none>   2m2s   v1.13.4
idol-worker2   Ready    <none>   74s    v1.13.4
[root@idol-master ~]#
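
The manual kubeadm init / join steps above could in principle be folded into Ansible as well. A rough, untested sketch using the shell module with `creates` guards for idempotency (`kubeadm token create --print-join-command` regenerates a join command on demand):

```yaml
- hosts: master
  tasks:
    - name: initialize the cluster (skipped if already initialized)
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf

    - name: generate a join command
      shell: kubeadm token create --print-join-command
      register: join_command

- hosts: worker
  tasks:
    - name: join the cluster (skipped if already joined)
      shell: "{{ hostvars[groups['master'][0]].join_command.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf
```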

Finally, I set the worker labels.

[root@idol-master ~]# kubectl label node idol-worker1 node-role.kubernetes.io/worker=
[root@idol-master ~]# kubectl label node idol-worker2 node-role.kubernetes.io/worker=
[root@idol-master ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
idol-master    Ready    master   24m   v1.13.4
idol-worker1   Ready    worker   13m   v1.13.4
idol-worker2   Ready    worker   12m   v1.13.4
[root@idol-master ~]#

Some manual steps crept in at the end, but now the fun home k8s life begins~ :heart_eyes:
