Building a Rancher v2.2.2 HA Setup on AWS

Posted at 2019-05-08

1. Introduction

Rancher is a tool that provides functionality for managing multiple Kubernetes clusters. When Rancher is used in production, running it in an HA configuration is recommended. With an HA configuration of multiple Rancher servers, users can always reach Rancher even if one of the Rancher servers goes down.

To build an HA Rancher setup, you first create a Kubernetes cluster with rke and then deploy Rancher onto it with helm. Deploying Rancher on Kubernetes lets it integrate with the cluster's etcd and take advantage of Kubernetes scheduling.

This post walks through building an HA Rancher environment following the steps in the official documentation. Almost everything follows the official procedure, so there is nothing new here, but blog posts and Qiita articles covering an HA Rancher build turned out to be surprisingly scarce, which is why I decided to publish it.

Official documentation link:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/

rancher-logo-2.jpg

2. Environment

This setup uses EC2 instances and a load balancer on AWS. The environment is summarized below.

2-1. Basic information

  • AWS

    • EC2 instance type: t3.medium
    • OS: Ubuntu Server 18.04
  • Hostnames

    • Build (work) instance: rancher-building
    • Rancher servers: rancher-server-1 rancher-server-2 rancher-server-3
  • Addressing

    • VPC: 10.10.0.0/16
    • Subnet: 10.10.0.0/24
    • rancher-server-1: 10.10.0.11
    • rancher-server-2: 10.10.0.12
    • rancher-server-3: 10.10.0.13
    • rancher-building: 10.10.0.112
  • Versions

    • Rancher: v2.2.2
    • Docker: 18.09.5
    • Kubernetes: v1.13.5
    • rke command: v0.1.18
    • kubectl command: v1.14.1
    • helm command: v2.13.1 (the official docs say to use v2.12.1 or later)

2-2. Architecture diagram

The environment built this time looks like the diagram below. It is almost identical to the official documentation, except that a separate build (work) instance is created for running the setup. The rke and helm commands are executed from this build instance. Once the build is done, Rancher Server is accessed through the LB.

rancher-ha-1 (2).jpg

3. Creating the nodes and load balancer

Now we move on to the actual build. First, we create the nodes and the load balancer.

Official documentation link:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/

3-1. Creating and configuring the instances

First, launch one build instance and three instances for the Rancher servers. Launching the instances themselves is omitted here; we start from the work done after logging in over ssh.

3-1-1. Configuring the build instance

We start with the build instance. Starting from a Rancher server would also work, but I began with the build instance partly to confirm that no errors appear while going through the actual steps. The work items are listed below.

  • sudo apt-get update && sudo apt-get upgrade -y
  • Change the hostname
sudo hostnamectl set-hostname --static rancher-building
  • Create an ssh key: when building the Rancher servers with the rke command, the client must be able to connect to the servers over ssh, so generate a new key pair for ssh. Since all that is needed is a working connection, the key is created with ssh-keygen's default settings.
ubuntu@rancher-building:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
  • /etc/hosts: add the entries shown below.
ubuntu@rancher-building:~$ sudo vi /etc/hosts
ubuntu@rancher-building:~$ cat /etc/hosts
127.0.0.1 localhost
10.10.0.11 rancher-server-1   # newly added
10.10.0.12 rancher-server-2   # newly added
10.10.0.13 rancher-server-3   # newly added
10.10.0.112 rancher-building   # newly added

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
  • Install the kubectl command: install it following the official Kubernetes documentation.

Official documentation link:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management

ubuntu@rancher-building:~$ sudo apt-get install -y apt-transport-https
ubuntu@rancher-building:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
ubuntu@rancher-building:~$ echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
ubuntu@rancher-building:~$ sudo apt-get update
ubuntu@rancher-building:~$ sudo apt-get install -y kubectl

To verify, check the version.

ubuntu@rancher-building:~$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
  • Install the rke command: install it following the official Rancher documentation.

Official documentation link:
https://rancher.com/docs/rke/latest/en/installation/#download-the-rke-binary

ubuntu@rancher-building:~$ wget https://github.com/rancher/rke/releases/download/v0.1.18/rke_linux-amd64
ubuntu@rancher-building:~$ mv rke_linux-amd64 rke
ubuntu@rancher-building:~$ chmod +x rke
ubuntu@rancher-building:~$ sudo mv ./rke /usr/local/bin/

To verify, check the version.

ubuntu@rancher-building:~$ rke --version
rke version v0.1.18
  • Install the helm command: install it following the official Helm documentation.

Official documentation link:
https://helm.sh/docs/using_helm/#from-the-binary-releases

ubuntu@rancher-building:~$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
ubuntu@rancher-building:~$ tar -zxvf helm-v2.13.1-linux-amd64.tar.gz 
ubuntu@rancher-building:~$ sudo mv linux-amd64/helm /usr/local/bin/

Run helm help to verify.

ubuntu@rancher-building:~$ helm help
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

  $ helm init

This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.

Common actions from this point include:

- helm search:    search for charts
- helm fetch:     download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment:
  $HELM_HOME           set an alternative location for Helm files. By default, these are stored in ~/.helm
  $HELM_HOST           set an alternative Tiller host. The format is host:port
  $HELM_NO_PLUGINS     disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
  $TILLER_NAMESPACE    set an alternative Tiller namespace (default "kube-system")
  $KUBECONFIG          set an alternative Kubernetes configuration file (default "~/.kube/config")
  $HELM_TLS_CA_CERT    path to TLS CA certificate used to verify the Helm client and Tiller server certificates (default "$HELM_HOME/ca.pem")
  $HELM_TLS_CERT       path to TLS client certificate file for authenticating to Tiller (default "$HELM_HOME/cert.pem")
  $HELM_TLS_KEY        path to TLS client key file for authenticating to Tiller (default "$HELM_HOME/key.pem")
  $HELM_TLS_ENABLE     enable TLS connection between Helm and Tiller (default "false")
  $HELM_TLS_VERIFY     enable TLS connection between Helm and Tiller and verify Tiller server certificate (default "false")
  $HELM_TLS_HOSTNAME   the hostname or IP address used to verify the Tiller server certificate (default "127.0.0.1")
  $HELM_KEY_PASSPHRASE set HELM_KEY_PASSPHRASE to the passphrase of your PGP private key. If set, you will not be prompted for
                       the passphrase while signing helm charts

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  delete      given a release name, delete the release from Kubernetes
  dependency  manage a chart's dependencies
  fetch       download a chart from a repository and (optionally) unpack it in local directory
  get         download a named release
  help        Help about any command
  history     fetch release history
  home        displays the location of HELM_HOME
  init        initialize Helm on both client and server
  inspect     inspect a chart
  install     install a chart archive
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      add, list, or remove Helm plugins
  repo        add, list, remove, update, and index chart repositories
  reset       uninstalls Tiller from a cluster
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  serve       start a local http web server
  status      displays the status of the named release
  template    locally render templates
  test        test a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client/server version information

Flags:
      --debug                           enable verbose output
  -h, --help                            help for helm
      --home string                     location of your Helm config. Overrides $HELM_HOME (default "/home/ubuntu/.helm")
      --host string                     address of Tiller. Overrides $HELM_HOST
      --kube-context string             name of the kubeconfig context to use
      --kubeconfig string               absolute path to the kubeconfig file to use
      --tiller-connection-timeout int   the duration (in seconds) Helm will wait to establish a connection to tiller (default 300)
      --tiller-namespace string         namespace of Tiller (default "kube-system")

Use "helm [command] --help" for more information about a command.

3-1-2. Configuring the Rancher servers

Next, the configuration of the Rancher servers. Run the following steps on all three Rancher servers.

  • sudo apt-get update && sudo apt-get upgrade -y

  • Change the hostname: set each server's own hostname (rancher-server-1, and so on), in the same way as on the build instance.

  • /etc/hosts: add the same entries as on the build instance.

  • Install the kubectl command: use the same steps as on the build instance.

  • Install Docker: when a Kubernetes cluster is built with the rke command, the Kubernetes components are started as Docker containers, so Docker is required on the Rancher servers. Here too, install it following the official documentation.

Official documentation link:
https://docs.docker.com/install/linux/docker-ce/ubuntu/

ubuntu@rancher-server-1:~$ sudo apt-get update
ubuntu@rancher-server-1:~$ sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
ubuntu@rancher-server-1:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
ubuntu@rancher-server-1:~$ sudo apt-key fingerprint 0EBFCD88
ubuntu@rancher-server-1:~$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
ubuntu@rancher-server-1:~$ sudo apt-get update
ubuntu@rancher-server-1:~$ sudo apt-get install docker-ce docker-ce-cli containerd.io

To verify, run the hello-world container.

ubuntu@rancher-server-1:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:92695bc579f31df7a63da6922075d0666e565ceccad16b59c3374d2cf4e8e50e
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

The installed Docker version is as follows.

ubuntu@rancher-server-1:~$ sudo docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:43:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       e8ff056
  Built:            Thu Apr 11 04:10:53 2019
  OS/Arch:          linux/amd64
  Experimental:     false
  • Add the user to the docker group: the requirements in the official documentation state that the user who accesses the nodes over SSH must belong to the docker group. Here we add the ubuntu user to the docker group.
ubuntu@rancher-server-1:~$ sudo usermod -aG docker ubuntu
ubuntu@rancher-server-1:~$ exit # log out of the terminal once
ubuntu@rancher-server-1:~$ docker version # confirm that no error is shown

The steps are also described in the official Docker documentation.

  • ssh setup: add the public key generated on the build instance to .ssh/authorized_keys on each Rancher server, so that the build instance can ssh into each Rancher server. (A quick verification loop is shown at the end of this section.)
# Build instance
ubuntu@rancher-building:~$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys

# Each Rancher server
ubuntu@rancher-server-1:~$ vi .ssh/authorized_keys # add the contents of id_rsa.pub from the build instance
ubuntu@rancher-server-1:~$ sudo systemctl restart sshd

# Build instance
ubuntu@rancher-building:~$ ssh ubuntu@rancher-server-1

~ output omitted ~

Last login: Sun May  5 13:01:51 2019 from 118.86.81.13
ubuntu@rancher-server-1:~$ 
# Swap is off by default, so only run a check
ubuntu@rancher-server-1:~$ swapon -s
ubuntu@rancher-server-1:~$ sudo vi /etc/sysctl.conf
ubuntu@rancher-server-1:~$ cat /etc/sysctl.conf

~ output omitted ~

net.bridge.bridge-nf-call-iptables=1

~ output omitted ~

ubuntu@rancher-server-1:~$ sudo sysctl -p
net.bridge.bridge-nf-call-iptables = 1
ubuntu@rancher-server-1:~$ 
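
Before moving on, it can help to confirm from the build instance that every Rancher server is reachable over ssh and that the ubuntu user can run docker without sudo, since rke relies on both. The loop below is a small sketch added here for convenience, not part of the original steps.

# Hedged sketch: verify ssh access and docker group membership from the build instance.
for h in rancher-server-1 rancher-server-2 rancher-server-3; do
  echo "== $h =="
  ssh -o BatchMode=yes ubuntu@"$h" 'hostname && docker ps > /dev/null && echo docker OK'
done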

3-2. Building the LB

Next we build the AWS load balancer. The steps are described in the official Rancher documentation, so we follow those.

Official documentation link:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/nlb/

3-2-1. Creating the target groups

  • First, create two target groups. From the AWS console, go to the EC2 dashboard and choose "Target Groups" from the left-hand menu. (A CLI equivalent is sketched after the tables below.)

rancher-ha-13.jpg

  • Create the first target group, rancher-tcp-443, using the following values.
Option | Setting
Target group name | rancher-tcp-443
Target type | Instance
Protocol | TCP
Port | 443
VPC | Select the VPC you are using
Protocol (health checks) | HTTP
Path (health checks) | /healthz
Port (advanced health check settings) | Override, 80
Healthy threshold (advanced health check settings) | 3
Unhealthy threshold (advanced health check settings) | 3
Timeout (advanced health check settings) | 6
Interval (advanced health check settings) | 10 seconds
Success codes (advanced health check settings) | 200-399

rancher-ha-2 (1).jpg

  • Create the second target group, rancher-tcp-80, using the following values.
Option | Setting
Target group name | rancher-tcp-80
Target type | Instance
Protocol | TCP
Port | 80
VPC | Select the VPC you are using
Protocol (health checks) | HTTP
Path (health checks) | /healthz
Port (advanced health check settings) | Traffic port
Healthy threshold (advanced health check settings) | 3
Unhealthy threshold (advanced health check settings) | 3
Timeout (advanced health check settings) | 6
Interval (advanced health check settings) | 10 seconds
Success codes (advanced health check settings) | 200-399
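
The same target groups can also be created from the AWS CLI. This is only a hedged sketch added for reference, not part of the original procedure: vpc-xxxxxxxx is a placeholder for your VPC ID, and some of the health-check options may be fixed for TCP target groups depending on the API version, in which case they can simply be omitted.

# Hedged sketch: create the rancher-tcp-443 target group
# (rancher-tcp-80 is analogous: --port 80 and --health-check-port traffic-port).
aws elbv2 create-target-group \
  --name rancher-tcp-443 \
  --protocol TCP --port 443 \
  --target-type instance \
  --vpc-id vpc-xxxxxxxx \
  --health-check-protocol HTTP \
  --health-check-path /healthz \
  --health-check-port 80 \
  --healthy-threshold-count 3 \
  --unhealthy-threshold-count 3 \
  --health-check-interval-seconds 10 \
  --matcher HttpCode=200-399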

3-2-2. Registering the instances with the target groups

  • Next, register the Rancher servers with the target groups you just created. On the "Target Groups" screen, select a target group, then in the tabs at the bottom of the screen choose "Targets" → "Edit".

rancher-ha-14.jpg

Select the Rancher server instances and click "Save".

rancher-ha-4 (2).jpg

Register the Rancher server instances with both of the two target groups created earlier. (A CLI equivalent is sketched below.)
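
As a hedged CLI alternative to the console steps above (not part of the original procedure), the instances can be registered with aws elbv2 register-targets; the target group ARN and instance IDs below are placeholders.

# Hedged sketch: register the three Rancher server instances with a target group.
aws elbv2 register-targets \
  --target-group-arn <rancher-tcp-443 target group ARN> \
  --targets Id=<instance-id-1> Id=<instance-id-2> Id=<instance-id-3>
# Repeat the same command for the rancher-tcp-80 target group.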

3-2-3. Creating a new NLB

  • Finally, create the NLB. From the EC2 dashboard, choose "Load Balancers".

rancher-ha-15.jpg

For the load balancer type, choose "Network Load Balancer".

rancher-ha-5.jpg

Configure the load balancer. Enter the following values and click "Next: Configure Security Settings".

Option | Setting
Name | rancher
Scheme | internet-facing
Load balancer protocol (listeners) | TCP
Load balancer port (listeners) | 443
VPC (availability zones) | Select the VPC you are using
Availability zone (availability zones) | Select the subnet you are using
IPv4 address (availability zones) | Assigned by AWS

rancher-ha-6.jpg

On the next screen, click "Next: Configure Routing".

rancher-ha-8 (1).jpg

Configure routing. Enter the following values and click "Next: Register Targets".

Option | Setting
Target group | Existing target group
Name | rancher-tcp-443

rancher-ha-9.jpg

Check the targets shown on the register targets screen; if they are correct, click "Next: Review". If everything on the review screen looks right, click "Create".

rancher-ha-11.jpg

Finally, add a listener. Select the load balancer you created, then in the tabs at the bottom of the screen choose "Listeners" → "Add listener".

rancher-ha-16.jpg

On the add listener screen, add the following settings and click "Save".

Option | Setting
Protocol:Port | TCP:80
Forward to | rancher-tcp-80

rancher-ha-12.jpg
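
For reference, the NLB and both listeners can also be created from the AWS CLI. This is a hedged sketch, not part of the original procedure; the subnet ID and ARNs are placeholders.

# Hedged sketch: create the internet-facing NLB and its TCP:443 / TCP:80 listeners.
aws elbv2 create-load-balancer \
  --name rancher \
  --type network \
  --scheme internet-facing \
  --subnets subnet-xxxxxxxx
aws elbv2 create-listener \
  --load-balancer-arn <NLB ARN> \
  --protocol TCP --port 443 \
  --default-actions Type=forward,TargetGroupArn=<rancher-tcp-443 ARN>
aws elbv2 create-listener \
  --load-balancer-arn <NLB ARN> \
  --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<rancher-tcp-80 ARN>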

That completes the load balancer setup.

4. Installing Kubernetes with rke

Next, we build a Kubernetes cluster on the three Rancher servers using the rke command.

Official documentation link:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/kubernetes-rke/

4-1. Creating the rke config

First, create the rancher-cluster.yml file used for the build. Here is a quick overview of the common options available in the config file.

  • address: Required. The public DNS name / IP address.
  • user: Required. A user that can run the docker command.
  • role: Required. The roles to assign to the node; there are three: controlplane, worker, and etcd.
  • internal_address: The private DNS name / IP address.
  • ssh_key_path: Path to the SSH private key (the default is ~/.ssh/id_rsa).

Next is the config file actually used for this build. Following the notes in the official documentation, internal_address is included. Various other options beyond the ones above can also be added as needed; according to the link below, you can specify the Kubernetes version, the cluster name, a private registry, and even an external etcd.

Official documentation link:
https://rancher.com/docs/rke/latest/en/config-options/

rancher-cluster.yml
nodes:
  - address: 10.10.0.11
    internal_address: 10.10.0.11
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 10.10.0.12
    internal_address: 10.10.0.12
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 10.10.0.13
    internal_address: 10.10.0.13
    user: ubuntu
    role: [controlplane,worker,etcd]

4-2. Running rke up

Once the config file above is ready, run the rke up command to build the Kubernetes cluster. The full log is included below.

ubuntu@rancher-building:~$ rke up --config ./rancher-cluster.yml
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [10.10.0.11]  
INFO[0000] [dialer] Setup tunnel for host [10.10.0.12]  
INFO[0001] [dialer] Setup tunnel for host [10.10.0.13]  
INFO[0002] [network] Deploying port listener containers 
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.16] on host [10.10.0.12] 
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.16] on host [10.10.0.11] 
INFO[0002] [network] Pulling image [rancher/rke-tools:v0.1.16] on host [10.10.0.13] 
INFO[0011] [network] Successfully pulled image [rancher/rke-tools:v0.1.16] on host [10.10.0.12] 
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.16] on host [10.10.0.11] 
INFO[0012] [network] Successfully pulled image [rancher/rke-tools:v0.1.16] on host [10.10.0.13] 
INFO[0013] [network] Successfully started [rke-etcd-port-listener] container on host [10.10.0.12] 
INFO[0013] [network] Successfully started [rke-etcd-port-listener] container on host [10.10.0.13] 
INFO[0013] [network] Successfully started [rke-etcd-port-listener] container on host [10.10.0.11] 
INFO[0013] [network] Successfully started [rke-cp-port-listener] container on host [10.10.0.12] 
INFO[0013] [network] Successfully started [rke-cp-port-listener] container on host [10.10.0.13] 
INFO[0013] [network] Successfully started [rke-cp-port-listener] container on host [10.10.0.11] 
INFO[0014] [network] Successfully started [rke-worker-port-listener] container on host [10.10.0.13] 
INFO[0014] [network] Successfully started [rke-worker-port-listener] container on host [10.10.0.11] 
INFO[0014] [network] Successfully started [rke-worker-port-listener] container on host [10.10.0.12] 
INFO[0014] [network] Port listener containers deployed successfully 
INFO[0014] [network] Running etcd <-> etcd port checks  
INFO[0014] [network] Successfully started [rke-port-checker] container on host [10.10.0.11] 
INFO[0014] [network] Successfully started [rke-port-checker] container on host [10.10.0.13] 
INFO[0014] [network] Successfully started [rke-port-checker] container on host [10.10.0.12] 
INFO[0014] [network] Running control plane -> etcd port checks 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [10.10.0.12] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [10.10.0.11] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [10.10.0.13] 
INFO[0015] [network] Running control plane -> worker port checks 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [10.10.0.13] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [10.10.0.11] 
INFO[0015] [network] Successfully started [rke-port-checker] container on host [10.10.0.12] 
INFO[0016] [network] Running workers -> control plane port checks 
INFO[0016] [network] Successfully started [rke-port-checker] container on host [10.10.0.11] 
INFO[0016] [network] Successfully started [rke-port-checker] container on host [10.10.0.12] 
INFO[0016] [network] Successfully started [rke-port-checker] container on host [10.10.0.13] 
INFO[0016] [network] Checking KubeAPI port Control Plane hosts 
INFO[0016] [network] Removing port listener containers  
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [10.10.0.11] 
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [10.10.0.13] 
INFO[0016] [remove/rke-etcd-port-listener] Successfully removed container on host [10.10.0.12] 
INFO[0017] [remove/rke-cp-port-listener] Successfully removed container on host [10.10.0.12] 
INFO[0017] [remove/rke-cp-port-listener] Successfully removed container on host [10.10.0.13] 
INFO[0017] [remove/rke-cp-port-listener] Successfully removed container on host [10.10.0.11] 
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [10.10.0.13] 
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [10.10.0.11] 
INFO[0017] [remove/rke-worker-port-listener] Successfully removed container on host [10.10.0.12] 
INFO[0017] [network] Port listener containers removed successfully 
INFO[0017] [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts 
INFO[0018] [certificates] Successfully started [cert-fetcher] container on host [10.10.0.11] 
INFO[0018] [certificates] Successfully started [cert-fetcher] container on host [10.10.0.12] 
INFO[0019] [certificates] Successfully started [cert-fetcher] container on host [10.10.0.13] 
INFO[0020] [certificates] No Certificate backup found on [etcd,controlPlane] hosts 
INFO[0020] [certificates] Generating CA kubernetes certificates 
INFO[0020] [certificates] Generating Kubernetes API server certificates 
INFO[0020] [certificates] Generating Kube Controller certificates 
INFO[0020] [certificates] Generating Kube Scheduler certificates 
INFO[0020] [certificates] Generating Kube Proxy certificates 
INFO[0021] [certificates] Generating Node certificate   
INFO[0021] [certificates] Generating admin certificates and kubeconfig 
INFO[0022] [certificates] Generating etcd-10.10.0.11 certificate and key 
INFO[0022] [certificates] Generating etcd-10.10.0.12 certificate and key 
INFO[0023] [certificates] Generating etcd-10.10.0.13 certificate and key 
INFO[0023] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0023] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0024] [certificates] Temporarily saving certs to [etcd,controlPlane] hosts 
INFO[0029] [certificates] Saved certs to [etcd,controlPlane] hosts 
INFO[0029] [reconcile] Reconciling cluster state        
INFO[0029] [reconcile] This is newly generated cluster  
INFO[0029] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0035] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml] 
INFO[0035] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0035] Pre-pulling kubernetes images                
INFO[0035] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [10.10.0.13] 
INFO[0035] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [10.10.0.11] 
INFO[0035] [pre-deploy] Pulling image [rancher/hyperkube:v1.13.5-rancher1] on host [10.10.0.12] 
INFO[0055] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [10.10.0.13] 
INFO[0055] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [10.10.0.11] 
INFO[0056] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.13.5-rancher1] on host [10.10.0.12] 
INFO[0056] Kubernetes images pulled successfully        
INFO[0056] [etcd] Building up etcd plane..              
INFO[0056] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24] on host [10.10.0.11] 
INFO[0059] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24] on host [10.10.0.11] 
INFO[0060] [etcd] Successfully started [etcd] container on host [10.10.0.11] 
INFO[0060] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.10.0.11] 
INFO[0061] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.10.0.11] 
INFO[0066] [certificates] Successfully started [rke-bundle-cert] container on host [10.10.0.11] 
INFO[0066] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.10.0.11] 
INFO[0067] [etcd] Successfully started [rke-log-linker] container on host [10.10.0.11] 
INFO[0067] [remove/rke-log-linker] Successfully removed container on host [10.10.0.11] 
INFO[0067] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24] on host [10.10.0.12] 
INFO[0070] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24] on host [10.10.0.12] 
INFO[0070] [etcd] Successfully started [etcd] container on host [10.10.0.12] 
INFO[0070] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.10.0.12] 
INFO[0071] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.10.0.12] 
INFO[0076] [certificates] Successfully started [rke-bundle-cert] container on host [10.10.0.12] 
INFO[0076] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.10.0.12] 
INFO[0077] [etcd] Successfully started [rke-log-linker] container on host [10.10.0.12] 
INFO[0077] [remove/rke-log-linker] Successfully removed container on host [10.10.0.12] 
INFO[0077] [etcd] Pulling image [rancher/coreos-etcd:v3.2.24] on host [10.10.0.13] 
INFO[0080] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.2.24] on host [10.10.0.13] 
INFO[0081] [etcd] Successfully started [etcd] container on host [10.10.0.13] 
INFO[0081] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.10.0.13] 
INFO[0081] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.10.0.13] 
INFO[0087] [certificates] Successfully started [rke-bundle-cert] container on host [10.10.0.13] 
INFO[0087] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.10.0.13] 
INFO[0087] [etcd] Successfully started [rke-log-linker] container on host [10.10.0.13] 
INFO[0088] [remove/rke-log-linker] Successfully removed container on host [10.10.0.13] 
INFO[0088] [etcd] Successfully started etcd plane..     
INFO[0088] [controlplane] Building up Controller Plane.. 
INFO[0088] [controlplane] Successfully started [kube-apiserver] container on host [10.10.0.13] 
INFO[0088] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.10.0.13] 
INFO[0088] [controlplane] Successfully started [kube-apiserver] container on host [10.10.0.11] 
INFO[0088] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.10.0.11] 
INFO[0088] [controlplane] Successfully started [kube-apiserver] container on host [10.10.0.12] 
INFO[0088] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.10.0.12] 
INFO[0100] [healthcheck] service [kube-apiserver] on host [10.10.0.13] is healthy 
INFO[0101] [healthcheck] service [kube-apiserver] on host [10.10.0.11] is healthy 
INFO[0101] [healthcheck] service [kube-apiserver] on host [10.10.0.12] is healthy 
INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.13] 
INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.11] 
INFO[0101] [remove/rke-log-linker] Successfully removed container on host [10.10.0.13] 
INFO[0101] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.12] 
INFO[0101] [remove/rke-log-linker] Successfully removed container on host [10.10.0.11] 
INFO[0101] [controlplane] Successfully started [kube-controller-manager] container on host [10.10.0.13] 
INFO[0101] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.10.0.13] 
INFO[0101] [remove/rke-log-linker] Successfully removed container on host [10.10.0.12] 
INFO[0102] [controlplane] Successfully started [kube-controller-manager] container on host [10.10.0.12] 
INFO[0102] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.10.0.12] 
INFO[0102] [controlplane] Successfully started [kube-controller-manager] container on host [10.10.0.11] 
INFO[0102] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.10.0.11] 
INFO[0108] [healthcheck] service [kube-controller-manager] on host [10.10.0.13] is healthy 
INFO[0108] [healthcheck] service [kube-controller-manager] on host [10.10.0.12] is healthy 
INFO[0108] [healthcheck] service [kube-controller-manager] on host [10.10.0.11] is healthy 
INFO[0108] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.13] 
INFO[0108] [remove/rke-log-linker] Successfully removed container on host [10.10.0.13] 
INFO[0108] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.12] 
INFO[0108] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.11] 
INFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [10.10.0.13] 
INFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.10.0.13] 
INFO[0109] [remove/rke-log-linker] Successfully removed container on host [10.10.0.12] 
INFO[0109] [remove/rke-log-linker] Successfully removed container on host [10.10.0.11] 
INFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [10.10.0.12] 
INFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.10.0.12] 
INFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [10.10.0.11] 
INFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.10.0.11] 
INFO[0110] [healthcheck] service [kube-scheduler] on host [10.10.0.13] is healthy 
INFO[0110] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.13] 
INFO[0111] [remove/rke-log-linker] Successfully removed container on host [10.10.0.13] 
INFO[0115] [healthcheck] service [kube-scheduler] on host [10.10.0.12] is healthy 
INFO[0115] [healthcheck] service [kube-scheduler] on host [10.10.0.11] is healthy 
INFO[0116] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.12] 
INFO[0116] [controlplane] Successfully started [rke-log-linker] container on host [10.10.0.11] 
INFO[0116] [remove/rke-log-linker] Successfully removed container on host [10.10.0.12] 
INFO[0116] [remove/rke-log-linker] Successfully removed container on host [10.10.0.11] 
INFO[0116] [controlplane] Successfully started Controller Plane.. 
INFO[0116] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0116] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0116] [authz] Creating system:node ClusterRoleBinding 
INFO[0116] [authz] system:node ClusterRoleBinding created successfully 
INFO[0116] [certificates] Save kubernetes certificates as secrets 
INFO[0116] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs] 
INFO[0116] [state] Saving cluster state to Kubernetes   
INFO[0117] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0117] [state] Saving cluster state to cluster nodes 
INFO[0117] [state] Successfully started [cluster-state-deployer] container on host [10.10.0.11] 
INFO[0117] [remove/cluster-state-deployer] Successfully removed container on host [10.10.0.11] 
INFO[0118] [state] Successfully started [cluster-state-deployer] container on host [10.10.0.12] 
INFO[0118] [remove/cluster-state-deployer] Successfully removed container on host [10.10.0.12] 
INFO[0119] [state] Successfully started [cluster-state-deployer] container on host [10.10.0.13] 
INFO[0119] [remove/cluster-state-deployer] Successfully removed container on host [10.10.0.13] 
INFO[0119] [worker] Building up Worker Plane..          
INFO[0119] [remove/service-sidekick] Successfully removed container on host [10.10.0.11] 
INFO[0119] [remove/service-sidekick] Successfully removed container on host [10.10.0.12] 
INFO[0119] [remove/service-sidekick] Successfully removed container on host [10.10.0.13] 
INFO[0119] [worker] Successfully started [kubelet] container on host [10.10.0.12] 
INFO[0119] [healthcheck] Start Healthcheck on service [kubelet] on host [10.10.0.12] 
INFO[0119] [worker] Successfully started [kubelet] container on host [10.10.0.13] 
INFO[0119] [healthcheck] Start Healthcheck on service [kubelet] on host [10.10.0.13] 
INFO[0119] [worker] Successfully started [kubelet] container on host [10.10.0.11] 
INFO[0119] [healthcheck] Start Healthcheck on service [kubelet] on host [10.10.0.11] 
INFO[0125] [healthcheck] service [kubelet] on host [10.10.0.12] is healthy 
INFO[0125] [healthcheck] service [kubelet] on host [10.10.0.13] is healthy 
INFO[0125] [healthcheck] service [kubelet] on host [10.10.0.11] is healthy 
INFO[0126] [worker] Successfully started [rke-log-linker] container on host [10.10.0.11] 
INFO[0126] [worker] Successfully started [rke-log-linker] container on host [10.10.0.12] 
INFO[0126] [worker] Successfully started [rke-log-linker] container on host [10.10.0.13] 
INFO[0126] [remove/rke-log-linker] Successfully removed container on host [10.10.0.11] 
INFO[0126] [remove/rke-log-linker] Successfully removed container on host [10.10.0.12] 
INFO[0126] [remove/rke-log-linker] Successfully removed container on host [10.10.0.13] 
INFO[0126] [worker] Successfully started [kube-proxy] container on host [10.10.0.11] 
INFO[0126] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.10.0.11] 
INFO[0126] [worker] Successfully started [kube-proxy] container on host [10.10.0.12] 
INFO[0126] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.10.0.12] 
INFO[0126] [worker] Successfully started [kube-proxy] container on host [10.10.0.13] 
INFO[0126] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.10.0.13] 
INFO[0127] [healthcheck] service [kube-proxy] on host [10.10.0.11] is healthy 
INFO[0127] [healthcheck] service [kube-proxy] on host [10.10.0.12] is healthy 
INFO[0127] [healthcheck] service [kube-proxy] on host [10.10.0.13] is healthy 
INFO[0128] [worker] Successfully started [rke-log-linker] container on host [10.10.0.11] 
INFO[0128] [worker] Successfully started [rke-log-linker] container on host [10.10.0.13] 
INFO[0128] [worker] Successfully started [rke-log-linker] container on host [10.10.0.12] 
INFO[0128] [remove/rke-log-linker] Successfully removed container on host [10.10.0.11] 
INFO[0128] [remove/rke-log-linker] Successfully removed container on host [10.10.0.13] 
INFO[0128] [remove/rke-log-linker] Successfully removed container on host [10.10.0.12] 
INFO[0128] [worker] Successfully started Worker Plane.. 
INFO[0128] [sync] Syncing nodes Labels and Taints       
INFO[0129] [sync] Successfully synced nodes Labels and Taints 
INFO[0129] [network] Setting up network plugin: canal   
INFO[0129] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0129] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin 
INFO[0129] [addons] Executing deploy job..              
INFO[0135] [addons] Setting up KubeDNS                  
INFO[0135] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0135] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon 
INFO[0135] [addons] Executing deploy job..              
INFO[0140] [addons] KubeDNS deployed successfully..     
INFO[0140] [addons] Setting up Metrics Server           
INFO[0140] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0140] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon 
INFO[0140] [addons] Executing deploy job..              
INFO[0145] [addons] KubeDNS deployed successfully..     
INFO[0145] [ingress] Setting up nginx ingress controller 
INFO[0145] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0145] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller 
INFO[0145] [addons] Executing deploy job..              
INFO[0150] [ingress] ingress controller nginx is successfully deployed 
INFO[0150] [addons] Setting up user addons              
INFO[0150] [addons] no user addons defined              
INFO[0150] Finished building Kubernetes cluster successfully 
ubuntu@rancher-building:~$ 

Here is a brief walkthrough of the log.

  • INFO[0000] - [0001]: Start of the build; establishing the ssh connections
  • INFO[0002] - [0017]: Port checks using the port listener containers
    • Pulling the rancher/rke-tools:v0.1.16 image
    • Starting rke-etcd-port-listener, rke-cp-port-listener, and rke-worker-port-listener
    • Port checks between etcd nodes, control plane -> etcd, and control plane -> worker
    • Checking the KubeAPI port
    • Removing the containers once they are no longer needed
  • INFO[0018] - [0035]: Generating and deploying certificates
    • CA certificate
    • Kubernetes API server certificate
    • Kube Controller certificate
    • Kube Scheduler certificate
    • Kube Proxy certificate
    • Node certificate
    • Admin certificate and kubeconfig
    • etcd certificates and keys
    • Certificates for the aggregation layer
    • Saving the generated certificates
    • Deploying the certificates to the cluster nodes
  • INFO[0035] - [0056]: Pulling the Kubernetes image
    • Pulling rancher/hyperkube:v1.13.5-rancher1: hyperkube is a single image that bundles the Kubernetes components.
  • INFO[0056] - [0088]: Building the etcd plane
    • Pulling the rancher/coreos-etcd:v3.2.24 image
    • Starting the containers
    • Taking snapshots
    • Saving the certificate bundle with rke-bundle-cert
  • INFO[0088] - [0116]: Building the control plane
    • Starting kube-apiserver and its health checks
    • Starting kube-controller-manager and its health checks
    • Starting kube-scheduler and its health checks
  • INFO[0116]: Creating the rke-job-deployer ServiceAccount
  • INFO[0116]: Creating the system:node ClusterRoleBinding
  • INFO[0116]: Creating the k8s-certs Secret
  • INFO[0117] - [0119]: Checking and saving the cluster state
  • INFO[0119] - [0128]: Building the worker plane
    • Starting kubelet
    • Starting kube-proxy and its health checks
  • INFO[0128]: Syncing node labels and taints
  • INFO[0129]: Setting up the network plugin (Canal)
  • INFO[0129] - [0150]: Deploying addons
    • KubeDNS
    • Metrics Server
    • Nginx Ingress Controller

Register the kube_config_rancher-cluster.yml generated by rke up as the kubeconfig, and run kubectl commands to verify.

ubuntu@rancher-building:~$ cat .profile
ubuntu@rancher-building:~$ echo "export KUBECONFIG=/home/ubuntu/kube_config_rancher-cluster.yml" >> .profile
ubuntu@rancher-building:~$ source .profile
ubuntu@rancher-building:~$ kubectl get nodes
NAME         STATUS   ROLES                      AGE     VERSION
10.10.0.11   Ready    controlplane,etcd,worker   9m48s   v1.13.5
10.10.0.12   Ready    controlplane,etcd,worker   9m47s   v1.13.5
10.10.0.13   Ready    controlplane,etcd,worker   9m47s   v1.13.5

ubuntu@rancher-building:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-7f8fbb85db-bmfb6     1/1     Running     0          9m47s
ingress-nginx   nginx-ingress-controller-dbs86            1/1     Running     0          9m42s
ingress-nginx   nginx-ingress-controller-jxkhf            1/1     Running     0          9m47s
ingress-nginx   nginx-ingress-controller-mv7qn            1/1     Running     0          9m47s
kube-system     canal-66vds                               2/2     Running     0          10m
kube-system     canal-rlpdz                               2/2     Running     0          10m
kube-system     canal-rzhh4                               2/2     Running     0          10m
kube-system     kube-dns-5fd74c7488-8zcvk                 3/3     Running     0          9m58s
kube-system     kube-dns-autoscaler-c89df977f-ztbbm       1/1     Running     0          9m57s
kube-system     metrics-server-7fbd549b78-7hr2r           1/1     Running     0          9m52s
kube-system     rke-ingress-controller-deploy-job-pvpcv   0/1     Completed   0          9m49s
kube-system     rke-kubedns-addon-deploy-job-ljdrc        0/1     Completed   0          9m59s
kube-system     rke-metrics-addon-deploy-job-48x5j        0/1     Completed   0          9m54s
kube-system     rke-network-plugin-deploy-job-fqjpg       0/1     Completed   0          10m

That completes the cluster build with rke.

5. Installing Rancher with Helm

From here, we install Rancher using Helm.

Official documentation links:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/
https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/

5-1. What is Helm?

Helm is a package manager for Kubernetes; packages are managed in units called Charts. As Kubernetes adoption grew, it became necessary to manage large numbers of yaml files, and Helm was developed to handle this well. For more details, see the linked article.

Rancher also uses Helm to provide its Catalog feature. With Catalogs, applications can be deployed to clusters easily and repeatably. See the linked article for more details.

5-2. Installing tiller

Helm uses a server-side component called tiller to manage Charts. To allow tiller to deploy Charts to the cluster, create a ServiceAccount and a ClusterRoleBinding for it, then install tiller with the helm init command.

ubuntu@rancher-building:~$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created

ubuntu@rancher-building:~$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created

ubuntu@rancher-building:~$ helm init --service-account tiller
Creating /home/ubuntu/.helm 
Creating /home/ubuntu/.helm/repository 
Creating /home/ubuntu/.helm/repository/cache 
Creating /home/ubuntu/.helm/repository/local 
Creating /home/ubuntu/.helm/plugins 
Creating /home/ubuntu/.helm/starters 
Creating /home/ubuntu/.helm/cache/archive 
Creating /home/ubuntu/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/ubuntu/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

To verify, check the resources with kubectl and check the helm version.

ubuntu@rancher-building:~$ kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         21m

ubuntu@rancher-building:~$ kubectl get serviceaccount -n kube-system
NAME                                 SECRETS   AGE

~ output omitted ~

tiller                               1         37s

ubuntu@rancher-building:~$ kubectl get clusterrolebinding
NAME                                                   AGE

~ output omitted ~

tiller                                                 45s

ubuntu@rancher-building:~$ kubectl -n kube-system  rollout status deploy/tiller-deploy
deployment "tiller-deploy" successfully rolled out

ubuntu@rancher-building:~$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
ubuntu@rancher-building:~$ 

5-3. cert-manager

The Rancher server is designed to be secure by default and requires SSL/TLS. The official documentation describes three options; this time I chose Rancher Generated Certificates.

First, install cert-manager. cert-manager is a Kubernetes addon that automatically manages and issues various kinds of TLS certificates.

Start by adding the Helm repository, then deploy cert-manager.

ubuntu@rancher-building:~$ helm repo list
NAME    URL                                             
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts              
      
ubuntu@rancher-building:~$ helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
"rancher-latest" has been added to your repositories

ubuntu@rancher-building:~$ helm install stable/cert-manager --name cert-manager --namespace kube-system --version v0.5.2
NAME:   cert-manager
LAST DEPLOYED: Mon May  6 04:38:33 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                           READY  STATUS             RESTARTS  AGE
cert-manager-6464494858-pzlps  0/1    ContainerCreating  0         0s

==> v1/ServiceAccount
NAME          SECRETS  AGE
cert-manager  1        0s

==> v1beta1/ClusterRole
NAME          AGE
cert-manager  0s

==> v1beta1/ClusterRoleBinding
NAME          AGE
cert-manager  0s

==> v1beta1/Deployment
NAME          READY  UP-TO-DATE  AVAILABLE  AGE
cert-manager  0/1    0           0          0s


NOTES:
cert-manager has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.readthedocs.io/en/latest/reference/issuers.html

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html

To verify, run kubectl and helm commands.

ubuntu@rancher-building:~$ kubectl -n kube-system rollout status deploy/cert-manager
deployment "cert-manager" successfully rolled out
ubuntu@rancher-building:~$ helm ls
NAME          REVISION  UPDATED                   STATUS    CHART               APP VERSION NAMESPACE  
cert-manager  1         Mon May  6 04:38:33 2019  DEPLOYED  cert-manager-v0.5.2 v0.5.2      kube-system
ubuntu@rancher-building:~$ 

5-4. Rancher

Now we finally deploy Rancher. In --set hostname=, specify the DNS name of the load balancer created earlier.

ubuntu@rancher-building:~$ helm install rancher-latest/rancher --name rancher --namespace cattle-system --set hostname=<NLB DNS name>
NAME:   rancher
LAST DEPLOYED: Mon May  6 04:41:33 2019
NAMESPACE: cattle-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRoleBinding
NAME     AGE
rancher  0s

==> v1/Deployment
NAME     READY  UP-TO-DATE  AVAILABLE  AGE
rancher  0/3    0           0          0s

==> v1/Pod(related)
NAME                      READY  STATUS             RESTARTS  AGE
rancher-6679788569-98nlw  0/1    Pending            0         0s
rancher-6679788569-rqcjz  0/1    ContainerCreating  0         0s
rancher-6679788569-w686m  0/1    ContainerCreating  0         0s

==> v1/Service
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)  AGE
rancher  ClusterIP  10.43.113.44  <none>       80/TCP   0s

==> v1/ServiceAccount
NAME     SECRETS  AGE
rancher  1        0s

==> v1alpha1/Issuer
NAME     AGE
rancher  0s

==> v1beta1/Ingress
NAME     HOSTS                                                 ADDRESS  PORTS  AGE
rancher  <NLB DNS name>                                                80, 443  0s


NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up.

Check out our docs at https://rancher.com/docs/rancher/v2.x/en/

Browse to https://<NLB DNS name>

Happy Containering!

To verify, run kubectl commands.

ubuntu@rancher-building:~$ kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment spec update to be observed...
Waiting for deployment "rancher" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 2 of 3 updated replicas are available...
deployment "rancher" successfully rolled out

ubuntu@rancher-building:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
cattle-system   rancher-6679788569-98nlw                  1/1     Running     2          115s
cattle-system   rancher-6679788569-rqcjz                  1/1     Running     0          115s
cattle-system   rancher-6679788569-w686m                  1/1     Running     1          115s
ingress-nginx   default-http-backend-7f8fbb85db-bmfb6     1/1     Running     0          33m
ingress-nginx   nginx-ingress-controller-dbs86            1/1     Running     0          33m
ingress-nginx   nginx-ingress-controller-jxkhf            1/1     Running     0          33m
ingress-nginx   nginx-ingress-controller-mv7qn            1/1     Running     0          33m
kube-system     canal-66vds                               2/2     Running     0          33m
kube-system     canal-rlpdz                               2/2     Running     0          33m
kube-system     canal-rzhh4                               2/2     Running     0          33m
kube-system     cert-manager-6464494858-pzlps             1/1     Running     0          4m55s
kube-system     kube-dns-5fd74c7488-8zcvk                 3/3     Running     0          33m
kube-system     kube-dns-autoscaler-c89df977f-ztbbm       1/1     Running     0          33m
kube-system     metrics-server-7fbd549b78-7hr2r           1/1     Running     0          33m
kube-system     rke-ingress-controller-deploy-job-pvpcv   0/1     Completed   0          33m
kube-system     rke-kubedns-addon-deploy-job-ljdrc        0/1     Completed   0          33m
kube-system     rke-metrics-addon-deploy-job-48x5j        0/1     Completed   0          33m
kube-system     rke-network-plugin-deploy-job-fqjpg       0/1     Completed   0          33m
kube-system     tiller-deploy-5f4fc5bcc6-gvqs5            1/1     Running     0          12m
ubuntu@rancher-building:~$ 

That completes the Rancher deployment.
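
As an optional extra check that is not part of the original procedure, you can also look at the Ingress created by the chart and, assuming cert-manager's ingress-shim has picked it up, the Certificate being issued.

kubectl -n cattle-system get ingress
kubectl -n cattle-system get issuer,certificate   # requires the cert-manager CRDs installed earlier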

6. Accessing the Rancher UI

Once the Rancher deployment is complete, access the Rancher server from the web GUI by entering the LB's DNS name in your browser. A screen like the one below appears (a certificate warning, since the Rancher-generated certificate is self-signed); choose the option to proceed to the LB's DNS name.

rancher-ha-17.jpg

The Rancher login screen appears; enter a new password and click "Continue".

rancher-ha-18 (1).jpg

After logging in, the provisioning status of the Rancher cluster is displayed. At first, the message "This cluster is currently Provisioning" is shown.

rancher-ha-19.jpg

After waiting a few minutes, the message disappears and the build is complete.

rancher-ha-20.jpg

7. Conclusion

This post covered building an HA Rancher setup. Rancher's official documentation is thorough, so the build goes fairly smoothly. In particular, for AWS it even covers the load balancer steps, so even though I had never set up an AWS load balancer before, it was easy. The official documentation also describes how to set this up with NGINX instead.
