Memo: Setting up a Kubernetes environment on localhost (LXD) on Ubuntu 18.04.1 LTS

Posted at 2018-08-26

Note: this is a working memo covering the initial setup only.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

conjure-up

conjure-up will be used to set up Kubernetes, so install it first.

$ sudo snap install conjure-up --classic

LXD

LXD is required to run Kubernetes on localhost, so install it as well.

$ sudo snap install lxd

The LXD socket file is created with root:lxd ownership, so add the current user to the lxd group.

$ sudo usermod -aG lxd $USER
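
If you would rather not reboot or log out, re-evaluating group membership in the current shell should also work (a convenience not in the original memo):

$ newgrp lxd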

After rebooting or logging back in, confirm that the following command runs successfully.

$ /snap/bin/lxc query --wait -X GET /1.0

Finish the LXD initialization as well.

$ /snap/bin/lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null
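
As a side note, the printed preseed can be fed back to lxd init to initialize another machine non-interactively; a sketch assuming the YAML above was saved to lxd-preseed.yaml (the file name is hypothetical):

$ cat lxd-preseed.yaml | /snap/bin/lxd init --preseed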

Kubernetes

Set up Kubernetes with conjure-up. Other than choosing localhost as the deployment target, I left every option at its default.

$ conjure-up kubernetes

Install kubectl.

$ sudo snap install kubectl --classic
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
$ kubectl get namespace
NAME          STATUS    AGE
default       Active    22m
kube-public   Active    22m
kube-system   Active    22m
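
Side note (my assumption, not stated in the original memo): conjure-up writes the cluster credentials to ~/.kube/config at the end of the deployment, which is why kubectl can reach the cluster here without further configuration. The active context can be checked with:

$ kubectl config current-context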

Install the dashboard.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs configured
serviceaccount/kubernetes-dashboard configured
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal configured
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal configured
deployment.apps/kubernetes-dashboard configured
service/kubernetes-dashboard configured
$ kubectl get pod --namespace=kube-system -l k8s-app=kubernetes-dashboard
NAME                                   READY     STATUS    RESTARTS   AGE
kubernetes-dashboard-6948bdb78-6rhs9   1/1       Running   0          23m

The dashboard requires authentication, so create a user with the necessary permissions by following the dashboard's guide.
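
The two manifests are not reproduced in the original memo; the following is a sketch based on the dashboard's "Creating sample user" guide, using the admin-user name that appears in the secret output below.

service-account.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

cluster-role-binding.yml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system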

$ kubectl create -f service-account.yml
$ kubectl create -f cluster-role-binding.yml
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-hwcqw
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=6b6e5ae5-a92d-11e8-a30c-ae983dcb0fa3

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWh3Y3F3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2YjZlNWFlNS1hOTJkLTExZTgtYTMwYy1hZTk4M2RjYjBmYTMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.PtwrWRpVKBplE--aBqVZ23urazX-Rq_mnu3eTzTHh4jwUeTmSA_uUFNvz6Pr7L_ryVvh-tQqDMVHQGSmG2KHi_NWMUshaSe9tK86B9Jj_Ye5TXCpmMmtCYk2fz6tDKOolxaJBdWXDtPzm7BRo8_aORf9Y06HFW9-335n6-8R6AjGmo2MlElUEEtS41nVwMF6iZWvFTexV2L7VOs0JdzGxxK3CVQLIHwl8ib_fSwTdduR468-LDEhc6g53IzrIwrWYDekpVDxeXQlb4nSLqA_FgfZyI4RqaPNvRlwNJgQchS8Cb9PCd2yYsW66EcnsLsJW9_pgLWcondaG_CzaSEu_w
ca.crt:     1179 bytes
namespace:  11 bytes

With the authentication token in hand, start the proxy and try accessing the dashboard.

$ kubectl proxy
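
With the proxy running, the dashboard should be reachable at the standard proxied service URL (assuming the default deployment into kube-system):

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Paste the token obtained above into the sign-in screen.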

Addendum 2018/08/29: Removal

Note: running this on my development laptop caused problems such as high CPU temperatures and became a hassle, so I removed it. On balance, using a cloud environment seems the more reasonable choice.

$ juju list-models
Controller: conjure-up-localhost-56e

Model                          Cloud/Region         Status     Machines  Access  Last connection
conjure-canonical-kubern-129*  localhost/localhost  available        10  admin   2018-08-26
controller                     localhost/localhost  available         1  admin   just now

$ juju destroy-model conjure-canonical-kubern-129
WARNING! This command will destroy the "conjure-canonical-kubern-129" model.
This includes all machines, applications, data and other resources.

Continue [y/N]? y
Destroying model
Waiting on model to be removed, 10 machine(s), 6 application(s)...
Waiting on model to be removed, 10 machine(s), 6 application(s)...
Waiting on model to be removed, 10 machine(s), 6 application(s)...
Waiting on model to be removed, 10 machine(s), 6 application(s)...
Waiting on model to be removed, 9 machine(s), 6 application(s)...
Waiting on model to be removed, 8 machine(s), 4 application(s)...
Waiting on model to be removed, 8 machine(s)...
Waiting on model to be removed, 7 machine(s)...
Waiting on model to be removed, 7 machine(s)...
Waiting on model to be removed, 7 machine(s)...
Waiting on model to be removed, 7 machine(s)...
Waiting on model to be removed, 7 machine(s)...
Waiting on model to be removed, 4 machine(s)...
Waiting on model to be removed, 4 machine(s)...
Waiting on model to be removed, 4 machine(s)...
Waiting on model to be removed, 4 machine(s)...
Waiting on model to be removed, 4 machine(s)...
Waiting on model to be removed, 4 machine(s)...
Waiting on model to be removed, 4 machine(s)...
Waiting on model to be removed, 1 machine(s)...
Waiting on model to be removed...
Waiting on model to be removed...
Model destroyed.
$ juju list-controllers
Use --refresh flag with this command to see the latest information.

Controller                 Model                         User   Access     Cloud/Region         Models  Machines    HA  Version
conjure-up-localhost-56e*  -                             admin  superuser  localhost/localhost       1         1  none  2.4.0

$ juju destroy-controller conjure-up-localhost-56e
WARNING! This command will destroy the "conjure-up-localhost-56e" controller.
This includes all machines, applications, data and other resources.

Continue? (y/N):y
Destroying controller
Waiting for hosted model resources to be reclaimed
All hosted models reclaimed, cleaning up controller machines
$ sudo snap remove lxd
$ sudo snap remove conjure-up