Docker Swarm Cluster without docker-machine

Posted at 2016-04-23

I wanted an easy way to build a Swarm cluster without docker-machine, and found an excellent video tutorial, so I gave it a try. Below are my notes covering everything up to the High Availability Configuration part of Video 2.

Environment

  • Ubuntu 14.04 (on VMware Workstation), using 4 VMs (manager x2, node x2)
  • Docker Engine 1.11.0, installed following this guide
  • Swarm 1.2.0
  • IP addresses used:
    • swarm manager1: 192.168.209.138
    • swarm manager2: 192.168.209.168
    • swarm node1: 192.168.209.166
    • swarm node2: 192.168.209.167

Preparation

  • Configure the Remote API
/etc/default/docker
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
  • If you clone and reuse a VM that already has Docker Engine installed, you will later hit a duplicate-ID error, so delete /etc/docker/key.json and run service docker restart beforehand.
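With the Remote API enabled on every host, it is worth confirming each daemon is reachable before wiring up the cluster. A minimal check, using the IPs from the environment above (`/_ping` is a standard Docker Remote API endpoint that returns the plain string `OK`):

```shell
# Check that each Docker daemon answers on the Remote API port (2375).
for host in 192.168.209.138 192.168.209.168 192.168.209.166 192.168.209.167; do
  # /_ping returns "OK" when the daemon is healthy.
  curl -s "http://${host}:2375/_ping" && echo " <- ${host} reachable"
done
```

This is just a sketch; any host that does not answer here will also fail to join or be managed later.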

Setup

Consul is used as the distributed KVS for service discovery.

manager1(192.168.209.138)
(consul setup)
$ docker run --restart=unless-stopped -d -p 8500:8500 -h consul1 progrium/consul -server -bootstrap
(swarm setup)
$ docker run --restart=unless-stopped -d -p 3375:2375 swarm manage --replication --advertise 192.168.209.138:3375 consul://192.168.209.138:8500
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
a2cb67a7618d        swarm               "/swarm manage --repl"   5 hours ago         Up 5 hours          0.0.0.0:3375->2375/tcp                                                           tender_bassi
9558a4ef7b35        progrium/consul     "/bin/start -server -"   25 hours ago        Up 25 hours         53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   high_visvesvaraya
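As a quick sanity check, Consul's HTTP API can confirm the server has bootstrapped; a sketch against the container started above (`/v1/status/leader` is a standard Consul endpoint):

```shell
# Ask Consul for its current leader; a non-empty "host:port" reply
# means the server is up and has elected itself leader.
curl -s http://192.168.209.138:8500/v1/status/leader
```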

Join node1 and node2 to the cluster.

manager1(192.168.209.138)
$ docker -H=tcp://192.168.209.166:2375 run -d swarm join --advertise=192.168.209.166:2375 consul://192.168.209.138:8500
$ docker -H=tcp://192.168.209.167:2375 run -d swarm join --advertise=192.168.209.167:2375 consul://192.168.209.138:8500
$ DOCKER_HOST=192.168.209.138:3375 docker info
Containers: 24
 Running: 2
 Paused: 0
 Stopped: 22
Images: 14
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 ubu-swarm-node1: 192.168.209.166:2375
  └ Status: Healthy
  └ Containers: 13
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 1 GiB / 4.047 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-85-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-23T10:44:58Z
  └ ServerVersion: 1.11.0
 ubu-swarm-node2: 192.168.209.167:2375
  └ Status: Healthy
  └ Containers: 11
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 1 GiB / 4.047 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-85-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-23T10:44:39Z
  └ ServerVersion: 1.11.0
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-85-generic
Operating System: linux
Architecture: amd64
CPUs: 4
Total Memory: 8.094 GiB
Name: a2cb67a7618d
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support
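Besides `docker info`, the registrations can be checked directly against the discovery backend with `swarm list`; a sketch, assuming the same Consul endpoint:

```shell
# List the engines currently registered in Consul-based discovery.
# This should print the advertised addresses of node1 and node2.
docker run --rm swarm list consul://192.168.209.138:8500
```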

Add manager2 for HA.

manager2(192.168.209.168)
$ docker run --restart=unless-stopped -d -p 3375:2375 swarm manage --replication --advertise 192.168.209.168:3375 consul://192.168.209.138:8500
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
3fa053141c84        swarm               "/swarm manage --repl"   4 seconds ago       Up 4 seconds        0.0.0.0:3375->2375/tcp   evil_darwin
$ docker logs 3fa0
time="2016-04-23T10:54:26Z" level=info msg="Initializing discovery without TLS"
time="2016-04-23T10:54:26Z" level=info msg="Listening for HTTP" addr=":2375" proto=tcp
time="2016-04-23T10:54:26Z" level=info msg="Leader Election: Cluster leadership lost"
time="2016-04-23T10:54:26Z" level=info msg="New leader elected: 192.168.209.138:3375"
time="2016-04-23T10:54:26Z" level=info msg="Registered Engine ubu-swarm-node2 at 192.168.209.167:2375"
time="2016-04-23T10:54:26Z" level=info msg="Registered Engine ubu-swarm-node1 at 192.168.209.166:2375"
$ DOCKER_HOST=192.168.209.168:3375 docker info
Containers: 24
 Running: 2
 Paused: 0
 Stopped: 22
Images: 14
Server Version: swarm/1.2.0
Role: replica
Primary: 192.168.209.138:3375
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 ubu-swarm-node1: 192.168.209.166:2375
  └ Status: Healthy
  └ Containers: 13
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 1 GiB / 4.047 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-85-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-23T10:56:25Z
  └ ServerVersion: 1.11.0
 ubu-swarm-node2: 192.168.209.167:2375
  └ Status: Healthy
  └ Containers: 11
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 1 GiB / 4.047 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-85-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-23T10:56:05Z
  └ ServerVersion: 1.11.0
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-85-generic
Operating System: linux
Architecture: amd64
CPUs: 4
Total Memory: 8.094 GiB
Name: 3fa053141c84
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support

Now let's try launching some containers.

manager1(192.168.209.138)
$ export DOCKER_HOST=192.168.209.138:3375
$ docker run -dit ubuntu /bin/bash
9291d5a7c57edb577ec828f4b24a990a1472734e3d45a75936dc806c7691a53e
$ docker run -dit ubuntu /bin/bash
f6ea8aa3b746f25fc76c28a56c19f003a8fae35d034bee017aecfae4aecaa424
$ docker run -dit ubuntu /bin/bash
099c3ee9cbfb9f54b9db7c237e6e655cadbda38ecb8e452e7fa5718718c75b58
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED                  STATUS                  PORTS               NAMES
f6ea8aa3b746        ubuntu              "/bin/bash"         Less than a second ago   Up Less than a second                       ubu-swarm-node2/backstabbing_knuth
099c3ee9cbfb        ubuntu              "/bin/bash"         2 seconds ago            Up 2 seconds                                ubu-swarm-node1/elated_poincare
9291d5a7c57e        ubuntu              "/bin/bash"         7 minutes ago            Up 7 minutes                                ubu-swarm-node2/romantic_pare
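The `spread` strategy reported by `docker info` is why the containers landed on alternating nodes. Classic Swarm can also pin a container to a specific node with a constraint filter; a sketch using the node names from this cluster:

```shell
# Pin a container to ubu-swarm-node1 via Swarm's constraint filter.
export DOCKER_HOST=192.168.209.138:3375
docker run -dit -e constraint:node==ubu-swarm-node1 ubuntu /bin/bash
```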

To verify HA, stop manager1, which is currently the primary.

manager1(192.168.209.138)
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
a2cb67a7618d        swarm               "/swarm manage --repl"   22 hours ago        Up 22 hours         0.0.0.0:3375->2375/tcp                                                           tender_bassi
9558a4ef7b35        progrium/consul     "/bin/start -server -"   42 hours ago        Up 42 hours         53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   high_visvesvaraya
$ docker stop a2cb
a2cb
$ docker rm a2cb
a2cb

Confirm that manager2 has become the primary.

manager2(192.168.209.168)
$ DOCKER_HOST=192.168.209.168:3375 docker info
Containers: 28
 Running: 5
 Paused: 0
 Stopped: 23
Images: 14
Server Version: swarm/1.2.0
Role: primary
(omitted)
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
3fa053141c84        swarm               "/swarm manage --repl"   4 seconds ago       Up 4 seconds        0.0.0.0:3375->2375/tcp   evil_darwin
$ docker logs 3fa0
time="2016-04-23T10:54:26Z" level=info msg="Initializing discovery without TLS"
time="2016-04-23T10:54:26Z" level=info msg="Listening for HTTP" addr=":2375" proto=tcp
time="2016-04-23T10:54:26Z" level=info msg="Leader Election: Cluster leadership lost"
time="2016-04-23T10:54:26Z" level=info msg="New leader elected: 192.168.209.138:3375"
time="2016-04-23T10:54:26Z" level=info msg="Registered Engine ubu-swarm-node2 at 192.168.209.167:2375"
time="2016-04-23T10:54:26Z" level=info msg="Registered Engine ubu-swarm-node1 at 192.168.209.166:2375"
time="2016-04-23T11:12:36Z" level=info msg="Leader Election: Cluster leadership acquired"

That's all.
