
【Docker】Using docker-swarm with etcd + docker-machine

Posted at 2016-11-24

Build a Swarm cluster with docker-machine, using etcd as the backend that holds the cluster's management data.

Environment

  • docker(version 1.12.3)
  • docker-machine(version 0.7.0)
  • etcd(version 2.3.7)

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # If true, then any SSH connections made will enable agent forwarding.
  # Default value: false
  config.ssh.forward_agent = true

  config.vm.define "manager" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.10", virtualbox__intnet: "intnet"
  end

  config.vm.define "node1" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.20", virtualbox__intnet: "intnet"
  end

  config.vm.define "node2" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.30", virtualbox__intnet: "intnet"
  end

  config.vm.define "kvstore" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.40", virtualbox__intnet: "intnet"
  end

  config.vm.define "host" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.50", virtualbox__intnet: "intnet"
  end
end
VM        IP             Role
manager   192.168.33.10  Swarm cluster master
node1     192.168.33.20  Swarm cluster node1
node2     192.168.33.30  Swarm cluster node2
kvstore   192.168.33.40  etcd backend for the Swarm cluster
host      192.168.33.50  Workstation for all commands

Set things up so that host can SSH into manager, node1, node2, and kvstore.
The overall flow is to stand up etcd first and then build the Swarm cluster on top of it.
All of the work below is done on the host VM.
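One way to sketch the SSH setup (assuming a private key at ~/.ssh/id_rsa on the host VM whose public key is in each VM's ~vagrant/.ssh/authorized_keys; the key path and IPs are this article's, not anything docker-machine requires):

```shell
# On the host VM: verify passwordless SSH to every target before running
# docker-machine, since the generic driver provisions over SSH.
for ip in 192.168.33.10 192.168.33.20 192.168.33.30 192.168.33.40; do
  ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa vagrant@"$ip" hostname
done
```

If any of these prompts for a password, docker-machine's generic driver will fail at the provisioning step.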

etcd

Run an etcd container on kvstore.

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.40 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   kvstore
$ eval $(docker-machine env kvstore)
$ docker pull quay.io/coreos/etcd:v2.3.7
$ docker run -d --name etcd \
   -p 2379:2379 -p 4001:4001 \
   quay.io/coreos/etcd:v2.3.7 \
   --data-dir=/tmp/default.etcd \
   --advertise-client-urls 'http://192.168.33.40:2379,http://192.168.33.40:4001' \
   --listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
$ ETCDCTL_ENDPOINT="http://192.168.33.40:2379" ./etcdctl --no-sync get /
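Instead of etcdctl, the etcd v2 HTTP API can be queried directly; a couple of hedged sanity checks (assuming the endpoint is reachable from the host VM):

```shell
# Version and health of the etcd instance (etcd 2.x endpoints)
curl -s http://192.168.33.40:2379/version
curl -s http://192.168.33.40:2379/health     # expect {"health": "true"}

# Recursive dump of all keys via the v2 keys API
curl -s 'http://192.168.33.40:2379/v2/keys/?recursive=true'
```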

Gotcha

--advertise-client-urls must be set to http://192.168.33.40:2379,http://192.168.33.40:4001 (addresses that clients can actually reach), not 'http://0.0.0.0:2379,http://0.0.0.0:4001'.

Building the Swarm cluster

Create the swarm master

manager

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.10 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm --swarm-master \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   --engine-opt="cluster-store=etcd://$(docker-machine ip kvstore):2379" \
   --engine-opt="cluster-advertise=eth1:2376" \
   manager
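A quick optional check (assuming the create succeeded): with the legacy swarm setup, the master VM runs both the swarm-agent-master and swarm-agent containers, which should show up like this:

```shell
# List container names on the manager VM itself (not through the swarm)
docker-machine ssh manager docker ps --format '{{.Names}}'
```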

Create the swarm nodes

node1

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.20 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   node1

node2

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.30 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   node2

Verification

etcd

Check the state through the etcd API: manager, node1, and node2 are all registered.
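The JSON below can be retrieved with a query along these lines (etcd v2 keys API; jq is assumed to be installed for pretty-printing):

```shell
curl -s 'http://192.168.33.40:2379/v2/keys/?recursive=true' | jq .
```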

{
  "node": {
    "nodes": [
      {
        "createdIndex": 4,
        "modifiedIndex": 4,
        "nodes": [
          {
            "createdIndex": 4,
            "modifiedIndex": 4,
            "nodes": [
              {
                "createdIndex": 4,
                "modifiedIndex": 4,
                "nodes": [
                  {
                    "createdIndex": 4,
                    "modifiedIndex": 4,
                    "nodes": [
                      {
                        "createdIndex": 100,
                        "modifiedIndex": 100,
                        "ttl": 134,
                        "expiration": "2016-11-23T11:45:51.259602538Z",
                        "value": "192.168.33.10:2376",
                        "key": "/swarm/docker/swarm/nodes/192.168.33.10:2376"
                      },
                      {
                        "createdIndex": 102,
                        "modifiedIndex": 102,
                        "ttl": 168,
                        "expiration": "2016-11-23T11:46:25.014658247Z",
                        "value": "192.168.33.20:2376",
                        "key": "/swarm/docker/swarm/nodes/192.168.33.20:2376"
                      },
                      {
                        "createdIndex": 101,
                        "modifiedIndex": 101,
                        "ttl": 137,
                        "expiration": "2016-11-23T11:45:53.997167245Z",
                        "value": "192.168.33.30:2376",
                        "key": "/swarm/docker/swarm/nodes/192.168.33.30:2376"
                      }
                    ],
                    "dir": true,
                    "key": "/swarm/docker/swarm/nodes"
                  }
                ],
                "dir": true,
                "key": "/swarm/docker/swarm"
              }
            ],
            "dir": true,
            "key": "/swarm/docker"
          }
        ],
        "dir": true,
        "key": "/swarm"
      }
    ],
    "dir": true
  },
  "action": "get"
}

cluster

Check the status with docker-machine.
The cluster is Running, with 192.168.33.10 as the master.

$ docker-machine ls
NAME      ACTIVE      DRIVER    STATE     URL                        SWARM              DOCKER    ERRORS
kvstore   -           generic   Running   tcp://192.168.33.40:2376                      v1.12.3
manager   * (swarm)   generic   Running   tcp://192.168.33.10:2376   manager (master)   v1.12.3
node1     -           generic   Running   tcp://192.168.33.20:2376   manager            v1.12.3
node2     -           generic   Running   tcp://192.168.33.30:2376   manager            v1.12.3

Use the swarm manager as the Docker host and run the docker command.
Cluster-wide information is shown.

$ eval $(docker-machine env --swarm manager)
$ docker info
...
Nodes: 3
 manager: 192.168.33.10:2376
  └ ID: 7MOY:Q7KV:KRXU:ORDJ:7LNB:LGSM:GT5H:N4J3:BMC7:WC7C:RMEA:UPA6
  └ Status: Healthy
  └ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: kernelversion=3.13.0-95-generic, operatingsystem=Ubuntu 14.04.5 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-11-23T11:49:04Z
  └ ServerVersion: 1.12.3
 node1: 192.168.33.20:2376
  └ ID: C6GI:OKAT:H2LH:VCBU:OF65:YWST:QVQ7:X6RP:6JY5:2CQK:4PVM:LVCZ
  └ Status: Healthy
  └ Containers: 1 (1 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: kernelversion=3.13.0-95-generic, operatingsystem=Ubuntu 14.04.5 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-11-23T11:49:16Z
  └ ServerVersion: 1.12.3
 node2: 192.168.33.30:2376
  └ ID: IJAI:XSRL:3T5S:5XEF:AUVZ:TLYM:RVAH:MYAW:Q43J:CVCF:J5JH:BGZA
  └ Status: Healthy
  └ Containers: 1 (1 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: kernelversion=3.13.0-95-generic, operatingsystem=Ubuntu 14.04.5 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-11-23T11:49:16Z
  └ ServerVersion: 1.12.3
...

Start some containers

Pull ubuntu and start about ten containers.

$ docker pull ubuntu
$ docker run -d ubuntu tail -f /dev/null
.....
$ docker run -d ubuntu tail -f /dev/null
$ docker ps
CONTAINER ID        IMAGE               COMMAND               CREATED             STATUS              PORTS               NAMES
adb3b7d5e716        ubuntu              "tail -f /dev/null"   4 seconds ago       Up 3 seconds                            node1/cocky_stallman
dcd7f4f99864        ubuntu              "tail -f /dev/null"   22 seconds ago      Up 21 seconds                           node2/furious_payne
734878f86e59        ubuntu              "tail -f /dev/null"   23 seconds ago      Up 22 seconds                           node1/small_ardinghelli
0d6099e7cc05        ubuntu              "tail -f /dev/null"   25 seconds ago      Up 24 seconds                           node2/nauseous_hoover
f06a02774421        ubuntu              "tail -f /dev/null"   26 seconds ago      Up 25 seconds                           node1/big_kare
b24d42d433f4        ubuntu              "tail -f /dev/null"   29 seconds ago      Up 28 seconds                           node2/hungry_engelbart
7daa0be7d630        ubuntu              "tail -f /dev/null"   54 seconds ago      Up 52 seconds                           node1/stupefied_joliot
9ddaf2e88b93        ubuntu              "tail -f /dev/null"   2 seconds ago       Up 1 seconds                            manager/elated_northcutt
66b4da48e09d        ubuntu              "tail -f /dev/null"   24 seconds ago      Up 23 seconds                           manager/elated_stonebraker
bb97b9a39644        ubuntu              "tail -f /dev/null"   27 seconds ago      Up 26 seconds                           manager/determined_lichterman
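The repeated docker run invocations above could equally be written as a loop (same command, just less typing):

```shell
# Start ten detached containers; the swarm scheduler picks a node for each
for i in $(seq 1 10); do
  docker run -d ubuntu tail -f /dev/null
done
```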

Because the default scheduling strategy is spread, the containers are distributed evenly across manager, node1, and node2.
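Not used in this article, but the strategy could be changed at cluster-creation time via docker-machine's --swarm-strategy flag; a sketch of the manager create with binpack (packing containers onto as few nodes as possible) instead of spread:

```shell
docker-machine create \
   --driver generic --generic-ip-address=192.168.33.10 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm --swarm-master \
   --swarm-strategy binpack \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   manager
```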
