Trying out Docker 1.12's Swarm mode with Vagrant

Let's try out the Swarm mode introduced in Docker 1.12 by quickly setting it up with Vagrant.

The setup basically follows the Swarm tutorial:

  • manager1
    • Swarm manager
    • 192.168.99.100
  • worker1
    • Swarm worker #1
    • 192.168.99.101
  • worker2
    • Swarm worker #2
    • 192.168.99.102

Files used in this article

The files used in this article are available at the link below.

Prerequisites

  • Ubuntu 16.04 LTS x86_64 (any host that can run Vagrant is fine)
  • VirtualBox 5.0.26
  • Vagrant 1.8.5

Bringing up the Vagrant VMs

(https://docs.docker.com/engine/swarm/swarm-tutorial/)

Running vagrant up brings up three Ubuntu x86_64 VMs with Docker preinstalled (this takes a little while).

console
$ vagrant up
Bringing machine 'manager1' up with 'virtualbox' provider...
Bringing machine 'worker1' up with 'virtualbox' provider...
Bringing machine 'worker2' up with 'virtualbox' provider...
==> manager1: Importing base box 'ubuntu/xenial64'...
==> manager1: Matching MAC address for NAT networking...
==> manager1: Checking if box 'ubuntu/xenial64' is up to date...
==> manager1: Setting the name of the VM: manager1
==> manager1: Clearing any previously set network interfaces...
==> manager1: Preparing network interfaces based on configuration...
(output truncated)
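
The Vagrantfile and its provisioning script themselves are not reproduced in this article, but a provisioning script that gives each box a Docker install could look roughly like the sketch below. This is only an illustration using Docker's convenience script; the file name provision.sh is hypothetical, and the actual repository may install Docker differently.

sh:provision.sh
#!/bin/bash
# Hypothetical provisioning sketch (not necessarily the repository's actual script):
# install Docker via the official convenience script and allow the default
# "ubuntu" user to run docker without sudo.
set -eu
apt-get update
apt-get install -y curl
curl -fsSL https://get.docker.com/ | sh
usermod -aG docker ubuntu

After vagrant up finishes, sudo docker --version inside each VM should report a 1.12.x build.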

You can then SSH into each VM with the following commands:

$ vagrant ssh manager1
$ vagrant ssh worker1
$ vagrant ssh worker2

manager1: Creating the Docker Swarm

(https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/)

Log in to the manager node manager1 and initialize the Swarm with the following command:

ssh#manager1
ubuntu@manager1:~$ sudo docker swarm init --advertise-addr 192.168.99.100
==> manager1: Swarm initialized: current node (780n0sxr0f4p4j1ohouerjydw) is now a manager.
==> manager1: To add a worker to this swarm, run the following command:
==> manager1:     docker swarm join \
==> manager1:     --token SWMTKN-1-1qxh5updlhvwb6bam49j92vk1848rkfwclke8c7wsgyo5vet2n-9v0rvzq5q9zaeapexeaya5nvt \
==> manager1:     192.168.99.100:2377
==> manager1: To add a manager to this swarm, run the following command:
==> manager1:     docker swarm join \
==> manager1:     --token SWMTKN-1-1qxh5updlhvwb6bam49j92vk1848rkfwclke8c7wsgyo5vet2n-cuey49co68uty16dgctkfii72 \
==> manager1:     192.168.99.100:2377

Note that the --token SWMTKN-... value differs every time the Swarm is initialized, so use the token from your own output.
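
If you lose this output, there is no need to re-initialize the Swarm: the docker swarm join-token subcommand prints the join command again on the manager.

ssh#manager1
ubuntu@manager1:~$ sudo docker swarm join-token worker    # prints the join command for workers
ubuntu@manager1:~$ sudo docker swarm join-token manager   # prints the join command for additional managers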

worker1/worker2: Joining the Swarm

(https://docs.docker.com/engine/swarm/swarm-tutorial/add-nodes/)

Run docker swarm join on worker1 and worker2 to have each of them join the Swarm.

ssh#worker1
ubuntu@worker1:~$ sudo docker swarm join --token SWMTKN-1-1qxh5updlhvwb6bam49j92vk1848rkfwclke8c7wsgyo5vet2n-9v0rvzq5q9zaeapexeaya5nvt 192.168.99.100:2377
This node joined a swarm as a worker.

ubuntu@worker1:~$ sudo docker info
Swarm: active
 NodeID: 8kn8m3a1cpr0rc80eyooyeeh6
 Is Manager: false
 Node Address: 192.168.99.101
ssh#worker2
ubuntu@worker2:~$ sudo docker swarm join --token SWMTKN-1-1qxh5updlhvwb6bam49j92vk1848rkfwclke8c7wsgyo5vet2n-9v0rvzq5q9zaeapexeaya5nvt 192.168.99.100:2377
This node joined a swarm as a worker.

ubuntu@worker2:~$ sudo docker info
Swarm: active
 NodeID: bsa3z2j6g6g5koj5jrbb12nik
 Is Manager: false
 Node Address: 192.168.99.102

At this point, the output on the manager1 side looks like this:

ssh#manager1
ubuntu@manager1:~$ sudo docker info
Swarm: active
 NodeID: 780n0sxr0f4p4j1ohouerjydw
 Is Manager: true
 ClusterID: corbsrqhuph7wzbuc0jo6fqeq
 Managers: 1
 Nodes: 3 # i.e. manager (1) + workers (2)
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot interval: 10000
  Heartbeat tick: 1
  Election tick: 3
 Dispatcher:
  Heartbeat period: 5 seconds
 CA configuration:
  Expiry duration: 3 months
 Node Address: 192.168.99.100
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-31-generic

ubuntu@manager1:~$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
780n0sxr0f4p4j1ohouerjydw *  manager1  Ready   Active        Leader
8kn8m3a1cpr0rc80eyooyeeh6    worker1   Ready   Active        
bsa3z2j6g6g5koj5jrbb12nik    worker2   Ready   Active     

Defining a service and running it on the workers

(https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/)

Define a service on the Swarm that runs ping docker.com using the alpine image, and start it.

ssh#manager1
ubuntu@manager1:~$ sudo docker service create --replicas 1 --name helloworld alpine ping docker.com
6f6p4ev1vd4qdqt6t62rg7m84

ubuntu@manager1:~$ sudo docker service ls
ID            NAME        REPLICAS  IMAGE   COMMAND
6f6p4ev1vd4q  helloworld  1/1       alpine  ping docker.com

ubuntu@manager1:~$ sudo docker service ps helloworld
ID                         NAME          IMAGE   NODE     DESIRED STATE  CURRENT STATE               ERROR
a7oylt8jcnovjuiaa3y6ndfry  helloworld.1  alpine  worker2  Running        Running about a minute ago  
ssh#worker2
ubuntu@worker2:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
caed52a2006d        alpine:latest       "ping docker.com"   About a minute ago   Up About a minute                       helloworld.1.a7oylt8jcnovjuiaa3y6ndfry
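
Back on the manager, docker service inspect shows the service definition; with the --pretty flag the output is human-readable (this step is also part of the official tutorial).

ssh#manager1
ubuntu@manager1:~$ sudo docker service inspect --pretty helloworld

Without --pretty, the same command prints the raw JSON for the service.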

Scaling out the service

(https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/)

Scale the helloworld service out to five running instances.

ssh#manager1
ubuntu@manager1:~$ sudo docker service scale helloworld=5
helloworld scaled to 5

ubuntu@manager1:~$ sudo docker service ps helloworld
ID                         NAME          IMAGE   NODE      DESIRED STATE  CURRENT STATE           ERROR
a7oylt8jcnovjuiaa3y6ndfry  helloworld.1  alpine  worker2   Running        Running 5 minutes ago   
32u69hoo7eq7pufqw69vailwy  helloworld.2  alpine  worker1   Running        Running 30 seconds ago  
767paibm1v0abbee28khyy72d  helloworld.3  alpine  manager1  Running        Running 31 seconds ago  
es0corpw8iyari5thoq2lintx  helloworld.4  alpine  manager1  Running        Running 31 seconds ago  
0p4xqr29vuhhsjszwk3tdgvp6  helloworld.5  alpine  worker2   Running        Running 31 seconds ago  

Two of the instances ended up running on the manager... Apparently this is the bad kind of manager that does the work itself.
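
According to the official tutorial, a manager can be told to stop accepting tasks by draining it, which also reschedules its running tasks onto the other nodes:

ssh#manager1
ubuntu@manager1:~$ sudo docker node update --availability drain manager1
ubuntu@manager1:~$ sudo docker node ls                 # manager1 now shows AVAILABILITY: Drain
ubuntu@manager1:~$ sudo docker service ps helloworld   # its tasks should move to worker1/worker2

Setting --availability active brings the manager back into scheduling.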

Other features

Swarm mode seems to have plenty of other features, so I will keep trying them out.
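
As a minimal cleanup once you are done experimenting, the service can be removed on the manager and the VMs shut down from the host:

ssh#manager1
ubuntu@manager1:~$ sudo docker service rm helloworld   # removes the service and its tasks
ubuntu@manager1:~$ sudo docker service ls              # helloworld should no longer be listed

On the host, vagrant halt stops the three VMs and vagrant destroy -f removes them entirely.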
