How to Install Ceph (Luminous)

Posted at 2017-07-31

1 Environment

  • Three guest machines running on VMware. The OS is CentOS 7.3.
  • The three hosts are named admin, s01, and s02.
  • admin is the management server; s01 and s02 act as both OSD and MON nodes.
    +---- admin ---+           +----- s01 ----+           +----- s02 ----+
    |              |           |              |           |              |
    |   CentOS7.3  |           |  CentOS7.3   |           |  CentOS7.3   |
    |              |           |   OSD/MON    |           |   OSD/MON    |
    |              |           |              |           |              |
    +----- eth0 ---+           +----- eth0 ---+           +----- eth0 ---+
            | .100                      | .110                     | .120
            |                           |                          |
    ----------------------------------------------------------------------
                       192.168.0.0/24 (Public Network)

2 Ceph Versioning

  • Code names follow alphabetical order (A, B, C, ...): Infernalis -> Jewel -> Kraken -> Luminous
  • The version numbering scheme changed starting with Infernalis to the X.Y.Z format.
  • X: release number ('L' for Luminous is the 12th letter of the alphabet, so 12)
  • Y: 0 (development release), 1 (release candidate), 2 (stable release)
  • Z: appears to be incremented for bug-fix releases.
  • (Example) Stable Infernalis releases are v9.2.Z.
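
As a concrete check against this scheme, the version installed later in this article (section 4.5) decodes as:

    ceph version 12.1.1 ... luminous (rc)
      X = 12 : Luminous (the 12th release)
      Y = 1  : release candidate
      Z = 1  : fix level within that RC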

3 Preparation

3.1 Adding Virtual Disks

In my case, I used
"Edit virtual machine settings (D)" -> "Add (A)" -> "Hard Disk" -> "SCSI (Recommended)"
-> "Create a new virtual disk (V)" -> "Store virtual disk as a single file (O)"
and added two 20 GB virtual disks (/dev/sdb and /dev/sdc) to each OSD node.
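
After adding the disks, it is worth confirming that the OSD nodes actually see them. A minimal check (the output below is what I would expect for the 20 GB disks added above, not captured from my environment):

[root@s01 ~]# lsblk /dev/sdb /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  20G  0 disk
sdc    8:32   0  20G  0 disk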

3.2 Host Name Configuration

Configure host name resolution on all guest machines (admin, s01, s02).

[root@admin ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.100 admin
192.168.0.110 s01
192.168.0.120 s02
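
To confirm that the entries resolve as intended, a quick check from admin (a sketch; expected output, not captured):

[root@admin ~]# for h in admin s01 s02; do getent hosts $h; done
192.168.0.100   admin
192.168.0.110   s01
192.168.0.120   s02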

3.3 Enabling Passwordless SSH Login

Allow passwordless ssh login from admin to s01, s02, and admin (itself).

3.3.1 Allowing Passwordless Login to s01 and s02

[root@admin ~]# mkdir .ssh
[root@admin ~]# chmod 700 .ssh

Generate admin's SSH key pair.
[root@admin .ssh]# ssh-keygen -t dsa

Check the generated public key (marked with ★).
[root@admin ~]# cd .ssh/
[root@admin .ssh]#  ls
id_dsa  ★id_dsa.pub

Transfer admin's public key to s01 and s02.
[root@admin .ssh]# scp id_dsa.pub root@s01:/root
[root@admin .ssh]# scp id_dsa.pub root@s02:/root

Create the file that stores the public keys (authorized_keys).
Run the same steps on s02 as well.
[root@s01 ~]# mkdir .ssh
[root@s01 ~]# chmod 700 .ssh
[root@s01 ~]# touch .ssh/authorized_keys

Check the public key transferred from admin.
[root@s01 ~]# ls id_dsa.pub
id_dsa.pub

Append the public key to the authorized_keys file.
[root@s01 ~]# cat id_dsa.pub >> .ssh/authorized_keys

Enable PubkeyAuthentication.
[root@s01 ~]# vi /etc/ssh/sshd_config
PubkeyAuthentication yes

Restart sshd.
[root@s01 ~]# systemctl restart sshd

3.3.2 Allowing Passwordless Login to admin

Also allow ssh login to admin itself without a password prompt.
[root@admin .ssh]# pwd
/root/.ssh

Append the public key to the authorized_keys file.
[root@admin .ssh]# cat id_dsa.pub >> authorized_keys
[root@admin .ssh]# ls
authorized_keys  id_dsa  id_dsa.pub

Enable PubkeyAuthentication.
[root@admin .ssh]# vi /etc/ssh/sshd_config
PubkeyAuthentication yes

Restart sshd.
[root@admin .ssh]# systemctl restart sshd
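
At this point ssh from admin to every host should succeed without a password prompt. A quick verification (expected result, not captured output):

[root@admin ~]# for h in s01 s02 admin; do ssh $h hostname; done
s01
s02
admin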

3.4 Stopping firewalld (Temporary Workaround)

Access to port 6789 and ports 6800-7300 must be allowed.
For now, as a temporary workaround, stop firewalld entirely.

[root@admin ~]# systemctl stop firewalld
[root@admin ~]# systemctl disable firewalld

[root@s01 ~]# systemctl stop firewalld
[root@s01 ~]# systemctl disable firewalld

[root@s02 ~]# systemctl stop firewalld
[root@s02 ~]# systemctl disable firewalld
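
The proper fix, instead of disabling firewalld, is to open the MON port (6789) and the OSD/MGR port range (6800-7300) on each node. A sketch of that alternative (not what I actually ran in this article):

[root@s01 ~]# firewall-cmd --zone=public --permanent --add-port=6789/tcp
[root@s01 ~]# firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp
[root@s01 ~]# firewall-cmd --reload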

3.5 Time Synchronization

Synchronize the clocks of all servers (admin, s01, s02) with an NTP server.

3.5.1 Enabling and Starting chronyd

In my home environment (directly connected to the Internet), I use the configuration file (chrony.conf) without any changes.
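
If your environment cannot reach the public pool servers, you would instead point chrony at an internal NTP server. A minimal sketch of the relevant line in /etc/chrony.conf (ntp.example.local is a placeholder):

server ntp.example.local iburst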

Start chronyd on admin.
[root@admin ~]# systemctl enable chronyd
[root@admin ~]# systemctl start chronyd

Start chronyd on s01.
[root@s01 ~]# systemctl enable chronyd
[root@s01 ~]# systemctl start chronyd

Start chronyd on s02.
[root@s02 ~]# systemctl enable chronyd
[root@s02 ~]# systemctl start chronyd

3.5.2 Checking chronyd Status

Confirm that chronyd on each guest machine is synchronized with an NTP server on the Internet.

Display information about the time sources chronyd is currently using.
The server marked with ★ is the NTP server currently being used for synchronization (the one with "*" at the left edge).
[root@admin ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^+ sjkBBML24.bb.kddi.ne.jp       5   9   377   208  -1525us[-1525us] +/-   59ms
^+ hachi.paina.net               2   9   377   400   -905us[ -905us] +/-   37ms
^+ routerida1.soprano-asm.ne     2  10   377   460   -361us[ -361us] +/-   21ms
^* r017081.203112.miinet.jp★    4   9   377   19m  -1150us[-1288us] +/-   22ms


Show detailed information about the NTP server (★ above) currently used for synchronization.
[root@admin ~]# chronyc tracking
Reference ID    : 203.112.17.81 (★r017081.203112.miinet.jp) 
Stratum         : 5
Ref time (UTC)  : Thu Jun 22 00:54:40 2017
System time     : 0.000000024 seconds fast of NTP time
Last offset     : -0.000137531 seconds
RMS offset      : 0.000383716 seconds
Frequency       : 22.150 ppm slow
Residual freq   : -0.016 ppm
Skew            : 0.523 ppm
Root delay      : 0.025137 seconds
Root dispersion : 0.006982 seconds
Update interval : 257.0 seconds
Leap status     : Normal

3.6 Disabling SELinux (Temporary Workaround)

Do this on all guest machines.

[root@admin ~]# vi /etc/sysconfig/selinux
SELINUX=disabled
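
The setting in /etc/sysconfig/selinux only takes effect after a reboot. To also stop enforcement immediately for the running system (a common companion step, not part of the original procedure):

[root@admin ~]# setenforce 0
[root@admin ~]# getenforce
Permissive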

4 Ceph Installation Procedure

4.1 Adding the Repository

[root@admin ~]# wget http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
[root@admin ~]# ls ceph-release-1-1.el7.noarch.rpm
ceph-release-1-1.el7.noarch.rpm

[root@admin ~]# rpm -ivh ceph-release-1-1.el7.noarch.rpm
[root@admin ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
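
Before installing, it is worth confirming that the new repositories are visible to yum (a quick check; it should list the Ceph, Ceph-noarch, and ceph-source repos defined above):

[root@admin ~]# yum repolist enabled | grep -i ceph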

4.2 Installing the ceph-deploy Tool

ceph-deploy is a tool that installs rpm packages on each guest machine (admin, s01, s02)
and automates the various configuration steps.

[root@admin ~]# mkdir mycluster
[root@admin ~]# cd mycluster/

[root@admin mycluster]# yum -y install ceph-deploy
[root@admin mycluster]# rpm -qa|grep ceph-deploy
ceph-deploy-1.5.38-0.noarch

4.3 Registering the OSD Nodes (ceph-deploy new)

Register the guest machines to be used as OSD nodes; as the output below shows, they also become the cluster's initial MON members. Here, s01 and s02 are used.

[root@admin mycluster]# ceph-deploy new s01 s02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy new s01 s02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xf22578>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xf93518>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['s01', 's02']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[s01][DEBUG ] connected to host: admin
[s01][INFO  ] Running command: ssh -CT -o BatchMode=yes s01
[s01][DEBUG ] connected to host: s01
[s01][DEBUG ] detect platform information from remote host
[s01][DEBUG ] detect machine type
[s01][DEBUG ] find the location of an executable
[s01][INFO  ] Running command: /usr/sbin/ip link show
[s01][INFO  ] Running command: /usr/sbin/ip addr show
[s01][DEBUG ] IP addresses found: [u'192.168.0.110']
[ceph_deploy.new][DEBUG ] Resolving host s01
[ceph_deploy.new][DEBUG ] Monitor s01 at 192.168.0.110
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[s02][DEBUG ] connected to host: admin
[s02][INFO  ] Running command: ssh -CT -o BatchMode=yes s02
[s02][DEBUG ] connected to host: s02
[s02][DEBUG ] detect platform information from remote host
[s02][DEBUG ] detect machine type
[s02][DEBUG ] find the location of an executable
[s02][INFO  ] Running command: /usr/sbin/ip link show
[s02][INFO  ] Running command: /usr/sbin/ip addr show
[s02][DEBUG ] IP addresses found: [u'192.168.0.120']
[ceph_deploy.new][DEBUG ] Resolving host s02
[ceph_deploy.new][DEBUG ] Monitor s02 at 192.168.0.120
[ceph_deploy.new][DEBUG ] Monitor initial members are ['s01', 's02']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.0.110', '192.168.0.120']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

4.4 Editing the Configuration File (ceph.conf)

Add the public network definition to the configuration file.

[root@admin mycluster]# ls
ceph-deploy-ceph.log  ceph.conf  ceph.mon.keyring

[root@admin mycluster]# vi ceph.conf
[root@admin mycluster]# cat ceph.conf
[global]
fsid = 89e77f89-62d8-4a6c-ba64-85dc580f58e6
mon_initial_members = s01, s02
mon_host = 192.168.0.110,192.168.0.120
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.0.0/24   <=== added
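
Not part of the original steps, but since this cluster has only two OSD hosts, the default replica count of 3 cannot be spread across hosts. It is common to also add pool defaults to the [global] section in that case; a sketch:

osd pool default size = 2
osd pool default min size = 1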

4.5 Installing the Ceph Packages

Run ceph-deploy on the admin server to add the ceph repository on each node and install the ceph packages.


[root@admin mycluster]# ceph-deploy install --release luminous s01 s02 admin
(snip)
[s01][DEBUG ] Complete!
[s01][INFO  ] Running command: ceph --version
[s01][DEBUG ] ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc)
(snip)
[s02][DEBUG ] Complete!
[s02][INFO  ] Running command: ceph --version
[s02][DEBUG ] ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc)
(snip)
[admin][DEBUG ] Complete!
[admin][INFO  ] Running command: ceph --version
[admin][DEBUG ] ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc)

4.6 Initializing the Monitor Nodes (MON) (ceph-deploy mon create-initial)

[root@admin mycluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x114d908>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x1143668>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts s01 s02
[ceph_deploy.mon][DEBUG ] detecting platform for host s01 ...
[s01][DEBUG ] connected to host: s01
[s01][DEBUG ] detect platform information from remote host
[s01][DEBUG ] detect machine type
[s01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.3.1611 Core
[s01][DEBUG ] determining if provided host has same hostname in remote
[s01][DEBUG ] get remote short hostname
[s01][DEBUG ] deploying mon to s01
[s01][DEBUG ] get remote short hostname
[s01][DEBUG ] remote hostname: s01
[s01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[s01][DEBUG ] create the mon path if it does not exist
[s01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-s01/done
[s01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-s01/done
[s01][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-s01.mon.keyring
[s01][DEBUG ] create the monitor keyring file
[s01][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i s01 --keyring /var/lib/ceph/tmp/ceph-s01.mon.keyring --setuser 167 --setgroup 167
[s01][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-s01.mon.keyring
[s01][DEBUG ] create a done file to avoid re-doing the mon deployment
[s01][DEBUG ] create the init path if it does not exist
[s01][INFO  ] Running command: systemctl enable ceph.target
[s01][INFO  ] Running command: systemctl enable ceph-mon@s01
[s01][INFO  ] Running command: systemctl start ceph-mon@s01
[s01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s01.asok mon_status
[s01][DEBUG ] ********************************************************************************
[s01][DEBUG ] status for monitor: mon.s01
[s01][DEBUG ] {
[s01][DEBUG ]   "election_epoch": 0,
[s01][DEBUG ]   "extra_probe_peers": [
[s01][DEBUG ]     "192.168.0.120:6789/0"
[s01][DEBUG ]   ],
[s01][DEBUG ]   "feature_map": {},
[s01][DEBUG ]   "features": {
[s01][DEBUG ]     "quorum_con": "0",
[s01][DEBUG ]     "quorum_mon": [],
[s01][DEBUG ]     "required_con": "0",
[s01][DEBUG ]     "required_mon": []
[s01][DEBUG ]   },
[s01][DEBUG ]   "monmap": {
[s01][DEBUG ]     "created": "2017-07-30 20:52:51.640090",
[s01][DEBUG ]     "epoch": 0,
[s01][DEBUG ]     "features": {
[s01][DEBUG ]       "optional": [],
[s01][DEBUG ]       "persistent": []
[s01][DEBUG ]     },
[s01][DEBUG ]     "fsid": "abf36c8d-ae59-4317-9822-e8b4c6c1ffe7",
[s01][DEBUG ]     "modified": "2017-07-30 20:52:51.640090",
[s01][DEBUG ]     "mons": [
[s01][DEBUG ]       {
[s01][DEBUG ]         "addr": "192.168.0.110:6789/0",
[s01][DEBUG ]         "name": "s01",
[s01][DEBUG ]         "public_addr": "192.168.0.110:6789/0",
[s01][DEBUG ]         "rank": 0
[s01][DEBUG ]       },
[s01][DEBUG ]       {
[s01][DEBUG ]         "addr": "0.0.0.0:0/1",
[s01][DEBUG ]         "name": "s02",
[s01][DEBUG ]         "public_addr": "0.0.0.0:0/1",
[s01][DEBUG ]         "rank": 1
[s01][DEBUG ]       }
[s01][DEBUG ]     ]
[s01][DEBUG ]   },
[s01][DEBUG ]   "name": "s01",
[s01][DEBUG ]   "outside_quorum": [
[s01][DEBUG ]     "s01"
[s01][DEBUG ]   ],
[s01][DEBUG ]   "quorum": [],
[s01][DEBUG ]   "rank": 0,
[s01][DEBUG ]   "state": "probing",
[s01][DEBUG ]   "sync_provider": []
[s01][DEBUG ] }
[s01][DEBUG ] ********************************************************************************
[s01][INFO  ] monitor: mon.s01 is running
[s01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s01.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host s02 ...
[s02][DEBUG ] connected to host: s02
[s02][DEBUG ] detect platform information from remote host
[s02][DEBUG ] detect machine type
[s02][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.3.1611 Core
[s02][DEBUG ] determining if provided host has same hostname in remote
[s02][DEBUG ] get remote short hostname
[s02][DEBUG ] deploying mon to s02
[s02][DEBUG ] get remote short hostname
[s02][DEBUG ] remote hostname: s02
[s02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[s02][DEBUG ] create the mon path if it does not exist
[s02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-s02/done
[s02][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-s02/done
[s02][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-s02.mon.keyring
[s02][DEBUG ] create the monitor keyring file
[s02][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i s02 --keyring /var/lib/ceph/tmp/ceph-s02.mon.keyring --setuser 167 --setgroup 167
[s02][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-s02.mon.keyring
[s02][DEBUG ] create a done file to avoid re-doing the mon deployment
[s02][DEBUG ] create the init path if it does not exist
[s02][INFO  ] Running command: systemctl enable ceph.target
[s02][INFO  ] Running command: systemctl enable ceph-mon@s02
[s02][INFO  ] Running command: systemctl start ceph-mon@s02
[s02][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s02.asok mon_status
[s02][DEBUG ] ********************************************************************************
[s02][DEBUG ] status for monitor: mon.s02
[s02][DEBUG ] {
[s02][DEBUG ]   "election_epoch": 1,
[s02][DEBUG ]   "extra_probe_peers": [
[s02][DEBUG ]     "192.168.0.110:6789/0"
[s02][DEBUG ]   ],
[s02][DEBUG ]   "feature_map": {
[s02][DEBUG ]     "mon": {
[s02][DEBUG ]       "group": {
[s02][DEBUG ]         "features": 1152323339925389307,
[s02][DEBUG ]         "num": 3,
[s02][DEBUG ]         "release": "luminous"
[s02][DEBUG ]       }
[s02][DEBUG ]     }
[s02][DEBUG ]   },
[s02][DEBUG ]   "features": {
[s02][DEBUG ]     "quorum_con": "0",
[s02][DEBUG ]     "quorum_mon": [],
[s02][DEBUG ]     "required_con": "0",
[s02][DEBUG ]     "required_mon": []
[s02][DEBUG ]   },
[s02][DEBUG ]   "monmap": {
[s02][DEBUG ]     "created": "2017-07-30 20:53:15.412659",
[s02][DEBUG ]     "epoch": 0,
[s02][DEBUG ]     "features": {
[s02][DEBUG ]       "optional": [],
[s02][DEBUG ]       "persistent": []
[s02][DEBUG ]     },
[s02][DEBUG ]     "fsid": "abf36c8d-ae59-4317-9822-e8b4c6c1ffe7",
[s02][DEBUG ]     "modified": "2017-07-30 20:53:15.412659",
[s02][DEBUG ]     "mons": [
[s02][DEBUG ]       {
[s02][DEBUG ]         "addr": "192.168.0.110:6789/0",
[s02][DEBUG ]         "name": "s01",
[s02][DEBUG ]         "public_addr": "192.168.0.110:6789/0",
[s02][DEBUG ]         "rank": 0
[s02][DEBUG ]       },
[s02][DEBUG ]       {
[s02][DEBUG ]         "addr": "192.168.0.120:6789/0",
[s02][DEBUG ]         "name": "s02",
[s02][DEBUG ]         "public_addr": "192.168.0.120:6789/0",
[s02][DEBUG ]         "rank": 1
[s02][DEBUG ]       }
[s02][DEBUG ]     ]
[s02][DEBUG ]   },
[s02][DEBUG ]   "name": "s02",
[s02][DEBUG ]   "outside_quorum": [],
[s02][DEBUG ]   "quorum": [],
[s02][DEBUG ]   "rank": 1,
[s02][DEBUG ]   "state": "electing",
[s02][DEBUG ]   "sync_provider": []
[s02][DEBUG ] }
[s02][DEBUG ] ********************************************************************************
[s02][INFO  ] monitor: mon.s02 is running
[s02][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s02.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.s01
[s01][DEBUG ] connected to host: s01
[s01][DEBUG ] detect platform information from remote host
[s01][DEBUG ] detect machine type
[s01][DEBUG ] find the location of an executable
[s01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.s01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] processing monitor mon.s02
[s02][DEBUG ] connected to host: s02
[s02][DEBUG ] detect platform information from remote host
[s02][DEBUG ] detect machine type
[s02][DEBUG ] find the location of an executable
[s02][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s02.asok mon_status
[ceph_deploy.mon][INFO  ] mon.s02 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpmYY82y
[s01][DEBUG ] connected to host: s01
[s01][DEBUG ] detect platform information from remote host
[s01][DEBUG ] detect machine type
[s01][DEBUG ] get remote short hostname
[s01][DEBUG ] fetch remote file
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.s01.asok mon_status
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get client.admin
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get-or-create client.admin osd allow * mds allow * mon allow * mgr allow *
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get client.bootstrap-mds
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get client.bootstrap-mgr
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get client.bootstrap-osd
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get client.bootstrap-rgw
[s01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-s01/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpmYY82y

4.7 Making the ceph Command Usable

Push the admin keyring and ceph.conf to each node (s01, s02, admin) so the ceph command can be run on them.

[root@admin mycluster]# ceph-deploy admin s01 s02 admin
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy admin s01 s02 admin
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1facc68>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['s01', 's02', 'admin']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x1f03938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to s01
[s01][DEBUG ] connected to host: s01
[s01][DEBUG ] detect platform information from remote host
[s01][DEBUG ] detect machine type
[s01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to s02
[s02][DEBUG ] connected to host: s02
[s02][DEBUG ] detect platform information from remote host
[s02][DEBUG ] detect machine type
[s02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to admin
[admin][DEBUG ] connected to host: admin
[admin][DEBUG ] detect platform information from remote host
[admin][DEBUG ] detect machine type
[admin][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf


[root@admin mycluster]# ls
ceph-deploy-ceph.log        ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring   ceph.mon.keyring
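
With the keyring and ceph.conf pushed to each node, the cluster status can now be checked from any of them. For example (exact output depends on the cluster; until the OSDs are created in section 4.8, health will not be OK):

[root@admin mycluster]# ceph -s
[root@admin mycluster]# ceph health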

4.8 Creating the OSDs (ceph-deploy osd create)

[root@admin mycluster]# ceph-deploy osd create s01:sdb s01:sdc s02:sdb s02:sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy osd create s01:sdb s01:sdc s02:sdb s02:sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('s01', '/dev/sdb', None), ('s01', '/dev/sdc', None), ('s02', '/dev/sdb', None), ('s02', '/dev/sdc', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f49ce08d950>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f49ce07eed8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks s01:/dev/sdb: s01:/dev/sdc: s02:/dev/sdb: s02:/dev/sdc:
[s01][DEBUG ] connected to host: s01
[s01][DEBUG ] detect platform information from remote host
[s01][DEBUG ] detect machine type
[s01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to s01
[s01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[s01][WARNIN] osd keyring does not exist yet, creating one
[s01][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host s01 disk /dev/sdb journal None activate True
[s01][DEBUG ] find the location of an executable
[s01][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[s01][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] set_type: Will colocate block with data on /dev/sdb
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] ptype_tobe_for_name: name = data
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:a02129ba-1e8d-4465-aa7e-f192aa9fef4a --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[s01][DEBUG ] Creating new GPT entries.
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] ptype_tobe_for_name: name = block
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:1a37c09c-029d-4571-9922-3aed780f2516 --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[s01][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/1a37c09c-029d-4571-9922-3aed780f2516
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sdb
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/1a37c09c-029d-4571-9922-3aed780f2516
[s01][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[s01][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[s01][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=6400 blks
[s01][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[s01][DEBUG ]          =                       crc=1        finobt=0, sparse=0
[s01][DEBUG ] data     =                       bsize=4096   blocks=25600, imaxpct=25
[s01][DEBUG ]          =                       sunit=0      swidth=0 blks
[s01][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
[s01][DEBUG ] log      =internal log           bsize=4096   blocks=864, version=2
[s01][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[s01][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[s01][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.x82pcH with options noatime,inode64
[s01][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.x82pcH
[s01][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.x82pcH
[s01][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.x82pcH
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.x82pcH/ceph_fsid.1808.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.x82pcH/ceph_fsid.1808.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.x82pcH/fsid.1808.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.x82pcH/fsid.1808.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.x82pcH/magic.1808.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.x82pcH/magic.1808.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.x82pcH/block_uuid.1808.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.x82pcH/block_uuid.1808.tmp
[s01][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.x82pcH/block -> /dev/disk/by-partuuid/1a37c09c-029d-4571-9922-3aed780f2516
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.x82pcH/type.1808.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.x82pcH/type.1808.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.x82pcH
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.x82pcH
[s01][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.x82pcH
[s01][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.x82pcH
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[s01][INFO  ] Running command: systemctl enable ceph.target
[s01][INFO  ] checking OSD status...
[s01][DEBUG ] find the location of an executable
[s01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[s01][WARNIN] there is 1 OSD down
[s01][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host s01 is now ready for osd use.
[s01][DEBUG ] connected to host: s01
[s01][DEBUG ] detect platform information from remote host
[s01][DEBUG ] detect machine type
[s01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] Preparing host s01 disk /dev/sdc journal None activate True
[s01][DEBUG ] find the location of an executable
[s01][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdc
[s01][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] set_type: Will colocate block with data on /dev/sdc
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[s01][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] set_data_partition: Creating osd partition on /dev/sdc
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] ptype_tobe_for_name: name = data
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:da29d66d-241d-4197-a3ed-6b50086dc3f5 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdc
[s01][DEBUG ] Creating new GPT entries.
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on created device /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] ptype_tobe_for_name: name = block
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:6cc195e5-0062-4396-8f09-62db48c76b1c --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdc
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on created device /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc2 uuid path is /sys/dev/block/8:34/dm/uuid
[s01][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/6cc195e5-0062-4396-8f09-62db48c76b1c
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sdc
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/6cc195e5-0062-4396-8f09-62db48c76b1c
[s01][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdc1
[s01][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdc1
[s01][DEBUG ] meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=6400 blks
[s01][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[s01][DEBUG ]          =                       crc=1        finobt=0, sparse=0
[s01][DEBUG ] data     =                       bsize=4096   blocks=25600, imaxpct=25
[s01][DEBUG ]          =                       sunit=0      swidth=0 blks
[s01][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
[s01][DEBUG ] log      =internal log           bsize=4096   blocks=864, version=2
[s01][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[s01][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[s01][WARNIN] mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.Lojqhi with options noatime,inode64
[s01][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.Lojqhi
[s01][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Lojqhi
[s01][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Lojqhi
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Lojqhi/ceph_fsid.2613.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Lojqhi/ceph_fsid.2613.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Lojqhi/fsid.2613.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Lojqhi/fsid.2613.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Lojqhi/magic.2613.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Lojqhi/magic.2613.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Lojqhi/block_uuid.2613.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Lojqhi/block_uuid.2613.tmp
[s01][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Lojqhi/block -> /dev/disk/by-partuuid/6cc195e5-0062-4396-8f09-62db48c76b1c
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Lojqhi/type.2613.tmp
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Lojqhi/type.2613.tmp
[s01][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Lojqhi
[s01][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Lojqhi
[s01][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.Lojqhi
[s01][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Lojqhi
[s01][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s01][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc
[s01][DEBUG ] Warning: The kernel is still using the old partition table.
[s01][DEBUG ] The new table will be used at the next reboot.
[s01][DEBUG ] The operation has completed successfully.
[s01][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s01][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdc1
[s01][INFO  ] Running command: systemctl enable ceph.target
[s01][INFO  ] checking OSD status...
[s01][DEBUG ] find the location of an executable
[s01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host s01 is now ready for osd use.
[s02][DEBUG ] connected to host: s02
[s02][DEBUG ] detect platform information from remote host
[s02][DEBUG ] detect machine type
[s02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to s02
[s02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[s02][WARNIN] osd keyring does not exist yet, creating one
[s02][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host s02 disk /dev/sdb journal None activate True
[s02][DEBUG ] find the location of an executable
[s02][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[s02][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] set_type: Will colocate block with data on /dev/sdb
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] ptype_tobe_for_name: name = data
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:1a874f43-9cfe-4021-83d0-7e48ac7ab950 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[s02][DEBUG ] Creating new GPT entries.
[s02][DEBUG ] Warning: The kernel is still using the old partition table.
[s02][DEBUG ] The new table will be used at the next reboot.
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] ptype_tobe_for_name: name = block
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:5e4b6394-5347-4c10-862b-a68d7c3386c5 --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[s02][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/5e4b6394-5347-4c10-862b-a68d7c3386c5
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sdb
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/5e4b6394-5347-4c10-862b-a68d7c3386c5
[s02][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[s02][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[s02][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=6400 blks
[s02][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[s02][DEBUG ]          =                       crc=1        finobt=0, sparse=0
[s02][DEBUG ] data     =                       bsize=4096   blocks=25600, imaxpct=25
[s02][DEBUG ]          =                       sunit=0      swidth=0 blks
[s02][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
[s02][DEBUG ] log      =internal log           bsize=4096   blocks=864, version=2
[s02][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[s02][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[s02][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.1pPbZB with options noatime,inode64
[s02][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.1pPbZB
[s02][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.1pPbZB
[s02][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.1pPbZB
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1pPbZB/ceph_fsid.1479.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1pPbZB/ceph_fsid.1479.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1pPbZB/fsid.1479.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1pPbZB/fsid.1479.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1pPbZB/magic.1479.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1pPbZB/magic.1479.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1pPbZB/block_uuid.1479.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1pPbZB/block_uuid.1479.tmp
[s02][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.1pPbZB/block -> /dev/disk/by-partuuid/5e4b6394-5347-4c10-862b-a68d7c3386c5
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1pPbZB/type.1479.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1pPbZB/type.1479.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1pPbZB
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1pPbZB
[s02][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.1pPbZB
[s02][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.1pPbZB
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[s02][INFO  ] Running command: systemctl enable ceph.target
[s02][INFO  ] checking OSD status...
[s02][DEBUG ] find the location of an executable
[s02][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[s02][WARNIN] there is 1 OSD down
[s02][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host s02 is now ready for osd use.
[s02][DEBUG ] connected to host: s02
[s02][DEBUG ] detect platform information from remote host
[s02][DEBUG ] detect machine type
[s02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] Preparing host s02 disk /dev/sdc journal None activate True
[s02][DEBUG ] find the location of an executable
[s02][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdc
[s02][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] set_type: Will colocate block with data on /dev/sdc
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_db_size
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_size
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup bluestore_block_wal_size
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[s02][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] set_data_partition: Creating osd partition on /dev/sdc
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] ptype_tobe_for_name: name = data
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] create_partition: Creating data partition num 1 size 100 on /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:bcc72f49-6d66-4b8b-a326-0cbbfe06caf0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdc
[s02][DEBUG ] Creating new GPT entries.
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on created device /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] ptype_tobe_for_name: name = block
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] create_partition: Creating block partition num 2 size 0 on /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:0cd4257f-0a9c-471b-b488-3818f4b2b03a --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdc
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on created device /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc2 uuid path is /sys/dev/block/8:34/dm/uuid
[s02][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/0cd4257f-0a9c-471b-b488-3818f4b2b03a
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sdc
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] prepare_device: Block is GPT partition /dev/disk/by-partuuid/0cd4257f-0a9c-471b-b488-3818f4b2b03a
[s02][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdc1
[s02][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdc1
[s02][DEBUG ] meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=6400 blks
[s02][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[s02][DEBUG ]          =                       crc=1        finobt=0, sparse=0
[s02][DEBUG ] data     =                       bsize=4096   blocks=25600, imaxpct=25
[s02][DEBUG ]          =                       sunit=0      swidth=0 blks
[s02][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
[s02][DEBUG ] log      =internal log           bsize=4096   blocks=864, version=2
[s02][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[s02][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[s02][WARNIN] mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.tHM88s with options noatime,inode64
[s02][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.tHM88s
[s02][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.tHM88s
[s02][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.tHM88s
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tHM88s/ceph_fsid.2316.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tHM88s/ceph_fsid.2316.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tHM88s/fsid.2316.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tHM88s/fsid.2316.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tHM88s/magic.2316.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tHM88s/magic.2316.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tHM88s/block_uuid.2316.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tHM88s/block_uuid.2316.tmp
[s02][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.tHM88s/block -> /dev/disk/by-partuuid/0cd4257f-0a9c-471b-b488-3818f4b2b03a
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tHM88s/type.2316.tmp
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tHM88s/type.2316.tmp
[s02][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tHM88s
[s02][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tHM88s
[s02][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.tHM88s
[s02][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.tHM88s
[s02][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[s02][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc
[s02][DEBUG ] The operation has completed successfully.
[s02][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdc /usr/sbin/partprobe /dev/sdc
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[s02][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdc1
[s02][INFO  ] Running command: systemctl enable ceph.target
[s02][INFO  ] checking OSD status...
[s02][DEBUG ] find the location of an executable
[s02][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[s02][WARNIN] there is 1 OSD down
[s02][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host s02 is now ready for osd use.

4.9 Starting ceph-mgr

Starting with Luminous, a ceph-mgr daemon must be running.

Create the mgr data directory and keyring file, then create an auth key for mgr.admin.
[root@admin mycluster]# mkdir /var/lib/ceph/mgr/ceph-admin
[root@admin mycluster]# touch /var/lib/ceph/mgr/ceph-admin/keyring
[root@admin mycluster]# ceph --cluster ceph auth get-or-create mgr.admin mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.admin]
        key = AQDmy31ZXIQuChAAW6a/RHstNjaAAPZFW3UwJg==

Paste the key obtained above into the keyring file and confirm its contents.
[root@admin mycluster]# vi /var/lib/ceph/mgr/ceph-admin/keyring
[root@admin mycluster]# cat /var/lib/ceph/mgr/ceph-admin/keyring
[mgr.admin]
        key = AQDmy31ZXIQuChAAW6a/RHstNjaAAPZFW3UwJg==

[root@admin mycluster]# ceph-mgr -i admin

[root@admin mycluster]# ps -C ceph-mgr
   PID TTY          TIME CMD
   927 ?        00:00:03 ceph-mgr
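
ceph-mgr started by hand like this is not managed by systemd, so it will not come back automatically after a reboot. Below is a minimal sketch of letting systemd manage it instead, assuming the ceph-mgr@ template unit shipped with the Luminous RPMs (that unit runs the daemon as the ceph user, hence the chown).

# Assumption: the Luminous packages provide a ceph-mgr@.service template unit that runs as the ceph user.
[root@admin mycluster]# chown -R ceph:ceph /var/lib/ceph/mgr/ceph-admin
[root@admin mycluster]# systemctl enable ceph-mgr@admin
[root@admin mycluster]# systemctl start ceph-mgr@admin

If that unit is not available in your build, the manual ceph-mgr -i admin invocation shown above also works.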

5 Status checks after setup is complete

5.1 Checking the cluster status

[root@admin ~]# ceph health
HEALTH_OK

[root@admin ~]# ceph -s
  cluster:
    id:     abf36c8d-ae59-4317-9822-e8b4c6c1ffe7
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum s01,s02
    mgr: admin(active)
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   4286 MB used, 77229 MB / 81515 MB avail
    pgs:
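
If ceph health ever reports something other than HEALTH_OK, ceph health detail lists the individual failing checks. The ceph versions command (added in Luminous, as far as I know) is also a quick way to confirm that every daemon type is running the expected 12.x code.

[root@admin ~]# ceph health detail
[root@admin ~]# ceph versions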

5.2 Checking the OSDs

[root@admin ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME     UP/DOWN REWEIGHT PRI-AFF
-1       0.07758 root default
-3       0.03879     host s01
 0   hdd 0.01939         osd.0      up  1.00000 1.00000
 1   hdd 0.01939         osd.1      up  1.00000 1.00000
-5       0.03879     host s02
 2   hdd 0.01939         osd.2      up  1.00000 1.00000
 3   hdd 0.01939         osd.3      up  1.00000 1.00000
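
Per-OSD capacity, weight and PG counts can also be checked with ceph osd df, which is convenient for confirming that data is spread evenly across the four OSDs.

[root@admin ~]# ceph osd df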

5.3 Checking the disks on the OSD nodes after initialization

[root@s01 ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs         18G  1.8G   17G   10% /
devtmpfs       devtmpfs   466M     0  466M    0% /dev
tmpfs          tmpfs      489M     0  489M    0% /dev/shm
tmpfs          tmpfs      489M  572K  488M    1% /run
tmpfs          tmpfs      489M     0  489M    0% /sys/fs/cgroup
/dev/sda1      xfs       1014M  106M  909M   11% /boot
tmpfs          tmpfs       98M     0   98M    0% /run/user/0
/dev/sdb1      xfs         97M  5.4M   92M    6% /var/lib/ceph/osd/ceph-0
/dev/sdc1      xfs         97M  5.4M   92M    6% /var/lib/ceph/osd/ceph-1

[root@s02 ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs         18G  1.8G   17G   10% /
devtmpfs       devtmpfs   479M     0  479M    0% /dev
tmpfs          tmpfs      489M     0  489M    0% /dev/shm
tmpfs          tmpfs      489M  600K  488M    1% /run
tmpfs          tmpfs      489M     0  489M    0% /sys/fs/cgroup
/dev/sda1      xfs       1014M  116M  899M   12% /boot
/dev/sdb1      xfs         97M  5.4M   92M    6% /var/lib/ceph/osd/ceph-2
/dev/sdc1      xfs         97M  5.4M   92M    6% /var/lib/ceph/osd/ceph-3
tmpfs          tmpfs       98M     0   98M    0% /run/user/0

6 How to store objects in the object storage

"pool0001"という名前のプールを作成する。PG数は128を指定。
[root@admin ceph]# ceph osd pool create pool0001 128
pool 'pool0001' created
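
The value 128 follows the guidance in the Ceph documentation: the total PG count is roughly (number of OSDs x 100) / replica count, rounded to a power of two, and for clusters with fewer than 5 OSDs a pg_num of 128 is suggested. Here that works out to 4 x 100 / 3 = about 133, so 128 is a reasonable choice for this small lab cluster.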

Check the pool that was created.
[root@admin ceph]# ceph osd lspools
1 pool0001,

Check which placement group and which OSDs the object name obj0001 maps to in the pool.
[root@admin ceph]# ceph osd map pool0001 obj0001
osdmap e32 pool 'pool0001' (1) object 'obj0001' -> pg 1.c1fd732a (1.2a) -> up ([0,2], p0) acting ([0,2], p0)

Create test files.
[root@admin ceph]# echo "1234567890" > testfile0001.txt
[root@admin ceph]# echo "abcdefghij" > testfile0002.txt

Upload the test files to the pool as objects, which stores them in the object storage.
[root@admin ceph]# rados put myobj01 ./testfile0001.txt --pool=pool0001
[root@admin ceph]# rados put myobj02 ./testfile0002.txt --pool=pool0001
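
Individual objects can also be inspected with rados stat, which prints the object's size and modification time.

[root@admin ceph]# rados -p pool0001 stat myobj01
[root@admin ceph]# rados -p pool0001 stat myobj02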

List the objects stored in the pool. You can see they were stored correctly.
[root@admin ceph]# rados -p pool0001 ls
myobj01
myobj02

Delete the test files.
[root@admin ceph]# rm testfile000*
rm: remove regular file `testfile0001.txt'? y
rm: remove regular file `testfile0002.txt'? y

Retrieve the objects from the object storage.
[root@admin ceph]# rados get myobj01 ./output1.txt --pool=pool0001
[root@admin ceph]# rados get myobj02 ./output2.txt --pool=pool0001
[root@admin ceph]# ls output*
output1.txt  output2.txt

Check the contents of the files retrieved from the object storage. The contents are correct.
[root@admin ceph]# cat output1.txt
1234567890
[root@admin ceph]# cat output2.txt
abcdefghij
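
When testing is finished, the test objects can be removed and the overall pool usage checked again. A short example:

[root@admin ceph]# rados -p pool0001 rm myobj01
[root@admin ceph]# rados -p pool0001 rm myobj02
[root@admin ceph]# rados df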

7 How to delete a pool

Add the following setting on all guest machines.
Setting it only on the MON nodes (s01, s02) should be sufficient, but I did not want the contents of ceph.conf to differ between admin and the MON nodes, so I made every guest machine identical.

[root@admin ~]# vi /etc/ceph/ceph.conf
(snip)
[mon]
mon_allow_pool_delete = true   <= add this line
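
As an alternative to editing /etc/ceph/ceph.conf by hand on every machine, it should also be possible to edit the copy of ceph.conf in the mycluster working directory on admin (adding the same [mon] setting as above) and push it out with ceph-deploy. A sketch, assuming the mycluster directory used earlier still holds the cluster configuration:

[root@admin mycluster]# vi ceph.conf
[root@admin mycluster]# ceph-deploy --overwrite-conf config push admin s01 s02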

Restart ceph-mon.target. If that does not work, reboot the guest machines.
[root@admin ~]# ssh s01 systemctl restart ceph-mon.target
[root@admin ~]# ssh s02 systemctl restart ceph-mon.target
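
Once mon_allow_pool_delete is in effect on the monitors, the pool itself can be deleted. The pool name must be given twice, and everything stored in the pool is lost permanently.

[root@admin ~]# ceph osd pool delete pool0001 pool0001 --yes-i-really-really-mean-it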

Y What to do when errors occur during installation

If an error occurs during installation and things do not go well, running the steps below and then installing again may solve the problem. This procedure is described on the official Ceph site.
The steps below remove the installed packages and configuration files.
I repeated them many times myself.

[root@admin ~]# ceph-deploy purge admin s01 s02
[root@admin ~]# ceph-deploy purgedata admin s01 s02
[root@admin ~]# ceph-deploy forgetkeys

Reboot all the nodes.
[root@admin ~]# shutdown -r now
[root@s01 ~]# shutdown -r now
[root@s02 ~]# shutdown -r now
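
If the OSD disks still carry partitions from a previous attempt, ceph-deploy can wipe them as well. Depending on the ceph-deploy version, the host:disk form below (1.5.x) or separate host and disk arguments (2.x) are expected; this destroys all data on the disks, so double-check the device names.

[root@admin ~]# ceph-deploy disk zap s01:/dev/sdb s01:/dev/sdc
[root@admin ~]# ceph-deploy disk zap s02:/dev/sdb s02:/dev/sdc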

X References

CentOS 7実践ガイド (impress top gear)
CHAPTER 7. MANAGING USERS
Ceph の覚え書きのインデックス
How to Setup Red Hat Ceph Storage on CentOS 7.0
Configuring the Storage Cluster
My adventures with Ceph Storage. Part 5: install Ceph in the Lab
