Introduction
Docker... it's great, isn't it? lol
It's common knowledge that container-based virtualization is light and pleasant to work with,
but there are still cases where a hypervisor-based VM is the better fit.
This post digs a little into a VirtualBox + Vagrant setup, all the way up to running vagrant up.
Environment
Host machine
- macOS: 10.15.7
- VirtualBox: 6.0.22
- Vagrant: 2.2.9
- vagrant-s3auth: 1.3.2
- Packer: 1.4.5
- packer-post-processor-vagrant-s3: 1.4.0
- Ansible: 2.9.2
Guest machine
- CentOS: 7.8
- Nginx: 1.18.0
Final directory layout
operation
├── ansible
│   ├── development
│   │   ├── group_vars
│   │   │   └── all.yml
│   │   ├── host_vars
│   │   │   └── all.yml
│   │   └── hosts.yml
│   ├── production
│   │   └── .gitkeep
│   ├── roles
│   │   ├── nginx
│   │   │   ├── defaults
│   │   │   │   └── main.yml
│   │   │   ├── files
│   │   │   │   ├── development
│   │   │   │   │   └── .gitkeep
│   │   │   │   ├── production
│   │   │   │   │   └── .gitkeep
│   │   │   │   └── staging
│   │   │   │       └── .gitkeep
│   │   │   ├── handlers
│   │   │   │   └── main.yml
│   │   │   ├── meta
│   │   │   │   └── main.yml
│   │   │   ├── tasks
│   │   │   │   └── main.yml
│   │   │   ├── templates
│   │   │   │   ├── development
│   │   │   │   │   └── .gitkeep
│   │   │   │   ├── production
│   │   │   │   │   └── .gitkeep
│   │   │   │   └── staging
│   │   │   │       └── .gitkeep
│   │   │   ├── tests
│   │   │   │   ├── inventory
│   │   │   │   └── test.yml
│   │   │   └── vars
│   │   │       └── main.yml
│   ├── site.yml
│   └── staging
│       └── .gitkeep
├── packer
│   ├── development
│   │   ├── http
│   │   │   └── kickstart.ks
│   │   └── template.json
│   ├── production
│   │   └── .gitkeep
│   └── staging
│       └── .gitkeep
└── vagrant
    └── Vagrantfile
The usual routine
$ mkdir operation/vagrant && cd operation/vagrant
$ vagrant init
$ echo -e 'Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.8"
  config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/vagrant", disabled: true
end
' > Vagrantfile
$ vagrant up
Vagrant absorbs VirtualBox's complicated launch options and, following the template above, brings the VM up with a single command.
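For reference, vagrant up is roughly standing in for a series of VBoxManage calls like the following (a loose sketch with a hypothetical VM/image name, not the exact commands Vagrant issues):
$ VBoxManage import golden.ova --vsys 0 --vmname dev-box                   # register the machine (hypothetical image file)
$ VBoxManage modifyvm dev-box --natpf1 "http,tcp,127.0.0.1,8080,,80"       # forwarded_port 80 -> 8080
$ VBoxManage modifyvm dev-box --nic2 hostonly --hostonlyadapter2 vboxnet0  # private_network
$ VBoxManage startvm dev-box --type headless                               # boot without a GUI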
Manual provisioning
$ vagrant ssh
$ echo -e '[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
' | sudo tee /etc/yum.repos.d/nginx.repo
$ sudo yum install -y nginx
$ sudo systemctl enable nginx && sudo systemctl start nginx
SSH into the virtual machine and run the commands by hand.
Simple and cheap to learn, but it leaves room for human error.
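A quick sanity check from the host, using the forwarded port and private IP defined in the Vagrantfile above (nginx should answer on both if the steps succeeded):
$ curl -I http://localhost:8080
$ curl -I http://192.168.33.10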
Automated provisioning (Ansible)
$ mkdir operation/ansible && \
mkdir operation/ansible/development && \
mkdir operation/ansible/development/host_vars && \
touch operation/ansible/development/host_vars/all.yml && \
mkdir operation/ansible/development/group_vars && \
touch operation/ansible/development/group_vars/all.yml && \
touch operation/ansible/development/hosts.yml && \
mkdir operation/ansible/staging && \
touch operation/ansible/staging/.gitkeep && \
mkdir operation/ansible/production && \
touch operation/ansible/production/.gitkeep && \
mkdir operation/ansible/roles
$ ansible-galaxy init --init-path=operation/ansible/roles nginx
$ mkdir operation/ansible/roles/nginx/files/development && \
touch operation/ansible/roles/nginx/files/development/.gitkeep && \
mkdir operation/ansible/roles/nginx/files/production && \
touch operation/ansible/roles/nginx/files/production/.gitkeep && \
mkdir operation/ansible/roles/nginx/files/staging && \
touch operation/ansible/roles/nginx/files/staging/.gitkeep && \
mkdir operation/ansible/roles/nginx/templates/development && \
touch operation/ansible/roles/nginx/templates/development/.gitkeep && \
mkdir operation/ansible/roles/nginx/templates/production && \
touch operation/ansible/roles/nginx/templates/production/.gitkeep && \
mkdir operation/ansible/roles/nginx/templates/staging && \
touch operation/ansible/roles/nginx/templates/staging/.gitkeep
$ echo -e 'all:
  hosts:
    localhost:
      ansible_connection: local
' > operation/ansible/development/hosts.yml
$ echo -e '- hosts: all
  become: yes
  roles:
    - nginx
' > operation/ansible/site.yml
$ echo -e '- name: add nginx repository
  yum_repository:
    file: nginx
    name: nginx-stable
    description: nginx stable repo
    baseurl: http://nginx.org/packages/centos/$releasever/$basearch
    gpgcheck: yes
    gpgkey: https://nginx.org/keys/nginx_signing.key
    enabled: yes
- name: install nginx
  yum:
    name: nginx
- name: service enable and start
  service:
    name: nginx
    state: started
    enabled: yes
' > operation/ansible/roles/nginx/tasks/main.yml
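If Ansible is also installed on the host (it is in the environment list above), a syntax check catches indentation and spelling mistakes before the first boot; this is an optional convenience, not part of the original flow:
$ ansible-playbook --syntax-check -i operation/ansible/development operation/ansible/site.yml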
$ echo -e 'Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.8"
  config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder "../ansible", "/home/vagrant/ansible"
  config.vm.provision :ansible_local do |ansible|
    ansible.install_mode = "pip"
    ansible.playbook = "/home/vagrant/ansible/site.yml"
    ansible.inventory_path = "/home/vagrant/ansible/development"
    ansible.limit = "all"
  end
end
' > operation/vagrant/Vagrantfile
$ cd operation/vagrant && \
vagrant halt && \
vagrant destroy -f && \
rm -rf .vagrant && \
vagrant up
The configuration management tool is installed inside the VM, which then provisions itself.
It's a setup you see everywhere, yet plenty of teams still burn time troubleshooting failed provisioning runs...
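When the playbook changes or a run fails midway, there is no need to destroy and rebuild the VM every time; re-running only the provisioner is usually enough:
$ vagrant provision               # re-run the ansible_local provisioner on the running VM
$ vagrant reload --provision      # restart the VM and provision in one go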
Problems with a distributable development environment
- Provisioning runs separately in every developer's environment, so the odds of unexpected breakage go up.
- The results of troubleshooting are hard to propagate back to everyone.
From distributable to deliverable
Instead of distributing a template and having each person build their own environment,
let's switch to a flow where a pre-built development environment is delivered and simply booted.
Create and version a golden image
$ mkdir operation/packer && \
mkdir operation/packer/development && \
mkdir operation/packer/development/http && \
touch operation/packer/development/http/kickstart.ks && \
touch operation/packer/development/template.json && \
mkdir operation/packer/production && \
touch operation/packer/production/.gitkeep && \
mkdir operation/packer/staging && \
touch operation/packer/staging/.gitkeep
$ echo -e '{
"variables": {
"environment": "development",
"vmName": "golden-image",
"vmVersion": "1.0.0",
"isoUrl": "http://ftp.riken.jp/Linux/centos/7.8.2003/isos/x86_64/CentOS-7-x86_64-Minimal-2003.iso",
"isoChecksum": "659691c28a0e672558b003d223f83938f254b39875ee7559d1a4a14c79173193",
"isoChecksumType": "sha256",
"guestAdditionsIsoFileName": "VBoxGuestAdditions.iso",
"diskSize": "10240",
"homeDirectory": "/home/vagrant",
"userName": "vagrant",
"passWord": "vagrant"
},
"builders": [
{
"type": "virtualbox-iso",
"vm_name": "{{user `vmName`}}",
"guest_os_type": "RedHat_64",
"iso_url": "{{user `isoUrl`}}",
"iso_checksum": "{{user `isoChecksum`}}",
"iso_checksum_type": "{{user `isoChecksumType`}}",
"http_directory": "http",
"guest_additions_path": "{{user `homeDirectory`}}/{{user `guestAdditionsIsoFileName`}}",
"boot_command": "<tab> text ks=http://{{.HTTPIP}}:{{.HTTPPort}}/kickstart.ks <enter>",
"shutdown_command": "echo 'vagrant' | sudo -S shutdown -h now",
"disk_size": "{{user `diskSize`}}",
"vboxmanage": [
["modifyvm", "{{ .Name }}", "--cpus", "2"],
["modifyvm", "{{ .Name }}", "--memory", "2048"],
["modifyvm", "{{ .Name }}", "--chipset", "ich9"],
["modifyvm", "{{ .Name }}", "--ioapic", "on"],
["modifyvm", "{{ .Name }}", "--rtcuseutc", "on"],
["modifyvm", "{{ .Name }}", "--pae", "on"],
["modifyvm", "{{ .Name }}", "--hwvirtex", "on"],
["modifyvm", "{{ .Name }}", "--nestedpaging", "on"],
["modifyvm", "{{ .Name }}", "--largepages", "on"],
["modifyvm", "{{ .Name }}", "--paravirtprovider", "kvm"],
["modifyvm", "{{ .Name }}", "--vram", "9"],
["modifyvm", "{{ .Name }}", "--vrde", "off"],
["modifyvm", "{{ .Name }}", "--graphicscontroller", "vboxsvga"],
["modifyvm", "{{ .Name }}", "--audio", "none"],
["storagectl", "{{ .Name }}", "--name", "IDE Controller", "--controller", "ICH6"]
],
"headless": true,
"ssh_wait_timeout": "10000s",
"ssh_username": "{{user `userName`}}",
"ssh_password": "{{user `passWord`}}"
}
],
"provisioners": [
{
"type": "ansible",
"playbook_file": "../../ansible/site.yml",
"inventory_directory": "../../ansible/{{user `environment`}}"
}
],
"post-processors": [
[
{
"type": "vagrant",
"compression_level": 9,
"output": "{{user `vmName`}}.box"
},
{
"type": "vagrant-s3",
"region": "ap-northeast-1",
"bucket": "{{user `environment`}}-virtual-machine-image",
"acl": "private",
"profile": "default",
"manifest": "{{user `vmName`}}/manifest.json",
"box_dir": "{{user `vmName`}}",
"box_name": "{{user `vmName`}}.box",
"version": "{{user `vmVersion`}}"
}
]
]
}
' > operation/packer/development/template.json
$ echo -e '# Install OS instead of upgrade
install
# Use CDROM installation media
cdrom
# System language
lang ja_JP.UTF-8
# Keyboard layouts
keyboard jp106
# network setting
network --bootproto=dhcp --onboot=on --device=eth0
# Firewall configuration
firewall --disabled
# SELinux configuration
selinux --disabled
# System Timezone
timezone Asia/Tokyo --utc
# System bootloader configuration
bootloader --location=mbr
# Text mode install
text
# Do not configure the X Window System
skipx
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information
autopart
# System authorization information
auth --useshadow --passalgo=sha512 --kickstart
# Root password
rootpw --iscrypted $1$we3.LxeP$ko6NFF4j8D02bbqgCS80r.
# Create user
user --name=vagrant --plaintext --password vagrant
# First boot
firstboot --disabled
# Machine reboot
reboot --eject
%post
echo "%vagrant ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant
yum update -y --exclude=centos*
%end
' > operation/packer/development/http/kickstart.ks
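If the pykickstart tools happen to be available (on a Linux host or in a throwaway container), the Kickstart file can be linted before baking it into an image; this is an optional check, not something the original flow requires:
$ ksvalidator operation/packer/development/http/kickstart.ks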
$ : > operation/ansible/development/hosts.yml
$ echo -e '- hosts: all
  become: yes
  roles:
    - common
    - {
        role: guest_additions,
        when: targetEnvironment == "development"
      }
    - nginx
' > operation/ansible/site.yml
$ echo -e 'targetEnvironment: development
userName: vagrant
userGroup: vagrant
homeDirectory: /home/vagrant
' > operation/ansible/development/group_vars/all.yml
$ ansible-galaxy init --init-path=operation/ansible/roles common
$ mkdir operation/ansible/roles/common/files/development && \
touch operation/ansible/roles/common/files/development/sshd_config && \
mkdir operation/ansible/roles/common/files/production && \
touch operation/ansible/roles/common/files/production/.gitkeep && \
mkdir operation/ansible/roles/common/files/staging && \
touch operation/ansible/roles/common/files/staging/.gitkeep && \
mkdir operation/ansible/roles/common/templates/development && \
touch operation/ansible/roles/common/templates/development/.gitkeep && \
mkdir operation/ansible/roles/common/templates/production && \
touch operation/ansible/roles/common/templates/production/.gitkeep && \
mkdir operation/ansible/roles/common/templates/staging && \
touch operation/ansible/roles/common/templates/staging/.gitkeep
$ echo -e '- name: create .ssh
  file:
    path: "{{ homeDirectory }}/.ssh"
    state: directory
    owner: "{{ userName }}"
    group: "{{ userGroup }}"
    mode: 0700
- name: set authorized_keys
  copy:
    src: "{{ targetEnvironment }}/authorized_keys"
    dest: "{{ homeDirectory }}/.ssh"
    owner: "{{ userName }}"
    group: "{{ userGroup }}"
    mode: 0600
- name: set sshd_config
  copy:
    src: "{{ targetEnvironment }}/sshd_config"
    dest: /etc/ssh
    owner: root
    group: root
    mode: 0600
' > operation/ansible/roles/common/tasks/main.yml
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub -O operation/ansible/roles/common/files/development/authorized_keys
$ echo -e '# $OpenBSD: sshd_config,v 1.100 2016/08/15 12:32:04 naddy Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/local/bin:/usr/bin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
# If you want to change the port on a SELinux system, you have to tell
# SELinux about this change.
# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER
#
# Port 22
# AddressFamily any
# ListenAddress 0.0.0.0
# ListenAddress ::
HostKey /etc/ssh/ssh_host_rsa_key
# HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
# RekeyLimit default none
# Logging
# SyslogFacility AUTH
SyslogFacility AUTHPRIV
# LogLevel INFO
# Authentication:
# LoginGraceTime 2m
# PermitRootLogin yes
# StrictModes yes
# MaxAuthTries 6
# MaxSessions 10
PubkeyAuthentication yes
# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys
# AuthorizedPrincipalsFile none
# AuthorizedKeysCommand none
# AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
# HostbasedAuthentication no
# Change to yes if you don"t trust ~/.ssh/known_hosts for
# HostbasedAuthentication
# IgnoreUserKnownHosts no
# Don"t read the user"s ~/.rhosts and ~/.shosts files
# IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
# PasswordAuthentication yes
# PermitEmptyPasswords no
PasswordAuthentication yes
# Change to no to disable s/key passwords
# ChallengeResponseAuthentication yes
ChallengeResponseAuthentication no
# Kerberos options
# KerberosAuthentication no
# KerberosOrLocalPasswd yes
# KerberosTicketCleanup yes
# KerberosGetAFSToken no
# KerberosUseKuserok yes
# GSSAPI options
GSSAPIAuthentication no
GSSAPICleanupCredentials no
# GSSAPIStrictAcceptorCheck yes
# GSSAPIKeyExchange no
# GSSAPIEnablek5users no
# Set this to "yes" to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to "no".
# WARNING: "UsePAM no" is not supported in Red Hat Enterprise Linux and may cause several
# problems.
UsePAM yes
# AllowAgentForwarding yes
# AllowTcpForwarding yes
# GatewayPorts no
X11Forwarding yes
# X11DisplayOffset 10
# X11UseLocalhost yes
# PermitTTY yes
# PrintMotd yes
# PrintLastLog yes
# TCPKeepAlive yes
# UseLogin no
# UsePrivilegeSeparation sandbox
# PermitUserEnvironment no
# Compression delayed
# ClientAliveInterval 0
# ClientAliveCountMax 3
# ShowPatchLevel no
# UseDNS yes
# PidFile /var/run/sshd.pid
# MaxStartups 10:30:100
# PermitTunnel no
# ChrootDirectory none
# VersionAddendum none
# no default banner path
# Banner none
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server
# Example of overriding settings on a per-user basis
# Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server
UseDNS no
' > operation/ansible/roles/common/files/development/sshd_config
$ ansible-galaxy init --init-path=operation/ansible/roles guest_additions
$ mkdir operation/ansible/roles/guest_additions/files/development && \
touch operation/ansible/roles/guest_additions/files/development/.gitkeep && \
mkdir operation/ansible/roles/guest_additions/files/production && \
touch operation/ansible/roles/guest_additions/files/production/.gitkeep && \
mkdir operation/ansible/roles/guest_additions/files/staging && \
touch operation/ansible/roles/guest_additions/files/staging/.gitkeep && \
mkdir operation/ansible/roles/guest_additions/templates/development && \
touch operation/ansible/roles/guest_additions/templates/development/.gitkeep && \
mkdir operation/ansible/roles/guest_additions/templates/production && \
touch operation/ansible/roles/guest_additions/templates/production/.gitkeep && \
mkdir operation/ansible/roles/guest_additions/templates/staging && \
touch operation/ansible/roles/guest_additions/templates/staging/.gitkeep
$ echo -e '- name: get kernel version
  shell: uname -r
  register: kernelVersion
- name: install guest additions requirement
  yum:
    name: "{{ packages }}"
  vars:
    packages: "{{ guestAdditionsRequirementPackageList }}"
- name: create mount directory
  file:
    path: "{{ guestAdditionsPath }}"
    state: directory
    owner: root
    group: root
    mode: 0700
- name: mount guest additions
  mount:
    path: "{{ guestAdditionsPath }}"
    src: "{{ homeDirectory }}/{{ guestAdditionsIsoFileName }}"
    state: mounted
    opts: ro,loop
    fstype: iso9660
- name: install guest additions
  shell: "sh {{ guestAdditionsInstallerFileName }}"
  args:
    chdir: "{{ guestAdditionsPath }}"
- name: unmount guest additions
  mount:
    path: "{{ guestAdditionsPath }}"
    src: "{{ homeDirectory }}/{{ guestAdditionsIsoFileName }}"
    state: absent
- name: service start and enable vboxadd
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  with_items: "{{ guestAdditionsService }}"
' > operation/ansible/roles/guest_additions/tasks/main.yml
$ echo -e 'guestAdditionsRequirementPackageList:
  - gcc
  - make
  - perl
  - bzip2
  - "kernel-devel-{{ kernelVersion.stdout }}"
guestAdditionsPath: /tmp/guest_additions
guestAdditionsIsoFileName: VBoxGuestAdditions.iso
guestAdditionsInstallerFileName: VBoxLinuxAdditions.run
guestAdditionsService:
  - vboxadd
  - vboxadd-service
' > operation/ansible/roles/guest_additions/vars/main.yml
$ cd operation/packer/development && packer build template.json
Note: to stay consistent with the "Automated provisioning (Ansible)" section above, using ansible-local here would have been the better choice... oh well.
Starting from the OS ISO image, Kickstart sets the virtual machine up automatically.
Once it boots, the configuration management tool provisions it,
and the result is exported as a Vagrant box file and uploaded to S3.
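Two standard Packer options are handy here (nothing specific to this setup): validate catches template mistakes before the lengthy build, and -var overrides a variable such as vmVersion so a new box version can be published without editing template.json:
$ cd operation/packer/development
$ packer validate template.json
$ packer build -var 'vmVersion=1.0.1' template.json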
Boot the delivered development environment
$ cd operation/vagrant
$ vagrant halt && \
vagrant destroy -f && \
rm -rf .vagrant
$ echo -e 'ENV.delete_if { |name| name.start_with?("AWS_") }
ENV["AWS_PROFILE"] = "default"
Vagrant.configure("2") do |config|
  config.vm.box = "golden-image.box"
  config.vm.box_url = "https://s3-ap-northeast-1.amazonaws.com/development-virtual-machine-image/golden-image/manifest.json"
  config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider :virtualbox do |vbox|
    vbox.name = "golden-image"
  end
end
' > Vagrantfile
$ vagrant up
Vagrant reads the manifest on S3 and downloads the box file it points to before booting.
If no version is specified, the latest version is downloaded; if a version is specified, that version of the box is fetched instead.
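Since the manifest carries version metadata, the usual Vagrant box-versioning commands can be used as well, and config.vm.box_version can be set in the Vagrantfile to pin everyone to a known-good image; the vagrant-s3auth plugin from the host environment list is what authenticates these S3 downloads:
$ vagrant plugin install vagrant-s3auth      # once per host machine
$ vagrant box outdated                       # compare the local box against the manifest
$ vagrant box update                         # pull the newer version if one exists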
Pros
- You only boot a box whose configuration is already baked in, so there is little room for errors to creep in.
- Troubleshooting results are easy to roll out to everyone.
- The Dev and Ops roles become clear-cut, so each side can fight on the ground it knows best.
Cons
- It assumes AWS S3 is available, so a small running cost is incurred.
- It relies on third-party plugins, so you are tied to how actively they are maintained.
Afterword
When I built this setup I had no idea the OS installation could be automated with a configuration file, so I got thoroughly stuck... lol
Incidentally, on Ubuntu you upload a preseed file to the virtual machine and let that drive the setup.
You can pull off much the same thing with Docker, so it's fun to play around with either.