Overview
The Docker まとめ article turned into chaos and started returning frequent 502 errors, so I split it up. This article covers Docker. I am still adding topics as needed and have not yet reviewed everything written here. It is chaos.
Kubernetes
- Being migrated to the Kubernetes覚え書き article
Setup
Creating a Pod
### Create a YAML file for the httpd container
$ vi pod-httpd.yml
================================================
apiVersion: v1
kind: Pod
metadata:
name: httpd
labels:
app: httpd
spec:
containers:
- name: httpd
image: httpd
ports:
- containerPort: 8080
================================================
### Create the Pod
$ kubectl create -f pod-httpd.yml
### Check the Pod
$ kubectl get pod
================================================
NAME READY STATUS RESTARTS AGE
httpd 1/1 Running 0 18m
================================================
$ kubectl get pod httpd -o yaml
================================================
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: 2016-12-23T06:42:17Z
labels:
app: httpd
name: httpd
namespace: default
resourceVersion: "31099"
selfLink: /api/v1/namespaces/default/pods/httpd
uid: f27323ac-c8da-11e6-8745-0050568f6b90
spec:
containers:
- image: httpd
imagePullPolicy: Always
name: httpd
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-j83r0
readOnly: true
dnsPolicy: ClusterFirst
host: 100.127.108.153
nodeName: 100.127.108.153
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-j83r0
secret:
secretName: default-token-j83r0
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-12-23T06:42:17Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2016-12-23T06:43:43Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2016-12-23T06:42:17Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://b2b5e4dc3923a37ab554e...d08
image: httpd
imageID: docker-pullable://docker.io/httpd@sha256:0d817a924bed...9cebfe1382d024287154c99
lastState: {}
name: httpd
ready: true
restartCount: 0
state:
running:
startedAt: 2016-12-23T06:43:42Z
hostIP: 100.127.108.153 ### IP of the node the container is running on
phase: Running
podIP: 172.17.67.2 ### IP assigned to the container
startTime: 2016-12-23T06:42:17Z
================================================
### Check the container on the node
[node2(100.127.108.153)]
$ sudo docker ps -a
================================================
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b2b5e4dc3923 httpd "httpd-foreground" 9 minutes ago Up 9 minutes k8s_httpd.569c2eeb_httpd_default_f27323ac-c8da-11e6-8745-0050568f6b90_f33566a0
d4a90d399dfc registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 10 minutes ago Up 10 minutes k8s_POD.24f70ba9_httpd_default_f27323ac-c8da-11e6-8745-0050568f6b90_5314642b
================================================
### From the master, use curl to confirm Apache is running
[master(100.127.108.151)]
$ curl -I http://172.17.67.2
================================================
HTTP/1.1 200 OK
Date: Fri, 23 Dec 2016 07:06:21 GMT
Server: Apache/2.4.25 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
================================================
### Delete the Pod
$ kubectl delete pod httpd
================================================
pod "httpd" deleted
================================================
Assigning a specific IP address to a Pod
### Create a YAML file for the Service
$ vi demo-service.yml
================================================
apiVersion: v1
kind: Service
metadata:
name: demo
spec:
ports:
- port: 80
selector:
app: httpd
================================================
### Create the Service
$ kubectl create -f demo-service.yml
================================================
service "demo" created
================================================
### Check the Service
$ kubectl get service
================================================
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo 10.254.199.250 <none> 8080/TCP 2m
kubernetes 10.254.0.1 <none> 443/TCP 1d
================================================
$ kubectl get service -o yaml
================================================
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2016-12-23T07:24:36Z
name: demo
namespace: default
resourceVersion: "31605"
selfLink: /api/v1/namespaces/default/services/demo
uid: dbb7baa5-c8e0-11e6-8745-0050568f6b90
spec:
clusterIP: 10.254.199.250
portalIP: 10.254.199.250
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: httpd
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2016-12-21T11:15:00Z
labels:
component: apiserver
provider: kubernetes
name: kubernetes
namespace: default
resourceVersion: "10"
selfLink: /api/v1/namespaces/default/services/kubernetes
uid: b6dc779f-c76e-11e6-9fd6-0050568f6b90
spec:
clusterIP: 10.254.0.1
portalIP: 10.254.0.1
ports:
- name: https
port: 443
protocol: TCP
targetPort: 443
sessionAffinity: ClientIP
type: ClusterIP
status:
loadBalancer: {}
kind: List
metadata: {}
================================================
### Use curl to confirm the service is reachable via the ClusterIP
$ curl -I http://10.254.199.250
================================================
HTTP/1.1 200 OK
Date: Fri, 23 Dec 2016 07:06:21 GMT
Server: Apache/2.4.25 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
================================================
### Delete the Service
$ kubectl delete service demo
================================================
service "demo" deleted
================================================
Summary of configuration values
Pod
### Pod
$ vi demo-pod-nginx.yaml
================================================
apiVersion: v1
kind: Pod
metadata:
name: metadata-name-pod ### This string is included in the NAMES of the Pod's containers
labels:
app: metadata-labels-app-nginx ### Label; the identifier other resources use to reference this Pod
namespace: default ### Namespace; resources in different namespaces cannot reference each other
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent ### Pull only if the image is not present locally
#imagePullPolicy: Always ### Pull every time
#imagePullPolicy: Never ### Never pull
name: spec-containers-name-pod ### This string is included in the container's NAMES
ports:
- containerPort: 80 ### Port for accessing the container
protocol: TCP
dnsPolicy: ClusterFirst
host: 100.127.108.159 ### Node to start the container on; if omitted, a suitable node is chosen automatically
restartPolicy: Always ### Always restart the container on failure (used with settings such as LivenessProbe)
#restartPolicy: OnFailure
#restartPolicy: Never
================================================
$ kubectl create -f demo-pod-nginx.yaml
### Check the pods
$ kubectl get pods
================================================
NAME READY STATUS RESTARTS AGE
metadata-name-pod 1/1 Running 0 2m
================================================
$ kubectl get pods -o yaml | grep IP
================================================
hostIP: 100.127.108.159 ### IP address of the node (set via host) where the Pod is running
podIP: 172.17.83.3 ### IP address assigned to the Pod
================================================
### Check on the node where the container is running
$ sudo docker ps -a
================================================
807d446ae5ae nginx "nginx -g 'daemon off" 8 minutes ago Up 8 minutes k8s_spec-containers-name-pod.b04037fb_metadata-name-pod_default_bc86cd07-cb3b-11e6-9dda-0050568f6b90_ea048fcd ### the nginx container
b7912a28b8fb registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 8 minutes ago Up 8 minutes k8s_POD.a8590b41_metadata-name-pod_default_bc86cd07-cb3b-11e6-9dda-0050568f6b90_e1f2fbfa ### the Pod infrastructure container
================================================
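Since the restartPolicy comment above mentions LivenessProbe, here is a minimal sketch of a liveness probe added to the same nginx Pod. The probe path and timing values are illustrative assumptions, not part of the original setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metadata-name-pod
  labels:
    app: metadata-labels-app-nginx
spec:
  containers:
  - name: spec-containers-name-pod
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:            ### if this probe fails, kubelet restarts the container per restartPolicy
      httpGet:
        path: /               ### illustrative: probe the nginx default page
        port: 80
      initialDelaySeconds: 10 ### illustrative: wait before the first probe
      periodSeconds: 15       ### illustrative: probe interval
  restartPolicy: Always
```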
Service
### Service
$ vi demo-service-nginx.yaml
================================================
apiVersion: v1
kind: Service
metadata:
name: sv-metadata-name-service ### Name of the Service
labels:
app: metadata-labels-app-nginx ### Use the same label as the Pod
namespace: default ### Use the same namespace as the Pod
spec:
ports:
- nodePort: 30287 ### Port for reaching the Pod from outside via a node's host IP, e.g. curl -I http://100.127.108.153:30287
port: 8080 ### Port for connecting to the Pod locally (from outside Kubernetes)
protocol: TCP
targetPort: 80 ### The port the container (Pod) actually listens on
name: http
selector:
app: metadata-labels-app-nginx ### Use the same label as the Pod
type: NodePort ### Binds the host IPs (local/global) to the Kubernetes container IPs
# type: LoadBalancer ### When used as a load balancer
externalIPs:
- 100.127.108.141 ### Set an arbitrary address for external access
================================================
$ kubectl create -f demo-service-nginx.yaml
================================================
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30287) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "sv-metadata-name-service" created
================================================
### Check the services
$ kubectl get services
================================================
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 4d
sv-metadata-name-service 10.254.219.218 100.127.108.141 8080/TCP 11m
================================================
$ kubectl get services sv-metadata-name-service -o yaml
================================================
...
spec:
clusterIP: 10.254.219.218 ### Cluster IP
deprecatedPublicIPs:
- 100.127.108.141
externalIPs:
- 100.127.108.141 ### IP for external access; an arbitrary address, like a VIP
portalIP: 10.254.219.218
ports:
- nodePort: 30287
port: 8080
protocol: TCP
targetPort: 80
selector:
app: metadata-labels-app-nginx
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
================================================
### Connecting from a server unrelated to Kubernetes
### curl -I http://<Pod hostIP (node IP)>:<Service nodePort>
$ curl -I http://100.127.108.159:30287
================================================
HTTP/1.1 200 OK
Server: nginx/1.11.7
...
================================================
### The master or any node IP works as well
$ curl -I http://100.127.108.151:30287
$ curl -I http://100.127.108.152:30287
$ curl -I http://100.127.108.153:30287
================================================
HTTP/1.1 200 OK
Server: nginx/1.11.7
...
================================================
### Connect to the container directly from inside the Kubernetes network
$ curl -I http://172.17.83.3
================================================
HTTP/1.1 200 OK
Server: nginx/1.11.7
================================================
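The commented-out `type: LoadBalancer` line above would expand to a manifest like the following. This is only a sketch: it assumes a cloud provider (or other load-balancer integration) that can provision the external address, which the on-premise cluster in this article does not have, and the service name is hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sv-metadata-name-lb         ### hypothetical name for this sketch
spec:
  type: LoadBalancer                ### asks the provider for an external load balancer
  ports:
  - port: 8080                      ### port exposed by the load balancer
    targetPort: 80                  ### port the nginx container listens on
    protocol: TCP
  selector:
    app: metadata-labels-app-nginx  ### same label as the Pod, as in the NodePort example
```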
Troubleshooting
kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
$ sudo vi /etc/kubernetes/kubelet
================================================
KUBELET_ARGS="--cluster-dns=192.168.33.11 --cluster-domain=cluster.local"
================================================
References
- Dockerを管理するKubernetesの基本的な動作や仕組みとは? Kubernetesを触ってみた。第20回 PaaS勉強会
- kubernetesによるDockerコンテナ管理入門
- Kubernetesにまつわるエトセトラ(主に苦労話)
Rancher
- Kubernetes on Rancherハンズオン (from page 48)
Memo
kubectl run my-nginx --image=nginx --port=80
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2016-12-26T06:28:41Z
labels:
run: my-nginx
name: my-nginx
namespace: default
resourceVersion: "4674"
selfLink: /api/v1/namespaces/default/services/my-nginx
uid: 8b34d5ba-cb34-11e6-9dda-0050568f6b90
spec:
clusterIP: 10.254.108.202
portalIP: 10.254.108.202
ports:
- nodePort: 30946
port: 80
protocol: TCP
targetPort: 80
selector:
run: my-nginx
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer: {}
kind: List
metadata: {}
- apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"my-nginx-2494149703","uid":"7b8de09d-cb34-11e6-9dda-0050568f6b90","apiVersion":"extensions","resourceVersion":"4646"}}
creationTimestamp: 2016-12-26T06:28:15Z
generateName: my-nginx-2494149703-
labels:
pod-template-hash: "2494149703"
run: my-nginx
name: my-nginx-2494149703-npb5j
namespace: default
resourceVersion: "4665"
selfLink: /api/v1/namespaces/default/pods/my-nginx-2494149703-npb5j
uid: 7b8ed1d3-cb34-11e6-9dda-0050568f6b90
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: my-nginx
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-j83r0
readOnly: true
dnsPolicy: ClusterFirst
host: 100.127.108.159
nodeName: 100.127.108.159
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-j83r0
secret:
secretName: default-token-j83r0
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-12-26T06:28:15Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2016-12-26T06:28:24Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2016-12-26T06:28:15Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://b185d13eddf67e7e08457355b5900a5244f04722b1198f2cd6e06e7b40070463
image: nginx
imageID: docker-pullable://docker.io/nginx@sha256:2a07a07e5bbf62e7b583cbb5257357c7e0ba1a8e9650e8fa76d999a60968530f
lastState: {}
name: my-nginx
ready: true
restartCount: 0
state:
running:
startedAt: 2016-12-26T06:28:23Z
hostIP: 100.127.108.159
phase: Running
podIP: 172.17.83.3
startTime: 2016-12-26T06:28:15Z
kind: List
metadata: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: 2016-12-26T06:28:15Z
generation: 2
labels:
run: my-nginx
name: my-nginx
namespace: default
resourceVersion: "4666"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/my-nginx
uid: 7b853f84-cb34-11e6-9dda-0050568f6b90
spec:
replicas: 1
selector:
matchLabels:
run: my-nginx
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: my-nginx
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: my-nginx
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
observedGeneration: 2
replicas: 1
updatedReplicas: 1
Integrating Ansible with Docker
Install Ansible 2.0 or later.
Prerequisites
- Ansible installation itself is omitted here
### Install the required package
### $ sudo pip install docker-py
Docker host connection test
When the machine running Ansible (127.0.0.1) is itself the Docker host
### Ansible configuration
$ vi hosts
================================================
[docker_hosts]
127.0.0.1
[containers]
webserver01
================================================
$ vi ansible.cfg
================================================
[defaults]
host_key_checking = False
[ssh_connection]
ssh_args = -o ForwardAgent=yes
================================================
### Test running ansible
$ ansible -i hosts 127.0.0.1 -k -c paramiko -m ping
================================================
SSH password:
127.0.0.1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
================================================
Creating a Docker container with Ansible
- The docker module has no equivalent of docker run's -i option, so docker attach does not work
- Logging in with docker exec still works, so this may not matter
- In my environment I skip the module and simply call docker run from a script
- To handle "Docker API Error: client is newer than server (client API version: 1.22, server API version: 1.21)", set docker_api_version at the end of the playbook to the server API version from the error message (1.21 here)
### Create the playbook
$ vi create_container.yml
================================================
- hosts: docker_hosts
become: yes
tasks:
- name: deploy centos container
docker: image=centos:centos6 name=webserver01 ports=8081:80 expose=80 tty=yes docker_api_version=1.21
================================================
### Run the playbook
### (specify the user (vagrant) used against the Docker host)
$ ansible-playbook -i hosts create_container.yml -u vagrant -k -K -c paramiko
### Confirm the container was created
$ sudo docker ps
================================================
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fafe9df90d6 centos:centos6 "/bin/bash" About a minute ago Up About a minute 0.0.0.0:8081->80/tcp webserver01
================================================
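The key=value task above can also be written in YAML dictionary form, which reads more easily. A sketch with the same module and parameters (note that the old `docker` module was later replaced by `docker_container` in newer Ansible releases):

```yaml
- hosts: docker_hosts
  become: yes
  tasks:
    - name: deploy centos container
      docker:
        image: centos:centos6
        name: webserver01
        ports: "8081:80"            ### host port 8081 -> container port 80
        expose: 80
        tty: yes
        docker_api_version: "1.21"  ### pin to the server API version from the error above
```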
Provisioning the container
### Create the playbook
$ vi provisioning.yml
================================================
- hosts: containers
become: yes
connection: docker
tasks:
- name: Apacheのインストール
yum: name=httpd state=installed
- name: Apacheを起動する
service: name=httpd enabled=yes state=started
================================================
### Run the playbook
### (specify the user (root) used inside the container)
$ sudo ansible-playbook -i hosts provisioning.yml -u root -c paramiko
================================================
PLAY [containers] **************************************************************
TASK [setup] *******************************************************************
ok: [webserver01]
TASK [Apacheのインストール] ***********************************************************
changed: [webserver01]
TASK [Apacheを起動する] *************************************************************
changed: [webserver01]
PLAY RECAP *********************************************************************
webserver01 : ok=3 changed=2 unreachable=0 failed=0
================================================
### Run curl from a host outside the Docker host
$ curl http://192.168.56.180:8081 | head
================================================
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
101 4961 101 4961 0 0 1188k 0 --:--:-- --:--:-- --:--:-- 4844k
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<head>
<title>Apache HTTP Server Test Page powered by CentOS</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<style type="text/css">
body {
background-color: #fff;
color: #000;
font-size: 0.9em;
font-family: sans-serif,helvetica;
================================================
ansible-container (under evaluation)
- ansible-container(github)
- Builds and runs Docker container images
- The inventory is generated automatically
- Connects via the docker connection plugin instead of ssh
### Install
$ git clone https://github.com/ansible/ansible-container.git
$ cd ansible-container
$ sudo python setup.py install
### Try the sample
$ cd sample
$ sudo ansible-container build
================================================
No DOCKER_HOST environment variable found. Assuming UNIX socket at /var/run/docker.sock
(Re)building the Ansible Container image.
Building Docker Engine context...
Starting Docker build of Ansible Container image (please be patient)... ### this step takes a while
================================================
docker-machine
Using VirtualBox on CentOS 7 (did not work)
Installing VirtualBox
### Install required packages
$ sudo yum -y install kernel-devel kernel-headers dkms
$ sudo yum -y groupinstall "Development Tools"
$ sudo yum update
### Import Oracle's public key
$ wget http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc
$ sudo rpm --import oracle_vbox.asc
### Add the repository
$ sudo wget http://download.virtualbox.org/virtualbox/rpm/el/virtualbox.repo -O /etc/yum.repos.d/virtualbox.repo
### Install VirtualBox
$ sudo yum install VirtualBox-5.0
### Start VirtualBox
$ sudo systemctl enable vboxdrv.service
$ sudo systemctl restart vboxdrv.service
$ docker-machine create --driver virtualbox default
================================================
Creating CA: /home/jsd/.docker/machine/certs/ca.pem
Creating client certificate: /home/jsd/.docker/machine/certs/cert.pem
Running pre-create checks...
Error with pre-create check: "This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory"
================================================
Preparing a Docker host on VMware ESXi (not working yet)
- Key generation does not work, so ssh login fails
### Minimum required options?
$ sudo docker-machine create --driver vmwarevsphere \
--vmwarevsphere-vcenter 192.168.56.190 \
--vmwarevsphere-username root \
--vmwarevsphere-password hoge \
--vmwarevsphere-network 'VM Network' \
--vmwarevsphere-datastore 'datastore1' \
test-docker-host001
### With more options specified
$ sudo docker-machine create --driver vmwarevsphere \
--vmwarevsphere-vcenter 192.168.56.190 \
--vmwarevsphere-username root \
--vmwarevsphere-password hogehoge \
--vmwarevsphere-network 'VM Network' \
--vmwarevsphere-datastore 'datastore1' \
--vmwarevsphere-cpu-count 2 \
--vmwarevsphere-memory-size 2048 \
--vmwarevsphere-disk-size 40960 \
--vmwarevsphere-datacenter ha-datacenter \
test-docker-host002
docker swarm
docker-compose (v1)
### Create the yml file
$ cat docker-compose.yml
================================================
sample:
image: centos:centos6
tty: true
container_name: sample
command: /bin/bash --login
================================================
### Start the container
$ docker-compose up -d
================================================
Creating sample
================================================
### Check
$ docker ps
================================================
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a46905e75bba centos:centos6 "/bin/bash --login" 18 seconds ago Up 17 seconds sample
================================================
[On hold] docker-compose (v2)
- v2 changes behavior in many ways and does not work well yet, so it is on hold
- The following at least gets it running
### Create the yml file
$ cat docker-compose.yml
================================================
version: '2'
services:
web:
image: ubuntu
container_name: web_server
tty: true
command: /bin/bash --login
================================================
### Start the container
$ docker-compose up -d
================================================
Creating web_server
================================================
### Check
$ docker ps
================================================
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8999bf4e8af2 ubuntu "/bin/bash --login" 25 seconds ago Up 23 seconds web_server
================================================
Problems with version 2
links does not work
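In the v2 format, `links` is superseded by user-defined networks: services attached to the same network can resolve each other by service name. A sketch (the `db` service and `app_net` network are hypothetical additions for illustration):

```yaml
version: '2'
services:
  web:
    image: ubuntu
    tty: true
    command: /bin/bash --login
    networks:
      - app_net        ### hypothetical network name
  db:                  ### hypothetical second service to show name resolution
    image: centos:centos6
    tty: true
    networks:
      - app_net
networks:
  app_net:
    driver: bridge     ### default bridge driver
```

With both services up, `web` should be able to reach `db` by name (e.g. `ping db`) without any `links` entry.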
Docker technology
Security (in progress)
- Dockerでホストを乗っ取られた
- Permissions
- About privileged
Networking (in progress)
Data Volume
- Docker の Data Volume まわりを整理する
- Differences from NFS
Building service containers
### Create the work/docker/containers directory on the Docker host
$ mkdir -p ${HOME}/work/docker/containers
Operations container
Building by hand from the official CentOS 7 image
- Start the image
### --login makes the shell load profile and other settings when logging in to the container
$ sudo docker run -it --name operation centos:centos7 /bin/bash --login
- Configure the locale
- Use ja_JP.UTF-8
### Reinstall glibc-common
# yum reinstall -y glibc-common
### Add the locale
# localedef -v -c -i ja_JP -f UTF-8 ja_JP.UTF-8; echo "";
### Set the LANG environment variable at login
# vi /etc/profile
================================================
...
export LANG=ja_JP.UTF-8
================================================
### Log in again and confirm
- Configure the time zone
- Use JST
### Remove /etc/localtime
# rm /etc/localtime
### Symlink /etc/localtime to /usr/share/zoneinfo/Asia/Tokyo
# ln -s /usr/share/zoneinfo/Asia/Tokyo /etc/localtime
### Check
# date
================================================
2016年 6月 29日 水曜日 18:28:56 JST
================================================
- Install required packages
### yum update
# yum update
### yum install
# yum install epel-release
# yum install make gcc-c++ wget git openssh-clients telnet traceroute perl patch tcpdump screen bind-utils strace sysstat lsof mailx zip bzip2 sudo pv which
- Create the operation user
# user=ftakao2007
# useradd -u 1000 -G wheel -m ${user}
# passwd ${user}
(work as the new operation user from here on)
- Install ansible
### Install required packages
$ sudo yum install python-devel python-yaml libffi-devel openssl-devel
$ sudo sh -c "curl -kL https://bootstrap.pypa.io/get-pip.py | python"
### Upgrade setuptools
$ sudo pip install --upgrade setuptools
### Install ansible
$ sudo pip install ansible
- sudo configuration on the Docker host where ansible runs
$ sudo visudo
================================================
### Give wheel NOPASSWD
#%wheel ALL=(ALL) ALL
%wheel ALL=(ALL) NOPASSWD: ALL
### Comment out Defaults requiretty
# Defaults requiretty
================================================
- Install ruby
- Install the latest via rbenv
(to be written)
### sudo configuration
$ sudo visudo
================================================
### Set secure_path
Defaults secure_path = /usr/local/bin:/usr/local/rbenv/shims:/usr/local/rbenv/bin:/sbin:/bin:/usr/sbin:/usr/bin
================================================
Jenkins container
- Stopping Jenkins removes the whole container, but all needed data lives on the Docker host in ${HOME}/work/docker/containers/jenkins_home
### Create the jenkins_home directory on the Docker host
$ mkdir -p ${HOME}/work/docker/containers/jenkins_home
$ chown 1000:1000 ${HOME}/work/docker/containers/jenkins_home
### Start the container
$ sudo docker run --rm -e JAVA_OPTS="-Duser.timezone=Asia/Tokyo -Djava.awt.headless=true -Dorg.apache.commons.jelly.tags.fmt.timeZone=Asia/Tokyo" -v ${HOME}/work/docker/containers/jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 --name jenkins jenkins:latest
### To log in to the container, open another terminal and run:
$ sudo docker exec -it jenkins /bin/bash --login
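The docker run command above can also be kept as a compose file, using the v1 syntax from the docker-compose section of this article. This is a sketch mapping the flags one-to-one; note that `--rm` has no v1 compose equivalent, and `${HOME}` expansion assumes a docker-compose version with variable substitution:

```yaml
jenkins:
  image: jenkins:latest
  container_name: jenkins
  environment:
    ### same JAVA_OPTS as the -e flag above
    - JAVA_OPTS=-Duser.timezone=Asia/Tokyo -Djava.awt.headless=true -Dorg.apache.commons.jelly.tags.fmt.timeZone=Asia/Tokyo
  volumes:
    - ${HOME}/work/docker/containers/jenkins_home:/var/jenkins_home
  ports:
    - "8080:8080"
    - "50000:50000"
```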
Jenkins configuration
- Installed plugins
  - Git plugin
  - Git client plugin
    - For using git
  - SSH plugin
    - For connecting to remote hosts over ssh and running commands
  - Disk Usage Plugin
    - Monitors disk usage
  - Discard Old Build plugin
    - Deletes build history
    - Because logs threaten to fill the disk
- Further Jenkins details are covered in the Jenkinsまとめ article
registry container
- Stopping registry removes the whole container, but all needed data lives on the Docker host in ${HOME}/work/docker/containers/registry
### Create the registry directory on the Docker host
$ mkdir -p ${HOME}/work/docker/containers/registry
### Start the container
$ sudo docker run --rm \
-p 5001:5000 \
-v /home/${USER}/work/docker/containers/registry:/var/lib/registry \
--name registry \
registry:latest
### To log in to the container, open another terminal and run:
$ sudo docker exec -it registry /bin/bash
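For reference, the registry image reads its settings from /etc/docker/registry/config.yml inside the container. A minimal sketch of such a config, assuming the defaults of the official image; the rootdirectory matches the volume mount above:

```yaml
version: 0.1
log:
  fields:
    service: registry
storage:
  filesystem:
    rootdirectory: /var/lib/registry   ### matches the -v mount above
http:
  addr: :5000                          ### container port, published as 5001 on the host
```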
redmine container
- Stopping redmine removes the whole container, but all needed data lives on the Docker host in ${HOME}/work/docker/containers/redmine
### Create the redmine directories on the Docker host
$ mkdir -p ${HOME}/work/docker/containers/redmine/plugins
$ mkdir -p ${HOME}/work/docker/containers/redmine/files
$ mkdir -p ${HOME}/work/docker/containers/redmine/sqlite
$ sudo chown 999:999 -R ${HOME}/work/docker/containers/redmine
### Start the container
$ sudo docker run --rm \
-v ${HOME}/work/docker/containers/redmine/plugins:/usr/src/redmine/plugins \
-v ${HOME}/work/docker/containers/redmine/files:/usr/src/redmine/files \
-v ${HOME}/work/docker/containers/redmine/sqlite:/usr/src/redmine/sqlite \
-p 8081:3000 \
--name redmine \
redmine:latest
### To log in to the container, open another terminal and run:
$ sudo docker exec -it redmine /bin/bash --login
redmine configuration
- Further redmine details are covered in the redmineまとめ article