Background
- I manage Docker containers with Rancher, which is popular these days.
- Rancher ships a "CATALOG" of docker-compose.yml samples; the files below are based on it.
- I had always wondered what these services actually were, and since I might as well learn to use them properly, this is my memo.
- It collects docker-compose files for all sorts of services, so it may be useful as a reference.
- It also serves as a record before entries disappear from the catalog, the way GlusterFS did.
Usage
- Substitute appropriate values for ${xxx}.
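- A quick sketch of how the substitution works (assuming you run these files with docker-compose directly; the Rancher catalog instead prompts for these values in the UI): docker-compose fills ${...} from the calling shell's environment, e.g. for the Alfresco sample below:
# export database_name=alfresco
# export database_user=alfresco
# export database_password=secret
# docker-compose up -d
postgres:
  environment:
    POSTGRES_DB: ${database_name}  # becomes "alfresco" after substitution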
Especially recommended
docker-compose samples
Alfresco
- Full-text search across file contents plus tagging make files very easy to find
- Viewing and update history of documents is visualized, which makes team collaboration easier
- Alfresco as a full-text-searchable file server
alfresco:
environment:
CIFS_ENABLED: 'false'
FTP_ENABLED: 'false'
tty: true
image: webcenter/rancher-alfresco:v5.1-201605-1
links:
- postgres:db
stdin_open: true
ports:
- 8080:8080/tcp
volumes_from:
- alfresco-data
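# Data-only sidekick: a throwaway container that only declares the volume and
# exits immediately (command: /bin/true); the service above mounts it via volumes_from.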
alfresco-data:
image: alpine
volumes:
- /opt/alfresco/alf_data
net: none
command: /bin/true
postgres:
environment:
PGDATA: /var/lib/postgresql/data/pgdata
POSTGRES_DB: ${database_name}
POSTGRES_PASSWORD: ${database_password}
POSTGRES_USER: ${database_user}
tty: true
image: postgres:9.4
stdin_open: true
volumes_from:
- postgres-data
postgres-data:
labels:
io.rancher.container.start_once: 'true'
image: alpine
volumes:
- /var/lib/postgresql/data/pgdata
net: none
command: /bin/true
Apache Kafka
- An open-source distributed messaging system released by LinkedIn in 2011. Kafka was developed to collect and deliver the high-volume data emitted by web services (e.g., logs and events) with high throughput and low latency.
- Fast: can handle an enormous volume of messages
- Scalable: a single Kafka cluster can handle large-scale message traffic and can scale elastically and transparently with no downtime
- Durable: messages are persisted to disk as files and replicated within the cluster, so data loss is prevented (terabytes of messages can be handled without hurting performance)
- Distributed by Design: the cluster is designed to be fault-tolerant
- Getting started with Apache Kafka
broker:
tty: true
image: rawmind/alpine-kafka:0.10.0.1-1
volumes_from:
- broker-volume
- broker-conf
environment:
- JVMFLAGS=-Xmx${kafka_mem}m -Xms${kafka_mem}m
- CONFD_INTERVAL=${kafka_interval}
- ZK_SERVICE=${zk_link}
- KAFKA_DELETE_TOPICS=${kafka_delete_topics}
- KAFKA_LOG_DIRS=${kafka_log_dir}
- KAFKA_LOG_RETENTION_HOURS=${kafka_log_retention}
- KAFKA_NUM_PARTITIONS=${kafka_num_partitions}
- ADVERTISE_PUB_IP=${kafka_pub_ip}
external_links:
- ${zk_link}:zk
labels:
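# "$$" is docker-compose's escape for a literal "$", so ${stack_name}/${service_name}
# is expanded by Rancher at deploy time; this soft anti-affinity spreads brokers across hosts.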
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: broker-volume, broker-conf
broker-conf:
net: none
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
image: rawmind/rancher-kafka:0.10.0.0-3
volumes:
- /opt/tools
broker-volume:
net: none
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
environment:
- SERVICE_UID=10003
- SERVICE_GID=10003
- SERVICE_VOLUME=${kafka_log_dir}
volumes:
- ${kafka_log_dir}
volume_driver: local
image: rawmind/alpine-volume:0.0.2-1
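- A rough sketch of how an application in another stack could reach this broker (the service and image names are hypothetical; Rancher external_links take the form stack/service:alias, and 9092 is Kafka's default listener port):
my-producer:
  image: example/kafka-producer      # hypothetical application image
  external_links:
    - kafka/broker:kafka             # assumes this stack was deployed as "kafka"
  environment:
    - BOOTSTRAP_SERVERS=kafka:9092   # default Kafka listener port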
Apache Zookeeper
- A high-performance coordination service for distributed applications
- Provides the features needed when building distributed applications: synchronization, configuration management, grouping, naming, and so on
- What is ZooKeeper?
zk:
tty: true
image: rawmind/alpine-zk:3.4.9
volumes_from:
- zk-volume
- zk-conf
environment:
- JVMFLAGS=-Xmx${zk_mem}m -Xms${zk_mem}m
- ZK_DATA_DIR=${zk_data_dir}
- ZK_INIT_LIMIT=${zk_init_limit}
- ZK_MAX_CLIENT_CXNS=${zk_max_client_cxns}
- ZK_SYNC_LIMIT=${zk_sync_limit}
- ZK_TICK_TIME=${zk_tick_time}
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: zk-volume, zk-conf
zk-conf:
net: none
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
image: rawmind/rancher-zk:3.4.8-5
volumes:
- /opt/tools
zk-volume:
net: none
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
environment:
- SERVICE_UID=10002
- SERVICE_GID=10002
- SERVICE_VOLUME=${zk_data_dir}
volumes:
- ${zk_data_dir}
volume_driver: local
image: rawmind/alpine-volume:0.0.2-1
asciinema.org
- Asciinema is a service for sharing terminal sessions
- Recordings can be seeked like a video
- Recorded output can be copied as text
- Easy to install
- Easy to record
- Sharing terminal sessions in style with Asciinema
asciinema-org:
image: 'asciinema/asciinema.org:latest'
links:
- postgres
- redis
restart: always
ports:
- ${port}:3000
environment:
HOST: ${host}:${port}
DATABASE_URL: postgresql://postgres:${postgres_password}@postgres/asciinema
REDIS_URL: redis://redis:6379
RAILS_ENV: development
postgres:
image: 'postgres:latest'
ports:
- 5432:5432
environment:
POSTGRES_PASSWORD: ${postgres_password}
container_name: postgres
redis:
image: 'redis:latest'
ports:
- 6379:6379
container_name: redis
sidekiq:
image: 'asciinema/asciinema.org:latest'
links:
- postgres
- redis
command: 'ruby start_sidekiq.rb'
environment:
HOST: ${host}:${port}
DATABASE_URL: postgresql://postgres:${postgres_password}@postgres/asciinema
REDIS_URL: redis://redis:6379
RAILS_ENV: development
Bind9 Domain Name Server
- BIND (Berkeley Internet Name Domain) is one of the systems that implement DNS
- BIND is a suite of programs made up of a server (named), a resolver (libresolv.a), and administration tools (nslookup, dig). It is currently maintained by the Internet Software Consortium.
- Building a DNS server (installing BIND 9.10.1-P1 from source + setting up an internal authoritative DNS server)
bind9:
image: digitallumberjack/docker-bind9:v1.2.0
ports:
- ${BIND9_PORT}:53/tcp
- ${BIND9_PORT}:53/udp
environment:
BIND9_ROOTDOMAIN: ${BIND9_ROOTDOMAIN}
BIND9_KEYNAME: ${BIND9_KEYNAME}
BIND9_KEY: ${BIND9_KEY}
BIND9_FORWARDERS: ${BIND9_FORWARDERS}
RANCHER_ENV: "true"
Cloud9
- Cloud9 is a service that lets you do application development, work with databases, and more in a cloud environment. It is a full-featured IDE and integrates smoothly with other tools such as GitHub and Heroku.
- Because it runs in the browser, you get a development environment that does not depend on your PC. Being free to use is another plus.
cloud9-sdk:
command: "--listen 0.0.0.0 --port ${cloud9_port} -w /workspace --collab --auth ${cloud9_user}:${cloud9_pass}"
image: "rawmind/cloud9-sdk:0.3.0-2"
restart: "always"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/usr/local/bin/docker:/bin/docker"
- "/workspace"
environment:
GIT_REPO: ${cloud9_repo}
labels:
traefik.domain: ${cloud9_domain}
traefik.port: ${cloud9_port}
traefik.enable: ${cloud9_publish}
CloudFlare
- CloudFlare is a system that caches the cacheable files on your server (images, CSS, JS files, and so on) on multiple servers around the world. This greatly improves site responsiveness and makes pages load faster.
- Deploying CloudFlare
cloudflare:
image: rancher/external-dns:v0.6.0
command: -provider=cloudflare
expose:
- 1000
environment:
CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
CLOUDFLARE_KEY: ${CLOUDFLARE_KEY}
ROOT_DOMAIN: ${ROOT_DOMAIN}
NAME_TEMPLATE: ${NAME_TEMPLATE}
TTL: ${TTL}
labels:
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"
Concrete5
- concrete5 is a handy, innovative CMS (content management system) that lets anyone manage a website easily and intuitively on a web server.
- Why the concrete5 CMS is a better fit than WordPress for corporate sites | bridge Inc.
CMSMysql:
environment:
MYSQL_ROOT_PASSWORD: ${root_password}
MYSQL_DATABASE: ${db_name}
MYSQL_USER: ${db_username}
MYSQL_PASSWORD: ${db_password}
labels:
io.rancher.container.pull_image: always
tty: true
image: mysql
volumes:
- ${db_data_location}:/var/lib/mysql
stdin_open: true
volume_driver: ${volume_driver}
CMSConfig:
image: opensaas/concrete5
tty: true
stdin_open: true
links:
- CMSMysql:mysql
volumes:
- ${cms_application_data}:/var/www/html/application
- ${cms_packages_data}:/var/www/html/packages
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volume_driver: ${volume_driver}
command: bash -c "chown -R www-data. application; chown -R www-data. packages; sleep 2m; php -f concrete/bin/concrete5.php c5:install --db-server=mysql --db-username=${db_username} --db-password=${db_password} --db-database=${db_name} --site=${cms_sitename} --admin-email=${cms_admin_email} --admin-password=${cms_admin_password} -n -vvv"
Concrete5App:
labels:
io.rancher.container.pull_image: always
io.rancher.sidekicks: CMSConfig
tty: true
links:
- CMSMysql:mysql
image: opensaas/concrete5
volumes:
- ${cms_application_data}:/var/www/html/application
- ${cms_packages_data}:/var/www/html/packages
volume_driver: ${volume_driver}
stdin_open: true
Confluence
- Confluence is an information-sharing tool that brings document authoring and discussion together in one place.
- What is Confluence?
- Looks handy when integrated with Slack: How to connect Slack and Confluence
confluence:
image: sanderkleykens/confluence:6.0.1
restart: always
environment:
- CATALINA_OPTS=-Xms${heap_size} -Xmx${heap_size} ${jvm_args}
- CONFLUENCE_PROXY_PORT=${proxy_port}
- CONFLUENCE_PROXY_NAME=${proxy_name}
- CONFLUENCE_PROXY_SCHEME=${proxy_scheme}
- CONFLUENCE_CONTEXT_PATH=${context_path}
external_links:
- ${database_link}:database
volumes:
- ${confluence_home}:/var/atlassian/confluence
Consul Cluster
- With a multi-server setup, web servers can spread traffic and access load. Making database or cache servers redundant also raises service continuity (uptime) when failures occur.
- Using Consul as an orchestration tool
- A Consul server cluster
consul-conf:
image: husseingalal/consul-config
labels:
io.rancher.container.hostname_override: container_name
volumes_from:
- consul
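# Shares the consul container's network namespace, so the config helper reaches the agent on localhost.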
net: "container:consul"
consul:
image: husseingalal/consul
labels:
io.rancher.sidekicks: consul-conf
volumes:
- /opt/rancher/ssl
- /opt/rancher/config
- /var/consul
Consul-Registrator
- Registrator is a tool that registers information about containers launched with Docker into Consul, etcd, or SkyDNS 2.
- What does Docker-Registrator (normal/internal) register in Consul?
- A super-simple pattern for automatic service registration with Amazon ECS + registrator + consul
consul-registrator:
log_driver: ''
labels:
io.rancher.sidekicks: consul,consul-data
io.rancher.scheduler.global: 'true'
io.rancher.container.pull_image: always
io.rancher.container.hostname_override: container_name
tty: true
restart: always
command:
- consul://consul:8500
log_opt: {}
image: gliderlabs/registrator:v7
links:
- consul
volumes:
- /var/run/docker.sock:/tmp/docker.sock
stdin_open: true
consul:
ports:
- 8300:8300/tcp
- 8301:8301/tcp
- 8301:8301/udp
- 8302:8302/tcp
- 8302:8302/udp
- 8400:8400/tcp
- 8500:8500/tcp
- 8600:8600/tcp
- 8600:8600/udp
log_driver: ''
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.global: 'true'
io.rancher.container.hostname_override: container_name
io.rancher.container.dns: true
tty: true
net: host
restart: always
command:
- agent
- -retry-join
- ${consul_server}
- -recursor=169.254.169.250
- -client=0.0.0.0
environment:
CONSUL_LOCAL_CONFIG: "{\"leave_on_terminate\": true, \"datacenter\": \"${consul_datacenter}\"}"
CONSUL_BIND_INTERFACE: eth0
volumes_from:
- consul-data
log_opt: {}
image: consul:v0.6.4
stdin_open: true
consul-data:
image: consul:v0.6.4
labels:
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.global: 'true'
io.rancher.container.start_once: true
volumes:
- /consul/data
entrypoint: /bin/true
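- A sketch of what ends up registered (the web service below is hypothetical): registrator watches containers started on the host and registers their published ports in Consul, honoring optional SERVICE_* environment variables:
web:
  image: nginx
  ports:
    - 8080:80
  environment:
    SERVICE_NAME: web            # Consul service name (defaults to the image name)
    SERVICE_TAGS: production     # optional Consul tags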
DataDog Agent & DogStatsD
datadog-init:
image: janeczku/datadog-rancher-init:v2.2.3
net: none
command: /bin/true
volumes:
- /opt/rancher
labels:
io.rancher.container.start_once: 'true'
io.rancher.container.pull_image: always
datadog-agent:
image: datadog/docker-dd-agent:11.3.585
entrypoint: /opt/rancher/entrypoint-wrapper.py
command:
- supervisord
- -n
- -c
- /etc/dd-agent/supervisor.conf
restart: always
environment:
API_KEY: ${api_key}
SD_BACKEND_HOST: ${sd_backend_host}
SD_BACKEND_PORT: ${sd_backend_port}
SD_TEMPLATE_DIR: ${sd_template_dir}
STATSD_METRIC_NAMESPACE: ${statsd_namespace}
DD_STATSD_STANDALONE: "${statsd_standalone}"
DD_HOST_LABELS: ${host_labels}
DD_CONTAINER_LABELS: ${service_labels}
DD_SERVICE_DISCOVERY: ${service_discovery}
DD_SD_CONFIG_BACKEND: ${sd_config_backend}
DD_CONSUL_TOKEN: ${dd_consul_token}
DD_CONSUL_SCHEME: ${dd_consul_scheme}
DD_CONSUL_VERIFY: ${dd_consul_verify}
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /proc/:/host/proc/:ro
- /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
volumes_from:
- datadog-init
labels:
io.rancher.scheduler.global: "${global_service}"
io.rancher.sidekicks: 'datadog-init'
DNS Update (RFC2136)
rfc2136dns:
image: rancher/external-dns:v0.6.0
command: -provider=rfc2136
expose:
- 1000
environment:
RFC2136_HOST: ${RFC2136_HOST}
RFC2136_PORT: ${RFC2136_PORT}
RFC2136_TSIG_KEYNAME: ${RFC2136_TSIG_KEYNAME}
RFC2136_TSIG_SECRET: ${RFC2136_TSIG_SECRET}
ROOT_DOMAIN: ${ROOT_DOMAIN}
NAME_TEMPLATE: ${NAME_TEMPLATE}
TTL: ${TTL}
labels:
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"
DNSimple DNS
- A library for working with DNSimple
- https://dnsimple.com/
dnsimple:
image: rancher/external-dns:v0.6.0
command: -provider=dnsimple
expose:
- 1000
environment:
DNSIMPLE_TOKEN: ${DNSIMPLE_TOKEN}
DNSIMPLE_EMAIL: ${DNSIMPLE_EMAIL}
ROOT_DOMAIN: ${ROOT_DOMAIN}
NAME_TEMPLATE: ${NAME_TEMPLATE}
TTL: ${TTL}
labels:
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"
DokuWiki
- An easy-to-use, versatile open-source wiki that requires no database
- Installing DokuWiki, a database-free wiki clone
dokuwiki-server:
ports:
- ${http_port}:80/tcp
labels:
io.rancher.sidekicks: dokuwiki-data
hostname: ${dokuwiki_hostname}
image: ununseptium/dokuwiki-docker
volumes_from:
- dokuwiki-data
dokuwiki-data:
labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
hostname: dokuwikidata
image: ununseptium/dokuwiki-docker
volumes:
- /var/www/html/data
- /var/www/html/lib/plugins
Drone
- Uses Docker to build and tear down the test environment on every run; the Drone CI environment itself is also easy to set up with Docker.
- A review of Drone, a Docker-based CI server
- Building Docker images with the OSS version of drone.io
drone-lb:
ports:
- ${public_port}:8000
tty: true
image: rancher/load-balancer-service
links:
- drone-server:drone-server
stdin_open: true
drone-healthcheck:
image: rancher/drone-config:v0.1.0
net: 'container:drone-server'
volumes_from:
- drone-data-volume
entrypoint: /giddyup health
drone-server:
image: rancher/drone-config:v0.1.0
volumes_from:
- drone-data-volume
labels:
io.rancher.sidekicks: drone-data-volume,drone-daemon,drone-healthcheck
external_links:
- ${database_service}:database
drone-daemon:
image: rancher/drone:0.4
net: 'container:drone-server'
volumes:
- /var/run/docker.sock:/var/run/docker.sock
volumes_from:
- drone-data-volume
entrypoint: /opt/rancher/rancher_entry.sh
## Do not change below. Could cause data loss in upgrade.
drone-data-volume:
image: busybox
net: none
command: /bin/true
labels:
io.rancher.container.start_once: 'true'
volumes:
- /var/lib/drone
- /etc/drone
- /opt/rancher
Drone Rancher Node Manager
- Appears to be a CI tool for Rancher in which one agent runs per host and registers a single worker
drone-agent:
labels:
io.rancher.scheduler.global: 'true'
tty: true
image: rancher/socat-docker
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker:/var/lib/docker
stdin_open: true
dynamic-drones-mgr-0:
environment:
DRONE_TOKEN: ${DRONE_TOKEN}
DRONE_URL: http://droneserver:8000
external_links:
- ${DRONE_SERVICE}:droneserver
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/drone-agent
tty: true
entrypoint:
- /dynamic-drone-nodes
- /stacks/${STACK_NAME}/services/drone-agent
image: rancher/drone-config:v0.1.0
stdin_open: true
Rancher ECR Credentials Updater
- Provides access to EC2 Container Registry, which is well suited to keeping container images you don't want exposed outside AWS
- Amazon ECR + ECS CLI hands-on
ecr-updater:
environment:
AWS_ACCESS_KEY_ID: ${aws_access_key_id}
AWS_SECRET_ACCESS_KEY: ${aws_secret_access_key}
AWS_REGION: ${aws_region}
labels:
io.rancher.container.pull_image: always
io.rancher.container.create_agent: 'true'
io.rancher.container.agent.role: environment
tty: true
image: objectpartners/rancher-ecr-credentials:1.1.0
stdin_open: true
Elasticsearch
- A groundbreaking document-oriented search engine, to the point of changing how you design
- Introduction to Elasticsearch
- Trying out an Elasticsearch cluster easily with Docker
elasticsearch-masters:
image: rancher/elasticsearch-conf:v0.4.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-master,elasticsearch-datavolume-masters
elasticsearch-datavolume-masters:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /usr/share/elasticsearch/data
entrypoint: /bin/true
image: elasticsearch:1.7.3
elasticsearch-base-master:
labels:
elasticsearch.master.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:1.7.3
net: "container:elasticsearch-masters"
volumes_from:
- elasticsearch-masters
- elasticsearch-datavolume-masters
entrypoint:
- /opt/rancher/bin/run.sh
elasticsearch-datanodes:
image: rancher/elasticsearch-conf:v0.4.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-datanode,elasticsearch-datavolume-datanode
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
links:
- elasticsearch-masters:es-masters
elasticsearch-datavolume-datanode:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /usr/share/elasticsearch/data
entrypoint: /bin/true
image: elasticsearch:1.7.3
elasticsearch-base-datanode:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:1.7.3
links:
- elasticsearch-masters:es-masters
entrypoint:
- /opt/rancher/bin/run.sh
volumes_from:
- elasticsearch-datanodes
- elasticsearch-datavolume-datanode
net: "container:elasticsearch-datanodes"
elasticsearch-clients:
image: rancher/elasticsearch-conf:v0.4.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-clients,elasticsearch-datavolume-clients
links:
- elasticsearch-masters:es-masters
elasticsearch-datavolume-clients:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /usr/share/elasticsearch/data
entrypoint: /bin/true
image: elasticsearch:1.7.3
elasticsearch-base-clients:
labels:
elasticsearch.client.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:1.7.3
volumes_from:
- elasticsearch-clients
- elasticsearch-datavolume-clients
net: "container:elasticsearch-clients"
entrypoint:
- /opt/rancher/bin/run.sh
kopf:
image: rancher/kopf:v0.4.0
ports:
- "80:80"
environment:
KOPF_SERVER_NAME: 'es.dev'
KOPF_ES_SERVERS: 'es-clients:9200'
labels:
io.rancher.container.hostname_override: container_name
links:
- elasticsearch-clients:es-clients
Elasticsearch 2.x
elasticsearch-masters:
image: rancher/elasticsearch-conf:v0.5.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-master,elasticsearch-datavolume-masters
volumes_from:
- elasticsearch-datavolume-masters
elasticsearch-datavolume-masters:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /usr/share/elasticsearch/data
- /usr/share/elasticsearch/config
- /opt/rancher/bin
entrypoint: /bin/true
image: elasticsearch:2.2.1
elasticsearch-base-master:
labels:
elasticsearch.master.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:2.2.1
net: "container:elasticsearch-masters"
volumes_from:
- elasticsearch-datavolume-masters
entrypoint:
- /opt/rancher/bin/run.sh
elasticsearch-datanodes:
image: rancher/elasticsearch-conf:v0.5.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-datanode,elasticsearch-datavolume-datanode
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
links:
- elasticsearch-masters:es-masters
volumes_from:
- elasticsearch-datavolume-datanode
elasticsearch-datavolume-datanode:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /usr/share/elasticsearch/data
- /usr/share/elasticsearch/config
- /opt/rancher/bin
entrypoint: /bin/true
image: elasticsearch:2.2.1
elasticsearch-base-datanode:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:2.2.1
links:
- elasticsearch-masters:es-masters
entrypoint:
- /opt/rancher/bin/run.sh
volumes_from:
- elasticsearch-datavolume-datanode
net: "container:elasticsearch-datanodes"
elasticsearch-clients:
image: rancher/elasticsearch-conf:v0.5.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-clients,elasticsearch-datavolume-clients
links:
- elasticsearch-masters:es-masters
volumes_from:
- elasticsearch-datavolume-clients
elasticsearch-datavolume-clients:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /usr/share/elasticsearch/data
- /usr/share/elasticsearch/config
- /opt/rancher/bin
entrypoint: /bin/true
image: elasticsearch:2.2.1
elasticsearch-base-clients:
labels:
elasticsearch.client.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:2.2.1
volumes_from:
- elasticsearch-datavolume-clients
net: "container:elasticsearch-clients"
entrypoint:
- /opt/rancher/bin/run.sh
kopf:
image: rancher/kopf:v0.4.0
ports:
- "${kopf_port}:80"
environment:
KOPF_SERVER_NAME: 'es.dev'
KOPF_ES_SERVERS: 'es-clients:9200'
labels:
io.rancher.container.hostname_override: container_name
links:
- elasticsearch-clients:es-clients
AWS ELB Classic External LB
- Easy integration with ELB
- http://rancher.com/inside-the-external-elb-catalog-template/
elbv1:
image: rancher/external-lb:v0.2.1
command: -provider=elbv1
expose:
- 1000
environment:
ELBV1_AWS_ACCESS_KEY: ${ELBV1_AWS_ACCESS_KEY}
ELBV1_AWS_SECRET_KEY: ${ELBV1_AWS_SECRET_KEY}
ELBV1_AWS_REGION: ${ELBV1_AWS_REGION}
ELBV1_AWS_VPCID: ${ELBV1_AWS_VPCID}
ELBV1_USE_PRIVATE_IP: ${ELBV1_USE_PRIVATE_IP}
labels:
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"
Etcd
- etcd is an open-source, highly reliable distributed KV store written in Go. An etcd cluster provides a way for applications deployed in Docker container environments such as CoreOS to exchange and share service configuration and similar data. Comparable middleware includes Apache ZooKeeper and Consul.
etcd:
image: rancher/etcd:v2.3.7-11
labels:
io.rancher.scheduler.affinity:host_label_soft: etcd=true
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.sidekicks: data
environment:
RANCHER_DEBUG: '${RANCHER_DEBUG}'
EMBEDDED_BACKUPS: '${EMBEDDED_BACKUPS}'
BACKUP_PERIOD: '${BACKUP_PERIOD}'
BACKUP_RETENTION: '${BACKUP_RETENTION}'
volumes:
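# "etcd" here is a named volume holding member data; backups are written to the host path in ${BACKUP_LOCATION}.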
- etcd:/pdata
- ${BACKUP_LOCATION}:/data-backup
volumes_from:
- data
data:
image: busybox
command: /bin/true
net: none
volumes:
- /data
labels:
io.rancher.container.start_once: 'true'
F5 BIG-IP
- F5 BIG-IP Virtual Edition (BIG-IP VE) is, simply put, a load balancer.
- For infrastructure engineers: trying out F5 BIG-IP Virtual Edition for AWS
external-lb:
image: rancher/external-lb:v0.1.1
command: -provider=f5_BigIP
expose:
- 1000
environment:
F5_BIGIP_HOST: ${F5_BIGIP_HOST}
F5_BIGIP_USER: ${F5_BIGIP_USER}
F5_BIGIP_PWD: ${F5_BIGIP_PWD}
LB_TARGET_RANCHER_SUFFIX: ${LB_TARGET_RANCHER_SUFFIX}
FBCTF
- FBCTF is Facebook's capture-the-flag platform: a game where you work through challenges
- Someone was already playing with Facebook CTF (fbctf)
fbctf:
image: 'qazbnm456/dockerized_fbctf:multi_containers'
links:
- mysql
- memcached
restart: always
ports:
- ${http_port}:80
- ${https_port}:443
environment:
MYSQL_HOST: mysql
MYSQL_PORT: 3306
MYSQL_DATABASE: ${mysql_database}
MYSQL_USER: ${mysql_user}
MYSQL_PASSWORD: ${mysql_password}
MEMCACHED_PORT: 11211
SSL_SELF_SIGNED: ${ssl}
mysql:
image: 'mysql:5.5'
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: ${mysql_user}
MYSQL_PASSWORD: ${mysql_password}
container_name: mysql
memcached:
image: 'memcached:latest'
restart: always
container_name: memcached
Ghost
- A lightweight blogging platform
- Ghost
- Running Ghost on Docker #1
ghost:
image: ghost
ports:
- ${public_port}:2368
GitLab
- GitLab is open-source software that lets you build a GitHub-like service in a closed environment, such as inside your company.
Gocd agent
- One of the CI/CD tools (seems to be in the same family as Jenkins and Travis CI)
gocd-agent:
labels:
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
gocd.role: agent
tty: true
image: rawmind/rancher-goagent:16.2.1-1
external_links:
- ${goserver_ip}:gocd-server
environment:
- AGENT_MEM=${mem_initial}m
- AGENT_MAX_MEM=${mem_max}m
- GO_SERVER=gocd-server.rancher.internal
- GO_SERVER_PORT=${goserver_port}
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Gocd server
- One of the CI/CD tools (seems to be in the same family as Jenkins and Travis CI)
gocd-server:
tty: true
image: rawmind/rancher-goserver:16.2.1-3
volumes_from:
- gocd-volume
environment:
- SERVER_MEM=${mem_initial}m
- SERVER_MAX_MEM=${mem_max}m
ports:
- ${public_port}:8153
- ${ssh_port}:8154
labels:
  io.rancher.container.hostname_override: container_name
  io.rancher.sidekicks: gocd-volume
  gocd.role: server
gocd-volume:
net: none
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- ${volume_work}:/opt/go-server/work
volume_driver: ${volume_driver}
entrypoint: /bin/true
image: busybox
Gogs
- A GitHub clone written in Go
- Gogs: a GitHub clone written in Go that you can get started with extremely easily
gogs:
image: gogs/gogs:latest
ports:
- ${http_port}:3000
- ${ssh_port}:22
links:
- mysql:db
mysql:
image: mysql:latest
ports:
- ${public_port}:3306
environment:
MYSQL_ROOT_PASSWORD: ${mysql_password}
Grafana
- Kibana and Grafana are dashboard tools that run in the browser; Kibana mainly uses Elasticsearch as its backend, while Grafana mainly uses Graphite or InfluxDB (a Kibana vs. Grafana comparison)
- Visualizing server resources with InfluxDB and Grafana
grafana:
image: grafana/grafana:latest
ports:
- ${http_port}:3000
environment:
GF_SECURITY_ADMIN_USER: ${admin_username}
GF_SECURITY_ADMIN_PASSWORD: ${admin_password}
GF_SECURITY_SECRET_KEY: ${secret_key}
Hadoop + Yarn
- One of the most widely used tools for processing big data
- "We still don't know the work YARN did that day."
bootstrap-hdfs:
image: rancher/hadoop-base:v0.3.5
labels:
io.rancher.container.start_once: true
command: 'su -c "sleep 20 && exec /bootstrap-hdfs.sh" hdfs'
net: "container:namenode-primary"
volumes_from:
- namenode-primary-data
sl-namenode-config:
image: rancher/hadoop-followers-config:v0.3.5
net: "container:namenode-primary"
environment:
NODETYPE: "hdfs"
volumes_from:
- namenode-primary-data
namenode-config:
image: rancher/hadoop-config:v0.3.5
net: "container:namenode-primary"
volumes_from:
- namenode-primary-data
namenode-primary:
image: rancher/hadoop-base:v0.3.5
command: 'su -c "sleep 15 && /usr/local/hadoop-2.7.1/bin/hdfs namenode" hdfs'
volumes_from:
- namenode-primary-data
ports:
- 50070:50070
labels:
io.rancher.sidekicks: namenode-config,sl-namenode-config,bootstrap-hdfs,namenode-primary-data
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_soft: io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager,io.rancher.stack_service.name=$${stack_name}/jobhistory-server
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/datanode
namenode-primary-data:
image: rancher/hadoop-base:v0.3.5
volumes:
- '${cluster}-namenode-primary-config:/etc/hadoop'
- '/tmp'
net: none
labels:
io.rancher.container.start_once: true
command: '/bootstrap-local.sh'
datanode-config:
image: rancher/hadoop-config:v0.3.5
net: "container:datanode"
volumes_from:
- datanode-data
datanode-data:
image: rancher/hadoop-base:v0.3.5
net: none
volumes:
- '${cluster}-datanode-config:/etc/hadoop'
- '/tmp'
labels:
io.rancher.container.start_once: true
command: '/bootstrap-local.sh'
datanode:
image: rancher/hadoop-base:v0.3.5
volumes_from:
- datanode-data
labels:
io.rancher.sidekicks: datanode-config,datanode-data
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/namenode-primary,io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager
links:
- 'namenode-primary:namenode'
command: 'su -c "sleep 45 && exec /usr/local/hadoop-2.7.1/bin/hdfs datanode" hdfs'
yarn-nodemanager-config:
image: rancher/hadoop-config:v0.3.5
net: "container:yarn-nodemanager"
volumes_from:
- yarn-nodemanager-data
yarn-nodemanager-data:
image: rancher/hadoop-base:v0.3.5
net: none
volumes:
- '${cluster}-yarn-nodemanager-config:/etc/hadoop'
- '/tmp'
labels:
io.rancher.container.start_once: true
command: '/bootstrap-local.sh'
yarn-nodemanager:
image: rancher/hadoop-base:v0.3.5
volumes_from:
- yarn-nodemanager-data
ports:
- '8042:8042'
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: yarn-nodemanager-config,yarn-nodemanager-data
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/namenode-primary,io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager,io.rancher.stack_service.name=$${stack_name}/jobhistory-server,
io.rancher.scheduler.affinity:container_label: io.rancher.stack_service.name=$${stack_name}/datanode
links:
- 'namenode-primary:namenode'
- 'yarn-resourcemanager:yarn-rm'
command: 'su -c "sleep 45 && exec /usr/local/hadoop-2.7.1/bin/yarn nodemanager" yarn'
jobhistory-server-config:
image: rancher/hadoop-config:v0.3.5
net: "container:jobhistory-server"
volumes_from:
- jobhistory-server-data
jobhistory-server-data:
image: rancher/hadoop-base:v0.3.5
net: none
volumes:
- '${cluster}-jobhistory-config:/etc/hadoop'
- '/tmp'
labels:
io.rancher.container.start_once: true
command: '/bootstrap-local.sh'
jobhistory-server:
image: rancher/hadoop-base:v0.3.5
volumes_from:
- jobhistory-server-data
links:
- 'namenode-primary:namenode'
- 'yarn-resourcemanager:yarn-rm'
ports:
- '10020:10020'
- '19888:19888'
labels:
io.rancher.sidekicks: jobhistory-server-config,jobhistory-server-data
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label: io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager,io.rancher.stack_service.name=$${stack_name}/namenode-primary
command: 'su -c "sleep 45 && /usr/local/hadoop-2.7.1/bin/mapred historyserver" mapred'
yarn-resourcemanager-config:
image: rancher/hadoop-config:v0.3.5
net: "container:yarn-resourcemanager"
volumes_from:
- yarn-resourcemanager-data
sl-yarn-resourcemanager-config:
image: rancher/hadoop-followers-config:v0.3.5
net: "container:yarn-resourcemanager"
environment:
NODETYPE: "yarn"
volumes_from:
- yarn-resourcemanager-data
yarn-resourcemanager-data:
image: rancher/hadoop-base:v0.3.5
net: none
volumes:
- '${cluster}-yarn-resourcemanager-config:/etc/hadoop'
- '/tmp'
labels:
io.rancher.container.start_once: true
command: '/bootstrap-local.sh'
yarn-resourcemanager:
image: rancher/hadoop-base:v0.3.5
volumes_from:
- yarn-resourcemanager-data
ports:
- '8088:8088'
links:
- 'namenode-primary:namenode'
labels:
io.rancher.sidekicks: yarn-resourcemanager-config,sl-yarn-resourcemanager-config,yarn-resourcemanager-data
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name},io.rancher.stack_service.name=$${stack_name}/datanode,io.rancher.stack_service.name=$${stack_name}/yarn-nodemanager
io.rancher.scheduler.affinity:container_label: io.rancher.stack_service.name=$${stack_name}/namenode-primary
command: 'su -c "sleep 30 && /usr/local/hadoop-2.7.1/bin/yarn resourcemanager" yarn'
Janitor
- A tool that automatically deletes unused resources
- Trying out Janitor Monkey, which watches unused AWS resources and deletes them automatically
cleanup:
image: meltwater/docker-cleanup:1.8.0
environment:
CLEAN_PERIOD: ${FREQUENCY}
DELAY_TIME: "900"
KEEP_IMAGES: "${KEEP}"
KEEP_CONTAINERS: "${KEEPC}"
KEEP_CONTAINERS_NAMED: "${KEEPCN}"
LOOP: "true"
DEBUG: "0"
labels:
io.rancher.scheduler.global: "true"
io.rancher.scheduler.affinity:host_label_ne: "${EXCLUDE_LABEL}"
net: none
privileged: true
tty: false
stdin_open: false
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker:/var/lib/docker
Jenkins
- The familiar CI tool
- Parallelizing tests with Jenkins + Docker
jenkins-primary:
image: "jenkins:2.19.4"
ports:
- "${PORT}:8080"
labels:
io.rancher.sidekicks: jenkins-plugins,jenkins-datavolume
io.rancher.container.hostname_override: container_name
volumes_from:
- jenkins-plugins
- jenkins-datavolume
entrypoint: /usr/share/jenkins/rancher/jenkins.sh
jenkins-plugins:
image: rancher/jenkins-plugins:v0.1.1
jenkins-datavolume:
image: "busybox"
volumes:
- ${volume_work}:/var/jenkins_home
labels:
io.rancher.container.start_once: true
entrypoint: ["chown", "-R", "1000:1000", "/var/jenkins_home"]
Jenkins Swarm Plugin Clients
- A plugin that lets slaves be added to Jenkins dynamically
- Spinning up Jenkins and Selenium Grid in one go with Docker Compose
swarm-clients:
image: "rancher/jenkins-swarm:v0.2.0"
user: "${user}"
labels:
io.rancher.scheduler.affinity:host_label_soft: ci=worker
io.rancher.container.hostname_override: container_name
external_links:
- "${jenkins_service}:jenkins-primary"
environment:
JENKINS_PASS: "${jenkins_pass}"
JENKINS_USER: "${jenkins_user}"
SWARM_EXECUTORS: "${swarm_executors}"
volumes:
- '/var/run/docker.sock:/var/run/docker.sock'
- '/var/jenkins_home/workspace:/var/jenkins_home/workspace'
- '/tmp:/tmp'
Kibana 4
- A log-data analysis and visualization tool from Elastic. It is normally used together with the real-time search engine Elasticsearch.
- Kibana 4 BETA first impressions
kibana-vip:
ports:
- "${public_port}:80"
restart: always
tty: true
image: rancher/load-balancer-service
links:
- nginx-proxy:kibana4
stdin_open: true
nginx-proxy-conf:
image: rancher/nginx-conf:v0.2.0
command: "-backend=rancher --prefix=/2015-07-25"
labels:
io.rancher.container.hostname_override: container_name
nginx-proxy:
image: rancher/nginx:v1.9.4-3
volumes_from:
- nginx-proxy-conf
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: nginx-proxy-conf,kibana4
external_links:
- ${elasticsearch_source}:elasticsearch
kibana4:
restart: always
tty: true
image: kibana:4.4.2
net: "container:nginx-proxy"
stdin_open: true
environment:
ELASTICSEARCH_URL: "http://elasticsearch:9200"
labels:
io.rancher.container.hostname_override: container_name
Let's Encrypt
letsencrypt:
image: janeczku/rancher-letsencrypt:v0.3.0
environment:
EULA: ${EULA}
API_VERSION: ${API_VERSION}
CERT_NAME: ${CERT_NAME}
EMAIL: ${EMAIL}
DOMAINS: ${DOMAINS}
PUBLIC_KEY_TYPE: ${PUBLIC_KEY_TYPE}
RENEWAL_TIME: ${RENEWAL_TIME}
PROVIDER: ${PROVIDER}
CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
CLOUDFLARE_KEY: ${CLOUDFLARE_KEY}
DO_ACCESS_TOKEN: ${DO_ACCESS_TOKEN}
AWS_ACCESS_KEY: ${AWS_ACCESS_KEY}
AWS_SECRET_KEY: ${AWS_SECRET_KEY}
DNSIMPLE_EMAIL: ${DNSIMPLE_EMAIL}
DNSIMPLE_KEY: ${DNSIMPLE_KEY}
DYN_CUSTOMER_NAME: ${DYN_CUSTOMER_NAME}
DYN_USER_NAME: ${DYN_USER_NAME}
DYN_PASSWORD: ${DYN_PASSWORD}
volumes:
- ${STORAGE_VOLUME}/etc/letsencrypt/production/certs
labels:
io.rancher.container.create_agent: 'true'
io.rancher.container.agent.role: 'environment'
Liferay Portal
- An open-source portal product (intranet portals, public-facing sites) for building web systems.
- It consists of a framework for building portals, portlets (functional components) developed for that framework, and a portlet development environment.
- Liferay Portal is implemented in Java and runs on many application servers and web containers, including JBoss, Apache Tomcat, and WebSphere.
liferay:
ports:
- 8080:8080/tcp
environment:
SETUP_WIZARD_ENABLED: ${SETUP_WIZARD_ENABLED}
DB_KIND: mysql
DB_HOST: liferaydb
DB_USERNAME: ${MYSQL_USER}
DB_PASSWORD: ${MYSQL_PASSWORD}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
log_opt: {}
image: rsippl/liferay:7.0.0-2
links:
- mysql:liferaydb
stdin_open: true
mysql:
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
command:
- --character-set-server=utf8
- --collation-server=utf8_general_ci
log_opt: {}
image: mysql:5.6.30
stdin_open: true
Logmatic
- Appears to be a log analyzer written in Go
- logmatic.io
logmatic-agent:
image: logmatic/logmatic-docker
entrypoint: /usr/src/app/index.js
command: ${logmatic_key} ${opts_args}
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /proc/:/host/proc/:ro
- /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
labels:
io.rancher.scheduler.global: "true"
Logspout
- A tool that aggregates the output of every Docker container running on a host and routes it anywhere you like
- Aggregating and routing Docker container logs with logspout
- Aggregating and routing logs from Docker containers on CoreOS with logspout
logspout:
restart: always
environment:
ROUTE_URIS: "${route_uri}"
LOGSPOUT: 'ignore'
SYSLOG_HOSTNAME: "${envname}"
volumes:
- '/var/run/docker.sock:/var/run/docker.sock'
labels:
io.rancher.scheduler.global: 'true'
io.rancher.container.hostname_override: container_name
tty: true
image: bekt/logspout-logstash:latest
stdin_open: true
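- For the bekt/logspout-logstash image used above, ${route_uri} is presumably a logstash:// endpoint (host and port here are hypothetical):
logspout:
  environment:
    ROUTE_URIS: "logstash://logstash.example.internal:5000"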
Logstash
- An open-source log collection and management tool from Elastic. It collects logs and aggregates them on a single server for management. It is mainly intended to be used together with Elastic's real-time search engine Elasticsearch.
- Building an ELK (Elasticsearch, Logstash, Kibana) + fluentd environment with Docker Compose and collecting CloudWatch statistics as a test
logstash-indexer-config:
restart: always
image: rancher/logstash-config:v0.2.0
labels:
io.rancher.container.hostname_override: container_name
redis:
restart: always
tty: true
image: redis:3.2.6-alpine
stdin_open: true
labels:
io.rancher.container.hostname_override: container_name
logstash-indexer:
restart: always
tty: true
volumes_from:
- logstash-indexer-config
command:
- logstash
- -f
- /etc/logstash
image: logstash:5.1.1-alpine
links:
- redis:redis
external_links:
- ${elasticsearch_link}:elasticsearch
stdin_open: true
labels:
io.rancher.sidekicks: logstash-indexer-config
io.rancher.container.hostname_override: container_name
logstash-collector-config:
restart: always
image: rancher/logstash-config:v0.2.0
labels:
io.rancher.container.hostname_override: container_name
logstash-collector:
restart: always
tty: true
links:
- redis:redis
ports:
- "5000/udp"
- "6000/tcp"
volumes_from:
- logstash-collector-config
command:
- logstash
- -f
- /etc/logstash
image: logstash:5.1.1-alpine
stdin_open: true
labels:
io.rancher.sidekicks: logstash-collector-config
io.rancher.container.hostname_override: container_name
MariaDB Galera Cluster
- Lets you build a multi-master cluster using synchronous replication for MySQL (MariaDB) redundancy.
- docker run command lines for MariaDB Galera Cluster 5.5 (kudotty/mariadb-galeracluster-5.5)
mariadb-galera-server:
image: rancher/galera:10.0.22-rancher2
net: "container:galera"
environment:
TERM: "xterm"
MYSQL_ROOT_PASSWORD: "${mysql_root_password}"
MYSQL_DATABASE: "${mysql_database}"
MYSQL_USER: "${mysql_user}"
MYSQL_PASSWORD: "${mysql_password}"
volumes_from:
- 'mariadb-galera-data'
labels:
io.rancher.container.hostname_override: container_name
entrypoint: bash -x /opt/rancher/start_galera
mariadb-galera-data:
image: rancher/galera:10.0.22-rancher2
net: none
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
volumes:
- /var/lib/mysql
- /etc/mysql/conf.d
- /docker-entrypoint-initdb.d
- /opt/rancher
command: /bin/true
labels:
io.rancher.container.start_once: true
galera-leader-forwarder:
image: rancher/galera-leader-proxy:v0.1.0
net: "container:galera"
volumes_from:
- 'mariadb-galera-data'
galera:
image: rancher/galera-conf:v0.2.0
labels:
io.rancher.sidekicks: mariadb-galera-data,mariadb-galera-server,galera-leader-forwarder
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
volumes_from:
- 'mariadb-galera-data'
stdin_open: true
tty: true
command: /bin/bash
galera-lb:
expose:
- 3306:3307/tcp
tty: true
image: rancher/load-balancer-service
links:
- galera:galera
stdin_open: true
Minecraft
- With this, you can play Minecraft anytime
Minecraft:
environment:
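# Entries without "=value" are passed through unchanged from the shell / catalog answers
# (e.g. EULA=TRUE must be set there; the image requires accepting the EULA).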
- EULA
- VERSION
- DIFFICULTY
- MODE
- LEVEL_TYPE
- GENERATOR_SETTINGS
- PVP
- WHITELIST
- OPS
- MOTD
- SEED
- WORLD
tty: true
image: itzg/minecraft-server
stdin_open: true
labels:
io.rancher.sidekicks: MinecraftData
volumes_from:
- MinecraftData
MinecraftData:
image: busybox
labels:
io.rancher.container.start_once: 'true'
net: none
entrypoint: /bin/true
volumes:
- ${DATA_VOLUME}/data
volume_driver: ${VOLUME_DRIVER}
MinecraftLB:
ports:
- ${PORT}:25565/tcp
tty: true
image: rancher/load-balancer-service
links:
- Minecraft:Minecraft
stdin_open: true
MongoDB
- Not an RDBMS; it belongs to the class of databases known as NoSQL
- Getting started with fig/docker-compose
mongo-cluster:
restart: always
environment:
MONGO_SERVICE_NAME: mongo-cluster
tty: true
entrypoint: /opt/rancher/bin/entrypoint.sh
command:
- --replSet
- "${replset_name}"
image: mongo:3.2
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: mongo-base, mongo-datavolume
volumes_from:
- mongo-datavolume
- mongo-base
mongo-base:
restart: always
net: none
tty: true
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
image: rancher/mongodb-conf:v0.1.0
stdin_open: true
entrypoint: /bin/true
mongo-datavolume:
net: none
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /data/db
entrypoint: /bin/true
image: busybox
Mumble
- Voice chat that stays light and lag-free even with many participants
- How to use Mumble
mumble:
image: ranchercb/murmur:latest
ports:
- 64738:64738
- 64738:64738/udp
Netdata
- Unlike monitoring tools such as Zabbix and Nagios, netdata is a tool for real-time performance monitoring.
- Trying out netdata, a real-time resource monitoring tool
- Visualizing in real time with netdata
netdata:
image: titpetric/netdata:latest
labels:
io.rancher.scheduler.global: 'true'
cap_add:
- SYS_PTRACE
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
environment:
NETDATA_PORT: "${NETDATA_PORT}"
Nuxeo
- This template deploys a Nuxeo server together with all of its companions (Elasticsearch, Redis, Postgres), so that you can run Nuxeo right away
postgres-datavolume:
labels:
io.rancher.container.start_once: 'true'
io.rancher.container.hostname_override: container_name
image: nuxeo/postgres
entrypoint: chown -R postgres:postgres /var/lib/postgresql/data
volume_driver: ${volumedriver}
volumes:
- /var/lib/postgresql/data
postgres:
image: nuxeo/postgres
environment:
- POSTGRES_USER=nuxeo
- POSTGRES_PASSWORD=nuxeo
labels:
io.rancher.sidekicks: postgres-datavolume
io.rancher.container.hostname_override: container_name
volumes_from:
- postgres-datavolume
# Copied from default Rancher ES Stack: don't modify service names
elasticsearch-masters:
image: rancher/elasticsearch-conf:v0.4.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-master,elasticsearch-datavolume-masters
volume_driver: ${volumedriver}
elasticsearch-datavolume-masters:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
entrypoint: /bin/true
image: elasticsearch:1.7.3
volume_driver: ${volumedriver}
elasticsearch-base-master:
labels:
elasticsearch.master.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:1.7.3
net: "container:elasticsearch-masters"
volumes_from:
- elasticsearch-masters
- elasticsearch-datavolume-masters
entrypoint:
- /opt/rancher/bin/run.sh
redis:
labels:
io.rancher.container.hostname_override: container_name
tty: true
image: redis:3.0.3
stdin_open: true
volume_driver: ${volumedriver}
nuxeo-datavolume:
labels:
io.rancher.container.start_once: 'true'
io.rancher.container.hostname_override: container_name
image: nuxeo
entrypoint: /bin/true
volume_driver: ${volumedriver}
volumes:
- /var/lib/nuxeo/data
- /var/log/nuxeo
nuxeo:
environment:
NUXEO_CLID: ${clid}
NUXEO_PACKAGES: ${packages}
NUXEO_DB_HOST: postgres
NUXEO_DB_TYPE: postgresql
NUXEO_ES_HOSTS: elasticsearch:9300
NUXEO_DATA: /data/nuxeo/data/
NUXEO_LOG: /data/nuxeo/log/
NUXEO_REDIS_HOST: redis
NUXEO_URL: ${url}
labels:
io.rancher.sidekicks: nuxeo-datavolume
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
image: nuxeo:FT
links:
- redis:redis
- postgres:postgres
- elasticsearch-masters:elasticsearch
volumes_from:
- nuxeo-datavolume
lb:
expose:
- 80:8080
image: rancher/load-balancer-service
links:
- nuxeo:nuxeo
Odoo
- Odoo (formerly OpenERP) is a wildly popular open-source suite of business applications developed and coordinated by OpenERP S.A. of Belgium. It is rich in features, excels in usability, extensibility, and maintainability, and offers outstanding cost performance. Its coverage now goes well beyond the traditional ERP package territory into CMS, e-commerce, event management, and more.
- I built an Odoo 8 image with every plugin included (896.4 MB)
odoo:
image: odoo
ports:
- "8069:8069"
links:
- db
db:
image: postgres
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo
OpenVPN
- With just one Internet-connected PC you can set up a VPN server (whether on an in-house server or on a rented VPS). There are no constraints tied to a particular ISP, either. These strengths make it a good fit for individual users and small businesses.
- Running a VPN server with Docker
openvpn-httpbasic-data:
labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /etc/openvpn/
openvpn-httpbasic-server:
ports:
- 1194:1194/tcp
environment:
AUTH_METHOD: httpbasic
AUTH_HTTPBASIC_URL: ${AUTH_HTTPBASIC_URL}
CERT_COUNTRY: ${CERT_COUNTRY}
CERT_PROVINCE: ${CERT_PROVINCE}
CERT_CITY: ${CERT_CITY}
CERT_ORG: ${CERT_ORG}
CERT_EMAIL: ${CERT_EMAIL}
CERT_OU: ${CERT_OU}
REMOTE_IP: ${REMOTE_IP}
REMOTE_PORT: ${REMOTE_PORT}
VPNPOOL_NETWORK: ${VPNPOOL_NETWORK}
VPNPOOL_CIDR: ${VPNPOOL_CIDR}
OPENVPN_EXTRACONF: ${OPENVPN_EXTRACONF}
labels:
io.rancher.sidekicks: openvpn-httpbasic-data
io.rancher.container.pull_image: always
image: mdns/rancher-openvpn:1.0
privileged: true
volumes_from:
- openvpn-httpbasic-data
OpenVPN-httpdigest
- Lets you use OpenVPN with Digest authentication
openvpn-httpdigest-data:
labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /etc/openvpn/
openvpn-httpdigest-server:
ports:
- 1194:1194/tcp
environment:
AUTH_METHOD: httpdigest
AUTH_HTTPDIGEST_URL: ${AUTH_HTTPDIGEST_URL}
CERT_COUNTRY: ${CERT_COUNTRY}
CERT_PROVINCE: ${CERT_PROVINCE}
CERT_CITY: ${CERT_CITY}
CERT_ORG: ${CERT_ORG}
CERT_EMAIL: ${CERT_EMAIL}
CERT_OU: ${CERT_OU}
REMOTE_IP: ${REMOTE_IP}
REMOTE_PORT: ${REMOTE_PORT}
VPNPOOL_NETWORK: ${VPNPOOL_NETWORK}
VPNPOOL_CIDR: ${VPNPOOL_CIDR}
OPENVPN_EXTRACONF: ${OPENVPN_EXTRACONF}
labels:
io.rancher.sidekicks: openvpn-httpdigest-data
io.rancher.container.pull_image: always
image: mdns/rancher-openvpn:1.0
privileged: true
volumes_from:
- openvpn-httpdigest-data
OpenVPN-LDAP
- Lets you use OpenVPN with LDAP accounts
- Logging in to OpenVPN with a username/password (LDAP authentication)
openvpn-ldap-data:
labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /etc/openvpn/
openvpn-ldap-server:
ports:
- 1194:1194/tcp
environment:
AUTH_METHOD: ldap
AUTH_LDAP_URL: ${AUTH_LDAP_URL}
AUTH_LDAP_BASEDN: ${AUTH_LDAP_BASEDN}
AUTH_LDAP_SEARCH: ${AUTH_LDAP_SEARCH}
AUTH_LDAP_BINDDN: ${AUTH_LDAP_BINDDN}
AUTH_LDAP_BINDPWD: ${AUTH_LDAP_BINDPWD}
CERT_COUNTRY: ${CERT_COUNTRY}
CERT_PROVINCE: ${CERT_PROVINCE}
CERT_CITY: ${CERT_CITY}
CERT_ORG: ${CERT_ORG}
CERT_EMAIL: ${CERT_EMAIL}
CERT_OU: ${CERT_OU}
REMOTE_IP: ${REMOTE_IP}
REMOTE_PORT: ${REMOTE_PORT}
VPNPOOL_NETWORK: ${VPNPOOL_NETWORK}
VPNPOOL_CIDR: ${VPNPOOL_CIDR}
OPENVPN_EXTRACONF: ${OPENVPN_EXTRACONF}
labels:
io.rancher.sidekicks: openvpn-ldap-data
io.rancher.container.pull_image: always
image: mdns/rancher-openvpn:1.0
privileged: true
volumes_from:
- openvpn-ldap-data
Owncloud
- Open-source software that makes it easy to build a Dropbox-like online storage service.
- Building an ownCloud Docker container
owncloud:
image: owncloud
ports:
- "80:80"
links:
- db
db:
image: mariadb
environment:
- MYSQL_ROOT_PASSWORD=password
Percona XtraDB Cluster
- A MySQL-compatible RDBMS that can be clustered in a multi-master configuration.
- Building a Percona XtraDB Cluster with Docker
pxc-clustercheck:
image: flowman/percona-xtradb-cluster-clustercheck:v2.0
net: "container:pxc"
labels:
io.rancher.container.hostname_override: container_name
volumes_from:
- 'pxc-data'
pxc-server:
image: flowman/percona-xtradb-cluster:5.6.28-1
net: "container:pxc"
environment:
MYSQL_ROOT_PASSWORD: "${mysql_root_password}"
PXC_SST_PASSWORD: "${pxc_sst_password}"
MYSQL_DATABASE: "${mysql_database}"
MYSQL_USER: "${mysql_user}"
MYSQL_PASSWORD: "${mysql_password}"
labels:
io.rancher.container.hostname_override: container_name
volumes_from:
- 'pxc-data'
entrypoint: bash -x /opt/rancher/start_pxc
pxc-data:
image: flowman/percona-xtradb-cluster:5.6.28-1
net: none
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
volumes:
- /var/lib/mysql
- /etc/mysql/conf.d
- /docker-entrypoint-initdb.d
command: /bin/true
labels:
io.rancher.container.start_once: true
pxc:
image: flowman/percona-xtradb-cluster-confd:v0.2.0
labels:
io.rancher.sidekicks: pxc-clustercheck,pxc-server,pxc-data
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
volumes_from:
- 'pxc-data'
PHP Adminer
- Adminer is a database management tool written in PHP (Apache License or GPL 2)
- Like phpMyAdmin, it lets you operate MySQL and other databases from the web (it covers all the usual tasks)
- I have only used it with MySQL, but officially it claims support for MySQL, PostgreSQL, SQLite, MS SQL, Oracle, Firebird, SimpleDB, Elasticsearch, and MongoDB
- It is built as a single file, so deployment is simple
- Setting up Adminer
adminer:
image: 'clue/adminer:latest'
restart: on-failure
Plone 5.0
- Unlike WordPress, Plone installs "Python" + "application server" + "DBMS" + "CMS application" all at once, so the installation itself is easier than WordPress, which needs "PHP" + "web server" + "MySQL" prepared separately.
- Yet Another 仕事のツール
zeoserver:
image: plone:5.0
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.community.plone=true
io.rancher.community.plone: "true"
volumes:
- ${volume_name}:/data
volume_driver: ${volume_driver}
command: ["zeoserver"]
plone:
image: plone:5.0
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.community.plone=true
io.rancher.community.plone: "true"
links:
- zeoserver:zeoserver
environment:
ADDONS: ${addons}
ZEO_ADDRESS: zeoserver:8100
lb:
image: rancher/load-balancer-service
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.community.plone=true
io.rancher.community.plone: "true"
links:
- plone:plone
ports:
- ${http_port}:8080
PointHQ DNS
- Up to 10 records per domain per app can be registered for free
- You can check request counts for your domains from the management console
- Assigning a naked domain to a Heroku app with PointDNS
pointhq:
image: rancher/external-dns:v0.2.1
command: --provider pointhq
expose:
- 1000
environment:
POINTHQ_TOKEN: ${POINTHQ_TOKEN}
POINTHQ_EMAIL: ${POINTHQ_EMAIL}
ROOT_DOMAIN: ${ROOT_DOMAIN}
TTL: ${TTL}
labels:
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"
PowerDNS
- PowerDNS is an open-source DNS server developed by PowerDNS.COM BV in the Netherlands.
- Touching PowerDNS again after a long while
- Building a DNS server with PowerDNS
powerdns:
image: rancher/external-dns:v0.5.0
command: "-provider=powerdns"
expose:
- 1000
environment:
POWERDNS_API_KEY: ${POWERDNS_API_KEY}
POWERDNS_URL: ${POWERDNS_URL}
ROOT_DOMAIN: ${ROOT_DOMAIN}
TTL: ${TTL}
labels:
io.rancher.container.pull_image: always
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"
Prometheus
- An open-source service monitoring system and time-series database
- [Docker] "It's dead again!!" Container liveness management and security
- [Introduction] Monitoring server and Docker container resources with Prometheus
- Quickly running Prometheus's collectd-exporter on Docker
cadvisor:
labels:
io.rancher.scheduler.global: 'true'
tty: true
image: google/cadvisor:latest
stdin_open: true
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"
node-exporter:
labels:
io.rancher.scheduler.global: 'true'
tty: true
image: prom/node-exporter:latest
stdin_open: true
prom-conf:
tty: true
image: infinityworks/prom-conf:17
volumes:
- /etc/prom-conf/
net: none
prometheus:
tty: true
image: prom/prometheus:v1.4.1
command: -alertmanager.url=http://alertmanager:9093 -config.file=/etc/prom-conf/prometheus.yml -storage.local.path=/prometheus -web.console.libraries=/etc/prometheus/console_libraries -web.console.templates=/etc/prometheus/consoles
ports:
- 9090:9090
labels:
io.rancher.sidekicks: prom-conf
volumes_from:
- prom-conf
links:
- cadvisor:cadvisor
- node-exporter:node-exporter
- prometheus-rancher-exporter:prometheus-rancher-exporter
influxdb:
image: tutum/influxdb:0.10
ports:
- 2003:2003
environment:
- PRE_CREATE_DB=grafana;prometheus;rancher
- GRAPHITE_DB=rancher
- GRAPHITE_BINDING=:2003
graf-db:
tty: true
image: infinityworks/graf-db:10
command: cat
volumes:
- /var/lib/grafana/
net: none
grafana:
tty: true
image: grafana/grafana:4.0.2
ports:
- 3000:3000
labels:
io.rancher.sidekicks: graf-db
volumes_from:
- graf-db
links:
- prometheus:prometheus
- prometheus-rancher-exporter:prometheus-rancher-exporter
prometheus-rancher-exporter:
tty: true
labels:
io.rancher.container.create_agent: true
io.rancher.container.agent.role: environment
image: infinityworks/prometheus-rancher-exporter:v0.22.40
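- The actual prometheus.yml ships inside the prom-conf sidekick; a minimal sketch of the scrape config it would need for the links above (assuming the images' default ports, cadvisor on 8080 and node-exporter on 9100):
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']       # container-level metrics
  - job_name: node
    static_configs:
      - targets: ['node-exporter:9100']  # host-level metrics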
Puppet 4.x (standalone)
- Puppet, an open-source tool for automated system administration
- Puppet is a Ruby-based tool for automating the administration of UNIX-like systems
- What I wish I had known before adopting Puppet
puppet-lb:
ports:
- ${PUPPET_PORT}:8140/tcp
labels:
io.rancher.loadbalancer.target.puppet: 8140=${PUPPET_PORT}
tty: true
image: rancher/load-balancer-service
links:
- puppet:puppet
stdin_open: true
puppet:
hostname: puppet
domainname: puppet.rancher.internal
labels:
io.rancher.sidekicks: puppet-config-volumes
image: nrvale0/puppetserver-standalone
environment:
- CONTROL_REPO_GIT_URI=${CONTROL_REPO_GIT_URI}
volumes_from:
- puppet-config-volumes
puppet-config-volumes:
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: "true"
volumes:
- /etc/puppetlabs/ssl
- /opt/puppetlabs/r10k/cache
- /etc/puppetlabs/code
entrypoint: /bin/true
image: alpine
px-dev
- PX-Developer (PX-Dev) provides scale-out storage and data services for containers. PX-Dev itself is deployed as a container alongside your application stack. Running PX-Dev with the application stack gives you per-container control over storage persistence, capacity management, performance, and availability in a scale-out environment. Deploy the PX-Developer container on servers running Docker Engine and each of those servers becomes a scale-out storage node. Running storage converged with compute yields bare-metal-class performance.
- https://github.com/portworx/px-dev
portworx:
labels:
io.rancher.container.create_agent: 'true'
io.rancher.scheduler.global: 'true'
io.rancher.container.pull_image: 'always'
image: portworx/px-dev
container_name: px
ipc: host
net: host
privileged: true
environment:
CLUSTER_ID: ${cluster_id}
KVDB: ${kvdb}
volumes:
- /dev:/dev
- /usr/src:/usr/src
- /run/docker/plugins:/run/docker/plugins
- /var/lib/osd:/var/lib/osd:shared
- /etc/pwx:/etc/pwx
- /opt/pwx/bin:/export_bin:shared
- /var/run/docker.sock:/var/run/docker.sock
- /var/cores:/var/cores
command: -c ${cluster_id} -k ${kvdb} -a -z -f
QuasarDB
- quasardb is a software-defined storage technology optimized for real-time analytics: the missing link between file systems and databases.
- quasardb does not force any schema on users. Data can be consumed directly by applications such as Microsoft Excel, ActivePivot, and Apache Spark, or through multi-language APIs.
- https://www.quasardb.net/-what-is-nosql-
qdb-ui:
ports:
- ${webport}:${webport}/tcp
environment:
DEVICE: ${device}
PEER: qdb1
PORT: '${qdbport}'
WEBPORT: '${webport}'
labels:
io.rancher.container.dns: 'true'
command:
- /start.sh
- httpd
image: makazi/quasardb:2.0.0-rc.8
net: host
qdb1-data:
labels:
io.rancher.container.start_once: 'true'
command:
- /bin/true
image: busybox
volumes:
- /var/db/qdb
- /var/lib/qdb
qdb2-data:
labels:
io.rancher.container.start_once: 'true'
command:
- /bin/true
image: busybox
volumes:
- /var/db/qdb
- /var/lib/qdb
qdb2:
ports:
- ${qdbport}:${qdbport}/tcp
environment:
ID: 2/2
DEVICE: ${device}
PEER: qdb1
PORT: '${qdbport}'
REPLICATION: ${replication}
labels:
io.rancher.sidekicks: qdb2-data
io.rancher.container.dns: 'true'
command:
- /start.sh
image: makazi/quasardb:2.0.0-rc.8
volumes_from:
- qdb2-data
net: host
qdb1:
ports:
- ${qdbport}:${qdbport}/tcp
environment:
ID: 1/2
DEVICE: ${device}
PORT: '${qdbport}'
REPLICATION: ${replication}
labels:
io.rancher.sidekicks: qdb1-data
io.rancher.container.dns: 'true'
command:
- /start.sh
image: makazi/quasardb:2.0.0-rc.8
volumes_from:
- qdb1-data
net: host
RabbitMQ
- RabbitMQ is middleware for message queuing, published as open source.
- An introduction to the basics of RabbitMQ for new programmers
- Day 13: RabbitMQ terms I personally found hard to grasp
- Getting started with RabbitMQ
rabbitmq:
image: rdaneel/rabbitmq-conf:0.2.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: rabbitmq-base,rabbitmq-datavolume
volumes_from:
- rabbitmq-datavolume
environment:
- RABBITMQ_NET_TICKTIME=${net_ticktime}
- RABBITMQ_CLUSTER_PARTITION_HANDLING=${cluster_partition_handling}
- CONFD_ARGS=${confd_args}
rabbitmq-datavolume:
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /etc/rabbitmq
- /opt/rancher/bin
entrypoint: /bin/true
image: rabbitmq:3.6-management
rabbitmq-base:
labels:
io.rancher.container.hostname_override: container_name
image: rabbitmq:3.6-management
restart: always
volumes_from:
- rabbitmq-datavolume
net: "container:rabbitmq"
entrypoint:
- /opt/rancher/bin/run.sh
environment:
- RABBITMQ_ERLANG_COOKIE=${erlang_cookie}
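- For reference, an application service in the same stack can reach this broker over AMQP just by linking to it. A sketch (the consumer image and env var are hypothetical):

myapp:
  image: example/amqp-consumer          # hypothetical application image
  links:
    - rabbitmq:rabbitmq
  environment:
    - AMQP_URL=amqp://guest:guest@rabbitmq:5672/   # image default guest account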
rancher-security-bench
- Lets you see the security status of each container
- Container security
rancher-bench-security:
image: germanramos/rancher-bench-security:1.11.0
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
net: host
pid: host
stdin_open: true
tty: true
volumes:
- /var/lib:/var/lib
- /var/run/docker.sock:/var/run/docker.sock
- /usr/lib/systemd:/usr/lib/systemd
- /etc:/etc
- /tmp:/tmp
environment:
- INTERVAL=${INTERVAL}
web-server:
image: germanramos/nginx-php-fpm:v5.6.21
stdin_open: true
tty: true
labels:
traefik.enable: stack
traefik.domain: ${TRAEFIK_DOMAIN}
traefik.port: 80
io.rancher.container.pull_image: always
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
volumes:
- /tmp/cis:/var/www/html
Registry
- A private Docker Hub
- Accounts can be linked with LDAP
db:
image: mysql:5.7.10
environment:
MYSQL_DATABASE: portus
MYSQL_ROOT_PASSWORD: ${ROOTPASSWORD}
MYSQL_USER: portus
MYSQL_PASSWORD: ${DBPASSWORD}
tty: true
stdin_open: true
volumes:
- ${DIR}/db:/var/lib/mysql
labels:
registry.portus.db: 1
sslproxy:
image: nginx:1.9.9
tty: true
stdin_open: true
links:
- portus:portus
volumes:
- ${DIR}/certs:/etc/nginx/certs:ro
- ${DIR}/proxy:/etc/nginx/conf.d:ro
labels:
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry:
image: registry:2.3.1
environment:
REGISTRY_LOG_LEVEL: warn
REGISTRY_STORAGE_DELETE_ENABLED: true
REGISTRY_AUTH: token
REGISTRY_AUTH_TOKEN_REALM: https://${DOMAIN}:${PPORT}/v2/token
REGISTRY_AUTH_TOKEN_SERVICE: ${DOMAIN}:${RPORT}
REGISTRY_AUTH_TOKEN_ISSUER: ${DOMAIN}
REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certs/registry.crt
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
REGISTRY_HTTP_TLS_KEY: /certs/registry.key
REGISTRY_HTTP_SECRET: httpsecret
REGISTRY_NOTIFICATIONS_ENDPOINTS: >
- name: portus
url: http://portus:3000/v2/webhooks/events
timeout: 500
threshold: 5
backoff: 1
tty: true
stdin_open: true
links:
- portus:portus
volumes:
- ${DIR}/certs:/certs
- ${DIR}/data:/var/lib/registry
lb:
image: rancher/load-balancer-service
tty: true
stdin_open: true
ports:
- ${RPORT}:5000/tcp
- ${PPORT}:443/tcp
labels:
io.rancher.loadbalancer.target.sslproxy: ${PPORT}=443
io.rancher.loadbalancer.target.registry: ${RPORT}=5000
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:not_host_label: lb=0,registry.enabled=false
links:
- registry:registry
- sslproxy:sslproxy
portus:
image: sshipway/portus:2.0.5
environment:
PORTUS_MACHINE_FQDN: ${DOMAIN}
PORTUS_PRODUCTION_HOST: db
PORTUS_PRODUCTION_DATABASE: portus
PORTUS_PRODUCTION_USERNAME: portus
PORTUS_PRODUCTION_PASSWORD: ${DBPASSWORD}
PORTUS_GRAVATAR_ENABLED: true
PORTUS_KEY_PATH: /certs/registry.key
PORTUS_PASSWORD: ${DBPASSWORD}
PORTUS_SECRET_KEY_BASE: ${ROOTPASSWORD}
PORTUS_CHECK_SSL_USAGE_ENABLED: true
PORTUS_SMTP_ENABLED: false
PORTUS_LDAP_ENABLED: ${LDAP}
PORTUS_LDAP_HOSTNAME: ${LDAPHOST}
PORTUS_LDAP_PORT: ${LDAPPORT}
PORTUS_LDAP_METHOD: ${LDAPTLS}
PORTUS_LDAP_BASE: ${LDAPBASE}
PORTUS_LDAP_UID: ${LDAPBINDUID}
PORTUS_LDAP_AUTHENTICATION_ENABLED: ${LDAPBIND}
PORTUS_LDAP_AUTHENTICATION_BIND_DN: ${LDAPBINDDN}
PORTUS_LDAP_AUTHENTICATION_PASSWORD: ${LDAPBINDPASS}
PORTUS_LDAP_GUESS_EMAIL_ENABLED: true
PORTUS_LDAP_GUESS_EMAIL_ATTR: mail
PORTUS_PORT: ${PPORT}
REGISTRY_SSL_ENABLED: true
REGISTRY_HOSTNAME: ${DOMAIN}
REGISTRY_PORT: ${RPORT}
REGISTRY_NAME: Registry
tty: true
stdin_open: true
volumes:
- ${DIR}/certs:/certs
- ${DIR}/proxy:/etc/nginx/conf.d
links:
- db:db
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry.portus.app: 1
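- Once the stack is up, any other stack can deploy images from this registry by prefixing the image name with the registry host and port. A sketch (`myteam/myapp` is a hypothetical repository):

myapp:
  image: ${DOMAIN}:${RPORT}/myteam/myapp:latest   # pulled from the private registry
  labels:
    io.rancher.container.pull_image: always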
Registry Convoy
- The Registry can be run on Convoy-backed volumes
db:
image: mysql:5.7.10
environment:
MYSQL_DATABASE: portus
MYSQL_ROOT_PASSWORD: ${ROOTPASSWORD}
MYSQL_USER: portus
MYSQL_PASSWORD: ${DBPASSWORD}
tty: true
stdin_open: true
volume_driver: ${DRIVER}
volumes:
- ${PFX}-db:/var/lib/mysql
labels:
registry.portus.db: 1
sslproxy:
image: nginx:1.9.9
tty: true
stdin_open: true
links:
- portus:portus
volume_driver: ${DRIVER}
volumes:
- ${PFX}-certs:/etc/nginx/certs:ro
- ${PFX}-proxy:/etc/nginx/conf.d:ro
labels:
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry:
image: registry:2.3.1
environment:
REGISTRY_LOG_LEVEL: warn
REGISTRY_STORAGE_DELETE_ENABLED: true
REGISTRY_AUTH: token
REGISTRY_AUTH_TOKEN_REALM: https://${DOMAIN}:${PPORT}/v2/token
REGISTRY_AUTH_TOKEN_SERVICE: ${DOMAIN}:${RPORT}
REGISTRY_AUTH_TOKEN_ISSUER: ${DOMAIN}
REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certs/registry.crt
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
REGISTRY_HTTP_TLS_KEY: /certs/registry.key
REGISTRY_HTTP_SECRET: httpsecret
REGISTRY_NOTIFICATIONS_ENDPOINTS: >
- name: portus
url: http://portus:3000/v2/webhooks/events
timeout: 500
threshold: 5
backoff: 1
tty: true
stdin_open: true
links:
- portus:portus
volume_driver: ${DRIVER}
volumes:
- ${PFX}-certs:/certs
- ${PFX}-data:/var/lib/registry
lb:
image: rancher/load-balancer-service
tty: true
stdin_open: true
ports:
- ${RPORT}:5000/tcp
- ${PPORT}:443/tcp
labels:
io.rancher.loadbalancer.target.sslproxy: ${PPORT}=443
io.rancher.loadbalancer.target.registry: ${RPORT}=5000
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:not_host_label: lb=0,registry.enabled=false
links:
- registry:registry
- sslproxy:sslproxy
portus:
image: sshipway/portus:2.0.5
environment:
PORTUS_MACHINE_FQDN: ${DOMAIN}
PORTUS_PRODUCTION_HOST: db
PORTUS_PRODUCTION_DATABASE: portus
PORTUS_PRODUCTION_USERNAME: portus
PORTUS_PRODUCTION_PASSWORD: ${DBPASSWORD}
PORTUS_GRAVATAR_ENABLED: true
PORTUS_KEY_PATH: /certs/registry.key
PORTUS_PASSWORD: ${DBPASSWORD}
PORTUS_SECRET_KEY_BASE: ${ROOTPASSWORD}
PORTUS_CHECK_SSL_USAGE_ENABLED: true
PORTUS_SMTP_ENABLED: false
PORTUS_LDAP_ENABLED: ${LDAP}
PORTUS_LDAP_HOSTNAME: ${LDAPHOST}
PORTUS_LDAP_PORT: ${LDAPPORT}
PORTUS_LDAP_METHOD: ${LDAPTLS}
PORTUS_LDAP_BASE: ${LDAPBASE}
PORTUS_LDAP_UID: cn
PORTUS_LDAP_AUTHENTICATION_ENABLED: ${LDAPBIND}
PORTUS_LDAP_AUTHENTICATION_BIND_DN: ${LDAPBINDDN}
PORTUS_LDAP_AUTHENTICATION_PASSWORD: ${LDAPBINDPASS}
PORTUS_LDAP_GUESS_EMAIL_ENABLED: true
PORTUS_LDAP_GUESS_EMAIL_ATTR: mail
PORTUS_PORT: ${PPORT}
REGISTRY_SSL_ENABLED: true
REGISTRY_HOSTNAME: ${DOMAIN}
REGISTRY_PORT: ${RPORT}
REGISTRY_NAME: Registry
tty: true
stdin_open: true
volume_driver: ${DRIVER}
volumes:
- ${PFX}-certs:/certs
- ${PFX}-proxy:/etc/nginx/conf.d
links:
- db:db
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry.portus.app: 1
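- With the convoy-nfs stack from the catalog deployed, ${DRIVER} would typically resolve to convoy-nfs, so the db service above ends up looking roughly like this (a sketch, assuming convoy-nfs is the driver name your Convoy stack registers):

db:
  image: mysql:5.7.10
  volume_driver: convoy-nfs             # assumes the convoy-nfs stack provides this driver
  volumes:
    - ${PFX}-db:/var/lib/mysql          # becomes a Convoy-managed named volume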
REX-Ray
- REX-Ray provides a vendor-agnostic storage orchestration engine. Its primary design goal is to provide persistent storage for Docker containers and for Mesos frameworks and tasks.
- It can also be used as a Go package, a CLI tool, and a Linux service for additional use cases.
rexray:
image: wlan0/sdc2
stdin_open: true
tty: true
privileged: true
net: host
environment:
STACK_NAME: ${SCALEIO_STACK_NAME}
SYSTEM_ID: ${SCALEIO_SYSTEM_ID}
MDM_IP: ${SCALEIO_MDM_IP}
volumes:
- /proc:/host/proc
labels:
io.rancher.container.pull_image: always
io.rancher.container.dns: true
io.rancher.scheduler.affinity:host_label: rexray.scaleio=true
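- With the REX-Ray service deployed, containers can request persistent volumes through the rexray Docker volume driver. A sketch (the volume name pgdata is hypothetical):

pg-on-scaleio:
  image: postgres:9.4
  volume_driver: rexray                 # REX-Ray registers this Docker volume driver
  volumes:
    - pgdata:/var/lib/postgresql/data   # provisioned on the ScaleIO backend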
RocketChat
- A chat application built with Meteor.
- It is BYOS (bring your own server), so you install and start it yourself.
- The UI looks like Slack; in fact it is almost exactly Slack.
- Installing Rocket.Chat, a Slack-like BYOS OSS chat
mongo:
image: mongo
# volumes:
# - ./data/runtime/db:/data/db
# - ./data/dump:/dump
command: mongod --smallfiles --oplogSize 128
rocketchat:
image: rocketchat/rocket.chat:latest
# volumes:
# - ./uploads:/app/uploads
environment:
- PORT=3000
- ROOT_URL=http://yourhost:3000
- MONGO_URL=mongodb://mongo:27017/rocketchat
links:
- mongo:mongo
ports:
- 3000:3000
# hubot, the popular chatbot (add the bot user first and change the password before starting this image)
hubot:
image: rocketchat/hubot-rocketchat
environment:
- ROCKETCHAT_URL=rocketchat:3000
- ROCKETCHAT_ROOM=GENERAL
- ROCKETCHAT_USER=bot
- ROCKETCHAT_PASSWORD=botpassword
- BOT_NAME=bot
# you can add more scripts as you'd like here, they need to be installable by npm
- EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-diagnostics
links:
- rocketchat:rocketchat
# this is used to expose the hubot port for notifications on the host on port 3001, e.g. for hubot-jenkins-notifier
ports:
- 3001:8080
Route53 DNS
- Integrates nicely with AWS Route53.
- From registering a domain with AWS Route53 to creating a certificate with Certificate Manager
ScaleIO NAS/DAS
- A distributed storage system / SDS (Software Defined Storage)
- Distributed storage systems include Sheepdog, DRBD, VSAN, and ScaleIO; these days they are often categorized as SDS (Software Defined Storage).
- An introduction to ScaleIO, software-based storage you can try starting tomorrow
tb:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
image: wlan0/tb
labels:
io.rancher.container.pull_image: always
io.rancher.container.hostname_override: container_name
sds:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
image: wlan0/sds
labels:
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.pull_image: always
mdm:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- /dev/shm:/dev/shm
image: wlan0/mdm
labels:
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/primary_mdm
io.rancher.container.pull_image: always
primary-mdm:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- /dev/shm:/dev/shm
image: wlan0/mdm
command: /usr/sbin/init
entrypoint: /run_mdm_and_configure.sh
labels:
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/mdm
io.rancher.container.pull_image: always
Secrets Bridge
- The Secrets Bridge service is a standardized way of integrating Rancher with Vault so that Docker containers connect securely to secrets in Vault at startup.
- The Secrets Bridge service consists of a server and agents
secrets-bridge:
image: rancher/secrets-bridge:v0.2.0
environment:
CATTLE_ACCESS_KEY: ${CATTLE_ACCESS_KEY}
CATTLE_SECRET_KEY: ${CATTLE_SECRET_KEY}
CATTLE_URL: ${CATTLE_URL}
VAULT_TOKEN: ${VAULT_TOKEN}
VAULT_CUBBYPATH: ${VAULT_CUBBYPATH}
command:
- server
- --vault-url
- ${VAULT_URL}
- --rancher-url
- ${CATTLE_URL}
- --rancher-secret
- ${CATTLE_SECRET_KEY}
- --rancher-access
- ${CATTLE_ACCESS_KEY}
secrets-bridge-lb:
ports:
- "${LBPORT}:8181"
image: rancher/load-balancer-service
links:
- secrets-bridge:secrets-bridge
Secrets Bridge Agents
- The Secrets Bridge service is a standardized way of integrating Rancher with Vault so that Docker containers connect securely to secrets in Vault at startup.
- The Secrets Bridge service consists of a server and agents; this stack deploys the agent side
secrets-bridge:
image: rancher/secrets-bridge:v0.2.0
command: agent --bridge-url ${BRIDGE_URL}
volumes:
- /var/run/docker.sock:/var/run/docker.sock
privileged: true
labels:
io.rancher.container.create_agent: true
io.rancher.container.agent.role: agent
io.rancher.scheduler.global: true
Sematext Docker Agent
- https://github.com/sematext/sematext-agent-docker
- Sematext Agent for Docker collects metrics, events, and logs from the Docker API for SPM Docker Monitoring and Logsene / Hosted ELK Log Management. It runs on CoreOS, RancherOS, Docker Swarm, Kubernetes, Apache Mesos, HashiCorp Nomad, Amazon ECS, and more; see the installation instructions.
sematext-agent:
image: 'sematext/sematext-agent-docker:${image_version}'
environment:
- LOGSENE_TOKEN=${logsene_token}
- SPM_TOKEN=${spm_token}
- GEOIP_ENABLED=${geoip_enabled}
- HTTPS_PROXY=${https_proxy}
- HTTP_PROXY=${http_proxy}
- MATCH_BY_IMAGE=${match_by_image}
- MATCH_BY_NAME=${match_by_name}
- SKIP_BY_IMAGE=${skip_by_image}
- SKIP_BY_NAME=${skip_by_name}
- LOGAGENT_PATTERNS=${logagent_patterns}
- KUBERNETES=${kubernetes}
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
io.rancher.scheduler.global: 'true'
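- As an example of the matching variables above, the agent can be limited to specific images with a regex. A sketch with placeholder tokens:

sematext-agent:
  image: sematext/sematext-agent-docker:latest
  restart: always
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    - SPM_TOKEN=YOUR_SPM_TOKEN          # placeholder tokens issued in the Sematext UI
    - LOGSENE_TOKEN=YOUR_LOGSENE_TOKEN
    - MATCH_BY_IMAGE=nginx|redis        # regex: collect only from matching images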
Sentry
- Displays event logs sent from applications in many languages
- Collecting JS error logs with Sentry
- The event-log collection tool Sentry looks impressive
sentry-postgres:
environment:
POSTGRES_USER: sentry
POSTGRES_PASSWORD: secret
PGDATA: /data/postgres/data
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
log_opt: {}
image: postgres:9.5.3
stdin_open: true
sentry-cron:
environment:
SENTRY_EMAIL_HOST: ${sentry_email_host}
SENTRY_EMAIL_PASSWORD: ${sentry_email_password}
SENTRY_EMAIL_PORT: '${sentry_email_port}'
SENTRY_EMAIL_USER: ${sentry_email_user}
SENTRY_SECRET_KEY: ${sentry_secret_key}
SENTRY_SERVER_EMAIL: ${sentry_server_email}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
command:
- run
- cron
log_opt: {}
image: sentry:8.5.0
links:
- sentry-postgres:postgres
- sentry-redis:redis
stdin_open: true
sentry-redis:
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
log_opt: {}
image: redis:3.2.0-alpine
stdin_open: true
sentry:
ports:
- ${sentry_public_port}:9000/tcp
environment:
SENTRY_EMAIL_HOST: ${sentry_email_host}
SENTRY_EMAIL_PASSWORD: ${sentry_email_password}
SENTRY_EMAIL_PORT: '${sentry_email_port}'
SENTRY_EMAIL_USER: ${sentry_email_user}
SENTRY_SECRET_KEY: ${sentry_secret_key}
SENTRY_SERVER_EMAIL: ${sentry_server_email}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
command:
- /bin/bash
- -c
- sentry upgrade --noinput && sentry createuser --email ${sentry_initial_user_email} --password ${sentry_initial_user_password} --superuser && /entrypoint.sh run web || /entrypoint.sh run web
log_opt: {}
image: sentry:8.5.0
links:
- sentry-postgres:postgres
- sentry-redis:redis
stdin_open: true
sentry-worker:
environment:
SENTRY_EMAIL_HOST: ${sentry_email_host}
SENTRY_EMAIL_PASSWORD: ${sentry_email_password}
SENTRY_EMAIL_PORT: '${sentry_email_port}'
SENTRY_EMAIL_USER: ${sentry_email_user}
SENTRY_SECRET_KEY: ${sentry_secret_key}
SENTRY_SERVER_EMAIL: ${sentry_server_email}
log_driver: ''
labels:
io.rancher.scheduler.global: 'true'
io.rancher.container.pull_image: always
tty: true
command:
- run
- worker
log_opt: {}
image: sentry:8.5.0
links:
- sentry-postgres:postgres
- sentry-redis:redis
stdin_open: true
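- Applications report to Sentry with a DSN issued per project in the Sentry UI. Inside the same stack, an app could be wired up roughly like this (a sketch; the image and DSN values are hypothetical):

myapp:
  image: example/myapp                  # hypothetical application image
  links:
    - sentry:sentry
  environment:
    # DSN format: {scheme}://{public_key}:{secret_key}@{host}:{port}/{project_id}
    - SENTRY_DSN=http://publickey:secretkey@sentry:9000/1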
SonarQube
- An integrated quality-management tool for programs, developed mainly by SonarSource of Switzerland
- Getting started with program quality management using SonarQube (overview)
- Checking source-code quality with SonarQube
sonarqube-data:
image: busybox
net: none
labels:
io.rancher.container.start_once: true
volumes:
- /opt/sonarqube/extensions/plugins
sonarqube:
image: sonarqube
ports:
- ${http_port}:9000
links:
- postgres
environment:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
SONARQUBE_JDBC_USERNAME: ${postgres_user}
SONARQUBE_JDBC_PASSWORD: ${postgres_password}
SONARQUBE_JDBC_URL: jdbc:postgresql://postgres/sonar
labels:
io.rancher.sidekicks: sonarqube-data
volumes_from:
- sonarqube-data
postgres-data:
image: busybox
net: none
labels:
io.rancher.container.start_once: true
volumes:
- ${postgres_data}
postgres:
image: postgres:latest
ports:
- ${postgres_port}:5432
environment:
PGDATA: ${postgres_data}
POSTGRES_DB: ${postgres_db}
POSTGRES_USER: ${postgres_user}
POSTGRES_PASSWORD: ${postgres_password}
tty: true
stdin_open: true
labels:
io.rancher.sidekicks: postgres-data
volumes_from:
- postgres-data
Sysdig
- sysdig is roughly strace + tcpdump + lsof + htop + iftop + Lua. It can also render a graphical curses-based UI.
- Trying out logging drivers
sysdig:
container_name: sysdig
privileged: true
stdin_open: true
tty: true
image: sysdig/sysdig:${VERSION}
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- /dev:/host/dev
- /proc:/host/proc:ro
- /boot:/host/boot:ro
- /lib/modules:/host/lib/modules:ro
- /usr:/host/usr:ro
labels:
io.rancher.scheduler.global: true
io.rancher.scheduler.affinity:host_label_ne: ${HOST_EXCLUDE_LABEL}
Sysdig Cloud
- The cloud-hosted service version of Sysdig
sysdig-agent:
container_name: sysdig-agent
privileged: true
image: sysdig/agent:${VERSION}
net: "host"
pid: "host"
environment:
ACCESS_KEY: ${SDC_ACCESS_KEY}
TAGS: "${SDC_TAGS}"
ADDITIONAL_CONF: "${SDC_ADDITIONAL_CONF}"
log_opt:
max-size: ${LOG_SIZE}
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- /dev:/host/dev
- /proc:/host/proc:ro
- /boot:/host/boot:ro
- /lib/modules:/host/lib/modules:ro
- /usr:/host/usr:ro
labels:
io.rancher.scheduler.global: true
io.rancher.scheduler.affinity:host_label_ne: ${HOST_EXCLUDE_LABEL}
Taiga
- An agile project-management tool with strikingly polished design. It feels less like a Trello clone and more like Alminium, the Redmine extension.
- Full-scale agile project management with TAIGA on Docker
postgres:
image: postgres
environment:
- POSTGRES_DB=taiga
- POSTGRES_USER=taiga
- POSTGRES_PASSWORD=password
volumes:
- ${DATABASE}:/var/lib/postgresql/data
rabbit:
image: rabbitmq:3
hostname: rabbit
redis:
image: redis:3
celery:
image: celery
links:
- rabbit
events:
image: kartoffeltoby/taiga-events:latest
links:
- rabbit
taiga:
image: kartoffeltoby/taiga:latest
restart: always
links:
- postgres
- events
- rabbit
- redis
environment:
- TAIGA_HOSTNAME=${DOMAIN}
- TAIGA_DB_HOST=postgres
- TAIGA_DB_NAME=taiga
- TAIGA_DB_USER=taiga
- TAIGA_DB_PASSWORD=password
- HTTPS_SELF_DOMAIN=${DOMAIN}
- TAIGA_SSL=True
- TAIGA_SLEEP=10
volumes:
- ${DATA}:/usr/src/taiga-back/media
TeamCity
- A distributed continuous-integration server
- TeamCity is a tool for continuous unit testing, code-quality analysis, and early reporting of build problems. Installation is very easy and takes only a few minutes, so you will quickly feel the quality of your code and release management improve. TeamCity supports Java, .NET, and Ruby development and integrates fully with major IDEs, version-control systems, and bug trackers.
- node × TeamCity integration: making node and npm available on TeamCity
- mocha × TeamCity integration: displaying test results and coverage in TeamCity
teamcity-data:
image: busybox
tty: true
volumes:
- /var/lib/teamcity
teamcity-server:
image: sjoerdmulder/teamcity:latest
ports:
- ${http_port}:8111
links:
- postgres:${postgres_container}
environment:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
labels:
io.rancher.sidekicks: teamcity-data
volumes_from:
- teamcity-data
postgres-data:
image: busybox
tty: true
volumes:
- ${postgres_data}
postgres:
image: postgres:latest
ports:
- ${postgres_port}:5432
environment:
PGDATA: ${postgres_data}
POSTGRES_DB: ${postgres_db}
POSTGRES_USER: ${postgres_user}
POSTGRES_PASSWORD: ${postgres_password}
tty: true
stdin_open: true
labels:
io.rancher.sidekicks: postgres-data
volumes_from:
- postgres-data
teamcity-agent:
image: sjoerdmulder/teamcity-agent:latest
links:
- teamcity-server:teamcity-server
environment:
TEAMCITY_SERVER: http://teamcity-server:8111
Traefik
- Traefik is a load balancer and reverse proxy that can reconfigure itself dynamically based on the state of various backends (docker, swarm, kubernetes, mesos, consul, zookeeper, and so on). It is written in Go and, like other Go tools, runs from a single binary. It also ships with a clean GUI.
- Service discovery for containers with traefik on CoreOS
- Casually exposing web services with Traefik and consul
traefik:
ports:
- ${admin_port}:8000/tcp
- ${http_port}:${http_port}/tcp
- ${https_port}:${https_port}/tcp
log_driver: ''
labels:
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.sidekicks: traefik-conf
io.rancher.container.hostname_override: container_name
tty: true
log_opt: {}
image: rawmind/alpine-traefik:1.1.2-1
environment:
- CONF_INTERVAL=${refresh_interval}
- TRAEFIK_HTTP_PORT=${http_port}
- TRAEFIK_HTTPS_PORT=${https_port}
- TRAEFIK_HTTPS_ENABLE=${https_enable}
- TRAEFIK_ACME_ENABLE=${acme_enable}
- TRAEFIK_ACME_EMAIL=${acme_email}
- TRAEFIK_ACME_ONDEMAND=${acme_ondemand}
- TRAEFIK_ACME_ONHOSTRULE=${acme_onhostrule}
volumes_from:
- traefik-conf
traefik-conf:
log_driver: ''
labels:
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.start_once: 'true'
tty: true
log_opt: {}
image: rawmind/rancher-traefik:0.3.4-18
net: none
volumes:
- /opt/tools
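- Backends register themselves with this Traefik purely through labels, the same scheme the web-server in the rancher-security-bench section uses. A minimal sketch with a demo container (the domain value is hypothetical):

whoami:
  image: emilevauge/whoami              # tiny demo HTTP server
  labels:
    traefik.enable: true                # publish this service through Traefik
    traefik.domain: mydomain.example    # hypothetical domain Traefik should route
    traefik.port: 80                    # container port Traefik forwards to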
Turtl
- A private place to keep your notes, research, passwords, bookmarks, dream journals, photos, documents, and more secure. Turtl's simple tagging and filtering are ideal for organizing and researching personal and professional projects.
- Think of Turtl as an Evernote with best-in-class privacy.
- https://turtlapp.com/
turtl-api-data:
labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /opt/api/uploads
- /var/lib/rethinkdb/instance1
turtl-api:
ports:
- 8181:8181/tcp
environment:
DISPLAY_ERRORS: ${DISPLAY_ERRORS}
FQDN: ${FQDN}
SITE_URL: ${SITE_URL}
LOCAL_UPLOAD_URL: ${LOCAL_UPLOAD_URL}
LOCAL_UPLOAD_PATH: ${LOCAL_UPLOAD_PATH}
AWS_S3_TOKEN: ${AWS_S3_TOKEN}
ADMIN_EMAIL: ${ADMIN_EMAIL}
EMAIL_FROM: ${EMAIL_FROM}
SMTP_USER: ${SMTP_USER}
SMTP_PASS: ${SMTP_PASS}
DEFAULT_STORAGE_LIMIT: ${DEFAULT_STORAGE_LIMIT}
STORAGE_INVITE_CREDIT: ${STORAGE_INVITE_CREDIT}
image: webofmars/turtl-docker:latest
stdin_open: true
tty: true
labels:
io.rancher.sidekicks: turtl-api-data
volumes_from:
- turtl-api-data
Weave Scope
- Weave Scope lets you see, through the browser, which containers and applications (processes) are running on your hosts, and maps their relationships in real time.
- Visualizing container topology in real time with Weave Scope
weavescope-probe:
image: weaveworks/scope:1.0.0
privileged: true
net: host
pid: host
labels:
io.rancher.scheduler.global: true
io.rancher.container.dns: true
links:
- weavescope-app
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
tty: true
command:
- "--probe.docker"
- "true"
- "--no-app"
- "weavescope-app"
weavescope-app:
image: weaveworks/scope:1.0.0
ports:
- "4040:4040"
command:
- "--no-probe"
Wekan
- Wekan is a kanban-style management tool and one of the Trello clones.
- Using Wekan with Docker
wekandb:
image: mongo
# volumes:
# - ./data/runtime/db:/data/db
# - ./data/dump:/dump
command: mongod --smallfiles --oplogSize 128
ports:
- 27017
wekan:
image: mquandalle/wekan
links:
- wekandb
environment:
- MONGO_URL=mongodb://wekandb/wekan
- ROOT_URL=http://localhost:80
ports:
- 80:80
Wordpress
- The classic open-source blogging software
wordpress:
image: wordpress
links:
- db:mysql
ports:
- ${public_port}:80
db:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: example
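- The wordpress image picks up its DB settings from the linked mysql container, but they can be made explicit (and the hard-coded example password parameterized) as in this sketch, where ${db_password} follows this article's placeholder convention:

wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - ${public_port}:80
  environment:
    WORDPRESS_DB_PASSWORD: ${db_password}   # must match the MariaDB root password
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: ${db_password}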
XPilots
- XPilot, a networked multiplayer shooting game
server:
environment:
PASSWORD: ${PASSWORD}
log_driver: ''
command:
- -server
log_opt: {}
tty: false
stdin_open: false
image: sshipway/xpilot:latest
labels:
xpilot: server
client:
environment:
DISPLAY: ${DISPLAY}
NAME: ${NAME}
SERVER: xpilot
log_driver: ''
command:
- xpilot
log_opt: {}
image: sshipway/xpilot:latest
links:
- server:xpilot
tty: false
stdin_open: false
labels:
io.rancher.scheduler.affinity:container_label_soft: xpilot=server
io.rancher.container.start_once: 'true'
Summary
- This turned into quite a long article.
- If you have corrections or samples you would like added, please let me know.
- Next I think I'll publish all of my k8s stacks as well.