docker
docker-compose
rancher

【Docker】A handy collection of docker-compose.yml examples



Background


  • I manage docker containers with Rancher, which is popular these days.

  • Rancher has a "CATALOG" of docker-compose.yml samples; the examples here were built from it.

  • I had always wondered what each of these services actually was, and since I want to get good at them anyway, this is my working memo.

  • It covers docker-compose files for a wide range of services, so it may be a useful reference.

  • It also serves as a record before things disappear, the way glusterFS did.


Usage


  • Substitute appropriate values for each ${xxx} placeholder.
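If you want to fill in those placeholders up front rather than at deploy time, one option (a sketch, not part of the original post) is Python's `string.Template`, whose `${xxx}` syntax happens to match the placeholders used in these samples. The values here are hypothetical:

```python
from string import Template

# A fragment in the style of the samples below; the values are made up.
fragment = "POSTGRES_DB: ${database_name}\nPOSTGRES_USER: ${database_user}"
values = {"database_name": "alfresco", "database_user": "alfresco"}

# safe_substitute leaves any unknown ${...} placeholder intact instead of raising
rendered = Template(fragment).safe_substitute(values)
print(rendered)
```

In a Rancher catalog these values are normally answered through the catalog questions instead, so treat this only as a convenience for local experiments.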


Especially recommended


docker-compose samples


Alfresco



docker-compose.yml

alfresco:
  environment:
    CIFS_ENABLED: 'false'
    FTP_ENABLED: 'false'
  tty: true
  image: webcenter/rancher-alfresco:v5.1-201605-1
  links:
    - postgres:db
  stdin_open: true
  ports:
    - 8080:8080/tcp
  volumes_from:
    - alfresco-data
alfresco-data:
  image: alpine
  volumes:
    - /opt/alfresco/alf_data
  net: none
  command: /bin/true
postgres:
  environment:
    PGDATA: /var/lib/postgresql/data/pgdata
    POSTGRES_DB: ${database_name}
    POSTGRES_PASSWORD: ${database_password}
    POSTGRES_USER: ${database_user}
  tty: true
  image: postgres:9.4
  stdin_open: true
  volumes_from:
    - postgres-data
postgres-data:
  labels:
    io.rancher.container.start_once: 'true'
  image: alpine
  volumes:
    - /var/lib/postgresql/data/pgdata
  net: none
  command: /bin/true



Apache Kafka



  • An open-source distributed messaging system released by LinkedIn in 2011. Kafka is designed to collect and deliver the high-volume data emitted by web services and the like (e.g., logs and events) with high throughput and low latency.

  • Fast: it can handle a very large volume of messages.

  • Scalable: a single Kafka cluster can handle large-scale messaging and scale elastically and transparently with no downtime.

  • Durable: messages are persisted to disk as files and replicated within the cluster, so data loss is prevented (terabytes of messages can be handled without hurting performance).

  • Distributed by Design: the cluster is designed for fault tolerance.

  • Reference: "Getting started with Apache Kafka"
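The `KAFKA_NUM_PARTITIONS` variable in the compose file below sets how many partitions a topic is split into. Conceptually, Kafka's default partitioner maps each keyed message to `hash(key) % num_partitions` (the real implementation uses a murmur2 hash; the toy hash here is only for illustration):

```python
def partition_for(key: bytes, num_partitions: int) -> int:
    """Toy stand-in for Kafka's default partitioner (the real one uses murmur2)."""
    h = sum(key)  # deterministic toy hash, NOT Kafka's actual algorithm
    return h % num_partitions

# Messages with the same key always land in the same partition,
# which is what preserves per-key ordering in Kafka.
p1 = partition_for(b"user-42", 8)
p2 = partition_for(b"user-42", 8)
assert p1 == p2
print(p1)
```

Adding partitions to an existing topic changes this mapping for new messages, which is why the partition count is usually chosen up front.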


docker-compose.yml


broker:
  tty: true
  image: rawmind/alpine-kafka:0.10.0.1-1
  volumes_from:
    - broker-volume
    - broker-conf
  environment:
    - JVMFLAGS=-Xmx${kafka_mem}m -Xms${kafka_mem}m
    - CONFD_INTERVAL=${kafka_interval}
    - ZK_SERVICE=${zk_link}
    - KAFKA_DELETE_TOPICS=${kafka_delete_topics}
    - KAFKA_LOG_DIRS=${kafka_log_dir}
    - KAFKA_LOG_RETENTION_HOURS=${kafka_log_retention}
    - KAFKA_NUM_PARTITIONS=${kafka_num_partitions}
    - ADVERTISE_PUB_IP=${kafka_pub_ip}
  external_links:
    - ${zk_link}:zk
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: broker-volume, broker-conf
broker-conf:
  net: none
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  image: rawmind/rancher-kafka:0.10.0.0-3
  volumes:
    - /opt/tools
broker-volume:
  net: none
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  environment:
    - SERVICE_UID=10003
    - SERVICE_GID=10003
    - SERVICE_VOLUME=${kafka_log_dir}
  volumes:
    - ${kafka_log_dir}
  volume_driver: local
  image: rawmind/alpine-volume:0.0.2-1



Apache Zookeeper



  • A high-performance coordination service for distributed applications.

  • Provides the building blocks distributed applications need: synchronization, configuration management, grouping, naming, and so on.

  • Reference: "What is ZooKeeper"


docker-compose.yml

zk:
  tty: true
  image: rawmind/alpine-zk:3.4.9
  volumes_from:
    - zk-volume
    - zk-conf
  environment:
    - JVMFLAGS=-Xmx${zk_mem}m -Xms${zk_mem}m
    - ZK_DATA_DIR=${zk_data_dir}
    - ZK_INIT_LIMIT=${zk_init_limit}
    - ZK_MAX_CLIENT_CXNS=${zk_max_client_cxns}
    - ZK_SYNC_LIMIT=${zk_sync_limit}
    - ZK_TICK_TIME=${zk_tick_time}
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: zk-volume, zk-conf
zk-conf:
  net: none
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  image: rawmind/rancher-zk:3.4.8-5
  volumes:
    - /opt/tools
zk-volume:
  net: none
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  environment:
    - SERVICE_UID=10002
    - SERVICE_GID=10002
    - SERVICE_VOLUME=${zk_data_dir}
  volumes:
    - ${zk_data_dir}
  volume_driver: local
  image: rawmind/alpine-volume:0.0.2-1


asciinema.org



docker-compose.yml

asciinema-org:
  image: 'asciinema/asciinema.org:latest'
  links:
    - postgres
    - redis
  restart: always
  ports:
    - ${port}:3000
  environment:
    HOST: ${host}:${port}
    DATABASE_URL: postgresql://postgres:${postgres_password}@postgres/asciinema
    REDIS_URL: redis://redis:6379
    RAILS_ENV: development

postgres:
  image: 'postgres:latest'
  ports:
    - 5432:5432
  environment:
    POSTGRES_PASSWORD: ${postgres_password}
  container_name: postgres

redis:
  image: 'redis:latest'
  ports:
    - 6379:6379
  container_name: redis

sidekiq:
  image: 'asciinema/asciinema.org:latest'
  links:
    - postgres
    - redis
  command: 'ruby start_sidekiq.rb'
  environment:
    HOST: ${host}:${port}
    DATABASE_URL: postgresql://postgres:${postgres_password}@postgres/asciinema
    REDIS_URL: redis://redis:6379
    RAILS_ENV: development



Bind9 Domain Name Server



docker-compose.yml

bind9:
  image: digitallumberjack/docker-bind9:v1.2.0
  ports:
    - ${BIND9_PORT}:53/tcp
    - ${BIND9_PORT}:53/udp
  environment:
    BIND9_ROOTDOMAIN: ${BIND9_ROOTDOMAIN}
    BIND9_KEYNAME: ${BIND9_KEYNAME}
    BIND9_KEY: ${BIND9_KEY}
    BIND9_FORWARDERS: ${BIND9_FORWARDERS}
    RANCHER_ENV: "true"



Cloud9



docker-compose.yml

cloud9-sdk:
  command: "--listen 0.0.0.0 --port ${cloud9_port} -w /workspace --collab --auth ${cloud9_user}:${cloud9_pass}"
  image: "rawmind/cloud9-sdk:0.3.0-2"
  restart: "always"
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    - "/usr/local/bin/docker:/bin/docker"
    - "/workspace"
  environment:
    GIT_REPO: ${cloud9_repo}
  labels:
    traefik.domain: ${cloud9_domain}
    traefik.port: ${cloud9_port}
    traefik.enable: ${cloud9_publish}



CloudFlare



  • CloudFlare is a system that caches your server's cacheable files (images, CSS, JS, and so on) on servers distributed around the world. This greatly improves site responsiveness and speeds up page loads.

  • Reference: "Introducing CloudFlare"


docker-compose.yml

cloudflare:
  image: rancher/external-dns:v0.6.0
  command: -provider=cloudflare
  expose:
    - 1000
  environment:
    CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
    CLOUDFLARE_KEY: ${CLOUDFLARE_KEY}
    ROOT_DOMAIN: ${ROOT_DOMAIN}
    NAME_TEMPLATE: ${NAME_TEMPLATE}
    TTL: ${TTL}
  labels:
    io.rancher.container.create_agent: "true"
    io.rancher.container.agent.role: "external-dns"



Concrete5



docker-compose.yml

CMSMysql:
  environment:
    MYSQL_ROOT_PASSWORD: ${root_password}
    MYSQL_DATABASE: ${db_name}
    MYSQL_USER: ${db_username}
    MYSQL_PASSWORD: ${db_password}
  labels:
    io.rancher.container.pull_image: always
  tty: true
  image: mysql
  volumes:
    - ${db_data_location}:/var/lib/mysql
  stdin_open: true
  volume_driver: ${volume_driver}

CMSConfig:
  image: opensaas/concrete5
  tty: true
  stdin_open: true
  links:
    - CMSMysql:mysql
  volumes:
    - ${cms_application_data}:/var/www/html/application
    - ${cms_packages_data}:/var/www/html/packages
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volume_driver: ${volume_driver}
  command: bash -c "chown -R www-data. application; chown -R www-data. packages; sleep 2m; php -f concrete/bin/concrete5.php c5:install --db-server=mysql --db-username=${db_username} --db-password=${db_password} --db-database=${db_name} --site=${cms_sitename} --admin-email=${cms_admin_email} --admin-password=${cms_admin_password} -n -vvv"

Concrete5App:
  labels:
    io.rancher.container.pull_image: always
    io.rancher.sidekicks: CMSConfig
  tty: true
  links:
    - CMSMysql:mysql
  image: opensaas/concrete5
  volumes:
    - ${cms_application_data}:/var/www/html/application
    - ${cms_packages_data}:/var/www/html/packages
  volume_driver: ${volume_driver}
  stdin_open: true



Confluence



docker-compose.yml

confluence:
  image: sanderkleykens/confluence:6.0.1
  restart: always
  environment:
    - CATALINA_OPTS=-Xms${heap_size} -Xmx${heap_size} ${jvm_args}
    - CONFLUENCE_PROXY_PORT=${proxy_port}
    - CONFLUENCE_PROXY_NAME=${proxy_name}
    - CONFLUENCE_PROXY_SCHEME=${proxy_scheme}
    - CONFLUENCE_CONTEXT_PATH=${context_path}
  external_links:
    - ${database_link}:database
  volumes:
    - ${confluence_home}:/var/atlassian/confluence



Consul Cluster



docker-compose.yml

consul-conf:
  image: husseingalal/consul-config
  labels:
    io.rancher.container.hostname_override: container_name
  volumes_from:
    - consul
  net: "container:consul"
consul:
  image: husseingalal/consul
  labels:
    io.rancher.sidekicks: consul-conf
  volumes:
    - /opt/rancher/ssl
    - /opt/rancher/config
    - /var/consul



Consul-Registrator



docker-compose.yml

consul-registrator:
  log_driver: ''
  labels:
    io.rancher.sidekicks: consul,consul-data
    io.rancher.scheduler.global: 'true'
    io.rancher.container.pull_image: always
    io.rancher.container.hostname_override: container_name
  tty: true
  restart: always
  command:
    - consul://consul:8500
  log_opt: {}
  image: gliderlabs/registrator:v7
  links:
    - consul
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  stdin_open: true
consul:
  ports:
    - 8300:8300/tcp
    - 8301:8301/tcp
    - 8301:8301/udp
    - 8302:8302/tcp
    - 8302:8302/udp
    - 8400:8400/tcp
    - 8500:8500/tcp
    - 8600:8600/tcp
    - 8600:8600/udp
  log_driver: ''
  labels:
    io.rancher.container.pull_image: always
    io.rancher.scheduler.global: 'true'
    io.rancher.container.hostname_override: container_name
    io.rancher.container.dns: true
  tty: true
  net: host
  restart: always
  command:
    - agent
    - -retry-join
    - ${consul_server}
    - -recursor=169.254.169.250
    - -client=0.0.0.0
  environment:
    CONSUL_LOCAL_CONFIG: "{\"leave_on_terminate\": true, \"datacenter\": \"${consul_datacenter}\"}"
    CONSUL_BIND_INTERFACE: eth0
  volumes_from:
    - consul-data
  log_opt: {}
  image: consul:v0.6.4
  stdin_open: true
consul-data:
  image: consul:v0.6.4
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.scheduler.global: 'true'
    io.rancher.container.start_once: true
  volumes:
    - /consul/data
  entrypoint: /bin/true



DataDog Agent & DogStatsD



docker-compose.yml

datadog-init:
  image: janeczku/datadog-rancher-init:v2.2.3
  net: none
  command: /bin/true
  volumes:
    - /opt/rancher
  labels:
    io.rancher.container.start_once: 'true'
    io.rancher.container.pull_image: always
datadog-agent:
  image: datadog/docker-dd-agent:11.3.585
  entrypoint: /opt/rancher/entrypoint-wrapper.py
  command:
    - supervisord
    - -n
    - -c
    - /etc/dd-agent/supervisor.conf
  restart: always
  environment:
    API_KEY: ${api_key}
    SD_BACKEND_HOST: ${sd_backend_host}
    SD_BACKEND_PORT: ${sd_backend_port}
    SD_TEMPLATE_DIR: ${sd_template_dir}
    STATSD_METRIC_NAMESPACE: ${statsd_namespace}
    DD_STATSD_STANDALONE: "${statsd_standalone}"
    DD_HOST_LABELS: ${host_labels}
    DD_CONTAINER_LABELS: ${service_labels}
    DD_SERVICE_DISCOVERY: ${service_discovery}
    DD_SD_CONFIG_BACKEND: ${sd_config_backend}
    DD_CONSUL_TOKEN: ${dd_consul_token}
    DD_CONSUL_SCHEME: ${dd_consul_scheme}
    DD_CONSUL_VERIFY: ${dd_consul_verify}
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /proc/:/host/proc/:ro
    - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
  volumes_from:
    - datadog-init
  labels:
    io.rancher.scheduler.global: "${global_service}"
    io.rancher.sidekicks: 'datadog-init'



DNS Update (RFC2136)



docker-compose.yml

rfc2136dns:
  image: rancher/external-dns:v0.6.0
  command: -provider=rfc2136
  expose:
    - 1000
  environment:
    RFC2136_HOST: ${RFC2136_HOST}
    RFC2136_PORT: ${RFC2136_PORT}
    RFC2136_TSIG_KEYNAME: ${RFC2136_TSIG_KEYNAME}
    RFC2136_TSIG_SECRET: ${RFC2136_TSIG_SECRET}
    ROOT_DOMAIN: ${ROOT_DOMAIN}
    NAME_TEMPLATE: ${NAME_TEMPLATE}
    TTL: ${TTL}
  labels:
    io.rancher.container.create_agent: "true"
    io.rancher.container.agent.role: "external-dns"



DNSimple DNS



docker-compose.yml

dnsimple:
  image: rancher/external-dns:v0.6.0
  command: -provider=dnsimple
  expose:
    - 1000
  environment:
    DNSIMPLE_TOKEN: ${DNSIMPLE_TOKEN}
    DNSIMPLE_EMAIL: ${DNSIMPLE_EMAIL}
    ROOT_DOMAIN: ${ROOT_DOMAIN}
    NAME_TEMPLATE: ${NAME_TEMPLATE}
    TTL: ${TTL}
  labels:
    io.rancher.container.create_agent: "true"
    io.rancher.container.agent.role: "external-dns"


DokuWiki



docker-compose.yml

dokuwiki-server:
  ports:
    - ${http_port}:80/tcp
  labels:
    io.rancher.sidekicks: dokuwiki-data
  hostname: ${dokuwiki_hostname}
  image: ununseptium/dokuwiki-docker
  volumes_from:
    - dokuwiki-data

dokuwiki-data:
  labels:
    io.rancher.container.start_once: 'true'
  entrypoint:
    - /bin/true
  hostname: dokuwikidata
  image: ununseptium/dokuwiki-docker
  volumes:
    - /var/www/html/data
    - /var/www/html/lib/plugins



Drone



docker-compose.yml

drone-lb:
  ports:
    - ${public_port}:8000
  tty: true
  image: rancher/load-balancer-service
  links:
    - drone-server:drone-server
  stdin_open: true

drone-healthcheck:
  image: rancher/drone-config:v0.1.0
  net: 'container:drone-server'
  volumes_from:
    - drone-data-volume
  entrypoint: /giddyup health

drone-server:
  image: rancher/drone-config:v0.1.0
  volumes_from:
    - drone-data-volume
  labels:
    io.rancher.sidekicks: drone-data-volume,drone-daemon,drone-healthcheck
  external_links:
    - ${database_service}:database

drone-daemon:
  image: rancher/drone:0.4
  net: 'container:drone-server'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  volumes_from:
    - drone-data-volume
  entrypoint: /opt/rancher/rancher_entry.sh

## Do not change below. Could cause data loss in upgrade.
drone-data-volume:
  image: busybox
  net: none
  command: /bin/true
  labels:
    io.rancher.container.start_once: 'true'
  volumes:
    - /var/lib/drone
    - /etc/drone
    - /opt/rancher



Drone Rancher Node Manager



  • Appears to be a CI tool for Rancher: one agent runs per host and registers a single worker.


docker-compose.yml

drone-agent:
  labels:
    io.rancher.scheduler.global: 'true'
  tty: true
  image: rancher/socat-docker
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker:/var/lib/docker
  stdin_open: true
dynamic-drones-mgr-0:
  environment:
    DRONE_TOKEN: ${DRONE_TOKEN}
    DRONE_URL: http://droneserver:8000
  external_links:
    - ${DRONE_SERVICE}:droneserver
  labels:
    io.rancher.container.pull_image: always
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/drone-agent
  tty: true
  entrypoint:
    - /dynamic-drone-nodes
    - /stacks/${STACK_NAME}/services/drone-agent
  image: rancher/drone-config:v0.1.0
  stdin_open: true



Rancher ECR Credentials Updater



  • Provides access to EC2 Container Registry, which is well suited to keeping container images you don't want to publish externally inside AWS.

  • Reference: "Amazon ECR + ECS CLI hands-on"


docker-compose.yml

ecr-updater:
  environment:
    AWS_ACCESS_KEY_ID: ${aws_access_key_id}
    AWS_SECRET_ACCESS_KEY: ${aws_secret_access_key}
    AWS_REGION: ${aws_region}
  labels:
    io.rancher.container.pull_image: always
    io.rancher.container.create_agent: 'true'
    io.rancher.container.agent.role: environment
  tty: true
  image: objectpartners/rancher-ecr-credentials:1.1.0
  stdin_open: true



Elasticsearch



docker-compose.yml

elasticsearch-masters:
  image: rancher/elasticsearch-conf:v0.4.0
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: elasticsearch-base-master,elasticsearch-datavolume-masters
elasticsearch-datavolume-masters:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volumes:
    - /usr/share/elasticsearch/data
  entrypoint: /bin/true
  image: elasticsearch:1.7.3
elasticsearch-base-master:
  labels:
    elasticsearch.master.config.version: '0'
    io.rancher.container.hostname_override: container_name
  image: elasticsearch:1.7.3
  net: "container:elasticsearch-masters"
  volumes_from:
    - elasticsearch-masters
    - elasticsearch-datavolume-masters
  entrypoint:
    - /opt/rancher/bin/run.sh

elasticsearch-datanodes:
  image: rancher/elasticsearch-conf:v0.4.0
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: elasticsearch-base-datanode,elasticsearch-datavolume-datanode
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
  links:
    - elasticsearch-masters:es-masters
elasticsearch-datavolume-datanode:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volumes:
    - /usr/share/elasticsearch/data
  entrypoint: /bin/true
  image: elasticsearch:1.7.3
elasticsearch-base-datanode:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
  image: elasticsearch:1.7.3
  links:
    - elasticsearch-masters:es-masters
  entrypoint:
    - /opt/rancher/bin/run.sh
  volumes_from:
    - elasticsearch-datanodes
    - elasticsearch-datavolume-datanode
  net: "container:elasticsearch-datanodes"

elasticsearch-clients:
  image: rancher/elasticsearch-conf:v0.4.0
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: elasticsearch-base-clients,elasticsearch-datavolume-clients
  links:
    - elasticsearch-masters:es-masters
elasticsearch-datavolume-clients:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volumes:
    - /usr/share/elasticsearch/data
  entrypoint: /bin/true
  image: elasticsearch:1.7.3
elasticsearch-base-clients:
  labels:
    elasticsearch.client.config.version: '0'
    io.rancher.container.hostname_override: container_name
  image: elasticsearch:1.7.3
  volumes_from:
    - elasticsearch-clients
    - elasticsearch-datavolume-clients
  net: "container:elasticsearch-clients"
  entrypoint:
    - /opt/rancher/bin/run.sh

kopf:
  image: rancher/kopf:v0.4.0
  ports:
    - "80:80"
  environment:
    KOPF_SERVER_NAME: 'es.dev'
    KOPF_ES_SERVERS: 'es-clients:9200'
  labels:
    io.rancher.container.hostname_override: container_name
  links:
    - elasticsearch-clients:es-clients



Elasticsearch 2.x



docker-compose.yml

elasticsearch-masters:
  image: rancher/elasticsearch-conf:v0.5.0
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: elasticsearch-base-master,elasticsearch-datavolume-masters
  volumes_from:
    - elasticsearch-datavolume-masters
elasticsearch-datavolume-masters:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volumes:
    - /usr/share/elasticsearch/data
    - /usr/share/elasticsearch/config
    - /opt/rancher/bin
  entrypoint: /bin/true
  image: elasticsearch:2.2.1
elasticsearch-base-master:
  labels:
    elasticsearch.master.config.version: '0'
    io.rancher.container.hostname_override: container_name
  image: elasticsearch:2.2.1
  net: "container:elasticsearch-masters"
  volumes_from:
    - elasticsearch-datavolume-masters
  entrypoint:
    - /opt/rancher/bin/run.sh

elasticsearch-datanodes:
  image: rancher/elasticsearch-conf:v0.5.0
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: elasticsearch-base-datanode,elasticsearch-datavolume-datanode
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
  links:
    - elasticsearch-masters:es-masters
  volumes_from:
    - elasticsearch-datavolume-datanode
elasticsearch-datavolume-datanode:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volumes:
    - /usr/share/elasticsearch/data
    - /usr/share/elasticsearch/config
    - /opt/rancher/bin
  entrypoint: /bin/true
  image: elasticsearch:2.2.1
elasticsearch-base-datanode:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
  image: elasticsearch:2.2.1
  links:
    - elasticsearch-masters:es-masters
  entrypoint:
    - /opt/rancher/bin/run.sh
  volumes_from:
    - elasticsearch-datavolume-datanode
  net: "container:elasticsearch-datanodes"

elasticsearch-clients:
  image: rancher/elasticsearch-conf:v0.5.0
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: elasticsearch-base-clients,elasticsearch-datavolume-clients
  links:
    - elasticsearch-masters:es-masters
  volumes_from:
    - elasticsearch-datavolume-clients
elasticsearch-datavolume-clients:
  labels:
    elasticsearch.datanode.config.version: '0'
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volumes:
    - /usr/share/elasticsearch/data
    - /usr/share/elasticsearch/config
    - /opt/rancher/bin
  entrypoint: /bin/true
  image: elasticsearch:2.2.1
elasticsearch-base-clients:
  labels:
    elasticsearch.client.config.version: '0'
    io.rancher.container.hostname_override: container_name
  image: elasticsearch:2.2.1
  volumes_from:
    - elasticsearch-datavolume-clients
  net: "container:elasticsearch-clients"
  entrypoint:
    - /opt/rancher/bin/run.sh

kopf:
  image: rancher/kopf:v0.4.0
  ports:
    - "${kopf_port}:80"
  environment:
    KOPF_SERVER_NAME: 'es.dev'
    KOPF_ES_SERVERS: 'es-clients:9200'
  labels:
    io.rancher.container.hostname_override: container_name
  links:
    - elasticsearch-clients:es-clients



AWS ELB Classic External LB



docker-compose.yml

elbv1:
  image: rancher/external-lb:v0.2.1
  command: -provider=elbv1
  expose:
    - 1000
  environment:
    ELBV1_AWS_ACCESS_KEY: ${ELBV1_AWS_ACCESS_KEY}
    ELBV1_AWS_SECRET_KEY: ${ELBV1_AWS_SECRET_KEY}
    ELBV1_AWS_REGION: ${ELBV1_AWS_REGION}
    ELBV1_AWS_VPCID: ${ELBV1_AWS_VPCID}
    ELBV1_USE_PRIVATE_IP: ${ELBV1_USE_PRIVATE_IP}
  labels:
    io.rancher.container.create_agent: "true"
    io.rancher.container.agent.role: "external-dns"



Etcd



  • etcd is an open-source, highly reliable distributed KVS written in Go. An etcd cluster lets applications deployed in Docker container environments (on CoreOS and the like) exchange and share service configuration data. Comparable middleware includes Apache ZooKeeper and consul.


  • Reference: "Connecting to remote containers easily with etcd + docker"


  • Reference: "Getting started with etcd on CentOS"
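What makes a KVS like etcd safe for shared configuration is that writes can be made conditional (compare-and-swap), so two applications cannot silently overwrite each other. A toy in-memory sketch of that contract (not the actual etcd client API):

```python
class ToyKV:
    """In-memory sketch of etcd-style compare-and-swap semantics."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def compare_and_swap(self, key, expected, new):
        # Atomic in real etcd; here it only illustrates the contract:
        # the write happens only if the current value matches `expected`.
        if self._data.get(key) != expected:
            return False
        self._data[key] = new
        return True

kv = ToyKV()
kv.put("/config/db_host", "10.0.0.1")
assert kv.compare_and_swap("/config/db_host", "10.0.0.1", "10.0.0.2")      # succeeds
assert not kv.compare_and_swap("/config/db_host", "10.0.0.1", "10.0.0.3")  # stale expectation
```

In real etcd this shows up as conditional transactions (and, in the v2 API of the image below, `prevValue`/`prevIndex` parameters).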



docker-compose.yml

etcd:
  image: rancher/etcd:v2.3.7-11
  labels:
    io.rancher.scheduler.affinity:host_label_soft: etcd=true
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.sidekicks: data
  environment:
    RANCHER_DEBUG: '${RANCHER_DEBUG}'
    EMBEDDED_BACKUPS: '${EMBEDDED_BACKUPS}'
    BACKUP_PERIOD: '${BACKUP_PERIOD}'
    BACKUP_RETENTION: '${BACKUP_RETENTION}'
  volumes:
    - etcd:/pdata
    - ${BACKUP_LOCATION}:/data-backup
  volumes_from:
    - data
data:
  image: busybox
  command: /bin/true
  net: none
  volumes:
    - /data
  labels:
    io.rancher.container.start_once: 'true'


F5 BIG-IP



docker-compose.yml

external-lb:
  image: rancher/external-lb:v0.1.1
  command: -provider=f5_BigIP
  expose:
    - 1000
  environment:
    F5_BIGIP_HOST: ${F5_BIGIP_HOST}
    F5_BIGIP_USER: ${F5_BIGIP_USER}
    F5_BIGIP_PWD: ${F5_BIGIP_PWD}
    LB_TARGET_RANCHER_SUFFIX: ${LB_TARGET_RANCHER_SUFFIX}



FBCTF


docker-compose.yml

fbctf:
  image: 'qazbnm456/dockerized_fbctf:multi_containers'
  links:
    - mysql
    - memcached
  restart: always
  ports:
    - ${http_port}:80
    - ${https_port}:443
  environment:
    MYSQL_HOST: mysql
    MYSQL_PORT: 3306
    MYSQL_DATABASE: ${mysql_database}
    MYSQL_USER: ${mysql_user}
    MYSQL_PASSWORD: ${mysql_password}
    MEMCACHED_PORT: 11211
    SSL_SELF_SIGNED: ${ssl}

mysql:
  image: 'mysql:5.5'
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_USER: ${mysql_user}
    MYSQL_PASSWORD: ${mysql_password}
  container_name: mysql

memcached:
  image: 'memcached:latest'
  restart: always
  container_name: memcached



Ghost



docker-compose.yml

ghost:
  image: ghost
  ports:
    - ${public_port}:2368



GitLab



  • GitLab is open-source software that lets you run a GitHub-like service in a closed environment, such as inside your company.


docker-compose.yml




Gocd agent



  • One of the CI/CD tools (in the same family as Jenkins, Travis CI, etc.).


docker-compose.yml

gocd-agent:
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.container.hostname_override: container_name
    gocd.role: agent
  tty: true
  image: rawmind/rancher-goagent:16.2.1-1
  external_links:
    - ${goserver_ip}:gocd-server
  environment:
    - AGENT_MEM=${mem_initial}m
    - AGENT_MAX_MEM=${mem_max}m
    - GO_SERVER=gocd-server.rancher.internal
    - GO_SERVER_PORT=${goserver_port}
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock



Gocd server



  • One of the CI/CD tools (in the same family as Jenkins, Travis CI, etc.).


docker-compose.yml

gocd-server:
  labels:
    gocd.role: server
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: gocd-volume
  tty: true
  image: rawmind/rancher-goserver:16.2.1-3
  volumes_from:
    - gocd-volume
  environment:
    - SERVER_MEM=${mem_initial}m
    - SERVER_MAX_MEM=${mem_max}m
  ports:
    - ${public_port}:8153
    - ${ssh_port}:8154
gocd-volume:
  net: none
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  volumes:
    - ${volume_work}:/opt/go-server/work
  volume_driver: ${volume_driver}
  entrypoint: /bin/true
  image: busybox



Gogs



docker-compose.yml

gogs:
  image: gogs/gogs:latest
  ports:
    - ${http_port}:3000
    - ${ssh_port}:22
  links:
    - mysql:db

mysql:
  image: mysql:latest
  ports:
    - ${public_port}:3306
  environment:
    MYSQL_ROOT_PASSWORD: ${mysql_password}



Grafana



docker-compose.yml

grafana:
  image: grafana/grafana:latest
  ports:
    - ${http_port}:3000
  environment:
    GF_SECURITY_ADMIN_USER: ${admin_username}
    GF_SECURITY_ADMIN_PASSWORD: ${admin_password}
    GF_SECURITY_SECRET_KEY: ${secret_key}



Hadoop + Yarn



docker-compose.yml

bootstrap-hdfs:
  image: rancher/hadoop-base:v0.3.5
  labels:
    io.rancher.container.start_once: true
  command: 'su -c "sleep 20 && exec /bootstrap-hdfs.sh" hdfs'
  net: "container:namenode-primary"
  volumes_from:
    - namenode-primary-data
sl-namenode-config:
  image: rancher/hadoop-followers-config:v0.3.5
  net: "container:namenode-primary"
  environment:
    NODETYPE: "hdfs"
  volumes_from:
    - namenode-primary-data
namenode-config:
  image: rancher/hadoop-config:v0.3.5
  net: "container:namenode-primary"
  volumes_from:
    - namenode-primary-data
namenode-primary:
  image: rancher/hadoop-base:v0.3.5
  command: 'su -c "sleep 15 && /usr/local/hadoop-2.7.1/bin/hdfs namenode" hdfs'
  volumes_from:
    - namenode-primary-data
  ports:
    - 50070:50070
  labels:
    io.rancher.sidekicks: namenode-config,sl-namenode-config,bootstrap-hdfs,namenode-primary-data
    io.rancher.container.hostname_override: container_name
    io.rancher.scheduler.affinity:container_label_soft: io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager,io.rancher.stack_service.name=$${stack_name}/jobhistory-server
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/datanode
namenode-primary-data:
  image: rancher/hadoop-base:v0.3.5
  volumes:
    - '${cluster}-namenode-primary-config:/etc/hadoop'
    - '/tmp'
  net: none
  labels:
    io.rancher.container.start_once: true
  command: '/bootstrap-local.sh'

datanode-config:
  image: rancher/hadoop-config:v0.3.5
  net: "container:datanode"
  volumes_from:
    - datanode-data
datanode-data:
  image: rancher/hadoop-base:v0.3.5
  net: none
  volumes:
    - '${cluster}-datanode-config:/etc/hadoop'
    - '/tmp'
  labels:
    io.rancher.container.start_once: true
  command: '/bootstrap-local.sh'
datanode:
  image: rancher/hadoop-base:v0.3.5
  volumes_from:
    - datanode-data
  labels:
    io.rancher.sidekicks: datanode-config,datanode-data
    io.rancher.container.hostname_override: container_name
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/namenode-primary,io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager
  links:
    - 'namenode-primary:namenode'
  command: 'su -c "sleep 45 && exec /usr/local/hadoop-2.7.1/bin/hdfs datanode" hdfs'

yarn-nodemanager-config:
  image: rancher/hadoop-config:v0.3.5
  net: "container:yarn-nodemanager"
  volumes_from:
    - yarn-nodemanager-data
yarn-nodemanager-data:
  image: rancher/hadoop-base:v0.3.5
  net: none
  volumes:
    - '${cluster}-yarn-nodemanager-config:/etc/hadoop'
    - '/tmp'
  labels:
    io.rancher.container.start_once: true
  command: '/bootstrap-local.sh'
yarn-nodemanager:
  image: rancher/hadoop-base:v0.3.5
  volumes_from:
    - yarn-nodemanager-data
  ports:
    - '8042:8042'
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: yarn-nodemanager-config,yarn-nodemanager-data
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/namenode-primary,io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager,io.rancher.stack_service.name=$${stack_name}/jobhistory-server
    io.rancher.scheduler.affinity:container_label: io.rancher.stack_service.name=$${stack_name}/datanode
  links:
    - 'namenode-primary:namenode'
    - 'yarn-resourcemanager:yarn-rm'
  command: 'su -c "sleep 45 && exec /usr/local/hadoop-2.7.1/bin/yarn nodemanager" yarn'

jobhistory-server-config:
  image: rancher/hadoop-config:v0.3.5
  net: "container:jobhistory-server"
  volumes_from:
    - jobhistory-server-data
jobhistory-server-data:
  image: rancher/hadoop-base:v0.3.5
  net: none
  volumes:
    - '${cluster}-jobhistory-config:/etc/hadoop'
    - '/tmp'
  labels:
    io.rancher.container.start_once: true
  command: '/bootstrap-local.sh'
jobhistory-server:
  image: rancher/hadoop-base:v0.3.5
  volumes_from:
    - jobhistory-server-data
  links:
    - 'namenode-primary:namenode'
    - 'yarn-resourcemanager:yarn-rm'
  ports:
    - '10020:10020'
    - '19888:19888'
  labels:
    io.rancher.sidekicks: jobhistory-server-config,jobhistory-server-data
    io.rancher.container.hostname_override: container_name
    io.rancher.scheduler.affinity:container_label: io.rancher.stack_service.name=$${stack_name}/yarn-resourcemanager,io.rancher.stack_service.name=$${stack_name}/namenode-primary
  command: 'su -c "sleep 45 && /usr/local/hadoop-2.7.1/bin/mapred historyserver" mapred'

yarn-resourcemanager-config:
  image: rancher/hadoop-config:v0.3.5
  net: "container:yarn-resourcemanager"
  volumes_from:
    - yarn-resourcemanager-data
sl-yarn-resourcemanager-config:
  image: rancher/hadoop-followers-config:v0.3.5
  net: "container:yarn-resourcemanager"
  environment:
    NODETYPE: "yarn"
  volumes_from:
    - yarn-resourcemanager-data
yarn-resourcemanager-data:
  image: rancher/hadoop-base:v0.3.5
  net: none
  volumes:
    - '${cluster}-yarn-resourcemanager-config:/etc/hadoop'
    - '/tmp'
  labels:
    io.rancher.container.start_once: true
  command: '/bootstrap-local.sh'
yarn-resourcemanager:
  image: rancher/hadoop-base:v0.3.5
  volumes_from:
    - yarn-resourcemanager-data
  ports:
    - '8088:8088'
  links:
    - 'namenode-primary:namenode'
  labels:
    io.rancher.sidekicks: yarn-resourcemanager-config,sl-yarn-resourcemanager-config,yarn-resourcemanager-data
    io.rancher.container.hostname_override: container_name
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name},io.rancher.stack_service.name=$${stack_name}/datanode,io.rancher.stack_service.name=$${stack_name}/yarn-nodemanager
    io.rancher.scheduler.affinity:container_label: io.rancher.stack_service.name=$${stack_name}/namenode-primary
  command: 'su -c "sleep 30 && /usr/local/hadoop-2.7.1/bin/yarn resourcemanager" yarn'



Janitor



docker-compose.yml

cleanup:

image: meltwater/docker-cleanup:1.8.0
environment:
CLEAN_PERIOD: ${FREQUENCY}
DELAY_TIME: "900"
KEEP_IMAGES: "${KEEP}"
KEEP_CONTAINERS: "${KEEPC}"
KEEP_CONTAINERS_NAMED: "${KEEPCN}"
LOOP: "true"
DEBUG: "0"
labels:
io.rancher.scheduler.global: "true"
io.rancher.scheduler.affinity:host_label_ne: "${EXCLUDE_LABEL}"
net: none
privileged: true
tty: false
stdin_open: false
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker:/var/lib/docker



Jenkins

community-jenkins-swarm.svg.png


docker-compose.yml

jenkins-primary:

image: "jenkins:2.19.4"
ports:
- "${PORT}:8080"
labels:
io.rancher.sidekicks: jenkins-plugins,jenkins-datavolume
io.rancher.container.hostname_override: container_name
volumes_from:
- jenkins-plugins
- jenkins-datavolume
entrypoint: /usr/share/jenkins/rancher/jenkins.sh
jenkins-plugins:
image: rancher/jenkins-plugins:v0.1.1
jenkins-datavolume:
image: "busybox"
volumes:
- ${volume_work}:/var/jenkins_home
labels:
io.rancher.container.start_once: true
entrypoint: ["chown", "-R", "1000:1000", "/var/jenkins_home"]


Jenkins Swarm Plugin Clients

community-jenkins-swarm.svg.png


docker-compose.yml

swarm-clients:

image: "rancher/jenkins-swarm:v0.2.0"
user: "${user}"
labels:
io.rancher.scheduler.affinity:host_label_soft: ci=worker
io.rancher.container.hostname_override: container_name
external_links:
- "${jenkins_service}:jenkins-primary"
environment:
JENKINS_PASS: "${jenkins_pass}"
JENKINS_USER: "${jenkins_user}"
SWARM_EXECUTORS: "${swarm_executors}"
volumes:
- '/var/run/docker.sock:/var/run/docker.sock'
- '/var/jenkins_home/workspace:/var/jenkins_home/workspace'
- '/tmp:/tmp'



Kibana 4

community-kibana.svg.png


docker-compose.yml

kibana-vip:

ports:
- "${public_port}:80"
restart: always
tty: true
image: rancher/load-balancer-service
links:
- nginx-proxy:kibana4
stdin_open: true
nginx-proxy-conf:
image: rancher/nginx-conf:v0.2.0
command: "-backend=rancher --prefix=/2015-07-25"
labels:
io.rancher.container.hostname_override: container_name
nginx-proxy:
image: rancher/nginx:v1.9.4-3
volumes_from:
- nginx-proxy-conf
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: nginx-proxy-conf,kibana4
external_links:
- ${elasticsearch_source}:elasticsearch
kibana4:
restart: always
tty: true
image: kibana:4.4.2
net: "container:nginx-proxy"
stdin_open: true
environment:
ELASTICSEARCH_URL: "http://elasticsearch:9200"
labels:
io.rancher.container.hostname_override: container_name



Let's Encrypt

community-letsencrypt.png


docker-compose.yml

letsencrypt:

image: janeczku/rancher-letsencrypt:v0.3.0
environment:
EULA: ${EULA}
API_VERSION: ${API_VERSION}
CERT_NAME: ${CERT_NAME}
EMAIL: ${EMAIL}
DOMAINS: ${DOMAINS}
PUBLIC_KEY_TYPE: ${PUBLIC_KEY_TYPE}
RENEWAL_TIME: ${RENEWAL_TIME}
PROVIDER: ${PROVIDER}
CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
CLOUDFLARE_KEY: ${CLOUDFLARE_KEY}
DO_ACCESS_TOKEN: ${DO_ACCESS_TOKEN}
AWS_ACCESS_KEY: ${AWS_ACCESS_KEY}
AWS_SECRET_KEY: ${AWS_SECRET_KEY}
DNSIMPLE_EMAIL: ${DNSIMPLE_EMAIL}
DNSIMPLE_KEY: ${DNSIMPLE_KEY}
DYN_CUSTOMER_NAME: ${DYN_CUSTOMER_NAME}
DYN_USER_NAME: ${DYN_USER_NAME}
DYN_PASSWORD: ${DYN_PASSWORD}
volumes:
- ${STORAGE_VOLUME}/etc/letsencrypt/production/certs
labels:
io.rancher.container.create_agent: 'true'
io.rancher.container.agent.role: 'environment'


Liferay Portal

community-liferay.png


  • An open-source portal product (for intranet portals and public-facing sites) used to build web systems.

  • It consists of a framework for building portals, portlets (functional components) developed for that framework, and a development environment for those portlets.

  • Liferay Portal is implemented in Java and runs on many application servers and web containers, including JBoss, Apache Tomcat, and WebSphere.


  • Overview of Liferay Portal



docker-compose.yml

liferay:

ports:
- 8080:8080/tcp
environment:
SETUP_WIZARD_ENABLED: ${SETUP_WIZARD_ENABLED}
DB_KIND: mysql
DB_HOST: liferaydb
DB_USERNAME: ${MYSQL_USER}
DB_PASSWORD: ${MYSQL_PASSWORD}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
log_opt: {}
image: rsippl/liferay:7.0.0-2
links:
- mysql:liferaydb
stdin_open: true
mysql:
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
command:
- --character-set-server=utf8
- --collation-server=utf8_general_ci
log_opt: {}
image: mysql:5.6.30
stdin_open: true


Logmatic


  • A log-forwarding agent for the Logmatic.io log analytics service (judging by the index.js entrypoint, it is a Node.js app)

  • logmatic.io


docker-compose.yml

logmatic-agent:

image: logmatic/logmatic-docker
entrypoint: /usr/src/app/index.js
command: ${logmatic_key} ${opts_args}
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /proc/:/host/proc/:ro
- /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
labels:
io.rancher.scheduler.global: "true"



Logspout

community-logspout.1.png


docker-compose.yml

logspout:

restart: always
environment:
ROUTE_URIS: "${route_uri}"
LOGSPOUT: 'ignore'
SYSLOG_HOSTNAME: "${envname}"
volumes:
- '/var/run/docker.sock:/var/run/docker.sock'
labels:
io.rancher.scheduler.global: 'true'
io.rancher.container.hostname_override: container_name
tty: true
image: bekt/logspout-logstash:latest
stdin_open: true
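Logspout ships every container's log stream to the adapter named in ROUTE_URIS, written as a URI of the form scheme://host:port. A minimal sketch of how such a route URI decomposes (the hostname below is hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical ROUTE_URIS value: adapter scheme, target host and port
route = urlparse("logstash://logstash.example.internal:5000")

print(route.scheme)    # logstash  (which logspout adapter to use)
print(route.hostname)  # logstash.example.internal
print(route.port)      # 5000
```

The scheme selects the adapter (here the logstash adapter baked into the bekt/logspout-logstash image), while host and port name the destination.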


Logstash

community-logstash.png


docker-compose.yml

logstash-indexer-config:

restart: always
image: rancher/logstash-config:v0.2.0
labels:
io.rancher.container.hostname_override: container_name
redis:
restart: always
tty: true
image: redis:3.2.6-alpine
stdin_open: true
labels:
io.rancher.container.hostname_override: container_name
logstash-indexer:
restart: always
tty: true
volumes_from:
- logstash-indexer-config
command:
- logstash
- -f
- /etc/logstash
image: logstash:5.1.1-alpine
links:
- redis:redis
external_links:
- ${elasticsearch_link}:elasticsearch
stdin_open: true
labels:
io.rancher.sidekicks: logstash-indexer-config
io.rancher.container.hostname_override: container_name
logstash-collector-config:
restart: always
image: rancher/logstash-config:v0.2.0
labels:
io.rancher.container.hostname_override: container_name
logstash-collector:
restart: always
tty: true
links:
- redis:redis
ports:
- "5000/udp"
- "6000/tcp"
volumes_from:
- logstash-collector-config
command:
- logstash
- -f
- /etc/logstash
image: logstash:5.1.1-alpine
stdin_open: true
labels:
io.rancher.sidekicks: logstash-collector-config
io.rancher.container.hostname_override: container_name


MariaDB Galera Cluster

community-galera.png


docker-compose.yml

mariadb-galera-server:

image: rancher/galera:10.0.22-rancher2
net: "container:galera"
environment:
TERM: "xterm"
MYSQL_ROOT_PASSWORD: "${mysql_root_password}"
MYSQL_DATABASE: "${mysql_database}"
MYSQL_USER: "${mysql_user}"
MYSQL_PASSWORD: "${mysql_password}"
volumes_from:
- 'mariadb-galera-data'
labels:
io.rancher.container.hostname_override: container_name
entrypoint: bash -x /opt/rancher/start_galera
mariadb-galera-data:
image: rancher/galera:10.0.22-rancher2
net: none
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
volumes:
- /var/lib/mysql
- /etc/mysql/conf.d
- /docker-entrypoint-initdb.d
- /opt/rancher
command: /bin/true
labels:
io.rancher.container.start_once: true
galera-leader-forwarder:
image: rancher/galera-leader-proxy:v0.1.0
net: "container:galera"
volumes_from:
- 'mariadb-galera-data'
galera:
image: rancher/galera-conf:v0.2.0
labels:
io.rancher.sidekicks: mariadb-galera-data,mariadb-galera-server,galera-leader-forwarder
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
volumes_from:
- 'mariadb-galera-data'
stdin_open: true
tty: true
command: /bin/bash

galera-lb:
expose:
- 3306:3307/tcp
tty: true
image: rancher/load-balancer-service
links:
- galera:galera
stdin_open: true



Minecraft

community-minecraft.png


  • With this you can play Minecraft whenever you like


docker-compose.yml


Minecraft:
environment:
- EULA
- VERSION
- DIFFICULTY
- MODE
- LEVEL_TYPE
- GENERATOR_SETTINGS
- PVP
- WHITELIST
- OPS
- MOTD
- SEED
- WORLD
tty: true
image: itzg/minecraft-server
stdin_open: true
labels:
io.rancher.sidekicks: MinecraftData
volumes_from:
- MinecraftData

MinecraftData:
image: busybox
labels:
io.rancher.container.start_once: 'true'
net: none
entrypoint: /bin/true
volumes:
- ${DATA_VOLUME}/data
volume_driver: ${VOLUME_DRIVER}

MinecraftLB:
ports:
- ${PORT}:25565/tcp
tty: true
image: rancher/load-balancer-service
links:
- Minecraft:Minecraft
stdin_open: true



MongoDB

community-MongoDB.png


docker-compose.yml

mongo-cluster:

restart: always
environment:
MONGO_SERVICE_NAME: mongo-cluster
tty: true
entrypoint: /opt/rancher/bin/entrypoint.sh
command:
- --replSet
- "${replset_name}"
image: mongo:3.2
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: mongo-base, mongo-datavolume
volumes_from:
- mongo-datavolume
- mongo-base
mongo-base:
restart: always
net: none
tty: true
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
image: rancher/mongodb-conf:v0.1.0
stdin_open: true
entrypoint: /bin/true
mongo-datavolume:
net: none
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /data/db
entrypoint: /bin/true
image: busybox
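Each mongo-cluster container starts mongod with --replSet ${replset_name}; once the members have joined the set, clients connect with a replica-set URI that lists them. A sketch of building such a URI (the container hostnames and set name are hypothetical):

```python
def replica_set_uri(hosts, replset):
    """Build a MongoDB connection URI that names every replica-set member."""
    return "mongodb://{}/?replicaSet={}".format(",".join(hosts), replset)

# Hypothetical container hostnames and replica-set name
uri = replica_set_uri(
    ["mongo-cluster-1:27017", "mongo-cluster-2:27017", "mongo-cluster-3:27017"],
    "rs0",
)
print(uri)
```

Listing every member lets the driver discover the current primary even when individual containers are rescheduled.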



Mumble

community-mumble.png


docker-compose.yml

mumble:

image: ranchercb/murmur:latest
ports:
- 64738:64738
- 64738:64738/udp



Netdata

community-netdata.png


docker-compose.yml

netdata:

image: titpetric/netdata:latest
labels:
io.rancher.scheduler.global: 'true'
cap_add:
- SYS_PTRACE
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
environment:
NETDATA_PORT: "${NETDATA_PORT}"


Nuxeo

community-nuxeo.svg.png


  • This template deploys a Nuxeo server together with all of its companions (Elasticsearch, Redis, Postgres) so that you can run Nuxeo on your own infrastructure


docker-compose.yml

postgres-datavolume:

labels:
io.rancher.container.start_once: 'true'
io.rancher.container.hostname_override: container_name
image: nuxeo/postgres
entrypoint: chown -R postgres:postgres /var/lib/postgresql/data
volume_driver: ${volumedriver}
volumes:
- /var/lib/postgresql/data

postgres:
image: nuxeo/postgres
environment:
- POSTGRES_USER=nuxeo
- POSTGRES_PASSWORD=nuxeo
labels:
io.rancher.sidekicks: postgres-datavolume
io.rancher.container.hostname_override: container_name
volumes_from:
- postgres-datavolume

# Copied from the default Rancher ES stack: don't modify the service names
elasticsearch-masters:
image: rancher/elasticsearch-conf:v0.4.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: elasticsearch-base-master,elasticsearch-datavolume-masters
volume_driver: ${volumedriver}
elasticsearch-datavolume-masters:
labels:
elasticsearch.datanode.config.version: '0'
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
entrypoint: /bin/true
image: elasticsearch:1.7.3
volume_driver: ${volumedriver}
elasticsearch-base-master:
labels:
elasticsearch.master.config.version: '0'
io.rancher.container.hostname_override: container_name
image: elasticsearch:1.7.3
net: "container:elasticsearch-masters"
volumes_from:
- elasticsearch-masters
- elasticsearch-datavolume-masters
entrypoint:
- /opt/rancher/bin/run.sh

redis:
labels:
io.rancher.container.hostname_override: container_name
tty: true
image: redis:3.0.3
stdin_open: true
volume_driver: ${volumedriver}

nuxeo-datavolume:
labels:
io.rancher.container.start_once: 'true'
io.rancher.container.hostname_override: container_name
image: nuxeo
entrypoint: /bin/true
volume_driver: ${volumedriver}
volumes:
- /var/lib/nuxeo/data
- /var/log/nuxeo

nuxeo:
environment:
NUXEO_CLID: ${clid}
NUXEO_PACKAGES: ${packages}
NUXEO_DB_HOST: postgres
NUXEO_DB_TYPE: postgresql
NUXEO_ES_HOSTS: elasticsearch:9300
NUXEO_DATA: /data/nuxeo/data/
NUXEO_LOG: /data/nuxeo/log/
NUXEO_REDIS_HOST: redis
NUXEO_URL: ${url}
labels:
io.rancher.sidekicks: nuxeo-datavolume
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
image: nuxeo:FT
links:
- redis:redis
- postgres:postgres
- elasticsearch-masters:elasticsearch
volumes_from:
- nuxeo-datavolume

lb:
expose:
- 80:8080
image: rancher/load-balancer-service
links:
- nuxeo:nuxeo



Odoo

community-odoo.png


  • Odoo (formerly OpenERP) is a hugely popular open-source suite of business applications developed and maintained by OpenERP S.A. of Belgium. It is feature-rich, excels in usability, extensibility, and maintainability, and offers outstanding cost performance. Its coverage now extends well beyond a traditional ERP package to CMS, e-commerce, event management, and more.

  • Built an Odoo 8 image with every plugin included (896.4 MB)


docker-compose.yml

odoo:

image: odoo
ports:
- "8069:8069"
links:
- db
db:
image: postgres
environment:
- POSTGRES_USER=odoo
- POSTGRES_PASSWORD=odoo



OpenVPN

community-openvpn-httpbasic.png.png


  • With a single internet-connected PC you can set up a VPN server (whether on an in-house machine or on a rented VPS), with no dependency on any particular ISP. These strengths make it well suited to individual users and small businesses.

  • Setting up a VPN server with docker


docker-compose.yml

openvpn-httpbasic-data:

labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /etc/openvpn/

openvpn-httpbasic-server:
ports:
- 1194:1194/tcp
environment:
AUTH_METHOD: httpbasic
AUTH_HTTPBASIC_URL: ${AUTH_HTTPBASIC_URL}
CERT_COUNTRY: ${CERT_COUNTRY}
CERT_PROVINCE: ${CERT_PROVINCE}
CERT_CITY: ${CERT_CITY}
CERT_ORG: ${CERT_ORG}
CERT_EMAIL: ${CERT_EMAIL}
CERT_OU: ${CERT_OU}
REMOTE_IP: ${REMOTE_IP}
REMOTE_PORT: ${REMOTE_PORT}
VPNPOOL_NETWORK: ${VPNPOOL_NETWORK}
VPNPOOL_CIDR: ${VPNPOOL_CIDR}
OPENVPN_EXTRACONF: ${OPENVPN_EXTRACONF}
labels:
io.rancher.sidekicks: openvpn-httpbasic-data
io.rancher.container.pull_image: always
image: mdns/rancher-openvpn:1.0
privileged: true
volumes_from:
- openvpn-httpbasic-data
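VPNPOOL_NETWORK and VPNPOOL_CIDR together define the address pool the server hands out to connecting clients. A quick sanity check of such a pool, using hypothetical values:

```python
import ipaddress

# Hypothetical VPNPOOL_NETWORK / VPNPOOL_CIDR values (10.43.0.0 and 16)
pool = ipaddress.ip_network("10.43.0.0/16")

print(pool.num_addresses)  # 65536 addresses available to clients
print(pool.netmask)        # 255.255.0.0
```

Checking the pool this way before deploying helps ensure it does not overlap the Docker bridge or Rancher managed network ranges.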



OpenVPN-httpdigest

community-openvpn-httpbasic.png.png


docker-compose.yml

openvpn-httpdigest-data:

labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /etc/openvpn/

openvpn-httpdigest-server:
ports:
- 1194:1194/tcp
environment:
AUTH_METHOD: httpdigest
AUTH_HTTPDIGEST_URL: ${AUTH_HTTPDIGEST_URL}
CERT_COUNTRY: ${CERT_COUNTRY}
CERT_PROVINCE: ${CERT_PROVINCE}
CERT_CITY: ${CERT_CITY}
CERT_ORG: ${CERT_ORG}
CERT_EMAIL: ${CERT_EMAIL}
CERT_OU: ${CERT_OU}
REMOTE_IP: ${REMOTE_IP}
REMOTE_PORT: ${REMOTE_PORT}
VPNPOOL_NETWORK: ${VPNPOOL_NETWORK}
VPNPOOL_CIDR: ${VPNPOOL_CIDR}
OPENVPN_EXTRACONF: ${OPENVPN_EXTRACONF}
labels:
io.rancher.sidekicks: openvpn-httpdigest-data
io.rancher.container.pull_image: always
image: mdns/rancher-openvpn:1.0
privileged: true
volumes_from:
- openvpn-httpdigest-data



OpenVPN-LDAP

community-openvpn-httpbasic.png.png


docker-compose.yml

openvpn-ldap-data:

labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /etc/openvpn/

openvpn-ldap-server:
ports:
- 1194:1194/tcp
environment:
AUTH_METHOD: ldap
AUTH_LDAP_URL: ${AUTH_LDAP_URL}
AUTH_LDAP_BASEDN: ${AUTH_LDAP_BASEDN}
AUTH_LDAP_SEARCH: ${AUTH_LDAP_SEARCH}
AUTH_LDAP_BINDDN: ${AUTH_LDAP_BINDDN}
AUTH_LDAP_BINDPWD: ${AUTH_LDAP_BINDPWD}
CERT_COUNTRY: ${CERT_COUNTRY}
CERT_PROVINCE: ${CERT_PROVINCE}
CERT_CITY: ${CERT_CITY}
CERT_ORG: ${CERT_ORG}
CERT_EMAIL: ${CERT_EMAIL}
CERT_OU: ${CERT_OU}
REMOTE_IP: ${REMOTE_IP}
REMOTE_PORT: ${REMOTE_PORT}
VPNPOOL_NETWORK: ${VPNPOOL_NETWORK}
VPNPOOL_CIDR: ${VPNPOOL_CIDR}
OPENVPN_EXTRACONF: ${OPENVPN_EXTRACONF}
labels:
io.rancher.sidekicks: openvpn-ldap-data
io.rancher.container.pull_image: always
image: mdns/rancher-openvpn:1.0
privileged: true
volumes_from:
- openvpn-ldap-data



Owncloud

community-owncloud.svg.png


docker-compose.yml

owncloud:

image: owncloud
ports:
- "80:80"
links:
- db

db:
image: mariadb
environment:
- MYSQL_ROOT_PASSWORD=password



Percona XtraDB Cluster

community-pxc.1.png


docker-compose.yml

pxc-clustercheck:

image: flowman/percona-xtradb-cluster-clustercheck:v2.0
net: "container:pxc"
labels:
io.rancher.container.hostname_override: container_name
volumes_from:
- 'pxc-data'
pxc-server:
image: flowman/percona-xtradb-cluster:5.6.28-1
net: "container:pxc"
environment:
MYSQL_ROOT_PASSWORD: "${mysql_root_password}"
PXC_SST_PASSWORD: "${pxc_sst_password}"
MYSQL_DATABASE: "${mysql_database}"
MYSQL_USER: "${mysql_user}"
MYSQL_PASSWORD: "${mysql_password}"
labels:
io.rancher.container.hostname_override: container_name
volumes_from:
- 'pxc-data'
entrypoint: bash -x /opt/rancher/start_pxc
pxc-data:
image: flowman/percona-xtradb-cluster:5.6.28-1
net: none
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
volumes:
- /var/lib/mysql
- /etc/mysql/conf.d
- /docker-entrypoint-initdb.d
command: /bin/true
labels:
io.rancher.container.start_once: true
pxc:
image: flowman/percona-xtradb-cluster-confd:v0.2.0
labels:
io.rancher.sidekicks: pxc-clustercheck,pxc-server,pxc-data
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
volumes_from:
- 'pxc-data'



PHP Adminer

community-adminer.png


  • Adminer is a database management tool written in PHP (Apache License or GPL 2).

  • Like phpMyAdmin, it lets you work with MySQL and other databases from the web (and covers all the essentials). I have only used it with MySQL, but officially it claims support for:

  • MySQL, PostgreSQL, SQLite, MS SQL, Oracle, Firebird, SimpleDB, Elasticsearch, MongoDB. Being a single file, it is trivial to deploy.

  • Installing Adminer


docker-compose.yml

adminer:

image: 'clue/adminer:latest'
restart: on-failure


Plone 5.0

community-plone.png


  • Unlike WordPress, Plone installs "Python" + "application server" + "DBMS" + "CMS application" all at once, so getting it up and running is simpler than WordPress, which requires preparing "PHP" + "web server" + "MySQL" separately.

  • Yet Another 仕事のツール


docker-compose.yml

zeoserver:

image: plone:5.0
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.community.plone=true
io.rancher.community.plone: "true"
volumes:
- ${volume_name}:/data
volume_driver: ${volume_driver}
command: ["zeoserver"]

plone:
image: plone:5.0
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.community.plone=true
io.rancher.community.plone: "true"
links:
- zeoserver:zeoserver
environment:
ADDONS: ${addons}
ZEO_ADDRESS: zeoserver:8100

lb:
image: rancher/load-balancer-service
labels:
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.community.plone=true
io.rancher.community.plone: "true"
links:
- plone:plone
ports:
- ${http_port}:8080



PointHQ DNS

community-infra-pointhq.png


docker-compose.yml

pointhq:

image: rancher/external-dns:v0.2.1
command: --provider pointhq
expose:
- 1000
environment:
POINTHQ_TOKEN: ${POINTHQ_TOKEN}
POINTHQ_EMAIL: ${POINTHQ_EMAIL}
ROOT_DOMAIN: ${ROOT_DOMAIN}
TTL: ${TTL}
labels:
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"



PowerDNS

community-infra-powerdns-external-dns.png


docker-compose.yml

powerdns:

image: rancher/external-dns:v0.5.0
command: "-provider=powerdns"
expose:
- 1000
environment:
POWERDNS_API_KEY: ${POWERDNS_API_KEY}
POWERDNS_URL: ${POWERDNS_URL}
ROOT_DOMAIN: ${ROOT_DOMAIN}
TTL: ${TTL}
labels:
io.rancher.container.pull_image: always
io.rancher.container.create_agent: "true"
io.rancher.container.agent.role: "external-dns"


Prometheus

community-Prometheus.svg.png


docker-compose.yml

cadvisor:

labels:
io.rancher.scheduler.global: 'true'
tty: true
image: google/cadvisor:latest
stdin_open: true
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"

node-exporter:
labels:
io.rancher.scheduler.global: 'true'
tty: true
image: prom/node-exporter:latest
stdin_open: true

prom-conf:
tty: true
image: infinityworks/prom-conf:17
volumes:
- /etc/prom-conf/
net: none

prometheus:
tty: true
image: prom/prometheus:v1.4.1
command: -alertmanager.url=http://alertmanager:9093 -config.file=/etc/prom-conf/prometheus.yml -storage.local.path=/prometheus -web.console.libraries=/etc/prometheus/console_libraries -web.console.templates=/etc/prometheus/consoles
ports:
- 9090:9090
labels:
io.rancher.sidekicks: prom-conf
volumes_from:
- prom-conf
links:
- cadvisor:cadvisor
- node-exporter:node-exporter
- prometheus-rancher-exporter:prometheus-rancher-exporter

influxdb:
image: tutum/influxdb:0.10
ports:
- 2003:2003
environment:
- PRE_CREATE_DB=grafana;prometheus;rancher
- GRAPHITE_DB=rancher
- GRAPHITE_BINDING=:2003

graf-db:
tty: true
image: infinityworks/graf-db:10
command: cat
volumes:
- /var/lib/grafana/
net: none

grafana:
tty: true
image: grafana/grafana:4.0.2
ports:
- 3000:3000
labels:
io.rancher.sidekicks: graf-db
volumes_from:
- graf-db
links:
- prometheus:prometheus
- prometheus-rancher-exporter:prometheus-rancher-exporter

prometheus-rancher-exporter:
tty: true
labels:
io.rancher.container.create_agent: true
io.rancher.container.agent.role: environment
image: infinityworks/prometheus-rancher-exporter:v0.22.40
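Prometheus scrapes monotonically increasing counters from cadvisor and node-exporter, and its rate() function turns two scrapes into a per-second rate. The underlying arithmetic, sketched with hypothetical samples:

```python
def per_second_rate(sample_a, sample_b):
    """(timestamp, counter_value) pairs -> per-second increase,
    the arithmetic behind Prometheus's rate() for counters."""
    (t0, v0), (t1, v1) = sample_a, sample_b
    return (v1 - v0) / (t1 - t0)

# Two hypothetical scrapes of a CPU-seconds counter, 15s apart
rate = per_second_rate((100.0, 240.0), (115.0, 246.0))
print(rate)  # 0.4
```

This is why Grafana dashboards backed by this stack graph rates rather than raw counter values.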



Puppet 4.x (standalone)

community-puppet-standalone.png


docker-compose.yml

puppet-lb:

ports:
- ${PUPPET_PORT}:8140/tcp
labels:
io.rancher.loadbalancer.target.puppet: 8140=${PUPPET_PORT}
tty: true
image: rancher/load-balancer-service
links:
- puppet:puppet
stdin_open: true

puppet:
hostname: puppet
domainname: puppet.rancher.internal
labels:
io.rancher.sidekicks: puppet-config-volumes
image: nrvale0/puppetserver-standalone
environment:
- CONTROL_REPO_GIT_URI=${CONTROL_REPO_GIT_URI}
volumes_from:
- puppet-config-volumes

puppet-config-volumes:
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: "true"
volumes:
- /etc/puppetlabs/ssl
- /opt/puppetlabs/r10k/cache
- /etc/puppetlabs/code
entrypoint: /bin/true
image: alpine



px-dev

687474703a2f2f692e696d6775722e636f6d2f6c384a526878672e6a7067.png


  • PX-Developer (PX-Dev) is scale-out storage and data services for containers. PX-Dev itself is deployed as a container alongside your application stack. Running PX-Dev with the stack gives per-container control over storage persistence, capacity management, performance, and availability in a scale-out environment. Deploy the PX-Developer container on a server running Docker Engine and that server becomes a scale-out storage node; running storage converged with compute delivers bare-metal-class performance.

  • https://github.com/portworx/px-dev


docker-compose.yml

portworx:

labels:
io.rancher.container.create_agent: 'true'
io.rancher.scheduler.global: 'true'
io.rancher.container.pull_image: 'always'
image: portworx/px-dev
container_name: px
ipc: host
net: host
privileged: true
environment:
CLUSTER_ID: ${cluster_id}
KVDB: ${kvdb}
volumes:
- /dev:/dev
- /usr/src:/usr/src
- /run/docker/plugins:/run/docker/plugins
- /var/lib/osd:/var/lib/osd:shared
- /etc/pwx:/etc/pwx
- /opt/pwx/bin:/export_bin:shared
- /var/run/docker.sock:/var/run/docker.sock
- /var/cores:/var/cores
command: -c ${cluster_id} -k ${kvdb} -a -z -f


QuasarDB

community-quasardb-community.png


  • quasardb is a software-defined storage technology optimized for real-time analytics: the missing link between file systems and databases.

  • quasardb does not force any schema on its data. Data can be consumed directly by applications such as Microsoft Excel, ActivePivot, and Apache Spark, or through its multi-language APIs.

  • https://www.quasardb.net/-what-is-nosql-


docker-compose.yml

qdb-ui:

ports:
- ${webport}:${webport}/tcp
environment:
DEVICE: ${device}
PEER: qdb1
PORT: '${qdbport}'
WEBPORT: '${webport}'
labels:
io.rancher.container.dns: 'true'
command:
- /start.sh
- httpd
image: makazi/quasardb:2.0.0-rc.8
net: host

qdb1-data:
labels:
io.rancher.container.start_once: 'true'
command:
- /bin/true
image: busybox
volumes:
- /var/db/qdb
- /var/lib/qdb

qdb2-data:
labels:
io.rancher.container.start_once: 'true'
command:
- /bin/true
image: busybox
volumes:
- /var/db/qdb
- /var/lib/qdb

qdb2:
ports:
- ${qdbport}:${qdbport}/tcp
environment:
ID: 2/2
DEVICE: ${device}
PEER: qdb1
PORT: '${qdbport}'
REPLICATION: ${replication}
labels:
io.rancher.sidekicks: qdb2-data
io.rancher.container.dns: 'true'
command:
- /start.sh
image: makazi/quasardb:2.0.0-rc.8
volumes_from:
- qdb2-data
net: host
qdb1:
ports:
- ${qdbport}:${qdbport}/tcp
environment:
ID: 1/2
DEVICE: ${device}
PORT: '${qdbport}'
REPLICATION: ${replication}
labels:
io.rancher.sidekicks: qdb1-data
io.rancher.container.dns: 'true'
command:
- /start.sh
image: makazi/quasardb:2.0.0-rc.8
volumes_from:
- qdb1-data
net: host



RabbitMQ

community-rabbitmq-3.png


docker-compose.yml

rabbitmq:

image: rdaneel/rabbitmq-conf:0.2.0
labels:
io.rancher.container.hostname_override: container_name
io.rancher.sidekicks: rabbitmq-base,rabbitmq-datavolume
volumes_from:
- rabbitmq-datavolume
environment:
- RABBITMQ_NET_TICKTIME=${net_ticktime}
- RABBITMQ_CLUSTER_PARTITION_HANDLING=${cluster_partition_handling}
- CONFD_ARGS=${confd_args}
rabbitmq-datavolume:
labels:
io.rancher.container.hostname_override: container_name
io.rancher.container.start_once: true
volumes:
- /etc/rabbitmq
- /opt/rancher/bin
entrypoint: /bin/true
image: rabbitmq:3.6-management
rabbitmq-base:
labels:
io.rancher.container.hostname_override: container_name
image: rabbitmq:3.6-management
restart: always
volumes_from:
- rabbitmq-datavolume
net: "container:rabbitmq"
entrypoint:
- /opt/rancher/bin/run.sh
environment:
- RABBITMQ_ERLANG_COOKIE=${erlang_cookie}




rancher-security-bench

community-rancher-bench-security.png


docker-compose.yml

rancher-bench-security:

image: germanramos/rancher-bench-security:1.11.0
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
net: host
pid: host
stdin_open: true
tty: true
volumes:
- /var/lib:/var/lib
- /var/run/docker.sock:/var/run/docker.sock
- /usr/lib/systemd:/usr/lib/systemd
- /etc:/etc
- /tmp:/tmp
environment:
- INTERVAL=${INTERVAL}

web-server:
image: germanramos/nginx-php-fpm:v5.6.21
stdin_open: true
tty: true
labels:
traefik.enable: stack
traefik.domain: ${TRAEFIK_DOMAIN}
traefik.port: 80
io.rancher.container.pull_image: always
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.hostname_override: container_name
volumes:
- /tmp/cis:/var/www/html



Registry

community-registry.svg.png


  • A private Docker Hub

  • Accounts can be federated with LDAP


docker-compose.yml

db:

image: mysql:5.7.10
environment:
MYSQL_DATABASE: portus
MYSQL_ROOT_PASSWORD: ${ROOTPASSWORD}
MYSQL_USER: portus
MYSQL_PASSWORD: ${DBPASSWORD}
tty: true
stdin_open: true
volumes:
- ${DIR}/db:/var/lib/mysql
labels:
registry.portus.db: 1
sslproxy:
image: nginx:1.9.9
tty: true
stdin_open: true
links:
- portus:portus
volumes:
- ${DIR}/certs:/etc/nginx/certs:ro
- ${DIR}/proxy:/etc/nginx/conf.d:ro
labels:
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry:
image: registry:2.3.1
environment:
REGISTRY_LOG_LEVEL: warn
REGISTRY_STORAGE_DELETE_ENABLED: true
REGISTRY_AUTH: token
REGISTRY_AUTH_TOKEN_REALM: https://${DOMAIN}:${PPORT}/v2/token
REGISTRY_AUTH_TOKEN_SERVICE: ${DOMAIN}:${RPORT}
REGISTRY_AUTH_TOKEN_ISSUER: ${DOMAIN}
REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certs/registry.crt
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
REGISTRY_HTTP_TLS_KEY: /certs/registry.key
REGISTRY_HTTP_SECRET: httpsecret
REGISTRY_NOTIFICATIONS_ENDPOINTS: >
- name: portus
url: http://portus:3000/v2/webhooks/events
timeout: 500
threshold: 5
backoff: 1
tty: true
stdin_open: true
links:
- portus:portus
volumes:
- ${DIR}/certs:/certs
- ${DIR}/data:/var/lib/registry
lb:
image: rancher/load-balancer-service
tty: true
stdin_open: true
ports:
- ${RPORT}:5000/tcp
- ${PPORT}:443/tcp
labels:
io.rancher.loadbalancer.target.sslproxy: ${PPORT}=443
io.rancher.loadbalancer.target.registry: ${RPORT}=5000
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:not_host_label: lb=0,registry.enabled=false
links:
- registry:registry
- sslproxy:sslproxy
portus:
image: sshipway/portus:2.0.5
environment:
PORTUS_MACHINE_FQDN: ${DOMAIN}
PORTUS_PRODUCTION_HOST: db
PORTUS_PRODUCTION_DATABASE: portus
PORTUS_PRODUCTION_USERNAME: portus
PORTUS_PRODUCTION_PASSWORD: ${DBPASSWORD}
PORTUS_GRAVATAR_ENABLED: true
PORTUS_KEY_PATH: /certs/registry.key
PORTUS_PASSWORD: ${DBPASSWORD}
PORTUS_SECRET_KEY_BASE: ${ROOTPASSWORD}
PORTUS_CHECK_SSL_USAGE_ENABLED: true
PORTUS_SMTP_ENABLED: false
PORTUS_LDAP_ENABLED: ${LDAP}
PORTUS_LDAP_HOSTNAME: ${LDAPHOST}
PORTUS_LDAP_PORT: ${LDAPPORT}
PORTUS_LDAP_METHOD: ${LDAPTLS}
PORTUS_LDAP_BASE: ${LDAPBASE}
PORTUS_LDAP_UID: ${LDAPBINDUID}
PORTUS_LDAP_AUTHENTICATION_ENABLED: ${LDAPBIND}
PORTUS_LDAP_AUTHENTICATION_BIND_DN: ${LDAPBINDDN}
PORTUS_LDAP_AUTHENTICATION_PASSWORD: ${LDAPBINDPASS}
PORTUS_LDAP_GUESS_EMAIL_ENABLED: true
PORTUS_LDAP_GUESS_EMAIL_ATTR: mail
PORTUS_PORT: ${PPORT}
REGISTRY_SSL_ENABLED: true
REGISTRY_HOSTNAME: ${DOMAIN}
REGISTRY_PORT: ${RPORT}
REGISTRY_NAME: Registry
tty: true
stdin_open: true
volumes:
- ${DIR}/certs:/certs
- ${DIR}/proxy:/etc/nginx/conf.d
links:
- db:db
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry.portus.app: 1
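Rancher substitutes the answers to the catalog questions into the ${...} placeholders before starting the stack, so values such as REGISTRY_AUTH_TOKEN_REALM resolve to concrete URLs. A sketch of that substitution using string.Template and hypothetical answers:

```python
from string import Template

# Hypothetical answers to the catalog's questions
answers = {"DOMAIN": "registry.example.com", "PPORT": "443", "RPORT": "5000"}

# The same interpolation Rancher applies to the compose file above
realm = Template("https://${DOMAIN}:${PPORT}/v2/token").substitute(answers)
service = Template("${DOMAIN}:${RPORT}").substitute(answers)

print(realm)    # https://registry.example.com:443/v2/token
print(service)  # registry.example.com:5000
```

The registry redirects unauthenticated pushes and pulls to the realm URL, where Portus issues the auth token, so DOMAIN must be a name both clients and the registry container can resolve.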


Registry Convoy

community-registry.svg.png


  • Lets you run the Registry on convoy-managed volumes


docker-compose.yml

db:

image: mysql:5.7.10
environment:
MYSQL_DATABASE: portus
MYSQL_ROOT_PASSWORD: ${ROOTPASSWORD}
MYSQL_USER: portus
MYSQL_PASSWORD: ${DBPASSWORD}
tty: true
stdin_open: true
volume_driver: ${DRIVER}
volumes:
- ${PFX}-db:/var/lib/mysql
labels:
registry.portus.db: 1
sslproxy:
image: nginx:1.9.9
tty: true
stdin_open: true
links:
- portus:portus
volume_driver: ${DRIVER}
volumes:
- ${PFX}-certs:/etc/nginx/certs:ro
- ${PFX}-proxy:/etc/nginx/conf.d:ro
labels:
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry:
image: registry:2.3.1
environment:
REGISTRY_LOG_LEVEL: warn
REGISTRY_STORAGE_DELETE_ENABLED: true
REGISTRY_AUTH: token
REGISTRY_AUTH_TOKEN_REALM: https://${DOMAIN}:${PPORT}/v2/token
REGISTRY_AUTH_TOKEN_SERVICE: ${DOMAIN}:${RPORT}
REGISTRY_AUTH_TOKEN_ISSUER: ${DOMAIN}
REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /certs/registry.crt
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
REGISTRY_HTTP_TLS_KEY: /certs/registry.key
REGISTRY_HTTP_SECRET: httpsecret
REGISTRY_NOTIFICATIONS_ENDPOINTS: >
- name: portus
url: http://portus:3000/v2/webhooks/events
timeout: 500
threshold: 5
backoff: 1
tty: true
stdin_open: true
links:
- portus:portus
volume_driver: ${DRIVER}
volumes:
- ${PFX}-certs:/certs
- ${PFX}-data:/var/lib/registry
lb:
image: rancher/load-balancer-service
tty: true
stdin_open: true
ports:
- ${RPORT}:5000/tcp
- ${PPORT}:443/tcp
labels:
io.rancher.loadbalancer.target.sslproxy: ${PPORT}=443
io.rancher.loadbalancer.target.registry: ${RPORT}=5000
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:not_host_label: lb=0,registry.enabled=false
links:
- registry:registry
- sslproxy:sslproxy
portus:
image: sshipway/portus:2.0.5
environment:
PORTUS_MACHINE_FQDN: ${DOMAIN}
PORTUS_PRODUCTION_HOST: db
PORTUS_PRODUCTION_DATABASE: portus
PORTUS_PRODUCTION_USERNAME: portus
PORTUS_PRODUCTION_PASSWORD: ${DBPASSWORD}
PORTUS_GRAVATAR_ENABLED: true
PORTUS_KEY_PATH: /certs/registry.key
PORTUS_PASSWORD: ${DBPASSWORD}
PORTUS_SECRET_KEY_BASE: ${ROOTPASSWORD}
PORTUS_CHECK_SSL_USAGE_ENABLED: true
PORTUS_SMTP_ENABLED: false
PORTUS_LDAP_ENABLED: ${LDAP}
PORTUS_LDAP_HOSTNAME: ${LDAPHOST}
PORTUS_LDAP_PORT: ${LDAPPORT}
PORTUS_LDAP_METHOD: ${LDAPTLS}
PORTUS_LDAP_BASE: ${LDAPBASE}
PORTUS_LDAP_UID: cn
PORTUS_LDAP_AUTHENTICATION_ENABLED: ${LDAPBIND}
PORTUS_LDAP_AUTHENTICATION_BIND_DN: ${LDAPBINDDN}
PORTUS_LDAP_AUTHENTICATION_PASSWORD: ${LDAPBINDPASS}
PORTUS_LDAP_GUESS_EMAIL_ENABLED: true
PORTUS_LDAP_GUESS_EMAIL_ATTR: mail
PORTUS_PORT: ${PPORT}
REGISTRY_SSL_ENABLED: true
REGISTRY_HOSTNAME: ${DOMAIN}
REGISTRY_PORT: ${RPORT}
REGISTRY_NAME: Registry
tty: true
stdin_open: true
volume_driver: ${DRIVER}
volumes:
- ${PFX}-certs:/certs
- ${PFX}-proxy:/etc/nginx/conf.d
links:
- db:db
labels:
io.rancher.container.pull_image: always
io.rancher.scheduler.affinity:container_label_soft: registry.portus.db=1
registry.portus.app: 1
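The `REGISTRY_NOTIFICATIONS_ENDPOINTS` setting above makes the registry POST event batches to Portus. For illustration, here is a minimal sketch of parsing such a batch; the Registry v2 notification envelope really does carry a top-level `events` list with `action` and `target` fields, but the helper name and sample payload are assumptions of mine, not part of the catalog entry:

```python
import json

def summarize_registry_events(payload: str) -> list[tuple[str, str]]:
    """Parse a Docker Registry v2 notification envelope and return
    (action, repository) pairs for each event in the batch."""
    envelope = json.loads(payload)
    return [
        (ev.get("action", "?"), ev.get("target", {}).get("repository", "?"))
        for ev in envelope.get("events", [])
    ]

# Abbreviated example of a notification envelope.
sample = json.dumps({
    "events": [
        {"action": "push",
         "target": {"repository": "library/alpine",
                    "digest": "sha256:deadbeef"}}
    ]
})
print(summarize_registry_events(sample))  # [('push', 'library/alpine')]
```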



REX-Ray

community-rexray.jpeg


  • REX-Ray provides a vendor-agnostic storage orchestration engine. Its primary design goal is to provide persistent storage for Docker containers as well as Mesos frameworks and tasks.

  • It can also be used in additional scenarios as a Go package, a CLI tool, and a Linux service.


docker-compose.yml

rexray:

image: wlan0/sdc2
stdin_open: true
tty: true
privileged: true
net: host
environment:
STACK_NAME: ${SCALEIO_STACK_NAME}
SYSTEM_ID: ${SCALEIO_SYSTEM_ID}
MDM_IP: ${SCALEIO_MDM_IP}
volumes:
- /proc:/host/proc
labels:
io.rancher.container.pull_image: always
io.rancher.container.dns: 'true'
io.rancher.scheduler.affinity:host_label: rexray.scaleio=true
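
Once the driver is running, other services can request persistent volumes through it. A minimal sketch of a hypothetical consumer service (the volume name `mysql-data`, the image, and the mount path are assumptions, not part of the catalog entry):

```yaml
# Hypothetical consumer: asks the rexray volume driver for a named
# volume, so the data survives container (and host) replacement.
db:
  image: mysql:5.7
  volume_driver: rexray
  volumes:
    - mysql-data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
```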



RocketChat

community-rocket-chat.svg.png


docker-compose.yml

mongo:

image: mongo
# volumes:
# - ./data/runtime/db:/data/db
# - ./data/dump:/dump
command: mongod --smallfiles --oplogSize 128

rocketchat:
image: rocketchat/rocket.chat:latest
# volumes:
# - ./uploads:/app/uploads
environment:
- PORT=3000
- ROOT_URL=http://yourhost:3000
- MONGO_URL=mongodb://mongo:27017/rocketchat
links:
- mongo:mongo
ports:
- 3000:3000

# hubot, the popular chatbot (add the bot user first and change the password before starting this image)
hubot:
image: rocketchat/hubot-rocketchat
environment:
- ROCKETCHAT_URL=rocketchat:3000
- ROCKETCHAT_ROOM=GENERAL
- ROCKETCHAT_USER=bot
- ROCKETCHAT_PASSWORD=botpassword
- BOT_NAME=bot
# you can add more scripts as you'd like here, they need to be installable by npm
- EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-diagnostics
links:
- rocketchat:rocketchat
# this is used to expose the hubot port for notifications on the host on port 3001, e.g. for hubot-jenkins-notifier
ports:
- 3001:8080



Route53 DNS

library-route53.svg.png


ScaleIO NAS/DAS

community-scaleio.jpeg


docker-compose.yml


tb:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
image: wlan0/tb
labels:
io.rancher.container.pull_image: always
io.rancher.container.hostname_override: container_name

sds:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
image: wlan0/sds
labels:
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.pull_image: always

mdm:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- /dev/shm:/dev/shm
image: wlan0/mdm
labels:
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/primary_mdm
io.rancher.container.pull_image: always

primary-mdm:
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- /dev/shm:/dev/shm
image: wlan0/mdm
command: /usr/sbin/init
entrypoint: /run_mdm_and_configure.sh
labels:
io.rancher.container.hostname_override: container_name
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/mdm
io.rancher.container.pull_image: always



Secrets Bridge

community-secrets-bridge-server.png


  • The Secrets Bridge service is a standardized way of integrating Rancher and Vault so that, at start-up, Docker containers can securely connect to secrets stored in Vault.

  • The Secrets Bridge service consists of a server and agents.


docker-compose.yml

secrets-bridge:

image: rancher/secrets-bridge:v0.2.0
environment:
CATTLE_ACCESS_KEY: ${CATTLE_ACCESS_KEY}
CATTLE_SECRET_KEY: ${CATTLE_SECRET_KEY}
CATTLE_URL: ${CATTLE_URL}
VAULT_TOKEN: ${VAULT_TOKEN}
VAULT_CUBBYPATH: ${VAULT_CUBBYPATH}
command:
- server
- --vault-url
- ${VAULT_URL}
- --rancher-url
- ${CATTLE_URL}
- --rancher-secret
- ${CATTLE_SECRET_KEY}
- --rancher-access
- ${CATTLE_ACCESS_KEY}
secrets-bridge-lb:
ports:
- "${LBPORT}:8181"
image: rancher/load-balancer-service
links:
- secrets-bridge:secrets-bridge


Secrets Bridge Agents

community-secrets-bridge-server.png


  • The Secrets Bridge service is a standardized way of integrating Rancher and Vault so that, at start-up, Docker containers can securely connect to secrets stored in Vault.

  • The Secrets Bridge service consists of a server and agents.


docker-compose.yml

secrets-bridge:

image: rancher/secrets-bridge:v0.2.0
command: agent --bridge-url ${BRIDGE_URL}
volumes:
- /var/run/docker.sock:/var/run/docker.sock
privileged: true
labels:
io.rancher.container.create_agent: 'true'
io.rancher.container.agent.role: agent
io.rancher.scheduler.global: 'true'


Sematext Docker Agent

community-sematext.png


  • https://github.com/sematext/sematext-agent-docker

  • Sematext Agent for Docker collects metrics, events, and logs from the Docker API for SPM Docker Monitoring and Logsene / Hosted ELK Log Management. It runs on CoreOS, RancherOS, Docker Swarm, Kubernetes, Apache Mesos, Hashicorp Nomad, Amazon ECS, and more; see the installation instructions.


docker-compose.yml

sematext-agent:  

image: 'sematext/sematext-agent-docker:${image_version}'
environment:
- LOGSENE_TOKEN=${logsene_token}
- SPM_TOKEN=${spm_token}
- GEOIP_ENABLED=${geoip_enabled}
- HTTPS_PROXY=${https_proxy}
- HTTP_PROXY=${http_proxy}
- MATCH_BY_IMAGE=${match_by_image}
- MATCH_BY_NAME=${match_by_name}
- SKIP_BY_IMAGE=${skip_by_image}
- SKIP_BY_NAME=${skip_by_name}
- LOGAGENT_PATTERNS=${logagent_patterns}
- KUBERNETES=${kubernetes}
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
io.rancher.scheduler.global: 'true'




Sentry

community-sentry.png


docker-compose.yml

sentry-postgres:

environment:
POSTGRES_USER: sentry
POSTGRES_PASSWORD: secret
PGDATA: /data/postgres/data
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
log_opt: {}
image: postgres:9.5.3
stdin_open: true
sentry-cron:
environment:
SENTRY_EMAIL_HOST: ${sentry_email_host}
SENTRY_EMAIL_PASSWORD: ${sentry_email_password}
SENTRY_EMAIL_PORT: '${sentry_email_port}'
SENTRY_EMAIL_USER: ${sentry_email_user}
SENTRY_SECRET_KEY: ${sentry_secret_key}
SENTRY_SERVER_EMAIL: ${sentry_server_email}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
command:
- run
- cron
log_opt: {}
image: sentry:8.5.0
links:
- sentry-postgres:postgres
- sentry-redis:redis
stdin_open: true
sentry-redis:
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
log_opt: {}
image: redis:3.2.0-alpine
stdin_open: true
sentry:
ports:
- ${sentry_public_port}:9000/tcp
environment:
SENTRY_EMAIL_HOST: ${sentry_email_host}
SENTRY_EMAIL_PASSWORD: ${sentry_email_password}
SENTRY_EMAIL_PORT: '${sentry_email_port}'
SENTRY_EMAIL_USER: ${sentry_email_user}
SENTRY_SECRET_KEY: ${sentry_secret_key}
SENTRY_SERVER_EMAIL: ${sentry_server_email}
log_driver: ''
labels:
io.rancher.container.pull_image: always
tty: true
command:
- /bin/bash
- -c
- sentry upgrade --noinput && sentry createuser --email ${sentry_initial_user_email} --password ${sentry_initial_user_password} --superuser && /entrypoint.sh run web || /entrypoint.sh run web
log_opt: {}
image: sentry:8.5.0
links:
- sentry-postgres:postgres
- sentry-redis:redis
stdin_open: true
sentry-worker:
environment:
SENTRY_EMAIL_HOST: ${sentry_email_host}
SENTRY_EMAIL_PASSWORD: ${sentry_email_password}
SENTRY_EMAIL_PORT: '${sentry_email_port}'
SENTRY_EMAIL_USER: ${sentry_email_user}
SENTRY_SECRET_KEY: ${sentry_secret_key}
SENTRY_SERVER_EMAIL: ${sentry_server_email}
log_driver: ''
labels:
io.rancher.scheduler.global: 'true'
io.rancher.container.pull_image: always
tty: true
command:
- run
- worker
log_opt: {}
image: sentry:8.5.0
links:
- sentry-postgres:postgres
- sentry-redis:redis
stdin_open: true


SonarQube

community-sonarqube.png


docker-compose.yml

sonarqube-data:

image: busybox
net: none
labels:
io.rancher.container.start_once: 'true'
volumes:
- /opt/sonarqube/extensions/plugins

sonarqube:
image: sonarqube
ports:
- ${http_port}:9000
links:
- postgres
environment:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
SONARQUBE_JDBC_USERNAME: ${postgres_user}
SONARQUBE_JDBC_PASSWORD: ${postgres_password}
SONARQUBE_JDBC_URL: jdbc:postgresql://postgres/sonar
labels:
io.rancher.sidekicks: sonarqube-data
volumes_from:
- sonarqube-data

postgres-data:
image: busybox
net: none
labels:
io.rancher.container.start_once: 'true'
volumes:
- ${postgres_data}

postgres:
image: postgres:latest
ports:
- ${postgres_port}:5432
environment:
PGDATA: ${postgres_data}
POSTGRES_DB: ${postgres_db}
POSTGRES_USER: ${postgres_user}
POSTGRES_PASSWORD: ${postgres_password}
tty: true
stdin_open: true
labels:
io.rancher.sidekicks: postgres-data
volumes_from:
- postgres-data



Sysdig

community-sysdig.png


  • Roughly speaking: strace + tcpdump + lsof + htop + iftop + Lua = sysdig. It can also render a graphical curses-based UI.

  • See also: Trying out Docker logging drivers


docker-compose.yml

sysdig:

container_name: sysdig
privileged: true
stdin_open: true
tty: true
image: sysdig/sysdig:${VERSION}
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- /dev:/host/dev
- /proc:/host/proc:ro
- /boot:/host/boot:ro
- /lib/modules:/host/lib/modules:ro
- /usr:/host/usr:ro
labels:
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:host_label_ne: ${HOST_EXCLUDE_LABEL}


Sysdig Cloud

community-sysdig-cloud.svg.png


  • The hosted (SaaS) version of Sysdig


docker-compose.yml

sysdig-agent:

container_name: sysdig-agent
privileged: true
image: sysdig/agent:${VERSION}
net: "host"
pid: "host"
environment:
ACCESS_KEY: ${SDC_ACCESS_KEY}
TAGS: "${SDC_TAGS}"
ADDITIONAL_CONF: "${SDC_ADDITIONAL_CONF}"
log_opt:
max-size: ${LOG_SIZE}
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock
- /dev:/host/dev
- /proc:/host/proc:ro
- /boot:/host/boot:ro
- /lib/modules:/host/lib/modules:ro
- /usr:/host/usr:ro
labels:
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:host_label_ne: ${HOST_EXCLUDE_LABEL}


Taiga

community-taiga.png


docker-compose.yml

postgres:

image: postgres
environment:
- POSTGRES_DB=taiga
- POSTGRES_USER=taiga
- POSTGRES_PASSWORD=password
volumes:
- ${DATABASE}:/var/lib/postgresql/data

rabbit:
image: rabbitmq:3
hostname: rabbit

redis:
image: redis:3

celery:
image: celery
links:
- rabbit

events:
image: kartoffeltoby/taiga-events:latest
links:
- rabbit

taiga:
image: kartoffeltoby/taiga:latest
restart: always
links:
- postgres
- events
- rabbit
- redis
environment:
- TAIGA_HOSTNAME=${DOMAIN}
- TAIGA_DB_HOST=postgres
- TAIGA_DB_NAME=taiga
- TAIGA_DB_USER=taiga
- TAIGA_DB_PASSWORD=password
- HTTPS_SELF_DOMAIN=${DOMAIN}
- TAIGA_SSL=True
- TAIGA_SLEEP=10
volumes:
- ${DATA}:/usr/src/taiga-back/media



TeamCity

community-teamcity.png


docker-compose.yml

teamcity-data:

image: busybox
tty: true
volumes:
- /var/lib/teamcity

teamcity-server:
image: sjoerdmulder/teamcity:latest
ports:
- ${http_port}:8111
links:
- postgres:${postgres_container}
environment:
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
labels:
io.rancher.sidekicks: teamcity-data
volumes_from:
- teamcity-data

postgres-data:
image: busybox
tty: true
volumes:
- ${postgres_data}

postgres:
image: postgres:latest
ports:
- ${postgres_port}:5432
environment:
PGDATA: ${postgres_data}
POSTGRES_DB: ${postgres_db}
POSTGRES_USER: ${postgres_user}
POSTGRES_PASSWORD: ${postgres_password}
tty: true
stdin_open: true
labels:
io.rancher.sidekicks: postgres-data
volumes_from:
- postgres-data

teamcity-agent:
image: sjoerdmulder/teamcity-agent:latest
links:
- teamcity-server:teamcity-server
environment:
TEAMCITY_SERVER: http://teamcity-server:8111



Traefik

community-traefik.png


docker-compose.yml

traefik:

ports:
- ${admin_port}:8000/tcp
- ${http_port}:${http_port}/tcp
- ${https_port}:${https_port}/tcp
log_driver: ''
labels:
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.sidekicks: traefik-conf
io.rancher.container.hostname_override: container_name
tty: true
log_opt: {}
image: rawmind/alpine-traefik:1.1.2-1
environment:
- CONF_INTERVAL=${refresh_interval}
- TRAEFIK_HTTP_PORT=${http_port}
- TRAEFIK_HTTPS_PORT=${https_port}
- TRAEFIK_HTTPS_ENABLE=${https_enable}
- TRAEFIK_ACME_ENABLE=${acme_enable}
- TRAEFIK_ACME_EMAIL=${acme_email}
- TRAEFIK_ACME_ONDEMAND=${acme_ondemand}
- TRAEFIK_ACME_ONHOSTRULE=${acme_onhostrule}
volumes_from:
- traefik-conf
traefik-conf:
log_driver: ''
labels:
io.rancher.scheduler.global: 'true'
io.rancher.scheduler.affinity:host_label: ${host_label}
io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
io.rancher.container.start_once: 'true'
tty: true
log_opt: {}
image: rawmind/rancher-traefik:0.3.4-18
net: none
volumes:
- /opt/tools



Turtl

community-turtl.png


  • A private place to keep your notes, research, passwords, bookmarks, dream journal, photos, documents, and more safe. Turtl's simple tagging and filtering make it ideal for organizing research and personal or professional projects.

  • Think of Turtl as Evernote with best-in-class privacy.

  • https://turtlapp.com/


docker-compose.yml

turtl-api-data:

labels:
io.rancher.container.start_once: 'true'
entrypoint:
- /bin/true
image: busybox
volumes:
- /opt/api/uploads
- /var/lib/rethinkdb/instance1

turtl-api:
ports:
- 8181:8181/tcp
environment:
DISPLAY_ERRORS: ${DISPLAY_ERRORS}
FQDN: ${FQDN}
SITE_URL: ${SITE_URL}
LOCAL_UPLOAD_URL: ${LOCAL_UPLOAD_URL}
LOCAL_UPLOAD_PATH: ${LOCAL_UPLOAD_PATH}
AWS_S3_TOKEN: ${AWS_S3_TOKEN}
ADMIN_EMAIL: ${ADMIN_EMAIL}
EMAIL_FROM: ${EMAIL_FROM}
SMTP_USER: ${SMTP_USER}
SMTP_PASS: ${SMTP_PASS}
DEFAULT_STORAGE_LIMIT: ${DEFAULT_STORAGE_LIMIT}
STORAGE_INVITE_CREDIT: ${STORAGE_INVITE_CREDIT}
image: webofmars/turtl-docker:latest
stdin_open: true
tty: true
labels:
io.rancher.sidekicks: turtl-api-data
volumes_from:
- turtl-api-data



Weave Scope

community-weavescope.png


  • Weave Scope lets you see, in the browser, which containers and application processes are running on your hosts, and maps their relationships in real time.

  • See also: Visualizing your container topology in real time with Weave Scope


docker-compose.yml

weavescope-probe:

image: weaveworks/scope:1.0.0
privileged: true
net: host
pid: host
labels:
io.rancher.scheduler.global: 'true'
io.rancher.container.dns: 'true'
links:
- weavescope-app
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
tty: true
command:
- "--probe.docker"
- "true"
- "--no-app"
- "weavescope-app"
weavescope-app:
image: weaveworks/scope:1.0.0
ports:
- "4040:4040"
command:
- "--no-probe"



Wekan

community-wekan.svg.png


docker-compose.yml

wekandb:

image: mongo
# volumes:
# - ./data/runtime/db:/data/db
# - ./data/dump:/dump
command: mongod --smallfiles --oplogSize 128
ports:
- 27017

wekan:
image: mquandalle/wekan
links:
- wekandb
environment:
- MONGO_URL=mongodb://wekandb/wekan
- ROOT_URL=http://localhost:80
ports:
- 80:80



Wordpress

community-wordpress.png


  • The classic open-source blogging software


docker-compose.yml

wordpress:

image: wordpress
links:
- db:mysql
ports:
- ${public_port}:80

db:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: example
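
As written, both containers are ephemeral: removing them loses uploads and the database. A sketch of adding persistence (the host paths are assumptions):

```yaml
wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - ${public_port}:80
  volumes:
    - /opt/wordpress/html:/var/www/html    # themes, plugins, uploads

db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
  volumes:
    - /opt/wordpress/mysql:/var/lib/mysql  # database files
```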



XPilots

community-xpilot.1.png


  • XPilot, the networked multiplayer shooter game


docker-compose.yml

server:

environment:
PASSWORD: ${PASSWORD}
log_driver: ''
command:
- -server
log_opt: {}
tty: false
stdin_open: false
image: sshipway/xpilot:latest
labels:
xpilot: server
client:
environment:
DISPLAY: ${DISPLAY}
NAME: ${NAME}
SERVER: xpilot
log_driver: ''
command:
- xpilot
log_opt: {}
image: sshipway/xpilot:latest
links:
- server:xpilot
tty: false
stdin_open: false
labels:
io.rancher.scheduler.affinity:container_label_soft: xpilot=server
io.rancher.container.start_once: 'true'



Summary


  • This turned into quite a long article.

  • If you spot anything to fix, or there are samples you would like added, please let me know.

  • Next, I plan to publish all of my k8s samples as well.