Building Dify on a Machine Inside My Own Proxy-Fronted Network (the Proxy Wall Is Thick)


This article describes how to set up Dify on a machine inside my own network, which communicates with the outside world through a proxy.
There are still open issues, but I'm writing up the current state of things.

At the time, I couldn't find any similar articles, so I'm posting this in the hope that it helps someone. If you already know a better approach, I'd be grateful to hear about it.

Prerequisites

Please note the following prerequisites.

  • Environment
    • Dify version: 1.0.0
      • Note: the configuration is changed so that my network's own proxy is used instead of going through ssrf_proxy
        • Because ssrf_proxy is not used, there is a security risk.
    • Machine
      • CPU (vCPU): 16 (count from /proc/cpuinfo)
      • Memory: 32 GB
      • IP address: 10.10.10.10 (placeholder)
    • OS
      • Ubuntu 22.04.1
    • Middleware
      • docker: 28.0.1
        • Using rootless docker
          • The proxy settings go in the following file (a sketch of its contents follows this list)
            • ~/.config/systemd/user/docker.service.d/http-proxy.conf
      • docker compose: v2.33.1
      • firewall: only the ssh port (22) is open
    • Network
      • External connections go through a proxy
  • This is what I tried around February–March 2025.
  • The content of this post is my personal opinion, not the official view of my organization.
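
For reference, here is a minimal sketch of what that http-proxy.conf can contain for rootless docker, using the same placeholder proxy address as the rest of this article:

$ mkdir -p ~/.config/systemd/user/docker.service.d
$ cat <<'EOF' > ~/.config/systemd/user/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://myproxy.co.jp:8888"
Environment="HTTPS_PROXY=http://myproxy.co.jp:8888"
Environment="NO_PROXY=localhost,127.0.0.1,10.10.10.10"
EOF
$ systemctl --user daemon-reload
$ systemctl --user restart docker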

Why did I want to do this?

Because I wanted an environment where I can handle confidential information easily using the no-code tool Dify, and putting confidential information in the cloud (SaaS or IaaS) carries risk.
Fortunately, my network already contains a generative AI, so as long as I stay within that in-network AI, I can work with confidential information without it ever leaving my own network.

Setup steps

I built this by asking a generative AI, consulting the official documentation, and Googling the errors I ran into.

$ export HTTP_PROXY=http://myproxy.co.jp:8888
$ export HTTPS_PROXY=http://myproxy.co.jp:8888
$ git clone https://github.com/langgenius/dify.git
$ cd dify
$ git checkout 1.0.0
$ cd docker
$ cp .env.example .env
$ vi .env
…(snip)
# Change the passwords as needed.
…(snip)
UPLOAD_FILE_SIZE_LIMIT=100 # Raise the max file size for knowledge uploads to 100 MB
…(snip)
CODE_EXECUTION_READ_TIMEOUT=600 # Extend the code-block timeout (seconds)
…(snip)
SANDBOX_WORKER_TIMEOUT=600 # Extend the code-block timeout (seconds)
…(snip)
NGINX_CLIENT_MAX_BODY_SIZE=100M # Raise the max file size for knowledge uploads to 100 MB
…(snip)
EXPOSE_NGINX_PORT=48080 # Move the Dify web GUI HTTP port off port 80
EXPOSE_NGINX_SSL_PORT=48443 # Move the Dify web GUI HTTPS port off port 443
$ vi docker-compose.yaml
# Make the following changes (the full file is shown at the end of this article):
#   Set the proxy variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) for every service
#   Change the configuration so that ssrf_proxy is not used
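
For reference, the same .env edits can also be applied non-interactively; a minimal sketch using sed, run from the docker directory with the values shown above:

$ sed -i \
    -e 's/^UPLOAD_FILE_SIZE_LIMIT=.*/UPLOAD_FILE_SIZE_LIMIT=100/' \
    -e 's/^CODE_EXECUTION_READ_TIMEOUT=.*/CODE_EXECUTION_READ_TIMEOUT=600/' \
    -e 's/^SANDBOX_WORKER_TIMEOUT=.*/SANDBOX_WORKER_TIMEOUT=600/' \
    -e 's/^NGINX_CLIENT_MAX_BODY_SIZE=.*/NGINX_CLIENT_MAX_BODY_SIZE=100M/' \
    -e 's/^EXPOSE_NGINX_PORT=.*/EXPOSE_NGINX_PORT=48080/' \
    -e 's/^EXPOSE_NGINX_SSL_PORT=.*/EXPOSE_NGINX_SSL_PORT=48443/' \
    .env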

Excerpt from the docker-compose.yaml file

# The "10.10.10.10" in NO_PROXY is the IP address of the machine Dify runs on.
…(snip)
x-shared-env: &shared-api-worker-env
  …(snip)
  HTTP_PROXY: http://myproxy.co.jp:8888
  HTTPS_PROXY: http://myproxy.co.jp:8888
  NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
…(snip)
# Add HTTP_PROXY, HTTPS_PROXY, and NO_PROXY to the environment of every service.
# With ssrf_proxy in the path, requests from workflow code blocks fail, so it is not used here.
#   The sandbox service in particular needs several changes.
services:
  # API service
  api:
    …(snip)
    environment:
      …(snip)
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,sandbox,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    …(snip)
    networks:
      #- ssrf_proxy_network
      - default
…(snip)
  sandbox:
    …(snip)
    environment:
      …(snip)
      #HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128}
      #HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    …(snip)
    networks:
      #- ssrf_proxy_network
      - default
…(snip)
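
Before starting anything, it is worth validating the edited files; a quick sanity check that renders the merged Compose configuration and lists the proxy variables actually in effect:

$ docker compose config > /dev/null && echo "YAML OK"
$ docker compose config | grep -E 'HTTP_PROXY|HTTPS_PROXY|NO_PROXY' | sort -u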

Start Dify with the docker compose command.

$ docker compose up -d
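
To confirm that everything came up, checking container status and tailing the api logs is a reasonable first step (standard Compose commands, nothing Dify-specific):

$ docker compose ps
$ docker compose logs -f api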

I log in to the Dify machine from a Windows machine via TeraTerm, so I set up port forwarding to reach the web GUI.
(Match the port number to the EXPOSE_NGINX_PORT setting.)

Port-forwarding settings
 In TeraTerm, go to Setup > SSH Forwarding and add the following:
  Local port (L): 80
  Listen (I):
  Remote host (M): 127.0.0.1
  Port (P): 48080
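
For reference, the equivalent forward with plain OpenSSH would look like this (user and host are placeholders; binding local port 80 may require elevated privileges, in which case any free local port works):

$ ssh -L 80:127.0.0.1:48080 user@10.10.10.10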

Open Dify in a browser, complete the initial setup (administrator account), and start using it.
 http://127.0.0.1

Issues

  • With ssrf_proxy enabled in the default configuration, communication from workflow code blocks to the outside fails
    • Still investigating; as a workaround, the problem can be avoided by routing the sandbox around ssrf_proxy (a repro sketch follows this list)
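
A rough sketch of how one might reproduce the symptom in a default (unmodified) deployment, assuming curl is available inside the api container (the first request goes through ssrf_proxy, the second through the site proxy):

$ docker compose exec api curl -sS -x http://ssrf_proxy:3128 https://example.com -o /dev/null -w '%{http_code}\n'
$ docker compose exec api curl -sS -x http://myproxy.co.jp:8888 https://example.com -o /dev/null -w '%{http_code}\n'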

For reference: security implications of not using ssrf_proxy

When I asked a generative AI about this, it gave the following answer. Included here for reference.

The short version: if your network is genuinely configured so that it is "completely cut off from the outside, and all external traffic must go through the proxy", then running Dify without the ssrf_proxy component may not cause major problems. That said, the following points need to be considered.

  1. Consistency with the SSRF protection Dify assumes

    • Dify ships the ssrf_proxy component to prevent SSRF (Server-Side Request Forgery) attacks, in which user input causes the server to unintentionally reach into the internal network.
    • If the fact that "the outside world and my machine (internal network) can only communicate through the proxy" effectively acts as that barrier, it can be considered to fulfill the role of ssrf_proxy.
    • However, the application may still, for whatever reason, issue requests to local IPs (typically 127.0.0.1 or 10.x), and if the proxy does not block those, SSRF risk remains. How the proxy handles internally-addressed requests is the key.
  2. Security risks from proxy misconfiguration

    • When the proxy alone holds the network boundary, a misconfigured proxy or a gap in the exception list (no_proxy and the like) can leave a path that allows unintended direct access.
    • If the proxy itself has settings or vulnerabilities that allow unwanted access from outside, the internal network could be reached through it, so the proxy's security configuration must be maintained carefully.
  3. Whether every Dify component honors the proxy settings

    • You need to confirm that every component consults the proxy settings when Dify calls external APIs (language-model APIs and other services). With Docker Compose or Kubernetes deployments, if the container environment variables (HTTP_PROXY / HTTPS_PROXY / NO_PROXY and so on) are not set correctly, components may try to connect directly and fail, or have their traffic blocked.
    • If HTTPS inspection is involved, additional certificate-trust configuration may also be required.
  4. Ultimate responsibility for, and verification of, SSRF protection

    • Removing ssrf_proxy means SSRF protection depends entirely on the network design and the proxy. In that case it is important to test and audit whether unexpected requests can actually occur, and whether, if they do, they can reach internal resources.
    • Checking Dify's application logs and the firewall/proxy logs lets you continuously verify that no unintended access is taking place.

In summary: if "no communication with the outside except through the proxy" is strictly enforced, and the proxy and network equipment are configured soundly, then operating without ssrf_proxy is unlikely to cause major problems. However, risks such as SSRF against internal resources or unintended direct access due to proxy misconfiguration can still remain, so it is essential to rigorously verify the network and proxy design and configuration.
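
One concrete way to act on point 1 above is to check, from the Dify host, whether the site proxy actually refuses internally-addressed requests (192.168.0.1 is just a stand-in for an internal address; a denial such as HTTP 403 is the desired outcome):

$ curl -x http://myproxy.co.jp:8888 -sS -o /dev/null -w '%{http_code}\n' http://192.168.0.1/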

Sites I referred to

I consulted the following sites. Many thanks to everyone for sharing their knowledge.

  • Fixing the "Expecting value: line 1 column 1 (char 0)" error when logging in to the web GUI
  • Fixing errors when installing plugins over the network
  • When running a workflow via "Run App" takes a long time (over 60 seconds), no result is shown and the following message appears: "Results are not displayed due to timeout. Please refer to the logs to gather complete results."

Miscellaneous

  • Code blocks errored out after 15 seconds, and even with SANDBOX_WORKER_TIMEOUT set to 60 or more, they still failed after about 60 seconds.
    • Searching the .env file for the keyword 60 turned up a likely-looking setting, CODE_EXECUTION_READ_TIMEOUT=60. I changed it to 600 (10 minutes), restarted Dify, ran a code block for 120 seconds (2 minutes), and it completed without error (the commands are sketched after this list).
      • Neither a generative AI nor Google searches turned up a fix, but this approach worked in the end. (All's well that ends well.)
  • I initially tried Dify version 0.15.3 as well; the difference from version 1.0.0 came down to whether plugin_daemon needs to be added to NO_PROXY in docker-compose.yaml.
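
For the record, the search and the fix boil down to the following (run in the docker directory; Dify needs a restart for the change to take effect):

$ grep -n '=60$' .env   # surfaces CODE_EXECUTION_READ_TIMEOUT=60 among the candidates
$ sed -i 's/^CODE_EXECUTION_READ_TIMEOUT=.*/CODE_EXECUTION_READ_TIMEOUT=600/' .env
$ docker compose down && docker compose up -d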

Closing

I managed to get Dify built and running on a machine inside a network that puts a proxy between it and the outside world.
I have not verified everything exhaustively yet, but I plan to keep using it and fix any issues as they surface.

Also, an acquaintance told me that rather than editing docker-compose.yaml directly, it is better to create a docker-compose.override.yaml file and put the changes there (it makes version upgrades easier, among other things), so I would like to try that approach next; a rough sketch follows.
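
As a starting point, here is a rough, untested sketch of that override approach for the sandbox service. Note that Compose merges service definitions, so adding and overriding values works well, but removing entries (for example, dropping the ssrf_proxy_network attachment) may still require touching the base file:

$ cat <<'EOF' > docker-compose.override.yaml
services:
  sandbox:
    environment:
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10
EOF
$ docker compose config > /dev/null && echo "override merges cleanly"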

Reference configuration file

Full contents of the docker-compose.yaml file:
# ==================================================================
# WARNING: This file is auto-generated by generate_docker_compose
# Do not modify this file directly. Instead, update the .env.example
# or docker-compose-template.yaml and regenerate this file.
# ==================================================================

x-shared-env: &shared-api-worker-env
  CONSOLE_API_URL: ${CONSOLE_API_URL:-}
  CONSOLE_WEB_URL: ${CONSOLE_WEB_URL:-}
  SERVICE_API_URL: ${SERVICE_API_URL:-}
  APP_API_URL: ${APP_API_URL:-}
  APP_WEB_URL: ${APP_WEB_URL:-}
  FILES_URL: ${FILES_URL:-}
  LOG_LEVEL: ${LOG_LEVEL:-INFO}
  LOG_FILE: ${LOG_FILE:-/app/logs/server.log}
  LOG_FILE_MAX_SIZE: ${LOG_FILE_MAX_SIZE:-20}
  LOG_FILE_BACKUP_COUNT: ${LOG_FILE_BACKUP_COUNT:-5}
  LOG_DATEFORMAT: ${LOG_DATEFORMAT:-%Y-%m-%d %H:%M:%S}
  LOG_TZ: ${LOG_TZ:-UTC}
  DEBUG: ${DEBUG:-false}
  FLASK_DEBUG: ${FLASK_DEBUG:-false}
  SECRET_KEY: ${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
  INIT_PASSWORD: ${INIT_PASSWORD:-}
  DEPLOY_ENV: ${DEPLOY_ENV:-PRODUCTION}
  CHECK_UPDATE_URL: ${CHECK_UPDATE_URL:-https://updates.dify.ai}
  OPENAI_API_BASE: ${OPENAI_API_BASE:-https://api.openai.com/v1}
  MIGRATION_ENABLED: ${MIGRATION_ENABLED:-true}
  FILES_ACCESS_TIMEOUT: ${FILES_ACCESS_TIMEOUT:-300}
  ACCESS_TOKEN_EXPIRE_MINUTES: ${ACCESS_TOKEN_EXPIRE_MINUTES:-60}
  REFRESH_TOKEN_EXPIRE_DAYS: ${REFRESH_TOKEN_EXPIRE_DAYS:-30}
  APP_MAX_ACTIVE_REQUESTS: ${APP_MAX_ACTIVE_REQUESTS:-0}
  APP_MAX_EXECUTION_TIME: ${APP_MAX_EXECUTION_TIME:-1200}
  DIFY_BIND_ADDRESS: ${DIFY_BIND_ADDRESS:-0.0.0.0}
  DIFY_PORT: ${DIFY_PORT:-5001}
  SERVER_WORKER_AMOUNT: ${SERVER_WORKER_AMOUNT:-1}
  SERVER_WORKER_CLASS: ${SERVER_WORKER_CLASS:-gevent}
  SERVER_WORKER_CONNECTIONS: ${SERVER_WORKER_CONNECTIONS:-10}
  CELERY_WORKER_CLASS: ${CELERY_WORKER_CLASS:-}
  GUNICORN_TIMEOUT: ${GUNICORN_TIMEOUT:-360}
  CELERY_WORKER_AMOUNT: ${CELERY_WORKER_AMOUNT:-}
  CELERY_AUTO_SCALE: ${CELERY_AUTO_SCALE:-false}
  CELERY_MAX_WORKERS: ${CELERY_MAX_WORKERS:-}
  CELERY_MIN_WORKERS: ${CELERY_MIN_WORKERS:-}
  API_TOOL_DEFAULT_CONNECT_TIMEOUT: ${API_TOOL_DEFAULT_CONNECT_TIMEOUT:-10}
  API_TOOL_DEFAULT_READ_TIMEOUT: ${API_TOOL_DEFAULT_READ_TIMEOUT:-60}
  DB_USERNAME: ${DB_USERNAME:-postgres}
  DB_PASSWORD: ${DB_PASSWORD:-difyai123456}
  DB_HOST: ${DB_HOST:-db}
  DB_PORT: ${DB_PORT:-5432}
  DB_DATABASE: ${DB_DATABASE:-dify}
  SQLALCHEMY_POOL_SIZE: ${SQLALCHEMY_POOL_SIZE:-30}
  SQLALCHEMY_POOL_RECYCLE: ${SQLALCHEMY_POOL_RECYCLE:-3600}
  SQLALCHEMY_ECHO: ${SQLALCHEMY_ECHO:-false}
  POSTGRES_MAX_CONNECTIONS: ${POSTGRES_MAX_CONNECTIONS:-100}
  POSTGRES_SHARED_BUFFERS: ${POSTGRES_SHARED_BUFFERS:-128MB}
  POSTGRES_WORK_MEM: ${POSTGRES_WORK_MEM:-4MB}
  POSTGRES_MAINTENANCE_WORK_MEM: ${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}
  POSTGRES_EFFECTIVE_CACHE_SIZE: ${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}
  REDIS_HOST: ${REDIS_HOST:-redis}
  REDIS_PORT: ${REDIS_PORT:-6379}
  REDIS_USERNAME: ${REDIS_USERNAME:-}
  REDIS_PASSWORD: ${REDIS_PASSWORD:-difyai123456}
  REDIS_USE_SSL: ${REDIS_USE_SSL:-false}
  REDIS_DB: ${REDIS_DB:-0}
  REDIS_USE_SENTINEL: ${REDIS_USE_SENTINEL:-false}
  REDIS_SENTINELS: ${REDIS_SENTINELS:-}
  REDIS_SENTINEL_SERVICE_NAME: ${REDIS_SENTINEL_SERVICE_NAME:-}
  REDIS_SENTINEL_USERNAME: ${REDIS_SENTINEL_USERNAME:-}
  REDIS_SENTINEL_PASSWORD: ${REDIS_SENTINEL_PASSWORD:-}
  REDIS_SENTINEL_SOCKET_TIMEOUT: ${REDIS_SENTINEL_SOCKET_TIMEOUT:-0.1}
  REDIS_USE_CLUSTERS: ${REDIS_USE_CLUSTERS:-false}
  REDIS_CLUSTERS: ${REDIS_CLUSTERS:-}
  REDIS_CLUSTERS_PASSWORD: ${REDIS_CLUSTERS_PASSWORD:-}
  CELERY_BROKER_URL: ${CELERY_BROKER_URL:-redis://:difyai123456@redis:6379/1}
  BROKER_USE_SSL: ${BROKER_USE_SSL:-false}
  CELERY_USE_SENTINEL: ${CELERY_USE_SENTINEL:-false}
  CELERY_SENTINEL_MASTER_NAME: ${CELERY_SENTINEL_MASTER_NAME:-}
  CELERY_SENTINEL_SOCKET_TIMEOUT: ${CELERY_SENTINEL_SOCKET_TIMEOUT:-0.1}
  WEB_API_CORS_ALLOW_ORIGINS: ${WEB_API_CORS_ALLOW_ORIGINS:-*}
  CONSOLE_CORS_ALLOW_ORIGINS: ${CONSOLE_CORS_ALLOW_ORIGINS:-*}
  STORAGE_TYPE: ${STORAGE_TYPE:-opendal}
  OPENDAL_SCHEME: ${OPENDAL_SCHEME:-fs}
  OPENDAL_FS_ROOT: ${OPENDAL_FS_ROOT:-storage}
  S3_ENDPOINT: ${S3_ENDPOINT:-}
  S3_REGION: ${S3_REGION:-us-east-1}
  S3_BUCKET_NAME: ${S3_BUCKET_NAME:-difyai}
  S3_ACCESS_KEY: ${S3_ACCESS_KEY:-}
  S3_SECRET_KEY: ${S3_SECRET_KEY:-}
  S3_USE_AWS_MANAGED_IAM: ${S3_USE_AWS_MANAGED_IAM:-false}
  AZURE_BLOB_ACCOUNT_NAME: ${AZURE_BLOB_ACCOUNT_NAME:-difyai}
  AZURE_BLOB_ACCOUNT_KEY: ${AZURE_BLOB_ACCOUNT_KEY:-difyai}
  AZURE_BLOB_CONTAINER_NAME: ${AZURE_BLOB_CONTAINER_NAME:-difyai-container}
  AZURE_BLOB_ACCOUNT_URL: ${AZURE_BLOB_ACCOUNT_URL:-https://<your_account_name>.blob.core.windows.net}
  GOOGLE_STORAGE_BUCKET_NAME: ${GOOGLE_STORAGE_BUCKET_NAME:-your-bucket-name}
  GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: ${GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64:-}
  ALIYUN_OSS_BUCKET_NAME: ${ALIYUN_OSS_BUCKET_NAME:-your-bucket-name}
  ALIYUN_OSS_ACCESS_KEY: ${ALIYUN_OSS_ACCESS_KEY:-your-access-key}
  ALIYUN_OSS_SECRET_KEY: ${ALIYUN_OSS_SECRET_KEY:-your-secret-key}
  ALIYUN_OSS_ENDPOINT: ${ALIYUN_OSS_ENDPOINT:-https://oss-ap-southeast-1-internal.aliyuncs.com}
  ALIYUN_OSS_REGION: ${ALIYUN_OSS_REGION:-ap-southeast-1}
  ALIYUN_OSS_AUTH_VERSION: ${ALIYUN_OSS_AUTH_VERSION:-v4}
  ALIYUN_OSS_PATH: ${ALIYUN_OSS_PATH:-your-path}
  TENCENT_COS_BUCKET_NAME: ${TENCENT_COS_BUCKET_NAME:-your-bucket-name}
  TENCENT_COS_SECRET_KEY: ${TENCENT_COS_SECRET_KEY:-your-secret-key}
  TENCENT_COS_SECRET_ID: ${TENCENT_COS_SECRET_ID:-your-secret-id}
  TENCENT_COS_REGION: ${TENCENT_COS_REGION:-your-region}
  TENCENT_COS_SCHEME: ${TENCENT_COS_SCHEME:-your-scheme}
  OCI_ENDPOINT: ${OCI_ENDPOINT:-https://objectstorage.us-ashburn-1.oraclecloud.com}
  OCI_BUCKET_NAME: ${OCI_BUCKET_NAME:-your-bucket-name}
  OCI_ACCESS_KEY: ${OCI_ACCESS_KEY:-your-access-key}
  OCI_SECRET_KEY: ${OCI_SECRET_KEY:-your-secret-key}
  OCI_REGION: ${OCI_REGION:-us-ashburn-1}
  HUAWEI_OBS_BUCKET_NAME: ${HUAWEI_OBS_BUCKET_NAME:-your-bucket-name}
  HUAWEI_OBS_SECRET_KEY: ${HUAWEI_OBS_SECRET_KEY:-your-secret-key}
  HUAWEI_OBS_ACCESS_KEY: ${HUAWEI_OBS_ACCESS_KEY:-your-access-key}
  HUAWEI_OBS_SERVER: ${HUAWEI_OBS_SERVER:-your-server-url}
  VOLCENGINE_TOS_BUCKET_NAME: ${VOLCENGINE_TOS_BUCKET_NAME:-your-bucket-name}
  VOLCENGINE_TOS_SECRET_KEY: ${VOLCENGINE_TOS_SECRET_KEY:-your-secret-key}
  VOLCENGINE_TOS_ACCESS_KEY: ${VOLCENGINE_TOS_ACCESS_KEY:-your-access-key}
  VOLCENGINE_TOS_ENDPOINT: ${VOLCENGINE_TOS_ENDPOINT:-your-server-url}
  VOLCENGINE_TOS_REGION: ${VOLCENGINE_TOS_REGION:-your-region}
  BAIDU_OBS_BUCKET_NAME: ${BAIDU_OBS_BUCKET_NAME:-your-bucket-name}
  BAIDU_OBS_SECRET_KEY: ${BAIDU_OBS_SECRET_KEY:-your-secret-key}
  BAIDU_OBS_ACCESS_KEY: ${BAIDU_OBS_ACCESS_KEY:-your-access-key}
  BAIDU_OBS_ENDPOINT: ${BAIDU_OBS_ENDPOINT:-your-server-url}
  SUPABASE_BUCKET_NAME: ${SUPABASE_BUCKET_NAME:-your-bucket-name}
  SUPABASE_API_KEY: ${SUPABASE_API_KEY:-your-access-key}
  SUPABASE_URL: ${SUPABASE_URL:-your-server-url}
  VECTOR_STORE: ${VECTOR_STORE:-weaviate}
  WEAVIATE_ENDPOINT: ${WEAVIATE_ENDPOINT:-http://weaviate:8080}
  WEAVIATE_API_KEY: ${WEAVIATE_API_KEY:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
  QDRANT_URL: ${QDRANT_URL:-http://qdrant:6333}
  QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456}
  QDRANT_CLIENT_TIMEOUT: ${QDRANT_CLIENT_TIMEOUT:-20}
  QDRANT_GRPC_ENABLED: ${QDRANT_GRPC_ENABLED:-false}
  QDRANT_GRPC_PORT: ${QDRANT_GRPC_PORT:-6334}
  MILVUS_URI: ${MILVUS_URI:-http://127.0.0.1:19530}
  MILVUS_TOKEN: ${MILVUS_TOKEN:-}
  MILVUS_USER: ${MILVUS_USER:-root}
  MILVUS_PASSWORD: ${MILVUS_PASSWORD:-Milvus}
  MILVUS_ENABLE_HYBRID_SEARCH: ${MILVUS_ENABLE_HYBRID_SEARCH:-False}
  MYSCALE_HOST: ${MYSCALE_HOST:-myscale}
  MYSCALE_PORT: ${MYSCALE_PORT:-8123}
  MYSCALE_USER: ${MYSCALE_USER:-default}
  MYSCALE_PASSWORD: ${MYSCALE_PASSWORD:-}
  MYSCALE_DATABASE: ${MYSCALE_DATABASE:-dify}
  MYSCALE_FTS_PARAMS: ${MYSCALE_FTS_PARAMS:-}
  COUCHBASE_CONNECTION_STRING: ${COUCHBASE_CONNECTION_STRING:-couchbase://couchbase-server}
  COUCHBASE_USER: ${COUCHBASE_USER:-Administrator}
  COUCHBASE_PASSWORD: ${COUCHBASE_PASSWORD:-password}
  COUCHBASE_BUCKET_NAME: ${COUCHBASE_BUCKET_NAME:-Embeddings}
  COUCHBASE_SCOPE_NAME: ${COUCHBASE_SCOPE_NAME:-_default}
  PGVECTOR_HOST: ${PGVECTOR_HOST:-pgvector}
  PGVECTOR_PORT: ${PGVECTOR_PORT:-5432}
  PGVECTOR_USER: ${PGVECTOR_USER:-postgres}
  PGVECTOR_PASSWORD: ${PGVECTOR_PASSWORD:-difyai123456}
  PGVECTOR_DATABASE: ${PGVECTOR_DATABASE:-dify}
  PGVECTOR_MIN_CONNECTION: ${PGVECTOR_MIN_CONNECTION:-1}
  PGVECTOR_MAX_CONNECTION: ${PGVECTOR_MAX_CONNECTION:-5}
  PGVECTO_RS_HOST: ${PGVECTO_RS_HOST:-pgvecto-rs}
  PGVECTO_RS_PORT: ${PGVECTO_RS_PORT:-5432}
  PGVECTO_RS_USER: ${PGVECTO_RS_USER:-postgres}
  PGVECTO_RS_PASSWORD: ${PGVECTO_RS_PASSWORD:-difyai123456}
  PGVECTO_RS_DATABASE: ${PGVECTO_RS_DATABASE:-dify}
  ANALYTICDB_KEY_ID: ${ANALYTICDB_KEY_ID:-your-ak}
  ANALYTICDB_KEY_SECRET: ${ANALYTICDB_KEY_SECRET:-your-sk}
  ANALYTICDB_REGION_ID: ${ANALYTICDB_REGION_ID:-cn-hangzhou}
  ANALYTICDB_INSTANCE_ID: ${ANALYTICDB_INSTANCE_ID:-gp-ab123456}
  ANALYTICDB_ACCOUNT: ${ANALYTICDB_ACCOUNT:-testaccount}
  ANALYTICDB_PASSWORD: ${ANALYTICDB_PASSWORD:-testpassword}
  ANALYTICDB_NAMESPACE: ${ANALYTICDB_NAMESPACE:-dify}
  ANALYTICDB_NAMESPACE_PASSWORD: ${ANALYTICDB_NAMESPACE_PASSWORD:-difypassword}
  ANALYTICDB_HOST: ${ANALYTICDB_HOST:-gp-test.aliyuncs.com}
  ANALYTICDB_PORT: ${ANALYTICDB_PORT:-5432}
  ANALYTICDB_MIN_CONNECTION: ${ANALYTICDB_MIN_CONNECTION:-1}
  ANALYTICDB_MAX_CONNECTION: ${ANALYTICDB_MAX_CONNECTION:-5}
  TIDB_VECTOR_HOST: ${TIDB_VECTOR_HOST:-tidb}
  TIDB_VECTOR_PORT: ${TIDB_VECTOR_PORT:-4000}
  TIDB_VECTOR_USER: ${TIDB_VECTOR_USER:-}
  TIDB_VECTOR_PASSWORD: ${TIDB_VECTOR_PASSWORD:-}
  TIDB_VECTOR_DATABASE: ${TIDB_VECTOR_DATABASE:-dify}
  TIDB_ON_QDRANT_URL: ${TIDB_ON_QDRANT_URL:-http://127.0.0.1}
  TIDB_ON_QDRANT_API_KEY: ${TIDB_ON_QDRANT_API_KEY:-dify}
  TIDB_ON_QDRANT_CLIENT_TIMEOUT: ${TIDB_ON_QDRANT_CLIENT_TIMEOUT:-20}
  TIDB_ON_QDRANT_GRPC_ENABLED: ${TIDB_ON_QDRANT_GRPC_ENABLED:-false}
  TIDB_ON_QDRANT_GRPC_PORT: ${TIDB_ON_QDRANT_GRPC_PORT:-6334}
  TIDB_PUBLIC_KEY: ${TIDB_PUBLIC_KEY:-dify}
  TIDB_PRIVATE_KEY: ${TIDB_PRIVATE_KEY:-dify}
  TIDB_API_URL: ${TIDB_API_URL:-http://127.0.0.1}
  TIDB_IAM_API_URL: ${TIDB_IAM_API_URL:-http://127.0.0.1}
  TIDB_REGION: ${TIDB_REGION:-regions/aws-us-east-1}
  TIDB_PROJECT_ID: ${TIDB_PROJECT_ID:-dify}
  TIDB_SPEND_LIMIT: ${TIDB_SPEND_LIMIT:-100}
  CHROMA_HOST: ${CHROMA_HOST:-127.0.0.1}
  CHROMA_PORT: ${CHROMA_PORT:-8000}
  CHROMA_TENANT: ${CHROMA_TENANT:-default_tenant}
  CHROMA_DATABASE: ${CHROMA_DATABASE:-default_database}
  CHROMA_AUTH_PROVIDER: ${CHROMA_AUTH_PROVIDER:-chromadb.auth.token_authn.TokenAuthClientProvider}
  CHROMA_AUTH_CREDENTIALS: ${CHROMA_AUTH_CREDENTIALS:-}
  ORACLE_HOST: ${ORACLE_HOST:-oracle}
  ORACLE_PORT: ${ORACLE_PORT:-1521}
  ORACLE_USER: ${ORACLE_USER:-dify}
  ORACLE_PASSWORD: ${ORACLE_PASSWORD:-dify}
  ORACLE_DATABASE: ${ORACLE_DATABASE:-FREEPDB1}
  RELYT_HOST: ${RELYT_HOST:-db}
  RELYT_PORT: ${RELYT_PORT:-5432}
  RELYT_USER: ${RELYT_USER:-postgres}
  RELYT_PASSWORD: ${RELYT_PASSWORD:-difyai123456}
  RELYT_DATABASE: ${RELYT_DATABASE:-postgres}
  OPENSEARCH_HOST: ${OPENSEARCH_HOST:-opensearch}
  OPENSEARCH_PORT: ${OPENSEARCH_PORT:-9200}
  OPENSEARCH_USER: ${OPENSEARCH_USER:-admin}
  OPENSEARCH_PASSWORD: ${OPENSEARCH_PASSWORD:-admin}
  OPENSEARCH_SECURE: ${OPENSEARCH_SECURE:-true}
  TENCENT_VECTOR_DB_URL: ${TENCENT_VECTOR_DB_URL:-http://127.0.0.1}
  TENCENT_VECTOR_DB_API_KEY: ${TENCENT_VECTOR_DB_API_KEY:-dify}
  TENCENT_VECTOR_DB_TIMEOUT: ${TENCENT_VECTOR_DB_TIMEOUT:-30}
  TENCENT_VECTOR_DB_USERNAME: ${TENCENT_VECTOR_DB_USERNAME:-dify}
  TENCENT_VECTOR_DB_DATABASE: ${TENCENT_VECTOR_DB_DATABASE:-dify}
  TENCENT_VECTOR_DB_SHARD: ${TENCENT_VECTOR_DB_SHARD:-1}
  TENCENT_VECTOR_DB_REPLICAS: ${TENCENT_VECTOR_DB_REPLICAS:-2}
  ELASTICSEARCH_HOST: ${ELASTICSEARCH_HOST:-0.0.0.0}
  ELASTICSEARCH_PORT: ${ELASTICSEARCH_PORT:-9200}
  ELASTICSEARCH_USERNAME: ${ELASTICSEARCH_USERNAME:-elastic}
  ELASTICSEARCH_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic}
  KIBANA_PORT: ${KIBANA_PORT:-5601}
  BAIDU_VECTOR_DB_ENDPOINT: ${BAIDU_VECTOR_DB_ENDPOINT:-http://127.0.0.1:5287}
  BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS: ${BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS:-30000}
  BAIDU_VECTOR_DB_ACCOUNT: ${BAIDU_VECTOR_DB_ACCOUNT:-root}
  BAIDU_VECTOR_DB_API_KEY: ${BAIDU_VECTOR_DB_API_KEY:-dify}
  BAIDU_VECTOR_DB_DATABASE: ${BAIDU_VECTOR_DB_DATABASE:-dify}
  BAIDU_VECTOR_DB_SHARD: ${BAIDU_VECTOR_DB_SHARD:-1}
  BAIDU_VECTOR_DB_REPLICAS: ${BAIDU_VECTOR_DB_REPLICAS:-3}
  VIKINGDB_ACCESS_KEY: ${VIKINGDB_ACCESS_KEY:-your-ak}
  VIKINGDB_SECRET_KEY: ${VIKINGDB_SECRET_KEY:-your-sk}
  VIKINGDB_REGION: ${VIKINGDB_REGION:-cn-shanghai}
  VIKINGDB_HOST: ${VIKINGDB_HOST:-api-vikingdb.xxx.volces.com}
  VIKINGDB_SCHEMA: ${VIKINGDB_SCHEMA:-http}
  VIKINGDB_CONNECTION_TIMEOUT: ${VIKINGDB_CONNECTION_TIMEOUT:-30}
  VIKINGDB_SOCKET_TIMEOUT: ${VIKINGDB_SOCKET_TIMEOUT:-30}
  LINDORM_URL: ${LINDORM_URL:-http://lindorm:30070}
  LINDORM_USERNAME: ${LINDORM_USERNAME:-lindorm}
  LINDORM_PASSWORD: ${LINDORM_PASSWORD:-lindorm}
  OCEANBASE_VECTOR_HOST: ${OCEANBASE_VECTOR_HOST:-oceanbase}
  OCEANBASE_VECTOR_PORT: ${OCEANBASE_VECTOR_PORT:-2881}
  OCEANBASE_VECTOR_USER: ${OCEANBASE_VECTOR_USER:-root@test}
  OCEANBASE_VECTOR_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}
  OCEANBASE_VECTOR_DATABASE: ${OCEANBASE_VECTOR_DATABASE:-test}
  OCEANBASE_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai}
  OCEANBASE_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G}
  UPSTASH_VECTOR_URL: ${UPSTASH_VECTOR_URL:-https://xxx-vector.upstash.io}
  UPSTASH_VECTOR_TOKEN: ${UPSTASH_VECTOR_TOKEN:-dify}
  UPLOAD_FILE_SIZE_LIMIT: ${UPLOAD_FILE_SIZE_LIMIT:-15}
  UPLOAD_FILE_BATCH_LIMIT: ${UPLOAD_FILE_BATCH_LIMIT:-5}
  ETL_TYPE: ${ETL_TYPE:-dify}
  UNSTRUCTURED_API_URL: ${UNSTRUCTURED_API_URL:-}
  UNSTRUCTURED_API_KEY: ${UNSTRUCTURED_API_KEY:-}
  SCARF_NO_ANALYTICS: ${SCARF_NO_ANALYTICS:-true}
  PROMPT_GENERATION_MAX_TOKENS: ${PROMPT_GENERATION_MAX_TOKENS:-512}
  CODE_GENERATION_MAX_TOKENS: ${CODE_GENERATION_MAX_TOKENS:-1024}
  MULTIMODAL_SEND_FORMAT: ${MULTIMODAL_SEND_FORMAT:-base64}
  UPLOAD_IMAGE_FILE_SIZE_LIMIT: ${UPLOAD_IMAGE_FILE_SIZE_LIMIT:-10}
  UPLOAD_VIDEO_FILE_SIZE_LIMIT: ${UPLOAD_VIDEO_FILE_SIZE_LIMIT:-100}
  UPLOAD_AUDIO_FILE_SIZE_LIMIT: ${UPLOAD_AUDIO_FILE_SIZE_LIMIT:-50}
  SENTRY_DSN: ${SENTRY_DSN:-}
  API_SENTRY_DSN: ${API_SENTRY_DSN:-}
  API_SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}
  API_SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}
  WEB_SENTRY_DSN: ${WEB_SENTRY_DSN:-}
  NOTION_INTEGRATION_TYPE: ${NOTION_INTEGRATION_TYPE:-public}
  NOTION_CLIENT_SECRET: ${NOTION_CLIENT_SECRET:-}
  NOTION_CLIENT_ID: ${NOTION_CLIENT_ID:-}
  NOTION_INTERNAL_SECRET: ${NOTION_INTERNAL_SECRET:-}
  MAIL_TYPE: ${MAIL_TYPE:-resend}
  MAIL_DEFAULT_SEND_FROM: ${MAIL_DEFAULT_SEND_FROM:-}
  RESEND_API_URL: ${RESEND_API_URL:-https://api.resend.com}
  RESEND_API_KEY: ${RESEND_API_KEY:-your-resend-api-key}
  SMTP_SERVER: ${SMTP_SERVER:-}
  SMTP_PORT: ${SMTP_PORT:-465}
  SMTP_USERNAME: ${SMTP_USERNAME:-}
  SMTP_PASSWORD: ${SMTP_PASSWORD:-}
  SMTP_USE_TLS: ${SMTP_USE_TLS:-true}
  SMTP_OPPORTUNISTIC_TLS: ${SMTP_OPPORTUNISTIC_TLS:-false}
  INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-4000}
  INVITE_EXPIRY_HOURS: ${INVITE_EXPIRY_HOURS:-72}
  RESET_PASSWORD_TOKEN_EXPIRY_MINUTES: ${RESET_PASSWORD_TOKEN_EXPIRY_MINUTES:-5}
  CODE_EXECUTION_ENDPOINT: ${CODE_EXECUTION_ENDPOINT:-http://sandbox:8194}
  CODE_EXECUTION_API_KEY: ${CODE_EXECUTION_API_KEY:-dify-sandbox}
  CODE_MAX_NUMBER: ${CODE_MAX_NUMBER:-9223372036854775807}
  CODE_MIN_NUMBER: ${CODE_MIN_NUMBER:--9223372036854775808}
  CODE_MAX_DEPTH: ${CODE_MAX_DEPTH:-5}
  CODE_MAX_PRECISION: ${CODE_MAX_PRECISION:-20}
  CODE_MAX_STRING_LENGTH: ${CODE_MAX_STRING_LENGTH:-80000}
  CODE_MAX_STRING_ARRAY_LENGTH: ${CODE_MAX_STRING_ARRAY_LENGTH:-30}
  CODE_MAX_OBJECT_ARRAY_LENGTH: ${CODE_MAX_OBJECT_ARRAY_LENGTH:-30}
  CODE_MAX_NUMBER_ARRAY_LENGTH: ${CODE_MAX_NUMBER_ARRAY_LENGTH:-1000}
  CODE_EXECUTION_CONNECT_TIMEOUT: ${CODE_EXECUTION_CONNECT_TIMEOUT:-10}
  CODE_EXECUTION_READ_TIMEOUT: ${CODE_EXECUTION_READ_TIMEOUT:-60}
  CODE_EXECUTION_WRITE_TIMEOUT: ${CODE_EXECUTION_WRITE_TIMEOUT:-10}
  TEMPLATE_TRANSFORM_MAX_LENGTH: ${TEMPLATE_TRANSFORM_MAX_LENGTH:-80000}
  WORKFLOW_MAX_EXECUTION_STEPS: ${WORKFLOW_MAX_EXECUTION_STEPS:-500}
  WORKFLOW_MAX_EXECUTION_TIME: ${WORKFLOW_MAX_EXECUTION_TIME:-1200}
  WORKFLOW_CALL_MAX_DEPTH: ${WORKFLOW_CALL_MAX_DEPTH:-5}
  MAX_VARIABLE_SIZE: ${MAX_VARIABLE_SIZE:-204800}
  WORKFLOW_PARALLEL_DEPTH_LIMIT: ${WORKFLOW_PARALLEL_DEPTH_LIMIT:-3}
  WORKFLOW_FILE_UPLOAD_LIMIT: ${WORKFLOW_FILE_UPLOAD_LIMIT:-10}
  HTTP_REQUEST_NODE_MAX_BINARY_SIZE: ${HTTP_REQUEST_NODE_MAX_BINARY_SIZE:-10485760}
  HTTP_REQUEST_NODE_MAX_TEXT_SIZE: ${HTTP_REQUEST_NODE_MAX_TEXT_SIZE:-1048576}
  SSRF_PROXY_HTTP_URL: ${SSRF_PROXY_HTTP_URL:-http://ssrf_proxy:3128}
  SSRF_PROXY_HTTPS_URL: ${SSRF_PROXY_HTTPS_URL:-http://ssrf_proxy:3128}
  TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}
  PGUSER: ${PGUSER:-${DB_USERNAME}}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-${DB_PASSWORD}}
  POSTGRES_DB: ${POSTGRES_DB:-${DB_DATABASE}}
  PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}
  SANDBOX_API_KEY: ${SANDBOX_API_KEY:-dify-sandbox}
  SANDBOX_GIN_MODE: ${SANDBOX_GIN_MODE:-release}
  SANDBOX_WORKER_TIMEOUT: ${SANDBOX_WORKER_TIMEOUT:-15}
  SANDBOX_ENABLE_NETWORK: ${SANDBOX_ENABLE_NETWORK:-true}
  SANDBOX_HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128}
  SANDBOX_HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128}
  SANDBOX_PORT: ${SANDBOX_PORT:-8194}
  WEAVIATE_PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}
  WEAVIATE_QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25}
  WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-true}
  WEAVIATE_DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none}
  WEAVIATE_CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1}
  WEAVIATE_AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true}
  WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
  WEAVIATE_AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}
  WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true}
  WEAVIATE_AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}
  CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456}
  CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider}
  CHROMA_IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE}
  ORACLE_PWD: ${ORACLE_PWD:-Dify123456}
  ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8}
  ETCD_AUTO_COMPACTION_MODE: ${ETCD_AUTO_COMPACTION_MODE:-revision}
  ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000}
  ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296}
  ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000}
  MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin}
  MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin}
  ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379}
  MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000}
  MILVUS_AUTHORIZATION_ENABLED: ${MILVUS_AUTHORIZATION_ENABLED:-true}
  PGVECTOR_PGUSER: ${PGVECTOR_PGUSER:-postgres}
  PGVECTOR_POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}
  PGVECTOR_POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}
  PGVECTOR_PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}
  OPENSEARCH_DISCOVERY_TYPE: ${OPENSEARCH_DISCOVERY_TYPE:-single-node}
  OPENSEARCH_BOOTSTRAP_MEMORY_LOCK: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true}
  OPENSEARCH_JAVA_OPTS_MIN: ${OPENSEARCH_JAVA_OPTS_MIN:-512m}
  OPENSEARCH_JAVA_OPTS_MAX: ${OPENSEARCH_JAVA_OPTS_MAX:-1024m}
  OPENSEARCH_INITIAL_ADMIN_PASSWORD: ${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123}
  OPENSEARCH_MEMLOCK_SOFT: ${OPENSEARCH_MEMLOCK_SOFT:--1}
  OPENSEARCH_MEMLOCK_HARD: ${OPENSEARCH_MEMLOCK_HARD:--1}
  OPENSEARCH_NOFILE_SOFT: ${OPENSEARCH_NOFILE_SOFT:-65536}
  OPENSEARCH_NOFILE_HARD: ${OPENSEARCH_NOFILE_HARD:-65536}
  NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_}
  NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false}
  NGINX_PORT: ${NGINX_PORT:-80}
  NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443}
  NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt}
  NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key}
  NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3}
  NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto}
  NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M}
  NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65}
  NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s}
  NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s}
  NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false}
  CERTBOT_EMAIL: ${CERTBOT_EMAIL:-your_email@example.com}
  CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-your_domain.com}
  CERTBOT_OPTIONS: ${CERTBOT_OPTIONS:-}
  SSRF_HTTP_PORT: ${SSRF_HTTP_PORT:-3128}
  SSRF_COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid}
  SSRF_REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194}
  SSRF_SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox}
  SSRF_DEFAULT_TIME_OUT: ${SSRF_DEFAULT_TIME_OUT:-5}
  SSRF_DEFAULT_CONNECT_TIME_OUT: ${SSRF_DEFAULT_CONNECT_TIME_OUT:-5}
  SSRF_DEFAULT_READ_TIME_OUT: ${SSRF_DEFAULT_READ_TIME_OUT:-5}
  SSRF_DEFAULT_WRITE_TIME_OUT: ${SSRF_DEFAULT_WRITE_TIME_OUT:-5}
  EXPOSE_NGINX_PORT: ${EXPOSE_NGINX_PORT:-80}
  EXPOSE_NGINX_SSL_PORT: ${EXPOSE_NGINX_SSL_PORT:-443}
  #EXPOSE_NGINX_PORT: ${EXPOSE_NGINX_PORT:-48080}
  #EXPOSE_NGINX_SSL_PORT: ${EXPOSE_NGINX_SSL_PORT:-48443}
  POSITION_TOOL_PINS: ${POSITION_TOOL_PINS:-}
  POSITION_TOOL_INCLUDES: ${POSITION_TOOL_INCLUDES:-}
  POSITION_TOOL_EXCLUDES: ${POSITION_TOOL_EXCLUDES:-}
  POSITION_PROVIDER_PINS: ${POSITION_PROVIDER_PINS:-}
  POSITION_PROVIDER_INCLUDES: ${POSITION_PROVIDER_INCLUDES:-}
  POSITION_PROVIDER_EXCLUDES: ${POSITION_PROVIDER_EXCLUDES:-}
  CSP_WHITELIST: ${CSP_WHITELIST:-}
  CREATE_TIDB_SERVICE_JOB_ENABLED: ${CREATE_TIDB_SERVICE_JOB_ENABLED:-false}
  MAX_SUBMIT_COUNT: ${MAX_SUBMIT_COUNT:-100}
  TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-10}
  #DB_PLUGIN_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}
  DB_PLUGIN_DATABASE: ${DB_PLUGIN_DATABASE:-dify}
  EXPOSE_PLUGIN_DAEMON_PORT: ${EXPOSE_PLUGIN_DAEMON_PORT:-5002}
  PLUGIN_DAEMON_PORT: ${PLUGIN_DAEMON_PORT:-5002}
  PLUGIN_DAEMON_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}
  PLUGIN_DAEMON_URL: ${PLUGIN_DAEMON_URL:-http://plugin_daemon:5002}
  PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
  PLUGIN_PPROF_ENABLED: ${PLUGIN_PPROF_ENABLED:-false}
  PLUGIN_DEBUGGING_HOST: ${PLUGIN_DEBUGGING_HOST:-0.0.0.0}
  PLUGIN_DEBUGGING_PORT: ${PLUGIN_DEBUGGING_PORT:-5003}
  EXPOSE_PLUGIN_DEBUGGING_HOST: ${EXPOSE_PLUGIN_DEBUGGING_HOST:-localhost}
  EXPOSE_PLUGIN_DEBUGGING_PORT: ${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}
  PLUGIN_DIFY_INNER_API_KEY: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
  PLUGIN_DIFY_INNER_API_URL: ${PLUGIN_DIFY_INNER_API_URL:-http://api:5001}
  ENDPOINT_URL_TEMPLATE: ${ENDPOINT_URL_TEMPLATE:-http://localhost/e/{hook_id}}
  MARKETPLACE_ENABLED: ${MARKETPLACE_ENABLED:-true}
  MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai}
  FORCE_VERIFYING_SIGNATURE: ${FORCE_VERIFYING_SIGNATURE:-true}
  HTTP_PROXY: http://myproxy.co.jp:8888
  HTTPS_PROXY: http://myproxy.co.jp:8888
  NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon

services:
  # API service
  api:
    image: langgenius/dify-api:1.0.0
    restart: always
    environment:
      # Use the shared environment variables.
      <<: *shared-api-worker-env
      # Startup mode, 'api' starts the API server.
      MODE: api
      SENTRY_DSN: ${API_SENTRY_DSN:-}
      SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}
      SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}
      PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
      INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      #NO_PROXY: localhost,127.0.0.1,10.10.10.10
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,sandbox,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    depends_on:
      - db
      - redis
    volumes:
      # Mount the storage directory to the container, for storing user files.
      - ./volumes/app/storage:/app/api/storage
    networks:
      #- ssrf_proxy_network
      - default

  # worker service
  # The Celery worker for processing the queue.
  worker:
    image: langgenius/dify-api:1.0.0
    restart: always
    environment:
      # Use the shared environment variables.
      <<: *shared-api-worker-env
      # Startup mode, 'worker' starts the Celery worker for processing the queue.
      MODE: worker
      SENTRY_DSN: ${API_SENTRY_DSN:-}
      SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}
      SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}
      PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
      INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    depends_on:
      - db
      - redis
    volumes:
      # Mount the storage directory to the container, for storing user files.
      - ./volumes/app/storage:/app/api/storage
    networks:
      #- ssrf_proxy_network
      - default

  # Frontend web application.
  web:
    image: langgenius/dify-web:1.0.0
    restart: always
    environment:
      CONSOLE_API_URL: ${CONSOLE_API_URL:-}
      APP_API_URL: ${APP_API_URL:-}
      SENTRY_DSN: ${WEB_SENTRY_DSN:-}
      NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0}
      TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}
      CSP_WHITELIST: ${CSP_WHITELIST:-}
      MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai}
      MARKETPLACE_URL: ${MARKETPLACE_URL:-https://marketplace.dify.ai}
      TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-}
      INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-}
      PM2_INSTANCES: ${PM2_INSTANCES:-2}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon

  # The postgres database.
  db:
    image: postgres:15-alpine
    restart: always
    environment:
      PGUSER: ${PGUSER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-difyai123456}
      POSTGRES_DB: ${POSTGRES_DB:-dify}
      PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    command: >
      postgres -c 'max_connections=${POSTGRES_MAX_CONNECTIONS:-100}'
               -c 'shared_buffers=${POSTGRES_SHARED_BUFFERS:-128MB}'
               -c 'work_mem=${POSTGRES_WORK_MEM:-4MB}'
               -c 'maintenance_work_mem=${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}'
               -c 'effective_cache_size=${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}'
    volumes:
      - ./volumes/db/data:/var/lib/postgresql/data
    healthcheck:
      test: [ 'CMD', 'pg_isready' ]
      interval: 1s
      timeout: 3s
      retries: 30
    ports:
      - '${EXPOSE_DB_PORT:-5432}:5432'

  # The redis cache.
  redis:
    image: redis:6-alpine
    restart: always
    environment:
      REDISCLI_AUTH: ${REDIS_PASSWORD:-difyai123456}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      # Mount the redis data directory to the container.
      - ./volumes/redis/data:/data
    # Set the redis password when startup redis server.
    command: redis-server --requirepass ${REDIS_PASSWORD:-difyai123456}
    healthcheck:
      test: [ 'CMD', 'redis-cli', 'ping' ]

  # The DifySandbox
  sandbox:
    image: langgenius/dify-sandbox:0.2.10
    restart: always
    environment:
      # The DifySandbox configurations
      # Make sure you are changing this key for your deployment with a strong key.
      # You can generate a strong key using `openssl rand -base64 42`.
      API_KEY: ${SANDBOX_API_KEY:-dify-sandbox}
      GIN_MODE: ${SANDBOX_GIN_MODE:-release}
      WORKER_TIMEOUT: ${SANDBOX_WORKER_TIMEOUT:-15}
      ENABLE_NETWORK: ${SANDBOX_ENABLE_NETWORK:-true}
      #HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128}
      #HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128}
      SANDBOX_PORT: ${SANDBOX_PORT:-8194}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/sandbox/dependencies:/dependencies
      - ./volumes/sandbox/conf:/conf
    healthcheck:
      test: [ 'CMD', 'curl', '-f', 'http://localhost:8194/health' ]
    networks:
      #- ssrf_proxy_network
      - default

  # plugin daemon
  plugin_daemon:
    image: langgenius/dify-plugin-daemon:0.0.3-local
    restart: always
    environment:
      # Use the shared environment variables.
      <<: *shared-api-worker-env
      #DB_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}
      DB_DATABASE: ${DB_PLUGIN_DATABASE:-dify}
      SERVER_PORT: ${PLUGIN_DAEMON_PORT:-5002}
      SERVER_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}
      MAX_PLUGIN_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
      PPROF_ENABLED: ${PLUGIN_PPROF_ENABLED:-false}
      DIFY_INNER_API_URL: ${PLUGIN_DIFY_INNER_API_URL:-http://api:5001}
      DIFY_INNER_API_KEY: ${INNER_API_KEY_FOR_PLUGIN:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
      PLUGIN_REMOTE_INSTALLING_HOST: ${PLUGIN_REMOTE_INSTALL_HOST:-0.0.0.0}
      PLUGIN_REMOTE_INSTALLING_PORT: ${PLUGIN_REMOTE_INSTALL_PORT:-5003}
      PLUGIN_WORKING_PATH: ${PLUGIN_WORKING_PATH:-/app/storage/cwd}
      FORCE_VERIFYING_SIGNATURE: ${FORCE_VERIFYING_SIGNATURE:-true}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    #depends_on:
      #- db
      #- redis
    ports:
      - "${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}:${PLUGIN_DEBUGGING_PORT:-5003}"
    volumes:
      - ./volumes/plugin_daemon:/app/storage
    #networks:
      #- default


  # ssrf_proxy server
  # for more information, please refer to
  # https://docs.dify.ai/learn-more/faq/install-faq#id-18.-why-is-ssrf_proxy-needed
  ssrf_proxy:
    image: ubuntu/squid:latest
    restart: always
    volumes:
      - ./ssrf_proxy/squid.conf.template:/etc/squid/squid.conf.template
      - ./ssrf_proxy/docker-entrypoint.sh:/docker-entrypoint-mount.sh
    entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ]
    environment:
      # pls clearly modify the squid env vars to fit your network environment.
      HTTP_PORT: ${SSRF_HTTP_PORT:-3128}
      COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid}
      REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194}
      SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox}
      SANDBOX_PORT: ${SANDBOX_PORT:-8194}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    networks:
      #- ssrf_proxy_network
      - default

  # Certbot service
  # use `docker-compose --profile certbot up` to start the certbot service.
  certbot:
    image: certbot/certbot
    profiles:
      - certbot
    volumes:
      - ./volumes/certbot/conf:/etc/letsencrypt
      - ./volumes/certbot/www:/var/www/html
      - ./volumes/certbot/logs:/var/log/letsencrypt
      - ./volumes/certbot/conf/live:/etc/letsencrypt/live
      - ./certbot/update-cert.template.txt:/update-cert.template.txt
      - ./certbot/docker-entrypoint.sh:/docker-entrypoint.sh
    environment:
      - CERTBOT_EMAIL=${CERTBOT_EMAIL}
      - CERTBOT_DOMAIN=${CERTBOT_DOMAIN}
      - CERTBOT_OPTIONS=${CERTBOT_OPTIONS:-}
      - HTTP_PROXY=http://myproxy.co.jp:8888
      - HTTPS_PROXY=http://myproxy.co.jp:8888
      - NO_PROXY=localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
      #HTTP_PROXY: http://myproxy.co.jp:8888
      #HTTPS_PROXY: http://myproxy.co.jp:8888
      #NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrand,db,redis,web,worker,plugin_daemon
    entrypoint: [ '/docker-entrypoint.sh' ]
    command: [ 'tail', '-f', '/dev/null' ]

  # The nginx reverse proxy.
  # used for reverse proxying the API service and Web service.
  nginx:
    image: nginx:latest
    restart: always
    volumes:
      - ./nginx/nginx.conf.template:/etc/nginx/nginx.conf.template
      - ./nginx/proxy.conf.template:/etc/nginx/proxy.conf.template
      - ./nginx/https.conf.template:/etc/nginx/https.conf.template
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/docker-entrypoint.sh:/docker-entrypoint-mount.sh
      - ./nginx/ssl:/etc/ssl # cert dir (legacy)
      - ./volumes/certbot/conf/live:/etc/letsencrypt/live # cert dir (with certbot container)
      - ./volumes/certbot/conf:/etc/letsencrypt
      - ./volumes/certbot/www:/var/www/html
    entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ]
    environment:
      NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_}
      NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false}
      NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443}
      NGINX_PORT: ${NGINX_PORT:-80}
      # You're required to add your own SSL certificates/keys to the `./nginx/ssl` directory
      # and modify the env vars below in .env if HTTPS_ENABLED is true.
      NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt}
      NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key}
      NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3}
      NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto}
      NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M}
      NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65}
      NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s}
      NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s}
      NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false}
      CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    depends_on:
      - api
      - web
    ports:
      - '${EXPOSE_NGINX_PORT:-80}:${NGINX_PORT:-80}'
      - '${EXPOSE_NGINX_SSL_PORT:-443}:${NGINX_SSL_PORT:-443}'
      #- '${EXPOSE_NGINX_PORT:-48080}:${NGINX_PORT:-80}'
      #- '${EXPOSE_NGINX_SSL_PORT:-48443}:${NGINX_SSL_PORT:-443}'

  # The Weaviate vector store.
  weaviate:
    image: semitechnologies/weaviate:1.19.0
    profiles:
      - ''
      - weaviate
    restart: always
    volumes:
      # Mount the Weaviate data directory to the container.
      - ./volumes/weaviate:/var/lib/weaviate
    environment:
      # The Weaviate configurations
      # You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information.
      PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}
      QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25}
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-false}
      DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none}
      CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1}
      AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true}
      AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
      AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}
      AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true}
      AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon

  # Qdrant vector store.
  # (if used, you need to set VECTOR_STORE to qdrant in the api & worker service.)
  qdrant:
    image: langgenius/qdrant:v1.7.3
    profiles:
      - qdrant
    restart: always
    volumes:
      - ./volumes/qdrant:/qdrant/storage
    environment:
      QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon

  # The Couchbase vector store.
  couchbase-server:
    build: ./couchbase-server
    profiles:
      - couchbase
    restart: always
    environment:
      - CLUSTER_NAME=dify_search
      - COUCHBASE_ADMINISTRATOR_USERNAME=${COUCHBASE_USER:-Administrator}
      - COUCHBASE_ADMINISTRATOR_PASSWORD=${COUCHBASE_PASSWORD:-password}
      - COUCHBASE_BUCKET=${COUCHBASE_BUCKET_NAME:-Embeddings}
      - COUCHBASE_BUCKET_RAMSIZE=512
      - COUCHBASE_RAM_SIZE=2048
      - COUCHBASE_EVENTING_RAM_SIZE=512
      - COUCHBASE_INDEX_RAM_SIZE=512
      - COUCHBASE_FTS_RAM_SIZE=1024
      - HTTP_PROXY=http://myproxy.co.jp:8888
      - HTTPS_PROXY=http://myproxy.co.jp:8888
      - NO_PROXY=localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
      #HTTP_PROXY: http://myproxy.co.jp:8888
      #HTTPS_PROXY: http://myproxy.co.jp:8888
      #NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrand,db,redis,web,worker,plugin_daemon
    hostname: couchbase-server
    container_name: couchbase-server
    working_dir: /opt/couchbase
    stdin_open: true
    tty: true
    entrypoint: [ "" ]
    command: sh -c "/opt/couchbase/init/init-cbserver.sh"
    volumes:
      - ./volumes/couchbase/data:/opt/couchbase/var/lib/couchbase/data
    healthcheck:
      # ensure bucket was created before proceeding
      test: [ "CMD-SHELL", "curl -s -f -u Administrator:password http://localhost:8091/pools/default/buckets | grep -q '\\[{' || exit 1" ]
      interval: 10s
      retries: 10
      start_period: 30s
      timeout: 10s

  # The pgvector vector database.
  pgvector:
    image: pgvector/pgvector:pg16
    profiles:
      - pgvector
    restart: always
    environment:
      PGUSER: ${PGVECTOR_PGUSER:-postgres}
      # The password for the default postgres user.
      POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}
      # The name of the default postgres database.
      POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}
      # postgres data directory
      PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/pgvector/data:/var/lib/postgresql/data
    healthcheck:
      test: [ 'CMD', 'pg_isready' ]
      interval: 1s
      timeout: 3s
      retries: 30

  # pgvecto-rs vector store
  pgvecto-rs:
    image: tensorchord/pgvecto-rs:pg16-v0.3.0
    profiles:
      - pgvecto-rs
    restart: always
    environment:
      PGUSER: ${PGVECTOR_PGUSER:-postgres}
      # The password for the default postgres user.
      POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}
      # The name of the default postgres database.
      POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}
      # postgres data directory
      PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/pgvecto_rs/data:/var/lib/postgresql/data
    healthcheck:
      test: [ 'CMD', 'pg_isready' ]
      interval: 1s
      timeout: 3s
      retries: 30

  # Chroma vector database
  chroma:
    image: ghcr.io/chroma-core/chroma:0.5.20
    profiles:
      - chroma
    restart: always
    volumes:
      - ./volumes/chroma:/chroma/chroma
    environment:
      CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456}
      CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider}
      IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon

  # OceanBase vector database
  oceanbase:
    image: quay.io/oceanbase/oceanbase-ce:4.3.3.0-100000142024101215
    profiles:
      - oceanbase
    restart: always
    volumes:
      - ./volumes/oceanbase/data:/root/ob
      - ./volumes/oceanbase/conf:/root/.obd/cluster
      - ./volumes/oceanbase/init.d:/root/boot/init.d
    environment:
      OB_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G}
      OB_SYS_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}
      OB_TENANT_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}
      OB_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai}
      OB_SERVER_IP: '127.0.0.1'
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon

  # Oracle vector database
  oracle:
    image: container-registry.oracle.com/database/free:latest
    profiles:
      - oracle
    restart: always
    volumes:
      - source: oradata
        type: volume
        target: /opt/oracle/oradata
      - ./startupscripts:/opt/oracle/scripts/startup
    environment:
      ORACLE_PWD: ${ORACLE_PWD:-Dify123456}
      ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon

  # Milvus vector database services
  etcd:
    container_name: milvus-etcd
    image: quay.io/coreos/etcd:v3.5.5
    profiles:
      - milvus
    environment:
      ETCD_AUTO_COMPACTION_MODE: ${ETCD_AUTO_COMPACTION_MODE:-revision}
      ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000}
      ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296}
      ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/milvus/etcd:/etcd
    command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
    healthcheck:
      test: [ 'CMD', 'etcdctl', 'endpoint', 'health' ]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - milvus

  minio:
    container_name: milvus-minio
    image: minio/minio:RELEASE.2023-03-20T20-16-18Z
    profiles:
      - milvus
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/milvus/minio:/minio_data
    command: minio server /minio_data --console-address ":9001"
    healthcheck:
      test: [ 'CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live' ]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - milvus

  milvus-standalone:
    container_name: milvus-standalone
    image: milvusdb/milvus:v2.5.0-beta
    profiles:
      - milvus
    command: [ 'milvus', 'run', 'standalone' ]
    environment:
      ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379}
      MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000}
      common.security.authorizationEnabled: ${MILVUS_AUTHORIZATION_ENABLED:-true}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/milvus/milvus:/var/lib/milvus
    healthcheck:
      test: [ 'CMD', 'curl', '-f', 'http://localhost:9091/healthz' ]
      interval: 30s
      start_period: 90s
      timeout: 20s
      retries: 3
    depends_on:
      - etcd
      - minio
    ports:
      - 19530:19530
      - 9091:9091
    networks:
      - milvus

  # Opensearch vector database
  opensearch:
    container_name: opensearch
    image: opensearchproject/opensearch:latest
    profiles:
      - opensearch
    environment:
      discovery.type: ${OPENSEARCH_DISCOVERY_TYPE:-single-node}
      bootstrap.memory_lock: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true}
      OPENSEARCH_JAVA_OPTS: -Xms${OPENSEARCH_JAVA_OPTS_MIN:-512m} -Xmx${OPENSEARCH_JAVA_OPTS_MAX:-1024m}
      OPENSEARCH_INITIAL_ADMIN_PASSWORD: ${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123}
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    ulimits:
      memlock:
        soft: ${OPENSEARCH_MEMLOCK_SOFT:--1}
        hard: ${OPENSEARCH_MEMLOCK_HARD:--1}
      nofile:
        soft: ${OPENSEARCH_NOFILE_SOFT:-65536}
        hard: ${OPENSEARCH_NOFILE_HARD:-65536}
    volumes:
      - ./volumes/opensearch/data:/usr/share/opensearch/data
    networks:
      - opensearch-net

  opensearch-dashboards:
    container_name: opensearch-dashboards
    image: opensearchproject/opensearch-dashboards:latest
    profiles:
      - opensearch
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch:9200"]'
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/opensearch/opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml
    networks:
      - opensearch-net
    depends_on:
      - opensearch

  # MyScale vector database
  myscale:
    container_name: myscale
    image: myscale/myscaledb:1.6.4
    profiles:
      - myscale
    restart: always
    environment:
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    tty: true
    volumes:
      - ./volumes/myscale/data:/var/lib/clickhouse
      - ./volumes/myscale/log:/var/log/clickhouse-server
      - ./volumes/myscale/config/users.d/custom_users_config.xml:/etc/clickhouse-server/users.d/custom_users_config.xml
    ports:
      - ${MYSCALE_PORT:-8123}:${MYSCALE_PORT:-8123}

  # https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-prod-prerequisites
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3
    container_name: elasticsearch
    profiles:
      - elasticsearch
      - elasticsearch-ja
    restart: always
    volumes:
      - ./elasticsearch/docker-entrypoint.sh:/docker-entrypoint-mount.sh
      - dify_es01_data:/usr/share/elasticsearch/data
    environment:
      ELASTIC_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic}
      VECTOR_STORE: ${VECTOR_STORE:-}
      cluster.name: dify-es-cluster
      node.name: dify-es0
      discovery.type: single-node
      xpack.license.self_generated.type: basic
      xpack.security.enabled: 'true'
      xpack.security.enrollment.enabled: 'false'
      xpack.security.http.ssl.enabled: 'false'
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    ports:
      - ${ELASTICSEARCH_PORT:-9200}:9200
    deploy:
      resources:
        limits:
          memory: 2g
    entrypoint: [ 'sh', '-c', "sh /docker-entrypoint-mount.sh" ]
    healthcheck:
      test: [ 'CMD', 'curl', '-s', 'http://localhost:9200/_cluster/health?pretty' ]
      interval: 30s
      timeout: 10s
      retries: 50

  # https://www.elastic.co/guide/en/kibana/current/docker.html
  # https://www.elastic.co/guide/en/kibana/current/settings.html
  kibana:
    image: docker.elastic.co/kibana/kibana:8.14.3
    container_name: kibana
    profiles:
      - elasticsearch
    depends_on:
      - elasticsearch
    restart: always
    environment:
      XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: d1a66dfd-c4d3-4a0a-8290-2abcb83ab3aa
      #NO_PROXY: localhost,127.0.0.1,elasticsearch,kibana
      NO_PROXY: localhost,127.0.0.1,elasticsearch,kibana,weaviate,qdrant,db,redis,web,worker,plugin_daemon
      XPACK_SECURITY_ENABLED: 'true'
      XPACK_SECURITY_ENROLLMENT_ENABLED: 'false'
      XPACK_SECURITY_HTTP_SSL_ENABLED: 'false'
      XPACK_FLEET_ISAIRGAPPED: 'true'
      I18N_LOCALE: zh-CN
      SERVER_PORT: '5601'
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      #NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrand,db,redis,web,worker,plugin_daemon
    ports:
      - ${KIBANA_PORT:-5601}:5601
    healthcheck:
      test: [ 'CMD-SHELL', 'curl -s http://localhost:5601 >/dev/null || exit 1' ]
      interval: 30s
      timeout: 10s
      retries: 3

  # unstructured .
  # (if used, you need to set ETL_TYPE to Unstructured in the api & worker service.)
  unstructured:
    image: downloads.unstructured.io/unstructured-io/unstructured-api:latest
    profiles:
      - unstructured
    restart: always
    environment:
      HTTP_PROXY: http://myproxy.co.jp:8888
      HTTPS_PROXY: http://myproxy.co.jp:8888
      NO_PROXY: localhost,127.0.0.1,10.10.10.10,weaviate,qdrant,db,redis,web,worker,plugin_daemon
    volumes:
      - ./volumes/unstructured:/app/data

networks:
  # create a network between sandbox, api and ssrf_proxy, which cannot access the outside.
  ssrf_proxy_network:
    driver: bridge
    internal: true
  milvus:
    driver: bridge
  opensearch-net:
    driver: bridge
    internal: true

volumes:
  oradata:
  dify_es01_data: