
🐢Dify | Install

Posted at 2024-06-07


Dify

Dify is an open-source GUI platform for building AI applications. This time, I use Dify to quickly prototype and test ideas before developing them in LangChain.

Local installation

For an easy installation, simply use the default settings of Dify.

Requirements

Ensure your machine meets these minimum system requirements and that Docker and Docker Compose are installed (you can verify this with the commands after the list):

  • CPU >= 2 Core
  • RAM >= 4GB
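
If you want to double-check these requirements, the commands below should work on most Linux machines (on macOS, use sysctl -n hw.ncpu and sysctl hw.memsize instead of nproc and free):

# Check that Docker and Docker Compose are installed
docker --version
docker compose version

# Check CPU core count and available memory (Linux)
nproc
free -h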

Step 1: Clone Dify

Clone the Dify repository to your machine:

git clone https://github.com/langgenius/dify.git

Actually, you only need the dify/docker directory from the repository.
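
If you would rather not download the whole repository, a sparse checkout is one option. This is a minimal sketch assuming Git 2.25 or newer (which introduced git sparse-checkout):

# Clone without file contents, then materialize only the docker directory
git clone --depth 1 --filter=blob:none --sparse https://github.com/langgenius/dify.git
cd dify
git sparse-checkout set docker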

Step 2: Start Dify

Navigate to the docker directory and run the following command to start Dify:

cd dify/docker
docker compose up -d

Results:

[+] Running 11/11
  Network docker_default             Created    0.0s
  Network docker_ssrf_proxy_network  Created    0.0s
  Container docker-ssrf_proxy-1      Started    1.7s
  Container docker-db-1              Started    1.6s
  Container docker-weaviate-1        Started    1.6s
  Container docker-redis-1           Started    1.7s
  Container docker-sandbox-1         Started    1.7s
  Container docker-web-1             Started    1.7s
  Container docker-api-1             Started    2.3s
  Container docker-worker-1          Started    2.2s
  Container docker-nginx-1           Started    2.8s
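
To stop or restart the stack later, the standard Compose commands apply (data in named volumes is kept across restarts):

# Stop and remove the containers (named volumes are preserved)
docker compose down

# Start everything again in the background
docker compose up -d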

Step 3: Verify container status

Run the following command to check if all containers are running successfully:

docker compose ps

Results:

NAME                  IMAGE                              COMMAND                   SERVICE      CREATED         STATUS                   PORTS
docker-api-1          langgenius/dify-api:0.6.10         "/bin/bash /entrypoi…"   api          5 minutes ago   Up 5 minutes             5001/tcp
docker-db-1           postgres:15-alpine                 "docker-entrypoint.s…"   db           5 minutes ago   Up 5 minutes (healthy)   5432/tcp
docker-nginx-1        nginx:latest                       "/docker-entrypoint.…"   nginx        5 minutes ago   Up 5 minutes             0.0.0.0:80->80/tcp
docker-redis-1        redis:6-alpine                     "docker-entrypoint.s…"   redis        5 minutes ago   Up 5 minutes (healthy)   6379/tcp
docker-sandbox-1      langgenius/dify-sandbox:0.2.1      "/main"                   sandbox      5 minutes ago   Up 5 minutes
docker-ssrf_proxy-1   ubuntu/squid:latest                "entrypoint.sh -f /e…"   ssrf_proxy   5 minutes ago   Up 5 minutes             3128/tcp
docker-weaviate-1     semitechnologies/weaviate:1.19.0   "/bin/weaviate --hos…"   weaviate     5 minutes ago   Up 5 minutes
docker-web-1          langgenius/dify-web:0.6.10         "/bin/sh ./entrypoin…"   web          5 minutes ago   Up 5 minutes             3000/tcp
docker-worker-1       langgenius/dify-api:0.6.10         "/bin/bash /entrypoi…"   worker       5 minutes ago   Up 5 minutes             5001/tcp
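
If any container is not shown as Up, its logs are the first place to look. For example, to tail the api service:

# Follow the logs of one service to diagnose startup failures
docker compose logs -f api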

Access Dify

Access http://localhost/install to use Dify.
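
If the page does not load, you can confirm that the nginx container is answering on port 80:

# Expect an HTTP response header from the Dify nginx container
curl -I http://localhost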

Since I have used Dify before, it displays the login screen. If this is your first run, you will need to create an account.

Your workspace will look like this:
(screenshot)

Set up the model

Click your avatar in the top-right corner, then select Settings > Model Provider.

Using Ollama

  1. Visit https://www.ollama.com/ and download the Ollama client for your system.
  2. Run Ollama. I will use Llama2, but you can choose any model that suits your needs (visit Ollama models for more details):
ollama run llama2
  3. Verify that Ollama is running at http://localhost:11434 (see the curl sketch after this list).

  4. Integrate Ollama in Dify:

    • Scroll down and choose Ollama
    • Fill in:
      • Model Name: llama2
      • Base URL: http://<your-ollama-endpoint-domain>:11434
        Enter the base URL where the Ollama service is accessible. Since I run Dify from Docker, I set it to http://host.docker.internal:11434.
      • Model Type: Chat
      • Model Context Length: 4096
        The maximum context length of the model. If unsure, use the default value of 4096.
      • Maximum Token Limit: 4096
        The maximum number of tokens returned by the model. If there are no specific requirements for the model, this can be consistent with the model context length.
      • Support for Vision: Yes
        Check this option if the model supports image understanding (multimodal), such as LLaVA.
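
As a quick sanity check (step 3 above), you can also hit the Ollama HTTP API directly: the root endpoint returns a short status message, and /api/generate runs a one-off prompt. A minimal sketch, assuming Ollama's default port 11434:

# Root endpoint: should reply "Ollama is running"
curl http://localhost:11434

# One-off, non-streaming generation against the llama2 model
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'

Note that from inside the Dify containers, localhost points at the container itself; that is why the Base URL above uses http://host.docker.internal:11434 instead.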

Using OpenAI

I am currently using OpenAI to create prototypes and run tests.

Choose OpenAI and fill in the form; just the API key is enough.
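
If you want to confirm the key works before wiring it into Dify, a direct call to the OpenAI API is enough. A minimal sketch, assuming your key is exported as OPENAI_API_KEY:

# List the models available to this key; a 401 error means the key is invalid
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"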

And you're all set.

