[For Beginners] Learning LangGraph from the Basics to Call-Count Limits with Conditional Edges


I had been using LangGraph by working from code samples; this time I tried to understand it properly from the basics.
The code here is based on the Graph API examples in the Quickstart.

Code collection

The code is extended little by little.

Prerequisites

pyproject.toml
requires-python = ">=3.12"
dependencies = [
    "grandalf>=0.8",
    "jupyterlab>=4.4.10",
    "langchain[openai]>=1.0.7",
    "langgraph>=1.0.3",
    "langgraph-cli[inmem]>=0.4.7",
    "python-dotenv>=1.2.1",
]

Basics

The most basic pattern: a simple graph that doesn't even use an LLM.

from langgraph.graph import StateGraph, MessagesState, START, END

def mock_llm(state: MessagesState):
    return {"messages": [{"role": "ai", "content": "hello world"}]}

graph = StateGraph(MessagesState)
graph.add_node(mock_llm)
graph.add_edge(START, "mock_llm")
graph.add_edge("mock_llm", END)
graph = graph.compile()
graph.invoke({"messages": [{"role": "user", "content": "hi!"}]})

Result

Terminal output
{'messages': 
  [HumanMessage(content='hi!', additional_kwargs={}, response_metadata={}, id='394d2aa4-0721-4261-ad2a-ff9f902e2dea'),
   AIMessage(content='hello world', additional_kwargs={}, response_metadata={}, id='2af182ea-2851-4b25-8f8f-3d31595513c4')]}
Printing the graph structure
graph.get_graph().print_ascii()
Result
+-----------+  
| __start__ |  
+-----------+  
      *        
      *        
      *        
+----------+   
| mock_llm |   
+----------+   
      *        
      *        
      *        
 +---------+   
 | __end__ |   
 +---------+   

MessagesState accumulates messages, so two messages appear in the output. The add_messages part of the library's standard code below is what provides this "accumulation".

message.py
class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
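As a quick aside (not in the Quickstart), the reducer can be called by hand to see this behavior: LangGraph passes the current value and the node's return value to the reducer and stores the result.

# Minimal sketch: invoking the add_messages reducer directly.
from langgraph.graph.message import add_messages

current = [{"role": "user", "content": "hi!"}]
update = [{"role": "ai", "content": "hello world"}]
merged = add_messages(current, update)
print(len(merged))  # 2 -- the update is appended, not substituted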

Trying the same thing with an overwrite-style State:

from typing import List, TypedDict
from typing_extensions import Annotated

def overwrite(_, new):
    return new

class OverWriteState(TypedDict):
    messages: Annotated[List[str], overwrite]  # replace with the new value instead of appending


def mock_llm(state: OverWriteState):
    return {"messages": [{"role": "ai", "content": "hello world"}]}

graph = StateGraph(OverWriteState)
graph.add_node(mock_llm)
graph.add_edge(START, "mock_llm")
graph.add_edge("mock_llm", END)
graph = graph.compile()
graph.invoke({"messages": [{"role": "user", "content": "hi!"}]})

Only the return value of the mock_llm function comes out (the initial message was overwritten).

{'messages': [{'role': 'ai', 'content': 'hello world'}]}
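Incidentally, a state key with no reducer annotation at all is also overwritten by default, so the explicit overwrite reducer above mainly makes that behavior visible. A minimal sketch of the equivalent plain declaration (my assumption; it behaves the same way):

from typing import List, TypedDict

class PlainState(TypedDict):
    messages: List[str]  # no reducer annotation: last write wins by default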

Calling Azure OpenAI

This time we call an Azure OpenAI model.
Define the environment variables in .env beforehand.

.env
AZURE_OPENAI_API_KEY=<Key>
AZURE_OPENAI_ENDPOINT=https://<resorce>.openai.azure.com/
OPENAI_API_VERSION=2025-03-01-preview
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-5-nano

Imports and loading environment variables

import os

from dotenv import load_dotenv
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain.messages import HumanMessage, SystemMessage, ToolMessage

load_dotenv(override=True)
Initialize and test the chat model
model = init_chat_model(
    "azure_openai:gpt-5-nano",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)
response = model.invoke("こんにちは")
print(response)
Output (slightly reformatted)
content='こんにちは!今日はどうされましたか?何かお手伝いできることがあれば教えてください。日本語の練習、質問への回答、翻訳・要約、レシピ、旅行の計画、技術的な疑問など、いろいろ対応します。話したい話題はありますか?' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 468, (rest omitted)
LLM call function
def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""

    return {
        "messages": [
            model.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ]
    }

Now the graph definition. This time the result is printed with the pretty_print method (available because the message classes inherit from langchain_core.messages).

Graph
graph = StateGraph(MessagesState)
graph.add_node(llm_call)
graph.add_edge(START, "llm_call")
graph.add_edge("llm_call", END)
graph = graph.compile()
messages = graph.invoke({"messages": [HumanMessage(content="おはよう")]})
for m in messages["messages"]:
    m.pretty_print()
Execution result
================================ Human Message =================================

おはよう
================================== Ai Message ==================================

おはようございます!

Calling Tools

This time the whole code is written out. There are two changes from the previous version, listed below. In this state the tool cannot be called multiple times, so a calculation like X+X+X cannot be handled (that is addressed in the next section).

  • Tool definition and invocation
  • An LLM call after the tool call
import os

from dotenv import load_dotenv
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain.messages import HumanMessage, SystemMessage, ToolMessage
from langchain.tools import tool

load_dotenv(override=True)

model = init_chat_model(
    "azure_openai:gpt-5-nano",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)
response = model.invoke("こんにちは")
print(response)

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

# Augment the LLM with tools
tools = [add, multiply]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)

def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""

    return {
        "messages": [
            model_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ]
    }

def tool_node(state: MessagesState):
    """Performs the tool call"""

    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

graph = StateGraph(MessagesState)
graph.add_node("llm_call1", llm_call)
graph.add_node("llm_call2", llm_call)
graph.add_node("tool_node", tool_node)
graph.add_edge(START, "llm_call1")
graph.add_edge("llm_call1", "tool_node")
graph.add_edge("tool_node", "llm_call2")
graph.add_edge("llm_call2", END)
graph = graph.compile()
messages = graph.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]})
for m in messages["messages"]:
    m.pretty_print()

The result of running graph.get_graph().print_ascii():

Graph structure
+-----------+  
| __start__ |  
+-----------+  
      *        
      *        
      *        
+-----------+  
| llm_call1 |  
+-----------+  
      *        
      *        
      *        
+-----------+  
| tool_node |  
+-----------+  
      *        
      *        
      *        
+-----------+  
| llm_call2 |  
+-----------+  
      *        
      *        
      *        
 +---------+   
 | __end__ |   
 +---------+   

One slightly confusing part is the contents of the tools_by_name variable: it maps each tool name to the tool's definition and its underlying function.

tools_by_name
{'add': StructuredTool(name='add', description='Adds `a` and `b`.\n\n    Args:\n        a: First int\n        b: Second int', args_schema=<class 'langchain_core.utils.pydantic.add'>, func=<function add at 0x7f8fdc123600>),
 'multiply': StructuredTool(name='multiply', description='Multiply `a` and `b`.\n\n    Args:\n        a: First int\n        b: Second int', args_schema=<class 'langchain_core.utils.pydantic.multiply'>, func=<function multiply at 0x7f8fdc123100>)}
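Each value is a StructuredTool, so it can be invoked directly with an argument dict, which is exactly what tool_node does with tool_call["args"]. A quick check (not in the original code):

print(tools_by_name["add"].invoke({"a": 3, "b": 4}))       # 7
print(tools_by_name["multiply"].invoke({"a": 3, "b": 4}))  # 12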

The code diff from the previous LLM-call version:

 from dotenv import load_dotenv
 from langchain.chat_models import init_chat_model
 from langgraph.graph import StateGraph, MessagesState, START, END
-from langchain.messages import SystemMessage, HumanMessage
+from langchain.messages import HumanMessage, SystemMessage, ToolMessage
+from langchain.tools import tool
 
 load_dotenv(override=True)
 
@@ -14,12 +15,38 @@
 response = model.invoke("こんにちは")
 print(response)
 
+@tool
+def multiply(a: int, b: int) -> int:
+    """Multiply `a` and `b`.
+
+    Args:
+        a: First int
+        b: Second int
+    """
+    return a * b
+
+
+@tool
+def add(a: int, b: int) -> int:
+    """Adds `a` and `b`.
+
+    Args:
+        a: First int
+        b: Second int
+    """
+    return a + b
+
+# Augment the LLM with tools
+tools = [add, multiply]
+tools_by_name = {tool.name: tool for tool in tools}
+model_with_tools = model.bind_tools(tools)
+
 def llm_call(state: MessagesState):
     """LLM decides whether to call a tool or not"""
 
     return {
         "messages": [
-            model.invoke(
+            model_with_tools.invoke(
                 [
                     SystemMessage(
                         content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
@@ -30,11 +57,25 @@
         ]
     }
 
+def tool_node(state: MessagesState):
+    """Performs the tool call"""
+
+    result = []
+    for tool_call in state["messages"][-1].tool_calls:
+        tool = tools_by_name[tool_call["name"]]
+        observation = tool.invoke(tool_call["args"])
+        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
+    return {"messages": result}
+
 graph = StateGraph(MessagesState)
-graph.add_node(llm_call)
-graph.add_edge(START, "llm_call")
-graph.add_edge("llm_call", END)
+graph.add_node("llm_call1", llm_call)
+graph.add_node("llm_call2", llm_call)
+graph.add_node("tool_node", tool_node)
+graph.add_edge(START, "llm_call1")
+graph.add_edge("llm_call1", "tool_node")
+graph.add_edge("tool_node", "llm_call2")
+graph.add_edge("llm_call2", END)
 graph = graph.compile()
-messages = graph.invoke({"messages": [HumanMessage(content="おはよう")]})
+messages = graph.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]})
 for m in messages["messages"]:
     m.pretty_print()

Conditional edges

This time, a conditional edge is added to the previous code so that a single LLM-call node decides whether to call a tool or finish; the should_continue function makes that decision. Because this allows the LLM call and the tool call to repeat, calculations like X+X+X can now be handled.

import os
from typing import Literal

from dotenv import load_dotenv
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain.messages import HumanMessage, SystemMessage, ToolMessage
from langchain.tools import tool

load_dotenv(override=True)

model = init_chat_model(
    "azure_openai:gpt-5-nano",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)
response = model.invoke("こんにちは")
print(response)

@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

# Augment the LLM with tools
tools = [add, multiply]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)

def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""

    return {
        "messages": [
            model_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ]
    }

def tool_node(state: MessagesState):
    """Performs the tool call"""

    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""

    messages = state["messages"]
    last_message = messages[-1]

    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "tool_node"

    # Otherwise, we stop (reply to the user)
    return END

graph = StateGraph(MessagesState)
graph.add_node("llm_call", llm_call)
graph.add_node("tool_node", tool_node)
graph.add_edge(START, "llm_call")
graph.add_conditional_edges(
    "llm_call",
    should_continue,
    ["tool_node", END]
)
graph.add_edge("tool_node", "llm_call")
graph = graph.compile()
messages = graph.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]})
for m in messages["messages"]:
    m.pretty_print()

The ASCII art is hard to read, so output the graph structure with Mermaid instead. The two dotted lines coming out of llm_call are the conditional edges.

from IPython.display import Image, display
display(Image(graph.get_graph(xray=True).draw_mermaid_png()))

(Image: Mermaid diagram of the graph; llm_call has dotted conditional edges to tool_node and __end__)
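Outside a notebook, the Mermaid source itself can also be printed as text with the same Graph object:

print(graph.get_graph(xray=True).draw_mermaid())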

The diff from the previous version:

 import os
+from typing import Literal
 
 from dotenv import load_dotenv
 from langchain.chat_models import init_chat_model
@@ -67,15 +68,30 @@
         result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
     return {"messages": result}
 
+def should_continue(state: MessagesState) -> Literal["tool_node", END]:
+    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
+
+    messages = state["messages"]
+    last_message = messages[-1]
+
+    # If the LLM makes a tool call, then perform an action
+    if last_message.tool_calls:
+        return "tool_node"
+
+    # Otherwise, we stop (reply to the user)
+    return END
+
 graph = StateGraph(MessagesState)
-graph.add_node("llm_call1", llm_call)
-graph.add_node("llm_call2", llm_call)
+graph.add_node("llm_call", llm_call)
 graph.add_node("tool_node", tool_node)
-graph.add_edge(START, "llm_call1")
-graph.add_edge("llm_call1", "tool_node")
-graph.add_edge("tool_node", "llm_call2")
-graph.add_edge("llm_call2", END)
+graph.add_edge(START, "llm_call")
+graph.add_conditional_edges(
+    "llm_call",
+    should_continue,
+    ["tool_node", END]
+)
+graph.add_edge("tool_node", "llm_call")
 graph = graph.compile()
 messages = graph.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]})
 for m in messages["messages"]:
     m.pretty_print()

Limiting the number of LLM calls with a counter in state

The LLM call count is kept in the graph state to limit the number of calls. In the custom MessagesState below, messages uses operator.add as its reducer (plain list concatenation), while llm_calls has no reducer, so each node's return value simply overwrites it.

import operator
import os
from typing import Literal
from typing_extensions import TypedDict, Annotated

from dotenv import load_dotenv
from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, START, END
from langchain.messages import AnyMessage, HumanMessage, SystemMessage, ToolMessage
from langchain.tools import tool

load_dotenv(override=True)

MAX_RETRY: int = 3

class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]
    llm_calls: int


model = init_chat_model(
    "azure_openai:gpt-5-nano",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)
response = model.invoke("こんにちは")
print(response)

@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

# Augment the LLM with tools
tools = [add, multiply]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)

def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""

    result = { "llm_calls": state.get('llm_calls', 0) + 1 }
    if result["llm_calls"] >= MAX_RETRY:
        # Using a SystemMessage for now
        result["messages"] = [SystemMessage(content="規定回数に達したため終了")]
    else:
        result["messages"] = [
                model_with_tools.invoke(
                    [
                        SystemMessage(
                            content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                        )
                    ]
                    + state["messages"]
                )
        ]
    return result

def tool_node(state: MessagesState):
    """Performs the tool call"""

    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""

    # Force-stop once the LLM has been called the maximum number of times
    if state["llm_calls"] >= MAX_RETRY:
        return END

    messages = state["messages"]
    last_message = messages[-1]

    # If the LLM makes a tool call, then perform an action
    try:
        if last_message.tool_calls:
            return "tool_node"
    except AttributeError:
        pass  # the last message has no tool_calls attribute (the limit-reached SystemMessage), so stop

    # Otherwise, we stop (reply to the user)
    return END

graph = StateGraph(MessagesState)
graph.add_node("llm_call", llm_call)
graph.add_node("tool_node", tool_node)
graph.add_edge(START, "llm_call")
graph.add_conditional_edges(
    "llm_call",
    should_continue,
    ["tool_node", END]
)
graph.add_edge("tool_node", "llm_call")
graph = graph.compile()
messages = graph.invoke({"messages": [HumanMessage(content="1 * 3 + 1 * 2 + 1は?Toolを必ず使って")]})
for m in messages["messages"]:
    m.pretty_print()

The run is cut off at the end because the call limit was reached.

Execution result
================================ Human Message =================================

1 * 3 + 1 * 2 + 1は?Toolを必ず使って
================================== Ai Message ==================================
Tool Calls:
  multiply (call_SWm4mtRIGGPWF9eVTnfA7wrw)
 Call ID: call_SWm4mtRIGGPWF9eVTnfA7wrw
  Args:
    a: 1
    b: 3
  multiply (call_Sb9v0BIEbX1mYy2mxojTambV)
 Call ID: call_Sb9v0BIEbX1mYy2mxojTambV
  Args:
    a: 1
    b: 2
================================= Tool Message =================================

3
================================= Tool Message =================================

2
================================== Ai Message ==================================
Tool Calls:
  add (call_qDtvy1Z9nfhqDvCQT7idb3wr)
 Call ID: call_qDtvy1Z9nfhqDvCQT7idb3wr
  Args:
    a: 3
    b: 2
================================= Tool Message =================================

5
================================ System Message ================================

規定回数に達したため終了
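As an aside not covered here: compiled graphs also accept a recursion_limit in the run config, which caps the number of super-steps and raises GraphRecursionError when exceeded, rather than ending gracefully the way the state counter does. A minimal sketch:

# Alternative cap using LangGraph's built-in recursion limit.
from langgraph.errors import GraphRecursionError

try:
    graph.invoke(
        {"messages": [HumanMessage(content="1 * 3 + 1 * 2 + 1は?Toolを必ず使って")]},
        config={"recursion_limit": 6},  # max super-steps (node executions) per run
    )
except GraphRecursionError:
    print("Stopped: recursion limit reached")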

The code diff from the previous version:

+import operator
 import os
 from typing import Literal
+from typing_extensions import TypedDict, Annotated
 
 from dotenv import load_dotenv
 from langchain.chat_models import init_chat_model
-from langgraph.graph import StateGraph, MessagesState, START, END
-from langchain.messages import HumanMessage, SystemMessage, ToolMessage
+from langgraph.graph import StateGraph, START, END
+from langchain.messages import AnyMessage, HumanMessage, SystemMessage, ToolMessage
 from langchain.tools import tool
 
 load_dotenv(override=True)
 
+MAX_RETRY: int = 3
+
+class MessagesState(TypedDict):
+    messages: Annotated[list[AnyMessage], operator.add]
+    llm_calls: int
+
+
 model = init_chat_model(
     "azure_openai:gpt-5-nano",
     azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
@@ -45,18 +54,23 @@
 def llm_call(state: MessagesState):
     """LLM decides whether to call a tool or not"""
 
-    return {
-        "messages": [
-            model_with_tools.invoke(
-                [
-                    SystemMessage(
-                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
-                    )
-                ]
-                + state["messages"]
-            )
+    result = { "llm_calls": state.get('llm_calls', 0) + 1 }
+
+    if result["llm_calls"] >= MAX_RETRY:
+        # Using a SystemMessage for now
+        result["messages"] = [SystemMessage(content="規定回数に達したため終了")]
+    else:
+        result["messages"] = [
+                model_with_tools.invoke(
+                    [
+                        SystemMessage(
+                            content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
+                        )
+                    ]
+                    + state["messages"]
+                )
         ]
-    }
+    return result
 
 def tool_node(state: MessagesState):
     """Performs the tool call"""
@@ -71,12 +85,19 @@
 def should_continue(state: MessagesState) -> Literal["tool_node", END]:
     """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
 
+    # Force-stop once the LLM has been called the maximum number of times
+    if state["llm_calls"] >= MAX_RETRY:
+        return END
+
     messages = state["messages"]
     last_message = messages[-1]
 
     # If the LLM makes a tool call, then perform an action
-    if last_message.tool_calls:
-        return "tool_node"
+    try:
+        if last_message.tool_calls:
+            return "tool_node"
+    except AttributeError:
+        pass  # the last message has no tool_calls attribute, so stop
 
     # Otherwise, we stop (reply to the user)
     return END
@@ -92,6 +113,6 @@
 )
 graph.add_edge("tool_node", "llm_call")
 graph = graph.compile()
-messages = graph.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]})
+messages = graph.invoke({"messages": [HumanMessage(content="1 * 3 + 1 * 2 + 1は?Toolを必ず使って")]})
 for m in messages["messages"]:
     m.pretty_print()

LangSmith Studio

With LangSmith Studio you can inspect the memory state and more from a UI.

Add the following to .env. Unverified, but LANGSMITH_TRACING may not be needed.

.env
LANGSMITH_API_KEY=<key>
LANGSMITH_TRACING=true

Define langgraph.json. The graphs entry maps a graph name to <file>:<variable>; graph here is the compiled graph variable in the script.

langgraph.json
{
  "$schema": "https://langgra.ph/schema.json",
  "dependencies": ["."],
  "graphs": {
    "agent": "./<対象Python>.py:graph"
  },
  "env": ".env",
  "image_distro": "wolfi"
}

Run langgraph dev in the terminal. The public LangSmith site then apparently connects back to this local server.

$ langgraph dev
INFO:langgraph_api.cli:

        Welcome to

╦  ┌─┐┌┐┌┌─┐╔═╗┬─┐┌─┐┌─┐┬ ┬
║  ├─┤││││ ┬║ ╦├┬┘├─┤├─┘├─┤
╩═╝┴ ┴┘└┘└─┘╚═╝┴└─┴ ┴┴  ┴ ┴

- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs

This in-memory server is designed for development and testing.
For production use, please use LangSmith Deployment.

The UI looks like this; node transitions and the memory state are shown on the right.

(Image: Studio UI with the graph run; node transitions and memory state on the right)

It wasn't obvious at first how to provide input. Pressing the View Raw button switches the form to JSON, which makes it easy; enter the input and click "Submit".
(Image: Studio input form in View Raw mode)
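For reference, the raw JSON input for this graph presumably mirrors the invoke payload, e.g.:

{"messages": [{"role": "user", "content": "1 * 3 + 1 * 2 + 1は?Toolを必ず使って"}]}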
