Background
Lately, everyone around me seems to treat LangGraph as *the* way to build AI agents, so I decided to try it out.
I'll work through the Quick Start.
It consists of Parts 1-7.
Part1
Build a very simple chat flow.
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
import os

load_dotenv()
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']

class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list rather than overwriting them)
    messages: Annotated[list, add_messages]

# Define the graph builder
graph_builder = StateGraph(State)

llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, verbose=True)

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# Define the graph:
# nodes (agents) connected by edges (execution order)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()

def stream_graph_updates(user_input: str, graph: StateGraph):
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

# Try actually chatting with it
while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input, graph)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input, graph)
        break
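The `Annotated[list, add_messages]` line is the key idea here: the second argument is a reducer that tells LangGraph how to merge each node's return value into the state. A minimal stdlib-only sketch of that concept (my own illustration, not LangGraph's actual implementation):

```python
# Stdlib-only sketch of the reducer idea behind `Annotated[list, add_messages]`.
# NOT LangGraph's real code -- it only illustrates why the annotation makes
# node updates append to the list instead of overwriting it.

def append_reducer(existing: list, update: list) -> list:
    # A reducer takes the current value and a node's partial update,
    # and returns the merged value. Here: append rather than replace.
    return existing + update

state = {"messages": []}
for node_update in (
    {"messages": [("user", "hi")]},
    {"messages": [("assistant", "hello!")]},
):
    state["messages"] = append_reducer(state["messages"], node_update["messages"])

print(state["messages"])  # both turns survive because the reducer appends
```

Without the reducer, each node's `{"messages": [...]}` return value would simply overwrite the previous list.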
Part2
Create a tool node and an edge pointing from the chatbot to it.
Around this point I noticed that the official site collects almost all of the tutorial code in one place, so I started reusing it.
from typing import Annotated
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from dotenv import load_dotenv
import os

load_dotenv()
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
TAVILY_API_KEY = os.environ['TAVILY_API_KEY']

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

# Define the tool
# Tavily: a web search engine designed for use by LLMs
tool = TavilySearchResults(max_results=2)
tools = [tool]

llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, verbose=True)
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

# Conditional edge: tools_condition routes to "tools" only when the latest
# message contains tool calls, and to END otherwise -- so not every question
# triggers a search.
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
# The START edge sets the entry point; an unconditional chatbot -> END edge
# is unnecessary because tools_condition already routes to END.
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()

# Conversation loop omitted (same as Part 1)
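The routing decision in `tools_condition` boils down to one check: did the model's latest message request a tool call? A hand-rolled sketch of that check over plain dicts (illustrative only; LangGraph's real `tools_condition` inspects message objects):

```python
# Stdlib-only sketch of what a `tools_condition`-style router decides.
# Not LangGraph's actual implementation.

END = "__end__"

def route_after_chatbot(state: dict) -> str:
    last = state["messages"][-1]
    # Route to the tool node only if the model actually requested a tool call;
    # plain answers go straight to END.
    return "tools" if last.get("tool_calls") else END

print(route_after_chatbot({"messages": [{"tool_calls": [{"name": "tavily"}]}]}))  # tools
print(route_after_chatbot({"messages": [{"content": "plain answer"}]}))           # __end__
```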
Sample conversation
(3.13.1) root@b5f1f59af63d:/home/volume/langgraph_demo# python add_tools.py
User: What do you know about LangGraph?
Assistant:
Assistant: [{"url": "https://langchain-ai.github.io/langgraph/", "content": "LangGraph is a framework for creating stateful, multi-actor applications with LLMs, using cycles, controllability, and persistence. Learn how to use LangGraph with LangChain, LangSmith, and Anthropic tools to build agent and multi-agent workflows."}, {"url": "https://www.datacamp.com/tutorial/langgraph-tutorial", "content": "LangGraph is a library within the LangChain ecosystem that simplifies the development of complex, multi-agent large language model (LLM) applications. Learn how to use LangGraph to create stateful, flexible, and scalable systems with nodes, edges, and state management."}]
Assistant: LangGraph is a framework for creating stateful, multi-actor applications with Large Language Models (LLMs). It utilizes cycles, controllability, and persistence. LangGraph can be used with LangChain, LangSmith, and Anthropic tools to build agent and multi-agent workflows. It simplifies the development of complex, multi-agent LLM applications by providing features for creating stateful, flexible, and scalable systems with nodes, edges, and state management. You can learn more about LangGraph by visiting the [LangGraph website](https://langchain-ai.github.io/langgraph/) or checking out tutorials like the one on [DataCamp](https://www.datacamp.com/tutorial/langgraph-tutorial).
User:
Part3
Add memory to the chatbot.
So it has no memory by default -- good to know.

from typing import Annotated
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from dotenv import load_dotenv
import os

load_dotenv()
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
TAVILY_API_KEY = os.environ['TAVILY_API_KEY']

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, verbose=True)
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

# Memory setup
# Here the whole conversation history is kept in memory (there are apparently
# Saver classes that persist to a database instead).
# You must specify the thread under which the state is checkpointed.
memory = MemorySaver()
config = {"configurable": {"thread_id": "1"}}
graph = graph_builder.compile(checkpointer=memory)

def stream_graph_updates(user_input: str, graph: StateGraph, config: dict):
    # The config is the **second positional argument** to stream() or invoke()!
    events = graph.stream(
        {"messages": [("user", user_input)]}, config, stream_mode="values"
    )
    for event in events:
        event["messages"][-1].pretty_print()

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input, graph, config)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input, graph, config)
        break
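The checkpointer plus `thread_id` pairing is what gives the bot memory: state is saved and restored per thread. A toy stdlib-only version of that idea (`MemorySaver` does considerably more, e.g. keeping a history of checkpoints):

```python
# Toy sketch of per-thread checkpointing -- the concept behind
# MemorySaver + {"configurable": {"thread_id": ...}}. Not LangGraph code.

class ToyMemorySaver:
    def __init__(self):
        self._checkpoints: dict[str, list] = {}

    def load(self, thread_id: str) -> list:
        return self._checkpoints.get(thread_id, [])

    def save(self, thread_id: str, messages: list) -> None:
        self._checkpoints[thread_id] = messages

saver = ToyMemorySaver()

# Turn 1 on thread "1"
history = saver.load("1") + [("user", "Hi there! My name is Will.")]
saver.save("1", history)

# A later turn on the same thread still sees turn 1; a different thread does not
print(len(saver.load("1")), len(saver.load("2")))  # 1 0
```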
User: Hi there! My name is Will.
================================ Human Message =================================
Hi there! My name is Will.
================================== Ai Message ==================================
Hello Will! How can I assist you today?
User: Remember my name?
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
Yes, I remember your name, Will. How can I help you today?
For reference: chat output before memory was added
User: Hi there! My name is Will.
Assistant: Hello Will! How can I assist you today?
User: Remember my name?
Assistant: Of course, I remember your name! How can I assist you today?
User: What is my name?
Assistant: I'm sorry, but I don't have access to your personal information, including your name. If there's anything else you'd like to know or discuss, feel free to ask!
Part4
Insert human input into the exchange with the tools.
The idea: if the tools alone would take things in a strange direction, let a human step in and correct course. The compile step takes an interrupt_before parameter, so add it.
Only the changed line is shown:
graph = graph_builder.compile(checkpointer=memory, interrupt_before=["tools"],)
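Operationally, `interrupt_before=["tools"]` means execution stops just before the named node runs, the state is checkpointed, and the graph waits to be resumed. A toy illustration of the pause (my own sketch, not LangGraph's runtime):

```python
# Toy sketch of "interrupt before a node": walk the nodes in order, but stop
# *before* executing any node listed in interrupt_before, returning control
# (and the current state) to the caller. Not LangGraph's actual runtime.

def run(nodes, state, interrupt_before=()):
    for name, fn in nodes:
        if name in interrupt_before:
            return state, name   # paused; `name` has NOT run yet
        state = fn(state)
    return state, None           # ran to completion

nodes = [
    ("chatbot", lambda s: s + ["ai: requesting a web search"]),
    ("tools",   lambda s: s + ["tool: search results"]),
]
state, paused_before = run(nodes, [], interrupt_before=["tools"])
print(paused_before, state)  # tools ['ai: requesting a web search']
```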
Conversation log
You can see it pausing for human input right at the point where the tool would be called.
User: I'm learning LangGraph. Could you do some research on it for me?
================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (call_y4gY7p8fj3x4wJCinKnXdwhO)
Call ID: call_y4gY7p8fj3x4wJCinKnXdwhO
Args:
query: LangGraph
User:
Part5
Updating state.
Stop the flow midway and rewrite the pending tool call so the answer comes from a different query.
from langchain_core.messages import AIMessage  # needed in addition to the earlier imports

def stream_graph_updates(user_input: str, graph: StateGraph, config: dict):
    # The config is the **second positional argument** to stream() or invoke()!
    events = graph.stream(
        {"messages": [("user", user_input)]}, config, stream_mode="values"
    )
    for event in events:
        event["messages"][-1].pretty_print()

snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
print("Original")
print("Message ID", existing_message.id)
print(existing_message.tool_calls[0])

# Build a new tool call and have it produce the answer instead
new_tool_call = existing_message.tool_calls[0].copy()
new_tool_call["args"]["query"] = "LangGraph human-in-the-loop workflow"
new_message = AIMessage(
    content=existing_message.content,
    tool_calls=[new_tool_call],
    # Important! The ID is how LangGraph knows to REPLACE the message in the state rather than APPEND this messages
    id=existing_message.id,
)
print("Updated")
print(new_message.tool_calls[0])
print("Message ID", new_message.id)

graph.update_state(config, {"messages": [new_message]})

print("\n\nTool calls")
print(graph.get_state(config).values["messages"][-1].tool_calls)

events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
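The `id=existing_message.id` line does the real work here: `add_messages` replaces an existing message when the incoming one carries the same id, and appends otherwise. A simplified stand-in showing that rule (not LangGraph's implementation):

```python
# Simplified append-vs-replace merge keyed on message id, mimicking how
# add_messages treats a state update. Not LangGraph's real code.

def merge_messages(existing: list[dict], updates: list[dict]) -> list[dict]:
    index_by_id = {m["id"]: i for i, m in enumerate(existing)}
    merged = list(existing)
    for msg in updates:
        if msg["id"] in index_by_id:
            merged[index_by_id[msg["id"]]] = msg   # same id -> replace in place
        else:
            merged.append(msg)                     # new id -> append
    return merged

history = [{"id": "run-1", "query": "LangGraph"}]
updated = merge_messages(
    history, [{"id": "run-1", "query": "LangGraph human-in-the-loop workflow"}]
)
print(updated)  # still one message, but with the rewritten query
```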
Conversation log
User: I'm learning LangGraph. Could you do some research on it for me?
================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (call_WPH0G00J2cY9iDQ2GhN0RLEK)
Call ID: call_WPH0G00J2cY9iDQ2GhN0RLEK
Args:
query: LangGraph
Original
Message ID run-bc7c07ff-7027-49a8-bc0b-b54e64e6efcb-0
{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'call_WPH0G00J2cY9iDQ2GhN0RLEK', 'type': 'tool_call'}
Updated
{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph human-in-the-loop workflow'}, 'id': 'call_WPH0G00J2cY9iDQ2GhN0RLEK', 'type': 'tool_call'}
Message ID run-bc7c07ff-7027-49a8-bc0b-b54e64e6efcb-0
Tool calls
[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph human-in-the-loop workflow'}, 'id': 'call_WPH0G00J2cY9iDQ2GhN0RLEK', 'type': 'tool_call'}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (call_WPH0G00J2cY9iDQ2GhN0RLEK)
Call ID: call_WPH0G00J2cY9iDQ2GhN0RLEK
Args:
query: LangGraph human-in-the-loop workflow
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://www.youtube.com/watch?v=9BPCV5TYPmg", "content": "In this video, I'll show you how to handle persistence with LangGraph, enabling a unique Human-in-the-Loop workflow. This approach allows a human to grant an"}, {"url": "https://medium.com/@kbdhunga/implementing-human-in-the-loop-with-langgraph-ccfde023385c", "content": "In this article, we will explore how the human-in-the-loop mechanism works in LangGraph with a simple example. In this example, we introduce a breakpoint before the tools are executed (action) to"}]
================================== Ai Message ==================================
I found some information on LangGraph's human-in-the-loop workflow. Here are a couple of resources you can check out:
1. [YouTube Video: Handling Persistence with LangGraph for Human-in-the-Loop Workflow](https://www.youtube.com/watch?v=9BPCV5TYPmg)
2. [Medium Article: Implementing Human-in-the-Loop with LangGraph](https://medium.com/@kbdhunga/implementing-human-in-the-loop-with-langgraph-ccfde023385c)
These resources should provide you with insights into how LangGraph incorporates human-in-the-loop workflows.
Part6
Customize the State.
Add an "ask a human" flag to the State so that questions requiring human expertise are escalated to a human.
Code
from typing import Annotated
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage, AIMessage, ToolMessage
# NOTE: you must use langchain-core >= 0.3 with Pydantic v2
from pydantic import BaseModel
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from dotenv import load_dotenv
import os
import traceback
import re
import sys
load_dotenv()
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
TAVILY_API_KEY = os.environ['TAVILY_API_KEY']
class State(TypedDict):
    messages: Annotated[list, add_messages]
    # This flag is new
    ask_human: bool

class RequestAssistance(BaseModel):
    """Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.

    To use this function, relay the user's 'request' so the expert can provide the right guidance.
    """

    request: str
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, verbose=True)
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])
def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    ask_human = False
    if (
        response.tool_calls
        and response.tool_calls[0]["name"] == RequestAssistance.__name__
    ):
        ask_human = True
    return {"messages": [response], "ask_human": ask_human}
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))
def create_response(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response,
        tool_call_id=ai_message.tool_calls[0]["id"],
    )

def human_node(state: State):
    new_messages = []
    if not isinstance(state["messages"][-1], ToolMessage):
        # Typically, the user will have updated the state during the interrupt.
        # If they choose not to, we will include a placeholder ToolMessage to
        # let the LLM continue.
        new_messages.append(
            create_response("No response from human.", state["messages"][-1])
        )
    return {
        # Append the new messages
        "messages": new_messages,
        # Unset the flag
        "ask_human": False,
    }
graph_builder.add_node("human", human_node)
def select_next_node(state: State):
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)

graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["human"],
)
config = {"configurable": {"thread_id": "1"}}
def stream_graph_updates(user_input: str = None, graph: StateGraph = None, config: dict = None):
    # The config is the **second positional argument** to stream() or invoke()!
    if user_input is None:
        # Resume from the interrupt without adding a new user message
        events = graph.stream(None, config, stream_mode="values")
    else:
        events = graph.stream(
            {"messages": [("user", user_input)]}, config, stream_mode="values"
        )
    for event in events:
        event["messages"][-1].pretty_print()
while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input, graph, config)
    except Exception as e:
        error_class = type(e)
        error_description = str(e)
        err_msg = '%s: %s' % (error_class, error_description)
        print(err_msg)
        tb = traceback.extract_tb(sys.exc_info()[2])
        trace = traceback.format_list(tb)
        print('---- traceback ----')
        for line in trace:
            if '~^~' in line:
                print(line.rstrip())
            else:
                text = re.sub(r'\n\s*', ' ', line.rstrip())
                print(text)
        print('-------------------')
        break
Conversation log
Questions related to LangGraph are escalated to the human; everything else goes through web search.
User: What is the recipe of Coke?
================================ Human Message =================================
What is the recipe of Coke?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (call_CFAcOKNhCO9c0WNUtXmDusYf)
Call ID: call_CFAcOKNhCO9c0WNUtXmDusYf
Args:
query: Coca-Cola recipe
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://www.thisamericanlife.org/427/original-recipe", "content": "A 1979 newspaper article claims to have the original formula for Coca-Cola, one of the most jealously guarded trade secrets in the world. The podcast investigates the authenticity of the recipe and makes a batch to see if it tastes like Coke."}, {"url": "https://www.livescience.com/33042-leaked-coca-cola-recipe-not-quite-real-thing.html", "content": "The media has blown up about Coca-Cola's famous secret recipe showing up in an Atlanta newspaper. Life's Little Mysteries confirms that this is the real Coke, but it's no big secret."}]
================================== Ai Message ==================================
The recipe for Coca-Cola is one of the most jealously guarded trade secrets in the world. There have been claims about the original formula being revealed, but its authenticity is still a topic of debate. If you are interested in learning more about this, you can check out the following links:
1. [Original Recipe - This American Life](https://www.thisamericanlife.org/427/original-recipe)
2. [Leaked Coca-Cola Recipe - Live Science](https://www.livescience.com/33042-leaked-coca-cola-recipe-not-quite-real-thing.html)
These sources provide insights into the history and speculation surrounding the Coca-Cola recipe.
User: I need some expert guidance for building this AI agent. Could you request assistance for me?
================================ Human Message =================================
I need some expert guidance for building this AI agent. Could you request assistance for me?
================================== Ai Message ==================================
Tool Calls:
RequestAssistance (call_y57pVssIuiZgVp2cjGxajR77)
Call ID: call_y57pVssIuiZgVp2cjGxajR77
Args:
request: I need expert guidance for building an AI agent.
User:
Part7
I'd like to come back to this at some point. It apparently lets you walk back through the interaction history and resume the conversation from an earlier point.
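As I understand it, LangGraph exposes this through its checkpoint history (e.g. `graph.get_state_history()`). Conceptually, every step is checkpointed, so you can grab an earlier checkpoint and fork the conversation from there. A toy stdlib-only version of that idea:

```python
# Toy sketch of "time travel": snapshot the state after every step, then
# resume from an earlier snapshot instead of the latest one. Concept only.

checkpoints = []

def step(state, user_msg, reply):
    state = state + [("user", user_msg), ("ai", reply)]
    checkpoints.append(state)   # checkpoint after each step
    return state

s = step([], "I'm learning LangGraph.", "Great topic!")
s = step(s, "Tell me about checkpoints.", "They snapshot the state...")

# Rewind: fork the conversation from the first checkpoint
forked = step(checkpoints[0], "Actually, tell me about agents.", "Sure!")
print(len(forked))  # 4 -- the fork never saw the second exchange
```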
Closing thoughts
Reflections
- I ran everything as plain Python files this time, but the tutorial is written assuming notebooks. I should have just followed that.
What I want to do next
- The rest of Part 7
- The langgraph server tutorial