Background
Code Interpreter was released as a ChatGPT feature that can run processing against uploaded files. However, as other articles have noted, the hosted sandbox has many server-side restrictions, so I looked for a way to run the same functionality locally. I found a LangChain-based package that someone had published, so I gave it a try.
Conclusion
The run itself failed: `gpt-4` could not be used. I initially suspected some LangChain-side restriction, but the error at the end of the traceback reports that the API key does not have access to `gpt-4`.
If anyone knows a fix, please let me know!
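One quick way to diagnose this is to check whether the API key can actually see `gpt-4`. Below is a minimal sketch: the parsing helper is mine, not part of codeinterpreter-api, and it expects the JSON body returned by OpenAI's `GET /v1/models` endpoint.

```python
import json

def gpt4_accessible(models_json: str) -> bool:
    """Given the JSON body of OpenAI's GET /v1/models endpoint,
    report whether this API key can see the `gpt-4` model."""
    data = json.loads(models_json)
    return any(m["id"] == "gpt-4" for m in data.get("data", []))

# Trimmed, hypothetical response for a key without gpt-4 access:
sample = '{"data": [{"id": "gpt-3.5-turbo"}, {"id": "text-davinci-003"}]}'
print(gpt4_accessible(sample))  # → False
```

If this returns False for your key's real model list, the failure below is an account-access issue rather than anything in LangChain.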
Overview
This Jupyter notebook demonstrates how to use the codeinterpreter-api package.
The official blog is also available for reference.
Key points:
- Use the os library to set the OpenAI API key as an environment variable.
- Use CodeInterpreterSession to send the AI an instruction to "plot the relative 2023 performance of the tech giants (Apple, Google, Microsoft, Amazon)".
- Print the AI's response and display any image files it returns.
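The "normalized plot" the prompt asks for simply rescales each price series to its starting value, so percentage changes become directly comparable. A minimal sketch of that computation (the prices below are made-up sample data, not the AI's output):

```python
def normalize(prices):
    """Rescale a price series so it starts at 100; each value then
    reads directly as a percentage of the first day's price."""
    base = prices[0]
    return [100.0 * p / base for p in prices]

# Hypothetical closing prices over five trading days
aapl = [130.0, 133.9, 127.4, 136.5, 141.7]
msft = [240.0, 244.8, 237.6, 252.0, 261.6]

# Both series now start at 100, so their endpoints compare directly
print(round(normalize(aapl)[-1], 1))  # final AAPL value in % of day 1
print(round(normalize(msft)[-1], 1))
```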
Environment
Colab (as of 2023-08-10)
Program
!pip install "codeinterpreterapi[all]"

import os
os.environ["OPENAI_API_KEY"] = "sk-XXXc"
os.environ["VERBOSE"] = "True"

from codeinterpreterapi import CodeInterpreterSession

async with CodeInterpreterSession() as session:
    response = await session.generate_response(
        """
        Plot the relative performance of tech giants
        (Apple, Google, Microsoft, Amazon) in 2023.
        Use a normalized plot so we can easily compare
        the percentage changes of each stock's price over the period.
        """
    )
    print("AI: ", response.content)
    for file in response.files:
        file.show_image()
INFO: Using a LocalBox which is not fully isolated
and not scalable across multiple users.
Make sure to use a CODEBOX_API_KEY in production.
Set envar SHOW_INFO=False to not see this again.
Starting kernel...
Waiting for kernel to start...
DEPRECATION WARNING: Use agenerate_response for async generation.
This function will be converted to sync in the future.
You can use generate_response_sync for now.
> Entering new AgentExecutor chain...
AI: Sorry, something went while generating your response.Please try again or restart the session.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/codeinterpreterapi/session.py", line 437, in agenerate_response
response = await self.agent_executor.arun(input=user_request.content)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 517, in arun
await self.acall(
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 310, in acall
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 304, in acall
await self._acall(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py", line 1078, in _acall
next_step_output = await self._atake_next_step(
File "/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py", line 925, in _atake_next_step
output = await self.agent.aplan(
File "/usr/local/lib/python3.10/dist-packages/codeinterpreterapi/agents/functions_agent.py", line 253, in aplan
predicted_message = await self.llm.apredict_messages(
File "/usr/local/lib/python3.10/dist-packages/langchain/chat_models/base.py", line 429, in apredict_messages
return await self._call_async(messages, stop=_stop, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chat_models/base.py", line 369, in _call_async
result = await self.agenerate(
File "/usr/local/lib/python3.10/dist-packages/langchain/chat_models/base.py", line 205, in agenerate
raise exceptions[0]
File "/usr/local/lib/python3.10/dist-packages/langchain/chat_models/base.py", line 306, in _agenerate_with_cache
return await self._agenerate(
File "/usr/local/lib/python3.10/dist-packages/langchain/chat_models/openai.py", line 416, in _agenerate
response = await acompletion_with_retry(
File "/usr/local/lib/python3.10/dist-packages/langchain/chat_models/openai.py", line 95, in acompletion_with_retry
return await _completion_with_retry(**kwargs)
File "/usr/local/lib/python3.10/dist-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/chat_models/openai.py", line 93, in _completion_with_retry
return await llm.client.acreate(**kwargs)
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/chat_completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 382, in arequest
resp, got_stream = await self._interpret_async_response(result, stream)
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 726, in _interpret_async_response
self._interpret_response_line(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.
Notes
Looking at the package's sample code, it appears you can include an http URL in the prompt and have the interpreter fetch the data itself, so this seems more generalizable than ChatGPT's built-in Code Interpreter.
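As a concrete illustration, the data URL can be embedded directly in the instruction text. A minimal sketch of composing such a prompt (the helper and URL are hypothetical, not from the package):

```python
def build_prompt(data_url: str, task: str) -> str:
    """Compose an instruction that tells the interpreter where to
    fetch its data before doing the requested analysis."""
    return f"Download the CSV at {data_url} and then {task}"

# Hypothetical dataset URL; the result would be passed to
# session.generate_response() in the program above.
prompt = build_prompt(
    "https://example.com/stock_prices.csv",
    "plot each stock's closing price, normalized to its first value.",
)
print(prompt)
```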