Purpose
Turn a local CSV into an API as-is and search it from watsonx Orchestrate.
Environment
- Windows + WSL(Ubuntu)
- Python 3.12 (venv)
- VSCode Remote WSL
0. Working directory
mkdir -p ~/workspaces/wxo
cd ~/workspaces/wxo
1. Place the CSV
01_aquarium_species.csv
"tank_code","species_code","individual_code","species_name","spec_detail","nickname","care_level"
"T01","SP100","001","カクレクマノミ","海水/24℃/小型","ニモ","B"
"T01","SP120","002","ナンヨウハギ","海水/25℃/中型","ドリー","B"
"T02","SP200","001","ミズクラゲ","海水/20℃/浮遊","クララ","C"
"T02","SP210","002","アカクラゲ","海水/18℃/注意","レッド","D"
"T03","SP300","001","チンアナゴ","海水/26℃/砂地","ニョロ","B"
"T03","SP320","002","ガーデンイール","海水/26℃/群れ","ガーデン","B"
"T04","SP400","001","ピラニア","淡水/27℃/肉食","ピラ","D"
"T04","SP420","002","アロワナ","淡水/28℃/大型","ドラゴン","A"
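A minimal sketch (illustration only, using one row from the CSV above): csv.DictReader maps each quoted row onto the header names, which is what the loader script below relies on.

```python
import csv
import io

# DictReader uses the first row as keys, so every record becomes a dict
# keyed by column name. The inline sample stands in for the real file.
sample = '''"tank_code","species_code","individual_code","species_name","spec_detail","nickname","care_level"
"T03","SP300","001","チンアナゴ","海水/26℃/砂地","ニョロ","B"
'''
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["species_name"])  # チンアナゴ
print(rows[0]["spec_detail"])   # 海水/26℃/砂地
```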
2. Create the SQLite database
init_db.py
import csv
import sqlite3
import unicodedata

# NFKC-fold, lowercase, and strip spaces so full-width/half-width input matches
def norm(text):
    return unicodedata.normalize("NFKC", text).lower().replace(" ", "")

conn = sqlite3.connect("products.db")
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS products")
cur.execute("""
CREATE TABLE products (
    species_name TEXT,
    spec_detail TEXT,
    nickname TEXT,
    species_name_norm TEXT,
    spec_detail_norm TEXT,
    nickname_norm TEXT
)
""")

with open("01_aquarium_species.csv", encoding="utf-8-sig") as f:
    reader = csv.DictReader(f)
    rows = []
    for r in reader:
        rows.append((
            r["species_name"],
            r["spec_detail"],
            r["nickname"],
            norm(r["species_name"]),
            norm(r["spec_detail"]),
            norm(r["nickname"]),
        ))

cur.executemany("INSERT INTO products VALUES (?, ?, ?, ?, ?, ?)", rows)
conn.commit()
conn.close()
Run:
python init_db.py
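An in-memory sketch of what init_db.py produces: each text column is stored twice, raw for display and NFKC-folded for matching. (This uses `:memory:` so it stands alone; the real script writes products.db.)

```python
import sqlite3
import unicodedata

def norm(text):
    return unicodedata.normalize("NFKC", text).lower().replace(" ", "")

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE products (
    species_name TEXT, spec_detail TEXT, nickname TEXT,
    species_name_norm TEXT, spec_detail_norm TEXT, nickname_norm TEXT)""")
row = ("チンアナゴ", "海水/26℃/砂地", "ニョロ")
# Insert the raw values followed by their normalized forms
conn.execute("INSERT INTO products VALUES (?, ?, ?, ?, ?, ?)",
             row + tuple(norm(v) for v in row))
stored = conn.execute("SELECT spec_detail_norm FROM products").fetchone()[0]
print(stored)  # 海水/26°c/砂地 -- NFKC expands ℃ to °C, then lowercased
```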
3. FastAPI
app.py
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
import sqlite3
import unicodedata

DB_FILE = "products.db"
PUBLIC_SERVER_URL = "http://127.0.0.1:8000"

app = FastAPI()

def norm(text):
    return unicodedata.normalize("NFKC", text).lower().replace(" ", "")

def query_db(sql, params):
    conn = sqlite3.connect(DB_FILE)
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute(sql, params)
    rows = cur.fetchall()
    conn.close()
    return [dict(r) for r in rows]

@app.get("/search-products", operation_id="search_products")
def search_products(temp: str = "", keyword: str = ""):
    sql = "SELECT * FROM products WHERE 1=1"
    params = []
    if temp:
        sql += " AND spec_detail_norm LIKE ?"
        params.append(f"%{norm(temp)}%")
    if keyword:
        # match species name or nickname (both stored normalized)
        sql += " AND (species_name_norm LIKE ? OR nickname_norm LIKE ?)"
        kw = f"%{norm(keyword)}%"
        params.extend([kw, kw])
    return {"results": query_db(sql, params)}

def custom_openapi():
    schema = get_openapi(
        title="Aquarium API",
        version="1.0.0",
        routes=app.routes,
    )
    schema["openapi"] = "3.0.3"  # label the spec as 3.0.x for the importer
    schema["servers"] = [{"url": PUBLIC_SERVER_URL}]
    return schema

app.openapi = custom_openapi
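The `query_db` helper returns plain dicts; the `sqlite3.Row` factory is what makes `dict(r)` work. A self-contained sketch of that pattern:

```python
import sqlite3

# sqlite3.Row exposes column names, so each result row converts to a plain
# dict that FastAPI can serialize to JSON directly.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE products (species_name TEXT, nickname TEXT)")
conn.execute("INSERT INTO products VALUES (?, ?)", ("チンアナゴ", "ニョロ"))
results = [dict(r) for r in conn.execute("SELECT * FROM products")]
print(results)  # [{'species_name': 'チンアナゴ', 'nickname': 'ニョロ'}]
```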
4. Start the server
uvicorn app:app --host 127.0.0.1 --port 8000
5. Tunnel
cloudflared tunnel --url http://127.0.0.1:8000
6. Important: update the server URL
PUBLIC_SERVER_URL = "https://xxxxx.trycloudflare.com"
Restart uvicorn so the regenerated spec carries the tunnel URL.
7. Fetch the OpenAPI spec
curl https://xxxxx.trycloudflare.com/openapi.json -o openapi.json
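Before importing, it is worth confirming the downloaded spec actually carries the tunnel URL. A hedged sketch; `looks_importable` is a hypothetical helper, and the inline JSON stands in for the real openapi.json:

```python
import json

# In practice, load the real file instead:
#   spec = json.load(open("openapi.json"))
spec = json.loads('{"openapi": "3.0.3", '
                  '"servers": [{"url": "https://xxxxx.trycloudflare.com"}]}')

def looks_importable(spec):
    # Hypothetical check: 3.0.x version string, and a servers entry that
    # no longer points at the local loopback address.
    servers = spec.get("servers", [])
    return (spec.get("openapi", "").startswith("3.0")
            and bool(servers)
            and "127.0.0.1" not in servers[0]["url"])

print(looks_importable(spec))  # True
```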
8. Register with WXO
orchestrate tools import -k openapi -f openapi.json
9. Agent
spec_version: v1
kind: native
name: aquarium_agent
llm: groq/openai/gpt-oss-120b
style: react
instructions: |
  Search for animals that match the given conditions.
tools:
  - search_products

orchestrate agents import -f product_agent.yaml
10. Try it
Ask the agent: "Any fish around 26℃?"
Pitfalls
- Forgetting the servers entry breaks the import
- The tunnel URL changes on every restart
- After a URL change, re-fetch openapi.json and re-import
- Searches fail without normalization
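The normalization pitfall can be seen directly: full-width input, as often typed in a chat UI, never substring-matches half-width stored text until both sides are NFKC-folded. A sketch:

```python
import unicodedata

def norm(text):
    return unicodedata.normalize("NFKC", text).lower().replace(" ", "")

query = "２６℃"            # full-width digits, as typed in a chat UI
stored = "海水/26℃/砂地"
print(query in stored)              # False: raw match fails
print(norm(query) in norm(stored))  # True: both fold to "26°c"
```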
Notes
- The procedure stays the same
- Only the data differs per project
- This is enough for a PoC