20241009 memo LLM AI(25)
https://qiita.com/kaizen_nagoya/items/80f00b3d6b44e00fec05
√ LLM preparation AI(30)
https://qiita.com/kaizen_nagoya/items/425908fe1e19a5c29bdf
Background, purpose, and goals for working on LLMs AI(2)
https://qiita.com/kaizen_nagoya/items/cfed9b4fbb2db49c76fb
Structure (assumed) for working on LLMs AI(29)
https://qiita.com/kaizen_nagoya/items/ebf98ce7c56cdc07c116
Tools and keywords AI(17)
https://qiita.com/kaizen_nagoya/items/69402fcd3a3dd4e51ed0
LLM study AI(15)
https://qiita.com/kaizen_nagoya/items/8b0acb6baeb2f4e353a4
@kzuzuo "Diversity of knowledge structures" notes AI(12)
https://qiita.com/kaizen_nagoya/items/946a8be68869764e13f0
@syoyo "e-Gov statute data" AI(13)
https://qiita.com/kaizen_nagoya/items/2c561aaaf1e86f5f536b
Are LLMs aware of their own hallucinations? Matsuo Lab LLM Community "Paper & Hacks Vol.24" AI(6)
https://qiita.com/kaizen_nagoya/items/f49ecdd9ae8374524f62
Small-scale proxies for large-scale Transformer training instabilities LLM AI(7)
https://qiita.com/kaizen_nagoya/items/da0d806d9271b367e9b5
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI
https://qiita.com/kaizen_nagoya/items/729f6eb47cf1d2574e3e
Document list AI(19)
https://qiita.com/kaizen_nagoya/items/385e56e581bd9507518d