In a nutshell?
I installed from the official GitHub release, deleted the old copy, and re-pulled the LLMs.
Background
I got stuck trying to pull gpt-oss:20b.
OS
$ uname -a
Linux k 6.14.0-27-generic #27~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 22 17:38:49 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
1 Told to upgrade the version
$ ollama pull gpt-oss:20b
pulling manifest
Error: pull model manifest: 412:
The model you are attempting to pull requires a newer version of Ollama.
Please download the latest version at:
https://ollama.com/download
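HTTP 412 ("Precondition Failed") here means the registry refused the pull because the reported Ollama version is too old for this model. A minimal sketch of that kind of version gate using `sort -V` (note: `need="0.11.0"` is an illustrative guess at the minimum gpt-oss:20b requires, not a documented value; substitute the real output of `ollama -v` for `have`):

```shell
# Compare an installed version against a required minimum with
# version-aware sort. need="0.11.0" is an assumed value for illustration.
have="0.9.6"
need="0.11.0"
if [ "$(printf '%s\n' "$need" "$have" | sort -V | head -n1)" = "$have" ] \
   && [ "$have" != "$need" ]; then
  echo "update required: $have < $need"
else
  echo "version OK"
fi
```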
$ curl -fsSL https://ollama.com/install.sh | sh # the official install method
>>> Installing ollama to /usr/local
[sudo] password for User:
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> Downloading Linux ROCm amd64 bundle
######################################################################## 100.0%
>>> The Ollama API is now available at [local IP].
>>> Install complete. Run "ollama" from the command line.
>>> AMD GPU ready.
$ ollama -v
ollama version is 0.9.6 # the version hasn't gone up
$ ollama pull gpt-oss:20b
pulling manifest
Error: pull model manifest: 412:
The model you are attempting to pull requires a newer version of Ollama.
Please download the latest version at:
https://ollama.com/download
2 Naturally, repeating it doesn't help
$ ollama list
NAME ID SIZE MODIFIED
gemma3:4b-it-q4_K_M [ ] 3.3 GB 2 days ago
gemma3n:e4b-it-q4_K_M [ ] 7.5 GB 2 days ago
deepseek-r1:7b-qwen-distill-q4_K_M [ ] 4.7 GB 2 days ago
deepseek-r1:32b-qwen-distill-q4_K_M [ ] 19 GB 2 days ago
qwen3:30b-a3b-instruct-2507-q4_K_M [ ] 18 GB 2 days ago
gemma3:27b-it-q4_K_M [ ] 17 GB 3 days ago
qwen3:4b-q4_K_M [ ] 2.6 GB 4 days ago
gemma3:12b-it-q4_K_M [ ] 8.1 GB 4 days ago
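Before reinstalling, it is worth saving this list so the models can be re-pulled afterwards. A minimal sketch of extracting just the names (first column, header row skipped); the here-doc stands in for real output, so pipe the actual `ollama list` through the same `awk` in practice:

```shell
# Extract model names from `ollama list`-style output:
# skip the header row (NR > 1), keep only the first column.
awk 'NR > 1 { print $1 }' <<'EOF'
NAME                  ID     SIZE    MODIFIED
gemma3:4b-it-q4_K_M   [ ]    3.3 GB  2 days ago
qwen3:4b-q4_K_M       [ ]    2.6 GB  4 days ago
EOF
```

In real use: `ollama list | awk 'NR > 1 { print $1 }' > models.txt`.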
$ curl -fsSL https://ollama.com/install.sh | sh
>>> Cleaning up old version at /usr/local/lib/ollama
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> Downloading Linux ROCm amd64 bundle
######################################################################## 100.0%
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
>>> AMD GPU ready.
$ ollama pull gpt-oss:20b
pulling manifest
Error: pull model manifest: 412:
The model you are attempting to pull requires a newer version of Ollama.
Please download the latest version at:
https://ollama.com/download
3 Checking with which ollama
$ ollama -v
ollama version is 0.9.6
$ which ollama
/usr/local/bin/ollama
$ ollama -v
ollama version is 0.9.6
Warning: client version is 0.11.3 # this newer version isn't the one being used
$ sudo systemctl stop ollama
$ sudo rm -f /usr/local/bin/ollama
$ sudo rm -rf /usr/local/lib/ollama
$ curl -fsSL https://ollama.com/install.sh | sh # the official install method
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> Downloading Linux ROCm amd64 bundle
######################################################################## 100.0%
>>> The Ollama API is now available at [local IP].
>>> Install complete. Run "ollama" from the command line.
>>> AMD GPU ready. # GPUs without dedicated VRAM apparently fall back to CPU-only
$ ollama -v
ollama version is 0.9.6
Warning: client version is 0.11.3 # no improvement
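Those two lines are the clue: the first line appears to be the version reported by the running server, while the warning shows the version of the freshly installed client binary, so the daemon answering on port 11434 is still an old install. A sketch of detecting the mismatch from that output (the sample string mirrors the log above; in practice replace it with `out=$(ollama -v 2>&1)`):

```shell
# Detect a server/client version mismatch from `ollama -v` output.
# The sample string mirrors the log above; use out=$(ollama -v 2>&1) for real.
out='ollama version is 0.9.6
Warning: client version is 0.11.3'
server=$(echo "$out" | awk '/^ollama version is/ { print $4 }')
client=$(echo "$out" | awk '/client version is/ { print $5 }')
if [ -n "$client" ] && [ "$server" != "$client" ]; then
  echo "mismatch: server=$server client=$client (an old daemon is still running)"
fi
```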
4 Install from GitHub and remove /snap/bin/ollama
$ sudo systemctl stop ollama
$ sudo rm -f /usr/local/bin/ollama
$ sudo rm -rf /usr/local/lib/ollama
$ ollama -v
-bash: /usr/local/bin/ollama: No such file or directory # the uninstall worked
$ curl -LO https://github.com/ollama/ollama/releases/latest/download/ollama-linux-amd64.tgz # see https://github.com/ollama/ollama/releases/tag/v0.11.3
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 1251M 100 1251M 0 0 35.6M 0 0:00:35 0:00:35 --:--:-- 37.1M
$ tar -xvzf ollama-linux-amd64.tgz
./
./lib/
./lib/ollama/
./lib/ollama/libggml-cpu-sse42.so
./lib/ollama/libcublasLt.so.12.8.4.1
./lib/ollama/libggml-cpu-sandybridge.so
./lib/ollama/libcudart.so.12
./lib/ollama/libggml-cpu-skylakex.so
./lib/ollama/libggml-cpu-haswell.so
./lib/ollama/libggml-cpu-icelake.so
./lib/ollama/libggml-hip.so
./lib/ollama/libcudart.so.12.8.90
./lib/ollama/libcublasLt.so.12
./lib/ollama/libcublas.so.12.8.4.1
./lib/ollama/libggml-base.so
./lib/ollama/libggml-cpu-x64.so
./lib/ollama/libggml-cpu-alderlake.so
./lib/ollama/libcublas.so.12
./lib/ollama/libggml-cuda.so
./bin/
./bin/ollama
$ sudo install -m 755 bin/ollama /usr/local/bin/ollama
$ ollama -v
ollama version is 0.9.6
Warning: client version is 0.11.3
$ which -a ollama
/usr/local/bin/ollama
/snap/bin/ollama # it's still lingering here
$ sudo snap remove ollama
ollama removed
$ which -a ollama
/usr/local/bin/ollama
$ ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.11.3 # now recognized
$ ollama list # try calling Ollama
Error: ollama server not responding - could not connect to ollama server, run 'ollama serve' to start it
$ sudo systemctl start ollama # wake it up
[sudo] password for User:
$ ollama list # up and running, but the model list is empty
NAME ID SIZE MODIFIED
$ ollama pull gpt-oss:20b
5 Re-pulling the LLMs with a shell script
$ ollama list
NAME ID SIZE MODIFIED
gpt-oss:20b [ ] 13 GB 3 minutes ago
$ cat <<EOF > repull.sh # a shell script to pull them all in one go
#!/bin/bash
ollama pull gemma3:4b-it-q4_K_M
ollama pull gemma3n:e4b-it-q4_K_M
ollama pull deepseek-r1:7b-qwen-distill-q4_K_M
ollama pull deepseek-r1:32b-qwen-distill-q4_K_M
ollama pull qwen3:30b-a3b-instruct-2507-q4_K_M
ollama pull gemma3:27b-it-q4_K_M
ollama pull qwen3:4b-q4_K_M
ollama pull gemma3:12b-it-q4_K_M
EOF
chmod +x repull.sh
./repull.sh # everything above was pasted in at once.
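An alternative to hard-coding every model name in the script: keep the names in a file (one per line, e.g. saved from `ollama list` before uninstalling) and feed them to `xargs`. In this sketch `echo` is prefixed as a dry run so the commands are only printed; remove it to actually pull:

```shell
# Re-pull every model listed in models.txt. `echo` makes this a dry run
# that just prints the commands; drop it to run `ollama pull` for real.
printf '%s\n' 'gemma3:4b-it-q4_K_M' 'qwen3:4b-q4_K_M' > models.txt
xargs -n1 echo ollama pull < models.txt
```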
6 The version is finally up, and gpt-oss:20b is in
$ ollama list
NAME ID SIZE MODIFIED
gemma3:12b-it-q4_K_M [ ] 8.1 GB 3 minutes ago
qwen3:4b-q4_K_M [ ] 2.6 GB 6 minutes ago
gemma3:27b-it-q4_K_M [ ] 17 GB 8 minutes ago
qwen3:30b-a3b-instruct-2507-q4_K_M [ ] 18 GB 16 minutes ago
deepseek-r1:32b-qwen-distill-q4_K_M [ ] 19 GB 24 minutes ago
deepseek-r1:7b-qwen-distill-q4_K_M [ ] 4.7 GB 33 minutes ago
gemma3n:e4b-it-q4_K_M [ ] 7.5 GB 36 minutes ago
gemma3:4b-it-q4_K_M [ ] 3.3 GB 39 minutes ago
gpt-oss:20b [ ] 13 GB About an hour ago
That's all.