

I want to animate an AI image with Stable Diffusion WebUI (Google Colab)

What I want to solve

I want to animate an AI image with Stable Diffusion WebUI (Google Colab).

Problem / error encountered

After launching Stable Diffusion WebUI on Google Colab, configuring the settings, and clicking Generate, the error "AssertionError: extension access disabled because of command line flags" is displayed. An image was generated, but the character does not appear in it at all.
If anyone knows how to resolve this error and generate the AI image, please let me know.

Generated image

image.png

Code used to launch the Web UI

!COMMANDLINE_ARGS="--share --gradio-debug --xformers --gradio-auth me:qwerty " REQS_FILE="requirements.txt" python launch.py

Python 3.10.11 (main, Apr  5 2023, 14:15:10) [GCC 9.4.0]
Commit hash: 5ab7f213bec2f816f9c5644becb32eb72c8ffb89
Installing gfpgan
Installing clip
Installing open_clip
Installing xformers
Cloning Stable Diffusion into /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning Taming Transformers into /content/stable-diffusion-webui/repositories/taming-transformers...
Cloning K-diffusion into /content/stable-diffusion-webui/repositories/k-diffusion...
Cloning CodeFormer into /content/stable-diffusion-webui/repositories/CodeFormer...
Cloning BLIP into /content/stable-diffusion-webui/repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements
Installing requirements for Mov2mov
Installing requirements for ffmpeg

Installing sd-webui-controlnet requirement: mediapipe
Installing sd-webui-controlnet requirement: svglib
Installing sd-webui-controlnet requirement: fvcore

Launching Web UI with arguments: --share --gradio-debug --xformers --gradio-auth me:qwerty
2023-05-08 13:18:50.729273: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-08 13:18:51.729251: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to /content/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors

100% 3.97G/3.97G [00:24<00:00, 177MB/s]
ControlNet v1.1.150
ControlNet v1.1.150
Calculating sha256 for /content/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: 6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from /content/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100% 961k/961k [00:00<00:00, 1.38MB/s]
Downloading (…)olve/main/merges.txt: 100% 525k/525k [00:00<00:00, 1.01MB/s]
Downloading (…)cial_tokens_map.json: 100% 389/389 [00:00<00:00, 2.54MB/s]
Downloading (…)okenizer_config.json: 100% 905/905 [00:00<00:00, 5.48MB/s]
Downloading (…)lve/main/config.json: 100% 4.52k/4.52k [00:00<00:00, 19.4MB/s]
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0): 
Model loaded in 37.7s (calculate hash: 13.8s, load weights from disk: 0.5s, create model: 5.5s, apply weights to model: 6.1s, apply half(): 1.1s, load VAE: 10.0s, move model to device: 0.6s).
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://dc3148242716213977.gradio.live


This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
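
For reference: the AUTOMATIC1111 WebUI disables the Extensions tab whenever --share (or --listen) is passed without --enable-insecure-extension-access, and check_access() in modules/ui_extensions.py then raises exactly the AssertionError shown below. A minimal sketch of the same Colab launch cell with that flag added (assumption: exposing extension management over the public Gradio link is acceptable for this session):

!COMMANDLINE_ARGS="--share --gradio-debug --xformers --enable-insecure-extension-access --gradio-auth me:qwerty" REQS_FILE="requirements.txt" python launch.py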

Error when generating the image

Error completing request
Arguments: ('task(swmnjxkf0k28y1y)', '[]') {}
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/ui_extensions.py", line 100, in check_updates
    check_access()
  File "/content/stable-diffusion-webui/modules/ui_extensions.py", line 24, in check_access
    assert not shared.cmd_opts.disable_extension_access, "extension access disabled because of command line flags"
AssertionError: extension access disabled because of command line flags

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 399, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1022, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/content/stable-diffusion-webui/modules/ui_extensions.py", line 28, in apply_and_restart
    check_access()
  File "/content/stable-diffusion-webui/modules/ui_extensions.py", line 24, in check_access
    assert not shared.cmd_opts.disable_extension_access, "extension access disabled because of command line flags"
AssertionError: extension access disabled because of command line flags

Start parsing the number of mov frames
The video conversion is completed, images:955
current progress: 1/1
  0% 0/16 [00:00<?, ?it/s]
  6% 1/16 [00:02<00:37,  2.47s/it]
 19% 3/16 [00:02<00:08,  1.51it/s]
 31% 5/16 [00:03<00:03,  2.90it/s]
 38% 6/16 [00:03<00:02,  3.61it/s]
 44% 7/16 [00:03<00:02,  4.27it/s]
 50% 8/16 [00:03<00:01,  4.83it/s]
 56% 9/16 [00:03<00:01,  5.34it/s]
 62% 10/16 [00:03<00:01,  5.74it/s]
 69% 11/16 [00:03<00:00,  6.05it/s]
 75% 12/16 [00:04<00:00,  6.30it/s]
 81% 13/16 [00:04<00:00,  6.49it/s]
 88% 14/16 [00:04<00:00,  6.60it/s]
 94% 15/16 [00:04<00:00,  6.70it/s]
100% 16/16 [00:04<00:00,  3.45it/s]
Downloading: "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth" to /content/stable-diffusion-webui/models/Codeformer/codeformer-v0.1.0.pth

100% 359M/359M [00:03<00:00, 124MB/s]
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to /content/stable-diffusion-webui/repositories/CodeFormer/weights/facelib/detection_Resnet50_Final.pth

100% 104M/104M [00:00<00:00, 173MB/s]
Downloading: "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth" to /content/stable-diffusion-webui/repositories/CodeFormer/weights/facelib/parsing_parsenet.pth

100% 81.4M/81.4M [00:00<00:00, 165MB/s]
Start generating .mp4 file
The generation is complete, the directory::outputs/mov2mov-videos/1683552353.mp4

Total progress: 100% 16/16 [00:17<00:00,  1.06s/it]

What I have tried

・I set up the Web UI by following this article:
https://note.com/toshimaru_news/n/n7831d5ed921d

・Following step ③ in the article below, to apply the installed model I switched to the "Extensions" tab → "Installed" tab and clicked "Apply and restart UI" (see the note after this list).
http://kozi001.com/2023/04/16/howto-use-sd-webui-extension-control-net-open-pose-editor/#AIStable_diffusion_web_UIx1f60dx1f495x1f495AIYouTube
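
Note: the traceback above goes through check_updates and apply_and_restart in modules/ui_extensions.py, so the AssertionError appears to come from the Extensions-tab actions in step ③ rather than from the Generate button itself; the launch-cell sketch under the Web UI launch log would be the flag change relevant to this step as well (this is an inference from the traceback, not a confirmed fix).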
