ComfyUI + AnimateDiff: GIF generation produces only black GIFs
What I want to solve
I am generating animations with ComfyUI and AnimateDiff, but the only output I get is an all-black GIF and PNG images. I would appreciate any help.
Environment
Mac / 24 GB RAM
PyTorch 2.1.0
Python 3.10.6
Problem / errors encountered
Still-image generation such as img2img works fine, but as soon as I generate an animation, the output is completely black.
The checkpoint model, the VAE, and the AnimateDiff motion module all appear to be in the correct folders, and I cannot find any error messages.
For the workflow I used the txt2img example image from the official repository below, with all settings identical:
https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
Below is the terminal output from launching ComfyUI through generating the GIF.
Last login: Wed Oct 18 23:30:27 on ttys000
maikokawano@MacBook-Air ~ % cd comfyUI
python main.py
** ComfyUI start up time: 2023-10-19 00:00:06.317072
Prestartup times for custom nodes:
0.0 seconds: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager
Total VRAM 24576 MB, total RAM 24576 MB
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.0 with CUDA None (you have 2.1.0)
Python 3.10.6 (you have 3.10.6)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
xformers version: 0.0.21
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
### Loading: ComfyUI-Manager (V0.36)
### ComfyUI Revision: 1589 [c2bb34d8] | Released on '2023-10-18'
Registered sys.path: ['/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_pycocotools', '/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_oneformer', '/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/__init__.py', '/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_mmpkg', '/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_midas_repo', '/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_detectron2', '/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux', '/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/src', '/Users/maikokawano/ComfyUI/comfy', '/Users/maikokawano/ComfyUI', '/Users/maikokawano/.pyenv/versions/3.10.6/lib/python310.zip', '/Users/maikokawano/.pyenv/versions/3.10.6/lib/python3.10', '/Users/maikokawano/.pyenv/versions/3.10.6/lib/python3.10/lib-dynload', '/Users/maikokawano/.pyenv/versions/3.10.6/lib/python3.10/site-packages', '../..']
/Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Import times for custom nodes:
0.0 seconds: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet
0.0 seconds: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite
0.0 seconds: /Users/maikokawano/ComfyUI/custom_nodes/comfyui-animatediff
0.0 seconds: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
0.0 seconds: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager
0.4 seconds: /Users/maikokawano/ComfyUI/custom_nodes/comfyui_controlnet_aux
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
FETCH DATA from: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
got prompt
model_type EPS
adm 0
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|███████████████████████████████████████████| 20/20 [01:43<00:00, 5.17s/it]
Prompt executed in 110.98 seconds
FETCH DATA from: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
FETCH DATA from: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
FETCH DATA from: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
FETCH DATA from: /Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
got prompt
model_type EPS
adm 0
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v14.ckpt
Requested to load SD1ClipModel
Loading 1 new model
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (10) less or equal to context_length None.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v14.ckpt version v1.
Requested to load BaseModel
Loading 1 new model
100%|███████████████████████████████████████████| 10/10 [06:38<00:00, 39.89s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm_sd_v14.ckpt version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
[AnimateDiffEvo] - INFO - Removing motion module mm_sd_v14.ckpt from cache
/Users/maikokawano/ComfyUI/comfy/model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
/Users/maikokawano/ComfyUI/comfy/model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
/Users/maikokawano/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py:106: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))
Prompt executed in 419.58 seconds