[bug]: 8060S (gfx1151) always crashes around the VAE step with "Aborted (core dumped)" #8306

@KoDelioDa

Description

Is there an existing issue for this problem?

  • I have searched the existing issues

Install method

Invoke's Launcher

Operating system

Linux

GPU vendor

AMD (ROCm)

GPU model

Radeon 8060S (gfx1151)

GPU VRAM

96GB

Version number

v6.1.0rc2

Browser

No response

System Information

No response

What happened

[2025-07-19 18:57:41,245]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '09ff6503-6725-483c-9608-44c1fc4e780e:text_encoder_2' (T5EncoderModel) onto cuda device in 4.90s. Total model size: 9083.39MB, VRAM: 9083.39MB (100.0%)
You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers
[2025-07-19 18:57:41,346]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '09ff6503-6725-483c-9608-44c1fc4e780e:tokenizer_2' (T5TokenizerFast) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
[2025-07-19 18:57:43,147]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '25f65633-5dfe-4931-a12d-b381429bb9fd:text_encoder' (CLIPTextModel) onto cuda device in 0.02s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-07-19 18:57:43,176]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '25f65633-5dfe-4931-a12d-b381429bb9fd:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-07-19 18:58:02,412]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '177348be-0538-425f-9218-e056b81a1331:transformer' (Flux) onto cuda device in 18.50s. Total model size: 22700.13MB, VRAM: 22700.13MB (100.0%)
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [01:03<00:00, 2.11s/it]
[2025-07-19 18:59:08,784]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'd9d88fbc-d32d-40af-bbbf-b387ec1c5ee5:vae' (AutoEncoder) onto cuda device in 0.02s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
Aborted (core dumped)

What you expected to happen

I expect generation to complete successfully. Alternatively, is there a way to run the VAE on the CPU, like ComfyUI does?
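I don't know of a CPU-VAE toggle exposed by this Invoke version, but as a generic PyTorch workaround sketch (the `decoder` stand-in and `decode_on_cpu` helper are hypothetical illustrations, not Invoke API), the decode step can be forced onto the CPU by moving both the module and the latents before calling it:

```python
import torch
import torch.nn as nn

# Stand-in decoder; in Invoke this would be the loaded AutoEncoder model.
decoder = nn.Sequential(
    nn.Conv2d(4, 8, 3, padding=1),
    nn.Upsample(scale_factor=8),
    nn.Conv2d(8, 3, 3, padding=1),
)

def decode_on_cpu(decoder: nn.Module, latents: torch.Tensor) -> torch.Tensor:
    """Move the decoder and the latents to CPU, then decode there.

    This sidesteps the GPU kernel that aborts, at the cost of a slower
    decode. float32 is used because CPU support for half precision is patchy.
    """
    decoder = decoder.to("cpu")
    with torch.no_grad():
        return decoder(latents.to("cpu", dtype=torch.float32))

latents = torch.randn(1, 4, 64, 64)   # fake latents for illustration
image = decode_on_cpu(decoder, latents)
print(tuple(image.shape))             # (1, 3, 512, 512)
```

The same pattern (`.to("cpu")` on the VAE plus its inputs right before decode) is what ComfyUI's `--cpu-vae`-style options do under the hood.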

How to reproduce the problem

1. Run `invokeai-web --root ~/invokeai`.
2. Open the web UI and click "Invoke".
3. Wait; generation finishes the denoising steps, then aborts with "Aborted (core dumped)" around the VAE step.
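In case it helps triage, the crash can be re-run with standard ROCm/HIP debug switches to get more out of the abort (these are generic HIP/PyTorch-ROCm debugging environment variables and a hypothetical invocation, not Invoke options):

```shell
# Keep a core dump so the abort can be inspected with gdb afterwards.
ulimit -c unlimited

# HIP_LAUNCH_BLOCKING=1 serializes kernel launches so the failure surfaces
# closer to the offending kernel; AMD_SERIALIZE_KERNEL=3 does the same for
# PyTorch's ROCm backend.
HIP_LAUNCH_BLOCKING=1 AMD_SERIALIZE_KERNEL=3 invokeai-web --root ~/invokeai
```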

Additional context

I used https://github.com/ROCm/TheRock to build my PyTorch:

python -m pip install torch torchaudio torchvision --force-reinstall --index-url https://d2awnip2yjpvqn.cloudfront.net/v2/gfx1151
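After installing from that index, a quick sanity check (my suggestion, not part of the original report) is to confirm the wheel is actually a ROCm build and that the GPU is visible, since a CPU or CUDA wheel would also reach the VAE step before misbehaving:

```python
import torch

# ROCm builds of PyTorch set torch.version.hip to the HIP runtime version;
# CUDA and CPU-only builds leave it as None.
print("torch version:", torch.__version__)
print("hip runtime:  ", torch.version.hip)

# ROCm reuses the torch.cuda namespace, so this reports the AMD GPU.
print("gpu visible:  ", torch.cuda.is_available())
```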

Discord username

No response
