runtime error

Exit code: 1. Reason: `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:

```
pip install accelerate
```

```
config.json: 100%|██████████| 609/609 [00:00<00:00, 3.31MB/s]
diffusion_pytorch_model.fp16.safetensors: 100%|█████████▉| 196M/196M [00:00<00:00, 342MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 101, in <module>
    vae_model = VaeWrapper("video")
  File "/home/user/app/vae_wrapper.py", line 68, in __init__
    self.vae_model = self.get_vae(latent_type, variant)
  File "/home/user/app/vae_wrapper.py", line 86, in get_vae
    vae_model.set_use_memory_efficient_attention_xformers(True)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 412, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 408, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 408, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 408, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 405, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 411, in set_use_memory_efficient_attention_xformers
    raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU
```
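The traceback shows that `vae_wrapper.py` enables xformers memory-efficient attention unconditionally, which fails on hardware without a CUDA GPU (for example, a CPU-only container). Below is a minimal sketch of a guard, assuming `vae_model` is the diffusers VAE loaded in `get_vae`; the helper name `enable_xformers_if_cuda` is hypothetical:

```python
import torch

def enable_xformers_if_cuda(vae_model):
    # xformers' memory-efficient attention only works on CUDA devices,
    # so opt in only when a GPU is actually visible to PyTorch.
    if torch.cuda.is_available():
        vae_model.set_use_memory_efficient_attention_xformers(True)
    # On CPU-only hosts, keep the default attention processors to avoid
    # the ValueError raised in attention_processor.py above.
    return vae_model
```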
