issue using llama.cpp
#2 by Grimble - opened
I am running llama-server with the Q8_0.gguf and the llama-joycaption-beta-one-llava-mmproj-model-f16.gguf. It loads fine and seems to work initially, but I get a lot of browser pop-up errors like "Failed to load image or audio file", along with
mtmd_helper_bitmap_init_from_buf: failed to decode image bytes
srv log_server_r: request: POST /v1/chat/completions 127.0.0.1 400
in the terminal. I can't tell whether this is a model issue or a llama-server issue. Any help would be appreciated. I'm trying .png images, which I'm not sure are supported, but also .jpg.
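One way to narrow this down: the `mtmd_helper_bitmap_init_from_buf` error means the server received the request but could not decode the image bytes it was handed, which often points to the client sending a malformed or truncated base64 payload rather than a model problem. Below is a minimal sketch (assuming the OpenAI-compatible data-URL format that llama-server's `/v1/chat/completions` endpoint accepts for images; the model name and prompt are placeholders) of building the request body yourself so you can rule out the browser UI:

```python
import base64
import json

def build_image_request(image_bytes: bytes, mime: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload embedding an image
    as a base64 data URL (the format the /v1/chat/completions endpoint
    of llama-server expects for multimodal input)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "joycaption",  # placeholder; the server ignores or maps this
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:{mime};base64,{b64}"}},
            ],
        }],
    }

# Stand-in image bytes for illustration (a commonly used tiny 1x1 PNG);
# in practice, read your own file: open("your_image.png", "rb").read()
PNG_STAND_IN = base64.b64decode(
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)

payload = build_image_request(PNG_STAND_IN, "image/png", "Describe this image.")
print(json.dumps(payload)[:60])
```

You could POST that payload to `http://127.0.0.1:<port>/v1/chat/completions` with `curl` or `requests`; if a known-good JPEG or PNG sent this way still triggers the 400, the problem is on the server/model side rather than in the image files.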
Try it in koboldcpp and let me know if the problems persist. Follow the instructions in the model card.