
Using a word like limón on the prompt makes llama.cpp crash #4544


Closed
FiveTechSoft opened this issue Dec 20, 2023 · 9 comments

Comments

@FiveTechSoft

If you use any word like limón (or similar), llama.cpp crashes.

Is this a bug? How can it be solved?

Many thanks

@mkesper

mkesper commented Dec 20, 2023

I think you need to provide more information about your setup and starting parameters.

Cannot confirm:

User: What's a limón?

Bot: A limón (also known as a lime) is a small citrus fruit that has a bright yellow skin and juicy white flesh. It's often used for its tangy flavor in cooking, baking, and drinks like margaritas. Would you like more information on limons or anything else?

With mistralai_mistral-7b-instruct-v0.1 on GNU/Linux.

@FiveTechSoft
Author

Where can I download mistralai_mistral-7b-instruct-v0.1 from? Is it a GGUF file?

We have tested it with phi-2_Q4_K_M.gguf, and also with an Orca GGUF. Both crash when using "limón".

Many thanks

@mkesper

mkesper commented Dec 20, 2023

Indeed, it's a GGUF from https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF. I run it like this:
./server -c 4096 --mlock --model mistral-7b-instruct-v0.1.Q4_K_M.gguf

@FiveTechSoft
Author

Dear Michael,

You are right, that one does not crash.

Using phi-2_Q4_K_M.gguf it crashes, and it also crashes with orca-2-7b.Q2_K.gguf.

Many thanks

@LoganDark
Contributor

Are you sure that this is an issue with llama.cpp? Where does it crash - during tokenization or during inference? What UI are you using? UTF-8 issues are very widespread (especially when decoding the model's output), so it's entirely likely that it's not actually llama.cpp's fault.
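
A minimal Python sketch of the failure mode described above (this is not llama.cpp code, and the split point is hypothetical): "ó" is two bytes in UTF-8, so a byte-level tokenizer can split it across token boundaries, and a UI that strictly decodes each token's bytes independently will fail mid-character.

```python
import codecs

data = "limón".encode("utf-8")   # b'lim\xc3\xb3n' -- 6 bytes for 5 characters
chunks = [data[:4], data[4:]]    # hypothetical split inside the 2-byte 'ó'

# Strict per-chunk decoding raises on the incomplete character.
try:
    chunks[0].decode("utf-8")
except UnicodeDecodeError:
    print("strict decode failed mid-character")

# An incremental decoder buffers the partial byte and succeeds.
dec = codecs.getincrementaldecoder("utf-8")()
text = "".join(dec.decode(chunk) for chunk in chunks)
print(text)  # limón
```

If a front end decodes token-by-token with strict error handling (or a C++ layer asserts on invalid UTF-8), any accented word can trigger a crash even though the model and llama.cpp itself handled the bytes correctly.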

@Ar57m

Ar57m commented Dec 21, 2023

I have phi2 3km and phi1.5 6k; both work fine with the word limón

@Ar57m

Ar57m commented Dec 24, 2023

Maybe #4618 is related.

Contributor

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Mar 18, 2024
Contributor

github-actions bot commented Apr 2, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 2, 2024
4 participants