Runtime error with mmap #3850
@rcontesti which branch are you using? main branch or release branch?
I'm using the main branch. I just cloned it following the instructions in [Getting started](https://pytorch.org/executorch/main/getting-started-setup) with:
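For context, a minimal sketch of the clone-and-install steps that guide describes; this is an assumed sequence, not the reporter's verbatim commands:

```bash
# Sketch of a typical ExecuTorch setup per the getting-started guide
# (assumed steps; the thread's exact commands were not captured).
git clone https://github.com/pytorch/executorch.git
cd executorch
git submodule sync
git submodule update --init
./install_requirements.sh
```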
Thanks for adding repro instructions and your environment, @rcontesti! I just tried the command on the same commit (9d58de1) and can export Llama3 8B/8B-Instruct and stories (the failure reported in #2907). Your environment looks good. To make sure I have the right repro: which Llama3 8B model are you using (8B or 8B-Instruct?), and where is it from (the Meta official website, HF)? As a workaround, could you try removing:
Hi @lucylq, thanks for the support. I'm using Llama3 8B (not 8B-Instruct), downloaded with the Meta link sent to my email. I'm also trying to run it with --branch v0.2.0. The problem is that with v0.2.0 I now cannot run:
Ah - for Llama3, we have to use the main branch. The release branch doesn't contain all the features required to export Llama3. It looks like that warning was removed from the documentation - I'll add it back in, thanks for pointing it out. Edit: though, I don't think we should see errors when installing ExecuTorch on the release branch. @GregoryComer, looks like the buck2 resolver?
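A rough sketch of switching from the release tag back to main and reinstalling; the branch names come from this thread, the rest is illustrative:

```bash
# Illustrative branch switch; re-run the install script after changing
# branches so native dependencies (e.g. buck2) match the checked-out revision.
cd executorch
git fetch origin
git checkout main            # main is required for Llama3 export
git submodule update --init
./install_requirements.sh
```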
Summary: Use main branch for Llama3. Looks like the warning was removed in D56358723? See: pytorch#3850
Differential Revision: D58212855
Hi team, once again many thanks for the support. I returned to the main branch as suggested and reinstalled Llama3 8B. The process is killed without generating any .pte file, and the error message isn't very verbose. I'm running:
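The exact command is not captured in this thread; as a hypothetical reconstruction, the llama2 README of that era documented an export invocation along these lines (checkpoint and params paths are placeholders):

```bash
# Hypothetical export command based on the llama2 README; flags and paths
# are assumptions, not the reporter's verbatim command.
python -m examples.models.llama2.export_llama \
  --checkpoint consolidated.00.pth \
  -p params.json \
  -kv --use_sdpa_with_kv_cache -X \
  -qmode 8da4w --group_size 128 -d fp32 \
  --output_name "llama3_kv_sdpa_xnn.pte"
```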
And getting:
It seems I'm exceeding memory with Torch, which is strange, as I have 32 GB. Anyway, I'll try on a larger machine. Any suggestions in terms of memory optimization?
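One way to confirm an out-of-memory kill and measure peak usage, assuming a Linux machine (GNU time's -v flag; macOS uses -l instead):

```bash
# Measure peak resident memory of the export (Linux, GNU time):
/usr/bin/time -v python -m examples.models.llama2.export_llama ... 2>&1 | tee export.log
grep "Maximum resident set size" export.log

# An OOM kill also leaves a trace in the kernel log:
sudo dmesg | grep -i "killed process"
```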
@rcontesti yeah, my local machine is 32 GB too; it's pretty tight. I had to close some running applications to make the script work. During export, peak memory usage gets quite high.
Thanks @mergennachin. I'm closing the issue since the topic has drifted quite a bit. QQ: is there any way to make the export process take longer so that it can be handled with less memory? The export is the peak memory demand and is usually a one-off, so I don't care how long it takes. Most of us, I guess, are mostly concerned about runtime once the graph is flattened. Anyway, many thanks!
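On the time-for-memory question: since export is a one-off, one common workaround (a sketch, assuming Linux and spare disk space; the 16G size is arbitrary) is temporary swap, so the process pages to disk instead of being killed:

```bash
# Add temporary swap for the duration of the export (Linux).
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# ... run the export here ...

# Clean up afterwards.
sudo swapoff /swapfile
sudo rm /swapfile
```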
Not sure if this is related to previous issue #2907, but when following the Llama tutorial ([link](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/README.md)) to run Llama3 with the following script:
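The script itself is not reproduced above; as a hypothetical stand-in, the tutorial's runtime step looks roughly like this (binary and file names assumed):

```bash
# Illustrative runtime invocation from the llama2 example; the .pte and
# tokenizer paths are placeholders, not the reporter's actual files.
cmake-out/examples/models/llama2/llama_main \
  --model_path=llama3.pte \
  --tokenizer_path=tokenizer.model \
  --prompt="Once upon a time"
```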
I'm getting the following runtime error:
I'm using the following environment:
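The environment dump is not reproduced here; a standard way to collect it, if useful for cross-checking:

```bash
# Collect PyTorch/system environment info (standard torch utility).
python -m torch.utils.collect_env
pip list | grep -Ei "executorch|torch"
```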