Python Language server takes a huge amount of RAM (more than 10+GB) #1426
I can see a load of exceptions in those language server logs; I think it might be failing halfway through and restarting. Can you try out the beta build to see if the issue goes away? Set the following in your user settings (ignore the warning that the option doesn't exist), then reload.
I think the exceptions are #1335, which is fixed in 0.3.47+; I have now moved that to stable as well.
IntelliSense doesn't work anymore after I enabled this. I didn't encounter a leak this time (I don't know whether that's related to enabling the beta, since I haven't hit one with the old version since then either); so far I have not faced any leaks.
So basically, if I stopped pressing any key it would stop, and if I continued typing it would resume (even while writing inside a Python string). Here are the logs from when the beta was active: logfile
In the same environment, can you open a file that's just
Yes I do; here is the log: log
Based on that, I'm fairly certain this is the same as #1401 (but we can reopen as needed). I'd appreciate it if you could set the following in your config and reload to trigger a download (to v0.3.56), then see if you can grab one of the stack traces for those exceptions and post an update on #1401.
"python.analysis.downloadChannel": "daily"
Closing in favor of #1401.
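For context, that option lives in the VS Code user `settings.json`; it would look something like the following (VS Code's settings file tolerates `//` comments):

```json
{
    // Pulls daily builds of the language server (v0.3.56 at the time)
    "python.analysis.downloadChannel": "daily"
}
```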
It happened again just moments ago! It maxed out 10+ GB of system RAM. Unfortunately I couldn't save any log!
The leak happened again moments ago! Here is the log: https://we.tl/t-7XyjjmUIgf
@Coderx7 please open a separate issue, as this thread is closed. The case may or may not be the same. Thanks.
Actually, since this was closed as a duplicate rather than resolved as fixed, reopening.
```python
import torch
import torch.nn as nn
import torchvision
#import torchvision.transforms as transforms
from torchvision import transforms
from torchvision import datasets
from torch import cuda
```

amounted to torch + pandas + matplotlib + scipy + numpy, but I ended up with a 1 GB peak, 400 MB post-analysis. However, this is stock Python 3.7.
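As a point of comparison, peak vs. post-run memory figures like the ones above can be read for a Python process with the stdlib `resource` module. This is a minimal POSIX-only sketch that measures the measuring process itself, not the language server (which runs as a separate process):

```python
import resource
import sys

def peak_rss_mb() -> float:
    """Peak resident set size of the current process, in MB (POSIX only)."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        peak /= 1024  # macOS reports bytes; Linux reports kilobytes
    return peak / 1024

baseline = peak_rss_mb()
blob = bytearray(50 * 1024 * 1024)  # allocate ~50 MB to raise the peak
print(f"baseline {baseline:.0f} MB -> peak {peak_rss_mb():.0f} MB")
```

Note that `ru_maxrss` is a high-water mark, so it never decreases; comparing it against a live RSS reading is what distinguishes "peak" from "post-analysis" usage.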
The LS is on 0.5; I am planning to close this unless it is still happening.
Today I faced this issue! It took more than 10 GB of my system RAM.
I can't reproduce it now (see below). It seems that when this happens, IntelliSense also stops working and analysis goes on forever. It seemed to me that this happens when I try to code (use IntelliSense) while the language server is still analyzing something, and that kills IntelliSense for good; each time I write new code and analysis continues, more RAM is consumed. I'm not sure about this, but that's my feeling.
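One way to check the theory that RAM grows while analysis is running is to sample the language server process's resident memory over time while typing. A minimal sketch, assuming a POSIX `ps` is available and that you have found the server's PID in your OS process list (substituting your own PID works for a quick demo):

```python
import os
import subprocess
import time

def rss_mb(pid: int) -> float:
    """Resident set size of a process in MB, read via POSIX `ps` (reports KB)."""
    out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
    return int(out.strip()) / 1024

# Demo on our own PID; substitute the language server's PID to watch for a leak.
for _ in range(3):
    print(f"RSS: {rss_mb(os.getpid()):.1f} MB")
    time.sleep(0.5)
```

If the theory holds, the sampled RSS should climb steadily while you type during an in-progress analysis and never come back down.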
I attached some log files, hopefully they should show something.
I'm using the latest version:
Python Language Server: Microsoft Python Language Server version 0.3.46.0
Python Extension: 2019.8.29288 (6 August 2019)
VSCode info
UPDATE:
OK, this happened again and I could save the log and record it before VS Code crashed!

Here is the console log:
Here is the second VS log and the language server log (copied from the Output tab):
Download the 2 log files separately.
And this is the sample test code I used: