High memory usage in 0.3.40 (memory leak?) #1298
How much time have you used it?
Sorry, I should have clarified.
I've checked and I still had VS Code open after today. Attached is the log from the Python output.
Ok, thank you!
We just built 0.3.22, which is available in the beta/daily channels. It contains some fixes which we believe will help solve some of the leaks. You can set this and reload to update:
|
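The exact setting in the comment above is not shown in this transcript; for the Microsoft Python Language Server of that era, the update channel was typically selected with a VS Code setting along these lines (setting name recalled from memory — treat it as an assumption and verify against the extension's documentation):

```jsonc
{
  // Hypothetical sketch: selects the language server download channel.
  // Accepted values were "stable", "beta", and "daily".
  "python.analysis.downloadChannel": "beta"
}
```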
I'll try it and report back. Thanks for the quick feedback 👍
@CaselIT, if you have a venv, can you create |
Attached is a pip freeze of the current environment. |
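For anyone following along, an environment snapshot like the one attached can be produced with pip's freeze command; a minimal sketch, assuming an already-activated venv:

```shell
# From within the activated venv, write the installed packages
# (name==version pairs) to a file that can be attached to the issue.
python3 -m pip freeze > packages.txt
```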
I'm noticing the same memory issue. I'll leave VS Code running for about 1 hr and it will end up using 10 GB of RAM. OS version: Windows 10.0.18362.175
Please try the daily build, which has a few more fixes (mainly #1316). Currently 0.3.28.
Hi, |
Closing then.
@MikhailArkhipov I'm still having problems on version 0.3.30. I'm using the setting. Attached is a usage graph collected from the performance monitor. I've also added the CPU usage (in blue) to the graph, since it seems correlated with the memory increase. I can share the raw data from the performance monitor if that would be useful.
I've also tried with today's daily, version 0.3.36. @MikhailArkhipov, can you reopen the issue? Thanks
Is there some documentation on how I can profile the memory used by the language server? Sadly, I cannot share the project I'm working on, but I would like to help solve this if I can.
There are tools in Visual Studio (Analyze menu) as well as in Debug | Windows | Show Diagnostic Tools.
Actually, we don't necessarily need your project. The issue may be in some large library you're importing, so a list of commonly imported and installed packages and the type of environment (regular, virtual, Anaconda) would help. No need to publish private pieces.
Thanks for the advice, I'll try to profile the memory tomorrow. I've shared all the packages I have in the environment in this comment: #1298 (comment) above. I don't think my project can be considered large, since it's less than 20k LOC.
Attached is a profiler session file that can be opened with https://memprofiler.com/. I don't know how to properly use the tool, but it seems most of the memory falls in the category "unused overhead". I hope it helps solve this.
I'm not using |
@CaselIT, do you have any big files in your repo? 50 KB or larger?
We just built 0.3.46 to the beta/daily channels with a fix for when modules get reloaded (among other potential fixes), if you'd like to retry. (@juliotux, since you had pointed out the pip issues.)
#1407 may also be a source of some leaks, but I'd think only a small contribution. |
I think it makes sense to persist some of the analysis work into a static file.
It does; ongoing work is in #472. But a leak is a leak, and before static storage is created, regular analysis needs to run.
Wow... that's a fast reply... great thing to know, thanks!
Thanks for the updates. I'm on holiday at the moment; when I get back I'll try it and report back.
On 0.3.46 the leaking behavior seems to be solved. Now the LS reaches around 1 GB while analyzing the files, and afterwards drops to around 400 MB. Multiple |
Sorry for the long absence, but I've been on holiday and then had to work on another project for a bit. This week I've returned to the project that has been causing problems, and I still have large memory issues. Attached is a memory dump. I haven't been tracking its memory usage, but I've noticed that sometimes it decreases: a few minutes before dumping the memory to collect the stats, the language server was using ~8 GB, so some progress seems to have happened there. Hope it helps. I can track its memory usage if that may help.
Just a follow-up. I'm currently working on the sqlalchemy library, so I have it installed in editable mode with pip. I was using sqlalchemy on my original project too. I still cannot trigger the leak on demand, but at least it is now happening on an open source project, so it may be easier to reproduce.

I'm using an env only for this, so there are not many packages installed (sadly I had jupyterlab installed in the same env, which increases the number of packages, but I'm not using it or referencing it in the project):

Packages
Package             Version       Location
------------------  ------------  -----------------
apipkg              1.5
appdirs             1.4.3
asn1crypto          1.2.0
atomicwrites        1.3.0
attrs               19.2.0
backcall            0.1.0
black               19.3b0
bleach              3.1.0
certifi             2019.9.11
cffi                1.13.0
Click               7.0
colorama            0.4.1
cryptography        2.7
decorator           4.4.0
defusedxml          0.6.0
entrypoints         0.3
execnet             1.7.1
importlib-metadata  0.23
ipykernel           5.1.2
ipython             7.8.0
ipython-genutils    0.2.0
jedi                0.15.1
Jinja2              2.10.3
json5               0.8.5
jsonschema          3.0.2
jupyter-client      5.3.3
jupyter-core        4.5.0
jupyterlab          1.1.4
jupyterlab-server   1.0.6
MarkupSafe          1.1.1
mistune             0.8.4
mock                3.0.5
more-itertools      7.2.0
nbconvert           5.6.0
nbformat            4.4.0
notebook            6.0.1
packaging           19.2
pandocfilters       1.4.2
parso               0.5.1
pickleshare         0.7.5
pip                 19.2.3
pluggy              0.13.0
prometheus-client   0.7.1
prompt-toolkit      2.0.10
psycopg2            2.8.3
py                  1.8.0
pycparser           2.19
Pygments            2.4.2
PyMySQL             0.9.3
pyparsing           2.4.2
pyrsistent          0.15.4
pytest              5.2.1
pytest-forked       1.0.2
pytest-xdist        1.30.0
python-dateutil     2.8.0
pywin32             225
pywinpty            0.5.5
pyzmq               18.1.0
Send2Trash          1.5.0
setuptools          41.4.0
six                 1.12.0
SQLAlchemy          1.4.0b1.dev0  c:\sqlalchemy\lib
terminado           0.8.2
testpath            0.4.2
toml                0.10.0
tornado             6.0.3
traitlets           4.3.3
wcwidth             0.1.7
webencodings        0.5.1
wheel               0.33.6
wincertstore        0.2
zipp                0.6.0

I'll try recreating the env with only sqlalchemy and its dependencies to check if the problem persists.
Cloning and opening |
Since about version 0.5 I don't seem to notice overly large memory usage; sometimes I've seen a couple of GB, but I have not had to kill the language server for it. I'm not monitoring it closely, though. I'm not sure if the underlying issue has been solved or not, but I think this can be closed for now. If I notice the same high usage again I'll reopen this (or open a new one).
Thanks! 2 GB is high-ish, but with large libraries that sometimes happens during peak consumption, and the memory should be released when analysis is done.
Thanks for the improvements you and your team have been making to the language server 👍
The fix implemented in #832 worked for some releases, but I'm having high memory usage problems (10 GB+) again with version 0.3.20.
This seems to be due to a memory leak: it finishes analyzing the project without problems, using less than 1 GB in my case. After working on the project for some time, it starts using more memory, eventually exceeding 10 GB.
I've not kept note of the details regarding how long it takes or whether there is an action that triggers it. I usually notice a slowdown, check the task manager, and find the language server using many GB of memory. I have not kept track of whether the increase in memory usage is sudden or gradual.
Below are the packages used by the project and some system information. I haven't checked whether I can reproduce this issue in other projects.
Is there some logging or telemetry I can enable to help with this issue?
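Absent built-in telemetry, one way to see whether the growth is sudden or gradual is to sample the server process's resident memory periodically. A minimal Linux-only sketch reading `/proc` (a hypothetical helper, not something the extension provides, and it would need a different mechanism on the Windows setup described here):

```python
import os
import time


def rss_kb(pid: int) -> int:
    """Return the resident set size of `pid` in kB (Linux /proc only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # VmRSS is reported in kB
    return 0


def sample(pid: int, interval_s: float = 5.0, count: int = 3) -> None:
    """Print `count` timestamped RSS samples for `pid`, `interval_s` apart."""
    for _ in range(count):
        print(f"{time.strftime('%H:%M:%S')}  {rss_kb(pid)} kB")
        time.sleep(interval_s)


if __name__ == "__main__":
    # Demonstration: sample our own process; in practice you would pass
    # the language server's PID from the task manager.
    sample(os.getpid(), interval_s=0.1, count=3)
```

Logging the samples over an editing session would show whether the jump to many GB happens at once or accumulates gradually.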
Extension version: 2019.6.22090
Microsoft Python Language Server version 0.3.20.0
Python version: 3.6.8
VS Code version: Code 1.36.0 (0f3794b38477eea13fb47fbe15a42798e6129338, 2019-07-03T13:25:46.372Z)
OS version: Windows_NT x64 10.0.18362
requirements of the project
argon2_cffi==18.3.0
python-dateutil==2.7.5
decorator==4.3.0
falcon>=2,<3
falcon-auth==1.1.0
falcon-cors==1.1.7
graphene==2.1.3
graphene_sqlalchemy==2.0.0
jsonschema==2.6.0
keras>=2.1.2
numpy>=1.13.3
ortools<7.1
pandas>=0.22.0,!=0.24.0
psycopg2-binary>=2.7.3.2
psycopg2<2.8
pyjwt>=1.6.4
scikit-learn>=0.19.1
scipy>=1.0.0
SQLAlchemy<1.3.0
SQLAlchemy-Utils>=0.32.21
sqlalchemy-postgres-copy>=0.5.0
simplejson>=3.13.2
tensorflow==1.12.0
pyDOE>=0.3.8
geomdl>=4.1.0
pyomo==5.5.0
pyutilib>=5.6.3
joblib==0.11
pytest>=4.4.0
pytest-cov>=2.6.1
yapf>=0.20.0,!=0.27
flake8>=3.7.0
matplotlib>=2.2.3
waitress==1.1.0
pydot==1.2.4
System Info
flash_3d: enabled
flash_stage3d: enabled
flash_stage3d_baseline: enabled
gpu_compositing: enabled
multiple_raster_threads: enabled_on
native_gpu_memory_buffers: disabled_software
oop_rasterization: disabled_off
protected_video_decode: enabled
rasterization: enabled
skia_deferred_display_list: disabled_off
skia_renderer: disabled_off
surface_synchronization: enabled_on
video_decode: enabled
viz_display_compositor: disabled_off
webgl: enabled
webgl2: enabled