I believe there may be a memory leak when we repeatedly cannot connect to redis. I first noticed this because our app was misconfigured and kept trying and failing to connect to redis; memory usage would grow and grow until after a few hours Kubernetes would OOM kill it.
I am not experienced in tracking node memory leaks but I think I have uncovered evidence by running this script locally:
const { createClient } = require("redis");

const client = createClient({
  url: "redis://localhost:11111", // anywhere redis isn't running, or a bad hostname
  // this line optional but creates objects faster:
  socket: { reconnectStrategy: retries => 0 }
});

client.on('error', () => {});
client.connect();
I ran that with node --inspect; in DevTools I went to the Memory tab and took two consecutive Heap Snapshots with a few seconds in between. Selecting the second snapshot, I changed "All Objects" at the top to "Objects allocated between Snapshot 1 and Snapshot 2". It shows a large number of Promises and other objects created (in some cases I've also seen a small number of TimersList objects rank high). Additional snapshots just get bigger and bigger in terms of MB.
When I run with --inspect I can also see the node process's RSS memory in ps aux getting bigger and bigger. For some reason, without the flag, it still grows but seems to grow more slowly (I have to keep it running for a few minutes to notice growth beyond what is GC'd).
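As a simpler check without DevTools, the same repro can log memory from within the process (a minimal sketch; the one-second interval is my own arbitrary choice):

const { createClient } = require("redis");

const client = createClient({
  url: "redis://localhost:11111",
  socket: { reconnectStrategy: retries => 0 }
});
client.on('error', () => {});
client.connect();

// Log RSS and used heap once a second; both climb steadily while the
// client keeps failing to connect.
setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  console.log(`rss=${(rss / 1024 / 1024).toFixed(1)}MB heapUsed=${(heapUsed / 1024 / 1024).toFixed(1)}MB`);
}, 1000);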
This doesn't happen when redis connects successfully. I also don't see it in version 3 of the package; here is what I used there:
const redis = require("redis");
const { createClient } = redis;

const client = createClient({
  url: "redis://localhost:11111",
  retry_strategy: options => {
    //console.log("error!"); // this logs here
    return 0;
  }
});

client.on('error', console.log); // doesn't seem to log anything but whatever
client.set("key", "value", redis.print);
I tried debugging a bit but didn't get anywhere. The Promises in DevTools looked to be related to _RedisClient_isolationPool, but I commented out most of the code related to that and it still seemed to be happening. I've also tried running with --expose-gc and manually GCing every second; that didn't change much.
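For clarity, this is roughly what I mean by manually GCing (a sketch; global.gc only exists when the flag is passed):

// Run with: node --expose-gc script.js
setInterval(() => {
  if (typeof global.gc === 'function') {
    global.gc(); // force a full collection; memory still climbs regardless
  }
}, 1000);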
Since this only affects things when redis repeatedly can't connect, I realize this is probably not a priority bug, so feel free to ignore it. I did want to share what I saw and found, though, in case anyone else comes across it.
Environment:
Node.js Version: v14.16.1
Redis Server Version: n/a
Node Redis Version: 4.1.0, 4.0.0-rc.3
Platform: Ubuntu 19.04, 20.04.4
I experienced the same issue; the heap dump I got seems to indicate an infinite chain of promises:
It looks to me like the recursive call to this.#connect in packages/client/lib/client/socket.ts (source code 🔗) is the culprit behind this chain. None of those promises can be garbage collected until the last one in the chain settles, and the exit condition for this recursive loop is this.reconnectStrategy(...) returning an error.
By default, reconnectStrategy returns Math.min(retries * 50, 500), so it never exits.
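To make the mechanism concrete, here is a minimal sketch of that pattern (tryToConnect and the strategy handling are simplified stand-ins, not the actual socket.ts code):

// Each failed attempt awaits the next one, so every promise in the chain
// stays pending (and unreclaimable) until reconnectStrategy returns an Error.
const net = require('net');

function tryToConnect() {
  // hypothetical stand-in for the real socket connect
  return new Promise((resolve, reject) => {
    const socket = net.createConnection(11111, 'localhost');
    socket.once('connect', resolve);
    socket.once('error', reject);
  });
}

const reconnectStrategy = retries => Math.min(retries * 50, 500); // the default: never an Error

async function connect(retries = 0) {
  try {
    await tryToConnect();
  } catch (err) {
    const next = reconnectStrategy(retries); // number = retry delay in ms, Error = give up
    if (next instanceof Error) throw next;
    await new Promise(resolve => setTimeout(resolve, next));
    return connect(retries + 1); // the recursion keeps the whole chain of promises alive
  }
}

connect();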
@evanbattaglia A workaround is to implement your own exit condition by overriding the reconnectStrategy function, based on your example:
const { createClient } = require('redis')

const client = createClient({
  url: 'redis://localhost:11111', // anywhere redis isn't running, or a bad hostname
  // this line optional but creates objects faster:
  socket: {
    reconnectStrategy: (retries) => {
      if (retries > 100) return new Error('Too many attempts')
      return 0
    }
  },
})

client.on('error', () => {})
client.connect()
EDIT: found this article mentioning the same sort of behaviour and outcome 🤷