Yesterday, I noticed the miners on my Mac Mini (M4 Pro, 64GB RAM) had gone into swap, as each process's memory had climbed to over 30GB. I ctrl-C’d each (neither went down particularly gracefully, but they did go down) and restarted them uneventfully; both resumed mining where they left off, and both had a memory footprint of around 17.5GB, which is where they started when I originally ran them a few days ago. This morning, I see they’ve both crept up to 19GB and seem to be slowly rising again. (I’m using the code unmodified from GitHub, with no variations from the make/build instructions other than those needed to run two miners on the same box.) Anyone else with a similar setup seeing the same? Happy to run additional diagnostics if it would be of use to the core team.
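In case it helps, here’s a minimal sketch of the kind of diagnostic I could run: it samples the resident set size (RSS) of each miner process once a minute and appends it to a CSV, so the growth curve can be plotted later. The process name "miner", the log path, and the interval are placeholders I made up, not anything from the official repo; adjust them to your own setup.

```
import csv
import time

import psutil  # third-party: pip install psutil

# "miner" is a placeholder -- substitute whatever the actual miner
# binary is called on your system.
PROCESS_NAME = "miner"
INTERVAL_SECONDS = 60
LOG_PATH = "miner_memory.csv"


def sample_miners():
    """Return (pid, rss_bytes) for every process whose name matches."""
    rows = []
    for proc in psutil.process_iter(["pid", "name", "memory_info"]):
        try:
            if PROCESS_NAME in (proc.info["name"] or ""):
                rows.append((proc.info["pid"], proc.info["memory_info"].rss))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return rows


def main():
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            ts = time.strftime("%Y-%m-%dT%H:%M:%S")
            for pid, rss in sample_miners():
                # Log RSS in GiB so the creep is easy to eyeball later.
                writer.writerow([ts, pid, round(rss / 2**30, 2)])
            f.flush()
            time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```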
I’ve observed the same on Linux. I killed one node after a while, and the remaining one has since absorbed the memory I freed up. I’m still not sure whether this will become problematic; it could just be a cache of some kind.
Let’s emphasize that, for now, this is only a possible memory leak.
It was definitely a problem for me: one of the two miners was chewing up a full CPU core but was twenty blocks behind and not pulling in any new ones (the other miner was doing fine, to all appearances).
I’ve seen one process swell to over 80GB of RAM on Linux. Depending on the number of nodes you’re running, it might be prudent for miners to run a restart script on their machines to avoid this issue (see the sketch below). It’s not clear whether the worker threads legitimately need over 64GB of RAM, or whether it is indeed a “memory leak”.
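To make the restart-script idea concrete, here’s a rough watchdog sketch, assuming the node is launched with a command like ./miner and should be bounced once its RSS crosses a threshold. The command, the 40GB limit, and the check interval are all assumptions on my part, not project defaults.

```
import subprocess
import time

import psutil  # third-party: pip install psutil

# All of these are assumptions -- adjust to your own setup.
MINER_CMD = ["./miner"]           # however you normally launch a node
RSS_LIMIT_BYTES = 40 * 2**30      # restart once RSS exceeds ~40 GiB
CHECK_INTERVAL_SECONDS = 300


def run_with_memory_watchdog():
    while True:
        proc = subprocess.Popen(MINER_CMD)
        handle = psutil.Process(proc.pid)
        while proc.poll() is None:
            time.sleep(CHECK_INTERVAL_SECONDS)
            try:
                rss = handle.memory_info().rss
            except psutil.NoSuchProcess:
                break
            if rss > RSS_LIMIT_BYTES:
                # Mirror a ctrl-C: SIGTERM first, SIGKILL if it hangs.
                proc.terminate()
                try:
                    proc.wait(timeout=60)
                except subprocess.TimeoutExpired:
                    proc.kill()
                break
        # Loop around and relaunch; the node should resume where it
        # left off, as it did after the manual restarts described above.


if __name__ == "__main__":
    run_with_memory_watchdog()
```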