Now NOT leaking memory, buuut…

I restarted my two miners (both on one Mac Mini, M4 Pro, 64GB RAM) earlier because their growing memory use wound up swapping. On this restart, CPU (100% of a core) and memory (started at 17.5GB per miner and grew to 20GB each) are holding steady, but both seem to have stalled. One last processed block 352 three hours ago (10:04 GMT) and the other’s last block was 344 yesterday (23:07 GMT); since then it’s been an unbroken string of periodic candidate block timestamp updated messages from both. Oh wait, one of them just gave me a potential reorg. I’ll leave them like this for a few more hours before restarting, in case anyone wants to request diagnostics or ask follow-up questions. Any clarity anyone can provide on this behavior would be appreciated.


The network came under attack yesterday from a specific IP address. Here is the info:

The network is currently experiencing a denial of service attack from this IP address. If you see your node getting overloaded with an extreme number of requests per second, please add this IP address to your firewall:

216.82.192.27

Also ensure that you are running the latest code on master and have rebuilt the hoon and the binary.
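If you haven’t updated in a while, the update itself is just a pull on master followed by your usual rebuild; the exact hoon/binary build steps are whatever the repo’s README describes, so this is only the generic part:

```
# Pull the latest code on master, then rebuild as you normally would
git fetch origin
git checkout master
git pull origin master
# ...followed by the repo's usual hoon and binary build steps
```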

It looks like they’re sending out huge numbers of elder block requests; I was seeing about a million log entries per hour about that IP overnight. I banned them using ufw and conntrack.

IMPORTANT:

for the firewall block to work:

  • Block both outgoing and incoming traffic: your peer will try to reach out to the IP based on gossip, so blocking inbound alone isn’t enough.

  • Put the more specific deny rules higher in the ufw list than the more general allow rules.

  • Once the firewall rules are enabled, you will want to drop all existing source and destination streams between that IP and your server using conntrack (ask ChatGPT, or see the rough sketch after this list).

  • Restarting your peer won’t do anything by itself; the kernel keeps the streams open.
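A rough sketch of what this can look like on a Linux box with ufw and conntrack; the rule positions (1 and 2 here) are just placeholders, so adjust them so the deny rules sit above your own allow rules:

```
# Deny all traffic from/to the attacking IP, placed above more general allow rules
sudo ufw insert 1 deny from 216.82.192.27 to any
sudo ufw insert 2 deny out to 216.82.192.27
sudo ufw status numbered    # verify the deny rules sit above the allow rules

# The firewall only affects new connections, so also flush the existing
# conntrack entries involving that IP in either direction
sudo conntrack -D --src 216.82.192.27
sudo conntrack -D --dst 216.82.192.27
```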

hope this helps


Are you still running the code from last week, or did you recompile and update it? I guess I wonder if the memory stability is due to new code, or upgrades to the network.

My code is from 5 days ago, so it doesn’t contain the newest update to consensus.hoon. In other words, I’m running the exact same code that saw the ballooning memory use earlier.

(Now getting around to applying the recommended firewall updates and then the code update.)


I was trying to estimate how much mining my nodes were doing by seeing how many %mining log entries there were. The stats were terrible, but given the possible DDoS attack (I did see a lot of requesting elders entries…), I’ll have to configure my firewall and run my node tests all over again… oh well.
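For what it’s worth, my counting was nothing fancier than grepping the logs, roughly like this (the log file name and the leading HH:MM:SS timestamp are assumptions about my own setup, not anything standard):

```
# Total %mining entries in a miner's log
grep -c '%mining' miner.log

# Rough per-hour breakdown, assuming each line starts with an HH:MM:SS timestamp
grep '%mining' miner.log | cut -c1-2 | sort | uniq -c

# And a feel for the attack traffic mentioned above
grep -c 'requesting elders' miner.log
```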


Because I’m running on macOS, I configured pf to ban that address both in and out, and made sure it was set to run on restart.
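For anyone else on macOS, the pf side looked roughly like this; treat it as a sketch, since the table name is arbitrary and how you persist and enable it across reboots (e.g. via a LaunchDaemon) depends on your setup:

```
# Added to /etc/pf.conf: a table holding the banned IP, blocked in both directions
table <banned> persist { 216.82.192.27 }
block drop in quick from <banned> to any
block drop out quick from any to <banned>
```

Then reload and enable with `sudo pfctl -f /etc/pf.conf` and `sudo pfctl -e`, and confirm the table contents with `sudo pfctl -t banned -T show`.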

Pulled down the update to consensus.hoon, did all the remakes/rebuilds, and now I’ll see how things normalize with regard to 1) catching up to the latest blocks, 2) continuing to mine, and 3) the memory profile. Updates to come later!


Both miners are apparently mining, running at 200% CPU and holding steady at 17.6GB of RAM each, but one of the two had a looong string of Dropping inbound stream because we are at capacity messages and the other didn’t. The former is now about thirty blocks behind the latter, although both continue to add blocks. We’ll see if it ever catches up.
