[Bug] Tons of "WARNING Rate limiting ourselves. message type: respond_peers" messages in log
See original GitHub issue
What happened?
After upgrading to 1.5.1 (from 1.5.0), the full node shows a large number of “WARNING Rate limiting ourselves. message type: respond_peers” messages in debug.log. The full node is in sync, most connected peers are in sync, and I have even farmed a block since the upgrade, but these warning messages are filling up the log.
Version
1.5.1
What platform are you using?
Linux
What UI mode are you using?
CLI
Relevant log output
2022-08-23T22:56:57.376 full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_proof_of_weight, peer: 31.19.126.71
2022-08-23T22:56:58.384 full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_proof_of_weight, peer: 31.19.126.71
2022-08-23T22:56:59.391 full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_proof_of_weight, peer: 31.19.126.71
2022-08-23T22:57:01.625 full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_peers, peer: 190.16.85.253
2022-08-23T22:57:01.626 full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_peers, peer: 109.111.131.6
Last 5 lines of log
Since the upgrade, 1851 lines of this message type have shown up in the log, from a total of 14 unique IPs.
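Counts like the one above can be reproduced from debug.log with a short script. This is a minimal sketch, assuming the log lines follow the exact format shown in the excerpt; the sample lines here are hypothetical, not taken from a real node.

```python
import re

# Hypothetical sample lines in the same format as the debug.log excerpt above.
log_lines = [
    "2022-08-23T22:57:01.625 full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_peers, peer: 190.16.85.253",
    "2022-08-23T22:57:01.626 full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_peers, peer: 109.111.131.6",
    "2022-08-23T22:57:02.100 full_node full_node_server : INFO unrelated line",
]

# Capture the message type and the peer address from each warning line.
pattern = re.compile(r"Rate limiting ourselves\. message type: (\S+), peer: (\S+)")

counts = 0
peers = set()
for line in log_lines:
    match = pattern.search(line)
    if match:
        counts += 1
        peers.add(match.group(2))

print(f"{counts} warnings from {len(peers)} unique IPs")
# → 2 warnings from 2 unique IPs
```

Pointing the same loop at an open file handle for `~/.chia/mainnet/log/debug.log` instead of the sample list gives the totals for a real node.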
Issue Analytics
- State:
- Created a year ago
- Comments:44 (7 by maintainers)
Top GitHub Comments
I’m farming solo and I have this problem too. I also see a few of this message, though far less frequently than the respond_peers one:
full_node full_node_server : WARNING Rate limiting ourselves. message type: respond_end_of_sub_slot, peer:
And also this one (repeated with the node ID of each harvester, several times):
farmer chia.plot_sync.receiver : ERROR reset: node_id ebd9821959799dff28ccf5f5d0efeef9b341444131349a1029678f41dc7df54e, current_sync: [state 0, sync_id 0, next_message_id 0, plots_processed 0, plots_total 0, delta [valid +0/-0, invalid +0/-0, keys missing: +0/-0, duplicates: +0/-0], time_done None]
The last message has its own topic already.
The warnings and the error came with version 1.5.1. I took another look in the archived logs to see whether the errors were there before and how frequently they occurred, but couldn’t find them. It is pretty clear to me that 1.5.1 is the culprit for introducing all these problems.
There are some config.yaml settings you can adjust if you want:
- `log_maxfilesrotation` — keep this many log files in rotation; default is 7.
- `log_maxbytesrotation` — max bytes in the log before rotating; default is 52428800 (50 MiB).
- `log_use_gzip` — compress rotated logs; default is false.
So for example, you could keep more but smaller rotated logs, etc.
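As a sketch, keeping twice as many rotated files at half the size might look like the following config.yaml fragment. The specific values are illustrative, and this assumes the keys sit under the `logging:` section of the node's config.yaml.

```yaml
# Hypothetical excerpt from config.yaml
logging:
  log_maxfilesrotation: 14        # keep 14 rotated files instead of the default 7
  log_maxbytesrotation: 26214400  # rotate at 25 MiB instead of the default 50 MiB
  log_use_gzip: true              # compress rotated logs (default false)
```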