Too many open files
Describe the bug
Since the last update (though I rather suspect it is a coincidence), the Rpi-CCU at some point simply becomes unreachable. The system locks up completely. This currently repeats irregularly every few days.
The hmserver.log is flooded with the following log lines:
Jun 24 11:54:02 org.apache.http.impl.client.DefaultHttpClient INFO [raspberrypi:hm-rpc.1_WorkerPool-0] I/O exception (java.net.SocketException) caught when connecting to {}->http://192.168.2.83:2010: Too many open files
Jun 24 11:54:02 org.apache.http.impl.client.DefaultHttpClient INFO [raspberrypi:hm-rpc.1_WorkerPool-0] Retrying connect to {}->http://192.168.2.83:2010
At the times of the crashes, this entry then appears as well:
Jun 24 20:23:05 de.eq3.cbcs.server.core.persistence.AbstractPersistency ERROR [vert.x-worker-thread-1] AP 3014F711A0001F5A4993DDAD: The config file for the AP could not be written
java.nio.file.FileSystemException: /etc/config/crRFD/data/3014F711A0001F5A4993DDAD.ap: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at java.nio.file.Files.newOutputStream(Files.java:216)
at de.eq3.cbcs.persistence.kryo.KryoPersistenceWorker.saveAccessPoint(KryoPersistenceWorker.java:543)
at de.eq3.cbcs.persistence.kryo.KryoPersistenceWorker.handle(KryoPersistenceWorker.java:210)
at de.eq3.cbcs.persistence.kryo.KryoPersistenceWorker.handle(KryoPersistenceWorker.java:89)
at io.vertx.core.eventbus.impl.HandlerRegistration.deliver(HandlerRegistration.java:212)
at io.vertx.core.eventbus.impl.HandlerRegistration.handle(HandlerRegistration.java:191)
at io.vertx.core.eventbus.impl.EventBusImpl.lambda$deliverToHandler$3(EventBusImpl.java:505)
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:337)
at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:60)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The default ulimit -n was 1024. I have now raised it to >8k; let's see what happens.
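For context, a minimal sketch of how the limit can be checked and raised in a generic Linux shell; the persistent mechanism mentioned at the end (/etc/security/limits.conf) is an assumption and may not apply to the RaspberryMatic firmware:

# Show the current soft limit on open file descriptors for this shell
ulimit -n

# Raise the soft limit for the current session only (lost on logout/reboot)
ulimit -n 8192

# System-wide ceiling on open files across all processes
cat /proc/sys/fs/file-max

# On a stock Linux distribution a persistent per-user limit would normally go
# into /etc/security/limits.conf; whether RaspberryMatic honours this file is
# an assumption -- the firmware may use its own mechanism:
#   *  soft  nofile  8192
#   *  hard  nofile  16384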
Steps to reproduce the behavior
- unknown
- wait some days
Expected behavior
An Rpi-CCU that runs stably =)
Screenshots
System information:
- Current firmware version: 3.57.5.20210525
- DC 20%
- CS 0%
- HM & HM-IP radio components
- Raspberry PI 3
- RPI-RF-MOD
- raspberrymatic 5.10.17 #1 SMP PREEMPT Tue May 25 10:15:23 UTC 2021 aarch64 GNU/Linux
- Mem Total: 978720 Mem Free: 436696
- Load: 0.08 0.17 0.09
- /root 59% used
- vcgencmd measure_temp: 61 °C
- cat /proc/sys/fs/file-max: 5088 0 90746
- ls -of | wc -l: 4826
- ulimit -a:
data seg size (kb) (-d) unlimited
scheduling priority (-e) 0
file size (blocks) (-f) unlimited
pending signals (-i) 3554
max locked memory (kb) (-l) 64
max memory size (kb) (-m) unlimited
open files (-n) 1024
POSIX message queues (bytes) (-q) 819200
real-time priority (-r) 0
stack size (kb) (-s) 8192
cpu time (seconds) (-t) unlimited
max user processes (-u) 3554
virtual memory (kb) (-v) unlimited
file locks (-x) unlimited
"open files" has now been raised to >8k.
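Since the ulimit -a output only shows the per-process limit, it also helps to see which process is actually holding the descriptors. A generic sketch (run as root; no RaspberryMatic-specific tooling is assumed):

# Count open file descriptors per process and list the ten worst offenders.
# Each entry in /proc/<pid>/fd is a symlink to one open descriptor.
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  echo "$n $(cat "$pid/comm" 2>/dev/null) (pid ${pid#/proc/})"
done | sort -rn | head -10

# For a single suspect (e.g. the HMServer Java process), check what the
# descriptors point to; a large number of sockets usually means connections
# are not being closed:
#   ls -l /proc/<pid>/fd | grep -c socket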
Additional context none
Top GitHub Comments
To improve future analyses of this, I have now added monitoring of the maximum number of file descriptors to the monit watchdog. You should now at least receive a WatchDog notification when more than 95% of the available file descriptors are in use.
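The actual check is not included in this issue, so the following is only a generic sketch of how such a threshold test could be implemented as a monit check program; the script name, install path and the 95% threshold are assumptions:

#!/bin/sh
# check_fd_usage.sh -- exit non-zero when more than 95% of the system-wide
# file descriptors are in use. /proc/sys/fs/file-nr reports three values:
#   <allocated> <unused-but-allocated> <maximum>
read allocated unused maximum < /proc/sys/fs/file-nr

used_pct=$(( allocated * 100 / maximum ))
echo "file descriptors: ${allocated}/${maximum} (${used_pct}% used)"

if [ "$used_pct" -gt 95 ]; then
    exit 1  # monit treats a non-zero exit status as a failed check
fi
exit 0

In monitrc this could then be referenced roughly as: check program fd-usage with path "/usr/local/bin/check_fd_usage.sh" if status != 0 then alert (again, the path and service name are assumptions).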
Since the poll events in ioBroker have been set to 900 seconds, there have been no more problems. Previously they were set to 180 seconds. I suspect that something slows down the processing considerably, so everything piles up.
What exactly, I unfortunately have not been able to find.