Log rotation on output log file causes empty log files
My logrotate.d script:
```
/var/log/myfirstforeverscript.log {
    daily
    rotate 10
    missingok
    notifempty
    compress
    sharedscripts
}
```
forever runs in daemon mode:

```
forever start -a -l /var/log/myfirstforeverscript.log -c node --pidfile /var/run/myfirstforeverscript.pid /myfirstforeverscript.js
```
After logrotate runs overnight, it rotates the old log off to the compressed file fine, but forever then doesn't continue writing to the main log file.
It's most likely something wrong with my logrotate.d script, but any ideas?
Issue Analytics
- State:
- Created: 12 years ago
- Reactions: 3
- Comments: 47 (5 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Having done server maintenance for several years, I can say this is a common issue that is not specific to `forever`. My understanding of what's going on is this: while you appear to be logging to a file, you are really logging to a file descriptor. After log rotation by an external tool, the application continues to log to the same file descriptor, but that descriptor is no longer connected to the file, which has been re-created by log rotation. While the new log file may be empty, your disk usage may well still be growing.
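The fd-vs-filename problem described above can be demonstrated without forever or logrotate at all; the sketch below simulates rotation with a plain `mv` while a background writer holds the file open (file names here are arbitrary):

```shell
# A writer keeps its file descriptor after the file is renamed (which is
# what logrotate's default rename-based rotation does), so the re-created
# log stays empty while writes keep landing in the renamed file.
cd "$(mktemp -d)"
( for i in 1 2 3; do echo "line $i"; sleep 0.3; done ) > app.log &
sleep 0.4
mv app.log app.log.1   # simulate rotation by renaming the log
touch app.log          # re-create the "current" log file
wait                   # let the writer finish
wc -c < app.log        # 0: the new file never receives any writes
wc -l < app.log.1      # 3: all writes followed the open fd to the renamed file
```

The writer never reopens `app.log`, so every line it emits ends up in `app.log.1`; the fresh `app.log` is the empty file the original question describes.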
Possible solutions to log rotation complications
logrotate and copytruncate
Above there was a recommendation to use `logrotate` with the `copytruncate` option. This is designed to work around the file-descriptor-vs-file issue described above by leaving the relationship intact: instead of renaming the file, it is first copied to the rotated location and then truncated back to an empty state. This works, but feels like a workaround.
Restart the app
`logrotate` and similar tools can send a command to restart the app during log rotation so that the filename-vs-file-descriptor relationship gets refreshed. This works too. But if, like me, you are also on call to respond to problems with apps restarting at midnight, you would probably prefer a solution that doesn't touch your application in the middle of the night. (What could go wrong with simply restarting an app in the middle of the night?)
Build log rotation into forever
You could submit a pull request that adds log rotation to `forever`, but this is a general problem. Does it make sense for every single server or process supervisor to roll its own log rotation solution? Surely there's a more general solution to this.
Log directly from your app over the network to syslog or a 3rd-party service
This avoids the direct use of log files, but most of the options I've looked at for this in Node.js share the same design flaw: they don't (or didn't recently) handle the "sad path" of the remote logging server being unavailable. If they coped with it at all, the solution was to put buffered records into an in-memory queue of unlimited size. Given enough logging or a long enough outage, memory would eventually fill up and things would crash. Limiting the queue size would address that, but it illustrates a point: designing robust network services is hard. You are likely busy building and maintaining your main application. Do you want to also be responsible for the memory, latency and CPU concerns of a network logging client embedded in your application?
For reference, I've opened related bug reports about this issue. If you are using a module that logs over the network directly, you might wish to check how it handles the possibility that the network or logging service is down.
Log to STDOUT and STDERR, use syslog
If your application simply logs to STDOUT and STDERR instead of a log file, then you've eliminated the problematic direct use of log files and created a foundation for letting something that specializes in logging handle the logs.
I recommend reading the post Logs are Streams, Not Files, which makes a good case for logging to STDOUT and shows how you can pipe your logs from there to `rsyslog` (or another syslog server), tools that specialize in being good at logging. They can do things like forward your logs to a third-party service like LogEntries, and handle potential network issues there, outside your application.

Logging to STDOUT and STDERR is also considered a best practice in the App Container Spec. I expect to see more of this logging pattern as containerization catches on.
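The stream-based pattern can be sketched in a few lines of shell; here `run_app` is a hypothetical stand-in for your Node app, and the `sed` pipeline stands in for a real consumer such as `logger(1)` feeding rsyslog:

```shell
# Sketch: the app writes only to STDOUT/STDERR; one pipeline hands the
# merged stream to whatever consumer specializes in logging.
run_app() {
  echo "listening on :3000"     # normal output goes to STDOUT
  echo "oops, a warning" >&2    # errors go to STDERR
}

# Merge both streams and pass them to the consumer:
run_app 2>&1 | sed 's/^/[myapp] /'
```

In real use the consumer would be something like `node app.js 2>&1 | logger -t myapp`, after which rsyslog handles rotation and forwarding instead of your application.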
There are also good arguments out there for logging as JSON, but I won’t detour into that now.
Log to STDOUT, use systemd
`systemd` can do process supervision (like forever), including for user-owned services, not just root. It's also designed to handle logging that services send to STDOUT and STDERR, and has a powerful `journalctl` tool built in. There's no requirement that your process supervisor be written in Node.js just because your app is.

Systemd will be standard in future Ubuntu releases and is already standard in Fedora. CoreOS uses systemd inside its containers to handle process supervision and logging, but also because it starts in under a second.
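For the asker's script, a minimal systemd unit might look like the sketch below (the unit name and node path are assumptions, reusing the script path from the question):

```ini
# /etc/systemd/system/myfirstforeverscript.service (hypothetical)
[Unit]
Description=My first script, supervised by systemd

[Service]
ExecStart=/usr/bin/node /myfirstforeverscript.js
Restart=always
# STDOUT/STDERR go to the journal by default; read them with:
#   journalctl -u myfirstforeverscript

[Install]
WantedBy=multi-user.target
```

The journal handles storage, rotation, and querying, so neither the app nor a supervisor script touches log files directly.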
How to Log to STDOUT effectively with forever?
About now, you may be looking at the `--fifo` option for `forever`, since it advertises that it sends logs to STDOUT. Perfect! Not quite. Not only is it unclear which logs it would send to STDOUT, it turns out that `--fifo` only applies to the `logs` command; that isn't obvious because the `forever` documentation doesn't tell you which flags go with which commands.

What you might hope works: passing `--fifo` to `start`.
Besides the fact that `--fifo` doesn't currently work with `start`, there's also the issue that `start` runs the app in the background, disconnected from forever's STDOUT and STDERR. This can be solved with the bash feature of process substitution, like this:
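The original code snippet did not survive extraction; under the assumption that it combined `-l` with process substitution and the question's flags, the invocation would presumably be the commented command below. The runnable part demonstrates the `>(...)` mechanism itself:

```shell
# forever may not be installed here; the invocation would presumably be
# (an assumption reconstructed from the surrounding text, not verified):
#   forever start -l >(logger -t myapp) -a -c node /myfirstforeverscript.js
#
# What >(...) does: bash expands it to a /dev/fd path whose contents are
# piped to the inner command's stdin.
bash -c 'echo "hello from the app" > >(sed "s/^/LOG: /" > /tmp/psub_demo.log); sleep 1'
cat /tmp/psub_demo.log    # prints: LOG: hello from the app
```

Anything written to the `/dev/fd` path is consumed by the inner command, which is why forever can treat it exactly like a log file.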
The `>(...)` syntax causes `bash` to substitute the command for a file descriptor that pipes into that command, like a named pipe. When you run `forever list`, you'll see an entry in the logfile column that looks like `/dev/fd/63`. Just like a regular log file, this syntax works even when `start` runs the app in the background.

You can use the same approach with the `-o` and `-e` flags as well. But output sent to `-l` already includes the data sent to both the STDOUT and STDERR that would be logged to the `-o` and `-e` options. (But maybe it shouldn't.) Also, you would end up with three `logger` processes running that way.

You are not limited to piping your logs to `logger`; you can use this syntax to pipe your logs to anything that is designed to receive logs on STDIN.

You should add `copytruncate` to your config file; that does the job.
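Concretely, that means the asker's logrotate.d entry with one line added (a sketch reusing the paths from the question):

```
/var/log/myfirstforeverscript.log {
    daily
    rotate 10
    missingok
    notifempty
    compress
    copytruncate
    sharedscripts
}
```

With `copytruncate`, logrotate copies the file and truncates the original in place, so forever's open file descriptor stays attached to the live log.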