[3.0.0-rc] Exception during formatting kills logging
If a formatter throws an exception, logging stops working – even if the exception is handled.

Related: #1248. Possibly related: #1144.
Recreation:
const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
  format: winston.format.printf((info) => {
    // Set a trap.
    if (info.message === 'ENDOR') {
      throw new Error('ITS A TRAP!');
    }
    return info.message;
  })
});

// Start out logging fine.
console.log('Console: I am the raw console.');
logger.info('Logger: I am the logger.');
logger.info('Logger: I hear Endor is nice this time of year.');

// Trigger the trap. Swallow the error so processing continues.
try {
  logger.info('ENDOR');
} catch (err) {
  console.log('Swallowed Error: ', err);
}

// Confirm that the program is still running.
console.log('Console: Logger? Are you there?');

// All subsequent logger messages silently fail.
logger.info('I can hear you!');
logger.info('Can you hear me?');
logger.info('HELLO?');
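Until this is addressed upstream, a user-level guard can keep a throwing template from silencing the logger. This is a minimal sketch that is independent of winston internals; `safePrintf` is a hypothetical helper name, not part of winston's API:

```javascript
// safePrintf is a hypothetical wrapper: it guards any printf-style template
// so a throwing template degrades to a fallback string instead of letting
// the error escape into the logging pipeline.
function safePrintf(template) {
  return (info) => {
    try {
      return template(info);
    } catch (err) {
      // Keep logging alive; surface the failure in the rendered line.
      return `[format error: ${err.message}] ${String(info.message)}`;
    }
  };
}

// Illustration outside winston: the wrapped template never throws.
const render = safePrintf((info) => {
  if (info.message === 'ENDOR') throw new Error('ITS A TRAP!');
  return info.message;
});
console.log(render({ message: 'hello' }));
console.log(render({ message: 'ENDOR' }));
```

The same wrapped template could be passed to winston.format.printf() in the recreation above so the trap message no longer kills subsequent logging.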
Issue Analytics
- Created: 5 years ago
- Reactions: 4
- Comments: 6 (4 by maintainers)
Top GitHub Comments
The only reason that would be viable for me is because I am using a custom wrapper around winston. For most users, that would mean wrapping every call to logger.log() in a try/catch.

I agree that any format should handle potential errors. However, we all know that not all errors are foreseen and many occur in rare cases. For example, #1248 identifies a case where some objects will cause an Error in a seemingly very trivial format function.

Of course, if that error goes unhandled AND the app is following best practices, the application will terminate. HOWEVER, most logging is done alongside other code which is often wrapped inside try/catch blocks designed to handle functional failures rather than logging failures. Interpreting logging failures as functional failures can cause a host of new side effects; it is impractical to train each such catch block to uniquely handle logging errors.

We’re left with the standard practice being one big try/catch inside every non-trivial format, where “non-trivial” is subject to interpretation.

Yeah… I totally understand that. Without the knowledge of what the formatter was trying to do, how can you meaningfully handle exceptions that come out of it?
Maybe the best that can be done is to catch an error coming out of the call to format.transform() and log the info object with console.error() – at least the information went somewhere and future logging is unaffected. Hopefully applications are doing something with the console error output (instead of 2>/dev/null) since it is always possible for important information to show up there. There is precedent for using console.log() as a last-ditch output mechanism elsewhere in winston@3.
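That fallback idea can be sketched as follows. writeWithFallback and the throwing format stub are illustrative stand-ins, not winston internals:

```javascript
// Illustrative only: catch whatever the format's transform() throws and
// dump the info record via console.error, so the record still goes
// somewhere and subsequent logging is unaffected.
function writeWithFallback(format, info) {
  try {
    return format.transform(info);
  } catch (err) {
    console.error('format threw:', err.message, info);
    return info; // pass the record through untransformed
  }
}

// A format stub that always throws, standing in for a buggy user format.
const throwing = { transform() { throw new Error('ITS A TRAP!'); } };
const result = writeWithFallback(throwing, { level: 'info', message: 'ENDOR' });
// `result` is the original info object; the error went to stderr instead.
```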
@crussell52 dug in and discovered that the silencing was due to the callback in _transform not being invoked. #1347 solves this by re-throwing the error, but ensuring first that the callback is invoked, which allows stream processing to continue if the user decides to catch the error.
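The shape of that fix can be illustrated with a simplified stand-in for _transform (not winston's actual source): capture the error, always invoke the stream callback, then re-throw so a caller's try/catch can see it.

```javascript
// Simplified stand-in for the stream's _transform; the key point is that
// callback() runs even when the format throws, so the stream never stalls.
function _transform(info, callback) {
  let errState;
  try {
    info = this.format.transform(info);
  } catch (err) {
    errState = err; // remember the error, but don't bail out yet
  }
  callback(); // always unblock the stream
  if (errState) throw errState; // let the caller decide how to handle it
}

// Demonstration: the callback fires even though the format throws.
const fakeStream = {
  format: { transform() { throw new Error('ITS A TRAP!'); } },
  _transform,
};
let callbackRan = false;
try {
  fakeStream._transform({ message: 'ENDOR' }, () => { callbackRan = true; });
} catch (err) {
  // Swallowed by the caller, as in the recreation above.
}
console.log('callbackRan:', callbackRan);
```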