
[3.0.0-rc] Exception during formatting kills logging


If a formatter throws an exception, logging stops working – even if the exception is handled.

Related: #1248, Possibly Related: #1144

Recreation:

const winston = require('winston');

const logger = winston.createLogger({
    transports: [new winston.transports.Console()],
    format: winston.format.printf((info) => {
        // Set a trap.
        if (info.message === 'ENDOR') {
            throw new Error('ITS A TRAP!');
        } else {
            return info.message;
        }
    })
});

// Start out logging fine.
console.log('Console: I am the raw console.');
logger.info('Logger: I am the logger.');
logger.info('Logger: I hear Endor is nice this time of year.');

// Trigger the trap.  Swallow the error so processing continues.
try {
    logger.info('ENDOR');
} catch (err) { console.log('Swallowed Error: ', err); }

// Confirm that the program is still running.
console.log('Console: Logger? Are you there?');

// All subsequent logger messages silently fail.
logger.info('I can hear you!');
logger.info('Can you hear me?');
logger.info('HELLO?');

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Reactions: 4
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

2 reactions
crussell52 commented, Apr 6, 2018

… alternatively it could be up to you the caller who swallowed the error to reconnect everything.

The only reason that would be viable for me is that I am using a custom wrapper around winston. For most users, it would mean wrapping every call to logger.log() in a try/catch.

my gut says that it’s up to the format to handle the error

I agree that any format should handle potential errors. However, we all know that not all errors are foreseen and many occur in rare cases. For example, #1248 identifies a case where some objects will cause an Error in a seemingly very trivial format function.

Of course, if that error goes unhandled AND the app is following best practices, the application will terminate. HOWEVER, most logging is done alongside other code that is often wrapped in try/catch blocks designed to handle functional failures rather than logging failures. Interpreting logging failures as functional failures can cause a host of new side effects; it is impractical to train each such catch block to uniquely handle logging errors.

We’re left with the standard practice being one big try/catch inside every non-trivial format where “non-trivial” is subject to interpretation.
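That "one big try/catch" can at least be factored out once instead of being repeated inside every format. A minimal sketch of the idea, with no winston dependency — formatSafely() is a hypothetical helper, not part of winston's API:

```javascript
// Hypothetical helper: run a formatter template, falling back to a
// guaranteed-safe string if it throws. Illustrative only, not winston API.
function formatSafely(template, info) {
    try {
        return template(info);
    } catch (err) {
        // Keep the record, but flag that formatting failed.
        return `[format error: ${err.message}] ${String(info.message)}`;
    }
}

// The same trap as in the recreation above.
const risky = (info) => {
    if (info.message === 'ENDOR') {
        throw new Error('ITS A TRAP!');
    }
    return info.message;
};

console.log(formatSafely(risky, { message: 'Safe message.' })); // Safe message.
console.log(formatSafely(risky, { message: 'ENDOR' })); // [format error: ITS A TRAP!] ENDOR
```

A real version would be wrapped in winston.format.printf(), but the recovery logic is the same either way.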

Not sure the best approach on how to handle this

Yeah… I totally understand that. Without the knowledge of what the formatter was trying to do, how can you meaningfully handle exceptions that come out of it?

Maybe the best that can be done is to catch an error coming out of the call to format.transform() and log the info object with console.error() – at least the information went somewhere and future logging is unaffected. Hopefully applications are doing something with the console error output (instead of 2>/dev/null) since it is always possible for important information to show up there. There is precedent for using console.log() as a last-ditch output mechanism elsewhere in winston@3.
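That recovery path can be sketched independently of winston's internals. Here tryTransform() and the trap format object are illustrative stand-ins, assuming only that a format exposes a transform(info, opts) method as winston@3 formats do:

```javascript
// Illustrative sketch of the proposed recovery: catch an error thrown by
// format.transform() and divert the record to console.error() so the info
// is not lost and later log calls still work. tryTransform() is a
// hypothetical helper, not winston code.
function tryTransform(format, info, opts) {
    try {
        return format.transform(info, opts);
    } catch (err) {
        console.error('format error:', err.message, info);
        return false; // treated like "filtered out"; the pipeline moves on
    }
}

const trapFormat = {
    transform(info) {
        if (info.message === 'ENDOR') {
            throw new Error('ITS A TRAP!');
        }
        return info;
    }
};

tryTransform(trapFormat, { level: 'info', message: 'ENDOR' }); // → false, record goes to stderr
tryTransform(trapFormat, { level: 'info', message: 'Hi.' });   // → the info object, unchanged
```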

1 reaction
indexzero commented, Jun 1, 2018

@crussell52 dug in and discovered that the silencing was due to the callback in _transform not being invoked. #1347 solves this by re-throwing the error, but then ensuring that the callback is invoked to allow stream processing to continue if the user decides to catch the error.

