
Improving output: verbose mode

See original GitHub issue

What does the output currently look like?

900 out of 1067  (wemake_python_styleguide/violations/     should_use_text =                                                                                          FAILED: mutmut wemake_python_styleguide/violations/ --apply --mutation "    should_use_text = False⤑1"

901 out of 1067  (wemake_python_styleguide/violations/     error_template =                                                                                           FAILED: mutmut wemake_python_styleguide/violations/ --apply --mutation "    error_template = 'Found parens right after a keyword'⤑0"

902 out of 1067  (wemake_python_styleguide/violations/     error_template = 
903 out of 1067  (wemake_python_styleguide/violations/     code = 313⤑0)

It shows two cases: a failing and a successful state. Let's talk about both of them. I will also try to cover some general thoughts that I have.

First of all, I would like to analyze what happened. Since this process is rather long, I am not able to watch it in real time. The final output is also quite verbose, so it is hard to read through. What do I suggest? It would be nice to have a short report that covers:

  1. Which mutants managed to survive: what was changed, what the change was, and in which file
  2. Which tests killed the most mutants (so these tests can be documented as important ones, etc.), and which tests did not catch any (so they might be removed or refactored in the future)
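The two summaries asked for above could be built by aggregating per-mutant results. Here is a minimal, hypothetical sketch: the record format is invented for illustration and is not mutmut's real data model.

```python
# Hypothetical aggregation of mutation-testing results into the two
# summaries requested above. The record layout is invented; mutmut's
# real internals differ.
from collections import Counter

# Each entry: (mutant description, survived?, killing test or None)
results = [
    ("base.py: should_use_text = False -> True", False, "test_bool_flags"),
    ("base.py: error_template -> ''", True, None),
    ("base.py: code = 313 -> 314", False, "test_codes"),
    ("naming.py: code = 110 -> 111", False, "test_codes"),
]

# 1. Which mutants survived (what changed, in which file).
survivors = [desc for desc, survived, _ in results if survived]

# 2. Which tests killed the most mutants.
kills = Counter(test for _, survived, test in results if not survived)

print("Surviving mutants:")
for desc in survivors:
    print("  " + desc)

print("Top killing tests:")
for test, count in kills.most_common():
    print(f"  {test}: {count} mutants killed")
```

A test that never appears in the kill counter is exactly the "did not catch any" candidate for removal or refactoring.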

Secondly, it is possible to have better “inline experience”. What do I mean by that?

  1. Currently, when reading through the test cases, I am missing some required information: which source line was changed, what it was changed to, and which mutation rule caused the change
  2. When a mutant is killed, we could show how many tests failed. That would be very helpful and would give a better understanding of what is going on
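One possible rendering of that inline detail is sketched below. All field names and the output format are made up for illustration; mutmut does not expose a function like this.

```python
# Hypothetical one-line mutant summary: file, source line, old and new
# code, mutation rule, and how many tests failed. Purely illustrative.
def format_mutant(path, line, old, new, rule, failed_tests):
    status = f"killed by {failed_tests} tests" if failed_tests else "SURVIVED"
    return f"{path}:{line} [{rule}] {old!r} -> {new!r} ({status})"

print(format_mutant("base.py", 42, "code = 313", "code = 314",
                    "number-mutation", 4))
```

A survivor (zero failing tests) would render with a loud "SURVIVED" marker instead of a kill count, which makes survivors easy to grep for.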

I would like to say that this tool is absolutely awesome! Thank you for building it!

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 17 (10 by maintainers)

Top GitHub Comments

boxed commented, Nov 18, 2018

Well crap, the output is wrong. What you want to do is:

mutmut results

To show a diff of a mutant:

mutmut show 7

And to apply

mutmut apply 7

The mutation ids are just primary keys into the SQLite cache database now. This is a lot nicer to use imo.
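The design boxed describes, where ids like `7` are integer primary keys in a SQLite cache, can be illustrated with a toy table. The schema below is invented for the example; mutmut's real cache schema is internal to the tool.

```python
# Toy illustration of mutant ids as SQLite integer primary keys, so a
# command like `mutmut show 2` is a simple lookup by id. The table and
# columns here are invented, not mutmut's actual cache schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE mutant (id INTEGER PRIMARY KEY, diff TEXT, status TEXT)"
)
db.executemany(
    "INSERT INTO mutant (diff, status) VALUES (?, ?)",
    [
        ("-code = 313\n+code = 314", "killed"),
        ("-should_use_text = False\n+should_use_text = True", "survived"),
    ],
)

# Rough equivalent of `mutmut show 2`: fetch one mutant's diff by id.
diff, status = db.execute(
    "SELECT diff, status FROM mutant WHERE id = ?", (2,)
).fetchone()
print(status)
print(diff)
```

Because SQLite auto-assigns `INTEGER PRIMARY KEY` values, the ids are short and stable for the lifetime of the cache, which is what makes `show 7` and `apply 7` convenient.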

boxed commented, Nov 11, 2018

I’m going to totally rework the output to remove this bug and make it nicer in some other ways.
