catkin run_tests should not return 0 if there are failing tests
Currently catkin run_tests returns 0 even if there are failing tests (for example, see this Travis build that had a test failure). This behavior is probably because run_tests is an alias for catkin_make run_tests. It would be nice if catkin run_tests could return a better error code.

In the meantime, I am calling catkin_test_results after catkin run_tests.
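The workaround described above can be sketched as a single CI step. This is only a sketch: it assumes a sourced ROS environment and an initialized catkin workspace, and relies on catkin_test_results exiting non-zero when the recorded result files contain failures.

```shell
#!/bin/sh
# Workaround sketch (assumes a sourced ROS setup and an initialized
# catkin workspace): `catkin run_tests` exits 0 even when tests fail,
# so chain `catkin_test_results`, which scans the recorded test
# results and exits non-zero if any of them failed.
set -e               # abort the CI step on the first failing command
catkin run_tests     # build and run all tests (exit code not meaningful)
catkin_test_results  # returns a useful exit code for the pipeline
```

Because of `set -e`, the step's overall exit code comes from catkin_test_results, which is what the CI system should gate on.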
Issue Analytics
- State:
- Created 8 years ago
- Reactions: 9
- Comments: 5 (2 by maintainers)
Top Results From Across the Web
- Build Packages — catkin_tools 0.0.0 documentation
  If a workspace is not yet initialized, build can initialize it with the default configuration, but only if it is called from the...
- rostest fails when run using "catkin_make run_tests"
  Hi all, I have a node and some_test.test file like this: The node and test script "test_node_api.py" both import messages from some other ...
- Gtest failed when catkin_make run_tests. Undefined Reference
  Basically, in my CMakeLists.txt, I need to add_library(visual_robot src/visual_robot.cpp) and link it with my test file by using ...
- Unit Testing — ROS Training For Industry 0.1 documentation
  Write a dummy test that will return true if executed. This will test our framework and we will replace it later with more...
- Example of writing C++ tests in ROS (Kinetic)
  Plus I wanted my test to show debug output if I need it. ... By default, catkin_make returns 0 even if one or...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
This is a serious problem. We basically cannot trust our Jenkins pipeline because of this. Is there a way to at least systematically obtain an error code when a test fails?
catkin_test_results says "Summary: 0 tests, 0 errors, 0 failures, 0 skipped", so one would need to diff the number of tests reported here against the number of tests actually found, which seems like an unreliable method as well.
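The "0 tests" case the comment describes can be guarded against by parsing the summary line itself. The summary format is taken from the comment above; the parsing helper `check_summary` is a hypothetical sketch, not part of catkin, and it treats "0 tests" as a failure so that a run that discovered no tests at all does not pass silently.

```shell
#!/bin/sh
# Sketch: derive an exit status from a catkin_test_results-style
# summary line of the form "Summary: N tests, E errors, F failures,
# S skipped". Succeeds only if at least one test ran and there were
# no errors or failures, guarding against the "0 tests" case.
check_summary() {
    # Keep only digits and spaces, then split into positional params:
    # $1=tests $2=errors $3=failures $4=skipped
    set -- $(echo "$1" | tr -dc '0-9 ')
    tests=$1; errors=$2; failures=$3
    [ "$tests" -gt 0 ] && [ "$errors" -eq 0 ] && [ "$failures" -eq 0 ]
}

check_summary "Summary: 5 tests, 0 errors, 0 failures, 0 skipped" \
    && echo "PASS" || echo "FAIL"   # prints: PASS
check_summary "Summary: 0 tests, 0 errors, 0 failures, 0 skipped" \
    && echo "PASS" || echo "FAIL"   # prints: FAIL
```

This is still a heuristic: it depends on the summary line's wording staying stable, which is exactly the fragility the commenter is pointing at.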