Create a manual UI sanity check runbook
Requirement - what kind of business use case are you trying to solve?
- Relates to https://github.com/jaegertracing/jaeger/issues/2734#issuecomment-783356447
- Ensure a bug-free UI release by documenting a consistent set of manual sanity checks on the Jaeger UI.
- Motivated by a lack of confidence in existing unit test coverage when applying dependabot version bumps.
- Doubles as a UI feature document for new users, particularly for features that are less apparent.
- Could later serve as input for automated UI tests with a framework such as Selenium, if the time investment is worthwhile and the resulting tests are not overly brittle. Subject to further consideration and discussion.
Problem - what in Jaeger blocks you from solving the requirement?
No such document exists.
Proposal - what do you suggest to solve the problem or improve the existing situation?
Create a SANITY_TEST.md file consisting of BDD-style test cases to be executed prior to releasing a new version until we have enough confidence in unit test coverage.
Perhaps start with core features and expand to more scenarios where appropriate, for example:
- Scenario 1: Generating traces
**given** the user has started Jaeger all-in-one
**and** is on the Jaeger UI homepage (e.g. http://localhost:16686/)
**when** the user refreshes the page once
**then** the jaeger-query service should appear as an item in the Service drop-down
- Scenario 2: Finding traces
**given** Scenario 1
**when** the user selects the jaeger-query Service
**and** sets a Lookback of "1 hour"
**and** clicks the "Find Traces" button
**then** at least one trace should appear in the trace results summary
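
If these checks are later automated, scenarios like the two above translate almost directly into a Gherkin feature file. The sketch below is illustrative only; the feature title and step wording are suggestions, not an agreed convention:

```gherkin
Feature: Jaeger UI trace search sanity checks

  Background:
    Given the user has started Jaeger all-in-one
    And is on the Jaeger UI homepage at "http://localhost:16686/"

  Scenario: Generating traces
    When the user refreshes the page once
    Then the "jaeger-query" service should appear in the Service drop-down

  Scenario: Finding traces
    Given the "jaeger-query" service appears in the Service drop-down
    When the user selects the "jaeger-query" service
    And sets a Lookback of "1 hour"
    And clicks the "Find Traces" button
    Then at least one trace should appear in the trace results summary
```

Writing the manual checks in this shape from the start would keep the wording stable if we later bind the same steps to an automation tool.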
Any open questions to address
- Will this be too tedious to expect maintainers to go through?
- We will need to consider the maintenance cost of such a document for when the UI behaviour changes. Should we expect contributors to update this doc along with their PR + unit tests? Can the sanity checks be written to minimize changes required due to UI changes?
- Should we, instead, invest more effort into increasing unit test coverage? Do we have the resources to do so?
Issue Analytics
- Created: 3 years ago
- Comments: 9 (9 by maintainers)
Top GitHub Comments
Sure, sounds good to me! I just added it to the agenda.
That’s correct! Here’s a draft project proposal. Let me know if you’d like to be a mentor or co-mentor.
UI testing for Jaeger
Mentor: Juraci Paixão Kröhling (@jpkrohling)
Status: Pending
One of the key parts of Jaeger is its UI. Unfortunately, we have little to no automation when it comes to ensuring the UI is working properly before we perform a new release.
Your role in this project is to help us improve the situation by first determining what to test, writing BDD-style tests that can be executed manually by the maintainers before a release, and automating the tests using tools like Cucumber, Watir, or Selenium.
For this project, you’ll need to be proficient with JavaScript. You’ll leave this internship with a strong understanding of Quality Assurance practices, especially as applied to testing user interfaces, as well as intermediate-level knowledge of distributed tracing and Jaeger.
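
As a rough sketch of how the tools mentioned above could be wired together, the step definition below binds Gherkin-style steps to the browser using Cucumber.js and selenium-webdriver. It is environment-dependent (it needs a running Jaeger all-in-one, a Chrome driver, and the `@cucumber/cucumber` and `selenium-webdriver` npm packages), and the CSS selector for the Service drop-down is a guess about the Jaeger UI markup, not verified against it:

```javascript
// step_definitions/sanity.steps.js -- illustrative sketch only.
// Assumes Jaeger all-in-one running on localhost:16686, chromedriver on PATH,
// and the npm packages @cucumber/cucumber and selenium-webdriver installed.
const { Given, When, Then, setDefaultTimeout } = require('@cucumber/cucumber');
const { Builder, By, until } = require('selenium-webdriver');
const assert = require('assert');

setDefaultTimeout(30 * 1000);

let driver;

Given('is on the Jaeger UI homepage at {string}', async function (url) {
  driver = await new Builder().forBrowser('chrome').build();
  await driver.get(url);
});

When('the user refreshes the page once', async function () {
  await driver.navigate().refresh();
});

Then('the {string} service should appear in the Service drop-down',
  async function (service) {
    // The data-test selector below is hypothetical; it would need to be
    // replaced with the real selector used by the Jaeger UI.
    const dropdown = await driver.wait(
      until.elementLocated(By.css('[data-test="service-select"]')),
      10000);
    assert.ok((await dropdown.getText()).includes(service));
  });
```

Whether a binding like this is worth maintaining is exactly the brittleness question raised in the open questions above; selectors tied to UI markup are the usual source of churn.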