
Enable AUTHORING UX approval tests creation

See original GitHub issue

Background

Today there is no good story for template authors to test their templates and ensure they still work as intended after the template changes or the environment around it changes (.NET Framework, Template Engine, …).

We have https://github.com/dotnet/templating/tree/main/tools/ProjectTestRunner, but it is pretty hard for a novice template author to understand and navigate, and the tooling is not compiled as a dotnet global tool that could simply be used… We run tests in our repo using Process.Start("dotnet new console") and then check the output (see example here…). Again, this is not a very good way for a template author to run and maintain tests.

Outcomes

This enables the template development inner loop. We want to support approval tests, meaning a template author would:

  1. Run dotnet new console once to create the initial content
  2. On each test run, TemplateEngine provides the ability to compare the content from step 1 with what it would generate now; if a change is intentional, the author re-runs step 1 and commits the changed content to git.
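
The two steps above can be sketched as a minimal shell loop. This is illustrative only: the dotnet invocations are replaced by echo stand-ins (the real verifier CLI does not exist yet), and the directory names are hypothetical.

```shell
# Approval-test inner loop, sketched with stand-ins:
#   approved/ - snapshot created once by the author (step 1)
#   received/ - output re-generated on every test run (step 2)
set -e
mkdir -p approved received
echo 'Console.WriteLine("Hello, World!");' > approved/Program.cs   # stand-in for: dotnet new console -o approved
echo 'Console.WriteLine("Hello, World!");' > received/Program.cs   # stand-in for re-generation at test time
if diff -r approved received; then
  echo "APPROVED: generated content matches the snapshot"
else
  echo "CHANGED: re-run step 1 and commit if the change is intentional"
fi
```

When the template (or the engine) changes, diff surfaces the delta and the author decides whether to re-approve it.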

Justification

  • Customer impact - 1st party customers have easy tooling to test their templates (a popular request)
  • Engineering impact - automated testing contributes to fewer bugs, reduces the amount of manual testing, and teams don’t need to invent and support their own tooling for testing

Prerequisite

What needs to be solved: how to handle random values like PortNumber or GUIDs…
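
One possible direction, sketched under the assumption that regex scrubbers are enough (the {GUID}/{PORT} placeholder convention and file names are illustrative), is to normalize nondeterministic values before comparison:

```shell
# Scrub nondeterministic values from generated output before comparing
# it against the approved snapshot.
printf 'ConnectionId: 3f2504e0-4f89-11d3-9a0c-0305e82c3301\nPort: 53412\n' > received.txt
sed -E \
  -e 's/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/{GUID}/g' \
  -e 's/Port: [0-9]+/Port: {PORT}/' \
  received.txt > scrubbed.txt
cat scrubbed.txt
```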

Subtasks

Investigations:

  • Get stats on usage of nondeterministic generators (Guid, Now, Port, Random) - @vlada-shubina
  • Investigate ways of using the XUnit Verifier so that multiple verifications can be performed and reported (even if multiple of them fail)
    • Verify.Net doesn’t support verification of multiple files at the moment.
    • Simon is considering implementing it in the near future
    • We stick to 1-by-1 file comparison for now
  • Go through CommonTemplatesTests to assess what functionality we’ll need from the test framework in order to transform and adopt these tests.
    • stdout and stderr content comparison
    • content regex matching
    • content substring matching, absence of patterns
    • newlines normalization
    • custom content checking (xml parsing)
  • Investigate options to programmatically change the dotnet sdk version used to run an SDK tool (as a fallback we can programmatically create and then discard global.json). We will leverage global.json for this - simplified approach:
ren global.json global.json.bak
dotnet new globaljson --sdk-version <version>
ren global.json.bak global.json
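
The ren commands above are Windows-only; a portable sketch of the same fallback might use mv instead, with printf standing in for dotnet new globaljson --sdk-version <version> so the sketch runs without an SDK installed (the version numbers are illustrative):

```shell
set -e
# Pre-existing pin the repo already has:
printf '{ "sdk": { "version": "6.0.100" } }\n' > global.json
mv global.json global.json.bak                                 # ren global.json global.json.bak
printf '{ "sdk": { "version": "7.0.100" } }\n' > global.json   # dotnet new globaljson --sdk-version 7.0.100
grep '"version"' global.json                                   # tools now resolve 7.0.100
mv -f global.json.bak global.json                              # ren global.json.bak global.json
grep '"version"' global.json                                   # original pin restored
```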

Subtasks for MVP (not to be exposed to customers):

  • Generalize and repurpose Microsoft.TemplateEngine.TemplateLocalizer as a template authoring toolset. Packaged as a NuGet package - @vlada-shubina
  • Define a configuration model for a single test case ({template to be tested; dotnet sdk version; parameter values; approvals location}). Create a System.CommandLine parser transforming CLI arguments into this configuration model
  • Verification logic module (the API and actual logic don’t have to be polished for the first version) - @JanKrivanek
  • Add a programmatic way of simple scrubbing and/or replacement, keyed by files.
  • Transform and onboard CommonTemplatesTests to the new framework https://github.com/dotnet/sdk/pull/28707
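
As a sketch only - every field name below is hypothetical, since defining the configuration model is exactly what the subtask above is meant to do - a single test case ({template; sdk version; parameter values; approvals location}) might serialize as:

```json
{
  "templateName": "console",
  "dotnetSdkVersion": "7.0.100",
  "templateSpecificArgs": { "--use-program-main": "true" },
  "snapshotsDirectory": "Approvals/console"
}
```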

V2 (preparation for the customer-exposed toolset):

  • Define Verification module API. CLI and MSBuild interfaces should call the logic through the API
  • Extract external process wrapping logic (Command) from Microsoft.DotNet.Cli.Utils or find another utility for wrapping CLI processes - and get rid of copied code within the Microsoft.TemplateEngine.Authoring.TemplateVerifier. (this task might be joined with https://github.com/dotnet/templating/issues/5296)
  • Support batch execution logic for multiple test cases (probably configured by files)
  • Support filter/ignore lists (with default behavior that should suffice in most common cases) - e.g. to be able to ignore images, bin/* outputs etc.
  • Support for not installed templates (arbitrary location on disk)
  • Support for switching sdk versions
  • Documenting API and CLI in docs/wiki
  • Simplify and unify working with paths (include/exclude patterns, paths given to a custom scrubber and custom verifier etc.) - the tooling should be permissive and able to accept mix-n-matches of directory separator chars, plus it should have settings for enforcing the separator char used in paths it passes out (to the custom scrubber and verifier). Paths passed out should probably be passed as a custom type - allowing to fetch the relative path (to the template root or test root), the absolute path, and paths with custom separators
  • Telemetry opt-out in our integration tests (currently: https://github.com/dotnet/sdk/blob/main/src/Tests/Microsoft.NET.TestFramework/Commands/DotnetNewCommand.cs#L20; possibilities: explicitly set the env in the integration test fixture, or the ability to inject env into the instantiator process via API)
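
The mix-n-match separator requirement above can be illustrated with a tiny normalization sketch (the enforced separator is meant to be a setting; '/' is assumed here, and the path is made up):

```shell
# Accept a path with mixed '/' and '\' separators and normalize it
# to the single separator the tooling is configured to emit.
p='src\Templates/console\.template.config'
normalized=$(printf '%s' "$p" | tr '\\' '/')
echo "$normalized"
```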

Next iterations (ideally part of the first customer facing version):

  • Review (and adjust if needed) signing of tooling imposed by source build - is it required for shipping?
  • Rewrite more snapshot-based integration tests for template instantiation in sdk to use the tooling
  • Add telemetry
  • Implement context detection and extraction for nondeterministic generators handling (so e.g. for Port generator, the logic should be able to detect the resulting value in the generated output and then process the output by replacing all instances of the generator being used).
  • Add Template Validator as another tool in the authoring toolset. Implement just a sample of the most important validations
    • Create MSBuild Task version of the Template Validator
    • Design and use continuable errors during validation - so that as many errors as possible can be reported during a single run (while not reporting nonsense issues caused by inconsistent data detected in previous steps).
  • Investigate, Design and implement deterministic mode for Macros (and hence generators): https://github.com/dotnet/templating/pull/5223
  • Build in property based testing
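
For the generator-context-detection item above, the core replace-all-instances step might reduce to something like this sketch (the port value is hard-coded where the real logic would extract it from the generated project; file names are illustrative):

```shell
# Once the nondeterministic value emitted by the Port generator is known,
# every occurrence of it in the output is replaced by a stable token.
port=53412   # would be detected in the generated output, not hard-coded
printf 'applicationUrl: http://localhost:%s\nsslPort: %s\n' "$port" "$port" > launchSettings.txt
sed "s/$port/{PORT}/g" launchSettings.txt > scrubbed-launch.txt
cat scrubbed-launch.txt
```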

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 23 (23 by maintainers)

Top GitHub Comments

3 reactions
SimonCropp commented, Aug 24, 2022

I don’t think it’d be difficult to create a helper method that verifies the content of a folder, though it’d make our lives easier if that helper was provided out of the box. 😃

I can make this happen

2 reactions
JanKrivanek commented, Sep 5, 2022

Based on a brainstorming session with @vlada-shubina, these are the tasks we came up with:

Investigations:

  • Get stats on usage of nondeterministic generators (Guid, Now, Port, Random) - @vlada-shubina
  • Investigate ways of using the XUnit Verifier so that multiple verifications can be performed and reported (even if multiple of them fail)
    • Verify.Net doesn’t support verification of multiple files at the moment.
    • Simon is considering implementing it in the near future
    • We stick to 1-by-1 file comparison for now
  • Go through CommonTemplatesTests to assess what functionality we’ll need from the test framework in order to transform and adopt these tests.
    • stdout and stderr content comparison
    • content regex matching
    • content substring matching, absence of patterns
    • newlines normalization
    • custom content checking (xml parsing)
  • Investigate options to programmatically change the dotnet sdk version used to run an SDK tool (as a fallback we can programmatically create and then discard global.json). We will leverage global.json for this - simplified approach:
ren global.json global.json.bak
dotnet new globaljson --sdk-version <version>
ren global.json.bak global.json

Subtasks for MVP (not to be exposed to customers):

  • Generalize and repurpose Microsoft.TemplateEngine.TemplateLocalizer as a template authoring toolset. Packaged as a NuGet package - @vlada-shubina
  • Define a configuration model for a single test case ({template to be tested; dotnet sdk version; parameter values; approvals location}). Create a System.CommandLine parser transforming CLI arguments into this configuration model
  • Verification logic module (the API and actual logic don’t have to be polished for the first version) - @JanKrivanek
  • Add a programmatic way of simple scrubbing and/or replacement, keyed by files.
  • Transform and onboard CommonTemplatesTests to the new framework

V2 (preparation for the customer-exposed toolset):

  • Define Verification module API. CLI and MSBuild interfaces should call the logic through the API
  • (In Progress) Implement context detection and extraction for nondeterministic generators handling (so e.g. for Port generator, the logic should be able to detect the resulting value in the generated output and then process the output by replacing all instances of the generator being used).
  • Support batch execution logic for multiple test cases (probably configured by files)
  • Support filter/ignore lists (with default behavior that should suffice in most common cases) - e.g. to be able to ignore images, bin/* outputs etc.

Next iterations (ideally part of the first customer facing version):

  • Add telemetry
  • Add Template Validator as another tool in the authoring toolset. Implement just a sample of the most important validations (more comprehensive list: https://github.com/dotnet/templating/issues/2623)
    • Create MSBuild Task version of the Template Validator
    • Design and use continuable errors during validation - so that as many errors as possible can be reported during a single run (while not reporting nonsense issues caused by inconsistent data detected in previous steps).
  • Investigate, Design and implement deterministic mode for Macros (and hence generators)