
running minimal example recipes and checking output

See original GitHub issue

Hi, I’m trying to figure out where the output goes when I run a recipe. According to the docs it should be specified in the hyperparams.yaml file, but none of the recipes I’ve tried have an output_folder already specified to modify (or verify), so I assume a sensible default is used if none is given?

But in any case, even if I explicitly add an entry for output_folder in the yaml file, or pass it as a runtime flag, e.g. --output_folder <foo>, I don’t see any outputs where I’d expect them, whether in the current directory or relative to the speechbrain root, nor does anything change under speechbrain/results.

Is it just me, or am I missing something? I can run the tests just fine (one or two fail, but the rest complete fine). Training does take place when I run the recipe; I just don’t know where to check the results.
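
A quick way to confirm what a recipe’s YAML actually defines is to load it directly with hyperpyyaml (the loader SpeechBrain uses for its configs) and look for the key. This is only a sketch of such a check, assuming the YAML loads without further overrides; the fallback string is just a placeholder:

# Sketch: does this recipe's YAML define output_folder at all?
from hyperpyyaml import load_hyperpyyaml

with open("hyperparams.yaml") as f:  # the YAML passed to the experiment script
    hparams = load_hyperpyyaml(f)

# If this prints the fallback, the YAML never declares the key, and (as the
# maintainers note below) the minimal example script has no scaffolding that
# would create or write a results directory either way.
print(hparams.get("output_folder", "<no output_folder defined in this YAML>"))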

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6

Top GitHub Comments

2 reactions
Gastron commented, Feb 17, 2021

We discussed this in a call today, and we had a couple of ideas:

  • Many of the minimal examples are currently mostly useful for integration testing (and are used as such). A bulk of the minimal examples should be moved to tests/integration_tests (which is currently a symlink, but we’d make it a real path).
  • The rest of the minimal examples could be moved outside recipes into a top-level minimal_examples path. This would make it clear that their primary function is to serve as examples. In addition, to work better as examples, we should add a bunch of documentation explaining the structure of those minimal recipes, and we should add all the normal scaffolding code that recipes have (like creating an output directory and logging and storing results); a rough sketch of that scaffolding follows this list.
  • Another very similar idea was to create a directory of template recipes. These would be starting points for general tasks: sequence classification (e.g. speaker recognition), sequence transduction (e.g. ASR, though that might warrant multiple templates), etc.
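
For reference, the scaffolding mentioned above is roughly the preamble that full recipes already run before training starts. The sketch below assumes the usual SpeechBrain recipe pattern (sb.parse_arguments, hyperpyyaml’s load_hyperpyyaml, and sb.create_experiment_directory) rather than the code of any specific recipe:

# Sketch of the recipe scaffolding the minimal examples skip.
import sys
import speechbrain as sb
from hyperpyyaml import load_hyperpyyaml

# Parse "script.py hyperparams.yaml --output_folder=..." style arguments.
hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:])

# Load the YAML, applying any command-line overrides.
with open(hparams_file) as fin:
    hparams = load_hyperpyyaml(fin, overrides)

# Create the output folder and save the (overridden) YAML into it; this is
# the step that makes results appear on disk at all.
sb.create_experiment_directory(
    experiment_directory=hparams["output_folder"],
    hyperparams_to_save=hparams_file,
    overrides=overrides,
)

With scaffolding like this in place, an --output_folder flag on the command line would actually end up controlling where results are written.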
0 reactions
tensorfoo commented, Feb 19, 2021

@tensorfoo what would be, in your opinion, a potential solution so people do not follow your path (hence, do not observe the same problem)?

My path was following the doc that describes how to run an experiment. I started with the minimal examples because the tutorial in the Google Colab did the same:

For instance, let’s run one of the minimal examples made available with SpeechBrain:

%cd speechbrain/recipes/minimal_examples/neural_networks/ASR_CTC/
!python example_asr_ctc_experiment.py hyperparams.yaml  

In this case, we trained a CTC-based speech recognizer with a tiny dataset stored in the folder samples. As you can see, the training loss is very small, which indicates that the model is implemented correctly. The validation loss, instead, is high. This happens because, as expected, the dataset is too small to allow the network to generalize.

For a more detailed description of the minimal examples, please see the tutorial on “minimal examples step-by-step”.

All the results of the experiments are stored in the output_folder defined in the yaml file. Here, you can find the checkpoints, the trained models, a file summarizing the performance, and a logger. (emphasis mine)

And that’s where I got stuck.

What I would have liked to see instead was the result of the training, so I could experiment and see how it all works by changing inputs and hyperparams (and maybe, when I’m more comfortable, the code for the recipe itself), then comparing the results by checking the outputs. If this is unreasonable, that’s fair enough; I’m a beginner, so it’s possible I don’t yet know what I should be doing! The linked pull request by @pplantinga looks very close to what I’d hoped for.
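
Once a recipe does create its output folder, the kind of comparison described above can start with simply listing what was written there. This is a rough sketch only: results/debug is a hypothetical path, and train_log.txt is just the file name full recipes typically use for per-epoch statistics.

# Sketch: see what a recipe wrote under its output folder.
from pathlib import Path

output_folder = Path("results/debug")  # hypothetical output_folder value
for path in sorted(output_folder.rglob("*")):
    print(path.relative_to(output_folder))

train_log = output_folder / "train_log.txt"  # typical name in full recipes
if train_log.exists():
    print(train_log.read_text())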
