Framewise transcription evaluation

See original GitHub issue

TL;DR: All I’m asking for is that @rabitt’s mir_eval.multipitch module also accept frames as input.

Hi everyone,

Recent transcription papers report framewise evaluation metrics, e.g.:

  • Kelz et al., “On the Potential of Simple Framewise Approaches to Piano Transcription”. https://arxiv.org/abs/1612.05153
  • Sigtia et al., “An End-to-End Neural Network for Polyphonic Piano Music Transcription”

This is basically scikit-learn’s multilabel classification metrics (http://scikit-learn.org/stable/modules/model_evaluation.html#multiclass-and-multilabel-classification) using the samples averaging option (macro-averaging over frames), scaled by the number of labels per frame.
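
For concreteness, here is a minimal sketch of what that could look like with scikit-learn. The piano rolls `ref_roll` and `est_roll` are made-up placeholders, assumed to be binary `(n_frames, n_pitches)` matrices on a shared time grid:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical binary piano rolls: rows are frames, columns are pitches.
rng = np.random.default_rng(0)
ref_roll = (rng.random((1000, 88)) < 0.05).astype(int)  # reference
est_roll = (rng.random((1000, 88)) < 0.05).astype(int)  # estimate

# average='samples' computes P/R/F per frame (each frame is one multilabel
# sample) and then averages over frames; zero_division=0 keeps silent
# frames from raising warnings.
p, r, f, _ = precision_recall_fscore_support(
    ref_roll, est_roll, average='samples', zero_division=0)
print(f"P={p:.3f} R={r:.3f} F={f:.3f}")
```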

Since people seem to use it, would it be useful to have this in mir_eval? If we went with the scikit-learn implementation (which I would strongly suggest), this would add scikit-learn back as a dependency. Opinions?

Issue Analytics

  • State: open
  • Created: 7 years ago
  • Comments: 12 (7 by maintainers)

Top GitHub Comments

1 reaction
justinsalamon commented, Jan 3, 2017

Btw, note-level eval is already implemented in mir_eval, as is multi-f0. So (assuming the implementations are correct) to get frame-level metrics for transcription you’d just have to sample the note events onto a fixed time grid (which I think is also already implemented somewhere) and then feed that into the multi-f0 metrics, as @stefan-balke noted in the first comment. Pinging @rabitt
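
To make that pipeline concrete, here is a rough sketch; `notes_to_freq_grid` is a hypothetical helper (mir_eval may already ship an equivalent somewhere), the note data below is made up, and `mir_eval.multipitch.evaluate` is assumed to resample the estimate onto the reference grid when the grids differ:

```python
import numpy as np
import mir_eval

def notes_to_freq_grid(intervals, pitches_hz, hop=0.01):
    """Sample (onset, offset) note intervals onto a fixed time grid.

    Returns grid times and, for each frame, the array of active frequencies
    (an empty array marks silence, as mir_eval.multipitch expects).
    """
    end = intervals.max() if len(intervals) else 0.0
    times = np.arange(0.0, end + hop, hop)
    freqs = [pitches_hz[(intervals[:, 0] <= t) & (t < intervals[:, 1])]
             for t in times]
    return times, freqs

# Made-up note-level annotations: intervals in seconds, pitches in Hz.
ref_int = np.array([[0.0, 1.0], [0.5, 1.5]])
ref_pitch = np.array([440.0, 660.0])
est_int = np.array([[0.05, 1.0], [0.5, 1.4]])
est_pitch = np.array([442.0, 658.0])

ref_time, ref_freqs = notes_to_freq_grid(ref_int, ref_pitch)
est_time, est_freqs = notes_to_freq_grid(est_int, est_pitch)

# Feed the gridded note events into the existing multi-f0 metrics.
scores = mir_eval.multipitch.evaluate(ref_time, ref_freqs, est_time, est_freqs)
print(scores['Precision'], scores['Recall'], scores['Accuracy'])
```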

0 reactions
stefan-balke commented, Mar 4, 2017

Yep, on my list. @justinsalamon, see you at ICASSP then 😃

Top Results From Across the Web

On the Potential of Simple Framewise Approaches to Piano Transcription
We explore a novel way of conceptualising the task of polyphonic music transcription, using so-called invertible neural networks. …
Paper tables with annotated results for Musical Features for Automatic Music Transcription Evaluation
Musical Features for Automatic Music Transcription Evaluation; Framewise mistakes in highest voice, F-measure, Yes; Framewise mistakes in lowest voice …
Revisiting the Onsets and Frames Model with Additive Attention
… onset location prediction and frame-wise multi-pitch detection [7] … evaluate the corresponding accuracy of transcription …
(PDF) On the Potential of Simple Framewise Approaches to Piano Transcription
… an in-depth analysis of neural network-based framewise transcription … MusicNet, to serve as a source of supervision and evaluation of machine …
Investigating the Perceptual Validity of Evaluation Metrics for ...
Automatic Music Transcription (AMT) is usually evaluated using low-level criteria … The framewise Precision (Pf), Recall (Rf) and F-Measure …
