
Measure the model performance

See original GitHub issue

Hi,

Thanks for the simpler implementation of MAML.

As per the MAML paper: “At the end of meta-training, new tasks are sampled from p(T), and meta-performance is measured by the model’s performance after learning from K samples. Generally, tasks used for meta-testing are held out during meta-training.”

Has anybody tried fine-tuning the model with a small number (0 to 10) of samples for a new class that was not in the training dataset, and measuring the performance?

Is that part of the code already available in this repository?

Thank you, KK
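
For concreteness, here is a minimal sketch of the meta-test procedure the paper describes, assuming a PyTorch MAML setup; the function, tensors, and hyperparameters below are hypothetical placeholders, not code from this repository:

```python
import copy
import torch
import torch.nn.functional as F

def evaluate_on_new_task(meta_model, support_x, support_y, query_x, query_y,
                         inner_lr=0.01, adapt_steps=5):
    """Fine-tune a copy of the meta-trained model on K support samples
    from a held-out task, then measure accuracy on the query samples."""
    # Clone the meta-trained initialization theta so adaptation on this
    # task does not overwrite the shared parameters.
    learner = copy.deepcopy(meta_model)
    optimizer = torch.optim.SGD(learner.parameters(), lr=inner_lr)

    # Inner loop: a few gradient steps on the support set.
    learner.train()
    for _ in range(adapt_steps):
        loss = F.cross_entropy(learner(support_x), support_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Meta-performance: accuracy of the adapted model on the query set.
    learner.eval()
    with torch.no_grad():
        preds = learner(query_x).argmax(dim=1)
    return (preds == query_y).float().mean().item()
```

Averaging this accuracy over many tasks sampled from the held-out meta-test split gives the few-shot numbers usually reported for MAML.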

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 17

Top GitHub Comments

1 reaction
zzpustc commented, Dec 9, 2020

Tasks in the testing phase have the same structure as those in the training phase. So, in the testing phase, the query_set should be generated the same way it is during training. The optimal theta is adapted on the support data of the testing tasks, and performance is measured on their query data.

Just treat each task as a data point in common supervised learning; the structure of the “data” should stay consistent.

@HongduanTian Hi, have you ever tried saving the model and loading the weights in a separate evaluation script? I have: I get ~47% accuracy when training with this repo (miniImagenet_train.py), but with the save-and-reload approach above I only get ~44%. Can you offer any insight? Many thanks!
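
One common cause of a gap like this (a guess, not something confirmed in the thread) is that the save/reload round trip or the external evaluation differs from the in-training one, e.g., mismatched inner-loop settings or batch-norm/dropout modes. A minimal sketch of a checkpoint round trip, assuming a PyTorch setup; the model class, file name, and dict keys are hypothetical:

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stand-in for the repository's MAML network (84x84x3 miniImageNet input)."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(84 * 84 * 3, num_classes))

    def forward(self, x):
        return self.net(x)

# --- end of meta-training: persist the full state dict ---
model = TinyClassifier()
torch.save({"model_state": model.state_dict()}, "maml_checkpoint.pt")

# --- separate evaluation script: rebuild the same architecture, reload ---
eval_model = TinyClassifier()
ckpt = torch.load("maml_checkpoint.pt", map_location="cpu")
eval_model.load_state_dict(ckpt["model_state"])
eval_model.eval()  # use running batch-norm stats, disable dropout
```

It is also worth checking that the external script samples meta-test tasks and runs the inner-loop adaptation with exactly the same settings (N-way, K-shot, inner learning rate, number of steps) as the in-training evaluation.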

1 reaction
kk2491 commented, Nov 21, 2019

@HongduanTian Yes, and it happens every 500 episodes. Did you use the original code from here? Does it include the model-performance part?

Thank you, KK

Read more comments on GitHub >

