Measure the model performance
Hi,
Thanks for the simpler implementation of MAML.
As per the MAML paper: "At the end of meta-training, new tasks are sampled from p(T), and meta-performance is measured by the model's performance after learning from K samples. Generally, tasks used for meta-testing are held out during meta-training."
Has anybody tried fine-tuning the model with a small number (0 to 10) of samples for a new class that was not in the training dataset, and measured the performance?
Is that part of the code already available in this repository?
Thank you, KK
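For reference, the meta-test protocol quoted above — adapt a copy of the meta-learned parameters on K support samples of an unseen class, then score on held-out query samples — can be sketched independently of this repo. Below is a minimal NumPy illustration on a toy binary task with a linear classifier; `adapt_and_evaluate`, the inner learning rate, and the step count are illustrative choices, not the repo's actual code.

```python
import numpy as np

def adapt_and_evaluate(w, b, support_x, support_y, query_x, query_y,
                       inner_lr=0.1, inner_steps=5):
    """Fine-tune a copy of the meta-learned linear classifier on the K
    support samples, then report accuracy on the held-out query samples."""
    w, b = w.copy(), b.copy()                 # never touch the meta-parameters
    for _ in range(inner_steps):
        logits = support_x @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
        grad = probs - support_y               # dL/dlogits for binary cross-entropy
        w -= inner_lr * support_x.T @ grad / len(support_y)
        b -= inner_lr * grad.mean()
    preds = (query_x @ w + b) > 0
    return (preds == query_y.astype(bool)).mean()

# Toy 5-shot-style task: the class is decided by the sign of the first feature.
rng = np.random.default_rng(0)
support_x = rng.normal(size=(10, 4))
support_y = (support_x[:, 0] > 0).astype(float)
query_x = rng.normal(size=(50, 4))
query_y = (query_x[:, 0] > 0).astype(float)

acc = adapt_and_evaluate(np.zeros(4), np.zeros(1),
                         support_x, support_y, query_x, query_y)
print(f"query accuracy after adaptation: {acc:.2f}")
```

Averaging this query accuracy over many held-out tasks gives the meta-test number reported in the paper.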
Issue Analytics
- Created: 4 years ago
- Comments: 17
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@HongduanTian Hi, have you ever tried saving the model and loading the weights in a separate evaluation script? I have tried this: I get ~47% accuracy when training with this repo (miniImagenet_train.py), but with the save-and-reload approach I only get ~44% accuracy. Can you provide some insight? Many thanks!
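One way to rule out the checkpoint itself as the cause of such a 47% vs. 44% gap is to verify that a save/reload round-trip reproduces predictions exactly. The sketch below uses NumPy arrays as stand-ins for the model weights (the repo presumably uses `torch.save`/`load_state_dict` instead, and the variable names here are hypothetical): if the round-trip is bit-exact, the accuracy difference must come from the evaluation pipeline — for example a different episode sampler or a missing eval-mode switch — not from the saved weights.

```python
import io
import numpy as np

# Stand-in "meta-learned" weights; the real repo would use a torch state_dict.
w = np.random.default_rng(1).normal(size=(4, 3))
x = np.random.default_rng(2).normal(size=(8, 4))  # a batch of test inputs

buf = io.BytesIO()
np.save(buf, w)           # "save the model" at the end of training
buf.seek(0)
w_loaded = np.load(buf)   # reload it in the separate evaluation script

# A bit-exact round-trip means identical predictions; if accuracy still
# drops, the cause is in the evaluation code, not in the checkpoint.
identical = np.array_equal(x @ w, x @ w_loaded)
print("predictions identical after reload:", identical)
```

If the weights do round-trip exactly, the next things to compare between the two runs are the test-episode sampling and any train/eval-mode differences (e.g. batch-norm statistics).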
@HongduanTian Yes, and it happens every 500 episodes. Did you use the original code from here? Does it have a model-performance part?
Thank you, KK