
The config.yaml settings in experiments/ are quite different from the settings in the paper

See original GitHub issue

@jerrybai1995 @vkoltun @zkolter

Thanks for the nice work! I am quite interested in the MDEQ model and tried to train it on the CIFAR-10 dataset. However, I can't reach the performance reported in the paper. Specifically, Table 1 lists 87.1% accuracy for MDEQ-small (ours) without data augmentation, while I only get 80.3% after removing the data augmentation code in tools/cls_train.py.
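For context, the way I removed augmentation was by gating the augmentation transforms behind a flag, roughly like this sketch (pure Python with illustrative step names; the actual tools/cls_train.py builds a torchvision transform pipeline, so this is only the pattern, not the repo's code):

```python
# Sketch: gate data augmentation behind a flag so the CIFAR-10 pipeline
# can be built with or without it. Step names are illustrative only;
# the real code uses torchvision transforms (RandomCrop, RandomHorizontalFlip, ...).

def build_transform_steps(augment: bool) -> list:
    steps = []
    if augment:
        # augmentation steps, applied only during augmented training
        steps.append("random_crop_32_pad_4")
        steps.append("random_horizontal_flip")
    # always-on preprocessing
    steps.append("to_tensor")
    steps.append("normalize_cifar10_mean_std")
    return steps

print(build_transform_steps(augment=False))
```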

I checked experiments/cifar/cls_mdeq_TINY.yaml and experiments/cifar/cls_mdeq_LARGE.yaml carefully and found that the settings differ considerably from those in the paper (i.e., Table 4 in Appendix A), including the dropout rate, the forward/backward solver thresholds, the number of groups in GroupNorm, and so on. I adjusted the settings to match Table 4, but the performance did not improve. I am not sure whether LR_STEP or other settings in the .yaml files that do not appear in the paper are hurting training.
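For anyone trying the same experiment, I applied the Table 4 values as overrides on top of the repo's config rather than editing the YAML files by hand; a minimal sketch of the merge (pure Python, no PyYAML dependency; the key names below are hypothetical and only illustrate the pattern, not the repo's actual schema):

```python
# Sketch: recursively override a base config dict with paper
# hyperparameters. Key names are hypothetical illustrations; check the
# repo's actual YAML schema before using them.

def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested sections
        else:
            merged[key] = value  # scalar or new key: override wins
    return merged

base_cfg = {"MODEL": {"DROPOUT": 0.25, "NUM_GROUPS": 8},
            "TRAIN": {"LR_STEP": [50, 75]}}
paper_cfg = {"MODEL": {"DROPOUT": 0.3}}  # hypothetical Table 4 value

cfg = deep_merge(base_cfg, paper_cfg)
print(cfg["MODEL"])  # dropout from the override, groups kept from the base file
```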

Do you have any suggestions for reproducing the 87.1% ± 0.4% result in Table 1? In addition, I would appreciate it if you could share the YAML files used in your paper experiments, since the config.yaml settings for CIFAR-10 and ImageNet in experiments/ are quite different from those in Table 4 in Appendix A of the original paper.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
jianjieluo commented, Aug 28, 2020

After adding the weight_decay param discussed above and trying the newest code & config (22731498bcd5524976664000367caa179fe6c56e), I can finally get ~84.5% accuracy on CIFAR-10 without augmentation, which is reasonable according to jerrybai1995's response. I will close this issue.

Thanks for the quick response and bug fix again 💯.
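For anyone else hitting this: the weight_decay parameter adds an L2 penalty term to the gradient before the learning-rate step, which is why omitting it costs several points of accuracy here. A one-line sketch of the update (plain Python, illustrative only; this mirrors the coupled weight-decay semantics of torch.optim.SGD's weight_decay argument, and the lr/weight_decay values are placeholders, not the repo's settings):

```python
# Illustrative SGD step with coupled L2 weight decay: the raw gradient
# is augmented with weight_decay * w before the learning-rate step.

def sgd_step(w: float, grad: float, lr: float = 0.1,
             weight_decay: float = 5e-4) -> float:
    effective_grad = grad + weight_decay * w  # L2 penalty pulls w toward 0
    return w - lr * effective_grad

w = 1.0
w = sgd_step(w, grad=0.0)  # with zero gradient, decay alone shrinks w
print(w)  # 1.0 - 0.1 * (5e-4 * 1.0) = 0.99995
```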

0 reactions
jerrybai1995 commented, Aug 28, 2020

Ah, good point about the WD! Another thing that I overlooked when I cleaned up the code. Thanks a lot for the pointer.

And yes— do let me know if you still can’t improve over 80% on CIFAR-10 (no augmentation) 😄
