Accuracy not as per paper
For the following code:
```python
import torch
import torchvision
from efficientnet_pytorch import EfficientNet
from tqdm import tqdm


def main(l):
    model = EfficientNet.from_pretrained(model_name='efficientnet-b' + str(l))
    model.cuda()
    model.eval()

    # Standard ImageNet validation preprocessing: resize to 256, center-crop 224,
    # with the default (bilinear) interpolation, for every EfficientNet variant.
    dataset = torchvision.datasets.ImageNet(
        '/home/milton/.torch/datasets/', split='val', download=False,
        transform=torchvision.transforms.Compose([
            torchvision.transforms.Resize(256),
            torchvision.transforms.CenterCrop(224),
            torchvision.transforms.ToTensor(),
            torchvision.transforms.Normalize((0.485, 0.456, 0.406),
                                             (0.229, 0.224, 0.225)),
        ]))
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=200, shuffle=False, num_workers=40)

    correct_predictions = 0
    correct_topk_predictions = 0
    with torch.no_grad():
        for data, labels in tqdm(dataloader):
            data, labels = data.cuda(), labels.cuda()
            output = model(data)

            # Top-1 accuracy
            predictions = output.argmax(dim=1, keepdim=True)
            correct_predictions += predictions.eq(labels.view_as(predictions)).sum().item()

            # Top-5 accuracy
            _, predictions_topk = output.topk(5, 1, True, True)
            predictions_topk = predictions_topk.t()
            correct_topk_predictions += predictions_topk.eq(
                labels.view(1, -1).expand_as(predictions_topk)).sum().item()

    print(100 * correct_predictions / len(dataset),
          100 * correct_topk_predictions / len(dataset))


if __name__ == '__main__':
    for l in range(8):
        main(l)
```
I am getting the following accuracies:
| Model | Top-1 (%) | Top-5 (%) |
|-----------------|-------|-------|
| efficientnet-b0 | 76.12 | 92.98 |
| efficientnet-b1 | 77.89 | 93.77 |
| efficientnet-b2 | 77.95 | 93.77 |
| efficientnet-b3 | 77.78 | 93.61 |
| efficientnet-b4 | 77.69 | 93.58 |
| efficientnet-b5 | 75.95 | 92.69 |
| efficientnet-b6 | 76.53 | 93.07 |
| efficientnet-b7 | 76.84 | 93.22 |
The accuracies are not as per the paper. Am I doing something wrong? Please help me. Sorry for being a noob.
Top GitHub Comments
I'm new to EfficientNet. Using your script, I get the same results. From looking at examples/imagenet/main.py as well as #12, image size and interpolation are important for EfficientNets: each variant expects its own input resolution, and the reference preprocessing uses bicubic interpolation, whereas the script above evaluates every model at a fixed 224x224 with the default bilinear resize. A modified version of your script to incorporate this is:
See some results in the table below.
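As an illustrative sketch (not necessarily the commenter's exact script), evaluating each variant at its own resolution with bicubic resizing could look roughly like this; it assumes a recent torchvision (older versions take PIL.Image.BICUBIC instead of InterpolationMode.BICUBIC) and the library's EfficientNet.get_image_size helper, and the batch size and worker count are arbitrary:

```python
import torch
import torchvision
from efficientnet_pytorch import EfficientNet
from tqdm import tqdm


def evaluate(model_name):
    model = EfficientNet.from_pretrained(model_name).cuda().eval()

    # Each EfficientNet variant was trained at its own resolution
    # (224 for b0 up to 600 for b7); query it instead of hard-coding 224.
    image_size = EfficientNet.get_image_size(model_name)

    transform = torchvision.transforms.Compose([
        # Bicubic resize to the model's native resolution, then center crop.
        torchvision.transforms.Resize(
            image_size,
            interpolation=torchvision.transforms.InterpolationMode.BICUBIC),
        torchvision.transforms.CenterCrop(image_size),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.485, 0.456, 0.406),
                                         (0.229, 0.224, 0.225)),
    ])
    dataset = torchvision.datasets.ImageNet(
        '/home/milton/.torch/datasets/', split='val', transform=transform)
    loader = torch.utils.data.DataLoader(
        dataset, batch_size=64, shuffle=False, num_workers=8)

    top1 = top5 = 0
    with torch.no_grad():
        for images, labels in tqdm(loader):
            images, labels = images.cuda(), labels.cuda()
            logits = model(images)
            _, topk = logits.topk(5, dim=1)  # (batch, 5), highest logits first
            top1 += (topk[:, 0] == labels).sum().item()
            top5 += (topk == labels.unsqueeze(1)).any(dim=1).sum().item()

    print(model_name,
          100 * top1 / len(dataset),
          100 * top5 / len(dataset))


if __name__ == '__main__':
    for i in range(8):
        evaluate('efficientnet-b' + str(i))
```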
I also tried counting the model FLOPs, using both thop and ptflops. They gave very low estimates, off by a factor of 30-50 at the high end. ptflops did get the number of params correct, just not the FLOPs; maybe it's because of the use of F.conv2d instead of torch.nn.Conv2d.
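For reference, a minimal, assumption-based sketch of that kind of thop measurement (not the exact invocation behind the numbers quoted above) might be:

```python
import torch
from thop import profile
from efficientnet_pytorch import EfficientNet

# Count multiply-accumulates and parameters for b0 at its 224x224 resolution.
model = EfficientNet.from_name('efficientnet-b0')
dummy_input = torch.randn(1, 3, 224, 224)

# thop attaches hooks keyed on module type; layers that wrap F.conv2d
# (as the repo's same-padding convolutions do) may not be recognized,
# which would explain an undercount. Treat the result with caution or
# pass custom_ops hooks for those layer types.
macs, params = profile(model, inputs=(dummy_input,))
print('MACs: %.2fG, params: %.2fM' % (macs / 1e9, params / 1e6))
```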
Here are some possible next steps:
@lukemelas, thanks for the quick response and this repo! We would like to compare vanilla-trained EfficientNets to the AutoAugment ones on some tasks. If you happen to still have the vanilla PyTorch weights somewhere, could you please post a download link? Otherwise, we could probably convert them from TensorFlow with your code. Thanks again.