Why 3 classes instead of 2?
@liuruijin17 Hi! I have a small question: would you kindly tell me why you use 3 classes? The labels are all 1 (lane) / 2 (no object), so it seems 2 classes should suffice. A second question: could one directly use a BCE loss here, with only a 1-dim output from class_embed?
I've checked the DETR repo for discussions on single-class detection and still can't figure out why you use a 3-dim output at class_embed.
EDIT: Is it related to COCO indexing starting at 1?
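For context, here is a minimal sketch of the two alternatives the question contrasts: a 3-way softmax head trained with cross-entropy versus a 1-dim sigmoid head trained with BCE. Except for the name class_embed, the shapes and layer definitions below are illustrative assumptions, not the LSTR code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative shapes only; the real model's hidden_dim / num_queries may differ.
hidden_dim, num_queries, batch = 256, 7, 2
decoder_out = torch.randn(batch, num_queries, hidden_dim)

# (a) 3-way softmax head trained with cross-entropy (the setup being asked about).
class_embed_3 = nn.Linear(hidden_dim, 3)           # logits for classes 0/1/2
logits_3 = class_embed_3(decoder_out)              # (batch, num_queries, 3)
targets_3 = torch.randint(0, 3, (batch, num_queries))
loss_ce = F.cross_entropy(logits_3.transpose(1, 2), targets_3)

# (b) What the question proposes: a single-logit sigmoid head trained with BCE,
# since "lane vs. not lane" is binary.
class_embed_1 = nn.Linear(hidden_dim, 1)           # one logit per query
logits_1 = class_embed_1(decoder_out).squeeze(-1)  # (batch, num_queries)
targets_1 = (targets_3 == 1).float()               # 1 = lane, 0 = everything else
loss_bce = F.binary_cross_entropy_with_logits(logits_1, targets_1)
```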
Top GitHub Comments
https://github.com/liuruijin17/LSTR/blob/6044f7b2c5892dba7201c273ee632b4962350223/models/py_utils/detr_loss.py#L46 She changed the second argument to zero, since that line otherwise initializes target_classes as a tensor filled entirely with '2' ('2' meaning background).
https://github.com/liuruijin17/LSTR/blob/6044f7b2c5892dba7201c273ee632b4962350223/models/py_utils/detr_loss.py#L33 She also changed this line so that empty_weight is initialized by torch.ones(self.num_classes).
The difference is: (1) 3-class output: '1' = lane, '2' = background, '0' = no supervision; (2) 2-class output: '1' = lane, '0' = background.
She trained on another dataset, so I will train this version once our GPUs have free capacity. If the accuracy also drops, then there may still be bugs that need fixing.
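To make the two changes above concrete, here is a simplified sketch of the DETR-style label loss that detr_loss.py follows; it is not the exact LSTR code (see the links above for that), and the names loss_labels_sketch, matched_idx, and eos_coef are illustrative. The comments mark the two places where the 3-class and 2-class setups differ.

```python
import torch
import torch.nn.functional as F

def loss_labels_sketch(pred_logits, matched_idx, matched_labels, eos_coef=0.1):
    """Simplified DETR-style classification loss.

    pred_logits:    (batch, num_queries, C) class logits; C = 3 in the 3-class setup
                    ('1' = lane, '2' = background, '0' = no supervision), C = 2 in the
                    2-class setup ('1' = lane, '0' = background).
    matched_idx:    (batch_indices, query_indices) of queries matched to a ground-truth lane.
    matched_labels: class indices of those matched ground truths (all 1 for 'lane').
    """
    num_outputs = pred_logits.shape[-1]

    # Down-weight the background class so the many unmatched queries do not dominate.
    # 3-class setup: empty_weight = torch.ones(num_classes + 1), background index 2.
    # 2-class setup: torch.ones(num_classes), with background now at index 0
    # (which index gets eos_coef in that variant is an assumption here).
    background_index = num_outputs - 1 if num_outputs == 3 else 0
    empty_weight = torch.ones(num_outputs, device=pred_logits.device)
    empty_weight[background_index] = eos_coef

    # Every query starts out as background; this fill value is the "second argument"
    # that the 2-class variant changes from 2 to 0.
    target_classes = torch.full(pred_logits.shape[:2], background_index,
                                dtype=torch.int64, device=pred_logits.device)
    target_classes[matched_idx] = matched_labels

    # Cross-entropy expects (batch, classes, queries).
    return F.cross_entropy(pred_logits.transpose(1, 2), target_classes, empty_weight)
```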
@liuruijin17 Thanks! So using 2 vs. 3 classes should not affect the best hyper-parameters. Maybe my slightly lower performance comes from not freezing the first BN layer.
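As an aside, "freezing the first BN layer" usually means something like the sketch below: the layer keeps its pretrained running statistics and its affine parameters stop receiving gradient updates, a common trick when batch sizes are too small for reliable batch statistics. The attribute path model.backbone.bn1 is a hypothetical example; the real name depends on the LSTR model definition.

```python
import torch.nn as nn

def freeze_bn(bn: nn.BatchNorm2d):
    """Freeze a BatchNorm layer: stop updating both its running statistics
    and its affine parameters."""
    bn.eval()                      # use stored running mean/var instead of batch stats
    for p in bn.parameters():      # stop gradient updates to weight/bias
        p.requires_grad = False

# Hypothetical usage -- the real attribute name depends on the model definition:
# freeze_bn(model.backbone.bn1)
# Note: bn.eval() must be re-applied after any later model.train() call,
# e.g. at the start of each training epoch.
```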