Some questions
Hi @dbolya, I have some questions about the training and validation phases. I would appreciate it if you could help me!
- If I want to eval the pretrained model on a full-HD video or an HD webcam, is it necessary to train the model with a bigger `max_size`? If yes, supposing my video is 1080p, what should the `max_size` be for training?
- Is it possible to turn off the bounding box in eval mode and get only the masks, without the bounding box and detection accuracy?
- Any suggestion on how I can modify the code to get one color per instance, instead of having different colors for one instance in an image? (a rough sketch of one approach follows below)
Thanks!
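On the third question, one generic way to get a single, stable color per instance is to key the color on the detection index rather than the class id and blend it over that instance's mask. Below is a minimal numpy sketch, independent of the YOLACT code itself; the masks layout and every name in it are assumptions, not anything exported by the repo.

```python
import numpy as np

# Small fixed palette: detection index -> color, reused consistently.
PALETTE = np.array([
    [244,  67,  54], [ 33, 150, 243], [ 76, 175,  80],
    [255, 193,   7], [156,  39, 176], [  0, 188, 212],
], dtype=np.float32)

def overlay_instances(image, masks, alpha=0.45):
    """Blend one flat color per instance mask onto `image`.

    image: HxWx3 uint8 frame.
    masks: NxHxW boolean array, one binary mask per detected instance
           (assumed layout; adapt to however you export masks from eval.py).
    """
    out = image.astype(np.float32)
    for idx, mask in enumerate(masks):
        color = PALETTE[idx % len(PALETTE)]          # same index -> same color
        out[mask] = (1 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)
```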
Issue Analytics
- Created: 4 years ago
- Comments: 13 (6 by maintainers)
Top GitHub Comments
@Auth0rM0rgan I don’t know of any. Though you might be able to use some traditional CV methods to find dark blobs close to detected masks? Idk how that would work but if you don’t want to make a dataset of your own, that’s probably your best bet.
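To make the "dark blobs close to detected masks" idea above a bit more concrete, here is a rough OpenCV sketch of such a post-processing step; the threshold, the dilation radius, and the function itself are arbitrary assumptions, not anything that exists in the repo.

```python
import cv2
import numpy as np

def dark_blobs_near_mask(image_gray, instance_mask, dark_thresh=50, reach_px=15):
    """Find dark connected components that lie close to a detected instance mask.

    image_gray:    HxW uint8 grayscale frame.
    instance_mask: HxW bool/uint8 mask produced by the detector.
    """
    # 1. Threshold dark pixels into a binary image.
    dark = (image_gray < dark_thresh).astype(np.uint8)

    # 2. "Close to the mask" = inside a dilated copy of the mask.
    kernel = np.ones((2 * reach_px + 1, 2 * reach_px + 1), np.uint8)
    near = cv2.dilate(instance_mask.astype(np.uint8), kernel) > 0

    # 3. Keep only the dark components that overlap the dilated region.
    num_labels, labels = cv2.connectedComponents(dark)
    keep = np.zeros(dark.shape, dtype=bool)
    for label in range(1, num_labels):
        component = labels == label
        if (component & near).any():
            keep |= component
    return keep
```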
As for making masks more precise, YOLACT++'s base model gives the highest performance of all published real-time methods, but if you're okay with reducing speed you have options: you can change `[(None, -2, {}), (256, 3, {'padding': 1})]` on this line: https://github.com/dbolya/yolact/blob/db81124874817895db69f2dc443f5c24e0e3f491/data/config.py#L691 to `[(None, -2, {}), (256, 3, {'padding': 1})] * 2`, where the `* 2` at the end dictates how much higher resolution you want (`*1` for 138x138, `*2` for 276x276, `*3` for 552x552, etc.), though be wary of how much GPU memory you have. You can also look at how `yolact_im700_config` is defined and define a similar one for YOLACT++.

Yeah, the process should be pretty similar to what's in the script. The main difference is that it looks like the masks are all independent, so you can just load the mask with Pillow or something and convert it to numpy. Then the box is pretty straightforward, though it looks like those are relative coordinates and COCO expects absolute coordinates, so you'll have to multiply them by the image width / height.
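For the protonet change described in the first comment above, one way to apply it is to define a new config in data/config.py in the same style as the existing ones, rather than editing the base config in place. This is only a sketch: the base-config name (`yolact_plus_base_config`) and the exact default value of `mask_proto_net` should be checked against your copy of config.py.

```python
# data/config.py -- sketch only; verify the field names and the base value
# of 'mask_proto_net' against the file itself before using this.
yolact_plus_highres_config = yolact_plus_base_config.copy({
    'name': 'yolact_plus_highres',

    # Same protonet as the base config, but with the upsample + conv pair
    # repeated twice, giving 276x276 protos instead of 138x138
    # (use `* 3` for 552x552, at a significant GPU-memory cost).
    'mask_proto_net': [(256, 3, {'padding': 1})] * 3
                      + [(None, -2, {}), (256, 3, {'padding': 1})] * 2
                      + [(32, 1, {})],
})
```

Training with such a config should then just be a matter of selecting it by name through train.py's `--config` argument, the same way the shipped configs are selected.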
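For the mask and box conversion described in the last comment, here is a rough sketch of what that could look like; the per-instance mask files and the relative `[x, y, w, h]` box format are assumptions about the user's own data, not something defined by YOLACT.

```python
import numpy as np
from PIL import Image

def to_coco_annotation(mask_path, rel_box, image_width, image_height):
    """Turn one standalone mask file plus one relative box into COCO-style fields.

    mask_path: path to a per-instance mask image (non-zero pixels = instance).
    rel_box:   (x, y, w, h) in 0..1 relative coordinates (assumed format).
    """
    # Load the mask with Pillow and convert it to numpy, as suggested above.
    mask = np.array(Image.open(mask_path).convert('L')) > 0

    # COCO expects absolute pixel coordinates, so scale the box by the image size.
    x, y, w, h = rel_box
    bbox = [x * image_width, y * image_height, w * image_width, h * image_height]

    return {
        'bbox': bbox,                  # [x, y, width, height] in pixels
        'area': float(mask.sum()),     # pixel count of the instance mask
        'segmentation': mask,          # run through pycocotools.mask.encode() if RLE is needed
        'iscrowd': 0,
    }
```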
is defined and you can define a similar one for YOLACT++.Yeah, the process should be pretty similar as what’s in the script. The main difference is that looks like the masks are all independent, so you can just load the mask with pillow or something and convert it to numpy. Then the box is pretty straightforward, though it looks like those are relative coordinates and COCO expects absolute coordinates so you’ll have to multiply them by the image width / height.