
Hi @dbolya, I have some questions about the training and validation phases. I would appreciate it if you could help me!

  1. If I want to evaluate the pretrained model on a full-HD video or an HD webcam, is it necessary to train the model with a bigger max_size? If yes, supposing my video is 1080p, what should max_size be for training?

  2. Is it possible to turn off the bounding boxes in eval mode and get only the masks, without the boxes and detection scores?

  3. Any suggestions on how I can modify the code to get one color per instance, instead of different colors for the same instance within an image?
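On question 3, a minimal sketch of one possible approach is to key the color on a stable instance id rather than letting each detection pick its own color. The palette values and the idea of indexing by detection id are illustrative assumptions, not YOLACT's actual drawing code:

```python
# Minimal sketch for question 3: map each instance id to one fixed color so an
# instance keeps the same color everywhere it appears. The palette and the
# id-based lookup are assumptions, independent of YOLACT's prep/display code.
PALETTE = [(244, 67, 54), (33, 150, 243), (76, 175, 80), (255, 193, 7)]

def instance_color(instance_id):
    """Deterministic: the same id always yields the same RGB color."""
    return PALETTE[instance_id % len(PALETTE)]

print(instance_color(0))  # → (244, 67, 54)
print(instance_color(4))  # wraps around to the first palette entry again
```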


Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 13 (6 by maintainers)

Top GitHub Comments

dbolya commented, Jan 29, 2020

@Auth0rM0rgan I don’t know of any. Though you might be able to use some traditional CV methods to find dark blobs close to detected masks? I’m not sure how well that would work, but if you don’t want to make a dataset of your own, that’s probably your best bet.

As for making masks more precise, YOLACT++’s base model gives the highest performance of all published real-time methods, but if you’re okay with reducing speed, you have options:

  1. You can add an extra upsample layer for higher-quality masks. To do that, change the [(None, -2, {}), (256, 3, {'padding': 1})] on this line to [(None, -2, {}), (256, 3, {'padding': 1})] * 2, where the * 2 at the end dictates how much higher resolution you want (* 1 for 138x138, * 2 for 276x276, * 3 for 552x552, etc.), though be wary of how much GPU memory you have.
  2. Increase the input image size (currently 550 for the base version). Check how yolact_im700_config is defined, and you can define a similar config for YOLACT++.
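For a rough sense of what the * N multiplier buys you: each repeated block contains one upsample-by-2 step, so mask resolution doubles per copy. The sketch below assumes a 69x69 starting proto size, which follows from the 550-pixel input at a stride-8 feature level (550 / 8 ≈ 69) and is consistent with the 138/276/552 numbers above:

```python
# Hedged sketch: each repeated [(None, -2, {}), (256, 3, {'padding': 1})] block
# contains one upsample-by-2 step, so N copies multiply the proto mask
# resolution by 2**N. The 69x69 base size is an assumption derived from the
# 550-pixel input at stride 8.
def proto_mask_size(n_blocks, base=69):
    return base * (2 ** n_blocks)

for n in (1, 2, 3):
    print(f'* {n} -> {proto_mask_size(n)}x{proto_mask_size(n)}')
# * 1 -> 138x138, * 2 -> 276x276, * 3 -> 552x552
```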
dbolya commented, Feb 10, 2020

Yeah, the process should be pretty similar to what’s in the script. The main difference is that it looks like the masks are all independent, so you can just load each mask with Pillow or something and convert it to numpy. Then the box is pretty straightforward, though it looks like those are relative coordinates, and COCO expects absolute coordinates, so you’ll have to multiply them by the image width / height.
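The conversion described above can be sketched as follows. The [x, y, w, h] box ordering and [0, 1] normalization are assumptions about the user's annotation format, and the mask helper assumes Pillow and numpy are installed:

```python
# Sketch: scale relative [x, y, w, h] boxes to the absolute pixel coordinates
# that COCO annotations expect. Box ordering and [0, 1] normalization are
# assumptions about the source format.
def rel_to_abs_box(rel_box, img_w, img_h):
    x, y, w, h = rel_box
    return [x * img_w, y * img_h, w * img_w, h * img_h]

def load_binary_mask(path):
    """Load one independent mask file and binarize it (assumes Pillow and
    numpy are available; not called in the demo below)."""
    import numpy as np
    from PIL import Image
    return (np.array(Image.open(path).convert('L')) > 0).astype('uint8')

print(rel_to_abs_box([0.25, 0.5, 0.1, 0.2], img_w=1920, img_h=1080))
# → [480.0, 540.0, 192.0, 216.0]
```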
