
About batch and frame

See original GitHub issue

Hi @j96w! Thank you for your work. Regarding lines 131-154 in train.py:

            for i, data in enumerate(dataloader, 0):
                # ... per-frame forward pass and loss.backward(), elided in the post ...
                train_count += 1

                if train_count % opt.batch_size == 0:
                    logger.info('Train time {0} Epoch {1} Batch {2} Frame {3} Avg_dis:{4}'.format(
                        time.strftime("%Hh %Mm %Ss", time.gmtime(time.time() - st_time)),
                        epoch,
                        int(train_count / opt.batch_size),
                        train_count,
                        train_dis_avg / opt.batch_size))

I think one iteration is one batch, so the batch number should be equal to train_count. In the logger.info call, why is the batch number int(train_count / opt.batch_size) while the frame number is train_count?
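
As the maintainer's reply below explains, the dataloader here yields a single frame per iteration, and gradients are accumulated across opt.batch_size frames before one optimizer step. A minimal sketch of the counting, using a hypothetical opt.batch_size of 8:

    # Sketch of the counting logic above, assuming the DataLoader yields one
    # frame per iteration (loader batch size 1) and opt.batch_size is the
    # gradient-accumulation size. The value 8 is hypothetical.
    opt_batch_size = 8
    train_count = 0
    for frame in range(24):            # pretend the loader yields 24 frames
        train_count += 1               # counts frames, one per iteration
        if train_count % opt_batch_size == 0:
            batch_number = train_count // opt_batch_size
            print('Batch {0} Frame {1}'.format(batch_number, train_count))
    # -> Batch 1 Frame 8, Batch 2 Frame 16, Batch 3 Frame 24

So train_count counts frames, and train_count / opt.batch_size counts completed (virtual) batches.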

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 10 (2 by maintainers)

Top GitHub Comments

4 reactions
j96w commented, Mar 21, 2019

To realize the pixel-wise dense fusion idea introduced in this work, we need to keep the correspondence between each RGB pixel and its corresponding depth (see the correspondence index choose returned by the dataloader in each iteration). The main problem with batch training here is that the input RGB-D crops differ in size (with a large variance, from 40x40 to 400x400, because the objects themselves differ in size). If we resized or padded the crops to a common size, as most other works do, the correspondence would be hard to maintain: resizing a depth image is not the same operation as resizing an RGB image, and it deforms the point cloud generated from the resized depth. Thanks for mentioning this point; we already have a buffering idea to improve the training efficiency of this code with respect to this issue and will update soon.
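
For context, here is a minimal sketch of the depth back-projection that this pixel-to-point correspondence relies on. The pinhole intrinsics fx, fy, cx, cy, the backproject function, and the index arrays are hypothetical stand-ins, not the repository's actual code:

    import numpy as np

    # Hypothetical pinhole intrinsics; real values come from the dataset.
    fx, fy, cx, cy = 572.4, 573.6, 325.3, 242.0

    def backproject(depth, rows, cols):
        """Lift the chosen depth pixels at (rows, cols) into 3D camera space."""
        z = depth[rows, cols]
        x = (cols - cx) * z / fx
        y = (rows - cy) * z / fy
        return np.stack([x, y, z], axis=1)   # (N, 3) point cloud

    depth = np.random.rand(48, 48)           # hypothetical 48x48 depth crop
    rows, cols = np.nonzero(depth > 0.5)     # e.g. the chosen pixel indices
    cloud = backproject(depth, rows, cols)

    # The same (rows, cols) index the RGB crop, so point i in the cloud is
    # paired with RGB feature i. Resizing the crop changes both the pixel
    # coordinates and (via interpolation) the depth values, so the resized
    # depth no longer back-projects to the same, undeformed point cloud.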

2 reactions
emigmo commented, Apr 24, 2019

@gaobaoding Hi, I can explain your confusion. In PyTorch, if you call loss.backward() batch_size times first and only then call optimizer.step(), the gradients are summed automatically and a single parameter update is performed for the whole accumulated batch (SGD). Calling optimizer.step() after every single loss.backward() would instead update the parameters every time (GD). SGD behaves better than GD, as I think you know.
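
A minimal PyTorch sketch of the accumulation pattern described above; the model, the data, and the accumulation size are hypothetical:

    import torch

    model = torch.nn.Linear(10, 1)          # hypothetical stand-in model
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
    accum = 8                               # plays the role of opt.batch_size

    optimizer.zero_grad()
    for i in range(32):                     # pretend each step loads one frame
        x = torch.randn(1, 10)              # hypothetical per-frame input
        loss = model(x).pow(2).mean()       # hypothetical per-frame loss
        loss.backward()                     # gradients accumulate in .grad
        if (i + 1) % accum == 0:
            optimizer.step()                # one update per `accum` frames
            optimizer.zero_grad()           # reset for the next virtual batch

Because gradients simply sum across successive backward() calls, this is equivalent (up to loss scaling) to one backward pass over a batch of accum frames, which is why the variable-sized crops can still be trained as virtual batches.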

Read more comments on GitHub.

