Replicating GT/SH in Table 1
I have a question about the experiment GT/SH described in Table 1. If I understand correctly, this is the setup:
- Train input is 16 ground-truth 2D keypoints (i.e. 17 minus the neck), taken from all the videos of subjects 1, 5, 6, 7, and 8.
- Test input is 16 stacked-hourglass 2D keypoints, taken from all the videos of subject 9 and all the videos except Directions from subject 11.
- Output for both train and test is 17 3D keypoints.
- Protocol #2 is used, which corresponds to using all frames, all cameras, averaging over the actions, and applying Procrustes alignment (see the sketch right after this list).
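For concreteness, here is how I understand the protocol #2 error for a single frame. This is my own minimal NumPy sketch; the function name and the (num_joints, 3) array layout are my assumptions, not code from this repository.

```python
import numpy as np

def procrustes_aligned_error(predicted, target):
    """Mean per-joint error after a similarity alignment (rotation,
    translation, scale) of the prediction to the ground truth, i.e.
    protocol #2. predicted, target: (num_joints, 3) arrays for one frame."""
    mu_pred, mu_gt = predicted.mean(axis=0), target.mean(axis=0)
    pred0, gt0 = predicted - mu_pred, target - mu_gt
    norm_pred, norm_gt = np.linalg.norm(pred0), np.linalg.norm(gt0)
    pred0, gt0 = pred0 / norm_pred, gt0 / norm_gt

    # Optimal rotation from the SVD of the cross-covariance matrix
    U, s, Vt = np.linalg.svd(gt0.T @ pred0)
    V = Vt.T
    R = V @ U.T
    if np.linalg.det(R) < 0:  # avoid an improper rotation (reflection)
        V[:, -1] *= -1
        s[-1] *= -1
        R = V @ U.T

    scale = s.sum() * norm_gt / norm_pred
    t = mu_gt - scale * (mu_pred @ R)
    aligned = scale * (predicted @ R) + t
    return np.mean(np.linalg.norm(aligned - target, axis=1))
```

My reading of the "all frames, all cameras, averaging over the actions" part is that these per-frame errors are averaged within each action, and the per-action means are then averaged to give the reported number.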
In order to replicate the above setup I changed the following lines in your code from:
```python
# Read stacked hourglass 2D predictions if use_sh, otherwise use groundtruth 2D projections
if FLAGS.use_sh:
  train_set_2d, test_set_2d, data_mean_2d, data_std_2d, dim_to_ignore_2d, dim_to_use_2d = data_utils.read_2d_predictions(actions, FLAGS.data_dir)
else:
  train_set_2d, test_set_2d, data_mean_2d, data_std_2d, dim_to_ignore_2d, dim_to_use_2d = data_utils.create_2d_data( actions, FLAGS.data_dir, rcams )
```
to
```python
# Use GT for train and SH for test
train_set_2d, _, data_mean_2d, data_std_2d, dim_to_ignore_2d, dim_to_use_2d = data_utils.create_2d_data( actions, FLAGS.data_dir, rcams )
_, test_set_2d, _, _, _, _ = data_utils.read_2d_predictions(actions, FLAGS.data_dir)
```
so I can train on GT and test on SH (i.e. GT/SH) instead of SH/SH or GT/GT.
However, I'm having a hard time reaching 60.52: at the first epoch the error is 63.95, and it grows instead of decreasing (at epoch 5 it is 66.58).
I used the following command to train:
```
python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise --use_sh --epochs 100
```
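One more thing I am unsure about, and this is purely an assumption on my part about the data pipeline: with my change, the mean/std kept for normalization come from create_2d_data (GT statistics), while read_2d_predictions presumably standardizes the SH test poses with its own statistics, so train and test inputs may not share the same normalization. If that is the case, a hypothetical helper along these lines would re-apply the GT training statistics to the test data (renormalize and its exact arguments are names I made up; it assumes the SH test poses are available un-normalized as a dict of (n_frames, n_coords) arrays):

```python
import numpy as np

def renormalize(poses_2d, data_mean_2d, data_std_2d, dim_to_use_2d):
    """Hypothetical helper: z-score un-normalized 2D test poses with the
    statistics computed on the training (GT) distribution, keeping only the
    coordinates the model actually consumes."""
    out = {}
    for key, poses in poses_2d.items():
        out[key] = (poses[:, dim_to_use_2d] - data_mean_2d[dim_to_use_2d]) \
            / data_std_2d[dim_to_use_2d]
    return out
```

Whether this is actually needed depends on what read_2d_predictions returns internally, which I have not verified; if the readers already guarantee a consistent normalization, please ignore this part.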
Thanks a lot for any help on this matter!
@una-dinosauria, thanks for the pointer to my work.
@meijieru, Fig. 4(c) in our paper summarizes the performance when training on GT and testing on SH detections.
Here's the GitHub page if you'd like to find the code and additional details.
@Nicholasli1995, I do not maintain the PyTorch implementation. Please open an issue in the corresponding repository.
The expected number should be the one in the paper.