How to train a video model?
Thank you for the great work!
I have some questions.
I want to use DeOldify to train a video model like pix2pix. Which notebook should I use?
I found that ColorizeTrainingVideo.ipynb does not convert color images to black-and-white images as training data, but ColorizeTrainingArtistic.ipynb and ColorizeTrainingStable.ipynb do.
Should I train ColorizeTrainingArtistic.ipynb or ColorizeTrainingStable.ipynb first, and then train ColorizeTrainingVideo.ipynb?
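For context, the black-and-white training inputs those notebooks generate are essentially grayscale copies of the color targets. A minimal sketch of that color-to-black-and-white step, using PIL directly rather than the notebooks' own data preparation, might look like this (the directory names are placeholders):

```python
from pathlib import Path
from PIL import Image

def make_bw_inputs(color_dir: Path, bw_dir: Path) -> None:
    """Create grayscale copies of color training images.

    Illustration only: the DeOldify notebooks perform this conversion
    (plus extra degradation) inside their own data pipeline, not with a
    standalone script like this.
    """
    bw_dir.mkdir(parents=True, exist_ok=True)
    for img_path in color_dir.glob("*.jpg"):
        img = Image.open(img_path).convert("L")          # drop the color information
        img.convert("RGB").save(bw_dir / img_path.name)  # keep 3 channels for the model

make_bw_inputs(Path("data/color"), Path("data/bw"))
```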

You have a few options. If you want to start completely from scratch, then you should train ColorizeTrainingStable.ipynb first; this is stated at the top of ColorizeTrainingVideo.ipynb.
But this isn’t the only way to get a pretrained generator. You can also download its weights from the links in the readme.
Download the stable weights if you want to start ColorizeTrainingVideo.ipynb exactly as if you had trained ColorizeTrainingStable.ipynb yourself first. Or download the video weights to go straight to GAN training and skip the pre-training on noise augmentation.
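Concretely, the downloaded .pth file just needs to land where the notebook's generator learner expects to find it. Here is a rough sketch, assuming the models/ folder layout and the fastai v1 Learner.load convention the training notebooks use; check the weight file names against the readme and your copy of the notebook:

```python
from pathlib import Path
import shutil

# Assumptions to verify against your checkout and the readme:
#   * the notebooks' learners look for weights in <repo>/models/<name>.pth
#   * the downloaded generator weights are named e.g. ColorizeVideo_gen.pth
repo_root = Path("DeOldify")                                   # path to your clone
downloaded = Path("~/Downloads/ColorizeVideo_gen.pth").expanduser()

models_dir = repo_root / "models"
models_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(downloaded, models_dir / downloaded.name)

# Inside the notebook, the generator learner can then pick the weights up by name
# with fastai v1's Learner.load, along the lines of:
#   learn_gen.load("ColorizeVideo_gen", with_opt=False)
```

Either way, ColorizeTrainingVideo.ipynb then runs its GAN training on top of a generator that can already colorize still images.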
I’ve already said no dude. I’m not sure what you think changed between now and then…