Colourising B&W footage with DeOldify

If you’ve been following the channel lately, or had one of the videos I’ve uploaded recently recommended to you, you’ll know that over the past couple of weeks I’ve uploaded a lot of colourised black and white footage – and in this video, I want to show you just how I did it.

First things first, you’ll need a video that you want to convert – so find something on your favourite video platform: YouTube, Dailymotion, Twitch, Vimeo, or wherever else your source black and white footage is hosted.
In this example, I’m going to use YouTube because we’re all here and have access to it.

Part 1: Colourising your footage

Visit bit.ly/deoldifyvideos, and this will load what’s called a Jupyter Notebook.
If you have a Google account, what I’d recommend is clicking Save to Drive at the top – this will store a copy of the notebook in a folder on your Google Drive.

It is possible to skip this step, but I’ve found the process to be more reliable when the notebook is saved to your Drive, since it can then actually save changes to your cloud storage; whereas if you run it straight from this URL, it may give you an error message that files couldn’t be saved.

We’ll get to that later.

Scroll down to the bottom of the page, and where it says ‘Source URL’, paste the link of the video you want to convert.
Different sites work differently; in my experience with YouTube, it’s best not to use the shortened youtu.be URL but the full youtube.com link.

Now open the Runtime menu at the top and select Run All (the keyboard shortcut for this is Ctrl + F9).

While that’s running, I’ll use this time to explain what a Jupyter Notebook is. It’s a virtual environment which allows you to safely run code from within your browser, in different ‘cells’.
Each cell executes its commands and snippets of code in sequence, using the computing power of a virtual computer on a server somewhere else in the world: this is called an instance.

You’ll see that the first command it runs downloads a resource from GitHub.
If you wanted, you could even click that link and install this software on your own computer, but this will require a lot of computing power from your own machine. I won’t go into that in this video – and don’t recommend it unless you have a powerful graphics card that can keep up with it.
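
For reference, that opening cell is essentially a one-line shell command run from within the notebook (the exclamation mark tells Jupyter to pass the line to the shell rather than running it as Python) – something along these lines, assuming the standard DeOldify repository:

    !git clone https://github.com/jantic/DeOldify.git DeOldify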


Once it starts running, you’ll be able to see the output of what’s happening, and the cell that’s currently running will be represented by a spinning circle around the Play button.

The steps before the ‘Instructions’ section will install the prerequisites onto your instance to perform the task of colourisation: the required Python libraries, the trained model data for the machine learning to run from, and finally the colouriser itself.
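
In notebook terms, those setup cells boil down to a handful of shell commands. As a rough sketch only – the exact file names and download URL may differ from the current notebook:

    # Install the Python dependencies the project lists for Colab
    !pip install -r colab_requirements.txt
    # Fetch the pre-trained model weights the colouriser runs from
    !mkdir -p models
    !wget https://data.deepai.org/deoldify/ColorizeVideo_gen.pth -O ./models/ColorizeVideo_gen.pth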

The ‘Colorize!!’ section is the one we’ll spend most of our time in – once the setup above has run, we can largely ignore it.
I’d recommend keeping the render_factor at 21, and, for the sake of consistency, leaving the watermark enabled.

This doesn’t mean that it’s going to plaster the logo of this software onto your video; it will simply put a small palette icon at the bottom left to indicate that this colourisation was performed by an AI.
With the output of machine learning and AI being so hard to distinguish from reality, this gives the viewer an indicator that a computer had a hand in the process.
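
Under the hood, that cell amounts to little more than a single call into DeOldify’s Python API – roughly the following, though the exact function and parameter names are from my own reading of the project and may not match the notebook:

    from deoldify.visualize import get_video_colorizer

    colorizer = get_video_colorizer()
    # render_factor controls how heavily the model works the frames;
    # watermarked adds the small palette icon mentioned above
    video_path = colorizer.colorize_from_url(
        source_url='https://www.youtube.com/watch?v=...',  # your video
        file_name='video.mp4',
        render_factor=21,
        watermarked=True,
    )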

Once running, you’ll see a green and grey progress bar – depending on the length and quality of your video, this may take some time.
It will sit for a long time on 100%, but don’t worry – it is still working, and will eventually notify you that the process is complete.

Click the folder icon on the left-hand side of your browser, and navigate to the DeOldify folder, then video, then result.

You should then see a file called video.mp4 – right click this, and select Download.
Again, this will take some time, represented by a circular orange progress bar; once done, it’ll give you a file download dialogue box where you can save your video wherever you’d like to store it.

Part 2: Re-encoding to 4K

Note that the video.mp4 file you’ve downloaded will keep the same frame rate and resolution as the original source footage: between the source being old, low quality footage and YouTube’s compression, your footage will be colourised, but it’ll be just as low quality as the original.

If we truly want to bring the footage into the 21st century, we want it to be 60 frames per second, in at least 4K.
For this, I use a free command line tool called FFmpeg, and these are the parameters I used – I’ll also put them in the description; just replace the in.mp4 and out.avi filenames with your own input and output filenames.
I’ve saved it to a .bat file in the folder with all my videos, so I can write a list of all the videos to convert and just run the batch file, which processes them all sequentially, one after another.
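
As a rough illustration of that idea – these are not my exact parameters, just one plausible way to get 4K at 60 frames per second out of FFmpeg – a short Python script could loop over a list of files and shell out to ffmpeg for each one:

    import subprocess

    # Hypothetical list of downloaded DeOldify outputs to convert
    videos = ['clip1.mp4', 'clip2.mp4', 'clip3.mp4']

    # Upscale to 3840x2160 with Lanczos scaling, then interpolate to 60 fps
    filters = 'scale=3840:2160:flags=lanczos,minterpolate=fps=60'

    for name in videos:
        out_name = name.replace('.mp4', '-4k.mp4')
        subprocess.run(
            ['ffmpeg', '-i', name,
             '-vf', filters,
             '-c:v', 'libx264', '-crf', '18',  # reasonably high quality H.264
             out_name],
            check=True)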

During early experiments, I noticed that though the resolution was technically 4K, there were patches of colour in certain areas that still made it look low quality.
I applied a generic .cube file as a lookup table to help with basic colour grading, and using the same LUT file with all the different videos gives a certain consistency to the files I upload, despite them being from varying eras and image qualities.
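
If you’re following the sketch above, FFmpeg can apply a .cube lookup table with its lut3d filter – the filename here is just a placeholder – by adding it to the front of the same filter chain, so the grade is applied before the upscale:

    # Apply the LUT first, then upscale and interpolate as before
    filters = 'lut3d=file=grade.cube,scale=3840:2160:flags=lanczos,minterpolate=fps=60'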

Colourising vintage B&W photos of Sheffield

I used machine learning to colourise some old B&W photos of Sheffield I found online, and then upscaled them to twice the resolution so they wouldn’t lose detail in HD.
I then used GIMP to adjust the exposure and white balance and to clean up the brightness and contrast, but did not otherwise alter the content of the images.

The (free) tools I used

DeOldify (https://bit.ly/deoldifyimages)
This tool is a Jupyter notebook that will take a black and white image from a URL, and use machine learning to colourise it for you.

There’s also DeOldify Videos (https://bit.ly/deoldifyvideos) which will do the same thing, but with video footage – I didn’t use this for this particular project, but plan to make a video on it soon.

Zyro Image Upscaler (https://zyro.com/tools/image-upscaler) – because some of the source pictures were quite small, when stretched to fit the size of the 1080p video they would appear pixelated.
Zyro is a free tool that uses machine learning to upscale your image (to around double the resolution).

The Process

First, I found various black and white images of Sheffield on places like DuckDuckGo Images and Pinterest, and saved them all to a folder on my computer: the source images were all different resolutions and qualities – this is why the style looks inconsistent.

Once I had the images downloaded, I uploaded them to a folder on my server, so that I could provide the exact URLs to DeOldify without having to worry about hitting hotlinking restrictions on the original images.
I purposely did not remove watermarks because I think that’s a really crappy thing to do.

One by one, I ran the process on each image. I could quite easily have written a for loop to increment the number in the filename, but I didn’t want to automate it, because I knew that all the images were different and would require tweaking individually.
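
Had the images been more uniform, that loop might have looked something like this – the server path is a placeholder, and each URL would then be fed into the notebook (or the API call sketched further down):

    # Hypothetical: sequentially numbered files on my own server
    base_url = 'https://example.com/sheffield/source-{:02d}.jpg'

    # Generate the 22 source URLs to paste into the notebook one by one
    urls = [base_url.format(i) for i in range(1, 23)]
    for url in urls:
        print(url)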

Using DeOldify

DeOldify uses machine learning to colourise your photographs, so it requires a model that has already been trained on sets of data, and it will use that knowledge of what it has learnt to give you its final result.
It is possible to download it and run it on your own machine, which is great if you have a graphics card that costs about the same as a second hand car, but how we will be doing it is through a Google-hosted cloud instance – we’ll be using Google’s computing power to process the image.

With the DeOldify link, we can do this from the Jupyter Notebook within our browser:

Sign in to Google, then press the play button on each section sequentially: this prepares the virtual environment for the session, installing the required repositories and software onto your Google-hosted cloud computing instance.

When you get to this section, providing you do not have any errors – you’ll be able to enter a URL for DeOldify to process in the source_url section.
Theoretically, you can find any black and white image you want to process, right click it, select Copy Image Location (this may be worded slightly differently depending on which browser you use) and paste it in here.

The render_factor slider works a bit like colour saturation: if it’s too low, the colour changes will barely be visible, but if it’s too high, the colours become oversaturated and can bleed outside the edges of objects.
I’ve found with the set of images I used that somewhere between 25 and 35 will give you the best results – play with it and see what works for you. Again, with the variation of images I used, I needed to tweak it for each individual image.
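
For the curious, the cell behind that slider reduces to a call into DeOldify’s Python API, so you could sweep a few render_factor values in code and compare; the function names below are from my own reading of the project and may differ from the current notebook:

    from deoldify.visualize import get_image_colorizer

    colorizer = get_image_colorizer(artistic=True)

    # Try a few render_factor values to compare the saturation levels
    for rf in (25, 30, 35):
        colorizer.plot_transformed_image_from_url(
            url='https://example.com/sheffield/source-01.jpg',  # placeholder
            render_factor=rf,
        )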

Once you’ve hit play on this section, there’ll be a little delay, and then it will show you your colourised image and, below it, a comparison of the original image and your image.
Right click and save the output image; if you want to save the side by side comparison, you can do that the same way too.

I saved them into a ‘processed’ folder on my hard drive to keep them separate from the black and white originals, which were in a ‘source’ folder.

Remember that DeOldify will only save the image at the same resolution as the source picture. A lot of the images I used were under 1000 pixels wide, but my output resolution for this video would be 1920×1080 – roughly twice that.

Zyro Image Upscaler

If I used the images as-is, they’d appear pixelated and unclear when displayed on an HD screen, so I used the Zyro Image Upscaler, another machine learning tool, to upscale the images.
The interface is simple: you can just drag an image onto it, and it will do the rest for you. Once complete, it will show you a preview of the upscaled image, and if you follow the route to download it, it will ask for your email address so that it can send you a link to the full size image.

I wasn’t keen on that idea, especially since I had 22 images to process, so I cheekily right clicked on the image it had offered me and selected Open in New Tab.
Surprisingly, it gave me the full size version in a new browser tab anyway, without my having to enter an email address, so I just saved that and used it in my video.

Motion Capture!

I got my hands on a Kinect with the sole purpose of using it for motion capture; this is an early experiment in capturing data and importing it into Unreal.
I will revisit this with better lighting, attire (not a dressing gown!) and some words.

The (free) tools used

iPi Recorder and iPi Motion Capture Studio (https://ipisoft.com/download)
These two tools are used in conjunction with each other to record the motion capture data from your Kinect and export it to a format that your 3D software or game engine can interpret.

When you first install these, it will also install some additional components, such as the Kinect SDK.

The Process

Once your software is fully installed and your Kinect camera is connected, load up iPi Recorder; assuming your Kinect has been detected, you’ll be able to select it and press Record.

Before you start recording, you’ll need to select the Background tab, and press Evaluate Background without being in the shot – this is so that the software can differentiate between what’s in the foreground (you, or your subject) and the background.
You’ll need to ensure that the ground is visible, by adjusting the elevation of the camera with the slider at the top of the output display.

Once you’ve done this, head over to the Record tab, and start recording your footage.


I will say that the interpretation isn’t perfect, but if you move slowly at first – you’ll be able to figure out the nuances of what works, and what doesn’t.