Ctrl + A to select all, then Effect – Normalize (this’ll adjust the waveform so it’s at a consistent volume). Next, select ONLY the silence at the beginning, before you begin speaking: Effect – Noise Reduction – Get Noise Profile – OK.
Ctrl + A to select all again, then Effect – Noise Reduction – OK. Normalize again until your audio sounds clear. Repeat if necessary.
Commander Keen is a series of six side-scrolling platform games, primarily developed by id Software beginning in 1990.
The premise of the story is that you play as Commander Keen, the secret identity of 8-year-old genius Billy Blaze, and you have to defend the galaxy from alien attacks.
The first instalment is Invasion of the Vorticons, and this is an attempt to model the layout of the first level in 3D, even to the point of decoding the signage from the Standard Galactic Alphabet, which SheepIt didn’t recognise – so the translations show on the last segment (a happy accident!)
I’ve not modelled any collectibles like the old-school Pepsi can, pizza slice, raygun, lollipop, book or the player and enemies, because I will set these up as actors within Unreal.
Fixed version of yesterday’s upload that had the audio issues: my voice is at normal pitch and speed in this one.
Recapping some of the instructions from the playlist on this tool creator’s YouTube channel, since the videos are silent. Also featured is a first-person mode called Advanced Shooter Game, which I may do another video on in the future.
I’ve experimented with different GameModes and default Pawn classes for the Player from the Unreal Marketplace, and so far have found some where I can damage the AI, and some where the AI can damage the player, but none yet where the damage works both ways.
Here’s a little rundown of the Marketplace assets I’ve been playing with in the Sheffield map while trying to explore a framework to base the game on. I’ve also explored vehicles, because a lot of people I show my game to immediately ask about driving cars…
For me and my partner, the weekend hasn’t officially begun until we watch these guys, Da Tweekaz on their Friday livestream.
I’ve been working on replicating their studio in Blender and Unreal Engine for a while, but kept it relatively hidden – then, in an out-of-the-blue comment on one of my other fan art videos, Bassbrain told me that now was a serendipitous time to show a little montage of the work-in-progress.
The hardest part was choosing a favourite track – there are so many! So I went with the most apt: Tweekacore & Darren Styles – Partystarter
Development gameplay footage of a couple of areas I’ve been working on around the Sheffield city centre; these are designed purely as map elements. I’ve used some artistic license (and Megascans) to help the look and flow of the level. It’s set in the near-future, so I’m allowed!
Timestamps
00:00 – Intro
00:04 – Fictitious area behind St Paul’s Tower to encourage skirmishes
00:48 – Winter Gardens: Botany museum / laboratory
00:55 – Passageway between Winter Gardens and Millennium Gallery
01:08 – Scientific laboratory greyblock
01:27 – Peace Gardens
01:56 – The roof of The Roebuck
If you’ve been following the channel lately, or had one of the videos I’ve uploaded recently recommended to you, you’ll know that over the past couple of weeks I’ve uploaded a lot of colourised black and white footage – and in this video, I want to show you just how I did it.
First things first, you’ll need a video that you want to convert – so find something on your favourite video platform: YouTube, Dailymotion, Twitch, Vimeo, or wherever else your source black and white footage is hosted. In this example, I’m going to use YouTube because we’re all here and have access to it.
Part 1: Colourising your footage
Visit bit.ly/deoldifyvideos, and this will load what’s called a Jupyter Notebook. What I’d recommend if you have a Google account is to Save to Drive at the top – so what this will do is store your video in a folder on your Google Drive.
It is possible to skip this step, but I’ve found it to be more reliable if it’s saved to your Drive; so it can actually save the changes to your cloud storage: whereas if you run the process from this URL to start with, it may give you an error message that files couldn’t be saved.
We’ll get to that later.
Scroll down to the bottom of the page, and where it says ‘Source URL’, paste the link of the video you want to convert. Different sites work differently, and in my experience – when using YouTube, it’s best not to use the shortened URL but the full youtube.com link.
Now select the Runtime menu at the top and select Run All (the keyboard shortcut for this is Ctrl + F9).
While that’s running, I’ll use this time to explain what a Jupyter Notebook is. It’s a virtual environment which allows you to safely run code from within your browser in different ‘cells’. Each cell will execute different commands and snippets of code in sequence from your browser, using the computing power of a virtual computer on a server somewhere else in the world: this is called an instance.
You’ll see that the first command it runs downloads a resource from Github. If you wanted, you could even click that link and install this software on your own computer, but this will require a lot of computing power from your own machine. I won’t go into that in this video – and don’t recommend it unless you have a powerful graphics card that can keep up with it.
Once it starts running, you’ll be able to see the output of what’s happening, and the cell that’s currently running will be represented by a spinning circle around the Play button.
The steps before the ‘Instructions’ section will install the prerequisites needed onto your instance to be able to perform the task of colourisation: it’ll download the required python libraries, and the training data for the machine learning to run from, and finally the colouriser itself.
The ‘Colorize!!’ section is the one we’ll spend most of our time in – once the sections above it have run, we can largely ignore them. I’d recommend keeping the render_factor at 21, and for the sake of consistency – leave the watermark enabled.
This doesn’t mean that it’s going to plaster the logo of this software onto your video; it will simply put a small palette icon at the bottom left to indicate that this colourisation was performed by an AI. With machine learning and AI being so indistinguishable from reality, this gives the viewer an indicator that a computer had a hand in this process.
Once running, you’ll see a green and grey progress bar – depending on the length and quality of your video, this may take some time. It will sit for a long time on 100%, but don’t worry – it is still working, and will eventually notify you that the process is complete.
Click the folder icon on the left hand side of your browser, and navigate to the Deoldify folder, then video, and result.
You should then see a file called video.mp4 – right click this, and select Download. Again, this will take some time and will be represented by a circular, orange progress bar: once done, it’ll give you a file download dialogue box, where you can save your video to where you’d like to store it.
Part 2: Re-encoding to 4K.
Note that the video.mp4 file you’ve downloaded will be saved at the same frame rate and resolution as the original source footage: with a combination of it being old, low quality footage and YouTube’s compression, this will colourise your footage, but it’ll be just as low quality as the original.
If we truly want to bring the footage into the 21st century, we want it at 60 frames per second, in at least 4K. For this, I use a free command line tool called FFmpeg – these are the parameters I used (I’ll also put them in the description); just replace the in.mp4 and out.avi filenames with your in/out filenames. I’ve saved it to a .bat file in the folder with all my videos, so I can write a list of all the videos to convert and just run this batch file so it processes them all sequentially, one after another.
During early experiments, I noticed that though the resolution was technically 4K, there were patches of colour in certain areas that still made it look low quality. I applied a generic .cube file as a lookup table to help with basic colour grading, and using the same LUT file with all the different videos gives a certain consistency to the files I upload, despite them being from varying eras and image qualities.
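The exact parameters from the video aren’t reproduced here, but a command along these lines covers the steps described above – upscaling to 4K, converting to 60 fps, and applying a .cube LUT. This is a sketch, not the original batch file: the filter settings, codec choice and the grade.cube filename are my assumptions, so swap in your own values and filenames.

```shell
# Sketch of a 4K/60fps re-encode with a LUT applied, using FFmpeg.
# NOTE: grade.cube, the codec and the filter values are placeholders/assumptions,
# not the exact parameters from the video. Replace in.mp4 / out.mp4 as needed.
ffmpeg -i in.mp4 \
  -vf "lut3d=grade.cube,scale=3840:2160:flags=lanczos,fps=60" \
  -c:v libx264 -crf 18 \
  out.mp4
```

The `fps=60` filter simply duplicates frames to reach 60 fps; FFmpeg’s slower `minterpolate` filter can synthesise in-between frames instead if you want smoother motion.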
I used machine learning to colourise some old B&W photos of Sheffield I found online, and then upscaled them to twice the resolution so they’d not lose detail in HD. I then used GIMP to adjust the exposure and white balance and cleaned up the brightness and contrast, but did not alter the content of the images in any way.
The (free) tools I used
DeOldify (https://bit.ly/deoldifyimages) This tool is a Jupyter notebook that will take your black and white images from a URL, and use Machine Learning to colourise them for you
There’s also DeOldify Videos (https://bit.ly/deoldifyvideos) which will do the same thing, but with video footage – I didn’t use this for this particular project, but plan to make a video on it soon.
Zyro Image Upscaler (https://zyro.com/tools/image-upscaler) – because some of the source pictures were quite small, when stretched to fit the size of the 1080p video they would appear pixelated. Zyro is a free tool that uses machine learning to upscale your image (to around double the resolution).
First, I started by finding various black and white images of Sheffield from places like DuckDuckGo Images and Pinterest, and saved them all to a folder on my computer. The source images were all different resolutions and qualities – this is why the style looks inconsistent.
Once I had the images downloaded, I uploaded them to a folder on my server, so that I could provide the exact URLs in DeOldify without having to worry about hitting hotlink barriers on the original images. I purposely did not remove watermarks, because I think that’s a really crappy thing to do.
One by one, I ran the process on each image. I could’ve quite easily written a for loop to +1 the number of the filename, but didn’t want to automate it because I knew that all the images were different and would require tweaking individually.
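For anyone curious, the loop described above could be sketched like this – the server path is a made-up placeholder, and 22 is just the number of images mentioned later; it only prints the sequential URLs you’d paste into source_url, since each image still needed individual tweaking:

```shell
# Hypothetical version of the "+1 the filename" loop: prints the URL of each
# numbered source image. example.com/source/ is a placeholder path, not the
# author's real server.
for i in $(seq 1 22); do
  echo "https://example.com/source/${i}.jpg"
done
```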
DeOldify uses machine learning to colourise your photographs, so it requires a set of data that has already been trained, and it will use that knowledge of what it has learnt to give you its final result. It is possible to download it and run it on your own machine, which is great if you have a graphics card that costs about the same as a second-hand car, but how we will be doing it is through a Google Cloud instance – we’ll be using Google’s computing power to process the image.
With the DeOldify link, we can do this from the Jupyter Notebook within our browser:
Sign in to Google, then press the play button on each section sequentially: this prepares the virtual environment by installing the required repositories and software onto your Google-hosted cloud computing instance.
When you get to this section, providing you do not have any errors – you’ll be able to enter a URL for DeOldify to process in the source_url section. Theoretically, you can find any black and white image you want to process, right click it and Copy Image Location (it may be worded slightly differently depending on which browser you use) and paste it in here.
The render_factor slider works a bit like colour saturation: if it’s too low, the colour changes will barely be visible; if it’s too high, the saturation will be overblown and the colours will bleed outside the edges of objects. I’ve found that with the set of images I used, somewhere between 25 and 35 gives the best results – play with it and see what works for you. Again, with the variation of images I used, I needed to tweak it for each individual image.
Once you’ve hit play on this section, there’ll be a little delay – and it will show you your colourised image, and below it, a comparison of the original image and your image. Right click, and save the output image: if you want to save the side by side comparison, you can do that the same way too.
I saved it into a ‘processed’ folder on my hard-drive to keep them separate from the black and white originals, which were in a ‘source’ folder.
Remember that DeOldify will only save the image at the same resolution as the source picture. A lot of the images I used were under 1000 pixels across, but my output resolution for this video is 1920×1080 – roughly twice that size.
Zyro Image Upscaler
If I used the images as-is, they’d appear pixelated and unclear when displayed on an HD screen, so I used the Zyro Image Upscaler, another machine learning tool, to upscale the images. The interface is simple: you can just drag an image onto it and it will do the rest for you. Once complete, it will show you a preview of the upscaled image – and if you follow the route to download it, it will ask for your email address so that it can send you a link to the full-size image.
I wasn’t keen on that idea, especially since I had 22 images to process, so I cheekily right-clicked on the image it had offered me and selected Open in New Tab. Surprisingly, it gave me the full-size version in a new browser tab anyway, without my having to enter an email address, so I just saved that and used it in my video.