Sketchup to Unreal (via Blender)

Transcript

Hello – and welcome, or welcome back to the channel. I’m going to be using some Sheffield buildings I found on 3D Warehouse, and importing them into my Unreal map. Since they use imagery from Google Earth, I can’t actually use them in the game – but for a sense of where things should go, and how they should look, they’re a lot more useful than the grey blockout buildings you see on the screen. The building we’ve just run past is the Montgomery Theatre on Surrey Street.

I’ve got 3D Warehouse open, and I’m going to use Sheffield Library for this example. So, I download the Sketchup file. I’m going to use the 2023 version, because I’ve downloaded the latest Sketchup – and activated a 30-day free trial of Sketchup Pro. The reason for this is that the free version of Sketchup doesn’t allow you to export files in the way we’ll need to.

I should also apologise for the ReStream overlay – I wasn’t expecting it to come up while I was recording offline screen activity, and I don’t feel like re-recording the footage.

Once I’ve opened it, I’ll go to File → Export, and save it as a Collada (.dae) file. Remember to select Options and ensure that ‘Export texture maps’ is enabled.

I’ll jump over to Blender, where I have a map of Sheffield from the blender-osm plugin, and find the Library. In this map, I’ve coloured all the Sketchup imports yellow so that I can identify them later – and I’ll search for Library. Once I’ve found it in the Outliner, I’ll press Numpad full stop (period) to jump to the building on the map.

I’ll go into Edit Mode, select the roof, press Shift S → Cursor to Selected, and go back to Object Mode.

Now, once I’ve imported the .dae file, I’ll press Shift S again and this time choose ‘Selection to Cursor’.

This will snap the imported Sketchup file to the approximate region of the actual building on the map.

I’ll eyeball it so it lines up with the map, then select Set Origin → Origin to Geometry.
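For anyone who prefers scripting, the same cursor-snapping steps can be sketched in Blender’s Python console. This is an assumed outline, not part of my recorded workflow – it has to run inside Blender (the view3d operators also need a 3D Viewport context):

```python
import bpy

# With the roof faces selected in Edit Mode:
# snap the 3D cursor to the selection (Shift S → Cursor to Selected).
bpy.ops.view3d.snap_cursor_to_selected()

# Back in Object Mode, with the imported .dae object selected:
# snap the object onto the cursor (Shift S → Selection to Cursor).
bpy.ops.view3d.snap_selected_to_cursor(use_offset=False)

# Finally, move each selected object's origin to its geometry
# (Object → Set Origin → Origin to Geometry).
bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY')
```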

You’ll be able to tell when it has worked, because all the elements will have those orange dots on them.

Once that’s complete, click File → Export – and this time you want to export to an FBX file.

Now I’m going to head over to Unreal, and import the building.

Open the Content Drawer

I’ve got a subfolder for each building, because they’re made up of a lot of separate mesh parts.

And I’ll drag the FBX file into the newly created folder, and select Import All.

I’ve got a search filter enabled to only show Static Mesh, and I’ll Select All, and drag them anywhere onto the scene.

To put them into place, I’ll reset the Location and Rotation with the backwards-facing arrow button – and then press ‘F’ to jump to the building on the map.

This building actually has quite a bit going on; the yellow mesh is the previous Library I had there – but untextured, so I’ll be replacing that. The dark grey building is the Graves Gallery, which occupies the same building as the library.

I’ll open the content drawer; I want to adjust the collision for all these mesh elements, so I’ll open a row at a time to keep track of which I’ve done.

From the panel on the right, I’ll search for Collision to filter the settings, and change the Collision Complexity to ‘Use Complex Collision as Simple’.

Now I need to do that in all of the open tabs.
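Doing this tab by tab works, but the same change can be batched with the Unreal Editor’s Python scripting. This is a sketch under assumptions – the `/Game/Buildings/Library` folder path is hypothetical, it only runs inside the Unreal Editor, and property names can vary between engine versions:

```python
import unreal

# Hypothetical content folder holding the imported building meshes.
FOLDER = "/Game/Buildings/Library"

for path in unreal.EditorAssetLibrary.list_assets(FOLDER):
    asset = unreal.EditorAssetLibrary.load_asset(path)
    if not isinstance(asset, unreal.StaticMesh):
        continue
    # Switch each mesh's collision to 'Use Complex Collision As Simple'.
    body = asset.get_editor_property("body_setup")
    body.set_editor_property(
        "collision_trace_flag",
        unreal.CollisionTraceFlag.CTF_USE_COMPLEX_AS_SIMPLE)
    unreal.EditorAssetLibrary.save_asset(path)
```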

File → Save All

Now I can delete the yellow mesh-only library, and you can see the textured version in place.

I’ll click Play from Here, and see how it looks in-game.

It’s above ground level, and that’s to be expected – I need to model in the walkway, and the library actually has stairs leading up to the entrance.

And then if we come up to the roof, you can see how the two buildings are merged.

Let’s have a little run across the rooftops, to make sure the collision is working

And while we are here, we can enjoy the view.

From up here, we can see one of the University buildings straight ahead, and if I run over here – we can see a building I’d prepared earlier: the Millennium Gallery.

I’ve done it again with the City Hall. This time I’m going to let it play out in realtime, so you can see what I’m doing.

If we have a quick run around it, we can see that it’s raised off the ground – but in fact it does have steps leading up to it, so it won’t sit flat on the ground. Again, I’ll have to put the walkway in for it.

This is why it’s so important to set up the collision on the mesh elements – on some parts, you’ll see the player seems to be walking on air.

A Very Brief Introduction to Render AI for Blender

In this very brief video, I’m going to give you a very brief introduction to Render AI for Blender.
I’ll show you how to create a basic render using Render AI and then discuss some of the features and benefits of using Render AI.

If you’re new to Render AI or Blender, this video is a great way to get started.
By the end of this video, you’ll have a basic understanding of how to use Render AI in Blender and be able to create basic renders quickly and easily!

Tools used

AI Render
https://blendermarket.com/products/ai-render

Automatic 1111
https://github.com/AUTOMATIC1111/stable-diffusion-webui

Lexica
https://lexica.art

Music

Beave – Talk [NCS Release]
Music provided by NoCopyrightSounds
Free Download/Stream: https://ncs.io/Talk
Watch: https://youtu.be/uZi8_rnqgHg

Animating a portrait of myself with EbSynth

A quick video where I take a portrait of myself that my girlfriend drew and painted, and animate it with EbSynth.

Hello, this is the first video for a while, and I wanted to share a portrait of me that my girlfriend had painted, which I really like – it’s very Van Gogh.
I also wanted to show you how to quickly animate some video footage in the same style using a package called ebsynth.

The first thing you’ll need is some video footage.
I’ve tried to replicate the angle and basic shapes of the picture for the best effect when I come to animate it.
I import the footage into Blender, crop out any jumpy movement at the beginning and move my head around a little.

Once I have a short little clip, I’ll render it out as a PNG sequence.
For the sake of consistency, I’m going to render it at the same resolution as the original image, 1600×1474.
That might sound like a weird resolution, and it is – but I’ll scale it accordingly to fit in the video you’re watching now.
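For reference, those render settings map onto Blender’s Python API roughly like this. A sketch only – it runs inside Blender, and the output path is hypothetical:

```python
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.resolution_x = 1600   # match the painted portrait
scene.render.resolution_y = 1474
scene.render.filepath = "//frames/"  # hypothetical output folder

# Render every frame of the clip out as numbered PNGs.
bpy.ops.render.render(animation=True)
```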

Once I have all my frames rendered as PNGs, I’ll import them into EbSynth in the Video section, and the single image into the Keyframes section.
Choose a directory, and Synth.
It might take a while, so I’ll pause the video here.

And voila.

Sheffield FPS: Reprise

It has been way too long since I’ve worked on this, and a trip into the city centre inspired me to pick up where I’d left off with the project: which so far has been work on a few buildings, and no actual gameplay implementation… yet.

Naturally, I will have to carefully consider the plot-line – masked protesters amidst a war within the city doesn’t feel dystopian anymore, it’s practically a reality.

This is a short video demonstrating the process used to import the base character from the Armory first-person template project into the 3D representation of the map.
Really, it’s a compilation of the past week or so’s screen recordings that I hadn’t done anything with.

And this is it working!

In between these two videos, I’d also done a calibration live-stream, in preparation for future development streams, in which I had one mystery viewer observing the whole show, and I have no idea who it was!

I haven’t embedded it here because it isn’t so integral to the story that it needs embedding, and if I were to episodify them all, this would be a pilot episode, or a dress rehearsal: feel free to have a watch, though.

Videos that I wish I knew what to do with

Colour analysis of some terrible low-resolution video footage of last night’s Uber-Storm over Sheffield. The media had predicted two months’ worth of rain falling within a few hours. It was a very intense thunderstorm.

I’ve done some colour analysis so you can see a breakdown of the chaos within the clouds. Music: the actual thunderstorm (and some weird, unexplainable blips).

There are a few seconds of footage of an Assetto Corsa replay where I’d tried to replicate my friend Jme’s Volkswagen Polo – and _then_ the lightning.

Pubs I miss: The Barge

I’d tried to grab some georeferenced data of The Barge in Grimsby, but it kept crashing. I sped it up and made it fit a dance track.

If you want to do that yourself, add your video footage and then your audio: divide the length (in frames) of your video by the length (in frames) of your audio, add a Speed Transform to your video strip, and set the speed multiplier to the result.
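As a worked example of that calculation (the frame counts below are made up):

```python
def speed_multiplier(video_frames: int, audio_frames: int) -> float:
    """Speed factor that makes the video strip last exactly as long as the audio.

    e.g. a 3000-frame clip squeezed into a 1000-frame track needs 3x speed.
    """
    return video_frames / audio_frames

# Hypothetical clip: 3000 frames of video over 1000 frames of audio.
print(speed_multiplier(3000, 1000))  # → 3.0
```

A value above 1 speeds the clip up; below 1 slows it down to stretch over longer audio.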

Music
S3RL – The Bass & The Melody

"Williams' night out" ft. Sayanti Ghosh

More on the Whatsapp e-collaboration project with Sayanti, in this video – she explains the premise behind the project in both English and Bengali, we try our hand at stitching a story together, and I finish with a walkthrough around the 3D set I’ve been working on.

Development screenshots

Why are there no Bengali subtitles?

Sorry – I did try!

Planning

Sometimes I need to take screen breaks, so I sit with a pen and paper and plot my next moves when I get to my desk.
With the ethos of a swordsman who does not unsheathe his sword unless he is prepared to use it, I take the same approach with screen-recording software: I don’t record unless I’m showing something.
I digress… this was an idea I had.


Keentools Facebuilder [blender addon]

Keentools is a small tool I came across a little while ago but haven’t had much of a chance to play with, so I thought I’d do a post about it. The Keentools Blender addon is currently in beta and free – but once it’s out of beta, it could become a paid product – so I would grab a copy while you can.

How it works is that you first create a head – a blank, generic head object – then insert reference images into the tool, select points on the face and head, and line them up with the actual photograph.


I can see this tool being my go-to for character development: at least for the head and face anyway.

I’ve learnt through experimentation that too many points like this make things very confusing, especially with multiple images from different angles – so a word of warning: try to keep things as simple as possible.

The trade-off is that the more information you give the plugin through these pins, the more accurately it can guess the position of the camera – and it will place a 3D camera in its representative position.



As an example, I’m going to leave it at this for now – this is a poor-quality, low-lit photo, so I’m not expecting amazing results.
These will vastly improve once I have a studio lighting and chroma-key background setup.

Once you’re happy with your model, and are confident that you have captured as much source imagery as possible, you can create a UV map of your subject’s face from the images you’ve provided – and wrap it around the head model.

For the curious, here’s how the UV map looks once it has finished processing. If you look at the ears and the left side of my face, it’s completely blank – the images I took and used as reference for the pins didn’t include that portion of my face, so it hasn’t been mapped, and these blank areas will show on the model as plain black mesh.

The original came out very dark, but that’s my own fault for not managing the 3D lighting, so I’ve compensated by adjusting the colour and levels in GIMP for a realistic look.
A reminder to check all of your views to make sure everything lines up and is in proportion.
You can see from the UV map that there is a part of the right ear missing, so the texture has been approximated and doesn’t have the right scale.
The brightly coloured area to the right of the image is where no texture data was available, so it is rendered as a single colour.

In a production environment I will use this process, but I’ll spend a lot more time on it; this has been a quick experimental demonstration for the purposes of this post.
The game assets will also be captured under studio settings with a 1000-watt halogen lamp giving pure white light, rather than the 60-watt incandescent bulb (with its more yellowy hue) commonly found in homes.

This image is the right shape, but it isn’t the right thickness – currently, it’s only one pixel thick.
If you think of this in real world terms, it’s akin to the thickness of the outer skin of an onion. It needs to be thicker.

For games this will be acceptable, because we want to keep the polygon count down – but what about for cutscenes, video where we want the characters to look believable enough to tell the story?


I’ve applied a Solidify modifier to the mesh, which has thickened every part of it – and this does look more realistic, especially around the nostril and the top of the ear: light shines through it as it would in the real world; it’s not paper-thin like the images above.
Only, I’ve solidified it so much that it doesn’t even look like the same person any more – I’ll have to find a parameter that works for each individual model.



A Tree and a Wiki

There are a lot of trees in Sheffield (we have had some controversy around this of late) – and thankfully we have groups like STAG (Sheffield Tree Action Group) who are doing wonderful things to help keep those numbers up, and prevent them from being unnecessarily felled by our council.

There’s a new addition to the top menu, Wiki.
This’ll be an explorable Wikipedia-alike covering the storyline of the game, and any future projects connected to this universe. It’s based on a series of stories I’ve written before, so the information available will fill up quickly, and won’t detract from progress on the 3D modelling front.

I digress. The most obvious place to house the aforementioned trees would be a building where a range of interesting plant life can be found: the Winter Gardens (you’ll be glad to know that in 2030, it still serves the same purpose).
But first, let’s have a look at this tree:

Sketchfab model

Winter Gardens foliage area


Tree generation

I’ve naturally tried to find a Blender plugin that would help with creating trees, and there are some available, but they are paid plugins.
If possible, I want to avoid this – keeping with the ethos of this site: a shoestring budget studio.

If there’s a resource out there for free that does some of the work for me, saving time and money – I’ll take that, thanks!
My search for a free Blender plugin turned out to be fruitless (pun intended!), so I looked towards free, specialised software instead – and came across Arbaro, and was suitably impressed.



Almost every aspect of the tree is customisable, so you can let the Charles Darwin within you loose, and create any kind of tree you can imagine like a mad scientist genetic botanist.
Once you have created your tree, click File → Export and save it as an .obj file on your computer: you can then import it into Blender either as a new object, or straight into your scene.


When first importing the tree, it did put a strain on my (limited) graphics card, and I would get areas like this across the Blender application, which as you can imagine made it quite awkward to use:


I used the Decimate modifier on it to reduce its poly count from around 29,000 to 1,500.
Not only does this lessen the strain on system resources, it also makes it more appropriate as a game asset.
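The Decimate modifier takes a ratio rather than a target count, so the figure to enter is simply target over current. A quick sketch of that arithmetic (the object name in the comment is hypothetical, and the bpy lines only run inside Blender):

```python
# Decimate 'ratio' = fraction of faces to keep.
current_polys = 29_000
target_polys = 1_500

ratio = target_polys / current_polys
print(round(ratio, 3))  # keep roughly 5% of the faces

# Inside Blender, the same value would be applied along these lines:
#   mod = bpy.data.objects["Tree"].modifiers.new("Decimate", 'DECIMATE')
#   mod.ratio = ratio
```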


I’ve only included this image because the crash makes it look Vaporwave.


Not all models work; this one is overly detailed and results in lots of rogue mesh, though you can see the actual shape of the tree in orange.

Below, I’ve modified the trunk and shape of a Black Tupelo so that it is shorter and more bush-like, and experimented with the leaves.


It turned out massive, and soon after loading, Blender gave up and shut itself down: this’ll have to be something I branch into once I’ve upgraded to a higher-spec PC.

A Black Tupelo

It looks very scientific (above)

The tree with its leaves enabled.

Some renders!

Some slight colour modification in GIMP, no changes to the actual content of the model.

These have been scaled down by your web browser to fit the screen on your device, but you can right-click the image and open it in a new tab to see it full size. Mobile users can usually press and hold on the picture to open it in a new tab, then pinch and zoom around it.

Tonight’s render queue