This will download a zip file. Open and extract it, and it’ll look like this:
Inside the extracted folder, look for a file called model.dae:
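If you’d rather script this step, a .kmz file is just a renamed zip archive, so Python’s standard library can unpack it. A minimal sketch, assuming your download is called model.kmz (the actual name will vary):

```python
import zipfile

# A .kmz is an ordinary zip archive; "model.kmz" is a placeholder name.
def extract_kmz(kmz_path: str, out_dir: str) -> list[str]:
    """Extract a KMZ archive and return the paths of the files inside."""
    with zipfile.ZipFile(kmz_path) as kmz:
        kmz.extractall(out_dir)
        return kmz.namelist()

# Typical contents: a doc.kml plus a models/ folder containing model.dae
# and an images/ folder with the texture files.
```

The .dae (Collada) file is the mesh itself, which you can then bring into Blender via File → Import → Collada (in versions of Blender that ship the Collada importer).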
The textures don’t quite work…
…and then sometimes they do:
Using layouts from KMZ files is finicky in Blender. I’m only interested in the mesh itself, its shape, and I’ll completely retexture it myself. One reason is that .kmz files use textures taken from Google Street View, which, as this is an open-source project, I can’t use (and don’t want to use Google imagery anyway).
Fortunately, as you can see from this (largely upscaled) example of one of the image textures, there’s enough detail to paint over to produce my own textures.
Keentools is a small tool I came across a little while ago but haven’t had much of a chance to play with, so I thought I’d do a post about it. The Keentools Blender addon is currently in beta and is free, but once it’s out of beta it could become a paid product, so I would grab a copy while you can.
How it works is that you first create a head: a blank, generic head object. You then insert reference images into the tool, select points (pins) on the face and head, and line them up with the actual photograph.
I can see this tool being my go-to for character development: at least for the head and face anyway.
I’ve learnt through experimentation that too many points like this makes things very confusing, especially with multiple images from different angles, so a word of warning: try to keep things as simple as possible.
The trade-off is that the more information you give the plugin through these pins, the more accurately it can estimate the position of the camera, and it will place a 3D camera in that estimated position.
As an example, I’m going to leave it at this for now. This is a poor-quality, poorly lit photo, so I am not expecting amazing results; they will vastly improve once I have studio lighting and a chroma-key background set up.
Once you’re happy with your model, and are confident that you have captured as much source imagery as possible, you can create a UV map of your subject’s face from the images you’ve provided, and wrap it around the head model.
For the curious, here’s how the UV map looks once it has finished processing. If you look at the ears and the left side of my face, it’s completely blank: the images I took and used as reference for the pins did not include this portion of my face, so it has not been mapped, and these blank areas will show on the model as plain black mesh.
In a production environment I will use this process, but I will spend a lot more time on it; this has been a quick, experimental demonstration for the purpose of this post. The game assets will also be captured under studio conditions with a 1000-watt halogen lamp, which gives pure white light, rather than the 60-watt incandescent bulb (with its more yellowy hue) commonly found in homes.
This image is the right shape, but it isn’t the right thickness – currently, it’s only one pixel thick. If you think of this in real world terms, it’s akin to the thickness of the outer skin of an onion. It needs to be thicker.
For games this will be acceptable, because we want to keep the polygon count down, but what about cutscenes and video, where we want the characters to look believable enough to tell the story?
I’ve applied a Solidify modifier to the mesh, which has thickened every part of it, and this does look more realistic, especially around the nostril and the top of the ear: light shines through them as it would in the real world, rather than being paper-thin like the images above. Only, I’ve solidified it so much that it doesn’t even look like the same person any more; I will have to find a thickness value that works for each individual model.
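As a sketch of what a per-model thickness might look like: real facial skin is only a few millimetres thick, so one approach is to derive the Solidify modifier’s thickness from a target in millimetres, compensating for the object’s scale. skin_thickness is a hypothetical helper of my own, not part of Blender or Keentools, and the bpy usage is commented out because it only runs inside Blender:

```python
# Hypothetical helper: derive a per-model Solidify thickness.
# Facial skin is roughly 2-4 mm thick, so 3 mm is a sensible default.
def skin_thickness(object_scale: float, base_mm: float = 3.0) -> float:
    """Convert a target thickness in millimetres to Blender units
    (metres), compensating for the object's scale factor."""
    return (base_mm / 1000.0) / object_scale

# Inside Blender (requires bpy), applied to the active head mesh:
#   import bpy
#   obj = bpy.context.active_object
#   mod = obj.modifiers.new(name="Solidify", type='SOLIDIFY')
#   mod.thickness = skin_thickness(obj.scale[0])
```

A model scaled up by a factor of two only needs half the local thickness to end up 3 mm thick in world space, which is why the scale division is there.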
This post will partially be about the Ground object, too – our retopologised mesh of the height map data, Terrain. I wrote about this previously, and will assume that you have read it.
The building in the post is a firm favourite of mine, Old Queens Head. It’s not as if the thought of doing the building hadn’t occurred to me before, but something in there inspired me:
Why not use these as reference images? How apt that the reference images have an artistically constructed floor, on the very same building I began to construct the floor for our city on.
The ground in this scene is sunken slightly where the beer garden is. I cannot sink my ground plane in the same way because the height map, Terrain, will prevent me with its accurate topographical data: it cannot go that low without cutting a hole into it, which I’m not going to do.
So I have to build everything around it, upwards, because that direction isn’t locked.
Notice how the checker-board flows into what will be the right-hand entrance; this will allow for a smooth transition upon entering buildings.
Lord Nelson, a public house on Arundel Street, known for its vibrancy. The only UV mapped parts of this building are the doors and banner: the rest are plain colours picked from the reference image at the bottom right.
The windows are a generic glass texture I’ve been using for almost every building; this is so that any changes I make to the standard glass in the environment will affect every building at once.
I have done this deliberately, because I know that the game engine is very funny about transparent textures, and will refuse to compile if I don’t get it just right.
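The shared-glass idea maps directly onto Blender’s material datablocks: many objects can reference one material, so a single edit propagates to every building. Here’s a sketch under my own assumptions — the material name StandardGlass and the Window naming convention are placeholders, and the bpy part is commented because it only runs inside Blender:

```python
# Assumed convention: window meshes are named "Window.xxx".
def is_window(object_name: str) -> bool:
    """True for objects that should receive the shared glass material."""
    return object_name.startswith("Window")

# Inside Blender (requires bpy), link one shared material datablock:
#   import bpy
#   glass = (bpy.data.materials.get("StandardGlass")
#            or bpy.data.materials.new("StandardGlass"))
#   for obj in bpy.data.objects:
#       if obj.type == 'MESH' and is_window(obj.name):
#           obj.data.materials.append(glass)
```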
The UV map will later be expanded upon with brick textures, and promotional signage for the side of the building like chalkboards and banners.
This is an earlier render from before I’d mapped in the logo text and doors from the UV map above. The black ‘skirting board’ texture also needed a bit of work.
Blender’s Freestyle lines help to add perspective and a bit of a comic-book feel, and they help identify issues with the texturing of the mesh. It really helps with scenes like this, because if you know the area, it automatically looks familiar to you.
We slow things down a little, and relax to some nice tech beats while texturing Charles Street car park, the ‘Cheese Grater’ as it’s oft known by locals.
These are base paintings and will be elaborated upon later, adding finer detail to both the image texture and the mesh it envelops. For now, we’re just wrapping the images around a box and telling our UV map which portion of the image each face wraps onto.
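UV coordinates are just normalised (u, v) positions in the image, running from (0, 0) in one corner to (1, 1) in the other. As a sketch of what “telling the UV map which portion of the image a face wraps onto” means numerically, here’s a hypothetical helper (rect_to_uvs is my own name, not a Blender function) that converts a pixel rectangle in the texture into UV corners:

```python
# Hypothetical helper: map a pixel-space rectangle in a texture
# to the normalised (u, v) corners a quad face's UVs would use.
def rect_to_uvs(x, y, w, h, img_w, img_h):
    """Return UV corners (bottom-left, bottom-right, top-right,
    top-left) of a pixel rectangle, normalised to the 0..1 range."""
    u0, v0 = x / img_w, y / img_h
    u1, v1 = (x + w) / img_w, (y + h) / img_h
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]

# The left half of a 1024x1024 texture:
# rect_to_uvs(0, 0, 512, 1024, 1024, 1024)
# -> [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0), (0.0, 1.0)]
```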
These models lack detail because I don’t want to get too fixated on one building when there’s a whole city centre to bring to life (and because I want to avoid ragequitting); this is why I’ve made the decision to put the Genting entrance on hold for the time being and work on another building.
Currently trying to model at least one basic building to the point of some resemblance every day. Won’t take too long, eh?