[Assetto Corsa] Irungattukottai, Chennai

I need to bear in mind this is the first sentence people see when this link gets shared.
Anyway, I’ve gotten quite stuck into modding Assetto Corsa over the past week. The workflow is similar to the one I used for the Sheffield map: taking reference data from satellite imagery and street-map data, and importing it into 3D.

Cadwell

The Cadwell track, after a curious search to see if one was available, was placed on a ‘someone’s done this already, and done a fantastic job of it – so I don’t need to (yet!)’ list.
Despite being a list with a very long name, it’s a very short list so far.

I recorded a little race around it, added some music to the replay and uploaded it.

I decided to look for a track that I couldn’t find anywhere – something new – and that’s where two of the creative projects I’m working on coalesced.

I’d set out to use free and open source software to build a playable track in Blender – and while I didn’t complete that with Cadwell, it did put me on the right track to actually making usable race circuits.
That’s what this post is about.

India

I went on a deep-dive of Indian motor racing circuits, and aside from the world-famous Buddh, I could find very little. This was the niche I was looking for.

I’ve started with the first other major circuit I could find, Madras Motor Race Track (Irungattukottai, Chennai) – it’s one of a few circuits I have earmarked for a small India track and car mod pack.

The playlist (currently) has 4 videos of developing and testing Irungattukottai in Blender, mostly set to ambient music.

For your benefit, I’ve written this with hindsight – so I hope it saves you some time!

The first part we see is after I’ve downloaded the .osm (OpenStreetMap) file for the region of the track I’ve selected.
I’ve used blender-osm to import the map data, and I’m now trying to figure out how to get it to run.

Once you have it imported, the first thing I do is press 7 on the numpad in Blender to jump to Top view, then click through the objects until I find the racetrack, and name it 1ROAD. Anything you name it after that (with no spaces) is entirely up to you. This is the track you will drive on.
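If you’d rather do the renaming from Blender’s Python console, it’s a one-liner – a minimal sketch:

import bpy

# Rename the selected mesh so ksEditor/AC treat it as the
# drivable road surface; 1ROAD is the part that matters, any
# suffix (with no spaces) is up to you.
bpy.context.active_object.name = "1ROAD"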

I export the 3D file to .fbx and import it with ksEditor, part of the Assetto Corsa SDK. From there, assuming my map is compliant, export it to the tracks folder in your Assetto Corsa directory.
If you are using Steam, it’ll be in:
C:\Program Files (x86)\Steam\steamapps\common\assettocorsa\content\tracks

A quick way to do this is to find the folder of any track you like, copy it, paste it into a new folder with the name of your track, and change everything in your new folder to suit your level.
Once you’ve exported your map to the .kn5 file format, drop it into this folder, make sure the file has the same name as the folder, and check that any textures you’ve used are in the track’s textures folder.
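For reference, the layout you’re aiming for looks roughly like this – I’m writing it from memory, so copying a stock track (as above) is still the safest bet:

content/tracks/mytrack/
├── mytrack.kn5       ← same name as the folder
├── map.png           ← the track outline
├── ui/
│   ├── ui_track.json
│   ├── outline.png
│   └── preview.png
└── textures/         ← any textures you’ve used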

More to come

What I learnt

  • Everything needs a texture.
    If it doesn’t have one, it’s going to crash ksEditor.
  • .dds (DirectDraw Surfaces) do not support layers.
    You can have transparency, but you can’t have multiple layers.
  • It’s best if your road is a low-resolution mesh with high-resolution imagery.
  • You will need Blender 2.7x
    Yes, going back to it feels like writing with your left hand, but the .kn5 exporter tool really is the best way of exporting your mesh.
    I found that using 2.9x and exporting to ksEditor only caused problems. Revert, revert to 2.7x, open the example track, and you’re on the way to a playable track.
  • Scale your UVs correctly
    I’d not scaled my grass textures correctly from high-res source photos, and the perspective it gave was that I was driving a car, but the size of an ant – each grass blade was the size of my car!
  • Make your track flat
    When I’d imported my track from my 2.8x experimentation with ksEditor, I’d solidified the track so that it was 3D and had depth.
    I found, though, when importing into 2.7x, that hitting the side of the mesh’s depth sent the car into an uncontrollable spasm.
    The track has to be flat. 1 pixel thick (to the chefs amongst us, that’s one layer of onion skin).
  • If you use the example file from the Blender KN5 Exporter, use it as an example file, and then delete it
    When you play through, the intersection between the example racetrack and your imported racetrack causes glitches that will make you spin out and fall into invisible potholes.
    It took me until Part 4 to realise this.
  • Use a UV grid. Google Images ‘UV Grid’.
    Find a high-resolution version, and apply it to everything (or have Blender generate one – see the sketch after this list).
    It makes a great placeholder image, and you can see exactly where to edit when you want to colour the texture.
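Blender can actually generate a UV grid for you, which saves the image search. A minimal sketch – the material-to-image hookup differs between 2.7x and 2.8x, so I’ve left that step as a comment:

import bpy

# Generate Blender's built-in UV grid test image and assign a
# placeholder material to every mesh that doesn't have one yet.
bpy.ops.image.new(name="UVGrid", width=2048, height=2048,
                  generated_type='UV_GRID')
grid = bpy.data.images["UVGrid"]

mat = bpy.data.materials.new("UVGridPlaceholder")
# (Hook 'grid' into 'mat' here – the texture/node setup differs
#  between Blender 2.7x and 2.8x, so it's omitted from this sketch.)

for obj in bpy.data.objects:
    if obj.type == 'MESH' and not obj.data.materials:
        obj.data.materials.append(mat)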


Car skinning

No such luck – synapses are firing on all cylinders.
I’ve cancelled an important VLC update for the sake of a screenshot, and now when I try to open anything with it, its only use is as a strobe light.

When I run the game, I start at a 2:00 angle, which leads me to believe that the empty representing AC_PIT_0 is at a strange (or unsupported) angle.

While playing the lap from the OSM import, I’d timed that I was completing the laps in around 45 seconds, which is obviously ridiculously fast – so I scaled the track to be twice the size, and it now seems to work – and would probably make a good Drift track.

I’ve not completed a full track yet, because:
– there are no kerbs
– the start position is on the grass
– I’m quite sure that since I doubled the size of the track, the timing markers are in the wrong place.

For now, I’m stuck in the pit, I can see the wall above me that I’d just made – and I can’t move, because I haven’t given the pit floor mesh a pit texture.


So this area here is the pit-lane, and behind this would be the stadium area where spectators are sitting.

Madras Motor Race Track Plans To Host Night Races - carandbike
Iconic landmarks of the track that absolutely will not be excluded are the MRF Tyres tyre arch, the start line and the multi-storey viewing platforms.

Continuing on from the pitlane, after the tyre arch there is a wall with a glass divider, like the ones in the picture above.

This is the wireframe of where the pitlane re-joins the racetrack.
Madras Motor Race Track - Wikiwand
The end of this is actually where the tyre arch is, and it reaches from one side of the track, behind a barrier, to the outermost point of the pitlane.

By this point, the car would have re-joined the circuit, and the track above is part of the maintenance route, which the Ligier maintenance vehicle from the video(s) above would drive on.
Both barriers are solid walls with viewing windows above them.

MRF Towers

I don’t know if that’s their real name, but they’re a prominent and majestic outlook onto the track. I clearly know which company’s logo to find a hi-resolution image of, and will aim towards something like this.
Interestingly, the tyre arch is not where I would expect it to be from the screenshot above; which is odd, since the tyre bears the MRF logo.

My point here is the two MRF towers just past the start line: one is about 1.5× the height of the other, and the top floor is an actual viewing platform – so it’s a great place to put a camera (or two!).
If you’re feeling creative, you could create a camera and put a sofa scene in front of it, to show people watching from the couch inside their hotel room!

MMRT upgraded facilities revealed as owners eye Asian-level races

While it is possible to complete a lap, the kerbs and run-offs are not in place to help with corners (and I ran most of the racetrack backwards, and was not penalised).
This is next.

Assetto Corsa track modding

I’d been looking for a way to revamp the Sheffield map.
Previously, my primary focus had been an FPS – but now I’m thinking about turning it into racetracks for the beautiful Assetto Corsa.

I looked for some videos on the level editor, and there is Race Track Builder (currently £44.99, even amidst the Steam Summer Sale).
Race Track Builder is interesting because it uses map and terrain data from Google Maps: you edit in the shape of the track using splines and curves, then export to FBX format so that ksEditor can read the file and convert it to a map file usable in-game.

This is the video that piqued my interest, and especially towards the end confirmed that what I’m looking to do is possible.

I took my Sheffield map (‘Here’s one I prepared earlier…’) – this was created using the blender-osm plugin.
I exported the mesh to a .fbx file (a widely supported 3D model format) so that I could import it into ksEditor.
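As an aside, the export itself is a single operator call if you’d rather script it – the output path here is hypothetical:

import bpy

# Export the selected meshes to FBX for ksEditor to import.
bpy.ops.export_scene.fbx(filepath="C:/mods/sheffield.fbx",
                         use_selection=True)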

You click OK, and the application force closes.
But it’s right – the fitness store Decathlon doesn’t have a texture assigned to it, because I haven’t worked on it yet.

We at least know that ksEditor is reading the file structure of the FBX, and it appears to be compliant.
Only thing is, this is a large map with a lot of buildings without textures.
I need to start smaller, and I want to find a way to do it cheaply; we are after all a shoestring budget studio.

blender-osm

You’ll see a lot of content on this page which was produced with the blender-osm tool and the Sheffield map.
I’m going to start smaller, but use the same process – I’m going to replicate a track I remember from my childhood days, family outings to Cadwell Park near Louth with my dad.

Cadwell Park Track Guide | Cadwell Park Circuit Layout
These are great for reference and in-game layout previews, but I’ll also need to consider heightmap and terrain data on the actual map.

Day 2

Not the chronological next day, but the second day where I can sit down and make progress: this project has been on my mind since I started it, so pieces of the jigsaw have been popping into place.

Using blender-osm, I zoomed right in to the track and imported the data from its latitude and longitude.
I’d previously imported the terrain data and satellite overlay; the dark grey is where the road has been imported as 3D data.
I’ve imported the file above into the Assetto Corsa level editor to export it to the game, but there are no textures.
As a test, I’ve applied a basic asphalt texture to the track, and found the elements in ksEditor that pertain to the actual parts of the track.

I’ve gone around the objects and given anything that doesn’t have a texture in the map (aside from the road and ground) a UV grid texture, as a placeholder so that everything in the level is assigned one.

While I have the mesh in place, the game is not told where the actual track is: Assetto Corsa recognises that there’s a Cadwell track in its game directory, and gets as far as trying to start the game with it: this is good, something is working here.

I need to tell it where the track is, along with race information like where the pit stops, starting line, crash pads, timing sectors, marshal positions and cameras are.
The bottom two are rectifiable quite easily – they’re just pictures.
The top two, though – I will need to explore the files of other, working tracks to see what information I’ll need to supply.
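From poking around stock tracks so far, the pit and start positions appear to be empties with reserved names baked into the kn5 – AC_PIT_0, AC_START_0, and timing gates as AC_TIME_0_L/AC_TIME_0_R pairs. A sketch of adding them in Blender 2.7x; the names and placements below are my reading of stock tracks, so treat them as assumptions:

import bpy

# Reserved-name empties tell AC where the race furniture is:
# AC_PIT_n = pit boxes, AC_START_n = grid slots,
# AC_TIME_n_L/_R = the two ends of a timing gate.
def add_marker(name, location):
    marker = bpy.data.objects.new(name, None)
    marker.location = location
    bpy.context.scene.objects.link(marker)  # Blender 2.7x API
    return marker

add_marker("AC_PIT_0", (10.0, 0.0, 0.0))    # placeholder coordinates
add_marker("AC_START_0", (0.0, 0.0, 0.0))
add_marker("AC_TIME_0_L", (-8.0, 50.0, 0.0))
add_marker("AC_TIME_0_R", (8.0, 50.0, 0.0))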

The missing files

ui_track.json
This holds the various specific details you’re asked for about the track – its name and location, among others.
This is the information that populates the bottom left corner of the preview image below.
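Mine currently looks something like this – every value is a placeholder, and the keys are just the ones I’ve spotted in stock tracks:

{
    "name": "Madras Motor Race Track",
    "description": "Irungattukottai, Chennai. Work in progress.",
    "tags": ["circuit", "india"],
    "country": "India",
    "city": "Chennai",
    "length": "3700",
    "width": "12",
    "pitboxes": "10",
    "run": "clockwise",
    "version": "0.1"
}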

Map (Map.png)
An outline of your map with a transparent background.


Preview
This is any image you’d like to use as the track preview.
One way of generating this is to click the error message, and run the map – if it’s playable, you can press F8 to capture a screenshot and use that as your map preview.
Since my track doesn’t run yet – I made a quick render in Blender of the track to use as a placeholder for now: you can now see it behind the layout.



Screensaver for the Mind: Caustics [LuxCore]



Breakdown


This video by Two Minute Papers (it’s not two minutes, it’s 9.) explains perfectly what I’ve been experimenting with here.
I found this after my work with Caustics using the same renderer they use in this video:

They have more or less the same scene setup as my Bidirectional/Path experimentation below. It will help to have watched this before reading that section.


These are all rendered (and still rendering) with LuxCore, a physically based renderer that models the behaviour of light using mathematical equations.
It is particularly useful for photorealistic scenes, but I’ve decided to use it for its excellent results with caustics.

It’s considerably slower than the bundled renderers, Cycles and Eevee – but I believe it to be worth it for the result.
This scene has been rendering for almost 3 days at 4K resolution.

Balls!

We start in the very center of the marble matrix.
The light is coming from the right of the scene, and we can start to see the refraction in the outermost marbles.
As the marbles start to eclipse the light, we can see them brighten.
The stack is sitting on top of a Vortex object, and its rotation is animated on the first and last frame, so that over the duration of the animation, it rotates 1080° on the Z (Height) axis, forcing the stack to collapse.
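For the curious, that keyframing boils down to a few lines of bpy – the object name and frame numbers are just my setup:

import math
import bpy

# Spin the Vortex force field 1080° around Z over 250 frames
# (about 10 seconds at 25fps), collapsing the marble stack.
vortex = bpy.data.objects["Vortex"]

vortex.rotation_euler = (0.0, 0.0, 0.0)
vortex.keyframe_insert(data_path="rotation_euler", frame=1)

vortex.rotation_euler = (0.0, 0.0, math.radians(1080))
vortex.keyframe_insert(data_path="rotation_euler", frame=250)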
Camera closeup to the action:
Here we can see the caustics in action – zoom in to the upper-leftmost green marble.
Inside of it you can see the refracted objects, including every marble within visible range through it, and the light.
This scene is primarily focused on the shadows, and how light hits the floor.
Some of the marbles look like Poké Balls; this is because the light source is coming from outside of the boundary, and the top edge is casting a shadow onto the actual ball.

This is the effect I was after!

I have brought the sides up and readjusted the lighting.
Now, we can see the ball’s shadow, and the light passing through it to form colour in the shadow. This is with Path mode enabled as the lighting solution.
This is the same frame from the same scene, but with Bidirectional mode enabled.
As Bidirectional will only render on the CPU, and Path mode can use the GPU, it is possible to run the two renders concurrently in different Blender instances.
If you don’t plan to use your computer to do anything else, that is. It’s quite slow!

Path

Render Engine: LuxCore / Engine: Path

Scene is lit from a spotlight to the right, with a glass texture on the beaker.
The beaker has a red, glass ball inside.
This is to show light refraction on the wood-coloured surface.
The beaker is flat shaded, which is why there are so many sharp lines – from the light hitting each individual edge: you can see this in the beaker’s shadow.
The camera pans around so we can see the rim; the reason you’d feel cautious about drinking from it is its Index of Refraction (IOR) of 1.33. This glass is set to pure glass, the type you’d have in your windows, and the Roughness is 0, so every reflection off it is perfectly sharp.

If I was working on a glass for a film, animation or game scene, I would increase this and aim for realism; for this scene, though, I wanted to use simple geometry for an exaggerated effect, purely for eye candy’s sake.


Bidirectional lighting

Render Engine: LuxCore / Engine: Bidirectional

This is rendered with the Bidirectional lighting solution, which traces light paths from both the camera and the light sources and joins them in the middle. This setting is generally better for caustics.
As you can see from the low sample rate (below), this is just three minutes of rendering before moving on to the next frame.
I find scheduling like this helps to almost guarantee that it will take a certain amount of time – though it can overspill, because it will always complete the sample it’s working on, regardless of whether the timer has gone over your limit.
It’s worth considering that denoising is not included in this time, either.
Next, I put it up to 1000 samples to see how it would look, and will try to force myself not to sit there watching it for 15 minutes, taking in the new details of each light pass, and do something productive instead.

I’ve also used the Smooth tool on the beaker, which gives it a less jagged look, and applied a Subdivision modifier – which increases the density of the mesh to make it appear smoother, and more curved.
I suspect that this entire, animated scene will take around 3 days to render (250 frames).
I will be managing the sample rate too, this looks too noisy – so I will be doubling the allowed time for each frame once the camera gets closer to the glass.
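In script form, that smoothing step is roughly this – the object name is hypothetical, and level 2 is an arbitrary choice:

import bpy

beaker = bpy.data.objects["Beaker"]  # hypothetical name

# Smooth shading: stop each face rendering as a flat facet.
for poly in beaker.data.polygons:
    poly.use_smooth = True

# Subdivision modifier: densify the mesh so it appears curved.
subsurf = beaker.modifiers.new("Subdivision", 'SUBSURF')
subsurf.levels = 2          # viewport subdivisions
subsurf.render_levels = 2   # render-time subdivisions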

Sketchfab

I’ve uploaded it to Sketchfab, but it doesn’t quite have the same effect – though you can see the simplicity of the scene.

90 minutes per frame rendering

All renders in this section have been given 90 minutes to render each frame. It has been running all weekend.

If you’re on a computer, right click and Open image in new tab – if you’re on your phone, long press. Even this is only half resolution.
These are all rendered out at 4K, and the renderer is set to only progress to the next frame in the animation every 90 minutes.
I kid you not, this has been running for days. I want it to look perfect.
It has taken more than an entire day to render a single second of animation. Watching it at 1x will be fast and fun, but the real beauty, as with all Screensavers for the Mind, is that they’re best watched at half speed – or quarter speed, if you want to meditate to it. There are no surprises, no jumpscares, no dialogue, just peaceful visuals. Simple, peaceful visuals.

No jingles, no credits, just sound and video. If you’re like me, you like falling asleep to physics sims and tutorials, and I want these to be non-distracting and sleep-friendly – not attention-grabbing, dialogue-driven works.
Simple shapes, simple things. Pretty to fall asleep to, or for a visual timeout when you’ve been working too hard.

That’s what Screensaver for the Mind is for, pretty visuals you can look at and take your mind off things when your brain needs to go idle too.

With fluid

This is the same scene, but with fluid enabled.
The engine is Path, which uses GPU – as Bidirectional will only work on CPU, I can run them concurrently in separate Blender instances.
Here we can see from the Caustics that the glass has a jam jar type shape, the edge loop reflecting light back downwards around the rim.
We can also see that the light is falling off to the right, from the refraction of the red ball veering to the right of the scene. Because it is refracted, its movement is inverted.
This is earlier on in the sequence, but look right there at the shadows. Bubbles.
If you get close enough to look at them, their Normals are facing inwards.
That’s what gives them the bubble effect.
It’s a Collection of three different-sized spheres, hidden underneath the ground plane.
They’re in a separate Collection, so I tell the particle system to render anything in that collection.
[1 of 3 different-sized spheres with inverted normals and a glass texture. Index of Refraction: 1.3333]
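A sketch of that setup, in Blender 2.8x terms – the collection and emitter names are mine:

import bpy

# Flip each bubble sphere's normals inward – that's what gives
# the hollow 'bubble' look once the glass material is applied.
for obj in bpy.data.collections["Bubbles"].objects:
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.flip_normals()
    bpy.ops.object.mode_set(mode='OBJECT')

# Tell the particle system to emit instances from that collection.
settings = bpy.data.objects["Fluid"].particle_systems[0].settings
settings.render_type = 'COLLECTION'
settings.instance_collection = bpy.data.collections["Bubbles"]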
Luckily we’re near the end, so I’ll be able to show you.

The background is only there to give a light source – the geometry is not included in the scene.
This is viewing the bubbles on their isolated view. The rest of the scene is hidden.
These are our bubbles that are splashing about amid the surface.
I didn’t want to use waves or surf, but rather give the impression of fizzy water.
This should be on the cover of a YouTube vaporwave mix

What my renders (deliberately) don’t show you is that the water doesn’t actually reach the bottom. Naturally, I’m going to hide it from the camera, to the viewer – they only see the surface splash – but those that delve deep enough, I will share my tricks with.

Here is where I drastically drop the render quality
a) to hide that fact
b) to finish the composition quicker

I now have it rendering at 3 minutes per frame, so you’re going to need fast eyes to catch the detail, or watch it slowed down. I’m eager to append these frames to my existing composite, so I can render out to video and remove the original frames (weighing in at 21 MB per frame).

Further resources:

If you’re interested in using Blender and LuxCore – here are some further resources. LuxCore is absolutely amazing at realistic interior lighting.
Come on, you’re sat at home – download yourself a copy of Blender and have a play with it. LuxCore can be downloaded here.
When I say LuxCore is slow, I don’t mean to knock it – LuxCore is limited to the Python API, as it’s an addon. If it had C-level access to the API, it would be much faster. Faster than Cycles, at least.

If you’re familiar with Cycles:
https://www.youtube.com/watch?v=-BmXeUDRqSo

Live interior modelling stream [Luxcore/u:Bone Studio]
https://www.youtube.com/watch?v=XwQZx5-QGkE

LuxRender DLSC[u:DRAVIA. STUDIO ]
https://www.youtube.com/watch?v=dIfwr2YPxPw&t=75s

I followed this tutorial heavily to get the volumetric light effect. [caustics only]
https://www.youtube.com/watch?v=VYbZrH0RGKs&t=315s [u: Simon Wendsche]

This is the tutorial I followed, purely for the geometry – though this is for the Appleseed renderer (which I may do next), so the menu options are different.
https://www.youtube.com/watch?v=G-uV4NPlggo [u: BlenderDiplom]

This is my attempt at the tutorial using LuxCore instead of Appleseed. [23m]

I’ve also yet to try AMD Radeon ProRender and good old YafaRay (Yet Another Free RAYtracer), but these will get specific, dedicated videos like LuxCore has, and Appleseed will.


Next up, smoke.

Cigarettes, bonfires, buildings alight
Let’s hope the physics, I’ve got right.
There is art in
Green smoke of someone fartin’
Let’s keep it fun, up comes the sun.
Let us not fear dystopia, here.
These visuals are here to help you resync
Switch up your brain and rethink.

Luxcore’s not about basic scenes, where every Plane is plain.
The complexity of the scenes you can make is insane.
Start off with a box shaped room,
Tab to edit mode, mousewheel up to zoom.
Ctrl R to loop cut, separate your window out as a box.
Looking at rendered, it’s quite dark.
Come outta edit mode. Shift A to add a Sun,
and have some fun.

Using Suns with LuxCore,
You have to be careful because they’re hardcore.
A value of 0.2 should settle the score.
Ctrl T, point into the win-doh.
LuxRender suns are bright, because they’re used with all their might.

My other trick,
Is to set your Colour Management to Filmic
This keeps it more realistic to a camera
Use it on RGB and the colourspace will harangue ya.
RGB is limited in palette,
Using Filmic Log is your pal in it.
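(In script form, for the non-rhymers – Filmic ships with Blender from 2.79 onwards:)

import bpy

# Switch colour management to Filmic for a more camera-like
# response; Filmic Log flattens it further still, for grading.
bpy.context.scene.view_settings.view_transform = 'Filmic'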

How’d you make the vortex? If I tell, you’ll steal.
And that’s what I want, for your own effects – you feel?
Make your canvas with a plane and extrude it on Z
(can I rhyme with zed, as well as zee,
it doesn’t matter to me)

Come out of edit mode, select the plane. Ctrl A
For scale. Alt G for location. Ctrl A again for Loc.
If you’re distracted by the scene, and you’re in a dash,
Isolate the view, press (NumPad) forward slash.
Come out of that again,
As long as you know you can do that, that’s main.
Shift A to add a Force,
We’ll be adding a Vortex, of course.
Alt G to slap it bang in the middle.
I to insert LocRotScale on frame zero.
Jump to Frame 250 (or so) – Rotate Z 1080.
Insert LocRot.
While it bakes, we might be here a while matey.
BOOM! Vortex spins 1080 degrees in the space of 10 seconds.
Sends the marbles scrambling, what do you reckon?

So next post is going to be Appleseed,
I’ve played with it before so I know what I need.
Only thing I worry about is a user
Shouting ‘content reuser!’
All that’s reused is the camera path,
To help you compare the volumetric math.
Maybe one day I’ll overlay them, just for a laugh.

Yes it’s true that I set each frame to 90 minutes per render,
because if it’s not true, then Return to Sender.
One has water, the other has zonder.



Folding@home [research:covid-19]

I’d like to share with those of you reading this something a little more serious than what I’ve previously been posting.

The project is Folding@home, and it’s a distributed computing project, much like the render farms I’ve discussed in previous posts, which uses your idle computing power to analyse scientific data for research.

You can read more about the project and covid-19 on their website. [27/02/20]
It’s a very in depth report; personally, I struggle to understand it – but I know some of you out there will.

The link to the software is buried in the text, so I’ve placed it below to make it easy to find.
https://foldingathome.org/start-folding/



Once you have it running, you’ll have a web based interface that looks like the one below, where you can control the amount of resources Folding uses, and when to use it.

It may be preferable for those with lower-end computers to switch it to Idle, so that it doesn’t impede their machine by slowing the whole system down.

I have mine set to full, because most of the time during the day it has resources to spare – it’s either sitting unused, or rendering while I do my job on another computer.

The bottom right tells you which dataset you are working on; if you click Learn More, it will naturally give you more information.
I only put that sentence in as a placeholder to separate these two images – I’m not trying to be patronising.

Folding also supports teams, if you’re feeling competitive and want to contribute the most research.

The Change Identity screen looks like this:

I’m not entirely sure how the team numbers are assigned, but looking at the randomness of the Top 10 [below], I think you just claim one and tell your friends, family, colleagues, Raspberry Pis, cloud instances, botnets, IoT toasters and virtual machines to join that team number, and that’s it.
It’s your team. Go Team #!

You can register for a team from this form.

I think it would be a good idea for businesses to register to become teams on here, a sense of unity in researching this together.
Internal stats within the group are good for friendly competition between colleagues – since they can no longer bond over playing sports together.

https://stats.foldingathome.org/teams-monthly
https://stats.foldingathome.org/os
The OS tab gives a live breakdown of current computing power.

Advanced Controls

The section above is about getting Folding up and running with as little effort or fuss as possible.

This section is going to get more technical, for those who want to explore/tinker/administer minutiae controls.
Click the system tray and select the multi-coloured Folding icon that looks like a protein block.


If you click Preferences on the Toolbar, you’ll have this screen:

There’s a variety of really nice themes and render mode styles to match your current desktop.

Connection: For running Folding on a network

Identity: You, your team and your identity protection.


Slots: Add and remove your CPU and GPU resource availability.
Remote access: There’s text missing from this screenshot, but you can see the headers to give an idea of what they do.
Proxy: Proxy server settings
Advanced: Erm…advanced options?…

Viewer

The viewer shows the dataset it is currently working on, and it’s completely 3D – you can click and drag it around with your mouse to look at it from different angles.
The protein molecules shift, jiggle and wiggle into different arrangements while your computer processes the dataset it has downloaded from Folding; once it has completed, it will upload the results back to the researchers.

The dataset

That’s what you see on the main screen:

We can see the current progress here in the Log tab

If we want to dig a little deeper into what’s actually going on, we can look in the data folder:

If you open the file ‘md’, it will give a very detailed output of what’s happening with your CPU/GPU, and the test settings it is executing.

Here’s a snippet from mine, so you can see:


Screensaver for the mind.

With the game on pause for a while, entertainment projects can wait for now. We’re entering serious times, and playtime is over.

I’d posted the other day about physics simulations. Since I’ve had more free time, I’ve been setting up scenes – not so much elaborate ones, the geometry is very simple – where the environment is affecting the object, and I find it really satisfying to watch. It’s peaceful in its chaos.

Even creating the scene and experimenting with the scenarios was ultra relaxing – it made me smile.

It’s calming because nothing is getting hurt, there’s no peril, and it’s oddly satisfying – we can disengage fight or flight.
The geometry is basic, so there’s nothing really to focus on, and you can watch in night mode and not miss any of the crucial action.
There’s no plot, no dialogue, no need for subtitles.

Tonight, I was taking a few minutes out to watch it, and I thought I’d try it at slower speeds – and you know what? I wish I’d originally rendered it at 0.5x, because I find it a lot more enjoyable.
Here, listen.

I’m not trying to repost my old videos out of laziness, I want to show you something that I’ve just found out from this video.

[inaudible]
Yeah, I know you can play videos at different speeds – but I’m on about this video in particular. It has vastly different moods for every increment, and still syncs with the video; where you notice the chaos in ultra slow motion.

1.0x – Normal. Upbeat, electro. 80s vibe.
0.75x – 16-bit-ish. Very similar to 1.0x.
0.5x – Emotional. This is my favourite; it feels epic.
0.25x – Meditation. Very little space between notes; drony. Good for meditating to.

The majority of the nation have found themselves being forced to work from home, quite suddenly – and it’s a hell of an adjustment.
I’ve plenty of experience with living and working in the same building or room, and it takes its toll. It’s hard to switch off when there is no commute from work to home.

I understand that a lot of people who are now working from home may not have second or third monitors, and may have had to resort to using their high-end TVs as a second monitor – so it wouldn’t do them justice to render Screensavers for the Mind in anything less than 4K resolution.

Why screensaver for the mind?

In the first part of this article, I spoke about using our overworked computers, while they’re idle, to contribute to the study of something very important to humanity.
This is the opposite: using computers to compute physics in a visually appealing manner, so that we can go idle – and that is very important to us… resting for a few short moments, because we work hard too.

Remember to take regular breaks.

10 PRINT "All work and no play makes [$user%] a dull (var)."
20 GOTO 10

Fluid and Rigid Body [simulations]

Since I’ve been at home more or less all day, every day, I’ve found that I have the time to experiment with some liquid and rigid body simulations – and they need it, because it takes so long to calculate and bake the physics into the scene and then render them out to images.

These are all rendered out as still frames, and are so satisfying to watch when they’re all pieced together as videos – I’m currently aiming to have around 15 short simulations of different types before I compile them into a YouTube video: for now though, I wanted to share some single images and get back into posting regularly here.

The finished video

What are they?

These show simulations of how gravity and collisions affect solid objects (and, in the case of the first three images, liquids).
The other images show how gravity affects a 12×12 stack of cubes being dropped, having things thrown into them, having coloured balls dropped onto them, and having the surface they are resting on moved out from under them.

Fluid simulation

I liked the splash effect on this, but I’d not set the liquid in the jar to the right transparent material, so it does not ripple or move like water would.
Water drop (Eevee renderer, preview bake)
This will show a simplified form of how the water will react to the object that causes the splash
Water drop (Eevee, higher resolution bake)

Rigid body simulations

In the last four images, I’ve used some HDRI maps as the background: these are panoramic images with lighting data embedded into them, so they affect the lighting in the scene, making it look more natural than it would with artificially placed lighting.
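Wiring an HDRI into the world background is a small node setup – a sketch, with a hypothetical image path:

import bpy

# Use an HDRI as the world background so its embedded lighting
# data drives the scene and shows up in every reflection.
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdri/sunset_4k.hdr")  # hypothetical
world.node_tree.links.new(env.outputs["Color"],
                          nodes["Background"].inputs["Color"])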

If you look at the image above, the floor is a glossy, reflective surface, and you can see the reflection of the horizon and sky on it. If you look even closer, each individual cube has a slight reflection on it from the 360° scene background – so even if a cube is facing us from the camera’s perspective, it will show the reflection of the background behind the camera; and this updates in realtime as it scatters, rotates and tumbles around the simulation.