Screensaver for the Mind: Caustics [LuxCore]



Breakdown


This video by Two Minute Papers (it’s not two minutes, it’s nine) explains perfectly what I’ve been experimenting with here.
I found this after my work with Caustics using the same renderer they use in this video:

They have more or less the same scene setup as my Bidirectional/Path experimentation below. It will help to have watched this before reading that section.


These are all rendered (and still rendering) with LuxCore, a physically based renderer that models the behaviour of light using mathematical equations.
It is particularly useful for photorealistic scenes, but I’ve chosen it for its excellent results with caustics.

It’s considerably slower than the bundled renderers, Cycles and Eevee – but I believe the results are worth it.
This scene has been rendering for almost 3 days at 4K resolution.

Balls!

We start in the very center of the marble matrix.
The light is coming from the right of the scene, we can start to see the refraction in the outermost marbles.
As the marbles start to eclipse the light, we can see them brighten.
The stack is sitting on top of a Vortex object, and its rotation is animated on the first and last frame, so that over the duration of the animation, it rotates 1080° on the Z (Height) axis, forcing the stack to collapse.
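For the curious, the numbers behind that collapse work out like this – a quick sketch, where the linear interpolation between keyframes and the 24 fps playback rate are my assumptions, not read from the scene file:

```python
# Rough maths behind the keyframed vortex rotation described above.
# Assumptions (not taken from the .blend file): linear interpolation
# between the two keyframes, and Blender's default 24 fps playback.

FPS = 24
START_FRAME, END_FRAME = 0, 250
TOTAL_ROTATION_DEG = 1080  # Z-axis rotation over the animation

def z_rotation_at(frame):
    """Linearly interpolated Z rotation (degrees) at a given frame."""
    t = (frame - START_FRAME) / (END_FRAME - START_FRAME)
    return TOTAL_ROTATION_DEG * t

degrees_per_frame = TOTAL_ROTATION_DEG / (END_FRAME - START_FRAME)  # 4.32
duration_seconds = (END_FRAME - START_FRAME) / FPS                  # ~10.4 s
```

So the vortex turns a steady 4.32° per frame – three full revolutions in roughly ten seconds, which is plenty to fling the stack apart.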
Camera closeup to the action:
Here we can see the caustics in action – zoom in to the upper-leftmost green marble.
Inside of it you can see the refracted objects, including every marble within visible range through it, and the light.
This scene is primarily focused on the shadows, and how light hits the floor.
Some of the marbles look like Poké Balls; this is because the light source is coming from outside of the boundary, and the top edge is casting a shadow onto the ball itself.

This is the effect I was after!

I have brought the sides up and readjusted the lighting.
Now, we can see the ball’s shadow, and the light passing through it to form colour in the shadow. This is with Path mode enabled as the lighting solution.
This is the same frame from the same scene, but with Bidirectional mode enabled.
As Bidirectional will only render on the CPU, while Path mode can use the GPU, it is possible to run the two renders concurrently in different Blender instances.
If you don’t plan to use your computer to do anything else, that is. It’s quite slow!

Path

Render Engine: LuxCore / Engine: Path

Scene is lit from a spotlight to the right, with a glass texture on the beaker.
The beaker has a red, glass ball inside.
This is to show light refraction on the wood-coloured surface.
The beaker is flat shaded, which is why there are so many sharp lines – the light hits each individual edge: you can see this in the beaker’s shadow.
The camera pans around so we can see the rim; the reason you’d feel cautious about drinking from it is its Index of Refraction (IOR) of 1.33. The material is set to pure glass, the type you’d have in your windows, and the Roughness is 0, so every reflection and refraction it produces is perfectly sharp.
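The bending itself is just Snell’s law, which is worth sketching because it explains both the refraction and the bright hotspots in the caustics – this is the textbook formula, not LuxCore’s internal code, and the function name is mine:

```python
import math

def refraction_angle(incidence_deg, n1, n2):
    """Snell's law, n1*sin(t1) = n2*sin(t2): returns the refracted
    angle in degrees, or None past the critical angle (total
    internal reflection)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))
```

At IOR 1.33, a ray hitting the surface at 45° bends to roughly 32°; a ray trying to *leave* the material at much more than about 49° gets totally internally reflected instead, which is where a lot of the concentrated bright patches in a caustic come from.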

If I were working on a glass for a film, animation or game scene, I would increase this and aim for realism; for this scene, though, I wanted to use simple geometry for an exaggerated effect, purely for eye candy’s sake.


Bidirectional lighting

Render Engine: LuxCore / Engine: Bidirectional

This is rendered with the Bidirectional lighting solution, which traces light paths from both the camera and the light sources. This setting is generally better for caustics.
As you can see from the low sample rate (below), this is just three minutes of rendering before moving on to the next frame.
I find scheduling like this helps to almost guarantee that it will take a certain amount of time – though it can overspill, because it will always complete the sample it’s working on, regardless of whether the timer has passed your limit.
It’s worth considering that denoising is not included in this time, either.
Next, I put it up to 1000 to see how it would look, and will try to force myself not to sit there watching it for 15 minutes, taking in the new details of each light pass, and instead do something productive.
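The overspill behaviour I described is easy to picture as a loop that only checks its deadline *between* samples – a sketch in plain Python, where `render_one_sample` is a hypothetical stand-in for the renderer’s sample pass, not LuxCore’s real API:

```python
import time

def render_frame(time_budget_s, render_one_sample):
    """Take samples until the time budget expires. The sample in
    flight always runs to completion, so the frame can overspill
    the budget slightly -- the deadline is only checked between
    samples. Returns the number of samples taken."""
    samples = 0
    deadline = time.monotonic() + time_budget_s
    while True:
        render_one_sample()  # never interrupted mid-sample
        samples += 1
        if time.monotonic() >= deadline:
            return samples
```

With a three-minute budget and samples that take a few seconds each, the worst-case overspill is one sample’s duration – which matches what I see in practice.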

I’ve also used the Smooth tool on the beaker, which gives it a less jagged look, and applied a Subdivision modifier – which increases the density of the mesh to make it appear smoother, and more curved.
I suspect that this entire, animated scene will take around 3 days to render (250 frames).
I will be managing the sample rate too – this looks too noisy – so I will be doubling the allowed time per frame once the camera gets closer to the glass.

Sketchfab

I’ve uploaded it to Sketchfab, but it doesn’t quite have the same effect – though you can see the simplicity of the scene.

90 minute per frame rendering

All renders in this section have been given 90 minutes to render each frame. It has been running all weekend.

If you’re on a computer, right click and Open image in new tab – if you’re on your phone, long press. Even this is only half resolution.
These are all rendered out at 4K, and the renderer is set to only progress to the next frame in the animation every 90 minutes.
I kid you not, this has been running for days. I want it to look perfect.
It has taken more than an entire day to render a single second of animation. Watching it at 1x will be fast and fun, but the real beauty, as with all Screensavers for the Mind, is that they’re best watched at half speed – or quarter speed, if you want to meditate to it. There are no surprises, no jumpscares, no dialogue, just peaceful visuals. Simple, peaceful visuals.
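A quick back-of-envelope check on that claim (assuming 24 fps playback – the frame rate is my assumption):

```python
# How long 90 minutes per frame takes per second of animation.
MINUTES_PER_FRAME = 90
FPS = 24  # assumed playback rate

hours_per_second_of_animation = MINUTES_PER_FRAME * FPS / 60  # 36 hours
days_per_second = hours_per_second_of_animation / 24          # 1.5 days
```

So one second of footage really does cost a day and a half of rendering.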

No jingles, no credits, just sound and video. If you’re like me and like falling asleep to physics sims and tutorials, I want this to be non-distracting – not the hypnotic, attention-grabbing, dialogue-heavy kind of work that demands your focus.
Simple shapes, simple things. Pretty to fall asleep to, or for a visual timeout when you’ve been working too hard.

That’s what Screensaver for the Mind is for, pretty visuals you can look at and take your mind off things when your brain needs to go idle too.

With fluid

This is the same scene, but with fluid enabled.
The engine is Path, which uses the GPU – as Bidirectional will only work on the CPU, I can run them concurrently in separate Blender instances.
Here we can see from the Caustics that the glass has a jam jar type shape, the edge loop reflecting light back downwards around the rim.
We can also see that the light is falling off to the right, from the refraction of the red ball veering to the right of the scene. Because it is refracted, its movement is inverted.
This is earlier on in the sequence, but look right there at the shadows. Bubbles.
If you get close enough to look at them, their Normals are facing inwards.
That’s what gives them the bubble effect.
It’s a Collection of three different-sized spheres, hidden underneath the ground plane.
They’re in a separate Collection, so I tell the particle system to render anything in that Collection.
[1/3 different sized spheres with inverted normals and a glass texture. Index of Refraction: 1.3333]
Luckily we’re near the end, so I’ll be able to show you.

The background is only there to give a light source – the geometry is not included in the scene.
This is the bubbles in isolated view – the rest of the scene is hidden.
These are our bubbles, splashing about on the surface.
I didn’t want to use waves or surf, but rather give the impression of fizzy water.
This should be on the cover of a YouTube vaporwave mix

What my renders (deliberately) don’t show you is that the water doesn’t actually reach the bottom. Naturally, I’m going to hide that from the camera and the viewer – they only see the surface splash – but for those who delve deep enough, I’ll share my tricks.

Here is where I drastically drop the render quality
a) to hide that fact
b) to finish the composition quicker

I now have it rendering at 3 minutes per frame, so you’ll need fast eyes to catch the detail – or watch it slowed down. I’m eager to append these frames to my existing composite, so I can render out to video and remove the original frames (weighing in at 21 MB per frame).
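The storage cost is the real motivation there – a quick tally, assuming the 250-frame animation length mentioned earlier:

```python
# Disk space eaten by the raw rendered stills.
MB_PER_FRAME = 21
FRAME_COUNT = 250  # assumed from the animation length above

total_mb = MB_PER_FRAME * FRAME_COUNT  # 5250 MB of stills
total_gb = total_mb / 1024             # roughly 5.1 GB
```

Over 5 GB for about ten seconds of stills – hence deleting the originals once the video is composited.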

Further resources:

If you’re interested in using Blender and LuxCore, here are some further resources. LuxCore is absolutely amazing at realistic interior lighting.
Come on, you’re sat at home – download yourself a copy of Blender and have a play with it. LuxCore can be downloaded here.
When I say LuxCore is slow, I don’t mean to knock it – as an add-on, it is limited to Blender’s Python API. If it had C-level access to the API, it would be much faster. Faster than Cycles, at least.

If you’re familiar with Cycles:
https://www.youtube.com/watch?v=-BmXeUDRqSo

Live interior modelling stream [Luxcore/u:Bone Studio]
https://www.youtube.com/watch?v=XwQZx5-QGkE

LuxRender DLSC[u:DRAVIA. STUDIO ]
https://www.youtube.com/watch?v=dIfwr2YPxPw&t=75s

I followed this tutorial heavily to get the volumetric light effect. [caustics only]
https://www.youtube.com/watch?v=VYbZrH0RGKs&t=315s [u: Simon Wendsche]

This is the tutorial I followed, purely for the geometry – though this is for the Appleseed renderer (which I may do next), so the menu options are different.
https://www.youtube.com/watch?v=G-uV4NPlggo [u: BlenderDiplom]

This is my attempt at the tutorial using LuxCore instead of Appleseed. [23m]

I’ve also yet to try AMD Radeon ProRender and good old YafaRay (Yet Another Free RAYtracer), but these will get specific, dedicated videos like LuxCore has, and Appleseed will.


Next up, smoke.

Cigarettes, bonfires, buildings alight
Let’s hope the physics, I’ve got right.
There is art in
Green smoke of someone fartin’
Let’s keep it fun, up comes the sun.
Let us not fear dystopia, here.
These visuals are here to help you resync
Switch up your brain and rethink.

Luxcore’s not about basic scenes, where every Plane is plain.
The complexity of the scenes you can make is insane.
Start off with a box shaped room,
Tab to edit mode, mousewheel up to zoom.
Ctrl R to loop cut, separate your window out as a box.
Looking at rendered, it’s quite dark.
Come outta edit mode. Shift A to add a Sun,
and have some fun.

Using Suns with LuxCore,
You have to be careful because they’re hardcore.
A value of 0.2 should settle the score.
Ctrl T, point into the win-doh.
LuxRender suns are bright, because they’re used with all their might.

My other trick,
Is to set your Colour Management to Filmic
This keeps it more realistic to a camera
Use it on RGB and the colourspace will harangue ya.
RGB is limited in palette,
Using Filmic Log is your pal in it.

How’d you make the vortex? If I tell, you’ll steal.
And that’s what I want, for your own effects – you feel?
Make your canvas with a plane and extrude it on Z
(can I rhyme with zed, as well as zee,
it doesn’t matter to me)

Come out of edit mode, select the plane. Ctrl A
For scale. Alt G for location. Ctrl A again for Loc.
If you’re distracted by the scene, and you’re in a dash,
Isolate the view, press (NumPad) foreslash.
Come out of that again,
As long as you know you can do that, that’s main.
Shift A to add a Force,
We’ll be adding a Vortex, of course.
Alt G to slap it bang in the middle.
Press I, LocRotScale on frame zero.
Jump to Frame 250 (or so) –Rotate Z 1080.
Insert LocRot.
While it bakes, we might be here a while matey.
BOOM! Vortex spins 1080 degrees in the space of 10 seconds.
Sends the marbles scrambling, what do you reckon?

So next post is going to be Appleseed,
I’ve played with it before so I know what I need.
Only thing I worry about is a user
Shouting ‘content reuser!’
All that’s reused is the camera path,
To help you compare the volumetric math.
Maybe one day I’ll overlay them, just for a laugh.

Yes it’s true that I set each frame to 90 minutes per render,
because if it’s not true, then Return to Sender.
One has water, the other has zonder.



Folding@home [research:covid-19]

I’d like to share something a little more serious with you than I have done previously.

The project is Folding@home, a distributed computing service much like the render farms I’ve discussed in previous posts – it uses your idle computing power to analyse scientific data for research.

You can read more about the project and COVID-19 on their website. [27/02/20]
It’s a very in depth report; personally, I struggle to understand it – but I know some of you out there will.

The link to the software is buried in the text, so I’ve placed it below, so that it’s easy to find.
https://foldingathome.org/start-folding/



Once you have it running, you’ll have a web based interface that looks like the one below, where you can control the amount of resources Folding uses, and when to use it.

It may be preferable for those with lower-end computers to switch it to Idle, so that it doesn’t impede their machine by slowing the whole system down.

I have mine set to Full, because most of the time during the day it has resources to spare – it’s going to be either sitting unused, or rendering while I do my job on another computer.

The bottom right tells you which dataset you are working on; if you click Learn More, it will naturally give you more information.
I only put that sentence as a placeholder to separate these two images; I’m not trying to be patronising.

Folding also supports teams, if you’re feeling competitive and want to contribute the most research.

The Change Identity screen looks like this:

I’m not entirely sure how the team numbers are assigned, but looking at the randomness of the Top 10 [below], I think you just claim one and tell your friends, family, colleagues, Raspberry Pis, cloud instances, botnets, IoT toasters and virtual machines to join that team number – and that’s it.
It’s your team. Go Team #!

You can register for a team from this form.

I think it would be a good idea for businesses to register to become teams on here, a sense of unity in researching this together.
Internal stats within the group are good for friendly competition between colleagues – since they can no longer bond over playing sports together.

https://stats.foldingathome.org/teams-monthly
https://stats.foldingathome.org/os
The OS tab gives a live breakdown of current computing power.

Advanced Controls

The section above is about getting Folding up and running with as little effort or fuss as possible.

This section is going to get more technical, for those who want to explore/tinker/administer minutiae controls.
Click the system tray and select the multi-coloured Folding icon that looks like a protein block.


If you click Preferences on the Toolbar, you’ll have this screen:

There’s a variety of really nice themes and render mode styles to match your current desktop.

Connection: For running Folding on a network

Identity: You, your team and your identity protection.


Slots: Add and remove your CPU and GPU resource availability.
Remote access: There’s text missing from this screenshot, but you can see the headers to give an idea of what they do.
Proxy: Proxy server settings
Advanced: Erm…advanced options?…

Viewer

The viewer shows the dataset it is currently working on, and is completely 3D – you can click and drag this around with your mouse to look at it from different angles.
The protein molecules shift, jiggle and wiggle into different arrangements while your computer processes the dataset it has downloaded from Folding; once it has completed, it will upload the results back to the researchers.

The dataset

That’s what you see on the main screen:

We can see the current progress here in the Log tab

If we want to dig a little deeper into what’s actually going on, we can look in the data folder:

If you open the file ‘md’, it will give a very detailed output of what’s happening with your CPU/GPU, and the test settings it is executing.

Here’s a snippet from mine, so you can see:


Screensaver for the mind.

With the game on pause for a while, entertainment projects can wait for now. We’re entering serious times, and playtime is over.

I’d posted the other day about physics simulations. With more free time, I’ve been setting up scenes – not elaborate ones, the geometry is very simple – where the environment affects the objects, and I find it really satisfying to watch. It’s peaceful in its chaos.

Even creating the scene and experimenting with the scenarios was ultra relaxing; it makes you smile.

It’s calming because nothing is getting hurt, there’s no peril, and it’s oddly satisfying – we can disengage fight or flight.
The geometry is basic, so there’s nothing really to focus on, and you can watch in night mode and not miss any of the crucial action.
There’s no plot, no dialogue, no need for subtitles.

Tonight, I was taking a few minutes out to watch it, and I thought to watch it at slower speeds, and you know what? I wish I’d originally rendered it at 0.5x because I find it a lot more enjoyable.
Here, listen.

I’m not trying to repost my old videos out of laziness, I want to show you something that I’ve just found out from this video.

[inaudible]
Yeah, I know you can play videos at different speeds – but I’m on about this video in particular. It has vastly different moods for every increment, and still syncs with the video; where you notice the chaos in ultra slow motion.

1.0x – Normal [upbeat, electro, 80s vibe]
0.75x – 16-bit-ish. Very similar to 1.0x
0.5x – Emotional. This is my favourite; it feels epic.
0.25x – Meditation. Very little space between notes; drony. Good for meditating to.

The majority of the nation have found themselves being forced to work from home, quite suddenly – and it’s a hell of an adjustment.
I’ve plenty of experience with living and working in the same building or room, and it takes its toll. It’s hard to switch off when there is no commute from work to home.

I understand that a lot of people who are now working from home may not have second or third monitors, and may have had to resort to using their high-end TVs as a second monitor – so it wouldn’t do them justice to render Screensavers for the Mind at anything less than 4K resolution.

Why screensaver for the mind?

In the first part of this article, I spoke about using our overworked computers’ idle time to contribute to some very important human research.
This is the opposite: using computers to compute physics in a visually appealing manner, so that we can go idle – which is very important to us too. Resting for a few short moments, because we work hard as well.

Remember to take regular breaks.

10 PRINT "All work and no play makes [$user%] a dull (var)."
20 GOTO 10

Fluid and Rigid Body [simulations]

Since I’ve been at home more or less all day every day, I’ve found that I have time to experiment with some liquid and rigid body simulations – and they need it, since it takes so long to calculate and bake the physics into the scene, and then render them out to images.

These are all rendered out as still frames, and are so satisfying to watch when they’re all pieced together as videos – I’m currently aiming to have around 15 short simulations of different types before I compile them into a YouTube video: for now though, I wanted to share some single images and get back into posting regularly here.

The finished video

What are they?

These show simulations of how gravity and collisions affect solid objects (and, in the case of the first three images, liquids).
The other images show how gravity affects a set of 12×12 cubes being dropped, having things thrown into them, having coloured balls dropped onto them, and having the surface they are resting on moved out from under them.

Fluid simulation

I liked the splash effect on this, but I’d not set the liquid in the jar to the right transparent material, so it does not ripple or move like water would.
Water drop (Eevee renderer, preview bake)
This shows a simplified form of how the water will react to the object that causes the splash.
Water drop (Eevee, higher resolution bake)

Rigid body simulations

In the last four images, I’ve used some HDRI maps as the background: these are panoramic images with lighting data embedded into them, so that they affect the lighting in the scene, making it look more natural than it would if I used artificially placed lighting.

If you look at the image above, the floor is a glossy, reflective surface, and you can see the reflection of the horizon and sky on it. Look even closer: each individual cube has a slight reflection from the 360° scene background – so even if a cube is facing us from the camera’s perspective, it shows the reflection of the background behind the camera. This updates in real time as each cube scatters, rotates and tumbles around the simulation.

Update

Edit: The video is complete and is now online

I appreciate it has been a while since my last post, and that there was a long delay before the one before, too. I have not given up on the project; I have been working on a video for a week or so. The background track I’d chosen ran rather longer than expected, and rather than truncate it, I wanted to ensure I had enough content for a full, animated video.
The video will use a few different looks – a few reused motions rendered in different NPR (non-photorealistic rendering) toon styles.
Spoiler alert: there are a lot of fluid simulations, which use computer-generated physics and look impressive – but take so long to produce.

I have spent whatever time I have available studying 2D animation like crazy, barely sleeping because I’ve been awake absorbing so much theory and knowledge about the process.

I’d like to say that I’ve not been taken off track, and that I’ve been fully focused on my game – but the thought of how to integrate the two has always been in the back of my mind: how a show with the characters would work as a cartoon. My thought was that I always wanted to make the cartoon dark – not as in low brightness or sombre, but with a dark storyline.

A change of tone

As much as I love sci-fi dystopia, and enjoy world-building a society conditioned by war, destruction and fear – a near-future city ruled by corporations and higher powers, our city – I will be taking the site in a different, more uplifting direction for a while.

With the mysterious, invisible killer Coronavirus on everybody’s minds and lips at the moment, it’s clear we are entering troubling times, where the entire country may be placed on lockdown because of this health crisis. To plot and describe an environment of panic, and people’s struggle to survive in an uncertain climate, would not feel like science fiction: it’ll be going on in our daily lives, and would be so raw that it’d hit too close to the bone – more truthful than the escapist entertainment it’s intended to be.

For the foreseeable future, while I will still be working on the game, I will be posting a lot less about it; I feel it is my duty as a content creator to produce content that evokes positive emotions, and I have made the design decision to make and post more inspiring creations of a more varied nature.

I want to make things that will brighten people’s day in any way possible: before, my intention was to inform and intrigue; as long as there is a pandemic and people’s spirits need raising, that is where my skillset needs to be put to use.

I wish everybody the best of health, and to stay safe.

A rendering fluid simulation

2D in 3D [two days]

This is a rollover update, since there wasn’t one yesterday – but there’s enough content for today. It’s my last foray into 3D space for a couple of weeks while I hunker down and really get to grips with Grease Pencil, so that I’m able to produce quality animation. I’ll also need to figure out where in the universe to fit the story.


This is further experimentation in which I try to simulate 2D animation using a camera moving across the city in 3D space – and also one with sound, if you watch the ending at half speed. I’ve applied the free Eevee Comics Shader (v3.0, by Paul Caggegi) to most (or some!) of the buildings, to make them look like they are printed, like comics.

The motion is the same camera path around the city that I used in one of the Skywatcher videos – so it may look familiar, except everything looks a lot different now!

I wanted to capture depth but not make things look too busy; so the anonymous buildings I’ve not completed yet can blend into the background.

The frames

Let’s have a breakdown of them

With such a large scene as this, it’s difficult to control the lighting and the shadows of what the camera sees, so I’ve parented a light to the camera – so that whatever is in shot is well lit.
You can see the light falloff at the corners of the building.


I’ll finish this later!

Characters: Greenflame, Phil, Cora [2.5D animation]

I’ve taken a little bit of annual leave from the project for a few days – and being quite honest, it could have counted as sick leave too. I’m adapting to some new medication: and while it introduces itself to my body over time, it has had an effect on my habits – and I’ve been in more of an absorbing state of creativity: where I binge watch tutorials, and put my creativity on ‘receive’ rather than ‘transmit’.

See, lately I’ve found a new interest – obsession, even – that I’ve just wanted to pour my education into: 2D animation! I’ve flirted with the idea for some time, because it’s a great way to tell stories, but never got around to it.
I’m hesitant to say it’s quicker than 3D, because this is animation: if you’re doing it right, it’s a slow process no matter what your medium (waiting hours for a couple of seconds of animation to render, or spending hours drawing each frame so that it flows seamlessly from the last).

After seeing a video of how far along the Grease Pencil tool in Blender has come, I was blown away, and decided that I needed to learn to master it.

What is Grease Pencil?

Originally, a grease pencil was a wax pencil used by traditional animators for writing on glossy surfaces; it conjures to mind somebody sat at a table with a light underneath, sketching on a transparent cel, drawing over the last frame.

In Blender, the grease pencil used to be used for making notes and scribbling around models, and was very limited.
Today, it’s a digital drawing tool inside a 3D software package, and it has become so much more: it can be (and is being) used to produce 2D animations that are manipulable in 3D space, and each new version of Blender brings phenomenal new features to the table.

Source: blender.org

So, that’s it for the game?

Not at all!
In fact, I’m looking at ways to incorporate them: the cartoons will give depth to the backstory, in short animated segments.
The cartoon will take place against the backdrop of the city, though it will look very different – cartoon, comic style.

Here’s a Freestyle render of the three main characters; these are the rogue time-travellers the story centres around, and the protagonists of the short story, Cora.

Here they are from a different angle, with a white background, and the versions I drew in front of them. It doesn’t matter that they look like zombies, because I’m only using them as reference for body form and perspective while I draw over them.

I drew over the top of the characters in the exact pose they’re in from the view, and then moved the reference image to the side, so that I can compare side-by-side while I add in the details.


I’ve used some glasses from BlendSwap (u: montedre) for Phil’s future-visualisation glasses and heads-up display.
The original model is a pair of headphones and glasses in one, but I’ve given Cora the headphones.

This Freestyle pass picked up too many unnecessary contours, so this time I’ve just used one brush to simplify things:

I’ve set a camera to spin around our three characters, and animated them with slightly involuntary movements, so I can get an idea of how they look when they move.

The MakeHuman models in a pose together.
You can see the Freestyle outline around Cora’s hands specifically.

I’m creating vectors around the outline of the frames
…and have my frames all lined up, so that I can just slide them along to start animating the next frame.
The drawings are 3D meshes, modifiable in 3D space.
Add a bit of colour…