Wednesday, August 12, 2015

Indirect Lighting for Player Controlled Lights

I'm continuing to work on improving the dynamic lighting in 3DWorld. This post is a short update that builds on the previous two posts (indirect lighting and dynamic lighting + triggers):
Indirect Lighting
Lighting and Triggers

I was reading another blog post from the developers of The Witness, and it gave me an idea. The contrast between the lit and unlit areas of the basement scene is too high. All of the lighting comes from direct spotlight illumination; the indirect (reflected) light is completely missing. This is why the orange balls look black on the sides facing away from the lights. The scene doesn't look very real.

Let me review how the indirect lighting in 3DWorld works. The scene is divided into a 3D grid of light volumes in {x, y, z} that are uploaded to the GPU and used in the fragment shader to individually light each pixel. During the offline preprocessing phase, each light source emits millions of rays, each of which is traced through the scene using multiple CPU threads. Each ray's weighted RGB (red, green, and blue) light contribution is added to every grid cell it passes through. This means the fragment shader can query any point within the scene bounds to get the indirect lighting contribution. This approach uses more memory per unit volume than lightmaps, so the lighting is stored at a coarser granularity. But it has the advantage that dynamic objects (such as the orange balls) that weren't part of the original static scene can be correctly lit. This approach may also be simpler to implement and more efficient to compute - I haven't implemented lightmaps, so I don't know for sure.
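In code form, the accumulation step looks roughly like the sketch below. This is simplified, illustrative C++ with made-up names (vec3, grid3d, add_ray_contrib), not the actual 3DWorld ray tracing code:

    #include <algorithm>
    #include <vector>

    struct vec3 {float x, y, z;};

    struct grid3d {
        int nx, ny, nz;          // voxel resolution in x, y, z
        vec3 lo, hi;             // scene bounding box
        std::vector<float> rgb;  // 3 floats (R, G, B) per voxel

        grid3d(int nx_, int ny_, int nz_, vec3 const &lo_, vec3 const &hi_) :
            nx(nx_), ny(ny_), nz(nz_), lo(lo_), hi(hi_), rgb(3*size_t(nx_)*ny_*nz_, 0.0f) {}

        float *voxel(int x, int y, int z) {return &rgb[3*size_t((z*ny + y)*nx + x)];}

        // Add one ray segment's weighted color to the grid cells it passes through.
        // This version point-samples along the segment for simplicity; a real tracer
        // would walk the voxels with a 3D DDA so that each cell is visited exactly once.
        void add_ray_contrib(vec3 const &p1, vec3 const &p2, float const color[3], float weight) {
            int const nsteps = 100; // sampling density along the segment
            for (int i = 0; i < nsteps; ++i) {
                float const t = (i + 0.5f)/nsteps;
                int const ix = std::clamp(int(nx*(p1.x + t*(p2.x - p1.x) - lo.x)/(hi.x - lo.x)), 0, nx-1);
                int const iy = std::clamp(int(ny*(p1.y + t*(p2.y - p1.y) - lo.y)/(hi.y - lo.y)), 0, ny-1);
                int const iz = std::clamp(int(nz*(p1.z + t*(p2.z - p1.z) - lo.z)/(hi.z - lo.z)), 0, nz-1);
                float *const v = voxel(ix, iy, iz);
                for (int c = 0; c < 3; ++c) {v[c] += weight*color[c]/nsteps;} // accumulate R, G, B
            }
        }
    };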

Okay, back to the problem. Dynamic lights can be turned on and off when their triggers (light switches) are activated, so the indirect lighting isn't constant. It can't be baked into the (single) global lighting volume of the scene. The indirect lighting can't even be stored per-trigger, because it needs to be removed when an individual light is destroyed. What's needed are per-light-source volumes that are generated on the fly when needed and merged into the final lighting solution whenever a light's intensity (or enabled state) changes. Since light triggering is infrequent, most game frames have the same set of enabled lights. It makes sense to merge in the new lighting values only when they change, and to sparsely re-upload the merged data to the GPU. This avoids having to read from multiple 3D lighting textures on the GPU every frame. I haven't actually tried that, but I assume it would have a significant effect on frame rate. The 4-5ms of CPU time spent updating the lighting every few seconds is negligible.
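Here's a sketch of that merge step, reusing the hypothetical grid3d type from the sketch above. Again, the names and structure are illustrative, not the real implementation:

    struct light_volume {
        std::vector<float> rgb; // same voxel layout as the merged grid
        float intensity;        // current scale factor; 0.0 when the light is off
        bool changed;           // set by the trigger/light switch logic
    };

    // Rebuild the merged volume only when some light's state changed, then
    // re-upload the affected region of the GPU 3D texture (e.g. with glTexSubImage3D()).
    void update_lighting(grid3d &merged, std::vector<light_volume> &lights) {
        bool any_changed = false;
        for (light_volume const &lv : lights) {any_changed |= lv.changed;}
        if (!any_changed) return; // common case: same set of enabled lights, nothing to do

        // start from zero, or from a baked static lighting volume if there is one
        std::fill(merged.rgb.begin(), merged.rgb.end(), 0.0f);

        for (light_volume &lv : lights) {
            if (lv.intensity > 0.0f) { // skip disabled lights
                for (size_t i = 0; i < merged.rgb.size(); ++i) {merged.rgb[i] += lv.intensity*lv.rgb[i];}
            }
            lv.changed = false;
        }
    }

Since triggering is infrequent, paying a few milliseconds for a full re-merge once in a while is much cheaper than sampling several 3D textures per pixel every frame.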

So what does it look like? Here is a screenshot of the direct + indirect lighting effects on the basement spotlight scene with some orange balls in motion.

Direct + Indirect lighting + Shadows in the basement spotlight scene.



The biggest difference is the spotlight's light reflecting off the ceiling and floor near the back wall and illuminating the wall itself. The sides of the balls facing away from the lights aren't completely black anymore. Much better. Unfortunately, there is now a 6-second freeze when the player first turns on the lights, as the CPU computes 5 million rays (1M per light source) with 4 bounces each. That really ruins gameplay. Who wants to sit there waiting for the lighting to be computed in the middle of playing the game? It takes longer than loading the scene at the beginning!

One solution is to compute the lighting of each light source once in a preprocessing pass, then write it to disk for later reuse. I modified the scene file reader to accept a filename attached to each light for caching its indirect lighting on disk. This works well, reducing the lighting computation time from 6 seconds to a few milliseconds.
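The caching flow is roughly the following. The function names and the trivial raw-float file format here are hypothetical, not 3DWorld's actual scene file I/O:

    #include <cstdio>

    // Try to load a previously computed lighting volume; returns false on a cache miss.
    bool read_light_cache(char const *fn, std::vector<float> &rgb) {
        FILE *fp = fopen(fn, "rb");
        if (fp == nullptr) return false; // no cache file yet
        size_t const nread = fread(rgb.data(), sizeof(float), rgb.size(), fp);
        fclose(fp);
        return (nread == rgb.size());
    }

    // Assumes lv.rgb has already been sized to match the lighting grid.
    void compute_or_load_light_volume(light_volume &lv, char const *cache_fn) {
        if (read_light_cache(cache_fn, lv.rgb)) return; // fast path: a few milliseconds
        //cast_rays_for_light(lv); // slow path: ~1M rays with 4 bounces per light
        FILE *fp = fopen(cache_fn, "wb"); // write the result for next time
        if (fp != nullptr) {fwrite(lv.rgb.data(), sizeof(float), lv.rgb.size(), fp); fclose(fp);}
    }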

However, now I'm stuck with multiple 8MB files on disk, one per light source. Together, these files take up more disk space than the rest of the scene files combined. They need to be compressed. Fortunately, they're easy to compress: the RGB color data is mostly zeros, and 32-bit floating-point numbers have more precision than I need. 8-bit unsigned integers would work just fine - the values get converted to 8 bits in the GPU texture later anyway. The first thing I did was to remove most of those zeros. Since these are small local lights, their radius of influence is small, and their light is confined to this one room in the basement. I first filter the lighting values so that any value smaller than 0.1% of the max intensity is clamped to 0. Then I compute the smallest bounding cube that contains all of the nonzero values. This provides a 100-200x reduction in file size and memory usage: the 8MB files are now only 40-80KB. The reduction is enough that the 32-bit => 8-bit data compression doesn't seem necessary.
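An illustrative version of those two compression steps is below, again with made-up names; the 0.1% threshold is the one quoted above:

    struct voxel_bbox {int x1, y1, z1, x2, y2, z2;};

    // Clamp near-zero values, then return the bounding box of the remaining nonzero voxels.
    voxel_bbox filter_and_crop(grid3d &g, float max_intensity) {
        float const thresh = 0.001f*max_intensity; // clamp values below 0.1% of the max
        voxel_bbox bb = {g.nx, g.ny, g.nz, 0, 0, 0}; // start with an empty (inverted) box

        for (int z = 0; z < g.nz; ++z) {
            for (int y = 0; y < g.ny; ++y) {
                for (int x = 0; x < g.nx; ++x) {
                    float *const v = g.voxel(x, y, z);
                    bool nonzero = false;
                    for (int c = 0; c < 3; ++c) {
                        if (v[c] < thresh) {v[c] = 0.0f;} else {nonzero = true;}
                    }
                    if (nonzero) { // grow the box to include this voxel
                        bb.x1 = std::min(bb.x1, x); bb.x2 = std::max(bb.x2, x+1);
                        bb.y1 = std::min(bb.y1, y); bb.y2 = std::max(bb.y2, y+1);
                        bb.z1 = std::min(bb.z1, z); bb.z2 = std::max(bb.z2, z+1);
                    }
                }
            }
        }
        return bb; // only the voxels inside bb need to be written to disk
    }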

Here are some screenshots comparing the effects of the different lighting components. In my opinion, the new combined direct + indirect lighting looks much better than direct only. [Ignore the frame rate in the lower left corner - I froze the scene update, so the frame rate counter wasn't updating. It normally runs at over 200 FPS.]

Uniform lighting shows the base material colors and textures. Some crates were added to provide more interesting shadows.



No lighting. A few emissive objects are visible (the light switch, and the sky through the window).
Direct lighting + shadows only. The spotlights themselves are lit by a separate small light. Similar to the previous blog post.


Indirect lighting only. Most of the direct light hits the floor and ceiling near the back wall, reflecting light onto the wall.




Direct and indirect lighting combine to form a more realistic global lighting solution for the scene.


5 comments:

  1. Yeah! Per-light illumination volume maps! Makes a lot of sense. Still seems like it would be worth it to knock the luminosity depth back down to 1 byte. You wouldn't need to clamp the small values, since the smallest nonzero value in 8 bits is about 0.3%, so there could be some savings in performance as well as storage.

  2. Lighting is computed as floating-point R, G, B, and intensity values. The intensity can later be scaled based on lighting parameters; for example, for cloud/sky lighting, intensity can be varied based on the time of day. R, G, and B values are generally stored close to the [0,1] range. These are stored as one byte (0-255) each by multiplying the colors by the intensity and applying various scaling, where a scale factor is associated with the file itself and multiplied back in as a constant when loading. This allows files to be normalized, so that lighting data with very different magnitudes can be stored in different files with different scale factors while maximizing dynamic resolution.

    The min luminosity of an 8-bit 0-255 value is around 0.3%. However, since these files are stored as RGB, the actual luminosity of a voxel is (R+G+B)/3, which has 765 unique nonzero values, and the min representable value is around 0.1%. So I use a threshold of 0.1% for filtering nonempty voxels at the floating-point stage, before converting to 8-bit RGB.
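    A minimal sketch of the quantize/dequantize step (illustrative names, not the actual code):

      #include <algorithm> // for std::min/max
      #include <cstdint>

      // scale = the per-file max intensity used to normalize values to [0,1]
      uint8_t quantize(float v, float scale) {
          float const n = v/scale; // normalize
          return uint8_t(std::min(255.0f, std::max(0.0f, 255.0f*n + 0.5f))); // round, pack into one byte
      }
      float dequantize(uint8_t q, float scale) {return scale*(q/255.0f);} // applied when loading the file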

    1. I went back and looked through the code, and it compares each of the R/G/B values to the threshold 1.0/(256.0 * max(0.001f, scale)). The scale factor is computed as the max intensity and used to normalize values to the [0,1] range. It's maxed with 0.001 as a sort of absolute min intensity check for very low intensity lights. But the real threshold used in most cases is 1/256, which represents one step of 8-bit resolution.

  3. I was thinking about it, and there may be advantages to using an HSV model (hue, saturation, and value). That way the two color channels can be low bit width, and you can put most of your data depth into the value channel. I don't know if that will help, but the efficiency may improve, and you could easily change the color without redoing all of the intensity calculations.

  4. Thanks for the suggestion. That might help, but it would require quite a few code changes (ray casting, file I/O, lighting compression, shader code, etc.). I'm not sure I want to go through all of that effort just to see if it looks better.

    I can already change the color of global lights such as the sun, moon, and sky/clouds. I use a white light for the ray casting and light accumulation, then multiply the accumulated color by the color of the light source before creating the texture that gets passed to the GPU/shader. I already do this for the day/night cycle.
