Each frame, the generated smoke is accumulated and diffused through open spaces in the scene, with a bias that pushes the smoke upward over time so that it accumulates near the ceilings. Most of the time I use something like a 128x128x64 grid, which is a reasonable tradeoff between quality and performance. Now that I have a new computer + GPU I might be able to get away with a higher resolution, though it's nice to make everything a power of 2, and increasing by 2x in all three dimensions is 8x more volume data to store and process. 4MB of texture data for smoke density + RGB indirect colors is a reasonable amount of data to generate and upload to the GPU each frame, but 32MB is a lot.
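To put numbers on that: 128x128x64 is about 1M voxels, and 4MB for that grid works out to 4 bytes per voxel (RGB indirect color + smoke density, so presumably an RGBA8 format). Doubling each dimension to 256x256x128 gives about 8M voxels, or 32MB at the same 4 bytes per voxel.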
The smoke effect is implemented in the GLSL fragment shader of every material used in the indoor scene. It's generally too expensive to use on vegetation due to high pixel overdraw, but that's okay: the grass and trees are all outside and can be smoke-free. Standard fog without volumetric lighting can still be used on the outdoor parts of the scene. To get the true volumetric effect, the smoke can't be explicitly rendered in a way that modifies the depth buffer. The individual smoke puffs can be rendered that way, but that doesn't make a convincing room full of smoke.
For each triangle fragment of indoor geometry we need to do a 3D volume ray traversal from the fragment (point on the scene geometry) to the camera/player/eye in world space, and accumulate light along the ray to include the effects of both forward and back scattering. The shader performs this ray marching in a loop, where each iteration:
- Looks up the smoke density and local indirect lighting in a 3D texture that's incrementally updated each frame
- Computes the forward and back scattering at that point using the smoke density and color
- Checks the scene shadow map to determine if the point is visible to the sun and adds sun light
- Finds any nearby dynamic light sources and adds their light contributions
vec3 dir    = eye_pos - vpos;    // pos to eye
vec3 normal = normalize(dir);    // used for dynamic lights
vec3 pos    = vpos;              // world space
float nsteps    = length(dir)/step_delta;
int num_steps   = 1 + int(nsteps);      // round up to nearest int
vec3 delta      = dir/(nsteps*scene_scale); // world space delta for each iteration
float step_weight  = fract(nsteps);     // needed to remove sharp transitions (popping)
float smoke_sscale = SMOKE_SCALE*step_delta;
vec4 cur_epos   = fg_ModelViewMatrix * vec4(vpos, 1.0); // world => eye space
vec3 epos_delta = fg_NormalMatrix * delta;              // eye space delta

// smoke volume iteration using 3D texture, pos to eye
for (int i = 0; i < num_steps; ++i) {
    vec4 tex_val = texture(smoke_texture, pos.zxy); // rgba = {indir_color.rgb, smoke}
    // add dynamic lighting
    tex_val.rgb += add_dynamic_lights(pos, normal); // world space
    // add sun light with shadows
    const float smoke_albedo = 0.9;
    tex_val.rgb  += smoke_albedo * get_shadow_map_weight(cur_epos) * light.diffuse.rgb;
    cur_epos.rgb += epos_delta; // move position in eye space
    // final calculation at this step
    float smoke = smoke_sscale*tex_val.a*step_weight; // smoke density
    color = mix(color, vec4((tex_val.rgb * smoke_color), 1.0), smoke);
    pos += delta*step_weight; // move position in world space
    step_weight = 1.0;
} // for i
return color;
This shader does a ton of work: over 100 iterations of this loop per fragment, where each iteration does a 3D texture lookup, a shadow map texture lookup, lighting computations, and various vector floating-point math. All of this is done at 1080p = 1920x1080 pixels. It wasn't practical to use this approach until I got my new computer and GPU (a GeForce GTX 770), where I now get at least 60 fps. That's good enough for this one effect, but it's not great that this single effect takes most of my allocated frame time. Oh well, maybe I can optimize it more later. If I were using a deferred rendering pipeline, I could do the volume lighting on a lower resolution (half? quarter?) target, then upsample and blur it. But I'm doing this with a forward renderer, which means I need to process every fragment independently.
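To put a rough number on the cost: 1920x1080 is about 2.07M fragments, and at 100+ steps each that's over 200M loop iterations per frame. With a 3D texture fetch plus a shadow map fetch per iteration, that's on the order of 400M texture reads per frame, or over 24 billion per second at 60 fps, and that's before counting any overdraw.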
Now that I have this effect working, I can have fun setting the scene on fire to get some nice looking volumetric smoke. The plasma cannon does a nice job of creating fire and smoke. Here is a screenshot of the lobby area of my office building (modeled by myself). The fires are out, but the room is filled with smoke, and you can see the shafts of sunlight coming from the windows around the stairwell above.
[Note: I'm not sure why the left side of these screenshots has a tiny strip of pixels from the right edge. It's not clear which stage of my screen capture, image resize, and bmp->jpg compression pipeline caused this.]
Sunlight filtering down between the stairs, producing light shafts in a smoke-filled room.
Over time, the smoke will dissipate as it diffuses through the air, up the stairs, and out the doors. If I were to shoot out the windows, the smoke would quickly escape the building and the air would clear up in a few seconds. This works because the smoke diffusion algorithm uses dynamically updated flow vectors for each cell that determine the rate at which smoke can flow between that cell and the adjacent cells in {x, y, z}, based on the cross-sectional area of a 2D cut. When the scene is modified or destroyed, this data is updated, and the smoke diffusion algorithm incrementally updates the 3D texture over several frames.
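To make the flow idea concrete, here is a minimal sketch of this kind of per-cell update, written as a GLSL compute shader purely for illustration. This is not the actual 3DWorld code (the real update runs incrementally on the CPU and uploads the results to the 3D texture), and all of the bindings, names, and rate constants below are made up:

#version 430
layout(local_size_x = 8, local_size_y = 8, local_size_z = 4) in;

layout(r32f,  binding = 0) uniform readonly  image3D smoke_in;  // current smoke density
layout(r32f,  binding = 1) uniform writeonly image3D smoke_out; // next smoke density
layout(rgba8, binding = 2) uniform readonly  image3D flow;      // .xyz = openness in [0,1] between a cell and its +x/+y/+z neighbor

uniform float diffuse_rate = 0.1;  // fraction exchanged with each open neighbor per step
uniform float upward_bias  = 0.02; // extra upward transfer so smoke collects near the ceilings

float density(ivec3 p) { // clamp to the grid so border cells exchange with themselves (no net flow)
    return imageLoad(smoke_in, clamp(p, ivec3(0), imageSize(smoke_in) - 1)).r;
}
vec3 openness(ivec3 p) { // openness to the +x/+y/+z neighbors; zero outside the grid (assumes flow has the same size)
    if (any(lessThan(p, ivec3(0))) || any(greaterThanEqual(p, imageSize(smoke_in)))) return vec3(0.0);
    return imageLoad(flow, p).xyz;
}

void main() {
    ivec3 p = ivec3(gl_GlobalInvocationID.xyz);
    if (any(greaterThanEqual(p, imageSize(smoke_in)))) return;
    float d = density(p), new_d = d;
    // symmetric exchange with the 6 neighbors, scaled by the openness of the shared face
    new_d += diffuse_rate * openness(p).x                * (density(p + ivec3(1,0,0)) - d);
    new_d += diffuse_rate * openness(p - ivec3(1,0,0)).x * (density(p - ivec3(1,0,0)) - d);
    new_d += diffuse_rate * openness(p).y                * (density(p + ivec3(0,1,0)) - d);
    new_d += diffuse_rate * openness(p - ivec3(0,1,0)).y * (density(p - ivec3(0,1,0)) - d);
    new_d += diffuse_rate * openness(p).z                * (density(p + ivec3(0,0,1)) - d);
    new_d += diffuse_rate * openness(p - ivec3(0,0,1)).z * (density(p - ivec3(0,0,1)) - d);
    // upward bias: a fraction of each cell's smoke rises through any open face above it
    new_d -= upward_bias * openness(p).z                * d;
    new_d += upward_bias * openness(p - ivec3(0,0,1)).z * density(p - ivec3(0,0,1));
    imageStore(smoke_out, p, vec4(max(new_d, 0.0)));
}

Since each pairwise exchange uses the same openness value from both sides of the shared face, the total amount of smoke is conserved (aside from the clamp at zero), and the small per-step rates keep the update stable.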
Here is another screenshot showing high contrast light and shadows caused by a skylight in the ceiling. Volumetric smoke is added when the smoke puffs rising from the fires hit the glass plate in the ceiling. I had to use a pretty small step size of 0.2x the cell grid size (5 steps per texel) to remove the aliasing effects at the shadow boundary, which hurts the frame rate. There is still a small amount of aliasing present if you look at it from the right angle.
Light from the overhead skylight scatters in the smoky room to produce volumetric shadows.
Dynamic and static scene lights also contribute scattered light to smoke. As I've shown in previous posts, 3DWorld supports hundreds of dynamic point and line light sources. These can be made to illuminate smoke particles as well as scene geometry, though this decreases the frame rate to only 30 fps in some cases. In this screenshot you can see the glow/halos from the laser beam line light source and the blue floating point light source interacting with a smoky basement.
A laser beam and blue floating point light illuminate the surrounding smoke, producing a glow due to light scattering.
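As a side note, the add_dynamic_lights() call in the ray marching loop above accumulates the contributions of nearby dynamic lights at the current sample point. Here is a minimal sketch of that idea (not the actual 3DWorld function; the uniforms, the linear falloff, and the isotropic scattering assumption are all made up for illustration):

#define MAX_DLIGHTS 64
uniform int   num_dlights;                // number of active dynamic lights
uniform vec3  dlight_pos   [MAX_DLIGHTS]; // world space centers
uniform vec3  dlight_color [MAX_DLIGHTS]; // RGB intensities
uniform float dlight_radius[MAX_DLIGHTS]; // radius of influence

vec3 add_dynamic_lights_sketch(in vec3 pos) { // pos = current sample point in world space
    vec3 color = vec3(0.0);
    for (int i = 0; i < num_dlights; ++i) {
        float dist = distance(dlight_pos[i], pos);
        if (dist > dlight_radius[i]) continue;      // outside this light's range
        float atten = 1.0 - dist/dlight_radius[i];  // falls off to zero at the radius
        color += dlight_color[i] * (atten*atten);   // treat the smoke sample as an isotropic scatterer
    }
    return color;
}

The real function also takes the normal (which is the direction to the eye in this shader), which could be used for a directional scattering term, and a line light would need something like a closest point on segment distance instead of a simple point distance.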
Finally, here is a screenshot of my "God Rays" implementation in tiled terrain mode. In this case there is no 3D volume texture, since the scene is "infinite" in size, so the cloud density is procedural. The shader is different, but the general idea is the same. I'm evaluating 4 octaves of Perlin noise at every step along the ray, for every fragment (pixel) of every object in this scene, at > 2M pixels, so I only get 22 fps. This is a very brute force solution, so it's impractical to have this effect in a realtime engine, but it makes a pretty picture:
God rays from sun filtering through rain clouds in tiled terrain mode.
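For reference, here is a rough sketch of how this kind of procedural version can be structured. This is not the actual tiled terrain shader: it uses a cheap hash-based value noise in place of the real Perlin noise, and the uniforms, the cloud-layer occlusion model, and all of the constants are made up for illustration:

uniform vec3  sun_dir;       // normalized direction toward the sun
uniform vec3  sun_color;
uniform float cloud_height;  // world space z of the cloud layer
uniform float noise_scale;   // world space to noise space scale
uniform float density_scale; // cloud opacity scale
uniform float step_scale;    // per-step in-scattering amount

float hash3(vec3 p) { // cheap hash, standing in for a real Perlin noise function
    return fract(sin(dot(p, vec3(127.1, 311.7, 74.7))) * 43758.5453);
}
float vnoise(vec3 p) { // value noise with smoothstep interpolation
    vec3 i = floor(p), f = fract(p);
    f = f*f*(3.0 - 2.0*f);
    float n00 = mix(hash3(i),               hash3(i + vec3(1,0,0)), f.x);
    float n10 = mix(hash3(i + vec3(0,1,0)), hash3(i + vec3(1,1,0)), f.x);
    float n01 = mix(hash3(i + vec3(0,0,1)), hash3(i + vec3(1,0,1)), f.x);
    float n11 = mix(hash3(i + vec3(0,1,1)), hash3(i + vec3(1,1,1)), f.x);
    return mix(mix(n00, n10, f.y), mix(n01, n11, f.y), f.z);
}
float cloud_density(vec3 p) { // 4 octaves of noise per evaluation
    float val = 0.0, freq = 1.0, amp = 0.5;
    for (int o = 0; o < 4; ++o) { val += amp*vnoise(freq*p); freq *= 2.0; amp *= 0.5; }
    return val;
}
float sun_visibility(vec3 pos) { // fraction of sun light reaching this sample through the cloud layer
    float t = (cloud_height - pos.z) / max(sun_dir.z, 0.01); // distance to the cloud layer along sun_dir
    return exp(-density_scale * cloud_density(noise_scale * (pos + t*sun_dir)));
}
vec3 god_rays(vec3 frag_pos, vec3 eye_pos) { // sun light in-scattered toward the eye along the view ray
    const int NUM_STEPS = 64;
    vec3 delta = (eye_pos - frag_pos) / float(NUM_STEPS);
    vec3 pos   = frag_pos;
    vec3 accum = vec3(0.0);
    for (int i = 0; i < NUM_STEPS; ++i) {
        accum += step_scale * sun_visibility(pos) * sun_color;
        pos   += delta;
    }
    return accum;
}

The expensive part is the multi-octave noise evaluation at every step for every fragment, which is where the 22 fps goes.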
I still have more work to do, mostly improving the performance, but the results look pretty good so far. I'll have to see if there's some way to do the ray marching on a lower resolution image. Or maybe I can find a way to decrease the number of steps and use some kind of blur to remove the aliasing and noise from the shadow map. There are plenty of papers and presentations available on this topic. Overall, this volumetric scattering approach produces some pretty nice effects.
I don't know how it works, but the smoke system in Blender has some sort of vortex convolution that looks ace and runs in realtime.
Yeah, the holy grail would be to run a full atmospheric volume simulation every frame, but as you noted it takes forever. I think you could probably do some sort of special case for crepuscular rays. Do an outline for gaps in the shadow map and extrude the geometry, then do a low-res (1/4?) front and back depth pass and use the difference as an addition to the final buffer... Or whatever.
There are many ways to implement smoke. One of my later posts uses procedural noise plus volumetric effects to produce drifting clouds of smoke. It's technically realtime, but it does take most of the frame time.
I also added God rays as a GPU post-processing pass sometime after this post. These move with the camera and look pretty good in certain outdoor scenes. It's all screen space and uses the depth map of the previous draw and the sun disk passed in as a shader uniform, with ray marching along the light vector. It only works when the sun is visible though, and only with a single light source.
I might work on this more in the future. So many things to do, ... and it seems like procedural buildings will take a long time.