Monday, February 29, 2016

More Reflections

I can't get enough reflections in 3DWorld. They really bring life to the scenes. I recently added reflective surface support for horizontal "floor" polygons of imported models. The fragment shader only samples the reflection texture for fragments whose z-values fall within the z-range of the reflection plane. Other than that change, the reflection texture is created the same way I explained in my previous post.
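
To make the idea concrete, here is a minimal C++ sketch of that per-fragment test. The names (reflect_plane, use_reflection_tex) are hypothetical and the real check lives in the fragment shader, but the comparison is the one described above: only fragments whose z falls inside the reflection plane's z-range sample the reflection texture.

```cpp
// Hypothetical sketch of the z-range test described above; the actual test runs in GLSL.
struct reflect_plane {float zmin, zmax;}; // z-range covered by the reflective floor polygons

// Returns true if a fragment at world-space z should sample the reflection texture.
bool use_reflection_tex(float frag_z, reflect_plane const &plane, float tolerance=1.0E-4f) {
    return (frag_z > plane.zmin - tolerance && frag_z < plane.zmax + tolerance);
}
```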


Here is a screenshot showing the reflective marble floor from the Crytek Sponza atrium scene. You can see the reflected colored curtains and fire in the floor.

Realtime reflections on the shiny marble floor of the Crytek Sponza atrium.

I also added support for multiple reflective z-planes so that more than one floor can be reflective. The active reflective surface is the visible surface below the camera whose z-value is closest to the camera position. Surfaces above the camera are ignored because the system assumes only the tops of objects are reflective. A screenshot showing reflections on the upper floor of the Sponza atrium with some moving dynamic light sources follows the selection sketch below.
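
Here is a hedged sketch of that selection step, with invented struct and function names; the real code presumably tracks more state, but the closest-below-the-camera rule is the same.

```cpp
// Hypothetical sketch: pick the active reflective plane as the visible one below the camera
// whose top z-value is closest to the camera's z position.
#include <vector>
#include <cfloat>

struct reflect_plane {float zmin, zmax; bool visible;};

// Returns an index into planes, or -1 if no valid plane was found.
int select_active_reflect_plane(std::vector<reflect_plane> const &planes, float camera_z) {
    int best_ix(-1);
    float best_dz(FLT_MAX);
    for (unsigned i = 0; i < planes.size(); ++i) {
        if (!planes[i].visible)        continue; // skip surfaces outside the view
        if (planes[i].zmax > camera_z) continue; // ignore surfaces above the camera (only tops reflect)
        float const dz(camera_z - planes[i].zmax);
        if (dz < best_dz) {best_dz = dz; best_ix = int(i);}
    }
    return best_ix;
}
```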

Reflections of the scene and some colored lights on the upper level of the Sponza scene.

I made the marble floor of the museum scene reflective as well. It makes the scene look much more realistic. Take a look at the way the windows are reflected in the floor.

Reflections on the marble floor of the museum scene.

Here is the museum scene shown from a different viewpoint where the dinosaur skeletons are visible.

Another view of the museum scene showing the reflective floor.

I'm currently working on cube map reflections for small placed reflective objects such as the golden dragon in the office building scene. Cube map reflections are more difficult than planar reflections because they need to be rendered from all six sides of a cube rather than a single reflection plane. It's probably too slow to generate all six cube face images each frame for large scenes, so other tricks need to be played. For example, if the scene is static (no moving objects), the reflection images can be rendered once as preprocessing rather than per-frame. I'll make another post if/when I get this working correctly.
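
As a rough illustration of the preprocessing idea (not 3DWorld's actual implementation), here is a sketch that renders all six cube faces once at load time for a static scene; render_face() is a placeholder for whatever draws the scene into one face of the cube map texture.

```cpp
// Hedged sketch of building a reflection cube map once as preprocessing for a static scene.
struct vec3 {float x, y, z;};

struct cube_face {vec3 dir, up;};

// Face order and up vectors follow the usual cube map conventions: +X, -X, +Y, -Y, +Z, -Z.
cube_face const cube_faces[6] = {
    {{ 1, 0, 0}, {0, -1, 0}}, {{-1, 0, 0}, {0, -1, 0}},
    {{ 0, 1, 0}, {0,  0, 1}}, {{ 0,-1, 0}, {0,  0,-1}},
    {{ 0, 0, 1}, {0, -1, 0}}, {{ 0, 0,-1}, {0, -1, 0}}
};

// For a static scene, build the cube map once at load time rather than every frame.
// render_face(face_index, center, dir, up) draws the scene with a 90 degree FOV camera at center.
template<typename RenderFaceFn>
void build_static_cube_map(vec3 const &center, RenderFaceFn render_face) {
    for (unsigned face = 0; face < 6; ++face) {
        render_face(face, center, cube_faces[face].dir, cube_faces[face].up);
    }
}
```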

Thursday, February 18, 2016

Overhead Map View

This past week I have been making improvements to 3DWorld's overhead map view. This was originally a low resolution, cartoon-ish overhead view of the entire level that showed the level bounds, player position and direction, enemy positions, and item locations. The player can zoom in and pan around in the image interactively using the arrow keys.

I decided to add some improvements to the original system that I wrote many years ago. First, I added mouse wheel zoom and mouse click-and-drag for a "slippy map" interface that works like Google Maps. The update rate of the map view is somewhere between 10 and 30 FPS, which seems to be acceptable for a draggable map.
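
For reference, here is a minimal sketch of the slippy map controls with invented names; the constants (zoom factor, etc.) are illustrative rather than the values 3DWorld uses.

```cpp
// Hypothetical sketch of slippy map navigation: wheel zooms about the cursor, drag pans the view.
struct map_view_t {
    double cx = 0.0, cy = 0.0; // world-space center of the view
    double scale = 1.0;        // world units per screen pixel

    // Zoom in/out by a fixed factor, keeping the world point under the cursor fixed on screen.
    void zoom(int wheel_delta, int mouse_x, int mouse_y, int win_w, int win_h) {
        double const factor((wheel_delta > 0) ? 1.0/1.2 : 1.2);
        double const wx(cx + (mouse_x - 0.5*win_w)*scale), wy(cy + (mouse_y - 0.5*win_h)*scale);
        scale *= factor;
        cx = wx - (mouse_x - 0.5*win_w)*scale; // re-center so (wx, wy) stays under the cursor
        cy = wy - (mouse_y - 0.5*win_h)*scale;
    }
    // Pan by the mouse drag delta (in pixels), converted to world units.
    void drag(int dx_pixels, int dy_pixels) {
        cx -= dx_pixels*scale;
        cy -= dy_pixels*scale;
    }
};
```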

Each pixel of the map is computed independently. The colors are determined from a height-based table lookup and match the normal colors you tend to see in maps: blue for water, brown for dirt, green for vegetation, gray for exposed mountain rock, and white for snowy peaks. I added sun lighting by computing the normal at each pixel to make the peaks and valleys stand out more. Here is what this view looks like for a procedurally generated island scene. The player is located at the red dot in the center facing in the direction of the black dot.
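
Here is a hedged C++ sketch of that per-pixel computation: a height-based color lookup plus simple N dot L sun lighting using a normal estimated from neighboring height samples. The thresholds, colors, and function names are illustrative, not the actual 3DWorld tables.

```cpp
// Hypothetical per-pixel map shading: height-to-color ramp + diffuse sun lighting.
#include <algorithm>
#include <cmath>

struct color_t {float r, g, b;};

color_t height_to_color(float h) { // illustrative water/dirt/vegetation/rock/snow ramp
    if (h < 0.00f) return {0.1f, 0.2f, 0.8f};  // water
    if (h < 0.10f) return {0.5f, 0.35f, 0.2f}; // dirt
    if (h < 0.50f) return {0.2f, 0.6f, 0.2f};  // vegetation
    if (h < 0.80f) return {0.5f, 0.5f, 0.5f};  // exposed rock
    return {1.0f, 1.0f, 1.0f};                 // snow
}

// height(x, y) is the terrain height function; (lx, ly, lz) is the normalized sun direction.
template<typename HeightFn>
color_t shade_map_pixel(HeightFn height, float x, float y, float step, float lx, float ly, float lz) {
    // estimate the surface normal with central differences of neighboring height samples
    float const dzdx((height(x + step, y) - height(x - step, y))/(2.0f*step));
    float const dzdy((height(x, y + step) - height(x, y - step))/(2.0f*step));
    float nx(-dzdx), ny(-dzdy), nz(1.0f);
    float const nmag(std::sqrt(nx*nx + ny*ny + nz*nz));
    nx /= nmag; ny /= nmag; nz /= nmag;
    float const n_dot_l(std::max(0.0f, nx*lx + ny*ly + nz*lz));
    color_t const c(height_to_color(height(x, y)));
    float const light(0.25f + 0.75f*n_dot_l); // small ambient term so shaded slopes aren't black
    return {c.r*light, c.g*light, c.b*light};
}
```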

Map view of an island with correct lighting applied, which produces mountains that look 3D.

This approach works for heightmap textures as well. The user can interactively pan and zoom within a high resolution heightmap and view a pseudo-colored image of the terrain. It works just like an image viewing tool. In the following screenshot, I used the 16K by 16K pixel Puget Sound heightmap dataset available here.

Overhead map view of Mount Rainier from the 16K x 16K pixel Puget Sound texture with real lighting.

Note that interactive heightmap editing done in 3D mode will automatically update the overhead map view. Some of my custom height editing can be seen in the area around the player on the left. Those small lakes and green peaks aren't in the original heightmap dataset.

Here is a video where I pan around the Puget Sound heightmap and change the sun position/direction to prove that map view really does compute per-pixel lighting. The sun moves slowly because I use a keyboard key to move it a few degrees per key press.



Next, I added ray tracing of each pixel to pick up all of the other scene objects, plus another ray toward the light source to generate shadows. I'm getting around 20M primary rays per second of throughput on a 4 core CPU using OpenMP, which makes it interactive. I was getting over 30M rays per second during snow coverage map creation from the previous post, so this isn't quite as fast. I suspect the slowdown is due to cache misses when looking up the texture data for each hit point on the scene geometry, an extra step I didn't need when creating the snow coverage map. When the view is zoomed out this far, texture samples are spaced far apart, which makes the texture access pattern nearly random. Texture aliasing artifacts can be seen in the screenshot below. The small red/yellow and blue/yellow circles are red and blue team AI players.
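
The per-pixel loop looks roughly like the sketch below. The ray query and shading callbacks are placeholders for 3DWorld's actual functions; the point is the structure: one downward primary ray per pixel, a shadow ray toward the sun at the hit point, and OpenMP across rows.

```cpp
// Hedged sketch of the overhead map ray casting; trace_ray(), sun_visible(), and shade()
// stand in for the engine's real ray query and shading code.
#include <vector>

struct vec3    {float x, y, z;};
struct hit_t   {bool valid; vec3 pos; /* plus material/texture info */};
struct color_t {float r, g, b;};

template<typename TraceFn, typename ShadowFn, typename ShadeFn>
void render_overhead_map(int w, int h, float x0, float y0, float pixel_sz, vec3 const &sun_dir,
                         TraceFn trace_ray, ShadowFn sun_visible, ShadeFn shade, std::vector<color_t> &out)
{
    out.resize(size_t(w)*h);
#pragma omp parallel for schedule(dynamic, 8)
    for (int py = 0; py < h; ++py) {
        for (int px = 0; px < w; ++px) {
            vec3 const origin{x0 + px*pixel_sz, y0 + py*pixel_sz, 1.0e6f}; // start high above the scene
            hit_t const hit(trace_ray(origin, vec3{0, 0, -1})); // primary ray straight down
            color_t c{0, 0, 0};
            if (hit.valid) {
                c = shade(hit);
                if (!sun_visible(hit.pos, sun_dir)) {c.r *= 0.5f; c.g *= 0.5f; c.b *= 0.5f;} // darken shadowed pixels
            }
            out[size_t(py)*w + px] = c;
        }
    }
}
```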

Ray-traced overhead view of building scene with shadows enabled in gameplay mode.

You can see some shadows cast by the buildings and trees here. I'm simply darkening the pixels in the overhead image when the sun is not visible from the primary (vertical) ray hit location. Shadows are enabled by default, but they add some runtime. Shadow rays are faster to compute than primary rays because the search can terminate as soon as any intersection is found. However, they're less coherent, especially near complex surfaces, and that adds to the query time.

The results remind me of old sprite-based top down PC games and some of the simple phone/tablet games I've played. Here is another screenshot, this time of the house scene. The trees cast shadows on the ground and the rest of the scene. This scene uses a custom heightmap which is not drawn in map mode, so all terrain is green.

Ray-traced overhead view of house scene with shadows enabled in gameplay mode.

Overall, this was a fun side project. It doesn't look as good as what I could have drawn on the GPU, and it's much slower to compute on the CPU, but it does show off 3DWorld's ray-tracing support. It's just a different way to approach the problem. There are other benefits, such as improved ability to debug the collision volumes, ray tracing code, object placement, etc.

Wednesday, February 3, 2016

Rain and Snow Progress

The past few weeks of 3DWorld development have mostly been spent improving rain and snow effects. I have added several new features, including dynamic wetness/drying effects with puddles, realtime snow accumulation and coverage, ice, and snow footprints. I'll briefly cover each of these below.

Puddles

I continued implementing features from this blog post, and the next item up was puddles. I wanted to have puddles generated procedurally rather than having to place/paint them all by hand. 3DWorld is all about procedural generation and automating as much of the level design as possible. It's too much work to add every puddle by hand to every horizontal surface of the scene, for all of my test scenes. I don't own any 3D modeling tools and I only use free image editing tools such as Gimp.

Normally, puddles are formed in low areas of the ground, but most of the man-made (non-natural) surfaces in 3DWorld such as concrete floors are modeled as flat planes. The normal maps are too regular and finely detailed to use for puddle depth, so some other tiny invisible adjustment to surface height was required. It seemed like this was another place where I could apply Perlin noise, and as expected, it worked pretty well.
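
Conceptually, the puddle test looks something like this sketch, where noise2d() stands in for whichever Perlin/fractal noise function is used and the constants are illustrative: the noise acts as a tiny, invisible height offset, and points that end up below the water level become puddles.

```cpp
// Hypothetical puddle depth from a 2D noise perturbation of a flat surface.
#include <algorithm>

template<typename NoiseFn>
float get_puddle_depth(NoiseFn noise2d, float x, float y, float water_level = 0.0f,
                       float noise_freq = 0.5f, float noise_amp = 0.05f)
{
    float const surf_dz(noise_amp*noise2d(noise_freq*x, noise_freq*y)); // invisible height perturbation
    return std::max(0.0f, water_level - surf_dz); // positive depth => this point is inside a puddle
}
```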

The wet/dry/rain cycle works like this:
  1. The ground is dry
  2. Rain starts, making mostly horizontal surfaces that are open to the sky uniformly wet over time
  3. Rain stops, and the ground dries non-uniformly, leaving puddles
  4. After some time has passed, the ground becomes dry again (same as 1.)
So there are three wetness states for surfaces: dry, wet, and covered in puddles. These are implemented with shader flags that control how the surface is rendered, affecting diffuse albedo, specular intensity, specular gloss, and reflectivity (Fresnel term). A screenshot of some procedural puddles from state 3, generated on the ground after the rain has stopped, follows the sketch below.
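
Here is a minimal sketch, with invented names and constants, of how the wetness state could adjust those material parameters before they're passed to the shader. The actual engine sets shader flags and uniforms; this just shows the kind of adjustments involved.

```cpp
// Hypothetical mapping from wetness state to material parameters.
enum class surface_wetness {DRY, WET, PUDDLES};

struct material_params {
    float diffuse_scale  = 1.0f; // multiplier on diffuse albedo (wet surfaces look darker)
    float specular_mag   = 0.0f;
    float specular_gloss = 1.0f;
    float fresnel_refl   = 0.0f; // reflectivity/Fresnel term
};

material_params get_wet_material(surface_wetness state, float puddle_depth) {
    material_params p;
    switch (state) {
    case surface_wetness::DRY: break; // keep the dry defaults
    case surface_wetness::WET: // uniformly wet while it's raining
        p.diffuse_scale = 0.7f; p.specular_mag = 0.6f; p.specular_gloss = 40.0f; p.fresnel_refl = 0.5f;
        break;
    case surface_wetness::PUDDLES: { // only points inside puddles are fully wet and mirror-like
        float const w((puddle_depth > 0.0f) ? 1.0f : 0.0f);
        p.diffuse_scale = 1.0f - 0.4f*w; p.specular_mag = 0.9f*w; p.specular_gloss = 80.0f; p.fresnel_refl = w;
        break;
    }
    }
    return p;
}
```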

Procedural puddles on the ground as the concrete is drying out after a heavy rainstorm. Wet surfaces are reflective.

And here is a YouTube video showing the transition from dry ground, to wet reflective ground, to puddles, and back to dry ground. Oh, and yes, I finally added procedurally generated palm trees.



Realtime Snow Accumulation

My previous post was on volumetric/3D snow accumulation. That system produced good results, but required dropping a very large number (1 billion) of snowflakes from the sky and recording where each of them landed. This works well for a static scene where a one-time preprocessing step is acceptable. But I wanted another solution for faster snow accumulation that could be used with 3DWorld's realtime, dynamic weather system.

I decided to add support for a thin dusting of snow that "paints" upward oriented surfaces a white color. This isn't true volumetric snow with depth, but it still produces a nice snowy scene effect. The most difficult problem was determining which parts of each surface receive snow. Outdoor areas only please - we don't want snow accumulating in the lobby, offices, or basement!

My first attempt was to cast rays through scene vertices and tag a surface as "outdoors" if any ray reaches the sky, and "indoors" otherwise. This worked somewhat, and was good enough to use for the rain wet mask, but wasn't quite right for snow. It left areas under objects such as benches snowy, because the ground there was a single large polygon covering a large surface area, and the whole polygon was tagged as outdoors. It looked correct to have wet areas under cover during rain, because rainwater will flow across flat surfaces over time, but snow doesn't do this.

The next thing I tried was closer to how shadow mapping works. From what I understand, this is how other games mask snowy and wet areas. The scene is drawn in a "shadow" pass where the objects are projected down, or in the direction of the falling snow if there is wind. This is normally done on the GPU, but that would have produced aliasing and other artifacts. Since shadow mapping samples each pixel only once, a small/thin object such as a flagpole or lamppost that happened to fall in a bad spot near the pixel center would leave a hole in the snow mask. Have you ever seen a flagpole that blocks snow from accumulating around its base? It looks very odd and unnatural.

Instead of rendering the scene to a snow shadow map, I compute the coverage map on the CPU. A group of rays is traced through each pixel in a 3x3 grid pattern, and the ray intersections with the scene are determined. The furthest/deepest hit is used as the snow depth for that grid position (pixel). This way, a thin object such as a flagpole may block some of the rays, but won't block all 9 of them. It took a bit of further experimenting, but in the end I was able to remove all of the incorrect holes in the snow mask. I also tried adding random noise to the samples, which helped remove some of the artifacts from the snow mask, but also added noise to the smooth edges of snow around building walls and overhangs. So in the end I used a simple fixed grid.
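
Here is a hedged sketch of that coverage pass. trace_down() is a placeholder for the engine's ray query and returns the z-value of the first surface hit below the sky; the rest shows the fixed 3x3 sub-grid and the deepest-hit rule.

```cpp
// Hypothetical CPU snow coverage map: for each grid cell, trace a 3x3 pattern of downward
// rays and keep the lowest (deepest) hit as that cell's snow depth.
#include <algorithm>
#include <vector>
#include <cfloat>

template<typename TraceDownFn>
void build_snow_coverage(TraceDownFn trace_down, int nx, int ny, float x0, float y0, float cell_sz,
                         std::vector<float> &depth_map)
{
    depth_map.resize(size_t(nx)*ny);
#pragma omp parallel for schedule(static)
    for (int yi = 0; yi < ny; ++yi) {
        for (int xi = 0; xi < nx; ++xi) {
            float deepest(FLT_MAX); // lowest z hit by any of the 9 rays in this cell
            for (int sy = 0; sy < 3; ++sy) {
                for (int sx = 0; sx < 3; ++sx) { // fixed 3x3 sub-grid within the cell
                    float const x(x0 + (xi + (sx + 0.5f)/3.0f)*cell_sz);
                    float const y(y0 + (yi + (sy + 0.5f)/3.0f)*cell_sz);
                    deepest = std::min(deepest, trace_down(x, y)); // z of the first hit below the sky
                }
            }
            depth_map[size_t(yi)*nx + xi] = deepest; // snow penetrates down to this z at (xi, yi)
        }
    }
}
```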

The snow coverage mask is sent to the GPU and used in the fragment shader for per-pixel snow masking. The (x,y) position of the surface to be drawn is used to look up the depth (z) value in the snow mask, which tells the GPU how deep the snow penetrates the scene in z at that position. If the surface has a z-value below this point, it's underneath some other surface that blocks the snow; if its z-value is at or above the snow map depth, it's covered with snow. In addition, there is a soft transition from no snow to full snow based on the difference in depth/z between the snow mask and the surface. Finally, I used the slope of the surface (technically the z-component of the normal vector) to blend snow density from 1.0 for horizontal surfaces to 0.0 for nearly vertical surfaces.
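
The per-fragment logic, written here as plain C++ for illustration (the real code is in the fragment shader), might look like this; the transition distance is an invented constant.

```cpp
// Hypothetical per-fragment snow weight: compare the fragment z against the snow map depth,
// apply a soft transition near the snow line, and fade snow out on steep surfaces.
#include <algorithm>

// Returns a snow blend weight in [0, 1]; snow_map_z is the depth looked up from the coverage mask.
float get_snow_weight(float frag_z, float snow_map_z, float normal_z, float transition_dist = 0.1f) {
    // fragments at or above the snow map depth get snow; fragments more than transition_dist
    // below it are under cover and get none, with a soft blend in between
    float const coverage(std::clamp((frag_z - snow_map_z)/transition_dist + 1.0f, 0.0f, 1.0f));
    float const slope_factor(std::clamp(normal_z, 0.0f, 1.0f)); // 1.0 horizontal, 0.0 nearly vertical
    return coverage*slope_factor;
}
```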

The advantage of this approach is that I was able to produce a 256x256 pixel 2D snow coverage mask using only 256*256*3*3 = ~600K samples. Compare this to 1 billion samples required for smooth snow in the 3D volumetric case. Using spatial coherency of the rays to trace the 3x3 rays as a group, the 3DWorld ray tracing system can achieve around 35M rays per second on 4 CPU cores, which translates into a mere 17ms of runtime. That's quite a bit faster than the ~20 min. required for volumetric snow accumulation! This algorithm allows the snow coverage mask to be recomputed in realtime as the scene is modified and the weather changes. As a bonus, no precomputed snow coverage files need to be stored on disk and loaded.

Below is a screenshot showing the results of realtime dynamic snow accumulation. Note that only areas of the scene that have an unobstructed view of the sky accumulate snow (except for the grass, which doesn't use the snow mask).

Realtime procedural snow accumulation in the office building scene.

Here is a close-up view of the snow on objects of various shapes. Note that there is no snow under the trees and benches.

Close-up view of snow accumulating on various objects based on a generated sky coverage mask.

I also recorded a video showing how snow accumulates during a heavy snowfall.



Ice

I decided to implement the mixing of rain and snow to produce ice. The ice effect is somewhat different depending on which comes first. Rain on top of snow melts the snow, but snow on top of rain forms a layer of reflective ice on horizontal surfaces. Angled surfaces accumulate less water and eventually turn to pure snow. It looks like this:

Rain mixed with snow produces a layer of reflective ice on horizontal surfaces. Angled surfaces are more snowy.

Note that the water in the fountain has been frozen and covered with a layer of snow.
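
As a rough illustration of that ordering rule (with invented state names and thresholds, not 3DWorld's actual code), the state transitions might look like this:

```cpp
// Hypothetical sketch of the rain/snow ordering effect: rain on snow melts it,
// while snow on standing water freezes into reflective ice on horizontal surfaces.
enum class ground_state {DRY, WET, SNOW, ICE};

ground_state apply_precipitation(ground_state cur, bool raining, bool snowing, float surface_normal_z) {
    if (raining && cur == ground_state::SNOW) return ground_state::WET; // rain on top of snow melts it
    if (snowing && cur == ground_state::WET) {
        // snow on top of standing water forms ice, but only on near-horizontal surfaces;
        // angled surfaces hold less water and end up as plain snow
        return ((surface_normal_z > 0.95f) ? ground_state::ICE : ground_state::SNOW);
    }
    if (raining) return ground_state::WET;
    if (snowing) return ground_state::SNOW;
    return cur;
}
```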


Snow Footprints

The day after I published the previous blog post, someone asked me if players left footprints in the snow. What a great idea! It wasn't even that hard to add, though I went back and tweaked the parameters several times. I had to define the player stride, foot width, foot spacing, foot height/crush depth, etc. to get realistically shaped and placed footprints. The numbers don't quite work out for my spherical "smiley face" character model, but it's close enough. Both the user/player and AI controlled players leave footprints in deep snow. Here is a screenshot showing footprints in the snow after I walked around for a while. The gaps in the trail of footprints are the places where I jumped (jumping doesn't leave footprints). If you look closely, you can see that the footprints are actual deformations of the snow mesh rather than decals: the footprint on the very edge of the top of the wall leaves a small gap in the shadow cast on the snow below.

Footprints created using mesh deformation when the player and AI characters walk in the snow.
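
For illustration, here is a hedged sketch of how footprints could be stamped along the path between two player positions, alternating feet and offsetting each print sideways by half the foot spacing. All names and parameters are invented; the real code also deforms the snow mesh downward by the crush depth at each footprint.

```cpp
// Hypothetical footprint placement along the player's path.
#include <cmath>
#include <vector>

struct footprint {float x, y, angle; bool left_foot;};

// Generates footprints between two player positions, alternating left/right feet.
void add_footprints(float x1, float y1, float x2, float y2, float stride, float foot_spacing,
                    bool &next_is_left, std::vector<footprint> &out)
{
    float const dx(x2 - x1), dy(y2 - y1), dist(std::sqrt(dx*dx + dy*dy));
    if (dist < stride) return; // not enough movement for a new step
    float const dirx(dx/dist), diry(dy/dist), angle(std::atan2(dy, dx));
    unsigned const num_steps(unsigned(dist/stride));
    for (unsigned i = 1; i <= num_steps; ++i) {
        float const t(i*stride);
        float const side(next_is_left ? -0.5f*foot_spacing : 0.5f*foot_spacing); // offset from the path
        out.push_back({x1 + t*dirx - side*diry, y1 + t*diry + side*dirx, angle, next_is_left});
        next_is_left = !next_is_left; // alternate feet
    }
}
```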

I made a Fraps video of a walk in the snow, complete with footstep sounds. The footsteps are a bit too fast, so they sound more like I'm running. Note that snow and concrete have different footstep sounds. I'll probably add a larger variety of footsteps and other sounds in the future.




That's all for now. I wonder what other interesting weather effects I can add?