Wednesday, February 1, 2017

San Miguel Rain and Snow

This post continues my San Miguel scene experiments using techniques from previous posts on rain and snow. I decided I wanted to see how this scene looked with wet and snowy surfaces and how well the environment coverage mask worked on a highly detailed, complex model.

Here are two screenshots showing a rainy San Miguel. The wet look on the stone ground, tables, and chairs is achieved by decreasing the material diffuse lighting and increasing both specular intensity and specular exponent. This is a pretty simple effect, but works well in practice, especially on the ground. Wetness is higher for horizontal surfaces, and surfaces that have an unoccluded view of the sky above. These are the surfaces that naturally receive and accumulate the most water.
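The wetness adjustment described above can be sketched as a simple per-surface factor. This is an illustrative reconstruction, not 3DWorld's actual code; the function names and scale constants are placeholders:

```python
# Sketch of the wet-surface effect: wetness scales with how "up-facing" a
# surface is and how much open sky it can see; wet materials get darker
# diffuse lighting and a stronger, tighter specular highlight.

def wetness(normal_z: float, sky_visibility: float, rain_amount: float) -> float:
    """normal_z: z component of the unit surface normal (1.0 = horizontal).
    sky_visibility: 0..1 fraction of unoccluded sky above this point."""
    up_facing = max(normal_z, 0.0)  # vertical walls collect little water
    return rain_amount * up_facing * sky_visibility

def apply_wetness(diffuse, spec_mag, spec_exp, wet):
    """Blend material terms toward a wet look (constants are illustrative)."""
    diffuse = [c * (1.0 - 0.5 * wet) for c in diffuse]  # darken diffuse
    spec_mag = spec_mag + (1.0 - spec_mag) * wet        # boost specular intensity
    spec_exp = spec_exp * (1.0 + 7.0 * wet)             # tighter highlight
    return diffuse, spec_mag, spec_exp
```

A fully wet horizontal stone tile would get half its diffuse contribution, full specular intensity, and an 8x sharper specular exponent under these placeholder constants.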

The raindrops themselves are drawn as lines that fall quickly in a mostly vertical direction, angled slightly by the wind. They look much better in a dynamic scene than they do in the static screenshots shown here. Each drop is visible for only a few frames.

Rain and wet ground. Raindrops are drawn as lines and look much better when moving.

Rain and wet ground + objects from another angle.

I also enabled snow coverage and let the simulation run for a few minutes. Snow is accumulated over time and is drawn as a white layer on top of the normal textured scene geometry as a postprocessing step in the fragment shader. Snow thickness/density is a function of polygon normal and is highest for horizontal surfaces that face the sky above. Since the camera is above the scene in this screenshot, most of the visible surfaces such as the tree leaves are pointed up and have a high snow density.
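The snow shading described above boils down to a density term driven by the surface normal and accumulation time, blended over the textured color in the fragment shader. A minimal sketch in CPU-side pseudocode form (names and constants are my own, not 3DWorld's):

```python
def snow_density(normal_z: float, accum_time: float, max_time: float = 600.0) -> float:
    """Snow intensity in [0,1]: highest for horizontal, sky-facing surfaces,
    ramping up as simulated snowfall time accumulates."""
    up_facing = max(normal_z, 0.0)           # downward-facing surfaces get none
    coverage = min(accum_time / max_time, 1.0)
    return up_facing * coverage

def shade_with_snow(tex_color, density):
    """Blend the white snow layer over the normal textured scene color."""
    snow_white = (0.9, 0.9, 1.0)  # slightly blue-tinted white
    return tuple(t * (1.0 - density) + s * density
                 for t, s in zip(tex_color, snow_white))
```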

Snowy scene from above, after several minutes game time of accumulated snowfall.

I enabled the snow coverage mask for the next two screenshots. This works by precomputing a depth-based coverage mask for the scene that stores the vertical distance from the sky plane for each pixel. The fragment shader accesses this texture to determine whether the current fragment is at or below the recorded occluder height. If the fragment is below this height, it is occluded from above by other scene geometry and receives no snow. The texture is blurred using multiple projection angles to simulate random deviation from the typical vertical snowfall. This smooths the edges of the snow coverage mask in areas of high-frequency geometry, such as under the trees. However, if snow is left to fall over an extended period of time, it saturates the coverage layer and produces sharper boundaries (as seen below).
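The per-fragment coverage test amounts to comparing the fragment's height against the stored occluder height, with a small feathering band standing in for the multi-angle projection blur. This is an illustrative sketch; the names and the softness constant are hypothetical:

```python
# Sketch of the snow coverage mask lookup: the precomputed mask stores, per
# texel, the height of the topmost occluder below the sky plane; a fragment
# below that height is shadowed from snowfall.

def snow_coverage(frag_height: float, occluder_height: float,
                  blur_softness: float = 0.05) -> float:
    """Return 0..1 snow coverage for a fragment. A fragment at or above the
    recorded occluder height receives full snow; blur_softness feathers the
    boundary to mimic the multi-angle projection blur."""
    delta = frag_height - occluder_height
    if delta >= 0.0:
        return 1.0  # topmost surface: fully exposed to snowfall
    # fade out over a small band just below the occluder height
    return max(0.0, 1.0 + delta / blur_softness)
```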

The first image was created using a low resolution 512x512 coverage mask, and the second image was created using a much higher resolution 2048x2048 coverage mask. The second image contains higher frequency content for a more accurate representation of snow coverage, though it takes much longer to compute. Preprocessing time is still under a minute on 4 cores. I don't think it looks all that much better though.

Snowy scene using a low resolution 512x512 snow coverage mask.

Snowy scene with a high resolution 2048x2048 snow coverage mask. The snow "shadow" from the trees can be seen.

I'll continue to experiment with snow effects. It would be interesting to see how well true volumetric snow accumulation works on this scene. This will likely take several hours of CPU time, and generate a large amount of voxel coverage data.

[Update: It in fact only takes 7 minutes to create the snow volume for 1 billion simulated snowflakes. I underestimated how well the bounding volume hierarchy scales to scenes with many polygons. However, the snow coverage is very noisy due to the large number of small leaves and other objects. I think the white color non-volumetric snow actually looks better. If you're interested, here is a screenshot.]

Volumetric snow simulated using 1 billion snowflakes.

Monday, January 9, 2017

San Miguel Scene

This post doesn't contain much technical content, because between moving to a new house and the holidays I didn't have much time to write code. Instead, I'll post some images of the San Miguel scene modeled by Guillermo M. Leal Llaguno of Evolución Visual. I downloaded the Wavefront object file model and textures from the McGuire Computer Graphics Archive.

It took me a few days at an hour or so per day to get the scene to render correctly in 3DWorld. I had to add several minor new features, including a user-specified environment map cube, per-model normal map options, texture invert, custom sunlight intensity, custom mipmap creation for the leaves, and a per-material metalness parameter. Some of the features worked well, but others need more work. In addition, there are some errors in this version of the model, including missing triangles, missing textures, and incorrect texture coordinates. However, these problems don't affect the overall scene quality too much, as long as you're not looking at the details.

I computed indirect lighting for both the sky and sun for this scene using a preprocessing ray tracing operation that took around 10 minutes on 4 CPU cores. Once the preprocessing is finished, the lighting can be used for all viewpoints and various weather conditions. There is no fake ambient term here; all indirect lighting is computed over a 3D grid. This isn't a closed model: many of the polygons are two-sided and don't form a watertight volume, which leads to light leaking through the walls in some places. I don't have a good solution for this problem yet.

I used a single 8192x8192 shadow map and a 6 face 1024x1024 cube map centered in the courtyard for environment reflections. 3DWorld computes the shadow map and cube map when the scene is loaded, and updates them dynamically when objects move or any lighting/environment parameters change. This update requires the scene to be drawn multiple times, but there aren't any dynamic objects enabled by default so this isn't a problem.

Here are some screenshots showing the San Miguel scene from various locations and directions. These are some of the most photorealistic images I've gotten from 3DWorld so far. It can render this scene at an average of almost 200 frames per second at 1920x1080 on my Geforce GTX 1070.

View of the San Miguel scene from the corner showing shadows and indirect lighting.

View from the trees looking down. The indirect lighting appears to soften the shadows.

San Miguel scene viewed from near the fountain with a low value of sun and sky indirect lighting but strong direct sunlight.

View from the upper balcony showing closeup of plants. The sky is cloudy, which makes indirect lighting stand out more.



Here are some screenshots showing cube map environment reflections on glass and metallic objects. The reflection and transmission model is physically based and uses the true Fresnel equations. Reflections don't use per-object cube map centers yet, so they're not very accurate, but they still look pretty good. There are some alpha blending artifacts due to not sorting the faces of individual objects from back to front. The materials themselves are sorted correctly as the viewpoint changes. Some of the objects are misaligned with each other, such as the salt shaker. This appears to be a problem in the model file and not a 3DWorld bug.
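For reference, the exact unpolarized Fresnel reflectance for a dielectric interface (the standard textbook formula, not code taken from 3DWorld) looks like this; the transmitted fraction is simply one minus the reflected fraction:

```python
import math

def fresnel_reflectance(cos_i: float, n1: float, n2: float) -> float:
    """Exact unpolarized Fresnel reflectance at a dielectric interface.
    cos_i: cosine of the incident angle; n1, n2: indices of refraction
    on the incident and transmitted sides."""
    # Snell's law gives the transmitted angle; sin^2 > 1 means TIR
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)
    if sin_t2 > 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    # s- and p-polarized reflectance, then average for unpolarized light
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)
```

At normal incidence from air into glass (n = 1.5) this gives the familiar 4% reflectance, which is why glasses on the table mostly transmit the scene behind them except at grazing angles.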

Reflections in glasses and silverware. For some reason, the salt shaker's salt, glass case, and metal lid are misaligned from each other.

More environment reflections of objects on the table.



Cube map reflection in the window. The reflection is a bit undersampled and distorted because it's far from the single cube map center point.

Cube map reflections in the silverware on a table. Can you spot the sun?

Here is a short video where I fly through the scene and examine some objects. At some point I'll have to add a system to improve the smoothness of camera movements when recording videos.


Here is another video with a slower camera speed and my attempt at smoother movement.



This work has been pretty fun and interesting. It's quite different from writing graphics and physics algorithms. I can't wait to find more large and interesting models like this one, and the museum scene from a previous post, to show in 3DWorld. If I find enough of them, I can combine them into some sort of town and maybe build a game into exploring the environments.

Tuesday, November 15, 2016

Tiled Terrain Update and Screenshots

The past few weeks I haven't been working on 3DWorld as much as usual, partly because I'm preparing to move to a new house and have been distracted with other things. The work I've been doing with 3DWorld is mostly related to minor improvements to visuals and optimizations for generating and rendering large outdoor scenes. I've made many small improvements to tiled terrain mode. Each individual change has only a minor impact on the scene, but together they do improve visual quality considerably. I'll discuss some of these changes and show new screenshots in this blog post.

Terrain Normal Maps

I've modified the terrain texturing system in 3DWorld to support separate detail normal maps for each terrain texture layer (sand, dirt, grass, rock, and snow). I found suitable normal map textures online for all layers except for grass, for which I couldn't find a texture that I liked. The grass texture layer is drawn with a fine field of triangle grass blades, which cover up the mesh so that the normal map isn't visible anyway. This adds many more texture lookups inside the terrain fragment shader, but my GeForce GTX 1070 shows almost no change in frame rate. I'm sure this feature hurts frame rates on older/slower/cheaper cards though.

Here are some screenshots of steep normal mapped hills showing all of the texture layers. I have disabled trees and water so that the terrain is fully visible. Grass and plants are left enabled.

A different normal map is assigned to each of the five ground texture layers that make up this steep hillside.

Terrain normal maps viewed from a different angle.

Normal maps applied to rocky mountains, viewed from above from a distance. The terrain appears to have very high detail.

These new normal maps add a good deal of complexity and realism to the scene with little cost. They make the mesh appear to be highly detailed, almost for free.

Leafy Plants

After reviewing several other terrain generation and rendering systems written by other people, I decided that 3DWorld's terrain needs more detailed ground cover. The area on the ground under the trees was too bare, even with all of the grass. I had this as a todo list item for maybe a year before I finally got around to adding another plant type. Here they are. I call them "leafy plants".

View of a grassy beach with some new procedurally generated leafy plants.

There are four different types of leafy plants that appear at different elevations, including one type that is underwater. They're created by applying tree leaf textures to spherical sections that are scaled to an ellipsoid shape. This gives the leaves a smooth curve, rather than having them be boring flat polygons. However, they are also more expensive to create and draw than other types of plants that use single polygons (quads) for leaves. I had to use a sparse distribution of leafy plants to keep performance under control. These new plants seem to blend in well with the other plants, grass, and flowers. They're just one more component to the 3DWorld ground cover, meant to fill the gaps between the trees.

Improved Pine Tree Branch Randomization

I made another pass at procedural generation of pine tree branch sizes, positions, and orientations. This new approach uses more random numbers to produce a less symmetric branch coverage, which removes some of the repeating patterns found in my previous pine tree forests. Each branch is rotated a small random amount around the vertical (z) axis of the tree, and shifted a random distance up or down in z as well. This makes the trees look more natural and realistic, especially when shadows are enabled. Keep in mind that every tree in this scene is unique. Instancing is only used to save memory when there are more than ~100K trees.
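The branch jitter described above can be sketched as a per-tree seeded random perturbation of a regular branch layout. This is an illustrative guess at a minimal implementation; the jitter ranges are placeholders, not 3DWorld's actual values:

```python
import math
import random

def randomize_branches(num_branches: int, seed: int):
    """Start from regularly spaced branches, then give each a small random
    rotation about the tree's vertical (z) axis and a small random vertical
    shift. A per-tree seed keeps every tree unique but repeatable."""
    rng = random.Random(seed)
    branches = []
    for i in range(num_branches):
        base_angle = 2.0 * math.pi * i / num_branches  # regular spacing
        angle = base_angle + rng.uniform(-0.2, 0.2)    # jitter around z axis
        z_shift = rng.uniform(-0.1, 0.1)               # jitter up/down in z
        branches.append((angle, z_shift))
    return branches
```

Seeding per tree is what makes instancing optional: the same seed always regenerates the same tree, so unique trees can be rebuilt on demand rather than stored.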

Pine tree forest on the mountain, with high detail shadows and more randomized tree branch placement.

Note that in this screenshot, and the images from the following sections, I've increased the high resolution shadow map distance. This improves shadow map quality, at the expense of increased GPU memory usage and slightly longer tile generation time. My current GPU has 8GB of memory, so spending a few hundred MB on improved shadows seems like a good decision. The shadow map smoothly transitions to a lower resolution baked mesh shadow at about half the max view distance. It's possible to push the shadow maps out to the full view distance, at the loss of a smooth frame rate.

Large Numbers of Trees

I decided to test the limits of how many trees I could get onscreen at once. I optimized some of the tree placement and pine tree leaf/branch generation code so that it scaled better to many trees. At this point the number of trees is probably limited by the time taken to send the vertex data from the CPU to the GPU. I'm not sure how to profile this, or optimize it further.

Here is a scene showing pine trees on a terrain with tall mountains that extends into the distance.

Tall mountains covered with pine trees. The terrain height has been scaled and zoomed out to produce sharp peaks.

This is a scene showing tens of thousands of deciduous trees drawn using 50 unique generated tree models. 3DWorld can create hundreds of unique tree models with an 8GB graphics card. However, this increases initial scene load time, and I prefer to keep loading under a few seconds. [Update: I can see duplicate trees in this screenshot, so I increased the number of unique trees to 100 and tweaked the type randomization code to remove them.]

Distant, foggy mountains with trees.

Here I have decreased tree size and increased tree density in the config text file. The active area of the scene contains 4-5M pine trees, and around 1M trees are visible in these next two screenshots. This scene renders at a consistent 78 FPS, mostly independent of which way the camera is facing.

Maximum tree density: Pine trees cover every dry patch of land. This scene contains at least 1M trees and renders at 78 FPS. The white spot in the distance is a patch of bare snow that is too steep for trees.

A million pine trees near sunset. Note the complex cloud lighting and the dark tree shadows.

Fall Leaves

Yes, it's that time of year again. The trees outside my house are turning bright colors (mostly red) and covering the sidewalk with leaves. Once again I'm showing off screenshots of colorful fall trees in 3DWorld. I have tree leaf color parameters tied into the editing UI this time, which allows me to change colors interactively using the arrow keys on the keyboard. Here are two screenshots showing fall deciduous trees in a variety of colors.

Trees with colorful fall leaves spread across the rocky ground.

More fall leaves with grass, flowers, and plants.

I'll continue to work on trees in tiled terrain mode. Maybe I can add winter trees that have no leaves and are covered with snow in time for Christmas. Of course, we don't get snow here in San Jose, CA.

Thursday, October 13, 2016

Fun with Stacking Spheres and Cubes

This is more of a fun post compared to some of my previous technical posts. I'll be sure to add a lot of YouTube videos below and a few static images at the end. This is my first interactive scene editor feature, but it's more for fun and amusement than real level editing. I don't imagine I could create any useful level by placing various one meter cubes and spheres one at a time in front of the player's camera.

The following videos show my progress on dynamic object placement. I've made huge improvements in my workflow and "artistic quality" over the past week. There are many additional improvements to be done and features to add. I believe that eventually this system can be made into a full level editor, but it has a long way to go before it's useful for this purpose.

None of these videos were recorded with sound. I still only have two options for video recording: Fraps with sound, limited to 30 seconds; and ffmpeg with unlimited length but no sound. I'm sure I'll figure it out eventually.

The first video shows a stack of 1 meter cubes of a variety of different materials that I slowly placed one on top of each other. It was more fun to push the stack down than it was to build it. Note that I didn't have dynamic shadows working for the cubes at this point. I later added the ability to destroy cubes with weapons instead of only being able to push them around and knock them down.


After a while I grew tired of placing blocks one-by-one, so I made the placement delay another in-game user-editable parameter. I found it was much easier to create large piles of cubes and spheres with the delay set to 0. In reality that's one per frame, or a 16 ms delay at a framerate of 60 FPS.

This next video shows how I can create large stacks of cubes. I still hadn't added shadows. The cubes are created so that they stack but don't intersect each other. If I have collision detection turned on, I'll quickly wall myself into a corner. But if I disable collision detection, I can just insert the cubes in one place and they'll form a stack as each new cube is inserted at the bottom and pushes all the other cubes up... or maybe the new cube pops up to the top of the stack. I'm really not sure - the code is pretty complex, and it all happens within a single frame. At least the cubes don't intersect with anything, which is the most important property. Oh, and if I made the cubes small enough, I could probably walk up them like stairs. I really should try that and record a video if it works.


I had the material editing menu up for much of the video so that I could easily change the materials. It would be nice to hide the menu somehow in the future while still having a way to change the textures. The material itself can be selected with a hotkey from a user-created list of predefined materials that is read with the scene data, but it doesn't have the variety of textures I wanted to use for this video.

There is a small yellow number printed near the center of the screen that counts the number of objects placed. Here I've placed 1533 cubes and spheres. This was added to track how many objects can be created within reasonable performance constraints.

I finally enabled shadows for the cubes and spheres. I spent some time creating high stacks and then knocked them down in this next video. It was pretty fun! All of the weapons push the cubes around, but the rocket launcher has the most force per unit time. The seek-and-destroy has about twice the force, but fires too slowly. If I make the cubes small enough, around 0.4m on a side, I can push them out of the stack with a rocket or two. All the cubes above the one that was pushed out will fall in quick succession. The system is very stable, but this many cubes hurts the frame rate at this stage of development.


I tried to stack spheres next. At first it didn't work because the collision detection for this case wasn't fully implemented, and the spheres just sat there overlapping each other, forming a giant blob. My attempted fix had a surprising and unexpected effect, shown in the video below. Groups of spheres stuck together in a quivering unstable ball, then floated off on a random path toward the sky. Sometimes the sphere clusters got stuck on static level geometry and pulsated there. What's going on here? I recorded the video, but I hadn't enabled a lower screen resolution and the video compression couldn't keep up. The frame rate dropped, leading to a laggy recording where some parts ran at up to 2x realtime. Sorry. I've reduced the sphere drawing time by 5-10x since recording this, so it shouldn't be as much of a problem in the future.


What was causing this bug? I had the collision response vector sign backwards, and colliding spheres were being pulled together like magnets rather than pushed apart. They would overshoot and separate slightly, only to be pulled back together in the other direction. Some instability (floating-point error?) caused the clusters of attracted spheres to drift off in random directions with a random walk. Some sank into the floor/ground, some floated off into the sky, and some got stuck in the static level geometry such as the lamp posts and building walls. If I had come across this bug without just having rewritten the sphere intersection code, I would have never figured it out. The effect was pretty funny though. I might even add an option to enable it from within the material editor. Magnetic materials? Negative gravity materials? I'll have a hard time justifying this as anything resembling real physics!
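The sign bug is easy to reproduce in a toy version of the overlap resolver (illustrative code, not 3DWorld's): the response should push overlapping sphere centers apart along the separation vector, and negating that vector pulls them together like magnets.

```python
def resolve_overlap(p1, p2, r1, r2, sign=+1.0):
    """Move each sphere half the penetration depth along the separation axis.
    sign=+1.0 is the correct response; sign=-1.0 reproduces the bug."""
    dx, dy, dz = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    pen = (r1 + r2) - dist  # penetration depth; positive means overlap
    if pen <= 0.0 or dist == 0.0:
        return p1, p2  # not overlapping (or coincident centers; skip)
    nx, ny, nz = dx / dist, dy / dist, dz / dist  # unit separation axis
    h = sign * 0.5 * pen
    p1 = (p1[0] - nx * h, p1[1] - ny * h, p1[2] - nz * h)
    p2 = (p2[0] + nx * h, p2[1] + ny * h, p2[2] + nz * h)
    return p1, p2
```

With the wrong sign, two overlapping unit spheres end up with coincident centers instead of being separated to touching distance, which is exactly the "quivering blob" behavior in the video.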

I later got spheres working without much trouble. They stick to each other like masses of fish eggs. This actually reminds me of the little colored sticky foam balls that my daughter used for building sculptures and got spread all over the house. The user can disable player collision detection and float around creating sculptures of 1 meter spheres that fill the level. I'm not sure what use this is in real gameplay, but you can get a few thousand spheres scattered about before the framerate starts to drop.


Maybe it's unrealistic to stick brick spheres together like this. What should really happen when placing spheres this way in a sane world? I guess they would all fall down and roll around until they covered the ground - but that's no fun! Maybe I can make another option to control this later.

The final video of this post shows the exploding cube effect. I can mark objects as exploding in the material editor, and then fill the scene with them. Any hit from a weapon will detonate the object and also destroy the surrounding objects within the blast radius. One explosion takes out several cubes at a time, which allows me to destroy them all much more quickly. This clears space for placing even more stacks. I call it the "undo" feature. If you need more precision, there's also a "shatterable" material mode that will only destroy the object that was directly hit with weapon fire.


I spent some time stacking cubes and pushing them down, but it's not too much different from what I've shown in the previous videos. Here are two images of my "artwork". The first image shows a few thousand stacked cubes of various sizes and materials. This was earlier in the development when the framerate was too low to add more, and shadows still weren't enabled.

Thousands of stacked cubes of various materials and sizes litter the courtyard. The office workers are hiding in fear.

Imagine having this stack of cubes outside a real office building! Would anyone want to walk anywhere near it? In a real environment, the wind would probably push these stacks over, and one falling stack would bring all of the others down like dominoes. This stuff is fun to build due to the pure absurdity of the whole system. But keep in mind, this is just a prototype of the level editor that will eventually be used to construct structures such as the office building itself. It's in no way meant to represent physically correct behavior or real gameplay. I'm not creating another Minecraft clone here.

This next image shows a scene that took quite a while to create, even with one cube placed per frame. There are over 10,000 cubes here, in stacks that reach hundreds of feet into the sky. Some of them go above the cloud layer and beyond the far clipping plane so that the boxes at the top aren't even visible to the player. The stacks take up to 10s to fall if you remove a block near the bottom and watch the rest of them drop one by one. However, they do still completely fall. Shadows and indirect lighting from the lamps in the courtyard are enabled.

More than 10,000 cubes stack to the sky. This artwork took me a while to create and was a shame to lose when I realized I hadn't completed the "Save Level" feature yet.

It was a shame to create this wonderful bit of cube insanity and then throw it all away when I quit 3DWorld. See, I didn't implement a "Save Level" feature until after creating this scene. On the plus side, this problem encouraged me to finish the save feature in a hurry. Now the save system is complete, and I can save, load, and modify all of my cube and sphere creations as much as I want.

This post is short on technical details, so I should probably talk about why the performance was poor in the beginning, how I improved it, and how the physics works.

There are two types of user placeable objects in 3DWorld: dynamic objects and movable static objects. There are various other object types, but they can't be created by the player. The previous post on sphere and cube materials was all about dynamic objects. These include weapons, ammo, projectile effects, smiley body parts, etc. Anything that has free movement physics is a dynamic object. These objects support Newtonian physics with gravity, elastic and inelastic collisions, momentum, friction, air resistance, buoyancy, etc. Basically, all of the physical parameters that are normally modeled in games and physics engines, plus a few obscure ones that have little impact on the physics but were fun or challenging (= fun) to add.

There are three problems associated with building using dynamic objects:
  1. The physics really only works correctly for spheres (as implemented in 3DWorld).
  2. This system is too expensive for simulating thousands of interacting objects.
  3. There are problems with stability when creating stacks of objects.
The solution to these problems is to use a different physics system that solves for static constraints rather than realtime dynamics. I've extended my framework for movable (player pushable) static objects to work with user-placed material cubes and spheres. It works better with cubes since they have flat surfaces and the constraints are simpler, but it's acceptable for spheres. I could in theory add support for other 3DWorld shapes: cylinder, cone, capsule, torus, polygon, extruded polygon. However, that's a lot more work, and these other more complex shapes have more parameters that need to be set by the user. Cubes and spheres only have one parameter: size/radius.

When I say static objects, I really mean persistent objects that are created once and last forever, and will remain in place if no dynamic forces act on them. Static movable objects have a more limited set of physical parameters and a simpler physics model. There is no momentum, torque, friction, or elasticity. All collision responses other than gravity are resolved statically within a single frame. They can be pushed around by the player, stacked, and dropped - but that's about it. As I've shown in previous posts, buoyancy in water works, as do some other minor effects. Since there is no torque or friction, and materials are infinitely hard/rigid, they can be placed in stable stacks of unlimited height. As long as there is an object below supporting the objects above, everything stays in place. Remove one object (cube) from the bottom, and the cubes above will fall one-by-one until the entire stack has fallen down to the next supporting block (or the ground). Since static cubes don't rotate when stacked, any contact point on the bottom of the cube will support it, even if it's just a tiny corner of another cube. At least this simplifies the math.

Static objects are faster to simulate than dynamic objects, but they're still not free. My 10K blocks scene was dropping the frame rate all the way down to 40 FPS. Almost all of the time (maybe 80%) was spent querying the scene bounding volume hierarchy (BVH) for adjacent objects that were either supporting or resting on the current query object. The BVH was built incrementally while the objects were added, so it may not be optimal, in particular for these dense stacks.

The important observation here is that most of the time, an object is not moving at all. In fact, the majority of the game frames observe no block motion. The blocks only move when dropped, pushed, or shot at by the player, and there's a limit to how many blocks a player can get moving at the same time. The fix is to track which objects are moving vs. "sleeping" and not simulate the sleeping objects every frame. I used an active counter to track how many frames it's been since each object was last updated/moved. If an object goes for 8 consecutive frames without moving, it's marked as sleeping and only checked for collision once every 16 frames. This cuts the simulation time down by a factor of 16 in mostly static scenes. To avoid large delays with falling objects, every active object wakes up all other objects within a certain distance of it when it moves. If the player fires a rocket at a cube, the collision/explosion will wake up the cube, which will wake up the cubes above and below it as it falls. The chain effect will incrementally wake up the entire stack (but just that one stack), and make all of the blocks fall in quick succession.
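The sleep/wake bookkeeping can be sketched as follows. The thresholds (8 frames to sleep, checks every 16 frames) come from the text; the class structure is my own guess at a minimal implementation, not 3DWorld's actual code:

```python
SLEEP_AFTER = 8     # frames without motion before an object goes to sleep
SLEEP_PERIOD = 16   # sleeping objects are only checked every this many frames

class PlacedObject:
    def __init__(self):
        self.frames_still = 0
        self.sleeping = False

    def on_frame(self, moved: bool, frame: int) -> bool:
        """Return True if this object should be simulated this frame."""
        if moved:
            self.frames_still = 0
            self.sleeping = False
            return True
        self.frames_still += 1
        if self.frames_still >= SLEEP_AFTER:
            self.sleeping = True
        # sleeping objects still get a periodic collision check
        return (not self.sleeping) or (frame % SLEEP_PERIOD == 0)

    def wake(self):
        """Called when a nearby object moves, or on an explosion/impact."""
        self.frames_still = 0
        self.sleeping = False
```

The wake-neighbors rule is what keeps falling stacks responsive: a rocket impact wakes one cube, each woken cube wakes those within range above and below it, and the chain incrementally reactivates just that stack while the rest of the scene stays asleep.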

This change improved the framerate from 40 FPS to 90 FPS. Good enough for now. I think the framerate is currently limited by actually drawing all of the cubes, or maybe by the occlusion culling. I should be able to create 15K+ cubes while still hitting a solid 60 FPS. Spheres are more expensive to draw, so I can only have about 2000 of them in the scene.

Monday, September 26, 2016

Physically Based Materials

I'm continuing to work on improving object reflections in 3DWorld. The past few weeks I've been trying to integrate reflective objects into the engine as physically-based materials. As a first step, I provided a way to create and throw spheres and cubes with user-defined material properties as dynamic objects in the scene. The properties are specified in a config text file in keyword/value format. There is also a simple UI for realtime editing of material parameters and creating new materials. The UI is an in-game text overlay with arrow key input, similar to the onscreen display you would find on a computer monitor. It's very simple but usable. I would like to use the mouse to select menu items, but I think it would interfere with the player's ability to play the game and interact with the world while the menu system is active.

The material parameters supported are:
  • Material Name - User-defined text string identifier
  • Texture - Name/filename of texture to use; "none" to disable texturing
  • Normal Map - Name/filename of normal map texture to use; "none" to disable normal mapping
  • Shadows - Flag to enable cube map shadows for point light spheres
  • Emissive - Flag to mark as having an emissive color (no lighting)
  • Reflective - Flag to mark surface as reflective (using an environment cube map)
  • Destroyability - Tag to mark as destroyable, shatterable, exploding, static, etc.
  • Metalness - Value in [0,1] to represent dielectric vs. metal
  • Hardness - Value in [0,1] to set hardness for elastic collision physics
  • Density - Value of material density, used to compute mass and buoyancy in water
  • Specular Magnitude - Magnitude of specular light reflection in [0,1]
  • Specular Shininess - Shininess of specular light reflection, converted to surface roughness
  • Alpha - Value in [0,1] to specify alpha value of partially transparent objects such as glass
  • Light Attenuation - Factor for computing transparency and scattering within the material
  • Index of Refraction - Value for controlling reflection and refraction in glass, plastic, etc.
  • Light Radius - Radius of light emission for light source spheres
  • Diffuse Color - {R,G,B} diffuse, emissive, or light source color value
  • Specular Color - {R,G,B} specular color value; (1,1,1) = white for non-metals
For example, the material "Gold" is specified as:
hardness 0.8
density 19.29
alpha 1.0
reflective 1
metalness 1.0
specular_mag 1.0
specular_exp 128.0
diffuse_color 0.0 0.0 0.0
specular_color 0.9 0.6 0.1
add_material Gold
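A keyword/value file like the "Gold" example above can be handled with a simple accumulate-and-commit parser. This is an illustrative sketch (Python, not 3DWorld's actual C++ parser), keyed off the add_material line that commits the accumulated properties under a name:

```python
# Sketch of a keyword/value material config parser in the style of the
# "Gold" example above. The field handling is illustrative; 3DWorld's
# actual parser and full keyword set may differ.

def _parse_value(tokens):
    """One numeric value, a color triple, or a raw string (e.g. a texture name)."""
    try:
        vals = [float(t) for t in tokens]
    except ValueError:
        return tokens[0] if len(tokens) == 1 else tuple(tokens)
    return vals[0] if len(vals) == 1 else tuple(vals)

def parse_materials(text):
    """Accumulate 'keyword value...' lines; 'add_material <name>' commits
    the accumulated properties as one named material."""
    materials, current = {}, {}
    for line in text.splitlines():
        tokens = line.split()
        if not tokens:
            continue  # skip blank lines
        if tokens[0] == 'add_material':
            materials[tokens[1]] = current  # commit under the given name
            current = {}                    # start the next material fresh
        else:
            current[tokens[0]] = _parse_value(tokens[1:])
    return materials

gold_cfg = """
hardness 0.8
density 19.29
alpha 1.0
reflective 1
metalness 1.0
specular_mag 1.0
specular_exp 128.0
diffuse_color 0.0 0.0 0.0
specular_color 0.9 0.6 0.1
add_material Gold
"""

gold = parse_materials(gold_cfg)['Gold']
# gold['density'] == 19.29, gold['specular_color'] == (0.9, 0.6, 0.1)
```

The commit-on-keyword design means new materials can be appended to the file without any section delimiters, matching the flat format shown above.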


I recorded several videos showing how 3DWorld's dynamic, throwable spheres and cubes work, including realtime editing of material parameters. Videos are really needed to show these features; it's just too hard to tell what's going on in static images. I can only cover a small fraction of the materials, parameters, and features available in these short videos.

Sorry, none of these videos were recorded with sound. The only sounds I have enabled in these tests are for throwing and bouncing anyway. These videos are too long to record with the free version of Fraps (which has sound). 3DWorld's FFmpeg video recording wrapper can record videos of unlimited length and compress them in realtime, but I haven't figured out how to capture audio on Windows yet.

Here is a video of me throwing spheres of various materials around in the scene and editing the material parameters in realtime. Everything in the scene is reflected in mirror surfaces, including the placeholder smiley player model.


This is a video showing dynamic sphere point lights and cube mapped shadows in a dark room. Lighting, shadows, reflections, and various other parameters can be assigned to materials and edited in-game.


I later decided to add support for dynamic material cubes as well as spheres. Here is a video of me throwing some cubes around and changing their textures and normal maps. Cubes and spheres use partially elastic collision models and will propagate collision forces around when piled up on top of or against each other. They can be stacked, pushed around the scene, and the player can stand on them, though there are some issues with simulation/physics stability.


Density is one of the material parameters that can be modified in realtime through the material editor. The material's density affects the amount of resistance to pushing and its buoyancy in water. In this video, I edit the density of the brick cubes, which affects how high they float in the water or how quickly they sink. The player can stand on and stack objects on the cubes as well, and everything works correctly. Spheres can also be used.
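As a sketch of the underlying physics (not 3DWorld's actual code): by Archimedes' principle, a floating object displaces its own weight in water, so its submerged volume fraction equals the ratio of its density to the water's density, and anything denser than water sinks.

```python
WATER_DENSITY = 1.0  # relative density scale with water = 1.0 (assumed)

def submerged_fraction(density):
    """Volume fraction of an object below the waterline.
    A floating body displaces its own weight in water, so the fraction
    is density / WATER_DENSITY, clamped to 1.0 for objects that sink."""
    return min(density / WATER_DENSITY, 1.0)

def sinks(density):
    """An object denser than water cannot displace its own weight."""
    return density > WATER_DENSITY

# A wooden cube (density 0.6) floats 60% submerged; a gold sphere
# (density 19.29, from the material example earlier) sinks outright.
```

This is why editing the density value in the material editor directly changes how high the brick cubes ride in the water.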


This is a video of my incomplete puzzle/platformer scene. It uses a variety of different effects and materials. The marble floor and some of the glass surfaces are plane reflectors. I haven't finished all of the traps and obstacles, and the various sections aren't even fully connected. I had to use the "flight mode" cheat to get to the second section. I'll post more screenshots and videos of this map later when it nears completion.


I'm continuing to work on dynamic objects and materials. I would like to add support for the other shape types supported by 3DWorld: polygon, extruded polygon, cylinder, cone, capsule, and torus. I'm also considering adding more physics properties to the editable materials list, for example parameters for friction, air resistance, deformation, elasticity, player damage, etc. Regular dynamic 3DWorld objects such as weapon projectiles and pickup items use fixed materials, which already have all of these properties. Finally, I would like to add a way to make these objects into efficient static scene objects so that this mode acts like an in-game scene/map editor. I'm curious to see what the performance is when there are thousands of placed objects of dozens of different materials in the scene.

Wednesday, September 14, 2016

Reflections and Roughness

This post continues my work on cube map reflections from where I left off in an earlier post on this topic. I had it working pretty well at the time. However, I was never able to get 100 reflective objects in the scene in realtime because I didn't have enough GPU memory on my 2GB card. I now have a GeForce GTX 1070 with 8GB of video memory, which should allow me to add as many as 300 reflective objects.

Another problem that I had with the earlier reflection framework was the lack of surface roughness support. Every object was a perfect mirror reflector. I did some experiments with mipmap biasing to try to get a proper rough surface (such as brushed metal), but I never got it working at a reasonable performance and quality point. I think I've finally solved this one, as I'll explain below.

3DWorld uses a Phong specular exponent (shininess factor) lighting model because of its simplicity. Physically based rendering incorporates more complex and accurate lighting models, which often include a factor for surface roughness. I'm converting shininess to surface roughness by mapping the specular exponent to a texture filter/mipmap level, which determines which power-of-two sampling window to use to compute each blurred output texel. I use an equation I found online for the conversion:
filter_level = log2(texture_size*sqrt(3)) - 0.5*log2(shininess + 1.0)
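The conversion above can be written directly in code (Python for illustration). Clamping the result to a non-negative integer level is my assumption, based on the integer filter levels used later in the post:

```python
import math

def shininess_to_filter_level(shininess, texture_size):
    """Convert a Phong specular exponent (shininess) to a blur filter level.
    Level 0 is a perfect mirror; higher levels blur the reflection more.
    Rounding/clamping to a non-negative integer is an assumption based on
    the integer filter levels used in the post."""
    level = math.log2(texture_size * math.sqrt(3.0)) - 0.5 * math.log2(shininess + 1.0)
    return max(0, int(round(level)))

# Very high shininess maps to level 0 (mirror); lower shininess blurs more.
```

Note how the texture size term sets the maximum possible level, and each doubling of (shininess + 1) removes half a level of blur.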

The problem with using lower mipmap levels to perform the down-sampling/blurring of the reflection texture is the poor quality of the filtering. Mipmaps use a recursive 2x2 pixel box filter, which produces blocky artifacts in the reflection as seen in the following screenshot. Here the filter_level is equal to 5, which means that each pixel is an average of 2^5 x 2^5 = 32x32 source texels. Click on the image to zoom in, and look closely at the reflection of the smiley in the closest sphere.

Rough reflection using mipmap level 5 (32x32 pixel box filter) with blocky artifacts.

The reflection would look much better with a higher order filter, such as a bi-cubic filter. Unfortunately, GPU texture hardware only supports linear filtering; there is no hardware support for higher order filters. Bi-cubic texture filtering can be implemented in shaders, but it is complex and would significantly increase rendering time.

An alternative approach is to do the filtering directly in the fragment shader when rendering the reflective surface, by performing many texture samples within a window. This is more of a brute force approach. Each sample is offset to access a square area around the target pixel. I use an NxN tap Gaussian weighted blur filter, where:
N = 2^(filter_level+1) - 1
A non-blurred perfect mirror reflection with filter_level=0 has a single sample, computed as N = 2^(0+1) - 1 = 1. [Technically, a single filter sample still linearly interpolates between 4 adjacent texels using the hardware interpolation unit.] A filter_level=5 Gaussian kernel has N = 2^(5+1) - 1 = 63 samples in each dimension, for 3969 samples total. That's a lot of texture samples! It really kills performance, dropping the framerate from 220 FPS to only 19 FPS, as shown in the screenshot below. Note the framerate counter in the lower left corner of the image. But the results look great!

Rough reflection using a 63x63 shader filter kernel taking 3969 texture samples and running at only 19 FPS.
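The NxN Gaussian-weighted kernel can be sketched as follows (Python for illustration). The sigma chosen relative to the kernel half-width is my assumption, since the post doesn't specify that tuning value:

```python
import math

def gaussian_kernel(filter_level, sigmas_per_half_width=3.0):
    """Build the NxN normalized Gaussian weight table for a filter level,
    where N = 2^(filter_level + 1) - 1. The kernel half-width spans
    sigmas_per_half_width standard deviations (an assumed tuning value)."""
    n = 2 ** (filter_level + 1) - 1
    half = n // 2
    sigma = max(half / sigmas_per_half_width, 1e-6)
    w1d = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-half, half + 1)]
    norm = sum(w1d) ** 2  # the 2D kernel is the outer product of the 1D row
    return [[wx * wy / norm for wx in w1d] for wy in w1d]

k5 = gaussian_kernel(5)
# len(k5) == 63, so the kernel takes 63*63 = 3969 weighted samples
```

In a shader the weights would typically be evaluated (or looked up) per tap rather than stored as a table, but the arithmetic is the same.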

The takeaway is that mipmaps are fast but produce poor visual results, and shader texture filtering is slow but produces good visual results. So what do we do? I chose to combine the two approaches: select a middle mipmap level, and filter it using a small kernel. This has a fraction of the texture lookups/runtime cost, but produces results that are almost as high quality as the full filtering approach. For a filter_level of 5, I split this into a mipmap_filter_level of 2 and a shader_filter_level of 3. The mipmap filtering is applied first with a 2^2 x 2^2 = 4x4 texel mipmap. Then the shader filtering is applied with a kernel size of N = 2^(3+1) - 1 = 15. The total number of texture samples is 15x15 = 225, which is nearly 18x fewer texture accesses. This gets the frame rate back up to around 220 FPS.
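The sample-count arithmetic for this split works out as follows (a quick sanity check of the numbers above):

```python
def total_samples(shader_level):
    """Texture taps for an NxN shader kernel, N = 2^(level + 1) - 1."""
    n = 2 ** (shader_level + 1) - 1
    return n * n

# Full shader filtering at filter_level 5:
full = total_samples(5)       # 63*63 = 3969 taps
# Split into mipmap level 2 (the 4x4-texel mipmap reduction handles the
# first two levels of blur for free) plus shader level 3:
split = total_samples(5 - 2)  # 15*15 = 225 taps
# Reduction in texture accesses: 3969 / 225 = 17.64x
```

Because the taps grow as roughly 4^level, moving even a couple of levels of blur into the mipmap chain cuts the shader cost dramatically.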

I'm not sure exactly why it's as fast as a 1x1 filter. The texture reads from the level 2 mipmap data are likely faster due to better GPU cache coherency between the threads. That would make sense if the filtering was texture memory bandwidth limited. I assume the frame rate is limited by something else for this scene + view, maybe by the CPU or other shader code.

Here is what the final image looks like. It's almost identical in quality to the 63x63 filter kernel image above. The amount of blur is slightly different due to the inexactness of the filter_level math (it's integer, not floating-point, so there are rounding issues). Other than that, the results are perfectly acceptable. Also, this image uses different blur values for the other spheres to the right, so concentrate on the closest sphere on the left for comparison with the previous two images.

Rough reflection using a combination of mipmap level 2 and a 15x15 texture filter kernel taking 225 texture samples.

Here is a view of 8 metal spheres of varying roughness, from matte (fully diffuse lighting) on the left to mirror reflective (fully specular lighting) on the right. Each sphere is one filter_level different from the one next to it; the specular shininess factor increases by 2x from left to right.

Reflective metal spheres of varying roughness with roughest on the left and mirror smooth on the right.

This screenshot shows a closer view of the rough sphere on the left, with the filter_level/specular exponent biased a bit differently to get a clearer reflection. There are no significant filtering artifacts even at this extreme blurring level.

Smiley reflection in rough metal sphere showing high quality blur.

I'm pretty happy with these results, and the solution is relatively simple. The next step is to make the materials editable by the user and to make the reflective shapes dynamic so that they can be moved around the level. In fact, I've already done this, but I'll have to show it in a later post.

Sunday, August 28, 2016

Procedural Universe Rendering

I recently installed a new version of Microsoft Visual Studio on my home machine where I develop 3DWorld. The upgrade from MSVS 2010 to MSVS 2015 had been delayed until I found a good deal on the latest version, which sells for hundreds of dollars. I managed to get a used copy on amazon.com for a fraction of the retail price, and it seems to work just fine.

Overall, it only took me a few hours to get 3DWorld building and running with the new compiler. There were various minor fixes for syntax errors and warnings, and I had to rebuild some of the dependencies. However, the upgrade did require me to spend a lot of time setting up my universe mode scenes, for about the fourth time in 3DWorld's history. My universe scenes were all invalidated because the planets were different types and in different places, so I had to re-place the ships and space stations and change various parameters.

The problem is that the built-in random number generator values changed again. It seems like every version of Visual Studio gives me different values from rand(). Normally I wouldn't use the system rand() because it's slow, poor quality, and varies across compilers/OSes. I have my own custom random number generator that solves these three issues, which I've been using since I switched to MSVS 2010 about 5 years ago. I thought that was the last time I would have to deal with the universe random seeds problem. I guess not.

Unfortunately, I had missed a call to rand() that was used to precompute a table of Gaussian-distributed random numbers (to avoid generating them on the fly). My custom random number generator was still being used to select a random entry from the table, but the entries themselves were all different. This distribution was used to select the temperature and radius of each system's star. The star radius affected the planet orbits, and the star temperature affected the planet types and environments. All of the galaxy and system locations were the same, but within each solar system everything was different.
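The fix pattern can be sketched like this: generate the Gaussian table once with a deterministic generator (Box-Muller here), so the table contents no longer depend on the compiler's rand(). The tiny LCG below is a stand-in for 3DWorld's custom RNG, not the real one:

```python
import math

def lcg(seed):
    """Tiny deterministic stand-in for 3DWorld's custom RNG (assumed),
    returning floats in [0, 1). Being compiler-independent is the point."""
    state = seed
    def next_float():
        nonlocal state
        state = (1103515245 * state + 12345) % (1 << 31)
        return state / float(1 << 31)
    return next_float

def make_gaussian_table(size, rng):
    """Precompute Gaussian-distributed values via the Box-Muller transform
    so no transcendental math is needed at lookup time. Using a fixed,
    portable rng keeps the table identical across builds and compilers."""
    table = []
    for _ in range(size):
        u1 = max(rng(), 1e-12)  # avoid log(0)
        u2 = rng()
        table.append(math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2))
    return table

table_a = make_gaussian_table(1000, lcg(42))
table_b = make_gaussian_table(1000, lcg(42))
# Same seed -> identical tables, regardless of what the system rand() does.
```

With the table itself deterministic, selecting entries with the custom RNG (as described above) yields the same star temperatures and radii on every compiler.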

I fixed the problem and added a random seed config file parameter. This made it easy to regenerate the current system until I found one I liked, rather than having to fly around the galaxy looking for a suitable starting system for the player. I was looking for a seed that would give me a yellow to white star, an asteroid belt, and at least one of each type of interesting planet (Terran/Earth-like habitable, gas giant, ice planet, volcanic planet, ringed planet, etc.). In the process I came across some interesting and beautiful planets, such as the gas giant in the screenshot below that looks like Jupiter.

Closeup of a procedural gas giant that looks like Jupiter, including small elliptical "storms".

Shadows

I settled on a system that had some interesting shadow effects, so I thought I would take some screenshots of the different types of objects that cast and receive shadows. Here is an image of a planet with a moon that is in the middle of the system's asteroid belt. I don't know if this actually happens in real solar systems, but it certainly makes for interesting gameplay. It's fun to watch the ships fly around the planet trying to (or failing to) avoid colliding with the asteroids. In this screenshot, I've positioned my ship so that the star is behind me and I'm in the shadow of the moon, looking at the asteroid belt and the planet, which is right in the middle of the asteroids.

Moon and planet casting shadows on an asteroid belt.

The small asteroids in the near field are fully shadowed and black, and the asteroids further away show a dark cone of shadow extending toward the planet in the center of the image. Some of the shadowed asteroids are difficult to see because they blend in with the black universe background, but you can definitely see shadowed asteroids contrasted against the planet. The shadow cone eventually fades out because the moon occludes less and less of the star's light as the distance from the moon to an asteroid increases. This is similar to how, on Earth, shadows from nearby objects are much sharper than shadows from distant objects. Also note that the moon doesn't actually cast a shadow on the planet in its current position.

Here is a nice blue ocean planet that has a ring of asteroids around it. The ring casts a thin shadow near the equator of the planet. This can be seen as a thin dark line a bit below the center of the planet. This shadow is ray traced through the procedural ring density function in the fragment shader on the GPU to determine the amount of light that is blocked. The sun is behind my ship and a bit to the right. You can also see that the planet shadows the asteroid belt on the back left side. I found another planet where the moon should cast shadows on the rings, so I'll have to implement that in the code next.

Beautiful blue planet with asteroid belt rings. The rings cast a faint shadow on the planet and the planet casts a soft shadow on the rings.

I was lucky enough to find a rare occurrence of a moon casting a shadow on a planet - a solar eclipse! However, the relative sizes and distances between the star, moon, and planet in 3DWorld aren't to scale with real distances, so it may not represent a physically correct eclipse. I don't see these very often, and the previous planet configuration (MSVS 2010) didn't have one in any nearby star system. The moon slowly revolves around the planet with an orbital period of around an hour, and after a few minutes the shadow no longer intersects the planet.

Rare occurrence of a moon casting a soft analytical shadow on a planet. The planet also reflects light onto the moon.

Note that the shadow has a physically correct umbra and penumbra. This is computed in the fragment shader when rendering the planet. The amount of light reaching the planet is calculated as one minus the fraction of the sun disk that is occluded by the moon. The sun is modeled as a circular/disk light source and the moon is modeled as a sphere projecting into a circle along the light vector. You can find the math for such a calculation here.
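The occlusion calculation described above can be sketched with the standard circle-circle intersection (lens) area formula. Here the sun and moon are assumed to have already been projected onto the plane perpendicular to the light vector, giving two disk radii and a center distance as seen from the shaded point:

```python
import math

def disk_overlap_area(r1, r2, d):
    """Area of intersection of two disks with radii r1, r2 and center
    distance d: the standard circle-circle 'lens' area formula."""
    if d >= r1 + r2:
        return 0.0                         # disjoint disks
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2  # smaller disk fully inside
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) *
                         (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

def light_fraction(sun_r, moon_r, d):
    """One minus the fraction of the sun's disk covered by the moon, as
    seen from the shaded point; projected radii are assumed precomputed."""
    return 1.0 - disk_overlap_area(sun_r, moon_r, d) / (math.pi * sun_r * sun_r)
```

Fully separated disks give a light fraction of 1 (no shadow), full coverage gives 0 (umbra), and partial overlap gives the smooth falloff of the penumbra.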

Bonus video of asteroid bowling! Here is a video of a planet plowing through the asteroid field at 100x speed, with a moon trailing behind it. I fixed the asteroid belt placement after recording this video.



Nebulae

Nebula rendering is not new to 3DWorld. I've shown images of 3DWorld's nebulae in previous posts such as this one. I recently went back and reworked the shader code that determines the color and transparency of each pixel in the nebula. I made a total of three changes:
  1. Added an octave of low frequency 3D Perlin noise to modulate the density/transparency of the nebula to give it a more random, nonuniform shape rather than looking like a large sphere.
  2. Increased the exponent of the noise from 2.0 to a per-nebula random value between 2.0 and 4.0 to produce stronger contrast between light and dark areas (wispy fingers).
  3. Switched to additive blending to model emissive gas rather than colored occluding material for high noise exponent nebulae to give them a brighter appearance.
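To illustrate the effect of change 2, here is a sketch of how raising the noise density to a higher exponent increases contrast. The exact way the shader combines the noise octaves is my assumption, not 3DWorld's actual code:

```python
def nebula_alpha(noise, exponent, low_freq_noise=1.0):
    """Shape a raw noise sample in [0,1] into a nebula alpha/density.
    The extra low-frequency octave (change 1) modulates the overall shape;
    raising the result to a higher exponent (change 2) darkens mid-range
    values, sharpening the contrast between dense 'fingers' and empty
    space. Illustrative only, not the actual shader code."""
    density = max(0.0, min(1.0, noise * low_freq_noise))
    return density ** exponent

# exponent 2.0: alpha(0.5) = 0.25; exponent 4.0: alpha(0.5) = 0.0625
# Mid-range noise fades toward transparent while dense regions stay bright.
```

Only values near 1.0 survive a high exponent, which is what produces the wispy, high-contrast look in the later screenshots.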
Here are some nebula screenshots. They show the evolution of nebula rendering as I applied my changes to the algorithm. The first two show the original algorithm, the middle two show changes 1 and 2, and the last four images show the final code. The stars in these screenshots are in front of, inside, and behind the nebula.











Keep in mind that nebulae are volumetric objects computed using 3D noise, not just 2D images. They are drawn with 13 crossed billboards, allowing the player to fly in and around them with minimal rendering artifacts. I got the idea from this video.

That's it for nebulae. I'll add some more images if I change the algorithm again in the future. Sorry, I haven't created any nebula videos. The fine color gradients just look horrible after video compression, and it ruins the wispy, transparent effect.