Friday, March 24, 2017

Realtime Terrain Editing

I've changed topics once again and have gone back to working on realtime terrain editing. 3DWorld's heightmap-based terrain has been editable using a brush system for a few years now. I'm currently working on additional editing functionality: grass and trees can now be added and removed along with terrain height editing. This works for deciduous trees, pine trees, palm trees, grass, and flowers.

This is going to be a technical post. Sorry to those of you who were expecting another post with pretty pictures of fancy custom materials. I'll throw some samples of my wonderful terrain editing artwork in among the blocks of text to break things up a bit. Keep in mind that everything was created in realtime, and none of the scenes in the images took more than a minute or so to construct. I'm not an artist, but I'm sure an artist could make something really cool with these features.

Scene showing custom edited tree, grass, and flower coverage created in under a minute.

All editing modes use a brush system, with keyboard keys switching between modes and add vs. remove operations. The following modes are available:
  • Increase Terrain Height
  • Decrease Terrain Height
  • Smooth Terrain (constant height)
  • Add Trees (deciduous, pine, or palm depending on tree mode - another key)
  • Remove Trees
  • Add Grass (and flowers)
  • Remove Grass (and flowers)

Brushes have five parameters:
  • Brush Shape (square or round) 
  • Brush Size/Radius (in heightmap texel increments, from 0.5 to 128)
  • Brush Weight (applies to height increase/decrease and grass add/remove only)
  • Brush Falloff (for round brushes, controls falloff with radius: constant, linear, quadratic, or sine; see the sketch after this list)
  • Placement Delay (controls smooth brush application/painting vs. discrete stamp operations)
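
For round brushes, the per-texel weight is the brush weight scaled by the falloff function. Here's a minimal C++ sketch of how that weighting could look (the enum and function names are made up for illustration, not 3DWorld's actual code):

```cpp
#include <cmath>

enum brush_falloff_t {FO_CONST, FO_LINEAR, FO_QUAD, FO_SINE};

float brush_weight(float base_weight, float dist, float radius, brush_falloff_t falloff) {
    if (dist > radius) return 0.0f; // texel is outside the brush
    float const t(1.0f - dist/radius); // 1.0 at the center, 0.0 at the edge
    switch (falloff) {
    case FO_CONST:  return base_weight;     // constant: hard-edged disk
    case FO_LINEAR: return base_weight*t;   // linear cone
    case FO_QUAD:   return base_weight*t*t; // quadratic: softer edge
    case FO_SINE:   return base_weight*std::sin(1.5707963f*t); // smooth sine ramp
    }
    return 0.0f; // unreachable
}
```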

Terrain height editing operations only work for heightmap terrain read from a source image. The editing operates directly on heightmap image data stored in memory so that it can be saved back to an image on disk. Each user brush operation is stored in a list of edits that can be written to and loaded from disk, allowing the user to save their work. Brush operations that increase and decrease terrain height also support undo, which works by inverting the brush weight and reapplying it as a new brush at the same size and location. The updated height data is used everywhere, including in overhead map mode, which serves as a simple image viewer for the terrain in false colors.
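
Here's a rough sketch of how such an edit list with invert-and-reapply undo could be structured. The data layout is a guess for illustration, not 3DWorld's actual code:

```cpp
#include <vector>

struct terrain_brush_t { // one recorded brush stroke
    int   x, y;   // heightmap texel position of the brush center
    float radius; // brush radius in texels
    float weight; // signed height delta: positive raises, negative lowers
};

void apply_to_heightmap(terrain_brush_t const &b); // edits the in-memory image (not shown)

class brush_manager_t {
    std::vector<terrain_brush_t> brushes; // edit history, written to/read from disk
public:
    void apply(terrain_brush_t const &b) {
        brushes.push_back(b);
        apply_to_heightmap(b);
    }
    void undo_last() { // undo by inverting the weight and reapplying
        if (brushes.empty()) return;
        terrain_brush_t b(brushes.back());
        b.weight = -b.weight; // invert ...
        apply(b); // ... and reapply as a new brush at the same size and location
    }
};
```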

3DWorld can load, display, and edit a terrain up to 2^15 x 2^15 (32K x 32K) pixels in size, though the largest terrain image I have is the 16K x 16K Puget Sound dataset. This is an enormous 256M pixels! I can't even open the image in any standard image editing program. Windows Explorer hangs when I try to click on the image because it can't even generate the thumbnail preview. Height data can be in either 8-bit or 16-bit resolution. Most of the larger datasets are 16-bit for higher precision.

Trees and grass can be added and removed without a heightmap because they operate directly on the procedurally generated data, overriding the procedural results. However, these brushes can't be saved (yet), and if the player moves far away from the edited region and comes back, the changes will be lost. I'll see if I can improve this in the future by saving and reapplying brushes when the terrain tiles they operate on are generated. Because of this, users will generally want to edit the height values before editing the vegetation.

Another example of my awesome artwork. This is a section of the Puget Sound heightmap, with custom hills and valleys and user placed trees and grass. The spikes in the back are one mesh quad wide and over 1 km tall.


All editing operations work on heightmap texels, or individual mesh elements when heightmaps aren't being used. Depending on the scene size/grid resolution, these units range from 1m to 10m. For example, the Puget Sound dataset is 10m resolution. This is around the spacing between trees, which means that individual trees can be added or removed. One mesh texel also contains a few hundred blades of grass, which isn't realistic but looks good enough. Addition and removal of grass is based on brush weight, meaning that the number of grass blades can be smoothly adjusted per grid cell. Flowers are generated or removed to match the grass density.
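
A minimal sketch of how weighted grass editing per texel could work; the tile structure here is a hypothetical stand-in for 3DWorld's actual grass data:

```cpp
#include <algorithm>
#include <vector>

struct grass_tile_t { // minimal stand-in for one terrain tile's grass data
    int size = 128; // texels per tile side
    std::vector<float> density, flowers; // per-texel densities in [0.0, 1.0]
    grass_tile_t() : density(size*size, 0.0f), flowers(size*size, 0.0f) {}
    float &den(int x, int y) {return density[y*size + x];}
    float &flo(int x, int y) {return flowers[y*size + x];}
};

// Apply a signed brush weight to one texel. The number of grass blades drawn
// scales with density, so edits can smoothly fade grass in and out per cell.
void edit_grass(grass_tile_t &tile, int x, int y, float delta) {
    float const d(std::min(1.0f, std::max(0.0f, tile.den(x, y) + delta)));
    tile.den(x, y) = d;
    tile.flo(x, y) = d; // flowers are generated/removed to match grass density
}
```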

3DWorld also supports "zoomed out" heightmap editing on larger groups of power-of-two texels (2x2, 4x4, 8x8, etc.) This allows large heightmaps to be quickly modified without requiring a huge number of brush strokes to update millions of pixels. This increases CPU usage during editing, and is limited to a reasonable zoom factor of around 16x16. Zoomed editing is useful for creating larger features such as mountains and lakes that span many terrain tiles.
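
Here's a rough sketch of how a zoomed brush could expand into the underlying heightmap pixels, reusing the hypothetical terrain_brush_t and brush_weight() from the sketches above:

```cpp
#include <cmath>

// b.x and b.y are in zoomed texel units here; 'heights' is the row-major
// heightmap, 'hmap_w' its width in pixels, and zoom is 1, 2, 4, ... 16.
void apply_zoomed_brush(float *heights, int hmap_w, terrain_brush_t const &b, int zoom) {
    int const r((int)std::ceil(b.radius));
    for (int by = -r; by <= r; ++by) {
        for (int bx = -r; bx <= r; ++bx) { // iterate over zoomed brush texels
            float const dist(std::sqrt(float(bx*bx + by*by)));
            float const w(brush_weight(b.weight, dist, b.radius, FO_LINEAR));
            if (w == 0.0f) continue; // outside the round brush
            for (int y = 0; y < zoom; ++y) { // expand into the zoom x zoom pixel block
                for (int x = 0; x < zoom; ++x) {
                    heights[((b.y + by)*zoom + y)*hmap_w + ((b.x + bx)*zoom + x)] += w; // bounds checks omitted
                }
            }
        }
    }
}
```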

Who says you can't have a forest of 100K *unique* palm trees packed so close together you can't see the ground? I can create whatever I want in my map.

Heights (mesh z-values), trees, and grass are updated per-tile during editing. One tile consists of an NxN grid of height values, where N is typically 128. Tiles that are modified have their data recomputed and sent to the GPU (height texture, material weight texture, ambient occlusion map, shadow map, etc.) The largest brush size available corresponds to one tile width (128), so in the worst case 4 tiles are updated per brush, if the brush happens to fall on the corner where 4 tiles meet. If brush delay is set to 0, this is 4 tiles per frame. The procedural generation and rendering system is capable of updating 4 tiles per frame at a realtime framerate of at least 30 Frames Per Second (FPS). The average editing framerate is often over 100 FPS. Most of the CPU time is spent recreating the various texture maps rather than on the editing operations themselves.
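
A small sketch of how the set of dirty tiles for a brush could be computed (the tile math is simplified and the names are hypothetical):

```cpp
#include <set>
#include <utility>

// Returns the (tx, ty) indices of tiles overlapped by a brush centered at
// texel (bx, by) with radius brad; assumes non-negative texel coordinates.
std::set<std::pair<int, int>> get_dirty_tiles(int bx, int by, int brad, int N=128) {
    std::set<std::pair<int, int>> tiles;
    int const x1((bx - brad)/N), x2((bx + brad)/N), y1((by - brad)/N), y2((by + brad)/N);
    for (int ty = y1; ty <= y2; ++ty) {
        for (int tx = x1; tx <= x2; ++tx) {tiles.insert({tx, ty});} // at most 2x2 tiles when brad <= N
    }
    return tiles; // each tile re-creates its height/weight/AO/shadow textures
}
```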

Smiley face "cookie" created with custom mesh brushes. This one is several hundred meters across, and has ponds for eyes and trees for hair.

The most important remaining task is adding a save + load system for trees and grass. It may also be the most difficult task. Because of the procedural nature of the vegetation system, it doesn't interact well with user edits. Edits are modifications on top of the procedural content; they can't be applied until the procedural content is first generated. This means that a saved edit file can only be replayed if the terrain tiles are loaded in the same order they were loaded during the initial editing. Unless the editable area is constrained to within a few tiles of the starting location, this is difficult to guarantee. It requires some cooperation from the user, which means it only really works well when the user knows what they're doing.

I could, of course, save the entire state with every object placement. Then, when the save is loaded, skip all procedural generation and use the objects from the save file. It's unclear exactly how this would work if I continue editing the scene. If trees are removed and then re-added, are the new trees procedurally generated again, or do they need to match the trees from the save file that were removed? Also, there are performance/runtime issues to consider. I'll continue to think about this.

Finally, here is a video showing how I created a smiley face "cookie" within the Puget Sound scene. This one is similar to the steps I used to create the other smiley in the image above. Note that the water is at a constant height, so holes that are deep enough become filled with water.



If you're curious how I added the brush overlays as colors spread across the mesh, these are done using the stencil buffer. A cylinder is drawn around the brush (a cube for square brushes) in the proper editing color. Then the stencil buffer is used to mask off all but the pixels that are blocked by the mesh and trees, creating a decal effect. This way, the user can see exactly what area is being edited.
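
Roughly, the two passes look like this in standard OpenGL. This is a simplified sketch of the technique as described, not 3DWorld's exact code, and draw_brush_volume() is a hypothetical stand-in for drawing the cylinder/cube:

```cpp
#include <GL/gl.h>

void draw_brush_volume(); // draws the brush cylinder or cube (not shown)

void draw_brush_overlay() {
    // Pass 1: mark the stencil where the brush volume is hidden behind the
    // mesh and trees - the surfaces we want to paint the decal onto.
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
    glDepthMask(GL_FALSE); // no depth writes
    glDepthFunc(GL_GREATER); // pass only where scene geometry is in front
    glStencilFunc(GL_ALWAYS, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // tag covered pixels with 1
    draw_brush_volume();

    // Pass 2: redraw the volume in the editing color, only where the stencil
    // was set, which paints a decal onto the visible surfaces.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthFunc(GL_ALWAYS);
    glStencilFunc(GL_EQUAL, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_brush_volume();

    // Restore default state.
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}
```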

Here is another terrain editing video in the same scene. I loaded this heightmap with some existing edits, then added more.



I had only 1 of 4 CPU cores allocated to terrain generation/updates, and the other 3 were allocated to video compression. This made updates laggy, since this work is normally multi-threaded. That's why the video appears to play back so quickly in some places. It was recorded assuming a constant 60 FPS, but in reality I was getting 30-50 FPS during mesh height and tree editing. Grass editing takes almost no time. Also, the glitchy water tiles near the end of the video are only present during video recording, so I have no idea what's going on there. This is the first time I've seen that artifact.

Bonus: If you're curious how large the Puget Sound dataset is, take a look at this overhead map view. The area I was editing is in the center, around the red circle. It's a tiny fraction of the entire heightmap. In reality, the heightmap is tiled/mirrored so that it actually goes on forever.

Puget Sound heightmap showing small user edited area in the center near the red circle.

Saturday, February 25, 2017

Yes, More Custom Materials!

It's time for more fun with custom materials in the form of reflective and refractive cubes and spheres. Every time I get distracted by some other new feature, I keep coming back to these materials. Part of the reason is that I've never seen a system that can do something like this before. I'm not saying no other game engines or material editors like this exist, just that I've never seen one.

In theory, 3DWorld should be rendering these materials using close to physically correct lighting models. Of course, there could be bugs. I have no idea how to check for correctness. Some of the materials I can create may not exist in the real world, such as a material that's half metal and half glass. Or a material that absorbs/scatters blue light and reflects red light. Maybe they exist, but I don't have any around for reference, and good luck finding a reference image for something like that online.

What's changed since I last blogged about this stuff? I think I finally have refraction working correctly with per-object cube maps, at least for cubes and spheres. These two shapes are simple enough that I can ray trace them inside the shader to compute correct ray intersection points and reflection/refraction angles. The shader code is pretty complex as it handles up to two reflections and two refractions corresponding to both the front and back faces of transparent materials. In addition, it handles volumetric light scattering and light absorption/attenuation. This is what we see in volumetric/solid, partially transparent materials such as ice, glass blocks, colored plastic, liquids such as milk, etc.

There is an incredibly large number of material parameter combinations that I can create in 3DWorld, so I can't possibly show them all in this post. At best I can show a dozen or so examples. Here is the set of parameters that can be edited in realtime that affect lighting (excluding parameters used for physics):
Texture, Emission, Reflectivity, Metalness, Alpha, Specular Magnitude, Shininess, Refraction Index, Light Attenuation, Diffuse Red, Diffuse Green, Diffuse Blue, Specular Red, Specular Green, and Specular Blue. These user parameters are explained more in my previous blog post on this topic.

Parameters are changed through my simple onscreen editing system where the arrow keys select the variables and move the sliders. It's similar to a monitor's onscreen menu system. Maybe it's not the best or nicest looking UI, but it's simple to use, easy to work with in the code, and has almost no impact on framerate. Plus, it avoids having to pull in a 3rd party UI library and figure out how to make it work in fullscreen mode when 3DWorld only supports a single window + render target. I did, however, add a texture preview image and some small color indicator circles. Everything but the texture image is ASCII art. If I ever release 3DWorld to the public, I'll have to add a better UI.

Here is an example of a refractive glass cube and sphere together. Refraction currently only works correctly with sphere and cube shapes; other shapes are approximated without knowing the exit point of the view ray from the object. You can see the reflection of the sphere in two of the back faces of the cube. The fragment shader traces each view ray through the object. The ray is both reflected and refracted at the front surface of the shape (the current pixel's 3D position in the fragment shader). The refracted ray is then intersected with the back face of the shape using a ray-sphere or ray-cube intersection. The hit position is reflected back into the shape, and also refracted out of the shape into air. The reflection and refraction angles depend on the indexes of refraction at the material/air interface. Each of the rays that exits the shape is looked up in the correct face of the reflection cube map to determine the color to use for blending. This produces first-order physically correct reflections and refractions, as seen in the image below.
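
Here's a simplified C++ sketch of that trace for the sphere case, using GLM for the vector math. Note that cube_map_lookup() is a hypothetical stand-in for sampling the per-object cube map, and Schlick's approximation replaces the full Fresnel term to keep the sketch short; the real shader logic is more involved:

```cpp
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>
using glm::vec3;

vec3 cube_map_lookup(vec3 const &dir); // sample the reflection cube map (not shown)

// Distance along a normalized ray from a point inside a sphere to its surface.
float ray_sphere_exit(vec3 const &p, vec3 const &dir, vec3 const &c, float r) {
    vec3  const oc(p - c);
    float const b(glm::dot(oc, dir)), det(b*b - glm::dot(oc, oc) + r*r);
    return -b + std::sqrt(std::max(det, 0.0f)); // larger root = exit point
}

vec3 trace_glass_sphere(vec3 const &frag_pos, vec3 const &view_dir, // view_dir normalized
                        vec3 const &center, float radius, float ior) {
    vec3  const n1(glm::normalize(frag_pos - center)); // front surface normal
    vec3  const refl(glm::reflect(view_dir, n1)); // first-surface reflection
    vec3  const refr(glm::refract(view_dir, n1, 1.0f/ior)); // refract air -> glass
    float const t(ray_sphere_exit(frag_pos, refr, center, radius));
    vec3  const exit_pos(frag_pos + t*refr); // back face hit position
    vec3  const n2(glm::normalize(exit_pos - center)); // outward back face normal
    // Refract glass -> air; glm::refract() returns vec3(0) on total internal
    // reflection, which a fuller version would handle by reflecting instead.
    vec3  const refr_out(glm::refract(refr, -n2, ior));
    // Schlick Fresnel blend between the reflected and transmitted colors.
    float const cos_i(std::max(glm::dot(-view_dir, n1), 0.0f));
    float const r0(((1.0f - ior)/(1.0f + ior))*((1.0f - ior)/(1.0f + ior)));
    float const fr(r0 + (1.0f - r0)*std::pow(1.0f - cos_i, 5.0f));
    return glm::mix(cube_map_lookup(refr_out), cube_map_lookup(refl), fr);
}
```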

Glass cube and sphere drawn using a cube map with two reflections and two refractions, in shadow.

Does that look like a real glass cube? Maybe. I do have a clear plastic cube that I used as a reference last year. Unfortunately, I seem to have misplaced it after moving to the new house. The sphere certainly looks like images of glass spheres I found on Google (example). It has the typical inverted, zoomed out refraction image of glass spheres. However, I don't have the physical parameters (such as index of refraction) for the glass used in those images, so I can't do an accurate comparison. A small change in index of refraction makes a large change in the reflections and refractions, especially the second order ones. The difference is amplified with each reflection/refraction.

I've also experimented with more than two reflections/refractions, but this didn't seem to improve the results, so I left it out. It definitely changed the results, but it wasn't clear that they were more correct. I don't think the average human can detect correct third order reflection/refraction effects. Also, multiple bounces in spheres tend to magnify part of the image, which makes the pixel borders of the cube map texture more obvious. I would rather have a smooth and clean image than one that is ever so slightly more physically correct. Besides, this adds extra complexity and reduces framerate.

Here are some screenshots where I was experimenting with different diffuse and specular colors. The blueish diffuse and reddish specular makes these glass objects look like anti-reflective coated lenses as used in cameras and some sunglasses.

Textured glass "marble" with reflection, refraction, and scattering using custom material texture and colors. The smiley face is my placeholder player model!

The glass eyeball.

Normal maps can be applied to objects as well to perturb the reflection/refraction normal vector and get some interesting effects.

Glass ... pumpkin? Well, this is a wooden fence normal map applied to a sphere.

I made some attempts at creating ice. I don't have any ice normal maps, so I had to make do with the normal maps I happen to have. The stucco normal map shown in the first video at the bottom looks like a natural chunk of ice. These two cubes look more like ice sculptures or something else man made.

Ice cube? More like ice bricks.

Another ice cube, using a different normal map texture. Since it's supposedly transparent, it doesn't cast a shadow.

Light absorption and scattering within the material volume is also supported. This is sometimes referred to as participating media. The way this works is by calculating the length of the view ray clipped to the bounds of the sphere or cube for each fragment in the shader. This is the distance light must travel to cross the 3D volume of the object. The transmitted light fraction decays exponentially with this distance as exp(-k*distance), per the Beer-Lambert law, and is used to blend between the cube map ray hit color and the material's bulk color.
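
A minimal sketch of that blend, where 'atten' stands in for the material's light attenuation parameter:

```cpp
#include <cmath>
#include <glm/glm.hpp>

glm::vec3 volume_blend(glm::vec3 const &cube_map_color, glm::vec3 const &bulk_color,
                       float ray_len, float atten) { // ray_len clipped to the object bounds
    float const transmit(std::exp(-atten*ray_len)); // fraction of light surviving the volume
    return glm::mix(bulk_color, cube_map_color, transmit); // long paths approach the bulk color
}
```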

I can add hundreds of cubes and spheres, and they're all drawn in depth sorted order for correct alpha blending. You can see the reflections and refractions between all of the objects. Is this physically correct? I have no idea, but it's not obviously wrong as far as I can tell. I've seen similar effects in image searches such as this one. The exact value of the index of refraction makes a huge difference.

Refractive glass cubes and spheres that absorb and scatter light. They also reflect and refract each other.

Here is a screenshot of the San Miguel scene from the previous blog posts, this time with some interesting glass and metal objects on the ground. The gold sphere can be seen twice in the glass sphere. On the left is a refraction, and on the right is a reflection off the inside surface of the sphere. It looks like it could be physically correct. If I only had a glass sphere and gold sphere I could probably figure it out. Is anyone interested in donating these to me in the name of science?

Material spheres placed in the San Miguel scene. The reflective gold sphere is reflected and refracted in the glass sphere.

I created two videos showing material editing. Both videos were recorded in realtime. As long as a single sphere or cube is placed or moved at one time, the framerate remains high. There is a list of predefined materials read from a text file. The UI allows for editing of existing materials to create new named materials. These can be saved in the scene for later use. Dynamic objects can have their parameters modified after placement if their named material is the one being edited. This is because they store a reference back to their material. However, the cube map is rendered every frame, so this only works well for a single dynamic object. Multiple objects reduce framerate due to all the required scene rendering passes. Static objects, on the other hand, can be placed all over the scene until graphics memory is exhausted (a few hundred objects on my system). The downside is that their material parameters can't be modified once they're placed. Both types of objects can be moved by the player and also destroyed.




Here is a similar video, this time in the San Miguel scene shown in the previous two blog posts. The only collision object is the ground. The tables and chairs weren't added as collision objects because they're not really grouped in a convenient way in the model file, and I haven't computed their bounding volumes. 3DWorld really prefers volumetric collision objects rather than polygon-based ones. I could add all ~7M polygons as individual collision objects, and it would probably work, but the performance would be terrible. I'm not sure if 3DWorld could actually fit the data + acceleration structure in 32-bit address space. So you can ignore the fact that I'm pushing cubes through tables and chairs, because I haven't bothered to fix it. Also note that the final cube's refraction image looks a bit odd because it's partially sticking through the wall.




That's all I have for now. I have a lot more images and videos of more materials with strange parameters, but I feel I've shown enough of the interesting ones.

The question is, what can I do with these? Is it possible to build an entire building or game level with them? I guess I need to add support for shapes other than cubes and spheres. Technically, I do have other shapes, but I hard-coded the parameters into the source code rather than adding a proper UI for controlling them. I've already created a scene with over 10K cubes, so I know it scales, as long as they're not all reflective and refractive. 10K opaque/diffuse objects and 100 reflective/refractive objects should be fine for a realtime scene. Of course, it would be an incredible amount of work to build something useful from them if every object needs to be individually placed. I suppose I can continue to use these objects as decorations. And yes, they can be saved and reloaded, keeping most of the parameters, though I can no longer edit object material properties after reloading.

Wednesday, February 1, 2017

San Miguel Rain and Snow

This post continues my San Miguel scene experiments using techniques from previous posts on rain and snow. I decided I wanted to see how this scene looked with wet and snowy surfaces and how well the environment coverage mask worked on a highly detailed, complex model.

Here are two screenshots showing a rainy San Miguel. The wet look on the stone ground, tables, and chairs is achieved by decreasing the material diffuse lighting and increasing both specular intensity and specular exponent. This is a pretty simple effect, but works well in practice, especially on the ground. Wetness is higher for horizontal surfaces, and surfaces that have an unoccluded view of the sky above. These are the surfaces that naturally receive and accumulate the most water.
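
A minimal sketch of that adjustment; the material fields and scale factors here are hypothetical, chosen only to illustrate the idea:

```cpp
#include <algorithm>

struct material_t { // hypothetical subset of a material's lighting parameters
    float diffuse_scale = 1.0f;  // multiplier on the diffuse color
    float spec_mag      = 0.0f;  // specular intensity
    float shininess     = 16.0f; // specular exponent
};

void make_wet(material_t &mat, float wetness) { // wetness in [0.0, 1.0]
    mat.diffuse_scale *= (1.0f - 0.4f*wetness); // wet surfaces look darker
    mat.spec_mag       = std::min(1.0f, mat.spec_mag + 0.6f*wetness); // shinier
    mat.shininess     *= (1.0f + 4.0f*wetness); // tighter highlights
}
```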

The raindrops themselves are drawn as vertical lines that quickly fall in a mostly vertical direction depending on the wind. They look much better in a dynamic scene than they do in the static screenshots shown here. Each drop is visible for only a few frames.

Rain and wet ground. Raindrops are drawn as lines and look much better when moving.

Rain and wet ground + objects from another angle.

I also enabled snow coverage and let the simulation run for a few minutes. Snow is accumulated over time and is drawn as a white layer on top of the normal textured scene geometry as a postprocessing step in the fragment shader. Snow thickness/density is a function of polygon normal and is highest for horizontal surfaces that face the sky above. Since the camera is above the scene in this screenshot, most of the visible surfaces such as the tree leaves are pointed up and have a high snow density.

Snowy scene from above, after several minutes game time of accumulated snowfall.

I enabled the snow coverage mask for the next two screenshots. This works by precomputing a depth-based coverage mask for the scene that stores the vertical distance from the sky plane for each pixel. The fragment shader accesses this texture to determine whether the current fragment is at or below the recorded occluder height. If the fragment is below this height, it is occluded by other scene geometry from above and receives no snow. The texture is blurred using multiple projection angles to simulate a randomization from the typical vertical snowfall. This smooths the edges of the snow coverage mask in areas of high frequency geometry, such as under the trees. However, if snow is left to fall over an extended period of time, this saturates the coverage layer and produces sharper boundaries (as seen below).
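
Here's a small sketch of the per-fragment coverage test; the names and the blur handling are hypothetical simplifications of what's described above:

```cpp
#include <algorithm>

// occluder_z is sampled from the precomputed coverage mask; blur_dist softens
// the transition band below the topmost occluder.
float snow_intensity(float frag_z, float occluder_z, float normal_z, float blur_dist) {
    // 0.0 well below the occluder height, 1.0 at or above it, smooth in between
    float const cover(std::min(1.0f, std::max(0.0f, (frag_z - occluder_z)/blur_dist + 1.0f)));
    return cover*std::max(normal_z, 0.0f); // more snow on upward-facing surfaces
}
```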

The first image was created using a low resolution 512x512 coverage mask, and the second image was created using a much higher resolution 2048x2048 coverage mask. The second image contains higher frequency content for a more accurate representation of snow coverage, though it takes much longer to compute. Preprocessing time is still under a minute on 4 cores. I don't think it looks all that much better though.

Snowy scene using a low resolution 512x512 snow coverage mask.

Snowy scene with a high resolution 2048x2048 snow coverage mask. The snow "shadow" from the trees can be seen.

I'll continue to experiment with snow effects. It would be interesting to see how well true volumetric snow accumulation works on this scene. This will likely take several hours of CPU time, and generate a large amount of voxel coverage data.

[Update: It actually takes only 7 minutes to create the snow volume for 1 billion simulated snowflakes. I underestimated how well the bounding volume hierarchy scales to scenes with many polygons. However, the snow coverage is very noisy due to the large number of small leaves and other objects. I think the white colored non-volumetric snow actually looks better. If you're interested, here is a screenshot.]

Volumetric snow simulated using 1 billion snowflakes.

Monday, January 9, 2017

San Miguel Scene

This post doesn't contain much technical content, because I was busy with moving to a new house and then the holidays, so I didn't have much time to write code. Instead, I'll post some images of the San Miguel scene modeled by Guillermo M. Leal Llaguno of Evolución Visual. I downloaded the Lightwave object file model and textures from the McGuire Computer Graphics Archive.

It took me a few days at an hour or so per day to get the scene to render correctly in 3DWorld. I had to add several minor new features, including a user-specified environment map cube, per-model normal map options, texture invert, custom sunlight intensity, custom mipmap creation for the leaves, and a per-material metalness parameter. Some of the features worked well, but others need more work. In addition, there are some errors in this version of the model, including missing triangles, missing textures, and incorrect texture coordinates. However, these problems don't affect the overall scene quality too much, as long as you're not looking at the details.

I computed indirect lighting for both the sky and sun for this scene using a preprocessing ray tracing operation that took around 10 minutes on 4 CPU cores. Once the preprocessing is finished, the lighting can be used for all view points and various weather conditions. There is no fake ambient term here; all indirect lighting is computed over a 3D grid. This isn't a closed model, meaning many of the polygons are two-sided and don't form a watertight volume. This leads to light leaking through the walls in some places. I don't have a good solution for this problem yet.

I used a single 8192x8192 shadow map and a 6 face 1024x1024 cube map centered in the courtyard for environment reflections. 3DWorld computes the shadow map and cube map when the scene is loaded, and updates them dynamically when objects move or any lighting/environment parameters change. This update requires the scene to be drawn multiple times, but there aren't any dynamic objects enabled by default so this isn't a problem.

Here are some screenshots showing the San Miguel scene from various locations and directions. These are some of the most photorealistic images I've gotten from 3DWorld so far. It can render this scene at an average of almost 200 frames per second at 1920x1080 on my Geforce GTX 1070.

View of the San Miguel scene from the corner showing shadows and indirect lighting.

View from the trees looking down. The indirect lighting appears to soften the shadows.

San Miguel scene viewed from near the fountain with a low value of sun and sky indirect lighting but strong direct sunlight.

View from the upper balcony showing closeup of plants. The sky is cloudy, which makes indirect lighting stand out more.



Here are some screenshots showing cube map environment reflections on glass and metallic objects. The reflection and transmission model is physically based and uses true Fresnel equations. Reflections don't use per-object cube map centers yet, so they're not very accurate, but they still look pretty good. There are some alpha blending artifacts due to not sorting the faces for individual objects from back to front. The materials themselves are sorted correctly as the viewpoint changes. Some of the objects are misaligned with each other, such as the salt shaker. This appears to be a problem in the model file and not a 3DWorld bug.
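
For reference, here is the unpolarized Fresnel reflectance at a dielectric interface as a standalone function. This is the textbook formula; 3DWorld's exact implementation may differ:

```cpp
#include <algorithm>
#include <cmath>

float fresnel_reflectance(float cos_i, float n1, float n2) { // n1 = source medium IOR
    float const sin_t((n1/n2)*std::sqrt(std::max(0.0f, 1.0f - cos_i*cos_i))); // Snell's law
    if (sin_t >= 1.0f) return 1.0f; // total internal reflection
    float const cos_t(std::sqrt(std::max(0.0f, 1.0f - sin_t*sin_t)));
    float const rs((n1*cos_i - n2*cos_t)/(n1*cos_i + n2*cos_t)); // s-polarization
    float const rp((n1*cos_t - n2*cos_i)/(n1*cos_t + n2*cos_i)); // p-polarization
    return 0.5f*(rs*rs + rp*rp); // average over the two polarizations
}
```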

Reflections in glasses and silverware. For some reason, the salt shaker's salt, glass case, and metal lid are misaligned from each other.

More environment reflections of objects on the table.



Cube map reflection in the window. The reflection is a bit undersampled and distorted because it's far from the single cube map center point.

Cube map reflections in the silverware on a table. Can you spot the sun?

Here is a short video where I fly through the scene and examine some objects. At some point I'll have to add a system to improve the smoothness of camera movements when recording videos.


Here is another video with a slower camera speed and my attempt at smoother movement.



This work has been pretty fun and interesting. It's quite different from writing graphics and physics algorithms. I can't wait to find more large and interesting models like this one, and the museum scene from a previous post, to show in 3DWorld. If I find enough of them, I can combine them into some sort of town and maybe build a game around exploring the environments.