Latest Updates:
..I Maed a M@p W1th Weighted Companion Cubes 1NIT!!!1 (09.18.2011)
..Notes on the Normal Offset Materials (03.06.2011)
..Normal Offset GDC Materials (03.02.2011)
..Normal Offset Shadows (08.27.2010)
..Light Index Mapping (01.13.2010)

Pic of the Moment:

Trial By Fire in action.

I Maed a M@p W1th Weighted Companion Cubes 1NIT!!!1
09.18.2011 07:28PM | Dan

I had the itch to make a Portal 2 map, so here it is. I'm pretty happy with it. There are basically two parts: opening the door and getting through the door. There are at least 2 ways to solve Part 2. See if you can get them both!

To play the map:
1) Copy the file to C:\Program Files (x86)\Steam\steamapps\common\portal 2\portal2\addons
2) Enable the developer console in the options menu.
3) Enter the command "map ReactionPit".

Reaction Pit



Notes on the Normal Offset Materials
03.06.2011 01:32PM | Dan

1) I forgot to mention a bug with the FX Composer sample. For some unknown reason, FX Composer loses the normals on the robot model, so you have to reimport it every time you load the project :/. It's also way huge, so you have to scale it down. The easiest way that I've found is to scale each dimension individually, zoom to the object's extents, and repeat.

2) The depth-based scaling is relevant only for perspective shadow projections, not orthographic ones. It's just part of computing how big a shadow texel is in world space.

3) If you are using cascaded shadow maps, you may need to take some extra care with this technique ("Exploding Shadow Receivers" aka "Normal Offset"), as it modifies UV coordinates.

Normal Offset GDC Materials
03.02.2011 06:25PM | Dan

My GDC poster session on Normal Offset Shadows (aka Exploding Shadows) was a smashing success. The poster explains the technique very well. I've included a demo for Nvidia FX Composer 2.5 that shows it off in real time.

There's also a short video explaining it in the previous posting.

Poster (4.7 MB)

NOTE: Due to a bug in FX Composer, the complex, robot-looking model loses its normals when you reopen the project, so you need to reimport it from "robot chaingun.3d" and then scale it way down.
Nvidia FX Composer Demo (3.5 MB)

Normal Offset Shadows
08.27.2010 03:39AM | Dan

Shadow acne has been a thorn in games' side ever since shadow maps were first employed therein. Various depth-biasing techniques have been used, but none fully eliminates acne without introducing the "Peter Pan" effect. That trade-off is inherent to depth bias.

But maybe there's another way to deal with it. Actually, there definitely is. Instead of using a simple depth bias, we can actually avoid acne by making small tweaks to the UV coordinates used in the shadow map look-up. Simply offset a fragment's position along its normal (geometric normal...normal maps need not apply here), and you can sidestep troublesome self-occluding shadow texels, rather than applying a depth bias (which can be offensively large at grazing angles). This "Normal Offset" technique yields vastly superior results to slope-scale depth bias.
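Here's a minimal sketch of the idea in HLSL. The uniform names and the scale factor are mine, for illustration--this is the shape of the technique, not shipping code:

// Minimal normal-offset sketch. All names are illustrative;
// gNormalOffsetScale is a tuning input on the order of a shadow
// texel's world-space size.
float4x4  gShadowMatrix;       // world -> shadow-map clip space
float     gNormalOffsetScale;  // world-space offset distance
sampler2D gShadowMap;

float SampleShadow(float3 worldPos, float3 geometricNormal)
{
    // Offset the receiver position along the geometric normal (not the
    // normal-mapped normal) before the shadow look-up. For a perspective
    // shadow projection, this scale should also grow with distance from
    // the light, since shadow texels get bigger in world space.
    float3 offsetPos = worldPos + geometricNormal * gNormalOffsetScale;

    float4 shadowClip = mul(float4(offsetPos, 1.0), gShadowMatrix);
    float2 shadowUV = shadowClip.xy / shadowClip.w * float2(0.5, -0.5) + 0.5;
    float  receiverZ = shadowClip.z / shadowClip.w;

    // Plain depth compare; because the look-up was nudged off the
    // troublesome texels, little or no depth bias is needed.
    float storedZ = tex2D(gShadowMap, shadowUV).r;
    return (receiverZ <= storedZ) ? 1.0 : 0.0;
}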

Now I just need a really cool name. How about "Exploding Shadows"?

Here's a video explaining the technique:

Results

Constant Bias
Slope-Scale
Normal Offset


Light Index Mapping
01.13.2010 02:01PM | Dan

Most lightmapping techniques these days don't yield good specular results. Diffuse is easy because all you need is a single color--it doesn't change based on the viewing angle. But with specular, you need directionality. The approach I'm most familiar with, developed by Valve and also employed by Epic, projects all lights onto a set of 3 orthogonal basis vectors.

So you can have only 3 specular directions. In addition to the obvious (it's imprecise), you get the oddity that a single light can end up as 3 separate highlights. It also causes highlights to smear out when applied to rounded surfaces. In steps light index mapping!

The idea is to, rather than store lighting results, store the indices of the most important lights in each texel. (Diffuse would still be precomputed using all lights.) Then you can look up these lights in a table in the pixel shader. This even allows for some dynamic effects on the lights, such as color changes and subtle pulsation or movement. You can probably also get away with much lower-resolution light maps than you can with other methods of lightmapping.
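To make that concrete, here's a hypothetical HLSL sketch, assuming four 8-bit light indices packed into each RGBA texel and a small table texture holding the light data. All names and layouts are made up for illustration:

// Hypothetical look-up: the index map stores up to 4 light indices per
// texel in RGBA8; a tiny table texture holds the live light data.
sampler2D gLightIndexMap;   // each channel = lightIndex / 255
sampler2D gLightTable;      // row 0: position.xyz + radius; row 1: color
float     gLightTableWidth; // number of entries in the table

float3 ShadeSpecularFromIndices(float2 lightmapUV, float3 worldPos,
                                float3 normal, float3 viewDir)
{
    float4 packed = tex2D(gLightIndexMap, lightmapUV);
    float indices[4] = { packed.r, packed.g, packed.b, packed.a };
    float3 result = 0;

    // Shade the (up to) 4 most important lights for this texel.
    for (int i = 0; i < 4; ++i)
    {
        float u = (indices[i] * 255.0 + 0.5) / gLightTableWidth;
        float4 posRadius = tex2D(gLightTable, float2(u, 0.25));
        float3 color     = tex2D(gLightTable, float2(u, 0.75)).rgb;

        float3 toLight = posRadius.xyz - worldPos;
        float3 L = normalize(toLight);
        float3 H = normalize(L + viewDir);
        float  atten = saturate(1.0 - length(toLight) / posRadius.w);

        // Because the light data lives in a table, lights can pulse,
        // move, or change color at runtime without rebaking the map.
        result += color * atten * pow(saturate(dot(normal, H)), 32.0);
    }
    return result;
}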

Has anyone tried this? A brief search of the internet yields a dynamic lighting technique that is sort of the inverse of deferred rendering (rendering light indices to a screen buffer instead of rendering surface properties), but nothing on precomputed light maps.

How Many Lights Can a Forward-Renderer Handle?
01.13.2010 01:43PM | Dan

Deferred rendering has been getting a lot of attention lately, and for good reason. The sheer number of lights that a scene can handle is pretty darn high (so long as the average light doesn't affect too many pixels). But a deferred renderer may not be feasible for some. Here are some potential reasons why:

1) Rendering special materials that need more inputs than you have G-buffers for. The "Human Head" Nvidia demo comes to mind.
2) You want to light transparent objects.
3) You want to use hardware MSAA.
4) It's just not feasible to convert your engine, pipeline, and content over to deferred rendering before your game ships.

With that in mind, I tried a little experiment to see how many individual lights I could get to affect a mesh using a forward rendering approach without performance tanking. I realized that I needed some way of spatially dividing the mesh so that any given pixel is affected by only a certain number of lights. A uniform grid quickly explodes in terms of size and computation cost before it can provide the necessary fidelity.

I came up with the idea of what I called an "independent grid". First, think of a uniform 2D grid. Then, think of a non-uniform grid, wherein each row or column is not necessarily the same thickness as any of the others. Then imagine that each row divides up its columns as it sees fit. The effect is that nothing necessarily lines up; there are not necessarily any vertical grid lines traversing the entire grid. Finally, extend this to 3D. This allows for a more sporadic distribution than a uniform grid can provide.

How does it work in the pixel shader? First, a texture is sampled to find the X coordinate of the pixel's grid cell. That is used as a texture coordinate for a second texture, which determines the Y coordinate. Finally, that is used to sample a third texture, which tells which lights affect the grid cell that the pixel lies in (the Z coordinate is implicit here). The light positions and colors are also stored in a texture. The shader processes 3 lights, which yields 3 texture samples to determine the lights + 6 samples to read the light values = 9 samples total. In my test, I used an 8x8x8 grid. The grid textures ended up holding 66.25K of pixel data, and the lights texture is another 4K. That's kind of bulky if each object has its own grid, but you could perhaps use a more refined grid that is shared by many objects. Also, the texture layouts could be changed to make those numbers 18.25K and 24K, respectively.
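In rough HLSL, the chain of look-ups goes something like this. The packing here is my own guess at one workable encoding, not necessarily what my test used:

// Sketch of the three chained look-ups for the independent grid.
sampler2D gXTex;     // object-space x -> which X slab
sampler2D gYTex;     // (X slab, object-space y) -> cell, as a combined
                     // (slab, column) coordinate
sampler2D gCellTex;  // (combined coord, object-space z) -> light indices

float3 FindCellLights(float3 objPos01) // position normalized to [0,1]^3
{
    // 1) Which X slab contains this pixel? The non-uniform splits are
    //    baked into the texture contents.
    float xSlab = tex2D(gXTex, float2(objPos01.x, 0)).r;

    // 2) Each slab divides its columns independently, so the column
    //    look-up is fed the slab we just found.
    float xyCell = tex2D(gYTex, float2(xSlab, objPos01.y)).r;

    // 3) Fetch the packed light indices for the cell; the Z division
    //    is implicit in this texture's layout.
    return tex2D(gCellTex, float2(xyCell, objPos01.z)).rgb;
}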

The results were OK. There were definitely artifacts that wouldn't be present with a deferred renderer. The results depend heavily on the algorithm used to divide the lights into a grid. I used a rather simple algorithm, which I figured would be pretty fast, to choose where to split the grid cells. The test for determining the most important lights for a cell is also pretty basic. Doing this step on the GPU would allow for a more numerically-intense computation. Also, there was no fading to avoid sudden pops. Improving any one of these parts could yield much better visual quality.

Still, deferred rendering wins. Independent lighting grids might be most useful as a way to augment a deferred renderer for transparent or otherwise-special materials. Anyway, here are some screen shots:



Lighting Results
Video - 23.3 mb
X - the first grid division
Dividing along the Y axis
Gridding along Z
Putting all three together
The colors of the lights affecting each grid cell

Windshield / Dynamic Shader Compiler
07.03.2007 11:50PM | Dan

Requirements: Shader Model 3.0 Graphics Card
DirectX 9.0c April 2007 or newer (4.09.0000.0904 in dxdiag)
Windows XP (Vista and/or Windows 2000 may work; not tested)
Download: Windshield_07_03_07.zip (full source included)

This has two major components: the pretty side (well, bear in mind that I'm not an artist and don't have 3DS Max at home, so the scenes behind the windshield aren't the greatest) and the behind-the-scenes side.

The pretty side is the windshield shader that you can see running in the demo. There are several controls you can use to choose different versions of the shader. It is similar to the water on the windows seen in the ATI toy shop demo. However, this expands upon that idea with a blurriness map that is used to make both point lights and the scene behind the windshield blurrier, depending on the water on the windshield.
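As a sketch of the blurriness-map idea, assuming the scene behind the glass is available as a mipmapped texture (the names here are illustrative, not the demo's actual code):

// Blurriness-map sketch: more water -> higher blurriness -> coarser mip
// -> blurrier view through the glass.
sampler2D gSceneBehindGlass; // rendered scene, with mip levels
sampler2D gBlurrinessMap;    // driven by where water sits on the glass
float     gMaxBlurMips;      // mip levels that full blurriness pushes down

float4 WindshieldPS(float2 uv : TEXCOORD0) : COLOR
{
    float blur = tex2D(gBlurrinessMap, uv).r;
    // tex2Dlod lets us pick the mip level per pixel; the same blur
    // factor can also widen the point-light highlights.
    float3 scene = tex2Dlod(gSceneBehindGlass,
                            float4(uv, 0, blur * gMaxBlurMips)).rgb;
    return float4(scene, 1);
}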

However, if you've worked on shaders in a retail game, particularly a PC game with all sorts of detail settings, you probably have experienced how difficult it can be to manage all those shaders. That's where my "dynamic shader compiler" comes in.

TECHNICAL JARGON ALERT

The dynamic shader compiler looks at a set of "shader settings" and compiles the appropriate versions of the shaders. It works using a hash table that associates each "material" (corresponding to a .fx shader file) with 2 functions: one that names the shader variants and one that generates the appropriate shader code. The compiler then compiles that code and stores it in another hash table, where it is associated with the variant name.

Each frame, each material looks at its shader settings to determine whether or not its current shader technique is up to date. If not, it will first check to see if the correct shader is already sitting in memory. Otherwise, it will check to see if the appropriate binary shader is on disk. If both those steps fail, it will compile the needed variant of the shader and save the binary to disk.

In this system, each shader can respond or not respond to certain shader settings because it (potentially) has a unique set of functions associated with it. For this to work, each shader contains a macro that takes a number of parameters and defines a technique. This means that you will custom-code these functions, but that is much preferable to, say, typing all the different techniques by hand, which is tedious, boring, and prone to copy-and-paste errors. Consider that the windshield shader has 3^5 = 243 variations, and you can see what I'm talking about...
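For illustration, here's the sort of .fx macro I mean. The shader names and settings are hypothetical; a real shader would expose whatever parameters it cares about:

// Hypothetical technique-defining macro. The compiler pastes in one
// combination of settings to emit exactly the variant it needs, using
// FX's ability to bind literals to uniform shader parameters.
// (WindshieldVS/WindshieldPS are assumed defined above, with
// WindshieldPS taking matching uniform int parameters.)
#define DEFINE_TECHNIQUE(name, RAIN_QUALITY, BLUR_QUALITY, NUM_LIGHTS)  \
    technique name                                                      \
    {                                                                   \
        pass P0                                                         \
        {                                                               \
            VertexShader = compile vs_3_0 WindshieldVS();               \
            PixelShader  = compile ps_3_0                               \
                WindshieldPS(RAIN_QUALITY, BLUR_QUALITY, NUM_LIGHTS);   \
        }                                                               \
    }

// One line per variant, instead of a hand-typed technique body each:
DEFINE_TECHNIQUE(Windshield_Low,  0, 0, 1)
DEFINE_TECHNIQUE(Windshield_High, 2, 2, 3)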

This is just a rough draft, and there are plenty of ways to expand upon it, such as replacing the hard-coded functions with scripts, and so on.



Rockin' Raytracing
06.06.2006 10:08PM | Dan

A few months back, I did some work modifying a basic raytracing framework. I added Monte Carlo sampling for the rendering of soft shadows. I also added radiosity ("light bleeding" in layman's terms) which uses Monte Carlo sampling to compute how much light is transferred between surfaces.
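Here's a sketch of the soft-shadow estimate, written in HLSL syntax for consistency with the other snippets on this page (the raytracer itself is CPU code). The single-sphere "scene" and all names are placeholders:

// Estimate how much of a rectangular area light a point can see by
// shooting shadow rays at points spread across the light.
float3 gLightCorner; // one corner of the rectangular area light
float3 gLightEdgeU;  // edges spanning the light's rectangle
float3 gLightEdgeV;
float4 gOccluder;    // xyz = sphere center, w = radius (stand-in scene)

bool RayHitsSphere(float3 origin, float3 dir, float maxT)
{
    float3 oc = origin - gOccluder.xyz;
    float b = dot(oc, dir);
    float c = dot(oc, oc) - gOccluder.w * gOccluder.w;
    float disc = b * b - c;
    if (disc < 0) return false;
    float t = -b - sqrt(disc);
    return t > 0 && t < maxT;
}

float SoftShadow(float3 surfacePos, int gridSize) // 8 -> 64 shadow rays
{
    float visible = 0;
    for (int i = 0; i < gridSize; ++i)
    for (int j = 0; j < gridSize; ++j)
    {
        // Stratified sample positions on the light. (A Monte Carlo
        // version also jitters each sample randomly within its cell;
        // omitted here for brevity.)
        float2 uv = (float2(i, j) + 0.5) / gridSize;
        float3 lightPos = gLightCorner + uv.x * gLightEdgeU
                                       + uv.y * gLightEdgeV;
        float3 toLight = lightPos - surfacePos;
        float  dist = length(toLight);
        if (!RayHitsSphere(surfacePos, toLight / dist, dist))
            visible += 1;
    }
    // 0 = fully in shadow, 1 = fully lit; more samples = less grain.
    return visible / (gridSize * gridSize);
}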

Screen shots:

Radiosity
The materials for the surfaces are just solid colors. Each rectangle has its own color. Notice how the red and green from the side walls bleed onto the scene and how the blue from the top of the box bleeds onto the upper half of the scene.


Monte Carlo Soft Shadows
50 samples per pixel
128 samples per pixel
The more samples per pixel, the longer it takes to render a scene, but the better (less grainy) the image is.


Uberrefraction
06.06.2006 09:13PM | Dan

I re-engineered how the refraction is done in ChessGC with great results. Not only did I improve the speed several-fold, but I also managed to improve the visuals! The game now has what I call "recursive refraction," meaning that if you see one refractive object through another, the scene is refracted twice. There's no limit to the depth of recursion.
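In sketch form, one refraction pass might look like the HLSL below, assuming the scene rendered so far has been copied into a texture between pieces; all names are illustrative. Because each pass samples an image that already contains the previous pieces' refractions, pieces seen through pieces get refracted again and again:

sampler2D gSceneCopy;       // everything rendered so far
float     gRefractStrength; // how far surface normals bend the look-up
float4    gTint;            // piece color; alpha = how much it filters

float4 RefractPS(float4 clipPos    : TEXCOORD0,  // projected position
                 float3 viewNormal : TEXCOORD1) : COLOR
{
    // Screen-space UV of this pixel, nudged by the view-space normal:
    // flat areas refract a little, silhouettes refract a lot.
    float2 uv = clipPos.xy / clipPos.w * float2(0.5, -0.5) + 0.5;
    uv += viewNormal.xy * gRefractStrength;

    float3 behind = tex2D(gSceneCopy, uv).rgb;
    return float4(lerp(behind, gTint.rgb, gTint.a), 1.0);
}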

Also note that reflection of the light on the surface of a piece blocks out the light refracted through the piece.

Here are some screen shots:

start of game
from above
close-up
smash!
complex shape
Queen selected
pawn selected
30's a crowd
clusterfun


Refraction, baby!
05.31.2006 09:52PM | Dan

I've been working on the graphics for a chess game for the Game Development Club at UCF. I'm pretty proud of how good I've made those chess pieces look. They are refractive: you can see all the rest of the environment through them. There's no limit to how many chess pieces you can see through each other, other than how much light the transparency values let through.

The board also reflects the pieces, which is pretty minor, in comparison, but a nice touch. Here are some screen shots:

start of game
pieces visible through others
no limit to transparency depth
pawn selected
possible moves
Pawn squash!
upside down


B-Splines
05.28.2006 04:35PM | Dan

Here is a simple little program that draws uniform cubic B-spline curves and surfaces. It takes in the control points from a text file. You can adjust the tessellation factor and toggle drawing of the control polygon/mesh. Details are in the readme.

BSpline_DHolbert.zip
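For reference, the math being tessellated is just the uniform cubic B-spline basis; here's a minimal sketch of one curve segment (in HLSL syntax, to match the other snippets on this page):

// Evaluate one uniform cubic B-spline segment from 4 control points.
float3 BSplinePoint(float3 p0, float3 p1, float3 p2, float3 p3, float t)
{
    // Uniform cubic B-spline basis, t in [0,1] within the segment.
    float t2 = t * t, t3 = t2 * t;
    float b0 = (1 - 3*t + 3*t2 - t3)   / 6.0;
    float b1 = (4 - 6*t2 + 3*t3)       / 6.0;
    float b2 = (1 + 3*t + 3*t2 - 3*t3) / 6.0;
    float b3 = t3                      / 6.0;
    return b0*p0 + b1*p1 + b2*p2 + b3*p3; // weights always sum to 1
}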

Bezier Splines!
12.10.2005 04:00PM | Dan

My final project for my Computer Graphics class involved making a program that used shaders to generate cubic Bezier curves and bicubic Bezier surfaces. The positions of the vertices were calculated in the shaders from the t or u and v coordinates. For the surfaces, per-pixel normals were calculated in the fragment shader from the derivative of the Bezier function.
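Here's a sketch of the curve case in HLSL, with illustrative uniform names. The surface version is the same idea in u and v, and the fragment shader gets its normals from the Bezier derivatives:

// The vertex carries only its parameter t; the position is computed in
// the shader from the cubic Bezier control points.
float3   gP0, gP1, gP2, gP3; // control points, set per draw call
float4x4 gWorldViewProj;

float4 BezierVS(float t : TEXCOORD0) : POSITION
{
    float s = 1 - t;
    // Bernstein form of a cubic Bezier curve.
    float3 p = s*s*s*gP0 + 3*s*s*t*gP1 + 3*s*t*t*gP2 + t*t*t*gP3;
    return mul(float4(p, 1), gWorldViewProj);
}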

I also created spline-like curves and surfaces for which each piece is a Bezier curve or surface.

You can move the control points around in real-time. It's pretty fun to see what shapes you can make. Without further ado, here is the link:

Bezier.zip

Parallax and Shadows
12.09.2005 03:24AM | Dan

Here's a little goody for you. This demo shows a parallax-mapped torus casting a shadow on a plane below. As self-occlusion is possible with shadow maps, the torus self-shadows.

Shadow Mapping

Skeletal Animation
11.18.2005 06:16PM | Dan

When I've had some free time, I've been hard at work on my skeletal animation, both developing a 3DS Max exporter and implementing skeletal animation in Dissident Logic's Warlock engine.

Here are demos for your downloading pleasure:
Bone Toggling - Shows the hierarchical nature of the skeleton by allowing you to toggle which bones are animated with the 5, 6, and 7 keys.

Teapot - Demonstrates keyframe interpolation by comparing it to non-interpolated animation. The "smooth" program interpolates and the "nointerp" program does the obvious.
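For the curious, per bone the "smooth" version does something like the sketch below (HLSL syntax for consistency with the other snippets; in the engine this interpolation runs on the CPU, and all names are illustrative):

// Blend two quaternions, taking the short way around, then renormalize
// (nlerp--a common cheap stand-in for a full slerp).
float4 BlendQuat(float4 q0, float4 q1, float s)
{
    if (dot(q0, q1) < 0) q1 = -q1;
    return normalize(lerp(q0, q1, s));
}

void InterpolateKeyframes(float3 pos0, float4 rot0, float time0,
                          float3 pos1, float4 rot1, float time1,
                          float t, out float3 pos, out float4 rot)
{
    // How far between keyframe 0 and keyframe 1 are we?
    float s = saturate((t - time0) / (time1 - time0));
    pos = lerp(pos0, pos1, s);       // positions: plain linear blend
    rot = BlendQuat(rot0, rot1, s);  // rotations: quaternion blend
}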

Shadowy Developments
08.20.2003 09:04PM | Dan

Well, it seems that Dissident Logic always has different projects to work on. We've begun development on another project. I don't want to release any info about it just yet, so keep your eyes peeled for updates. :)



©2002-2003 Dissident Logic