Thursday, February 11, 2010

Volumetric fog spoilers

Okay, so you want to make volumetric fog. Volumetric fog has descended from its days largely as a gimmick to being situationally useful, but some difficulties remain: in particular, it's really hard to model changes in the light inside the fog. There are techniques you can use for volumetric shadows within the fog, like rendering the depths of the front and back faces of non-overlapping volumes into a pair of accumulation textures, then using the difference between the two to determine the amount of distance penetrated.

Let's focus on a simpler implementation though: Planar, infinite, and with a linear transitional region. A transitional region is nice because it means the fog appears to gradually taper off instead of being conspicuously contained entirely below a flat plane.

In practice, there is one primary factor that needs to be determined: The amount of fog penetrated by the line from the viewpoint to the surface. In determining that, the transitional layer and the full-density layer actually need to be calculated separately:

Transition layer

For the transition layer, what you want to do is multiply the distance traveled through the transition layer by the average density of the fog. Fortunately, due to some quirks of the math involved, there's a very easy way to get this: The midpoint of the entry and exit points of the transitional region will be located at a point where the fog density is equal to the average density passed through. The entry and exit points can be found by taking the viewpoint and target distances and clamping them to the entry and exit planes.
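A CPU-side sketch of that midpoint trick (plain Python, with illustrative function names; distances are signed plane distances, with density 0 at the shallow plane ramping linearly to 1 at the deep plane), checked against brute-force integration:

```python
def transition_coverage(camera_dist, point_dist, shallow, deep):
    # Clamp both endpoints' plane distances into the transition band.
    cc = max(deep, min(shallow, camera_dist))
    cp = max(deep, min(shallow, point_dist))
    # Density is linear in plane distance, so the average density over the
    # clamped sub-segment equals the density at its midpoint.
    mid = 0.5 * (cc + cp)
    avg_density = (shallow - mid) / (shallow - deep)
    # Scale by the fraction of the full ray inside the transition band.
    return avg_density * abs(cc - cp) / abs(camera_dist - point_dist)

def transition_coverage_brute(camera_dist, point_dist, shallow, deep, n=100001):
    # Numerically average the transition-zone density along the segment.
    total = 0.0
    for i in range(n):
        d = camera_dist + (point_dist - camera_dist) * i / (n - 1)
        if deep <= d <= shallow:
            total += (shallow - d) / (shallow - deep)
    return total / n
```

Both agree, which is the whole point: no integration is needed in the shader, just two clamps and a midpoint.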

Full-density layer

The full-density layer is a bit more complex, since it behaves differently depending on whether the camera is inside or outside of the fog. For a camera inside the fog, the fogged portion is represented by the distance from the camera to the fog plane. For a camera outside of the fog, the fogged portion is represented by the distance from the object to the fog plane. If you want to do it in one pass, both of these modes can be handled by dividing a linearly interpolated plane distance by the length of the camera-to-point segment.
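A small reference sketch of both cases (illustrative Python, not the shader itself): compute the fraction of the camera-to-point segment that lies below the full-density plane, using signed plane distances as before.

```python
def full_coverage(camera_dist, point_dist, deep):
    """Fraction of the camera-to-point ray below the full-density plane.

    Fog is at full density wherever the signed plane distance <= deep.
    """
    lo = min(camera_dist, point_dist)
    hi = max(camera_dist, point_dist)
    if hi <= deep:
        return 1.0          # entire segment inside the full-density region
    if lo >= deep:
        return 0.0          # entire segment above it
    return (deep - lo) / (hi - lo)
```

The shader version below avoids the min/max by compiling separate permutations for the camera-inside and camera-outside cases.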

Since the camera being inside or outside the fog is completely determinable in advance, you can easily make permutations based on it and skip a branch in the shader. With a deferred renderer, you can use depth information and the fog plane to determine all of the distances. With a forward renderer, most of the distance factors interpolate linearly, allowing you to do some clamps and divides entirely in the shader.

Regardless of which you use, once you have the complete distance traveled, the most physically accurate determination of the amount still visible is:

min(1, e^(-length(cameraToVert) * coverage * density))

You don't have to use e as the base though: Using 2 is a bit faster, and you can rescale the density coefficient to achieve any behavior you could have attained using e.
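Concretely, the rescale factor is log2(e): since e^(-x) = 2^(-x * log2(e)), folding log2(e) into the density coefficient once makes the two bases produce identical results. A quick check (illustrative names):

```python
import math

def visibility_base_e(dist, density):
    return min(1.0, math.exp(-dist * density))

def visibility_base_2(dist, density):
    # Fold log2(e) into the density coefficient once, offline;
    # the per-pixel cost is then just the cheaper exp2.
    return min(1.0, 2.0 ** (-dist * density * math.log2(math.e)))
```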

As usual, the shader code spoilers:

// VEncodeFog : Encodes a 4-component vector containing fraction components used
// to calculate the fog factor
float4 VEncodeFog(float3 cameraPos, float3 vertPos, float4 fogPlane, float fogTransitionDepth)
{
    float cameraDist, pointDist;

    cameraDist = dot(cameraPos, fogPlane.xyz);
    pointDist = dot(vertPos, fogPlane.xyz);

    return float4(cameraDist, fogPlane.w, fogPlane.w - fogTransitionDepth, pointDist);
}

// PDecodeFog : Returns the fraction of the original scene to display given
// an encoded fog fraction and the camera-to-vertex vector
// rcpFogTransitionDepth = 1/fogTransitionDepth
float PDecodeFog(float4 fogFactors, float3 cameraToVert, float fogDensityScalar, float rcpFogTransitionDepth)
{
    // x = cameraDist, y = shallowFogPlaneDist, z = deepFogPlaneDist (< shallow), w = pointDist
    float3 diffs = fogFactors.wzz - fogFactors.xxw;

    float cameraToPointDist = diffs.x;
    float cameraToFogDist = diffs.y;
    float nPointToFogDist = diffs.z;

    float rAbsCameraToPointDist = 1.0 / abs(cameraToPointDist);

    // Calculate the average density of the transition zone fog.
    // Since density is linear, this will be the same as the density at the midpoint of the ray,
    // clamped to the boundaries of the transition zone
    float clampedCameraTransitionPoint = max(fogFactors.z, min(fogFactors.y, fogFactors.x));
    float clampedPointTransitionPoint = max(fogFactors.z, min(fogFactors.y, fogFactors.w));
    float transitionPointAverage = (clampedPointTransitionPoint + clampedCameraTransitionPoint) * 0.5;

    float transitionAverageDensity = (fogFactors.y - transitionPointAverage) * rcpFogTransitionDepth;

    // Determine a coverage factor based on the density and the fraction of the ray that passed through the transition zone
    float transitionCoverage = transitionAverageDensity *
        abs(clampedCameraTransitionPoint - clampedPointTransitionPoint) * rAbsCameraToPointDist;

    // Calculate coverage for the full-density portion of the volume as the fraction of the ray intersecting
    // the bottom part of the transition zone.
    // Permutation macro (name is illustrative): defined when the camera is below the full-density plane.
#if CAMERA_INSIDE_FULL_FOG
    float fullCoverage = cameraToFogDist * rAbsCameraToPointDist;
    if(nPointToFogDist >= 0.0)
        fullCoverage = 1.0;
#else
    float fullCoverage = max(0.0, nPointToFogDist * rAbsCameraToPointDist);
#endif

    float totalCoverage = fullCoverage + transitionCoverage;

    // Use inverse exponential scaling with distance
    // fogDensityScalar is pre-negated
    return min(1.0, exp2(length(cameraToVert) * totalCoverage * fogDensityScalar));
}

Wednesday, February 10, 2010

Self-shadowing textures using radial horizon maps

I don't think I need to extol the virtues of self-shadowing textures, so instead, I'll simply present a method for doing them:

Any given point on a heightmap can represent all of the shadowing from the surrounding heightmap as a waveform h = f(t), where t is a radial angle across the base plane, and h is the sine of the angle from the base plane to the highest-angled heightmap sample in that direction. h, in other words, is the horizon level for a light coming from that direction.

Anyway, the method for precomputing this should be pretty obvious: Fire traces outward from each sample, finding for each direction the shallowest angle that clears all heightmap samples. Once you have it, it's simply a matter of encoding it, which can be easily done using Fourier series. Each Fourier band requires 2 coefficients, except the first, which requires only one, since its sine band is zero. I use 5 coefficients (stored as an RGBA8 and A8), but 3 works acceptably. More coefficients require more storage, but produce sharper shadows.

Fourier series are really straightforward, but here are the usual Cartesian coordinate spoilers:

Assuming x = sin(t) and y = cos(t)

[sin(0t)] = 0
[cos(0t)] = 1
[sin(t)] = x
[cos(t)] = y
[sin(2t)] = 2*x*y
[cos(2t)] = y*y - x*x

Averaged over the circle, the squared constant band is 1, but every other squared band averages to 0.5. That means you need to double whatever you get from those bands to reproduce the original waveform.
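The whole encode/decode round trip can be sketched in a few lines (illustrative Python, not the actual tool code): project the horizon waveform onto the five bands by averaging around the circle, then reconstruct with the non-constant bands doubled.

```python
import math

def encode_horizon(f, n=256):
    """Project a horizon waveform f(t) onto 5 Fourier coefficients
    by averaging f times each band over the circle."""
    c = [0.0] * 5
    for i in range(n):
        t = 2.0 * math.pi * i / n
        v = f(t)
        c[0] += v
        c[1] += v * math.sin(t)
        c[2] += v * math.cos(t)
        c[3] += v * math.sin(2.0 * t)
        c[4] += v * math.cos(2.0 * t)
    return [k / n for k in c]

def decode_horizon(c, t):
    # Non-constant squared bands average to 0.5, so double their contribution.
    return (c[0]
            + 2.0 * (c[1] * math.sin(t) + c[2] * math.cos(t)
                     + c[3] * math.sin(2.0 * t) + c[4] * math.cos(2.0 * t)))
```

For a waveform that is already band-limited to two harmonics, the round trip is exact; real horizon waveforms lose their higher harmonics, which is what blurs the shadows.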

There's one other problem, which is that the horizon will move rapidly at shallow angles, right where precision breaks down. You need more precision at low values, and my recommended way of doing that is storing the sign-preserving square root of the original value, and multiplying the stored value by the absolute value of itself in the shader.
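That compression scheme is just a sign-preserving square root on encode and a sign-preserving square on decode (illustrative Python):

```python
import math

def compress(v):
    # Sign-preserving square root: spends more of the 8-bit value
    # range on small magnitudes, where the horizon moves fastest.
    return math.copysign(math.sqrt(abs(v)), v)

def expand(v):
    # In the shader this is simply: v * abs(v)
    return v * abs(v)
```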

For added effect, rather than simply doing a comparison of the light angle to the horizon level, you can take the difference of the two, multiply it by some constant, and saturate. This will produce a softer shadow edge.
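That is, a hard in-shadow/out-of-shadow comparison gets widened into a narrow linear ramp (illustrative Python; the constant 20 matches the shader below and controls the edge width):

```python
def soft_shadow(light_sin, horizon, sharpness=20.0):
    # saturate(0.5 + k * (lightAngleSine - horizonLevel))
    return max(0.0, min(1.0, 0.5 + sharpness * (light_sin - horizon)))
```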

Shader code to decode this, taking a texture coordinate and a tangent-space direction to the light source:

float SelfShadowing(float3 tspaceLightDir, float2 tc)
{
    float3 nLightDir = normalize(tspaceLightDir);
    float4 lqCoefs = tex2D(pHorizonLQMap, tc) * 4.0 - 2.0; // Premultiply by 2
    float cCoef = tex2D(pHorizonCCUMap, tc).x * 2.0 - 1.0;

    lqCoefs *= abs(lqCoefs); // Undo dynamic range compression
    cCoef *= abs(cCoef);

    float2 tspaceRadial = normalize(float3(tspaceLightDir.xy, 0.0)).xy;

    float3 qmultipliers = tspaceRadial.xyx * tspaceRadial.yyx * float3(2.0, 1.0, -1.0);

    float horizonLevel = cCoef + dot(lqCoefs.rg, tspaceRadial)
        + dot(lqCoefs.baa, qmultipliers);

    // nLightDir.z is the sine of the light's angle above the base plane
    return saturate(0.5 + 20.0*(nLightDir.z - horizonLevel));
}

An additional optimization would be to exploit the fact that the range of the coefficients other than the constant band is not -1..1, but rather, -2/pi..2/pi. Expanding this to -1..1 gives you a good deal of additional precision.
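A quick numerical sanity check of that bound (illustrative Python): with the horizon waveform bounded to -1..1, a band coefficient is maximized by the square wave f(t) = sign(cos t), whose cos-band average is exactly 2/pi.

```python
import math

def worst_case_band_coef(n=100000):
    # f(t) = sign(cos(t)) stays within -1..1 and maximizes
    # the averaged f(t)*cos(t) projection.
    acc = 0.0
    for i in range(n):
        t = 2.0 * math.pi * i / n
        acc += math.copysign(1.0, math.cos(t)) * math.cos(t)
    return acc / n
```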