Crying Suns VFX #4 — The Rim

Christophe SAUVEUR
Sep 24, 2021 · 5 min read

Two years ago, almost to the day, we released our narrative roguelite game Crying Suns. To commemorate this anniversary, here is the fourth and last entry in the series about the game's visual effects. And don’t forget to read the first, second and third parts.

In this post, we will discuss the process behind a subtle effect, yet potentially the most technical one we had to implement: the Rim.

Draw The Line

As most of our scenes are backlit, the artistic direction called for contours of objects and characters to be highlighted, producing an effect similar to the use of a rim light in photography. This process creates a dramatic and somewhat mysterious atmosphere, and contributes perfectly to the grim tone of the game.

An example of rim light in traditional photography — Source: StudioBinder

If we were in a photo studio, we would simply put a light behind the object and shoot our subject. In theory, we could use a similar technique in a virtual 3D environment. In practice, most of our objects are unfortunately 2D elements, and producing a clean rim light effect proved much more challenging than expected.

The Missing Dimension

In 3D, with objects represented as true volumes, we have access to a variety of information (positions, normals, tangents…) that help produce very lifelike rim light effects (through sub-surface scattering for example), or even good enough approximations with an exaggerated Fresnel effect.

Examples of Fresnel shaders — Source: Godot Shaders

Basically, a rim light effect can be described as “draw light where the normals approach perpendicularity to the camera direction”. However, with 2D elements, we are stuck in the plane, and the missing depth dimension takes all that relevant information away from us. Volumes are defined by the colors inside the objects, not by actual meshes of vertices.
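For reference, here is what that rule looks like in 3D, as a minimal HLSL sketch of a Fresnel-style rim term (the function and parameter names are illustrative):

    // Classic 3D rim term: strongest where the surface normal is
    // perpendicular to the view direction, i.e. along the silhouette.
    float RimTerm(float3 worldNormal, float3 worldPos, float3 cameraPos, float rimPower)
    {
        float3 viewDir = normalize(cameraPos - worldPos);
        float facing = saturate(dot(normalize(worldNormal), viewDir));
        // rimPower controls the falloff: higher values give a thinner rim.
        return pow(1.0 - facing, rimPower);
    }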

The Spine of the Problem

With regular sprites, it is possible to provide normal maps for each one to reintroduce normal data in the rendering pipeline. A normal map is basically a texture in which red, green and blue components are used to store the orientation in space of each pixel. This is a powerful and well documented solution.

Example of a sprite normal map — Source: Xenosis: Alien Infection
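Decoding such a map is simple: each channel is remapped from the [0, 1] color range back to the [-1, 1] range of a unit vector’s components, as in this minimal HLSL sketch:

    // Remap a sampled normal-map texel from color space ([0, 1])
    // back to a direction vector ([-1, 1] per component).
    float3 DecodeNormal(float4 texel)
    {
        return normalize(texel.rgb * 2.0 - 1.0);
    }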

However, Crying Suns does not use traditional sprites for most of the assets on which the rim light effect was expected. Instead, we use Spine to produce smooth, almost 3D-like animations for our characters.

The way Spine handles the various parts of a character makes the use of normal maps very difficult. Basically, it creates a mesh for each element and even allows you to deform those meshes in order to simulate changes in perspective, whereas normal maps on traditional sprites assume they are always facing the camera.

We had to find another solution. One that would allow us to generate some kind of normal map in real time from the flat Spine meshes only.

Edge Detection

For our own rim light effect, we settled on the assumption that we only needed to find the edge pixels of the objects, potentially with a width of multiple pixels.

At first, we tried to brute force the issue with a simple shader program. However, it became clear very quickly that this would not work. Eight texture look-ups were needed for each pixel just to determine whether it was near an edge, with a lot of “ifs” that made the shader code hard to read and even harder to execute. The final nail in the coffin came when we tried to widen the edge detection: it required even more texture look-ups and even more tests.
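For illustration, the naive single-pass test looks roughly like this (a sketch of the approach, not the original shader); note how any extra pixel of edge width would widen the loop bounds and multiply the taps:

    // Naive edge test: eight neighbor taps around the current pixel.
    // A wider edge means larger loop bounds and many more taps.
    float EdgeFactor(sampler2D tex, float2 uv, float2 texelSize)
    {
        float here = tex2D(tex, uv).a;
        for (int y = -1; y <= 1; y++)
        {
            for (int x = -1; x <= 1; x++)
            {
                if (x == 0 && y == 0)
                    continue;
                float neighbor = tex2D(tex, uv + float2(x, y) * texelSize).a;
                if (abs(neighbor - here) > 0.5)
                    return 1.0; // coverage changes nearby: this is an edge
            }
        }
        return 0.0;
    }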

It became evident that we had to find a way to pre-bake some kind of global normal map and use it during a specific material pass, on a frame-by-frame basis but within a tight time budget. And this is when compute shaders saved us.

Fifty Shades of Compute

Compute shaders have a lot of benefits that proved key to our implementation:

  • First, they run entirely on the GPU, allowing us to render all Spine characters into a render texture and pass it directly to the compute shader without copying it to CPU memory (a very slow operation),
  • Second, compute shaders are not executed on a fragment-per-fragment basis as regular shaders are: they are massively parallel and can read from and write to a texture at arbitrary locations (which they see as one big chunk of memory),
  • Finally, they are amazingly fast!

The only major downside of our implementation is that, at least at the time of the game’s release, compute shaders were very expensive on mobile hardware, and we had to cut the rim light effect on those platforms (including the Switch).

The first step is to render the silhouettes of the characters on which the rim light effect should be applied. This is easily done with Unity’s Camera.RenderWithShader() method and a simple alpha cutout shader. The result is an alpha texture representing the pixels actually drawn by Spine.
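A minimal C# sketch of this step (the camera setup and all names are illustrative):

    // Renders the silhouettes of rim-lit objects into a RenderTexture,
    // using a replacement shader that only outputs alpha coverage.
    using UnityEngine;

    public class SilhouettePass : MonoBehaviour
    {
        public Camera silhouetteCamera;   // mirrors the main camera's transform and projection
        public Shader alphaCutoutShader;  // the simple alpha cutout replacement shader
        public RenderTexture silhouetteTexture;

        void LateUpdate()
        {
            silhouetteCamera.targetTexture = silhouetteTexture;
            silhouetteCamera.clearFlags = CameraClearFlags.SolidColor;
            silhouetteCamera.backgroundColor = Color.clear;
            // Render the scene once with every material replaced by the cutout shader.
            silhouetteCamera.RenderWithShader(alphaCutoutShader, "");
        }
    }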

The compute shader then consists only of an oriented edge-detection pass over the previous render texture. Only the horizontal and vertical presence or absence of pixels is tested, resulting in a sort of approximated normal map for the edges.
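Driving that pass from C# only requires binding the textures and dispatching the kernel; a sketch, with kernel and property names as assumptions:

    using UnityEngine;

    public class RimEdgePass : MonoBehaviour
    {
        public ComputeShader edgeShader;         // the edge-detection compute shader
        public RenderTexture silhouetteTexture;  // input: alpha silhouettes
        public RenderTexture edgeNormalMap;      // output: created with enableRandomWrite = true

        void LateUpdate()
        {
            int kernel = edgeShader.FindKernel("DetectEdges");
            edgeShader.SetTexture(kernel, "Silhouette", silhouetteTexture);
            edgeShader.SetTexture(kernel, "NormalMap", edgeNormalMap);
            // One 8x8 thread group per tile, matching [numthreads(8, 8, 1)] in the shader.
            edgeShader.Dispatch(kernel, silhouetteTexture.width / 8, silhouetteTexture.height / 8, 1);
        }
    }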

The final lighting process is handled during the render of each object affected by the rim light. It consists only of a simple computation between the direction of each eligible light in the scene and the faked normal read from the generated map. Everything is simulated within the plane.
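In the object’s material, that computation can be sketched as follows (all names, and the packing of the edge mask in the blue channel, are illustrative):

    // Rim contribution for one light, entirely in the 2D plane:
    // a dot product between the faked edge normal and the light direction.
    float3 RimContribution(float4 edgeTexel, float2 pixelPos, float2 lightPos, float3 rimColor)
    {
        float edgeMask = edgeTexel.b;        // 1 on edge pixels, 0 elsewhere
        if (edgeMask <= 0.0)
            return float3(0.0, 0.0, 0.0);
        float2 normal = normalize(edgeTexel.rg * 2.0 - 1.0);
        float2 toLight = normalize(lightPos - pixelPos);
        return rimColor * edgeMask * saturate(dot(normal, toLight));
    }

Below you can see the result and how the readability of the whole scene benefits from it.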

Kaliban without (left) and with the rim light effect (right)

The whole effect is obviously dynamic and both character and light movements are fully supported.

See how the rim light effect adapts to the surrounding moving lights

Source Code

Below is a minimal HLSL sketch of the edge-detection compute shader described above (the names and exact channel packing are illustrative, not the shipped source):
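    // Illustrative sketch of the oriented edge-detection kernel.
    #pragma kernel DetectEdges

    Texture2D<float4> Silhouette;    // input: alpha silhouettes of the rim-lit objects
    RWTexture2D<float4> NormalMap;   // output: approximated edge normals + edge mask

    [numthreads(8, 8, 1)]
    void DetectEdges(uint3 id : SV_DispatchThreadID)
    {
        int2 p = int2(id.xy);
        float here = Silhouette[p].a;

        // Oriented tests: only the horizontal and vertical neighbors are read.
        // (Out-of-bounds reads return 0; border handling is omitted for brevity.)
        float left  = Silhouette[p + int2(-1,  0)].a;
        float right = Silhouette[p + int2( 1,  0)].a;
        float down  = Silhouette[p + int2( 0, -1)].a;
        float up    = Silhouette[p + int2( 0,  1)].a;

        // Coverage differences yield an outward-pointing pseudo-normal.
        float2 gradient = float2(left - right, down - up);
        bool isEdge = (here > 0.0) && (dot(gradient, gradient) > 0.0);
        float2 normal = isEdge ? normalize(gradient) : float2(0.0, 0.0);

        // Pack the normal into [0, 1] and store the edge mask in the blue channel.
        NormalMap[p] = float4(normal * 0.5 + 0.5, isEdge ? 1.0 : 0.0, 1.0);
    }

From there, a wider rim could be obtained by reading neighbors farther away or by dilating the result in a second pass, without any change to the per-object materials.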
