A tribute to my amazing childhood friend Ray Tracing

I’m going to take a moment to appreciate the talented 3D artist Raymond “Ray” Tracing’s spirited career and how he grew his fledgling personal brand into the household name he is today.

Who is Ray Tracing?

Today his name is everywhere. The Xbox Series X and PlayStation 5 just released, and people are desperate to get their hands on the latest generation of consoles just to see Ray’s immaculate work tweaking the lighting and reflection settings of games like Marvel’s Spider-Man: Miles Morales and Demon’s Souls. You might know him as Ray Tracing, the 3D rendering expert and controversial public figure, but back when I grew up with him, I just knew him as Raymond.

The gaming community was so excited about his work that companies started building in an option to disable Ray’s famous lighting tweaks, just so players could compare exactly what a game looked like before and after Ray joined the project.

Ray’s humble origins

Back when I was in high school I made small video games in Unity3D. During a lunch break I was working on a scene and I could tell that this shy kid was watching me. I looked over at him, and he said “that looks cool”. Back then I didn’t know anyone else who made video games, so I was immediately excited that someone showed an interest. I explained how to move a directional light, and he said “yeah, I know”. This surprised me greatly. I found out he made video games too, and when I finally convinced him to show me something he’d made, I was awestruck. Before me was this large-breasted anime girl wearing glasses, and the lighting and reflections were more realistic than anything I had ever seen. I said “wow, I didn’t even know that was possible. What did you say your name was?” He smirked at me and said: “Raymond. Raymond Tracing.”

Ray Tracing’s unique approach to realistic light reflections

It’s hard to explain what Raymond’s approach even is without getting technical, but I want to share my limited understanding of it just to give an idea of how deep his mastery of 3D graphics is. It all starts with a thing called Reflection Probes. In order to have reflections in your scene at all, you have to place one of those probes. It acts like a little spherical camera and takes a picture 360 degrees around it, and reflective objects will use this image to determine what to reflect.

Now, don’t laugh, but this is what an outdated 3D reflection setup used to look like before Raymond Tracing’s innovation.

Raymond’s revolutionary idea was that instead of placing a single reflection probe in the scene, he would place them all around the scene, roughly encapsulating the entire volume of space that is visible to the player in an almost grid-like pattern. Sounds simple but as you can imagine, placing every individual probe by hand like this could take minutes.

And this is what it looks like with Raymond’s Technique, I think we can all agree that this is a little more next-gen.

After Raymond released his first game, Tsundere-chan In The Hall of Mirrors (2013), it was immediately praised by critics and players alike for its breathtakingly realistic portrayal of reflective objects. He was quick to patent his technique – a move some people in the industry criticize – but it immediately cemented him as the singular proprietor of realistic reflections in video games. After making several more critically acclaimed video games chronicling Tsundere-chan’s adventures around reflective surfaces, in 2018 he finally set his sights on sharing his revolutionary technique with the rest of the industry. Hired by 4A Games as a Lead Reflection Artist, he set out to apply his technique to a game called Metro Exodus. But there was a snag: the game world consisted of massive open spaces. How was he going to place that many reflection probes by hand?

Raymond Technique Extreme (RTX)

One day in college Raymond was bored and decided to throw as many ping pong balls into a solo cup as he could. When he went to empty the cup, he noticed how evenly spaced they were. They were distributed exactly across the space of the inside of the solo cup. In a swell of motivation and excitement he opened Unity. It was time to stop placing every individual probe by hand. The artistry of placing the reflection probe in exactly the right position had to make way for a fast-paced, tool-assisted technique. This led to Raymond’s greatest innovation yet:

his Tool for Unrelenting Rapid Installation of N-Gons.

“TURING” was a new automated system that used a physics simulation to literally throw reflection probes all over the scene. It took the 3D rendering community by storm.

The downfall of Ray Tracing

“Nvidia”, a company that sells consumer rendering hardware, approached Raymond and asked his permission to create a dedicated hardware chip inside their GPUs to automatically apply TURING to a scene. Raymond hesitated at first but was quickly beguiled by the large sum of money they offered him. He was – as some people called it – a ‘sore winner’.

People weren’t too happy about it. When the next generation of gaming consoles came around, manufacturers started integrating RTX and marketing it as the single greatest reason to upgrade your console. But when launch day finally came, people were less than enthusiastic.

A Ray of Hope

When things got personal, Raymond took some time off to travel the world. No one heard from him for months. When he finally came back I took a moment to sit down with him and talk about how he was feeling. He said he was doing much better now and that it really helped to spend some time in Tibet and clear his head. I couldn’t help but notice that he had his laptop sitting in front of his backpack. I asked him if he had been doing any tinkering at all. He glanced over and slowly and carefully closed the lid. Before the screen had tilted all the way down I could make out just enough pixels to read the text: Tsundere-chan Royale. Raymond looked at me with a twinkle in his eye, and smirked.


Introducing: The Scene View Picker

Assigning references via the scene in Unity

Are you an organized worker? Have you ever organized your scene hierarchy so hard that everything makes a lot of sense, but you can’t easily find that one object? That little bastard. It’s right there in the scene view, I’m looking at it.

Has this ever happened to you?

The Solution

Introducing: the Scene View Picker. It’s a PropertyDrawer for UnityEngine.Object that adds a handy button next to every object reference in your script’s inspector, letting you assign a reference straight from the scene view.

It’s particularly useful for UI, but it works for references of all types.

Extra Features

Sometimes the object you’re looking for is surrounded by other objects that would be equally valid. In that case you can Middle-Click to select nearby objects from a dropdown.

One of its more obscure features but it’s proven very useful in some of our projects.

In Conclusion

It’s free, very convenient and best of all: it’s plug-and-play. All your custom scripts (save perhaps for the ones with dodgy editor scripts that don’t use serialized properties) will now have this handy little button.

It’s on the Open Unity Package Manager (OpenUPM) and also available on GitHub.

I highly recommend it, and if you have any ideas on how to improve it feel free to open a Pull Request.


GPU Spline Deformation in Unity

How to deform any mesh along a spline by leveraging textures

Textures are just data grids and they don’t need to describe colours nor correspond to existing geometry in any way. With a little creativity they can mean anything.

Many 3D objects and effects could be described as a simple shape repeating across a spline. Most modelling packages offer this kind of functionality, yet it’s not quite as intuitive to do something like this in a game engine. Most implementations fall back to the brute-force approach of generating a new mesh via code. I’m here to offer an interesting alternative.


There is an uptick of new effects and optimizations that stems from a simple idea: textures don’t have to describe colours along polygons.

Inspired by that idea I set out to try a new workflow: render deformation along a spline to a texture, then apply that deformation to any mesh using a shader. Let’s walk through how it works and what you can do with it.

A novel approach to deforming any mesh along a spline

Deformation Matrices

Transformation matrices are used in 3D rendering to translate co-ordinates from one space to another, such as from local space to world space. Without going into too much detail on the maths (there are plenty of good articles for that), it’s worth knowing how transformation matrices are stored. In shader functions, most matrices are stored as a float4x4, which is essentially a two-dimensional array of floats.
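To make that concrete, here’s a tiny sketch of what a float4x4 holds and does, written in plain Python rather than shader code (the matrix values and names are made up purely for illustration):

```python
# A float4x4 is just a 4x4 grid of floats. This hypothetical one scales
# by 2 and translates by (3, 0, 0): a local-to-world transform.
local_to_world = [
    [2.0, 0.0, 0.0, 3.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

def transform(matrix, point):
    """Multiply a 4x4 matrix by a homogeneous [x, y, z, w] point."""
    return [sum(matrix[r][c] * point[c] for c in range(4)) for r in range(4)]

# A local-space point at (1, 1, 0) lands at (5, 2, 0) in world space:
# scaled to (2, 2, 0), then moved 3 units along X.
world_point = transform(local_to_world, [1.0, 1.0, 0.0, 1.0])
```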


The idea started when I wondered if there was an easy way to store a matrix in a texture. There is, actually.


Given that each pixel in a texture can hold RGBA values, each pixel can hold one row of a matrix. A matrix has four rows, so a whole matrix can be stored in as few as four pixels. So if we make a script that samples the position, rotation and scale along a spline, we can construct a matrix at each sample and store it in a texture using the layout described above.
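Here’s a rough Python sketch of that baking idea (the real thing is a Unity editor script writing to an actual Texture2D; the helper names and the spline data here are invented for illustration):

```python
def bake_spline_to_texture(spline_matrices):
    """Bake one 4x4 matrix per spline sample into an RGBA 'texture'.

    Column x holds the matrix for progress x / (width - 1) along the
    spline; each of the four vertical pixels stores one matrix row in
    its RGBA channels. Returns texture[row][column] -> RGBA values.
    """
    width = len(spline_matrices)
    texture = [[None] * width for _ in range(4)]
    for x, matrix in enumerate(spline_matrices):
        for row in range(4):
            texture[row][x] = list(matrix[row])  # RGBA = one matrix row
    return texture

def read_matrix(texture, x):
    """Reconstruct the 4x4 matrix stored in column x (nearest sampling)."""
    return [texture[row][x] for row in range(4)]

# Hypothetical spline: the X translation grows with progress.
identity = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
matrices = []
for x in range(8):
    m = [row[:] for row in identity]
    m[0][3] = float(x)  # X offset at this point on the spline
    matrices.append(m)

texture = bake_spline_to_texture(matrices)
# read_matrix(texture, 5) gives back the matrix baked at column 5.
```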

A 64 KB 1024×8 RGBA16 texture generated by a script. The X-axis represents progress along the spline.

The Shader


Given a base mesh that is aligned with the Z axis and has plenty of vertices, we can very easily transform each ‘slice’ of the mesh along the spline. The process is as follows:

  1. The normalized z-coordinate of the vertex determines what ‘slice’ it belongs to, or how far along the spline it should end up. Let’s call this P.
  2. Sample four vertical points in the deformation texture. The X co-ordinate is P and the Y co-ordinate of each sample is evenly spread.
  3. Combine all four samples into a float4x4 matrix.
  4. Move the vertex back to Z co-ordinate 0, the starting point, and multiply it by the matrix.
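Those four steps can be sketched in Python with nearest-neighbour sampling standing in for the actual vertex shader (names and the example texture are purely illustrative):

```python
def deform_vertex(vertex, texture):
    """Follow the four steps above for one vertex.

    `vertex` is a homogeneous [x, y, z, 1] local-space position whose
    z is already normalized to 0..1; `texture[row][column]` stores one
    matrix row per pixel, as described earlier.
    """
    width = len(texture[0])
    # 1. The normalized z-coordinate is P, the progress along the spline.
    p = min(max(vertex[2], 0.0), 1.0)
    # 2. Sample four vertical pixels at X = P (nearest-neighbour here).
    x = round(p * (width - 1))
    samples = [texture[row][x] for row in range(4)]
    # 3. The four RGBA samples combine into a 4x4 matrix (one per row).
    matrix = samples
    # 4. Move the vertex back to z = 0, then multiply it by the matrix.
    flat = [vertex[0], vertex[1], 0.0, 1.0]
    return [sum(matrix[r][c] * flat[c] for c in range(4)) for r in range(4)]

# Usage: a made-up texture whose column x pushes its slice out to z = x.
width = 11
texture = [[None] * width for _ in range(4)]
for x in range(width):
    m = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, float(x)], [0, 0, 0, 1.0]]
    for row in range(4):
        texture[row][x] = m[row]

# Halfway along the mesh -> column 5 -> the slice lands at z = 5.
moved = deform_vertex([0.5, 0.2, 0.5, 1.0], texture)
```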
It’s that easy!


There are a number of artifacts you’ll see if you’re building something like this from scratch, and they’re worth knowing about.

  1. Using the alpha channel to store non-alpha data surprisingly does not work well out-of-the-box. Most image formats will change the RGB values when the A value is 0, corrupting the data. I’ve found the EXR file format to be the least intrusive when it comes to this. Any other format, especially PSD, will interfere with the data.
  2. Interpolation may work against you. Especially for the rows in the texture, which are distinct values with no relation to one another, interpolation will generate incorrect values. A quick solution to interpolation problems is just to increase the resolution. Despite theoretically only needing four vertical pixels I’ve opted to use eight for this particular reason.
  3. It’s important to think about clamping. For looping splines the X-axis can be repeated, but the Y-axis remains distinct and should be clamped to avoid interpolation artifacts.
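To see why point 2 bites, here’s a small Python sketch of what linear filtering does between two vertically adjacent pixels of the matrix texture (just an illustration of the blending, not a real GPU sampler):

```python
def sample_vertical(column, y):
    """Linear filtering along Y, roughly the way a bilinear sampler
    blends two vertically adjacent pixels."""
    lo = int(y)
    hi = min(lo + 1, len(column) - 1)
    t = y - lo
    return [(1 - t) * a + t * b for a, b in zip(column[lo], column[hi])]

# One texture column holding a matrix: consecutive rows are distinct
# values with no relation to one another.
column = [
    [1.0, 0.0, 0.0, 0.0],  # matrix row 0
    [0.0, 1.0, 0.0, 0.0],  # matrix row 1
    [0.0, 0.0, 1.0, 0.0],  # matrix row 2
    [0.0, 0.0, 0.0, 1.0],  # matrix row 3
]

on_pixel = sample_vertical(column, 1.0)  # exactly on row 1: correct data
between = sample_vertical(column, 0.5)   # halfway: rows 0 and 1 blend
# `between` is [0.5, 0.5, 0.0, 0.0] -- a garbage matrix row, which is
# why extra vertical resolution and clamping help in practice.
```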
Early version of the displacement algorithm

Strengths and Weaknesses

There are certain things this technique does well and certain things it doesn’t. What it does well:

  • Simple generic workflow, any deformation is easily baked and applied.
  • Updated in real-time for immediate feedback while working.
  • Can be animated to achieve creative new effects.
  • Pre-rendered texture assets make this approach extremely performant.

However, there are also certain weaknesses to this approach:

  • As it stands, rendering new textures is done via a script and is not very efficient. Certainly good enough for editor usage, but not optimal for splines that change at runtime.
  • No new geometry is created, so if you want to deform a mesh along an arbitrarily long spline you might have to instantiate more meshes.
  • The bounds of the mesh are not automatically updated and have to be updated via a script to prevent incorrect culling.
Looping splines and animated offsets can create looping meshes.

Closing Thoughts

It’s an interesting technique, and hopefully it helps people let go of their preconceptions about textures, find new optimizations, and maybe even create surprising new real-time effects that weren’t possible before.

The repository for this article can be found here. Good luck, and have fun.


Debug.Log("Hello World!");

Hello. When I make something cool or have acquired some piece of ancient wisdom I’ll share it here.

As my great-great-grandfather Ronald Dietrich Theunissen said: “Why bother with microblogging when you can do macroblogging?”