Do you want to be able to tweak a curve or a gradient in real-time and pass it along to a shader? Maybe re-use the same one across different effects? Look no further!
Often when working on a complicated effect you want a colour fall-off, or some specific value fall-off, and a simple mathematical function or a lerp between colours just isn’t cutting it.
The Traditional Solution
What do you do? Maybe start a new Photoshop document. Convert the background layer to a normal layer. Add a layer effect with a gradient overlay. Start moving some keys around. Hope it makes sense. CTRL+S. Back to Unity, wait for it to import. Hmm, not quite right, back to Photoshop. That workflow sucks. The extra waiting time between iterations slows the process down and makes it hard to see how small changes affect the overall result. That goes double for a curve of float values: having to express those as colours is not at all intuitive. The ideal workflow would be to just specify an animation curve or a gradient, have a texture generated from it automatically, and pass that on to your shaders.
The New Solution
That’s what I made! Declare an AnimationCurveTexture field or a GradientTexture field and you are good to go. You can edit it “locally” for a specific instance to get real-time feedback while tweaking, or you can create a re-usable AnimationCurveAsset/GradientAsset. If you want, you can even export it to a texture asset and use that. It is easy to switch back and forth between the different kinds of sources. The point is that there is now an abstraction layer in between: you define the values you want, and a utility takes responsibility for generating or loading the corresponding texture. You keep full control over how the data is formatted, while staying free to focus on the impact of minute changes in values instead.
This is ideal for creating an intuitive shader workflow that encourages frequent iteration, especially in the Built-In Render Pipeline / code-based shaders. The extra control it gives you also allows you to create new workflows for real-time creation of complex project-specific texture requirements that would be complicated to create in Photoshop (think 2D gradients with lots of colours on both axes). Whatever your project needs, it is now in your hands to extend this tool and facilitate a productive workflow for it.
Curves & Gradients To Texture is now available for free on OpenUPM and on GitHub.
If you make any improvements that you’d like to share with the rest of us, feel free to open a Pull Request.
Optimize your workflows with shortcuts to frequently-used assets.
Imagine this: you’re making a handful of castle-themed levels, and the majority of the work entails placing drawbridges, chandeliers, doors and keys. It’s a very small group of assets, but some of them might be gameplay-related, some might be decorative, and frankly the specific files are scattered across different subfolders of the project. For the duration of your work on these levels, though, these assets belong together. You want to have them all in one place and quickly drag them into the scene. This is where the Asset Palette comes in.
This kind of scenario actually happens a lot in game development. It’s not just level design: as a programmer you might be working on a feature requiring you to constantly tweak a handful of Scriptable Objects. I believe the solution is to identify frequently-used asset-based workflows, create a new group for that workflow and add all the relevant assets to it. From then on you can use it as a panel of shortcuts to greatly speed up your day-to-day work. The Asset Palette does just that.
The Asset Palette is set up using [SerializeReference] and PropertyDrawers, meaning that you can add any kind of custom PaletteEntry to a collection. Do you have a feature that requires you to constantly execute one of a few macro functions? Instead of cluttering the global toolbar menu with your hyper-specific macros you could add buttons for them to your own personal palette.
Every game is different. The workflows required to make your game in an efficient manner are unique to your game. Consider using this free, plug-and-play tool to create the kind of tailored workflow optimization that your game needs.
How to deform any mesh along a spline by leveraging textures
Many 3D objects and effects can be described as a simple shape repeated along a spline. Most modelling packages offer this kind of functionality, yet it’s not quite as intuitive to do something like this in a game engine. Most implementations fall back to the brute-force approach of generating a new mesh via code. I’m here to offer an interesting alternative.
There has been an uptick of new effects and optimizations stemming from a simple idea: textures don’t have to describe colours along polygons. Textures are just data grids; they don’t need to hold colour values or correspond to existing geometry in any way. With a little creativity they can mean anything.
Inspired by that idea I set out to try a new workflow: render deformation along a spline to a texture, then apply that deformation to any mesh using a shader. Let’s walk through how it works and what you can do with it.
Transformation matrices are used in 3D rendering to transform co-ordinates from one space to another, such as from local space to world space. Without going into too much detail on the maths (there are plenty of good articles for that), it’s worth knowing how transformation matrices are stored. In shader functions, most matrices are stored as a float4x4, which is essentially a two-dimensional array of floats, like so:
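In HLSL that grid is a float4x4. As a language-neutral stand-in, here is the same row-major layout sketched in Python with illustrative values (not taken from the original article):

```python
# A 4x4 transformation matrix, row-major: each row holds four floats.
# Rows 0-2 carry the rotated/scaled basis vectors plus a translation
# component in the last column; row 3 is (0, 0, 0, 1) for affine transforms.
matrix = [
    [1.0, 0.0, 0.0, 5.0],  # row 0: X basis + X translation
    [0.0, 1.0, 0.0, 2.0],  # row 1: Y basis + Y translation
    [0.0, 0.0, 1.0, 0.0],  # row 2: Z basis + Z translation
    [0.0, 0.0, 0.0, 1.0],  # row 3: homogeneous row
]
```

Four rows of four floats each — which is exactly what makes the texture trick below possible.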
The idea started when I wondered if there was an easy way to store a matrix in a texture. There is, actually. It can be written like this:
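The packing can be sketched like this, assuming one RGBA pixel per matrix row (the function names here are my own, not part of the actual package):

```python
def matrix_to_pixels(matrix):
    """Pack a row-major 4x4 matrix into four RGBA pixels:
    R = m[r][0], G = m[r][1], B = m[r][2], A = m[r][3]."""
    return [tuple(row) for row in matrix]

def pixels_to_matrix(pixels):
    """Unpack four RGBA pixels back into a 4x4 matrix."""
    return [list(pixel) for pixel in pixels]
```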
Given that each pixel in a texture can hold RGBA values, each pixel can hold one row of a matrix. A matrix usually has four rows, so a whole matrix can be stored in as little as four pixels. So if we just make a script that samples the position, rotation and scale along a spline then we can construct a matrix and store that in a texture using the layout described above, like this:
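A minimal CPU-side bake, sketched in Python rather than a Unity C# script. `sample_spline` is a hypothetical callback returning a position, a yaw angle and a uniform scale at normalized distance t along the spline; a real implementation would use full quaternion rotations and write the pixels into an actual texture:

```python
import math

def trs_matrix(position, angle_y, scale):
    """Row-major translate * rotate-about-Y * uniform-scale matrix."""
    c, s = math.cos(angle_y), math.sin(angle_y)
    px, py, pz = position
    return [
        [c * scale,  0.0,   s * scale, px],
        [0.0,        scale, 0.0,       py],
        [-s * scale, 0.0,   c * scale, pz],
        [0.0,        0.0,   0.0,       1.0],
    ]

def bake_spline_to_texture(sample_spline, width):
    """texture[y][x]: column x holds the matrix at t = x / (width - 1),
    with rows y = 0..3 stored as one RGBA pixel each."""
    texture = [[None] * width for _ in range(4)]
    for x in range(width):
        t = x / (width - 1)
        position, angle_y, scale = sample_spline(t)
        m = trs_matrix(position, angle_y, scale)
        for y in range(4):
            texture[y][x] = tuple(m[y])  # one matrix row per pixel
    return texture
```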
Given a base mesh that is aligned with the Z axis and has plenty of vertices, we can very easily transform one ‘slice’ of the mesh along the spline. The process is as follows:
The normalized z-coordinate of the vertex determines what ‘slice’ it belongs to, or how far along the spline it should end up. Let’s call this P.
Sample four vertical points in the deformation texture. The X co-ordinate is P and the Y co-ordinates of the four samples are evenly spread, one per matrix row.
Combine all four samples into a float4x4 matrix.
Move the vertex back to Z co-ordinate 0, the starting point, and multiply it by the matrix.
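The four steps above can be sketched in Python (nearest-pixel sampling for clarity; the actual work happens per vertex in the shader via a texture sampler, and the names here are my own):

```python
def deform_vertex(vertex, texture, spline_length):
    """Mirror of the per-vertex shader logic described above."""
    x, y, z = vertex
    width = len(texture[0])
    # 1. Normalized z decides how far along the spline this slice sits.
    p = max(0.0, min(1.0, z / spline_length))
    # 2. Four vertical samples: X co-ordinate is P, Y spans the four rows.
    col = round(p * (width - 1))  # nearest pixel, no filtering
    rows = [texture[r][col] for r in range(4)]
    # 3. Combine the four RGBA samples into a float4x4.
    m = [list(pixel) for pixel in rows]
    # 4. Move the vertex back to z = 0, then multiply by the matrix.
    lx, ly, lz, lw = x, y, 0.0, 1.0
    return tuple(m[r][0] * lx + m[r][1] * ly + m[r][2] * lz + m[r][3] * lw
                 for r in range(3))
```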
There are a number of artifacts you’ll see if you’re building something like this from scratch, and they’re worth knowing about.
Using the alpha channel to store non-alpha data surprisingly does not work well out-of-the-box. Most image formats will change the RGB values when the A value is 0, corrupting the data. I’ve found the EXR file format to be the least intrusive when it comes to this. Any other format, especially PSD, will interfere with the data.
Interpolation may work against you. Especially for the rows in the texture, which are distinct values with no relation to one another, interpolation will generate incorrect values. A quick solution to interpolation problems is just to increase the resolution. Despite theoretically only needing four vertical pixels I’ve opted to use eight for this particular reason.
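To see why filtering between rows misbehaves, here is the blend that bilinear filtering performs, applied to two vertically adjacent pixels of a stored matrix (illustrative values):

```python
def lerp_pixel(a, b, t):
    """Bilinear filtering blends neighbouring pixels component-wise like this."""
    return tuple(av + (bv - av) * t for av, bv in zip(a, b))

# The rows of a matrix are unrelated values: blending row 2 (Z basis +
# Z translation) with row 3 (the homogeneous row) yields a pixel that
# is not a valid row of any matrix on the spline.
row2 = (0.0, 0.0, 1.0, 8.0)
row3 = (0.0, 0.0, 0.0, 1.0)
blended = lerp_pixel(row2, row3, 0.5)  # garbage halfway between rows
```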
It’s important to think about clamping. For looping splines the X-axis can be repeated, but the Y-axis remains distinct and should be clamped to avoid interpolation artifacts.
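In texture terms these are the repeat and clamp addressing modes. Expressed as plain functions (the names are mine):

```python
def wrap_u(u):
    """Repeat addressing on X: looping splines tile seamlessly."""
    return u % 1.0

def clamp_v(v):
    """Clamp addressing on Y: the outer matrix rows never bleed
    into each other through interpolation at the texture edges."""
    return max(0.0, min(1.0, v))
```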
Strengths and Weaknesses
There’s certain things this technique does well and certain things it doesn’t. What it does well:
Simple generic workflow, any deformation is easily baked and applied.
Updated in real-time for immediate feedback while working.
Can be animated to achieve creative new effects.
Pre-rendered texture assets make this approach extremely performant.
However, there’s also certain weaknesses to this approach:
As it stands, rendering new textures is done via a script and is not very efficient. Certainly good enough for editor usage, but not optimal for splines that change at runtime.
No new geometry is created, so if you want to deform a mesh along an arbitrarily long spline you might have to instantiate more meshes.
The bounds of the mesh are not automatically updated and have to be updated via a script to prevent incorrect culling.
It’s an interesting technique, and hopefully it helps people let go of their preconceptions about textures, find new optimizations, and maybe even create surprising new real-time effects that were not possible before.
The repository for this article can be found here. Good luck, and have fun.