This post will be about RenderTargets and how to use them to stay pixel perfect without having to do position snapping.
Here is a small demo to show off a working example.
It won’t work on mobile devices, as those are not supported by Unity’s WebGL yet.
What is a Render Target?
If you aren’t familiar with RenderTargets (in Unity they are called Render Textures) here’s a short simplified explanation:
The screen is usually rendered into the backbuffer / framebuffer, which is then shown to the user as the final result.
The backbuffer’s size is that of the application’s resolution, so for example Full HD means a backbuffer size of 1920×1080.
A RenderTarget is a replacement for the backbuffer. Instead of showing the result directly, we render the screen into a texture for further modification.
A few examples for Render Target usage are:
– rendering splitscreen – render multiple “cameras” into RenderTargets and combine them into a single output image
– post process / image effects – we modify the rendered screen instead of directly outputting the result (e.g. Bloom, Sobel outline)
– deferred lighting / shading – instead of calculating the light directly in the object’s shader, we pass colors, normals, depth etc. into multiple RenderTargets and calculate the final result once at the end, only for what we recorded in them (cheaper light calculation, but the screen has to be rendered multiple times and information about “unseen” objects gets lost in the process)
How can RenderTargets help us?
As mentioned before, the backbuffer has the size of the application’s resolution. Old devices like the GameBoy had a very small resolution, and every pixel on the screen was exactly one pixel of the sprites, but on new devices we have a lot more pixels to render. Figure 1 tries to show the problem. The original sprite has a size of 16×16 pixels. By stretching it to a higher resolution, multiple screen pixels will display the same pixel of the sprite. Try to move each version by a fractional value to the right in your head. While the first one has no choice but to snap to the grid, the other two have enough pixels in between to write into, which causes the sprite to leave our preferred pixel grid.
A RenderTarget can have its own resolution, different from that of the backbuffer. By using a small resolution like that of the old GameBoy, we force the camera to draw everything exactly on the pixel grid. This stays true even if the camera moves, as the resolution stays the same. It can also be used to display 3D meshes and particles as pixel art, because they get pixelated by the RenderTarget’s resolution (as seen in the demo). What this means for us: we can still calculate everything like we are used to, with floats and vectors, but we won’t have any issues with staying pixel-perfect anymore.
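To make the resolution choice concrete, here is a tiny sketch (a hypothetical helper, names are my own) that computes the RenderTarget resolution from the tile size and the number of tiles we want visible; e.g. 16×16 tiles with 10×15 tiles visible gives 160×240:

```csharp
using UnityEngine;

// Hypothetical helper: the RenderTarget resolution is simply
// the tile size times the number of visible tiles.
// e.g. TargetResolution(16, 10, 15) -> (160, 240)
static Vector2Int TargetResolution(int tileSize, int tilesX, int tilesY)
{
    return new Vector2Int(tileSize * tilesX, tileSize * tilesY);
}
```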
Setting up our rendering pipeline
This is a fairly easy process. The next steps are written with Unity in mind, but the same can be done in any other framework.
1) Create two cameras in the scene. One will render our scene, and the other will display our final result. Both should be set to orthographic.
2) Create a Render Texture in the project browser. The filter mode has to be set to “Point”, otherwise we will get a blurry result. Leave anti-aliasing off and set the color format to ARGB32. The resolution should match the number of pixels we actually render (e.g. if we have 16×16 tiles and display 10×15 tiles, that results in a resolution of 160×240).
3) Apply the Render Texture to our scene camera.
4) Create an “Unlit/Texture” material and apply our Render Texture to it.
5) There are different ways to do this step. The first is to create a Quad and apply the material to it. Set the Quad’s layer to a custom layer and tell the camera rendering our final screen to only render this layer. The other way is to tell that camera to render nothing and use the UI to draw an Image with our material applied.
6) Either write a small script that scales our Quad / UI Image depending on the screen’s height (set the camera’s clear flags to a solid black color to get black borders), or scale it to fill the whole screen.
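The steps above can be sketched in a single setup script. This is a minimal sketch, not a finished implementation: the component and field names are placeholders, and it assumes the Quad variant of step 5 with the layers already configured in the editor.

```csharp
using UnityEngine;

// Minimal sketch of steps 2-6 done from code instead of the editor.
// Assumes a Quad with an "Unlit/Texture" material; names are placeholders.
public class PixelPerfectSetup : MonoBehaviour
{
    public Camera sceneCamera;   // renders the world into the RenderTexture
    public Camera outputCamera;  // renders only the Quad to the screen
    public Transform quad;       // displays the RenderTexture
    public int targetWidth = 160;
    public int targetHeight = 240;

    void Start()
    {
        // Step 2: ARGB32, point filtering, no anti-aliasing.
        var rt = new RenderTexture(targetWidth, targetHeight, 16,
                                   RenderTextureFormat.ARGB32);
        rt.filterMode = FilterMode.Point;
        rt.antiAliasing = 1;

        // Step 3: the scene camera now renders into the texture.
        sceneCamera.targetTexture = rt;

        // Step 4: feed the texture to the Quad's unlit material.
        quad.GetComponent<Renderer>().material.mainTexture = rt;

        // Step 6: scale the Quad to fill the output camera's view height
        // while keeping the target's aspect ratio (black borders come from
        // the output camera clearing to a solid black color).
        float worldHeight = outputCamera.orthographicSize * 2f;
        float worldWidth = worldHeight * targetWidth / (float)targetHeight;
        quad.localScale = new Vector3(worldWidth, worldHeight, 1f);
    }
}
```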
Pros and cons of this approach
+ Results will always be pixel perfect. Every floating value gets pushed into the grid
+ We render the scene at a smaller resolution. This means fewer calculations for lighting etc.
+ Pixelation of everything means pixelated 3D objects and particles as well
+ Works well with global color tables (will write about this in the next post)
+ No edge artifacts like those that happen when pixelating the UVs in an image effect (floor(uv * res) / res)
– We have to render twice (a minor drawback, since we get some performance back from doing the calculations at a smaller resolution; this worked without problems even on mobile devices)
– We need to know beforehand how many pixels we actually want to render (or have to create a new RenderTarget when the resolution changes)
– Rotating sprites looks bad (the same result as rotating a sprite in a graphics application). It’s best to prepare a few rotated sprites beforehand.
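For reference, the edge artifacts mentioned above come from the image-effect alternative, which pixelates the image by snapping UVs to a coarse grid. A sketch of that formula per coordinate (hypothetical helper name):

```csharp
using UnityEngine;

// The image-effect approach quantizes UVs per pixel: floor(uv * res) / res.
// Sketch of that mapping for a single UV coordinate, res = target resolution.
static float PixelateUV(float uv, float res)
{
    return Mathf.Floor(uv * res) / res;
}
```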
That’s about it for this part. Feel free to leave comments in case you disagree or have a few more tricks you can share regarding this topic.