Pixel-perfect results with RenderTargets

This post is about RenderTargets and how to use them to stay pixel-perfect without having to do position snapping.

Here is a small demo showing a working example.
It won’t work on mobile devices, as those are not yet supported by Unity’s WebGL export.

https://blog.felixkate.net/unity/PixelPerfectRendering.html

 

What is a Render Target?
If you aren’t familiar with RenderTargets (in Unity they are called Render Textures), here’s a short, simplified explanation:
The screen is usually rendered into the backbuffer / framebuffer, which is then shown to the user as the final result.
The backbuffer’s size matches the application’s resolution, so full HD, for example, means a backbuffer size of 1920×1080.
A RenderTarget is a replacement for the backbuffer. Instead of showing the result directly, we render the screen into a texture for further modification.

A few examples of Render Target usage are:
– rendering splitscreen – render multiple “cameras” into RenderTargets and combine them into a single output image
– post processing / image effects – we modify the rendered screen instead of directly outputting the result (e.g. Bloom, Sobel outline)
– deferred lighting / shading – instead of calculating the light directly in each object’s shader, we pass colors, normals, depth etc. into multiple RenderTargets and calculate the final result once at the end, only for what we recorded in them (cheaper light calculation, but the screen has to be rendered multiple times and information about “unseen” objects gets lost in the process)

 

Figure 1: Difference in the number of rendered pixels with increasing size.

How can RenderTargets help us?
As mentioned before, the backbuffer has the size of the application’s resolution. Old devices like the GameBoy had a very small resolution, and every pixel on the screen was exactly one pixel of the sprites; on newer devices we have a lot more pixels to render. Figure 1 illustrates the problem. The original sprite has a size of 16×16 pixels. By stretching it to a higher resolution, multiple screen pixels display the same pixel of the sprite. Now imagine moving each version to the right by a fractional amount. While the first one has no choice but to snap to the grid, the other two have enough pixels in between to write to, which causes the sprite to leave our preferred pixel grid.

A RenderTarget can have its own resolution, different from that of the backbuffer. By giving it a small resolution like that of the old GameBoy, we can force the camera to draw everything exactly on the pixel grid. This stays true even when the camera moves, since the resolution stays the same. It can also be used to display 3D meshes and particles like pixel art, because they get pixelated by the RenderTarget’s resolution (as seen in the demo). What this means for us is that we can still calculate everything as usual with floats and vectors, but we no longer have any issues staying pixel-perfect.
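
To make this concrete: for everything to land exactly on the grid, the scene camera’s view height in world units has to match the RenderTarget’s height in pixels. Here is a minimal sketch of that calculation, assuming sprites imported at 16 pixels per unit (both values are placeholders):

```csharp
using UnityEngine;

// Minimal sketch: lock an orthographic camera to a fixed pixel grid.
// rtHeight and pixelsPerUnit are placeholder values for this example.
[RequireComponent(typeof(Camera))]
public class PixelGridCamera : MonoBehaviour
{
    public int rtHeight = 240;     // vertical resolution of the RenderTarget
    public int pixelsPerUnit = 16; // sprite import setting (assumption)

    void Start()
    {
        // One world unit covers pixelsPerUnit texels, so the half-height of
        // the view in world units is rtHeight / (2 * pixelsPerUnit).
        GetComponent<Camera>().orthographicSize = rtHeight / (2f * pixelsPerUnit);
    }
}
```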

 

Figure 2: Our two cameras (left: scene camera rendering into the RenderTexture / right: screen camera rendering the result)

Setting up our rendering pipeline
This is a fairly easy process. The next steps are written with Unity in mind, but the same can be done in any other framework.
1) Create two cameras in the scene. One will render our scene and the other will display our final result. Both should be set to orthographic.
2) Create a Render Texture in the project browser. The filtering mode has to be set to “Point”, otherwise we get a blurry result. Leave anti-aliasing off and set the color format to ARGB32. The resolution should match the number of pixels we actually want to render. (e.g. if we have 16×16 tiles and display 10×15 tiles, that gives a resolution of 160×240)
3) Apply the Render Texture to our scene camera.
4) Create an “unlit texture” material and apply our Render Texture to it.
5) There are different ways to do this step. The first is to create a Quad and apply the material to it. Set the quad’s Layer to a custom Layer and tell the camera rendering our final screen to only render this Layer. The other way is to tell that camera to render nothing and use the UI to draw an Image with our material applied.
6) Either write a small script that scales our Quad / UI image depending on the screen’s height (set the screen camera’s clear flags to a solid black color to get black borders), or scale it to fill the whole screen; a sketch of such a script follows below.
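
A minimal sketch of the scaling script from step 6; the field names and the 160×240 default resolution (taken from the example in step 2) are placeholders:

```csharp
using UnityEngine;

// Minimal sketch for step 6: scale the display quad so the RenderTexture
// fills the screen height while keeping its aspect ratio. The black clear
// color of the screen camera then shows up as the border on the sides.
public class PixelScreenScaler : MonoBehaviour
{
    public Camera screenCamera;   // the orthographic screen camera from step 1
    public int rtWidth = 160;     // RenderTexture resolution from step 2
    public int rtHeight = 240;

    void Update()
    {
        // orthographicSize is half the vertical view height in world units.
        float height = screenCamera.orthographicSize * 2f;
        float width = height * (rtWidth / (float)rtHeight);
        transform.localScale = new Vector3(width, height, 1f);
    }
}
```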

 

Pros and cons of this approach
+ Results will always be pixel-perfect. Every floating-point value gets pushed onto the grid
+ We render our scene at a smaller resolution. This means fewer calculations for lighting etc.
+ Pixelating everything means 3D objects and particles get pixelated as well
+ Works well with global color tables (I will write about this in the next post)
+ No edge artifacts like the ones caused by pixelating the UVs in an image effect (floor(uv * res) / res)

– We have to render twice (a minor drawback, since we get some boost from doing the calculations at a smaller resolution; this worked without problems even on mobile devices)
– We need to know beforehand how many pixels we actually want to render (or we need to create a new RenderTarget when the resolution changes; see the sketch after this list)
– Rotation of sprites looks bad (the same result as rotating sprites in a graphics application). It’s best to prepare a few rotated sprites beforehand.
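
Regarding the second drawback: the RenderTarget can simply be rebuilt whenever the window size changes. A minimal sketch, assuming a height-driven integer scale (all names and the 240-pixel base height are placeholders):

```csharp
using UnityEngine;

// Minimal sketch: rebuild the RenderTexture when the screen size changes.
// sceneCamera renders into the texture, screenMaterial displays it.
public class RenderTargetResizer : MonoBehaviour
{
    public Camera sceneCamera;
    public Material screenMaterial;
    public int baseHeight = 240;  // placeholder base resolution

    int lastHeight;

    void Update()
    {
        if (Screen.height == lastHeight) return;
        lastHeight = Screen.height;

        // Pick the largest integer scale that still fits the screen and
        // derive the low resolution from it.
        int scale = Mathf.Max(1, Screen.height / baseHeight);
        int w = Screen.width / scale;
        int h = Screen.height / scale;

        RenderTexture rt = new RenderTexture(w, h, 16, RenderTextureFormat.ARGB32);
        rt.filterMode = FilterMode.Point;

        if (sceneCamera.targetTexture != null)
            sceneCamera.targetTexture.Release();

        sceneCamera.targetTexture = rt;
        screenMaterial.mainTexture = rt;
    }
}
```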

 

That’s about it for this part. Feel free to leave comments in case you disagree or have a few more tricks to share on this topic.

2 thoughts on “Pixel-perfect results with RenderTargets”

  1. Peter Edwards

    Great post, learnt a lot! Wondering if you can help me though. I’m trying to use Unity UI in conjunction with this, setting my canvas to Screen Space – Camera and targeting the camera I’m using to render to my texture. Everything renders fine, but no click events come through etc., which I kind of understand but don’t know how to remedy. In your demo you have a small UI; do you mind telling me how you implemented this, or are you able to share the project? Thanks in advance.

    1. FelixK (post author)

      The UI elements in this specific project were made with SpriteRenderer and BoxCollider2D, which were added on the screen camera’s side.
      Input was taken from an attached component using the MonoBehaviour.OnMouseDown() method. (In there I also saved Input.mousePosition and calculated mouse drags by comparing the saved position with the current mouse position in the Update() method.)
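
      A minimal sketch of that input handling (assuming a Collider2D such as BoxCollider2D on the same object, so OnMouseDown fires; what the drag delta is applied to is left out):

      ```csharp
      using UnityEngine;

      // Minimal sketch of the described input handling. Requires a Collider2D
      // (e.g. BoxCollider2D) on the same object so OnMouseDown is called.
      public class SpriteButton : MonoBehaviour
      {
          Vector3 savedPosition;
          bool dragging;

          void OnMouseDown()
          {
              savedPosition = Input.mousePosition;
              dragging = true;
          }

          void OnMouseUp()
          {
              dragging = false;
          }

          void Update()
          {
              if (!dragging) return;

              // Compare the saved position with the current one to get the drag.
              Vector3 delta = Input.mousePosition - savedPosition;
              savedPosition = Input.mousePosition;
              // ...apply delta to whatever the element controls.
          }
      }
      ```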

      Using Unity 5’s UI system works as well, since it gets layered on top of everything after the rendering is done. I just personally prefer to use Quads and SpriteRenderers for UI in 2D projects, since the layer rendering order won’t change much anyway and I don’t have to set up an extra canvas (or bother with shader incompatibility).

      An important thing to note here is that if you want the UI to be part of the “pixel perfect” rendering, you have to add the elements on the pixel camera’s side as real geometry. This works fine for most HUD elements, but checking for mouse input can’t be done directly with the basic means Unity provides. (It could still be done manually in code by comparing Input.mousePosition to the buttons’ rects.)
      Doing the UI as a last step is usually easier, and if you know your render target’s resolution it is trivial to write a script that scales the UI elements so that they fit the pixel grid.

