Do you know how video cards render an image? Here is a diagram showing the general process:

The Graphics Rendering Pipeline is the sequence of stages that data passes through before it is rendered on screen. Old computers used software rendering: the CPU performed all the calculations of the rendering pipeline (pre-1980s customized software rendering).

The first 3D accelerators used a so-called Fixed-Function Pipeline that was strictly fixed and sequential; it was impossible to intervene in the rendering process (pre-2001):

Input data was passed as an object in the form of individual vertices with multiple attributes: vertex position, color, normal, texture coordinates, etc.

Transformation and Lighting. At this stage geometric operations (translation, rotation, scaling) were performed on an object, and lighting was calculated for every vertex based on the placement and types of light sources and the parameters describing its surface (reflection, refraction).
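The per-vertex work of this stage can be sketched in Python. This is a simplified illustration, not real fixed-function hardware code, and the helper names are invented for this example:

```python
import math

def transform(vertex, scale, angle_deg, translation):
    """Apply scale, then a rotation about the Z axis, then a translation
    to a 2D vertex position (x, y) -- the 'geometrical operations'."""
    x, y = vertex[0] * scale, vertex[1] * scale
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + translation[0], ry + translation[1])

def lambert(normal, light_dir):
    """Per-vertex diffuse lighting: max(0, N . L) for unit vectors."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

For example, scaling the vertex (1, 0) by 2 and rotating it 90 degrees moves it to approximately (0, 2), and a vertex whose normal points straight at the light receives full intensity 1.0 while one facing away receives 0.0.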

Primitive Setup is the triangulation step: vertices are combined into triangles.

Rasterization. The purpose of this stage is to calculate pixel colors based on the data prepared earlier. Since we only have color information at the vertices, pixel colors are obtained by linearly interpolating between the vertex color values.
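The interpolation can be sketched in Python with barycentric weights. The helper name is hypothetical; real rasterizers do this per pixel in hardware:

```python
def interpolate_color(vertex_colors, weights):
    """Linearly interpolate three vertex RGB colors with barycentric
    weights (non-negative, summing to 1) to get the pixel color."""
    return tuple(
        sum(w * color[i] for w, color in zip(weights, vertex_colors))
        for i in range(3)
    )
```

A pixel sitting exactly on the first vertex (weights 1, 0, 0) gets that vertex's color unchanged, while a pixel at the center of a red/green/blue triangle (weights 1/3 each) comes out as an even mix (1/3, 1/3, 1/3).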

Pixel Processing. This is the coloring of pixels, using the data prepared at the previous stage. Additional per-pixel effects such as texturing can also be applied here.

Frame Buffer Blend. Here the final frame is assembled. The Z-buffer is consulted to determine which object is closer to the camera, and the alpha test is performed. Objects are laid onto the final image layer by layer, and post-effects can be applied as well. The completed frame is then placed in the Frame Buffer.
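A minimal sketch of the depth and alpha tests at this stage, assuming smaller depth values mean closer to the camera (the helper name is invented for illustration):

```python
def blend_fragment(framebuffer, zbuffer, x, y, color, alpha, depth, cutoff=0.5):
    """Write a fragment into the frame only if it passes the alpha test
    and is closer to the camera than what the Z-buffer already holds."""
    if alpha < cutoff:           # alpha test: discard transparent fragments
        return False
    if depth >= zbuffer[y][x]:   # depth test: keep only the nearer fragment
        return False
    zbuffer[y][x] = depth
    framebuffer[y][x] = color
    return True
```

As objects are laid down layer by layer, a nearer opaque fragment overwrites whatever was drawn before it, while farther or overly transparent fragments are rejected.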

Once video cards gained hardware support for DirectX 8.0–8.1, vertex and pixel processing became programmable. Here is a diagram showing the Graphics Rendering Pipeline (DX8–DX10):

Now let’s talk about 3D objects. A model is a set of vertices, the links between them, as well as materials, animations, etc. A vertex has attributes, for instance UV coordinates that define the vertex’s location on the texture unwrap. A material determines an object’s appearance and includes a reference to the shader used to render its geometry or particles; without a material, these components will not be displayed.

A shader is a program for one of the stages of the graphics rendering pipeline. All shaders can be divided into two groups: vertex and fragment (pixel) shaders. Unity offers a simplified approach to writing shaders, the Surface Shader, which is just a higher level of abstraction: during compilation of a Surface shader, the compiler generates a shader consisting of both vertex and pixel shaders. Unity has its own language for writing shaders, ShaderLab, which supports embedded CG and HLSL code.


As an example, let’s write a shader that applies a diffuse texture, a normal map, and a specular reflection (based on a Cubemap) to an object, and clips pixels by the alpha channel of the diffuse texture.

Let’s consider the general syntax of ShaderLab. Even when we write a shader in CG or HLSL, we still need to know ShaderLab syntax in order to expose shader parameters in the inspector.

    Shader "Custom/Example" {
        // properties that will be seen in the inspector
        Properties {
            _Color ("Main Color", Color) = (1, 0.5, 0.5, 1)
        }
        // define one subshader
        SubShader {
            Pass {
                // render state and shader code go here
            }
        }
        Fallback "Diffuse"
    }
The first keyword is Shader, followed by the shader’s name. The name can include a path with ‘/’, which determines where the shader appears in the drop-down menu when setting up a material. Then comes the Properties {} block listing the parameters that will be visible in the inspector and that the user will be able to adjust.

Each shader in Unity consists of a list of subshaders. When Unity has to display a mesh, it finds the shader to use and picks the first subshader that runs on the user’s graphics card. This lets a shader display correctly on different video cards supporting different shader models. A Pass block causes the geometry of an object to be rendered once; a shader can contain one or several passes. Multiple passes can be used, for example, to optimize a shader for old hardware or to achieve special effects.

If Unity finds no subshader in the body of the shader that can display the geometry correctly, it rolls back to another shader, defined after the Fallback statement. In the example above, it would use the Diffuse shader if the video card were unable to display the current shader correctly.
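The selection logic can be sketched in Python as a toy model; the function name and the shader-model strings here are invented for illustration:

```python
def pick_subshader(subshaders, supported_models, fallback):
    """Return the first subshader whose required shader model the GPU
    supports; otherwise roll back to the shader named by `fallback`."""
    for sub in subshaders:
        if sub["model"] in supported_models:
            return sub["name"]
    return fallback
```

A card supporting shader model 3.0 picks the first, fancier subshader; an older card skips down the list; and if nothing matches, the fallback shader is used.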

Let’s consider the following example.

Shader "Example/Bumped Reflection Clip" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _BumpMap ("Bumpmap", 2D) = "bump" {}
        _Cube ("Cubemap", CUBE) = "" {}
        _Value ("Reflection Power", Range(0,1)) = 0.5
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        Cull Off

        CGPROGRAM
        #pragma surface surf Lambert

        struct Input {
            float2 uv_MainTex;
            float2 uv_BumpMap;
            float3 worldRefl;
            INTERNAL_DATA
        };

        sampler2D _MainTex;
        sampler2D _BumpMap;
        samplerCUBE _Cube;
        float _Value;

        void surf (Input IN, inout SurfaceOutput o) {
            float4 tex = tex2D (_MainTex, IN.uv_MainTex);
            clip (tex.a - 0.5);
            o.Albedo = tex.rgb;
            o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
            float4 refl = texCUBE (_Cube, WorldReflectionVector (IN, o.Normal));
            o.Emission = refl.rgb * _Value * refl.a;
        }
        ENDCG
    }
    Fallback "Diffuse"
}

The Properties block contains four variables that will be displayed in the Unity inspector. For each one, such as _MainTex, we define the name displayed in the inspector, its type, and a default value.

_MainTex and _BumpMap are textures, _Cube is the Cubemap used for reflections, and _Value is the amount (level) of reflection.

Tags { "RenderType" = "Opaque" } marks the shader as opaque, which affects the drawing order.

Cull Off disables culling: all faces are drawn regardless of which direction the polygons face. There are three options:

  • Back – don’t render polygons facing away from the viewer (default).
  • Front – don’t render polygons facing towards the viewer. Used for turning objects inside-out.
  • Off – disables culling; all faces are drawn. Used for special effects.
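These modes can be modeled with a dot product between the face normal and the view direction. This is a simplified sketch with an invented helper name; real GPUs cull by triangle winding order:

```python
def should_draw(normal, view_dir, cull_mode):
    """Decide whether to draw a face under a given cull mode.
    A face looks away from the viewer when its normal points in
    roughly the same direction as the view direction."""
    facing_away = sum(n * v for n, v in zip(normal, view_dir)) > 0
    if cull_mode == "Back":
        return not facing_away
    if cull_mode == "Front":
        return facing_away
    return True  # "Off": draw every face
```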

The block of CG code is framed by two keywords: CGPROGRAM and ENDCG.

#pragma surface surf Lambert declares the surface shader function and additional parameters. In this case the function is named surf, and the Lambert lighting model is specified as an additional parameter.

Now consider the input structure. All possible variables of the input structure are listed in the Unity documentation; here we consider only those used in our example.

The variables uv_MainTex and uv_BumpMap are the UV coordinates required to place a texture on an object correctly. They must be named after the corresponding texture variables, with the prefix uv_ or uv2_ for the first and second UV channels respectively. worldRefl and INTERNAL_DATA are used for reflections.

Now let’s clarify the shader function surf.

In the first step we sample a four-component vector (RGB + alpha) for the pixel according to the UV map; the variable tex stores this value. Then the clip function specifies which pixels are discarded during rendering: since the cut-off is based on the information stored in the alpha channel, we pass tex.a - 0.5 as its parameter.
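The semantics of clip can be modeled in Python with a hypothetical helper: in Cg, a negative argument to clip discards the pixel entirely:

```python
def alpha_clip(alpha, cutoff=0.5):
    """Mimic Cg's clip(alpha - cutoff): returns True if the pixel is
    kept, False if it would be discarded from rendering."""
    return alpha - cutoff >= 0
```

A mostly opaque pixel (alpha 0.8) survives, while a mostly transparent one (alpha 0.2) is cut away, which is what produces the hard-edged holes in the diffuse texture.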

After the clipping we draw the main texture on the object with the line o.Albedo = tex.rgb; the variable o is the output structure, and all its fields are described in the Unity documentation on Surface Shaders.

We apply the normal map in the next step:

o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));

After that we add the reflection:

float4 refl = texCUBE (_Cube, WorldReflectionVector (IN, o.Normal));
o.Emission = refl.rgb * _Value * refl.a;

Note that the reflection information is multiplied by _Value, so we can control the strength of the effect from the inspector.
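The emission arithmetic amounts to a per-channel multiply; sketched in Python with a hypothetical helper name:

```python
def reflection_emission(refl_rgb, refl_a, value):
    """Scale the cubemap reflection color by its own alpha and by the
    inspector-controlled _Value (0..1) to set the effect strength."""
    return tuple(c * value * refl_a for c in refl_rgb)
```

Setting _Value to 0 switches the reflection off entirely, while 1 shows it at the full intensity stored in the cubemap.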

Finally, we specify a Fallback in case the video card is unable to display the shader correctly.

Let’s see the final result:

Author Nikita Zakharchenko

Senior Unity Developer at Heyworks Unity Studio