Silverlight Custom Bitmap Effects - more HLSL
Monday, 26 July 2010
Page 2 of 4
Dependency property types
How a dependency property is used to load the register it is associated with depends on its type, and WPF/Silverlight currently supports only a limited set of dependency property types for this purpose.
In each case the C# data type is packed into the four elements of the HLSL float4 register.
In our example the register is going to be used as a color so it makes sense to define a Color dependency property and allow the system to perform the mapping to the float4 register.
Defining the dependency property follows the usual course. First we create a standard set/get property that uses the dependency property:
public Color PixelColor
{
 get { return (Color)GetValue(PixelColorProperty); }
 set { SetValue(PixelColorProperty, value); }
}
Then we create the dependency property, adding "Property" to the end of the name:

public static readonly DependencyProperty
 PixelColorProperty = DependencyProperty.Register(
      "PixelColor",
      typeof(Color),
      typeof(BlankEffect),
      new PropertyMetadata(
          Colors.Black,  // any default Color will do
          PixelShaderConstantCallback(0)));
Notice that this is a perfectly normal dependency property apart from the use of the PixelShaderConstantCallback(0) which connects the property to the c0 register.
Now we can create an effect object, set the dependency property and use it to modify the way a button displays.
BlankEffect BE = new BlankEffect();
BE.PixelColor = Color.FromArgb(0xFF, 0x00, 0xFF, 0x00);
button1.Effect = BE;  // button1 is the Button to be modified
We have set the color to full green.
All of these modifications and additions are to the BlankEffect class created in the first article.
Notice that you have to modify the pack URI to make sure it gives the location of the shader's .ps file. This file also has to be added to the project and set to be a resource - see the previous article for details of how to set everything up correctly.
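Putting everything together, a minimal version of the class looks something like this - the project name MyProject, the file name shader.ps and the Colors.Black default are placeholders, so adjust them to match your own project:

```csharp
public class BlankEffect : ShaderEffect
{
    public BlankEffect()
    {
        // Load the compiled shader from the project's resources.
        // "MyProject" and "shader.ps" are placeholders - use your
        // own project name and .ps file name here.
        PixelShader ps = new PixelShader();
        ps.UriSource = new Uri(
            @"/MyProject;component/shader.ps",
            UriKind.Relative);
        PixelShader = ps;

        // Push the initial value of PixelColor into c0
        UpdateShaderValue(PixelColorProperty);
    }

    public Color PixelColor
    {
        get { return (Color)GetValue(PixelColorProperty); }
        set { SetValue(PixelColorProperty, value); }
    }

    public static readonly DependencyProperty
        PixelColorProperty = DependencyProperty.Register(
            "PixelColor",
            typeof(Color),
            typeof(BlankEffect),
            new PropertyMetadata(
                Colors.Black,
                PixelShaderConstantCallback(0)));
}
```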
Finally for it all to work we need to modify the shader code:
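The shader now simply ignores the pixel's position and returns the contents of the c0 constant register, which the PixelColor dependency property keeps filled in:

```hlsl
// c0 is loaded from the PixelColor dependency property
float4 color : register(c0);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    return color;
}
```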
Remember to save and compile the shader .fx file to create an updated .ps file.
If you now run the program you will see a green block appear in place of the button's usual rendering.
So far all we have managed to do is pass a constant value to the shader and return it as the color of the pixel. We obviously need to gain access to and process the pixels that rendering the control actually produces.
The key to this, and to working with bitmaps in shaders generally, is the sampler.
A sampler is a bitmap, usually a small one, stored in video memory, and it can be used in many different ways.
For example, if you have a small bitmap of a section of texture, fur say, you can use it to map onto another object as it renders to the screen.
This is the original and most common use of the sampler and it is the reason a sampler is called a sampler - i.e. it allows you to sample a texture. However, it is also possible to use samplers for many other purposes including just rendering an image to the screen.
Samplers are passed to an HLSL program using registers. In pixel shader 2.0 you can use up to 16 samplers, specified in s0 to s15, but WPF/Silverlight limits you to a maximum of four.
Using a sampler follows the same steps as using a constant.
In the HLSL program you first declare a variable and associate it with a shader register. For example:

sampler2D bitmap1 : register(s1);

sets up the variable bitmap1 as a sampler2D data type and associates it with register s1. Following this declaration you can work with bitmap1 as if it was a 2D bitmap sampler.
In the C# program you have to create a dependency property of type Brush and associate it with the shader register using the special RegisterPixelShaderSamplerProperty static method which is supplied by the ShaderEffect class.
That is, to make the connection between the bitmap that is represented by the Brush and the sampler register, you have to register the dependency property in a special way.
Once you have the dependency property and the sampler set up, you can set the dependency property to a suitable bitmap within your C# program and work with it as the sampler in your HLSL program.
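For example, assuming the sampler has been associated with register s1 in the shader, and sticking with the BlankEffect class, a Brush dependency property - conventionally, but not necessarily, called Input - is registered like this:

```csharp
public Brush Input
{
    get { return (Brush)GetValue(InputProperty); }
    set { SetValue(InputProperty, value); }
}

// The final parameter, 1, connects the property
// to sampler register s1
public static readonly DependencyProperty InputProperty =
    ShaderEffect.RegisterPixelShaderSamplerProperty(
        "Input", typeof(BlankEffect), 1);
```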
Let's see each step in action by using a sampler to define what is rendered for a button object.
First let's create the shader program:

sampler2D bitmap1 : register(s1);

float4 main(float2 uv : TEXCOORD) : COLOR
{
 return tex2D(bitmap1, uv);
}

This associates the variable bitmap1 with sampler register s1. In the body of the function we make use of the function tex2D, which takes a sampler as its first parameter and a texture co-ordinate as its second parameter. The function returns the color of the pixel in the sampler at the co-ordinate specified by uv, and this is returned as the color of the rendered pixel. Hence we are simply transferring the image in the sampler to the output target.
At this point we need to understand texture coordinates a little better.
Texture co-ordinates always work in the same way. The top left-hand corner is (0,0) and the bottom right is (1,1) - irrespective of the number of pixels in the graphic.
What this means is that texture co-ordinates always specify a point within the graphic and graphics are automatically scaled to fit the area they are being mapped to.
In this case the input texture co-ordinate uv which is passed into the shader is a point in the area to be rendered i.e. in this example the button's render area. The same (0,0) to (1,1) co-ordinates are mapped to the sampler's entire area with the result that the entire sampler is mapped to the entire button render area.
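Because the co-ordinates are relative, simple arithmetic on uv transforms the whole image regardless of its pixel size. For example, flipping the v co-ordinate before sampling renders the image upside down:

```hlsl
sampler2D bitmap1 : register(s1);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // (0,0)-(1,1) always spans the whole sampler,
    // so 1-uv.y picks the vertically mirrored pixel
    return tex2D(bitmap1, float2(uv.x, 1.0 - uv.y));
}
```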
Last Updated ( Tuesday, 27 July 2010 )