ADVANCED GRAPHICS



The Project



I created this project for a module called Advanced Games and Graphics, in which I implemented a variety of graphical techniques, such as normal mapping, using DirectX 12. As it was my first time using DirectX 12, it was interesting to see how it differed from DirectX 11 with concepts like command lists and command allocators. PIX was used extensively throughout the project to help debug issues.



Mapping Techniques



The main part of this module consisted of implementing various mapping and other advanced graphical techniques. The techniques implemented are normal mapping, parallax mapping, parallax occlusion mapping and self-shadowing parallax occlusion mapping. The self-shadowing variant only creates hard shadows and is hardcoded to work with the point light in the scene. This isn't ideal, but the technique would rarely be implemented like this in a game, as a separate shadow system would already be in place.
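As a rough illustration of the most involved of these, below is a minimal HLSL sketch of a parallax occlusion mapping loop in the style of the common layered ray march; the texture, sampler and parameter names (and the layer count) are illustrative assumptions rather than the project's exact code.

```hlsl
// Illustrative parallax occlusion mapping sketch; names and constants are assumptions.
Texture2D depthMap : register(t0);         // stores how far the surface is recessed, 0 = no recess
SamplerState linearSampler : register(s0);

float2 ParallaxOcclusionMapping(float2 uv, float3 viewDirTS, float heightScale)
{
    const int numLayers = 32;
    float layerDepth = 1.0f / numLayers;

    // Total UV shift along the view ray, split into one step per layer.
    float2 totalShift = (viewDirTS.xy / viewDirTS.z) * heightScale;
    float2 deltaUV = totalShift / numLayers;

    float2 currentUV = uv;
    float currentLayerDepth = 0.0f;
    float currentDepth = depthMap.SampleLevel(linearSampler, currentUV, 0).r;

    // March along the view ray until it dips below the height-field surface.
    [loop]
    for (int i = 0; i < numLayers && currentLayerDepth < currentDepth; ++i)
    {
        currentUV -= deltaUV;
        currentDepth = depthMap.SampleLevel(linearSampler, currentUV, 0).r;
        currentLayerDepth += layerDepth;
    }

    // Interpolate between the last two steps for a smoother intersection point.
    float2 prevUV = currentUV + deltaUV;
    float afterDepth  = currentDepth - currentLayerDepth;
    float beforeDepth = depthMap.SampleLevel(linearSampler, prevUV, 0).r - (currentLayerDepth - layerDepth);
    float weight = afterDepth / (afterDepth - beforeDepth);

    return lerp(currentUV, prevUV, weight);
}
```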



Deferred Rendering



In terms of the architecture of the renderer, I decided to go with a deferred rendering approach. This involves breaking the rendering down into multiple passes. The first pass is responsible for populating what is known as a G buffer, which effectively holds the geometric information needed to render the scene on a per-pixel basis. The G buffer is then used as the input for another pass, which draws a full-screen quad to invoke a pixel shader for every pixel. In that pixel shader the geometric information is retrieved from the textures in the G buffer and used to carry out the lighting calculations for the scene. There are various advantages to this. One is that it decouples the scene geometry from the light sources. It also ensures that the lighting calculations are only carried out for visible pixels. However, the approach has trade-offs. The main one is that more VRAM is required, as multiple extra screen-sized textures are needed to render the scene. This can be mitigated by tightly packing the geometric information in the G buffer.
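To make the structure concrete, here is a minimal sketch of the two halves, assuming an illustrative three-target G buffer layout (albedo, normal, world-space position) and a single point light; the register assignments and the lighting model are assumptions rather than the project's exact code.

```hlsl
// Geometry pass output: one render target per slice of per-pixel data (illustrative packing).
struct GBufferOutput
{
    float4 albedo   : SV_Target0; // rgb = diffuse colour
    float4 normal   : SV_Target1; // xyz = world-space normal remapped to [0, 1]
    float4 position : SV_Target2; // xyz = world-space position (could instead be rebuilt from depth)
};

// Lighting pass: the G buffer textures become inputs to a full-screen quad pixel shader.
Texture2D gAlbedo   : register(t0);
Texture2D gNormal   : register(t1);
Texture2D gPosition : register(t2);
SamplerState pointSampler : register(s0);

cbuffer LightConstants : register(b0)
{
    float3 lightPositionWS;
    float3 lightColour;
    float3 cameraPositionWS;
};

float4 LightingPassPS(float4 posCS : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 albedo   = gAlbedo.Sample(pointSampler, uv).rgb;
    float3 normalWS = normalize(gNormal.Sample(pointSampler, uv).xyz * 2.0f - 1.0f);
    float3 posWS    = gPosition.Sample(pointSampler, uv).xyz;

    // Simple Lambert + Blinn-Phong terms for a single point light (assumed light model).
    float3 toLight  = normalize(lightPositionWS - posWS);
    float3 toCamera = normalize(cameraPositionWS - posWS);
    float3 halfVec  = normalize(toLight + toCamera);

    float diffuse  = saturate(dot(normalWS, toLight));
    float specular = pow(saturate(dot(normalWS, halfVec)), 32.0f);

    float3 colour = albedo * (0.1f + diffuse) * lightColour + specular * lightColour;
    return float4(colour, 1.0f);
}
```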



Post Processing Pipeline



Adding a post-processing pipeline to a renderer enables a variety of extra effects such as bloom, depth of field and motion blur. I added the post-processing pipeline by drawing a full-screen quad in the same way as the light calculation pass; the difference is that this time the texture passed in is the final output from the deferred rendering passes. This allows effects to be applied on a per-pixel basis. The effect shown above is a simple box blur, which works by averaging every pixel in a box surrounding the currently shaded fragment. Making the width and height of the box bigger makes the resulting image more blurred.
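A minimal sketch of such a box blur pass might look like the following, assuming the deferred output is bound as sceneTexture and the box size is driven by a blurRadius constant (both names are illustrative).

```hlsl
// Illustrative box blur pixel shader for the post-processing pass.
Texture2D sceneTexture : register(t0);
SamplerState linearSampler : register(s0);

cbuffer BlurConstants : register(b0)
{
    float2 texelSize; // 1.0 / render target dimensions
    int blurRadius;   // box half-width in pixels; the box is (2r + 1) x (2r + 1) samples
};

float4 BoxBlurPS(float4 posCS : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 sum = 0.0f;
    int sampleCount = 0;

    // Average every texel in the box centred on the current fragment.
    [loop]
    for (int y = -blurRadius; y <= blurRadius; ++y)
    {
        for (int x = -blurRadius; x <= blurRadius; ++x)
        {
            float2 offset = float2(x, y) * texelSize;
            sum += sceneTexture.SampleLevel(linearSampler, uv + offset, 0).rgb;
            ++sampleCount;
        }
    }

    return float4(sum / sampleCount, 1.0f);
}
```

Because the box is (2r + 1) x (2r + 1) texels, the sample count grows quadratically with the radius, which is why the cost climbs so quickly.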


My current box blur implementation isn't very efficient, as lots of samples are needed to get a noticeable blur, which greatly affects performance. One way around this would be to render the blurred image to a texture with a lower resolution than the back buffer. Not only does this reduce the number of samples taken, as there are fewer pixels to shade, it also strengthens the blur, because the texture appears blurrier when it is upscaled back to full resolution.



Screen Space Ambient Occlusion



My implementation of Screen Space Ambient Occlusion (SSAO) works by first creating a few random unit-length vectors. Next, 64 sample points are created inside a unit hemisphere in texture space. The sample points are weighted so that samples are more likely to be positioned close to the fragment being shaded, as this produces more accurate results. This information is then used, along with the geometric information in the G buffer, in a separate pass where another full-screen quad is drawn. Gram-Schmidt orthonormalization is used to make the random unit vector perpendicular to the normal vector. This new vector can then be used to build a TBN matrix and transform the sample points from texture space to screen space. Using the random vectors like this creates the effect of randomly rotating the hemisphere of sample points. Once the points have been transformed into screen space, the depth texture is sampled and the retrieved values are compared against the depth at the current pixel. If the sample point is closer to the camera, a counter is incremented. Once all samples have been taken, the counter is divided by the number of samples, giving a value between 0 and 1 for the occlusion of that fragment, which is then written to an occlusion texture. In the light pass the occlusion values are used to scale the ambient term appropriately.
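An abbreviated HLSL sketch of this pass is shown below. It assumes the G buffer stores view-space positions and normals, and that the 64-sample hemisphere kernel and a small tiling noise texture of random unit vectors are generated on the CPU; the names, register slots and projection details are illustrative rather than the project's exact code.

```hlsl
// Illustrative SSAO pass following the steps described above.
Texture2D gPosition    : register(t0); // view-space position from the G buffer
Texture2D gNormal      : register(t1); // view-space normal from the G buffer
Texture2D noiseTexture : register(t2); // small tiled texture of random unit-length vectors
SamplerState pointSampler : register(s0);

cbuffer SSAOConstants : register(b0)
{
    float4x4 projection;     // view -> clip space
    float4 sampleKernel[64]; // hemisphere samples, weighted towards the origin
    float2 noiseScale;       // screen size / noise texture size, so the noise tiles
    float sampleRadius;
};

float4 SSAOPS(float4 posCS : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 posVS  = gPosition.Sample(pointSampler, uv).xyz;
    float3 normal = normalize(gNormal.Sample(pointSampler, uv).xyz);
    float3 random = normalize(noiseTexture.Sample(pointSampler, uv * noiseScale).xyz);

    // Gram-Schmidt: remove the component of the random vector along the normal, giving a
    // tangent that randomly rotates the sampling hemisphere around the normal.
    float3 tangent   = normalize(random - normal * dot(random, normal));
    float3 bitangent = cross(normal, tangent);
    float3x3 TBN     = float3x3(tangent, bitangent, normal);

    float counter = 0.0f;

    [loop]
    for (int i = 0; i < 64; ++i)
    {
        // Orient the kernel sample around the surface normal and place it near the fragment.
        float3 samplePosVS = posVS + mul(sampleKernel[i].xyz, TBN) * sampleRadius;

        // Project the sample to find which pixel of the G buffer to compare against.
        float4 sampleCS = mul(float4(samplePosVS, 1.0f), projection);
        float2 sampleUV = (sampleCS.xy / sampleCS.w) * float2(0.5f, -0.5f) + 0.5f;

        // If the sample point is closer to the camera than the surface stored at that pixel,
        // nothing blocks it, so the counter is incremented.
        float sceneDepthVS = gPosition.Sample(pointSampler, sampleUV).z;
        if (samplePosVS.z < sceneDepthVS)
            counter += 1.0f;
    }

    // Value between 0 and 1, written to the occlusion texture and later used to scale ambient.
    float occlusion = counter / 64.0f;
    return float4(occlusion, occlusion, occlusion, 1.0f);
}
```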


Looking back at this implementation, it is missing a step at the end. Because only a limited number of random rotation vectors are passed in, the occlusion values can form a repeating pattern, which becomes obvious when higher ambient values are used in a scene. This can be overcome by applying a blur pass to the occlusion texture before using it in the light pass.
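That missing step could be as simple as the following sketch: a small box blur over the occlusion texture, sized to match an assumed 4x4 noise texture (the names are illustrative).

```hlsl
// Illustrative 4x4 blur over the occlusion texture to hide the repeating rotation pattern.
Texture2D occlusionTexture : register(t0);
SamplerState pointSampler : register(s0);

float4 OcclusionBlurPS(float4 posCS : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float width, height;
    occlusionTexture.GetDimensions(width, height);
    float2 texelSize = 1.0f / float2(width, height);

    // Average a 4x4 neighbourhood around the current pixel.
    float sum = 0.0f;
    for (int y = -2; y < 2; ++y)
    {
        for (int x = -2; x < 2; ++x)
        {
            sum += occlusionTexture.SampleLevel(pointSampler, uv + float2(x, y) * texelSize, 0).r;
        }
    }

    float blurred = sum / 16.0f;
    return float4(blurred, blurred, blurred, 1.0f);
}
```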



Improvements



Overall, I was happy with how the implementations in this module turned out. However, as mentioned in previous sections, there are various improvements I would make to things like the box blur and the SSAO. Some other improvements can be seen in the raytracing project, such as a better workflow for managing the creation of resources and the descriptors that go along with them. Another improvement would be to the G buffer. A better way of implementing it would be to create an array of textures and use an enum to retrieve the desired one. This would allow for loops to greatly simplify managing the G buffer, and it would also mean the G buffer could easily be resized whenever needed. One final change would be to allow the G buffer to contain textures of different formats, which would help facilitate tighter packing.