My implementation of Screen Space Ambient Occlusion (SSAO) works by first creating a few random unit-length vectors. Next, 64 sample points are generated inside a unit hemisphere in tangent space. The sample points are weighted so that samples are more likely to fall close to the fragment being shaded, as this produces more accurate results. This information is then used, together with the geometric information in the G-buffer, in a separate pass that renders another full-screen quad. Gram-Schmidt orthogonalization ensures the random unit vector is perpendicular to the normal vector; this new vector can then be used to build a TBN matrix that transforms the sample points from tangent space into view space. Using the random vectors in this way has the effect of randomly rotating the hemisphere of sample points. Once a point has been transformed, it is projected to screen space, the depth texture is sampled at that position, and the retrieved value is compared against the depth of the sample point. If the stored surface is closer to the camera than the sample point, a counter is incremented. After all samples have been taken, the counter is divided by the number of samples, giving a value between 0 and 1 for the occlusion of that fragment, which is then written to an occlusion texture. In the light pass the occlusion values are used to scale the ambient term appropriately.
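The weighted kernel generation described above can be sketched as follows. This is a Python illustration rather than the original shader code, and the lerp-style `0.1 + 0.9 * t * t` weighting is one common choice for biasing samples toward the fragment; the function name `generate_kernel` is illustrative:

```python
import math
import random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def generate_kernel(num_samples=64):
    """Generate sample points inside a unit hemisphere (z >= 0) in
    tangent space, weighted so samples cluster near the origin."""
    kernel = []
    for i in range(num_samples):
        # Random direction in the hemisphere: x, y in [-1, 1], z in [0, 1].
        direction = normalize((random.uniform(-1.0, 1.0),
                               random.uniform(-1.0, 1.0),
                               random.uniform(0.0, 1.0)))
        # Pull the point into the hemisphere's interior. The quadratic
        # ramp keeps early samples close to the fragment being shaded.
        t = i / num_samples
        scale = 0.1 + 0.9 * t * t
        kernel.append(tuple(c * scale for c in direction))
    return kernel
```

Because the scale grows quadratically with the sample index, roughly half the kernel sits within about a third of the hemisphere radius, which matches the weighting goal stated above.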
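The Gram-Schmidt step and the TBN construction might look like this sketch. In the real implementation this would live in the shader; the helper names here are assumptions made for illustration:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def build_tbn(normal, random_vec):
    """Build a TBN basis whose third axis is the surface normal.

    Gram-Schmidt: subtract from the random vector its projection onto
    the normal, leaving a tangent perpendicular to the normal."""
    n = normalize(normal)
    proj = tuple(c * dot(random_vec, n) for c in n)
    tangent = normalize(tuple(r - p for r, p in zip(random_vec, proj)))
    bitangent = cross(n, tangent)
    return tangent, bitangent, n

def transform_sample(tbn, sample):
    """Rotate a tangent-space sample point into the TBN basis."""
    t, b, n = tbn
    return tuple(t[i] * sample[0] + b[i] * sample[1] + n[i] * sample[2]
                 for i in range(3))
```

Because the tangent depends on the per-pixel random vector, each pixel gets its own rotation of the same hemisphere kernel, which is what breaks up banding from a fixed sample pattern.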
Looking back at this implementation, it is missing a step at the end. Because only a limited number of random rotations are passed in, the occlusion values can exhibit a repeating pattern, which becomes obvious when higher ambient values are used in a scene. This can be overcome by applying a blur pass to the occlusion texture before it is used in the light pass.
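The missing blur could be as simple as a box filter over the occlusion texture, with the filter width matched to the tiling period of the rotation pattern. A sketch, assuming a row-major occlusion buffer and a 4x4 noise tile (both the buffer layout and the function name are assumptions for illustration):

```python
def blur_occlusion(occlusion, width, height, radius=2):
    """Box-blur a row-major occlusion buffer, clamping at the edges.

    A radius of 2 averages a 4x4 neighbourhood of texels, which hides
    the repeating pattern produced by a 4x4 tiled rotation texture."""
    out = [0.0] * (width * height)
    for y in range(height):
        for x in range(width):
            total, count = 0.0, 0
            for dy in range(-radius, radius):
                for dx in range(-radius, radius):
                    sx = min(max(x + dx, 0), width - 1)
                    sy = min(max(y + dy, 0), height - 1)
                    total += occlusion[sy * width + sx]
                    count += 1
            out[y * width + x] = total / count
    return out
```

In a real renderer this would be a fragment shader over another full-screen quad, run between the SSAO pass and the light pass.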