Roboflux



The project



This was a group project from my second year at university, built in DirectX 11 with four other people. It helped me develop my ability to work in a team and gave me insight into the different tools that can make communication easier.



UI



One of the areas I worked on was the UI. Various UI components were needed, such as an animated component, a button component and a text component. To create these, I made a UI component parent class. This class provided basic functionality such as the ability to create a quad with a set width and height in screen space. It also allowed a component to be given a parent UI component, so entire UI elements can be moved around and scaled without the transformation having to be applied manually to each child.
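A minimal sketch of what such a parent class could look like (the names and members here are my own illustration, not the project's actual code):

```cpp
#include <DirectXMath.h>

// Illustrative UI component base class with an optional parent.
class UIComponent
{
public:
    UIComponent(DirectX::XMFLOAT2 position, DirectX::XMFLOAT2 size,
                UIComponent* parent = nullptr)
        : m_position(position), m_size(size), m_parent(parent) {}

    virtual ~UIComponent() = default;

    // Screen-space position accumulated through the parent chain,
    // so moving a parent moves every child with it.
    DirectX::XMFLOAT2 GetScreenPosition() const
    {
        if (!m_parent)
            return m_position;
        DirectX::XMFLOAT2 parentPos = m_parent->GetScreenPosition();
        float parentScale = m_parent->GetScale();
        return { parentPos.x + m_position.x * parentScale,
                 parentPos.y + m_position.y * parentScale };
    }

    // Accumulated scale, so scaling a parent scales every child quad too.
    float GetScale() const
    {
        return m_parent ? m_scale * m_parent->GetScale() : m_scale;
    }

    virtual void Update(float deltaTime) {}
    virtual void Render() {} // Would build and draw the screen-space quad.

protected:
    DirectX::XMFLOAT2 m_position;  // Offset from the parent (or screen origin)
    DirectX::XMFLOAT2 m_size;      // Quad width and height in screen space
    float m_scale = 1.0f;
    UIComponent* m_parent = nullptr;
};
```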


Using this parent class, I created the other components mentioned earlier. The animated UI component takes a vector of textures and timings representing when the textures should switch. You can then enable interpolation, which will lerp between the textures based on the frame timings.
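As a rough sketch (building on the UIComponent sketch above; the member names and the exact blending scheme are assumptions on my part), the animated component might look like this:

```cpp
#include <cstddef>
#include <vector>
#include <d3d11.h>

// Illustrative animated UI component: one texture per frame plus the time at
// which each frame should start. With interpolation on, a blend factor between
// the current and next frame is computed for the pixel shader to lerp with.
class AnimatedUIComponent : public UIComponent
{
public:
    AnimatedUIComponent(DirectX::XMFLOAT2 position, DirectX::XMFLOAT2 size,
                        std::vector<ID3D11ShaderResourceView*> frames,
                        std::vector<float> frameTimes,  // start time of each frame (seconds)
                        bool interpolate)
        : UIComponent(position, size),
          m_frames(std::move(frames)),
          m_frameTimes(std::move(frameTimes)),
          m_interpolate(interpolate) {}

    void Update(float deltaTime) override
    {
        m_time += deltaTime;

        // Advance to the frame whose timing window contains the current time.
        while (m_current + 1 < m_frameTimes.size() && m_time >= m_frameTimes[m_current + 1])
            ++m_current;

        m_blend = 0.0f;
        if (m_interpolate && m_current + 1 < m_frames.size())
        {
            float start = m_frameTimes[m_current];
            float end   = m_frameTimes[m_current + 1];
            if (end > start)
                m_blend = (m_time - start) / (end - start);
        }
        // Render() would bind m_frames[m_current] (and the next frame when
        // interpolating) and pass m_blend to the pixel shader.
    }

private:
    std::vector<ID3D11ShaderResourceView*> m_frames;
    std::vector<float> m_frameTimes;
    bool m_interpolate = false;
    float m_time = 0.0f;
    float m_blend = 0.0f;
    std::size_t m_current = 0;
};
```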


Another component I created was the bar component, which was used to create things like health bars. It works by taking in two pointers: the first points to the maximum value the variable can reach and the second to the value it is currently set to. The width of the bar is then calculated from the current value as a fraction of the maximum.
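A sketch of how such a bar component could be written (again illustrative, assuming float values and the base class sketched above):

```cpp
// Illustrative bar component: stores pointers to the max and current values
// (for example a player's max and current health) so the bar always reflects
// the live values without any explicit "set value" call.
class BarUIComponent : public UIComponent
{
public:
    BarUIComponent(DirectX::XMFLOAT2 position, DirectX::XMFLOAT2 size,
                   const float* maxValue, const float* currentValue)
        : UIComponent(position, size),
          m_maxValue(maxValue), m_currentValue(currentValue),
          m_fullWidth(size.x) {}

    void Update(float deltaTime) override
    {
        // The filled width is the current value as a fraction of the maximum.
        float fraction = (*m_maxValue > 0.0f) ? (*m_currentValue / *m_maxValue) : 0.0f;
        m_size.x = m_fullWidth * fraction;
    }

private:
    const float* m_maxValue;      // Address of the variable's maximum value
    const float* m_currentValue;  // Address of the variable's current value
    float m_fullWidth;            // Bar width when the value is at its maximum
};
```

A health bar could then be created with something like `BarUIComponent healthBar(pos, size, &player.maxHealth, &player.health);` (hypothetical names).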


One of the final components I made was the button component. It takes a function pointer to a void function with no parameters, which is called whenever the button is pressed. It also takes references to two textures: one for the pressed button and another for the unpressed button. On top of this, the colour of the texture when hovered over is calculated automatically.
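A sketch of the button component under the same assumptions (names and the exact hover tint are illustrative):

```cpp
#include <d3d11.h>

// Illustrative button component: a plain function pointer invoked on press,
// plus a pressed and an unpressed texture. The hover colour is derived
// automatically rather than supplied (here it simply darkens the tint).
class ButtonUIComponent : public UIComponent
{
public:
    using Callback = void (*)();  // void function with no parameters

    ButtonUIComponent(DirectX::XMFLOAT2 position, DirectX::XMFLOAT2 size,
                      Callback onPressed,
                      ID3D11ShaderResourceView* pressedTexture,
                      ID3D11ShaderResourceView* unpressedTexture)
        : UIComponent(position, size),
          m_onPressed(onPressed),
          m_pressedTexture(pressedTexture),
          m_unpressedTexture(unpressedTexture) {}

    void OnMouse(float mouseX, float mouseY, bool mouseDown)
    {
        DirectX::XMFLOAT2 pos = GetScreenPosition();
        m_hovered = mouseX >= pos.x && mouseX <= pos.x + m_size.x &&
                    mouseY >= pos.y && mouseY <= pos.y + m_size.y;
        bool pressed = m_hovered && mouseDown;

        if (pressed && !m_pressed && m_onPressed)
            m_onPressed();  // Fire the callback on the press edge.
        m_pressed = pressed;

        // The hover tint is calculated automatically rather than passed in.
        m_tint = m_hovered ? DirectX::XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f)
                           : DirectX::XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f);
        // Render() would draw m_pressed ? m_pressedTexture : m_unpressedTexture,
        // modulated by m_tint.
    }

private:
    Callback m_onPressed = nullptr;
    ID3D11ShaderResourceView* m_pressedTexture = nullptr;
    ID3D11ShaderResourceView* m_unpressedTexture = nullptr;
    DirectX::XMFLOAT4 m_tint = DirectX::XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f);
    bool m_hovered = false;
    bool m_pressed = false;
};
```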


On reflection, an improvement I would have made to this system is to render into the depth stencil buffer wherever any non-transparent UI is located. The scene could then skip shading pixels that the UI covers, saving on pixel shader invocations.
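A hedged sketch of how that depth pre-pass could be set up in D3D11 (the function and variable names are my own):

```cpp
#include <d3d11.h>

// Create a depth-stencil state for an opaque-UI pre-pass: draw the UI quads
// first at the nearest depth with depth writes enabled, then render the scene
// with the usual depth test so pixels covered by UI fail the test early and
// never invoke the scene's pixel shader.
ID3D11DepthStencilState* CreateUIDepthPrepassState(ID3D11Device* device)
{
    D3D11_DEPTH_STENCIL_DESC desc = {};
    desc.DepthEnable    = TRUE;
    desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;  // UI pass writes depth
    desc.DepthFunc      = D3D11_COMPARISON_ALWAYS;     // UI quads always pass

    ID3D11DepthStencilState* state = nullptr;
    device->CreateDepthStencilState(&desc, &state);
    return state;
}

// Usage idea: bind this state and draw the opaque UI at depth 0, then switch
// back to a normal D3D11_COMPARISON_LESS state for the 3D scene pass.
```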



Font Rendering



To create a functioning UI system, text rendering was needed. I implemented this by first creating a font class. The font class takes in two file paths. The first is to the .fnt file, which holds information about each character, such as its width and height. The second is to the texture atlas for the font.
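The character data could be stored roughly like this (a sketch assuming a BMFont-style text .fnt; the field names are illustrative, not the project's code):

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>

// Per-character metrics read from the .fnt file.
struct Glyph
{
    int x, y;             // Top-left of the character in the texture atlas
    int width, height;    // Size of the character in pixels
    int xOffset, yOffset; // Offset to apply when placing the quad
    int xAdvance;         // How far to move the cursor after this character
};

class Font
{
public:
    Font(const std::string& fntPath, const std::string& atlasPath)
    {
        // The texture atlas would be loaded from atlasPath here (omitted).
        FILE* file = std::fopen(fntPath.c_str(), "r");
        if (!file) return;

        char line[512];
        while (std::fgets(line, sizeof(line), file))
        {
            Glyph g{};
            int id = 0;
            if (std::sscanf(line,
                    "char id=%d x=%d y=%d width=%d height=%d "
                    "xoffset=%d yoffset=%d xadvance=%d",
                    &id, &g.x, &g.y, &g.width, &g.height,
                    &g.xOffset, &g.yOffset, &g.xAdvance) == 8)
            {
                m_glyphs[static_cast<char>(id)] = g;
            }
        }
        std::fclose(file);
    }

    const Glyph* GetGlyph(char c) const
    {
        auto it = m_glyphs.find(c);
        return it != m_glyphs.end() ? &it->second : nullptr;
    }

private:
    std::unordered_map<char, Glyph> m_glyphs;
};
```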


Once you've initialised the font, it can be passed into the text UI component along with the sentence you want to be rendered. The component then uses the information from the font to create an appropriately sized screen-space quad for each character. There are two main options at this stage. The first is to set a maximum length for the text. This is done by scaling down the dimensions of each quad by an appropriate amount and then adjusting the positions to ensure there aren't any gaps. The other option is to centre the text. This is done by keeping track of the total width and height of the text while creating the quads. These values can then be halved and applied to the original positions of the quad vertices.
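As a rough illustration of the centring option (using the Font/Glyph sketch above; the names are my own):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// One screen-space quad per character.
struct CharQuad
{
    float x, y, width, height;
};

// Lay the text out left to right from (posX, posY) while tracking its total
// width and height, then shift every quad by half of those totals so the
// given position ends up at the centre of the text.
std::vector<CharQuad> LayoutCentredText(const Font& font, const std::string& text,
                                        float posX, float posY)
{
    std::vector<CharQuad> quads;
    float cursorX = posX;
    float totalWidth = 0.0f;
    float totalHeight = 0.0f;

    for (char c : text)
    {
        const Glyph* g = font.GetGlyph(c);
        if (!g) continue;

        quads.push_back({ cursorX + g->xOffset, posY + g->yOffset,
                          float(g->width), float(g->height) });
        cursorX    += g->xAdvance;
        totalWidth += g->xAdvance;
        totalHeight = std::max(totalHeight, float(g->height));
    }

    // Apply half the total size to every quad so the text is centred on
    // (posX, posY) rather than starting there.
    for (CharQuad& q : quads)
    {
        q.x -= totalWidth * 0.5f;
        q.y -= totalHeight * 0.5f;
    }
    return quads;
}
```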


An alternative method for rendering text is to use signed distance fields. This approach has the benefit that the text can be rescaled to any size without the font textures becoming pixelated (source). Rendering text like this also allows more control over the appearance of the text, e.g. an outline could be added.



Line Rendering



Line rendering was used to represent the bullets fired from the weapon in the game. It worked by passing in the starting position, direction, width and height of the line. From these attributes a quad is calculated in the geometry shader, in such a way that it mimics the properties of billboard rendering. Once the line rendering was set up, I created an object pool class. This ensures the line object doesn't have to be recreated every time, saving the cost of construction and memory allocation. This approach also helps reduce memory fragmentation, as objects aren't needlessly recreated.
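A small sketch of the object pool idea (illustrative; the project's actual class and line object type are not reproduced here):

```cpp
#include <cstddef>
#include <vector>

// Illustrative object pool: all objects are constructed once up front, and
// Acquire()/Release() hand them out and take them back, so firing a bullet
// never triggers a constructor call or a heap allocation.
template <typename T>
class ObjectPool
{
public:
    explicit ObjectPool(std::size_t capacity)
        : m_objects(capacity), m_inUse(capacity, false) {}

    // Returns a free object, or nullptr if the pool is exhausted.
    T* Acquire()
    {
        for (std::size_t i = 0; i < m_objects.size(); ++i)
        {
            if (!m_inUse[i])
            {
                m_inUse[i] = true;
                return &m_objects[i];
            }
        }
        return nullptr;
    }

    // Marks the object as free again; it is reused rather than destroyed.
    void Release(const T* object)
    {
        std::size_t index = static_cast<std::size_t>(object - m_objects.data());
        if (index < m_objects.size())
            m_inUse[index] = false;
    }

private:
    std::vector<T> m_objects;   // All objects allocated once, up front
    std::vector<bool> m_inUse;
};
```

A pool of bullet lines would then be something like `ObjectPool<LineRenderer> bulletPool(64);`, where `LineRenderer` is a hypothetical name for the line object.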


In hindsight, I feel the effect would have looked better if I had calculated a capsule mesh rather than using a billboarded quad. This would have been more expensive to create but would have looked much better.



Working as a team



While working on the project, we had to organise and track who was working on which aspects of the application. We did this by creating a spreadsheet of tasks, each assigned a difficulty and importance value. We kept track of which tasks depended on other tasks so we could see the order in which they needed to be completed. Once this was set up, the work was split and a Trello board was made. The Trello board contained all the different tasks, allowing easy communication of which tasks were currently being worked on. On top of this, we set up weekly scrum meetings to ensure all members of the team were kept up to date on everyone's progress.


Another important aspect of working together was the use of source control. We decided to use GitHub as it was the tool most of the team were already familiar with. Each team member then created their own branch of the repository to work on. When it was time to merge the branches, we checked with each other to ensure merging wouldn't cause any issues. If any merge conflicts occurred, we called whoever wrote the code involved in the conflict to help ensure as few bugs as possible were introduced to the repository.