“Stranded on a forgotten island, fight through enemies with sword & bow and decipher riddles on your quest to re-ignite an ancient beacon to call for aid.”
My Contributions
Screen Space Reflections
I was responsible for upgrading the graphical quality of our engine during this project and implemented several techniques to that end. The first was Screen Space Reflections (SSR) with hierarchical depth traversal for both quality and performance. Combined with a water shader by Erik Hausner, one of our technical artists, this resulted in a very pretty effect.
(SSR in use)
This effect still has some issues to iron out. In the gif above you can see the player being falsely reflected in the water in front of them. At first I tried to make reflections work for all types of meshes (in our case forward, deferred and particles).
To do that I wanted to reuse both the color and depth buffer from the previous frame in order to get a more correct reflection, but that resulted in a lot of jittering whenever the camera moved. The solution we ended up with was to trace against the current frame's deferred depth buffer while sampling the previous frame's color buffer, with the final intersection point reprojected using a velocity buffer. I think it looks really good in most cases.
Horizon Based Ambient Occlusion
The second effect I created for the game was Horizon Based Ambient Occlusion (HBAO), a form of SSAO presented by NVIDIA at SIGGRAPH 2008. I got the method working, but the result doesn't look quite as good as in the paper. I'm not completely sure why, but it's still a good approximation of ambient occlusion, especially compared to the previous method we used, so we kept it regardless.
(HBAO now)
(broken SSAO from before)
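The core idea behind HBAO can be sketched with a simplified, CPU-side version of one march direction (this is an illustration of the technique, not our shader; the flat-tangent assumption and the function name are mine):

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

// March along one screen direction, track the highest "horizon" angle
// seen, and turn it into occlusion. heights are view-space heights of the
// samples relative to the shaded point; step is the distance per sample.
float HorizonOcclusion(const std::vector<float>& heights, float step)
{
    float maxSinH = 0.0f; // sin of the horizon angle; flat surface = 0
    for (size_t i = 0; i < heights.size(); ++i)
    {
        float dist = step * float(i + 1);
        float sinH = heights[i] / std::sqrt(heights[i] * heights[i] + dist * dist);
        maxSinH = std::max(maxSinH, sinH);
    }
    // NVIDIA's formulation subtracts the sin of the surface tangent angle;
    // assuming a flat tangent here, occlusion is just sin(horizon).
    return maxSinH;
}
```

The full effect repeats this for several directions around the pixel, adds per-sample attenuation with distance, and averages the results.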
Cascaded Shadow Mapping
I also implemented Cascaded Shadow Mapping (CSM) with the help of a guest article on LearnOpenGL. This made our shadows much crisper and gave them far more range while using the exact same amount of memory (in our case four 2048x2048 maps instead of a single 4096x4096 shadow map). For cascaded shadow mapping to work you need to partition the view frustum into multiple slices called cascades.
(Cascades Visualized)
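The split distances between cascades are usually computed with the "practical" split scheme, a blend between a logarithmic and a uniform partition. A minimal sketch (the function name and the `lambda` blend parameter are my own; this follows the commonly used scheme rather than our exact engine code):

```cpp
#include <cmath>
#include <vector>

// lambda = 1 gives a fully logarithmic split (tight cascades near the
// camera), lambda = 0 a uniform split; values around 0.5-0.95 are common.
std::vector<float> ComputeCascadeSplits(float nearZ, float farZ,
                                        int cascadeCount, float lambda)
{
    std::vector<float> splits(cascadeCount);
    for (int i = 0; i < cascadeCount; ++i)
    {
        float p = float(i + 1) / float(cascadeCount);
        float logSplit = nearZ * std::pow(farZ / nearZ, p); // logarithmic
        float uniSplit = nearZ + (farZ - nearZ) * p;        // uniform
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}
```

Each cascade then gets its own orthographic light projection fitted to its frustum slice.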
In order for cascades to work I had to add support for Texture2DArrays in our engine, as well as select a texture slice via the SV_RenderTargetArrayIndex semantic in an instanced geometry shader. Framerate dropped a little compared to regular shadow mapping, probably because the engine could no longer cull as aggressively (many previously cullable meshes may now be needed for one of the shadow cascades), but also because we push far more vertices through the rasterizer.
(CSM from far away)
(Previous method from far away)
(CSM close up)
(Previous method close up)
Optimizations
A big part of Keeper's Lights' concept was a semi-open world: a little bit of exploration sprinkled into a mostly linear world. That requires us to render a lot of meshes within a reasonable timeframe. At this point we had the ability to use instanced rendering, but there was no streamlined pipeline for it, so it was largely unused. So I streamlined the pipeline.
The regular mesh renderer sends all non-animated objects and their data to the new MeshInstancer class, which first culls the meshes and then sorts them depending on whether they're going to be rendered in the forward or deferred pipeline (so we minimize overdraw but allow relatively correct transparency). Each mesh's per-instance data is then added to one big instance buffer, which the shaders index into to fetch the relevant data, and the render commands are created.
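The flow above can be sketched roughly like this. All the type and field names here (`MeshEntry`, `InstancedBatches`, the `transparent`/`visible` flags) are hypothetical stand-ins for illustration, not our actual engine types:

```cpp
#include <vector>
#include <cstdint>

struct InstanceData { float world[16]; };

struct MeshEntry
{
    uint32_t     meshId;
    bool         transparent; // transparent meshes go through the forward pass
    bool         visible;     // result of the culling step
    InstanceData data;
};

struct InstancedBatches
{
    std::vector<InstanceData> instanceBuffer; // one big per-instance buffer
    std::vector<uint32_t> deferredIndices;    // offsets into instanceBuffer
    std::vector<uint32_t> forwardIndices;
};

// Cull, split into deferred and forward buckets, and pack per-instance
// data into one large buffer that the draw commands index into.
InstancedBatches BuildBatches(const std::vector<MeshEntry>& meshes)
{
    InstancedBatches out;
    for (const MeshEntry& m : meshes)
    {
        if (!m.visible) continue; // culled
        uint32_t index = uint32_t(out.instanceBuffer.size());
        out.instanceBuffer.push_back(m.data);
        (m.transparent ? out.forwardIndices : out.deferredIndices).push_back(index);
    }
    return out;
}
```

The real version also groups instances by mesh so each group becomes one instanced draw call.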
Another big optimization I made was to render static objects only once when rendering point light shadows, and then each frame copy the cached static depth buffers into another depth buffer where the dynamic objects are rendered. This decreased our point light shadow map rendering times from 17ms to roughly 0.5ms per point light, virtually eliminating the cost of creating the shadow maps for us.
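The caching idea can be sketched like this, with depth buffers modeled as flat float arrays purely for illustration (on the GPU this is a depth-texture copy, and the names here are hypothetical):

```cpp
#include <vector>
#include <utility>
#include <algorithm>

struct DepthBuffer { std::vector<float> depth; };

// objects: (pixel index, depth) pairs; a stand-in for rasterizing a mesh.
void RenderDepth(DepthBuffer& db, const std::vector<std::pair<int, float>>& objects)
{
    for (const auto& [pixel, z] : objects)
        db.depth[pixel] = std::min(db.depth[pixel], z); // standard depth test
}

// Per light, once (or whenever static geometry changes):
//   RenderDepth(staticCache, staticObjects);
// Per frame:
//   DepthBuffer frame = staticCache;   // cheap copy instead of re-render
//   RenderDepth(frame, dynamics);      // only dynamic objects re-rendered
```

The win comes from static geometry vastly outnumbering dynamic geometry: the expensive part is paid once, and each frame only pays for the copy plus the few dynamic meshes.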
Animation System
I also rewrote the animation system using an approach heavily inspired by Unity. Animation Nodes are created to reference an animation type (in our game either a 2D Blend or a SingleAnimation), and transitions between these nodes are defined either explicitly from one node to another, or from Any to a specific node.
Transitions are controlled using Triggers, which open and close the possibility for a transition to happen. If all conditions of a transition are met and its trigger is set, the transition starts, and the previous and new animation nodes may be interpolated between depending on the transition settings. Sometimes we want a transition to be instantaneous, so an ExitTime variable lets the programmers decide how long to interpolate from the previous node to the new one; if the exit time is <= 0, no blend is performed.
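The transition rules can be sketched as follows. This is a hypothetical simplification of the system described above (the struct and function names are mine, and real conditions beyond the trigger are omitted):

```cpp
#include <string>
#include <vector>

struct Transition
{
    std::string from, to, trigger;
    float exitTime; // blend duration; <= 0 means the switch is not blended
};

struct TransitionResult { bool started; std::string next; bool blended; };

// A transition fires when its trigger is set and it starts at the current
// node; "Any" transitions may fire regardless of the current node.
TransitionResult TryTransition(const std::vector<Transition>& transitions,
                               const std::string& current,
                               const std::string& firedTrigger)
{
    for (const Transition& t : transitions)
    {
        if ((t.from == current || t.from == "Any") && t.trigger == firedTrigger)
            return { true, t.to, t.exitTime > 0.0f };
    }
    return { false, current, false };
}
```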
The Problem
It turns out that complex characters with a lot of animations need a lot of transitions, and defining all of them without locking yourself in is very difficult. One of these complex characters' initialization code looks something like this.
Sometimes the systems in our reference materials are built so that they can be edited through a tool, but we're not in the business of making tools, we're in the business of making games. This system had the upside that you could define multiple different behaviors purely in terms of transitions, but the downside of producing monoliths of code: adding a single animation meant creating a lot of transitions to support it correctly.
The Solution
With input and feedback from the gameplay programmers I added a queue system where you can simply queue a transition and it just happens. If you want to play a one-shot animation and then go back into idle, you queue the one-shot animation, then queue idle and tell it to TransitionOnFinish. Bundle this with an Animation Event System written by Måns and you've got yourself a powerful animation system. This simplified the workflow for the other programmers and removed basically all of the boilerplate code, unless you still wanted to use the transition system (like the programmer who wrote the example above did).
There are pros and cons to both systems. The transition system is great if you just want to request a state and let the transitions get there by themselves: setting the GroundedTrigger, for example, could take you through a landing one-shot animation and then back into the grounded 2D node.
But if you'd rather just request a specific animation, you're better off using the queue system combined with setting the animation directly.
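The queue side can be sketched like this; the class and method names are hypothetical, and the TransitionOnFinish behavior is simplified to "advance when the current clip ends":

```cpp
#include <deque>
#include <string>

class AnimationQueue
{
public:
    // Queue a clip to play after everything already queued.
    void Queue(const std::string& name) { mQueue.push_back(name); }

    // Called by the animation player when the current clip ends; returns
    // the next clip to start, or an empty string if nothing is queued.
    std::string OnAnimationFinished()
    {
        if (mQueue.empty()) return {};
        std::string next = mQueue.front();
        mQueue.pop_front();
        return next;
    }

private:
    std::deque<std::string> mQueue;
};
```

The "queue attack, then idle" case from above becomes two `Queue` calls instead of a web of transitions.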
The blending part of this system was probably the easiest: I took a pre-existing InterpolateTransforms function, which lerps two Transforms with a weight, and combined it with playing two animations at once and interpolating the results.
For 2D blending I first had to figure out the barycentric coordinates of the selected triangle. I decided to limit the blend points to front, right, back and left so I wouldn't have to implement a triangulation algorithm; with that assumption I could simply bake the triangles into the code.
After that I simply had to run multiple animations at once, interpolating between them with the weights obtained from the barycentric coordinates. Below you can see the result.
(2D Animation Blending)
What the Duck?
Programmers
Filip Tripkovic
Måns Berggren
Liam Sjöholm
Herman Sjöholm
Erik Edfors
William Sigurdsson
Artists
Stephanie Madsen
Albin Mjörnstedt
Jasper Paavolainen
Emil Hagström
Animators
Oskar Lind
Jesper Walden
Technical Artists
Elina Kans
Erik Hausner
Level Designers
Kristian Sistig
Martin Trosell