Kitchen Roll follows Kam, a young photojournalist who has travelled to a deserted, condemned factory town currently known as Sweetlandia to capture its final image in the public eye before it is levelled to the ground. After being blinded in an unfortunate accident, she must come to terms with her new disability while navigating the dangerous, hostile environment of the factory town to find a way to contact the outside world and escape. Even though the town is deserted, Kam is not alone: while facing some of the more regrettable ghosts of her past, she discovers that the factory is haunted by ghosts of its own, whose stories she comes to uncover as well. The game focuses on a unique visual echolocation mechanic. With little to no dialogue, the story - written by Lychalis, my girlfriend and a skilled writer - is revealed through text-based narration in a style similar to Thomas Was Alone, and by gradually building a picture of the factory town's history from objects scattered across the levels, not unlike BioShock.
Sadly, as this is built in Unity 4, I cannot make a WebGL demo of this game, but I have plenty of videos :)
What's happening here?
The protagonist, Kam, has to come to terms with recently being blinded in an accident. Over the course of the game, she discovers and masters a way of navigating her surroundings via a visual representation of echolocation. I wanted to capture any given frame of the scene and apply it as a texture to a plane, 'snapshotting' the game and visualising the world by pinging the surroundings like sonar, with every sound in the world leaving a visible imprint.
After eight different attempts at finding a solution, I managed to create a system that, by creating render textures, disabling their cameras and applying the resulting 'stale' texture, completely avoids the CPU bottleneck of reading a camera's view back into a texture, instead essentially keeping the frame buffer in VRAM.
Each frame is run through a render-texture camera manager, which holds it in escrow in case it's needed. When a torch requests a frame, that frame's camera is 'locked', meaning it will no longer update while the texture is in use. If multiple torches are requested in the same frame, the texture is simply shared.
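The locking and sharing behaviour described above can be sketched in engine-agnostic terms. This is a minimal Python illustration of the bookkeeping only, not the actual Unity 4 C# implementation; names like `RenderTextureManager` and `request_torch_texture` are my own invention.

```python
# Illustrative sketch of the render-texture camera manager's bookkeeping:
# a pool of render-texture slots, locked when a torch claims one, shared
# between torches requested in the same frame.

class RenderTextureSlot:
    def __init__(self, slot_id):
        self.slot_id = slot_id
        self.locked = False      # True while a torch is using the 'stale' texture
        self.frame_stamp = None  # frame number the texture was frozen on

class RenderTextureManager:
    def __init__(self, pool_size):
        self.pool = [RenderTextureSlot(i) for i in range(pool_size)]
        self.current_frame = 0
        self.shared_this_frame = None  # slot already handed out this frame

    def tick(self):
        """Advance one frame; unlocked slots keep rendering normally."""
        self.current_frame += 1
        self.shared_this_frame = None

    def request_torch_texture(self):
        """A torch asks for a frozen frame. Torches spawned in the same
        frame share one slot; otherwise a free slot is locked, leaving
        its texture 'stale' (the camera stops updating it)."""
        if self.shared_this_frame is not None:
            return self.shared_this_frame   # share within this frame
        for slot in self.pool:
            if not slot.locked:
                slot.locked = True          # camera disabled: texture goes stale
                slot.frame_stamp = self.current_frame
                self.shared_this_frame = slot
                return slot
        return None                         # pool exhausted

    def release(self, slot):
        """Torch faded out: unlock so the camera resumes updating."""
        slot.locked = False
```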
As one large texture is used to reveal different parts of the screen, the UVs of the texture are altered to reflect each torch's position in the world, so even when two torches are on different sides of the screen, they still receive the same texture.
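The UV adjustment is just a mapping from the torch quad's screen-space footprint into the 0..1 texture space of the shared screen capture. A hedged sketch of that arithmetic (the function name and a centred square quad are my assumptions):

```python
# Map a torch quad's screen rectangle into UVs of the shared full-screen
# texture, so the quad reveals exactly the pixels behind it.

def torch_uvs(torch_x, torch_y, torch_size, screen_w, screen_h):
    """Torch centred at (torch_x, torch_y) in pixels, torch_size pixels
    across. Returns (u_min, v_min, u_max, v_max) in 0..1 texture space."""
    half = torch_size / 2.0
    u_min = (torch_x - half) / screen_w
    u_max = (torch_x + half) / screen_w
    v_min = (torch_y - half) / screen_h
    v_max = (torch_y + half) / screen_h
    return (u_min, v_min, u_max, v_max)
```

With this, two torches at opposite corners of the screen sample disjoint regions of the same texture, which is why sharing it costs nothing extra.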
This means that, VRAM and resolution permitting, I can have dozens of torches on screen with minimal impact, as they all use the same texture for a given frame.
The planes on which these textures are applied (called torches, as they allow Kam to ‘see’) are procedurally generated meshes which warp to their surroundings via ray-casting, wrapping round and absorbing into the world.
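The ray-cast warping amounts to pushing each rim vertex of the torch mesh outwards until it either reaches the torch's radius or hits geometry first. This toy Python sketch stands in for Unity's `Physics.Raycast` with a single flat wall; the function name and 2D setup are purely illustrative.

```python
import math

# Illustrative sketch of raycast-clamped vertex placement: each rim vertex
# of a torch disc is pushed out to the full radius unless a ray hits
# geometry first, so the mesh wraps around obstacles. A vertical wall at
# x = wall_x stands in for real scene geometry.

def torch_rim_vertices(cx, cy, radius, segments, wall_x):
    verts = []
    for i in range(segments):
        angle = 2 * math.pi * i / segments
        dx, dy = math.cos(angle), math.sin(angle)
        dist = radius
        # "Raycast": intersect the outgoing ray with the wall, if facing it.
        if dx > 0:
            t = (wall_x - cx) / dx
            if 0 < t < dist:
                dist = t  # clamp the vertex at the hit point
        verts.append((cx + dx * dist, cy + dy * dist))
    return verts
```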
The texture on a torch is revealed using a circular alpha mask, which is grown over time to give a greater sense of energy flowing from each sound wave.
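The growing circular mask can be sketched as a radius that eases out over the torch's lifetime plus a feathered falloff at the edge. The quadratic ease-out and feathering here are my assumptions about the curve, not the project's actual values.

```python
# Sketch of the growing circular alpha mask: the reveal radius eases out
# over the torch's lifetime, so the image blooms quickly then settles.

def mask_radius(t, lifetime, max_radius):
    """Radius of the circular alpha mask at time t (seconds)."""
    p = min(max(t / lifetime, 0.0), 1.0)  # normalised progress, clamped
    ease = 1.0 - (1.0 - p) ** 2           # quadratic ease-out
    return max_radius * ease

def mask_alpha(dist, radius, feather):
    """Alpha for a pixel at `dist` from the torch centre: opaque inside,
    fading to zero across a feathered edge of width `feather`."""
    if dist <= radius - feather:
        return 1.0
    if dist >= radius:
        return 0.0
    return (radius - dist) / feather
```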
These all combine to give a sonar-esque field of view mechanic which conveys how Kam sees while also giving an interesting mechanic to work with when designing levels.
In the process of making this I learnt about 3D modelling and animation, render textures, camera manipulation, ray casting and procedural generation of meshes and textures.