Kato is a 3D adventure game produced in Unreal Engine 4.
The bulk of production occurred during a four-month period in the spring of 2018 as part of a vertical slice course at Uppsala University Campus Gotland.
Production then continued, albeit at a slower pace, through the following summer, a second course in the autumn, and the latter part of spring 2019.
My work focused on the design and implementation of a cutscene system, a quest system, and a dialogue system, all of which could interact with one another so as to give designers easy-to-use tools for producing content within the game itself.
Additionally, I contributed to other general design work, such as drafting the overarching narrative and level design, and assisted with work on other subsystems.
Part of the core design for Kato was to create an engaging exploration adventure in which players had to solve problems creatively within the restrictions of playing as a dog. Some leniency was given to certain interactions, such as the ability to fell trees in a single swing with an axe held in the character's mouth. Combined with a slower-paced, open mission structure in which players could take on and complete quests at their own discretion and in open-ended ways, this resulted in an experience enjoyed by younger children and adults alike.
Coupled with the calm atmosphere created by the sound design, the music composed for the title, and the overall art direction, players' desire to explore, interact, and uncover the mysteries and stories of the game world was strengthened further.
At the time it was difficult to find any built-in system for producing quests in a way that suited Kato, so I was tasked with building one.
The quest system contains two main object types:
A quest manager, which handles the logic for all quests: checking whether a quest sub-goal is complete, whether the quest itself is complete, logging current and completed quests, and verifying that requirements are met before new quests may start.
The base quest object, the parent of all other quest objects in the hierarchy, through which all actual quests are produced. The object contains all data structures a quest uses; as requirements emerged during development, these came to include spawn points for objects, goal points for AI navigation, dialogue index numbers, and certain cutscene data.
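The relationship between the two object types can be sketched as plain C++ outside the engine. This is a minimal illustration, not the actual Blueprint classes; the names (QuestManager, BaseQuest) and fields mirror the description above rather than the shipped implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Sketch of the base quest object: the data a quest carries,
// including prerequisite quests and a dialogue string code.
struct BaseQuest {
    std::string id;
    std::vector<std::string> subGoals;      // goals still open
    std::vector<std::string> requirements;  // quest ids that must be done first
    std::string dialogueIndex;              // short string code for dialogue text
};

// Sketch of the quest manager: logs current and completed quests,
// checks sub-goal completion and start requirements.
class QuestManager {
public:
    bool StartQuest(const BaseQuest& quest) {
        // All requirements must be completed before the quest may start.
        for (const auto& req : quest.requirements)
            if (!IsCompleted(req)) return false;
        active.push_back(quest);
        return true;
    }

    void CompleteSubGoal(const std::string& questId, const std::string& goal) {
        for (std::size_t i = 0; i < active.size(); ++i) {
            if (active[i].id != questId) continue;
            auto& goals = active[i].subGoals;
            goals.erase(std::remove(goals.begin(), goals.end(), goal), goals.end());
            if (goals.empty()) {  // all sub-goals done: the quest itself is complete
                completed.push_back(active[i].id);
                active.erase(active.begin() + i);
            }
            return;
        }
    }

    bool IsCompleted(const std::string& questId) const {
        return std::find(completed.begin(), completed.end(), questId)
               != completed.end();
    }

private:
    std::vector<BaseQuest> active;
    std::vector<std::string> completed;
};
```

A quest that lists another quest as a requirement is simply refused until that quest appears in the completed log, which matches the manager's role described above.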
While tools that allow the production of cutscenes and the other systems listed above already exist within Unreal Engine 4, requests were made for a single working system that allowed as much freedom as possible. The quest objects therefore became the base, as the two secondary systems would maintain themselves and only needed data to work with.
The dialogue system would activate whenever the player interacted with a dialogue interaction object. These objects could include NPCs, special objects, or whatever the product needed.
Interacting with an object would activate the dialogue manager, which would then check what kind of interaction was expected. Interactions ranged from quest-specific dialogue to randomized “bark” interactions to specific contextual interactions.
Once the system had determined what type of interaction was being made, one of two things would occur. If the interaction was specific, whether through quest dialogue, a particular item being used, or some other contextual interaction, the system would determine which dialogue index the interaction was associated with. These indexes existed as short string codes that could be stored in the respective objects and accessed when needed.
Should the interaction be a non-specific one, the system would instead check what kind of interaction was being made. Every interactable object contained a list of possible interactions from which the system would choose at random. These interactions, much like the specific ones, existed as string codes.
These string codes would then be compared against an archive of texts stored in a data table imported into Unreal Engine from an Excel spreadsheet. Once a match was found, the text segments that followed were displayed in the textbox widget.
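The lookup described above can be sketched as a simple code-to-text map, with a random pick standing in for the “bark” case. The class name, the example codes, and the seed-based random choice are all invented for illustration; the real system read from the in-engine data table.

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the dialogue archive: short string codes key into lists of
// text lines, standing in for the data table built from the spreadsheet.
class DialogueArchive {
public:
    void Add(const std::string& code, const std::vector<std::string>& lines) {
        archive[code] = lines;
    }

    // Returns the text segments for a code, or an empty list if no match.
    std::vector<std::string> Lookup(const std::string& code) const {
        auto it = archive.find(code);
        return it != archive.end() ? it->second : std::vector<std::string>{};
    }

private:
    std::map<std::string, std::vector<std::string>> archive;
};

// Non-specific ("bark") interactions: each interactable stores a list of
// possible codes and one is chosen at random. A seed parameter stands in
// for the engine's random-number source here.
std::string PickBark(const std::vector<std::string>& barkCodes, unsigned seed) {
    return barkCodes[seed % barkCodes.size()];
}
```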
During the development of the system, a desire arose to use multiple colored text segments in order to highlight important information for the reader and make goals easier to understand.
However, it was found that the way Unreal Engine rendered text did not allow individual segments to be formatted; it was an “all or nothing” system. Because of this, and because the third-party text formatting tools that did exist were either grossly outdated or cost a significant amount of money for a school project, I took it upon myself to find a different solution.
This resulted in a small text-printing system that made use of multiple cells within the previously mentioned spreadsheet. Each column was designated a color, and any text printed from it would appear inside a new text box formatted to the correct color.
Thus, instead of containing a single multi-line text box, each dialogue prompt with multi-colored text contained multiple text boxes with different formatting, placed so as to simulate a single solid text box.
In its final state the system was used in a fairly simple way, requiring at most five separate text boxes that alternated between black and red text, but it worked as intended, allowing a form of text formatting.
However, once the color formatting was finished, it also highlighted a secondary problem: if the text was too long for the remaining canvas area and had not started from the far left, the text would “pop” down a single line, covering as much of the new line as possible and then, if needed, cutting off later segments to fit.
This meant that any larger text segment that did not start at the far left edge of the canvas would try to refit itself, leaving the preceding text with a potentially wide-open gap for the rest of that specific line.
Turning this “popping” to my advantage, I further developed the system to break at every space, effectively starting a new text box for every word. While far from ideal, the functionality was not process-heavy enough to cause worry, and the result was a text prompt in which text conformed to the given canvas area as much as possible.
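The per-word layout can be sketched as follows. Widths are measured in characters here as a simplification of the engine's pixel measurements, and the structure and function names are invented; the idea is only to show one box per word, colored by its source column, wrapping whenever the next word would overflow the canvas.

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// One text box per word, as in the final system. Color comes from the
// spreadsheet column the word's segment was read from; line and x record
// where the box was laid out on the canvas.
struct WordBox {
    std::string word;
    std::string color;
    int line;  // which line of the prompt the box ended up on
    int x;     // starting column on that line
};

// Lay out colored segments as per-word boxes, starting a new line whenever
// the next word would overflow the canvas width ("popping" down a line).
std::vector<WordBox> LayoutText(
        const std::vector<std::pair<std::string, std::string>>& segments,
        int canvasWidth) {
    std::vector<WordBox> boxes;
    int line = 0, x = 0;
    for (const auto& seg : segments) {          // {text, color} pairs
        std::istringstream words(seg.first);
        std::string word;
        while (words >> word) {
            int width = static_cast<int>(word.size());
            if (x > 0 && x + width > canvasWidth) { ++line; x = 0; }
            boxes.push_back({word, seg.second, line, x});
            x += width + 1;  // +1 for the space between word boxes
        }
    }
    return boxes;
}
```

Because each word is its own box, a long run of text simply continues on the next line instead of leaving the wide gap described above.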
In the end, the system was not used for any heavier formatting, only having to recolor some dialogues at most, but the way it was constructed did allow for more complex formatting if need be, simply requiring a bit more formatting data.
The cutscene system was the final stage, tying the two previous systems together. While Unreal Engine 4 contains its own cutscene system, given how the use of cutscenes was described and how the previously mentioned systems worked, the existing system was deemed potentially inadequate for our specific use, and an in-house solution was made instead. This system used techniques similar to the Unreal one: a separate “cutscene camera” that could be moved and rotated as needed during specific segments of dialogue, along with text boxes and other necessary information displayed on screen, overlaying everything else.
All information for a cutscene would usually originate from a quest object, as quests were where cutscenes were most often needed.
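A quest object's cutscene data can be pictured as a list of steps, each pairing a dialogue code with a camera placement. This is a guess at the shape of that data, with invented field names and a yaw-only rotation for brevity; the real system carried whatever the engine camera needed.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Sketch of the cutscene data a quest object would supply: each step pairs
// a dialogue string code with where the dedicated cutscene camera should sit.
struct CutsceneStep {
    std::string dialogueCode;  // which text to show during this step
    float camX, camY, camZ;    // cutscene camera position
    float camYaw;              // camera rotation (yaw only, for brevity)
};

// Minimal player: advances through the steps one dialogue segment at a
// time, exposing the camera placement for the current step.
class CutscenePlayer {
public:
    explicit CutscenePlayer(std::vector<CutsceneStep> s) : steps(std::move(s)) {}

    bool Finished() const { return index >= steps.size(); }
    const CutsceneStep& Current() const { return steps[index]; }
    void Advance() { if (!Finished()) ++index; }  // e.g. on player input

private:
    std::vector<CutsceneStep> steps;
    std::size_t index = 0;
};
```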
Part of my work also involved designing certain aspects of the level itself, including but not limited to the construction of certain structures, the placement of sounds at certain points of the map, and the production of some “invisible walls” used during the initial quest to contain players new to the game within a “tutorial area”.
Due to the short production time, many of the assets produced were small modular objects; these were then used to build most of the buildings and other “man-made” structures on the map.
Personal contributions include the stables found in one of the four main corners of the map, as well as a “sheep farm” area added during a later stage of development.
The invisible barriers were another system produced to stand alone from the rest while still integrating easily with it. A barrier consisted of two main objects while taking advantage of a third. These objects were:
A trigger volume, as it exists in Unreal Engine. These volumes were placed on the map and acted as the actual barriers, triggering only when the player entered them.
The second object used was the target point actor. These actors contained a Blueprint that acted as the actual logic for the blockades.
During early development the blockades were only needed in the beginning area of the game, disappearing once the initial “tutorial” quest was completed. Later, however, a request was made to have them also work using a special collectible, and the system was reworked to allow both.
Each target point contained a list of trigger volumes. When any of these was triggered, the logic would activate: a dialogue text box specific to that blockade was displayed and control was taken from the player. This allowed the in-engine navigational AI to use the navmesh generated for the map and automatically walk the player character back into the “safe” area before returning control.
If an event occurred in which a blockade was to be removed, the target point would delete its listed trigger volumes, allowing the player entry into the new area.
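The target-point logic above can be sketched as a small class. The class and method names are invented, trigger volumes are reduced to string ids, and the AI walk-back is reduced to a flag; the point is only the shape of the logic: entering a listed volume shows that blockade's dialogue and starts the escort, and removal deletes the whole volume list at once.

```cpp
#include <string>
#include <utility>
#include <vector>

// Sketch of a barrier target point: it owns a list of trigger-volume ids
// and a blockade-specific dialogue code.
class BarrierTargetPoint {
public:
    BarrierTargetPoint(std::vector<std::string> volumes, std::string dialogue)
        : triggerVolumes(std::move(volumes)), dialogueCode(std::move(dialogue)) {}

    // Called when the player enters a trigger volume. Returns the dialogue
    // code to display, or an empty string if the volume is not one of this
    // point's blockades (e.g. the blockade has already been removed).
    std::string OnPlayerEnter(const std::string& volumeId) {
        for (const auto& v : triggerVolumes)
            if (v == volumeId) {
                walkingPlayerBack = true;  // AI escorts the player to safety,
                return dialogueCode;       // then control is handed back
            }
        return "";
    }

    // Removing the blockade deletes all listed volumes at once.
    void RemoveBlockade() { triggerVolumes.clear(); }

    bool IsEscorting() const { return walkingPlayerBack; }

private:
    std::vector<std::string> triggerVolumes;
    std::string dialogueCode;
    bool walkingPlayerBack = false;
};
```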
While I do believe I learned a lot working on Kato, I still feel there is more for me to improve upon: my systems could be better optimized and could allow for even more features. Due to a parting of ways, Kato may not have become a commercially released success story, but it was a fun project, and one that can be looked back on as a time of growth.
Working with others to produce something that brought enjoyment to the people who tried it, and that drew engagement from its audience despite its comparatively small size, taught me a great deal about games production. It also taught me that while one can set out with a specific vision in mind, it is a good investment to build the required systems in a way that allows for flexibility, understandability, and efficiency.
At the time of writing (2019-04-30), Kato is being finished up for a smaller release on itch.io. Should this change over the next few weeks, this text will be updated to reflect the final fate of the product.
Addendum: As of 14 May 2019, Kato has been released on itch.io to a positive reception.