This site showcases the thesis capstone projects for the Full Sail Mobile Gaming Master of Science program. Students completing the program post their end-of-program project self-evaluations here, examining what went right and what went wrong during production.

The site provides examples of all completed projects, regardless of the quality of the work. Final faculty evaluation of your project is separate from your postmortem. This is a place to share student work and start a dialogue with faculty about completed and upcoming projects.

If you are adding a postmortem for a completed project to this blog, please do your best to provide a meaningful meta-level evaluation of your project. This helps students currently in the program better understand the critical points of independent production, game development and design, and project management. The template for the blog content, along with instructions, can be found in the first post from July 2014.

Thank You,
MGMS Faculty

Thursday, July 28, 2016

Capstone Game Post Mortem: The Odyssey


Game Summary:


The Odyssey


2D side-scrolling platformer game


The game is built for Android and iOS platforms.

Revenue model

The game will be released for free.

Development tools/Language

The Odyssey was developed in the Unity3D engine using C#. All assets were custom-made in Illustrator and Photoshop. Unity Analytics is used to gather feedback and usage statistics from end users.

Game audience

My game's audience is players from 17 to 24 years old, the range in which players are most attracted to platform and action games. The game is oriented toward Bartle's Achiever and Explorer player types, who will want to reach the end of every level, learn more about the story of Odysseus, collect all the items and coins that help them on their quest, and reach a higher ranking. Players of The Odyssey will also enjoy titles in this genre such as Super Castlevania IV, Super Mario Bros, etc.


Andres Torres – Concept, Design, Level Design, Software Development
Ainhoa Salas – Content Design
Michael Wong – Main Character Design and Animation
Alejandra Azpurua – Enemy Design and Animation


The Odyssey © 2015 Andres Torres

Sound Bite

“Not even the gods can stop him from coming home”

Executive Summary

Play as Odysseus and help him return to his home, Ithaca, while the gods try to stop him. Odysseus fights through different levels, defeating monsters and overcoming obstacles as he draws nearer to his objective. He will run, jump, strike and shoot arrows while traversing a long and perilous journey set by the very gods he worships. Meet this great Greek champion in a story as it is depicted in one of the great classics of literature, The Odyssey by Homer.


I’m a big fan of Greek mythology, so my goal was to make a game that reflected that entire cultural heritage, along with its stories and characters. I’ve also always loved adventure and platformer games with a good storyline, so the clear choice was to take one of Homer’s great epic poems, The Odyssey, and turn it into an adventure-like side-scrolling platformer. One of the pillars of the game is to teach the story of Odysseus to players who haven’t heard it, and to let those who have relive it, so I didn’t want the mechanics to get in the way. That’s why I chose mechanics that are comfortable for any player with even a little gaming experience, closely resembling classic platformers like Super Mario Bros. Besides, I’ve noticed that in platformer games the environment (enemies, platforms, puzzles, etc.) should be what poses the difficulty, rather than the controls themselves. The narrative was solid, having been proven over the years with every person who has read The Odyssey.

Capstone Scope

The scope of the capstone project was to provide a fully functional game and mechanics, with 2 to 3 levels (including the tutorial), a reward system that lets players earn chapters of the actual Odyssey to read, and a system that allows developers to gather feedback from end users (Unity Analytics).


The idea for my capstone project was to create a fully functional game with 2 or 3 levels, including the tutorial. I succeeded in doing that within the given timeframe and even added features from the wish list, such as multilanguage support and an iOS port. Since one of my goals was to transport the player to a specific time, place and culture, I would have liked to add more elements reinforcing the cultural heritage left by the Greeks, especially NPC characters and enemies that many people readily associate with Greek culture, like the gods (Zeus, Poseidon, Athena, etc.), Medusa, the Chimera, etc. Sadly, this imposed a workload on the designers and me that would have left us without time to complete other functionality, so we created only the assets that were strictly needed to achieve the goal without losing quality. Still, I think the game managed to fulfill its goal. Future efforts will probably go toward adding these elements and building cinematics that will shorten the in-game texts and bring the world to life.

Demo Screencast

The Critique: What went right…

First-time Unity developer

One of my biggest fears when starting my capstone project was that I had no prior experience with Unity, or any other game engine, beyond some small tutorials a long time ago. The capstone would be my first project on an engine of this type, and even in the chosen programming language, C#. This was a big challenge because I knew the learning curve would be steep, especially in the first weeks while I got comfortable with the tools, which would require the most effort. Thankfully, Unity is a great platform and very easy to pick up: its official tutorials are really helpful and its community is superb. So something that seemed like a big challenge at the beginning turned out, with a lot of effort, to be a great advantage. Unity's tools and integrations (services, the collision system, the physics system, the editor, etc.) make it really easy for a beginner to start building what they have in mind and add more complex features to the mix step by step. Luckily, I am a proficient Android and iOS developer, so many of the concepts I had to keep in mind while developing (events, delegation, etc.) were already clear to me; I mostly had to deal with the syntax and the new paradigm of developing by components. I'm not saying that developing in Unity was effortless, but the engine has a lot of advantages and perks that make a developer's work easier than other environments do. This let me complete most of the main functionality ahead of schedule and leave time to work through more complex situations and problems. Now that I know a little more about Unity and C#, I would like to go back and refactor the things I coded at the beginning: the interaction between classes and scripts, better organization, and so on.
The more you code, the more you realize how things should be structured and implemented, and even how components should interact with one another. These are not patterns set or imposed by Unity; in fact, Unity gives a lot of freedom in this area. They are good practices that each developer arrives at individually. At the very least, all of these are lessons learned for future projects.

Assets are a big way to give context

One of the things I'm most proud of in the game is its assets. Each asset has a clear meaning (even without a hint or text) and is quickly and easily understood by players. Choosing to make the assets as vectors was a great decision because it gave the game a neat, polished finish and kept everything crisp even on smaller screens. Every asset was designed with Greek elements and colors in mind that would ultimately reinforce the theme of the game. User feedback showed that even before reading anything, players clearly understood they were in a game related to Greek mythology. One big advantage of working on a 2D plane was that the assets carried their own lighting and shadows, which is a great performance win because the engine doesn't have to render them at runtime. Even though Unity has a great light and shadow rendering system, it still carries some performance cost. This way, performance stays high and the game runs smoothly at a high frame rate (approximately 60 fps). Making the assets ourselves let the designers' imagination, and mine, run free. The result was a set of extremely interesting, original and novel characters (the main character and enemies) that had an awesome impact on players' gaming experience. One of the most rewarding moments was whenever players came and congratulated me on the assets; it felt as if all the hard work and difficulties I went through while developing them meant something.

Keeping the project well managed

The project had to be well managed from the beginning. The project is stored in the Underdog repository as well as my personal repository on Bitbucket. On the client side, I used Sourcetree, a Git client, to keep track of updates to the project. I kept two main branches: "master", which held the stable versions of the game and from which my advisor, or anyone who wanted to test the game, could pull the latest tested version, and a "develop" branch holding all my recent, not-yet-tested work. Occasionally I would create more branches for different states or versions of the project, in case I needed to show (and store) the game in a particular configuration while continuing to work on the develop branch. Near the end of the project, I discovered a great way to work with Git in which you create a branch for each feature you add and then merge it into the develop branch. This keeps the project more organized and lets different people work on different features and then merge them into the project. Sadly, adopting this near the end of the project wouldn't have helped much because, besides my being the only one working on it, the main functionality was already complete and I was just fixing bugs and polishing. I have started working this way on other projects and I completely recommend it to everyone. All this organization was a tremendous help in seizing every moment to work on the project and keeping everything safe and centralized. I changed laptops, moved to another house, worked both at home and at the office, and traveled to several places, and my project was easily reachable everywhere I went. It also helped me return to previous states of the project whenever I knew I had introduced a bug in an earlier commit, or wanted to safely discard everything I had done up to a point.
One suggestion for everyone, learned from a problem it caused me at one point: always make frequent, atomic commits. Your project history will be better separated in Git, and you'll easily find, and revert to, any specific state you want. At one moment, I realized that everything I had done up to then was not what I really wanted and, instead of manually deleting the code, I wanted to revert to a specific commit to avoid introducing new bugs. Unfortunately, I hadn't been making my commits as small and frequent as I should have, so I had to revert to a commit that erased some code I did want, and rewrite it again. With this suggestion, you won't run into that kind of trouble.

A scalable and modular project works well for everyone

I tried to build each feature and component of my game as an individual module, to make the project as modular as I could, and I was quite successful. This later let me add other features freely and without difficulty, like building a structure out of Lego blocks. Player input is handled by a single class that maps the input to the different actions. Because my game had three ways of controlling the character (desktop, gestures and virtual buttons), this made each one easy to integrate: no matter whether the input came from touching the screen, a gesture or a key press, my manager class mapped it to the corresponding action. On the main character, each behavior (walk, jump, attack, shoot arrow, etc.) was controlled by a different script, with a master script that managed and activated each of them. So each time I developed a new behavior, I could focus solely on what I wanted that behavior to do, which kept each script smaller, more readable and simpler, and let the master script activate it when needed. Character animation was controlled by yet another script, itself driven by the master script. Enemies worked the same way: one script controlling their behavior, plus extra scripts for special behaviors if needed. The main enemy script let me change health, speed and damage while the AI remained the same, allowing me to easily add new enemies with different characteristics that internally worked alike. Level elements followed the same pattern. With this, I could turn everything into a prefab and create new levels, enemies, characters, etc., simply by dragging and dropping these elements into the editor, which greatly reduced the work needed to scale the project.
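The input-mapping idea above can be sketched in plain C#. The names below (`IInputSource`, `PlayerController`, the action set, and the queue-based test source) are hypothetical illustrations, not the actual project code: each input source translates its raw events into the same abstract actions, so the master controller never knows whether a touch, a gesture or a key press produced them.

```csharp
using System;
using System.Collections.Generic;

// The shared vocabulary every input scheme maps into.
public enum PlayerAction { MoveLeft, MoveRight, Jump, SwordAttack, ShootArrow }

// Any input scheme (keyboard, virtual buttons, gestures) implements this.
public interface IInputSource
{
    // Returns the actions triggered since the last poll (e.g. this frame).
    IEnumerable<PlayerAction> Poll();
}

// A trivial source that replays queued actions; stands in for a real
// touch/keyboard reader and makes the controller easy to test.
public class QueuedInputSource : IInputSource
{
    public readonly Queue<PlayerAction> Pending = new Queue<PlayerAction>();

    public IEnumerable<PlayerAction> Poll()
    {
        while (Pending.Count > 0)
            yield return Pending.Dequeue();
    }
}

// The "master script": it only consumes abstract actions and activates the
// matching behavior, mirroring the one-script-per-behavior layout described.
public class PlayerController
{
    private readonly IInputSource input;
    private readonly Dictionary<PlayerAction, Action> behaviors;

    public PlayerController(IInputSource input,
                            Dictionary<PlayerAction, Action> behaviors)
    {
        this.input = input;
        this.behaviors = behaviors;
    }

    // Called once per frame: dispatch each pending action to its behavior.
    public void Update()
    {
        foreach (var action in input.Poll())
            if (behaviors.TryGetValue(action, out var behavior))
                behavior();
    }
}
```

With this shape, adding a fourth control scheme is just another `IInputSource` implementation; nothing in the controller or the behavior scripts changes.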

Multilanguage support

One thing I didn't know I would have time for was multilanguage support. I knew it was not a "must have" feature, but I really wanted it because my native language is Spanish and I had decided to make the game in English; adding more languages would help me reach a broader audience. Personally, it would also be a symbol of having completed this program in a language that was not my own and that was a constant challenge and obstacle every month. Thankfully, time allowed me to work on this feature, and it was a success. I didn't want to just hard-code the two languages; instead, I wanted to build a system that lets you easily add any language for the game to support. This had to be built from scratch because Unity doesn't give developers a built-in system for supporting multiple languages. Inspired by how it's done on Android and iOS (my background knowledge helped me a lot here), I created static classes holding the texts in the different languages, plus another class in charge of checking which language was set and returning the correct text. To add another language, a developer just adds the texts in that language to the class, and the system does the rest seamlessly.
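A minimal sketch of that lookup system might look like the following; the class name, keys and sample strings are my own illustration of the approach, not the project's real code. A table per language plus a single accessor gives the "add texts and the system does the rest" behavior, with an English fallback for missing entries.

```csharp
using System.Collections.Generic;

// Hypothetical localization helper: one dictionary per supported language,
// looked up through a single Get() accessor.
public static class Localization
{
    // The currently selected language code (could be read from device settings).
    public static string Language = "en";

    // Adding a language is just adding another entry to this table.
    private static readonly Dictionary<string, Dictionary<string, string>> texts =
        new Dictionary<string, Dictionary<string, string>>
        {
            ["en"] = new Dictionary<string, string> { ["play"] = "Play", ["quit"] = "Quit" },
            ["es"] = new Dictionary<string, string> { ["play"] = "Jugar", ["quit"] = "Salir" },
        };

    // Look up a key in the current language, falling back to English,
    // and finally to the key itself so missing strings are visible in-game.
    public static string Get(string key)
    {
        if (texts.TryGetValue(Language, out var table) && table.TryGetValue(key, out var value))
            return value;
        return texts["en"].TryGetValue(key, out var fallback) ? fallback : key;
    }
}
```

UI scripts then call `Localization.Get("play")` instead of embedding literal strings, so switching languages requires no changes outside this class.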

The Critique: What went wrong…

Lack of thorough planning pushes deadlines and builds pressure

Given the lack of free Greek-themed assets on the Internet, and with custom art being one of the pillars that would let my project fulfill its goals, my main concern when conceiving this project was: how were we going to make all the needed assets in such a short amount of time? I made a quick list of the assets the project would require and concluded that, with one designer and a good timeline, it would be fine. That was a utopian scenario that almost never occurs. Once development of the assets started, I noticed deadlines being pushed again and again while assets were not ready to be implemented in the game. This desynchronized the work between programming and design, which was a key factor in completing the capstone and all its features and objectives on time, so pressure started building up. When I sat down again and analyzed what could be happening, there were several factors I hadn't taken into account. First, being a freelancer, the designer had other projects to work on and couldn't dedicate full time to mine. I also hadn't accounted for the reworks, corrections and redraws needed to make the assets perfect, which took more time than I supposed they would. Each project, and each stage of a project, should have time planned for fixing bugs and mistakes and doing reworks. No project is risk-free, but good planning for these risks, with their respective solutions, lets us be more precise about how long those margins need to be. After this, I hired two more designers and assigned them specific tasks and deadlines. Their workload was lighter than the first designer's initial one, so I had plenty of time to make corrections, give feedback and absorb whatever external delays the designers had. The pressure was quickly released, and the assets came along great and on time, albeit with extra effort.
I now understand the importance of planning and of having backup plans to minimize the risks in a project of this magnitude. For future projects, I will keep a list of every asset needed for each character, NPC, animation, level, element, etc., along with the time it should take (reviewed beforehand by a professional designer), accounting for major redraws and feedback implementation. That way I'll be planning for the worst-case scenario, and every asset finished early is a gain in development time. A good fallback, which I also failed to use, is to have a backup plan in case what you intended doesn't get done. I should have looked for other assets on the Internet that served the project well even though they weren't perfect or aesthetically exactly what I was looking for. That way I could at least have hit my deadlines with a better version of the project, noting that the assets would become more polished in later versions, instead of showing something that didn't quite resemble what I wanted. This is especially important when the project will be shown to financiers, clients or customers who can give your project a green or red light at any stage. It's important to have solutions ready for each and every risk your project may run into, so the process keeps going smoothly no matter the obstacles in the way. What happened to me concerned the assets specifically, but this should be applied to any step that poses a great risk to your project.

Code Organization

As I said earlier, Unity gives you a lot of freedom in how you hierarchize, organize and structure your code and resources, but this freedom can be a double-edged sword if you don't know how to use it. At the beginning, a few folders with roughly corresponding names seemed to do the trick, and the project came along well in its early stages. As soon as I started digging into more complex functionality, I ran into trouble with the hierarchy I had adopted. Folders with different names held the same types of scripts, scripts weren't as easy to find as before, and there was no cohesion between the directories. Little by little, this made my work slower and more tiresome. Finally, I had to pause development and do a thorough refactor of the project's entire structure. I found it easiest to separate everything into 8 main folders: Animation, Artwork, Fonts, Material, Prefabs, Scenes, Scripts and Sounds. "Prefabs" and "Artwork" shared the same subdirectories, organized by where their contents were used (Enemies, Gameplay, Levels, Misc, Odysseus, Screens). "Scenes" was split into Screens and Levels, and "Scripts" was organized by the functionality of the script (behaviors, camera, collectables, collisions, effects, elements, extensions, helpers, input, interfaces, managers, movement, UI). This is entirely a matter of taste, but it can make your development process more effective and easier. In the end, it's better to start from an already proven hierarchy and code structure and then tweak it to fit each developer's needs. In retrospect, I would have structured the code and resources differently; I later noticed there are built-in advantages to keeping assets inside a folder specifically named "Resources".
Still, the structure worked quite well for me, and I certainly won't underestimate the power of the freedom Unity provides when structuring code: "with great power comes great responsibility". On the coding side, coming in with no Unity background made things quite challenging because I had no established good practices. Due to Unity's integration with a broad range of platforms, there are many ways to do the same thing, and the choice depends mostly on the conception the developer has of each component, which in my case was none. I realized it was better to build each component as a single entity with its own attributes, rather than having many scripts controlling the same things and making the component tell the others it interacts with about its status and attributes. This avoids complex and illogical interactions between objects, which happened to me a lot toward the end, and centralizes each component's information within it. Having understood this, I now have the proper background and paradigms to structure any other project correctly from the beginning, instead of doing reworks or realizing I must make big changes to the code in the middle of the project. Still, there is always room for improvement.

Testing is all about big numbers

Although I got a lot of information from user testing, and learned a lot about A/B testing, user testing, feedback, surveys, polls, testing tools, etc., I expected more people to be involved in these tests than actually were; I can't call the final number significant. Testing requires much more effort than I actually put in, especially in finding the right testers in significant quantities, which was my main challenge. Testing was implemented from day 1 of development in every area of the project. I used Google Forms to survey people, which was a great tool because it presented the results as statistics right away. I used talk-aloud methods and personal interviews with players while they played, as well as discussions and questionnaires, to gather more insightful information about their likes, dislikes and needs. I recorded videos of their reactions, inputs and game screens with an awesome platform called Lookback, which also centralized all the videos in a single place no matter where they were recorded (each tester could play on their own device, and Lookback would record the session and store it on the platform). I pushed new versions of the project to every tester's device automatically with HockeyApp, to keep testers updated on what was new and make the testing process easier and smoother. Even with all of this in place, I noticed that it was the same people answering the questions and doing the tests every time. I deeply appreciate these testers' feedback; it helped me a lot in polishing the game and finding solutions to known problems, but I wish I had had a larger and more diverse group of people. I don't think there were more than 20 testers, far fewer than what I had in mind when planning the testing process. I think people don't want to take on more work or make an extra effort, or are simply not interested, if there is no reward in it.
This was something I knew but didn't take into account; I thought more people would be interested in getting involved in testing a brand-new game (I know I would have been). Still, for future projects I'll put more effort into gathering the necessary and right number of testers, and into offering them a reward for their services, at least until they feel so involved in the project that they participate out of pure interest and fun, like my testers did.


Gesture Input Controls

From the start, I wanted to give the player a novel and original way to control the main character. I knew I needed to at least try adding virtual buttons to the game screen for the player to press to interact with Odysseus, but those buttons take up screen real estate, which is very valuable, especially on mobile devices. Sure, I could add an alpha channel to the buttons so the player could see through them, but their fingers would still cover part of the screen most of the time; still, it was a workable solution. The alternative was to devise a gesture input method where the player could control the character by tapping, swiping or dragging a finger across the screen, without visual cues that could interfere with the gaming experience. A/B testing was needed to see which of the two methods end users preferred. In the end, both were implemented as independent features of the game. With the gesture method, touching the right or left side of the bottom half of the screen moved the character right or left respectively, while touching the top half made Odysseus jump. Tapping an enemy triggered a sword attack, while holding the tap on an enemy made Odysseus shoot an arrow. Sadly, this system didn't have the impact I wanted or thought it would have. As easy and intuitive as I found it, user feedback gathered through testing didn't agree. Without a tutorial, it was almost impossible to figure out how Odysseus was controlled (it felt like a series of random movements), and even after being taught, players would often trigger mistakes or actions they didn't intend. Another common comment was that even once players got the hang of the timing and the system, the movements felt imprecise and slow, and precision and speed are key elements of platformer games.
Players felt they were not in complete control of the character with this system and, despite the virtual buttons' drawbacks, preferred to go back to that method, which more closely resembled how they were used to playing on consoles. The majority of people would rather play with the buttons than with the gestures. Although it was a fun and novel way of controlling Odysseus, it lacked the speed, focus and control necessary for platformers. Also, having so many actions and gestures made it very difficult for players to internalize the system. If some actions had been automatic, leaving the main character with fewer actions to control (as in Rayman Fiesta Run), it might have been a better feature. In the end, what this system needed was more testing and polishing.
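For concreteness, the gesture scheme described above can be sketched as a pure mapping function. The names, the normalized [0,1] coordinates (origin at the bottom-left, as in Unity screen space), and the enemy/hold flags are my own illustration of the rules, not the shipped code:

```csharp
// Hypothetical sketch of the gesture rules: top half of the screen jumps,
// the bottom half is split left/right for movement, and taps or holds on an
// enemy trigger the sword or the bow respectively.
public enum GestureAction { MoveLeft, MoveRight, Jump, SwordAttack, ShootArrow }

public static class GestureMapper
{
    // x and y are the touch position normalized to [0,1];
    // onEnemy and isHold come from hit-testing and touch duration.
    public static GestureAction Map(float x, float y, bool onEnemy, bool isHold)
    {
        if (onEnemy)
            return isHold ? GestureAction.ShootArrow : GestureAction.SwordAttack;
        if (y >= 0.5f)                 // top half of the screen: jump
            return GestureAction.Jump;
        return x < 0.5f                // bottom half: split into move left/right
            ? GestureAction.MoveLeft
            : GestureAction.MoveRight;
    }
}
```

Writing the rules as one pure function like this also makes the ambiguity complaint concrete: five actions share one touch surface, so small differences in position or duration flip the result, which is exactly what testers experienced.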

Tutorial as a way of presentation

Another thing that didn't meet my expectations was the game's tutorial. I always envisioned the tutorial as a way of presenting the NPC characters that gave context to Odysseus and his world, and as the best way to help players make a quick connection between the game and Greek mythology. I would have loved for Athena to teach the player how to control Odysseus (since she helps Odysseus through his journey), to give context to the story. It also seemed like a good idea to pause the action to explain to the player how to interact with an object, get past a specific obstacle or perform a certain action, but players expected to tap the message to dismiss it rather than press a specific button. A player pressing buttons in rapid succession could easily miss part of the tutorial. Coding the tutorial was also a real challenge. The levels were built first and the tutorial afterwards, so the tutorial was essentially a sub-layer of a level that could be activated or not. This made it very difficult for the tutorial layer to step into the middle of the various behaviors and element interactions and make them do things completely different from what they were originally coded to do. This is why the tutorial doesn't feel smooth and can feel invasive at times. I think the best solution would have been to build a separate level with all the tutorial functionality integrated (not as a separate layer), adapting the different scripts and behaviors to work specifically for how the tutorial was supposed to behave. This might mean rework and a lot of code repetition, but it would give the tutorial more control and flexibility and let it accomplish the objectives we were after.
Beyond all this, there was a tutorial for the virtual buttons, which were the easiest and most intuitive to understand thanks to their resemblance to console controllers, but there was no tutorial for the gesture input method, which, besides being the least intuitive and hardest to learn, was the one players had the most trouble getting used to. Having a tutorial for one system but not the other is also an inconsistency in the project.

What can we learn from the experience?

After analyzing the entire development process of my capstone project, there are surely two things I would do differently. The first is putting more effort into the planning phase of the project. I had never given technical documents the merit they deserve, but now I know the strong impact they can have on the whole process. They let you spot the risks and weak points of the project and find the pertinent solutions to keep the project rolling even in worst-case scenarios. Planning also lets you choose your team wisely and decide how the workload will be distributed, giving more accurate timelines and better-quality vertical slices or versions of your project, which keeps the interest and support of the investors, customers or clients behind it. We have to be aware that no matter our skills, our team or the precautions we take, things can still go wrong and push our deadlines. To minimize these risks, we need backup plans and solutions ready to implement, and our timeline should include buffers for bug fixes, redraws, rework, etc. Everything done ahead of a deadline can be seen as a gain for the project, and it is better to give long timelines and have the project ready early than to give a short timeline and then have to ask for more time and push deadlines. The second thing I would do differently concerns the coding side of the project. At the beginning, I didn't have any background or notions to guide me through how, what and why I should code one way or another. Discovering the advantages and disadvantages of different approaches made me change the way I coded several times, which introduced inconsistencies in the code as a whole and, in the end, meant rework and an even slower process. Now, with everything I have learned, I know which coding paradigm works best for me and for the needs of the project at hand.
I now understand completely how components should be created and how they should interact, which will ultimately make my code more legible and logical and let development run smoothly, without the steep learning curve that took up so much of my time in the first weeks of this project.


Summing everything up, I am quite pleased and happy with what my capstone project has become. The project differs from the original concept in some ways that make it better, but it stayed largely on track with the conceptual idea. Evidently, due to time constraints, some trade-offs had to be made to show a fully developed vertical slice of what will be the complete game. These trade-offs came, for the most part, on the design side of things. Some elements were not designed or included (NPCs, environment elements, etc.) in order to put more focus and effort into others that had more weight in fulfilling the game's objective, or into completing functionality that was missing or more interesting. From here, I plan to add one or two NPCs (one as a tutorial guide and another who will tell Odysseus parts of the story) and build at least five more levels, which should take me around one month. Then I plan to add the game to my portfolio and release it on the two major mobile platforms (Android and iOS) to start getting more feedback and making a name for myself. This way, I will be able to approach my next game or project much better informed and with better judgment. I have realized that it is better to make several "small" projects to test your abilities and learn the do's and don'ts of a platform and coding style before investing in and releasing a full project that you expect to make a living from and to have a team behind supporting it.


Torres, A. (2016). The Odyssey [Android video game]. Caracas, Venezuela.

Friday, July 22, 2016

Capstone Game Post-mortem: Sensor Dev

Capstone Project Postmortem: Sensor Dev

Project Summary: A plugin for Unity3D to provide access to all sensors available on Android platforms.


Jonathan Burnside


Sensor Dev


Developer tool set


Unity3D with Android as the platform target

Revenue model

The tool set will be sold on the Unity Asset store for $30.00.

Development tools/Language

SensorDev was developed using Android Studio and Unity3D development tools, and was programmed in both Java and C#.

Project audience

The intended audience for this project is game developers and designers wishing to add sensor support, not currently provided by Unity3D, to their games and products. These developers and designers may not possess the skill sets needed to access these sensors, or may simply prefer to purchase a tool set rather than develop, test, and polish the functionality themselves.


SensorDev © 2016 Jonathan Burnside



Sound Bite

Your players have sensors, now your games can too!

Executive Summary

With this pack you will be able to easily integrate and use all Android sensors, including those not supported by default in Unity. The tool set supports all standard Android API sensors, as well as manufacturer-provided sensors not defined by the Android API. It also supports networking sensor values from an Android device to a build running in the Unity Editor, allowing the developer to test and debug quickly without constantly pushing new builds to a device.


Multiple courses in the MGMS program had developers use sensors for game-play elements. These tasks were extremely easy when using the sensors Unity provides, such as the accelerometer and gyro, but considerably more challenging when using a sensor without default support. After researching how to access these sensors, I found it would require multiple development platforms and languages to implement. I believe this additional complexity puts these sensors out of reach for the average Unity3D user, indicating that a tool set providing this support would be a viable product to sell on the Unity Asset Store.


The original, ideal version of my product consisted of two distinct parts: the tool set, and an application demonstrating how the sensors can be used in a game setting. The tool set would provide sensor access in three forms: data direct from Android, data converted to common use cases, and Unity prefab objects that could simply be dropped in via the editor. The demonstration portion would consist of a 3D scene using most of the sensors as part of game-play elements.


The Critique: What went right…

Android Studio

When I started this project, Android Studio was still relatively new, and not all developers had adopted it over other editors such as Eclipse. This made me concerned that Android Studio might be missing support the project would require. I became particularly worried when I realized that Android Studio did not have a built-in path for creating a JAR file (Java Archive (JAR) Files), which I had planned to use as the package for my Unity plugin. This did not prove to be an issue: I was first able to add support for creating the JAR file myself, and later discovered that the AAR files produced by Android Studio's built-in packaging system also work as Unity plugins (Create an Android Library).
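The AAR route can be illustrated with a minimal sketch of an Android library module's `build.gradle`. This is a hypothetical fragment, not the project's actual configuration, and the SDK version numbers are placeholders:

```groovy
// Hypothetical minimal build.gradle for an Android library module.
// Running the module's "assembleRelease" task produces an AAR under
// build/outputs/aar/, which Unity can consume as a plugin.
apply plugin: 'com.android.library'

android {
    compileSdkVersion 23
    defaultConfig {
        minSdkVersion 15
    }
}
```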

Unity Asset Store art

The tile set package I purchased for the demonstration portion of the project, "Village Interiors Kit", worked wonderfully. It had assets for exactly what I had in mind. What I did not expect was that it would also come with a demonstration level already better than anything I could likely create myself. All I had to do was add a roof and some optimizations for mobile platforms.

The starting point in the SensorKeep scene, made from a purchased tile set from the Unity Asset Store.

Light Mapping and Occlusion Culling

While time-consuming to calculate, adding light mapping and occlusion culling to the scene I purchased was all that was needed to get a strong frame rate on even the weakest mobile devices I had available.

A picture of the visualization of the Occlusion Culling system in Unity

Feature creep

During development I realized there was no simple means by which the actual values of the sensors could be debugged, at least not from Unity. Using Unity Remote would give me the values for the sensors Unity supports by default, but my tool set would be running on the local development machine instead of the mobile device, so no sensor data was available. After speaking with my adviser, we agreed that support for debugging these sensors while running a build in the Unity Editor would be very useful, and my second month of development was spent implementing such a system. The resulting system uses Unity's server/client networking to pass sensor data from a device back to a build of the project running in the Unity Editor on the developer's machine. This lets the developer not only debug the sensor values being used, but also do much more development and testing on the local machine without pushing to a device, which takes a fair amount of time from Unity.
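The actual system was built on Unity's server/client networking; purely to illustrate the data flow it describes, here is a hypothetical sketch in plain Java of the kind of message such a bridge might pass: a sensor type id plus its float values, packed to bytes on the device and unpacked on the editor side. The `SensorMessage` class and its wire layout are invented for this example.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of a debugging-bridge message: a sensor type id
// and its float values, encoded to bytes on the device and decoded by
// the editor-side receiver.
public class SensorMessage {
    public final int sensorType;
    public final float[] values;

    public SensorMessage(int sensorType, float[] values) {
        this.sensorType = sensorType;
        this.values = values;
    }

    public byte[] encode() {
        // 4 bytes for the type, 4 for the count, 4 per float value.
        ByteBuffer buf = ByteBuffer.allocate(8 + 4 * values.length);
        buf.putInt(sensorType);
        buf.putInt(values.length);
        for (float v : values) buf.putFloat(v);
        return buf.array();
    }

    public static SensorMessage decode(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        int type = buf.getInt();
        float[] vals = new float[buf.getInt()];
        for (int i = 0; i < vals.length; i++) vals[i] = buf.getFloat();
        return new SensorMessage(type, vals);
    }

    public static void main(String[] args) {
        SensorMessage m = new SensorMessage(4, new float[] {0.1f, 0.2f, 0.3f});
        SensorMessage round = SensorMessage.decode(m.encode());
        System.out.println(round.sensorType + " " + round.values.length);
    }
}
```

Once readings round-trip like this, the editor-side build can feed them into the same code paths a device build would use, which is what makes in-editor debugging of device-only sensors possible.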

External responsibilities and distractions

I can't say I would suggest anyone try to complete a Master's degree while taking care of two children under the age of two and working a full-time job, but these distractions did have at least one positive effect on me. They forced me to think through my development process more than I typically would. I tend to be somewhat of a brute-force developer, preferring to have only a sparse, high-level idea of what I am going to do and figuring out most of the details along the way. The benefit of this bottom-up approach is that I do not spend time designing things that may ultimately not work out. The drawback is that the final results tend to be less modular and organized. While I am still no fan of doing a great deal of pre-production planning, spending so much time with my arms full of children forced me to think through some tasks in greater detail than I would have previously, and the result was that when I finally did get to sit down and code, things tended to work out a little better and go a little faster. While I still think most traditional wisdom on pre-production would have you plan to a level of detail that is almost guaranteed not to match what is ultimately developed, doing more planning than I have in the past likely improved my results on this project.


The Critique: What went wrong…

Networked Debugging system

While this system proved extremely useful once complete, and is one of the features that sets this tool set apart from similar tools, it also took quite a bit of development time. The system itself did not take long to implement to a basic working level, but the feature broke many times as other portions of the tool set were developed or adjusted. When working, the system allowed debugging of sensor data, but I did not have good tools for diagnosing why those values were not being networked when the system broke. I was left to debug with asserts and log files, which are not as informative as an actual debugger.

Demonstration program 

I had many unplanned-for issues with the demonstration portion of this project. Most of the ideas I had at the start of development for using the sensors in generic game-play systems did not pan out. I had also assumed that more ideas for using the sensors would come intuitively while I developed the demonstration portion; this was not the case. There are quite a few ways these sensors can be used in games, but trying to put them all into one homogeneous system did not work out.

The Raw Sensor data scene, while not very exciting, gives an example of how to use every sensor available on a device.

Step Counter & Step Detector

I intended to have an option for translating the player based on either the step counter or step detector sensor. Unfortunately, walking in place is not detected by these sensors, and actually walking around in the real world while staring at a non-AR scene would result in people bumping into things. The step counter and detector can be tricked into believing the user is walking by shaking the device, but this made it impossible to see the screen, rendering the option useless. I added on-screen virtual controls for translation instead.

Proximity Sensor

I had planned to use the proximity sensor as an indication that the player was about to touch something on screen. This could be used to slow down gameplay elements, making it easier to touch fast-moving objects. I quickly realized that the on-screen controls used for translation in the Sensor Keep scene would cause the proximity sensor to trigger constantly. The general concept may work for a game designed with it in mind, but it was not going to work for my intended use.

Light Sensor

I intended to use the light sensor output to scale the brightness of the lighting in the scene. This worked fine on my development machine, but mobile devices required that I bake the lighting in order to get a decent frame rate. I then tried adjusting the size of the particle effects used to represent lights in the scene based on the light sensor. This worked, but the result was rather subtle; when I had testers try the scene, they never noticed it was happening. I considered adding a tutorial-like system that would tell the user which sensors were being used for what, but this felt like a bad path. It was after this sensor that I decided the general idea of using most of the sensors in the same scene was not going to work out as intended, requiring that I redesign or re-evaluate this portion of the program.
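The light-to-particle-size mapping described above could be sketched roughly as follows. This is a hypothetical reconstruction, not the project's code; the 400-lux reference point, the clamping, and the darker-means-larger direction of the mapping are all assumptions made for the example.

```java
// Hypothetical sketch: map an ambient light reading (lux) to a particle
// scale factor, clamped so extreme readings do not distort the effect.
public class LightScale {
    public static float particleScale(float lux, float minScale, float maxScale) {
        // Indoor readings typically span a few lux to a few hundred;
        // 400 lux is an assumed "bright room" reference point.
        float t = Math.max(0f, Math.min(1f, lux / 400f));
        // One plausible mapping: darker room -> larger glow particles.
        return maxScale - t * (maxScale - minScale);
    }

    public static void main(String[] args) {
        System.out.println(particleScale(0f, 0.5f, 2.0f));   // 2.0 (dark room)
        System.out.println(particleScale(400f, 0.5f, 2.0f)); // 0.5 (bright room)
    }
}
```

The subtlety problem the paragraph describes is visible in the numbers: unless `minScale` and `maxScale` are far apart, typical indoor lighting changes move the result very little.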

External Responsibilities and Distractions

Had I been able to consistently dedicate 40 hours a week to this project, it would likely have been further along after two months than it is today. Even 20 hours a week was often beyond what I could dedicate. Luckily, while life did not allow me to sit at my PC and develop as often as I had hoped, it did allow time to think about the project, which, as mentioned, improved my work rate when I was actually able to put time toward development.


Getting external sources to test the demonstration portion of my project has been very easy, and very helpful. On the other hand, I have not been successful in getting anyone with development experience to try out the tool set. I will not be comfortable releasing this product without some external sources testing the tool set portion of the project.



At the completion of my time in the MGMS program there are still three main tasks I would like to complete, and test, before I would be comfortable releasing this tool set to be purchased on the asset store.

The demonstration portion of the project, as previously mentioned, did not turn out as planned. In hindsight, I do not believe my original plan would even have been the best course of action for this tool, mainly because a developer is more likely to want a simple, lightweight example than a larger, hard-to-follow one. I am considering a few different paths for the demo project, but the front-running idea is to use what I have with a much smaller level for the Rotation Vector based character controller. The smaller level will reduce the file size and likely run more smoothly. While this will not demonstrate every feature of the tool set, the raw data scene already provides a simple example of each sensor in use. Any benefit from giving more complex or interesting examples for the remaining sensors is likely overshadowed by the other two tasks left to complete.

In addition to the tool set exposing the sensor data directly as reported by the device, I planned to provide a slightly easier access layer that puts the data into more common use-case formats. I intended to implement this feature while developing the more complex demonstrations, to get a better feel for what the common use cases would be. As the complex demonstrations have been scrapped, this task also was not completed in full. With more experience now, though, I believe the common use cases will mostly be the raw data itself, or the raw data divided by the sensor's data range; providing a simpler coding interface to query these values directly should be very easy for most sensors. The one known exception will be the rotational sensors, such as the Rotation Vector and Game Rotation Vector sensors, but having implemented a player controller prefab using them, I have already determined what their more-complex-to-implement common use case will be.
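The "raw data divided by the data range" convenience described above could look something like this. These are hypothetical helpers, not the actual SensorDev API; the method names and clamping behavior are assumptions for the sketch.

```java
// Hypothetical helpers: map a raw sensor reading into [0, 1] (or
// [-1, 1] for symmetric ranges) using the sensor's reported maximum
// range, clamping readings that exceed it.
public class SensorNormalize {
    // For sensors reporting values in [0, maxRange], e.g. a light sensor.
    public static float normalize(float raw, float maxRange) {
        return Math.max(0f, Math.min(1f, raw / maxRange));
    }

    // For symmetric sensors reporting values in [-maxRange, maxRange],
    // e.g. accelerometer axes.
    public static float normalizeSymmetric(float raw, float maxRange) {
        return Math.max(-1f, Math.min(1f, raw / maxRange));
    }

    public static void main(String[] args) {
        System.out.println(normalize(5000f, 10000f));          // 0.5
        System.out.println(normalizeSymmetric(-19.6f, 19.6f)); // -1.0
    }
}
```

On Android the maximum range would come from the sensor itself (the standard API exposes a per-sensor maximum range), so a wrapper like this needs no hard-coded constants per device.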

The last thing the project needs is completed documentation. After speaking with my advisers and some other developers, I believe good documentation will likely be far more valuable than more in-game examples of the sensors in use. I am using Doxygen to automate the creation of the basic reference documentation, which will let the end user look up all the functionality of the system. In addition to the automated Doxygen documentation, I will need to create tutorials on getting started and on using the more advanced features of the system, such as the networked debugging. The last aspect of documentation I want to produce is video tutorials: one demonstrating the capabilities of the system, one walking a user through the basics of getting started, and a last one showing how to use the networked debugging in a variety of cases.

I believe these last few tasks should not take a huge effort, and with them the result will be better than what I had originally planned for the project. Some of the motivation for the project was lost when Google released a Unity package supporting their Cardboard platform with head tracking, but I do believe the project is still viable and worth finishing and releasing. The main competition for this product, GyroDroid, appears to still be getting sales, but people are complaining about bugs and a lack of support. SensorDev is already a better tool set than GyroDroid, as I provide access to a number of sensors that my competitor does not, as well as a system for debugging. With a cheaper price, and if I can finish the remaining tasks and provide a more bug-free, or at least supported, product, I should be able to make some sales. Simply releasing the product will also demonstrate a number of abilities that could benefit my future career.



Java Archive (JAR) Files. (n.d.). Retrieved July 22, 2016, from http://docs.oracle.com/javase/6/docs/technotes/guides/jar/index.html 

Create an Android Library. (n.d.). Retrieved July 22, 2016, from https://developer.android.com/studio/projects/android-library.html

Asset Store - Village Interiors Kit. (n.d.). Retrieved July 25, 2016, from https://www.assetstore.unity3d.com/en/#!/content/17033

Asset Store - GyroDroid. (n.d.). Retrieved July 22, 2016, from https://www.assetstore.unity3d.com/en/#!/content/1701 

Doxygen: Main Page. (n.d.). Retrieved July 22, 2016, from http://www.stack.nl/~dimitri/doxygen/

Motion Sensors. (n.d.). Retrieved July 22, 2016, from https://developer.android.com/guide/topics/sensors/sensors_motion.html#sensors-motion-rotate