Brace yourselves! What you’ll find below is my attempt at talking about my game in the most palatable way possible as I finish grad school! So, get ready for some academic conversation on artificial intelligence, genetic algorithms, data, thesis writing, accuracy, prediction, all that good stuff! Science is great, and I love it! Welcome to my world.
After two years in pursuit of my Master’s degree in Machine Learning at the University of Helsinki, I’m finally down to the last task: writing a thesis. Despite my degree being unrelated to game development, I decided to inject my passion into the project by applying machine learning to a tower defense game. I hope this article about my research process helps others incorporate machine learning into their games. I’ll do my best, and you can tell me in the comments what you think of it.
So, let’s dive into that.
1. Picking a Machine Learning Framework
Since my programming language of choice is C#, I started by looking at the machine learning frameworks available for .NET. I considered a few options, such as Numl, Encog, and Sharpkit, but the one that stood out to me as the most versatile and comprehensive was the Accord.Net framework.
I started by using the skeleton of a previous project to create a prototype that would generate data approximating what I expected from my game. Specifically, the prototype randomly places between one and ten defense structures on the screen, then generates between five and twenty enemies who start on the left side of the screen and then try to get to the right side while the towers fire at them.
2. Identify all input data
The input data for the model is very important because you want to capture everything that will have an impact on the output, but exclude anything that is unrelated and will only add noise to the results. For my prototype, I recorded the maximum health, speed, location, and form of locomotion (walking/hopping/flying) of every enemy, then combined that with the range, damage, firing speed, location, and area of effect of each of the structures. If an enemy or tower slot was not used, I filled its values with zeroes. *Note: Depending on the model used, you may need to normalize your data; that is, process it so that all values fall between zero and one, for example by dividing each actual value by its maximum possible value.
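To make the idea concrete, here is a minimal, framework-agnostic sketch of the two mechanics described above: zero-padding unused enemy slots to a fixed vector size, and normalizing each value by its maximum. The slot count, feature names, and maxima are illustrative assumptions, not values from the actual game.

```python
# Hypothetical sketch: building a fixed-size, normalized input vector.
# MAX_ENEMIES and the feature names are illustrative assumptions.

MAX_ENEMIES = 20    # assumed cap, matching the prototype's 5-20 enemies
ENEMY_FEATURES = 4  # e.g. max health, speed, x position, locomotion code

def normalize(value, max_value):
    """Scale a raw value into [0, 1] by dividing by its maximum."""
    return value / max_value if max_value else 0.0

def build_enemy_inputs(enemies, maxima):
    """Flatten all enemy stats into one vector, zero-padding unused slots."""
    vector = []
    for i in range(MAX_ENEMIES):
        if i < len(enemies):
            enemy = enemies[i]
            vector.extend(normalize(enemy[key], maxima[key])
                          for key in ("health", "speed", "x", "locomotion"))
        else:
            vector.extend([0.0] * ENEMY_FEATURES)  # unused slot -> zeroes
    return vector
```

Fixing the vector length this way matters because most models expect a constant number of inputs regardless of how many enemies actually spawned in a given wave.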
3. Identify your output data
The output data of your model is the information you’re trying to teach the model to predict. For my game, what I was trying to predict was how effective a group of enemies would be, given the defenses and obstacles they faced. At the end of the wave I’d record the distance achieved by each enemy and mark which of them managed to reach the other side of the screen; these values were the output corresponding to that wave’s input. I saved both the input and output data to a CSV file, which became the sample data for my model.
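A sample like this can be stored as one CSV row per wave: the input vector first, then the per-enemy outputs. The following sketch assumes a layout of normalized distances plus a 0/1 "reached the end" flag per enemy; the column layout is my own illustration, not the game's actual schema.

```python
import csv
import io

# Hypothetical sketch: appending one wave's sample (inputs + outputs)
# as a single CSV row. The column layout is illustrative.

def save_sample(writer, inputs, distances, reached):
    """Write one training sample: wave inputs, then per-enemy outputs."""
    row = list(inputs) + list(distances) + [int(flag) for flag in reached]
    writer.writerow(row)

# Example usage with an in-memory buffer standing in for the CSV file.
buffer = io.StringIO()
writer = csv.writer(buffer)
save_sample(writer, inputs=[0.5, 0.25], distances=[1.0, 0.6], reached=[True, False])
```

Keeping inputs and outputs in the same row makes it trivial to split them back apart later by column index when feeding the data to a model.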
4. Generate sufficient sample data
I let my test run non-stop for a couple of days to generate plenty of sample data. I then wrote a class to read in the CSV data and fed it into an assortment of different Accord.Net models, including identical models with differing initialization parameters, to see which model had the most accurate predictions, and at what level of data their predictions became reliable. This process of model refinement took several days due to the different methods of learning possible (supervised vs. unsupervised) and the range of model parameters and hyperparameters. I’d test a single model with a variety of parameters, choose the best reasonable combination for that model as judged by the time required to train it and the resulting accuracy, and then move on to the next model.
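The evaluation loop behind this comparison can be sketched generically: train each candidate on part of the samples and score it on the held-out remainder with mean squared error. The "models" below are stand-in callables, not actual Accord.Net classes.

```python
# Hypothetical sketch of the model-selection loop: train a candidate on a
# slice of the samples and measure its error on held-out data. train_fn is
# a stand-in for whatever framework model you are testing.

def mean_squared_error(predictions, targets):
    """Average squared difference between predicted and actual outputs."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

def evaluate(train_fn, samples, split=0.8):
    """Train on the first `split` fraction of the data, report MSE on the rest."""
    cut = int(len(samples) * split)
    train, test = samples[:cut], samples[cut:]
    model = train_fn(train)                 # returns a prediction function
    predictions = [model(x) for x, _ in test]
    return mean_squared_error(predictions, [y for _, y in test])
```

Running `evaluate` over increasing amounts of training data is one way to see the point at which a given model's predictions "become reliable," as described above.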
5. Choose a model
At the end of my testing, the model with the most promising results for my particular scenario was the deep belief network, a form of neural network that excels at feature detection. By using a combination of supervised and unsupervised learning, I discovered that its predictions became reliable after about one hundred data samples, with an accuracy of 90%, and continued to improve with increased access to data, to the point where it achieved a mean squared error of 0.045 after 1,500 samples, or an accuracy of 95.5%. In addition to the high accuracy, the learning time was tolerable, with an upper range of thirty seconds when using the maximum number of samples I had created.
6. Implement your model
At this point you should already have a sample of working code for your model of choice, so implementing it in your actual game should be as trivial as swapping the dummy data for real data and applying the predictions it makes to your game logic. The one key thing you need to do, however, is mitigate the time it takes to train your model, which typically grows as the amount of data grows. What I did was, whenever new data needed to be incorporated, I copied the existing model, then assigned a low-priority thread to train the copy. When training was finished, I replaced the existing model with the newly updated copy. I surrounded access to the live model with a mutex to prevent concurrency issues.
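The copy-train-swap pattern described above can be sketched as follows. This is a generic illustration (the `LiveModel` wrapper and the model's `train`/`predict` methods are my own assumptions), not the game's actual C# implementation, and Python's threading lock stands in for the mutex.

```python
import copy
import threading

# Hypothetical sketch of the retrain-and-swap pattern: train a copy of the
# model on a background thread, then swap it in under a lock so gameplay
# code never blocks on training.

class LiveModel:
    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def predict(self, inputs):
        with self._lock:                        # guard access to the live model
            return self._model.predict(inputs)

    def retrain_async(self, new_data):
        snapshot = copy.deepcopy(self._model)   # train a copy, not the live model

        def worker():
            snapshot.train(new_data)            # the slow part, off the main thread
            with self._lock:
                self._model = snapshot          # swap in the updated copy

        thread = threading.Thread(target=worker, daemon=True)
        thread.start()
        return thread
```

Because gameplay code only ever reads the model through the lock, predictions keep flowing from the old copy until the retrained one is swapped in.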
7. Make your game
So now you have a machine learning model selected, the data it needs identified, and you know how you’re going to use the output data, so you can continue making your game. Maybe you decide to use more than one model for different facets of gameplay, or use other aspects of your chosen framework, since you’ll be incorporating the framework into your project to implement the selected model anyway. There are many applications of machine learning, such as dynamically altering the events of the game to maintain tension, using genetic algorithms as the basis for generating in-game content, or analyzing gameplay patterns to maximize ongoing player engagement.
8. Decide on static or dynamic model
Once you’ve finished with your game, you can use it to generate sufficient data to train a model, export it at its current performance level, and then ship the exported model with your game for use without further training. This forgoes the computational requirements of continual training, which is an important consideration for a mobile game; however, if your data isn’t thorough, you may be unintentionally baking in situational blind spots and vulnerabilities that can be exploited by players. You may instead decide to partially train the model as best you can, then allow that model to be continually refreshed with live data generated by the player.
This latter, hybrid approach captures the best of both worlds, because you start out with a reasonably good model that will continue to improve based on the player’s own actions. For my own game, I decided to ship the model with zero data and let it learn as the player learns, so mistakes and blunders would be part of the narrative. In other games this can be a dangerous proposition, depending on what the model is in charge of, but in my case it’s just deciding what enemies to send, so in the worst-case scenario it will just send enemies that aren’t well equipped to tackle the defenses they face.
9. Consider other applications
During my testing of the models, I explored the Accord.Net framework thoroughly and decided to make the enemies a bit more dynamic than they were in my prototype by leveraging the available implementation of genetic algorithms. I knew it wouldn’t have a great impact on the actual gameplay, but I thought it would be worthwhile to ensure each enemy had unique attributes, to make the generated data a bit more “lively” and hopefully improve the enemies’ performance over the course of the game in conjunction with the model’s.
At the creation of a new save file, I generate a random pool of genetics for each enemy type, consisting of eight chromosomes which alter the maximum health, run speed, and elemental resistances in a counter-balanced way, such that increasing one decreases another. The machine learning model then compares the prediction for a base enemy with that for an enemy using specific genetics to determine the relative fitness of those genetics, and the genetics with the highest fitness are more likely to be carried on to the next generation.
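Two pieces of that scheme can be sketched generically: the counter-balanced stat trade-off, and fitness-proportional selection of which genetics survive. The attribute names, gene encoding, and fitness function below are illustrative assumptions, not the game's actual chromosome layout or Accord.Net's API.

```python
import random

# Hypothetical sketch of the genetics step: each gene trades one stat
# against another, and fitter genetics are likelier to be selected as
# parents. Names and encodings are illustrative.

def apply_genetics(base_stats, chromosome):
    """Each gene shifts one stat up and a paired stat down by the same amount."""
    stats = dict(base_stats)
    for gain, loss, amount in chromosome:
        stats[gain] += amount
        stats[loss] -= amount   # counter-balance: the gain costs something
    return stats

def select_parent(population, fitness_fn, rng=random):
    """Fitness-proportional (roulette-wheel) selection of one parent."""
    weights = [fitness_fn(individual) for individual in population]
    return rng.choices(population, weights=weights, k=1)[0]
```

In the game's setup, `fitness_fn` would be derived from the model's predictions: how much better the modified enemy is expected to perform than the base enemy.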
Regarding my Thesis
As previously mentioned, I started this process in pursuit of applying my knowledge towards a game for my thesis. My goal is to see how well my deep belief network performs against players with zero prior training, which means when you first start playing the game the AI knows nothing and it learns to play as you do. As you play, the generated input and output data are sent anonymously to Keen.io for analysis in my thesis, so the more people I get to play the game the more confidently I can state my results.
The Abattoir Intergrade
The game I created is called The Abattoir Intergrade, a tower defense and branching interactive novel. I developed it over the course of five months, including the time spent prototyping and selecting a machine learning model. My friend Yuexin Du, a UI/UX designer who studied at the local Aalto University, designed the enemies, towers, attribute symbols, and landscape objects. I took her designs and animated them using Spriter Pro, wrote a branching dialogue script using Articy:Draft, and then combined all of my results using the FlatRedBall engine and MonoGame framework.
All combined, there are fifteen enemy types, six tower types, ten maps, approximately 2,500 dialogue fragments, and four different game endings, and it takes about an hour to complete a single playthrough. I’d love to spend more time on this project to focus on refining the gameplay, but with my studies concluding it is now time to release my creation into the wild to see how well the model performs against actual players, so I can document the results in my thesis.
The full game is available on Mac/Windows at these sites:
Thanks for reading, and if you decide to contribute to my thesis, I hope you enjoy my game!