Making AI Play Minigames
Description
Michal Ondrejka
June 2024
In this project, I aimed to explore the world of AI. I implemented three basic AI models, each designed to complete
a specific task and reach a minimum required average score. My goal was to strengthen my understanding of this topic.
The minigames come from the Gymnasium library (the maintained fork of OpenAI's Gym). Each environment offers a
different level of complexity.
For each task, I implemented a task-specific neural network with its forward propagation, set up the environment,
and initialized the hyperparameters. The training process consists of interacting with the environment, collecting
experiences, and updating the network with backpropagation to improve the agent's performance over time.
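As a rough illustration of that loop, here is a minimal sketch of the interact-collect-update cycle, assuming PyTorch and Gymnasium; the environment, network, and hyperparameter values are placeholders rather than the exact ones used in the project:

```python
# Minimal sketch of the interact-collect-update loop (illustrative, not the
# project's exact code). Assumes PyTorch and Gymnasium are installed.
import random
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")              # any simple environment works for the skeleton
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

# A tiny placeholder Q-network; each task in the project used its own architecture.
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)              # collected experiences
gamma, epsilon = 0.99, 0.1                 # illustrative hyperparameters

for episode in range(200):
    state, _ = env.reset()
    done = False
    while not done:
        # Interact with the environment (epsilon-greedy action selection).
        if random.random() < epsilon:
            action = env.action_space.sample()
        else:
            with torch.no_grad():
                action = int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated

        # Collect the experience.
        replay.append((state, action, reward, next_state, done))
        state = next_state

        # Update the network with backpropagation on a random mini-batch.
        if len(replay) >= 64:
            batch = random.sample(list(replay), 64)
            s, a, r, s2, d = zip(*batch)
            s = torch.as_tensor(np.array(s), dtype=torch.float32)
            s2 = torch.as_tensor(np.array(s2), dtype=torch.float32)
            a = torch.as_tensor(a)
            r = torch.as_tensor(r, dtype=torch.float32)
            d = torch.as_tensor(d, dtype=torch.float32)
            target = r + gamma * q_net(s2).max(dim=1).values * (1 - d)
            pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, target.detach())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Each task in the project follows this same overall pattern; only the network architecture and the details of the update change.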
First, I implemented an AI for Lunar Lander (1st video). This environment is a classic rocket trajectory optimization
problem. The neural network consists of fully connected layers, and the environment provides the state as a vector of
physical quantities such as position, velocity, and angle.
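For reference, the Lunar Lander observation is an 8-dimensional vector (position, velocity, angle, angular velocity, and leg-contact flags) and there are 4 discrete actions. A fully connected network for it could look like the sketch below; the layer sizes are illustrative rather than the project's exact values, and the environment requires the Box2D extra (e.g. `pip install "gymnasium[box2d]"`):

```python
# Sketch of a fully connected network for Lunar Lander (layer sizes illustrative).
import gymnasium as gym
import torch.nn as nn

env = gym.make("LunarLander-v3")           # "LunarLander-v2" on older Gymnasium versions
obs_dim = env.observation_space.shape[0]   # 8 physical quantities
n_actions = env.action_space.n             # 4 discrete actions (do nothing, fire engines)

policy_net = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_actions),             # one value (or logit) per action
)
```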
The AI playing Pac-Man, on the other hand, uses a convolutional neural network, since the environment provides an
image of the game rather than a state vector. This makes the task more challenging, but the principle is similar;
the training process took a lot longer.
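A convolutional network for such image observations could look like the sketch below. It assumes the common Atari preprocessing of four stacked 84x84 grayscale frames (the classic DQN-style setup); the project's exact layers and preprocessing may differ:

```python
# Sketch of a convolutional network for image observations (illustrative,
# DQN-style architecture). Assumes the input is 4 stacked 84x84 grayscale frames.
import torch.nn as nn

n_actions = 9   # Ms. Pac-Man's minimal action set in the Atari environments

cnn_policy = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
    nn.Linear(512, n_actions),   # one output per action
)
```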
The last model implemented was A3C (Asynchronous Advantage Actor-Critic) for the Kung Fu game (2nd video). This model
trains multiple worker agents simultaneously, each interacting with its own copy of the environment. The state is
again represented as an image, but because the workers gather experience in parallel, training was much faster.
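The sketch below outlines the kind of actor-critic network and per-rollout loss that A3C uses: each worker runs the network on its own rollouts and backpropagates this loss into a shared model. The architecture and coefficients here are illustrative, not the project's exact implementation:

```python
# Illustrative A3C building blocks: an actor-critic network and the loss each
# worker computes on its rollout before applying gradients to a shared model.
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    def __init__(self, n_actions, in_channels=4):
        super().__init__()
        # Shared convolutional trunk for the 84x84 image state.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
        )
        self.policy_head = nn.Linear(256, n_actions)  # action logits (actor)
        self.value_head = nn.Linear(256, 1)           # state-value estimate (critic)

    def forward(self, x):
        h = self.trunk(x)
        return self.policy_head(h), self.value_head(h)

def a3c_loss(logits, values, actions, returns, entropy_coef=0.01):
    # Advantage = observed n-step return minus the critic's estimate.
    advantages = returns - values.squeeze(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantages.detach()).mean()
    value_loss = advantages.pow(2).mean()
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()  # exploration bonus
    return policy_loss + 0.5 * value_loss - entropy_coef * entropy
```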