In this lab you will add a learning component to a simple game-playing program. You will use a Q-learner to try to learn to play the game better.
In this game you are in a simple maze world with a large cross-shaped obstacle in the middle. You, the agent, are shown as a blue circle. Your opponent is shown as a red circle. There are also 4 food items, shown as green circles. You are able to move up, down, left, or right, or to stay still. Your job is to collect the food items without being touched by the opponent. The game ends when the opponent touches you, when you collect all the food, or when neither condition has been met after 300 steps.
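As a reminder of the learning rule you will be implementing, here is a minimal sketch of the tabular one-step Q-learning update. The state encoding, the constants ALPHA and GAMMA, and the class name are illustrative assumptions, not part of the provided game code:

```cpp
#include <vector>

// Illustrative tabular Q-learner; the state indexing and the learning
// parameters below are assumptions for this sketch only.
const int NUM_ACTIONS = 5;   // up, down, left, right, stay
const double ALPHA = 0.1;    // learning rate (assumed value)
const double GAMMA = 0.9;    // discount factor (assumed value)

struct QLearner {
  std::vector<std::vector<double> > q;  // q[state][action]

  QLearner(int num_states)
    : q(num_states, std::vector<double>(NUM_ACTIONS, 0.0)) {}

  // Standard one-step Q-learning update:
  //   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
  void update(int s, int a, double reward, int s_next) {
    double best_next = q[s_next][0];
    for (int a2 = 1; a2 < NUM_ACTIONS; ++a2)
      if (q[s_next][a2] > best_next) best_next = q[s_next][a2];
    q[s][a] += ALPHA * (reward + GAMMA * best_next - q[s][a]);
  }
};
```

How you map the game's board positions (agent, opponent, and food locations) onto a state index is one of the main design decisions you will need to discuss in your report.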
The game program can be downloaded at http://www.d.umn.edu/~rmaclin/cs8751/programs/ssg.tar.Z. To extract the code type the following at the unix prompt:
uncompress ssg.tar.Z
tar xvf ssg.tar.Z
This should create a directory student_simple_game. Enter this directory and you can type make to create the executable file play_game. You can try playing the game by simply typing 'play_game'. You can also see the list of command line arguments by typing 'play_game -help':
You will likely want to use the game in a few different ways:
The code includes hooks to make it possible to add a learned model. You will need to change parts of several files:
Note that you should not change the general structure of the code. Your updated versions of these files will be tested as part of my evaluation of your code (and how well your agent performs will factor into your grade).
Note also that you should (and this is the only time I will recommend this) use the rand() function rather than random() to generate any random numbers when selecting moves.
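Since move selection must use rand(), an epsilon-greedy action chooser might look like the sketch below. The EPSILON value and the flat Q-table row passed in are assumptions for illustration:

```cpp
#include <cstdlib>

const int NUM_ACTIONS = 5;    // up, down, left, right, stay
const double EPSILON = 0.1;   // exploration rate; value is an assumption

// Pick an action given one row of the Q-table, using rand() as the
// assignment requires (not random()).
int choose_action(const double q_row[NUM_ACTIONS]) {
  // With probability EPSILON, explore: pick a uniformly random action.
  if ((double)rand() / RAND_MAX < EPSILON)
    return rand() % NUM_ACTIONS;
  // Otherwise exploit: pick the action with the highest Q value.
  int best = 0;
  for (int a = 1; a < NUM_ACTIONS; ++a)
    if (q_row[a] > q_row[best]) best = a;
  return best;
}
```

You would typically decay EPSILON over training and set it to zero when running the 20 evaluation games with learning turned off.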
You should train an agent to solve this problem. The resulting model file should be named 'learned_model' and should be included as part of your electronic code submission. You will likely want to test your code extensively while constructing this learned model. To document this code, submit the results of your learned model playing a series of at least 20 games (with learning turned off) to demonstrate how well it performs. It would help if you marked on your output the games in which your learned agent was able to collect all the food, and the games in which the agent was touched by the opponent.
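One way to produce the 'learned_model' file is to write the Q-table out as plain text and read it back when the agent runs without learning. The format below is only an assumption; any format your own loading code can parse is fine:

```cpp
#include <fstream>
#include <vector>

// Sketch of saving/loading a Q-table as the 'learned_model' file.
// The plain-text layout (dimensions on the first line, then one row
// of Q values per state) is an assumption for this sketch.
void save_model(const std::vector<std::vector<double> > &q,
                const char *filename) {
  std::ofstream out(filename);
  out << q.size() << " " << q[0].size() << "\n";
  for (size_t s = 0; s < q.size(); ++s) {
    for (size_t a = 0; a < q[s].size(); ++a)
      out << q[s][a] << " ";
    out << "\n";
  }
}

bool load_model(std::vector<std::vector<double> > &q,
                const char *filename) {
  std::ifstream in(filename);
  size_t num_states, num_actions;
  if (!(in >> num_states >> num_actions)) return false;  // missing/bad file
  q.assign(num_states, std::vector<double>(num_actions, 0.0));
  for (size_t s = 0; s < num_states; ++s)
    for (size_t a = 0; a < num_actions; ++a)
      in >> q[s][a];
  return true;
}
```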
Print out a copy of all of your code files. Hand in the printout from the test described above. Also make sure to include the learned model file named 'learned_model' as discussed above in your electronic submission.
You should also write up a short report (at least one page, no more than three) discussing your design decisions in implementing the Q learning model and how your version of the code works.
You must also submit your code electronically. To do this go to the link https://webapps.d.umn.edu/service/webdrop/rmaclin/cs8751-1-f2003/upload.cgi and follow the directions for uploading a file (you can do this multiple times, though it would be helpful if you would tar your files and upload one file archive).
To make your code easier to check and grade please use the following procedure for collecting the code before uploading it:
rmaclin/prog05

Note that the suffix of all C++ code files (not .h files) should be ".cc". Only code files (in C++, only .cc and .h files) and your makefile should be stored in this directory.
tar cf prog05.tar login/prog05