Artificial intelligence player for Carcassonne board game – ideas & thoughts

 

OK, I love playing board games, and among my favorites I have a warm place for Carcassonne. This simple tile-placing game is very tactical, offering the player different courses of action to decide between each turn. I can't help wondering how good an artificial intelligence player I could build for Carcassonne. This post covers some ideas and thoughts I'd like to share on the subject. For simplicity, I only refer to the basic Carcassonne game, without any expansions.

General setup and game play

The game rules for reference.

The basic question is what type of artificial intelligence to use. For example, some board games like chess are well suited to the minimax mechanism, but this game has a dimension of uncertainty (drawing tiles), so minimax won't fit here. However, as in minimax, I believe there should be a mechanism that reads the board and calculates a score for each player. It will not try to predict the other players' moves, but will stick to the maximum score possible in the current turn, leaving the prediction of future moves to be somehow integrated into the score of the current turn.

For this approach the AI should know how to read the board and calculate a score for each player based on the situation on the board, considering the followers placed by each player and the number of remaining followers in the reserve. We will discuss guidelines for calculating this score further below. Once the AI can read the board, each of its turns would consist of the following steps:

  1. Once the new tile is drawn, calculate the different legal placements for it. I think there shouldn't be more than 20 legal placements in any turn.
  2. For each legal placement, calculate all the possibilities for placing a follower. It is usually between 1 and 3 possibilities. Of course, not placing a follower is also a possibility.
  3. For each legal placement and each follower-placing possibility, calculate the score for all the players, including of course the AI player itself, according to the situation on the board.
  4. For each move (tile placement + follower placement), calculate the delta (difference) from the current board score for all the players, and compare the deltas. The important delta is the AI player's, but the other players' deltas must be taken into consideration too, since sometimes a decrease in another player's score is more significant than the AI player's own gain. A sketch of this loop follows the list.
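Here is a minimal sketch of that evaluation loop. Board, Tile, Move, Player, the scoreAllPlayers() function and the OPPONENT_WEIGHT constant are all hypothetical stand-ins for the real game model, not an existing API:

    // Per-turn evaluation loop covering steps 1-4 (hypothetical game model)
    Move chooseMove(Board board, Tile drawnTile, List<Player> players, Player ai) {
        double[] before = scoreAllPlayers(board, players);  // current board score
        Move best = null;
        double bestValue = Double.NEGATIVE_INFINITY;

        // Steps 1 + 2: every legal tile placement combined with every
        // follower option (including "no follower") is one candidate move
        for (Move move : board.legalMoves(drawnTile, ai)) {
            double[] after = scoreAllPlayers(board.apply(move), players);  // step 3

            // Step 4: the AI's own delta minus weighted opponent deltas, so
            // hurting an opponent can outweigh a small personal gain
            double value = 0;
            for (int i = 0; i < players.size(); i++) {
                double delta = after[i] - before[i];
                value += players.get(i).equals(ai) ? delta : -OPPONENT_WEIGHT * delta;
            }
            if (value > bestValue) {
                bestValue = value;
                best = move;
            }
        }
        return best;
    }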

The first 2 steps are merely technical; their aim is to calculate all the possible moves the AI player can perform. Since there are not too many of those (averaging a few dozen, I suppose), there should not be any performance problem in doing it.

The heart of it all is calculating the score in step 3. That score should contain all the logic that makes the AI an intelligent player. OK, so how to calculate this score? The simple answer is that I'm not sure... There are many considerations to take into account, it is quite complicated, and it would require massive testing to reach exact formulas. I do want, however, to share a few guideline ideas for how different situations in the game should affect this score.

Score basics

So what affects the score? Followers on the board will increase the score depending on which feature they are placed on. They contribute to the score proportionally to how many points their feature is worth in the game. Also, if you are the only one with a follower on a feature, the feature is more valuable than if other players have followers on the same feature. A small sketch of this base value follows.
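As a rough illustration, the base value of a follower's feature might look like this. Feature, Player and the SHARED_FACTOR constant are hypothetical names, and the formula is only a sketch:

    // Base value a feature contributes to one player's score (sketch)
    double featureValue(Feature feature, Player player) {
        int mine = feature.followersOf(player);
        int bestOpponent = feature.maxOpponentFollowers(player);
        if (mine == 0 || mine < bestOpponent) {
            return 0;  // minority on the feature: no points expected from it
        }
        // A tied feature is worth less than one we control alone
        double share = (mine > bestOpponent) ? 1.0 : SHARED_FACTOR;
        return feature.currentPoints() * share;
    }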

Completion bonus

Now, when a feature is completed the follower returns to the supply, and some features (cities) are worth more when completed than at the end of the game, so seeing the follower's feature completed is a top priority. To ensure that, the closer to completion a feature is, the higher the value it should get: a completion bonus. Thus a 3-tile city with a single open edge is worth more than a 3-tile city with 3 open edges. Also, a farm never gets this completion bonus, since it is only scored at the end of the game.

Also, there are many situations in the game where a feature gets stuck. That means that very few types of tiles (or none at all) exist in the game that could complete it. In those cases as well there should be a low completion bonus, or none at all. Note that since the system calculates the score for all the players, placing a tile that makes an opponent's feature impossible to complete will affect that feature's completion bonus and thus decrease the opponent's score.

Another point is to consider how many turns the player has until the end of the game. If you add a tile to your city that leaves it with 3 open edges instead of 1, you'd better not do it when there are only 2 turns left for the AI player. Therefore, the completion bonus should shrink as the game approaches its end (the fewer turns the AI player has left). One possible shape for such a bonus is sketched below.
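Putting these guidelines together, a completion bonus could be shaped roughly like this. All names and the exact formula are illustrative assumptions that would have to be tuned by testing:

    // Completion bonus sketch: closeness to completion, feasibility, endgame decay
    double completionBonus(Feature feature, int turnsLeft) {
        if (feature.isFarm()) {
            return 0;  // farms only score at the end of the game
        }
        int openEdges = feature.openEdges();           // >= 1 for an incomplete feature
        int fittingTiles = feature.remainingTilesThatFit();
        if (fittingTiles == 0) {
            return 0;  // the feature is stuck and can never be completed
        }
        double closeness = 1.0 / (1 + openEdges);      // fewer open edges, closer to done
        double endgame = Math.min(1.0, turnsLeft / (double) openEdges);  // can we still finish?
        return feature.completedPoints() * closeness * endgame;
    }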

Managing the number of followers

After playing your first few Carcassonne games, you quickly come to the conclusion that you must carefully manage the number of followers you place, since they tend to run out quite early in the game. This knowledge should be reflected in the score: the more followers the AI has left in its supply after the turn, the more valuable the position. And the relation shouldn't be linear, since the difference between 6 and 7 followers in hand is nothing like the difference between 0 and 1. The AI should play its last follower only when the gain is significant. There should also be consideration of how many turns the AI has left to play: obviously, if you have n followers and only n turns left, you should place a follower every turn. A sketch follows.
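One way to make the reserve value non-linear is a concave curve such as a square root, where losing the last follower costs the most. RESERVE_WEIGHT is a hypothetical tuning constant:

    // Non-linear value of followers kept in the supply (sketch).
    // sqrt is concave: dropping from 1 follower to 0 costs far more
    // than dropping from 7 to 6.
    double reserveValue(int followersInSupply, int turnsLeft) {
        if (followersInSupply >= turnsLeft) {
            return 0;  // enough followers for every remaining turn: no reason to hoard
        }
        return RESERVE_WEIGHT * Math.sqrt(followersInSupply);
    }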

Different tactics with different number of players in the game

When I play Carcassonne, I usually apply different tactics depending on whether I'm playing a 2-player game or a 4-5 player game. In a 2-player game, if you complete a feature where both players have deployed the same number of followers, both players get the points, so in practice it is as if no one scored at all. In a multi-player game the same situation grants those two players the same points, but since the rest of the players received nothing, those points are valuable.

Moreover, since the number of tiles is constant, the more players there are the fewer turns each player gets. Thus, completing a feature alone in a 5-player game is much harder than in a 2-player game, especially if it is a big one. Joining forces with another player may prove more valuable, since there is a better chance that the feature will be completed, and in less time.

So clearly the score is affected. I think the more players there are in the game, the less important it is for the AI player to hold the only follower on a feature. And vice versa: if there are only 2 players, sharing a feature with the opponent should be worth very little, perhaps no score at all. A possible scaling factor is sketched below.
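One simple way to express this is a factor applied to the value of a tied (shared) feature that grows with the number of players. The formula is purely illustrative, not a tested one:

    // Scale a tied feature's value by player count: worthless with 2 players,
    // increasingly valuable with more players at the table
    double sharedFeatureFactor(int playerCount) {
        // 2 players -> 0.0, 3 -> 0.33, 4 -> 0.5, 5 -> 0.6
        return (playerCount - 2) / (double) playerCount;
    }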

Tactics – joining a feature of another player

One of the great things about Carcassonne is the interaction between players. You may try to join an opponent who has built a big city and share the points when it is completed, or maybe a farm with many cities, which is even better! The rules of Carcassonne prohibit placing a follower on a feature that already has a follower on it. The way to do it is to create a similar new feature very close to the subject feature but not directly connected to it, and place your follower there. On the following turns, in order to join the feature, you place tiles so that both features become connected and thus end up with more than one follower. This requires planning and is not always successful even for an intelligent human player, since the appropriate tiles are not always drawn, and the opponent is rarely glad to share the feature.

 

Example of the steps taken by 2 players to join a farm

So how can our AI player implement this? It requires planning at least 2 steps ahead, and sometimes more.

The score! We would have to integrate this logic into the score.

It seems that for each open feature with a follower on it (let us call it the subject open feature), the system should scan the neighboring area for incomplete features of the same type. Each neighboring incomplete feature will add value to the score of the subject open feature. The added value will be proportional to the value of the incomplete feature and inversely proportional to its tile distance from the subject open feature. With this bonus the AI can favor moves that do not contribute in the short run but could bring a lot of profit in the long run; a sketch follows the remarks below.

Remarks -

  • Sometimes a neighboring incomplete feature will have more followers of a single opponent on it than the subject open feature has of ours. In that case, if the features were joined, the player would lose the points to the opponent. This also has to be reflected in the score by some sort of penalty.
  • Farms are always incomplete features.
  • Joining farms will not necessarily add points the way joining cities or roads does, since a city can be shared by more than one field, so the joined farm may not gain any new cities, as shown in the example above. In fact, sometimes joining farms can even reduce the score. Therefore, the value of a neighboring farm should basically be how many cities it adds to the subject farm that the subject farm doesn't already have.
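Putting the proximity bonus and the remarks above together, a sketch might look like this. All names, including nearbyIncompleteFeatures() and the 1/(1+distance) weighting, are hypothetical assumptions:

    // Joining bonus for a subject open feature (sketch)
    double joiningBonus(Feature subject, Board board, Player ai) {
        double bonus = 0;
        for (Feature neighbor : board.nearbyIncompleteFeatures(subject)) {
            double value;
            if (subject.isFarm()) {
                // For farms, only cities the subject farm doesn't already touch count
                value = CITY_VALUE * neighbor.citiesNotSharedWith(subject);
            } else {
                value = neighbor.currentPoints();
            }
            // If the opponent would outnumber us on the merged feature, joining
            // loses us the points, so the bonus becomes a penalty
            if (neighbor.opponentFollowers(ai) > subject.followersOf(ai)) {
                value = -value;
            }
            bonus += value / (1 + board.tileDistance(subject, neighbor));
        }
        return bonus;
    }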

Summary

I enjoyed sharing my basic beliefs about how an AI player could be implemented, and I would be happy to get feedback, whether you have more ideas or would like to share your personal experience. Please feel welcome to comment.


Hadoop binary files processing introduced by image duplicates finder

This post examines the possibility of processing binary files with Hadoop, demonstrating it with an example from the world of images. The image duplicates finder deals with the dilemma of feeding many relatively small files into a Hadoop job, and shows how to read binary data in a map/reduce job. The post is hands-on and includes code snippets that you can copy-paste into your own environment and try for yourself.

Most of the time when looking at map/reduce algorithms, it's all about text! From the famous 'Word Count' example to many other Hadoop jobs, one could believe that the entire Hadoop framework is designed to process text data. Well, while that may be true in a way, Hadoop is also designed for dealing with huge amounts of data that is not text.

In order to test what is being discussed here you will want a Hadoop environment. The quickest way is to use the Cloudera training virtual machine. My friend Philippe Adjiman wrote a nice tutorial for setting up a Hadoop environment with Cloudera.

The Problem:

Let's imagine a system that downloads images from the internet using some search string. Some of the images may be identical (the same image used on different web sites), so we would like the system to filter out the duplicates. And we can expect a huge amount of data; that is what we are using Hadoop for, right?

So the input is a directory full of JPG images, and the output should be a list of the unique images.

The Dilemma:

Hadoop is at its best reading from huge data files while distributing the processing across several nodes. It uses optimizations to have each processing unit give priority to its local data as the input for processing. What will happen if we feed the process a huge amount of relatively small files like images?

Apparently, a lot of overhead. HDFS keeps a relatively large amount of overhead for each file in order to support features like splitting a file across multiple machines and backing each file up with several copies on different machines. Processing time is affected as well: by default each input file is sent to its own map task, so a huge number of files means a huge number of map tasks, and having lots of map tasks can significantly slow the application through the per-task overhead. This can be somewhat relieved by reusing the task's JVM and by using MultiFileInputSplit to pack more than one file into a split.

More about the small files problem, and some conclusions from a project dealing with a huge amount of images.

Using Hadoop Sequence Files

So what should we do in order to deal with a huge amount of images? Use Hadoop sequence files! Those are map files that can inherently be read by map/reduce applications (there is an input format especially for sequence files) and are splittable by map/reduce, so we can have one huge file that serves as the input of many map tasks. By using sequence files we let Hadoop use its advantages: it can split the work into chunks so the processing is parallel, but the chunks are big enough that the process stays efficient.

Since sequence files are map files, the desired format is a Text key holding the HDFS filename and a BytesWritable value containing the image content of the file. Remark - for this example's sake it would be even better to store the processed binary (the MD5 text, as we will see later) instead of the whole bytes of the file. However, I'm interested in showing how to read and write binaries, so we will stick to the general BytesWritable.

That is all very well, but how do we generate a sequence file from the image files? Since there are lots of them, it may prove not an easy task at all. Here are some ways I have thought of:

  1. If possible, the best way is to generate the sequence file as soon as the images are acquired. That way the system is always ready for image processing, without any preprocessing to generate the sequence file. Since sequence files can be appended to, this also works if the images are not retrieved all at once. We can do this using the org.apache.hadoop.io.SequenceFile.Writer class, as demonstrated in this example and in the sketch after this list.
  2. Storing all the images as a tar file and using the tool written by Stuart Sierra to convert it to a sequence file.
  3. Using a map/reduce job to collect all the image files and store them in a sequence file. The map input will be the HDFS paths of those images and the map output will be the sequence file (there is no reduce task in this case). That way we use Hadoop's scalability in case you really have a huge amount of files. This is inspired by a simple tool written by Ryan Balfanz.
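For the first option, appending images with SequenceFile.Writer could look roughly like this. This is only a sketch; the class name and the command-line layout (target sequence file first, then local image paths) are my own assumptions:

    import java.io.File;
    import java.nio.file.Files;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    // Sketch: append image files to a sequence file as they are acquired
    public class ImagesToSequenceFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            SequenceFile.Writer writer = SequenceFile.createWriter(
                    fs, conf, new Path(args[0]), Text.class, BytesWritable.class);
            try {
                for (int i = 1; i < args.length; i++) {
                    // Key: the filename; value: the raw image bytes
                    byte[] data = Files.readAllBytes(new File(args[i]).toPath());
                    writer.append(new Text(args[i]), new BytesWritable(data));
                }
            } finally {
                writer.close();
            }
        }
    }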

I'll demonstrate creating the sequence file according to the third option, since I believe it is the most general way to do it.

Before digging into code, an immediate question comes to mind: if I'm bothering to load all the image bytes into memory in order to write them to a sequence file, won't I suffer from all the problems related to a huge amount of small files that I mentioned above? And if the images are in memory anyway, wouldn't it be best to do the image processing (finding the duplicates) at this stage, rather than creating a sequence file and only then looking for the duplicates?

The answer has two different aspects:

  • If the image processing includes more than just the duplicates finder, and more than a single map/reduce job needs to read all the images, then you would want them in a sequence file for better performance.
  • The preprocessing that converts the images into a sequence file can be done for each image independently, even if we don't have all the images yet. The duplicates finder, however, needs all the images before it can run. Decoupling the job of creating the sequence file from the job of finding the duplicates means the system can, in some designs, start building the sequence file even before all the images are present.

Pre-processing job - generating the sequence file

The target is a sequence file with the filename as the key and the file bytes as a BytesWritable value. The input is a file containing all the images as HDFS filenames. For example:

hdfs://localhost:8022/user/elevy/smallArchiveImages/WonderWoman.jpg

So our map/reduce job will include only a map that reads one file at a time and writes it to a sequence file using SequenceFileOutputFormat. It uses a FileSystem object to open the HDFS file and an FSDataInputStream to read from it. The byte array is written to the context as a BytesWritable. Since the output format of the job is the SequenceFileOutputFormat class, the output of the map is written to a sequence file. The core of such a mapper might look like the sketch below.
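Here is a sketch of that mapper using the new mapreduce API; the class name is hypothetical, and the full version lives in the file linked below:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Sketch: each input line is an HDFS path; the output is (filename, file
    // bytes), which SequenceFileOutputFormat then writes to a sequence file
    public class FileToSequenceMapper
            extends Mapper<LongWritable, Text, Text, BytesWritable> {

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String uri = value.toString().trim();
            Configuration conf = context.getConfiguration();
            FileSystem fs = FileSystem.get(URI.create(uri), conf);

            FSDataInputStream in = null;
            try {
                in = fs.open(new Path(uri));
                // Read the whole file into memory
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                byte[] buffer = new byte[4096];
                int read;
                while ((read = in.read(buffer)) > 0) {
                    bytes.write(buffer, 0, read);
                }
                context.write(new Text(uri), new BytesWritable(bytes.toByteArray()));
            } finally {
                IOUtils.closeStream(in);
            }
        }
    }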

This is demonstrated in BinaryFilesToHadoopSequenceFile.java, which implements this preprocessing job.

Image Duplicates Finder job

Now that we have a sequence file with all the files' binary data, it is time for the actual job that will filter out the duplicates. We will use the MD5 algorithm to generate a unique key for each image and compare these keys in order to find the duplicates. Our map/reduce job will include:

    • A mapper that reads the binary image data of each image file and creates an MD5 representation for it. It passes this data on to the reducer, where the key is the MD5 string and the value is the filename. Thus, all identical images will be grouped together by the Hadoop framework.
      public void map(Text key, BytesWritable value, Context context)
          throws IOException, InterruptedException {
        // Get the MD5 for this specific file
        String md5Str;
        try {
          md5Str = calculateMd5(value.getBytes(), value.getLength());
        } catch (NoSuchAlgorithmException e) {
          e.printStackTrace();
          context.setStatus("Internal error - can't find the algorithm for calculating the md5");
          return;
        }
        Text md5Text = new Text(md5Str);

        // Put the file in the map where the MD5 is the key, so duplicates
        // will be grouped together for the reduce function
        context.write(md5Text, key);
      }

      static String calculateMd5(byte[] imageData, int length) throws NoSuchAlgorithmException {
        // Get the MD5 for this specific data. Note that BytesWritable's backing
        // buffer may be longer than the actual data, so only the first
        // getLength() bytes are hashed
        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update(imageData, 0, length);
        byte[] hash = md.digest();

        // Convert the byte array to a hex string
        StringBuilder hexString = new StringBuilder();
        for (int i = 0; i < hash.length; i++) {
          hexString.append(Integer.toString((hash[i] & 0xff) + 0x100, 16).substring(1));
        }
        return hexString.toString();
      }
    • A very simple reducer that only takes the first filename for each MD5 hash. That way there is only a single filename for each identical image, and all the duplicates are filtered out. The output is a map file where the key is the filename and the value is the MD5 hash.
      public void reduce(Text key, Iterable<Text> values, Context context)
          throws IOException, InterruptedException {
        // The key here is the MD5 hash, while the values are all the image
        // files associated with it. For each MD5 value we need to take only
        // one file (the first)
        Text imageFilePath = null;
        for (Text filePath : values) {
          imageFilePath = filePath;
          break; // only the first one
        }
        // In the result file the key will be the image file path again
        context.write(imageFilePath, key);
      }

All of this is demonstrated in the ImageDuplicatesRemover.java class, which implements the duplicates remover job. A sketch of how the job could be wired together follows.
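For completeness, here is a hedged sketch of a driver for this job. ImageMd5Mapper and ImageDupsReducer are assumed names for the mapper and reducer shown above; the actual names in ImageDuplicatesRemover.java may differ:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Sketch of a driver wiring the duplicates-finder pieces together
    public class ImageDuplicatesRemover {
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "image duplicates remover");
            job.setJarByClass(ImageDuplicatesRemover.class);
            // Read the (filename, bytes) pairs straight from the sequence file
            job.setInputFormatClass(SequenceFileInputFormat.class);
            job.setMapperClass(ImageMd5Mapper.class);
            job.setReducerClass(ImageDupsReducer.class);
            // Both map and reduce emit (Text, Text) pairs
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }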
