Learning to play Operation Lucid from human expert games
Abstract
When the number of possible moves in each state of a game becomes very large, standard methods for computer game playing are no longer feasible. We present two approaches to learning to play such a game from the logs of human expert games. The first approach uses lazy learning to imitate similar states and actions in a database of expert games; this approach produces rather disappointing results. The second approach deals with the complexity of the action space by collapsing the very large set of allowable actions into a small set of categories according to their semantic intent, while the complexity of the state space is handled by representing the states of collections of units by a few relevant features in a location-independent way. The state–action mappings implicit in the expert games are then learnt using neural networks. Experiments compare this approach to methods that have previously been applied to this domain, with encouraging results.
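The lazy-learning approach mentioned above can be read as nearest-neighbour imitation: given a database of (state features, expert action) pairs extracted from expert game logs, the learner replays the action taken in the most similar recorded state. The following minimal sketch illustrates that idea only; the feature vectors, the Euclidean distance metric, and the action categories shown here are hypothetical and are not taken from the paper.

```python
import math

# Hypothetical database of expert examples: each entry pairs a
# location-independent state feature vector with the action category
# the human expert chose in that state.
expert_db = [
    ((0.9, 0.1, 0.5), "attack"),
    ((0.2, 0.8, 0.4), "retreat"),
    ((0.5, 0.5, 0.9), "hold"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def imitate(state):
    """Return the action category recorded for the most similar expert state."""
    return min(expert_db, key=lambda entry: distance(entry[0], state))[1]

print(imitate((0.85, 0.2, 0.45)))  # nearest expert state suggests "attack"
```

The second approach replaces this raw lookup with a learnt mapping: the same feature vectors become inputs to a neural network whose outputs are the small set of semantic action categories, so the system generalises beyond the stored examples instead of merely retrieving them.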