Show simple item record

dc.contributor           Kråkenes, Tony                              en_GB
dc.contributor           Halck, Ole Martin                           en_GB
dc.date.accessioned      2018-10-16T08:04:20Z
dc.date.available        2018-10-16T08:04:20Z
dc.date.issued           2003
dc.identifier            806
dc.identifier.isbn       8246407511                                  en_GB
dc.identifier.other      2002/04041
dc.identifier.uri        http://hdl.handle.net/20.500.12242/1569
dc.description.abstract  When the number of possible moves in each state of a game becomes very high, standard methods for computer game playing are no longer feasible. We present two approaches to learning to play such a game from human expert game logs. The first approach uses lazy learning to imitate similar states and actions in a database of expert games. This approach produces rather disappointing results. The second approach deals with the complexity of the action space by collapsing the very large set of allowable actions into a small set of categories according to their semantic intent, while the complexity of the state space is handled by representing the states of collections of units by a few relevant features in a location-independent way. The state–action mappings implicit in the expert games are then learnt using neural networks. Experiments compare this approach to methods that have previously been applied to this domain, with encouraging results.  en_GB
dc.language.iso          en                                          en_GB
dc.title                 Learning to play Operation Lucid from human expert games  en_GB
dc.subject.keyword       Kunstig intelligens (artificial intelligence)  en_GB
dc.subject.keyword       Maskinlæring (machine learning)             en_GB
dc.subject.keyword       Nevrale nettverk (neural networks)          en_GB
dc.subject.keyword       Stridsmodellering (combat modelling)        en_GB
dc.source.issue          2002/04041                                  en_GB
dc.source.pagenumber     33                                          en_GB
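The second approach in the abstract above — collapsing the action space into a few semantic categories and learning the expert state–action mapping with a neural network over location-independent features — can be sketched roughly as follows. This is a minimal illustration, not the report's actual model: the feature names, action categories, decision thresholds, network sizes, and synthetic "expert" data are all invented here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each group of units is described by a few
# location-independent features (here: relative strength, proximity
# to objective, support level), and every expert action is collapsed
# into one of three semantic categories: advance, hold, retreat.
N_FEATURES, N_HIDDEN, N_CATEGORIES = 3, 8, 3

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic stand-in for expert game logs: strong groups advance (0),
# weak groups retreat (2), the rest hold (1). Thresholds are invented.
X = rng.random((500, N_FEATURES))
y = np.where(X[:, 0] > 0.6, 0, np.where(X[:, 0] < 0.3, 2, 1))

# One-hidden-layer network trained by full-batch gradient descent
# on the cross-entropy between predicted and expert action categories.
W1 = rng.normal(0, 0.5, (N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.5, (N_HIDDEN, N_CATEGORIES))
b2 = np.zeros(N_CATEGORIES)

lr = 1.0
for _ in range(500):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    P = softmax(H @ W2 + b2)           # category probabilities
    G = P.copy()                       # gradient of cross-entropy
    G[np.arange(len(y)), y] -= 1.0     # w.r.t. the logits
    G /= len(y)
    dW2, db2 = H.T @ G, G.sum(axis=0)
    dH = (G @ W2.T) * (1 - H**2)       # backprop through tanh
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def predict(features):
    """Map feature vectors for unit groups to action categories."""
    h = np.tanh(features @ W1 + b1)
    return softmax(h @ W2 + b2).argmax(axis=1)

accuracy = (predict(X) == y).mean()
print(f"training accuracy on synthetic expert data: {accuracy:.2f}")
```

The point of the sketch is the factorisation: the network never sees raw board positions or the full combinatorial action set, only a small feature vector per unit group and a small categorical output, which is what makes supervised learning from expert logs tractable.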

