Neural networks and Markov models for the iterated prisoner's dilemma
The study of strategic interaction among a society of agents is often handled using the machinery of game theory. This research examines how a Markov Decision Process (MDP) model may be applied to an important element of repeated game theory: the iterated prisoner's dilemma. Our study uses a Markovian approach to represent the game in a computer simulation environment. A pure Markov approach is applied to a simplified version of the iterated game, and we then formulate the general game as a partially observable Markov decision process (POMDP). Finally, we use a cellular structure as an environment in which players compete and adapt. We apply both a simple replacement strategy and a cellular neural network to this environment. ©2009 IEEE.
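To make the setting concrete, the following is a minimal sketch of the iterated prisoner's dilemma with memory-one (Markovian) strategies. It assumes the standard payoff values (temptation 5, reward 3, punishment 1, sucker 0); the strategy names and helper functions are illustrative and are not taken from the paper.

```python
# Iterated prisoner's dilemma sketch with the standard payoffs.
# C = cooperate, D = defect.
PAYOFF = {  # (my_move, opp_move) -> (my_score, opp_score)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opp_history):
    """Memory-one strategy: cooperate first, then copy the opponent's last move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    """Unconditional defection, for comparison."""
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Run the iterated game; each strategy sees only the opponent's past moves."""
    hist_a, hist_b = [], []  # opponent moves observed by A and by B
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Tit-for-tat concedes only the first round against a pure defector.
print(play(tit_for_tat, always_defect, 10))  # -> (9, 14)
```

Because tit-for-tat's next move depends only on the opponent's most recent move, it is exactly the kind of memory-one policy a Markov model of the game captures; when a player cannot observe the full state, the POMDP formulation mentioned in the abstract becomes the natural generalization.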
Seiffertt, John, Samuel Mulder, Rohit Dua, and Donald C. Wunsch. "Neural networks and Markov models for the iterated prisoner's dilemma." In 2009 International Joint Conference on Neural Networks, pp. 2860-2866. IEEE, 2009.