000 04316nam a22005775i 4500
001 978-1-4471-5022-0
003 DE-He213
005 20140220082808.0
007 cr nn 008mamaa
008 130228s2013 xxk| s |||| 0|eng d
020 _a9781447150220
_9978-1-4471-5022-0
024 7 _a10.1007/978-1-4471-5022-0
_2doi
050 4 _aTJ212-225
072 7 _aTJFM
_2bicssc
072 7 _aTEC004000
_2bisacsh
082 0 4 _a629.8
_223
100 1 _aChang, Hyeong Soo.
_eauthor.
245 1 0 _aSimulation-Based Algorithms for Markov Decision Processes
_h[electronic resource] /
_cby Hyeong Soo Chang, Jiaqiao Hu, Michael C. Fu, Steven I. Marcus.
250 _a2nd ed. 2013.
264 1 _aLondon :
_bSpringer London :
_bImprint: Springer,
_c2013.
300 _aXVII, 229 p. 29 illus., 1 illus. in color.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aCommunications and Control Engineering,
_x0178-5354
505 0 _aMarkov Decision Processes -- Multi-stage Adaptive Sampling Algorithms -- Population-based Evolutionary Approaches -- Model Reference Adaptive Search -- On-line Control Methods via Simulation -- Game-theoretic Methods via Simulation.
520 _aMarkov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, opening the door to the curse of dimensionality and making practical solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel algorithms and their underpinning theories, and presents an updated account of the topics that have emerged since the publication of the first edition. It includes: innovative material on MDPs, both in constrained settings and with uncertain transition properties; game-theoretic methods for solving MDPs; theories for developing rollout-based algorithms; and details of approximate stochastic annealing, a population-based on-line simulation-based algorithm. The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of tuition and reference for students of control and operations research. The Communications and Control Engineering series reports major technological advances which have potential for great impact in the fields of communication and control. It reflects research in industrial and academic institutions around the world so that the readership can exploit new possibilities as they become available.
650 0 _aEngineering.
650 0 _aComputer software.
650 0 _aSystems theory.
650 0 _aDistribution (Probability theory).
650 0 _aOperations research.
650 1 4 _aEngineering.
650 2 4 _aControl.
650 2 4 _aSystems Theory, Control.
650 2 4 _aProbability Theory and Stochastic Processes.
650 2 4 _aOperations Research, Management Science.
650 2 4 _aAlgorithm Analysis and Problem Complexity.
650 2 4 _aOperations Research/Decision Theory.
700 1 _aHu, Jiaqiao.
_eauthor.
700 1 _aFu, Michael C.
_eauthor.
700 1 _aMarcus, Steven I.
_eauthor.
710 2 _aSpringerLink (Online service)
773 0 _tSpringer eBooks
776 0 8 _iPrinted edition:
_z9781447150213
830 0 _aCommunications and Control Engineering,
_x0178-5354
856 4 0 _uhttp://dx.doi.org/10.1007/978-1-4471-5022-0
912 _aZDB-2-ENG
999 _c94757
_d94757