A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This book reviews such algorithms, beginning with well-known dynamic programming methods for solving MDPs, such as policy iteration and value iteration; then describes approximate dynamic programming methods, such as trajectory-based value iteration; and finally moves to reinforcement learning methods, such as Q-Learning, SARSA, and least-squares policy iteration. It describes the algorithms in a unified framework, giving pseudocode together with memory and iteration complexity analysis for each. Empirical evaluations of these techniques, with four representations across four domains, provide insight into how these algorithms perform with various feature sets in terms of running time and performance. This tutorial provides practical guidance for researchers seeking to extend DP and RL techniques to larger domains through linear value function approximation. The practical algorithms and empirical successes outlined also form a guide for practitioners trying to weigh computational costs, accuracy requirements, and representational concerns. Decision making in large domains will always be challenging, but with the tools presented here this challenge is not insurmountable.
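To make the abstract's subject concrete, here is a minimal sketch of Q-Learning with a linear value-function approximator, one of the techniques the tutorial surveys. The 5-state chain MDP, one-hot feature map, and hyperparameters below are illustrative assumptions, not taken from the book.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 2    # toy chain MDP: action 0 = left, action 1 = right
GAMMA, ALPHA = 0.95, 0.1      # discount factor and learning rate (assumed values)

def features(s, a):
    """One-hot (state, action) features; with this basis, linear Q-Learning
    coincides with the tabular algorithm, which keeps the example easy to check."""
    phi = np.zeros(N_STATES * N_ACTIONS)
    phi[s * N_ACTIONS + a] = 1.0
    return phi

def step(s, a):
    """Deterministic dynamics: reward 1 for moving onto the right end of the chain."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def q(w, s, a):
    """Linear action-value estimate: Q(s, a) = w . phi(s, a)."""
    return w @ features(s, a)

rng = np.random.default_rng(0)
w = np.zeros(N_STATES * N_ACTIONS)    # weight vector of the linear approximator

for _ in range(1000):
    s = int(rng.integers(N_STATES))
    for _ in range(20):
        # Q-Learning is off-policy, so even a uniformly random behavior
        # policy lets us estimate the greedy (optimal) value function.
        a = int(rng.integers(N_ACTIONS))
        s2, r = step(s, a)
        target = r + GAMMA * max(q(w, s2, b) for b in range(N_ACTIONS))
        w += ALPHA * (target - q(w, s, a)) * features(s, a)  # gradient step on the TD error
        s = s2

# The learned greedy policy should move right in every state.
greedy = [int(np.argmax([q(w, s, b) for b in range(N_ACTIONS)]))
          for s in range(N_STATES)]
print(greedy)
```

Swapping the one-hot `features` for a coarser basis (fewer weights than state-action pairs) is exactly the step the tutorial analyzes: the update rule is unchanged, but memory drops from one weight per state-action pair to one per feature, at the cost of approximation error.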
Add this copy of A Tutorial on Linear Function Approximators for Dynamic to cart. $85.03, new condition, Sold by Revaluation Books rated 4.0 out of 5 stars, ships from Exeter, DEVON, UNITED KINGDOM, published 2014 by now publishers Inc.
Add this copy of A Tutorial on Linear Function Approximators for Dynamic to cart. $41.72, very good condition, Sold by Hay-on-Wye Booksellers rated 3.0 out of 5 stars, ships from Hereford, UNITED KINGDOM, published 2013 by now publishers Inc.
Seller's Description:
Very Good. Unused; some outer edges have minor scuffs, the cover has light scratches, and some outer pages have marks from shelf wear, but the book content is in like-new condition. 92 p. Foundations and Trends® in Machine Learning. Intended for a college/higher-education audience.
Add this copy of A Tutorial on Linear Function Approximators for Dynamic to cart. $60.12, new condition, Sold by Ingram Customer Returns Center rated 5.0 out of 5 stars, ships from NV, USA, published 2013 by now publishers Inc.
Add this copy of A Tutorial on Linear Function Approximators for Dynamic to cart. $77.07, new condition, Sold by Ria Christie Books rated 5.0 out of 5 stars, ships from Uxbridge, MIDDLESEX, UNITED KINGDOM, published 2013 by now publishers Inc.
Add this copy of A Tutorial on Linear Function Approximators for Dynamic to cart. $82.37, new condition, Sold by Paperbackshop rated 4.0 out of 5 stars, ships from Bensenville, IL, UNITED STATES, published 2013 by Now Publishers.
Add this copy of A Tutorial on Linear Function Approximators for Dynamic to cart. $91.17, new condition, Sold by Booksplease rated 4.0 out of 5 stars, ships from Southport, MERSEYSIDE, UNITED KINGDOM, published 2013 by now publishers Inc.
Add this copy of A Tutorial on Linear Function Approximators for Dynamic to cart. $111.88, good condition, Sold by Bonita rated 4.0 out of 5 stars, ships from Newport Coast, CA, UNITED STATES, published 2013 by Now Publishers.