Course details
A specialist MAGIC course
Semester
- Autumn 2020
- Monday, October 5th to Friday, December 11th
Hours
- Live lecture hours
- 10
- Recorded lecture hours
- 0
- Total advised study hours
- 40
Timetable
- Tuesdays
- 13:05 - 13:55 (UK)
Description
This course concerns multi-stage decision processes in the framework of optimal control theory, dynamic programming and the Bellman equation, where optimal policies are synthesized based on both immediate and long-term rewards.
However, the computational requirements of dynamic programming techniques can be prohibitive when the policy/state space is overwhelmingly large, the so-called "curse of dimensionality" (Bellman).
In this course we will address this difficulty through different techniques for computing suboptimal solutions to dynamic programming equations.
The lectures will address theoretical, algorithmic, and computational aspects of such techniques.
Prerequisites
Some general knowledge of Dynamical Systems, Iterative Methods, Optimisation, and/or Markov Chains is useful, but not essential.
Syllabus
- Dynamical systems and control essentials.
- Optimization and optimal control: characterization of optimal actions, necessary optimality conditions.
- Optimal feedback control and the Hamilton-Jacobi-Bellman PDE.
- Discrete Dynamic Programming: the Bellman Equation, Value and Policy Iteration Methods.
- Neural Networks: basic architectures, approximation properties, training/optimization.
- Continuous Optimization: deterministic and stochastic gradient descent, variants.
- Approximate Dynamic Programming Algorithms.
- An overview of Deep Reinforcement Learning and case studies: playing Pac-Man, Tetris, and the financial market with reinforcement learning.
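To give a flavour of the Bellman equation and the value iteration method listed above, here is a minimal sketch on a hypothetical toy Markov decision process (the two-state model, rewards, and discount factor below are invented for illustration and are not taken from the course materials):

```python
# Value iteration on a toy 2-state, 2-action MDP (hypothetical example).
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)],           1: [(1, 0.5), (0, 0.5)]},
}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.0}}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}  # initial value function
for _ in range(1000):
    # Bellman update: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    V_new = {
        s: max(
            R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
            for a in P[s]
        )
        for s in P
    }
    if max(abs(V_new[s] - V[s]) for s in P) < 1e-10:
        V = V_new
        break  # fixed point of the Bellman operator reached (up to tolerance)
    V = V_new

# Extract a greedy policy from the converged value function.
policy = {
    s: max(P[s], key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
    for s in P
}
```

Because the Bellman operator is a contraction with factor gamma, the iteration converges geometrically; the "curse of dimensionality" mentioned in the description arises because the state dictionary above must enumerate every state, which becomes infeasible in high dimensions.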
Lecturer
- Dr Dante Kalise
- University
- University of Nottingham
Bibliography
Follow the link for a book to go to the relevant Google Book Search page, where you may be able to preview the book and find links to places where you can buy it.
There is also a link marked 'Find this book in a library'; this sometimes works well, but not always. You will need to enter your location, which will be saved after you enter it for the first time.
- Introduction to the Mathematical Theory of Control (A. Bressan and B. Piccoli)
- Neuro-Dynamic Programming (Dimitri P. Bertsekas and John Tsitsiklis)
- Reinforcement Learning: An Introduction (R. Sutton and A. Barto)
- Deep Reinforcement Learning: A Brief Survey, IEEE Signal Processing Magazine 34(6), 2017 (K. Arulkumaran, M. P. Deisenroth, M. Brundage, A. A. Bharath)
Assessment
The assessment for this course will be released on Monday 11th January 2021 at 00:00 and is due before Sunday 24th January 2021 at 23:59.
This exam has 4 questions, worth 25 marks each. Under normal conditions, it should be completed in 2 hours. All 4 questions must be answered. The minimum passing grade is 50%, that is, the equivalent of 2 fully correct questions. The exam must be returned by January 24th 2021 at 23:59. Please upload your solutions to the MAGIC website, and please be mindful that your answers should be legible. Good luck!
Please note that you are not registered for assessment on this course.
Files
Only current consortium members and subscribers have access to these files.
Please log in to view course materials.
Lectures
Please log in to view lecture recordings.