
Reinforcement Learning Course Materials


Lecture notes, tutorial tasks (including solutions), and online videos for the reinforcement learning course hosted by Paderborn University. The source code for the entire course material is open, and everyone is cordially invited to use it for self-learning (students) or to set up their own course (lecturers).

Lecture Content

  1. Introduction to Reinforcement Learning
  2. Markov Decision Processes
  3. Dynamic Programming
  4. Monte Carlo Methods
  5. Temporal-Difference Learning
  6. n-Step Bootstrapping
  7. Planning and Learning with Tabular Methods
  8. Function Approximation with Supervised Learning
  9. On-Policy Prediction with Function Approximation
  10. Value-Based Control with Function Approximation
  11. Eligibility Traces
  12. Policy Gradient Methods
  13. Further Contemporary RL Algorithms (DDPG, TD3, TRPO, PPO)
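
As a small taste of the tabular value-based methods listed above (temporal-difference learning and value-based control), here is a minimal Q-learning sketch. It is not taken from the course material; the environment name, the Gymnasium-style interface, and all hyperparameters are purely illustrative.

# Minimal tabular Q-learning sketch (illustrative only, not part of the course material).
# Assumes a Gymnasium-style environment with discrete states and actions.
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1")            # any discrete-state, discrete-action task
n_states = env.observation_space.n
n_actions = env.action_space.n

Q = np.zeros((n_states, n_actions))        # tabular action-value estimates
alpha, gamma, epsilon = 0.1, 0.99, 0.1     # illustrative hyperparameters

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy behavior policy
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: bootstrap from the greedy action in the next state
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

The same update rule reappears later in the course with function approximation, where the table Q is replaced by a parameterized estimator.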

Exercise Content

  1. Basics of Python for Scientific Computing
  2. Manually Solving Basic Markov Chain, Reward and Decision Problems
  3. The Beer-Bachelor and Dynamic Programming (the Shortest Beer Problem)
  4. Drive Through the Race Track with Monte Carlo Learning
  5. Drive even Faster Using Temporal-Difference Learning
  6. Stabilizing the Inverted Pendulum by Tabular n-Step Methods
  7. Boosting the Inverted Pendulum by Integrating Learning & Planning (Dyna Framework)
  8. Predicting the Operating Behavior of a Real Electric Drive System with Supervised Learning
  9. Evaluate the Performance of Given Agents in the Mountain Car Problem Using Function Approximation
  10. Escape from the Mountain Car Valley Using Semi-Gradient Sarsa & Least Square Policy Iteration
  11. Improve the Value-Based Mountain Car Solution Using Sarsa(λ)
  12. Landing on the Moon with REINFORCE and Actor-Critic Methods
  13. Shoot for the Moon with DDPG & PPO
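
Most of the exercises above revolve around interacting with pre-packaged environments such as Mountain Car or Lunar Lander. A minimal interaction loop might look as follows; this sketch assumes the current Gymnasium API and a random agent, whereas the original exercises may use the older OpenAI Gym interface provided in the accompanying notebooks.

# Minimal interaction loop with a pre-packaged environment (MountainCar-v0 here),
# illustrative only and assuming the Gymnasium API.
import gymnasium as gym

env = gym.make("MountainCar-v0", render_mode="human")
observation, info = env.reset(seed=42)

for _ in range(200):
    action = env.action_space.sample()     # random agent as a placeholder for a learned policy
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()

A policy learned in the exercises would simply replace the random action selection.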

Citation

Please use the following BibTeX entry for citing us:

@Misc{KSWW2020,
  author = {Wilhelm Kirchgässner and Maximilian Schenke and Oliver Wallscheid and Daniel Weber},
  note   = {Paderborn University},
  title  = {Reinforcement Learning Course Material},
  year   = {2020},
  url    = {https://github.com/upb-lea/reinforcement_learning_course_materials},
}

Contributions

We highly appreciate any feedback and input on the course material, e.g.:

  • typos or content-related discussions (please raise an issue)
  • adding new content (please provide a pull request)

If you would like to contribute to the repository to a larger extent, please do not hesitate to contact us directly.

Credits

The lecture notes are inspired by

The tutorials partly use pre-packaged environments from
