Adaptive reinforcement learning algorithm for the Probabilistic Reversal Learning (PRL) task. The goal of this project was to understand how an agent's learning rate may change with the volatility of the environment. In the PRL task, volatility can be defined as the rate at which the preferred stimulus reverses. We used a meta-agent to learn the learning rate of the agent that interacts with the PRL task.
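The original code is not included here, but the setup described above can be sketched as follows. The two-armed PRL environment, the delta-rule agent, and the Pearce-Hall-style learning-rate update (standing in for the meta-agent) are all my assumptions; the actual implementation used in the project may differ.

```python
import random

def run_prl(n_trials=2000, reversal_every=100, alpha=0.1, adapt=False,
            eta=0.1, seed=0):
    """Delta-rule agent on a two-armed probabilistic reversal learning task.

    The 'good' stimulus pays reward with p=0.8, the other with p=0.2;
    their roles swap every `reversal_every` trials, so a shorter
    `reversal_every` means a more volatile environment. If `adapt` is
    True, the learning rate itself is updated from the magnitude of
    recent prediction errors (a Pearce-Hall-style rule, assumed here
    as a simple stand-in for the meta-agent). Returns the fraction of
    trials on which the preferred stimulus was chosen.
    """
    rng = random.Random(seed)
    values = [0.5, 0.5]   # value estimates for the two stimuli
    good = 0              # index of the currently preferred stimulus
    correct = 0
    for t in range(n_trials):
        if t > 0 and t % reversal_every == 0:
            good = 1 - good                           # reversal event
        choice = 0 if values[0] >= values[1] else 1   # greedy choice
        p_reward = 0.8 if choice == good else 0.2
        reward = 1.0 if rng.random() < p_reward else 0.0
        delta = reward - values[choice]               # prediction error
        if adapt:
            # large surprises push the learning rate up, small ones down
            alpha += eta * (abs(delta) - alpha)
        values[choice] += alpha * delta               # delta-rule update
        correct += (choice == good)
    return correct / n_trials
```

In a volatile environment (small `reversal_every`) a larger effective learning rate recovers faster after each reversal, while in a stable environment a smaller one averages out reward noise; the meta-level update above is one simple way to trade these off automatically.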
Developed during my internship in Wulfram Gerstner's Laboratory for Computational Neuroscience at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland.
Judging by the file's "last modified" tag, I believe this is the version of the program I used when I graduated from my undergraduate studies at the Technological Institute of Aeronautics (ITA): the last modification dates to 30 January 2014. The file was recovered from my external hard disk drive.