LENS: Learning ENSembles using Reinforcement Learning 2020 (v2.0)


Release lens2.0 incorporates diversity into the reinforcement learning (RL) framework to enable the selection of more accurate and parsimonious ensembles. This version builds upon the previous version of lens (https://github.com/GauravPandeyLab/lens-2017) and follows the same basic design. During the RL search, diversity is calculated between the current ensemble (state) and each potential future ensemble. The potential future ensemble with the highest value of the diversity measure, i.e., the one most diverse with respect to the current ensemble, becomes the agent's next state during the exploration phase of the RL process.
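As a rough sketch of this exploration step (an illustration under assumed interfaces, not code from this repository; the names `diversity`, `explore_next_state`, and `candidate_preds_list` are hypothetical), the agent scores each candidate next ensemble by its diversity with respect to the current ensemble and moves to the one with the highest score:

```python
import numpy as np

def diversity(current_preds, candidate_preds):
    # One possible unsupervised measure: 1 - Pearson correlation between the
    # prediction vectors produced by the current and candidate ensembles.
    return 1.0 - np.corrcoef(current_preds, candidate_preds)[0, 1]

def explore_next_state(current_preds, candidate_preds_list):
    # During exploration, the candidate (potential future) ensemble that is
    # most diverse with respect to the current ensemble becomes the next state.
    scores = [diversity(current_preds, c) for c in candidate_preds_list]
    return int(np.argmax(scores))
```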

The diversity measures implemented are:

  • Pearson's correlation coefficient, cosine similarity, and Euclidean distance (unsupervised)
  • Yule's Q [2] and Fleiss' kappa [1] (supervised)
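For concreteness, the sketch below (an assumption for illustration, not code from this repository; the function names are hypothetical and the inputs are assumed to be NumPy arrays of per-sample predictions) shows how such pairwise measures can be computed between the predictions of two ensembles. Yule's Q illustrates the supervised case; Fleiss' kappa is omitted for brevity.

```python
import numpy as np
from scipy.spatial.distance import cosine, euclidean

def unsupervised_diversity(preds_a, preds_b):
    # Similarity-based measures between two ensembles' prediction vectors;
    # larger values here correspond to a more diverse pair.
    return {
        "pearson": 1.0 - np.corrcoef(preds_a, preds_b)[0, 1],
        "cosine": cosine(preds_a, preds_b),        # 1 - cosine similarity
        "euclidean": euclidean(preds_a, preds_b),  # Euclidean distance
    }

def yules_q(preds_a, preds_b, y_true):
    # Supervised measure based on a 2x2 table of joint correctness of the two
    # ensembles' class predictions with respect to the true labels y_true.
    preds_a, preds_b, y_true = map(np.asarray, (preds_a, preds_b, y_true))
    a_ok, b_ok = preds_a == y_true, preds_b == y_true
    n11 = np.sum(a_ok & b_ok)      # both correct
    n00 = np.sum(~a_ok & ~b_ok)    # both wrong
    n10 = np.sum(a_ok & ~b_ok)     # only the first correct
    n01 = np.sum(~a_ok & b_ok)     # only the second correct
    denom = n11 * n00 + n01 * n10
    # Q near 1: the ensembles err on the same examples (low diversity);
    # Q near 0 or negative: higher diversity.
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0
```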


REFERENCES

[1] T. G. Dietterich, “An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization,” Machine Learning, vol. 40, no. 2, pp. 139–157, 2000.

[2] G. U. Yule, “On the Association of Attributes in Statistics: With Illustrations from the Material of the Childhood Society,” Philosophical Transactions of the Royal Society of London, Series A, vol. 194, pp. 257–319, 1900.
