FEAT MIMIC-III Benchmarks

This repo contains code to benchmark FEAT on the phenotyping tasks in the MIMIC-III Critical Care Database.

This code builds directly upon the curation work done by @YerevaNN in their mimic3-benchmarks repository.

Usage

  • First, follow the guide from YerevaNN to extract the datasets.

Analysis Code

  • mimic3-benchmarks/mimic3models/phenotyping/tsfresh/main.py: extracts time series features from the MIMIC-III data using tsfresh (see the sketch below).
  • mimic3-benchmarks/mimic3models/phenotyping/feat/main.py: trains and evaluates FEAT models on the phenotypes.
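For orientation, here is a minimal sketch of how tsfresh turns per-patient time series into a flat feature matrix. It is not the repository's exact pipeline; the `patient_id`, `hours`, and `heart_rate` columns are hypothetical stand-ins for the MIMIC-III episode data.

```python
# Minimal illustrative sketch of tsfresh feature extraction (not the repo's exact code).
import pandas as pd
from tsfresh import extract_features
from tsfresh.utilities.dataframe_functions import impute

# Hypothetical long-format input: one row per (patient, timestamp) observation.
timeseries = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "hours":      [0.0, 1.0, 2.0, 0.0, 1.0, 2.0],
    "heart_rate": [80, 95, 90, 70, 72, 75],
})

# extract_features computes summary statistics (mean, trend, entropy, ...)
# per patient_id, with observations ordered by the hours column.
features = extract_features(
    timeseries,
    column_id="patient_id",
    column_sort="hours",
)
impute(features)  # replace NaN/inf values produced by undefined features
print(features.shape)
```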

Scripts

Here are some convenience scripts for batch analysis:

  • extract_tsfresh_features.sh: extracts time series features.
  • lpc_feat_tsfresh_phenotyping.sh: trains FEAT models.
  • lpc_lr100_tsfresh_phenotyping.sh: trains logistic regression models of max dimensionality 100 (illustrated by the sketch below).
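The following sketch shows the general idea behind a logistic regression capped at 100 features; the actual preprocessing, feature selection, and hyperparameters used by lpc_lr100_tsfresh_phenotyping.sh may differ, and the data here is random stand-in data.

```python
# Illustrative sketch only: logistic regression restricted to at most 100 features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 700))   # stand-in for a tsfresh feature matrix
y = rng.integers(0, 2, size=500)  # stand-in for a binary phenotype label

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=100)),  # cap dimensionality at 100 features
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.score(X, y))
```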

References

This repo is used to generate results in the following paper:

La Cava, W., Lee, P. C., Ajmal, I., Ding, X., Solanki, P., Cohen, J. B., Moore, J. H., & Herman, D. S. (2020). Application of concise machine learning to construct accurate and interpretable EHR computable phenotypes. medRxiv, 2020.12.12.20248005.

Additional results can be found at https://bitbucket.org/hermanlab/ehr_feat .

Contact

Acknowledgments

We would like to thank Debbie Cohen for helpful discussions about secondary hypertension. W. La Cava was supported by NIH grant R00-LM012926. This work was supported by Grant 2019084 from the Doris Duke Charitable Foundation and the University of Pennsylvania. W. La Cava was supported by NIH grant K99LM012926. J. H. Moore and W. La Cava were supported by NIH grant R01 LM010098. J. B. Cohen was supported by NIH grants K23HL133843 and R01HL153646.