Last-layer prediction rigidity (LLPR)-based uncertainty quantification for MACE #601

Open

wants to merge 11 commits into develop

Conversation

SanggyuChong

Hello, all!

Sanggyu from COSMO-EPFL here, together with Filippo (@frostedoyster). We recently had the chance to explore the model uncertainties of MACE-mp-0 and to conduct as thorough an analysis as possible, and we would like to contribute the implementation we used in doing so. Here is a list of the main changes:

  • Creation of an LLPRModel: a wrapper that resides within models.py and contains all of the utilities involved in computing uncertainties with the LLPR approach. To use it, the user wraps their original MACE model in this class, computes the covariance matrix, and obtains the inverse covariance matrix (ideally with calibration); from then on, the uncertainties are included in the model output at every inference (see the sketch after this list).
  • Utility functions for LLPR within modules/utils.py: the functions needed for proper covariance matrix calculation, namely compute_ll_feat_gradients, get_huber_mask, and get_conditional_huber_force_mask.
  • Calibration tools in tools/llpr.py: the functions needed to calibrate the prediction rigidities into uncertainty estimates.
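
To give a feel for the intended workflow, here is a minimal sketch. The import path, the method names (compute_covariance, compute_inv_covariance), and the output key ("energy_uncertainty") are illustrative assumptions based on the description above, not necessarily the exact API of this PR:

```python
# A minimal sketch of the intended LLPR workflow, assuming a trained MACE
# model. Method names and output keys below are assumptions for
# illustration, not necessarily the exact API of this PR.
import torch

from mace.modules.models import LLPRModel  # assumed location ("resides within models.py")

model = torch.load("my_mace_model.pt")  # a previously trained MACE model
llpr_model = LLPRModel(model)           # wrap it for LLPR-based UQ

# 1) Accumulate the last-layer covariance matrix over the training set.
#    `train_loader` stands in for a torch DataLoader over training configs.
llpr_model.compute_covariance(train_loader)

# 2) Invert the covariance matrix, ideally with calibrated regularization.
llpr_model.compute_inv_covariance(C=1.0, sigma=1.0)  # hypothetical signature

# 3) From here on, every forward pass also carries the uncertainties.
output = llpr_model(batch)                  # `batch` as in a normal MACE call
uncertainty = output["energy_uncertainty"]  # hypothetical output key
```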

I see that our branch has some discrepancies with the develop branch elsewhere that do not interfere with the newly contributed functionalities, so I will first put this in as a draft PR to do further clean-up. In the meantime, any feedback is more than welcome.

Thanks,
Sanggyu (and Filippo)

@ilyes319
Contributor

Amazing, thank you! I will have a look :)

@SanggyuChong
Author

Cheers, Ilyes. For the record, we must have started our implementation from the universal branch, hence the extra commits from that branch. @frostedoyster will do some rebasing and cherry-picking before we mark this ready for review. Thanks all!

@frostedoyster

OK, done! We need to make sure things are still working.

@SanggyuChong marked this pull request as ready for review on September 24, 2024, 16:25
@SanggyuChong
Author

We have verified that our UQ features work correctly after the rebase. It would be great if you could take a look and try them out (get in touch with us for a detailed workflow). We are open to any comments or suggestions on the new features, thanks!

@ilyes319
Contributor

Hey @SanggyuChong, thank you very much for your PR.
Some additional questions:

  1. Could you please add a reference to your paper in the docstring of the model?
  2. It would be nice to have a series of tests of the basic functionalities of the model.
  3. I remember there was a calibration step for the model, and I see the function in utils. Is it possible to make a CLI script to run the calibration?
  4. Is an interface to an ASE calculator possible, so one can get the uncertainties directly from there? (See the sketch after this list for what that could look like.)
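
For question 4, here is a minimal sketch of what such an interface could look like, assuming the wrapped model returns an "energy_uncertainty" entry in its output. The class name, the atoms_to_batch helper, and the output keys are all hypothetical, not part of this PR:

```python
# Hypothetical sketch of an ASE calculator exposing LLPR uncertainties.
# The class, the atoms_to_batch helper, and the output keys are
# illustrative assumptions, not part of this PR.
from ase.calculators.calculator import Calculator, all_changes


class LLPRCalculator(Calculator):
    """ASE calculator wrapping an LLPR-enabled MACE model (sketch)."""

    implemented_properties = ["energy", "forces", "energy_uncertainty"]

    def __init__(self, llpr_model, **kwargs):
        super().__init__(**kwargs)
        self.llpr_model = llpr_model

    def calculate(self, atoms=None, properties=None, system_changes=all_changes):
        super().calculate(atoms, properties, system_changes)
        batch = atoms_to_batch(atoms)  # hypothetical Atoms -> MACE batch conversion
        out = self.llpr_model(batch)
        self.results["energy"] = float(out["energy"])
        self.results["forces"] = out["forces"].detach().cpu().numpy()
        self.results["energy_uncertainty"] = float(out["energy_uncertainty"])
```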
