
Update for OEAssignCharges #255

Merged: 9 commits from update/assigncharges into master on May 17, 2017

Conversation

@nathanmlim (Collaborator)

Addresses issue #252 by replacing the charging call with OEAssignCharges from OEToolkits version 2017.02.01. Some warning messages were added as well.
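For context, a minimal sketch of the new-style call this PR moves to (illustrative only; the oequacpac names come from the OpenEye documentation, and the error handling is an assumption, not the actual diff):

from openeye import oequacpac

def assign_am1bcc_charges(oemol):
    # New charging entry point available since OEToolkits 2017.02.01;
    # OEAssignCharges returns False on failure rather than raising.
    if not oequacpac.OEAssignCharges(oemol, oequacpac.OEAM1BCCCharges()):
        raise RuntimeError("OEAssignCharges failed for %s" % oemol.GetTitle())
    return oemol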

nathanmlim requested a review from davidlmobley on May 9, 2017 03:49
@davidlmobley (Collaborator)

@nathanmlim : Looks good to me, except could you also address the other aspect of #252 at the same time? Specifically:

HOWEVER, there is one extension of the charging protocol that he recommends. Specifically, the normal (small) RMSD threshold for omega is still recommended as long as the maximum number of conformers (maxconfs) is not reached. But when maxconfs is hit, what will happen is that omega will begin to prune the least favorable conformers in order to maintain only maxconfs; with a low RMSD threshold this can result in a gradual loss of diversity of the conformers and thus correspondingly worse performance for the charge calculations.

Thus, Chris's recommendation was this: Whenever maxconfs is hit, increase the RMSD threshold for conformer generation until maxconfs is no longer hit. This will ensure that conformers are diverse for very flexible molecules, while still using the recommended low RMSD threshold for relatively rigid molecules.
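A rough sketch of that escalation loop (illustrative only: the starting RMSD, step, and cap below are assumptions, not values from any actual implementation):

from openeye import oechem, oeomega

def generate_diverse_conformers(mol, maxconfs=800, rmsd=0.5, rmsd_step=0.5, rmsd_max=3.0):
    # Keep raising the omega RMSD threshold while the conformer limit is saturated,
    # so very flexible molecules retain diverse conformers for charging.
    while True:
        omega = oeomega.OEOmega()
        omega.SetMaxConfs(maxconfs)
        omega.SetRMSThreshold(rmsd)
        expanded = oechem.OEMol(mol)
        omega(expanded)
        if expanded.NumConfs() < maxconfs or rmsd >= rmsd_max:
            return expanded
        rmsd += rmsd_step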

Or, if you'd prefer not to, then we should NOT close the issue when this PR is merged and instead rename it/clarify what still needs to be done. The changes for the conformer work are obviously more involved, so let me know what you prefer.

@nathanmlim (Collaborator, Author)

Travis tests appear to be failing due to inactivity?
https://travis-ci.org/choderalab/openmoltools/jobs/230204889#L2143

@davidlmobley (Collaborator)

That's odd. On occasion we've seen this when there is no output for a while. I think there is a setting to adjust how long Travis waits before timing out? Or you could adjust the verbosity level so the tests produce more output.

Have you run tests locally?


Inline review on the following lines:

#Determine conformations to return
if keep_confs == None:
    logger.warning(
        "keep_confs was set to None. Returned molecules will not be charged.")
@andrrizzi (Contributor)

I think in this case the returned molecule is still charged, but the coordinates are reset to those of the passed molecule, or did something change with the new OE?

Do you think it would be OK to send these messages to logger.debug instead of warning? Otherwise it will be very hard to silence them in our applications.

@davidlmobley (Collaborator)

@nathanmlim - Andrea is right, unless you've changed it the warning message here is wrong. The behavior we'd implemented (as per the doc string) was that if keep_confs = None then the computed charges are applied to the provided conformation of the molecule, rather than any of the generated conformations.

@nathanmlim (Collaborator, Author)

@andrrizzi Thanks for catching that. I'll switch it to debug as well.
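A sketch of the corrected block discussed above (the exact message wording is an assumption; the point is the debug level and the accurate description of the keep_confs=None behavior):

# Determine conformations to return
if keep_confs is None:
    logger.debug(
        "keep_confs was set to None. Charges will be applied to the input conformation.")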

@nathanmlim (Collaborator, Author) commented May 9, 2017

Travis tests are stalling
https://travis-ci.org/choderalab/openmoltools/jobs/230387708#L2166

Test generation of single ffxml file from a list of molecules ... 
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received

Any suggestions on what to do here? @davidlmobley @andrrizzi

@davidlmobley (Collaborator)

@andrrizzi - do you have advice on the "stalling tests" issue?

@andrrizzi (Contributor)

That's a weird one. Is it possible that the new charge-assignment scheme simply takes much longer than the old one? That test ends up calling get_charges through generateForceFieldFromMolecules, so that is a possibility. Or maybe it's a problem with the license that makes the code hang?

In either case, I'd run an experiment and temporarily revert to the old charge engine to see whether the problem is related to that.

Is this something you can reproduce locally?

@nathanmlim (Collaborator, Author)

So it looks like it's stalling because the new charging engine takes longer. Is there a way to keep Travis from timing out?

@andrrizzi (Contributor)

Hmm. Maybe you could yield a partial function for each test case instead of having a single test run them all; that should print something as soon as each case passes and then continue. Ten minutes is a long time for a test, though, so you could also cut superfluous test cases. Or maybe we should let the user decide which engine to use through an argument of openeye.get_charges(..., engine)? If so, we could use the old engine for testing.
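A sketch of the generator-style test andrrizzi is describing (nose runs each yielded callable as its own test, so Travis sees output between molecules; the molecule list and keyword arguments are assumptions):

import functools
from openmoltools import openeye

# Hypothetical molecules for illustration.
TEST_SMILES = {"benzene": "c1ccccc1", "toluene": "Cc1ccccc1", "phenol": "Oc1ccccc1"}

def check_charges(smiles):
    molecule = openeye.smiles_to_oemol(smiles)
    charged = openeye.get_charges(molecule, keep_confs=1)
    assert charged.NumAtoms() > 0

def test_charging_each_molecule():
    # Each yielded partial becomes a separate test, producing output
    # after every molecule instead of one long, silent test.
    for name, smiles in TEST_SMILES.items():
        f = functools.partial(check_charges, smiles)
        f.description = "Test charging of %s" % name
        yield f,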

@davidlmobley (Collaborator)

@andrrizzi @nathanmlim - is this the test where we charge a whole bunch of molecules? I recall wondering in the past why we were testing so many molecules. I don't see any reason to test charging many, many molecules.

@davidlmobley (Collaborator)

It's also not a bad idea to make the engine an option, with the default being the new engine. Then you could just test charging a single molecule with the new engine and another molecule or two with the old engine?

@andrrizzi (Contributor)

I think this is the problematic test; it calls get_charges on those 5 molecules through generateForceFieldFromMolecules().

The speed loss is actually quite significant: the same test took 22 seconds to complete with the old engine, so other tests could be affected by this.

@nathanmlim (Collaborator, Author)

I've added a legacy option to the get_charges function. It defaults to the old engine because of the speed loss, so the tests keep using the old engine and users can opt into the new one if they like.
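A sketch of how that option might branch between the two engines (the signature, defaults, and conformer handling below are assumptions based on this thread, not the exact code in the PR):

from openeye import oequacpac

def get_charges(molecule, max_confs=800, keep_confs=None, legacy=True):
    # 'molecule' is assumed to already carry the conformers generated upstream.
    charged = molecule.CreateCopy()
    if legacy:
        # Deprecated pre-2017.02 engine: faster, kept as the default for tests.
        oequacpac.OEAssignPartialCharges(charged, oequacpac.OECharges_AM1BCCSym)
    else:
        # New OEAssignCharges engine from OEToolkits 2017.02.01: slower, but
        # does more careful conformer selection and charge averaging.
        oequacpac.OEAssignCharges(charged, oequacpac.OEAM1BCCCharges())
    return charged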

@davidlmobley (Collaborator)

Sounds reasonable, @nathanmlim, but can you make sure the new charging option gets used in at least one test? We don't want to have it going untested.

I can see it being significantly slower, as I believe it's doing more careful selection of conformers for charging and more charge calculations for averaging in certain cases. So there will be more cost, but also better resulting charges.
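For example, a single small-molecule test exercising the non-legacy path might look like this (the legacy keyword and helper names are assumptions):

from openmoltools import openeye

def test_new_charge_engine():
    # Charge one small molecule with the new OEAssignCharges-based engine
    # so the non-default code path is covered at least once.
    molecule = openeye.smiles_to_oemol("CCO")  # ethanol
    charged = openeye.get_charges(molecule, keep_confs=1, legacy=False)
    assert charged.NumAtoms() > 0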

nathanmlim changed the title from "Update for OEAssignCharges" to "[WIP] Update for OEAssignCharges" on May 10, 2017
@nathanmlim (Collaborator, Author)

> Sounds reasonable, @nathanmlim, but can you make sure the new charging option gets used in at least one test? We don't want to have it going untested.

@davidlmobley Valid point. By the way, do you know if there is a way to check the installed version of OEToolkits? That would be another way to make sure the tests end up using the legacy charging engine.

@davidlmobley (Collaborator)

davidlmobley commented May 11, 2017 via email

@nathanmlim (Collaborator, Author)

Ah, found it in .travis.yml:

python -c "import openeye; print(openeye.__version__)"
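That version string could also gate the engine choice in the tests, along these lines (illustrative; the attribute name and version comparison are assumptions):

import openeye

def supports_new_charge_engine():
    # OEAssignCharges arrived with the 2017.2 toolkits; anything older
    # should fall back to the legacy engine.
    major, minor = (int(x) for x in openeye.__version__.split(".")[:2])
    return (major, minor) >= (2017, 2)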

nathanmlim changed the title from "[WIP] Update for OEAssignCharges" to "Update for OEAssignCharges" on May 17, 2017
@nathanmlim (Collaborator, Author)

@davidlmobley @andrrizzi

The added tests are linked here.

Okay to merge?

@davidlmobley (Collaborator) left an approving review

Thanks, @nathanmlim . This looks excellent. Approving and will merge. Shall we do a release to incorporate this so we can push it out to Omnia?

davidlmobley merged commit 77a71ac into master on May 17, 2017
davidlmobley deleted the update/assigncharges branch on May 17, 2017 16:31
@nathanmlim (Collaborator, Author)

> Thanks, @nathanmlim. This looks excellent. Approving and will merge. Shall we do a release to incorporate this so we can push it out to Omnia?

Not a high priority right now, but we will eventually need it on a conda channel around mid-June. Thanks!

davidlmobley mentioned this pull request on May 19, 2017