
Inference script with representative mapping and metadata #514

Draft · wants to merge 1 commit into main
Conversation

etowahadams
Contributor

  • Inference script that also takes additional args for the representative mapping and the metadata, so that the right MSAs can be used
  • Since template searching can take a long time, I iterate through the different seeds after the templates have been found
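The ordering in the second bullet can be sketched as follows — a minimal illustration where `search_templates` and `infer_with_seed` are hypothetical stand-ins, not the script's actual functions: the expensive template search runs once, and only the cheap seeded inference repeats.

```python
import numpy as np

def search_templates(sequence):
    # Stand-in for the expensive template search; runs once per input.
    return {"templates": sorted(sequence)}

def infer_with_seed(features, seed):
    # Stand-in for a single seeded inference call.
    np.random.seed(seed)
    return np.random.rand()

sequence = "ACDE"
features = search_templates(sequence)                        # expensive step, done once
results = [infer_with_seed(features, s) for s in (0, 1, 2)]  # cheap per-seed loop
```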

Comment on lines +419 to +424
for seed in random_seeds:
    np.random.seed(seed)
    torch.manual_seed(seed + 1)  # This is what the original code does

    try:
        out = run_model(model, processed_feature_dict, tag, args.output_dir)
Contributor Author


I want to make sure that switching seeds right before calling inference on the model is equivalent to switching them earlier. One thing that comes to mind is whether there is some stochasticity when load_models_from_command_line gets called.
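The concern can be sketched with plain NumPy (`stochastic_step` is a hypothetical stand-in, not the actual model call): seeding immediately before the stochastic call matches seeding earlier only if nothing consumes randomness in between.

```python
import numpy as np

def stochastic_step():
    # Stand-in for a model call that consumes random numbers.
    return np.random.rand(3)

np.random.seed(42)
early = stochastic_step()           # seed set well before the call

np.random.seed(42)
late = stochastic_step()            # seed set immediately before the call
assert np.array_equal(early, late)

# The equivalence breaks if anything draws from the RNG in between,
# e.g. hypothetical randomness inside model loading:
np.random.seed(42)
_ = np.random.rand(1)               # intervening draw
shifted = stochastic_step()
assert not np.array_equal(early, shifted)
```

So the check reduces to whether load_models_from_command_line (or anything else between the seeding and run_model) draws from NumPy's or torch's global RNG.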
