update index.md (#35)
* update index.md

* fix ruff errors

* [pre-commit.ci] Add auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update user guide

* update user_guide examples

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
a-kore and pre-commit-ci[bot] authored Jul 25, 2024
1 parent eb3e42f commit 866589d
Showing 2 changed files with 5 additions and 14 deletions.
2 changes: 0 additions & 2 deletions docs/source/index.md
@@ -1,5 +1,3 @@
Certainly! I'll update the index.md file based on the context provided in the other project files. Here's a revised version of the index.md that better represents the AtomGen project:

```markdown
---
hide-toc: true
17 changes: 5 additions & 12 deletions docs/source/user_guide.md
@@ -15,16 +15,11 @@ Welcome to the AtomGen User Guide. This document provides comprehensive instruct

## Installation

To install AtomGen, run the following command:
The package can be installed using poetry:

```bash
pip install atomgen
```

For the latest development version, you can install directly from the GitHub repository:

```bash
pip install git+https://github.com/your-repo/atomgen.git
python3 -m poetry install
source $(poetry env info --path)/bin/activate
```

## Quick Start
@@ -48,7 +43,7 @@ attention_mask = torch.ones(1, 10)
with torch.no_grad():
output = model(input_ids, coords=coords, attention_mask=attention_mask)

print(output.last_hidden_state.shape) # Should be (1, 10, 768) for the base model
print(output.shape) # Should be (1, 10, 768) for the base model
```

This example demonstrates how to load the pretrained AtomFormer model and use it to extract features from molecular data.
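For reference, a self-contained version of that quick-start workflow might look like the sketch below. The checkpoint name (`vector-institute/atomformer-base`), the use of the Hugging Face `transformers` AutoModel API, and the shapes chosen for `input_ids` and `coords` are assumptions for illustration, not taken from this commit; only the forward call and the expected output shape come from the snippet above.

```python
# Hypothetical, self-contained expansion of the quick-start snippet above.
# Assumptions (not confirmed by this commit): the model loads via the
# Hugging Face `transformers` AutoModel API under the checkpoint name
# "vector-institute/atomformer-base", `input_ids` holds atomic numbers,
# and `coords` holds 3D positions with shape (batch, num_atoms, 3).
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "vector-institute/atomformer-base", trust_remote_code=True
)
model.eval()

input_ids = torch.randint(1, 119, (1, 10))  # 10 atoms with random atomic numbers
coords = torch.randn(1, 10, 3)              # random 3D coordinates
attention_mask = torch.ones(1, 10)

with torch.no_grad():
    output = model(input_ids, coords=coords, attention_mask=attention_mask)

print(output.shape)  # expected (1, 10, 768) for the base model
```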
@@ -168,7 +163,7 @@ attention_mask = torch.ones(1, 10)
with torch.no_grad():
output = model(input_ids, coords=coords, attention_mask=attention_mask)

predictions = output.logits
predictions = output[1]
print(predictions.shape) # Should be (1, 20) for the SMP task
```

@@ -205,7 +200,6 @@ python -m torch.distributed.launch --nproc_per_node=4 run_atom3d.py \
--per_device_train_batch_size 8 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--fp16 # Enable mixed precision training
```

## Troubleshooting
@@ -214,6 +208,5 @@ If you encounter out-of-memory errors, try the following:

1. Reduce batch size in the script arguments
2. Enable gradient checkpointing (add `--gradient_checkpointing` to your command)
3. Use mixed precision training (add `--fp16` to your command)

For more help, please check our [GitHub Issues](https://github.com/your-repo/atomgen/issues) or open a new issue if you can't find a solution to your problem.
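Regarding item 2 in the troubleshooting list above, gradient checkpointing can also be enabled in code rather than through the CLI flag. The sketch below assumes the training script builds a standard Hugging Face `TrainingArguments` object; the output path and batch size are illustrative values, not taken from this commit.

```python
# Illustrative sketch only: reduces the batch size and enables gradient
# checkpointing when configuring a run with Hugging Face TrainingArguments.
# The output directory and batch size are hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./atomgen-finetune",   # hypothetical output path
    per_device_train_batch_size=4,     # smaller batch to reduce peak memory
    gradient_checkpointing=True,       # recompute activations to save memory
)
```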
