Simple PyTorch AutoEncoder to play with.
Check out simple examples in the Notebooks.
To install it, for example:

```bash
python -m venv .venv
source .venv/bin/activate
pip install git+https://github.com/ghsanti/torch_practice
```
You can then run it:

```bash
python -m torch_practice.simple_train
```
For custom configurations, write a simple script:

```python
from torch_practice.simple_train import train
from torch_practice.default_config import default_config

config = default_config()
config["n_workers"] = 3

# then train it.
train(config)
```
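To see which fields are available before overriding them, you can inspect the configuration; this is a minimal sketch that assumes `default_config()` returns a plain dict-like mapping, as the snippet above suggests:

```python
from torch_practice.default_config import default_config

config = default_config()

# Print every available configuration key and its default value
# (assumes the config behaves like a plain dict, as the example above suggests).
for key, value in config.items():
    print(f"{key}: {value!r}")
```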
This package installs the CPU-only build of torch by default. For other hardware, please install a matching torch build from the official PyTorch install matrix.
The "blueprint" is in the DAEConfig, in this file.
## Basic practices
From the [docs](https://pytorch.org/docs/stable/notes/randomness.html): "Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms."
To control the sources of randomness, pass a seed in the configuration dictionary. This controls some operations and the data loading.
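As a minimal sketch (the `seed` key name is an assumption based on the description above):

```python
from torch_practice.default_config import default_config
from torch_practice.simple_train import train

config = default_config()
config["seed"] = 42  # assumed key name; seeds the controllable ops and dataloading
train(config)
```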
Simple steps to set up for development:
1. Fork.
2. Clone your fork and run:

   ```bash
   pip install uv
   uv venv
   source .venv/bin/activate
   uv sync --all-extras # non-cpu users need extra torch installs.
   ```

If you open the repository in a Codespace instead, everything is installed for you. Activate the venv using:

```bash
source .venv/bin/activate
```

- In both cases, remember to select the `.venv` Python interpreter in VSCode.
- Use absolute imports (see the sketch after this list).
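For instance, a minimal illustration of the absolute-import style (module names taken from the snippets above):

```python
# Preferred: absolute import, explicit about the package it belongs to.
from torch_practice.default_config import default_config

# Avoid: a relative import of the same module from inside the package.
# from .default_config import default_config
```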
Build the package with:

```bash
uv pip install --upgrade build
uv build
```