v0.1.4
Jintao-Huang committed Oct 15, 2022
1 parent c8f977a commit efe23ff
Showing 7 changed files with 7 additions and 8 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -14,7 +14,7 @@
2. Download the latest version (>=1.12) of Torch(corresponding CUDA version) from the [official website](https://pytorch.org/get-started/locally/) of Torch. It is not recommended to automatically install Torch (CUDA 10.2 default) using the Mini-Lightning dependency, which will cause CUDA version mismatch.
3. Install mini-lightning
```bash
-# from pypi (v0.1.3)
+# from pypi (v0.1.4)
pip install mini-lightning

# Or download the files from the repository to local,
2 changes: 1 addition & 1 deletion examples/dqn.py
@@ -206,7 +206,7 @@ def training_step(self, batch: Any) -> Tensor:
"optim_name": "SGD",
"dataloader_hparams": {"batch_size": batch_size},
"optim_hparams": {"lr": 1e-2, "weight_decay": 1e-4}, #
"trainer_hparams": {"max_epochs": max_epochs, "gradient_clip_norm": 20},
"trainer_hparams": {"max_epochs": max_epochs, "gradient_clip_norm": 20, "verbose": False},
#
"rand_p": {
"eta_max": 1,
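The `gradient_clip_norm: 20` entry above caps the global L2 norm of the gradients each optimization step (the trainer presumably delegates to `torch.nn.utils.clip_grad_norm_`). A minimal pure-Python sketch of the rescaling rule; the helper name and flat-list signature are illustrative, not Mini-Lightning's API:

```python
def clip_by_global_norm(grads, max_norm):
    # Hypothetical helper illustrating gradient_clip_norm: if the global
    # L2 norm of the gradients exceeds max_norm, scale every gradient by
    # max_norm / norm so the resulting norm equals max_norm; otherwise
    # leave the gradients untouched.
    total_norm = sum(g * g for g in grads) ** 0.5
    if total_norm > max_norm:
        scale = max_norm / total_norm
        return [g * scale for g in grads]
    return list(grads)
```

Clipping rescales all components by the same factor, so the gradient's direction is preserved and only its magnitude is bounded.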
8 changes: 4 additions & 4 deletions mini_lightning/__init__.py
@@ -1,4 +1,4 @@
-from .mini_lightning import *
-from .utils import *
-from .warmup_lrs import *
-from .visualize import *
+from ._mini_lightning import *
+from ._utils import *
+from ._warmup_lrs import *
+from ._visualize import *
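The rename prefixes each submodule with an underscore to mark it private, while the package `__init__.py` keeps re-exporting its public names via `import *`. With `from ._utils import *`, only the names listed in the submodule's `__all__` (or, absent `__all__`, names not starting with `_`) reach the package namespace. A small sketch of that rule, using a synthetic module and hypothetical names:

```python
import types

# Hypothetical stand-in for a private submodule such as mini_lightning/_utils.py.
_utils = types.ModuleType("_utils")
exec(
    "__all__ = ['seed_everything']\n"
    "def seed_everything(seed): return seed\n"
    "def _internal_helper(): pass\n",
    _utils.__dict__,
)

# `from ._utils import *` copies exactly the names listed in __all__.
star_imported = {name: getattr(_utils, name) for name in _utils.__all__}
```

So users keep writing `import mini_lightning as ml` unchanged; only direct imports of the old module paths break.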
@@ -873,7 +873,7 @@ def test(self, dataloader: Optional[DataLoader], test_best: bool = False, test_l
#
if test_last: # just current model
if self._best_ckpt_is_last() and test_best is True:
-                logger.info("Ignore test last: the best ckpt is the last ckpt")
+                logger.info("Ignore test last: the best ckpt and the last ckpt is the same")
else:
m = self._test(dataloader, "last")
res_mes.update(m)
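The guard above avoids running the test loop twice on identical weights: when the best checkpoint is also the last one and `test_best` is set, the "last" pass is skipped. A hedged pure-Python sketch of that control flow (the function name is illustrative, not Mini-Lightning's API):

```python
def plan_test_runs(test_best, test_last, best_ckpt_is_last):
    # Decide which checkpoints to evaluate, skipping a duplicate "last"
    # pass when it would evaluate the exact same checkpoint as "best".
    runs = []
    if test_best:
        runs.append("best")
    if test_last:
        if best_ckpt_is_last and test_best:
            pass  # the best ckpt and the last ckpt are the same: skip
        else:
            runs.append("last")
    return runs
```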
File renamed without changes.
File renamed without changes.
@@ -5,7 +5,6 @@
import math
from typing import List, Callable, Union, Dict, Optional
#
from torch.optim.lr_scheduler import _LRScheduler as LRScheduler, CosineAnnealingLR
from torch.optim import Optimizer
__all__ = ["get_T_max", "warmup_decorator", "cosine_annealing_lr"]
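`cosine_annealing_lr` in `__all__` suggests this module implements the standard cosine-annealing schedule, the same closed form as PyTorch's `CosineAnnealingLR` (which the file imports above). A sketch under that assumption; the signature here is illustrative:

```python
import math

def cosine_annealing_lr(t, T_max, eta_max, eta_min=0.0):
    # Standard cosine annealing:
    #   lr(t) = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / T_max)) / 2
    # decaying smoothly from eta_max at t=0 to eta_min at t=T_max.
    return eta_min + (eta_max - eta_min) * (1 + math.cos(math.pi * t / T_max)) / 2
```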

