Commit

Update DOFA configuration with training parameters and learning rate adjustments

- Added gradient clipping and gradient accumulation settings for improved training stability.
- Reduced maximum training epochs from 150 to 50 and lowered the learning-rate scheduler's patience from 20 to 10 epochs.
- Updated the learning rate from 0.001 to 2e-5 and added cooldown and minimum learning rate settings for the scheduler.
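
As a rough reference (not part of the commit), the trainer-level keys above correspond to the following PyTorch Lightning Trainer call; a separate sketch of the optimizer and scheduler changes follows the diff below.

```python
# Sketch only: the Trainer arguments that the updated trainer-level YAML keys map to.
import lightning.pytorch as pl

trainer = pl.Trainer(
    accelerator="gpu",
    devices=-1,                 # use every visible GPU
    strategy="ddp",
    gradient_clip_val=1.0,      # new: clip the gradient norm at 1.0 for stability
    accumulate_grad_batches=2,  # new: step the optimizer every 2 batches
    max_epochs=50,              # reduced from 150
    min_epochs=2,
)
```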
valhassan committed Nov 27, 2024
1 parent fc626ae commit a207b3f
Showing 1 changed file with 7 additions and 3 deletions.
configs/dofa_config.yaml: 10 changes (7 additions, 3 deletions)
@@ -4,6 +4,8 @@ trainer:
   accelerator: "gpu"
   devices: -1
   strategy: "ddp"
+  gradient_clip_val: 1.0
+  accumulate_grad_batches: 2
   logger:
     class_path: lightning.pytorch.loggers.mlflow.MLFlowLogger
     init_args:
@@ -30,7 +32,7 @@ trainer:
         mode: "min"
         save_top_k: 1
         filename: "model-{epoch:02d}-{val_loss:.2f}"
-  max_epochs: 150
+  max_epochs: 50
   min_epochs: 2
 
 model:
@@ -56,15 +58,17 @@ model:
 optimizer:
   class_path: Adam
   init_args:
-    lr: 0.001
+    lr: 2e-5
 
 lr_scheduler:
   class_path: ReduceLROnPlateau
   init_args:
     monitor: "val_loss"
     mode: "min"
     factor: 0.1
-    patience: 20
+    patience: 10
+    cooldown: 2
+    min_lr: 2e-8
 
 data:
   class_path: datamodules.imagery_NonGeoDataModule.BlueSkyNonGeoDataModule
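
A minimal sketch of how the updated optimizer and lr_scheduler blocks behave at runtime, assuming a standard LightningModule (SegModule and its placeholder layer are hypothetical, not part of the repo; the hyperparameter values match the diff, and the monitor key tells Lightning which logged metric ReduceLROnPlateau should watch):

```python
# Sketch only: runtime equivalent of the optimizer / lr_scheduler config above.
# SegModule is a hypothetical stand-in for the repo's LightningModule.
import torch
from lightning.pytorch import LightningModule
from torch.optim.lr_scheduler import ReduceLROnPlateau


class SegModule(LightningModule):
    def __init__(self):
        super().__init__()
        self.head = torch.nn.Linear(8, 1)  # placeholder layer, not the real DOFA model

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=2e-5)  # was 0.001
        scheduler = ReduceLROnPlateau(
            optimizer,
            mode="min",    # "val_loss" should decrease
            factor=0.1,    # multiply the LR by 0.1 on a plateau
            patience=10,   # was 20: wait 10 epochs without improvement
            cooldown=2,    # new: skip 2 epochs after each reduction
            min_lr=2e-8,   # new: never reduce the LR below 2e-8
        )
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
        }
```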
