
more updates
Sllambias committed Oct 18, 2024
1 parent 5417274 commit 9d8b831
Showing 4 changed files with 44 additions and 12 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -31,7 +31,7 @@ Our Pipeline allows users to quickly train solid baselines or change features to

# Guides

- [Changing Pipeline Parameters](yucca/documentation/guides/changing_pipeline_parameters.md#model--training)
- [Classification](yucca/documentation/guides/classification.md)
- [Environment Variables](yucca/documentation/guides/environment_variables.md)
- [Ensembles](yucca/documentation/guides/ensembles.md)
@@ -8,23 +8,26 @@
- [Training](#training)
* [Data Augmentation](#data-augmentation)
* [Data Splits](#data-splits)
* [Deep Supervision](#deep-supervision)
* [Foreground Oversampling](#foreground-oversampling)
* [Learning Rate](#learning-rate)
* [Learning Rate Scheduler](#learning-rate-scheduler)
* [Loss Function](#loss-function)
* [Model Architecture](#model-architecture)
* [Model Dimensionality](#model-dimensionality)
* [Momentum](#momentum)
* [Optimizer](#optimizer)
* [Patch Based Training](#patch-based-training)
- [Inference](#inference)


# Guide to Changing Yucca Parameters
## Subclassing
Changing parameters in the Yucca Pipeline is generally achieved using subclasses. This means that to change a given parameter you (1) subclass the class defining the parameter, (2) change the value of the parameter to the desired value, and (3) create a new .py file for the subclass in the (sub)directory of the parent class.

E.g. to lower the starting Learning Rate we subclass the YuccaManager - the default class responsible for handling model training - and change the self.learning_rate variable to e.g. 1e-5.

We call this new Manager "YuccaManager_1e5" and save it as "YuccaManager_1e5.py" in the /yucca/training/managers directory, as this is where the parent class is located. Alternatively, it can be saved in a subdirectory of the parent class's directory, e.g. /yucca/training/managers/lr.

@@ -39,7 +42,7 @@
```
from yucca.pipeline.managers.YuccaManager import YuccaManager

class YuccaManager_1e5(YuccaManager):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.learning_rate = 1e-5
```

# Preprocessing
Unless otherwise mentioned, preprocessing variables and functions are set by the YuccaPlanners. For optimal results, it is advisable to subclass the default planner when applying changes.

**Default Planner Class: [YuccaPlanner](/yucca/pipeline/planning/YuccaPlanner.py)**

@@ -122,8 +125,7 @@
```
from yucca.pipeline.managers.YuccaManager import YuccaManager

class YuccaManager_NewUserSetup(YuccaManager):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # `generic` is an augmentation-parameter preset imported elsewhere in Yucca
        self.augmentation_params = generic
        self.augmentation_params["blurring_p_per_sample"] = 0.0
```

## Data Splits
@@ -190,6 +192,21 @@ class YuccaManager_NewUserSetup(YuccaManager):

CLI: Deep supervision can also be enabled during training and finetuning by using the --ds flag.
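As a concept sketch (plain Python, not Yucca's actual implementation, and the function name is ours), deep supervision computes the loss at several decoder resolutions and combines them, typically with weights that halve at each coarser level:

```python
def deep_supervision_loss(per_level_losses):
    """Combine losses from full resolution downwards; weights halve per level."""
    weights = [2.0 ** -i for i in range(len(per_level_losses))]  # 1, 0.5, 0.25, ...
    total_weight = sum(weights)
    return sum(w * l for w, l in zip(weights, per_level_losses)) / total_weight
```

The normalization keeps the combined loss on the same scale regardless of how many decoder levels are supervised.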

## Foreground Oversampling
Controls the fraction of samples guaranteed to contain at least 1 foreground voxel. The remaining samples are drawn randomly and may also contain foreground voxels.

Parent: default Manager class

Variable: self.p_oversample_foreground
```
from yucca.pipeline.managers.YuccaManager import YuccaManager

class YuccaManager_NewUserSetup(YuccaManager):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.p_oversample_foreground = 0.66
```
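Conceptually, the sampler behaves like this sketch (plain Python, not Yucca's actual sampling code; the function name is ours): with probability p_oversample_foreground a patch origin is drawn from locations known to contain foreground, otherwise from anywhere in the volume.

```python
import random

def draw_patch_origins(foreground_voxels, all_voxels, n_patches, p_oversample_foreground=0.66):
    """Return n_patches sampling origins; ~p of them are guaranteed to hit foreground."""
    origins = []
    for _ in range(n_patches):
        if random.random() < p_oversample_foreground:
            origins.append(random.choice(foreground_voxels))  # guaranteed foreground
        else:
            origins.append(random.choice(all_voxels))         # may still hit foreground by chance
    return origins
```

This matters for sparse segmentation targets, where purely random patches would mostly contain background.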

## Learning Rate
Parent: default Manager class

@@ -269,7 +286,7 @@
CLI: Can also be changed using *yucca_train* --mom flag.
## Optimizer
Parent: [YuccaLightningModule](/yucca/lightning_modules/YuccaLightningModule.py)

Variables: *self.optim* and *self.optim_kwargs*

```
from torch import optim

from yucca.modules.lightning_modules.YuccaLightningModule import YuccaLightningModule

class YuccaLightningModule_Adam(YuccaLightningModule):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.optim = optim.Adam
        self.optim_kwargs = {"eps": 1e-8, "betas": (0.9, 0.99), "lr": 5e-5, "weight_decay": 5e-2}
```

## Patch Based Training
@@ -298,8 +316,4 @@ class YuccaManager_NoPatches(YuccaManager):
This is used to train models on full-size images. It requires that datasets are preprocessed with a Planner that uses fixed_target_size, to ensure that all samples have identical dimensions.
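Why fixed_target_size is needed: full-image training batches whole volumes together, and batching requires identical shapes. A minimal numpy stand-in (not Yucca's preprocessing; the function name is ours) that zero-pads or corner-crops each axis to a fixed shape:

```python
import numpy as np

def pad_or_crop_to(img, target_shape):
    """Zero-pad or corner-crop each axis of img so the result has target_shape."""
    out = np.zeros(target_shape, dtype=img.dtype)
    # Copy the overlapping region; anything outside target_shape is cropped,
    # anything missing stays zero (padding).
    region = tuple(slice(0, min(s, t)) for s, t in zip(img.shape, target_shape))
    out[region] = img[region]
    return out
```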

# Inference
Changing inference parameters is currently not implemented; use the CLI to control inference parameters.
17 changes: 17 additions & 0 deletions yucca/documentation/guides/compression.md
@@ -0,0 +1,17 @@
# Compression
Yucca normally stores training data uncompressed and everything else compressed. Both raw data and predictions are stored in .nii.gz, .npz, .jpg, .png or similar compressed file formats, while the preprocessed data is kept as .npy files (rather than the compressed .npz counterpart). During model training the preprocessed data is read many, many times, and using compressed data for this significantly slows down training steps.
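The trade-off can be reproduced with numpy directly; this is only an illustration of the file formats involved, not Yucca code:

```python
import os
import tempfile

import numpy as np

arr = np.random.rand(32, 32, 32).astype(np.float32)
d = tempfile.mkdtemp()

# Uncompressed .npy: larger on disk, but fast to read back on every training step
np.save(os.path.join(d, "case.npy"), arr)

# Compressed .npz: smaller on disk, but must be decompressed on every read
np.savez_compressed(os.path.join(d, "case.npz"), data=arr)

reloaded = np.load(os.path.join(d, "case.npz"))["data"]
assert np.array_equal(arr, reloaded)  # contents are identical either way
```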

If it is not possible to store the training data uncompressed due to storage limitations, it is possible to train on compressed data.

To achieve this, use a Planner with compression enabled, such as the YuccaPlanner_Compress, or create one yourself with the desired Planner parameters.
```
from yucca.pipeline.planning.YuccaPlanner import YuccaPlanner

class YuccaPlanner_Compress(YuccaPlanner):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = str(self.__class__.__name__)
        self.compress = True
```

1 change: 1 addition & 0 deletions yucca/documentation/guides/missing_modalities.md
@@ -0,0 +1 @@
UNDER CONSTRUCTION
