- `model_arch`: `nn.Module` to be used for the attention net `F` and the feature extraction of the scale net `G`. Must return a dictionary with `'attention'` and `'fg_feats'` as keys, which represent, respectively, the attention output of `F` and the feature vector from `G`
- `scale_arch`: `nn.Module` to be used to classify `fg_feats` obtained from `G`
- `freeze_model`: Whether to freeze the attention and scale networks or not
- `ndf`: Width parameter for `model_arch`
- `nc`: Number of channels in input images
- `attention_sparsity`: The desired sparsity, specified as a real number in `[0, 1]`
- `attention_sparsity_r`: Parameter determining the compression in the sigmoid
- `batch_size`: Training batch size to use
- `lr`: Desired learning rate
- `momentum`: Desired momentum for the optimiser
- `weight_decay`: L2 regularisation weight hyperparameter for the optimiser
- `n_epochs`: Number of training epochs
- `losses`: A list specifying which losses to use for training. See the variable `loss_definitions` in `train.py` for possible losses
- `predictions`: A list specifying what predictions to make
- `checkpoint_special`: A list or an int. A list specifies the epochs at which to keep specific checkpoints; an int specifies a checkpoint every so many epochs
- `lr_decay`: Decay factor for the learning rate
- `lr_decay_every`: Interval at which the LR is decayed
- `equivariance_scale`: Whether to enforce equivariance to scale for the attention network `F`. Preferably `False`
- `equivariance_aug`: Whether to enforce equivariance to rigid transforms for the attention network `F`. See `utils.py` for possible values for this option
- `optimiser`: Which optimiser to use. Allowed values are `'adam'` and `'sgd'`
- `pixel_means`: Pixel means for input images
- `pixel_stds`: Pixel standard deviations for input images
- `max_train_batches_per_epoch`: Maximum number of training batches per epoch
- `max_val_batches_per_epoch`: Maximum number of validation batches per epoch
- `use_colour_transform`: Whether to use colour augmentation
- `use_image_transforms`: Whether to use geometric augmentation
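The `model_arch` contract described above (a module whose forward pass returns a dictionary with `'attention'` and `'fg_feats'` keys) can be sketched as follows. This is a minimal, hypothetical stand-in: the layer choices and the `ToyAttentionModel` name are illustrative only, not the actual architecture used by the code.

```python
import torch
import torch.nn as nn

class ToyAttentionModel(nn.Module):
    """Hypothetical sketch of the expected model_arch interface:
    forward() must return a dict with 'attention' and 'fg_feats' keys."""
    def __init__(self, nc=3, ndf=8):
        super().__init__()
        # Stands in for the attention net F: one attention map per image.
        self.attention = nn.Conv2d(nc, 1, kernel_size=1)
        # Stands in for the feature extractor of the scale net G.
        self.features = nn.Sequential(
            nn.Conv2d(nc, ndf, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, x):
        return {
            'attention': torch.sigmoid(self.attention(x)),  # output of F
            'fg_feats': self.features(x),                   # feature vector from G
        }

model = ToyAttentionModel(nc=3, ndf=8)
out = model(torch.randn(2, 3, 32, 32))
print(sorted(out.keys()))            # ['attention', 'fg_feats']
print(tuple(out['fg_feats'].shape))  # (2, 8)
```

A `scale_arch` module would then take the `(batch, ndf)` tensor under `'fg_feats'` as its input.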
- `dset_name`: Name of the dataset
- `image_size`: Sizes of input images. If an image is not this size, it is padded by placing the original in the top-left corner
- `patch_size`: Sizes of patches to pick out from input images
- `workers`: Number of dataset workers
- `dataroot`: Path to dataset files
- `stain_normaliser_file`: Path to the stain normalisation target, relative to `dataroot`
- `hed_decomp`: Whether to use the HED decomposition instead of RGB images
- `hed_channels`: Which HED channels to use
- `seg_threshold`: Threshold for rejecting input tiles. This value in `[0, 1]` specifies the minimum fraction of an input image that must be tissue in order for it to be used
- `levels`: Magnification levels to use for training. `OpenSlide`'s convention is followed, so `'max'` denotes the maximum magnification, while `'-1'` and `'-2'` denote one and two lower levels of magnification, respectively
- `splits_file`: Splits file in `dataroot` specifying the train/val/test split
- `seg_cover_file`: File in `dataroot` specifying the fraction of tissue cover for extracted images. This file for MoNuSeg images extracted with the provided code is included in `data_processing`
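The top-left padding behaviour described for `image_size` can be illustrated with a short sketch. The `pad_top_left` helper below is hypothetical (written with NumPy for clarity), not the project's actual padding code.

```python
import numpy as np

def pad_top_left(img, image_size):
    """Hypothetical illustration of the image_size behaviour: images smaller
    than the target size are padded by placing the original in the
    top-left corner of a zero canvas."""
    h, w = img.shape[:2]
    out = np.zeros((image_size, image_size) + img.shape[2:], dtype=img.dtype)
    out[:h, :w] = img
    return out

small = np.ones((20, 30, 3), dtype=np.uint8)
padded = pad_top_left(small, 64)
print(padded.shape)             # (64, 64, 3)
print(padded[:20, :30].min())   # 1 -- original preserved in the top-left
print(padded[20:, :].max())     # 0 -- remainder is zero padding
```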
- `load`: Initialise the entire experiment from this path
- `init_model`: Initialise only the model from this path
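As a closing illustration, the `lr_decay` and `lr_decay_every` options above read most naturally as a step decay schedule. The sketch below assumes a multiplicative decay applied once every `lr_decay_every` epochs; this convention is an assumption, not something the options list confirms.

```python
def stepped_lr(base_lr, lr_decay, lr_decay_every, epoch):
    # Hypothetical sketch: multiply the LR by lr_decay once
    # for every completed lr_decay_every-epoch interval.
    return base_lr * (lr_decay ** (epoch // lr_decay_every))

# With lr=0.1, lr_decay=0.5, lr_decay_every=10:
for epoch in (0, 9, 10, 20):
    print(epoch, stepped_lr(0.1, 0.5, 10, epoch))
```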