
Release notes

2.7.10

New Features

  • Add torch save and load kwargs (#3831), thanks to @JonathanGrant
    • This lets us do nice things like set pickle_module to cloudpickle (see the sketch after this list)
  • PyTorch 1.13 Compatibility (#3828), thanks to @warner-benjamin
  • Recursive copying of attribute dictionaries for TensorImage subclass (#3822), thanks to @restlessronin
  • OptimWrapper sets same param groups as Optimizer (#3821), thanks to @warner-benjamin
    • This PR harmonizes the default parameter group setting between OptimWrapper and Optimizer by modifying OptimWrapper to match Optimizer's logic.
  • Support normalization of 1-channel images in unet (#3820), thanks to @marib00
  • Add img_cls param to ImageDataLoaders (#3808), thanks to @tcapelle
    • This is particularly useful for passing PILImageBW for MNIST, as in the sketch after this list.
  • Add support for kwargs to tensor() when arg is an ndarray (#3797), thanks to @SaadAhmedGit
  • Add latest TorchVision models on fastai (#3791), thanks to @datumbox
  • Option to preserve filenames in download_images (#2983), thanks to @mess-lelouch
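
A minimal sketch of two of the additions above (torch save/load kwargs and img_cls). It assumes cloudpickle is installed and that export forwards its extra kwargs to torch.save, per the PR description; treat the exact pass-through as version-dependent:

```python
import cloudpickle
from fastai.vision.all import *

# Build one-channel MNIST loaders by passing img_cls (#3808)
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path, img_cls=PILImageBW)

learn = vision_learner(dls, resnet18, n_in=1)  # n_in=1 for grayscale input
# Forward torch.save kwargs such as pickle_module (#3831)
learn.export('learner.pkl', pickle_module=cloudpickle)
```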

Bugs Squashed

  • get_text_classifier fails with custom AWD_LSTM (#3817)
  • revert auto-enable of mac mps due to pytorch limitations (#3769)
  • Workaround for performance bug in PyTorch with subclassed tensors (#3683), thanks to @warner-benjamin

2.7.8

New Features

  • add split value argument to ColSplitter (#3737), thanks to @DanteOz
  • deterministic repr for PIL images (#3762)
  • option to skip default callbacks in Learner (#3739)
  • update for nbdev2 (#3747)

Bugs Squashed

  • IntToFloatTensor failing on Mac mps due to missing op (#3761)
  • fix for pretrained in vision.learner (#3746), thanks to @peterdudfield
  • fix same file error message when resizing image (#3743), thanks to @cvergnes

2.7.6

New Features

  • Initial Mac GPU (mps) support (#3719)

2.7.5

New Features

  • auto-normalize timm models (#3716)
  • PyTorch 1.12 support

2.7.4

New Features

  • Add DataBlock.weighted_dataloaders (#3706)
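
A hedged sketch of the new method, assuming wgts is one weight per item returned by get_items (verify the weight/item alignment against your fastai version):

```python
from fastai.vision.all import *

path = untar_data(URLs.MNIST_TINY)
dblock = DataBlock(blocks=(ImageBlock(cls=PILImageBW), CategoryBlock),
                   get_items=get_image_files,
                   get_y=parent_label,
                   splitter=GrandparentSplitter())

# Illustrative weighting: sample images of the digit '3' twice as often
items = get_image_files(path)
wgts = [2.0 if parent_label(o) == '3' else 1.0 for o in items]
dls = dblock.weighted_dataloaders(path, wgts, bs=32)
```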

2.7.2

Bugs Squashed

  • PIL.Resampling only added in v9.1 (#3699)

2.7.1

Bugs Squashed

  • Update fastcore minimum version

2.7.0

Breaking changes

  • Distributed training now uses Hugging Face Accelerate, rather than fastai's launcher. Distributed training is now supported in a notebook -- see this tutorial for details
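
A minimal sketch of the Accelerate-backed flow, assuming a multi-GPU machine. The script form below is launched with accelerate launch; notebook use goes through Accelerate's notebook_launcher, as covered in the tutorial:

```python
# train.py -- run with: accelerate launch train.py
from fastai.vision.all import *
from fastai.distributed import *

path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
learn = vision_learner(dls, resnet18, metrics=accuracy)
with learn.distrib_ctx():  # wraps training in Accelerate's distributed setup
    learn.fine_tune(1)
```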

New Features

  • resize_images creates folder structure at dest when recurse=True (#3692)
  • Integrate nested callable and getcallable (#3691), thanks to @muellerzr
  • workaround pytorch subclass performance bug (#3682)
  • Torch 1.12.0 compatibility (#3659), thanks to @josiahls
  • Integrate Accelerate into fastai (#3646), thanks to @muellerzr
  • New Callback event, before and after backward (#3644), thanks to @muellerzr
  • Let optimizer use built torch opt (#3642), thanks to @muellerzr
  • Support PyTorch Dataloaders with DistributedDL (#3637), thanks to @tmabraham
  • Add channels_last cb (#3634), thanks to @tcapelle (see the sketch after this list)
  • support all timm kwargs (#3631)
  • send self.loss_func to device if it is an instance of nn.Module (#3395), thanks to @arampacha
  • adds tracking and logging best metrics to wandb cb (#3372), thanks to @arampacha
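
A sketch of the channels_last callback mentioned above, assuming an existing vision dls; the channels-last memory format is mainly a speedup when combined with mixed precision on recent NVIDIA GPUs:

```python
from fastai.vision.all import *

# ChannelsLast casts the model and batches to torch.channels_last format
learn = vision_learner(dls, resnet18, cbs=ChannelsLast()).to_fp16()
# recent fastai versions also patch a Learner.to_channelslast() convenience
```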

Bugs Squashed

  • Solve hanging load_model and let LRFind be run in a distributed setup (#3689), thanks to @muellerzr
  • pytorch subclass functions fail if no positional args (#3687)
  • Workaround for performance bug in PyTorch with subclassed tensors (#3683), thanks to @warner-benjamin
  • Fix Tokenizer.get_lengths (#3667), thanks to @karotchykau
  • load_learner with cpu=False doesn't respect the current cuda device if model exported on another; fixes #3656 (#3657), thanks to @ohmeow
  • [Bugfix] Fix smoothloss on distributed (#3643), thanks to @muellerzr
  • WandbCallback Error: "Tensors must be CUDA and dense" on distributed training (#3291)
  • vision tutorial failed at learner.fine_tune(1) (#3283)

2.6.3

Bugs Squashed

  • Fix Learner pickling problem introduced in v2.6.2

2.6.2

Bugs Squashed

  • Race condition: 'Tensor' object has no attribute 'append' (#3385)

2.6.0

New Features

  • add support for Ross Wightman's Pytorch Image Models (timm) library (#3624)
  • rename cnn_learner to vision_learner since we now support models other than CNNs too (#3625)
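
With timm installed, vision_learner accepts a timm architecture by name (a string) alongside the usual torchvision callables; a short sketch:

```python
from fastai.vision.all import *  # requires the timm package for timm archs

path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
learn = vision_learner(dls, 'convnext_tiny', pretrained=True)  # timm model by name
learn.fine_tune(1)
```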

2.5.6

2.5.5

New Features

  • Update fastcore dep

2.5.4

New Features

  • Support py3.10 annotations (#3601)

Bugs Squashed

  • Fix pin_memory=True breaking (batch) Transforms (#3606), thanks to @johan12345
  • Add Python 3.9 to setup.py for PyPI (#3604), thanks to @nzw0301
  • removes add_vert from get_grid calls (#3593), thanks to @kevinbird15
  • Making loss_not_reduced work with DiceLoss (#3583), thanks to @hiromis
  • Fix bug in URLs.path() in 04_data.external (#3582), thanks to @malligaraj
  • Custom name for metrics (#3573), thanks to @bdsaglam
  • Update import for show_install (#3568), thanks to @fr1ll
  • Fix Classification Interpretation (#3563), thanks to @warner-benjamin
  • Updates Interpretation class to be memory efficient (#3558), thanks to @warner-benjamin
  • Learner.show_results uses passed dataloader via dl_idx or dl arguments (#3554), thanks to @warner-benjamin
  • Fix learn.export pickle error with MixedPrecision Callback (#3544), thanks to @warner-benjamin
  • Fix concurrent LRFinder instances overwriting each other by using tempfile (#3528), thanks to @warner-benjamin
  • Fix _get_shapes to work with dictionaries (#3520), thanks to @ohmeow
  • Fix torch version checks, remove clip_grad_norm check (#3518), thanks to @warner-benjamin
  • Fix nested tensors predictions compatibility with fp16 (#3516), thanks to @tcapelle
  • Learning rate passed via OptimWrapper not updated in Learner (#3337)
  • Different results after running lr_find() at different times (#3295)
  • lr_find() may fail if run in parallel from the same directory (#3240)

2.5.3

2.5.1

  • Import download_url from fastdownload

2.5.0

Breaking changes

  • config.yml has been renamed to config.ini, and is now in ConfigParser format instead of YAML
  • The _path suffixes in config.ini have been removed

Bugs Squashed

  • Training with learn.to_fp16() fails with PyTorch 1.9 / Cuda 11.4 (#3438)
  • pandas 1.3.0 breaks add_elapsed_times (#3431)

2.4.1

New Features

  • add DiceLoss (#3386), thanks to @tcapelle
  • TabularPandas data transform reproducibility (#2826)

Bugs Squashed

  • Latest Pillow v8.3.0 breaks conversion Image to Tensor (#3416)

2.4

Breaking changes

  • QRNN module removed, due to incompatibility with PyTorch 1.9, and lack of utilization of QRNN in the deep learning community. QRNN was our only module that wasn't pure Python, so with this change fastai is now a pure Python package.

New Features

  • Support for PyTorch 1.9
  • Improved LR Suggestions (#3377), thanks to @muellerzr
  • SaveModelCallback every nth epoch (#3375), thanks to @KeremTurgutlu
  • Send self.loss_func to device if it is an instance of nn.Module (#3395), thanks to @arampacha
  • Batch support for more than one image (#3339)
  • Changeable tfmdlists for TransformBlock, Datasets, DataBlock (#3327)

2.3.1

New Features

  • Add support for pytorch 1.8 (#3349)
  • Add support for spacy3 (#3348)
  • Add support for Windows. Big thanks to Microsoft for many contributions to get this working
  • Timedistributed layer and Image Sequence Tutorial (#3124), thanks to @tcapelle
  • Add interactive run logging to AzureMLCallback (#3341), thanks to @yijinlee
  • Batch support for more than one image (#3339)
  • Have interp use ds_idx, add tests (#3332), thanks to @muellerzr
  • Automatically have fastai determine the right device, even with torch DataLoaders (#3330), thanks to @muellerzr
  • Add at_end feature to SaveModelCallback (#3296), thanks to @tmabraham
  • Improve inplace params in Tabular's new and allow for new and test_dl to be in place (#3292), thanks to @muellerzr
  • Update VSCode & Codespaces dev container (#3280), thanks to @bamurtaugh
  • Add max_scale param to RandomResizedCrop(GPU) (#3252), thanks to @kai-tub
  • Increase testing granularity for speedup (#3242), thanks to @ddobrinskiy

Bugs Squashed

  • Make TTA turn shuffle and drop_last off when using ds_idx (#3347), thanks to @muellerzr
  • Add order to TrackerCallback derived classes (#3346), thanks to @muellerzr
  • Prevent schedule from crashing close to the end of training (#3335), thanks to @Lewington-pitsos
  • Fix ability to use raw pytorch DataLoaders (#3328), thanks to @hamelsmu
  • Fix PixelShuffle_icnr weight (#3322), thanks to @pratX
  • Creation of new DataLoader in Learner.get_preds has wrong keyword (#3316), thanks to @tcapelle
  • Correct layers order in tabular learner (#3314), thanks to @gradientsky
  • Fix vmin parameter default (#3305), thanks to @tcapelle
  • Ensure call to one_batch places data on the right device (#3298), thanks to @tcapelle
  • Fix Cutmix Augmentation (#3259), thanks to @MrRobot2211
  • Fix custom tokenizers for DataLoaders (#3256), thanks to @iskode
    • fix error setting 'tok_tfm' parameter in TextDataloaders.from_folder
  • Fix lighting augmentation (#3255), thanks to @kai-tub
  • Fix CUDA variable serialization (#3253), thanks to @mszhanyi
  • change batch tfms to have the correct dimensionality (#3251), thanks to @trdvangraft
  • Ensure add_datepart adds elapsed as numeric column (#3230), thanks to @aberres

2.3.0

Breaking Changes

  • fix optimwrapper to work with param_groups (#3241), thanks to @tmabraham
    • OptimWrapper now has a different constructor signature, which makes it easier to wrap PyTorch optimizers
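
A sketch of the new pattern for wrapping a raw PyTorch optimizer (the dls below is assumed to already exist):

```python
from functools import partial
import torch
from fastai.vision.all import *

# After #3241, OptimWrapper takes the torch optimizer class via `opt`
opt_func = partial(OptimWrapper, opt=torch.optim.AdamW)
learn = vision_learner(dls, resnet18, opt_func=opt_func)
```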

New Features

  • Support discriminative learning with OptimWrapper (#2829)

Bugs Squashed

  • Updated to support adding transforms to multiple dataloaders (#3268), thanks to @marii-moe
    • This fixes an issue in 2.2.7 which resulted in incorrect validation metrics when using Normalization

2.2.7

Bugs Squashed

  • Regression fix: Ensure add_datepart adds elapsed as numeric column (#3230), thanks to @aberres

2.2.6

Bugs Squashed

  • 2.2.5 was not released correctly - it was actually 2.2.3

2.2.5

New Features

  • Enhancement: Let TextDataLoaders take in a custom tok_text_col (#3208), thanks to @muellerzr
  • Changed dataloaders arguments to have consistent overrides (#3178), thanks to @marii-moe
  • Better support for iterable datasets (#3173), thanks to @jcaw

Bugs Squashed

  • BrokenProcessPool in download_images() on Windows (#3196)
  • error on predict() or using interp with resnet and MixUp (#3180)
  • Fix 'cat' attribute with pandas dataframe: AttributeError: Can only use .cat accessor with a 'category' dtype (#3165), thanks to @dreamflasher
  • cont_cat_split does not support pandas types (#3156)
  • DataBlock.dataloaders does not support the advertised "shuffle" argument (#3133)

2.2.3

New Features

  • Calculate correct nf in create_head based on concat_pool (#3115), thanks to @muellerzr

Bugs Squashed

  • wandb integration failing with latest wandb library (#3066)
  • Learner.load and LRFinder not functioning properly for the optimizer states (#2892)

2.2.2

Bugs Squashed

  • tensorboard and wandb can not access smooth_loss (#3131)

2.2.0

Breaking Changes

  • Promote NativeMixedPrecision to default MixedPrecision (and similar for Learner.to_fp16); old MixedPrecision is now called NonNativeMixedPrecision (#3127)
    • Use the new GradientClip callback instead of the clip parameter for gradient clipping (see the sketch after this list)
  • Adding a Callback which has the same name as an attribute no longer raises an exception (#3109)
  • RNN training now requires RNNCallback, but does not require RNNRegularizer; out and raw_out have moved to RNNRegularizer (#3108)
    • Call rnn_cbs to get all callbacks needed for RNN training, optionally with regularization
  • replace callback run_after with order; do not run after cbs on exception (#3101)
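
A sketch of the clip-to-GradientClip migration from the first item above, assuming existing dls, model, and loss objects:

```python
from fastai.vision.all import *

# pre-2.2.0: clip was a parameter of the MixedPrecision setup (removed)
# 2.2.0+: gradient clipping is a standalone callback
learn = Learner(dls, model, loss_func=loss, cbs=GradientClip(0.1)).to_fp16()
```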

New Features

  • Add GradientClip callback (#3107)
  • Make Flatten cast to TensorBase to simplify type compatibility (#3106)
  • make flattened metrics compatible with all tensor subclasses (#3105)
  • New class method TensorBase.register_func to register types for __torch_function__ (#3097)
  • new dynamic flag for controlling dynamic loss scaling in NativeMixedPrecision (#3096)
  • remove need to call to_native_fp32 before predict; set skipped in NativeMixedPrecision after NaN from dynamic loss scaling (#3095)
  • make native fp16 extensible with callbacks (#3094)
  • Calculate correct nf in create_head based on concat_pool (#3115), thanks to @muellerzr

2.1.10

Bugs Squashed

  • NoneType object has no attribute append in fastbook chapter 6 BIWI example (#3091)

2.1.9

New Features

  • Refactor MixUp and CutMix into MixHandler (#3037), thanks to @muellerzr
    • Refactors into a general MixHandler class, with MixUp and CutMix simply implementing a before_batch to perform the data augmentation. See fastai.callback.mixup
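
After the refactor, MixUp and CutMix remain ordinary callbacks; a usage sketch (dls assumed to exist):

```python
from fastai.vision.all import *

learn = vision_learner(dls, resnet18, cbs=MixUp(0.4))  # or cbs=CutMix(1.0)
learn.fine_tune(1)
```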

Bugs Squashed

  • Gradient Accumulation + Mixed Precision shows artificially high training loss (#3048)

2.1.8

Bugs Squashed

  • Update for fastcore negate_func->not_
  • LR too high for gradient accumulation (#3040), thanks to @marii-moe
  • Torchscript transforms incompatibility with nn.Sequential (#2920)

2.1.7

New Features

  • Pytorch 1.7 subclassing support (#2769)

Bugs Squashed

  • unsupported operand type(s) for +=: 'TensorCategory' and 'TensorText' when using AWD_LSTM for text classification (#3027)
  • UserWarning when using SaveModelCallback() on after_epoch (#3025)
  • Segmentation error: no implementation found for 'torch.nn.functional.cross_entropy' on types that implement torch_function (#3022)
  • TextDataLoaders.from_df() returns TypeError: 'float' object is not iterable (#2978)
  • Internal assert error in awd_qrnn (#2967)

2.1.6

New Features

  • Option to preserve filenames in download_images (#2983), thanks to @mess-lelouch
  • Deprecate config in create_cnn and instead pass kwargs directly (#2966), thanks to @borisdayma

Bugs Squashed

  • Progress and Recorder callbacks serialize their data, resulting in large Learner export file sizes (#2981)
  • TextDataLoaders.from_df() returns TypeError: 'float' object is not iterable (#2978)
  • "only one element tensors can be converted to Python scalars" exception in Siamese Tutorial (#2973)
  • Learn.load and LRFinder not functioning properly for the optimizer states (#2892)

2.1.5

Breaking Changes

  • remove log_args (#2954)

New Features

  • Improve performance of RandomSplitter (h/t @muellerzr) (#2957)

Bugs Squashed

  • Exporting TabularLearner via learn.export() leads to huge file size (#2945)
  • TensorPoint object has no attribute img_size (#2950)

2.1.4

Breaking Changes

  • moved has_children from nn.Module to free function (#2931)

New Features

  • Support persistent workers (#2768)

Bugs Squashed

  • unet_learner segmentation fails (#2939)
  • In "Transfer learning in text" tutorial, the "dls.show_batch()" show wrong outputs (#2910)
  • Learn.load and LRFinder not functioning properly for the optimizer states (#2892)
  • Documentation for Show_Images broken (#2876)
  • URL link for documentation for torch_core library from the doc() method gives incorrect url (#2872)

2.1.3

Bugs Squashed

  • Work around broken PyTorch subclassing of some new_* methods (#2769)

2.1.0

New Features

  • PyTorch 1.7 compatibility (#2917)

PyTorch 1.7 includes support for tensor subclassing, so we have replaced much of our custom subclassing code with PyTorch's. We have seen a few bugs in PyTorch's subclassing feature, however, so please file an issue if you see any code failing now which was working before.

There is one breaking change in this version of fastai: custom metadata is now stored directly on tensors as standard Python attributes, instead of in the special _meta attribute. Only advanced customization of fastai's tensor subclasses used this functionality, so if you do not know what this means, you were not using it.
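
An illustrative sketch of that change: attributes now live directly on the tensor subclass rather than in a _meta dict:

```python
import torch
from fastai.torch_core import TensorBase

t = TensorBase(torch.randn(3, 224, 224))
t.img_size = (224, 224)  # stored as a plain Python attribute
print(t.img_size)        # (224, 224)
```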

2.0.19

This version was released after 2.1.0, and adds fastcore 1.3 compatibility, whilst maintaining PyTorch 1.6 compatibility. It has no new features or bug fixes.

2.0.18

Forthcoming breaking changes

The next version of fastai will be 2.1. It will require PyTorch 1.7, which has significant foundational changes. It should not require any code changes except for people doing sophisticated tensor subclassing work, but nonetheless we recommend testing carefully. Therefore, we recommend pinning your fastai version to <2.1 if you are not able to fully test your fastai code when the new version comes out.

Dependencies

  • pin pytorch (<1.7) and torchvision (<0.8) requirements (#2915)
  • Add version pin for fastcore
  • Remove version pin for sentencepiece

2.0.16

New Features

  • added support for tb projector word embeddings (#2853), thanks to @floleuerer
  • Added ability to have variable length draw (#2845), thanks to @marii-moe
  • add pip upgrade cell to all notebooks, to ensure colab has current fastai version (#2843)

Bugs Squashed

  • fix TabularDataLoaders inference of cont_names to keep y_names separate (#2859), thanks to @sutt

2.0.15

Breaking Changes

  • loss functions were moved to loss.py (#2843)

2.0.14

New Features

  • new callback event: after_create (#2842)

    • This event runs after a Learner is constructed. It's useful for initial setup which isn't needed for every fit, but just once for each Learner (such as setting initial defaults). See the sketch after this list.
  • Modified XResNet to support Conv1d / Conv3d (#2744), thanks to @floleuerer

    • Supports different input dimensions, kernel sizes and stride (added parameters ndim, ks, stride). Tested with fastai_audio and fastai time series with promising results.
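
A sketch of the after_create event noted above: a callback whose setup runs once, when the Learner is constructed (dls and model are assumed to exist; the attribute set here is hypothetical):

```python
from fastai.vision.all import *

class DefaultsCB(Callback):
    def after_create(self):
        # runs once after Learner.__init__, not at the start of every fit
        self.learn.my_default = 0.5  # hypothetical one-time setup

learn = Learner(dls, model, cbs=DefaultsCB())
```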

Bugs Squashed

  • img_size attribute for TensorPoint is not updated properly (#2799), thanks to @IRailean

2.0.13

Bugs Squashed

  • Undo breaking num_workers fix (#2804)
    • Some users found the recent addition of num_workers to inference functions was causing problems, particularly on Windows. This PR reverts that change, until we find a more reliable way to handle num_workers for inference.
  • learn.tta() fails on a learner imported with load_learner() (#2764)
  • learn.summary() crashes out on 2nd transfer learning (#2735)

2.0.12

Bugs Squashed

  • Undo breaking num_workers fix (#2804)

2.0.11

Bugs Squashed

  • Fix cont_cat_split for multi-label classification (#2759)
  • fastbook error: "index 3 is out of bounds for dimension 0 with size 3" (#2792)

2.0.10

New Features

  • update for fastcore 1.0.5 (#2775)

2.0.6

New Features

  • "Remove pandas min version requirement" (#2765)
  • Modify XResNet to support Conv1d / Conv3d (#2744)
    • Also support different input dimensions, kernel sizes and stride (added parameters ndim, ks, stride).
  • Add support for multidimensional arrays for RNNDropout (#2737)
  • MCDropoutCallback to enable Monte Carlo Dropout in fastai. (#2733)
    • A new callback that enables Monte Carlo Dropout in fastai's get_preds method. Monte Carlo Dropout simply enables dropout during inference; calling get_preds multiple times and stacking the results yields a distribution of predictions that you can use to evaluate prediction uncertainty (see the sketch after this list).
  • adjustable workers in get_preds (#2721)
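
A sketch of the Monte Carlo Dropout workflow described above, assuming a trained learn:

```python
import torch
from fastai.vision.all import *  # exports MCDropoutCallback

N = 10  # number of stochastic forward passes
preds = torch.stack([learn.get_preds(cbs=[MCDropoutCallback()])[0]
                     for _ in range(N)])
uncertainty = preds.std(0)  # spread across dropout samples per prediction
```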

Version 2.0.0

  • Initial release of v2