Aggregate changes for v1 #744

Merged: 95 commits, Sep 7, 2024

Commits
3afe9d7
chore!: remove cpu/gpu/stacktrace_truncation
avik-pal Jul 2, 2024
1fbe795
chore!: remove old preferences
avik-pal Jul 2, 2024
5a3b010
chore!: remove contrib deprecations
avik-pal Jul 2, 2024
eff6f50
chore!: remove `st_fixed_type`
avik-pal Jul 2, 2024
3e68b54
chore!: update layer_map/freeze
avik-pal Jul 2, 2024
7d21d8c
chore!: remove flattening of chains
avik-pal Jul 2, 2024
ab46dd8
fix: remove old usage of TrainState
avik-pal Jul 3, 2024
9ea6e60
chore: remove old `transform` export
avik-pal Jul 3, 2024
ae66cc8
ci: remove unnecessary vars
avik-pal Jul 3, 2024
6ea4622
test: fix some tests
avik-pal Jul 3, 2024
6db3f6f
chore!: remove annotation of WrappedFunction
avik-pal Jul 4, 2024
8fb9ad4
fix!: remove potentially incorrect Tracker gradients for SimpleChains
avik-pal Jul 11, 2024
e2b0e16
fix: store the bias as a vector
avik-pal Jul 23, 2024
287932d
fix: test updates from new changes
avik-pal Jul 23, 2024
d818328
chore: drop pre-1.0 weight initializers
avik-pal Jul 27, 2024
e4ae250
refactor: migrate to `MLDataDevices`
avik-pal Jul 27, 2024
a35f1fb
feat: reexport NNlib
avik-pal Jul 27, 2024
dddfd66
chore: remove old versions
avik-pal Jul 27, 2024
fcf27c6
fix: errors in testing
avik-pal Jul 27, 2024
6f2f618
fix: don't reexport NNlib.dropout
avik-pal Jul 27, 2024
3ba442f
fix: remove explicit imports
avik-pal Aug 18, 2024
6553b87
fix: bad rebase
avik-pal Aug 18, 2024
221901a
fix: recurrent bias flatten
avik-pal Aug 18, 2024
506f5d2
chore: update to using [email protected]
avik-pal Aug 18, 2024
ef18ced
chore: update to using [email protected]
avik-pal Aug 18, 2024
f0c7374
fix: broken tests
avik-pal Aug 18, 2024
f4d3fc8
chore: remove all references to LuxDeviceUtils
avik-pal Aug 18, 2024
947817b
feat: define fallback `outputsize`
avik-pal Aug 18, 2024
c52754a
test: fix broken tests
avik-pal Aug 18, 2024
ec3be20
fix: printing of container layers
avik-pal Aug 19, 2024
005ecf4
fix!: (re)move deprecated `DynamicExpressionsLayer`
avik-pal Aug 29, 2024
cc9d663
fix!: (re)move deprecated `PeriodicEmbedding`
avik-pal Aug 29, 2024
fe81ec6
fix!: remove uses of DynamicExpressions
avik-pal Aug 29, 2024
929d60d
chore: apply formatting suggestion
avik-pal Aug 30, 2024
330545f
test: reexport NNlib in shared test modules
avik-pal Aug 30, 2024
1fd32e2
chore: mark version for release
avik-pal Aug 31, 2024
26e5058
chore: update compat entries for examples
avik-pal Aug 31, 2024
6d320b2
fix: remove old code from benchmarks
avik-pal Aug 31, 2024
6cb1b52
chore: run formatter
avik-pal Aug 31, 2024
a52f3d9
fix: qa testing
avik-pal Aug 31, 2024
80c7f0b
feat: controlled reexport of NNlib
avik-pal Aug 31, 2024
b9bbf99
docs: add a migration to v1 docs
avik-pal Sep 1, 2024
020afeb
test: try fixing the tests
avik-pal Sep 3, 2024
9f0cf23
fix: missing state type in StatefulLuxLayer
avik-pal Sep 3, 2024
b3c746a
chore: remove unnecessary `.0`
avik-pal Sep 3, 2024
e074134
fix!: cleanup the implementation of `layer_map`
avik-pal Sep 3, 2024
fac3d48
fix: tests
avik-pal Sep 3, 2024
4763413
fix: remove symbolic tutorial references
avik-pal Sep 4, 2024
2d1ad7b
fix: incorrect size propagator test rebase
avik-pal Sep 4, 2024
d41e3e3
fix!: remove allow_fast_activation
avik-pal Sep 4, 2024
43cee13
fix: bad rebase
avik-pal Sep 4, 2024
af560b6
fix: update freezing docs
avik-pal Sep 4, 2024
9b35c99
feat: correctly type-cast momentum and epsilon
avik-pal Sep 4, 2024
dce920d
feat: use the device iterators in the examples
avik-pal Sep 4, 2024
7b96d5d
fix: mark `Utils.eltype` as non-differentiable
avik-pal Sep 4, 2024
8c964f4
fix: misc docs issues
avik-pal Sep 5, 2024
5e8cda1
chore: remove old compat
avik-pal Sep 5, 2024
4d3f99c
feat: track running statistics in InstanceNorm
avik-pal Sep 5, 2024
95953aa
feat!: match initialization of convolution layers with Pytorch
avik-pal Sep 5, 2024
c6e6b04
fix: docstrings in InstanceNorm
avik-pal Sep 5, 2024
3219ae5
chore: run formatter
avik-pal Sep 5, 2024
4c8a7ee
feat!: upsampling now defaults to no align corners
avik-pal Sep 5, 2024
9ef520e
fix: tests and update init assumptions in tests
avik-pal Sep 5, 2024
e839347
fix: update initialization of linear layers
avik-pal Sep 5, 2024
8e68a1e
fix: update normalization defaults to match Pytorch
avik-pal Sep 5, 2024
183b18c
fix: update Embedding defaults to match Pytorch
avik-pal Sep 5, 2024
be1e74d
fix!: RNNCell defaults updated
avik-pal Sep 5, 2024
780de12
fix: testing failures due to non-zero bias
avik-pal Sep 5, 2024
01ce048
feat: update bias in LSTMCell
avik-pal Sep 5, 2024
965c9b3
feat: update bias in GRUCell
avik-pal Sep 5, 2024
0787b5b
feat: add cross correlation option to ConvTranspose
avik-pal Sep 5, 2024
6271c00
fix: accidental type to rand32
avik-pal Sep 5, 2024
1122d40
fix: unwanted printing
avik-pal Sep 5, 2024
e47f063
refactor: move the Upsample layer
avik-pal Sep 5, 2024
fd66780
feat: generalize pooling implementation and add LP versions
avik-pal Sep 5, 2024
15b20e4
fix: tests using old naming
avik-pal Sep 6, 2024
51956a9
test: remove unnecessary Enzyme runtime API
avik-pal Sep 6, 2024
8931b19
test: Enzyme with runtimeActivity enabled
avik-pal Sep 6, 2024
77dda0a
feat: add outpad to conv transpose
avik-pal Sep 6, 2024
03c57e5
docs: move docs around
avik-pal Sep 6, 2024
abf57f5
chore: run formatter
avik-pal Sep 6, 2024
74d22c4
test: more testing for ConvTranspose
avik-pal Sep 6, 2024
5afdd5a
test: more comprehensive testing for Pooling operations
avik-pal Sep 6, 2024
c548e64
test: minor test fixes
avik-pal Sep 6, 2024
3ccb1cc
fix: change in init
avik-pal Sep 6, 2024
bf11707
fix: DDIM updates and fix argument ordering
avik-pal Sep 6, 2024
f23be5f
fix: testing using old init assumptions
avik-pal Sep 6, 2024
06b20b8
fix: ConvMixer minor updates
avik-pal Sep 6, 2024
1039d97
fix: onehot supports GPUArrays
avik-pal Sep 6, 2024
9516a8d
test: explicitly zero init bias
avik-pal Sep 6, 2024
9ca0d38
fix: optionally test with FiniteDiff if ForwardDiff fails
avik-pal Sep 6, 2024
e395ed9
ci(buildkite): run some of the tutorials on CPU runners (#879)
avik-pal Sep 6, 2024
52c8880
docs: try fixing nested autodiff
avik-pal Sep 6, 2024
5010f10
docs: use the linux runners
avik-pal Sep 7, 2024
0bd7099
fix: update simplechains layer API
avik-pal Sep 7, 2024
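
Taken together, the headline change in these commits is the migration from LuxDeviceUtils.jl to MLDataDevices.jl and the removal of the deprecated `cpu`/`gpu` helpers (3afe9d7, e4ae250, f4d3fc8). As a hedged sketch — the model and data below are invented for illustration — post-PR device handling looks roughly like this:

```julia
# Minimal sketch of the v1 device API; on a machine with no functional GPU,
# `gpu_device()` falls back to the CPU device.
using Lux, Random

rng = Random.default_rng()
model = Chain(Dense(2 => 4, relu), Dense(4 => 1))
ps, st = Lux.setup(rng, model)

dev = gpu_device()              # from MLDataDevices (reexported by Lux)
ps, st = dev(ps), dev(st)       # replaces the removed `gpu(ps)` helper
x = dev(randn(rng, Float32, 2, 8))

y, _ = model(x, ps, st)
```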

Files changed (changes shown are from 79 of the 95 commits)
2 changes: 0 additions & 2 deletions .github/workflows/CI.yml
@@ -154,8 +154,6 @@ jobs:
         with:
           version: ${{ matrix.version }}
       - uses: julia-actions/julia-downgrade-compat@v1
-        with:
-          skip: 'AMDGPU'
       - uses: julia-actions/julia-buildpkg@v1
       - uses: julia-actions/julia-runtest@v1
         env:
17 changes: 6 additions & 11 deletions Project.toml
@@ -1,7 +1,7 @@
 name = "Lux"
 uuid = "b2108857-7c20-44ae-9111-449ecde12c47"
 authors = ["Avik Pal <[email protected]> and contributors"]
-version = "0.5.68"
+version = "1.0.0"
 
 [deps]
 ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -20,7 +20,6 @@ GPUArraysCore = "46192b85-c4d5-4398-a991-12ede77f4527"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 LossFunctions = "30fc2ffe-d236-52d8-8643-a9d8f7c094a7"
 LuxCore = "bb33d45b-7691-41d6-9220-0943567d0623"
-LuxDeviceUtils = "34f89e08-e1d5-43b4-8944-0b49ac560553"
 LuxLib = "82251201-b29d-42c6-8e01-566dec8acb11"
 MLDataDevices = "7e8f7934-dd98-4c1a-8fe8-92b47a384d40"
 MacroTools = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
@@ -42,7 +41,6 @@ WeightInitializers = "d49dbf32-c5c2-4618-8acc-27bb2598ef2d"
 [weakdeps]
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
 ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66"
-DynamicExpressions = "a40a106e-89c9-4ca8-8020-a735e8728b6b"
 Enzyme = "7da242da-08ed-463a-9acd-ee780be4f1d9"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 FunctionWrappers = "069b7b12-0de2-55c6-9aab-29f3d0a68a2e"
@@ -56,7 +54,6 @@ Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
 
 [extensions]
 LuxComponentArraysExt = "ComponentArrays"
-LuxDynamicExpressionsExt = "DynamicExpressions"
 LuxEnzymeExt = "Enzyme"
 LuxFluxExt = "Flux"
 LuxMLUtilsExt = "MLUtils"
@@ -70,15 +67,14 @@ LuxZygoteExt = "Zygote"
 [compat]
 ADTypes = "1.5"
 Adapt = "4"
-ArgCheck = "2.1"
+ArgCheck = "2.3"
 ArrayInterface = "7.9"
 CUDA = "5.3.2"
 ChainRulesCore = "1.24"
 Compat = "4.15"
 ComponentArrays = "0.15.16"
 ConcreteStructs = "0.2.3"
 DispatchDoctor = "0.4.12"
-DynamicExpressions = "0.16, 0.17, 0.18, 0.19"
 Enzyme = "0.12.26"
 EnzymeCore = "0.7.7"
 FastClosures = "0.3.2"
@@ -89,9 +85,8 @@ Functors = "0.4.12"
 GPUArraysCore = "0.1.6"
 LinearAlgebra = "1.10"
 LossFunctions = "0.11.1"
-LuxCore = "0.1.24"
-LuxDeviceUtils = "0.1.26"
-LuxLib = "0.3.42"
+LuxCore = "1"
+LuxLib = "1.2"
 MLDataDevices = "1.1"
 MLUtils = "0.4.4"
 MPI = "0.20.19"
@@ -104,7 +99,7 @@ Preferences = "1.4.3"
 Random = "1.10"
 Reexport = "1.2.2"
 ReverseDiff = "1.15"
-SIMDTypes = "0.1.0"
+SIMDTypes = "0.1"
 Setfield = "1.1.1"
 SimpleChains = "0.4.7"
 Static = "1.1.1"
@@ -113,6 +108,6 @@ Statistics = "1.10"
 Tracker = "0.2.34"
 UnrolledUtilities = "0.1.2"
 VectorizationBase = "0.21.70"
-WeightInitializers = "0.1.5, 1"
+WeightInitializers = "1"
 Zygote = "0.6.70"
 julia = "1.10"
3 changes: 3 additions & 0 deletions README.md
@@ -37,6 +37,9 @@ import Pkg
 Pkg.add("Lux")
 ```
 
+> [!TIP]
+> If you are using a pre-v1 version of Lux.jl, please see the [Updating to v1 section](https://lux.csail.mit.edu/dev/introduction/updating_to_v1/) for instructions on how to update.
+
 ## 🤸 Quickstart
 
 ```julia
2 changes: 1 addition & 1 deletion benchmarks/setup.jl
@@ -1,6 +1,6 @@
 using ADTypes: ADTypes, AutoEnzyme, AutoZygote
 using Adapt: adapt
-using Lux: Lux, BatchNorm, Chain, Conv, CrossCor, Dense, Dropout, FlattenLayer, MaxPool
+using Lux: Lux, BatchNorm, Chain, Conv, Dense, Dropout, FlattenLayer, MaxPool
 using MLDataDevices: AbstractDevice, CPUDevice, CUDADevice, AMDGPUDevice
 using NNlib: relu, gelu
 using Random: Random
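
The dropped `CrossCor` import reflects its removal in v1: cross-correlation is now requested via a keyword rather than a dedicated layer (commit 0787b5b adds the same option to `ConvTranspose`). A hypothetical before/after, assuming the v1 `cross_correlation` keyword on `Conv`:

```julia
using Lux

# pre-v1:
# layer = CrossCor((3, 3), 3 => 16, relu; pad=1)

# v1 (assumed keyword):
layer = Conv((3, 3), 3 => 16, relu; pad=1, cross_correlation=true)
```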
4 changes: 2 additions & 2 deletions benchmarks/setups/models.jl
@@ -25,10 +25,10 @@ function setup_vgg16_benchmarks!(suite::BenchmarkGroup, cpu_or_gpu::String,
             conv_bn((3, 3), 512 => 512, relu; pad=(1, 1), stride=(1, 1)),
             conv_bn((3, 3), 512 => 512, relu; pad=(1, 1), stride=(1, 1)),
             conv_bn((3, 3), 512 => 512, relu; pad=(1, 1), stride=(1, 1)),
-            MaxPool((2, 2)); disable_optimizations=true),
+            MaxPool((2, 2))),
         FlattenLayer(),
         Chain(Dense(512, 4096, relu), Dropout(0.5f0), Dense(4096, 4096, relu),
-            Dropout(0.5f0), Dense(4096, 10); name="Classifier"); disable_optimizations=true)
+            Dropout(0.5f0), Dense(4096, 10); name="Classifier"))
 
     for bsize in (32, 64, 128)
         setup_forward_pass_benchmark!(suite, "vgg16(32, 32, 3, $bsize)",
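
The deleted keyword ties back to commit 7d21d8c (`chore!: remove flattening of chains`): in v1 `Chain` keeps nested chains exactly as written, so `disable_optimizations=true` has nothing left to disable. A minimal sketch of the assumed v1 behaviour:

```julia
using Lux

# Nested chains are preserved in v1; no flattening pass, no keyword needed.
classifier = Chain(Dense(512 => 256, relu), Dense(256 => 10); name="Classifier")
model = Chain(FlattenLayer(), classifier)
```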
10 changes: 4 additions & 6 deletions docs/Project.toml
@@ -14,7 +14,6 @@ Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
 Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
 LuxCUDA = "d0bbae9a-e099-4d5b-a835-1c6931763bda"
 LuxCore = "bb33d45b-7691-41d6-9220-0943567d0623"
-LuxDeviceUtils = "34f89e08-e1d5-43b4-8944-0b49ac560553"
 LuxLib = "82251201-b29d-42c6-8e01-566dec8acb11"
 LuxTestUtils = "ac9de150-d08f-4546-94fb-7472b5760531"
 MLDataDevices = "7e8f7934-dd98-4c1a-8fe8-92b47a384d40"
@@ -39,18 +38,17 @@ GPUArraysCore = "0.1"
 KernelAbstractions = "0.9"
 LinearAlgebra = "1.10"
 Literate = "2.18.0"
-Lux = "0.5.62"
+Lux = "1"
 LuxCUDA = "0.3.2"
-LuxCore = "0.1.15"
-LuxDeviceUtils = "0.1.21"
-LuxLib = "0.3.42"
+LuxCore = "1"
+LuxLib = "1"
 LuxTestUtils = "1.1"
 MLDataDevices = "1.1"
 Optimisers = "0.3.3"
 Pkg = "1.10"
 Printf = "1.10"
 Random = "1.10"
 StaticArrays = "1"
-WeightInitializers = "0.1.7, 1"
+WeightInitializers = "1"
 Zygote = "0.6.70"
 julia = "1.10"
11 changes: 4 additions & 7 deletions docs/make.jl
@@ -1,7 +1,6 @@
 using Documenter, DocumenterVitepress, Pkg
 using Lux, LuxCore, LuxLib, WeightInitializers
-using LuxTestUtils, LuxDeviceUtils
-using MLDataDevices
+using LuxTestUtils, MLDataDevices
 using LuxCUDA
 
 using Optimisers # for some docstrings
@@ -15,6 +14,7 @@ pages = [
         "Introduction" => "introduction/index.md",
         "Overview" => "introduction/overview.md",
         "Resources" => "introduction/resources.md",
+        "Updating to v1" => "introduction/updating_to_v1.md",
         "Citation" => "introduction/citation.md"
     ],
     "Tutorials" => [
@@ -31,8 +31,7 @@ pages = [
             "tutorials/intermediate/3_HyperNet.md"
         ],
         "Advanced" => [
-            "tutorials/advanced/1_GravitationalWaveForm.md",
-            "tutorials/advanced/2_SymbolicOptimalControl.md"
+            "tutorials/advanced/1_GravitationalWaveForm.md"
         ]
     ],
     "Manual" => [
@@ -56,7 +55,6 @@ pages = [
         "api/Lux/distributed_utils.md",
     ],
     "Accelerator Support" => [
-        "api/Accelerator_Support/LuxDeviceUtils.md",
         "api/Accelerator_Support/MLDataDevices.md"
     ],
     "Building Blocks" => [
@@ -80,8 +78,7 @@ makedocs(; sitename="Lux.jl Docs",
     authors="Avik Pal et al.",
     clean=true,
     doctest=false, # We test it in the CI, no need to run it here
-    modules=[Lux, LuxCore, LuxLib, WeightInitializers,
-        LuxTestUtils, LuxDeviceUtils, MLDataDevices],
+    modules=[Lux, LuxCore, LuxLib, WeightInitializers, LuxTestUtils, MLDataDevices],
     linkcheck=true,
     repo="https://github.com/LuxDL/Lux.jl/blob/{commit}{path}#{line}",
     format=DocumenterVitepress.MarkdownVitepress(;
6 changes: 3 additions & 3 deletions docs/run_single_tutorial.jl
@@ -24,13 +24,13 @@ function preprocess(path, str)
     using InteractiveUtils
     InteractiveUtils.versioninfo()
 
-    if @isdefined(LuxDeviceUtils)
-        if @isdefined(CUDA) && LuxDeviceUtils.functional(LuxCUDADevice)
+    if @isdefined(MLDataDevices)
+        if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
             println()
             CUDA.versioninfo()
         end
 
-        if @isdefined(AMDGPU) && LuxDeviceUtils.functional(LuxAMDGPUDevice)
+        if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
             println()
             AMDGPU.versioninfo()
         end
8 changes: 3 additions & 5 deletions docs/src/.vitepress/config.mts
@@ -6,7 +6,7 @@ import { transformerMetaWordHighlight } from '@shikijs/transformers';
 
 // https://vitepress.dev/reference/site-config
 export default defineConfig({
-  base: 'REPLACE_ME_DOCUMENTER_VITEPRESS',// TODO: replace this in makedocs!
+  base: 'REPLACE_ME_DOCUMENTER_VITEPRESS',
   title: 'REPLACE_ME_DOCUMENTER_VITEPRESS',
   description: 'Documentation for LuxDL Repositories',
   cleanUrls: true,
@@ -79,7 +79,6 @@
     },
     {
       text: 'Accelerator Support', items: [
-        { text: 'LuxDeviceUtils', link: '/api/Accelerator_Support/LuxDeviceUtils' },
         { text: 'MLDataDevices', link: '/api/Accelerator_Support/MLDataDevices' }
       ]
     },
@@ -112,6 +111,7 @@ export default defineConfig({
       { text: 'Introduction', link: '/introduction' },
       { text: 'Overview', link: '/introduction/overview' },
       { text: 'Resources', link: '/introduction/resources' },
+      { text: 'Updating to v1', link: '/introduction/updating_to_v1' },
       { text: 'Citation', link: '/introduction/citation' }]
     },
     "/tutorials/": {
@@ -132,8 +132,7 @@
     },
     {
       text: 'Advanced', collapsed: false, items: [
-        { text: 'Training a Neural ODE to Model Gravitational Waveforms', link: '/tutorials/advanced/1_GravitationalWaveForm' },
-        { text: 'Solving Optimal Control Problems with Symbolic UDEs', link: '/tutorials/advanced/2_SymbolicOptimalControl' },]
+        { text: 'Training a Neural ODE to Model Gravitational Waveforms', link: '/tutorials/advanced/1_GravitationalWaveForm' },]
     },
     {
       text: 'Large Models', collapsed: true, items: [
@@ -216,7 +215,6 @@
     },
     {
       text: 'Accelerator Support', collapsed: false, items: [
-        { text: 'LuxDeviceUtils', link: '/api/Accelerator_Support/LuxDeviceUtils' },
        { text: 'MLDataDevices', link: '/api/Accelerator_Support/MLDataDevices' }]
     },
     {
50 changes: 0 additions & 50 deletions docs/src/api/Accelerator_Support/LuxDeviceUtils.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/src/api/Accelerator_Support/MLDataDevices.md
@@ -3,7 +3,7 @@
 `MLDataDevices.jl` is a lightweight package defining rules for transferring data across
 devices. Most users should directly use Lux.jl instead.
 
-!!! note "Comparison to LuxDeviceUtils.jl"
+!!! note "Transitioning from `LuxDeviceUtils.jl`"
 
     `LuxDeviceUtils.jl` was renamed to `MLDataDevices.jl` in v1.0 as a part of allowing
     these packages to have broader adoption outside the Lux community. However, Lux
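
As a rough illustration of the rename (assumptions, not taken from this diff): device types lose the `Lux` prefix, `functional` reports backend availability, and device objects are callable for data transfer.

```julia
using MLDataDevices

# LuxCUDADevice -> CUDADevice, LuxCPUDevice -> CPUDevice, etc.
dev = MLDataDevices.functional(CUDADevice) ? CUDADevice() : CPUDevice()

x = rand(Float32, 3, 4)
x_dev = dev(x)  # moves `x` to the selected backend
```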
11 changes: 3 additions & 8 deletions docs/src/api/Building_Blocks/LuxCore.md
@@ -14,8 +14,9 @@ Pages = ["LuxCore.md"]
 ## Abstract Types
 
 ```@docs
-LuxCore.AbstractExplicitLayer
-LuxCore.AbstractExplicitContainerLayer
+LuxCore.AbstractLuxLayer
+LuxCore.AbstractLuxWrapperLayer
+LuxCore.AbstractLuxContainerLayer
 ```
 
 ## General
@@ -49,12 +50,6 @@ LuxCore.update_state
 
 ## Layer size
 
-!!! warning
-
-    These specifications have been added very recently and most layers currently do not
-    implement them.
-
 ```@docs
 LuxCore.inputsize
 LuxCore.outputsize
 ```
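
Since the abstract-type renames ripple into downstream code, here is a minimal sketch of a custom layer under the new names; the `Scale` layer is hypothetical, invented purely for illustration:

```julia
using LuxCore, Random

# v1: subtype AbstractLuxLayer (formerly AbstractExplicitLayer).
struct Scale <: LuxCore.AbstractLuxLayer
    dims::Int
end

LuxCore.initialparameters(rng::AbstractRNG, l::Scale) = (; s=randn(rng, Float32, l.dims))
LuxCore.initialstates(::AbstractRNG, ::Scale) = NamedTuple()

# Layers are pure functions of (input, parameters, state).
(l::Scale)(x, ps, st) = (ps.s .* x, st)

rng = Random.default_rng()
layer = Scale(4)
ps, st = LuxCore.setup(rng, layer)
y, _ = layer(rand(rng, Float32, 4, 2), ps, st)
```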
2 changes: 1 addition & 1 deletion docs/src/api/Building_Blocks/LuxLib.md
@@ -1,4 +1,4 @@
-# LuxLib
+# [LuxLib](@id LuxLib-API)
 
 Backend for Lux.jl
24 changes: 0 additions & 24 deletions docs/src/api/Lux/contrib.md
@@ -8,12 +8,6 @@ All features listed on this page are **experimental** which means:
    experimental sooner.
 3. None of the features are exported.
 
-!!! warning
-
-    Starting v"0.5.2" all Experimental features need to be accessed via
-    `Lux.Experimental.<feature>`. Direct access via `Lux.<feature>` will be removed in
-    v"0.6".
-
 ## Index
 
 ```@index
@@ -22,11 +16,6 @@ Pages = ["contrib.md"]
 
 ## Parameter Freezing
 
-!!! info
-
-    In the long term, this will be supported via
-    [Optimisers.jl](https://github.com/FluxML/Optimisers.jl/pull/49).
-
 ```@docs
 Lux.Experimental.FrozenLayer
 Lux.Experimental.freeze
@@ -39,7 +28,6 @@ For detailed usage example look at the [manual page](@ref freezing-model-paramet
 
 ```@docs
 Lux.Experimental.layer_map
-Lux.Experimental.@layer_map
 ```
 
 ## Debugging Functionality
@@ -56,15 +44,3 @@ Lux.Experimental.DebugLayer
 ```@docs
 Lux.Experimental.share_parameters
 ```
-
-## StatefulLuxLayer
-
-[`Lux.StatefulLuxLayer`](@ref) used to be part of experimental features, but has been
-promoted to stable API. It is now available via `Lux.StatefulLuxLayer`. Change all uses of
-`Lux.Experimental.StatefulLuxLayer` to `Lux.StatefulLuxLayer`.
-
-## Compact Layer API
-
-[`Lux.@compact`](@ref) used to be part of experimental features, but has been promoted to
-stable API. It is now available via `Lux.@compact`. Change all uses of
-`Lux.Experimental.@compact` to `Lux.@compact`.
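
For the two promoted APIs described in the removed text, a short sketch of the assumed v1 usage (`StatefulLuxLayer` now carries an explicit state-type parameter, cf. commit 9f0cf23; signatures here follow the v1 docs rather than this diff):

```julia
using Lux, Random

# `@compact` is stable API now: trainable sub-layers as keyword arguments.
model = Lux.@compact(w=Dense(2 => 3, tanh)) do x
    @return w(x)
end

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

smodel = Lux.StatefulLuxLayer{true}(model, ps, st)  # was Lux.Experimental.StatefulLuxLayer
y = smodel(rand(rng, Float32, 2, 5))                # no manual (ps, st) threading
```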