API
DifferentiationInterface — Module
An interface to various automatic differentiation backends in Julia.
Argument wrappers
DifferentiationInterface.Context — Type
Abstract supertype for additional context arguments, which can be passed to differentiation operators after the active input x but are not differentiated.
See also Constant and Cache.
DifferentiationInterface.Constant — Type
Concrete type of Context argument which is kept constant during differentiation.
Note that an operator can be prepared with an arbitrary value of the constant. However, same-point preparation must occur with the exact value that will be reused later.
Example
julia> using DifferentiationInterface

julia> import ForwardDiff

julia> f(x, c) = c * sum(abs2, x);

julia> gradient(f, AutoForwardDiff(), [1.0, 2.0], Constant(10))
2-element Vector{Float64}:
 20.0
 40.0

julia> gradient(f, AutoForwardDiff(), [1.0, 2.0], Constant(100))
2-element Vector{Float64}:
 200.0
 400.0
DifferentiationInterface.Cache — Type
Concrete type of Context argument which can be mutated with active values during differentiation.
The initial values present inside the cache do not matter.
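For illustration, here is a minimal sketch of Cache usage (not part of the original docstring; it assumes ForwardDiff is loaded and that the backend is allowed to re-create the cache with a suitable element type):
using DifferentiationInterface
import ForwardDiff

# the function writes an intermediate result into the cache `c` before reducing it
f(x, c) = sum(abs2, copyto!(c, x))

gradient(f, AutoForwardDiff(), [1.0, 2.0], Cache(zeros(2)))
# expected result: [2.0, 4.0], the same gradient as without the cache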
First order
Pushforward
DifferentiationInterface.prepare_pushforward — Function
prepare_pushforward(f, backend, x, tx, [contexts...]) -> prep
prepare_pushforward(f!, y, backend, x, tx, [contexts...]) -> prep
Create a prep object that can be given to pushforward and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again. For in-place functions, y is mutated by f! during preparation.
DifferentiationInterface.prepare_pushforward_same_point — Function
prepare_pushforward_same_point(f, backend, x, tx, [contexts...]) -> prep_same
prepare_pushforward_same_point(f!, y, backend, x, tx, [contexts...]) -> prep_same
Create a prep_same object that can be given to pushforward and its variants if they are applied at the same point x and with the same contexts.
If the function or the point changes in any way, the result of preparation will be invalidated, and you will need to run it again. For in-place functions, y is mutated by f! during preparation.
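To illustrate the intended workflow (a hedged sketch assuming ForwardDiff is loaded), the same-point prep can be reused with new tangents as long as the point x is unchanged:
using DifferentiationInterface
import ForwardDiff

f(x) = [sum(abs2, x), prod(x)]
backend = AutoForwardDiff()
x = [1.0, 2.0]

prep_same = prepare_pushforward_same_point(f, backend, x, ([1.0, 0.0],))
pushforward(f, prep_same, backend, x, ([1.0, 0.0],))  # original tangent
pushforward(f, prep_same, backend, x, ([0.0, 1.0],))  # new tangent, same point x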
DifferentiationInterface.pushforward — Function
pushforward(f, [prep,] backend, x, tx, [contexts...]) -> ty
pushforward(f!, y, [prep,] backend, x, tx, [contexts...]) -> ty
Compute the pushforward of the function f at point x with a tuple of tangents tx.
To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.
Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named jvp.
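For instance (an illustrative sketch assuming ForwardDiff is loaded), the pushforward of a vector-valued function returns one Jacobian-vector product per tangent in the tuple:
using DifferentiationInterface
import ForwardDiff

f(x) = [sum(abs2, x), prod(x)]   # Jacobian at [1, 2] is [2 4; 2 1]
ty = pushforward(f, AutoForwardDiff(), [1.0, 2.0], ([1.0, 0.0],))
# ty is a 1-tuple containing J * [1, 0], approximately [2.0, 2.0]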
DifferentiationInterface.pushforward! — Function
pushforward!(f, ty, [prep,] backend, x, tx, [contexts...]) -> ty
pushforward!(f!, y, ty, [prep,] backend, x, tx, [contexts...]) -> ty
Compute the pushforward of the function f at point x with a tuple of tangents tx, overwriting ty.
To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.
Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named jvp!.
DifferentiationInterface.value_and_pushforward — Function
value_and_pushforward(f, [prep,] backend, x, tx, [contexts...]) -> (y, ty)
value_and_pushforward(f!, y, [prep,] backend, x, tx, [contexts...]) -> (y, ty)
Compute the value and the pushforward of the function f at point x with a tuple of tangents tx.
To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.
Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named value_and_jvp.
Required primitive for forward mode backends.
DifferentiationInterface.value_and_pushforward! — Function
value_and_pushforward!(f, ty, [prep,] backend, x, tx, [contexts...]) -> (y, ty)
value_and_pushforward!(f!, y, ty, [prep,] backend, x, tx, [contexts...]) -> (y, ty)
Compute the value and the pushforward of the function f at point x with a tuple of tangents tx, overwriting ty.
To improve performance via operator preparation, refer to prepare_pushforward and prepare_pushforward_same_point.
Pushforwards are also commonly called Jacobian-vector products or JVPs. This function could have been named value_and_jvp!.
Pullback
DifferentiationInterface.prepare_pullback — Function
prepare_pullback(f, backend, x, ty, [contexts...]) -> prep
prepare_pullback(f!, y, backend, x, ty, [contexts...]) -> prep
Create a prep object that can be given to pullback and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again. For in-place functions, y is mutated by f! during preparation.
DifferentiationInterface.prepare_pullback_same_point — Function
prepare_pullback_same_point(f, backend, x, ty, [contexts...]) -> prep_same
prepare_pullback_same_point(f!, y, backend, x, ty, [contexts...]) -> prep_same
Create a prep_same object that can be given to pullback and its variants if they are applied at the same point x and with the same contexts.
If the function or the point changes in any way, the result of preparation will be invalidated, and you will need to run it again. For in-place functions, y is mutated by f! during preparation.
DifferentiationInterface.pullback — Function
pullback(f, [prep,] backend, x, ty, [contexts...]) -> tx
pullback(f!, y, [prep,] backend, x, ty, [contexts...]) -> tx
Compute the pullback of the function f at point x with a tuple of tangents ty.
To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.
Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named vjp.
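For instance (an illustrative sketch, assuming Zygote is loaded to provide a reverse-mode backend):
using DifferentiationInterface
import Zygote

f(x) = [sum(abs2, x), prod(x)]   # Jacobian at [1, 2] is [2 4; 2 1]
tx = pullback(f, AutoZygote(), [1.0, 2.0], ([1.0, 0.0],))
# tx is a 1-tuple containing J' * [1, 0], approximately [2.0, 4.0]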
DifferentiationInterface.pullback! — Function
pullback!(f, tx, [prep,] backend, x, ty, [contexts...]) -> tx
pullback!(f!, y, tx, [prep,] backend, x, ty, [contexts...]) -> tx
Compute the pullback of the function f at point x with a tuple of tangents ty, overwriting tx.
To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.
Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named vjp!.
DifferentiationInterface.value_and_pullback — Function
value_and_pullback(f, [prep,] backend, x, ty, [contexts...]) -> (y, tx)
value_and_pullback(f!, y, [prep,] backend, x, ty, [contexts...]) -> (y, tx)
Compute the value and the pullback of the function f at point x with a tuple of tangents ty.
To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.
Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named value_and_vjp.
Required primitive for reverse mode backends.
DifferentiationInterface.value_and_pullback! — Function
value_and_pullback!(f, tx, [prep,] backend, x, ty, [contexts...]) -> (y, tx)
value_and_pullback!(f!, y, tx, [prep,] backend, x, ty, [contexts...]) -> (y, tx)
Compute the value and the pullback of the function f at point x with a tuple of tangents ty, overwriting tx.
To improve performance via operator preparation, refer to prepare_pullback and prepare_pullback_same_point.
Pullbacks are also commonly called vector-Jacobian products or VJPs. This function could have been named value_and_vjp!.
Derivative
DifferentiationInterface.prepare_derivative — Function
prepare_derivative(f, backend, x, [contexts...]) -> prep
prepare_derivative(f!, y, backend, x, [contexts...]) -> prep
Create a prep object that can be given to derivative and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again. For in-place functions, y is mutated by f! during preparation.
DifferentiationInterface.derivative — Function
derivative(f, [prep,] backend, x, [contexts...]) -> der
derivative(f!, y, [prep,] backend, x, [contexts...]) -> der
Compute the derivative of the function f at point x.
To improve performance via operator preparation, refer to prepare_derivative.
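Derivatives apply to functions of a scalar x, with scalar or array output. A minimal sketch (assuming ForwardDiff is loaded):
using DifferentiationInterface
import ForwardDiff

derivative(x -> x^3, AutoForwardDiff(), 2.0)       # ≈ 12.0
derivative(x -> [x, x^2], AutoForwardDiff(), 2.0)  # ≈ [1.0, 4.0]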
DifferentiationInterface.derivative! — Function
derivative!(f, der, [prep,] backend, x, [contexts...]) -> der
derivative!(f!, y, der, [prep,] backend, x, [contexts...]) -> der
Compute the derivative of the function f at point x, overwriting der.
To improve performance via operator preparation, refer to prepare_derivative.
DifferentiationInterface.value_and_derivative — Function
value_and_derivative(f, [prep,] backend, x, [contexts...]) -> (y, der)
value_and_derivative(f!, y, [prep,] backend, x, [contexts...]) -> (y, der)
Compute the value and the derivative of the function f at point x.
To improve performance via operator preparation, refer to prepare_derivative.
DifferentiationInterface.value_and_derivative! — Function
value_and_derivative!(f, der, [prep,] backend, x, [contexts...]) -> (y, der)
value_and_derivative!(f!, y, der, [prep,] backend, x, [contexts...]) -> (y, der)
Compute the value and the derivative of the function f at point x, overwriting der.
To improve performance via operator preparation, refer to prepare_derivative.
Gradient
DifferentiationInterface.prepare_gradient — Function
prepare_gradient(f, backend, x, [contexts...]) -> prep
Create a prep object that can be given to gradient and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again.
DifferentiationInterface.gradient — Function
gradient(f, [prep,] backend, x, [contexts...]) -> grad
Compute the gradient of the function f at point x.
To improve performance via operator preparation, refer to prepare_gradient.
DifferentiationInterface.gradient! — Function
gradient!(f, grad, [prep,] backend, x, [contexts...]) -> grad
Compute the gradient of the function f at point x, overwriting grad.
To improve performance via operator preparation, refer to prepare_gradient.
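A typical preparation workflow (an illustrative sketch, assuming ForwardDiff is loaded): prepare once, then reuse the prep object across repeated calls, writing into a preallocated gradient.
using DifferentiationInterface
import ForwardDiff

f(x) = sum(abs2, x)
backend = AutoForwardDiff()
x = rand(10)

prep = prepare_gradient(f, backend, x)
grad = similar(x)
for _ in 1:100
    x .= rand(10)                          # new point of the same size and type
    gradient!(f, grad, prep, backend, x)   # reuses prep, overwrites grad
end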
DifferentiationInterface.value_and_gradient — Function
value_and_gradient(f, [prep,] backend, x, [contexts...]) -> (y, grad)
Compute the value and the gradient of the function f at point x.
To improve performance via operator preparation, refer to prepare_gradient.
DifferentiationInterface.value_and_gradient! — Function
value_and_gradient!(f, grad, [prep,] backend, x, [contexts...]) -> (y, grad)
Compute the value and the gradient of the function f at point x, overwriting grad.
To improve performance via operator preparation, refer to prepare_gradient.
Jacobian
DifferentiationInterface.prepare_jacobian — Function
prepare_jacobian(f, backend, x, [contexts...]) -> prep
prepare_jacobian(f!, y, backend, x, [contexts...]) -> prep
Create a prep object that can be given to jacobian and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again. For in-place functions, y is mutated by f! during preparation.
DifferentiationInterface.jacobian — Function
jacobian(f, [prep,] backend, x, [contexts...]) -> jac
jacobian(f!, y, [prep,] backend, x, [contexts...]) -> jac
Compute the Jacobian matrix of the function f at point x.
To improve performance via operator preparation, refer to prepare_jacobian.
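For in-place functions, the output buffer y comes right after f!. A minimal sketch (assuming ForwardDiff is loaded):
using DifferentiationInterface
import ForwardDiff

f!(y, x) = (y .= 2 .* x)   # in-place function of the form f!(y, x)
y = zeros(3)
jacobian(f!, y, AutoForwardDiff(), [1.0, 2.0, 3.0])
# expected: a 3×3 matrix approximately equal to 2 * I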
DifferentiationInterface.jacobian! — Function
jacobian!(f, jac, [prep,] backend, x, [contexts...]) -> jac
jacobian!(f!, y, jac, [prep,] backend, x, [contexts...]) -> jac
Compute the Jacobian matrix of the function f at point x, overwriting jac.
To improve performance via operator preparation, refer to prepare_jacobian.
DifferentiationInterface.value_and_jacobian — Function
value_and_jacobian(f, [prep,] backend, x, [contexts...]) -> (y, jac)
value_and_jacobian(f!, y, [prep,] backend, x, [contexts...]) -> (y, jac)
Compute the value and the Jacobian matrix of the function f at point x.
To improve performance via operator preparation, refer to prepare_jacobian.
DifferentiationInterface.value_and_jacobian! — Function
value_and_jacobian!(f, jac, [prep,] backend, x, [contexts...]) -> (y, jac)
value_and_jacobian!(f!, y, jac, [prep,] backend, x, [contexts...]) -> (y, jac)
Compute the value and the Jacobian matrix of the function f at point x, overwriting jac.
To improve performance via operator preparation, refer to prepare_jacobian.
DifferentiationInterface.MixedMode — Type
Combination of a forward and a reverse mode backend for mixed-mode Jacobian computation.
MixedMode backends only support jacobian and its variants.
Constructor
MixedMode(forward_backend, reverse_backend)
Second order
DifferentiationInterface.SecondOrder — Type
Combination of two backends for second-order differentiation.
SecondOrder backends do not support first-order operators.
Constructor
SecondOrder(outer_backend, inner_backend)
Fields
outer::AbstractADType: backend for the outer differentiation
inner::AbstractADType: backend for the inner differentiation
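For example (an illustrative sketch assuming ForwardDiff and Zygote are loaded), forward-over-reverse is a common combination, with the forward backend on the outside:
using DifferentiationInterface
import ForwardDiff, Zygote

backend = SecondOrder(AutoForwardDiff(), AutoZygote())  # outer forward, inner reverse
hessian(x -> sum(abs2, x) + prod(x), backend, [1.0, 2.0])
# expected: approximately [2.0 1.0; 1.0 2.0]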
Second derivative
DifferentiationInterface.prepare_second_derivative — Function
prepare_second_derivative(f, backend, x, [contexts...]) -> prep
Create a prep object that can be given to second_derivative and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again.
DifferentiationInterface.second_derivative — Function
second_derivative(f, [prep,] backend, x, [contexts...]) -> der2
Compute the second derivative of the function f at point x.
To improve performance via operator preparation, refer to prepare_second_derivative.
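A minimal sketch (assuming ForwardDiff is loaded):
using DifferentiationInterface
import ForwardDiff

second_derivative(x -> x^4, AutoForwardDiff(), 2.0)   # ≈ 48.0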
DifferentiationInterface.second_derivative! — Function
second_derivative!(f, der2, [prep,] backend, x, [contexts...]) -> der2
Compute the second derivative of the function f at point x, overwriting der2.
To improve performance via operator preparation, refer to prepare_second_derivative.
DifferentiationInterface.value_derivative_and_second_derivative — Function
value_derivative_and_second_derivative(f, [prep,] backend, x, [contexts...]) -> (y, der, der2)
Compute the value, first derivative and second derivative of the function f at point x.
To improve performance via operator preparation, refer to prepare_second_derivative.
DifferentiationInterface.value_derivative_and_second_derivative! — Function
value_derivative_and_second_derivative!(f, der, der2, [prep,] backend, x, [contexts...]) -> (y, der, der2)
Compute the value, first derivative and second derivative of the function f at point x, overwriting der and der2.
To improve performance via operator preparation, refer to prepare_second_derivative.
Hessian-vector product
DifferentiationInterface.prepare_hvp — Function
prepare_hvp(f, backend, x, tx, [contexts...]) -> prep
Create a prep object that can be given to hvp and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again.
DifferentiationInterface.prepare_hvp_same_point — Function
prepare_hvp_same_point(f, backend, x, tx, [contexts...]) -> prep_same
Create a prep_same object that can be given to hvp and its variants if they are applied at the same point x and with the same contexts.
If the function or the point changes in any way, the result of preparation will be invalidated, and you will need to run it again.
DifferentiationInterface.hvp — Function
hvp(f, [prep,] backend, x, tx, [contexts...]) -> tg
Compute the Hessian-vector product of f at point x with a tuple of tangents tx.
To improve performance via operator preparation, refer to prepare_hvp and prepare_hvp_same_point.
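For instance (an illustrative sketch assuming ForwardDiff and Zygote are loaded), with a forward-over-reverse SecondOrder backend:
using DifferentiationInterface
import ForwardDiff, Zygote

f(x) = sum(abs2, x)   # Hessian is 2 * I
backend = SecondOrder(AutoForwardDiff(), AutoZygote())
tg = hvp(f, backend, [1.0, 2.0], ([1.0, 0.0],))
# tg is a 1-tuple containing H * [1, 0], approximately [2.0, 0.0]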
DifferentiationInterface.hvp! — Function
hvp!(f, tg, [prep,] backend, x, tx, [contexts...]) -> tg
Compute the Hessian-vector product of f at point x with a tuple of tangents tx, overwriting tg.
To improve performance via operator preparation, refer to prepare_hvp and prepare_hvp_same_point.
DifferentiationInterface.gradient_and_hvp — Function
gradient_and_hvp(f, [prep,] backend, x, tx, [contexts...]) -> (grad, tg)
Compute the gradient and the Hessian-vector product of f at point x with a tuple of tangents tx.
To improve performance via operator preparation, refer to prepare_hvp and prepare_hvp_same_point.
DifferentiationInterface.gradient_and_hvp! — Function
gradient_and_hvp!(f, grad, tg, [prep,] backend, x, tx, [contexts...]) -> (grad, tg)
Compute the gradient and the Hessian-vector product of f at point x with a tuple of tangents tx, overwriting grad and tg.
To improve performance via operator preparation, refer to prepare_hvp and prepare_hvp_same_point.
Hessian
DifferentiationInterface.prepare_hessian — Function
prepare_hessian(f, backend, x, [contexts...]) -> prep
Create a prep object that can be given to hessian and its variants.
If the function changes in any way, the result of preparation will be invalidated, and you will need to run it again.
DifferentiationInterface.hessian — Function
hessian(f, [prep,] backend, x, [contexts...]) -> hess
Compute the Hessian matrix of the function f at point x.
To improve performance via operator preparation, refer to prepare_hessian.
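A minimal sketch (assuming ForwardDiff is loaded; a plain backend can be passed directly instead of a SecondOrder combination):
using DifferentiationInterface
import ForwardDiff

hessian(x -> x[1]^2 * x[2], AutoForwardDiff(), [1.0, 2.0])
# expected: approximately [4.0 2.0; 2.0 0.0]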
DifferentiationInterface.hessian! — Function
hessian!(f, hess, [prep,] backend, x, [contexts...]) -> hess
Compute the Hessian matrix of the function f at point x, overwriting hess.
To improve performance via operator preparation, refer to prepare_hessian.
DifferentiationInterface.value_gradient_and_hessian — Function
value_gradient_and_hessian(f, [prep,] backend, x, [contexts...]) -> (y, grad, hess)
Compute the value, gradient vector and Hessian matrix of the function f at point x.
To improve performance via operator preparation, refer to prepare_hessian.
DifferentiationInterface.value_gradient_and_hessian! — Function
value_gradient_and_hessian!(f, grad, hess, [prep,] backend, x, [contexts...]) -> (y, grad, hess)
Compute the value, gradient vector and Hessian matrix of the function f at point x, overwriting grad and hess.
To improve performance via operator preparation, refer to prepare_hessian.
Utilities
Backend queries
DifferentiationInterface.check_available — Function
check_available(backend)
Check whether backend is available (i.e. whether the extension is loaded).
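An illustrative sketch: the check returns true once the corresponding package (and thus the DifferentiationInterface extension) has been loaded.
using DifferentiationInterface
import ForwardDiff

check_available(AutoForwardDiff())   # true, since ForwardDiff is loaded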
DifferentiationInterface.check_inplace — Function
check_inplace(backend)
Check whether backend supports differentiation of in-place functions.
DifferentiationInterface.outer — Function
outer(backend::SecondOrder)
outer(backend::AbstractADType)
Return the outer backend of a SecondOrder object, tasked with differentiation at the second order.
For any other backend type, this function acts like the identity.
DifferentiationInterface.inner — Function
inner(backend::SecondOrder)
inner(backend::AbstractADType)
Return the inner backend of a SecondOrder object, tasked with differentiation at the first order.
For any other backend type, this function acts like the identity.
Backend switch
DifferentiationInterface.DifferentiateWith — Type
Function wrapper that enforces differentiation with a "substitute" AD backend, possibly different from the "true" AD backend that is called.
For instance, suppose a function f is not differentiable with Zygote because it involves mutation, but you know that it is differentiable with Enzyme. Then f2 = DifferentiateWith(f, AutoEnzyme()) is a new function that behaves like f, except that f2 is differentiable with Zygote (thanks to a chain rule which calls Enzyme under the hood). Moreover, any larger algorithm alg that calls f2 instead of f will also be differentiable with Zygote (as long as f was the only Zygote blocker).
This is mainly relevant for package developers who want to produce differentiable code at low cost, without writing the differentiation rules themselves. If you sprinkle a few DifferentiateWith in places where some AD backends may struggle, end users can pick from a wider variety of packages to differentiate your algorithms.
DifferentiateWith only supports out-of-place functions y = f(x) without additional context arguments. It only makes these functions differentiable if the true backend is either ForwardDiff or compatible with ChainRules. For any other true backend, the differentiation behavior is not altered by DifferentiateWith (it becomes a transparent wrapper).
Fields
f: the function in question, with signature f(x)
backend::AbstractADType: the substitute backend to use for differentiation
For the substitute AD backend to be called under the hood, its package needs to be loaded in addition to the package of the true AD backend.
Constructor
DifferentiateWith(f, backend)
Example
julia> using DifferentiationInterface

julia> import FiniteDiff, ForwardDiff, Zygote

julia> function f(x::Vector{Float64})
           a = Vector{Float64}(undef, 1) # type constraint breaks ForwardDiff
           a[1] = sum(abs2, x) # mutation breaks Zygote
           return a[1]
       end;

julia> f2 = DifferentiateWith(f, AutoFiniteDiff());

julia> f([3.0, 5.0]) == f2([3.0, 5.0])
true

julia> alg(x) = 7 * f2(x);

julia> ForwardDiff.gradient(alg, [3.0, 5.0])
2-element Vector{Float64}:
 42.0
 70.0

julia> Zygote.gradient(alg, [3.0, 5.0])[1]
2-element Vector{Float64}:
 42.0
 70.0
Sparsity detection
DifferentiationInterface.DenseSparsityDetector — Type
Sparsity pattern detector satisfying the detection API of ADTypes.jl.
The nonzeros in a Jacobian or Hessian are detected by computing the relevant matrix with dense AD, and thresholding the entries with a given tolerance (which can be numerically inaccurate). This process can be very slow, and should only be used if its output can be exploited multiple times to compute many sparse matrices.
In general, the sparsity pattern you obtain can depend on the provided input x. If you want to reuse the pattern, make sure that it is input-agnostic.
DenseSparsityDetector functionality is now located in a package extension; please load the SparseArrays.jl standard library before you use it.
Fields
backend::AbstractADType is the dense AD backend used under the hood
atol::Float64 is the minimum magnitude of a matrix entry to be considered nonzero
Constructor
DenseSparsityDetector(backend; atol, method=:iterative)
The keyword argument method::Symbol can be either:
:iterative: compute the matrix in a sequence of matrix-vector products (memory-efficient)
:direct: compute the matrix all at once (memory-hungry but sometimes faster)
Note that the constructor is type-unstable because method ends up being a type parameter of the DenseSparsityDetector object (this is not part of the API and might change).
Examples
using ADTypes, DifferentiationInterface, SparseArrays
import ForwardDiff

detector = DenseSparsityDetector(AutoForwardDiff(); atol=1e-5, method=:direct)

ADTypes.jacobian_sparsity(diff, rand(5), detector)

# output

4×5 SparseMatrixCSC{Bool, Int64} with 8 stored entries:
 1  1  ⋅  ⋅  ⋅
 ⋅  1  1  ⋅  ⋅
 ⋅  ⋅  1  1  ⋅
 ⋅  ⋅  ⋅  1  1
Sometimes the sparsity pattern is input-dependent:
ADTypes.jacobian_sparsity(x -> [prod(x)], rand(2), detector)

# output

1×2 SparseMatrixCSC{Bool, Int64} with 2 stored entries:
 1  1
ADTypes.jacobian_sparsity(x -> [prod(x)], [0, 1], detector)

# output

1×2 SparseMatrixCSC{Bool, Int64} with 1 stored entry:
 1  ⋅
Internals
The following is not part of the public API.
DifferentiationInterface.AutoSimpleFiniteDiff — Type
AutoSimpleFiniteDiff <: ADTypes.AbstractADType
Forward mode backend based on the finite difference (f(x + ε) - f(x)) / ε, with artificial chunk size to mimic ForwardDiff.
Constructor
AutoSimpleFiniteDiff(ε=1e-5; chunksize=nothing)
DifferentiationInterface.AutoZeroForward — Type
AutoZeroForward <: ADTypes.AbstractADType
Trivial backend that sets all derivatives to zero. Used in testing and benchmarking.
DifferentiationInterface.AutoZeroReverse — Type
AutoZeroReverse <: ADTypes.AbstractADType
Trivial backend that sets all derivatives to zero. Used in testing and benchmarking.
DifferentiationInterface.BatchSizeSettings — Type
BatchSizeSettings{B,singlebatch,aligned}
Configuration for the batch size deduced from a backend and a sample array of length N.
Type parameters
B::Int: batch size
singlebatch::Bool: whether B == N (B > N is not allowed)
aligned::Bool: whether N % B == 0
Fields
N::Int: array length
A::Int: number of batches, A = div(N, B, RoundUp)
B_last::Int: size of the last batch (if aligned is false)
DifferentiationInterface.DerivativePrep — Type
Abstract type for additional information needed by derivative and its variants.
DifferentiationInterface.ForwardOverForward — Type
Traits identifying second-order backends that compute HVPs in forward over forward mode (inefficient).
DifferentiationInterface.ForwardOverReverse — Type
Traits identifying second-order backends that compute HVPs in forward over reverse mode.
DifferentiationInterface.GradientPrep — Type
Abstract type for additional information needed by gradient and its variants.
DifferentiationInterface.HVPPrep — Type
Abstract type for additional information needed by hvp and its variants.
DifferentiationInterface.HessianPrep — Type
Abstract type for additional information needed by hessian and its variants.
DifferentiationInterface.InPlaceNotSupported — Type
Trait identifying backends that do not support in-place functions f!(y, x).
DifferentiationInterface.InPlaceSupported — Type
Trait identifying backends that support in-place functions f!(y, x).
DifferentiationInterface.JacobianPrep — Type
Abstract type for additional information needed by jacobian and its variants.
DifferentiationInterface.PullbackFast — Type
Trait identifying backends that support efficient pullbacks.
DifferentiationInterface.PullbackPrep — Type
Abstract type for additional information needed by pullback and its variants.
DifferentiationInterface.PullbackSlow — Type
Trait identifying backends that do not support efficient pullbacks.
DifferentiationInterface.PushforwardFast — Type
Trait identifying backends that support efficient pushforwards.
DifferentiationInterface.PushforwardPrep — Type
Abstract type for additional information needed by pushforward and its variants.
DifferentiationInterface.PushforwardSlow — Type
Trait identifying backends that do not support efficient pushforwards.
DifferentiationInterface.ReverseOverForward — Type
Traits identifying second-order backends that compute HVPs in reverse over forward mode.
DifferentiationInterface.ReverseOverReverse — Type
Traits identifying second-order backends that compute HVPs in reverse over reverse mode.
DifferentiationInterface.SecondDerivativePrep — Type
Abstract type for additional information needed by second_derivative and its variants.
ADTypes.mode — Method
mode(backend::SecondOrder)
Return the outer mode of the second-order backend.
DifferentiationInterface.basis — Method
basis(backend, a::AbstractArray, i)
Construct the i-th standard basis array in the vector space of a with element type eltype(a).
Note
If an AD backend benefits from a more specialized basis array implementation, this function can be extended on the backend type.
DifferentiationInterface.inplace_support — Method
inplace_support(backend)
Return InPlaceSupported or InPlaceNotSupported in a statically predictable way.
DifferentiationInterface.multibasis — Method
multibasis(backend, a::AbstractArray, inds::AbstractVector)
Construct the sum of the i-th standard basis arrays in the vector space of a with element type eltype(a), for all i ∈ inds.
Note
If an AD backend benefits from a more specialized basis array implementation, this function can be extended on the backend type.
DifferentiationInterface.prepare!_derivative — Function
prepare!_derivative(f, prep, backend, x, [contexts...]) -> new_prep
prepare!_derivative(f!, y, prep, backend, x, [contexts...]) -> new_prep
Same behavior as prepare_derivative but can modify an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_gradient — Function
prepare!_gradient(f, prep, backend, x, [contexts...]) -> new_prep
Same behavior as prepare_gradient but can modify an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_hessian — Function
prepare!_hessian(f, prep, backend, x, [contexts...]) -> new_prep
Same behavior as prepare_hessian but can modify an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_hvp — Function
prepare!_hvp(f, prep, backend, x, tx, [contexts...]) -> new_prep
Same behavior as prepare_hvp but can modify an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_jacobian — Function
prepare!_jacobian(f, prep, backend, x, [contexts...]) -> new_prep
prepare!_jacobian(f!, y, prep, backend, x, [contexts...]) -> new_prep
Same behavior as prepare_jacobian but can modify an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_pullback — Function
prepare!_pullback(f, prep, backend, x, ty, [contexts...]) -> new_prep
prepare!_pullback(f!, y, prep, backend, x, ty, [contexts...]) -> new_prep
Same behavior as prepare_pullback but can modify an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.
DifferentiationInterface.prepare!_pushforward — Function
prepare!_pushforward(f, prep, backend, x, tx, [contexts...]) -> new_prep
prepare!_pushforward(f!, y, prep, backend, x, tx, [contexts...]) -> new_prep
Same behavior as prepare_pushforward but can modify an existing prep object to avoid some allocations.
There is no guarantee that prep will be mutated, or that performance will be improved compared to preparation from scratch.
For efficiency, this function needs to rely on backend package internals, therefore it is not protected by semantic versioning.
DifferentiationInterface.pullback_performance — Method
pullback_performance(backend)
Return PullbackFast or PullbackSlow in a statically predictable way.
DifferentiationInterface.pushforward_performance — Method
pushforward_performance(backend)
Return PushforwardFast or PushforwardSlow in a statically predictable way.