From 1ceea9a9e706d3932237201d935004eac844c151 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Fri, 6 Dec 2024 18:36:31 +0000
Subject: [PATCH] build based on 0f3d2c1

 .../dev/.documenter-siteinfo.json           |  2 +-
 .../dev/api/index.html                      | 10 +++++-----
 DifferentiationInterfaceTest/dev/index.html |  2 +-
 .../dev/tutorial/index.html                 | 20 +++++++++----------
 4 files changed, 17 insertions(+), 17 deletions(-)

dev/.documenter-siteinfo.json: generation_timestamp updated from 2024-12-06T08:56:13 to 2024-12-06T18:36:25 (julia_version 1.11.2, documenter_version 1.8.0).

dev/api/index.html:

API reference · DifferentiationInterfaceTest.jl

API reference

Entry points

DifferentiationInterfaceTest.Scenario (Type)
Scenario{op,pl_op,pl_fun}

Store a testing scenario composed of a function and its input + output for a given operator.

This generic type should never be used directly: use the specific constructor corresponding to the operator you want to test, or a predefined list of scenarios.

Type parameters

  • op: one of :pushforward, :pullback, :derivative, :gradient, :jacobian, :second_derivative, :hvp, :hessian
  • pl_op: either :in (for op!(f, result, backend, x)) or :out (for result = op(f, backend, x))
  • pl_fun: either :in (for f!(y, x)) or :out (for y = f(x))

Constructors

Scenario{op,pl_op}(f, x; tang, contexts, res1, res2)
-Scenario{op,pl_op}(f!, y, x; tang, contexts, res1, res2)

Fields

  • f::Any: function f (if args==1) or f! (if args==2) to apply

  • x::Any: primal input

  • y::Any: primal output

  • tang::Union{Nothing, NTuple{N, T} where {N, T}}: tangents for pushforward, pullback or HVP

  • contexts::Tuple: contexts (if applicable)

  • res1::Any: first-order result of the operator (if applicable)

  • res2::Any: second-order result of the operator (if applicable)

source
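The constructors above can be exercised as follows; a minimal sketch, assuming DifferentiationInterfaceTest is loaded (the function f, the input x, and the reference gradient are invented for illustration):

```julia
using DifferentiationInterfaceTest

# Hypothetical out-of-place gradient scenario: f sums the squares of x,
# so the reference gradient (res1) is 2x.
f(x) = sum(abs2, x)
x = [1.0, 2.0, 3.0]
scen = Scenario{:gradient,:out}(f, x; res1=2 .* x)
```

The reference result res1 is what correctness tests compare against, so it should come from a known closed form or a trusted independent computation.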
DifferentiationInterfaceTest.test_differentiation (Function)
test_differentiation(
     backends::Vector{<:ADTypes.AbstractADType};
     ...
 ) -> Union{Nothing, DataFrames.DataFrame}
     benchmark_seconds,
     benchmark_aggregation
 ) -> Union{Nothing, DataFrames.DataFrame}

Apply a list of backends on a list of scenarios, running a variety of different tests and/or benchmarks.

Return

This function always creates and runs a @testset, though its contents may vary.

  • if benchmark == :none, it returns nothing.
  • if benchmark != :none, it returns a DataFrame of benchmark results, whose columns correspond to the fields of DifferentiationBenchmarkDataRow.

Positional arguments

  • backends::Vector{<:AbstractADType}: the backends to test
  • scenarios::Vector{<:Scenario}: the scenarios on which to test them (defaults to the output of default_scenarios())

Keyword arguments

Test categories:

  • correctness=true: whether to compare the differentiation results with the theoretical values specified in each scenario
  • type_stability=:none: whether (and how) to check type stability of operators with JET.jl
  • allocations=:none: whether (and how) to check allocations inside operators with AllocCheck.jl
  • benchmark=:none: whether (and how) to benchmark operators with Chairmarks.jl

For type_stability, allocations and benchmark, the possible values are :none, :prepared or :full. Each setting tests/benchmarks a different subset of calls:

  kwarg      │ prepared operator │ unprepared operator │ preparation
  :none      │ no                │ no                  │ no
  :prepared  │ yes               │ no                  │ no
  :full      │ yes               │ yes                 │ yes

Misc options:

  • excluded::Vector{Symbol}: list of operators to exclude, such as FIRST_ORDER or SECOND_ORDER
  • detailed=false: whether to create a detailed or condensed testset
  • logging=false: whether to log progress

Correctness options:

  • isapprox=isapprox: function used to compare objects approximately, with the standard signature isapprox(x, y; atol, rtol)
  • atol=0: absolute precision for correctness testing (when comparing to the reference outputs)
  • rtol=1e-3: relative precision for correctness testing (when comparing to the reference outputs)
  • scenario_intact=true: whether to check that the scenario remains unchanged after the operators are applied
  • sparsity=false: whether to check sparsity patterns for Jacobians / Hessians

Type stability options:

  • ignored_modules=nothing: list of modules that JET.jl should ignore
  • function_filter: filter for functions that JET.jl should ignore (with a reasonable default)

Benchmark options:

  • count_calls=true: whether to also count function calls during benchmarking
  • benchmark_test=true: whether to include tests which succeed if and only if the benchmark does not error
  • benchmark_seconds=1: how long to run each benchmark for
  • benchmark_aggregation=minimum: function used to aggregate sample measurements
source
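A typical invocation, sketched under the assumption that ForwardDiff.jl and Zygote.jl are installed alongside DifferentiationInterfaceTest.jl; the keyword choices here are illustrative, not prescriptive:

```julia
using DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff, AutoZygote
import ForwardDiff, Zygote

# Check both backends for correctness on the default scenarios,
# with no type-stability, allocation or benchmark checks.
test_differentiation(
    [AutoForwardDiff(), AutoZygote()];
    correctness=true,
    type_stability=:none,
    benchmark=:none,
    logging=true,
)
```

Because benchmark=:none, this call returns nothing and its output is the @testset report.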
test_differentiation(
     backend::ADTypes.AbstractADType,
     args...;
     kwargs...
 ) -> Union{Nothing, DataFrames.DataFrame}

Shortcut for a single backend.

source
DifferentiationInterfaceTest.benchmark_differentiation (Function)
benchmark_differentiation(
     backends,
     scenarios::Vector{<:Scenario};
     benchmark,
     benchmark_seconds,
     benchmark_aggregation
 ) -> Union{Nothing, DataFrames.DataFrame}

Shortcut for test_differentiation with only benchmarks and no correctness or type stability checks.

Specifying the set of scenarios is mandatory for this function.

source
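A hedged sketch of a benchmarking run, assuming ForwardDiff.jl is installed; the use of default_scenarios() as the mandatory scenario list is an illustrative choice:

```julia
using DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff
import ForwardDiff

# Benchmark only the prepared operators, one second per benchmark.
df = benchmark_differentiation(
    [AutoForwardDiff()],
    default_scenarios();
    benchmark=:prepared,
    benchmark_seconds=1,
)
```

The returned df is a DataFrame whose columns match the fields of DifferentiationBenchmarkDataRow.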

Utilities

DifferentiationInterfaceTest.DifferentiationBenchmarkDataRow (Type)
DifferentiationBenchmarkDataRow

Ad-hoc storage type for differentiation benchmarking results.

Fields

  • backend::ADTypes.AbstractADType: backend used for benchmarking

  • scenario::Scenario: scenario used for benchmarking

  • operator::Symbol: differentiation operator used for benchmarking, e.g. :gradient or :hessian

  • prepared::Union{Nothing, Bool}: whether the operator had been prepared

  • calls::Int64: number of calls to the differentiated function for one call to the operator

  • samples::Int64: number of benchmarking samples taken

  • evals::Int64: number of evaluations used for averaging in each sample

  • time::Any: aggregated runtime over all samples, in seconds

  • allocs::Any: aggregated number of allocations over all samples

  • bytes::Any: aggregated memory allocated over all samples, in bytes

  • gc_fraction::Any: aggregated fraction of time spent in garbage collection over all samples, between 0.0 and 1.0

  • compile_fraction::Any: aggregated fraction of time spent compiling over all samples, between 0.0 and 1.0

See the documentation of Chairmarks.jl for more details on the measurement fields.

source
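Since benchmark results arrive as a DataFrame whose columns mirror these fields, they can be sliced with ordinary DataFrames.jl operations; a sketch, assuming df was returned by benchmark_differentiation:

```julia
using DataFrames

# Keep only the identifying and timing-related columns of the results.
select(df, :backend, :operator, :time, :allocs, :bytes)
```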

Pre-made scenario lists

The precise contents of the scenario lists are not part of the API, only their existence.

Internals

This is not part of the public API.

Base.zero (Method)
zero(scen::Scenario)

Return a new Scenario identical to scen except for the first- and second-order results which are set to zero.

source
DifferentiationInterfaceTest.batchify (Method)
batchify(scen::Scenario)

Return a new Scenario identical to scen except for the tangents tang and associated results res1 / res2, which are duplicated (batch mode).

Only works if scen is a pushforward, pullback or hvp scenario.

source
DifferentiationInterfaceTest.cachify (Method)
cachify(scen::Scenario)

Return a new Scenario identical to scen except for the function f, which is made to accept an additional cache argument a to store the result before it is returned.

source
DifferentiationInterfaceTest.constantify (Method)
constantify(scen::Scenario)

Return a new Scenario identical to scen except for the function f, which is made to accept an additional constant argument a by which the output is multiplied. The output and result fields are updated accordingly.

source
DifferentiationInterfaceTest.flux_scenarios (Function)
flux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Flux.jl.

Warning

This function requires FiniteDifferences.jl and Flux.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API. Their ground truth values are computed with finite differences, and thus subject to imprecision.

source
DifferentiationInterfaceTest.lux_scenarios (Function)
lux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Lux.jl.

Warning

This function requires ComponentArrays.jl, ForwardDiff.jl, Lux.jl and LuxTestUtils.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API.

source
dev/index.html (installation snippet):

Pkg.add(
    url="https://github.com/JuliaDiff/DifferentiationInterface.jl",
    subdir="DifferentiationInterfaceTest"
)

dev/tutorial/index.html:

    correctness=true,     # compares values against the reference
    type_stability=:none, # checks type stability with JET.jl
    detailed=true,        # prints a detailed test set
)

Test Summary:                                                 | Pass  Total   Time
Testing correctness                                           |   88     88  10.0s
  AutoForwardDiff()                                           |   44     44   6.6s
    gradient                                                  |   44     44   6.5s
      Scenario{:gradient,:out} f : Vector{Float32} -> Float32 |   22     22   3.5s
      Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 |   22     22   2.1s
  AutoZygote()                                                |   44     44   3.4s
    gradient                                                  |   44     44   3.4s
      Scenario{:gradient,:out} f : Vector{Float32} -> Float32 |   22     22   2.7s
      Scenario{:gradient,:out} f : Matrix{Float64} -> Float64 |   22     22   0.7s

Benchmarking

Once you are confident that your backends give the correct answers, you probably want to compare their performance. This is made easy by the benchmark_differentiation function, whose syntax should feel familiar:

df = benchmark_differentiation(backends, scenarios);
8×12 DataFrame
 Row │ backend            scenario                                                  operator            prepared  calls  samples  evals  time        allocs   bytes    gc_fraction  compile_fraction
     │ Abstract…          Scenario…                                                 Symbol              Bool      Int64  Int64    Int64  Float64     Float64  Float64  Float64      Float64
   1 │ AutoForwardDiff()  Scenario{:gradient,:out} f : Vector{Float32} -> Float32   value_and_gradient  true          1    50452    265  5.43245e-8      3.0    112.0          0.0               0.0
   2 │ AutoForwardDiff()  Scenario{:gradient,:out} f : Vector{Float32} -> Float32   gradient            true          1    37637    439  4.70114e-8      2.0     80.0          0.0               0.0
   3 │ AutoForwardDiff()  Scenario{:gradient,:out} f : Matrix{Float64} -> Float64   value_and_gradient  true          1    28115    217  1.3135e-7       4.0    192.0          0.0               0.0
   4 │ AutoForwardDiff()  Scenario{:gradient,:out} f : Matrix{Float64} -> Float64   gradient            true          1    33131    202  1.20564e-7      3.0    160.0          0.0               0.0
   5 │ AutoZygote()       Scenario{:gradient,:out} f : Vector{Float32} -> Float32   value_and_gradient  true          1    28649     36  7.80889e-7     24.0    672.0          0.0               0.0
   6 │ AutoZygote()       Scenario{:gradient,:out} f : Vector{Float32} -> Float32   gradient            true          1    37300     36  6.05833e-7     22.0    608.0          0.0               0.0
   7 │ AutoZygote()       Scenario{:gradient,:out} f : Matrix{Float64} -> Float64   value_and_gradient  true          1    21246     92  3.06652e-7     10.0    464.0          0.0               0.0
   8 │ AutoZygote()       Scenario{:gradient,:out} f : Matrix{Float64} -> Float64   gradient            true          1    41204     61  3.00885e-7     10.0    464.0          0.0               0.0

The resulting object is a DataFrame from DataFrames.jl, whose columns correspond to the fields of DifferentiationBenchmarkDataRow:
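The table can then be summarized with standard DataFrames.jl tooling; a sketch, assuming df is the benchmark table above:

```julia
using DataFrames

# Fastest aggregated time per backend and operator, sorted ascending.
summary = combine(groupby(df, [:backend, :operator]), :time => minimum => :min_time)
sort(summary, :min_time)
```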
