API Reference · Sophon.jl
Sophon.AdaptiveTraining
Sophon.BetaRandomSampler
Sophon.ChainState
Sophon.ConstantFunction
Sophon.DeepONet
Sophon.DiscreteFourierFeature
Sophon.FactorizedDense
Sophon.FourierFeature
Sophon.NonAdaptiveTraining
Sophon.PINN
Sophon.PINNAttention
Sophon.QuasiRandomSampler
Sophon.RBF
Sophon.ScalarLayer
Sophon.SplitFunction
Sophon.TriplewiseFusion
Sophon.BACON
Sophon.FourierAttention
Sophon.FourierFilterNet
Sophon.FourierNet
Sophon.FullyConnected
Sophon.ResNet
Sophon.Sine
Sophon.Siren
Sophon.discretize
Sophon.gaussian
Sophon.stan
Sophon.AdaptiveTraining — Type
AdaptiveTraining(pde_weights, bcs_weights)
Adaptive weights for the loss functions. Here pde_weights and bcs_weights are functions that take in (phi, x, θ) and return the point-wise weights. Note that bcs_weights can be real numbers; they will be converted to functions that return the same numbers.
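Example
A minimal sketch of constructing an adaptive strategy. The exponential weighting below is a hypothetical choice for illustration, not part of the API:
using Sophon
# Hypothetical point-wise weight: emphasize PDE residuals near x = 0.
# phi is the network, x the sampled points, θ the parameters.
pde_weights = (phi, x, θ) -> exp.(-abs2.(x[1:1, :]))
# bcs_weights may be a plain number; it is promoted to a constant function.
strategy = AdaptiveTraining(pde_weights, 10)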
Sophon.BetaRandomSampler — Type
BetaRandomSampler(pde_points, bcs_points=pde_points; sampling_alg=SobolSample(),
                  resample::Bool=false, α=0.4, β=1.0)
Same as QuasiRandomSampler, but uses a Beta distribution along time on the domain.
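Example
A constructor sketch; α and β shape the Beta marginal over the time dimension, so α=0.4, β=1.0 biases samples toward early times:
using Sophon
sampler = BetaRandomSampler(500, 100; α=0.4, β=1.0)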
Sophon.ChainState — Type
ChainState(model, rng::AbstractRNG=Random.default_rng())
It is similar to Lux.Chain but wraps the model in a stateful container.
Fields
- model: The neural network.
- states: The states of the neural network.
Input
- x: The input to the neural network.
- ps: The parameters of the neural network.
Arguments
- model: AbstractExplicitLayer, or a named tuple of them, which will be treated as a Chain.
- rng: AbstractRNG to use for initialising the neural network.
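Example
A small usage sketch, assuming the container is called as (x, ps), per the Input section above:
using Lux, Random, Sophon
model = Chain(Dense(2 => 16, tanh), Dense(16 => 1))
cs = ChainState(model)                 # states live inside the container
ps = Lux.initialparameters(Random.default_rng(), model)
x = rand(Float32, 2, 8)
y = cs(x, ps)                          # only the parameters are passed explicitly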
Sophon.ConstantFunction — Type
ConstantFunction()
A container for a scalar parameter. This is useful when you want a dummy layer that returns the scalar parameter for any input.
Sophon.DeepONet — Type
DeepONet(branch_net, trunk_net;
         flatten_layer=FlattenLayer(),
         linear_layer=NoOpLayer(),
         bias=ScalarLayer())
    linear_layer = NoOpLayer(),
    bias = ScalarLayer(),          # 1 parameters
)  # Total: 111 parameters,
   # plus 0 states, summarysize 80 bytes.
Reference
[5]
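Example
A constructor sketch; the sub-network sizes are illustrative, and FullyConnected is used here only as a convenient way to build the branch and trunk:
using Sophon
branch_net = FullyConnected((3, 5, 4), tanh)     # encodes the sampled input function
trunk_net  = FullyConnected((2, 6, 4, 4), tanh)  # encodes the query location
deeponet = DeepONet(branch_net, trunk_net)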
Sophon.DiscreteFourierFeature — Type
DiscreteFourierFeature(in_dims::Int, out_dims::Int, N::Int, period::Real)
The discrete Fourier filter proposed in [6]. For a periodic function with period $P$, the Fourier series in amplitude-phase form is
\[s_N(x)=\frac{a_0}{2}+\sum_{n=1}^N{a_n}\cdot \sin \left( \frac{2\pi}{P}nx+\varphi _n \right)\]
The output is guaranteed to be periodic.
Arguments
- in_dims: Number of the input dimensions.
- out_dims: Number of the output dimensions.
- N: $N$ in the formula.
- period: $P$ in the formula.
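Example
A sketch of a periodic embedding for a 1D coordinate (sizes illustrative):
using Sophon
embedding = DiscreteFourierFeature(1, 16, 8, 2π)  # N = 8 harmonics of the base period 2π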
Sophon.FactorizedDense — Type
FactorizedDense(in_dims::Int, out_dims::Int, activation=identity;
                mean::AbstractFloat=1.0f0, std::AbstractFloat=0.1f0,
                init_weight=kaiming_uniform(activation), init_bias=zeros32)
Create a Dense layer where the weight is factorized into two parts: the scaling factors for each row, and the weight matrix.
Arguments
- in_dims: number of input dimensions
- out_dims: number of output dimensions
- activation: activation function
Keyword Arguments
- mean: mean of the scaling factors
- std: standard deviation of the scaling factors
- init_weight: weight initialization function
- init_bias: bias initialization function
Input
- x: input vector or matrix
Returns
- y = activation.(scale .* weight * x .+ bias)
- Empty NamedTuple()
Parameters
- scale: scaling factors. Shape: (out_dims, 1)
- weight: weight matrix of size (out_dims, in_dims)
- bias: bias of size (out_dims, 1)
References
[7]
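Example
A drop-in replacement sketch for a Dense layer:
using Sophon
layer = FactorizedDense(2, 32, sin)             # effective weight is scale .* weight
layer = FactorizedDense(2, 32, sin; std=0.2f0)  # wider spread of the scaling factors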
Sophon.FourierFeature — Type
FourierFeature(in_dims::Int, std::NTuple{N,Pair{S,T}}) where {N,S,T<:Int}
FourierFeature(in_dims::Int, frequencies::NTuple{N, T}) where {N, T <: Real}
FourierFeature(in_dims::Int, out_dims::Int, std::Real)
Fourier Feature Network.
Arguments
- in_dims: Number of the input dimensions.
- std: A tuple of pairs of sigma => out_dims, where sigma is the standard deviation of the Gaussian distribution.
\[\phi^{(i)}(x)=\left[\sin \left(2 \pi W^{(i)} x\right) ; \cos \left(2 \pi W^{(i)} x\right)\right],\ W^{(i)} \sim \mathcal{N}\left(0, \sigma^{(i)}\right),\ i\in 1, \dots, D\]
- frequencies: A tuple of frequencies $(f_1, f_2, ..., f_n)$.
\[\phi^{(i)}(x)=\left[\sin \left(2 \pi f_i x\right) ; \cos \left(2 \pi f_i x\right)\right]\]
Parameters
If std is used, then the parameters are the $W^{(i)}$ in the formula.
Inputs
- x: AbstractArray with size(x, 1) == in_dims.
Returns
- $[\phi^{(1)}, \phi^{(2)}, ... ,\phi^{(D)}]$ with size(y, 1) == sum(last(modes) * 2).
Examples
julia> f = FourierFeature(2, 10, 1) # Random Fourier Feature
FourierFeature(2 => 10)
FourierFeature(2 => 14)
julia> f = FourierFeature(2, (1, 2, 3, 4)) # Predefined frequencies
FourierFeature(2 => 16)
References
[8]
[9]
[4]
Sophon.NonAdaptiveTraining — Type
NonAdaptiveTraining(pde_weights=1, bcs_weights=pde_weights)
Fixed weights for the loss functions.
Arguments
- pde_weights: Weights for the PDE loss functions. If a single number is given, it is used for all PDE loss functions.
- bcs_weights: Weights for the boundary condition loss functions. If a single number is given, it is used for all boundary condition loss functions.
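Example
A sketch; per the description above, a single number is broadcast, and (assuming the per-equation form implied there) a tuple gives one weight per equation:
using Sophon
strategy = NonAdaptiveTraining(1)           # same weight everywhere
strategy = NonAdaptiveTraining(1, (10, 1))  # per-boundary-condition weights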
Sophon.PINN — Type
PINN(chain, rng::AbstractRNG=Random.default_rng())
PINN(rng::AbstractRNG=Random.default_rng(); kwargs...)
A container for a neural network, its states and its initial parameters. The default element type of the parameters is Float64.
Fields
- phi: ChainState if there is only one neural network, or a named tuple of ChainStates if there are multiple neural networks. The names are the same as the dependent variables in the PDE.
- init_params: The initial parameters of the neural network.
Arguments
- chain: AbstractExplicitLayer or a named tuple of AbstractExplicitLayers.
- rng: AbstractRNG to use for initialising the neural network. If you want to set the seed, write
using Random
rng = Random.default_rng()
Random.seed!(rng, 0)
and pass rng to PINN as
using Sophon
pinn = PINN(chain, rng);
# multiple dependent variables
pinn = PINN(rng; a=chain, b=chain);
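For completeness, a runnable sketch with an illustrative FullyConnected backbone:
using Sophon, Random
rng = Random.default_rng()
Random.seed!(rng, 0)
chain = FullyConnected((2, 16, 16, 1), sin)
pinn = PINN(chain, rng)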
Sophon.PINNAttention — Type
PINNAttention(H_net, U_net, V_net, fusion_layers)
PINNAttention(in_dims::Int, out_dims::Int, activation::Function=sin;
hidden_dims::Int, num_layers::Int)
The output dimension of H_net and the input dimension of fusion_layers must be the same. For the second and the third constructors, Dense layers are used for H_net, U_net, and V_net. Note that the first constructor does not contain the output layer, but the second one does.
x → U_net → u                           u
             ↘                           ↘
x → H_net → h1 → fusionlayer1 → connection → fusionlayer2 → connection
             ↗                           ↗
x → V_net → v                           v
Arguments
- H_net: AbstractExplicitLayer.
- U_net: AbstractExplicitLayer.
- V_net: AbstractExplicitLayer.
- fusion_layers: Chain.
Keyword Arguments
- num_layers: The number of hidden layers.
- hidden_dims: The number of hidden dimensions of each hidden layer.
Reference
[10]
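Example
A sketch using the convenience constructor (sizes illustrative):
using Sophon
model = PINNAttention(2, 1, tanh; hidden_dims=32, num_layers=3)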
Sophon.QuasiRandomSampler — Type
QuasiRandomSampler(pde_points, bcs_points=pde_points;
                   sampling_alg=SobolSample(),
                   resample=false)
Sampler to generate the datasets for the PDE and boundary conditions using a quasi-random sampling algorithm. You can call sample(pde, sampler, strategy) on it to generate all the datasets. See QuasiMonteCarlo.jl for available sampling algorithms. The default element type of the sampled data is Float64. The initial sampled data lives on the GPU if the PINN does. You will need to manually move the data to the GPU if you want to resample.
Arguments
- pde_points: The number of points to sample for each PDE. If a single number is given, the same number of points is sampled for each PDE. If a tuple of numbers is given, the number of points for each PDE is the corresponding element of the tuple. The default is 100.
- bcs_points: The number of points to sample for each boundary condition. If a single number is given, the same number of points is sampled for each boundary condition. If a tuple of numbers is given, the number of points for each boundary condition is the corresponding element of the tuple. The default is pde_points.
Keyword Arguments
- sampling_alg: The sampling algorithm to use. The default is SobolSample().
- resample: Whether to resample the data for each equation. The default is false, which can save a lot of memory if you are solving a large number of PDEs. In this case, pde_points has to be an integer. If you want to resample the data, you will need to manually move the data to the GPU if you want to use the GPU to solve the PDEs.
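Example
A sketch; LatinHypercubeSample is one of the algorithms provided by QuasiMonteCarlo.jl:
using Sophon, QuasiMonteCarlo
sampler = QuasiRandomSampler(500, 100)  # Sobol sampling by default
sampler = QuasiRandomSampler(500, 100; sampling_alg=LatinHypercubeSample())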
Sophon.RBF — Type
RBF(in_dims::Int, out_dims::Int, num_centers::Int=out_dims; sigma::AbstractFloat=0.2f0)
Normalized Radial Basis Function Network.
Sophon.ScalarLayer — Type
ScalarLayer(connection::Function)
Returns connection(scalar, x).
Sophon.SplitFunction — Type
SplitFunction(indices...)
Splits the input along the first dimension according to indices.
Sophon.TriplewiseFusion — Type
TriplewiseFusion(connection, layers...)
         u1                    u2
            ↘                     ↘
h1 → layer1 → connection → layer2 → connection
            ↗                     ↗
         v1                    v2
The layer performs
for i in 1:N
    h = connection(layers[i](h), u[i], v[i])
end
Inputs
- A triple of (h, u, v), where u and v are AbstractArrays; in that case the same u and v are fed to every layer, and the layer performs
for i in 1:N
    h = connection(layers[i](h), u, v)
end
Parameters
- Parameters of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N
States
- States of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N
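Example
A sketch with a hypothetical connection function in the gating style used by attention architectures:
using Sophon, Lux
connection = (h, u, v) -> (1 .- h) .* u .+ h .* v
fusion = TriplewiseFusion(connection, Dense(4 => 4, tanh), Dense(4 => 4, tanh))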
Sophon.BACON — Method
BACON(in_dims::Int, out_dims::Int, N::Int, period::Real; hidden_dims::Int, num_layers::Int)
Band-limited Coordinate Networks (BACON) from [6]. Similar to FourierFilterNet, but the frequencies are discrete and non-trainable.
Tip: It is recommended to set period to 1, 2, π, or 2π for better performance.
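Example
A constructor sketch for a 1D periodic target (sizes illustrative):
using Sophon
model = BACON(1, 1, 32, 2π; hidden_dims=32, num_layers=4)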
Sophon.FourierAttention — Method
FourierAttention(in_dims::Int, out_dims::Int, activation::Function, std;
hidden_dims::Int=512, num_layers::Int=6, modes::NTuple)
FourierAttention(in_dims::Int, out_dims::Int, activation::Function, frequencies;
hidden_dims::Int=512, num_layers::Int=6, modes::NTuple)
A model that combines FourierFeature and PINNAttention.
x → [FourierFeature(x); x] → PINNAttention
Arguments
- in_dims: The input dimension.
- out_dims: The output dimension.
- activation: The activation function.
- std: See FourierFeature.
- frequencies: See FourierFeature.
Keyword Arguments
- hidden_dims: The hidden dimension of each hidden layer.
- num_layers: The number of hidden layers.
Examples
julia> FourierAttention(3, 1, sin, (1 => 10, 10 => 10, 50 => 10); hidden_dims=10, num_layers=3)
    ),
    layer_3 = Dense(10 => 1),  # 11 parameters
)  # Total: 2_371 parameters,
   # plus 90 states, summarysize 192 bytes
Sophon.FourierFilterNet — Method
FourierFilterNet(in_dims::Int, out_dims::Int; hidden_dims::Int, num_layers::Int,
bandwidth::Real)
Multiplicative filter network defined by
\[\begin{aligned}
z^{(1)} &=g\left(x ; \theta^{(1)}\right) \\
z^{(i+1)} &=\left(W^{(i)} z^{(i)}+b^{(i)}\right) \circ \sin \left(\omega^{(i)} x+\phi^{(i)}\right) \\
f(x) &=W^{(k)} z^{(k)}+b^{(k)}
\end{aligned}\]
Keyword Arguments
- bandwidth: The maximum bandwidth of the network. The bandwidth is the sum of each filter's bandwidth.
Parameters
- Parameters of the filters:
\[ W\sim \mathcal{U}\left(-\frac{\omega}{n}, \frac{\omega}{n}\right), \quad b\sim \mathcal{U}(-\pi, \pi),\]
where $n$ is the number of filters.
For a periodic function with period $P$, the Fourier series in amplitude-phase form is
\[s_N(x)=\frac{a_0}{2}+\sum_{n=1}^N{a_n}\cdot \sin \left( \frac{2\pi}{P}nx+\varphi _n \right)\]
We have the following relation between the bandwidth and the parameters of the model:
\[\omega = 2\pi B=\frac{2\pi N}{P},\]
where $B$ is the bandwidth of the network.
References
[11]
[6]
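Example
A constructor sketch; by the relation above, bandwidth=10 corresponds to ω = 2π · 10:
using Sophon
model = FourierFilterNet(1, 1; hidden_dims=32, num_layers=4, bandwidth=10)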
Sophon.FourierNet — Method
FourierNet(layer_sizes::NTuple, activation, modes::NTuple)
A model that combines FourierFeature and FullyConnected.
x → FourierFeature → FullyConnected → y
Arguments
- layer_sizes: A tuple of dimensions used to construct FullyConnected; the first element is the number of input dimensions.
- activation: The activation function used to construct FullyConnected.
- modes: A tuple of modes used to construct FourierFeature.
Examples
julia> FourierNet((2, 30, 30, 1), sin, (1 => 10, 10 => 10, 50 => 10))
Chain(
layer_1 = FourierFeature(2 => 60),
layer_2 = Dense(60 => 30, sin), # 1_830 parameters
    layer_3 = Dense(30 => 30, sin),  # 930 parameters
    layer_4 = Dense(30 => 1),        # 31 parameters
)  # Total: 2_791 parameters.
julia> FourierNet((2, 30, 30, 1), sin, (1, 2, 3, 4))
Chain(
    layer_1 = FourierFeature(2 => 16),
    layer_2 = Dense(16 => 30, sin),  # 510 parameters
    layer_3 = Dense(30 => 30, sin),  # 930 parameters
    layer_4 = Dense(30 => 1),        # 31 parameters
)  # Total: 1_471 parameters,
   # plus 4 states, summarysize 96 bytes.
Sophon.FullyConnected — Method
FullyConnected(layer_sizes::NTuple{N, Int}, activation; outermost = true,
init_weight=kaiming_uniform(activation),
init_bias=zeros32,
allow_fast_activation=false)
    layer_3 = Dense(20 => 20, relu),  # 420 parameters
    layer_4 = Dense(20 => 10),        # 210 parameters
)  # Total: 1_090 parameters,
   # plus 0 states, summarysize 64 bytes.
Sophon.ResNet — Method
ResNet(layer_sizes::NTuple{N, Int}, activation; outermost=true,
init_weight=kaiming_uniform(activation),
init_bias=zeros32,
allow_fast_activation=false)
    ),
    layer_4 = Dense(20 => 10),  # 210 parameters
)  # Total: 1_090 parameters,
   # plus 0 states, summarysize 64 bytes.
Sophon.Sine — Method
Sine(in_dims::Int, out_dims::Int; omega::Real)
Sinusoidal layer.
Example
s = Sine(2, 2; omega=30.0f0) # first layer
s = Sine(2, 2) # hidden layer
Sophon.Siren — Method
Siren(in_dims::Int, out_dims::Int; hidden_dims::Int, num_layers::Int, omega=30.0f0,
      init_weight=nothing)
Siren(layer_sizes::Int...; omega=30.0f0, init_weight=nothing)
Sinusoidal Representation Network.
Keyword Arguments
- omega: The ω₀ used for the first layer.
- init_weight: The initialization algorithm for the weights of the input layer. Note that all hidden layers use kaiming_uniform as the initialization algorithm. The default is
\[ W\sim \mathcal{U}\left(-\frac{\omega}{fan_{in}}, \frac{\omega}{fan_{in}}\right)\]
Examples
julia> Siren(2, 32, 32, 1; omega=5.0f0)
Chain(
# Use your own initialization algorithm for the input layer.
julia> init_weight(rng::AbstractRNG, out_dims::Int, in_dims::Int) = randn(rng, Float32, out_dims, in_dims) .* 2.5f0
julia> chain = Siren(2, 1; num_layers = 4, hidden_dims = 50, init_weight = init_weight)
Reference
[1]
Sophon.discretize — Method
discretize(pde_system::PDESystem, pinn::PINN, sampler::PINNSampler,
strategy::AbstractTrainingAlg; derivative=finitediff,
           additional_loss)
Convert the PDESystem into an OptimizationProblem. You will have access to each loss function Sophon.residual_function_1, Sophon.residual_function_2, ... after calling this function.
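A minimal end-to-end sketch showing how the pieces above fit together on a 1D Poisson problem (names and sizes illustrative):
using ModelingToolkit, IntervalSets, Sophon, Optimization, OptimizationOptimJL
@parameters x
@variables u(..)
Dxx = Differential(x)^2
eq = Dxx(u(x)) ~ -sinpi(x)          # u'' = -sin(πx)
bcs = [u(0) ~ 0, u(1) ~ 0]
domains = [x ∈ 0 .. 1]
@named pde_system = PDESystem(eq, bcs, domains, [x], [u(x)])
pinn = PINN(FullyConnected((1, 16, 16, 1), sin))
sampler = QuasiRandomSampler(200, 50)
strategy = NonAdaptiveTraining()
prob = Sophon.discretize(pde_system, pinn, sampler, strategy)
res = Optimization.solve(prob, BFGS(); maxiters=1000)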
Sophon.gaussian — Function
Sophon.stan — Function
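Both are activation functions. A sketch, assuming they can be passed wherever an activation is expected:
using Sophon
model = FullyConnected((2, 16, 16, 1), Sophon.gaussian)
model = FullyConnected((2, 16, 16, 1), Sophon.stan)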