Notes on parabolic terms #1147
Regarding the analysis callback: when we call, e.g., Trixi.jl/src/callbacks_step/analysis_dg2d.jl, line 198 (at commit db25cc3), is the `du` here only the RHS from the hyperbolic solver, or would it already contain the sum of the hyperbolic and parabolic parts?

Judging from Trixi.jl/src/callbacks_step/analysis.jl, line 222 (at commit db25cc3), it should be the combined RHS, I think.
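For intuition, here is a minimal, hedged sketch (toy stand-in functions, not Trixi internals) of why a `du` computed from a split-form combined RHS contains both contributions:

```julia
# Toy stand-ins for the parabolic and hyperbolic RHS (illustrative only):
toy_rhs_parabolic!(du, u, p, t) = (du .= 0.1 .* u)
toy_rhs_hyperbolic!(du, u, p, t) = (du .= -2.0 .* u)

# A SplitFunction-style combined RHS sums both parts into `du`:
function combined_rhs!(du, u, p, t, cache = similar(du))
    toy_rhs_parabolic!(cache, u, p, t)
    toy_rhs_hyperbolic!(du, u, p, t)
    du .+= cache
    return du
end

u = [1.0, 2.0]
du = similar(u)
combined_rhs!(du, u, nothing, 0.0)  # du == -1.9 .* u, i.e., both parts summed
```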
The analysis callback reports the hyperbolic equation as the system being used. This should be specialized.

Hi, what is the status of AMR & mortars? Is someone currently working on this, or is there still some theoretical work needed to get, e.g., the mortars for parabolic terms right?
As far as I know, nobody is working on that (except maybe someone around you, @jlchan?). The L2 mortars should work fine for parabolic terms - I think the same approach was/is used in FLUXO. Thus, it would be great if you'd consider working on this - it would greatly increase the applicability of AMR for realistic flow setups 👍
I can only second what @sloede has said - it would be a great contribution!
@DanielDoehring @sloede parabolic mortars are on my and @apey236's list, but we were going to focus on MPI parallelization first. If you are interested in adding mortars for parabolic terms, I would be happy to help!
So I tried to come up with a copy-paste-based version originating from the mortars for purely hyperbolic equations (see #1571), as spelled out in the conversation of that draft PR (#1571 (comment), #1571 (comment)).

Hi @DanielDoehring - yes, I implemented this originally in [...]
Regarding AMR: my very first steps foreshadow that the split form of the ODE might complicate things, so I would prefer working with the standard form first.
Can you explain briefly what makes it hard to use SplitODEProblem with AMR? |
No, not really - for that I lack a proper understanding of what is going on, which is of course easier to build if I change only one thing at a time.

The code at lines 538 to 541 (in 3753846) fails when called from Trixi.jl/src/semidiscretization/semidiscretization_hyperbolic_parabolic.jl, lines 305 to 319 (in 3753846), which is in turn called from

```julia
function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t)
    f.f2(du, u, p, t)
    du .+= f.cache
end
```

which resides in [...]. The error will be in the AMR part; it is just a bit hard to track down where the errors originate, since one has to navigate pretty much through the entire [...]
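The failure mode can be reproduced with a toy stand-in (plain structs and vectors as assumptions, not the actual SciML/OrdinaryDiffEq.jl types): if AMR grows the solution vector while the split function's cache keeps its old length, the broadcast into the cache errors out.

```julia
# Toy imitation of a SplitFunction carrying a preallocated cache:
mutable struct ToySplitFunction{F1, F2}
    f1::F1
    f2::F2
    cache::Vector{Float64}
end

function (f::ToySplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t)  # fails if the cache was not resized along with u
    f.f2(du, u, p, t)
    du .+= f.cache
end

f = ToySplitFunction((du, u, p, t) -> (du .= u),
                     (du, u, p, t) -> (du .= 2 .* u),
                     zeros(4))

u = ones(4)
du = similar(u)
f(du, u, nothing, 0.0)  # works: all lengths match, du == 3 .* u

# "AMR" grows u to length 6, but f.cache is still length 4:
u_big = ones(6)
du_big = similar(u_big)
failed = try
    f(du_big, u_big, nothing, 0.0)
    false
catch
    true
end
# failed == true: a DimensionMismatch caused by the stale cache
```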
Could you please create an MWE of a [...]?
Yes, that is probably what is going on - currently, only Trixi.jl/src/callbacks_step/amr.jl, lines 186 to 193 (in b0ec66e), needs to be adapted to incorporate (at least) the [...]

Yes, that's the fix we need to make 👍
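A hedged sketch of the direction of that fix (the helper name below is hypothetical, not the actual Trixi.jl patch): after mesh adaptation, every buffer whose length is tied to the solution vector must be resized, including a `SplitODEProblem`'s cache.

```julia
# Hypothetical helper illustrating the missing step; in Trixi.jl the resize
# happens via resize!(integrator, length(u_ode)) in the AMR callback, which
# would additionally need to resize the SplitFunction cache:
function resize_after_amr!(u_ode::Vector{Float64}, cache::Vector{Float64},
                           new_length::Integer)
    resize!(u_ode, new_length)
    resize!(cache, new_length)  # the step currently missing for split forms
    return nothing
end

u_ode = zeros(4)
cache = zeros(4)
resize_after_amr!(u_ode, cache, 6)
length(u_ode) == length(cache) == 6  # lengths stay consistent
```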
An MWE is not so easy to provide, but there is certainly something wrong with the [...]. While debugging, I changed Trixi.jl/src/callbacks_step/amr.jl, lines 177 to 180 (in 67d137d), to

```julia
if has_changed
    println("Target length u_ode: ", length(u_ode))
    println("f.cache pre-change: ", length(integrator.f.cache))
    resize!(integrator, length(u_ode))
    println("f.cache post-change: ", length(integrator.f.cache))
    u_modified!(integrator, true)
end
```

which outputs [...], which is the reason why lines 538 to 541 (in 3753846) fail for `rhs_parabolic!`, which is passed as `f1` and is thus called via

```julia
function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t) # Non-resized cache is put into f1!
    f.f2(du, u, p, t)
    du .+= f.cache
end
```

For the moment, I avoid the `SplitFunction` entirely by basically replicating the erroneous code above in a bug-free manner via

```julia
function semidiscretize(semi::SemidiscretizationHyperbolicParabolic, tspan;
                        split_form::Bool = true)
    u0_ode = compute_coefficients(first(tspan), semi)
    # TODO: MPI, do we want to synchronize loading and print debug statements, e.g. using
    # mpi_isparallel() && MPI.Barrier(mpi_comm())
    # See https://github.com/trixi-framework/Trixi.jl/issues/328
    iip = true # is-inplace, i.e., we modify a vector when calling rhs_parabolic!, rhs!

    # Note that the IMEX time integration methods of OrdinaryDiffEq.jl treat the
    # first function implicitly and the second one explicitly. Thus, we pass the
    # stiffer parabolic function first.
    if split_form
        return SplitODEProblem{iip}(rhs_parabolic!, rhs!, u0_ode, tspan, semi)
    else
        specialize = SciMLBase.FullSpecialize # specialize on rhs! and parameters (semi)
        return ODEProblem{iip, specialize}(rhs_hyperbolic_parabolic!, u0_ode, tspan, semi)
    end
end

function rhs_hyperbolic_parabolic!(du_ode, u_ode,
                                   semi::SemidiscretizationHyperbolicParabolic, t)
    @trixi_timeit timer() "hyperbolic-parabolic rhs!" begin
        du_ode_hyp = similar(du_ode) # TODO: Avoid allocations, make member variable of something?
        rhs!(du_ode_hyp, u_ode, semi, t)
        rhs_parabolic!(du_ode, u_ode, semi, t)
        du_ode .+= du_ode_hyp
    end
end
```

which actually works (currently 2D only). I will try to report this bug in the [...]
With #1629 merged, we can tick off mortars & AMR for [...]
With #1765 merged, AMR for 3D parabolic terms on p4est meshes is now also available.
I would like to pick up the viscous CFL business. I would be grateful if someone could point me to [...]
The CFL conditions (both advective and viscous) are similar to the FLEXI implementation. Have a look at Section 3.5 of the FLEXI code paper.
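For orientation, here is a hedged sketch of the two kinds of time-step restriction involved; the constants and scalings below are illustrative textbook forms, not the exact FLEXI/Trixi formulas from that paper:

```julia
# Illustrative advective (CFL) and viscous (DFL) step restrictions for a
# DGSEM element of size dx with polynomial degree N (assumed forms):
#   Δt_adv  ≈ cfl * dx / ((2N + 1) * λ_max)
#   Δt_visc ≈ dfl * (dx / (N + 1))^2 / ν
max_dt_advective(dx, N, lambda_max; cfl = 0.5) = cfl * dx / ((2N + 1) * lambda_max)
max_dt_viscous(dx, N, nu; dfl = 0.5) = dfl * (dx / (N + 1))^2 / nu

# The usable step is the minimum of both restrictions; for small ν the
# advective limit dominates, for large ν the viscous one does:
dt = min(max_dt_advective(0.1, 3, 1.0), max_dt_viscous(0.1, 3, 1.0e-3))
```

The key qualitative point is the quadratic dependence on element size for the viscous part, which makes diffusion the binding constraint on fine AMR levels.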
This is basically taken from #1136. Currently, our idea is to merge the `DGMulti` setup from that PR into `dev`, draft a corresponding `TreeMesh` & `DGSEM` version in another PR, and see how it looks for Navier-Stokes. When that's okay, we can go ahead and merge `dev` into `main` to finally get support for parabolic terms in Trixi.jl.

- `TreeMesh` version with `DGSEM`
- `TreeMesh`
- `P4estMesh`
- `P4estMesh`
- `StructuredMesh`
- `jump(u)` or `orientation` (i.e., compute `x` and `y` components separately)?
- `Gradient`/`Divergence` boundary conditions in docs
- `grad_u` (current) vs. `gradients` or `gradients_u`
- `no_boundary_condition` (current) vs. `boundary_condition_do_nothing` or something else (what?). Answer: `boundary_condition_do_nothing`
- `ViscousFormulation***`, but this gets pretty long.
- `diffusivity` as global functions #1503
- `elixir_navier_stokes_xxx.jl` to `elixir_navierstokes_xxx.jl` for consistency (e.g., with `elixir_shallowwater_xxx.jl`).
- `StepsizeCallback` with viscous CFL condition