Replies: 3 comments 2 replies
-
Thanks @jagoosw, this is a good question. The only analytical solutions for the biogeochemistry will be for very simplified systems (e.g. a limit cycle in a predator-prey P+Z model), and these would probably be of limited value for us. One universal check worth having is that the system conserves nitrogen. @syou83syou83 did this check for the LOBSTER model. We could have a test that makes sure the total nitrogen remains within a certain percentage of the initial value, and run it for both a box model configuration and a column model configuration. We will want the tests to run quickly, so the integration time for the column model could be short (I don't think there is any need to include an annual cycle for this test).
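Something like this minimal sketch is what I have in mind, using a toy nitrogen-conserving box model in place of the real OceanBioME setup (the model and helper names below are made up for illustration, not the actual API):

```julia
using Test

# Toy nitrogen-conserving box model: a simple N -> P -> Z -> D -> N loop stepped
# with forward Euler. This is only a stand-in for an OceanBioME box-model setup;
# the point is the structure of the conservation test, not the biogeochemistry.
function step!(tracers, dt)
    N, P, Z, D = tracers["N"], tracers["P"], tracers["Z"], tracers["D"]
    uptake, grazing, mortality, remin = 0.5N * P, 0.2P * Z, 0.1Z, 0.05D
    tracers["N"] += dt * (remin - uptake)
    tracers["P"] += dt * (uptake - grazing)
    tracers["Z"] += dt * (grazing - mortality)
    tracers["D"] += dt * (mortality - remin)
    return tracers
end

@testset "Nitrogen conservation (box model)" begin
    tracers = Dict("N" => 10.0, "P" => 1.0, "Z" => 0.5, "D" => 0.1)
    N_total₀ = sum(values(tracers))

    for _ in 1:1000                      # short integration; no annual cycle needed
        step!(tracers, 0.01)
    end

    # Require total nitrogen to stay within 0.1 % of its initial value
    @test abs(sum(values(tracers)) - N_total₀) / N_total₀ < 1e-3
end
```

The real test would swap the toy `step!` for the package's box-model and column-model time stepping, keeping the same before/after comparison of summed nitrogen tracers.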
-
I'm trying to debug something I've broken, so I started making a simple test simulation and wanted to check: do we actually expect N conservation given that the detritus sinks out of the model? I can't find any clear documentation from Oceananigans on whether zero-flux boundary conditions are enforced on the advective (sinking) flux of tracers.
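For intuition, here is a toy 1-D upwind sinking scheme in plain Julia (not Oceananigans code, and not necessarily how Oceananigans treats the bottom face): with a zero-flux bottom boundary the column total is conserved exactly and the detritus just accumulates in the bottom cell, whereas an open bottom face would remove it from the budget.

```julia
# Toy 1-D upwind sinking scheme on a column of nz cells, with cell 1 at the bottom.
# flux[k] is the downward flux through the bottom face of cell k; leaving
# flux[1] = 0 closes the bottom of the domain.
function sink!(D, w_sink, dz, dt, nsteps)
    nz = length(D)
    flux = zeros(nz + 1)
    for _ in 1:nsteps
        for k in 2:nz
            flux[k] = w_sink * D[k]      # upwind: take the value from the cell above the face
        end
        for k in 1:nz
            D[k] += dt * (flux[k+1] - flux[k]) / dz
        end
    end
    return D
end

D = ones(20)                             # uniform initial detritus profile
total_before = sum(D)
sink!(D, 0.5, 1.0, 0.1, 1000)
total_after = sum(D)

# With the closed bottom the total is unchanged (to round-off); the detritus has
# simply piled up in the bottom cell instead of leaving the domain.
println("relative change in total detritus: ", (total_after - total_before) / total_before)
```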
-
Added these in PR #37. Once it's working it will be quite important that we don't make lots of small commits to main, because we only get 2000 minutes of workflow time per month. I've set the workflow to run only on direct pushes or pull requests to the main branch; on other branches we can just run the tests locally.
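Running them locally on a feature branch is just the standard Pkg test invocation from the package directory (assuming the suite lives in the usual `test/runtests.jl`):

```julia
using Pkg

# Activate the package environment and run the test suite locally instead of on CI.
Pkg.activate(".")
Pkg.test()
```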
-
Hi both,
While updating the particles code to solve the conservation issue I've also updated some of the tests I'd written, and it got me thinking more generally about automated testing.
Packages like Oceananigans have a lot of code that does deterministic things, but I'm not sure how much of our code we can test in the same way. For example, I have written tests for the particles and for the PAR integration, because both of those do a well-defined thing, but the models themselves don't really behave like that. I should probably also write some tests that guard against breaking changes to the setup functions, checking that they return the right types and so on (see the sketch below). The only way I can think of to test the rest is some sort of validation case, like Oceananigans has, but those largely rely on analytically solvable problems. Is there something like that we could use to check our model implementation?
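A rough sketch of the kind of interface test I mean (the `setup_column_model` function, the `ColumnModel` type, and the tracer names below are placeholders standing in for our real setup functions, not actual package API):

```julia
using Test

# Placeholder setup function standing in for e.g. a LOBSTER column-model constructor;
# the real test would call the setup function actually exported by the package.
struct ColumnModel
    tracers::NamedTuple
end

setup_column_model() =
    ColumnModel((NO₃=Float64[], NH₄=Float64[], P=Float64[], Z=Float64[], D=Float64[]))

@testset "Setup functions return the expected types" begin
    model = setup_column_model()

    @test model isa ColumnModel
    # Guard against accidental renaming or removal of tracers during a refactor
    @test all(haskey(model.tracers, t) for t in (:NO₃, :NH₄, :P, :Z, :D))
    @test all(v isa Vector{Float64} for v in values(model.tracers))
end
```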
Interestingly, they do have some "biogeochemistry" validation cases, but it's just things like checking that sinking tracers work, which we don't need to retest.
Jago