heyoka 3.2.0
This new release of heyoka comes packed with several new features and enhancements.
Support for single-precision computations
In addition to extended-precision and arbitrary-precision computations, heyoka now also supports single-precision computations via the float type. Single precision can lead to substantial performance benefits, especially in batch mode and/or in low-accuracy applications. See the single-precision tutorial for a usage example.
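As a quick illustration, here is a minimal C++ sketch of a single-precision integration of the simple pendulum (the standard example from the heyoka tutorials). The use of taylor_adaptive<float> together with float-valued initial conditions is an assumption based on how the other supported precisions are selected; the single-precision tutorial is the authoritative reference.

```c++
// Minimal sketch of a single-precision integration: the simple pendulum
// example from the heyoka tutorials, with taylor_adaptive instantiated
// for float and float-valued initial conditions.
#include <iostream>

#include <heyoka/heyoka.hpp>

int main()
{
    using namespace heyoka;

    // Symbolic state variables.
    auto [x, v] = make_vars("x", "v");

    // Pendulum dynamics: x' = v, v' = -9.8 * sin(x).
    // Single precision is selected via the float template argument
    // and the float initial conditions.
    auto ta = taylor_adaptive<float>{{prime(x) = v, prime(v) = -9.8 * sin(x)},
                                     {0.05f, 0.025f}};

    // Propagate for 10 time units and print the final state.
    ta.propagate_until(10.f);
    std::cout << "x = " << ta.get_state()[0] << ", v = " << ta.get_state()[1] << '\n';
}
```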
ELP2000 model
heyoka now includes an implementation of the ELP2000 analytical lunar theory. This makes it possible to formulate systems of differential equations in which the time-dependent geocentric position of the Moon appears in the right-hand side. See the tutorial for an introduction.
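To give a flavour of the new functionality, here is a hedged sketch in which the geocentric lunar coordinates drive a toy oscillator. The model::elp2000_cartesian_e2000() function name and the kw::time_expr option are assumptions recalled from the conventions of heyoka's model API, and the units and time scales expected by the theory are not spelled out here; please refer to the ELP2000 tutorial for the exact interface.

```c++
// Hedged sketch: the model::elp2000_cartesian_e2000() function and the
// kw::time_expr option are assumptions about the model API; check the
// ELP2000 tutorial for the exact names, units and time scales.
#include <heyoka/heyoka.hpp>

int main()
{
    using namespace heyoka;

    auto [x, v] = make_vars("x", "v");

    // Geocentric Cartesian coordinates of the Moon as expressions of the
    // integrator's time variable (assumed function name and keyword).
    auto moon = model::elp2000_cartesian_e2000(kw::time_expr = heyoka::time);

    // Toy oscillator in which the lunar x coordinate (arbitrarily rescaled)
    // appears as a time-dependent forcing in the right-hand side.
    auto ta = taylor_adaptive<double>{{prime(x) = v, prime(v) = -x + 1e-6 * moon[0]},
                                      {0., 1.}};

    ta.propagate_until(10.);
}
```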
Low-precision vector math
When the fast_math option is active, heyoka now employs lower-precision vector implementations of the elementary functions, which can lead to substantial speedups in low-accuracy applications. The speedup is particularly noticeable when using single precision in AI and ML applications.
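A brief sketch follows, assuming that the kw::fast_math keyword option is accepted by the integrator constructors (an assumption based on the existing fast-math setting of the underlying JIT compilation machinery); combined with single precision, this is the configuration under which the lower-precision vector implementations kick in.

```c++
// Sketch of enabling fast math on a single-precision integrator. The
// kw::fast_math option is assumed to be accepted by the taylor_adaptive
// constructor (it configures the underlying JIT compilation).
#include <heyoka/heyoka.hpp>

int main()
{
    using namespace heyoka;

    auto [x, v] = make_vars("x", "v");

    // Single-precision pendulum with fast math enabled: elementary
    // functions such as sin() can be compiled down to lower-precision
    // vector implementations.
    auto ta = taylor_adaptive<float>{{prime(x) = v, prime(v) = -9.8 * sin(x)},
                                     {0.05f, 0.025f},
                                     kw::fast_math = true};

    ta.propagate_until(10.f);
}
```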
As usual, the full changelog is available here: