Single precision variants #29
Hi @dylanede, that should be straightforward if one does what you're suggesting, namely truncating the approximations accordingly. (I guess you're asking for single-precision versions for performance reasons, right?) To answer your questions:
To begin with, one could start with
I could help with that, if you like. (So far I have not seen an application for this myself, but it seems you have one.)
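Just to illustrate what I mean by truncating the approximations, the rough shape would be something like the following (a minimal sketch only; the function name and all coefficient values here are placeholders, not the library's fitted constants):

```cpp
#include <cstddef>

// Minimal sketch of a truncated single-precision approximation: a
// rational function P(x)/Q(x) with shorter float coefficient tables,
// evaluated with Horner's scheme.  All values below are placeholders.
float rational_approx_f(float x) noexcept
{
    const float P[] = {1.0f, -0.25f, 0.02f};           // placeholder numerator
    const float Q[] = {1.0f, -0.75f, 0.10f, -0.005f};  // placeholder denominator

    float p = 0.0f;
    for (std::size_t i = sizeof(P)/sizeof(P[0]); i-- > 0; )
        p = p*x + P[i];

    float q = 0.0f;
    for (std::size_t i = sizeof(Q)/sizeof(Q[0]); i-- > 0; )
        q = q*x + Q[i];

    return p/q;
}
```

The actual coefficients would of course have to come from a new fit against the reduced single-precision target error.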
Thanks for the detailed reply! I have determined using that script that for the target of 2e-6 relative error, the numbers of coefficients for the numerator and denominator are two and three respectively (giving a relative error of approximately 8e-9). I'll see what I can do with the test cases. By the way, the main functions I am interested in are the complex versions of Li2, Li3 and Li4, so if/when I submit a PR, it will likely be for those plus their dependencies.
Many thanks in advance for preparing a PR!
That's pretty good in the sense that the number of coefficients is not very big. (I guess reducing the number of coefficients further would give a relative error of more than 2e-6.)
For the complex versions I've used the helper functions in
Just out of curiosity, if you don't mind: what exactly is your application? Since you care about performance so much, I guess you're evaluating these polylogarithms millions of times, right?
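For illustration only, a helper along these lines is the kind of thing I mean (the name and signature here are made up for the sketch, not the actual functions from that file):

```cpp
#include <complex>
#include <cstddef>

// Hypothetical helper: Horner evaluation of a polynomial with real
// coefficients at a complex single-precision argument.  Illustrative
// only; not the library's actual API.
template <std::size_t N>
std::complex<float> horner(const std::complex<float>& z, const float (&c)[N]) noexcept
{
    std::complex<float> p(c[N - 1], 0.0f);
    for (std::size_t i = N - 1; i-- > 0; )
        p = p*z + c[i];
    return p;
}
```

The nice part is that the truncated real coefficient tables could then be reused unchanged for the complex arguments.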
Yes, though I did notice that for a given input number of coefficients, the number that comes out is one more than that for both the numerator and denominator, with the first coefficients being 1. So really it's three and four. The next step down in coefficient count produced a relative error of 4.4e-6.
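If I'm reading the output right, the fitted form then looks schematically like this (a sketch only; the cp*/cq* values stand in for the fitted constants, which I'm leaving out):

```cpp
// Schematic shape of the fit: three numerator and four denominator
// coefficients, with the leading coefficient of each fixed to 1.
// The non-leading values are placeholders, not fitted constants.
float pade_3_4(float x) noexcept
{
    const float cp1 = 0.0f, cp2 = 0.0f;                // placeholders
    const float cq1 = 0.0f, cq2 = 0.0f, cq3 = 0.0f;    // placeholders

    const float p = 1.0f + x*(cp1 + x*cp2);            // numerator, leading 1
    const float q = 1.0f + x*(cq1 + x*(cq2 + x*cq3));  // denominator, leading 1
    return p/q;
}
```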
Sure. The application relies heavily on parallel Monte Carlo sampling of distributions proportional to the Green's functions for Laplace's equation on rectangular regions. These functions turn out to be expressible in terms of logarithms of the Jacobi theta functions, and the integrals needed to perform the sampling of them end up as quickly converging (within two or three terms) series involving complex polylogarithms. I'm currently using the double-precision versions simply adjusted to use floats instead, since double precision currently isn't necessary, but I thought I'd enquire about how much effort would be needed to optimise the functions for floats, in case I find myself needing more performance. So I can't guarantee that the PR will be soon.
Some time ago I found this old paper [Morris], which gives several rational function approximations of different precision. For double precision (index 0011) with 16.2 decimal digits of precision it uses 5 terms in the numerator and 6 in the denominator, where one of them is equal to 1. For single precision, where one may be satisfied with 6.7 decimal digits of precision, Morris uses 2 numerator and 2 denominator terms (index 0004), where one of them is set to 1. If I understand correctly, this matches your observation.
Very interesting, thanks! No worries regarding the time scale. :)
I've drafted a single-precision version of the real dilogarithm.
What would it take to produce versions of the functions for single-precision floating point? Presumably in most cases it means truncating the series, and thus the tables of coefficients, at lower indices, but I'm not sure where to begin with adjusting the test cases and finding the new cutoff points. On another note, what guarantees do the existing functions have with respect to precision over their entire input range?
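To make the second half of the question concrete, the kind of check I imagine the adjusted tests would need is roughly this (a sketch only; `std::log1p` in double and float just stands in for the reference/candidate pair, since I don't want to presume the final function names):

```cpp
#include <cmath>
#include <cstdio>

// Sketch: scan a grid of inputs, compare a single-precision candidate
// against a double-precision reference, and report the worst relative
// error.  The callables are passed in so any function pair can be used.
template <class Ref, class Test>
double max_relative_error(Ref ref, Test test, double lo, double hi, int n)
{
    double worst = 0.0;
    for (int i = 0; i <= n; ++i) {
        const double x = lo + (hi - lo)*i/n;
        const double r = ref(x);
        if (r == 0.0) continue;                       // skip exact zeros
        const double e = std::fabs((test(static_cast<float>(x)) - r)/r);
        if (e > worst) worst = e;
    }
    return worst;
}

int main()
{
    // Self-contained demo with log1p playing both roles.
    const double worst = max_relative_error(
        [](double x) { return std::log1p(x); },
        [](float x)  { return std::log1p(x); },
        -0.5, 0.5, 1000);
    std::printf("max relative error: %g\n", worst);
    return worst < 2e-6 ? 0 : 1;                      // e.g. a 2e-6 target
}
```

Something like that would also make it easy to search for the cutoff points: shrink the coefficient tables until the measured error crosses the target.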