low performance compared to Scipy minimize #896
Comments
Hi and welcome to Julia! Hard to say because of the formatting (use triple backticks to format blocks of code), but are you measuring the time to do
@antoine-levitt In both cases I just measured the execution time of the minimization, i.e.
It's a known tradeoff with Julia: functions have to be compiled the first time they're used, which takes time. This is something the Julia developers are working on, but the first execution is never going to be as fast as Python. The preferred workflow is to open one Julia session and do all your computations inside it, in which case a one-second overhead does not matter. E.g. I get
which looks faster than Python?
@antoine-levitt Indeed it looks faster, but I do not understand how it can help in improving the performance of any code (except some massive loops). It may sound naive, but if I need to execute the code twice to see any gain, I could just read the output of the first run, couldn't I? Is there a way to compile the code first to get this performance boost on the first go? What's the compilation time?
The point is that for any realistic computation precompilation does not matter. The compilation of f is very fast (you can see this by running it once, redefining f and running it again); what takes time here is compiling Optim, which happens in every new Julia session. It's only triggered the first time, so if you leave your Julia session open (e.g. using https://github.com/timholy/Revise.jl to minimize the need for restarts) it's basically not a problem. There are workflow tips in the Julia manual if that helps. For that kind of question you can also use the Discourse or StackOverflow forums.
It's possible that we can reduce compile time by reorganizing the code, but from Optim's side, we cannot avoid this completely because of the points Antoine made about Julia in general. |
@mrozkamil, here's one way to think about it:
In other words, in any case where it actually matters, Julia's model is probably a win. That said, all of us understand the frustration of latency. @pkofod, don't know if you saw the new SnoopCompile announcement on discourse, but there's at least a chance that could help Optim. (And more goodness to come...) |
I will try to find the time. I didn't see the announcement, so thanks!
What about the (re)compilation time when the objective function changes? Here's my use case: I have thousands of target functions which I want to test against some experimental data to see which one fits the data best. In my experience, when I pass in a new target function, there seems to be a massive (re)compilation overhead. I don't have an MWE right now, but here's the output:

0.036213 seconds (29.97 k allocations: 2.328 MiB, 95.46% compilation time: 100% of which was recompilation)
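For comparison, the same use case in scipy terms (with data and candidate models invented purely for illustration) is roughly: fit each candidate's parameters, then keep the candidate with the smallest residual. In Python this loop pays no per-function compilation cost, whereas Julia specializes compiled code on each new function type, which is the overhead described above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical experimental data: y ≈ 2·x² plus small noise (invented for illustration)
rng = np.random.default_rng(0)
xdata = np.linspace(0.0, 1.0, 50)
ydata = 2.0 * xdata ** 2 + rng.normal(scale=0.01, size=xdata.size)

# A small family of candidate target functions a·x^p (a stand-in for "thousands")
candidates = [lambda x, a, p=p: a * x ** p for p in (1, 2, 3)]

def fit_error(model):
    # Least-squares fit of the single amplitude parameter a; return the residual
    obj = lambda a: float(np.sum((model(xdata, a[0]) - ydata) ** 2))
    return minimize(obj, [1.0], method="Nelder-Mead").fun

# Keep the candidate whose best fit has the smallest residual
best = min(candidates, key=fit_error)
```

Here each candidate is just another Python callable, so swapping models is free; the equivalent Optim.jl loop would trigger a fresh specialization per target function.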
I was attracted to Julia by its reputation for speed compared to e.g. Python. Therefore, I tested it on one of the problems I need to solve on a daily basis, i.e. function minimisation. I compared the speed of Optim.jl to that of scipy.optimize.minimize on the Rosenbrock function, using two minimisation methods: BFGS and Nelder-Mead. For both methods, Scipy is approx. 150-200 times faster than Optim.jl, as can be seen below:
These are two pieces of code I generated:
Rosenbrock.jl:
Rosenbrock.py:
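The original scripts are not reproduced in this thread. As a rough sketch, the Python side of such a benchmark (standard 2-D Rosenbrock, both methods timed) might look like the following; the starting point and default tolerances here are assumptions, not necessarily what the original poster used:

```python
import time

import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Classic 2-D Rosenbrock function; global minimum f(1, 1) = 0
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

x0 = np.array([0.0, 0.0])  # assumed starting point
for method in ("BFGS", "Nelder-Mead"):
    t0 = time.perf_counter()
    res = minimize(rosenbrock, x0, method=method)
    print(f"{method}: x = {res.x}, {time.perf_counter() - t0:.4f} s")
```

A matching Rosenbrock.jl would call Optim.optimize with BFGS() and NelderMead(); when timed with @time in a fresh session, the first Julia call includes the compilation overhead discussed in the comments above.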
Any explanation for such low performance of Julia?