Hello Dr. Lu, I greatly appreciate your work on the development of the DeepXDE library. I have been using it for research for some time now, and I recently found that the L-BFGS optimizer stops at exactly 30 iterations. I had been using it without issue until a few weeks ago, when I first observed this behaviour. This is quite strange, because I see this "early stopping" in code for which L-BFGS previously worked properly, meaning I was able to run L-BFGS for as many iterations as I wanted.
To make sure I had not broken something in my own code, I tried some of the demo code from the DeepXDE documentation, as well as code from your work on sampling strategies. The result is always the same: Adam runs for as many iterations as I want, but when training advances to L-BFGS, it stops at exactly 30 iterations. I tried tweaking the gtol and ftol parameters, but no luck. The arithmetic precision is set to float64, as always. In short, without changing anything in the code I am experimenting with, the L-BFGS optimizer now stops at 30 iterations. Any advice would be very much appreciated. Thank you.
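For reference, here is a minimal sketch of the kind of script where I see this, adapted from the Poisson 1D Dirichlet demo in the DeepXDE documentation (the problem itself is just illustrative, and the hyperparameters are the defaults I normally use):

```python
import deepxde as dde

# Use float64 precision, as in all my runs
dde.config.set_default_float("float64")

# Illustrative 1D Poisson problem: -u'' = 2 on (-1, 1), u(-1) = u(1) = 0
def pde(x, y):
    dy_xx = dde.grad.hessian(y, x)
    return -dy_xx - 2

geom = dde.geometry.Interval(-1, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=16, num_boundary=2)

net = dde.nn.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform")
model = dde.Model(data, net)

# Adam phase: runs for exactly as many iterations as requested
model.compile("adam", lr=1e-3)
model.train(iterations=10000)

# L-BFGS phase: options must be set before compiling with "L-BFGS".
# Even with a large maxiter and with ftol/gtol tweaked, training
# stops after exactly 30 iterations on my machine.
dde.optimizers.set_LBFGS_options(maxiter=15000, ftol=0, gtol=1e-8)
model.compile("L-BFGS")
losshistory, train_state = model.train()
```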