ALNS parameter optimization #170
Comments
Hi @dietmarwo! 👋 How does this compare to the ideas laid out in #109? I think we can combine things, but I'm hesitant to build a custom tuner just for ALNS. It might be easier to hook into existing tuning software instead, but that needs to be explored further.
Not sure whether ML hyperparameter tuning is optimal here. In ML we usually use dedicated hardware (TPUs/GPUs), which blocks parallel evaluation of hyperparameter settings. ALNS is often applied to objective functions that can be evaluated single-threaded, so on a modern CPU you could run >= 16 experiments in parallel. But nevertheless, you are right: applying these existing tools is better than nothing. And there are alternatives that avoid building a custom tuner:
keras-tuner looks very ML-specific to me.
I have had some success applying SMAC3 to tune an ALNS before - some example code for setting that up is available here.
Indeed, SMAC3's "Scenario" supports an "n_jobs" parameter, so you can also execute multiple runs in parallel. It may indeed be a good solution. I will also have a look at this survey.
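For reference, a minimal sketch of tuning the decay parameter with SMAC3. Here `evaluate_decay` is a hypothetical target function, the parameter range is an assumption, and the exact argument names (e.g. `n_workers` in SMAC3 2.x rather than `n_jobs`) depend on the SMAC3 version:

```python
from ConfigSpace import ConfigurationSpace
from smac import HyperparameterOptimizationFacade, Scenario

# Search space: a single continuous decay parameter (the range is an assumption).
config_space = ConfigurationSpace({"decay": (0.5, 0.95)})


def evaluate_decay(config, seed: int = 0) -> float:
    """Hypothetical target function: run one ALNS with config["decay"] and
    the given seed, and return the resulting objective value."""
    ...


# n_workers (SMAC3 2.x) evaluates several configurations in parallel.
scenario = Scenario(config_space, n_trials=200, n_workers=8)
smac = HyperparameterOptimizationFacade(scenario, evaluate_decay)
incumbent = smac.optimize()  # best decay value found
```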
Great, let me know what you find! This could be a cool addition to ALNS.
Related to the parallelization topic:
We used to have an issue open about just that, here: #59. I think this is the most basic parallel ALNS paper out there: PALNS. Whether that's the best way to do it is an open question. There are some papers from the past few years that describe parallel ALNS implementations, but I've not seen any that really convinced me they are significantly better than the obvious parallelisation of doing multiple independent runs.
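A minimal sketch of that obvious parallelisation, assuming a hypothetical `solve_once` helper that builds and runs one complete ALNS instance:

```python
from concurrent.futures import ProcessPoolExecutor


def solve_once(seed: int):
    """Hypothetical helper: run one ALNS with this seed and
    return (objective, best_solution)."""
    ...


def best_of_independent_runs(num_runs: int = 16):
    # Run independent ALNS instances in parallel and keep the best result.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(solve_once, range(num_runs)))
    return min(results, key=lambda res: res[0])
```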
With evolutionary optimization - which I am more familiar with - we are faced with the same problem. Sometimes the "parallel runs" idea works very well, but sometimes it is better to parallelize the evaluation of a population. Population-based EAs can support this by providing an ask/tell interface - the user can then apply parallel evaluation themselves. The question is whether ALNS can support a similar idea: asking the user to apply a list of destroy/repair pairs and return the resulting states. The advantage is that ALNS does not have to perform any parallelization itself. So you could "do just ALNS really well" as planned, but still support parallelization. The only internal change would be that you no longer update the internal state of the algorithm (its weights) for each destroy/repair pair separately, but instead perform a kind of "batch update" for a list of destroy/repair pairs.
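To make the idea concrete, a rough sketch of what such a batch ask/tell interface could look like. The class name, methods, and the simple roulette-wheel selection and weight update are made up for illustration and are not part of the current ALNS package:

```python
import random


class BatchALNS:
    """Hypothetical ask/tell wrapper; not part of the current ALNS API."""

    def __init__(self, num_destroy, num_repair, decay=0.8):
        self.decay = decay
        self.d_weights = [1.0] * num_destroy
        self.r_weights = [1.0] * num_repair

    def ask(self, k):
        """Sample k (destroy, repair) operator index pairs via roulette-wheel
        selection; the caller applies them to the current state in parallel."""
        d_idx = random.choices(range(len(self.d_weights)), self.d_weights, k=k)
        r_idx = random.choices(range(len(self.r_weights)), self.r_weights, k=k)
        return list(zip(d_idx, r_idx))

    def tell(self, pairs, rewards):
        """Batch-update operator weights from the rewards of the evaluated candidates."""
        for (d, r), reward in zip(pairs, rewards):
            self.d_weights[d] = self.decay * self.d_weights[d] + (1 - self.decay) * reward
            self.r_weights[r] = self.decay * self.r_weights[r] + (1 - self.decay) * reward


# Typical loop on the user side (pseudocode):
#   pairs = batch_alns.ask(k=16)
#   ...apply the operator pairs to copies of the current state in parallel...
#   batch_alns.tell(pairs, rewards)
```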
I think something like that should be possible, but I don't have the time right now to mock something up. It might be tricky to do this without breaking existing interfaces. Would you be willing to have a go at this if you're interested?
Is your feature request related to a problem? Please describe
Checking out the examples, I found that for some of them (TSP, CVRP) using decay = 0.85 improved the results significantly compared to the chosen value decay = 0.8. A useful feature would be automated support for parameter testing.
Describe the solution you'd like
The solution would be an automated parameter sweep executing many optimization runs while varying one or more parameters and the random seed. To utilize modern many-core CPUs, these runs could be parallelized using Python multiprocessing.
The result would be the mean and standard deviation of the objective for specific parameter values, which could be visualized using plots.
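A rough sketch of what such a sweep could look like, assuming a hypothetical `run_alns(decay, seed)` helper that performs one complete ALNS run and returns its best objective value:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product
from statistics import mean, stdev


def run_alns(decay: float, seed: int) -> float:
    """Hypothetical helper: run one ALNS with the given decay and seed and
    return the best objective value found."""
    ...


def sweep(decays, seeds, workers=16):
    # Evaluate every (decay, seed) combination in parallel.
    jobs = list(product(decays, seeds))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        objectives = list(pool.map(run_alns, *zip(*jobs)))

    # Aggregate per decay value: {decay: (mean, std)}, ready for plotting.
    results = {}
    for decay in decays:
        vals = [obj for (d, _), obj in zip(jobs, objectives) if d == decay]
        results[decay] = (mean(vals), stdev(vals))
    return results


# results = sweep(decays=[0.80, 0.85, 0.90], seeds=range(10))
```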
Describe alternatives you have considered
An alternative would be some guidance on how to manually tune the parameters - maybe another example, or something based on an existing one.
The Permutation Flow Shop example already contains a section "Tuning the destroy rate". There could be a similar section for TSP called "Tuning the decay rate".
Additional context