Dear authors,
Thanks a lot for your insightful work. I really enjoyed reading it.
I am doing research in ML4CO and have been focusing on probabilistic-method-based approaches. Roughly speaking, we interpret the output of a neural network as a distribution over solutions and directly compute the expected objective under that distribution. Because this expectation is directly interpretable, probabilistic-method-based approaches come with theoretical bounds on the final solution quality.
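To make the setting concrete, here is a minimal sketch of what I mean by computing the expected objective in closed form. It assumes a maximum-independent-set-style objective and a product Bernoulli distribution over the nodes (one probability per node, e.g. a sigmoid over the network's logits); the name `expected_objective` and the penalty weight `beta` are placeholders for this illustration, not the exact code we use:

```python
import torch

def expected_objective(p, edge_index, beta=1.0):
    """Closed-form expectation of an MIS-style objective under a product
    Bernoulli distribution with per-node probabilities p (shape: [num_nodes]).

    E[|S|] - beta * E[# edges inside S]
        = sum_i p_i - beta * sum_{(i,j) in E} p_i * p_j
    """
    src, dst = edge_index                          # edge_index: LongTensor [2, num_edges]
    expected_size = p.sum()                        # expected number of selected nodes
    expected_violations = (p[src] * p[dst]).sum()  # expected number of intra-set edges
    return expected_size - beta * expected_violations

# Training maximizes this expectation, e.g.
#   p = torch.sigmoid(logits)
#   loss = -expected_objective(p, edge_index)
```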
At the same time, I largely agree with your paper's conclusion that current ML4CO methods still need post-processing to obtain good final solutions. In the probabilistic-method-based case we have been using the following post-processing methods, ordered from simple to sophisticated: (1) direct sampling from the learned distribution, (2) entry-wise derandomization, and (3) greedy derandomization (see the sketch below). Compared to MCTS, I believe these three post-processing methods are still relatively simple.
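Here is a rough sketch of the first two post-processing steps on top of the `expected_objective` helper above; this is just an illustrative assumption of how one might implement them, not our exact code. Greedy derandomization differs from the entry-wise variant only in that it also chooses which coordinate to fix next (by the same conditional-expectation criterion) instead of following a fixed order:

```python
import torch

def sample_best(p, edge_index, num_samples=64, beta=1.0):
    """(1) Direct sampling: draw hard 0/1 samples from the product Bernoulli
    distribution and keep the one with the best (now deterministic) objective."""
    samples = torch.bernoulli(p.expand(num_samples, -1))
    scores = torch.stack([expected_objective(s, edge_index, beta) for s in samples])
    return samples[scores.argmax()]

def derandomize_entrywise(p, edge_index, beta=1.0):
    """(2) Entry-wise derandomization (method of conditional expectation):
    fix each coordinate, in index order, to the value in {0, 1} with the
    larger conditional expected objective; the expectation never decreases."""
    x = p.clone()
    for i in range(x.numel()):
        x[i] = 0.0
        score0 = expected_objective(x, edge_index, beta)
        x[i] = 1.0
        score1 = expected_objective(x, edge_index, beta)
        x[i] = 1.0 if score1 >= score0 else 0.0
    return x
```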
What is your opinion on this? Do you think it would be a promising direction to explore further?
Feel free to check out my recent work and the references therein :D
Best,
Fanchen