On probabilistic-method-based ML4CO methods #3

Open
bokveizen opened this issue Jul 15, 2024 · 0 comments

bokveizen commented Jul 15, 2024

Dear authors,

Thanks a lot for your insightful work. I really enjoyed reading it.

I am doing research in ML4CO and have been focusing on probabilistic-method-based methods. Roughly speaking, these methods interpret the output of a neural network as a distribution over solutions and directly compute (and optimize) the expected objective under that distribution. Because of this directly interpretable nature, probabilistic-method-based methods come with theoretical bounds on final solution quality.
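To make this concrete: for a problem like Max-Cut, the expected objective under a Bernoulli product distribution (one probability per node, e.g. from a sigmoid output head) has a simple closed form that can be evaluated, and differentiated, directly. A toy sketch; the instance and probabilities below are made up for illustration:

```python
import numpy as np

def expected_cut(p, edges):
    """Expected Max-Cut value when node u is assigned to side 1
    independently with probability p[u] (Bernoulli product distribution).
    An edge (u, v) is cut iff its endpoints land on different sides."""
    return sum(p[u] * (1 - p[v]) + p[v] * (1 - p[u]) for u, v in edges)

p = np.array([0.9, 0.1, 0.8])      # per-node probabilities (illustrative)
edges = [(0, 1), (1, 2), (0, 2)]   # a triangle
print(expected_cut(p, edges))      # 0.82 + 0.74 + 0.26 = 1.82
```

Since the expression is a polynomial in `p`, it can serve directly as a differentiable training loss, which is what makes the probabilistic view attractive.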

At the same time, I largely agree with the conclusion of your paper that current ML4CO methods still need post-processing to obtain good final solutions. In the probabilistic-method-based case, from simple to sophisticated, the following post-processing methods have been used: (1) direct sampling from the learned distribution, (2) entry-wise derandomization, and (3) greedy derandomization. Compared to MCTS, I believe these three post-processing methods are still relatively simple.
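For illustration, here is a minimal sketch of methods (2) and (3) on a toy Max-Cut setup. Both follow the method of conditional expectations; the specific routines below are my own simplified renditions, not any particular paper's implementation:

```python
import numpy as np

def expected_cut(p, edges):
    # Expected cut size under independent Bernoulli node probabilities p.
    return sum(p[u] * (1 - p[v]) + p[v] * (1 - p[u]) for u, v in edges)

def entrywise_derandomize(p, edges):
    """(2) Entry-wise derandomization: fix entries one by one, in index
    order, each to the value (0 or 1) that does not decrease the
    conditional expected objective. The final cut is >= the initial
    expectation, which is what gives the quality guarantee."""
    p = np.asarray(p, dtype=float).copy()
    for i in range(len(p)):
        p[i] = 1.0
        val_one = expected_cut(p, edges)
        p[i] = 0.0
        val_zero = expected_cut(p, edges)
        p[i] = 1.0 if val_one >= val_zero else 0.0
    return p.astype(int)

def greedy_derandomize(p, edges):
    """(3) Greedy derandomization: at each step, among all still-fractional
    entries, fix the (entry, value) pair with the largest conditional
    expectation. Same guarantee, but order-adaptive."""
    p = np.asarray(p, dtype=float).copy()
    free = set(range(len(p)))
    while free:
        best = None  # (conditional expectation, index, choice)
        for i in free:
            old = p[i]
            for choice in (0.0, 1.0):
                p[i] = choice
                val = expected_cut(p, edges)
                if best is None or val > best[0]:
                    best = (val, i, choice)
            p[i] = old
        _, i, choice = best
        p[i] = choice
        free.remove(i)
    return p.astype(int)
```

Method (1), direct sampling, is simply drawing `x[u] ~ Bernoulli(p[u])` some number of times and keeping the best sample; it needs no conditional-expectation evaluations but carries only an in-expectation guarantee.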

What's your opinion on this? Do you think it would be a promising direction to explore further?

Feel free to check out my recent work and the references therein :D

Best,
Fanchen
