Configuration for 'eval' and 'gen', Sampling in Intensity-Free Models, and Compacted Event Times in Multi-Step Inference #13
Comments
Hi. For IFTPP, we have to follow the original authors' approach to sampling, which is in fact not compatible with our current framework. That is why the master branch has no such code yet. We are considering pushing it to a new branch in the future; for the moment, you can use the authors' code directly.

For multi-step sampling, we have indeed noticed similar things, but we have not found any bug yet (if you find one, please let us know). The only difference between our version and the original versions (e.g., https://github.com/ant-research/hypro_tpp/blob/main/hypro_tpp/lib/thinning.py and https://github.com/yangalan123/anhp-andtt/blob/master/anhp/esm/thinning.py) is that we perform batch-wise prediction. We are committed to closely testing this part of the code again.
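For readers unfamiliar with the thinning algorithm referenced above, here is a minimal, batched sketch of Ogata-style thinning for drawing the next event time from an intensity-based TPP. This is an illustration, not the repository's actual implementation; the `intensity_fn` interface, the `lambda_ub` upper bound, and all variable names are assumptions.

```python
import torch

def thinning_sample_next(intensity_fn, t_start, lambda_ub, num_exp=100):
    """Draw one next-event time per sequence in a batch via thinning.

    intensity_fn: maps absolute times [batch, num_exp] to total intensities
                  of the same shape (an assumed interface, for illustration).
    t_start:      [batch] last observed event times.
    lambda_ub:    [batch] upper bounds on the total intensity after t_start.
    """
    # Candidate inter-arrival gaps drawn i.i.d. from Exp(lambda_ub).
    gaps = torch.distributions.Exponential(
        lambda_ub.unsqueeze(-1).expand(-1, num_exp)).sample()
    # Candidate absolute times are the *cumulative* sums of the gaps, i.e. a
    # homogeneous Poisson process with rate lambda_ub started at t_start.
    candidates = t_start.unsqueeze(-1) + torch.cumsum(gaps, dim=-1)
    # Accept each candidate with probability lambda(t) / lambda_ub.
    lam = intensity_fn(candidates)
    accept = torch.rand_like(lam) * lambda_ub.unsqueeze(-1) < lam
    # Index of the first accepted candidate per sequence (= the number of
    # leading rejections); if none is accepted, crudely fall back to the last.
    first = (~accept).long().cumprod(dim=-1).sum(dim=-1).clamp(max=num_exp - 1)
    return candidates.gather(-1, first.unsqueeze(-1)).squeeze(-1)
```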
We will look at the multi-step generation code and get back to you shortly.
Thank you for the response. I have been trying to identify the cause of the consistently small values of the sampled delta times over the past week. I discovered that there is no accumulation step when sampling from Exp(lambda*) in the thinning algorithm. In my opinion, the sampled dt values should be accumulated.
After the above line, I think we have to add the code line below.
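For concreteness, the proposed addition (quoted verbatim in a later reply) is the cumulative sum below; the comment and the assumption that `exp_numbers` holds the freshly drawn Exp(lambda*) samples are illustrative, not the repository's exact code:

```python
# exp_numbers: i.i.d. Exp(lambda*) draws, shape [..., num_samples].
# Accumulating them turns independent gaps into increasing candidate
# offsets from the last event, as in standard thinning.
exp_numbers = torch.cumsum(exp_numbers, dim=-1)
```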
Could you please review this once?
Thanks for pointing this out. Let me test it.
I added the line `exp_numbers = torch.cumsum(exp_numbers, dim=-1)`, but I found that the results become even more clustered. I am still working on this issue and will get back to you when I fix it.
Hello. Thank you for your efforts in TPP benchmarking.
I have a few questions.
Some models have all of the train, eval, and gen sections in examples/configs/experiment_config.yaml, but for models that lack one of them (eval or gen), how should this be handled?
For Intensity-Free (IFTPP), it seems that thinning is not used because the intensity is not modeled. In that case, how should sampling be done when only the density is known? Looking at the EasyTPP paper, it seems you have addressed this somehow, given the reported RMSE and ACC.
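For context, IFTPP (Shchur et al., "Intensity-Free Learning of Temporal Point Processes") models the inter-event time density directly as a log-normal mixture, so sampling needs no thinning: one can sample a mixture component and then a log-normal draw. A minimal sketch, assuming the decoder exposes per-component logits, means, and log standard deviations (the names are illustrative, not EasyTPP's API):

```python
import torch

def sample_iftpp_dt(log_weights, means, log_stds, num_samples=1):
    """Sample inter-event times from a log-normal mixture.

    log_weights, means, log_stds: [batch, K] mixture parameters in
    log-time space (assumed names). Returns dt of shape [batch, num_samples].
    """
    # Pick a mixture component per sample from the categorical weights.
    comp = torch.distributions.Categorical(
        logits=log_weights).sample((num_samples,))   # [num_samples, batch]
    comp = comp.transpose(0, 1)                      # [batch, num_samples]
    mu = means.gather(-1, comp)                      # [batch, num_samples]
    sigma = log_stds.gather(-1, comp).exp()          # [batch, num_samples]
    # Reparameterized draw: log dt ~ Normal(mu, sigma), so dt = exp(.) > 0.
    z = torch.randn_like(mu)
    return torch.exp(mu + sigma * z)
```

For point predictions (e.g., RMSE), the closed-form mixture mean, sum_k w_k * exp(mu_k + sigma_k^2 / 2), can also be used instead of Monte Carlo draws.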
In the case of multi-step inference, it seems that most events are clustered around the initial event. Is this a natural phenomenon? I observed the same phenomenon in both ODETPP and NHP.
I would really appreciate it if you could provide answers.