Document manual MCMC processing #113
I am working on helper functions (partly pushed), although they are not yet incorporated in the TAGM vignette. Here are some suggestions for the vignette:
The number of outliers should be similar for each chain, but what number or proportion is acceptable? How should my run compare to 360? Can I, for example, have 2000 for similar training parameters?
Reminder: any changes need to be done in …
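One concrete way to act on the outlier question above is to compare per-chain outlier proportions rather than raw counts, and flag any chain whose proportion deviates strongly from the rest. This is a hypothetical sketch in Python (the package itself is R); the function name, the 25% tolerance, and the example counts are illustrative assumptions, not part of pRoloc.

```python
def outlier_proportions(counts, total, tol=0.25):
    """Compare per-chain outlier proportions.

    counts: number of outlier proteins reported by each chain.
    total:  number of proteins processed per chain.
    tol:    allowed relative deviation from the mean proportion
            (0.25 is an arbitrary illustrative choice).
    Returns the proportions and whether all chains agree within tol.
    """
    props = [c / total for c in counts]
    mean_p = sum(props) / len(props)
    ok = all(abs(p - mean_p) <= tol * mean_p for p in props)
    return props, ok

# Chains with similar outlier counts agree...
props, ok = outlier_proportions([360, 380, 345, 372], total=5000)
print(ok)  # True

# ...while a chain reporting 2000 outliers where the others report ~360
# signals that the chains are not exploring the same posterior
props2, ok2 = outlier_proportions([360, 2000, 345, 372], total=5000)
print(ok2)  # False
```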
Yes, some clarification is needed. Thanks for adding the helper functions, though I think we should rename … I think the Rhat statistic will need more detail. I thought about adding some more diagnostic tests too. A review of most methods is given here: http://www.stat.columbia.edu/~gelman/research/published/brooksgelman2.pdf They argue that 1.2 for the statistic is sufficient; however, this is based on a heuristic argument rather than a "proof". I think if it is less than 1.1 no questions would be asked. I have a personal preference for 1.05.
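For reference, the Gelman-Rubin potential scale reduction factor (Rhat) discussed above can be computed directly from the chains. The sketch below is a generic Python illustration of the statistic itself, not the pRoloc implementation (in R one would use coda::gelman.diag); the simulated chains are only there to exercise the thresholds mentioned above.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (Rhat) for one scalar parameter.

    chains: array of shape (m, n) -- m chains, n post-burn-in samples each.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # Between-chain variance B and mean within-chain variance W
    B = n / (m - 1) * ((chain_means - chain_means.mean()) ** 2).sum()
    W = chains.var(axis=1, ddof=1).mean()
    # Pooled estimate of the posterior variance
    var_plus = (n - 1) / n * W + B / n
    return float(np.sqrt(var_plus / W))

rng = np.random.default_rng(42)
# Four well-mixed chains drawn from the same distribution: Rhat near 1
good = rng.normal(0.0, 1.0, size=(4, 1000))
print(gelman_rubin(good) < 1.05)  # True

# Shift one chain's mean: Rhat rises well above the 1.1 / 1.2 thresholds
bad = good.copy()
bad[0] += 5.0
print(gelman_rubin(bad) > 1.1)  # True
```

The cut-offs mirror the discussion above: below 1.2 is the Brooks and Gelman heuristic, below 1.1 is commonly accepted, and below 1.05 is the more conservative choice.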
Thanks - replaced thin with burn - it would be useful to review these names anyway later. Feel free to update the …
I'll write a function that pools the 4 chains into 1, so that all chains are used. This will give better quality results.
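Pooling chains, as proposed above, amounts to discarding burn-in (and optionally thinning) each chain and then concatenating the retained samples. A minimal Python sketch, assuming equal-length scalar chains; the actual pooling function discussed in this thread (see #116) is the authoritative version, and the name here is only illustrative.

```python
import numpy as np

def pool_chains(chains, burn=0, thin=1):
    """Pool several MCMC chains into a single sample.

    chains: list of 1-D arrays, one per chain, for the same parameter.
    burn:   number of leading samples discarded from each chain.
    thin:   keep every `thin`-th sample after burn-in.
    """
    kept = [np.asarray(chain)[burn::thin] for chain in chains]
    return np.concatenate(kept)

# Four toy "chains" of 10 samples each; drop 2 burn-in samples, thin by 2
chains = [np.arange(10.0) for _ in range(4)]
pooled = pool_chains(chains, burn=2, thin=2)
print(len(pooled))  # 16: four chains x four retained samples
```

Pooling should only happen after convergence diagnostics pass, otherwise a non-converged chain contaminates the combined sample.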
The pooling function is not merged (see #116). Do you still need to update …
By the way, it's probably worth simplifying the names of these helper functions later to …
Some code is duplicated, but these functions might change slightly in the future, so I don't want one to depend on the other and have it come back to bite us later. Yes, I plan on updating the documentation. I want to write a few more helper functions so that the analysis is as streamlined as possible but still flexible.
Re code duplication: the point of having one function use the other is precisely to avoid the manual processing (including pooling) being different from running …
I've added the helper functions I think we need, including access to more diagnostics. Should I re-write tagmMcmcProcess to use the pool function? It would be good to do some code review and polish these functions. From here I think I can write up some more detailed documentation. More functions may need to be added, but they are not obvious to me at the moment.
Thanks, I'll have a look.
In the future, if you want an explicit review of the code (and I think it's generally a good thing to do), I suggest you clone the repo, send a pull request, and ask for an explicit review as part of the merging process. This helps to track the review and the resulting changes and updates better.
Hi @ococrook - I see it's already on the list above to add to the vignette, but I second @lgatto's comment about needing more information about the Rhat statistic. I am using TAGM for the first time and looking at the results from Claire's data:

> gelman.diag(out)
Potential scale reduction factors:

     Point est. Upper C.I.
[1,]       1.07        1.2

I have no idea if this seems okay; according to your reference in the comment above, if the point estimate is < 1.2, this is okay? In the vignette you just say around 1. Thanks!
Hi @lmsimp !
Currently there's the tagmMcmcProcess function, but we need to document in detail how to process the chains manually (before starting with the GUI). I would suggest adding a section to the vignette. That section won't be reproducible, because a full MCMCParam object, with four (or more) full chains, is too large to store with the package. What we can do is put that object on a server so that people can download it manually and reproduce the results. The code chunks and the outputs will have to be copied from the console, unfortunately. @ococrook - could you start this, please?