-
Hi all, Apologies upfront - this question is mostly for my own curiosity, but it also relates to comments I've seen in the code about the KL transform as a "yet to do" feature. My understanding is that the KL transform is a better alternative to log-transform and/or normalizing by standard deviation when using inversion approaches that leverage dimensional reduction - please correct me if I am wrong. Is this always the case, or is it more specific to parameters that are hyper-sensitive? Related to that, are ensemble methods less prone to the deleterious effects of hyper-sensitive parameters on the inversion process, or is it the opposite? Cheers,
-
Interesting that you bring this up - I haven't thought about KL in a while, but recently I started thinking about how it fits into the ensemble methods and ENSI schemes... The general idea is that the KL stuff in pyemu can be used as an alternative parameterization device to pilot points - a dimensional reduction compared to grid-scale that still lets you express plausible and expected patterns of heterogeneity (my understanding is that this is used a lot in the petroleum world).

In short, you get the prior parameter covariance matrix for a grid-scale property array that (hopefully) is described by a variogram. Then you do an eigen-solve on that cov matrix and use the eigenvectors as a projection matrix to go from standard normal deviates to a correlated property array - not unlike how we generate correlated realizations with a cov matrix. But you save this projection matrix and use it at runtime... what you estimate as parameters are multipliers against the columns of this projection matrix, and because the cov matrix is very sparse and because of the underlying correlation in the property array (as defined by the variogram), you only need to estimate a few 10s to 100s of column multipliers to recover most of the "energy" of the cov matrix (not dissimilar to how we truncate the jco with TSVD as a solution scheme). So even if you have 10,000 active cells in a model layer, you only need a few 10s to 100s of parameters to estimate this property and still allow heterogeneity to arise in plausible ways. So it's actually pretty similar to the super-parameter idea in SVD-assist, but working on the prior cov matrix instead of the jco.

The reason I personally stopped looking at this is that I got distracted with ensemble methods. But like I said, I'm starting to think more about it again... one of the GMDSI SVD notebooks has a lil demo with a slider that shows how the KL parameterization works if you like pictures (I know I do!). There are also prepare and apply functions in pyemu's helpers that implement this scheme. A rough numpy sketch of the projection idea is below. Let us know if you try it!
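For anyone who likes code as well as pictures, here's a minimal numpy-only sketch of the idea: build the prior covariance implied by an exponential variogram, eigen-solve it, truncate to the eigenpairs that carry most of the "energy", and use the scaled eigenvector columns as the runtime projection from a handful of standard-normal multipliers to a correlated grid-scale field. The grid size, variogram sill/range, truncation level, mean value, and the log10 back-transform are all illustrative assumptions I made up for the demo, not pyemu defaults - the prepare/apply helpers in pyemu wrap this kind of machinery with a proper geostatistical covariance.

```python
# Minimal KL-style parameterization sketch (illustrative values only).
import numpy as np

# 1. cell centers for a small single-layer grid (40 x 40 = 1600 grid-scale values)
nrow, ncol, delr = 40, 40, 10.0
y, x = np.meshgrid(np.arange(nrow) * delr, np.arange(ncol) * delr, indexing="ij")
xy = np.column_stack([x.ravel(), y.ravel()])

# 2. prior covariance implied by an exponential variogram (sill and range assumed)
sill, a = 1.0, 150.0
h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)  # pairwise separation distances
cov = sill * np.exp(-h / a)

# 3. eigen-solve the symmetric covariance matrix, largest eigenvalues first
eigvals, eigvecs = np.linalg.eigh(cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# 4. truncate: keep enough components to retain ~95% of the variance ("energy")
energy = np.cumsum(eigvals) / eigvals.sum()
ncomp = int(np.searchsorted(energy, 0.95)) + 1
print(f"{ncomp} of {cov.shape[0]} components retain 95% of the variance")

# 5. the runtime projection: a few standard-normal multipliers -> correlated field
proj = eigvecs[:, :ncomp] * np.sqrt(eigvals[:ncomp])   # save this once at setup time
multipliers = np.random.standard_normal(ncomp)          # these are the adjustable parameters
log_field = proj @ multipliers                           # correlated deviations about the mean
field = 10.0 ** (np.log10(5.0) + log_field).reshape(nrow, ncol)  # e.g. HK around 5 m/d
```

In an actual PEST(++) setup the `proj` matrix would be written to disk at setup time and the `multipliers` would be the estimated parameters, with the projection applied inside the forward-run script - which is exactly the "save the projection matrix and use it at runtime" part described above.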