diff --git a/h2o-docs/src/product/data-science/gbm.rst b/h2o-docs/src/product/data-science/gbm.rst
index abadc2a86db9..99e65aad103b 100644
--- a/h2o-docs/src/product/data-science/gbm.rst
+++ b/h2o-docs/src/product/data-science/gbm.rst
@@ -358,6 +358,19 @@ Metrics
Usage is illustrated in the Examples section.
+GBM Friedman and Popescu's H statistics
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can calculate Friedman and Popescu's H statistic to test for the presence of an interaction between specified variables.
+
+H varies from 0 to 1. It is 0 when the model exhibits no interaction between the specified variables, and it grows correspondingly larger as the interaction effect between them strengthens. NaN is returned if the computation is spoiled by weak main effects and rounding errors.
+
+This statistic can only be calculated for numerical variables. Missing values are supported.
+
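+For intuition, the pairwise statistic defined in Friedman and Popescu (2008) compares the joint partial dependence of two variables against the sum of their individual partial dependences (a sketch following the paper; the H2O implementation may differ in details):
+
+.. math::
+
+   H^{2}_{jk} = \frac{\sum_{i=1}^{n} \left[ \hat{F}_{jk}(x_{ij}, x_{ik}) - \hat{F}_{j}(x_{ij}) - \hat{F}_{k}(x_{ik}) \right]^{2}}{\sum_{i=1}^{n} \hat{F}^{2}_{jk}(x_{ij}, x_{ik})}
+
+where :math:`\hat{F}_{j}`, :math:`\hat{F}_{k}`, and :math:`\hat{F}_{jk}` are mean-centered partial dependence functions of the model. When these partial dependences are weak, both the numerator and the denominator approach zero, and rounding errors can spoil the ratio, which is the NaN case noted above.
+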
+Reference implementations: `Python `__ and `R `__
+
+You can see how it is used in the `Examples section <#examples>`__.
+
Examples
~~~~~~~~
@@ -394,6 +407,10 @@ Below is a simple example showing how to build a Gradient Boosting Machine model
# Extract feature interactions:
feature_interactions <- h2o.feature_interaction(pros_gbm)
+ # Get Friedman and Popescu's H statistics
+ h <- h2o.h(pros_gbm, prostate, c('DPROS','DCAPS'))
+ print(h)
+
.. code-tab:: python
@@ -424,6 +441,10 @@ Below is a simple example showing how to build a Gradient Boosting Machine model
# Extract feature interactions:
feature_interactions = pros_gbm.feature_interaction()
+ # Get Friedman and Popescu's H statistics
+ h = pros_gbm.h(prostate_train, ['DPROS','DCAPS'])
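+ # An H value near 0 indicates no interaction between DPROS and DCAPS in the
+ # model, while a value closer to 1 indicates a stronger interaction effect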
+ print(h)
+
.. code-tab:: scala
@@ -481,6 +502,8 @@ York, 2001. `__
`Nee, Daniel, "Calibrating Classifier Probabilities", 2014 `__
+`Friedman, Jerome H. and Popescu, Bogdan E., "Predictive learning via rule ensembles", Ann. Appl. Stat. 2:916-954, 2008 `__
+
FAQ
~~~
diff --git a/h2o-docs/src/product/data-science/xgboost.rst b/h2o-docs/src/product/data-science/xgboost.rst
index 960952be1346..55d34687074b 100644
--- a/h2o-docs/src/product/data-science/xgboost.rst
+++ b/h2o-docs/src/product/data-science/xgboost.rst
@@ -373,6 +373,19 @@ Metrics
Usage is illustrated in the Examples section.
+XGBoost Friedman and Popescu's H statistics
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can calculate Friedman and Popescu's H statistic to test for the presence of an interaction between specified variables.
+
+H varies from 0 to 1. It is 0 when the model exhibits no interaction between the specified variables, and it grows correspondingly larger as the interaction effect between them strengthens. NaN is returned if the computation is spoiled by weak main effects and rounding errors.
+
+This statistic can only be calculated for numerical variables. Missing values are supported.
+
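+As an illustrative sketch (the helper below is hypothetical, not part of the H2O API), you can combine the ``model.h()`` call shown in the Examples section with a loop over variable pairs to rank candidate interactions, assuming ``h()`` returns a single numeric value:
+
+.. code-block:: python
+
+    from itertools import combinations
+
+    def rank_pairwise_h(model, frame, variables):
+        """Hypothetical helper: score every variable pair by its H statistic."""
+        scores = []
+        for v1, v2 in combinations(variables, 2):
+            # model.h() is the call illustrated in the Examples section;
+            # we assume it returns a single float
+            scores.append(((v1, v2), model.h(frame, [v1, v2])))
+        # Strongest interactions first; NaN results (spoiled computations) sort last
+        return sorted(scores, key=lambda s: (s[1] != s[1], -s[1]))
+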
+Reference implementations: `Python `__ and `R `__
+
+You can see how it is used in the `Examples section <#examples>`__.
+
Examples
~~~~~~~~
@@ -415,6 +428,10 @@ Below is a simple example showing how to build a XGBoost model.
# Extract feature interactions:
feature_interactions = h2o.feature_interaction(titanic_xgb)
+ # Get Friedman and Popescu's H statistics
+ h <- h2o.h(titanic_xgb, train, c('fare','age'))
+ print(h)
+
.. code-tab:: python
@@ -451,6 +468,10 @@ Below is a simple example showing how to build a XGBoost model.
# Extract feature interactions:
feature_interactions = titanic_xgb.feature_interaction()
+ # Get Friedman and Popescu's H statistics
+ h = titanic_xgb.h(train, ['fare','age'])
+ print(h)
+
Note
''''
@@ -507,4 +528,6 @@ References
- Mitchell R, Frank E. (2017) Accelerating the XGBoost algorithm using GPU computing. PeerJ Preprints 5:e2911v1 `https://doi.org/10.7287/peerj.preprints.2911v1 `__
+- `Friedman JH, Popescu BE. (2008) Predictive learning via rule ensembles. Ann. Appl. Stat. 2:916-954 `__
+