diff --git a/.nojekyll b/.nojekyll
index 732e178..135dcea 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-6fee117f
\ No newline at end of file
+e2536b15
\ No newline at end of file
diff --git a/MarineSDMs.docx b/MarineSDMs.docx
index 69f1af5..e76aed0 100644
Binary files a/MarineSDMs.docx and b/MarineSDMs.docx differ
diff --git a/MarineSDMs.pdf b/MarineSDMs.pdf
index 4b77725..2fe051e 100644
Binary files a/MarineSDMs.pdf and b/MarineSDMs.pdf differ
diff --git a/ensemble.html b/ensemble.html
index 5fd51be..4a080ea 100644
--- a/ensemble.html
+++ b/ensemble.html
@@ -858,7 +858,7 @@
-
+
diff --git a/evaluate.html b/evaluate.html
index ea1badb..7753580 100644
--- a/evaluate.html
+++ b/evaluate.html
@@ -396,7 +396,13 @@
@@ -457,10 +463,17 @@
Figure 11.1: ROC curve showing the true positive rate vs. the false positive rate as a function of the changing threshold value (rainbow colors). Source: ROCR: visualizing classifier performance in R
+From the ROC curve, the area under the curve (AUC) is calculated as a measure of the model’s ability to distinguish between presence and absence. AUC values range from 0 to 1: 0.5 is no better than random, and 1 is perfect discrimination.
+
+
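The ROC/AUC calculation added above can be sketched directly: sweep candidate thresholds over the predicted scores, record the (false positive rate, true positive rate) pair at each threshold, then integrate under the resulting curve. A minimal Python sketch (illustrative only, not the book's R/ROCR code; the `scores` and `labels` vectors are made-up example data):

```python
def roc_auc(scores, labels):
    """Build ROC points and compute AUC for continuous scores and binary 0/1 labels."""
    # Each unique score, taken in descending order, serves as a candidate threshold.
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)               # number of observed presences
    neg = len(labels) - pos         # number of observed absences
    points = [(0.0, 0.0)]           # (false positive rate, true positive rate)
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    points.append((1.0, 1.0))
    # Trapezoidal integration over the ROC curve gives the AUC.
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

# Hypothetical model output: continuous scores with observed presence/absence.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   1,    0,   0,   1,   0,   0]
_, auc = roc_auc(scores, labels)
print(round(auc, 3))  # prints 0.8
```

In practice the book's R workflow would use a package such as ROCR for this; the sketch just makes the threshold-sweep logic behind Figure 11.1 concrete.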
diff --git a/explorations.html b/explorations.html
index b7158ad..53be0c5 100644
--- a/explorations.html
+++ b/explorations.html
@@ -450,7 +450,7 @@
-
+
-
+
-
+
-
+
-
+
diff --git a/explorations/obis-top-spp-by-class.html b/explorations/obis-top-spp-by-class.html
index 8c87641..53304e2 100644
--- a/explorations/obis-top-spp-by-class.html
+++ b/explorations/obis-top-spp-by-class.html
@@ -891,7 +891,7 @@
-
+
diff --git a/explorations/sdm-1_predicts.html b/explorations/sdm-1_predicts.html
index b9c2a3d..7a07067 100644
--- a/explorations/sdm-1_predicts.html
+++ b/explorations/sdm-1_predicts.html
@@ -897,7 +897,7 @@
-
+
diff --git a/mosaic.html b/mosaic.html
index 378e8a3..8f69680 100644
--- a/mosaic.html
+++ b/mosaic.html
@@ -858,7 +858,7 @@
-
+
diff --git a/occ.html b/occ.html
index b594131..a9ac389 100644
--- a/occ.html
+++ b/occ.html
@@ -897,7 +897,7 @@
-
+
diff --git a/search.json b/search.json
index d945996..06eec2a 100644
--- a/search.json
+++ b/search.json
@@ -147,11 +147,11 @@
"text": "The prediction step applies the environmental relationships from the fitted model to a new set of data, typically the seascape of interest, and perhaps with some sort of temporal snapshot (e.g., climatic annual or monthly average)."
},
{
- "objectID": "evaluate.html",
- "href": "evaluate.html",
+ "objectID": "evaluate.html#more-resources",
+ "href": "evaluate.html#more-resources",
"title": "11 Evaluate",
- "section": "",
-    "text": "Model evaluation uses the test data set aside during the earlier splitting to evaluate how well the model predicts the response of presence or absence. Since the test response data is binary [0,1] and the prediction from the model is continuous [0-1], a threshold needs to be applied to convert the continuous response to binary. This is often performed through a Receiver Operating Characteristic (ROC) curve (Figure 11.1), which evaluates at each threshold the confusion matrix (Table 11.1).\n\n\nTable 11.1: Confusion matrix to understand predicted versus observed.\n\n\n\n\nPredicted\n\n\n\n\n\n0 (absence)\n1 (presence)\n\n\nObserved\n0 (absence)\nTrue absence\nFalse presence\n\n\n\n1 (presence)\nFalse absence\nTrue presence\n\n\n\n\n\n\n\nFigure 11.1: ROC curve showing the true positive rate vs. the false positive rate as a function of the changing threshold value (rainbow colors). Source: ROCR: visualizing classifier performance in R"
+ "section": "11.1 More Resources",
+ "text": "11.1 More Resources\n\nClassification: ROC Curve and AUC | Machine Learning | Google for Developers"
},
{
"objectID": "combine.html",
diff --git a/software.html b/software.html
index 1cc1d4f..bc5d728 100644
--- a/software.html
+++ b/software.html
@@ -432,12 +432,12 @@
-
+