From 65133b4d53dd7744162300aa9c4fe9763efabbd3 Mon Sep 17 00:00:00 2001
From: Olivier Sprangers
Date: Tue, 30 Apr 2024 14:36:46 +0200
Subject: [PATCH] fix_rendering_first_try

---
 .../loss_function_finetuning.ipynb            | 75 ++++++++++---------
 ...antification_with_quantile_forecasts.ipynb | 18 ++---
 2 files changed, 50 insertions(+), 43 deletions(-)

diff --git a/nbs/docs/4_tutorials/loss_function_finetuning.ipynb b/nbs/docs/4_tutorials/loss_function_finetuning.ipynb
index fcd1ab3b..43261be3 100644
--- a/nbs/docs/4_tutorials/loss_function_finetuning.ipynb
+++ b/nbs/docs/4_tutorials/loss_function_finetuning.ipynb
@@ -11,15 +11,16 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "When fine-tuning, the model trains on your dataset to tailor its predictions to your particular scenario. As such, it is possible to specify the loss function used during fine-tuning.\n",
- "\n",
+ "When fine-tuning, the model trains on your dataset to tailor its predictions to your particular scenario. As such, it is possible to specify the loss function used during fine-tuning.\\\n",
+ "\\\n",
 "Specifically, you can choose from:\n",
- "- `\"default\"` - a proprietary loss function that is robust to outliers\n",
- "- `\"mae\"` - mean absolute error\n",
- "- `\"mse\"` - mean squared error\n",
- "- `\"rmse\"` - root mean squared error\n",
- "- `\"mape\"` - mean absolute percentage error\n",
- "- `\"smape\"` - symmetric mean absolute percentage error"
+ "\n",
+ "* `\"default\"` - a proprietary loss function that is robust to outliers\n",
+ "* `\"mae\"` - mean absolute error\n",
+ "* `\"mse\"` - mean squared error\n",
+ "* `\"rmse\"` - root mean squared error\n",
+ "* `\"mape\"` - mean absolute percentage error\n",
+ "* `\"smape\"` - symmetric mean absolute percentage error"
 ]
 },
 {
@@ -60,7 +61,7 @@
 ],
 "source": [
 "#| echo: false\n",
- "colab_badge('docs/tutorials/11_loss_function_finetuning')"
+ "colab_badge('docs/4_tutorials/loss_function_finetuning')"
 ]
 },
 {
@@ -137,8 +138,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "Let's fine-tune the model on a dataset using the mean absolute error (MAE).\n",
- "\n",
+ "Let's fine-tune the model on a dataset using the mean absolute error (MAE).\\\n",
+ "\\\n",
 "For that, we simply pass the appropriate string representing the loss function to the `finetune_loss` parameter of the `forecast` method."
 ]
 },
@@ -233,9 +234,15 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## 3. Fine-tuning with Mean Absolute Error\n",
- "Let's fine-tune the model on a dataset using the Mean Absolute Error (MAE).\n",
- "\n",
+ "## 3. Fine-tuning with Mean Absolute Error"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's fine-tune the model on a dataset using the Mean Absolute Error (MAE).\\\n",
+ "\\\n",
 "For that, we simply pass the appropriate string representing the loss function to the `finetune_loss` parameter of the `forecast` method."
 ]
 },
@@ -294,12 +301,12 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "Now, depending on your data, you will use a specific error metric to accurately evaluate your forecasting model's performance.\n",
- "\n",
- "Below is a non-exhaustive guide on which metric to use depending on your use case.\n",
- "\n",
- "**Mean absolute error (MAE)**\n",
- "\n",
+ "Now, depending on your data, you will use a specific error metric to accurately evaluate your forecasting model's performance.\\\n",
+ "\\\n",
+ "Below is a non-exhaustive guide on which metric to use depending on your use case.\\\n",
+ "\\\n",
+ "**Mean absolute error (MAE)**\\\n",
+ "\\\n",
 "\n",
 "\n",
 "- Robust to outliers\n",
@@ -307,8 +314,8 @@
 "- You care equally about all error sizes\n",
 "- Same units as your data\n",
 "\n",
- "**Mean squared error (MSE)**\n",
- "\n",
+ "**Mean squared error (MSE)**\\\n",
+ "\\\n",
 "\n",
 "\n",
 "- You want to penalize large errors more than small ones\n",
@@ -316,15 +323,15 @@
 "- Used when large errors must be avoided\n",
 "- *Not* the same units as your data\n",
 "\n",
- "**Root mean squared error (RMSE)**\n",
- "\n",
+ "**Root mean squared error (RMSE)**\\\n",
+ "\\\n",
 "\n",
 "\n",
 "- Brings the MSE back to original units of data\n",
 "- Penalizes large errors more than small ones\n",
 "\n",
- "**Mean absolute percentage error (MAPE)**\n",
- "\n",
+ "**Mean absolute percentage error (MAPE)**\\\n",
+ "\\\n",
 "\n",
 "\n",
 "- Easy to understand for non-technical stakeholders\n",
@@ -332,16 +339,16 @@
 "- Heavier penalty on positive errors over negative errors\n",
 "- To be avoided if your data has values close to 0 or equal to 0\n",
 "\n",
- "**Symmmetric mean absolute percentage error (sMAPE)**\n",
- "\n",
+ "**Symmetric mean absolute percentage error (sMAPE)**\\\n",
+ "\\\n",
 "\n",
 "\n",
 "- Fixes bias of MAPE\n",
- "- Equally senstitive to over and under forecasting\n",
+ "- Equally sensitive to over- and under-forecasting\n",
 "- To be avoided if your data has values close to 0 or equal to 0\n",
 "\n",
- "With TimeGPT, you can choose your loss function during fine-tuning as to maximize the model's performance metric for your particular use case. \n",
- "\n",
- "Let's run a small experiment to see how each loss function improves their associated metric when compared to the default setting."
+ "With TimeGPT, you can choose your loss function during fine-tuning so as to maximize the model's performance metric for your particular use case.\\\n",
+ "\\\n",
+ "Let's run a small experiment to see how each loss function improves its associated metric when compared to the default setting."
 ]
 },
@@ -650,10 +657,10 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "From the table above, we can see that using a specific loss function during fine-tuning will improve its associated error metric when compared to the default loss function.\n",
- "\n",
- "In this example, using the MAE as the loss function improves the metric by 8.54% when compared to using the default loss function.\n",
- "\n",
+ "From the table above, we can see that using a specific loss function during fine-tuning will improve its associated error metric when compared to the default loss function.\\\n",
+ "\\\n",
+ "In this example, using the MAE as the loss function improves the metric by 8.54% when compared to using the default loss function.\\\n",
+ "\\\n",
 "That way, depending on your use case and performance metric, you can use the appropriate loss function to maximize the accuracy of the forecasts."
 ]
 }
diff --git a/nbs/docs/4_tutorials/uncertainty_quantification_with_quantile_forecasts.ipynb b/nbs/docs/4_tutorials/uncertainty_quantification_with_quantile_forecasts.ipynb
index 31a4846f..bc888219 100644
--- a/nbs/docs/4_tutorials/uncertainty_quantification_with_quantile_forecasts.ipynb
+++ b/nbs/docs/4_tutorials/uncertainty_quantification_with_quantile_forecasts.ipynb
@@ -13,14 +13,14 @@
 "id": "0f73b521-05f2-4c53-b44c-d988bf8fe7b9",
 "metadata": {},
 "source": [
- "In forecasting, we are often interested in a distribution of predictions rather than only a point prediction, because we want to have a notion of the uncertainty around the forecast.\n",
- "\n",
- "To this end, we can create _quantile forecasts_.\n",
- "\n",
- "Quantile forecasts have an intuitive interpretation, as they present a specific percentile of the forecast distribution. This allows us to make statements such as 'we expect 90% of our observations of air passengers to be above 100'. This approach is helpful for planning under uncertainty, providing a spectrum of possible future values and helping users make more informed decisions by considering the full range of potential outcomes. \n",
- "\n",
- "With TimeGPT, we can create a distribution of forecasts, and extract the quantile forecasts for a specified percentile. For instance, the 25th and 75th quantiles give insights into the lower and upper quartiles of expected outcomes, respectively, while the 50th quantile, or median, offers a central estimate.\n",
- "\n",
+ "In forecasting, we are often interested in a distribution of predictions rather than only a point prediction, because we want to have a notion of the uncertainty around the forecast.\\\n",
+ "\\\n",
+ "To this end, we can create _quantile forecasts_.\\\n",
+ "\\\n",
+ "Quantile forecasts have an intuitive interpretation, as they present a specific percentile of the forecast distribution. This allows us to make statements such as 'we expect 90% of our observations of air passengers to be above 100'. This approach is helpful for planning under uncertainty, providing a spectrum of possible future values and helping users make more informed decisions by considering the full range of potential outcomes.\\\n",
+ "\\\n",
+ "With TimeGPT, we can create a distribution of forecasts, and extract the quantile forecasts for a specified percentile. For instance, the 25th and 75th quantiles give insights into the lower and upper quartiles of expected outcomes, respectively, while the 50th quantile, or median, offers a central estimate.\\\n",
+ "\\\n",
 "TimeGPT uses [conformal prediction](https://en.wikipedia.org/wiki/Conformal_prediction) to produce the quantiles."
 ]
 },
@@ -65,7 +65,7 @@
 ],
 "source": [
 "#| echo: false\n",
- "colab_badge('docs/tutorials/10_uncertainty_quantification_with_quantile_forecasts')"
+ "colab_badge('docs/4_tutorials/uncertainty_quantification_with_quantile_forecasts')"
 ]
 },
 {