diff --git a/nbs/docs/getting-started/1_introduction.ipynb b/nbs/docs/getting-started/1_introduction.ipynb
index fd31ebea..f9082981 100644
--- a/nbs/docs/getting-started/1_introduction.ipynb
+++ b/nbs/docs/getting-started/1_introduction.ipynb
@@ -52,7 +52,7 @@
"\n",
"Self-attention, the revolutionary concept introduced by the paper [Attention is all you need](https://arxiv.org/abs/1706.03762), is the basis of this foundation model. TimeGPT model is not based on any existing large language model(LLM). Instead, it is independently trained on a vast amount of time series data, and the large transformer model is designed to minimize the forecasting error.\n",
"\n",
- "\n",
+ "\n",
"\n",
"The architecture consists of an encoder-decoder structure with multiple layers, each with residual connections and layer normalization. Finally, a linear layer maps the decoder’s output to the forecasting window dimension. The general intuition is that attention-based mechanisms are able to capture the diversity of past events and correctly extrapolate potential future distributions."
]
diff --git a/nbs/docs/getting-started/7_why_timegpt.ipynb b/nbs/docs/getting-started/7_why_timegpt.ipynb
index 0c4add8b..afb4d2c6 100644
--- a/nbs/docs/getting-started/7_why_timegpt.ipynb
+++ b/nbs/docs/getting-started/7_why_timegpt.ipynb
@@ -761,7 +761,7 @@
"#### Benchmark Results\n",
"For a more comprehensive dive into model accuracy and performance, explore our [Time Series Model Arena](https://github.com/Nixtla/nixtla/tree/main/experiments/foundation-time-series-arena)! TimeGPT continues to lead the pack with exceptional performance across benchmarks! 🌟\n",
"\n",
- ""
+ ""
]
},
{
diff --git a/nbs/assets/timeseries_model_arena.png b/nbs/img/timeseries_model_arena.png
similarity index 100%
rename from nbs/assets/timeseries_model_arena.png
rename to nbs/img/timeseries_model_arena.png