
Commit ac08a8f
remove spaces in single-line equations
jmoralez committed Dec 6, 2023
1 parent 3e0e507 commit ac08a8f
Showing 17 changed files with 87 additions and 64 deletions.
23 changes: 23 additions & 0 deletions action_files/clean_equations
@@ -0,0 +1,23 @@
#!/usr/bin/env python3
import re
from pathlib import Path

from nbdev.clean import process_write

# inline math whose delimiters enclose leading or trailing whitespace
pat = re.compile(r'\$\s*([^\$]+?)\s*\$')

def clean_equations(nb):
    for cell in nb['cells']:
        if cell['cell_type'] != 'markdown':
            continue
        # strip the padding inside the delimiters: `$ x $` -> `$x$`
        # (mutates the notebook dict in place; process_write writes it back)
        cell['source'] = [pat.sub(r'$\1$', line) for line in cell['source']]


if __name__ == '__main__':
    repo_root = Path(__file__).parents[1]
    for nb in (repo_root / 'nbs' / 'docs').rglob('*.ipynb'):
        process_write(warn_msg='Failed to clean_nb', proc_nb=clean_equations, f_in=nb)
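For context, a quick sketch of what the regex does to a sample markdown line (the sample text is made up for illustration):

import re

pat = re.compile(r'\$\s*([^\$]+?)\s*\$')

line = r'the mean $ E(X_t) = 0 $ and the variance $\sigma^2 $'
print(pat.sub(r'$\1$', line))
# -> the mean $E(X_t) = 0$ and the variance $\sigma^2$
# note: every `$` is treated as a delimiter, so an escaped dollar like `\$`
# inside math is not distinguished from a closing delimiter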
2 changes: 1 addition & 1 deletion nbs/docs/models/ARCH.ipynb
@@ -136,7 +136,7 @@
"$$\n",
"\n",
"Explicitly, the unconditional mean\n",
"$$E(X_t) = E(\\sigma_t \\varepsilon_t) = E(\\sigma_t) E(\\varepsilon_t) = 0. $$\n",
"$$E(X_t) = E(\\sigma_t \\varepsilon_t) = E(\\sigma_t) E(\\varepsilon_t) = 0.$$\n",
"\n",
"Additionally, the ARCH(1) model can be expressed as\n",
"\n",
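The zero unconditional mean in the hunk above is easy to check by simulation; a minimal sketch (the ARCH(1) parameter values are arbitrary, not from the notebook):

import numpy as np

rng = np.random.default_rng(0)
omega, alpha = 0.2, 0.5  # arbitrary ARCH(1) parameters, alpha < 1
n = 100_000

x = np.zeros(n)
for t in range(1, n):
    sigma_t = np.sqrt(omega + alpha * x[t - 1] ** 2)  # conditional volatility
    x[t] = sigma_t * rng.standard_normal()            # X_t = sigma_t * eps_t

print(x.mean())  # close to 0, consistent with E(X_t) = 0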
2 changes: 1 addition & 1 deletion nbs/docs/models/ARIMA.ipynb
@@ -232,7 +232,7 @@
"|Arima(0,0,0) |0 0 0 |$y_t=Y_t$ |White noise|\n",
"|ARIMA (0,1,0) |0 1 0 |$y_t = Y_t - Y_{t-1}$| Random walk|\n",
"|ARIMA (0,2,0) |0 2 0 |$y_t = Y_t - 2Y_{t-1} + Y_{t-2}$| Constant|\n",
"|ARIMA (1,0,0) |1 0 0 |$\\hat Y_t = \\mu + \\Phi_1 Y_{t-1} + \\epsilon $| AR(1): AR(1): First-order regression model|\n",
"|ARIMA (1,0,0) |1 0 0 |$\\hat Y_t = \\mu + \\Phi_1 Y_{t-1} + \\epsilon$| AR(1): AR(1): First-order regression model|\n",
"|ARIMA (2, 0, 0)|2 0 0 |$\\hat Y_t = \\Phi_0 + \\Phi_1 Y_{t-1} + \\Phi_2 Y_{t-2} + \\epsilon$| AR(2): Second-order regression model|\n",
"|ARIMA (1, 1, 0)|1 1 0 |$\\hat Y_t = \\mu + Y_{t-1} + \\Phi_1 (Y_{t-1}- Y_{t-2})$ | Differenced first-order\n",
"autoregressive model|\n",
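The AR(1) row above reads as a first-order regression of the series on its own lag; a hedged sketch verifying that ordinary least squares recovers $\mu$ and $\Phi_1$ from simulated data (the parameter values are made up):

import numpy as np

rng = np.random.default_rng(1)
mu, phi1 = 5.0, 0.7  # arbitrary AR(1) parameters, |phi1| < 1
n = 5_000

y = np.zeros(n)
for t in range(1, n):
    y[t] = mu + phi1 * y[t - 1] + rng.standard_normal()

# regress Y_t on (1, Y_{t-1}), i.e. the "first-order regression model"
X = np.column_stack([np.ones(n - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(coef)  # approximately [mu, phi1]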
2 changes: 1 addition & 1 deletion nbs/docs/models/AutoARIMA.ipynb
@@ -101,7 +101,7 @@
"|Arima(0,0,0) |0 0 0 |$y_t=Y_t$ |White noise|\n",
"|ARIMA (0,1,0) |0 1 0 |$y_t = Y_t - Y_{t-1}$| Random walk|\n",
"|ARIMA (0,2,0) |0 2 0 |$y_t = Y_t - 2Y_{t-1} + Y_{t-2}$| Constant|\n",
"|ARIMA (1,0,0) |1 0 0 |$\\hat Y_t = \\mu + \\Phi_1 Y_{t-1} + \\epsilon $| AR(1): AR(1): First-order regression model|\n",
"|ARIMA (1,0,0) |1 0 0 |$\\hat Y_t = \\mu + \\Phi_1 Y_{t-1} + \\epsilon$| AR(1): AR(1): First-order regression model|\n",
"|ARIMA (2, 0, 0)|2 0 0 |$\\hat Y_t = \\Phi_0 + \\Phi_1 Y_{t-1} + \\Phi_2 Y_{t-2} + \\epsilon$| AR(2): Second-order regression model|\n",
"|ARIMA (1, 1, 0)|1 1 0 |$\\hat Y_t = \\mu + Y_{t-1} + \\Phi_1 (Y_{t-1}- Y_{t-2})$ | Differenced first-order autoregressive model|\n",
"|ARIMA (0, 1, 1)|0 1 1 |$\\hat Y_t = Y_{t-1} - \\Phi_1 e^{t-1}$|Simple exponential smoothing|\n",
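The last row of this table (ARIMA(0,1,1) as simple exponential smoothing) can be checked numerically; a minimal sketch using the table's parameterisation $\hat Y_t = Y_{t-1} - \Phi_1 e_{t-1}$ with $\Phi_1 = 1 - \alpha$ (the series is made up):

import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(200).cumsum() + 50  # made-up series

alpha = 0.3
phi1 = 1 - alpha

# simple exponential smoothing one-step-ahead forecasts
ses = np.empty_like(y)
ses[0] = y[0]
for t in range(1, len(y)):
    ses[t] = alpha * y[t - 1] + (1 - alpha) * ses[t - 1]

# ARIMA(0,1,1) one-step-ahead forecasts in the table's form
arima = np.empty_like(y)
arima[0] = y[0]
for t in range(1, len(y)):
    e_prev = y[t - 1] - arima[t - 1]   # previous one-step forecast error
    arima[t] = y[t - 1] - phi1 * e_prev

print(np.allclose(ses, arima))  # True: the two recursions coincide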
6 changes: 3 additions & 3 deletions nbs/docs/models/AutoCES.ipynb
@@ -83,7 +83,7 @@
"\\end{equation}\n",
"$$\n",
"\n",
"The idea of this representation is to demonstrate how the weights $\\alpha {\\left(1-\\alpha \\right)}^{j-1}$ are distributed over time in our sample. If the smoothing parameter $\\alpha \\in \\left(0,1\\right)$ then the weights decline exponentially with the increase of . If it lies in the so called “admissible bounds” (Brenner et al., 1968), that is $\\alpha \\in \\left(0,2\\right)$ then the weights decline in oscillating manner. Both traditional and admissible bounds have been used efficiently in practice and in academic literature (for application of the latter see for example Gardner & Diaz-Saiz, 2008; Snyder et al., 2017). However, in real life the distribution of weights can be more complex, with harmonic rather than exponential decline, meaning that some of the past observation might have more importance than the recent ones. In order to implement such distribution of weights, we build upon (2) and introduce complex dynamic interactions by substituting the real variables with the complex ones in (2). First, we substitute ${y}_{t-j}$ by the complex variable ${y}_{t-j}+{ie}_{t-j}$, where ${e}_t$ is the error term of the model and $i$ is the imaginary unit (which satisfies the equation ${i}^2=-1$). The idea behind this is to have the impact of both actual values and the error on each observation in the past on the final forecast. Second, we substitute $\\alpha$ with a complex variable ${\\alpha}_0+i{\\alpha}_1 $ and 1 by $1+i$ to introduce the harmonically declining weights. Depending on the values of the complex smoothing parameter, the weights distribution will exhibit a variety of trajectories over time, including exponential, oscillating, and harmonic. Finally, the result of multiplication of two complex numbers will be another complex number, so we substitute ${\\hat{y}}_{t-j}$ with ${\\hat{y}}_{t-j}+i{\\hat{e}}_{t-j}$, where ${\\hat{e}}_{t-j}$ is the proxy for the error term. The CES obtained as a result of this can be written as:\n",
"The idea of this representation is to demonstrate how the weights $\\alpha {\\left(1-\\alpha \\right)}^{j-1}$ are distributed over time in our sample. If the smoothing parameter $\\alpha \\in \\left(0,1\\right)$ then the weights decline exponentially with the increase of . If it lies in the so called “admissible bounds” (Brenner et al., 1968), that is $\\alpha \\in \\left(0,2\\right)$ then the weights decline in oscillating manner. Both traditional and admissible bounds have been used efficiently in practice and in academic literature (for application of the latter see for example Gardner & Diaz-Saiz, 2008; Snyder et al., 2017). However, in real life the distribution of weights can be more complex, with harmonic rather than exponential decline, meaning that some of the past observation might have more importance than the recent ones. In order to implement such distribution of weights, we build upon (2) and introduce complex dynamic interactions by substituting the real variables with the complex ones in (2). First, we substitute ${y}_{t-j}$ by the complex variable ${y}_{t-j}+{ie}_{t-j}$, where ${e}_t$ is the error term of the model and $i$ is the imaginary unit (which satisfies the equation ${i}^2=-1$). The idea behind this is to have the impact of both actual values and the error on each observation in the past on the final forecast. Second, we substitute $\\alpha$ with a complex variable ${\\alpha}_0+i{\\alpha}_1$ and 1 by $1+i$ to introduce the harmonically declining weights. Depending on the values of the complex smoothing parameter, the weights distribution will exhibit a variety of trajectories over time, including exponential, oscillating, and harmonic. Finally, the result of multiplication of two complex numbers will be another complex number, so we substitute ${\\hat{y}}_{t-j}$ with ${\\hat{y}}_{t-j}+i{\\hat{e}}_{t-j}$, where ${\\hat{e}}_{t-j}$ is the proxy for the error term. The CES obtained as a result of this can be written as:\n",
"\n",
"$$\n",
"\\begin{equation}\n",
@@ -237,7 +237,7 @@
"source": [
"### Conditional mean and variance of CES\n",
"\n",
"The conditional mean of CES for $h$ steps ahead with known $ {l}_t $ and $ {c}_t $ can be calculated using the state space model (6):\n",
"The conditional mean of CES for $h$ steps ahead with known ${l}_t$ and ${c}_t$ can be calculated using the state space model (6):\n",
"\n",
"$$\n",
"\\begin{equation}\n",
@@ -251,7 +251,7 @@
"\n",
"while $\\mathbf{F}$ and $\\mathbf{w}$ re the matrices from (7).\n",
"\n",
"The forecasting trajectories of (15) will differ depending on the values of $ {l}_t, {c}_t $, and the complex smoothing parameter. The analysis of stationarity condition shows that there are several types of forecasting trajectories of CES depending on the particular value of the complex smoothing parameter:\n",
"The forecasting trajectories of (15) will differ depending on the values of ${l}_t, {c}_t$, and the complex smoothing parameter. The analysis of stationarity condition shows that there are several types of forecasting trajectories of CES depending on the particular value of the complex smoothing parameter:\n",
"\n",
"1. When ${\\alpha}_1=1$ all the values of forecast will be equal to the last obtained forecast, which corresponds to a flat line. This trajectory is shown in [Figure 2A](https://onlinelibrary.wiley.com/cms/asset/16feeb7e-adf2-48f6-9df9-cab3e34b6e67/nav22074-fig-0002-m.jpg).\n",
"2. When ${\\alpha}_1>1$ the model produces trajectory with exponential growth which is shown in [Figure 2B](https://onlinelibrary.wiley.com/cms/asset/16feeb7e-adf2-48f6-9df9-cab3e34b6e67/nav22074-fig-0002-m.jpg).\n",
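The weight behaviour discussed in this file is easy to inspect directly; a small sketch of the SES weights $\alpha(1-\alpha)^{j-1}$ for a traditional and an admissible smoothing parameter (the two values are chosen only for illustration):

import numpy as np

def ses_weights(alpha, n=8):
    # weight alpha * (1 - alpha)**(j - 1) placed on y_{t-j}, j = 1..n
    j = np.arange(1, n + 1)
    return alpha * (1 - alpha) ** (j - 1)

print(ses_weights(0.3))  # alpha in (0, 1): exponential decline
print(ses_weights(1.5))  # alpha in (1, 2): sign-oscillating decline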
2 changes: 1 addition & 1 deletion nbs/docs/models/AutoETS.ipynb
@@ -102,7 +102,7 @@
" Forecast:\\hat{y}_{t+h|t} &= \\ell_{t} + hb_{t} + s_{t+h-m(k+1)} \\tag{1d} \\\\\n",
"\\end{align*}\n",
"\n",
"where $m$ is the length of seasonality (e.g., the number of months or quarters in a year), $ \\ell_{t}$ represents the level of the series, $b_t$ denotes the growth, $s_t$ is the seasonal component, $\\hat y_{t+h|t}$ is the forecast for $h$ periods ahead, and $h_{m}^{+} = [(h − 1) mod \\ m] + 1$. To use method (1), we need values for the initial states $\\ell_{0}$, $b_0$ and $s_{1−m}, . . . , s_0$, and for the smoothing parameters $\\alpha, \\beta^{*}$ and $\\gamma$. All of these will be estimated from the observed data.\n",
"where $m$ is the length of seasonality (e.g., the number of months or quarters in a year), $\\ell_{t}$ represents the level of the series, $b_t$ denotes the growth, $s_t$ is the seasonal component, $\\hat y_{t+h|t}$ is the forecast for $h$ periods ahead, and $h_{m}^{+} = [(h − 1) mod \\ m] + 1$. To use method (1), we need values for the initial states $\\ell_{0}$, $b_0$ and $s_{1−m}, . . . , s_0$, and for the smoothing parameters $\\alpha, \\beta^{*}$ and $\\gamma$. All of these will be estimated from the observed data.\n",
"\n",
"Equation (1c) is slightly different from the usual Holt-Winters equations such as those in Makridakis et al. (1998) or Bowerman, O’Connell, and Koehler (2005). These authors replace (1c) with\n",
"$$s_{t} = \\gamma^* (y_{t}-\\ell_{t})+ (1-\\gamma^*)s_{t-m}.$$\n",
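A hedged sketch of the additive Holt-Winters recursions behind equations (1a)-(1d); the naive initial states and the example data are placeholders, whereas the method in the notebook estimates the initial states and smoothing parameters from the data (here `beta` plays the role of $\beta^*$):

import numpy as np

def holt_winters_additive(y, m, alpha, beta, gamma, h):
    # additive Holt-Winters: returns the h forecasts made at the end of y
    level = y[:m].mean()          # naive initial level
    trend = 0.0                   # naive initial growth
    season = list(y[:m] - level)  # naive initial seasonal states
    for t in range(m, len(y)):
        s_tm = season[t - m]
        prev_level, prev_trend = level, trend
        level = alpha * (y[t] - s_tm) + (1 - alpha) * (prev_level + prev_trend)      # (1a)
        trend = beta * (level - prev_level) + (1 - beta) * prev_trend                # (1b)
        season.append(gamma * (y[t] - prev_level - prev_trend) + (1 - gamma) * s_tm) # (1c)
    T = len(y) - 1
    forecasts = []
    for step in range(1, h + 1):
        k = (step - 1) // m  # completed seasonal cycles within the horizon
        forecasts.append(level + step * trend + season[T + step - m * (k + 1)])      # (1d)
    return np.array(forecasts)

y = np.array([10, 14, 8, 25] * 6, dtype=float) + np.arange(24)  # made-up quarterly data
print(holt_winters_additive(y, m=4, alpha=0.3, beta=0.1, gamma=0.2, h=4))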
20 changes: 10 additions & 10 deletions nbs/docs/models/AutoRegressive.ipynb
@@ -104,7 +104,7 @@
"\n",
"In addition, using the backshift([see](https://otexts.com/fpp3/backshift.html)) operator $B$, the $\\text{AR(p)}$ model can be rewritten as\n",
"\n",
"$$ \\varphi(B)X_t = \\varepsilon_t $$\n",
"$$\\varphi(B)X_t = \\varepsilon_t$$\n",
"\n",
"where $\\varphi(z) = 1 − \\varphi_1z − \\cdots − \\varphi_p z^p$ is called the (corresponding) $\\text{AR}$ polynomial. Besides, in the Python package |StatsModels|, $\\varphi(B)$ is called the $\\text{AR}$ lag polynomial."
]
@@ -121,7 +121,7 @@
"\n",
"where $\\{\\alpha_1, \\cdots , \\alpha_{k−1} \\}$ satisfy\n",
"\n",
"$$\\{\\alpha_1, \\cdots , \\alpha_{k−1} \\}=\\argmin_{β1,···,βk−1} E[X_t −(\\beta_1 X_{t−1} +\\cdots +\\beta_{k−1}X_{t−k+1})]^2 $$\n",
"$$\\{\\alpha_1, \\cdots , \\alpha_{k−1} \\}=\\argmin_{β1,···,βk−1} E[X_t −(\\beta_1 X_{t−1} +\\cdots +\\beta_{k−1}X_{t−k+1})]^2$$\n",
"\n",
"That is, $\\{\\alpha_1, \\cdots , \\alpha_{k−1} \\}$ are chosen by minimizing the mean squared error of prediction. Similarly, let $\\hat X_{t −k}$ denote the regression (prediction) of $X_{t −k}$ on $\\{X_{t −k+1:t −1} \\}$:\n",
"\n",
@@ -131,7 +131,7 @@
"\n",
"**Definition 2.** The partial autocorrelation function(PACF) at lag $k$ of astationary time series $\\{X_t \\}$ with $E(X_t ) = 0$ is\n",
"\n",
"$$\\phi_{11} = Corr(X_{t−1}, X_t ) = \\frac{Cov(X_{t−1}, X_t )} {[Var(X_{t−1})Var(X_t)]^1/2}=\\rho_1 $$\n",
"$$\\phi_{11} = Corr(X_{t−1}, X_t ) = \\frac{Cov(X_{t−1}, X_t )} {[Var(X_{t−1})Var(X_t)]^1/2}=\\rho_1$$\n",
"\n",
"and \n",
"\n",
@@ -143,7 +143,7 @@
"\n",
" **Theorem 1.** Let $\\{X_t \\}$ be a stationary time series with $E(X_t) = 0$, and $\\{a_{1k},\\cdots ,a_{kk} \\}$ satisfy\n",
"\n",
" $$\\{a_{1k},\\cdots,a_{kk} \\}=\\argmin_{a_1 ,\\cdots ,a_k} E(X_{t −a1}X_{t−1}−\\cdots −a_k X_{t−k})^2 $$\n",
" $$\\{a_{1k},\\cdots,a_{kk} \\}=\\argmin_{a_1 ,\\cdots ,a_k} E(X_{t −a1}X_{t−1}−\\cdots −a_k X_{t−k})^2$$\n",
"\n",
" Then $\\phi_{kk}=a_{kk}$ for $k≥1$."
]
@@ -156,14 +156,14 @@
"\n",
"From the $\\text{AR(p)}$ model, namely, Eq. (1), we can see that it is in the same form as the multiple linear regression model. However, it explains current itself with its own past. Given the past\n",
"\n",
"$$\\{X_{(t−p):(t−1)} \\} = \\{x_{(t−p):(t−1)} \\} $$\n",
"$$\\{X_{(t−p):(t−1)} \\} = \\{x_{(t−p):(t−1)} \\}$$\n",
"\n",
"we have\n",
"$$E(X_t |X_{(t−p):(t−1)}) = \\varphi_0 + \\varphi_1x_{t−1} + \\varphi_2 x_{t−2} + \\cdots + \\varphi_p x_{t−p} $$\n",
"$$E(X_t |X_{(t−p):(t−1)}) = \\varphi_0 + \\varphi_1x_{t−1} + \\varphi_2 x_{t−2} + \\cdots + \\varphi_p x_{t−p}$$\n",
"\n",
"This suggests that given the past, the right-hand side of this equation is a good estimate of $X_t$ . Besides\n",
"\n",
"$$Var(X_t |X_{(t −p):(t −1)}) = Var(\\varepsilon_t ) = \\sigma_{\\varepsilon}^2 $$\n",
"$$Var(X_t |X_{(t −p):(t −1)}) = Var(\\varepsilon_t ) = \\sigma_{\\varepsilon}^2$$\n",
"\n",
"Now we suppose that the AR(p) model, namely, Eq. (1), is stationary; then we have\n",
"\n",
@@ -175,7 +175,7 @@
"\n",
"Furthermore\n",
"\n",
"$$\\gamma_0 = \\sigma_{\\varepsilon}^2 / ( 1 − \\varphi_1 \\rho_1 − \\varphi_2 \\rho_2 − \\cdots − \\varphi_p \\rho_p ) $$\n",
"$$\\gamma_0 = \\sigma_{\\varepsilon}^2 / ( 1 − \\varphi_1 \\rho_1 − \\varphi_2 \\rho_2 − \\cdots − \\varphi_p \\rho_p )$$\n",
"\n",
"3. For all $k > p$, the partial autocorrelation $\\phi_{kk} = 0$, that is, the PACF of $\\text{AR(p)}$ models cuts off after lag $p$, which is very helpful in identifying an $\\text{AR}$ model. In fact, at this point, the predictor or regression of $X_t$ on $\\{X_{t−k+1:t−1} \\}$ is\n",
"\n",
@@ -223,7 +223,7 @@
"\n",
"**Definition 3** (1) A time series $\\{X_t \\}$ is causal if there exist coefficients $\\psi_j$ such that\n",
"\n",
"$$X_t =\\sum_{j=0}^{\\infty} \\psi_j \\varepsilon_{t-j}, \\ \\ \\sum_{j=0}^{\\infty} |\\psi_j |< \\infty $$\n",
"$$X_t =\\sum_{j=0}^{\\infty} \\psi_j \\varepsilon_{t-j}, \\ \\ \\sum_{j=0}^{\\infty} |\\psi_j |< \\infty$$\n",
"\n",
"where $\\psi_0 = 1, \\{\\varepsilon_t \\} \\sim WN(0, \\sigma_{\\varepsilon}^2 )$. At this point, we say that the time series $\\{X_t \\}$ has an $\\text{MA}(\\infty)$ representation. \n",
"\n",
@@ -264,7 +264,7 @@
"\n",
"* It may be shown that for an $\\text{AR(p)}$ model defined by Eq. (1), the coefficients $\\{\\psi_j \\}$ in Definition 3 satisfy $\\psi_0=1$ and\n",
"\n",
"$$\\psi_j=\\sum_{k=1}^{j} \\varphi '_k \\psi_{j-k}, \\ \\ j \\geq 1 \\ where \\ \\ \\varphi '_k =\\varphi_k \\ \\ if \\ \\ k \\leq p \\ \\ and \\ \\ \\varphi '_k =0 \\ \\ if \\ \\ k>p $$"
"$$\\psi_j=\\sum_{k=1}^{j} \\varphi '_k \\psi_{j-k}, \\ \\ j \\geq 1 \\ where \\ \\ \\varphi '_k =\\varphi_k \\ \\ if \\ \\ k \\leq p \\ \\ and \\ \\ \\varphi '_k =0 \\ \\ if \\ \\ k>p$$"
]
},
{
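The closing recursion above translates directly into code; a minimal sketch computing the $\text{MA}(\infty)$ coefficients $\psi_j$ of an $\text{AR(p)}$ model (the example $\varphi$ values are arbitrary):

def ma_infinity_coeffs(phi, n):
    # psi_0 = 1 and psi_j = sum_{k=1}^{j} phi'_k * psi_{j-k},
    # with phi'_k = phi_k for k <= p and phi'_k = 0 for k > p
    p = len(phi)
    psi = [1.0]
    for j in range(1, n + 1):
        psi.append(sum(phi[k - 1] * psi[j - k] for k in range(1, min(j, p) + 1)))
    return psi

print(ma_infinity_coeffs([0.5], n=5))       # AR(1): psi_j = 0.5**j
print(ma_infinity_coeffs([0.5, 0.3], n=5))  # an arbitrary AR(2)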
12 changes: 6 additions & 6 deletions nbs/docs/models/AutoTheta.ipynb
@@ -98,7 +98,7 @@
"\\end{equation}\n",
"$$\n",
"\n",
"where $\\text{A}_n$ and $\\text{B}_n$ are the minimum square coefficients of a simple linear regression over $Y_1, \\cdots Y_n$ against $1, \\cdots n \\ $, given by\n",
"where $\\text{A}_n$ and $\\text{B}_n$ are the minimum square coefficients of a simple linear regression over $Y_1, \\cdots Y_n$ against $1, \\cdots n \\$, given by\n",
"\n",
"$$\n",
"\\begin{equation}\n",
@@ -108,7 +108,7 @@
"\n",
"From this point of view, the theta lines can be interpreted as functions of the linear regression model applied to the data directly. However, note that $\\text{A}_n$ and $\\text{B}_n$ are only functions of the original data, not parameters of the Theta method.\n",
"\n",
"Finally, the forecasts produced by the Theta method for $h$ steps ahead of are an ad-hoc combination (50%-50%) of the extrapolations of $\\text{Z}(0) $ and $\\text{Z}(2)$ by the linear regression model and the simple exponential smoothing model respectively. We will refer to the above setup as the standard Theta method (STheta)."
"Finally, the forecasts produced by the Theta method for $h$ steps ahead of are an ad-hoc combination (50%-50%) of the extrapolations of $\\text{Z}(0)$ and $\\text{Z}(2)$ by the linear regression model and the simple exponential smoothing model respectively. We will refer to the above setup as the standard Theta method (STheta)."
]
},
{
@@ -117,7 +117,7 @@
"source": [
"The steps for building the STheta method of AN are as follows:\n",
"1. Deseasonalisation: The time series is tested for statistically significant seasonal behaviour. A time series is seasonal if\n",
"$$|r_m|>q_{1-\\alpha/2} \\sqrt{\\frac{1+2 \\sum_{i=1}^{m-1} r_{i}^2}{n} } $$\n",
"$$|r_m|>q_{1-\\alpha/2} \\sqrt{\\frac{1+2 \\sum_{i=1}^{m-1} r_{i}^2}{n} }$$\n",
" \n",
"where $r_k$ denotes the lag $k$ autocorrelation function, $m$ is the number of the periods within a seasonal cycle (for example, 12 for monthly data), $n$ is the sample size, $q$ is the quantile function of the standard normal distribution, and $(1-\\alpha)\\%$ is the confidence level. A&N opted for a 90% confidence level. If the time series is identified as seasonal, then it is deseasonalised via the classical decomposition method, assuming the seasonal component to have a multiplicative relationship.\n",
"\n",
@@ -164,7 +164,7 @@
"source": [
"In order to maintain the modelling of the long-term component and retain a fair comparison with the STheta method, in this work we fix $\\theta_1=0$ and focus on the optimisation of the short-term component, $\\theta_2=0$ with $\\theta \\geq 1$. Thus, $\\theta$ is the only parameter that requires estimation so far. The theta decomposition is now given by\n",
"\n",
"$$Y_t=(1-\\frac{1}{\\theta}) (\\text{A}_n+\\text{B}_n t)+ \\frac{1}{\\theta} \\text{Z}_t (\\theta), \\ t=1, \\cdots , n $$\n",
"$$Y_t=(1-\\frac{1}{\\theta}) (\\text{A}_n+\\text{B}_n t)+ \\frac{1}{\\theta} \\text{Z}_t (\\theta), \\ t=1, \\cdots , n$$\n",
"\n",
"The $h$ -step-ahead forecasts calculated at origin are given by\n",
"\n",
@@ -217,10 +217,10 @@
"\n",
"The $h$-step-ahead forecast at origin $n$ is given by\n",
"\n",
"$$\\hat Y_{n+h|n}=E[Y_{n+h}|Y_1,\\cdots, Y_n]=\\ell_{n}+(1-\\frac{1}{\\theta}) \\{(1-\\alpha)^n \\text{A}_n +[(h-1) + \\frac{1-(1-\\alpha)^{n+1}}{\\alpha}] \\text{B}_n \\} $$\n",
"$$\\hat Y_{n+h|n}=E[Y_{n+h}|Y_1,\\cdots, Y_n]=\\ell_{n}+(1-\\frac{1}{\\theta}) \\{(1-\\alpha)^n \\text{A}_n +[(h-1) + \\frac{1-(1-\\alpha)^{n+1}}{\\alpha}] \\text{B}_n \\}$$\n",
"\n",
"which is equivalent to Eq. (6). The conditional variance $\\text{Var}[Y_{n+h}|Y_1, \\cdots, Y_n]=[1+(h-1)\\alpha^2]\\sigma^2$ can be computed easily from the state space model. Thus, the $(1-\\alpha)\\%$ prediction interval for $Y_{n+h}$ is given by\n",
"$$\\hat Y_{n+h|n} \\ \\pm \\ q_{1-\\alpha/2} \\sqrt{[1+(h-1)\\alpha^2 ]\\sigma^2 } $$\n",
"$$\\hat Y_{n+h|n} \\ \\pm \\ q_{1-\\alpha/2} \\sqrt{[1+(h-1)\\alpha^2 ]\\sigma^2 }$$\n",
"\n",
"For $\\theta=2$ OTM reproduces the forecasts of the STheta method; hereafter, we will refer to this particular case as the standard Theta model (STM). In Theorem 2 of [Appendix A](https://www.sciencedirect.com/science/article/pii/S0169207016300243#s000075), we show that OTM is mathematically equivalent to the SES-d model. As a corollary of Theorem 2, STM is mathematically equivalent to SES-d with $b=\\frac{1}{2} \\text{B}_n$. Therefore, for $\\theta=2$ the corollary also re-confirms the H&B result on the relationship between STheta and the SES-d model.\n"
]
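To make the 50%-50% combination concrete, a hedged sketch of the STheta forecast described in this file: extrapolate the linear trend $\text{Z}(0)$ and apply simple exponential smoothing to $\text{Z}(2) = 2Y_t - (\text{A}_n + \text{B}_n t)$. Deseasonalisation is skipped and $\alpha$ is fixed rather than optimised, so this is illustration only:

import numpy as np

def stheta_forecast(y, h, alpha=0.5):
    # standard Theta sketch: average of the Z(0) trend extrapolation
    # and the (flat) SES forecast of the theta line Z(2)
    n = len(y)
    t = np.arange(1, n + 1)
    B, A = np.polyfit(t, y, 1)   # least-squares line A_n + B_n * t
    z2 = 2 * y - (A + B * t)     # theta line Z(2)
    level = z2[0]                # SES on Z(2); the final level is the flat forecast
    for value in z2[1:]:
        level = alpha * value + (1 - alpha) * level
    steps = np.arange(1, h + 1)
    trend_fc = A + B * (n + steps)  # Z(0) extrapolated h steps ahead
    return 0.5 * (trend_fc + level)

y = np.array([12.0, 13.5, 13.1, 14.2, 15.0, 15.8, 16.1, 17.0])  # made-up series
print(stheta_forecast(y, h=3))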
