\input{common/header.tex}
\inbpdocument
\chapter*{Notation}
\label{ch:notation}
\addcontentsline{toc}{chapter}{Notation}
%Throughout this thesis we use Roman letters in place of greek letters wherever possible.
Unbolded $x$ represents a single number, boldface $\vx$ represents a vector, and capital boldface $\vX$ represents a matrix.
An individual element of a vector is denoted with a subscript and without boldface.
For example, the $i$th element of a vector $\vx$ is $x_i$.
A bold lower-case letter with an index such as $\vx_j$ represents a particular row of matrix $\vX$.
\vspace{1cm}
\begin{tabular}{lm{12cm}}
Symbol \quad & Description \\
\hline
%$y \sim p(y)$ & $y$ is drawn from, or distributed according to, distribution $p(y)$ \\
%$\feat(\vx)$ & A feature vector. \\
%$\vecf$ & A function represented as an infinite-dimensional vector. \\
$\kSE$ & The squared-exponential kernel, also known as the radial-basis function (\RBF{}) kernel, the Gaussian kernel, or the exponentiated quadratic. \\
$\kRQ$ & The rational-quadratic kernel. \\
$\kPer$ & The periodic kernel. \\
$\kLin$ & The linear kernel. \\
$\kWN$ & The white-noise kernel. \\
$\kC$ & The constant kernel. \\
$\vsigma$ & The changepoint kernel, $\vsigma(x, x') = \sigma(x) \sigma(x')$, where $\sigma(x)$ is a sigmoidal function such as the logistic function. \\
$k_a + k_b$ & Addition of kernels, shorthand for $k_a(\vx, \vx') + k_b(\vx, \vx')$. \\
$k_a \times k_b$& Multiplication of kernels, shorthand for $k_a(\vx, \vx') \times k_b(\vx, \vx')$. \\
$k(\vX, \vX)$ & The Gram matrix, whose $i,j$th element is $k(\vx_i, \vx_j)$. \\
$\vK$ & Shorthand for the Gram matrix $k(\vX, \vX)$. \\
$\vf(\vX)$ & A vector of function values, whose $i$th element is given by $f(\vx_i)$. \\
%$A \otimes B$ & The Kronecker product of matrices $A$ and $B$. \\
%$\textnormal{vec}(\vX)$ & The vectorization operator, which concatenates the columns of $\vX$ into a single column vector. \\
$\mod(i,j)$ & The modulo operator, giving the remainder after dividing $i$ by $j$. \\
$\mathcal{O}(\cdot)$ & The big-O asymptotic complexity of an algorithm. \\
$\vY_{:,d}$ & The $d$th column of matrix $\vY$.
\end{tabular}
\vspace{1cm}
Precise definitions of all kernels listed here are given in Appendix~\ref{sec:kernel-definitions}.
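
For concreteness, the composition shorthand above expands as
\[
(k_a + k_b)(\vx, \vx') = k_a(\vx, \vx') + k_b(\vx, \vx'),
\qquad
(k_a \times k_b)(\vx, \vx') = k_a(\vx, \vx') \times k_b(\vx, \vx'),
\]
and both expressions are themselves valid kernels, since sums and products of positive-semidefinite kernels remain positive semidefinite.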
\outbpdocument{
}