\section*{me22b048}
My favorite equation is \textbf{Bayes' theorem}\footnote{Sunnåker M, Busetto AG, Numminen E, Corander J, Foll M, Dessimoz C. Approximate Bayesian Computation. https://doi.org/10.1371/journal.pcbi.1002803}, which is:
\begin{equation}
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
\end{equation}
where
\begin{itemize}
\item $P(A|B)$ denotes the posterior
\item $P(B|A)$ denotes the likelihood
\item $P(A)$ denotes the prior
\item $P(B)$ denotes the evidence
\end{itemize}
This formula relates the conditional probability of a parameter value $A$ given data $B$ to the probability of $B$ given $A$, letting us update the probability of an event in light of new evidence. The beauty of this formula becomes apparent in machine learning, where it is used extensively.
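As a brief illustration (the numbers below are hypothetical, chosen only to show the update at work), suppose a condition affects $1\%$ of a population, so the prior is $P(A)=0.01$; a test detects it with likelihood $P(B|A)=0.99$ but also fires on $5\%$ of unaffected people. The evidence is $P(B) = 0.99 \times 0.01 + 0.05 \times 0.99 = 0.0594$, and the posterior is
\[
P(A|B) = \frac{P(B|A)\,P(A)}{P(B)} = \frac{0.99 \times 0.01}{0.0594} \approx 0.167,
\]
so even after a positive result the probability of actually having the condition is only about $17\%$, because the prior is so small.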
\begin{flushleft}
Name: Vaibhav K E\\
Github User ID: vaibhavkev
\end{flushleft}