diff --git a/doc/pub/week9/html/._week9-bs000.html b/doc/pub/week9/html/._week9-bs000.html index 7c08e6d5..0ee9f022 100644 --- a/doc/pub/week9/html/._week9-bs000.html +++ b/doc/pub/week9/html/._week9-bs000.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -187,7 +193,7 @@

    March 11-15

  • 9
  • 10
  • ...
  • -
  • 19
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs001.html b/doc/pub/week9/html/._week9-bs001.html index af5485f6..977e8485 100644 --- a/doc/pub/week9/html/._week9-bs001.html +++ b/doc/pub/week9/html/._week9-bs001.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -187,7 +193,7 @@

    Overview of week 11, Mar
  • 10
  • 11
  • ...
  • -
  • 19
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs002.html b/doc/pub/week9/html/._week9-bs002.html index 2a20b565..c96461c9 100644 --- a/doc/pub/week9/html/._week9-bs002.html +++ b/doc/pub/week9/html/._week9-bs002.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -174,7 +180,7 @@

    Why resampling methods ?

  • 11
  • 12
  • ...
  • -
  • 19
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs003.html b/doc/pub/week9/html/._week9-bs003.html index bbccd386..426da770 100644 --- a/doc/pub/week9/html/._week9-bs003.html +++ b/doc/pub/week9/html/._week9-bs003.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -179,7 +185,7 @@

    Statistical analysis

  • 12
  • 13
  • ...
  • -
  • 19
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs004.html b/doc/pub/week9/html/._week9-bs004.html index 5d054a97..18244a7c 100644 --- a/doc/pub/week9/html/._week9-bs004.html +++ b/doc/pub/week9/html/._week9-bs004.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -175,7 +181,7 @@

    And why do we use such me
  • 13
  • 14
  • ...
  • -
  • 19
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs005.html b/doc/pub/week9/html/._week9-bs005.html index 17b52a60..d6b04049 100644 --- a/doc/pub/week9/html/._week9-bs005.html +++ b/doc/pub/week9/html/._week9-bs005.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -165,11 +171,6 @@

    Central limit theorem

    -

    Note that we use \( n \) instead of \( n-1 \) in the definition of -variance. The sample variance and mean are not necessarily equal to -the exact values we would get if we knew the corresponding probability -distribution. -

    @@ -191,7 +192,7 @@

    Central limit theorem

  • 14
  • 15
  • ...
  • -
  • 19
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs006.html b/doc/pub/week9/html/._week9-bs006.html index 62708c8d..c5569c77 100644 --- a/doc/pub/week9/html/._week9-bs006.html +++ b/doc/pub/week9/html/._week9-bs006.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -144,26 +150,13 @@

     

     

     

    -

    Running many measurements

    +

    Further remarks

    -
    -
    - -

    With the assumption that the average measurements \( i \) are also defined as iid stochastic variables and have the same probability function \( p \), -we defined the total average over \( m \) experiments as +

    Note that we use \( n \) instead of \( n-1 \) in the definition of +variance. The sample variance and the sample mean are not necessarily equal to +the exact values we would get if we knew the corresponding probability +distribution.

    -$$ -\overline{X}=\frac{1}{m}\sum_{i} \overline{x}_{i}. -$$ - -

    and the total variance

    -$$ -\sigma^2_{m}=\frac{1}{m}\sum_{i} \left( \overline{x}_{i}-\overline{X}\right)^2. -$$ -
    -
    - -

    These are the quantities we used in showing that if the individual mean values are iid stochastic variables, then in the limit \( m\rightarrow \infty \), the distribution for \( \overline{X} \) is given by a Gaussian distribution with variance \( \sigma^2_m \).

    @@ -186,7 +179,7 @@

    Running many measurements

    15
  • 16
  • ...
  • -
  • 19
  • +
  • 22
  • »
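The "Further remarks" slide added in this hunk notes that the sample variance is defined with a \( 1/n \) rather than a \( 1/(n-1) \) prefactor. A minimal sketch of the difference (this snippet is an illustration, not code from the notes; NumPy's `ddof` parameter switches between the two conventions):

```python
import numpy as np

# Illustration (not from the notes' code): the notes define the sample
# variance with a 1/n prefactor; the unbiased estimator uses 1/(n-1).
# numpy's ddof ("delta degrees of freedom") parameter selects the convention.
x = np.array([1.0, 2.0, 4.0, 7.0])
n = len(x)
var_n = np.var(x, ddof=0)    # 1/n definition, as in the notes
var_n1 = np.var(x, ddof=1)   # 1/(n-1), the unbiased sample variance
```

The two estimates differ only by the factor \( n/(n-1) \), which is negligible for the Monte Carlo sample sizes considered here.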
  • diff --git a/doc/pub/week9/html/._week9-bs007.html b/doc/pub/week9/html/._week9-bs007.html index dfdd76e6..4c38912b 100644 --- a/doc/pub/week9/html/._week9-bs007.html +++ b/doc/pub/week9/html/._week9-bs007.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -144,23 +150,26 @@

     

     

     

    -

    Adding more definitions

    +

    Running many measurements

    -

    The total sample variance over the \( mn \) measurements is defined as

    +
    +
    + +

    With the assumption that the averages \( \overline{x}_i \) of the individual experiments are also iid stochastic variables with the same probability function \( p \), +we defined the total average over \( m \) experiments as +

    $$ -\sigma^2=\frac{1}{mn}\sum_{i=1}^{m} \sum_{j=1}^{n}\left(x_{ij}-\overline{X}\right)^2. +\overline{X}=\frac{1}{m}\sum_{i} \overline{x}_{i}. $$ -

    We have from the equation for \( \sigma_m^2 \)

    +

    and the total variance

    $$ -\overline{x}_i-\overline{X}=\frac{1}{n}\sum_{j=1}^{n}\left(x_{i}-\overline{X}\right), -$$ - -

    and introducing the centered value \( \tilde{x}_{ij}=x_{ij}-\overline{X} \), we can rewrite \( \sigma_m^2 \) as

    -$$ -\sigma^2_{m}=\frac{1}{m}\sum_{i} \left( \overline{x}_{i}-\overline{X}\right)^2=\frac{1}{m}\sum_{i=1}^{m}\left[ \frac{i}{n}\sum_{j=1}^{n}\tilde{x}_{ij}\right]^2. +\sigma^2_{m}=\frac{1}{m}\sum_{i} \left( \overline{x}_{i}-\overline{X}\right)^2. $$ +
    +
    +

    These are the quantities we used in showing that if the individual mean values are iid stochastic variables, then in the limit \( m\rightarrow \infty \), the distribution for \( \overline{X} \) is given by a Gaussian distribution with variance \( \sigma^2_m \).

    @@ -184,7 +193,7 @@

    Adding more definitions

  • 16
  • 17
  • ...
  • -
  • 19
  • +
  • 22
  • »
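The definitions moved onto this slide (the total average \( \overline{X} \) over \( m \) experiments and the variance \( \sigma^2_m \) of the experiment means) can be computed directly. A sketch under our own naming, not code from the notes:

```python
import numpy as np

# Sketch (not from the notes): X̄ = (1/m) Σ_i x̄_i and
# σ²_m = (1/m) Σ_i (x̄_i - X̄)², for m experiments of n measurements each.
def total_average_and_variance(samples):
    """samples: (m, n) array; row i holds the n measurements of experiment i."""
    xbar = samples.mean(axis=1)              # per-experiment means x̄_i
    Xbar = xbar.mean()                       # total average X̄
    sigma2_m = np.mean((xbar - Xbar) ** 2)   # note the 1/m, not 1/(m-1)
    return Xbar, sigma2_m
```

For iid measurements \( \sigma^2_m \) approaches \( \sigma^2/n \), consistent with the central-limit behaviour quoted earlier.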
  • diff --git a/doc/pub/week9/html/._week9-bs008.html b/doc/pub/week9/html/._week9-bs008.html index 9a90b8c7..a20bb699 100644 --- a/doc/pub/week9/html/._week9-bs008.html +++ b/doc/pub/week9/html/._week9-bs008.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -144,17 +150,23 @@

     

     

     

    -

    Further rewriting

    +

    Adding more definitions

    -

    We can rewrite the latter in terms of a sum over diagonal elements only and another sum which contains the non-diagonal elements

    +

    The total sample variance over the \( mn \) measurements is defined as

    $$ -\begin{align*} -\sigma^2_{m}& =\frac{1}{m}\sum_{i=1}^{m}\left[ \frac{i}{n}\sum_{j=1}^{n}\tilde{x}_{ij}\right]^2 \\ - & = \frac{1}{mn^2}\sum_{i=1}^{m} \sum_{j=1}^{n}\tilde{x}_{ij}^2+\frac{2}{mn^2}\sum_{i=1}^{m} \sum_{j < k}^{n}\tilde{x}_{ij}\tilde{x}_{ik}. -\end{align*} +\sigma^2=\frac{1}{mn}\sum_{i=1}^{m} \sum_{j=1}^{n}\left(x_{ij}-\overline{X}\right)^2. +$$ + +

    We have from the equation for \( \sigma_m^2 \)

    +$$ +\overline{x}_i-\overline{X}=\frac{1}{n}\sum_{j=1}^{n}\left(x_{ij}-\overline{X}\right), +$$ + 

    and introducing the centered value \( \tilde{x}_{ij}=x_{ij}-\overline{X} \), we can rewrite \( \sigma_m^2 \) as

    +$$ +\sigma^2_{m}=\frac{1}{m}\sum_{i} \left( \overline{x}_{i}-\overline{X}\right)^2=\frac{1}{m}\sum_{i=1}^{m}\left[ \frac{1}{n}\sum_{j=1}^{n}\tilde{x}_{ij}\right]^2. $$

    The first term on the last rhs is nothing but the total sample variance \( \sigma^2 \) divided by \( m \). The second term represents the covariance.

    @@ -179,7 +191,7 @@

    Further rewriting

  • 17
  • 18
  • ...
  • -
  • 19
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs009.html b/doc/pub/week9/html/._week9-bs009.html index 60153053..9ecf8b6f 100644 --- a/doc/pub/week9/html/._week9-bs009.html +++ b/doc/pub/week9/html/._week9-bs009.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -144,26 +150,17 @@

     

     

     

    -

    The covariance term

    +

    Further rewriting

    -

    Using the definition of the total sample variance we have

    +

    We can rewrite the latter in terms of a sum over diagonal elements only and another sum which contains the non-diagonal elements

    $$ \begin{align*} -\sigma^2_{m}& = \frac{\sigma^2}{m}+\frac{2}{mn^2}\sum_{i=1}^{m} \sum_{j < k}^{n}\tilde{x}_{ij}\tilde{x}_{ik}. +\sigma^2_{m}& =\frac{1}{m}\sum_{i=1}^{m}\left[ \frac{1}{n}\sum_{j=1}^{n}\tilde{x}_{ij}\right]^2 \\ + & = \frac{1}{mn^2}\sum_{i=1}^{m} \sum_{j=1}^{n}\tilde{x}_{ij}^2+\frac{2}{mn^2}\sum_{i=1}^{m} \sum_{j < k}^{n}\tilde{x}_{ij}\tilde{x}_{ik}. \end{align*} $$

    The first term is what we have used till now in order to estimate the -standard deviation. However, the second term which gives us a measure -of the correlations between different stochastic events, can result in -contributions which give rise to a larger standard deviation and -variance \( \sigma_m^2 \). Note also the evaluation of the second term -leads to a double sum over all events. If we run a VMC calculation -with say \( 10^9 \) Monte carlo samples, the latter term would lead to -\( 10^{18} \) function evaluations. We don't want to, by obvious reasons, to venture into that many evaluations. -

    - -

    Note also that if our stochastic events are iid then the covariance terms is zero.

    +

    The first term on the last right-hand side is nothing but the total sample variance \( \sigma^2 \) divided by \( n \), since \( \sum_{ij}\tilde{x}_{ij}^2=mn\sigma^2 \). The second term represents the covariance.

    @@ -188,6 +185,8 @@

    The covariance term

  • 17
  • 18
  • 19
  • +
  • ...
  • +
  • 22
  • »
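The split of \( \sigma^2_m \) into a diagonal and an off-diagonal sum on this slide is an exact identity and easy to check numerically. An illustrative sketch, not the notes' code:

```python
import numpy as np

# Sketch (not from the notes): verify that
# σ²_m = (1/(m n²)) Σ_i Σ_j x̃_ij² + (2/(m n²)) Σ_i Σ_{j<k} x̃_ij x̃_ik,
# where x̃_ij = x_ij - X̄ are the centered values.
def split_variance(samples):
    m, n = samples.shape
    xt = samples - samples.mean()               # centered values x̃_ij
    sigma2_m = np.mean(xt.mean(axis=1) ** 2)    # (1/m) Σ_i (x̄_i - X̄)²
    diag = (xt ** 2).sum() / (m * n ** 2)
    # Σ_{j<k} x̃_ij x̃_ik = ((Σ_j x̃_ij)² - Σ_j x̃_ij²) / 2 for each row i
    offdiag = ((xt.sum(axis=1) ** 2).sum() - (xt ** 2).sum()) / 2.0
    return sigma2_m, diag + 2.0 * offdiag / (m * n ** 2)
```

Since \( \sum_{ij}\tilde{x}_{ij}^2 = mn\sigma^2 \), the diagonal term equals \( \sigma^2/n \) by construction.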
  • diff --git a/doc/pub/week9/html/._week9-bs010.html b/doc/pub/week9/html/._week9-bs010.html index 3a7972c3..f41eadcc 100644 --- a/doc/pub/week9/html/._week9-bs010.html +++ b/doc/pub/week9/html/._week9-bs010.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -144,23 +150,26 @@

     

     

     

    -

    Rewriting the covariance term

    +

    The covariance term

    -

    We introduce now a variable \( d=\vert j-k\vert \) and rewrite

    +

    Using the definition of the total sample variance we have

    $$ -\frac{2}{mn^2}\sum_{i=1}^{m} \sum_{j < k}^{n}\tilde{x}_{ij}\tilde{x}_{ik}, +\begin{align*} +\sigma^2_{m}& = \frac{\sigma^2}{n}+\frac{2}{mn^2}\sum_{i=1}^{m} \sum_{j < k}^{n}\tilde{x}_{ij}\tilde{x}_{ik}. +\end{align*} $$

    in terms of a function

    -$$ -f_d=\frac{2}{mn}\sum_{i=1}^{m} \sum_{k=1}^{n-d}\tilde{x}_{ik}\tilde{x}_{i(k+d)}. -$$ - -

    We note that for \( d=0 \) we have

    -$$ -f_0=\frac{2}{mn}\sum_{i=1}^{m} \sum_{k=1}^{n}\tilde{x}_{ik}\tilde{x}_{i(k)}=\sigma^2! -$$ +

    The first term is what we have used so far to estimate the +standard deviation. The second term, however, measures the +correlations between different stochastic events and can give +contributions which increase the standard deviation and the +variance \( \sigma_m^2 \). Note also that evaluating the second term +requires a double sum over all events. If we run a VMC calculation +with say \( 10^9 \) Monte Carlo samples, this term alone would require +\( 10^{18} \) function evaluations. For obvious reasons, we do not want to venture into that many evaluations. +

    +

    Note also that if our stochastic events are iid then the covariance term is zero.

    @@ -185,6 +194,9 @@

    Rewriting the covariance t
  • 17
  • 18
  • 19
  • +
  • 20
  • +
  • ...
  • +
  • 22
  • »
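To make the closing remark on this slide concrete (for iid events the covariance term vanishes on average), here is a small numerical sketch, again not taken from the notes:

```python
import numpy as np

# Sketch (not from the notes): for iid samples the off-diagonal (covariance)
# term (2/(m n²)) Σ_i Σ_{j<k} x̃_ij x̃_ik only fluctuates around zero, so the
# naive estimate of σ²_m is adequate; for correlated samples it is not.
def covariance_term(samples):
    m, n = samples.shape
    xt = samples - samples.mean()               # centered values x̃_ij
    offdiag = ((xt.sum(axis=1) ** 2).sum() - (xt ** 2).sum()) / 2.0
    return 2.0 * offdiag / (m * n ** 2)
```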
  • diff --git a/doc/pub/week9/html/._week9-bs011.html b/doc/pub/week9/html/._week9-bs011.html index b24b2cf9..765472ef 100644 --- a/doc/pub/week9/html/._week9-bs011.html +++ b/doc/pub/week9/html/._week9-bs011.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code from last week
  • +
  • Resampling analysis
  • @@ -144,16 +150,23 @@

     

     

     

    -

    Introducing the correlation function

    +

    Rewriting the covariance term

    -

    We introduce then a correlation function \( \kappa_d=f_d/\sigma^2 \). Note that \( \kappa_0 =1 \). We rewrite the variance \( \sigma_m^2 \) as

    +

We now introduce a variable \( d=\vert j-k\vert \) and rewrite

    $$ -\begin{align*} -\sigma^2_{m}& = \frac{\sigma^2}{m}\left[1+2\sum_{d=1}^{n-1} \kappa_d\right]. -\end{align*} +\frac{2}{mn^2}\sum_{i=1}^{m} \sum_{j < k}^{n}\tilde{x}_{ij}\tilde{x}_{ik}, +$$ + +

    in terms of a function

    +$$ +f_d=\frac{2}{mn}\sum_{i=1}^{m} \sum_{k=1}^{n-d}\tilde{x}_{ik}\tilde{x}_{i(k+d)}. +$$ + +

    We note that for \( d=0 \) we have

    +$$ +f_0=\frac{2}{mn}\sum_{i=1}^{m} \sum_{k=1}^{n}\tilde{x}_{ik}\tilde{x}_{i(k)}=\sigma^2! $$ -

    The code here shows the evolution of \( \kappa_d \) as a function of \( d \) for a series of random numbers. We see that the function \( \kappa_d \) approaches \( 0 \) as \( d\rightarrow \infty \).
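The behaviour described here is easy to reproduce with a minimal NumPy sketch (the helper name `kappa` is ours, not part of the course code), estimating the normalized correlation function for uncorrelated Gaussian numbers:

```python
import numpy as np

rng = np.random.default_rng(2024)
x = rng.normal(size=100_000)     # uncorrelated series
xt = x - x.mean()                # centered values, tilde-x in the notes

def kappa(xt, d):
    # normalized correlation function kappa_d = f_d / sigma^2
    n = len(xt)
    return (xt[:n - d] * xt[d:]).mean() / xt.var()

print(kappa(xt, 0))    # equals 1 by construction
print(kappa(xt, 100))  # fluctuates around 0 for uncorrelated data
```

For uncorrelated data \( \kappa_d \) fluctuates around zero at the \( 1/\sqrt{n} \) level for all \( d>0 \); for correlated data it decays towards zero only after the correlation length.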

    @@ -177,6 +190,10 @@

    Introducing the cor
  • 17
  • 18
  • 19
  • +
  • 20
  • +
  • 21
  • +
  • ...
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs012.html b/doc/pub/week9/html/._week9-bs012.html index 3b928c98..083ad35c 100644 --- a/doc/pub/week9/html/._week9-bs012.html +++ b/doc/pub/week9/html/._week9-bs012.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code form last week
  • +
  • Resampling analysis
  • @@ -144,28 +150,16 @@

     

     

     

    -

    Resampling methods: Blocking

    +

    Introducing the correlation function

    -

    The blocking method was made popular by Flyvbjerg and Pedersen (1989) -and has become one of the standard ways to estimate -\( V(\widehat{\theta}) \) for exactly one \( \widehat{\theta} \), namely -\( \widehat{\theta} = \overline{X} \). -

    - -

    Assume \( n = 2^d \) for some integer \( d>1 \) and \( X_1,X_2,\cdots, X_n \) is a stationary time series to begin with. -Moreover, assume that the time series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define: -

    +

    We introduce then a correlation function \( \kappa_d=f_d/\sigma^2 \). Note that \( \kappa_0 =1 \). We rewrite the variance \( \sigma_m^2 \) as

    $$ \begin{align*} -\hat{X} = (X_1,X_2,\cdots,X_n). +\sigma^2_{m}& = \frac{\sigma^2}{m}\left[1+2\sum_{d=1}^{n-1} \kappa_d\right]. \end{align*} $$ -

    The strength of the blocking method is when the number of -observations, \( n \) is large. For large \( n \), the complexity of dependent -bootstrapping scales poorly, but the blocking method does not, -moreover, it becomes more accurate the larger \( n \) is. -

    +

    The code here shows the evolution of \( \kappa_d \) as a function of \( d \) for a series of random numbers. We see that the function \( \kappa_d \) approaches \( 0 \) as \( d\rightarrow \infty \).

    @@ -188,6 +182,9 @@

    Resampling methods: Blocking
  • 17
  • 18
  • 19
  • +
  • 20
  • +
  • 21
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs013.html b/doc/pub/week9/html/._week9-bs013.html index 024ec8f8..b3926051 100644 --- a/doc/pub/week9/html/._week9-bs013.html +++ b/doc/pub/week9/html/._week9-bs013.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code form last week
  • +
  • Resampling analysis
  • @@ -144,40 +150,23 @@

     

     

     

    -

    Blocking Transformations

    -

    We now define -blocking transformations. The idea is to take the mean of subsequent -pair of elements from \( \vec{X} \) and form a new vector -\( \vec{X}_1 \). Continuing in the same way by taking the mean of -subsequent pairs of elements of \( \vec{X}_1 \) we obtain \( \vec{X}_2 \), and -so on. -Define \( \vec{X}_i \) recursively by: +

    Resampling methods: Blocking

    + +

The blocking method was made popular by Flyvbjerg and Petersen (1989) +and has become one of the standard ways to estimate +\( V(\widehat{\theta}) \) for exactly one \( \widehat{\theta} \), namely +\( \widehat{\theta} = \overline{X} \).

    +

    Assume \( n = 2^d \) for some integer \( d>1 \) and \( X_1,X_2,\cdots, X_n \) is a stationary time series to begin with. +Moreover, assume that the series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define: +

    $$ -\begin{align} -(\vec{X}_0)_k &\equiv (\vec{X})_k \nonumber \\ -(\vec{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\vec{X}_i)_{2k-1} + -(\vec{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1 -\tag{1} -\end{align} +\begin{align*} +\hat{X} = (X_1,X_2,\cdots,X_n). +\end{align*} $$ -

    The quantity \( \vec{X}_k \) is -subject to \( k \) blocking transformations. We now have \( d \) vectors -\( \vec{X}_0, \vec{X}_1,\cdots,\vec X_{d-1} \) containing the subsequent -averages of observations. It turns out that if the components of -\( \vec{X} \) is a stationary time series, then the components of -\( \vec{X}_i \) is a stationary time series for all \( 0 \leq i \leq d-1 \) -

    - -

    We can then compute the autocovariance, the variance, sample mean, and -number of observations for each \( i \). -Let \( \gamma_i, \sigma_i^2, -\overline{X}_i \) denote the autocovariance, variance and average of the -elements of \( \vec{X}_i \) and let \( n_i \) be the number of elements of -\( \vec{X}_i \). It follows by induction that \( n_i = n/2^i \). -

    @@ -199,6 +188,9 @@

    Blocking Transformations

  • 17
  • 18
  • 19
  • +
  • 20
  • +
  • 21
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs014.html b/doc/pub/week9/html/._week9-bs014.html index 0950bd6f..17f75c4d 100644 --- a/doc/pub/week9/html/._week9-bs014.html +++ b/doc/pub/week9/html/._week9-bs014.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code form last week
  • +
  • Resampling analysis
  • @@ -144,25 +150,13 @@

     

     

     

    -

    Blocking Transformations

    +

    Why blocking?

    -

    Using the -definition of the blocking transformation and the distributive -property of the covariance, it is clear that since \( h =|i-j| \) -we can define +

The strength of the blocking method shows when the number of +observations \( n \) is large. For large \( n \), the complexity of dependent +bootstrapping scales poorly, whereas the blocking method does not; +moreover, it becomes more accurate the larger \( n \) is.

    -$$ -\begin{align} -\gamma_{k+1}(h) &= cov\left( ({X}_{k+1})_{i}, ({X}_{k+1})_{j} \right) \nonumber \\ -&= \frac{1}{4}cov\left( ({X}_{k})_{2i-1} + ({X}_{k})_{2i}, ({X}_{k})_{2j-1} + ({X}_{k})_{2j} \right) \nonumber \\ -&= \frac{1}{2}\gamma_{k}(2h) + \frac{1}{2}\gamma_k(2h+1) \hspace{0.1cm} \mathrm{h = 0} -\tag{2}\\ -&=\frac{1}{4}\gamma_k(2h-1) + \frac{1}{2}\gamma_k(2h) + \frac{1}{4}\gamma_k(2h+1) \quad \mathrm{else} -\tag{3} -\end{align} -$$ - -

Since \( \hat{X} \) is asymptotically uncorrelated by assumption, \( \hat{X}_k \) is also asymptotically uncorrelated. Let us turn our attention to the variance of the sample mean \( V(\overline{X}) \).

    @@ -183,6 +177,9 @@

    Blocking Transformations

  • 17
  • 18
  • 19
  • +
  • 20
  • +
  • 21
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs015.html b/doc/pub/week9/html/._week9-bs015.html index 0735ecb8..58375c48 100644 --- a/doc/pub/week9/html/._week9-bs015.html +++ b/doc/pub/week9/html/._week9-bs015.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code form last week
  • +
  • Resampling analysis
  • @@ -144,24 +150,24 @@

     

     

     

    -

    Blocking Transformations, getting there

    -

    We have

    -$$ -\begin{align} -V(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. -\tag{4} -\end{align} -$$ +

    Blocking Transformations

    +

We now define the blocking transformations. The idea is to take the mean of subsequent +pairs of elements from \( \boldsymbol{X} \) and form a new vector +\( \boldsymbol{X}_1 \). Continuing in the same way by taking the mean of +subsequent pairs of elements of \( \boldsymbol{X}_1 \) we obtain \( \boldsymbol{X}_2 \), and +so on. +Define \( \boldsymbol{X}_i \) recursively by: +

    -

    The term \( e_k \) is called the truncation error:

    $$ -\begin{equation} -e_k = \frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h). -\tag{5} -\end{equation} +\begin{align} +(\boldsymbol{X}_0)_k &\equiv (\boldsymbol{X})_k \nonumber \\ +(\boldsymbol{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\boldsymbol{X}_i)_{2k-1} + +(\boldsymbol{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1 +\tag{1} +\end{align} $$ -

    We can show that \( V(\overline{X}_i) = V(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).
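The invariance of the sample mean under a single blocking transformation is easy to check numerically. The following sketch (the helper name `block` is ours) applies one transformation to uncorrelated Gaussian data:

```python
import numpy as np

def block(x):
    # one blocking transformation: mean of subsequent pairs of elements
    return 0.5 * (x[0::2] + x[1::2])

rng = np.random.default_rng(7)
X = rng.normal(size=2**10)
X1 = block(X)

print(len(X1))                 # n_1 = n/2
print(X.mean(), X1.mean())     # identical sample means
```

Halving the length while preserving the mean is exactly the content of \( n_{i} = n/2^{i} \) and \( \overline{X}_{i+1} = \overline{X}_i \).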

    @@ -181,6 +187,9 @@

    Blocking Transfor
  • 17
  • 18
  • 19
  • +
  • 20
  • +
  • 21
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs016.html b/doc/pub/week9/html/._week9-bs016.html index df515c77..2e67d0cc 100644 --- a/doc/pub/week9/html/._week9-bs016.html +++ b/doc/pub/week9/html/._week9-bs016.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code form last week
  • +
  • Resampling analysis
  • @@ -144,34 +150,23 @@

     

     

     

    -

    Blocking Transformations, final expressions

    +

    Blocking transformations

    -

    We can then wrap up

    -$$ -\begin{align} -n_{j+1} \overline{X}_{j+1} &= \sum_{i=1}^{n_{j+1}} (\hat{X}_{j+1})_i = \frac{1}{2}\sum_{i=1}^{n_{j}/2} (\hat{X}_{j})_{2i-1} + (\hat{X}_{j})_{2i} \nonumber \\ -&= \frac{1}{2}\left[ (\hat{X}_j)_1 + (\hat{X}_j)_2 + \cdots + (\hat{X}_j)_{n_j} \right] = \underbrace{\frac{n_j}{2}}_{=n_{j+1}} \overline{X}_j = n_{j+1}\overline{X}_j. -\tag{6} -\end{align} -$$ - -

    By repeated use of this equation we get \( V(\overline{X}_i) = V(\overline{X}_0) = V(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

    -$$ -\begin{align} -V(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \tag{7} -\end{align} -$$ - -

    Flyvbjerg and Petersen demonstrated that the sequence -\( \{e_k\}_{k=0}^{d-1} \) is decreasing, and conjecture that the term -\( e_k \) can be made as small as we would like by making \( k \) (and hence -\( d \)) sufficiently large. The sequence is decreasing (Master of Science thesis by Marius Jonsson, UiO 2018). -It means we can apply blocking transformations until -\( e_k \) is sufficiently small, and then estimate \( V(\overline{X}) \) by -\( \widehat{\sigma}^2_k/n_k \). +

The quantity \( \boldsymbol{X}_k \) is +subject to \( k \) blocking transformations. We now have \( d \) vectors +\( \boldsymbol{X}_0, \boldsymbol{X}_1,\cdots,\boldsymbol{X}_{d-1} \) containing the subsequent +averages of observations. It turns out that if the components of +\( \boldsymbol{X} \) form a stationary time series, then the components of +\( \boldsymbol{X}_i \) form a stationary time series for all \( 0 \leq i \leq d-1 \).

    -

    For an elegant solution and proof of the blocking method, see the recent article of Marius Jonsson (former MSc student of the Computational Physics group).

    +

We can then compute the autocovariance, the variance, the sample mean, and the +number of observations for each \( i \). +Let \( \gamma_i, \sigma_i^2, +\overline{X}_i \) denote the autocovariance, variance and average of the +elements of \( \boldsymbol{X}_i \), and let \( n_i \) be the number of elements of +\( \boldsymbol{X}_i \). It follows by induction that \( n_i = n/2^i \). +
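Putting these pieces together, a minimal blocking loop (our own sketch, not the course implementation) repeatedly applies the transformation and records the naive error estimate \( \sigma_i^2/n_i \) at each step:

```python
import numpy as np

def block(x):
    # one blocking transformation: mean of subsequent pairs of elements
    return 0.5 * (x[0::2] + x[1::2])

rng = np.random.default_rng(11)
x = rng.normal(size=2**14)

estimates = []
while len(x) >= 2:
    # sigma_i^2 / n_i estimates the variance of the sample mean at step i
    estimates.append(x.var() / len(x))
    x = block(x)

print(estimates[:5])
```

For uncorrelated data the estimates stay flat from the start; for correlated data they rise with \( i \) until the blocked elements are effectively uncorrelated, and the plateau value is the blocking estimate of \( V(\overline{X}) \).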

    @@ -190,6 +185,9 @@

    Blocking Tran
  • 17
  • 18
  • 19
  • +
  • 20
  • +
  • 21
  • +
  • 22
  • »
  • diff --git a/doc/pub/week9/html/._week9-bs017.html b/doc/pub/week9/html/._week9-bs017.html index 637e3df8..9b2ac236 100644 --- a/doc/pub/week9/html/._week9-bs017.html +++ b/doc/pub/week9/html/._week9-bs017.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
  • Example code form last week
  • +
  • Resampling analysis
  • @@ -144,245 +150,27 @@

     

     

     

    -

    Example code form last week

    - - -
    -
    -
    -
    -
    -
    # 2-electron VMC code for 2dim quantum dot with importance sampling
    -# Using gaussian rng for new positions and Metropolis- Hastings 
    -# Added energy minimization
    -from math import exp, sqrt
    -from random import random, seed, normalvariate
    -import numpy as np
    -import matplotlib.pyplot as plt
    -from mpl_toolkits.mplot3d import Axes3D
    -from matplotlib import cm
    -from matplotlib.ticker import LinearLocator, FormatStrFormatter
    -from scipy.optimize import minimize
    -import sys
    -import os
    -
    -# Where to save data files
    -PROJECT_ROOT_DIR = "Results"
    -DATA_ID = "Results/EnergyMin"
    -
    -if not os.path.exists(PROJECT_ROOT_DIR):
    -    os.mkdir(PROJECT_ROOT_DIR)
    -
    -if not os.path.exists(DATA_ID):
    -    os.makedirs(DATA_ID)
    -
    -def data_path(dat_id):
    -    return os.path.join(DATA_ID, dat_id)
    -
    -outfile = open(data_path("Energies.dat"),'w')
    -
    -
    -# Trial wave function for the 2-electron quantum dot in two dims
    -def WaveFunction(r,alpha,beta):
    -    r1 = r[0,0]**2 + r[0,1]**2
    -    r2 = r[1,0]**2 + r[1,1]**2
    -    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    -    deno = r12/(1+beta*r12)
    -    return exp(-0.5*alpha*(r1+r2)+deno)
    -
    -# Local energy  for the 2-electron quantum dot in two dims, using analytical local energy
    -def LocalEnergy(r,alpha,beta):
    -    
    -    r1 = (r[0,0]**2 + r[0,1]**2)
    -    r2 = (r[1,0]**2 + r[1,1]**2)
    -    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    -    deno = 1.0/(1+beta*r12)
    -    deno2 = deno*deno
    -    return 0.5*(1-alpha*alpha)*(r1 + r2) +2.0*alpha + 1.0/r12+deno2*(alpha*r12-deno2+2*beta*deno-1.0/r12)
    -
    -# Derivate of wave function ansatz as function of variational parameters
    -def DerivativeWFansatz(r,alpha,beta):
    -    
    -    WfDer  = np.zeros((2), np.double)
    -    r1 = (r[0,0]**2 + r[0,1]**2)
    -    r2 = (r[1,0]**2 + r[1,1]**2)
    -    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    -    deno = 1.0/(1+beta*r12)
    -    deno2 = deno*deno
    -    WfDer[0] = -0.5*(r1+r2)
    -    WfDer[1] = -r12*r12*deno2
    -    return  WfDer
    -
    -# Setting up the quantum force for the two-electron quantum dot, recall that it is a vector
    -def QuantumForce(r,alpha,beta):
    -
    -    qforce = np.zeros((NumberParticles,Dimension), np.double)
    -    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    -    deno = 1.0/(1+beta*r12)
    -    qforce[0,:] = -2*r[0,:]*alpha*(r[0,:]-r[1,:])*deno*deno/r12
    -    qforce[1,:] = -2*r[1,:]*alpha*(r[1,:]-r[0,:])*deno*deno/r12
    -    return qforce
    -    
    -
    -# Computing the derivative of the energy and the energy 
    -def EnergyDerivative(x0):
    -
    -    
    -    # Parameters in the Fokker-Planck simulation of the quantum force
    -    D = 0.5
    -    TimeStep = 0.05
    -    # positions
    -    PositionOld = np.zeros((NumberParticles,Dimension), np.double)
    -    PositionNew = np.zeros((NumberParticles,Dimension), np.double)
    -    # Quantum force
    -    QuantumForceOld = np.zeros((NumberParticles,Dimension), np.double)
    -    QuantumForceNew = np.zeros((NumberParticles,Dimension), np.double)
    -
    -    energy = 0.0
    -    DeltaE = 0.0
    -    alpha = x0[0]
    -    beta = x0[1]
    -    EnergyDer = 0.0
    -    DeltaPsi = 0.0
    -    DerivativePsiE = 0.0 
    -    #Initial position
    -    for i in range(NumberParticles):
    -        for j in range(Dimension):
    -            PositionOld[i,j] = normalvariate(0.0,1.0)*sqrt(TimeStep)
    -    wfold = WaveFunction(PositionOld,alpha,beta)
    -    QuantumForceOld = QuantumForce(PositionOld,alpha, beta)
    -
    -    #Loop over MC MCcycles
    -    for MCcycle in range(NumberMCcycles):
    -        #Trial position moving one particle at the time
    -        for i in range(NumberParticles):
    -            for j in range(Dimension):
    -                PositionNew[i,j] = PositionOld[i,j]+normalvariate(0.0,1.0)*sqrt(TimeStep)+\
    -                                       QuantumForceOld[i,j]*TimeStep*D
    -            wfnew = WaveFunction(PositionNew,alpha,beta)
    -            QuantumForceNew = QuantumForce(PositionNew,alpha, beta)
    -            GreensFunction = 0.0
    -            for j in range(Dimension):
    -                GreensFunction += 0.5*(QuantumForceOld[i,j]+QuantumForceNew[i,j])*\
    -	                              (D*TimeStep*0.5*(QuantumForceOld[i,j]-QuantumForceNew[i,j])-\
    -                                      PositionNew[i,j]+PositionOld[i,j])
    -      
    -            GreensFunction = exp(GreensFunction)
    -            ProbabilityRatio = GreensFunction*wfnew**2/wfold**2
    -            #Metropolis-Hastings test to see whether we accept the move
    -            if random() <= ProbabilityRatio:
    -                for j in range(Dimension):
    -                    PositionOld[i,j] = PositionNew[i,j]
    -                    QuantumForceOld[i,j] = QuantumForceNew[i,j]
    -                wfold = wfnew
    -        DeltaE = LocalEnergy(PositionOld,alpha,beta)
    -        DerPsi = DerivativeWFansatz(PositionOld,alpha,beta)
    -        DeltaPsi += DerPsi
    -        energy += DeltaE
    -        DerivativePsiE += DerPsi*DeltaE
    -            
    -    # We calculate mean values
    -    energy /= NumberMCcycles
    -    DerivativePsiE /= NumberMCcycles
    -    DeltaPsi /= NumberMCcycles
    -    EnergyDer  = 2*(DerivativePsiE-DeltaPsi*energy)
    -    return EnergyDer
    -
    -
    -# Computing the expectation value of the local energy 
    -def Energy(x0):
    -    # Parameters in the Fokker-Planck simulation of the quantum force
    -    D = 0.5
    -    TimeStep = 0.05
    -    # positions
    -    PositionOld = np.zeros((NumberParticles,Dimension), np.double)
    -    PositionNew = np.zeros((NumberParticles,Dimension), np.double)
    -    # Quantum force
    -    QuantumForceOld = np.zeros((NumberParticles,Dimension), np.double)
    -    QuantumForceNew = np.zeros((NumberParticles,Dimension), np.double)
    -
    -    energy = 0.0
    -    DeltaE = 0.0
    -    alpha = x0[0]
    -    beta = x0[1]
    -    #Initial position
    -    for i in range(NumberParticles):
    -        for j in range(Dimension):
    -            PositionOld[i,j] = normalvariate(0.0,1.0)*sqrt(TimeStep)
    -    wfold = WaveFunction(PositionOld,alpha,beta)
    -    QuantumForceOld = QuantumForce(PositionOld,alpha, beta)
    -
    -    #Loop over MC MCcycles
    -    for MCcycle in range(NumberMCcycles):
    -        #Trial position moving one particle at the time
    -        for i in range(NumberParticles):
    -            for j in range(Dimension):
    -                PositionNew[i,j] = PositionOld[i,j]+normalvariate(0.0,1.0)*sqrt(TimeStep)+\
    -                                       QuantumForceOld[i,j]*TimeStep*D
    -            wfnew = WaveFunction(PositionNew,alpha,beta)
    -            QuantumForceNew = QuantumForce(PositionNew,alpha, beta)
    -            GreensFunction = 0.0
    -            for j in range(Dimension):
    -                GreensFunction += 0.5*(QuantumForceOld[i,j]+QuantumForceNew[i,j])*\
    -	                              (D*TimeStep*0.5*(QuantumForceOld[i,j]-QuantumForceNew[i,j])-\
    -                                      PositionNew[i,j]+PositionOld[i,j])
    -      
    -            GreensFunction = exp(GreensFunction)
    -            ProbabilityRatio = GreensFunction*wfnew**2/wfold**2
    -            #Metropolis-Hastings test to see whether we accept the move
    -            if random() <= ProbabilityRatio:
    -                for j in range(Dimension):
    -                    PositionOld[i,j] = PositionNew[i,j]
    -                    QuantumForceOld[i,j] = QuantumForceNew[i,j]
    -                wfold = wfnew
    -        DeltaE = LocalEnergy(PositionOld,alpha,beta)
    -        energy += DeltaE
    -        if Printout: 
    -           outfile.write('%f\n' %(energy/(MCcycle+1.0)))            
    -    # We calculate mean values
    -    energy /= NumberMCcycles
    -    return energy
    -
    -#Here starts the main program with variable declarations
    -NumberParticles = 2
    -Dimension = 2
    -# seed for rng generator 
    -seed()
    -# Monte Carlo cycles for parameter optimization
    -Printout = False
    -NumberMCcycles= 10000
    -# guess for variational parameters
    -x0 = np.array([0.9,0.2])
    -# Using Broydens method to find optimal parameters
    -res = minimize(Energy, x0, method='BFGS', jac=EnergyDerivative, options={'gtol': 1e-4,'disp': True})
    -x0 = res.x
    -# Compute the energy again with the optimal parameters and increased number of Monte Cycles
    -NumberMCcycles= 2**19
    -Printout = True
    -FinalEnergy = Energy(x0)
    -EResult = np.array([FinalEnergy,FinalEnergy])
    -outfile.close()
    -#nice printout with Pandas
    -import pandas as pd
    -from pandas import DataFrame
    -data ={'Optimal Parameters':x0, 'Final Energy':EResult}
    -frame = pd.DataFrame(data)
    -print(frame)
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - +

    Blocking Transformations

    + +

Using the +definition of the blocking transformation and the bilinearity +of the covariance, it is clear that, with \( h =|i-j| \), +we can define +

+$$ +\begin{align} +\gamma_{k+1}(h) &= \mathrm{cov}\left( ({X}_{k+1})_{i}, ({X}_{k+1})_{j} \right) \nonumber \\ +&= \frac{1}{4}\mathrm{cov}\left( ({X}_{k})_{2i-1} + ({X}_{k})_{2i}, ({X}_{k})_{2j-1} + ({X}_{k})_{2j} \right) \nonumber \\ +&= \frac{1}{2}\gamma_{k}(2h) + \frac{1}{2}\gamma_k(2h+1) \quad \mathrm{if}\quad h = 0 +\tag{2}\\ +&=\frac{1}{4}\gamma_k(2h-1) + \frac{1}{2}\gamma_k(2h) + \frac{1}{4}\gamma_k(2h+1) \quad \mathrm{otherwise} +\tag{3} +\end{align} +$$ + 

Since \( \hat{X} \) is asymptotically uncorrelated by assumption, \( \hat{X}_k \) is also asymptotically uncorrelated. Let us turn our attention to the variance of the sample +mean \( \mathrm{var}(\overline{X}) \). +
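Relations (2) and (3) can be checked numerically up to sampling noise. The sketch below (helper names `gamma` and the AR(1) setup are ours) compares the empirical autocovariance at lag zero after one blocking transformation against the right-hand side of eq. (2):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2**16
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    # AR(1) process: a correlated, stationary time series
    x[t] = 0.5 * x[t - 1] + rng.normal()

def gamma(x, h):
    # empirical autocovariance at lag h
    xc = x - x.mean()
    return (xc[:len(x) - h] * xc[h:]).mean()

x1 = 0.5 * (x[0::2] + x[1::2])   # one blocking transformation
lhs = gamma(x1, 0)
rhs = 0.5 * gamma(x, 0) + 0.5 * gamma(x, 1)
print(lhs, rhs)
```

The two numbers agree up to fluctuations of order \( 1/\sqrt{n} \), illustrating \( \gamma_{k+1}(0) = \tfrac{1}{2}\gamma_k(0) + \tfrac{1}{2}\gamma_k(1) \).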

    @@ -400,6 +188,9 @@

Example code from last week
  • diff --git a/doc/pub/week9/html/._week9-bs018.html b/doc/pub/week9/html/._week9-bs018.html index 925c9cbe..db3ecf38 100644 --- a/doc/pub/week9/html/._week9-bs018.html +++ b/doc/pub/week9/html/._week9-bs018.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
• Example code from last week
  • +
  • Resampling analysis
  • @@ -144,90 +150,24 @@

     

     

     

    -

    Resampling analysis

    - -

The next step is then to use the above data sets and perform a resampling analysis using the blocking method. The blocking code, based on the article of Marius Jonsson, is given here.

    - - - -
    -
    -
    -
    -
    -
    # Common imports
    -import os
    -
    -# Where to save the figures and data files
    -DATA_ID = "Results/EnergyMin"
    -
    -def data_path(dat_id):
    -    return os.path.join(DATA_ID, dat_id)
    -
    -infile = open(data_path("Energies.dat"),'r')
    -
    -from numpy import log2, zeros, mean, var, sum, loadtxt, arange, array, cumsum, dot, transpose, diagonal, sqrt
    -from numpy.linalg import inv
    -
    -def block(x):
    -    # preliminaries
    -    n = len(x)
    -    d = int(log2(n))
    -    s, gamma = zeros(d), zeros(d)
    -    mu = mean(x)
    -
    -    # estimate the auto-covariance and variances 
    -    # for each blocking transformation
    -    for i in arange(0,d):
    -        n = len(x)
    -        # estimate autocovariance of x
    -        gamma[i] = (n)**(-1)*sum( (x[0:(n-1)]-mu)*(x[1:n]-mu) )
    -        # estimate variance of x
    -        s[i] = var(x)
    -        # perform blocking transformation
    -        x = 0.5*(x[0::2] + x[1::2])
    -   
    -    # generate the test observator M_k from the theorem
    -    M = (cumsum( ((gamma/s)**2*2**arange(1,d+1)[::-1])[::-1] )  )[::-1]
    -
    -    # we need a list of magic numbers
    -    q =array([6.634897,9.210340, 11.344867, 13.276704, 15.086272, 16.811894, 18.475307, 20.090235, 21.665994, 23.209251, 24.724970, 26.216967, 27.688250, 29.141238, 30.577914, 31.999927, 33.408664, 34.805306, 36.190869, 37.566235, 38.932173, 40.289360, 41.638398, 42.979820, 44.314105, 45.641683, 46.962942, 48.278236, 49.587884, 50.892181])
    -
    -    # use magic to determine when we should have stopped blocking
    -    for k in arange(0,d):
    -        if(M[k] < q[k]):
    -            break
    -    if (k >= d-1):
    -        print("Warning: Use more data")
    -    return mu, s[k]/2**(d-k)
    -
    -
    -x = loadtxt(infile)
    -(mean, var) = block(x) 
    -std = sqrt(var)
    -import pandas as pd
    -from pandas import DataFrame
    -data ={'Mean':[mean], 'STDev':[std]}
    -frame = pd.DataFrame(data,index=['Values'])
    -print(frame)

    Blocking Transformations, getting there

    +

    We have

$$
\begin{align}
\mathrm{var}(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2.
\tag{4}
\end{align}
$$

    The term \( e_k \) is called the truncation error:

$$
\begin{equation}
e_k = \frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h).
\tag{5}
\end{equation}
$$

    We can show that \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).
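This equality follows from the fact that the blocking transformation leaves the sample mean itself unchanged, which is easy to verify numerically. A minimal sketch, with illustrative random test data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2**10)

means = []
for k in range(6):
    means.append(np.mean(x))
    # one blocking transformation: average neighbouring pairs
    x = 0.5 * (x[0::2] + x[1::2])

# the sample mean is invariant under blocking (up to round-off),
# hence var(mean) is the same at every blocking level
print(means)
```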

    @@ -244,6 +184,10 @@

    Resampling analysis

  • diff --git a/doc/pub/week9/html/._week9-bs019.html b/doc/pub/week9/html/._week9-bs019.html index d3b0ada8..0366c982 100644 --- a/doc/pub/week9/html/._week9-bs019.html +++ b/doc/pub/week9/html/._week9-bs019.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -62,48 +63,13 @@ 2, None, 'introducing-the-correlation-function'), - ('Statistics, wrapping up from last week', - 2, - None, - 'statistics-wrapping-up-from-last-week'), - ('Statistics, final expression', - 2, - None, - 'statistics-final-expression'), - ('Statistics, effective number of correlations', - 2, - None, - 'statistics-effective-number-of-correlations'), - ('Can we understand this? Time Auto-correlation Function', - 2, - None, - 'can-we-understand-this-time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Correlation Time', 2, None, 'correlation-time'), ('Resampling methods: Blocking', 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -157,29 +123,22 @@
  • @@ -191,44 +150,34 @@

     

     

     

    -

    Time Auto-correlation Function

    - -
    -
    - +

    Blocking Transformations, final expressions

    -

    We rewrite this relation as

    +

    We can then wrap up

$$
 \langle \mathbf{M}(t) \rangle = \mathbf{\hat{w}}(t)\mathbf{m}=\sum_i\lambda_i^t\alpha_i\mathbf{\hat{v}}_i\mathbf{m}_i.
$$

$$
\begin{align}
n_{j+1} \overline{X}_{j+1} &= \sum_{i=1}^{n_{j+1}} (\hat{X}_{j+1})_i = \frac{1}{2}\sum_{i=1}^{n_{j}/2} \left[ (\hat{X}_{j})_{2i-1} + (\hat{X}_{j})_{2i} \right] \nonumber \\
&= \frac{1}{2}\left[ (\hat{X}_j)_1 + (\hat{X}_j)_2 + \cdots + (\hat{X}_j)_{n_j} \right] = \underbrace{\frac{n_j}{2}}_{=n_{j+1}} \overline{X}_j = n_{j+1}\overline{X}_j.
\tag{6}
\end{align}
$$

If we define \( m_i=\mathbf{\hat{v}}_i\mathbf{m}_i \) as the expectation value of \( \mathbf{M} \) in the \( i^{\mathrm{th}} \) eigenstate we can rewrite the last equation as

    +

    By repeated use of this equation we get \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_0) = \mathrm{var}(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

$$
 \langle \mathbf{M}(t) \rangle = \sum_i\lambda_i^t\alpha_im_i.
$$

$$
\begin{align}
\mathrm{var}(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1.
\tag{7}
\end{align}
$$

Since in the limit \( t\rightarrow \infty \) the mean value is dominated by the largest eigenvalue \( \lambda_0 \), we can rewrite the last equation as

Flyvbjerg and Petersen demonstrated that the sequence \( \{e_k\}_{k=0}^{d-1} \) is decreasing, and conjectured that the term \( e_k \) can be made as small as we would like by making \( k \) (and hence \( d \)) sufficiently large; a rigorous proof that the sequence is decreasing is given in the Master of Science thesis of Marius Jonsson (UiO, 2018). This means we can apply blocking transformations until \( e_k \) is sufficiently small, and then estimate \( \mathrm{var}(\overline{X}) \) by \( \widehat{\sigma}^2_k/n_k \).

$$
 \langle \mathbf{M}(t) \rangle = \langle \mathbf{M}(\infty) \rangle+\sum_{i\ne 0}\lambda_i^t\alpha_im_i.
$$

    We define the quantity

$$
 \tau_i=-\frac{1}{\log{\lambda_i}},
$$

    and rewrite the last expectation value as

$$
 \langle \mathbf{M}(t) \rangle = \langle \mathbf{M}(\infty) \rangle+\sum_{i\ne 0}\alpha_im_ie^{-t/\tau_i}.
\tag{4}
$$
    -
    +

    For an elegant solution and proof of the blocking method, see the recent article of Marius Jonsson (former MSc student of the Computational Physics group).
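The plain (non-automated) version of this procedure is the classic Flyvbjerg–Petersen analysis: block repeatedly, compute \( \widehat{\sigma}^2_k/n_k \) at each level \( k \), and read off the plateau. A minimal sketch on synthetic correlated data; the AR(1) chain and the function name are illustrative assumptions, not the lecture's code:

```python
import numpy as np

def blocking_estimates(x):
    # sigma_k^2 / n_k at each blocking level k; a plateau signals
    # that the truncation error e_k has become negligible
    estimates = []
    while len(x) >= 4:
        estimates.append(np.var(x) / len(x))
        x = 0.5 * (x[0::2] + x[1::2])
    return estimates

rng = np.random.default_rng(7)
x = np.zeros(2**14)
for i in range(1, len(x)):      # strongly correlated AR(1) data
    x[i] = 0.9 * x[i - 1] + rng.normal()

for k, est in enumerate(blocking_estimates(x)):
    print(k, est)
```

The estimates grow with \( k \) while correlations are being removed and then level off; the plateau value is the estimate of \( \mathrm{var}(\overline{X}) \).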

    @@ -247,13 +196,6 @@

    Time Auto-correlation Fun
  • diff --git a/doc/pub/week9/html/._week9-bs020.html b/doc/pub/week9/html/._week9-bs020.html index 882b8532..29d40bbe 100644 --- a/doc/pub/week9/html/._week9-bs020.html +++ b/doc/pub/week9/html/._week9-bs020.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -62,48 +63,13 @@ 2, None, 'introducing-the-correlation-function'), - ('Statistics, wrapping up from last week', - 2, - None, - 'statistics-wrapping-up-from-last-week'), - ('Statistics, final expression', - 2, - None, - 'statistics-final-expression'), - ('Statistics, effective number of correlations', - 2, - None, - 'statistics-effective-number-of-correlations'), - ('Can we understand this? Time Auto-correlation Function', - 2, - None, - 'can-we-understand-this-time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Correlation Time', 2, None, 'correlation-time'), ('Resampling methods: Blocking', 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -157,29 +123,22 @@
  • @@ -191,32 +150,243 @@

     

     

     

    -

    Time Auto-correlation Function

    -
    -
    - - -

The quantities \( \tau_i \) are the correlation times for the system. They also control the auto-correlation function discussed above. The longest correlation time is obviously \( \tau_1 \), the one associated with the second largest eigenvalue \( \lambda_1 \); it normally defines the correlation time discussed above. For large times, this is the only correlation time that survives. If the higher eigenvalues of the transition matrix are well separated from \( \lambda_1 \) and we simulate long enough, \( \tau_1 \) may well define the correlation time. In other cases we may not be able to extract a reliable result for \( \tau_1 \). Coming back to the time correlation function \( \phi(t) \), we can present a more general definition in terms of the mean magnetizations \( \langle \mathbf{M}(t) \rangle \). Recalling that the mean value is equal to \( \langle \mathbf{M}(\infty) \rangle \), we arrive at the expectation values

$$
\phi(t) =\langle \mathbf{M}(0)-\mathbf{M}(\infty)\rangle \langle \mathbf{M}(t)-\mathbf{M}(\infty)\rangle,
$$

    resulting in

$$
\phi(t) =\sum_{i,j\ne 0}m_i\alpha_im_j\alpha_je^{-t/\tau_i},
$$

    which is appropriate for all times.
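In practice \( \phi(t) \) is estimated directly from the time series; for data with a single dominating correlation time the normalized estimate decays exponentially, as the eigenvalue expansion above predicts. A minimal sketch with illustrative names, where AR(1) test data play the role of the measured observable:

```python
import numpy as np

def phi(x, t):
    # sample estimate of the time correlation function phi(t)
    n = len(x)
    mu = np.mean(x)
    return np.sum((x[:n - t] - mu) * (x[t:] - mu)) / n

rng = np.random.default_rng(5)
x = np.zeros(2**15)
for i in range(1, len(x)):
    x[i] = 0.8 * x[i - 1] + rng.normal()

# for AR(1) data phi(t)/phi(0) = 0.8**t, a single exponential
# with correlation time tau = -1/log(0.8)
for t in (0, 1, 2, 4, 8):
    print(t, phi(x, t) / phi(x, 0))
```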

    +

Example code from last week

    + + +
    +
    +
    +
    +
    +
    # 2-electron VMC code for 2dim quantum dot with importance sampling
    +# Using gaussian rng for new positions and Metropolis- Hastings 
    +# Added energy minimization
    +from math import exp, sqrt
    +from random import random, seed, normalvariate
    +import numpy as np
    +import matplotlib.pyplot as plt
    +from mpl_toolkits.mplot3d import Axes3D
    +from matplotlib import cm
    +from matplotlib.ticker import LinearLocator, FormatStrFormatter
    +from scipy.optimize import minimize
    +import sys
    +import os
    +
    +# Where to save data files
    +PROJECT_ROOT_DIR = "Results"
    +DATA_ID = "Results/EnergyMin"
    +
    +if not os.path.exists(PROJECT_ROOT_DIR):
    +    os.mkdir(PROJECT_ROOT_DIR)
    +
    +if not os.path.exists(DATA_ID):
    +    os.makedirs(DATA_ID)
    +
    +def data_path(dat_id):
    +    return os.path.join(DATA_ID, dat_id)
    +
    +outfile = open(data_path("Energies.dat"),'w')
    +
    +
    +# Trial wave function for the 2-electron quantum dot in two dims
    +def WaveFunction(r,alpha,beta):
    +    r1 = r[0,0]**2 + r[0,1]**2
    +    r2 = r[1,0]**2 + r[1,1]**2
    +    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    +    deno = r12/(1+beta*r12)
    +    return exp(-0.5*alpha*(r1+r2)+deno)
    +
    +# Local energy  for the 2-electron quantum dot in two dims, using analytical local energy
    +def LocalEnergy(r,alpha,beta):
    +    
    +    r1 = (r[0,0]**2 + r[0,1]**2)
    +    r2 = (r[1,0]**2 + r[1,1]**2)
    +    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    +    deno = 1.0/(1+beta*r12)
    +    deno2 = deno*deno
    +    return 0.5*(1-alpha*alpha)*(r1 + r2) +2.0*alpha + 1.0/r12+deno2*(alpha*r12-deno2+2*beta*deno-1.0/r12)
    +
    +# Derivate of wave function ansatz as function of variational parameters
    +def DerivativeWFansatz(r,alpha,beta):
    +    
    +    WfDer  = np.zeros((2), np.double)
    +    r1 = (r[0,0]**2 + r[0,1]**2)
    +    r2 = (r[1,0]**2 + r[1,1]**2)
    +    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    +    deno = 1.0/(1+beta*r12)
    +    deno2 = deno*deno
    +    WfDer[0] = -0.5*(r1+r2)
    +    WfDer[1] = -r12*r12*deno2
    +    return  WfDer
    +
    +# Setting up the quantum force for the two-electron quantum dot, recall that it is a vector
    +def QuantumForce(r,alpha,beta):
    +
    +    qforce = np.zeros((NumberParticles,Dimension), np.double)
    +    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    +    deno = 1.0/(1+beta*r12)
+    # quantum force F = 2*grad(psi)/psi: harmonic term plus Jastrow term
+    qforce[0,:] = -2*alpha*r[0,:] + 2*(r[0,:]-r[1,:])*deno*deno/r12
+    qforce[1,:] = -2*alpha*r[1,:] + 2*(r[1,:]-r[0,:])*deno*deno/r12
    +    return qforce
    +    
    +
    +# Computing the derivative of the energy and the energy 
    +def EnergyDerivative(x0):
    +
    +    
    +    # Parameters in the Fokker-Planck simulation of the quantum force
    +    D = 0.5
    +    TimeStep = 0.05
    +    # positions
    +    PositionOld = np.zeros((NumberParticles,Dimension), np.double)
    +    PositionNew = np.zeros((NumberParticles,Dimension), np.double)
    +    # Quantum force
    +    QuantumForceOld = np.zeros((NumberParticles,Dimension), np.double)
    +    QuantumForceNew = np.zeros((NumberParticles,Dimension), np.double)
    +
    +    energy = 0.0
    +    DeltaE = 0.0
    +    alpha = x0[0]
    +    beta = x0[1]
    +    EnergyDer = 0.0
    +    DeltaPsi = 0.0
    +    DerivativePsiE = 0.0 
    +    #Initial position
    +    for i in range(NumberParticles):
    +        for j in range(Dimension):
    +            PositionOld[i,j] = normalvariate(0.0,1.0)*sqrt(TimeStep)
    +    wfold = WaveFunction(PositionOld,alpha,beta)
    +    QuantumForceOld = QuantumForce(PositionOld,alpha, beta)
    +
    +    #Loop over MC MCcycles
    +    for MCcycle in range(NumberMCcycles):
    +        #Trial position moving one particle at the time
    +        for i in range(NumberParticles):
    +            for j in range(Dimension):
    +                PositionNew[i,j] = PositionOld[i,j]+normalvariate(0.0,1.0)*sqrt(TimeStep)+\
    +                                       QuantumForceOld[i,j]*TimeStep*D
    +            wfnew = WaveFunction(PositionNew,alpha,beta)
    +            QuantumForceNew = QuantumForce(PositionNew,alpha, beta)
    +            GreensFunction = 0.0
    +            for j in range(Dimension):
    +                GreensFunction += 0.5*(QuantumForceOld[i,j]+QuantumForceNew[i,j])*\
    +	                              (D*TimeStep*0.5*(QuantumForceOld[i,j]-QuantumForceNew[i,j])-\
    +                                      PositionNew[i,j]+PositionOld[i,j])
    +      
    +            GreensFunction = exp(GreensFunction)
    +            ProbabilityRatio = GreensFunction*wfnew**2/wfold**2
    +            #Metropolis-Hastings test to see whether we accept the move
    +            if random() <= ProbabilityRatio:
    +                for j in range(Dimension):
    +                    PositionOld[i,j] = PositionNew[i,j]
    +                    QuantumForceOld[i,j] = QuantumForceNew[i,j]
    +                wfold = wfnew
    +        DeltaE = LocalEnergy(PositionOld,alpha,beta)
    +        DerPsi = DerivativeWFansatz(PositionOld,alpha,beta)
    +        DeltaPsi += DerPsi
    +        energy += DeltaE
    +        DerivativePsiE += DerPsi*DeltaE
    +            
    +    # We calculate mean values
    +    energy /= NumberMCcycles
    +    DerivativePsiE /= NumberMCcycles
    +    DeltaPsi /= NumberMCcycles
    +    EnergyDer  = 2*(DerivativePsiE-DeltaPsi*energy)
    +    return EnergyDer
    +
    +
    +# Computing the expectation value of the local energy 
    +def Energy(x0):
    +    # Parameters in the Fokker-Planck simulation of the quantum force
    +    D = 0.5
    +    TimeStep = 0.05
    +    # positions
    +    PositionOld = np.zeros((NumberParticles,Dimension), np.double)
    +    PositionNew = np.zeros((NumberParticles,Dimension), np.double)
    +    # Quantum force
    +    QuantumForceOld = np.zeros((NumberParticles,Dimension), np.double)
    +    QuantumForceNew = np.zeros((NumberParticles,Dimension), np.double)
    +
    +    energy = 0.0
    +    DeltaE = 0.0
    +    alpha = x0[0]
    +    beta = x0[1]
    +    #Initial position
    +    for i in range(NumberParticles):
    +        for j in range(Dimension):
    +            PositionOld[i,j] = normalvariate(0.0,1.0)*sqrt(TimeStep)
    +    wfold = WaveFunction(PositionOld,alpha,beta)
    +    QuantumForceOld = QuantumForce(PositionOld,alpha, beta)
    +
    +    #Loop over MC MCcycles
    +    for MCcycle in range(NumberMCcycles):
    +        #Trial position moving one particle at the time
    +        for i in range(NumberParticles):
    +            for j in range(Dimension):
    +                PositionNew[i,j] = PositionOld[i,j]+normalvariate(0.0,1.0)*sqrt(TimeStep)+\
    +                                       QuantumForceOld[i,j]*TimeStep*D
    +            wfnew = WaveFunction(PositionNew,alpha,beta)
    +            QuantumForceNew = QuantumForce(PositionNew,alpha, beta)
    +            GreensFunction = 0.0
    +            for j in range(Dimension):
    +                GreensFunction += 0.5*(QuantumForceOld[i,j]+QuantumForceNew[i,j])*\
    +	                              (D*TimeStep*0.5*(QuantumForceOld[i,j]-QuantumForceNew[i,j])-\
    +                                      PositionNew[i,j]+PositionOld[i,j])
    +      
    +            GreensFunction = exp(GreensFunction)
    +            ProbabilityRatio = GreensFunction*wfnew**2/wfold**2
    +            #Metropolis-Hastings test to see whether we accept the move
    +            if random() <= ProbabilityRatio:
    +                for j in range(Dimension):
    +                    PositionOld[i,j] = PositionNew[i,j]
    +                    QuantumForceOld[i,j] = QuantumForceNew[i,j]
    +                wfold = wfnew
    +        DeltaE = LocalEnergy(PositionOld,alpha,beta)
    +        energy += DeltaE
    +        if Printout: 
    +           outfile.write('%f\n' %(energy/(MCcycle+1.0)))            
    +    # We calculate mean values
    +    energy /= NumberMCcycles
    +    return energy
    +
    +#Here starts the main program with variable declarations
    +NumberParticles = 2
    +Dimension = 2
    +# seed for rng generator 
    +seed()
    +# Monte Carlo cycles for parameter optimization
    +Printout = False
    +NumberMCcycles= 10000
    +# guess for variational parameters
    +x0 = np.array([0.9,0.2])
    +# Using Broydens method to find optimal parameters
    +res = minimize(Energy, x0, method='BFGS', jac=EnergyDerivative, options={'gtol': 1e-4,'disp': True})
    +x0 = res.x
    +# Compute the energy again with the optimal parameters and increased number of Monte Cycles
    +NumberMCcycles= 2**19
    +Printout = True
    +FinalEnergy = Energy(x0)
    +EResult = np.array([FinalEnergy,FinalEnergy])
    +outfile.close()
    +#nice printout with Pandas
    +import pandas as pd
    +from pandas import DataFrame
    +data ={'Optimal Parameters':x0, 'Final Energy':EResult}
    +frame = pd.DataFrame(data)
    +print(frame)
    @@ -236,13 +406,6 @@

    Time Auto-correlation Fun
  • diff --git a/doc/pub/week9/html/._week9-bs021.html b/doc/pub/week9/html/._week9-bs021.html index 7c87cad8..a017baf4 100644 --- a/doc/pub/week9/html/._week9-bs021.html +++ b/doc/pub/week9/html/._week9-bs021.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -62,48 +63,13 @@ 2, None, 'introducing-the-correlation-function'), - ('Statistics, wrapping up from last week', - 2, - None, - 'statistics-wrapping-up-from-last-week'), - ('Statistics, final expression', - 2, - None, - 'statistics-final-expression'), - ('Statistics, effective number of correlations', - 2, - None, - 'statistics-effective-number-of-correlations'), - ('Can we understand this? Time Auto-correlation Function', - 2, - None, - 'can-we-understand-this-time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Time Auto-correlation Function', - 2, - None, - 'time-auto-correlation-function'), - ('Correlation Time', 2, None, 'correlation-time'), ('Resampling methods: Blocking', 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -157,29 +123,22 @@
  • @@ -191,25 +150,88 @@

     

     

     

    -

    Correlation Time

    -
    -
    - +

    Resampling analysis

    + +

The next step is then to use the above data sets and perform a resampling analysis using the blocking method. The blocking code, based on the article of Marius Jonsson, is given here.

    + + + +
    +
    +
    +
    +
    +
    # Common imports
    +import os
     
    -

If the correlation function decays exponentially
$$ \phi (t) \sim \exp{(-t/\tau)}, $$
then the exponential correlation time can be computed as the average
$$ \tau_{\mathrm{exp}} = -\left\langle \frac{t}{\log{\left|\frac{\phi(t)}{\phi(0)}\right|}} \right\rangle. $$
If the decay is exponential, then
$$ \int_0^{\infty} dt \phi(t) = \int_0^{\infty} dt \phi(0)\exp{(-t/\tau)} = \tau \phi(0), $$
which suggests another measure of correlation
$$ \tau_{\mathrm{int}} = \sum_k \frac{\phi(k)}{\phi(0)}, $$
called the integrated correlation time.

# Where to save the figures and data files
DATA_ID = "Results/EnergyMin"

def data_path(dat_id):
    return os.path.join(DATA_ID, dat_id)

infile = open(data_path("Energies.dat"),'r')

from numpy import log2, zeros, mean, var, sum, loadtxt, arange, array, cumsum, dot, transpose, diagonal, sqrt
from numpy.linalg import inv

def block(x):
    # preliminaries
    n = len(x)
    d = int(log2(n))
    s, gamma = zeros(d), zeros(d)
    mu = mean(x)

    # estimate the auto-covariance and variances
    # for each blocking transformation
    for i in arange(0,d):
        n = len(x)
        # estimate autocovariance of x
        gamma[i] = (n)**(-1)*sum( (x[0:(n-1)]-mu)*(x[1:n]-mu) )
        # estimate variance of x
        s[i] = var(x)
        # perform blocking transformation
        x = 0.5*(x[0::2] + x[1::2])

    # generate the test observator M_k from the theorem
    M = (cumsum( ((gamma/s)**2*2**arange(1,d+1)[::-1])[::-1] ))[::-1]

    # we need a list of magic numbers
    q = array([6.634897, 9.210340, 11.344867, 13.276704, 15.086272, 16.811894, 18.475307, 20.090235, 21.665994, 23.209251, 24.724970, 26.216967, 27.688250, 29.141238, 30.577914, 31.999927, 33.408664, 34.805306, 36.190869, 37.566235, 38.932173, 40.289360, 41.638398, 42.979820, 44.314105, 45.641683, 46.962942, 48.278236, 49.587884, 50.892181])

    # use magic to determine when we should have stopped blocking
    for k in arange(0,d):
        if(M[k] < q[k]):
            break
    if (k >= d-1):
        print("Warning: Use more data")
    return mu, s[k]/2**(d-k)


x = loadtxt(infile)
(mean, var) = block(x)
std = sqrt(var)
import pandas as pd
from pandas import DataFrame
data = {'Mean':[mean], 'STDev':[std]}
frame = pd.DataFrame(data, index=['Values'])
print(frame)
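A quick way to check that the automated blocking algorithm behaves sensibly is to feed it synthetic correlated data whose error is known analytically. The snippet below is a compact, self-contained restatement of the same method (NumPy-namespaced; the magic numbers are the 99% chi-squared quantiles, truncated to the levels needed here), and the AR(1) chain is an illustrative assumption:

```python
import numpy as np

def block(x):
    # compact restatement of the automated blocking algorithm
    n = len(x)
    d = int(np.log2(n))
    s, gamma = np.zeros(d), np.zeros(d)
    mu = np.mean(x)
    for i in range(d):
        n = len(x)
        gamma[i] = np.sum((x[0:n-1] - mu) * (x[1:n] - mu)) / n
        s[i] = np.var(x)
        x = 0.5 * (x[0::2] + x[1::2])
    M = (np.cumsum(((gamma / s)**2 * 2**np.arange(1, d + 1)[::-1])[::-1]))[::-1]
    # 99% chi-squared quantiles, one per remaining blocking level
    q = np.array([6.634897, 9.210340, 11.344867, 13.276704, 15.086272,
                  16.811894, 18.475307, 20.090235, 21.665994, 23.209251,
                  24.724970, 26.216967, 27.688250, 29.141238])
    for k in range(d):
        if M[k] < q[k]:
            break
    return mu, s[k] / 2**(d - k)

rng = np.random.default_rng(3)
x = np.zeros(2**14)
for i in range(1, len(x)):          # AR(1) data, true std of mean ~ 0.026
    x[i] = 0.7 * x[i - 1] + rng.normal()

mu, var_mean = block(x)
print(mu, np.sqrt(var_mean))
```

The printed standard deviation should be close to the analytical value for this chain, whereas the naive estimate \( \sqrt{\mathrm{var}(x)/n} \) would underestimate it by roughly a factor \( \sqrt{(1+0.7)/(1-0.7)} \).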
    @@ -228,14 +250,6 @@

    Correlation Time

  • diff --git a/doc/pub/week9/html/week9-bs.html b/doc/pub/week9/html/week9-bs.html index 7c08e6d5..0ee9f022 100644 --- a/doc/pub/week9/html/week9-bs.html +++ b/doc/pub/week9/html/week9-bs.html @@ -47,6 +47,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -66,7 +67,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -120,19 +123,22 @@
  • Statistical analysis
  • And why do we use such methods?
  • Central limit theorem
  • -
  • Running many measurements
  • -
  • Adding more definitions
  • -
  • Further rewriting
  • -
  • The covariance term
  • -
  • Rewriting the covariance term
  • -
  • Introducing the correlation function
  • -
  • Resampling methods: Blocking
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations
  • -
  • Blocking Transformations, getting there
  • -
  • Blocking Transformations, final expressions
  • -
  • Example code form last week
  • -
  • Resampling analysis
  • +
  • Further remarks
  • +
  • Running many measurements
  • +
  • Adding more definitions
  • +
  • Further rewriting
  • +
  • The covariance term
  • +
  • Rewriting the covariance term
  • +
  • Introducing the correlation function
  • +
  • Resampling methods: Blocking
  • +
  • Why blocking?
  • +
  • Blocking Transformations
  • +
  • Blocking transformations
  • +
  • Blocking Transformations
  • +
  • Blocking Transformations, getting there
  • +
  • Blocking Transformations, final expressions
  • +
• Example code from last week
  • +
  • Resampling analysis
  • @@ -187,7 +193,7 @@

    March 11-15

• 9
• 10
• ...
- • 19
+ • 22
• »
  • diff --git a/doc/pub/week9/html/week9-reveal.html b/doc/pub/week9/html/week9-reveal.html index bdd02a2a..e6b89bc5 100644 --- a/doc/pub/week9/html/week9-reveal.html +++ b/doc/pub/week9/html/week9-reveal.html @@ -289,9 +289,13 @@

    Central limit theorem

    $$

     

    + + +
    +

    Further remarks

    Note that we use \( n \) instead of \( n-1 \) in the definition of -variance. The sample variance and mean are not necessarily equal to +variance. The sample variance and the sample mean are not necessarily equal to the exact values we would get if we knew the corresponding probability distribution.

    @@ -439,7 +443,7 @@

    Resampling methods: Blocking

    Assume \( n = 2^d \) for some integer \( d>1 \) and \( X_1,X_2,\cdots, X_n \) is a stationary time series to begin with. -Moreover, assume that the time series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define: +Moreover, assume that the series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define:

     
    $$ @@ -448,6 +452,10 @@

    Resampling methods: Blocking

    \end{align*} $$

     
    +

    + +
    +

    Why blocking?

The strength of the blocking method shows when the number of observations \( n \) is large. For large \( n \), the complexity of dependent
bootstrapping scales poorly, but the blocking method does not, @@ -458,40 +466,43 @@

    Resampling methods: Blocking

    Blocking Transformations

    -

    We now define -blocking transformations. The idea is to take the mean of subsequent -pair of elements from \( \vec{X} \) and form a new vector -\( \vec{X}_1 \). Continuing in the same way by taking the mean of -subsequent pairs of elements of \( \vec{X}_1 \) we obtain \( \vec{X}_2 \), and +

    We now define the blocking transformations. The idea is to take the mean of subsequent +pair of elements from \( \boldsymbol{X} \) and form a new vector +\( \boldsymbol{X}_1 \). Continuing in the same way by taking the mean of +subsequent pairs of elements of \( \boldsymbol{X}_1 \) we obtain \( \boldsymbol{X}_2 \), and so on. -Define \( \vec{X}_i \) recursively by: +Define \( \boldsymbol{X}_i \) recursively by:

     
$$ \begin{align}
-(\vec{X}_0)_k &\equiv (\vec{X})_k \nonumber \\
-(\vec{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\vec{X}_i)_{2k-1} +
-(\vec{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
+(\boldsymbol{X}_0)_k &\equiv (\boldsymbol{X})_k \nonumber \\
+(\boldsymbol{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\boldsymbol{X}_i)_{2k-1} +
+(\boldsymbol{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
\tag{1} \end{align} $$
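A single blocking transformation as in (1) is one line of NumPy; a minimal sketch (the helper name is our own):

```python
import numpy as np

def blocking_step(x):
    # (X_{i+1})_k = ((X_i)_{2k-1} + (X_i)_{2k}) / 2 : mean of adjacent non-overlapping pairs
    x = np.asarray(x, dtype=float)
    return 0.5 * (x[0::2] + x[1::2])

x0 = np.arange(8.0)      # X_0 with n = 2^3 elements
x1 = blocking_step(x0)   # X_1 has n/2 elements
x2 = blocking_step(x1)   # X_2 has n/4 elements
```

Each step halves the length of the vector and leaves the sample mean unchanged.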

     
    +

    -

    The quantity \( \vec{X}_k \) is +

    +

    Blocking transformations

    + +

The quantity \( \boldsymbol{X}_k \) is
subject to \( k \) blocking transformations. We now have \( d \) vectors
-\( \vec{X}_0, \vec{X}_1,\cdots,\vec X_{d-1} \) containing the subsequent
+\( \boldsymbol{X}_0, \boldsymbol{X}_1,\cdots,\boldsymbol{X}_{d-1} \) containing the subsequent
averages of observations. It turns out that if the components of
-\( \vec{X} \) is a stationary time series, then the components of
-\( \vec{X}_i \) is a stationary time series for all \( 0 \leq i \leq d-1 \)
+\( \boldsymbol{X} \) form a stationary time series, then the components of
+\( \boldsymbol{X}_i \) form a stationary time series for all \( 0 \leq i \leq d-1 \)

We can then compute the autocovariance, the variance, sample mean, and
number of observations for each \( i \). Let \( \gamma_i, \sigma_i^2,
-\overline{X}_i \) denote the autocovariance, variance and average of the
-elements of \( \vec{X}_i \) and let \( n_i \) be the number of elements of
-\( \vec{X}_i \). It follows by induction that \( n_i = n/2^i \).
+\overline{X}_i \) denote the autocovariance, variance and average of the
+elements of \( \boldsymbol{X}_i \) and let \( n_i \) be the number of elements of
+\( \boldsymbol{X}_i \). It follows by induction that \( n_i = n/2^i \).

    @@ -516,7 +527,9 @@

    Blocking Transformations

    $$

     
    -

    The quantity \( \hat{X} \) is asymptotic uncorrelated by assumption, \( \hat{X}_k \) is also asymptotic uncorrelated. Let's turn our attention to the variance of the sample mean \( V(\overline{X}) \).

    +

The quantity \( \hat{X} \) is asymptotically uncorrelated by assumption, so \( \hat{X}_k \) is also asymptotically uncorrelated. Let's turn our attention to the variance of the sample +mean \( \mathrm{var}(\overline{X}) \). +

    @@ -525,7 +538,7 @@

    Blocking Transformations, gettin

     
    $$ \begin{align} -V(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. +\mathrm{var}(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. \tag{4} \end{align} $$ @@ -541,7 +554,7 @@

    Blocking Transformations, gettin $$

     
    -

    We can show that \( V(\overline{X}_i) = V(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).

    +

    We can show that \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).
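The expression in (4) can be evaluated directly from data. A minimal sketch follows; the name `var_of_mean` and the `max_lag` cutoff are our own additions. Note that if all lags are summed with these sample-mean-centred autocovariance estimates, the whole expression cancels to exactly zero, so a cutoff is unavoidable:

```python
import numpy as np

def var_of_mean(x, max_lag):
    # var(Xbar) = sigma^2/n + (2/n) * sum_{h=1}^{max_lag} (1 - h/n) * gamma(h)
    x = np.asarray(x, dtype=float)
    n = len(x)
    mu = x.mean()
    sigma2 = np.mean((x - mu)**2)          # gamma(0)
    e = 0.0
    for h in range(1, max_lag + 1):
        # autocovariance estimate at lag h
        gamma_h = np.sum((x[:n-h] - mu) * (x[h:] - mu)) / (n - h)
        e += (1.0 - h/n) * gamma_h
    return sigma2/n + (2.0/n) * e
```

For uncorrelated data the correction term \( e_k \) is negligible and the result reduces to the familiar \( \sigma^2/n \).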

    @@ -558,11 +571,11 @@

    Blocking Transformations, fi $$

     
    -

    By repeated use of this equation we get \( V(\overline{X}_i) = V(\overline{X}_0) = V(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

    +

    By repeated use of this equation we get \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_0) = \mathrm{var}(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

     
    $$ \begin{align} -V(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \tag{7} +\mathrm{var}(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \tag{7} \end{align} $$

     
    @@ -572,7 +585,7 @@

Blocking Transformations, fi \( e_k \) can be made as small as we would like by making \( k \) (and hence \( d \)) sufficiently large. The sequence is decreasing (Master of Science thesis by Marius Jonsson, UiO 2018). This means we can apply blocking transformations until -\( e_k \) is sufficiently small, and then estimate \( V(\overline{X}) \) by +\( e_k \) is sufficiently small, and then estimate \( \mathrm{var}(\overline{X}) \) by \( \widehat{\sigma}^2_k/n_k \).
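In practice one can also inspect the naive estimates \( \widehat{\sigma}^2_k/n_k \) level by level and read off the plateau. A minimal sketch; the helper name is our own, and this visual stopping rule replaces the automated test used in the block function shown earlier:

```python
import numpy as np

def blocking_plateau(x):
    # naive variance-of-mean estimate sigma_k^2 / n_k after each blocking transformation
    x = np.asarray(x, dtype=float)
    estimates = []
    while len(x) >= 2:
        estimates.append(x.var() / len(x))
        x = 0.5 * (x[0::2] + x[1::2])  # one blocking transformation
    return estimates
```

For correlated data the estimates grow with \( k \) and then flatten out; the plateau value is the blocking estimate of \( \mathrm{var}(\overline{X}) \). For uncorrelated data the sequence stays flat at \( \sigma^2/n \) from the start.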

    diff --git a/doc/pub/week9/html/week9-solarized.html b/doc/pub/week9/html/week9-solarized.html index 405b3368..8becdd7a 100644 --- a/doc/pub/week9/html/week9-solarized.html +++ b/doc/pub/week9/html/week9-solarized.html @@ -74,6 +74,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -93,7 +94,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -239,8 +242,12 @@

    Central limit theorem

    $$
    + +









    +

    Further remarks

    +

    Note that we use \( n \) instead of \( n-1 \) in the definition of -variance. The sample variance and mean are not necessarily equal to +variance. The sample variance and the sample mean are not necessarily equal to the exact values we would get if we knew the corresponding probability distribution.

    @@ -361,7 +368,7 @@

    Resampling methods: Blocking

    Assume \( n = 2^d \) for some integer \( d>1 \) and \( X_1,X_2,\cdots, X_n \) is a stationary time series to begin with. -Moreover, assume that the time series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define: +Moreover, assume that the series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define:

    $$ \begin{align*} @@ -369,6 +376,10 @@

    Resampling methods: Blocking

    \end{align*} $$ + +









    +

    Why blocking?

    +

The strength of the blocking method shows when the number of observations \( n \) is large. For large \( n \), the complexity of dependent bootstrapping scales poorly, but the blocking method does not, @@ -377,38 +388,41 @@

    Resampling methods: Blocking











    Blocking Transformations

    -

    We now define -blocking transformations. The idea is to take the mean of subsequent -pair of elements from \( \vec{X} \) and form a new vector -\( \vec{X}_1 \). Continuing in the same way by taking the mean of -subsequent pairs of elements of \( \vec{X}_1 \) we obtain \( \vec{X}_2 \), and +

    We now define the blocking transformations. The idea is to take the mean of subsequent +pair of elements from \( \boldsymbol{X} \) and form a new vector +\( \boldsymbol{X}_1 \). Continuing in the same way by taking the mean of +subsequent pairs of elements of \( \boldsymbol{X}_1 \) we obtain \( \boldsymbol{X}_2 \), and so on. -Define \( \vec{X}_i \) recursively by: +Define \( \boldsymbol{X}_i \) recursively by:

$$ \begin{align}
-(\vec{X}_0)_k &\equiv (\vec{X})_k \nonumber \\
-(\vec{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\vec{X}_i)_{2k-1} +
-(\vec{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
+(\boldsymbol{X}_0)_k &\equiv (\boldsymbol{X})_k \nonumber \\
+(\boldsymbol{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\boldsymbol{X}_i)_{2k-1} +
+(\boldsymbol{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
\label{_auto1} \end{align} $$
-

    The quantity \( \vec{X}_k \) is + +









    +

    Blocking transformations

    + +

The quantity \( \boldsymbol{X}_k \) is
subject to \( k \) blocking transformations. We now have \( d \) vectors
-\( \vec{X}_0, \vec{X}_1,\cdots,\vec X_{d-1} \) containing the subsequent
+\( \boldsymbol{X}_0, \boldsymbol{X}_1,\cdots,\boldsymbol{X}_{d-1} \) containing the subsequent
averages of observations. It turns out that if the components of
-\( \vec{X} \) is a stationary time series, then the components of
-\( \vec{X}_i \) is a stationary time series for all \( 0 \leq i \leq d-1 \)
+\( \boldsymbol{X} \) form a stationary time series, then the components of
+\( \boldsymbol{X}_i \) form a stationary time series for all \( 0 \leq i \leq d-1 \)

We can then compute the autocovariance, the variance, sample mean, and
number of observations for each \( i \). Let \( \gamma_i, \sigma_i^2,
-\overline{X}_i \) denote the autocovariance, variance and average of the
-elements of \( \vec{X}_i \) and let \( n_i \) be the number of elements of
-\( \vec{X}_i \). It follows by induction that \( n_i = n/2^i \).
+\overline{X}_i \) denote the autocovariance, variance and average of the
+elements of \( \boldsymbol{X}_i \) and let \( n_i \) be the number of elements of
+\( \boldsymbol{X}_i \). It follows by induction that \( n_i = n/2^i \).











    @@ -430,14 +444,16 @@

    Blocking Transformations

    \end{align} $$ -

    The quantity \( \hat{X} \) is asymptotic uncorrelated by assumption, \( \hat{X}_k \) is also asymptotic uncorrelated. Let's turn our attention to the variance of the sample mean \( V(\overline{X}) \).

    +

The quantity \( \hat{X} \) is asymptotically uncorrelated by assumption, so \( \hat{X}_k \) is also asymptotically uncorrelated. Let's turn our attention to the variance of the sample +mean \( \mathrm{var}(\overline{X}) \). +











    Blocking Transformations, getting there

    We have

    $$ \begin{align} -V(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. +\mathrm{var}(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. \label{_auto4} \end{align} $$ @@ -450,7 +466,7 @@

    Blocking Transformations, gettin \end{equation} $$ -

    We can show that \( V(\overline{X}_i) = V(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).

    +

    We can show that \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).











    Blocking Transformations, final expressions

    @@ -464,10 +480,10 @@

    Blocking Transformations, fi \end{align} $$ -

    By repeated use of this equation we get \( V(\overline{X}_i) = V(\overline{X}_0) = V(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

    +

    By repeated use of this equation we get \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_0) = \mathrm{var}(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

    $$ \begin{align} -V(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \label{eq:convergence} +\mathrm{var}(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \label{eq:convergence} \end{align} $$ @@ -476,7 +492,7 @@

    Blocking Transformations, fi \( e_k \) can be made as small as we would like by making \( k \) (and hence \( d \)) sufficiently large. The sequence is decreasing (Master of Science thesis by Marius Jonsson, UiO 2018). It means we can apply blocking transformations until -\( e_k \) is sufficiently small, and then estimate \( V(\overline{X}) \) by +\( e_k \) is sufficiently small, and then estimate \( \mathrm{var}(\overline{X}) \) by \( \widehat{\sigma}^2_k/n_k \).

    diff --git a/doc/pub/week9/html/week9.html b/doc/pub/week9/html/week9.html index d0446a80..e26205e1 100644 --- a/doc/pub/week9/html/week9.html +++ b/doc/pub/week9/html/week9.html @@ -151,6 +151,7 @@ None, 'and-why-do-we-use-such-methods'), ('Central limit theorem', 2, None, 'central-limit-theorem'), + ('Further remarks', 2, None, 'further-remarks'), ('Running many measurements', 2, None, @@ -170,7 +171,9 @@ 2, None, 'resampling-methods-blocking'), + ('Why blocking?', 2, None, 'why-blocking'), ('Blocking Transformations', 2, None, 'blocking-transformations'), + ('Blocking transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations', 2, None, 'blocking-transformations'), ('Blocking Transformations, getting there', 2, @@ -316,8 +319,12 @@

    Central limit theorem

    $$
    + +









    +

    Further remarks

    +

    Note that we use \( n \) instead of \( n-1 \) in the definition of -variance. The sample variance and mean are not necessarily equal to +variance. The sample variance and the sample mean are not necessarily equal to the exact values we would get if we knew the corresponding probability distribution.

    @@ -438,7 +445,7 @@

    Resampling methods: Blocking

    Assume \( n = 2^d \) for some integer \( d>1 \) and \( X_1,X_2,\cdots, X_n \) is a stationary time series to begin with. -Moreover, assume that the time series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define: +Moreover, assume that the series is asymptotically uncorrelated. We switch to vector notation by arranging \( X_1,X_2,\cdots,X_n \) in an \( n \)-tuple. Define:

    $$ \begin{align*} @@ -446,6 +453,10 @@

    Resampling methods: Blocking

    \end{align*} $$ + +









    +

    Why blocking?

    +

The strength of the blocking method shows when the number of observations \( n \) is large. For large \( n \), the complexity of dependent bootstrapping scales poorly, but the blocking method does not, @@ -454,38 +465,41 @@

    Resampling methods: Blocking











    Blocking Transformations

    -

    We now define -blocking transformations. The idea is to take the mean of subsequent -pair of elements from \( \vec{X} \) and form a new vector -\( \vec{X}_1 \). Continuing in the same way by taking the mean of -subsequent pairs of elements of \( \vec{X}_1 \) we obtain \( \vec{X}_2 \), and +

    We now define the blocking transformations. The idea is to take the mean of subsequent +pair of elements from \( \boldsymbol{X} \) and form a new vector +\( \boldsymbol{X}_1 \). Continuing in the same way by taking the mean of +subsequent pairs of elements of \( \boldsymbol{X}_1 \) we obtain \( \boldsymbol{X}_2 \), and so on. -Define \( \vec{X}_i \) recursively by: +Define \( \boldsymbol{X}_i \) recursively by:

$$ \begin{align}
-(\vec{X}_0)_k &\equiv (\vec{X})_k \nonumber \\
-(\vec{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\vec{X}_i)_{2k-1} +
-(\vec{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
+(\boldsymbol{X}_0)_k &\equiv (\boldsymbol{X})_k \nonumber \\
+(\boldsymbol{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\boldsymbol{X}_i)_{2k-1} +
+(\boldsymbol{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
\label{_auto1} \end{align} $$
-

    The quantity \( \vec{X}_k \) is + +









    +

    Blocking transformations

    + +

The quantity \( \boldsymbol{X}_k \) is
subject to \( k \) blocking transformations. We now have \( d \) vectors
-\( \vec{X}_0, \vec{X}_1,\cdots,\vec X_{d-1} \) containing the subsequent
+\( \boldsymbol{X}_0, \boldsymbol{X}_1,\cdots,\boldsymbol{X}_{d-1} \) containing the subsequent
averages of observations. It turns out that if the components of
-\( \vec{X} \) is a stationary time series, then the components of
-\( \vec{X}_i \) is a stationary time series for all \( 0 \leq i \leq d-1 \)
+\( \boldsymbol{X} \) form a stationary time series, then the components of
+\( \boldsymbol{X}_i \) form a stationary time series for all \( 0 \leq i \leq d-1 \)

We can then compute the autocovariance, the variance, sample mean, and
number of observations for each \( i \). Let \( \gamma_i, \sigma_i^2,
-\overline{X}_i \) denote the autocovariance, variance and average of the
-elements of \( \vec{X}_i \) and let \( n_i \) be the number of elements of
-\( \vec{X}_i \). It follows by induction that \( n_i = n/2^i \).
+\overline{X}_i \) denote the autocovariance, variance and average of the
+elements of \( \boldsymbol{X}_i \) and let \( n_i \) be the number of elements of
+\( \boldsymbol{X}_i \). It follows by induction that \( n_i = n/2^i \).











    @@ -507,14 +521,16 @@

    Blocking Transformations

    \end{align} $$ -

    The quantity \( \hat{X} \) is asymptotic uncorrelated by assumption, \( \hat{X}_k \) is also asymptotic uncorrelated. Let's turn our attention to the variance of the sample mean \( V(\overline{X}) \).

    +

The quantity \( \hat{X} \) is asymptotically uncorrelated by assumption, so \( \hat{X}_k \) is also asymptotically uncorrelated. Let's turn our attention to the variance of the sample +mean \( \mathrm{var}(\overline{X}) \). +











    Blocking Transformations, getting there

    We have

    $$ \begin{align} -V(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. +\mathrm{var}(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. \label{_auto4} \end{align} $$ @@ -527,7 +543,7 @@

    Blocking Transformations, gettin \end{equation} $$ -

    We can show that \( V(\overline{X}_i) = V(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).

    +

    We can show that \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_j) \) for all \( 0 \leq i \leq d-1 \) and \( 0 \leq j \leq d-1 \).











    Blocking Transformations, final expressions

    @@ -541,10 +557,10 @@

    Blocking Transformations, fi \end{align} $$ -

    By repeated use of this equation we get \( V(\overline{X}_i) = V(\overline{X}_0) = V(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

    +

    By repeated use of this equation we get \( \mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_0) = \mathrm{var}(\overline{X}) \) for all \( 0 \leq i \leq d-1 \). This has the consequence that

    $$ \begin{align} -V(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \label{eq:convergence} +\mathrm{var}(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \label{eq:convergence} \end{align} $$ @@ -553,7 +569,7 @@

    Blocking Transformations, fi \( e_k \) can be made as small as we would like by making \( k \) (and hence \( d \)) sufficiently large. The sequence is decreasing (Master of Science thesis by Marius Jonsson, UiO 2018). It means we can apply blocking transformations until -\( e_k \) is sufficiently small, and then estimate \( V(\overline{X}) \) by +\( e_k \) is sufficiently small, and then estimate \( \mathrm{var}(\overline{X}) \) by \( \widehat{\sigma}^2_k/n_k \).

    diff --git a/doc/pub/week9/ipynb/ipynb-week9-src.tar.gz b/doc/pub/week9/ipynb/ipynb-week9-src.tar.gz index 27d96613..1d027324 100644 Binary files a/doc/pub/week9/ipynb/ipynb-week9-src.tar.gz and b/doc/pub/week9/ipynb/ipynb-week9-src.tar.gz differ diff --git a/doc/pub/week9/ipynb/week9.ipynb b/doc/pub/week9/ipynb/week9.ipynb index c9d937ec..2a9be3da 100644 --- a/doc/pub/week9/ipynb/week9.ipynb +++ b/doc/pub/week9/ipynb/week9.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "112079bb", + "id": "4b5596ee", "metadata": { "editable": true }, @@ -14,7 +14,7 @@ }, { "cell_type": "markdown", - "id": "01330897", + "id": "d33a95e3", "metadata": { "editable": true }, @@ -27,7 +27,7 @@ }, { "cell_type": "markdown", - "id": "f02c162a", + "id": "d548cf0f", "metadata": { "editable": true }, @@ -54,7 +54,7 @@ }, { "cell_type": "markdown", - "id": "fdd2e018", + "id": "930c8d02", "metadata": { "editable": true }, @@ -71,7 +71,7 @@ }, { "cell_type": "markdown", - "id": "b0b7d10d", + "id": "4d06554e", "metadata": { "editable": true }, @@ -90,7 +90,7 @@ }, { "cell_type": "markdown", - "id": "90baf4e8", + "id": "0be7d825", "metadata": { "editable": true }, @@ -108,7 +108,7 @@ }, { "cell_type": "markdown", - "id": "17222598", + "id": "f008a4ba", "metadata": { "editable": true }, @@ -125,7 +125,7 @@ }, { "cell_type": "markdown", - "id": "906c2dfa", + "id": "f32d922f", "metadata": { "editable": true }, @@ -137,7 +137,7 @@ }, { "cell_type": "markdown", - "id": "12e3c52f", + "id": "2484aca0", "metadata": { "editable": true }, @@ -147,7 +147,7 @@ }, { "cell_type": "markdown", - "id": "6c6118fa", + "id": "7b45ca9a", "metadata": { "editable": true }, @@ -159,20 +159,22 @@ }, { "cell_type": "markdown", - "id": "887c2e0a", + "id": "acf96a39", "metadata": { "editable": true }, "source": [ + "## Further remarks\n", + "\n", "Note that we use $n$ instead of $n-1$ in the definition of\n", - "variance. The sample variance and mean are not necessarily equal to\n", + "variance. 
The sample variance and the sample mean are not necessarily equal to\n", "the exact values we would get if we knew the corresponding probability\n", "distribution." ] }, { "cell_type": "markdown", - "id": "acafea2b", + "id": "9369eada", "metadata": { "editable": true }, @@ -187,7 +189,7 @@ }, { "cell_type": "markdown", - "id": "e35d683b", + "id": "50693dbd", "metadata": { "editable": true }, @@ -199,7 +201,7 @@ }, { "cell_type": "markdown", - "id": "2284213e", + "id": "7a026966", "metadata": { "editable": true }, @@ -209,7 +211,7 @@ }, { "cell_type": "markdown", - "id": "38fc350c", + "id": "82e82fa9", "metadata": { "editable": true }, @@ -221,7 +223,7 @@ }, { "cell_type": "markdown", - "id": "e6a10e33", + "id": "5b3f8c63", "metadata": { "editable": true }, @@ -231,7 +233,7 @@ }, { "cell_type": "markdown", - "id": "77708cbe", + "id": "2e651fdb", "metadata": { "editable": true }, @@ -243,7 +245,7 @@ }, { "cell_type": "markdown", - "id": "a7d8f255", + "id": "bf9b0aa4", "metadata": { "editable": true }, @@ -255,7 +257,7 @@ }, { "cell_type": "markdown", - "id": "0efe976e", + "id": "0f3459da", "metadata": { "editable": true }, @@ -265,7 +267,7 @@ }, { "cell_type": "markdown", - "id": "b1a1fbc4", + "id": "3aecc426", "metadata": { "editable": true }, @@ -277,7 +279,7 @@ }, { "cell_type": "markdown", - "id": "c1520556", + "id": "451588ba", "metadata": { "editable": true }, @@ -287,7 +289,7 @@ }, { "cell_type": "markdown", - "id": "d558f855", + "id": "8cbe60fb", "metadata": { "editable": true }, @@ -299,7 +301,7 @@ }, { "cell_type": "markdown", - "id": "0ddd692b", + "id": "cf819af3", "metadata": { "editable": true }, @@ -311,7 +313,7 @@ }, { "cell_type": "markdown", - "id": "10116eff", + "id": "94d446e9", "metadata": { "editable": true }, @@ -326,7 +328,7 @@ }, { "cell_type": "markdown", - "id": "e6854707", + "id": "a3bed598", "metadata": { "editable": true }, @@ -336,7 +338,7 @@ }, { "cell_type": "markdown", - "id": "428e0cc5", + "id": "289ea914", "metadata": { "editable": 
true }, @@ -348,7 +350,7 @@ }, { "cell_type": "markdown", - "id": "95748e29", + "id": "e4f189cc", "metadata": { "editable": true }, @@ -362,7 +364,7 @@ }, { "cell_type": "markdown", - "id": "c16de7af", + "id": "e0c75890", "metadata": { "editable": true }, @@ -381,7 +383,7 @@ }, { "cell_type": "markdown", - "id": "4c34dcab", + "id": "b5fc2142", "metadata": { "editable": true }, @@ -393,7 +395,7 @@ }, { "cell_type": "markdown", - "id": "2fa3ac90", + "id": "3ba59e20", "metadata": { "editable": true }, @@ -405,7 +407,7 @@ }, { "cell_type": "markdown", - "id": "ba09ecf3", + "id": "85c360b6", "metadata": { "editable": true }, @@ -415,7 +417,7 @@ }, { "cell_type": "markdown", - "id": "121cd516", + "id": "88132941", "metadata": { "editable": true }, @@ -427,7 +429,7 @@ }, { "cell_type": "markdown", - "id": "758c4303", + "id": "47ce48bd", "metadata": { "editable": true }, @@ -437,7 +439,7 @@ }, { "cell_type": "markdown", - "id": "5f24791c", + "id": "a17dde39", "metadata": { "editable": true }, @@ -449,7 +451,7 @@ }, { "cell_type": "markdown", - "id": "31ff5de6", + "id": "dbcd8555", "metadata": { "editable": true }, @@ -461,7 +463,7 @@ }, { "cell_type": "markdown", - "id": "40587bca", + "id": "9753ec4d", "metadata": { "editable": true }, @@ -475,7 +477,7 @@ }, { "cell_type": "markdown", - "id": "c6af6233", + "id": "aed90511", "metadata": { "editable": true }, @@ -485,7 +487,7 @@ }, { "cell_type": "markdown", - "id": "0b88af71", + "id": "3d10512f", "metadata": { "editable": true }, @@ -498,12 +500,12 @@ "$\\widehat{\\theta} = \\overline{X}$. \n", "\n", "Assume $n = 2^d$ for some integer $d>1$ and $X_1,X_2,\\cdots, X_n$ is a stationary time series to begin with. \n", - "Moreover, assume that the time series is asymptotically uncorrelated. We switch to vector notation by arranging $X_1,X_2,\\cdots,X_n$ in an $n$-tuple. Define:" + "Moreover, assume that the series is asymptotically uncorrelated. We switch to vector notation by arranging $X_1,X_2,\\cdots,X_n$ in an $n$-tuple. 
Define:" ] }, { "cell_type": "markdown", - "id": "94b1edde", + "id": "297d90f5", "metadata": { "editable": true }, @@ -517,11 +519,13 @@ }, { "cell_type": "markdown", - "id": "af97e0d7", + "id": "74859bc8", "metadata": { "editable": true }, "source": [ + "## Why blocking?\n", + "\n", "The strength of the blocking method is when the number of\n", "observations, $n$ is large. For large $n$, the complexity of dependent\n", "bootstrapping scales poorly, but the blocking method does not,\n", @@ -530,36 +534,35 @@ }, { "cell_type": "markdown", - "id": "7feaa251", + "id": "0bed800d", "metadata": { "editable": true }, "source": [ "## Blocking Transformations\n", - " We now define\n", - "blocking transformations. The idea is to take the mean of subsequent\n", - "pair of elements from $\\vec{X}$ and form a new vector\n", - "$\\vec{X}_1$. Continuing in the same way by taking the mean of\n", - "subsequent pairs of elements of $\\vec{X}_1$ we obtain $\\vec{X}_2$, and\n", + " We now define the blocking transformations. The idea is to take the mean of subsequent\n", + "pair of elements from $\\boldsymbol{X}$ and form a new vector\n", + "$\\boldsymbol{X}_1$. Continuing in the same way by taking the mean of\n", + "subsequent pairs of elements of $\\boldsymbol{X}_1$ we obtain $\\boldsymbol{X}_2$, and\n", "so on. 
\n", - "Define $\\vec{X}_i$ recursively by:" + "Define $\\boldsymbol{X}_i$ recursively by:" ] }, { "cell_type": "markdown", - "id": "9d5024aa", + "id": "a7e0c557", "metadata": { "editable": true }, "source": [ "$$\n", - "(\\vec{X}_0)_k \\equiv (\\vec{X})_k \\nonumber\n", + "(\\boldsymbol{X}_0)_k \\equiv (\\boldsymbol{X})_k \\nonumber\n", "$$" ] }, { "cell_type": "markdown", - "id": "08342d94", + "id": "7557cd42", "metadata": { "editable": true }, @@ -569,8 +572,8 @@ "\n", "$$\n", "\\begin{equation} \n", - "(\\vec{X}_{i+1})_k \\equiv \\frac{1}{2}\\Big( (\\vec{X}_i)_{2k-1} +\n", - "(\\vec{X}_i)_{2k} \\Big) \\qquad \\text{for all} \\qquad 1 \\leq i \\leq d-1\n", + "(\\boldsymbol{X}_{i+1})_k \\equiv \\frac{1}{2}\\Big( (\\boldsymbol{X}_i)_{2k-1} +\n", + "(\\boldsymbol{X}_i)_{2k} \\Big) \\qquad \\text{for all} \\qquad 1 \\leq i \\leq d-1\n", "\\label{_auto1} \\tag{1}\n", "\\end{equation}\n", "$$" @@ -578,29 +581,31 @@ }, { "cell_type": "markdown", - "id": "35d66f14", + "id": "315c77b3", "metadata": { "editable": true }, "source": [ - "The quantity $\\vec{X}_k$ is\n", + "## Blocking transformations\n", + "\n", + "The quantity $\\boldsymbol{X}_k$ is\n", "subject to $k$ **blocking transformations**. We now have $d$ vectors\n", - "$\\vec{X}_0, \\vec{X}_1,\\cdots,\\vec X_{d-1}$ containing the subsequent\n", + "$\\boldsymbol{X}_0, \\boldsymbol{X}_1,\\cdots,\\vec X_{d-1}$ containing the subsequent\n", "averages of observations. It turns out that if the components of\n", - "$\\vec{X}$ is a stationary time series, then the components of\n", - "$\\vec{X}_i$ is a stationary time series for all $0 \\leq i \\leq d-1$\n", + "$\\boldsymbol{X}$ is a stationary time series, then the components of\n", + "$\\boldsymbol{X}_i$ is a stationary time series for all $0 \\leq i \\leq d-1$\n", "\n", "We can then compute the autocovariance, the variance, sample mean, and\n", "number of observations for each $i$. 
\n",
    "Let $\\gamma_i, \\sigma_i^2,\n",
-    "\\overline{X}_i$ denote the autocovariance, variance and average of the\n",
-    "elements of $\\vec{X}_i$ and let $n_i$ be the number of elements of\n",
-    "$\\vec{X}_i$. It follows by induction that $n_i = n/2^i$."
+    "\\overline{X}_i$ denote the autocovariance, variance and average of the\n",
+    "elements of $\\boldsymbol{X}_i$ and let $n_i$ be the number of elements of\n",
+    "$\\boldsymbol{X}_i$. It follows by induction that $n_i = n/2^i$."
   ]
  },
  {
   "cell_type": "markdown",
-   "id": "bd0d6c0d",
+   "id": "e11a5834",
   "metadata": {
    "editable": true
   },
@@ -615,7 +620,7 @@
  },
  {
   "cell_type": "markdown",
-   "id": "c7771171",
+   "id": "03174f94",
   "metadata": {
    "editable": true
   },
@@ -627,7 +632,7 @@
  },
  {
   "cell_type": "markdown",
-   "id": "b013c68e",
+   "id": "741ee4c9",
   "metadata": {
    "editable": true
   },
@@ -639,7 +644,7 @@
  },
  {
   "cell_type": "markdown",
-   "id": "5aba9b30",
+   "id": "f14a96cb",
   "metadata": {
    "editable": true
   },
@@ -657,7 +662,7 @@
  },
  {
   "cell_type": "markdown",
-   "id": "80359834",
+   "id": "323e46df",
   "metadata": {
    "editable": true
   },
@@ -675,17 +680,18 @@
  },
  {
   "cell_type": "markdown",
-   "id": "b4609263",
+   "id": "a71d9b3c",
   "metadata": {
    "editable": true
   },
   "source": [
-    "The quantity $\\hat{X}$ is asymptotic uncorrelated by assumption, $\\hat{X}_k$ is also asymptotic uncorrelated. Let's turn our attention to the variance of the sample mean $V(\\overline{X})$."
+    "The quantity $\\hat{X}$ is asymptotically uncorrelated by assumption, and hence $\\hat{X}_k$ is also asymptotically uncorrelated. Let's turn our attention to the variance of the sample\n",
+    "mean $\\mathrm{var}(\\overline{X})$."
] }, { "cell_type": "markdown", - "id": "4d7c4cf9", + "id": "dd98a112", "metadata": { "editable": true }, @@ -696,7 +702,7 @@ }, { "cell_type": "markdown", - "id": "bcb7a18f", + "id": "0e0bec22", "metadata": { "editable": true }, @@ -706,7 +712,7 @@ "\n", "$$\n", "\\begin{equation}\n", - "V(\\overline{X}_k) = \\frac{\\sigma_k^2}{n_k} + \\underbrace{\\frac{2}{n_k} \\sum_{h=1}^{n_k-1}\\left( 1 - \\frac{h}{n_k} \\right)\\gamma_k(h)}_{\\equiv e_k} = \\frac{\\sigma^2_k}{n_k} + e_k \\quad \\text{if} \\quad \\gamma_k(0) = \\sigma_k^2. \n", + "\\mathrm{var}(\\overline{X}_k) = \\frac{\\sigma_k^2}{n_k} + \\underbrace{\\frac{2}{n_k} \\sum_{h=1}^{n_k-1}\\left( 1 - \\frac{h}{n_k} \\right)\\gamma_k(h)}_{\\equiv e_k} = \\frac{\\sigma^2_k}{n_k} + e_k \\quad \\text{if} \\quad \\gamma_k(0) = \\sigma_k^2. \n", "\\label{_auto4} \\tag{4}\n", "\\end{equation}\n", "$$" @@ -714,7 +720,7 @@ }, { "cell_type": "markdown", - "id": "3c0415c5", + "id": "7b004b32", "metadata": { "editable": true }, @@ -724,7 +730,7 @@ }, { "cell_type": "markdown", - "id": "b917d6e7", + "id": "a70cba82", "metadata": { "editable": true }, @@ -742,17 +748,17 @@ }, { "cell_type": "markdown", - "id": "8a3a843d", + "id": "14a5f387", "metadata": { "editable": true }, "source": [ - "We can show that $V(\\overline{X}_i) = V(\\overline{X}_j)$ for all $0 \\leq i \\leq d-1$ and $0 \\leq j \\leq d-1$." + "We can show that $\\mathrm{var}(\\overline{X}_i) = \\mathrm{var}(\\overline{X}_j)$ for all $0 \\leq i \\leq d-1$ and $0 \\leq j \\leq d-1$." 
] }, { "cell_type": "markdown", - "id": "ff26532f", + "id": "68a6d6ac", "metadata": { "editable": true }, @@ -764,7 +770,7 @@ }, { "cell_type": "markdown", - "id": "53d488ff", + "id": "86df1a10", "metadata": { "editable": true }, @@ -776,7 +782,7 @@ }, { "cell_type": "markdown", - "id": "12616630", + "id": "0a2ea53b", "metadata": { "editable": true }, @@ -794,17 +800,17 @@ }, { "cell_type": "markdown", - "id": "97fd701f", + "id": "c5852aa3", "metadata": { "editable": true }, "source": [ - "By repeated use of this equation we get $V(\\overline{X}_i) = V(\\overline{X}_0) = V(\\overline{X})$ for all $0 \\leq i \\leq d-1$. This has the consequence that" + "By repeated use of this equation we get $\\mathrm{var}(\\overline{X}_i) = \\mathrm{var}(\\overline{X}_0) = \\mathrm{var}(\\overline{X})$ for all $0 \\leq i \\leq d-1$. This has the consequence that" ] }, { "cell_type": "markdown", - "id": "4bb4df84", + "id": "09cbe900", "metadata": { "editable": true }, @@ -814,14 +820,14 @@ "\n", "$$\n", "\\begin{equation}\n", - "V(\\overline{X}) = \\frac{\\sigma_k^2}{n_k} + e_k \\qquad \\text{for all} \\qquad 0 \\leq k \\leq d-1. \\label{eq:convergence} \\tag{7}\n", + "\\mathrm{var}(\\overline{X}) = \\frac{\\sigma_k^2}{n_k} + e_k \\qquad \\text{for all} \\qquad 0 \\leq k \\leq d-1. \\label{eq:convergence} \\tag{7}\n", "\\end{equation}\n", "$$" ] }, { "cell_type": "markdown", - "id": "763e56cf", + "id": "a15058db", "metadata": { "editable": true }, @@ -831,7 +837,7 @@ "$e_k$ can be made as small as we would like by making $k$ (and hence\n", "$d$) sufficiently large. The sequence is decreasing (Master of Science thesis by Marius Jonsson, UiO 2018).\n", "It means we can apply blocking transformations until\n", - "$e_k$ is sufficiently small, and then estimate $V(\\overline{X})$ by\n", + "$e_k$ is sufficiently small, and then estimate $\\mathrm{var}(\\overline{X})$ by\n", "$\\widehat{\\sigma}^2_k/n_k$. 
\n", "\n", "For an elegant solution and proof of the blocking method, see the recent article of [Marius Jonsson (former MSc student of the Computational Physics group)](https://journals.aps.org/pre/abstract/10.1103/PhysRevE.98.043304)." @@ -839,7 +845,7 @@ }, { "cell_type": "markdown", - "id": "7df111e3", + "id": "4f46a36c", "metadata": { "editable": true }, @@ -850,7 +856,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "917cbd60", + "id": "9b554ff8", "metadata": { "collapsed": false, "editable": true @@ -1079,7 +1085,7 @@ }, { "cell_type": "markdown", - "id": "d6173775", + "id": "7dee6cbf", "metadata": { "editable": true }, @@ -1094,7 +1100,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "ba6dda27", + "id": "989a7557", "metadata": { "collapsed": false, "editable": true diff --git a/doc/pub/week9/pdf/week9-beamer.pdf b/doc/pub/week9/pdf/week9-beamer.pdf index b5b72c17..ffae9f55 100644 Binary files a/doc/pub/week9/pdf/week9-beamer.pdf and b/doc/pub/week9/pdf/week9-beamer.pdf differ diff --git a/doc/pub/week9/pdf/week9.pdf b/doc/pub/week9/pdf/week9.pdf index fe2780cd..eac21e0c 100644 Binary files a/doc/pub/week9/pdf/week9.pdf and b/doc/pub/week9/pdf/week9.pdf differ diff --git a/doc/src/week9/week9.do.txt b/doc/src/week9/week9.do.txt index 1c51c8d4..96a7f399 100644 --- a/doc/src/week9/week9.do.txt +++ b/doc/src/week9/week9.do.txt @@ -70,8 +70,12 @@ and the sample variance \] !et !eblock + +!split +===== Further remarks ===== + Note that we use $n$ instead of $n-1$ in the definition of -variance. The sample variance and mean are not necessarily equal to +variance. The sample variance and the sample mean are not necessarily equal to the exact values we would get if we knew the corresponding probability distribution. @@ -198,13 +202,17 @@ $V(\widehat{\theta})$ for exactly one $\widehat{\theta}$, namely $\widehat{\theta} = \overline{X}$. Assume $n = 2^d$ for some integer $d>1$ and $X_1,X_2,\cdots, X_n$ is a stationary time series to begin with. 
-Moreover, assume that the time series is asymptotically uncorrelated. We switch to vector notation by arranging $X_1,X_2,\cdots,X_n$ in an $n$-tuple. Define:
+Moreover, assume that the series is asymptotically uncorrelated. We switch to vector notation by arranging $X_1,X_2,\cdots,X_n$ in an $n$-tuple. Define:
 !bt
 \begin{align*}
 \hat{X} = (X_1,X_2,\cdots,X_n).
 \end{align*}
 !et
+
+!split
+===== Why blocking? =====
+
 The strength of the blocking method is when the number of
 observations, $n$ is large. For large $n$, the complexity of dependent
 bootstrapping scales poorly, but the blocking method does not,
@@ -212,35 +220,38 @@ moreover, it becomes more accurate the larger $n$ is.
 
 !split
 ===== Blocking Transformations =====
-  We now define
-blocking transformations. The idea is to take the mean of subsequent
-pair of elements from $\vec{X}$ and form a new vector
-$\vec{X}_1$. Continuing in the same way by taking the mean of
-subsequent pairs of elements of $\vec{X}_1$ we obtain $\vec{X}_2$, and
+We now define the blocking transformations. The idea is to take the mean of subsequent
+pairs of elements from $\bm{X}$ and form a new vector
+$\bm{X}_1$. Continuing in the same way by taking the mean of
+subsequent pairs of elements of $\bm{X}_1$ we obtain $\bm{X}_2$, and
 so on.
 
-Define $\vec{X}_i$ recursively by:
+Define $\bm{X}_i$ recursively by:
 
 !bt
 \begin{align}
-(\vec{X}_0)_k &\equiv (\vec{X})_k \nonumber \\
-(\vec{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\vec{X}_i)_{2k-1} +
-(\vec{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
+(\bm{X}_0)_k &\equiv (\bm{X})_k \nonumber \\
+(\bm{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\bm{X}_i)_{2k-1} +
+(\bm{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 1 \leq i \leq d-1
 \end{align}
 !et
 
-The quantity $\vec{X}_k$ is
+!split
+===== Blocking transformations =====
+
+
+The quantity $\bm{X}_k$ is
 subject to $k$ _blocking transformations_. 
We now have $d$ vectors
-$\vec{X}_0, \vec{X}_1,\cdots,\vec X_{d-1}$ containing the subsequent
+$\bm{X}_0, \bm{X}_1,\cdots,\bm{X}_{d-1}$ containing the subsequent
 averages of observations. It turns out that if the components of
-$\vec{X}$ is a stationary time series, then the components of
-$\vec{X}_i$ is a stationary time series for all $0 \leq i \leq d-1$
+$\bm{X}$ form a stationary time series, then the components of
+$\bm{X}_i$ form a stationary time series for all $0 \leq i \leq d-1$.
 
 We can then compute the autocovariance, the variance, sample mean, and
 number of observations for each $i$. Let $\gamma_i, \sigma_i^2,
-\overline{X}_i$ denote the autocovariance, variance and average of the
-elements of $\vec{X}_i$ and let $n_i$ be the number of elements of
-$\vec{X}_i$. It follows by induction that $n_i = n/2^i$.
+\overline{X}_i$ denote the autocovariance, variance and average of the
+elements of $\bm{X}_i$ and let $n_i$ be the number of elements of
+$\bm{X}_i$. It follows by induction that $n_i = n/2^i$.
 
 !split
 ===== Blocking Transformations =====
@@ -258,14 +269,15 @@ we can define
 \end{align}
 !et
 
-The quantity $\hat{X}$ is asymptotic uncorrelated by assumption, $\hat{X}_k$ is also asymptotic uncorrelated. Let's turn our attention to the variance of the sample mean $V(\overline{X})$.
+The quantity $\hat{X}$ is asymptotically uncorrelated by assumption, and hence $\hat{X}_k$ is also asymptotically uncorrelated. Let's turn our attention to the variance of the sample
+mean $\mathrm{var}(\overline{X})$.
 
 !split
 ===== Blocking Transformations, getting there =====
 We have
 !bt
 \begin{align}
-V(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. 
+\mathrm{var}(\overline{X}_k) = \frac{\sigma_k^2}{n_k} + \underbrace{\frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h)}_{\equiv e_k} = \frac{\sigma^2_k}{n_k} + e_k \quad \text{if} \quad \gamma_k(0) = \sigma_k^2. \end{align} !et The term $e_k$ is called the _truncation error_: @@ -274,7 +286,7 @@ The term $e_k$ is called the _truncation error_: e_k = \frac{2}{n_k} \sum_{h=1}^{n_k-1}\left( 1 - \frac{h}{n_k} \right)\gamma_k(h). \end{equation} !et -We can show that $V(\overline{X}_i) = V(\overline{X}_j)$ for all $0 \leq i \leq d-1$ and $0 \leq j \leq d-1$. +We can show that $\mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_j)$ for all $0 \leq i \leq d-1$ and $0 \leq j \leq d-1$. !split ===== Blocking Transformations, final expressions ===== @@ -286,10 +298,10 @@ n_{j+1} \overline{X}_{j+1} &= \sum_{i=1}^{n_{j+1}} (\hat{X}_{j+1})_i = \frac{1 &= \frac{1}{2}\left[ (\hat{X}_j)_1 + (\hat{X}_j)_2 + \cdots + (\hat{X}_j)_{n_j} \right] = \underbrace{\frac{n_j}{2}}_{=n_{j+1}} \overline{X}_j = n_{j+1}\overline{X}_j. \end{align} !et -By repeated use of this equation we get $V(\overline{X}_i) = V(\overline{X}_0) = V(\overline{X})$ for all $0 \leq i \leq d-1$. This has the consequence that +By repeated use of this equation we get $\mathrm{var}(\overline{X}_i) = \mathrm{var}(\overline{X}_0) = \mathrm{var}(\overline{X})$ for all $0 \leq i \leq d-1$. This has the consequence that !bt \begin{align} -V(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \label{eq:convergence} +\mathrm{var}(\overline{X}) = \frac{\sigma_k^2}{n_k} + e_k \qquad \text{for all} \qquad 0 \leq k \leq d-1. \label{eq:convergence} \end{align} !et @@ -298,7 +310,7 @@ $\{e_k\}_{k=0}^{d-1}$ is decreasing, and conjecture that the term $e_k$ can be made as small as we would like by making $k$ (and hence $d$) sufficiently large. The sequence is decreasing (Master of Science thesis by Marius Jonsson, UiO 2018). 
It means we can apply blocking transformations until -$e_k$ is sufficiently small, and then estimate $V(\overline{X})$ by +$e_k$ is sufficiently small, and then estimate $\mathrm{var}(\overline{X})$ by $\widehat{\sigma}^2_k/n_k$.
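As a concrete illustration of the material changed in this patch, the blocking transformations of Eq. (1) and the resulting estimates $\widehat{\sigma}^2_k/n_k$ of $\mathrm{var}(\overline{X})$ can be sketched in a few lines of NumPy. This is a minimal sketch, not the lecture's own implementation; the function names are ours, and picking the plateau value of $k$ where $e_k$ has become negligible is left to inspection of the returned estimates. Note the use of `var()` with its default `ddof=0`, i.e. dividing by $n$ rather than $n-1$, matching the convention stated in the notes.

```python
import numpy as np

def blocking_transform(x):
    """One blocking transformation, Eq. (1): mean of subsequent pairs of elements."""
    return 0.5 * (x[0::2] + x[1::2])

def blocking_estimates(x):
    """Return sigma_k^2 / n_k for k = 0, ..., d-1, where n = 2^d.

    By Eq. (7), var(mean(X)) = sigma_k^2/n_k + e_k for every k, so once
    enough transformations have removed the truncation error e_k, the
    estimates plateau at the sought variance of the sample mean.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    d = int(np.log2(n))
    if 2**d != n:
        raise ValueError("blocking assumes n = 2^d observations")
    estimates = []
    for k in range(d):
        estimates.append(x.var() / x.size)  # sigma_k^2 / n_k (ddof=0, i.e. 1/n)
        x = blocking_transform(x)
    return np.array(estimates)
```

Because $n_{j+1}\overline{X}_{j+1} = n_{j+1}\overline{X}_j$, the sample mean is left unchanged by each transformation, which provides a quick sanity check on any implementation.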