✨ Add schedule
o-laurent committed Sep 29, 2024
1 parent 0e60a60 commit c6897dd
Showing 1 changed file, index.html, with 155 additions and 129 deletions.
<h1 class="project-name">A Bayesian Odyssey in Uncertainty: from Theoretical
Foundations to Real-World Applications</h1>
<h2 class="project-tagline">ECCV 2024 - Room: <strong>Suite 7</strong><br><strong>30 Sept</strong> - 8:30 AM to
12:30 PM<br>A recording of the tutorial will be
<strong>available online</strong>.
</h2>
</section>

<h2 style="text-align: center">Overview</h2>
</p>
</div>

<br>

<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Outline</h2>

<h3 style="text-align: left">Introduction: Why & where is UQ helpful?</h3>
<p>
Initial exploration into the critical role of uncertainty quantification (UQ) within the realm
of computer vision (CV): participants will gain an understanding of why it’s essential to consider
uncertainty in CV, especially concerning decision-making in complex
environments. We will introduce real-world scenarios where uncertainty can profoundly
impact model performance and safety, setting the stage for deeper exploration through out the tutorial.
</p>
<h3 style="text-align: left">From maximum a posteriori to BNNs.</h3>
<p>
In this part, we will journey through the evolution of UQ techniques, starting
from classic approaches such as maximum a posteriori estimation to the more ellaborate Bayesian Neural
Networks. The participants will grasp the conceptual foundations
of UQ, laying the groundwork for the subsequent discussions of Bayesian methods.
</p>
<h3 style="text-align: left">Strategies for BNN posterior inference.</h3>
<p>
This is the core part, which will dive into the process of estimating the posterior distribution of BNNs.
The participants
will gain insights into the computational complexities involved in modeling uncertainty
through a comprehensive overview of techniques such as Variational Inference (VI),
Hamiltonian Monte Carlo (HMC), and Langevin Dynamics. Moreover, we will explore
the characteristics and visual representation of posterior distributions, providing a better
understanding of Bayesian inference.
</p>
<h3 style="text-align: left">Computationally-efficient BNNs for CV.</h3>
<h2 style="text-align: center">Schedule</h2>
<ul>
<li>8:45-9:15: Opening - Andrei</li>
<li>
9:15-10:05: Uncertainty quantification: from maximum a posteriori to BNNs - Pavel (remotely)
</li>
<li>10:05-10:30: Computationally-efficient BNNs for computer vision - Gianni</li>
<li>10:35-11:00: Coffee</li>
<li>11:00-11:50: Convert your DNN into a BNN - Alexander</li>
<li>11:50-12:20: Quality of estimated uncertainty and practical examples - Adrien (remotely) & Gianni </li>
<li>12:20-12:40: Closing remarks + Q&A - Andrei, Alex, Pavel & Gianni</li>
</ul>

<h3 style="text-align: left">Uncertainty Quantification Framework.</h3>
<p>
This tutorial will also very quickly introduce the <a
href="https://github.com/ensta-u2is-ai/torch-uncertainty">TorchUncertainty
library</a>, an uncertainty-aware open-source framework for training models in PyTorch.
</p>
</div>

<a href="https://torch-uncertainty.github.io/" target="_blank">
<div><img src="assets/logoTU_full.png" width="20%" hspace="2%"> </div>
</a>

<br>

<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Relation to prior tutorials and short courses</h2>
<p> This tutorial is affiliated with the <a href="https://uncv2023.github.io/">UNCV Workshop</a>,
which had its inaugural edition at ECCV 2022, a subsequent one at ICCV, and is back at ECCV this year.
In constrast to the workshop, the tutorial puts its primary emphasis on the theoretical facets. </p>
<p> UQ has received some attention
in recent times, as evidenced by its inclusion in
the tutorial <a href="https://abursuc.github.io/many-faces-reliability/">'Many Faces of Reliability of Deep
Learning for Real-World Deployment'</a>. While this tutorial explored various applications associated with
uncertainty,
it did not place a specific emphasis on probabilistic models and Bayesian Neural Networks. Our tutorial aims
to provide a more in-depth exploration of uncertainty theory, accompanied by the introduction of practical
applications, including the presentation of the library, <a
href="https://github.com/ensta-u2is-ai/torch-uncertainty">TorchUncertainty</a>.</p>
</div>

<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Selected References</h2>
<ol>
<li><b>Immer, A.</b>, Palumbo, E., Marx, A., & Vogt, J. E. E<a
href="https://proceedings.neurips.cc/paper_files/paper/2023/file/a901d5540789a086ee0881a82211b63d-Paper-Conference.pdf">
Effective Bayesian Heteroscedastic Regres-
sion with Deep Neural Networks</a>. In NeurIPS, 2023.</li>
<li><b>Franchi, G., Bursuc, A.,</b> Aldea, E., Dubuisson, S.,
& Bloch, I. <a href="https://arxiv.org/pdf/2012.02818">Encoding the latent posterior of
Bayesian Neural Networks for uncertainty quantification</a>. IEEE TPAMI, 2023.</li>
<li><b>Franchi, G.</b>, Yu, X., <b>Bursuc, A.</b>, Aldea, E., Dubuisson,
S., & Filliat, D. <a href="https://arxiv.org/pdf/2207.10130">Latent Discriminant
deterministic Uncertainty</a>. In ECCV 2022.</li>
<li><b>Laurent, O.</b>, <b>Lafage, A.</b>, Tartaglione, E., Daniel, G.,
Martinez, J. M., <b>Bursuc, A.</b>, & <b>Franchi, G.</b>
<a href="https://arxiv.org/pdf/2210.09184">Packed-Ensembles for Efficient Uncertainty Estimation</a>. In
ICLR 2023.
</li>
<li><b>Izmailov, P.</b>, Vikram, S., Hoffman, M. D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/2104.14421">What are Bayesian neural network
posteriors really like?</a> In ICML, 2021.</li>
<li><b>Izmailov, P.</b>, Maddox, W. J., Kirichenko, P., Garipov, T., Vetrov, D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/1907.07504">Subspace inference for Bayesian deep learning</a>. In UAI, 2020.
</li>
<li><b>Franchi, G.</b>, <b>Bursuc, A.</b>, Aldea, E., Dubuisson, S., &
Bloch, I. <a href="https://arxiv.org/pdf/1912.11316">TRADI: Tracking deep neural
network weight distributions</a>. In ECCV 2020.</li>
<li>Wilson, A. G., & <b>Izmailov, P</b>. <a href="https://arxiv.org/pdf/2002.08791">Bayesian deep
learning and a probabilistic perspective of generalization</a>. In NeurIPS, 2020.</li>
<li>Hendrycks, D., Dietterich, T. <a href="https://arxiv.org/pdf/1903.12261">Benchmarking Neural Network
Robustness to Common Corruptions and
Perturbations</a>. In ICLR 2019.</li>
<li><b> Izmailov, P.</b>, Podoprikhin, D., Garipov, T., Vetrov, D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/1803.05407">Averaging weights
leads to wider optima and better generalization</a>. In UAI, 2018. </li>
</ol>
You will find more references in the <a
href="https://github.com/ensta-u2is-ai/awesome-uncertainty-deeplearning">Awesome Uncertainty in deep
learning.</a>
</div>

<br>
<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Outline</h2>

<h3 style="text-align: left">Introduction: Why & where is UQ helpful?</h3>
<p>
This opening part explores the critical role of uncertainty quantification (UQ) within the realm
of computer vision (CV): participants will gain an understanding of why it is essential to consider
uncertainty in CV, especially concerning decision-making in complex
environments. We will introduce real-world scenarios where uncertainty can profoundly
impact model performance and safety, setting the stage for deeper exploration throughout the tutorial.
</p>
<h3 style="text-align: left">From maximum a posteriori to BNNs.</h3>
<p>
In this part, we will journey through the evolution of UQ techniques, starting
from classic approaches such as maximum a posteriori (MAP) estimation and moving to the more
elaborate Bayesian Neural Networks (BNNs). Participants will grasp the conceptual foundations
of UQ, laying the groundwork for the subsequent discussions of Bayesian methods.
</p>
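<p>
    To make the MAP connection concrete, the minimal PyTorch sketch below (illustrative only, not part of
    the tutorial materials; all names are hypothetical) shows that MAP estimation with a zero-mean Gaussian
    prior on the weights amounts to ordinary training with an L2 penalty, i.e., weight decay:
</p>
<pre><code class="language-python"># Illustrative sketch: MAP with a Gaussian prior = NLL loss + L2 penalty.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)              # any differentiable model
nll = nn.CrossEntropyLoss()           # negative log-likelihood
prior_precision = 1e-2                # 1 / sigma^2 of the Gaussian prior

def map_loss(logits, targets):
    # -log p(w | D) = -log p(D | w) - log p(w) + const
    sq_norm = sum((p ** 2).sum() for p in model.parameters())
    return nll(logits, targets) + 0.5 * prior_precision * sq_norm

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
map_loss(model(x), y).backward()      # optimizing this yields the MAP weights
</code></pre>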
<h3 style="text-align: left">Strategies for BNN posterior inference.</h3>
<p>
This core part dives into the process of estimating the posterior distribution of BNNs.
Participants will gain insights into the computational complexities involved in modeling
uncertainty through a comprehensive overview of techniques such as Variational Inference (VI),
Hamiltonian Monte Carlo (HMC), and Langevin dynamics. Moreover, we will explore
the characteristics and visual representation of posterior distributions, providing a better
understanding of Bayesian inference.
</p>
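<p>
    As a rough preview of the sampling-based family, here is a minimal sketch of stochastic gradient
    Langevin dynamics, one instance of Langevin dynamics (illustrative code with hypothetical names):
    each update adds Gaussian noise scaled by the step size to a gradient step on the log-posterior, so
    the iterates approximately sample the posterior.
</p>
<pre><code class="language-python"># Illustrative SGLD sketch: w += (eps / 2) * grad log p(w | D) + N(0, eps).
import torch

def sgld_step(params, grad_log_posterior, step_size=1e-2):
    grads = grad_log_posterior(params)            # gradients of log p(w | D)
    with torch.no_grad():
        for p, g in zip(params, grads):
            noise = torch.randn_like(p) * step_size ** 0.5
            p.add_(0.5 * step_size * g + noise)   # gradient step + injected noise

# Toy run: the "posterior" is a standard Gaussian, so grad log p(w) = -w.
w = [torch.zeros(3)]
for _ in range(1000):
    sgld_step(w, lambda ps: [-p for p in ps])
print(w[0])                                       # one (approximate) posterior sample
</code></pre>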
<h3 style="text-align: left">Computationally-efficient BNNs for CV.</h3>
<p>
Here, we will present recent techniques that improve the computational efficiency of BNNs for computer
vision tasks. We will cover different ways of obtaining BNNs from intermediate checkpoints, from
weight trajectories during a training run, and from various types of variational subnetworks,
along with their main strengths and limitations.
</p>
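<p>
    As a simplified illustration of the checkpoint-based flavor (a sketch, not one of the specific methods
    presented in the session), one can snapshot the weights along a single training run and average the
    softmax predictions of the snapshots at test time:
</p>
<pre><code class="language-python"># Illustrative checkpoint-ensemble sketch built from one training trajectory.
import copy
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
snapshots = []

for epoch in range(30):
    ...                                    # the usual training steps go here
    if epoch % 10 == 9:                    # keep a snapshot every 10 epochs
        snapshots.append(copy.deepcopy(model.state_dict()))

@torch.no_grad()
def ensemble_predict(x):
    probs = []
    for state in snapshots:
        model.load_state_dict(state)       # reuse one network for every member
        probs.append(model(x).softmax(dim=-1))
    return torch.stack(probs).mean(dim=0)  # predictive mean over snapshots

print(ensemble_predict(torch.randn(4, 10)))
</code></pre>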
<h3 style="text-align: left">Convert your DNN into a BNN: post-hoc BNN inference.</h3>
<p>
This segment covers post-hoc inference techniques, with an emphasis on the Laplace
approximation. Participants will learn how the Laplace approximation serves as a
computationally efficient method for approximating the posterior distribution of
Bayesian Neural Networks.
</p>
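<p>
    A minimal sketch of the idea, assuming a diagonal curvature approximation (simplified for
    illustration; practical implementations use more refined structures): around the trained weights,
    approximate the posterior by a Gaussian whose variances come from the prior precision plus a
    diagonal empirical Fisher.
</p>
<pre><code class="language-python"># Illustrative diagonal Laplace sketch around already-trained (MAP) weights.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                 # stands in for a trained network
nll = nn.CrossEntropyLoss(reduction="sum")

def diagonal_fisher(data):
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in data:
        model.zero_grad()
        nll(model(x), y).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad ** 2             # accumulate the empirical Fisher diagonal
    return fisher

data = [(torch.randn(8, 10), torch.randint(0, 2, (8,)))]
prior_precision = 1.0
variances = [1.0 / (prior_precision + f) for f in diagonal_fisher(data)]

with torch.no_grad():                    # draw one weight sample from the
    for p, v in zip(model.parameters(), variances):   # Gaussian approximation
        p.add_(torch.randn_like(p) * v.sqrt())
</code></pre>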
<h3 style="text-align: left">Quality of estimated uncertainty and practical examples.</h3>
<p>
In the final session, participants will learn how to evaluate the quality of UQ in practical
settings. We will develop multiple approaches to assess the reliability and calibration
of uncertainty estimates, equipping participants with the tools to gauge the robustness
of their models. Additionally, we will dive into real-world examples and applications,
showcasing how UQ can enhance the reliability
and performance of computer vision systems in diverse scenarios. Through interactive
discussions and case studies, participants will gain practical insights into deploying
uncertainty-aware models in real-world applications.
</p>
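<p>
    For a taste of such evaluations, the sketch below computes the expected calibration error (ECE),
    one widely used calibration metric (a minimal version under common binning assumptions; the session
    will cover this and other measures in more depth):
</p>
<pre><code class="language-python"># Illustrative ECE sketch: bin by confidence, compare confidence to accuracy.
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    conf, pred = probs.max(dim=-1)            # confidence and predicted class
    correct = pred.eq(labels).float()
    bins = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = conf.gt(lo) & conf.le(hi)      # samples falling in this bin
        if mask.any():
            gap = (conf[mask].mean() - correct[mask].mean()).abs()
            ece += mask.float().mean() * gap  # weight the gap by bin frequency
    return ece

probs = torch.softmax(torch.randn(100, 10), dim=-1)
labels = torch.randint(0, 10, (100,))
print(expected_calibration_error(probs, labels))
</code></pre>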

<h3 style="text-align: left">Uncertainty Quantification Framework.</h3>
<p>
This tutorial will also briefly introduce the <a
href="https://github.com/ensta-u2is-ai/torch-uncertainty">TorchUncertainty
library</a>, an uncertainty-aware open-source framework for training models in PyTorch.
</p>
</div>

<a href="https://torch-uncertainty.github.io/" target="_blank">
<div><img src="assets/logoTU_full.png" width="20%" hspace="2%"> </div>
</a>

<br>

<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Relation to prior tutorials and short courses</h2>
<p> This tutorial is affiliated with the <a href="https://uncv2023.github.io/">UNCV Workshop</a>,
which had its inaugural edition at ECCV 2022, a subsequent one at ICCV, and is back at ECCV this year.
In contrast to the workshop, the tutorial puts its primary emphasis on the theoretical facets. </p>
<p> UQ has received growing attention in recent years, as evidenced by its inclusion in
the tutorial <a href="https://abursuc.github.io/many-faces-reliability/">'Many Faces of Reliability of Deep
Learning for Real-World Deployment'</a>. While that tutorial explored various applications associated with
uncertainty, it did not place a specific emphasis on probabilistic models and Bayesian Neural Networks.
Our tutorial aims to provide a more in-depth exploration of uncertainty theory, accompanied by the
introduction of practical applications, including the presentation of the library <a
href="https://github.com/ensta-u2is-ai/torch-uncertainty">TorchUncertainty</a>.</p>
</div>

<div class="containertext" style="max-width:50rem">
<h2 style="text-align: center">Selected References</h2>
<ol>
<li><b>Immer, A.</b>, Palumbo, E., Marx, A., & Vogt, J. E. <a
href="https://proceedings.neurips.cc/paper_files/paper/2023/file/a901d5540789a086ee0881a82211b63d-Paper-Conference.pdf">Effective
Bayesian Heteroscedastic Regression with Deep Neural Networks</a>. In NeurIPS, 2023.</li>
<li><b>Franchi, G., Bursuc, A.,</b> Aldea, E., Dubuisson, S.,
& Bloch, I. <a href="https://arxiv.org/pdf/2012.02818">Encoding the latent posterior of
Bayesian Neural Networks for uncertainty quantification</a>. IEEE TPAMI, 2023.</li>
<li><b>Franchi, G.</b>, Yu, X., <b>Bursuc, A.</b>, Aldea, E., Dubuisson,
S., & Filliat, D. <a href="https://arxiv.org/pdf/2207.10130">Latent Discriminant
deterministic Uncertainty</a>. In ECCV, 2022.</li>
<li><b>Laurent, O.</b>, <b>Lafage, A.</b>, Tartaglione, E., Daniel, G.,
Martinez, J. M., <b>Bursuc, A.</b>, & <b>Franchi, G.</b>
<a href="https://arxiv.org/pdf/2210.09184">Packed-Ensembles for Efficient Uncertainty Estimation</a>. In
ICLR, 2023.
</li>
<li><b>Izmailov, P.</b>, Vikram, S., Hoffman, M. D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/2104.14421">What are Bayesian neural network
posteriors really like?</a> In ICML, 2021.</li>
<li><b>Izmailov, P.</b>, Maddox, W. J., Kirichenko, P., Garipov, T., Vetrov, D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/1907.07504">Subspace inference for Bayesian deep learning</a>. In UAI,
2019.
</li>
<li><b>Franchi, G.</b>, <b>Bursuc, A.</b>, Aldea, E., Dubuisson, S., &
Bloch, I. <a href="https://arxiv.org/pdf/1912.11316">TRADI: Tracking deep neural
network weight distributions</a>. In ECCV, 2020.</li>
<li>Wilson, A. G., & <b>Izmailov, P</b>. <a href="https://arxiv.org/pdf/2002.08791">Bayesian deep
learning and a probabilistic perspective of generalization</a>. In NeurIPS, 2020.</li>
<li>Hendrycks, D., & Dietterich, T. <a href="https://arxiv.org/pdf/1903.12261">Benchmarking Neural Network
Robustness to Common Corruptions and Perturbations</a>. In ICLR, 2019.</li>
<li><b>Izmailov, P.</b>, Podoprikhin, D., Garipov, T., Vetrov, D., & Wilson, A. G. <a
href="https://arxiv.org/pdf/1803.05407">Averaging weights
leads to wider optima and better generalization</a>. In UAI, 2018. </li>
</ol>
You will find more references in the <a
href="https://github.com/ensta-u2is-ai/awesome-uncertainty-deeplearning">Awesome Uncertainty in Deep
Learning</a> repository.
</div>

<br>

<div class="containertext">
<h3 style="text-align: center">Andrei Bursuc is supported by ELSA:</h3>

<center>
<a href="https://elsa-ai.eu/" target="_blank"><img src="assets/elsa_logo.png" width="10%" hspace="2%" /></a>
</center>
</div>
</div>
</div>
</div>


</section>