In the first part of this two-part series on Function-Space Variational Inference (FSVI), we looked at the Data Processing Inequality (DPI). In this second part, we finally look at the relationship between FSVI, a method focusing on the Bayesian predictive posterior rather than the parameter space, and the DPI. We...
[Read More]
Data Processing Inequalities and Function-Space Variational Inference (#1)
In information theory, the data processing inequality
(DPI) is a powerful concept. Informally, it tells us that
processing data cannot increase the amount of information it contains. In
this two-part blog post, we will explore the DPI and its
applications to function-space variational inference
(FSVI).
[Read More]
Bayesian Appropriation: Variational Inference = PAC-Bayes Optimization?
In this blog post, following the previous blog post1 on “Bayesian Appropriation: General Likelihood for Loss Functions”, we will examine and better understand parts of the paper “PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks”2 (“PACTran”), which was presented as an oral at the ECCV...
[Read More]
Bayesian Appropriation: General Likelihood for Loss Functions
In this blog post, we explore how some loss functions can be rewritten as Bayesian objectives using ideas from variational inference—hence, the tongue-in-cheek “Bayesian Appropriation.” This can make it easier to see connections between loss functions and Bayesian methods (e.g., by spotting similar patterns in the wild). We will first provide...
[Read More]
Understanding the Rao-Blackwell Theorem
The Rao-Blackwell theorem is a fundamental result in statistics that offers a powerful method for improving estimators by conditioning on sufficient statistics. It is named after the two statisticians who discovered it independently, C.R. Rao and David Blackwell. The theorem is relevant in many areas of statistics, including machine learning algorithms...
[Read More]
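As a quick illustration of the Rao-Blackwell idea from the teaser above (a hypothetical sketch, not taken from the linked post): to estimate the success probability p of a Bernoulli distribution, the first observation alone is an unbiased but noisy estimator; conditioning it on the sufficient statistic (the sum of all observations) yields the sample mean, which has strictly lower variance.

```python
import random

random.seed(0)
p, n, trials = 0.3, 20, 5000  # true parameter, sample size, Monte Carlo repetitions

naive, rb = [], []
for _ in range(trials):
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    naive.append(xs[0])     # crude unbiased estimator: just the first observation
    rb.append(sum(xs) / n)  # E[X_1 | sum of X_i] = sample mean (Rao-Blackwellized)

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Both estimators are unbiased, but conditioning on the sufficient
# statistic reduces the variance (here roughly by a factor of n).
print(var(naive), var(rb))
```

Here the Rao-Blackwellized estimator's variance is about p(1 - p)/n versus p(1 - p) for the naive one, matching the theorem's guarantee that the conditioned estimator is never worse.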