This post introduces a family of much less expensive approximations for BatchBALD that might work well wherever BatchBALD works. You might have noticed that BatchBALD can be very, very slow. We can approximate BatchBALD using pairwise mutual information terms, leading to a new approximation we call 2-BALD, or more generally, following...
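To make the pairwise idea concrete, here is a minimal NumPy sketch (the shapes, helper names, and greedy loop are my own illustrative assumptions, not code from the post): the batch score replaces the joint mutual information with per-point BALD scores minus pairwise mutual-information corrections, selected greedily.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy along `axis` (natural log)."""
    return -(p * np.log(p + eps)).sum(axis=axis)

def bald_scores(probs):
    """probs: [K, N, C] predictive probabilities under K posterior samples.
    BALD(x_i) = H(E_k[p_ik]) - E_k[H(p_ik)]."""
    mean_p = probs.mean(axis=0)  # [N, C]
    return entropy(mean_p) - entropy(probs).mean(axis=0)

def pairwise_mi(probs, i, j, eps=1e-12):
    """I(y_i; y_j) under the joint predictive p(y_i, y_j) = E_k[p_ik ⊗ p_jk]."""
    K = probs.shape[0]
    joint = np.einsum("kc,kd->cd", probs[:, i], probs[:, j]) / K  # [C, C]
    outer = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return (joint * (np.log(joint + eps) - np.log(outer + eps))).sum()

def greedy_2bald(probs, batch_size):
    """Greedily maximize sum_i BALD(x_i) - sum_{i<j} I(y_i; y_j)."""
    scores = bald_scores(probs)
    batch = [int(np.argmax(scores))]
    while len(batch) < batch_size:
        gains = np.array([
            scores[n] - sum(pairwise_mi(probs, n, b) for b in batch)
            if n not in batch else -np.inf
            for n in range(probs.shape[1])
        ])
        batch.append(int(np.argmax(gains)))
    return batch

# Toy usage: 32 posterior samples, 100 pool points, 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 100, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(greedy_2bald(probs, batch_size=5))
```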
[Read More]
Paper Review: Bayesian Model Selection, the Marginal Likelihood, and Generalization
The paper, accepted as a Long Oral at ICML 2022, discusses the (log) marginal likelihood (LML) in detail: its advantages, use cases, and potential pitfalls, with an extensive review of related work. It further suggests using the “conditional (log) marginal likelihood (CLML)” instead of the LML and shows that it captures the...
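As a quick reference, the chain-rule decomposition behind both quantities can be sketched as follows (inputs are suppressed and the notation is mine, not necessarily the paper's): the CLML simply drops the first m terms of the LML sum, i.e., it conditions on an initial chunk of the data.

```latex
\begin{align*}
\log p(\mathcal{D} \mid \mathcal{M})
  &= \sum_{i=1}^{n} \log p(y_i \mid y_{<i}, \mathcal{M}) , \\
\operatorname{CLML}
  &= \log p(\mathcal{D}_{m+1:n} \mid \mathcal{D}_{1:m}, \mathcal{M})
   = \sum_{i=m+1}^{n} \log p(y_i \mid y_{<i}, \mathcal{M}) .
\end{align*}
```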
[Read More]
On the Total Variation Distance
The definition of the total variation distance can be confusing (at least to me) because it is formulated as a supremum. There is a simpler formulation; here we connect the two and provide some intuition.
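For reference, these are the two standard formulations the post connects, for distributions P and Q on a discrete space Ω; the supremum is attained at A = {x : P(x) ≥ Q(x)}, which is the bridge between them.

```latex
\begin{equation*}
\mathrm{TV}(P, Q)
  \;=\; \sup_{A \subseteq \Omega} \,\lvert P(A) - Q(A) \rvert
  \;=\; \tfrac{1}{2} \sum_{x \in \Omega} \lvert P(x) - Q(x) \rvert .
\end{equation*}
```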
[Read More]
On Classification Metrics and an Alternative to the F1 Score
We express common classification metrics, such as recall and precision, in terms of probabilities, then examine the F1 score and reduce it to a ratio that is easier to understand.
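As a rough sketch of the kind of identity involved (my notation; the post's exact simplification may differ), precision and recall become conditional probabilities, and the F1 score collapses to a single ratio of probabilities:

```latex
\begin{align*}
\pi &= P(y{=}1 \mid \hat{y}{=}1), &
\rho &= P(\hat{y}{=}1 \mid y{=}1), \\
F_1 &= \frac{2\pi\rho}{\pi + \rho}
     = \frac{2\,P(y{=}1,\, \hat{y}{=}1)}{P(\hat{y}{=}1) + P(y{=}1)} .
\end{align*}
```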
[Read More]
Research Idea: Intellectually Pleasing Outlier Exposure (with Applications in Active Learning)
This post discusses potential failure cases of outlier exposure (when using “fake” label distributions for outliers) and presents an intellectually pleasing version of outlier exposure in latent space, treating outliers as purely negative samples from a contrastive point of view.
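As one plausible (entirely hypothetical) reading of "outliers as purely negative samples", here is a toy NumPy sketch of an InfoNCE-style loss in which outlier embeddings only ever appear in the denominator as extra negatives; the function name and shapes are my own, not the post's:

```python
import numpy as np

def info_nce_with_outliers(anchor, positive, negatives, outliers, tau=0.1):
    """InfoNCE-style loss where outliers only ever enter the denominator
    as extra negatives; they are never anchors or positives.
    Inputs are L2-normalized embeddings: anchor/positive [D],
    negatives [M, D], outliers [P, D]."""
    pos_sim = anchor @ positive / tau                                # scalar
    neg_sims = np.concatenate([negatives, outliers]) @ anchor / tau  # [M + P]
    logits = np.concatenate([[pos_sim], neg_sims])
    # Negative log softmax probability of the positive pair.
    return -(pos_sim - np.log(np.exp(logits).sum()))

# Toy usage with random unit embeddings.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
z, z_pos = unit(rng.normal(size=(2, 64)))
print(info_nce_with_outliers(z, z_pos,
                             unit(rng.normal(size=(8, 64))),
                             unit(rng.normal(size=(4, 64)))))
```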
[Read More]