Software engineering is on the brink of a revolution with the emergence of large language models (LLMs). LLMs are AI systems that have been trained on large amounts of data, allowing them to generate natural language text and source code.
[Read More]
Research Idea: Encouraging Ensemble Diversity and Model Disagreement in Active Learning and Beyond
While training deep ensembles or BNNs, we should be able to maximize the BALD score (a model disagreement metric) as a regularizer using unlabeled data, improving model diversity and active learning efficiency (or OOD detection) where it matters: on pool or evaluation set data. Given the limited capacity of...
[Read More]
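For context on the disagreement metric above: a minimal sketch of the BALD score for a deep ensemble, assuming member softmax outputs stacked along the first dimension (the function name and tensor shapes are illustrative, not code from the post):

```python
import torch

def bald_score(probs: torch.Tensor) -> torch.Tensor:
    """BALD score I(y; theta | x) per point, in nats.

    probs: shape [n_members, n_points, n_classes], each member's
    softmax output on the same batch of points (hypothetical layout).
    """
    eps = 1e-12  # numerical safety for log(0)
    mean_probs = probs.mean(dim=0)
    # Entropy of the mean prediction: H(E_theta[p(y | x, theta)])
    entropy_of_mean = -(mean_probs * (mean_probs + eps).log()).sum(dim=-1)
    # Mean entropy of member predictions: E_theta[H(p(y | x, theta))]
    mean_of_entropies = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)
    # Their difference is the mutual information, i.e. member disagreement
    return entropy_of_mean - mean_of_entropies
```

The idea sketched in the teaser would then amount to subtracting a weighted `bald_score` on unlabeled pool data from the training objective, so that maximizing disagreement there acts as a regularizer; the weighting is an open choice.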
Research Idea: Approximating BatchBALD via "k-BALD"
This post introduces a family of much less expensive approximations of BatchBALD that might work well wherever BatchBALD works. You might have noticed that BatchBALD can be very, very slow. We can approximate BatchBALD using pairwise mutual information terms, leading to a new approximation we call 2-BALD, or generally, following...
[Read More]
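As a rough sketch of the pairwise idea (the exact formulation and the higher-order variants are in the post), the joint mutual information that BatchBALD computes can be expanded by inclusion-exclusion and truncated at the pairwise interaction terms:

$$I[Y_1, \dots, Y_n; \Theta] \approx \sum_i I[Y_i; \Theta] - \sum_{i < j} I[Y_i; Y_j; \Theta],$$

where $I[Y_i; Y_j; \Theta]$ is the interaction information between two candidate points' predictions and the model parameters. The pairwise terms only ever involve pairs of points, avoiding the joint entropies over whole candidate batches that make BatchBALD slow.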
Paper Review: Bayesian Model Selection, the Marginal Likelihood, and Generalization
The paper, accepted as Long Oral at ICML 2022, discusses the (log) marginal likelihood (LML) in detail: its advantages, use-cases, and potential pitfalls, with an extensive review of related work. It further suggests using the “conditional (log) marginal likelihood (CLML)” instead of the LML and shows that it captures the...
[Read More]
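Schematically (the notation here is mine, not necessarily the paper's): by the chain rule, the LML decomposes into one-step-ahead predictive terms, and the CLML keeps only the terms after conditioning on the first $m$ data points:

$$\log p(\mathcal{D} \mid \mathcal{M}) = \sum_{i=1}^{n} \log p(\mathcal{D}_i \mid \mathcal{D}_{<i}, \mathcal{M}), \qquad \log p(\mathcal{D}_{\geq m} \mid \mathcal{D}_{<m}, \mathcal{M}) = \sum_{i=m}^{n} \log p(\mathcal{D}_i \mid \mathcal{D}_{<i}, \mathcal{M}).$$

Dropping the early terms makes the CLML less sensitive to how well the prior fits the first few points, bringing it closer to a measure of generalization.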
On the Total Variation Distance
The definition of the total variation distance can be confusing (at least to me), as it is formulated as a supremum. There is a simpler formulation. We connect the two here and provide some intuition.
[Read More]
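For reference, the supremum definition and the simpler formulation the post connects are (stated here for discrete distributions $P$ and $Q$):

$$\mathrm{TV}(P, Q) = \sup_{A} \, \lvert P(A) - Q(A) \rvert = \frac{1}{2} \sum_{x} \lvert P(x) - Q(x) \rvert.$$

The supremum ranges over all measurable events $A$; the right-hand side is simply half the $L_1$ distance between the probability vectors.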