Nearest Neighbor classifier

This post is about (probably) one of the simplest algorithms in ML: the Nearest Neighbor classifier. Much of statistical ML rests on a crucial idea called the "plug-in principle", and this blog is a nice read for anyone interested in the statistical foundations of ML. The Nearest Neighbor algorithm is one of the many algorithms that grew out of this principle.
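As a quick illustration (a minimal sketch, not code from the post itself), here is a 1-nearest-neighbor classifier in a few lines: a query point simply takes the label of the closest training point under Euclidean distance.

```python
import numpy as np

def nn_classify(X_train, y_train, x):
    """Label x with the class of its single nearest training point.
    X_train is (n, d), y_train is (n,), x is (d,). Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

# Toy usage: two 2-D clusters with labels 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(nn_classify(X, y, np.array([0.2, 0.1])))  # -> 0
print(nn_classify(X, y, np.array([0.8, 0.9])))  # -> 1
```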

Read More

Class conditional and class generic disentangled representations

During my internship at IIT-H, I came across disentangled representations while going through the ICML paper Disentangling by Factorising. In this work by Kim et al., Factor-VAE, disentangled representations are modeled by encouraging the marginal distribution of the representation to be factorial. They achieved this by minimizing the total correlation (TC) of the latent units, where the TC of a set of random variables \(z_1, z_2, \dots, z_n\) is defined as \begin{equation} TC(z_1, z_2, \dots, z_n) = KL\left(p(z_1, z_2, \dots, z_n) \,\middle\|\, \prod_{i=1}^{n} p(z_i)\right) \end{equation} (here \(KL\) stands for the Kullback-Leibler divergence). They estimated the TC using density-ratio estimation.
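Factor-VAE estimates the TC with a discriminator via density-ratio estimation, since it is intractable in general. As a sanity check of the definition itself (not the paper's estimator), TC has a closed form when \(p(z)\) is a zero-mean Gaussian with covariance \(\Sigma\): it reduces to \(\frac{1}{2}\left(\sum_i \log \Sigma_{ii} - \log \det \Sigma\right)\), which is zero exactly when \(\Sigma\) is diagonal, i.e. when the latent units are independent. A minimal sketch:

```python
import numpy as np

def gaussian_tc(cov):
    """Closed-form total correlation of a zero-mean Gaussian:
    TC = 0.5 * (sum_i log cov_ii - log det cov).
    Equals 0 exactly when cov is diagonal (factorial p(z))."""
    cov = np.asarray(cov, dtype=float)
    return 0.5 * (np.sum(np.log(np.diag(cov))) - np.linalg.slogdet(cov)[1])

print(gaussian_tc(np.eye(3)))                 # 0.0: independent latent units
print(gaussian_tc([[1.0, 0.8],
                   [0.8, 1.0]]))              # > 0: correlated latent units
```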

Problem

Read More