Anand K Subramanian



Posts about #ml

26 Sep 2024

Classification to Regression and Back

A trick to convert classification labels to regression targets and back.

#math #ml #code

18 Mar 2024

L1 Regularization in terms of L2

A short note on over-parameterizing the L1 regularizer to make it differentiable.

#math #ml

12 Jan 2024

Extending Mahalanobis Distance to Gaussian Mixtures

A simple generalization of Mahalanobis distance to Gaussian Mixture Models (GMMs).

#math #ml #code

04 Aug 2023

Mathematics of Changing One's Mind

A guide to updating probabilistic beliefs using Jeffrey's rule and Pearl's method.

#math #ml #probability

29 Aug 2022

Improving the RANSAC Algorithm

A discussion of the MAGSAC algorithm, which addresses a crucial hyperparameter-selection issue in RANSAC.

#math #ml #code #jax

18 Jul 2022

A Detour to the Imaginary has its Benefits

Two examples of using complex numbers for real-function optimization.

#numerics #math #ml #gradient #code #python

29 Jun 2022

The Back-Gradient Trick

Stochastic Gradient Descent can be (kind of) reversed, allowing gradients to be computed with respect to its hyperparameters.

#math #ml #gradient #graph #code #jax #deep-learning

31 May 2022

Parallelizing Kalman Filters

The associative property of Kalman (Bayesian) filters yields an O(log N) parallel algorithm.

#math #ml #parallel #code #jax

23 Sep 2021

Fast Sample-Covariance Computation for Multidimensional Arrays

A quick discussion and a vectorized Python implementation for the computation of sample covariance matrices for multi-dimensional arrays.

#math #ml #code

30 Aug 2021

Asymmetric Numeral Systems

A tutorial on the lossless Asymmetric Numeral Systems (ANS) coding commonly used in image compression.

#math #ml #information-theory #code

27 Jun 2021

A Beautiful Way to Characterize Directed Acyclic Graphs

An interesting connection between the number of cycles in a digraph and the powers of its adjacency matrix leads to a beautiful formulation of DAG constraints.

#graph #math #ml

18 Jun 2020

A Cleverer Trick on top of the Reparametrization Trick

Implicit differentiation can lead to an efficient computation of the gradient of reparametrized samples.

#math #ml #gradient #deep-learning

08 Jun 2020

A Trick for Computing the Gradient of an Expectation

A trick for interchanging the gradient and expectation of a function under the Gaussian distribution.

#math #ml #gradient #deep-learning

18 Aug 2019

Segue from Euclidean Gradient Descent to Natural Gradient Descent

A slight reformulation of SGD, as the maximization of a local approximation, leads to an interesting general connection to NGD via mirror descent.

#math #ml #gradient #natural-gradient #deep-learning

08 Aug 2019

A Fascinating Connection between Natural Gradients and the Exponential Family

The exponential family provides an elegant and easy way to compute natural gradients, and can thus be used for variational inference.

#math #ml #gradient #natural-gradient #deep-learning

© 2024 Anand K Subramanian · Built with Kutti