Machine Learning

About

Our group applies machine learning across a wide range of applications; recently, a main thrust of our theoretical work has been the development of information-theoretic methods.

Mutual information is among the most powerful and general measures of the relationship between random variables, but its effectiveness in machine learning has been limited by (a) the difficulty of estimating entropy and mutual information in high dimensions (see UAI-15, AISTATS-15, NIPS-16), and (b) frequent misuse of (pairwise) mutual information (e.g., ICML-14). It can be difficult even to define meaningful information measures for the interaction of many variables in high dimensions.
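To make the estimation problem concrete, the following minimal Python sketch (an illustration for this page, not code from the papers cited above) computes a plug-in estimate of mutual information from the empirical joint distribution of two discrete variables. The same approach fails in high dimensions because the joint table grows exponentially with the number of variables, which is what motivates the estimators developed in the papers above.

import numpy as np

def mutual_information(x, y):
    # Plug-in estimate of I(X; Y) in nats from paired discrete samples.
    # (Illustrative helper; the name and interface are ours, not from the papers.)
    n = len(x)
    joint = {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
    px = {xi: np.mean(x == xi) for xi in set(x)}
    py = {yi: np.mean(y == yi) for yi in set(y)}
    mi = 0.0
    for (xi, yi), count in joint.items():
        pxy = count / n
        mi += pxy * np.log(pxy / (px[xi] * py[yi]))
    return mi

# Two correlated binary variables: y copies x with probability 0.9.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10000)
y = np.where(rng.random(10000) < 0.9, x, 1 - x)
print(mutual_information(x, y))  # roughly ln(2) - H(0.1), about 0.37 nats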

We have taken steps toward formalizing the use of multivariate mutual information objectives for unsupervised learning. The method of Correlation Explanation (AISTATS-15, NIPS-14) provides an information-theoretic foundation for modularly and hierarchically decomposing the information in complex systems. Information can be extracted incrementally using the “information sieve” method (ICML-16) and its extension to continuous variables.
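The multivariate quantity underlying Correlation Explanation is total correlation, TC(X) = sum_i H(X_i) - H(X_1, ..., X_n), which is zero exactly when the variables are independent; CorEx searches for latent factors that account for as much of it as possible. The sketch below (an illustration for this page, not the released CorEx implementation linked further down) computes a plug-in estimate of TC for discrete samples.

import numpy as np
from collections import Counter

def entropy(counts, n):
    # Entropy in nats of the empirical distribution given by counts over n samples.
    p = np.array(list(counts.values())) / n
    return -np.sum(p * np.log(p))

def total_correlation(samples):
    # Plug-in estimate of TC(X) = sum_i H(X_i) - H(X) from an (n, d) sample array.
    # (Illustrative helper; the actual CorEx objective is given in the papers above.)
    samples = np.asarray(samples)
    n, d = samples.shape
    marginals = sum(entropy(Counter(samples[:, i]), n) for i in range(d))
    joint = entropy(Counter(map(tuple, samples)), n)
    return marginals - joint

# Three noisy copies of one hidden binary cause z: the shared cause shows up as TC > 0.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=5000)
noise = (rng.random((5000, 3)) < 0.1).astype(int)  # 10% flips per variable
x = z[:, None] ^ noise
print(total_correlation(x))  # about 0.4 nats; 0 would mean the x_i are independent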

These methods have found applications in gene expression analysis (see an interesting podcast and article about this work), brain imaging, text analysis, and psychometrics.

Code for Correlation Explanation and the Information Sieve can be found here.

Publications

People

Faculty

Students

Rob Brekelmans

Shuyang Gao

Neal Lawton