This post is just to remind myself of some of my favourite posters/presentations that I saw while attending ICML. I have undoubtedly missed a lot of interesting stuff. If you have any particular suggestions, please let me know!

The Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling

*Michael Betancourt*

I liked the topic and the kind of analysis, and I especially liked his clear style of presentation. Moreover, there was quite a lively discussion about whether this incompatibility is actually a problem, or whether the analysis focussed too narrowly on the bias introduced by naive subsampling.

Markov Chain Monte Carlo and Variational Inference: Bridging the Gap

*Tim Salimans, Diederik Kingma, Max Welling*

The presentation and poster were a bit hard for me to follow but the problem seems important.

Towards a Learning Theory of Cause-Effect Inference

*David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, Iliya Tolstikhin*

Interesting use of Maximum Mean Discrepancy in a clear analysis of an important problem.
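Mostly as a note to self on what MMD actually computes: it compares the kernel mean embeddings of two samples. A minimal sketch of a (biased) estimate of the squared MMD with an RBF kernel — my own code, not the authors', and the bandwidth choice is arbitrary:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    sq_dists = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy
    # between the samples X and Y (rows are observations).
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()
```

The statistic is zero when the two samples coincide and grows as their distributions drift apart, which is what makes it usable as a feature for deciding between causal directions.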

Weight Uncertainty in Neural Network

*Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra*

I have not looked into how exactly their approach is different from previous attempts at incorporating weight uncertainty, but the updates for the weight parameters seemed surprisingly simple.
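If I recall the poster correctly, the simplicity comes from the reparameterisation trick: each weight is drawn as mean plus a softplus-transformed scale times Gaussian noise, so ordinary backprop gives the gradients for both variational parameters. A minimal numpy sketch of just the sampling step (variable names are my own):

```python
import numpy as np

def sample_weights(mu, rho, rng):
    # Reparameterised Gaussian weight sample: w = mu + softplus(rho) * eps,
    # with eps ~ N(0, I). softplus keeps the standard deviation positive.
    sigma = np.log1p(np.exp(rho))
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps, eps
```

Because `w` is a deterministic function of `(mu, rho)` given `eps`, the gradient of the loss with respect to `mu` and `rho` follows from the chain rule through `w`, which is presumably why the updates looked so simple.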

Convex Calibrated Surrogates for Hierarchical Classification

*Harish Ramaswamy, Ambuj Tewari, Shivani Agarwal*

I like the idea of classification-calibrated losses, and this seems like an interesting extension to hierarchical loss functions.

Optimizing Non-decomposable Performance Measures: A Tale of Two Classes

*Harikrishna Narasimhan, Purushottam Kar, Prateek Jain*

The authors consider functions of the true positive rate and true negative rate, and come up with two classes of such functions and an approach to maximize them. One class includes measures like the G-mean and the H-mean, while the other class includes the F-measure and the Jaccard coefficient.
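As a note to self, the G-mean and H-mean are simply the geometric and harmonic means of TPR and TNR. A quick sketch with my own helper names, nothing to do with the authors' optimization method:

```python
import numpy as np

def tpr_tnr(y_true, y_pred):
    # True positive rate and true negative rate from binary labels (0/1 arrays).
    pos = y_true == 1
    neg = ~pos
    tpr = (y_pred[pos] == 1).mean()
    tnr = (y_pred[neg] == 0).mean()
    return tpr, tnr

def g_mean(tpr, tnr):
    # Geometric mean of the two rates.
    return np.sqrt(tpr * tnr)

def h_mean(tpr, tnr):
    # Harmonic mean of the two rates.
    return 2 * tpr * tnr / (tpr + tnr)
```

Both measures punish a classifier that sacrifices one class for the other, which is the point of using them on imbalanced data.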

The Kendall and Mallows Kernels for Permutations

*Yunlong Jiao & Jean-Philippe Vert*

The authors consider the problem of learning from permutations or rankings instead of vectors of real-valued numbers. In particular, they construct positive semi-definite kernels based on Kendall's tau coefficient and the Mallows distance in order to apply kernel methods to such data.
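To remind myself what the Kendall kernel between two rankings is: the fraction of concordant pairs minus the fraction of discordant pairs, i.e. Kendall's tau. A small self-contained sketch (my own code, not from the paper):

```python
from itertools import combinations

def kendall_kernel(sigma, tau):
    # Kendall kernel between two rankings, given as lists of ranks/scores:
    # (number of concordant pairs - number of discordant pairs) / total pairs.
    n = len(sigma)
    total = n * (n - 1) / 2
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (sigma[i] - sigma[j]) * (tau[i] - tau[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / total
```

The value is 1 for identical rankings and -1 for reversed ones, and since it is positive semi-definite it can be plugged directly into SVMs or other kernel machines.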

Enabling scalable stochastic gradient-based inference for Gaussian processes by employing the Unbiased LInear System SolvEr (ULISSE)

*Maurizio Filippone & Raphael Engler*

This seems to tackle the important problem of exact quantification of uncertainty in covariance parameters for Gaussian processes, with seemingly few constraints on the type of covariance function.

Risk and Regret of Hierarchical Bayesian Learners

*Jonathan H. Huggins & Joshua B. Tenenbaum*

Again, an interesting analysis of an important problem, although it will take me some more time to study the actual result.