Marin Vlastelica Pogancic
Anselm Paulus, Bachelor's Student Intern
Michal Rolinek, Postdoctoral Researcher
Dominik Zietlow, Ph.D. Student

5 results

2020


Optimizing Rank-based Metrics with Blackbox Differentiation

Rolinek, M., Musil, V., Paulus, A., Vlastelica, M., Michaelis, C., Martius, G.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2020), pages 7620-7630, 2020, Best paper nomination (inproceedings)

Abstract
Rank-based metrics are some of the most widely used criteria for performance evaluation of computer vision models. Despite years of effort, direct optimization for these metrics remains a challenge due to their non-differentiable and non-decomposable nature. We present an efficient, theoretically sound, and general method for differentiating rank-based metrics with mini-batch gradient descent. In addition, we address optimization instability and sparsity of the supervision signal that both arise from using rank-based metrics as optimization targets. Resulting losses based on recall and Average Precision are applied to image retrieval and object detection tasks. We obtain performance that is competitive with state-of-the-art on standard image retrieval datasets and consistently improve performance of near state-of-the-art object detectors.

Paper @ CVPR Long Oral Short Oral Arxiv Code Pdf Project Page [BibTex]
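The non-differentiability the abstract mentions is easy to see concretely. A minimal illustration (my own, not from the paper's code) of why rank-based metrics yield zero gradient almost everywhere:

```python
import numpy as np

scores = np.array([0.3, 0.9, 0.5])

# rank 0 = highest score; a double argsort turns scores into ranks
ranks = np.argsort(np.argsort(-scores))

# a small perturbation of the scores leaves the ranks unchanged, so any
# metric computed from ranks is piecewise constant in the scores and its
# exact gradient is zero almost everywhere
ranks_nudged = np.argsort(np.argsort(-(scores + 1e-6)))
```

This is why the paper needs a dedicated differentiation scheme rather than plain backpropagation through the ranking operation.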

Differentiation of Blackbox Combinatorial Solvers

Vlastelica, M., Paulus, A., Musil, V., Martius, G., Rolinek, M.

In International Conference on Learning Representations, ICLR’20, 2020 (incollection)

link (url) Project Page [BibTex]
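The core trick of this paper can be sketched in a few lines. Below is a hedged NumPy illustration (the toy ranking "solver" and the value of λ are my choices, not the paper's code): the solver is treated as a black box, and the backward pass simply re-runs it on an input perturbed along the incoming gradient, returning a finite difference of solver outputs as the gradient.

```python
import numpy as np

def solver(w):
    # black-box combinatorial "solver": maps scores to ranks
    # (piecewise constant, so it has no useful exact gradient)
    return np.argsort(np.argsort(-w)).astype(float)

def blackbox_grad(w, grad_output, lam=10.0):
    # backward pass in the spirit of the paper: call the solver once more
    # on a perturbed input and return a finite difference of its outputs
    y = solver(w)
    y_lam = solver(w + lam * grad_output)
    return -(y - y_lam) / lam

# toy loss = rank of item 0 (we want it ranked first), so dL/dy = e_0
w = np.array([1.0, 2.0, 3.0])
g = blackbox_grad(w, np.array([1.0, 0.0, 0.0]))
# g[0] < 0: raising item 0's score lowers its rank, i.e. the loss
```

The forward pass stays an exact solver call; only the backward pass is approximated, which is what makes the scheme applicable to any off-the-shelf combinatorial solver.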

2019


Variational Autoencoders Pursue PCA Directions (by Accident)

Rolinek, M., Zietlow, D., Martius, G.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), June 2019 (inproceedings)

Abstract
The Variational Autoencoder (VAE) is a powerful architecture capable of representation learning and generative modeling. When it comes to learning interpretable (disentangled) representations, VAE and its variants show unparalleled performance. However, the reasons for this are unclear, since a very particular alignment of the latent embedding is needed but the design of the VAE does not encourage it in any explicit way. We address this matter and offer the following explanation: the diagonal approximation in the encoder together with the inherent stochasticity force local orthogonality of the decoder. The local behavior of promoting both reconstruction and orthogonality matches closely how the PCA embedding is chosen. Alongside providing an intuitive understanding, we justify the statement with full theoretical analysis as well as with experiments.

arXiv link (url) Project Page [BibTex]
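The orthogonality statement in the abstract can be written compactly. A sketch in my own notation (the symbols J and σ are not defined on this page): the claim is that the diagonal posterior approximation, combined with the encoder's stochasticity, pushes the decoder's local Jacobian toward orthogonal columns, mirroring how PCA selects orthogonal directions.

```latex
% local behavior of the decoder g at a latent point z:
% its Jacobian acquires (approximately) pairwise-orthogonal columns
J = \frac{\partial g(z)}{\partial z}, \qquad
J^\top J \approx \operatorname{diag}(\sigma_1^2, \dots, \sigma_k^2)
```

Combined with the reconstruction objective, this local orthogonality is what aligns the latent axes with PCA-like directions.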

2018


L4: Practical loss-based stepsize adaptation for deep learning

Rolinek, M., Martius, G.

In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pages: 6434-6444, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 2018 (inproceedings)

Github link (url) Project Page [BibTex]
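The "loss-based stepsize" in the title can be sketched from its linearization idea. A hedged one-function sketch (the default α and the nonzero-denominator guard are my choices): pick the step size so that a first-order model of the loss would drop by a fixed fraction α of the gap to the smallest achievable loss.

```python
import numpy as np

def l4_step_size(loss, grad, update_dir, alpha=0.15, loss_min=0.0):
    # linearized model: L(theta - eta * v) ~= L(theta) - eta * (grad @ v);
    # choose eta so the modeled loss drops by alpha * (loss - loss_min)
    return alpha * (loss - loss_min) / (np.dot(grad, update_dir) + 1e-12)

# plain gradient descent direction: v = grad
grad = np.array([3.0, 4.0])
eta = l4_step_size(loss=2.0, grad=grad, update_dir=grad)
```

Because the rule is stated in terms of the update direction v, it composes with any base optimizer that supplies one, not just plain gradient descent.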

2016


Extrapolation and learning equations

Martius, G., Lampert, C. H.

2016, arXiv preprint, https://arxiv.org/abs/1610.02995 (misc)

Project Page [BibTex]
