Department Talks

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

  • Felix Leibfried and Jordi Grau-Moya
  • N 4.022 (Seminar Room EI-Dept.)

Autonomous systems rely on learning from experience to automatically refine their strategy and adapt to their environment, and thereby have huge advantages over traditional hand-engineered systems. At PROWLER.io we use reinforcement learning (RL) for sequential decision making under uncertainty to develop intelligent agents capable of acting in dynamic and unknown environments. In this talk we first give a general overview of the goals and the research conducted at PROWLER.io. Then, we will talk about two specific research topics. The first is Information-Theoretic Model Uncertainty, which deals with the problem of making robust decisions that take into account misspecified models of the environment. The second is Deep Model-Based Reinforcement Learning, which deals with the problem of learning the transition and reward functions of a Markov Decision Process in order to use them for data-efficient learning.
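To make the model-based idea concrete, here is a toy tabular sketch (my illustration, not PROWLER.io code): estimate the transition and reward functions of an MDP from logged experience, then plan against the learned model.

```python
# Toy model-based RL in tabular form (illustrative sketch only):
# estimate P(s'|s,a) and R(s,a) from data, then plan by value iteration.
import numpy as np

def learn_model(transitions, n_states, n_actions):
    """Estimate transition and reward functions from (s, a, r, s') tuples."""
    counts = np.zeros((n_states, n_actions, n_states))
    rewards = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    for s, a, r, s_next in transitions:
        counts[s, a, s_next] += 1
        rewards[s, a] += r
        visits[s, a] += 1
    visits = np.maximum(visits, 1)        # avoid division by zero
    P = counts / visits[:, :, None]       # empirical P(s'|s,a)
    R = rewards / visits                  # empirical R(s,a)
    return P, R

def plan(P, R, gamma=0.95, iters=500):
    """Value iteration on the learned model; returns a greedy policy."""
    Q = np.zeros(R.shape)
    for _ in range(iters):
        Q = R + gamma * P @ Q.max(axis=1)
    return Q.argmax(axis=1)
```

The data-efficiency argument is that every logged transition improves the model everywhere, rather than contributing to a single gradient step on a policy.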

Organizers: Michel Besserve


  • Sebastian Nowozin
  • Max Planck House Lecture Hall

Probabilistic deep learning methods have recently made great progress for generative and discriminative modeling. I will give a brief overview of recent developments and then present two contributions. The first is a generalization of generative adversarial networks (GAN), extending their use considerably. GANs can be shown to approximately minimize the Jensen-Shannon divergence between two distributions, the true sampling distribution and the model distribution. We extend GANs to the class of f-divergences, which includes popular divergences such as the Kullback-Leibler divergence. This enables applications to variational inference and likelihood-free maximum likelihood, and enables GAN models to become basic building blocks in larger models. The second contribution is to consider representation learning using variational autoencoder models. To make learned representations of data useful we need to ground them in semantic concepts. We propose a generative model that can decompose an observation into multiple separate latent factors, each of which represents a separate concept. Such a disentangled representation is useful for recognition and for precise control in generative modeling. We learn our representations using weak supervision in the form of groups of observations, where all samples within a group share the same value in a given latent factor. To make such learning feasible we generalize recent methods for amortized probabilistic inference to the dependent case. Joint work with: Ryota Tomioka (MSR Cambridge), Botond Cseke (MSR Cambridge), Diane Bouchacourt (Oxford)
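The f-divergence extension rests on a known variational bound (the f-GAN construction; my summary, not text from the talk): for any convex $f$ with Fenchel conjugate $f^*(t) = \sup_u \{ut - f(u)\}$,

$$
D_f(P \,\|\, Q) \;\ge\; \sup_{T} \Big( \mathbb{E}_{x \sim P}[T(x)] \;-\; \mathbb{E}_{x \sim Q}\big[f^*(T(x))\big] \Big),
$$

so training proceeds as a min-max game between the generator distribution $Q$ and a neural network $T$. Choosing $f(u) = u \log u$ gives the Kullback-Leibler divergence, while the original GAN corresponds, up to constants, to the Jensen-Shannon case.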

Organizers: Lars Mescheder


Statistical testing of epiphenomena for multi-index data

IS Colloquium
  • 06 March 2017 • 11:15 - 12:15
  • John Cunningham
  • MPH Lecture Hall

As large tensor-variate data increasingly become the norm in applied machine learning and statistics, complex analysis methods similarly increase in prevalence. Such a trend offers the opportunity to understand more intricate features of the data that, ostensibly, could not be studied with simpler datasets or simpler methodologies. While promising, these advances are also perilous: these novel analysis techniques do not always consider the possibility that their results are in fact an expected consequence of some simpler, already-known feature of simpler data (for example, treating the tensor like a matrix or a univariate quantity) or simpler statistic (for example, the mean and covariance of one of the tensor modes). I will present two works that address this growing problem, the first of which uses Kronecker algebra to derive a tensor-variate maximum entropy distribution that shares modal moments with the real data. This distribution of surrogate data forms the basis of a statistical hypothesis test, and I use this method to answer a question of epiphenomenal tensor structure in populations of neural recordings in the motor and prefrontal cortex. In the second part, I will discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures in the flavor of implicit generative modeling, and I will use this method in a texture synthesis application.
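A simplified sketch of the surrogate-data idea follows (illustrative only; the talk's maximum-entropy construction fits the Kronecker factors jointly so that the modal moments match exactly, whereas composing per-mode factors as below only matches them up to scale).

```python
# Kronecker-structured Gaussian surrogates that approximately share the
# modal covariances of a data tensor X; a statistic computed on X is then
# compared against its null distribution over surrogates.
import numpy as np

def mode_unfold(X, k):
    """Unfold tensor X along mode k into a (X.shape[k], -1) matrix."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def mode_product(X, A, k):
    """Multiply tensor X by matrix A along mode k."""
    rest = [X.shape[i] for i in range(X.ndim) if i != k]
    Y = A @ mode_unfold(X, k)
    return np.moveaxis(Y.reshape([A.shape[0]] + rest), 0, k)

def surrogates(X, n_draws, rng=None):
    """Draw Gaussian tensors sharing the modal covariances of X (up to scale)."""
    rng = np.random.default_rng(0) if rng is None else rng
    Xc = X - X.mean()
    chols = []
    for k in range(X.ndim):
        Xk = mode_unfold(Xc, k)
        Sigma = Xk @ Xk.T / Xk.shape[1]           # mode-k covariance
        chols.append(np.linalg.cholesky(Sigma + 1e-9 * np.eye(len(Sigma))))
    for _ in range(n_draws):
        Z = rng.standard_normal(X.shape)
        for k, L in enumerate(chols):
            Z = mode_product(Z, L, k)             # impose mode-k structure
        yield Z + X.mean()

# hypothesis test against a hypothetical statistic `stat`:
# null = [stat(S) for S in surrogates(X, 1000)]
# p = np.mean([v >= stat(X) for v in null])
```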

Organizers: Philipp Hennig


Brain-machine interfaces: New treatment options for psychiatric disorders

IS Colloquium
  • 06 February 2017 • 11:15 - 12:15
  • Surjo R. Soekadar

Organizers: Moritz Grosse-Wentrup


  • Fabien Lotte
  • Max Planck House Lecture Hall

Brain-Computer Interfaces (BCIs) are systems that can translate the brain activity patterns of a user into messages or commands for an interactive application. Such brain activity is typically measured using electroencephalography (EEG), before being processed and classified by the system. EEG-based BCIs have proven promising for a wide range of applications, from communication and control for motor-impaired users to gaming for the general public, real-time mental state monitoring, and stroke rehabilitation, to name a few. Despite this promising potential, BCIs are still scarcely used outside laboratories for practical applications. The main reason preventing EEG-based BCIs from being widely used is arguably their poor usability, which is notably due to their low robustness and reliability as well as their long training times. In this talk I will present some of our research aimed at addressing these points in order to make EEG-based BCIs usable, i.e., to increase their efficacy and efficiency. In particular, I will present a set of contributions towards this goal 1) at the user training level, to ensure that users can learn to control a BCI efficiently and effectively, and 2) at the usage level, to explore novel applications of BCIs for which the current reliability can already be useful, e.g., for neuroergonomics or real-time brain activity and mental state visualization.
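As background on the processing-and-classification step, here is a minimal sketch of a classical EEG-BCI pipeline (band-pass filtering, CSP spatial filters, log-variance features, linear classifier); this is a generic textbook pipeline, not necessarily the speaker's setup, and all parameter values are illustrative.

```python
# Classical motor-imagery EEG pipeline sketch. Frequency band, sampling
# rate, and number of filter pairs below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def bandpass(trials, lo=8.0, hi=30.0, fs=250.0):
    """Filter trials of shape (n_trials, n_channels, n_samples)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(trials, labels, n_pairs=3):
    """Common Spatial Patterns via a generalized eigenvalue problem."""
    covs = []
    for c in (0, 1):
        C = np.mean([t @ t.T / np.trace(t @ t.T)
                     for t in trials[labels == c]], axis=0)
        covs.append(C)
    _, W = eigh(covs[0], covs[0] + covs[1])   # ascending eigenvalues
    # the extreme eigenvectors discriminate the two classes best
    return np.concatenate([W[:, :n_pairs], W[:, -n_pairs:]], axis=1).T

def features(trials, W):
    """Log-variance of the spatially filtered signals."""
    filtered = np.einsum("fc,ncs->nfs", W, trials)
    return np.log(filtered.var(axis=-1))

# usage with hypothetical arrays X (trials) and y (labels in {0, 1}):
# from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# Xf = bandpass(X); W = csp_filters(Xf, y)
# clf = LinearDiscriminantAnalysis().fit(features(Xf, W), y)
```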


  • Hannes Nickisch, Philips Research, Hamburg
  • MRZ seminar room

Coronary artery disease (CAD) is the single leading cause of death worldwide, and Cardiac Computed Tomography Angiography (CCTA) is a non-invasive test to rule out CAD using the anatomical characterization of the coronary lesions. Recent studies suggest that the hemodynamic significance of coronary lesions can be assessed by Fractional Flow Reserve (FFR), which is usually measured invasively in the CathLab but can also be simulated from a patient-specific biophysical model based on CCTA data. We learn a parametric lumped model (LM) enabling fast computational fluid dynamics simulations of blood flow in elongated vessel networks, to alleviate the computational burden of 3D finite element (FE) simulations. We adapt the coefficients balancing the local nonlinear hydraulic effects from a training set of precomputed FE simulations. Our LM yields accurate pressure predictions, suggesting that costly FE simulations can be replaced by our fast LM, paving the way to a personalised interactive biophysical model with real-time feedback in clinical practice.
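The fitting step can be pictured with a toy example: regress the coefficients of a simple lumped hydraulic element onto precomputed FE pressure drops. The two-term pressure-loss form (linear viscous plus nonlinear Q|Q| term) and all names and values below are assumptions for illustration, not the exact Philips model.

```python
# Toy fit of a lumped hydraulic element to (stand-in) FE training data.
import numpy as np
from scipy.optimize import curve_fit

def pressure_drop(Q, R_lin, K_nonlin):
    """Pressure drop across a segment: linear plus nonlinear losses."""
    return R_lin * Q + K_nonlin * Q * np.abs(Q)

# flow rates and pressure drops that would come from FE simulations
Q_train = np.linspace(0.5, 5.0, 20)              # toy flow rates
dP_train = 0.8 * Q_train + 0.3 * Q_train**2      # stand-in for FE output

(R_lin, K_nonlin), _ = curve_fit(pressure_drop, Q_train, dP_train)
print(pressure_drop(2.0, R_lin, K_nonlin))       # fast LM prediction
```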


  • Catrin Misselhorn
  • Max Planck Haus Lecture Hall

The development of increasingly intelligent and autonomous technologies will inevitably lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. It will, therefore, be necessary in the long run to develop machines with the capacity for a certain amount of autonomous moral decision-making. The goal of this talk is to provide the theoretical foundations for artificial morality, i.e., for implementing moral capacities in artificial systems in general, and a roadmap for developing an assistive system in geriatric care that is capable of moral learning.

Organizers: Ludovic Righetti, Philipp Hennig


Images of planets orbiting other stars

Talk
  • 01 March 2016 • 11:00 - 12:00
  • Sascha Quanz
  • AGBS Seminar Room

The detection and characterization of planets orbiting stars other than the Sun, i.e., so-called extrasolar planets, is one of the fastest growing and most vibrant research fields in modern astrophysics. In the last 25 years, more than 5400 extrasolar planets and planet candidates have been revealed, but the vast majority of these objects were detected with indirect techniques, where the existence of the planet is inferred from periodic changes in the light coming from the central star; no photons from the planets themselves are detected. In this talk, however, I will focus on the direct detection of extrasolar planets. On the one hand, I will describe the main challenges that have to be overcome in order to image planets around other stars: in addition to using the world’s largest telescopes and optimized cameras, it has been realized in the last few years that significant sensitivity gains can be achieved by applying advanced image processing techniques. On the other hand, I will demonstrate what can be learned if one is successful in “taking a picture” of an extrasolar planet. After all, there must be good scientific reasons and a strong motivation why the direct detection of extrasolar planets is one of the key science drivers for current and future projects on major ground- and space-based telescopes.
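One widely used family of such image processing techniques is PCA-based point-spread-function (PSF) subtraction; a minimal sketch (illustrative, not the speaker's pipeline):

```python
# PCA-based PSF subtraction: the dominant principal components of a stack
# of exposures model the stellar PSF; projecting them out leaves residuals
# in which a faint planet can be searched for.
import numpy as np

def psf_subtract(frames, n_modes=5):
    """frames: (n_frames, n_pixels) flattened exposures of the same star."""
    mean = frames.mean(axis=0)
    X = frames - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                      # dominant PSF modes
    # subtract the projection onto the PSF modes; a planet signal that
    # moves between frames (e.g. angular differential imaging) survives
    return X - (X @ modes.T) @ modes
```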

Organizers: Diana Rebmann


  • Aldo Faisal
  • MPH Lecture Hall

Our research questions are centred on a basic characteristic of human brains: variability in their behaviour and its underlying meaning for cognitive mechanisms. Such variability is emerging as a key ingredient in understanding biological principles (Faisal, Selen & Wolpert, 2008, Nature Rev Neurosci) and yet lacks adequate quantitative and computational methods for description and analysis. Crucially, we find that biological and behavioural variability contains important information that our brain and our technology can make use of (instead of just averaging it away). Using advanced body sensor networks, we measured eye movements and full-body and hand kinematics of humans living in a studio flat, and we will present some insightful results on motor control and visual attention suggesting that the control of behaviour "in-the-wild" differs in predictable ways from what we measure "in-the-lab". The results have implications for robotics, prosthetics and neuroscience.

Organizers: Matthias Hohmann


Probabilistic Numerics for Differential Equations

IS Colloquium
  • 11 January 2016 • 11:15 - 12:15
  • Tim Sullivan

Beginning with a seminal paper of Diaconis (1988), the aim of so-called "probabilistic numerics" is to compute probabilistic solutions to deterministic problems arising in numerical analysis by casting them as statistical inference problems. For example, numerical integration of a deterministic function can be seen as the integration of an unknown/random function, with evaluations of the integrand at the integration nodes providing partial information about the integrand. Advantages offered by this viewpoint include: access to the Bayesian representation of prior and posterior uncertainties; better propagation of uncertainty through hierarchical systems than simple worst-case error bounds; and appropriate accounting for numerical truncation and round-off error in inverse problems, so that the replicability of deterministic simulations is not confused with their accuracy, which would yield an inappropriately concentrated Bayesian posterior. This talk will describe recent work on probabilistic numerical solvers for ordinary and partial differential equations, including their theoretical construction, convergence rates, and applications to forward and inverse problems. Joint work with Andrew Stuart (Warwick).
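The integration-as-inference example can be made concrete with a few lines of Bayesian quadrature (a toy sketch of the general idea, not the ODE/PDE solvers from the talk; kernel and length-scale choices are illustrative):

```python
# Bayesian quadrature on [0, 1]: place a GP prior on the integrand,
# condition on a few evaluations, and obtain a posterior mean and
# variance for the integral.
import numpy as np
from scipy.special import erf

def bayes_quad(f, nodes, ell=0.2, jitter=1e-10):
    x = np.asarray(nodes, dtype=float)
    y = f(x)
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))
    # closed form for z_i = int_0^1 k(t, x_i) dt with an RBF kernel
    z = ell * np.sqrt(np.pi / 2) * (erf((1 - x) / (np.sqrt(2) * ell))
                                    + erf(x / (np.sqrt(2) * ell)))
    Kinv_y = np.linalg.solve(K + jitter * np.eye(len(x)), y)
    mean = z @ Kinv_y                      # posterior mean of the integral
    # double integral of the kernel, approximated on a grid for brevity
    t = np.linspace(0, 1, 400)
    Z = np.exp(-(t[:, None] - t[None, :])**2 / (2 * ell**2)).mean()
    var = Z - z @ np.linalg.solve(K + jitter * np.eye(len(x)), z)
    return mean, var

# e.g. integrate sin(3t) on [0, 1] from seven evaluations
mean, var = bayes_quad(lambda t: np.sin(3 * t), np.linspace(0, 1, 7))
```

The posterior variance shrinks as evaluation nodes are added, which is exactly the "numerical error as posterior uncertainty" reading described above.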

Organizers: Philipp Hennig