Niels Richard Hansen

Professor of Computational Statistics

University of Copenhagen

About

I am a professor of computational statistics in the Department of Mathematical Sciences (MATH) at the University of Copenhagen. I co-founded the Copenhagen Causality Lab and do research at the intersection of AI and statistics. My main interest is automating the learning of causal explanations from data. I use techniques from Bayesian networks, stochastic processes, predictive modeling and machine learning to discover causal structures and achieve explainable, robust and transportable AI.

Interests

  • Causality
  • Machine learning and AI
  • Model selection
  • Stochastic dynamic models
  • Event processes

Education

  • PhD in Statistics, 2004

    University of Copenhagen

  • MSc in Mathematics, 2000

    University of Copenhagen

Recent Posts

Why stochastic gradient descent works

The Robbins-Siegmund theorem gives conditions on a nonnegative almost supermartingale that ensure its almost sure convergence. It can be seen as a generalization of Doob's martingale convergence theorem for nonnegative supermartingales. It is fairly easy to derive a number of almost sure convergence results from the Robbins-Siegmund theorem, e.g., the strong law of large numbers. In this post we show how it can be used to prove convergence of a stochastic gradient descent algorithm.
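As a minimal, self-contained illustration of the kind of algorithm the theorem covers (not the derivation from the post), here is a hypothetical toy stochastic gradient descent run. The quadratic objective and the 1/n step-size schedule are chosen purely for illustration; the schedule satisfies the usual Robbins-Monro conditions under which the Robbins-Siegmund argument yields almost sure convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize E[(x - Z)^2 / 2] with Z ~ N(2, 1); the minimizer is x* = 2.
# The step sizes gamma_n = 1 / n satisfy sum gamma_n = infinity and
# sum gamma_n^2 < infinity, the classical conditions for almost sure convergence.


def stochastic_gradient(x):
    """Unbiased estimate of the gradient x - E[Z]."""
    z = rng.normal(loc=2.0, scale=1.0)
    return x - z


x = 0.0
for n in range(1, 10_001):
    gamma = 1.0 / n
    x -= gamma * stochastic_gradient(x)

print(x)  # close to the minimizer 2.0
```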

Explainable AI

Shapley values explain how each feature contributes to a prediction. However, precisely what Shapley values explain is determined by a value function. Multiple choices of value functions are possible, and each choice implies a different interpretation of the explanations.
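As a minimal sketch of the role the value function plays (not code from the post), the following computes exact Shapley values for a small feature set; the additive toy value function is made up for illustration, and swapping it out changes the attributions.

```python
from itertools import combinations
from math import comb


def shapley_values(value, features):
    """Exact Shapley values for a small feature set.

    `value` maps a tuple of feature indices (a coalition) to a number;
    different choices of this value function give different explanations.
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [j for j in features if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n! = 1 / (n * C(n-1, |S|))
                weight = 1.0 / (n * comb(n - 1, k))
                total += weight * (value(tuple(sorted(S + (i,)))) - value(S))
        phi[i] = total
    return phi


# Toy value function: the "prediction" for a coalition S is the sum of
# made-up per-feature effects, so the attributions recover those effects.
effects = {0: 1.0, 1: 2.0, 2: -0.5}
value = lambda S: sum(effects[j] for j in S)
print(shapley_values(value, features=(0, 1, 2)))
```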