Monte Carlo Methods

Version 9.42 par Rémi Peyre le 02/05/2025 - 15:00

8KUAAN11

ECTS

2

SEMESTER

S8

lectures: 7 h

classes / seminars: 14 h

practical work: 0 h

integrative teaching: 0 h

independent work: 36 h

Language used

French

 

Course supervisor(s)

Rémi PEYRE, associate professor

Key words

Random simulation; Monte Carlo method; Variance reduction; MCMC methods; Simulated annealing; Stochastic gradient descent

Prerequisites

Probability theory (level: 1st year of master's degree); basics in Python programming language

Overall objective

Learn how to use random simulation to solve complicated deterministic problems.

Course content and organisation

   This course consists of two distinct (though closely related) parts; the core aspect shared by both is the use of random simulation procedures to solve problems whose statement is initially purely deterministic!

   The first part deals with the “Monte Carlo method” for computing an integral-like quantity (so, something deterministic): the method consists in first writing that quantity as a probabilistic expectation, and then estimating that expectation through random simulations.  In many practical situations, the approximate result given by this method turns out to be more precise than what one would have obtained with “classical” numerical analysis techniques!
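As a minimal illustration of this principle (a sketch of my own, not part of the course materials), here is how one might estimate ∫₀¹ x² dx = 1/3 by rewriting it as an expectation of f(U) with U uniform on [0, 1]:

```python
import random

def mc_integral(f, a, b, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]:
    write it as (b - a) * E[f(U)] with U uniform on [a, b]."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: integral of x^2 over [0, 1], whose exact value is 1/3.
estimate = mc_integral(lambda x: x * x, 0.0, 1.0)
```

By the central limit theorem, the error of such an estimator decreases like 1/√n, regardless of the dimension of the integration domain.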

   Besides the basic question of estimating the quantity of interest, the course will also show how to provide a confidence interval for the estimator.  It is sometimes important to have means to shorten the interval one obtains for a given computational cost: such a goal may be achieved by the so-called variance reduction techniques.  Four such techniques are explained by the course, viz.: importance sampling; conditioning; common random numbers; and control variates.
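A sketch of both ideas (the helper `mc_with_ci` and the hand-picked coefficient 1.7 are my own illustrative choices, not from the course): a confidence interval via the central limit theorem, and the control variate technique applied to estimating E[exp(U)]:

```python
import math
import random

def mc_with_ci(samples):
    """Sample mean together with an asymptotic 95% confidence interval
    (central limit theorem: mean +/- 1.96 * s / sqrt(n))."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

# Plain estimator of E[exp(U)], U uniform on [0, 1]; the true value is e - 1.
rng = random.Random(1)
us = [rng.random() for _ in range(10_000)]
mean, (lo, hi) = mc_with_ci([math.exp(u) for u in us])

# Control variate: subtract c * (U - 1/2), whose expectation is exactly 0;
# c = 1.7 is close to the optimal coefficient Cov(exp(U), U) / Var(U) here.
mean_cv, (lo_cv, hi_cv) = mc_with_ci(
    [math.exp(u) - 1.7 * (u - 0.5) for u in us])
```

On this toy example the control variate shrinks the confidence interval dramatically, since exp(U) is nearly linear in U over [0, 1].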

   In some situations, simulating the required random variables turns out to be a harder challenge than applying the Monte Carlo method as such.  To tackle that issue, we will introduce the so-called Markov chain Monte Carlo (commonly “MCMC”) methods, which make it possible to circumvent that difficulty in many practical cases.

   That MCMC chapter will naturally lead us to the second part of the course, devoted to stochastic methods for mathematical optimization.  The most famous such method is simulated annealing, which is closely related to the Metropolis-Hastings MCMC algorithm introduced in the previous chapter.  The course will also explain how simulated annealing can be improved into parallel tempering.
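A simulated annealing sketch on a toy double-well energy (my own example; the cooling schedule and step size are arbitrary illustrative choices): the Metropolis-type acceptance rule lets the walk escape the local minimum while the temperature is still high.

```python
import math
import random

def simulated_annealing(energy, x0, n_steps=20_000, t0=2.0, seed=0):
    """Minimize `energy` by a Metropolis-type walk whose temperature
    decreases along a geometric cooling schedule."""
    rng = random.Random(seed)
    x = best = x0
    for k in range(n_steps):
        t = t0 * 0.9995 ** k                      # cooling schedule
        proposal = x + rng.gauss(0.0, 0.5)
        delta = energy(proposal) - energy(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = proposal
        if energy(x) < energy(best):              # keep the best state seen
            best = x
    return best

# Double-well energy: local minimum near x = -1, global minimum near x = +1.
energy = lambda x: (x * x - 1) ** 2 - 0.2 * x
best = simulated_annealing(energy, x0=-1.0)       # start in the wrong well
```

Parallel tempering addresses the same trapping problem differently: several chains run at fixed temperatures and occasionally swap states.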

   To finish, the last chapter shall be devoted to the stochastic gradient descent method: an optimization procedure which lies at the core of the training protocols for artificial neural networks!
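A sketch of stochastic gradient descent on a toy least-squares problem (again my own illustration, with arbitrary learning rate and epoch count): each update uses the gradient of the loss at a single randomly chosen data point rather than the full-sample gradient.

```python
import random

def sgd(grad_i, x0, n_points, lr=0.05, epochs=50, seed=0):
    """Stochastic gradient descent: each step follows the gradient of the
    loss at one randomly chosen data point, not of the full-sample loss."""
    rng = random.Random(seed)
    x = x0
    for _ in range(epochs):
        for i in rng.sample(range(n_points), n_points):   # one shuffled pass
            x = x - lr * grad_i(x, i)
    return x

# Toy least-squares fit: minimize (1/n) * sum_i (a * t_i - y_i)^2 over a,
# with noiseless data y_i = 3 * t_i, so the exact answer is a = 3.
ts = [0.1 * k for k in range(1, 21)]
ys = [3.0 * t for t in ts]
grad = lambda a, i: 2.0 * (a * ts[i] - ys[i]) * ts[i]
a_hat = sgd(grad, x0=0.0, n_points=len(ts))
```

The per-point updates make the cost of one step independent of the dataset size, which is what makes the method viable for training neural networks on large datasets.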


   A large part of the teaching will be devoted to implementing the studied concepts in computer programs.  Python has been chosen as the language for the computer exercise sessions.

Skills

Levels

Description and operational verbs

Know

How to compute the confidence interval for a Monte Carlo estimation method

What the main variance reduction techniques are

Which formulas characterize the Markov chains used in MCMC methods

Understand

What is the principle of the Monte Carlo method for estimating an expectation; what are its strengths and weaknesses

What is the use of MCMC methods; how do the Markov chains involved in these methods behave

Apply 

Implement a Monte Carlo estimation method, resp. a variance reduction technique, into a computer program

Implement a Metropolis‒Hastings algorithm, resp. a Gibbs algorithm

Analyze 

Choose a variance reduction technique that will be appropriate for the problem considered

Look critically at the convergence behaviour of a Monte Carlo estimator

Choose an MCMC method that will be appropriate for the problem considered

Look critically at the convergence behaviour of an MCMC method

Summarize

In a statistical context, use an MCMC algorithm to compute relevant Bayesian estimators

Assess

 

Compliance with the United Nations Sustainable Development Goals

Evaluation methods

Continuous assessment

Written test

Oral presentation / viva

Written report / project