Monte Carlo Methods

Modified by Rémi Peyre on 02/05/2025 at 15:32

8KUAAN11

ECTS: 2

Semester: S8

Lectures: 7 h
Classes / seminars: 14 h
Practical work: 0 h
Integrative teaching: 0 h
Independent work: 36 h

Language used

French

 

Course supervisor(s)

Rémi PEYRE, associate professor

Key words

Random simulation; Monte Carlo method; Variance reduction; MCMC methods; Simulated annealing; Stochastic gradient descent

Prerequisites

Probability theory (level: 1st year of master's degree); basics in Python programming language

Overall objective

Learn how to use random simulation to solve complicated deterministic problems.

Course content and organisation

   This course consists of two distinct (though closely related) parts; the core idea shared by both is the use of random simulation procedures to solve problems whose statement is initially purely deterministic!

   The first part deals with the “Monte Carlo method” for computing an integral-like quantity (so, something deterministic): the method consists in first writing that quantity as a probabilistic expectation, and then estimating that expectation through random simulations.  In many practical situations, the approximate result given by that method turns out to be more precise than the one that “classical” numerical analysis techniques would have provided!
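   As a first illustration (a minimal sketch in Python, the language used in the exercise sessions; the integrand and sample size below are chosen purely for the example), one may estimate an integral by averaging the integrand at uniformly drawn random points:

```python
import random
import math

def mc_integral(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b]: write it as
    (b - a) * E[f(U)] with U uniform on [a, b], then estimate
    the expectation by an empirical mean over n random draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# Example: the integral of sin over [0, pi] (exact value: 2).
estimate = mc_integral(math.sin, 0.0, math.pi)
```

The statistical error of such an estimator decreases like 1/√n, independently of the dimension of the integration domain, which is precisely why the method shines for high-dimensional integrals.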

   Besides the basic question of estimating the quantity of interest, the course will also show how to obtain a confidence interval for the estimator.  It is sometimes important to be able to shorten the interval obtained for a given computational cost: such a goal may be achieved by the so-called variance reduction techniques.  Four such techniques are explained in the course, viz.: importance sampling; conditioning; common random numbers; and control variates.

   In some situations, simulating the required random variables turns out to be a more difficult challenge than applying the Monte Carlo method as such.  To tackle that issue, we will introduce the so-called Markov chain Monte Carlo (commonly “MCMC”) methods, which make it possible to circumvent that difficulty in many practical cases.
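   The flavour of these methods can be conveyed by a random-walk Metropolis-Hastings sampler (a minimal sketch; the Gaussian target and the step size below are illustrative choices, and the target only needs to be known up to a normalizing constant):

```python
import random
import math

def metropolis_hastings(log_target, x0, n, step=1.0, seed=2):
    """Random-walk Metropolis-Hastings: propose a Gaussian move,
    accept it with probability min(1, target(y) / target(x));
    the resulting Markov chain has the target as invariant law."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        chain.append(x)
    return chain

# Sample from a standard normal law known only up to a constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50_000)
mean = sum(chain) / len(chain)
```

Note that only the *ratio* of target densities enters the acceptance rule, which is exactly what makes MCMC usable when the normalizing constant is out of reach.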

   That Markov chain Monte Carlo chapter will naturally lead us to the second part of the course, devoted to stochastic methods for mathematical optimization.  The most famous such method is simulated annealing, which is closely related to the Metropolis-Hastings MCMC algorithm introduced in the previous chapter.  The course will also explain how simulated annealing can be improved into parallel tempering.
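   A bare-bones sketch of simulated annealing (the multimodal objective, cooling schedule and tuning parameters below are made up for the example, not prescriptions from the course):

```python
import random
import math

def simulated_annealing(f, x0, n=20_000, step=1.0, t0=1.0, seed=3):
    """Simulated annealing: Metropolis-type moves at a temperature
    that is gradually lowered, so the chain first explores the
    landscape, then settles into low-lying regions."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, n + 1):
        t = t0 / math.log(math.e + k)        # slow logarithmic cooling
        y = x + rng.gauss(0.0, step)
        fy = f(y)
        # Always accept downhill moves; accept uphill ones with
        # probability exp(-(fy - fx) / t), which shrinks as t decreases.
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# A function with several local minima; its global minimum is at x = 0.
g = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_star, g_star = simulated_annealing(g, 4.0)
```

A plain (zero-temperature) descent started at x = 4 would get trapped in the nearest local minimum; the temperature is what lets the chain escape such traps early in the run.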

   To finish, the last chapter of the course shall be devoted to the stochastic gradient descent method: an optimization procedure which lies at the core of the training protocols for artificial neural networks!
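   The idea can be sketched on the simplest possible model (a one-parameter least-squares fit; the noiseless data, learning rate and epoch count below are illustrative choices): at each step one follows the gradient of the loss at a single randomly chosen data point, rather than the full-sample gradient.

```python
import random

def sgd(grad, w0, data, epochs=30, lr=0.05, seed=4):
    """Stochastic gradient descent: at each step, update the parameter
    using the gradient of the loss at one randomly chosen data point."""
    rng = random.Random(seed)
    w = w0
    for _ in range(epochs):
        rng.shuffle(data)                 # visit points in random order
        for point in data:
            w = w - lr * grad(w, point)
    return w

# Fit y ≈ w * x by least squares; the data are generated with slope 3.
data = [(x / 100.0, 3.0 * x / 100.0) for x in range(1, 101)]
# Gradient of the per-point squared loss (w * x - y)^2 with respect to w.
grad = lambda w, p: 2.0 * (w * p[0] - p[1]) * p[0]
w_hat = sgd(grad, 0.0, data)
```

Each update is cheap because it touches a single data point, which is what makes the method scale to the huge data sets used when training neural networks.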

   A large part of the teaching will be devoted to implementing the studied concepts in computer programs.  Python has been chosen as the language for the computer exercise sessions.

Skills

Levels

Description and operational verbs

Know

How to compute the Monte Carlo estimator for an expected value, as well as the associated confidence interval

What the main variance reduction techniques consist in

How MCMC methods work, in particular Metropolis-Hastings Markov chains

What the following stochastic optimization algorithms consist in: simulated annealing; parallel tempering; stochastic gradient descent

Understand

In which situations it is relevant to use the Monte Carlo method to estimate an expectation; what its strengths and weaknesses are

Regarding the evolution algorithms introduced by the course: how the Markov chains involved behave

Apply 

Compute theoretically the auxiliary quantities required by a given method from the course: e.g., the quantile function of the law that one wants to simulate; the expectation of some control variate; the density ratios for a measure that one wants to sample via a Metropolis-Hastings Markov chain; the gradient of some function whose expectation one wants to minimize; etc.

Implement a Monte Carlo estimation method, resp. a variance reduction technique, in a computer program

Implement a Metropolis-Hastings algorithm, resp. a simulated annealing algorithm (possibly with parallel tempering), resp. a stochastic gradient descent algorithm, in a computer program

Analyze 

Choose a variance reduction technique appropriate for the problem at hand

Recognize situations in which it will be better to use MCMC methods rather than “direct” simulation

Recognize situations to which a given stochastic optimization algorithm is suited

Tune the parameters of a given stochastic algorithm in a clever way

Summarize

 

Assess

Look critically at the convergence behaviour of a Monte Carlo estimator

Look critically at the solution that one obtained via some stochastic optimization algorithm

Compliance with the United Nations Sustainable Development Goals

Evaluation methods

Continuous assessment

Written test

Oral presentation / viva

Written report / project