![ENSTA logo](Logo-ENSTA.jpg)
University Paris-Saclay
![UPS logo](Logo-UPS.jpg)
Master "Optimization"
Academic Year 2018/2019
First term's advanced courses
Stochastic Optimization
Professors: P. Carpentier, V. Leclère
Goals
The course presents both theoretical and numerical aspects of decision problems
under uncertainty, cast in a probabilistic framework in which one minimizes the
expectation of a cost. Two directions are explored:
- we investigate the so-called "open-loop" situation, that is, the case where
  decisions do not depend on the information available in the problem, and we
  thoroughly study the stochastic gradient method and its variants;
- we also study "closed-loop" optimization problems, that is, the case where
  decisions are based on partial information (often corresponding to measurements
  made in the past when facing an unknown future).
Such problems are well motivated by decision problems in industry. They also have
a deep mathematical content, especially in the dynamic case where only past
information is available: the decision is then a function living in a
high-dimensional space, so the numerical aspects are also challenging.
This course is part of the M2 Optimization program (Paris-Saclay University).
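To fix ideas on the open-loop setting, here is a minimal stochastic gradient sketch on a toy problem. The cost j(u, w) = (u - w)^2, the Gaussian noise, and the step sizes are illustrative assumptions, not material from the course itself:

```python
import numpy as np

# Toy open-loop problem: minimize E[j(u, W)] over a scalar decision u,
# with the illustrative cost j(u, w) = (u - w)^2, whose optimum is u* = E[W].
rng = np.random.default_rng(0)

u = 0.0                          # initial decision (a single value, not a policy)
for k in range(10_000):
    w = rng.normal(loc=2.0)      # one sample of the noise W ~ N(2, 1)
    grad = 2.0 * (u - w)         # gradient of j(., w) at the current u
    u -= grad / (k + 1)          # Robbins-Monro steps: sum = inf, sum of squares < inf

print(u)                         # close to E[W] = 2
</imports>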
Structure
The course takes place at ENSTA on Wednesdays, from 09:00 to 12:00 and from 14:00 to 16:00,
and is given in English.
Getting to ENSTA
- Lesson 1 (November 21, room 2.2.36)
  - 09:00 - 12:00 (P. Carpentier)
    Issues in decision making under uncertainty.
    Slides (updated November 20, 2018)
  - 14:00 - 16:00 (V. Leclère)
    Convex analysis and probability tools for stochastic optimization - Part I.
    Slides (updated November 16, 2018)
- Lesson 2 (November 28, room 2.2.36)
  - 09:00 - 12:00 (P. Carpentier)
    Stochastic gradient method overview.
    Slides (updated November 10, 2018)
  - 14:00 - 16:00 (V. Leclère)
    Convex analysis and probability tools for stochastic optimization - Part II.
    Slides (updated November 19, 2018)
- Lesson 3 (December 05, room 2.2.36)
  - 09:00 - 12:00 (P. Carpentier)
    Generalized stochastic gradient method.
    Slides (updated November 29, 2018)
  - 14:00 - 16:00 (V. Leclère)
    Stochastic Programming. The two-stage case.
    Slides (updated December 11, 2018)
- Lesson 4 (December 12, room 2.2.36)
  - 09:00 - 12:00 (V. Leclère)
    Scenario decomposition: L-Shaped and Progressive Hedging methods.
    Slides (updated December 11, 2018)
  - 14:00 - 16:00 (P. Carpentier)
    Applications of the stochastic gradient method.
    Slides (updated December 5, 2018)
- Lesson 5 (December 19, room 2.2.36)
  - 09:00 - 12:00 (P. Carpentier)
    Discretization issues of general stochastic optimization problems.
    Slides (updated December 16, 2018)
  - 14:00 - 16:00 (V. Leclère)
    Bellman operators and Stochastic Dynamic Programming.
    Slides (updated December 19, 2018)
- Lesson 6 (January 09, room 2.2.36)
  - 09:00 - 12:00 (V. Leclère)
    The Stochastic Dual Dynamic Programming (SDDP) approach.
    Slides (updated January 09, 2019)
  - 14:00 - 16:00 (P. Carpentier)
    Decomposition approaches for large scale stochastic optimization problems.
    Slides (updated January 04, 2019)
- Evaluation (January 16, room 2.2.36)
  - 08:45 - 12:15 (P. Carpentier)
    Articles session (presentation: 25 minutes, questions: 10 minutes).
    - 08:45 Dalle - Le-Franc (Pflug and Pichler, SIAM, 2012)
    - 09:20 Shilov - Syrtseva (Heitsch et al., SIAM, 2006)
    - 09:55 Tang - Xu (Scieur et al., arXiv, 2017)
    - 10:30 Bertin - Marescaux (Bollapragada et al., arXiv, 2017)
    - 11:05 Charpenay - Shakoori (Kovacevic and Pichler, AOR, 2015)
    - 11:40 Gupta - Jerhaoui (Xiao and Zhang, SIAM, 2014)
  - 14:00 - 16:00 (V. Leclère)
    Written exam.
Course resources
External resources
Research articles to study
Page managed by P. Carpentier (last update: January 10, 2019)