
Paris-Saclay University

Master "Optimization"
Academic Year 2017/2018 
First-term advanced courses
Stochastic Optimization 
Professors: F. Bonnans and P. Carpentier
Goals
The course presents both theoretical and numerical aspects of decision problems
under uncertainty, where one sets a probabilistic framework in order to minimize
the expectation of a cost. Two directions are explored:
- we investigate the so-called "open-loop" situation, that is,
the case where decisions do not depend on the information available for the problem,
and we thoroughly study the stochastic gradient method and its variants;
- we also study "closed-loop" optimization problems, that is, the case where
decisions are based on partial information (often corresponding to measurements
made in the past when facing an unknown future).
Such problems are of course well motivated by decision problems in industry.
They also have a deep mathematical content, especially in the dynamic case, where
only past information is available. In this setting the decision is a function
in a high-dimensional space, so the numerical aspects are also challenging.
This course is part of the M2 Optimization program (Paris-Saclay University).
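As a first illustration of the open-loop setting, here is a minimal sketch of the stochastic gradient (Robbins–Monro) method on a toy problem, min_u E[(u - W)^2] with W uniform on [0, 1], whose minimizer is E[W] = 0.5. The function and parameter names are illustrative, not part of the course material.

```python
import random

def stochastic_gradient(grad, u0, n_iter=10000, seed=0):
    """Robbins-Monro iteration: u_{k+1} = u_k - eps_k * grad(u_k, W_k),
    with step sizes eps_k = 1/k (sum eps_k = +inf, sum eps_k^2 < +inf)."""
    rng = random.Random(seed)
    u = u0
    for k in range(1, n_iter + 1):
        w = rng.uniform(0.0, 1.0)   # draw one sample of the noise W
        eps = 1.0 / k               # decreasing step size
        u = u - eps * grad(u, w)    # gradient of the cost at the sampled noise
    return u

# Toy open-loop problem: min_u E[(u - W)^2], W ~ U[0, 1].
# The iterates converge almost surely to the minimizer E[W] = 0.5.
grad = lambda u, w: 2.0 * (u - w)
u_star = stochastic_gradient(grad, u0=0.0)
```

Only a noisy gradient sample is used at each step, never the full expectation; the course studies why the decreasing step sizes guarantee convergence, and the variants (averaging, variance reduction) covered in the listed articles.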
Structure
The course takes place at ENSTA (see Getting to ENSTA) on Wednesdays, from 09:00 to 12:00 and from 14:00 to 16:00, and is given in English.

Lesson 1 (November 22, room 2.1.49)

09:00–12:00 (P. Carpentier & F. Bonnans)
Issues in decision making under uncertainty.
Slides

14:00–16:00 (F. Bonnans)
Convex duality, subdifferential calculus.

Lesson 2 (November 29, room 2.1.49)

09:00–12:00 (P. Carpentier)
Stochastic gradient method overview.
Slides

14:00–16:00 (F. Bonnans)
Subdifferential of an expectation. Normal integrands. Information constraints.

Lesson 3 (December 06, room 2.1.49)

09:00–12:00 (P. Carpentier)
Generalized stochastic gradient method.
Slides

14:00–16:00 (F. Bonnans)
Dynamic optimization in the convex stochastic setting.

Lesson 4 (December 13, room 2.1.49)

09:00–12:00 (P. Carpentier)
Stochastic gradient method with constraint in expectation and applications.
Slides

14:00–16:00 (F. Bonnans)
Stochastic dual dynamic programming.

Lesson 5 (December 20, room 2.1.49)

09:00–12:00 (P. Carpentier)
Discretization issues of general stochastic optimization problems.
Slides

14:00–16:00 (F. Bonnans)
Introduction to linear decision rules.

Lesson 6 (January 10, room 1.2.24)

09:00–12:00 (P. Carpentier)
Decomposition approaches for large scale stochastic optimization problems.
Slides

14:00–16:00 (F. Bonnans)
Asymptotic probability laws in stochastic programming.

Lesson 7 (January 17, room 2.3.30)

09:00–12:00 (P. Carpentier)
Article presentations.

09:00: Huynh – Jafarzadeh (Wang et al., SIAM, 2017)

09:45: Jain – Pham (Byrd et al., SIAM, 2016)

10:30: Guicheteau – Tabatabaie (Xiao and Zhang, SIAM, 2014)

11:15: Forcier – Terris (Pflug and Pichler, SIAM, 2012)

14:00–16:00 (F. Bonnans)
Written exam.

Evaluation (January 22, room 2.4.25)

09:00–10:30 (P. Carpentier)
Article presentations.

09:00: Golvet – Weill-Duflos (Schmidt et al., MP, 2017)

09:45: Kouhkouh – Saadi (Heitsch et al., SIAM, 2006)
Resources
Research articles to study

4. A Stochastic Quasi-Newton Method for Large-Scale Optimization
(Byrd et al., SIAM, 2016)

7. Stability of Multistage Stochastic Programs
(Heitsch et al., SIAM, 2006)

12. A Distance for Multistage Stochastic Optimization Models
(Pflug and Pichler, SIAM, 2012)

13. Minimizing Finite Sums with the Stochastic Average Gradient
(Schmidt et al., MP, 2017)

15. Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
(Wang et al., SIAM, 2017)

16. A Proximal Stochastic Gradient Method with Progressive Variance Reduction
(Xiao and Zhang, SIAM, 2014)
Page managed by P. Carpentier
(last update: January 10, 2018)