University Paris-Saclay
Master "Optimization"
Academic year 2017/2018
First-term advanced courses
"Stochastic Optimization"
Goals
The course presents both theoretical and numerical aspects of decision problems
under uncertainty, in which one adopts a probabilistic framework in order to minimize
the expectation of a cost. Two directions are explored:
- we investigate the so-called "open-loop" situation, that is, the case where
decisions do not depend on the information available for the problem, and we
thoroughly study the stochastic gradient method and its variants;
- we also study "closed-loop" optimization problems, that is, the case where
decisions are based on partial information (often corresponding to measurements
made in the past when facing an unknown future).
Such problems are of course well motivated by decision problems in industry.
They also have a deep mathematical content, especially in the dynamic case, where
only past information is available. In this setting the decision is a function
in a high-dimensional space, and the numerical aspects are therefore also challenging.
This course is part of the M2 Optimization program (Paris-Saclay University).
Structure
The course takes place at ENSTA on Wednesdays, from 09:00 to 12:00 and from 14:00 to 16:00
(Getting to ENSTA), and is given in English.

Lesson 1 (November 22, room 2.3.30)

09:00 – 12:00 & 14:00 – 16:00 (F. Bonnans)
Motivation and examples (maximum likelihood, optimal storage).
Convex duality, subdifferential calculus.

Lesson 2 (November 29, room 2.3.30)

09:00 – 12:00 (P. Carpentier)
Overview of the stochastic gradient method
(principle, convergence and convergence speed, practical aspects).
Slides
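As a complement to the slides, here is a minimal sketch of the stochastic gradient method on a toy problem chosen purely for illustration (it is not an example from the course): we minimize E[(x - ξ)²/2] over x, whose minimizer is E[ξ], using diminishing Robbins-Monro step sizes γ_k = a/(k+b).

```python
# Minimal sketch of the stochastic gradient method (illustrative toy
# problem, not the course's reference implementation).
import random

def stochastic_gradient(sample, grad, x0, a=1.0, b=1.0, n_iter=10000):
    """Iterate x_{k+1} = x_k - gamma_k * grad(x_k, xi_k), gamma_k = a/(k+b)."""
    x = x0
    for k in range(n_iter):
        xi = sample()                  # draw a noise realization xi_k
        gamma = a / (k + b)            # diminishing step size (Robbins-Monro)
        x = x - gamma * grad(x, xi)    # stochastic gradient step
    return x

if __name__ == "__main__":
    rng = random.Random(0)
    # xi ~ N(2, 1): the minimizer of E[(x - xi)^2 / 2] is E[xi] = 2.
    estimate = stochastic_gradient(
        sample=lambda: rng.gauss(2.0, 1.0),
        grad=lambda x, xi: x - xi,     # gradient of (x - xi)^2 / 2 in x
        x0=0.0,
        n_iter=20000,
    )
    print(estimate)                    # close to E[xi] = 2
```

With a = b = 1 the iterate is exactly the running empirical mean of the samples, which makes the convergence of the method easy to observe on this example.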

14:00 – 16:00 (F. Bonnans)
Subdifferential of an expectation. Normal integrands. Information constraints.

Lesson 3 (December 06, room 2.3.30)

09:00 – 12:00 (P. Carpentier)
Generalized stochastic gradient method: introduction to the Auxiliary
Problem Principle, convergence in the cases without and with constraints.
Slides

14:00 – 16:00 (F. Bonnans)
Dynamic optimization in the convex stochastic setting.

Lesson 4 (December 13, room 2.3.30)

09:00 – 12:00 (P. Carpentier)
Stochastic gradient method with constraints in expectation and in probability.
Applications (variance minimization in finance, aerospace rendezvous problem).
Slides

14:00 – 16:00 (F. Bonnans)
Dynamic programming. Stochastic dual dynamic programming.
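To fix ideas on the backward Bellman recursion at the heart of this session, here is a minimal sketch on a hypothetical finite storage problem (states, controls, noises and costs are all illustrative assumptions, not data from the course): V_t(s) = min_u E_ξ[c(s,u,ξ) + V_{t+1}(f(s,u,ξ))], solved backward from V_T = 0.

```python
# Minimal backward dynamic programming sketch on a hypothetical toy
# storage problem (all numbers below are illustrative assumptions).
S = 4                      # storage levels 0..S
T = 3                      # horizon: stages t = 0..T-1
NOISES = [0, 1]            # equally likely random inflows xi
CONTROLS = [-1, 0, 1]      # release (-1), idle (0), store (+1)

def next_state(s, u, xi):
    """Dynamics f(s, u, xi), clipped to the admissible storage range."""
    return max(0, min(S, s + u + xi))

def cost(s, u, xi):
    """Hypothetical stage cost: pay 1 to store, earn 2 per effective release."""
    return (1 if u > 0 else 0) - (2 if u < 0 and s > 0 else 0)

V = [0.0] * (S + 1)        # terminal value function V_T = 0
policy = []                # policy[t][s] = optimal control at stage t, state s
for t in reversed(range(T)):
    newV, pol = [], []
    for s in range(S + 1):
        best_u, best_q = None, float("inf")
        for u in CONTROLS:
            # Bellman operator: expectation over the finite noise set
            q = sum(cost(s, u, xi) + V[next_state(s, u, xi)]
                    for xi in NOISES) / len(NOISES)
            if q < best_q:
                best_u, best_q = u, q
        newV.append(best_q)
        pol.append(best_u)
    V, policy = newV, [pol] + policy
```

After the loop, V[s] is the optimal expected cost from state s at t = 0; at the last stage the computed policy releases whenever the storage is non-empty, as the cost structure suggests. The curse of dimensionality discussed in the lesson appears as soon as the state space grows.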

Lesson 5 (December 20, room 2.3.30)

09:00 – 12:00 (P. Carpentier)
Discretization issues for general stochastic optimization problems:
problem setting, counterexample, convergence theorem.
Slides

14:00 – 16:00 (F. Bonnans)
Optimal management of an LNG (liquefied natural gas) portfolio.

Lesson 6 (January 10, room 2.3.30)

09:00 – 12:00 (P. Carpentier)
Decomposition approaches for large-scale stochastic optimization problems.
Slides

14:00 – 16:00 (F. Bonnans)
Management of water resources for electricity production.

Lesson 7 (January 17, room 2.3.30)

14:00 – 16:00
Written exam.
Resources
Research articles to study

1. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)
(Bach, 2013)

2. Gradient convergence in gradient methods with errors
(Bertsekas, 2000)

3. A stochastic quasi-Newton method for large-scale optimization
(Byrd, 2015)

4. Accelerated gradient methods for nonconvex nonlinear and stochastic programming
(Ghadimi, 2013)

5. Stability of multistage stochastic programs
(Heitsch, 2006)

6. Algorithms for stochastic optimization with expectation constraints
(Lan, 2016)

7. Validation analysis of mirror descent stochastic approximation method
(Lan, 2012)

8. Robust stochastic approximation approach to stochastic programming
(Nemirovski, 2009)

9. A distance for multistage stochastic optimization models
(Pflug, 2012)

10. On stochastic gradient and subgradient methods with adaptive steplength sequences
(Yousefian, 2012)
Page managed by P. Carpentier
(last update: July 21, 2017)