POSTPONED: OR and Uncertainty: A Unified Framework
Sequential decision analytics is the study of making decisions over time under uncertainty. These are the problems at the heart of the transition to automated systems, which are increasingly used in e-commerce, digital transportation, supply chain management, process automation, and robotics.
These problems have been studied by over 15 different communities under names that include dynamic programming (including approximate dynamic programming and reinforcement learning), stochastic programming, stochastic search, stochastic control, and simulation optimization, as well as multi-armed bandit problems and active learning. Each community has evolved its own modeling style and family of algorithms designed for certain classes of applications. As each community has evolved to address a broader range of problems, there has been a consistent pattern of rediscovery of tools that sometimes differ only in name, or in modest implementation details.
I will represent all of these communities using a single, canonical framework that mirrors the widely used modeling style from deterministic math programming or optimal control. The key difference when introducing uncertainty is the need to optimize over policies. I will show that all the solution strategies suggested by the research literature, in addition to some that are widely used in practice, can be organized into four fundamental classes. One of these classes, which we call “parametric cost function approximations,” is widely used in practice, but has been largely overlooked by the academic community. These ideas will be illustrated using a variety of applications.
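As a toy illustration (not from the course materials), a parametric cost function approximation embeds tunable parameters in a deterministic decision rule and then tunes those parameters by simulation. The sketch below applies the idea to a newsvendor-style problem; the demand model, prices, and the "order up to theta" rule are all hypothetical choices made for this example.

```python
import random

rng = random.Random(0)
# Common random numbers: one fixed set of demand sample paths, so every
# candidate parameter is evaluated on the same scenarios.
demands = [max(0.0, rng.gauss(100, 20)) for _ in range(2000)]

def avg_cost(theta, price=5.0, cost=3.0):
    """Average cost of the parameterized policy 'order up to theta'
    over the simulated demand sample paths."""
    return sum(cost * theta - price * min(theta, d) for d in demands) / len(demands)

# Tune the policy parameter by a simple grid search on simulated cost.
best_theta = min(range(60, 160, 5), key=avg_cost)
```

The point of the sketch is that the object being optimized is the policy parameter `theta`, not an individual decision; in practice the deterministic inner problem is typically a full math program rather than a single order quantity.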
This material will be covered in an intense, one-day short course. The only prerequisite is a course in probability and statistics. Extensive supporting material, including an undergraduate-level online book and a graduate-level book (written for a broad audience), is available at jungle.princeton.edu.
The registration deadline is 15 April; please register using the registration form.
| Time | Session |
|---|---|
| 08:45-09:00 | Welcome and introduction |
| 09:00-10:20 | Part 1 of the lecture |
| 10:30-12:00 | Part 2 of the lecture |
| 12:00-13:30 | Lunch on campus |
| 13:30-15:00 | Part 3 of the lecture |
| 15:15-17:00 | Part 4 of the lecture |
Warren Powell is a faculty member in the Department of Operations Research and Financial Engineering at Princeton University, where he has taught since 1981. In 1990, he founded CASTLE Laboratory, which spans research in computational stochastic optimization, with initial applications in transportation and logistics. In 2011, he founded the Princeton laboratory for ENergy Systems Analysis (PENSA) to tackle the rich array of problems in energy systems analysis. In 2013, this morphed into "CASTLE Labs," focusing on computational stochastic optimization and learning.
He has started two consulting firms, Princeton Transportation Consulting Group (1988) and Transport Dynamics (1995), but has continued his developmental work through CASTLE Laboratory at Princeton University, where he has worked with leading companies in less-than-truckload trucking (Yellow Freight System), parcel shipping (United Parcel Service), truckload trucking (Schneider National), rail (primarily Norfolk Southern Railway), and air (NetJets and Embraer), as well as the Air Mobility Command. As he moved into energy, he has worked with PJM Interconnection (the grid operator for the mid-Atlantic states) and PSE&G (the utility that serves 75 percent of New Jersey).
Motivated by these applications, he developed a method for bridging dynamic programming with math programming to solve very high-dimensional stochastic dynamic programs using the modeling and algorithmic framework of approximate dynamic programming. This work has been used in a variety of applications, including fleet management at Schneider National, the SMART energy resource planning model (175,000 time periods), and locomotive optimization at Norfolk Southern. He identified four fundamental classes of policies for solving sequential decision problems, integrating fields such as stochastic programming, dynamic programming (including approximate dynamic programming/reinforcement learning), robust optimization, optimal control, and stochastic search (to name a few). This work identified a new class of policy called the parametric cost function approximation. His work in industry is balanced by contributions to the theory of stochastic optimization and machine learning.
He has received many prestigious awards throughout his career, including:
- Docteur Honoris Causa from the University of Quebec in Montreal (2013)
- the Daniel Wagner Prize, for extending approximate dynamic programming to very high-dimensional problems for Schneider National
- the Best Paper Prize from the Society for Transportation Science and Logistics (once for this problem, and once for his ADP model for locomotive management at Norfolk Southern)
- Finalist in the prestigious Edelman competition (1987 and 1991)
- the INFORMS Fellows Award
- the Presidential Young Investigator Award

His students have also won many awards, including the Dantzig Prize for best dissertation in Operations Research, several Transportation Science dissertation prizes, an honorable mention in the Doing Good with Good OR Competition, and a Nicholson Prize finalist.