12:00pm-1:00pm
Quad entrance |
Registration |
Come pick up your name tag before the conference begins! |
1:00pm-1:20pm
PNC room |
Opening Remarks |
1:20pm-2:00pm
PNC room |
Leveraging Generative AI for Creative Problem-Solving in Operations Research and Management Science,
Predictive and Prescriptive Analytics toward Optimizing Wildfire Suppression |
Léonard Boussioux, Assistant Professor of Information Systems and Operations Management, University of Washington | |
Part 1: Leveraging Generative AI for Creative Problem-Solving in Operations Research and Management Science.
The rapid advancement of generative artificial intelligence (AI) presents unprecedented opportunities for creative problem-solving in Operations Research and Management Science (OR/MS) through human-AI collaboration. This talk explores the potential of integrating AI into the early phases of business solution generation and multi-criteria decision analysis within innovation and strategic planning processes.
We will present two complementary field experiments designed to investigate the impact of human-AI collaboration in both ideation and evaluation contexts. The first experiment utilized a crowdsourcing challenge to compare the effectiveness of a human crowd and human-AI collaborative approaches in generating innovative solutions for circular economy business ideas. The second experiment examined the influence of AI assistance on the evaluation process of early-stage innovations, involving a diverse group of expert and non-expert screeners to assess solutions for a global health equity challenge. Results from both experiments provide insights into the strengths and limitations of human-AI collaboration in the initial stages of problem-solving and decision-making. Our findings highlight the potential for incorporating “AI-in-the-loop” methodologies into OR/MS practices, offering a scalable and cost-efficient approach to enhancing creative idea generation and systematic evaluation processes. Our research emphasizes the importance of strategically integrating AI capabilities with human expertise to improve heuristic development and evaluation in complex problem domains.
Paper 1 (Organization Science): The Crowdless Future? Generative AI and Creative Problem-Solving — https://pubsonline.informs.org/doi/full/10.1287/orsc.2023.18430. Joint work with Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, Karim R. Lakhani.
Paper 2 (Preprint): The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations — https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4914367. Joint work with Jacqueline N. Lane, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh, Pei-Hsin Wang.
Part 2: Predictive and Prescriptive Analytics toward Optimizing Wildfire Suppression.
Intense wildfire seasons require critical prioritization decisions to combat wildfire spread over a dispersed geographic area with limited suppression resources. This paper develops a predictive and prescriptive approach to jointly optimize crew assignments and wildfire suppression based on fire spread dynamics, crew availability, and spatiotemporal dynamics. The problem features a combinatorial resource allocation structure with endogenous wildfire demand and non-linear wildfire dynamics. We formulate an integer optimization model in a time-space-rest network representation of crew assignments and a time-state network representation of wildfire dynamics, with linking constraints to ensure consistency between both. We develop a branch-and-price-and-cut algorithm based on: (i) a two-sided column generation scheme that generates fire suppression plans and crew routes iteratively; (ii) a new family of cuts exploiting the knapsack structure of the linking constraints; and (iii) novel branching rules to accommodate non-linear wildfire dynamics. We also develop a data-driven approach based on double machine learning and causal forests to estimate wildfire spread as a function of covariate information and suppression efforts, while controlling for the endogeneity of treatment and outcome variables. Extensive computational experiments demonstrate that the optimization algorithm scales to practical and previously intractable instances, and that the methodology can enhance wildfire suppression effectiveness, resulting in a significant reduction in area burned over a busy wildfire season. |
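As an aside on the estimation machinery mentioned for Part 2, the snippet below is a minimal sketch of the double machine learning idea on synthetic placeholder data: the covariates, suppression effort, and fire-growth outcome are invented for illustration, and the paper's causal-forest estimator is replaced here by a simple cross-fitted residual-on-residual regression.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# Hypothetical stand-in data: X = covariates (weather, fuel, terrain, ...),
# T = suppression effort, Y = observed fire growth. Not the paper's dataset.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
T = X[:, 0] + rng.normal(size=n)                   # effort depends on covariates (endogeneity)
Y = -1.5 * T + X[:, 0] ** 2 + rng.normal(size=n)   # true treatment effect: -1.5

# Partially linear double ML: residualize Y and T on X with flexible learners
# (cross-fitting via out-of-fold predictions), then regress residual on residual.
y_res = Y - cross_val_predict(RandomForestRegressor(n_estimators=200), X, Y, cv=5)
t_res = T - cross_val_predict(RandomForestRegressor(n_estimators=200), X, T, cv=5)
theta_hat = (t_res @ y_res) / (t_res @ t_res)
print(round(theta_hat, 2))  # close to the true effect of -1.5
```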
2:00pm-2:40pm
PNC room |
PDLP: A Practical First-Order Method for Large-Scale Linear Programming |
Oliver Hinder, Assistant Professor of Industrial Engineering, University of Pittsburgh | |
This talk presents PDLP, a practical first-order method for large-scale linear programming (LP) problems. Built upon the primal-dual hybrid gradient (PDHG) method applied to the minimax formulation of LP, PDLP incorporates several advanced features, including diagonal preconditioning, presolving, adaptive step sizes, adaptive restarting, and feasibility polishing.
To evaluate PDLP, we introduce a suite of eleven large-scale LP problems, ranging from 0.14 to 6.3 billion nonzeros. Our implementation solves eight of these instances to an optimality gap of 1% within six days on a single machine. We compare PDLP against Gurobi’s barrier method, primal simplex, and dual simplex implementations. Gurobi barrier solves only three instances and exceeds a 1 TB RAM limit on the others. While primal and dual simplex methods are more memory efficient, they are much slower, solving only three instances within six days. These results demonstrate the significant advantages of PDLP in solving large-scale LPs efficiently. |
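For readers curious about the core iteration, the snippet below is a minimal sketch of plain PDHG applied to the LP saddle-point form min_{x >= 0} max_y c'x + y'(b - Ax). It deliberately omits everything that makes PDLP practical (diagonal preconditioning, presolving, adaptive step sizes, adaptive restarting, feasibility polishing), and the tiny instance at the bottom is a hypothetical example, not one of the benchmark LPs.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=5000):
    """Vanilla primal-dual hybrid gradient for min c@x s.t. A@x = b, x >= 0.

    Minimal sketch only: fixed step sizes from the spectral norm of A,
    none of PDLP's preconditioning, restarting, or adaptive stepping.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    eta = 0.9 / np.linalg.norm(A, 2)                     # tau = sigma = eta, tau*sigma*||A||^2 < 1
    for _ in range(iters):
        x_new = np.maximum(x - eta * (c - A.T @ y), 0.0)  # projected primal step
        y = y + eta * (b - A @ (2 * x_new - x))           # dual step on the extrapolated point
        x = x_new
    return x, y

# Tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum at x = [1, 0]).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = pdhg_lp(c, A, b)
print(np.round(x, 3))  # approximately [1, 0]
```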
2:40pm-3:05pm | Break |
3:05pm-3:45pm
3801 Swartz Center |
Robust Paths: Geometry and Computation |
Peter Zhang, Assistant Professor of Operations Research, Carnegie Mellon University | |
Applying robust optimization often requires selecting an appropriate uncertainty set—both in shape and size—a choice that directly affects the trade-off between average-case and worst-case performance. In practice, this calibration is usually done via trial and error: solving the robust optimization problem many times with different uncertainty set shapes and sizes and examining the resulting performance trade-offs. This process is computationally expensive and ad hoc. In this work, we take a scientific approach to this issue for robust optimization problems with linear objective functions, convex feasible regions, and convex uncertainty sets. We introduce and study what we define as the robust path: the set of robust solutions obtained by varying the uncertainty set’s shape and size parameters. Our central geometric insight is that the robust path can be characterized as a Bregman projection of a curve onto the feasible region. This leads to a surprising discovery: the robust path can be approximated by the trajectories of standard optimization algorithms, such as the proximal point method, applied to the deterministic counterpart problem. We give a sharp approximation error bound and show that it depends on the geometries of the feasible region and the uncertainty set. We also identify two special cases where the approximation error is zero: when the feasible region is polyhedrally monotone, or when the feasible region and the uncertainty set are in a dual relationship. We demonstrate the practical impact of this approach in two settings: portfolio optimization and adversarial deep learning. The former numerically validates the zero approximation error under favorable conditions (a polyhedrally monotone feasible set) and retains a very small error when those technical conditions are violated. The latter violates the linear-objective condition of our theorems, yet the approach still performs strongly: an 85% reduction in computational time and near-Pareto efficiency in average-case and worst-case performance. |
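To fix ideas, here is a hypothetical special case of the objects discussed above, in generic notation and varying only the size parameter ρ (the talk covers general convex feasible regions and convex uncertainty sets): with a Euclidean-ball uncertainty set of radius ρ around a nominal cost vector c, the robust solutions and the proximal point recursion whose trajectory approximates the robust path take the forms below.

```latex
% Robust counterpart with uncertainty set U_rho = { c + rho*u : ||u||_2 <= 1 }:
x(\rho) \in \arg\min_{x \in X} \; \max_{\tilde c \in U_\rho} \tilde c^{\top} x
         \;=\; \arg\min_{x \in X} \; c^{\top} x + \rho \,\lVert x \rVert_2 ,
\qquad \text{robust path} = \{\, x(\rho) : \rho \ge 0 \,\}.

% Proximal point method applied to the deterministic problem min_{x in X} c^T x:
x_{k+1} = \arg\min_{x \in X} \; c^{\top} x + \tfrac{1}{2\lambda_k}\,\lVert x - x_k \rVert_2^{2}.
```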
3:45pm-4:25pm
3801 Swartz Center |
Computing dual bounds for set-based models using column generation and column elimination in Hexaly |
Fred Gardi, Founder & CEO of Hexaly | |
Hexaly is a model-and-run solver that integrates heuristics and exact methods. Its set- and permutation-based modeling formalism simplifies the modeling of certain combinatorial problems, such as routing or packing problems. For instance, in a routing problem, list variables can be used to model the sequence of visits made by each truck. These decision variables are well suited to heuristic search but are much more challenging to integrate into a mathematical programming approach for computing lower bounds. A direct reformulation as a MILP introduces a quadratic number of binary decisions with several big-M constraints, leading to poor scalability and weak bounds. Hexaly automatically detects such structures in a user model and reformulates them in an extended MILP model to compute lower bounds in parallel with the heuristic search. This model is solved efficiently using state-of-the-art branch-and-cut-and-price techniques and column elimination algorithms. This talk will present the general approach, the algorithms used to solve the extended model, and benchmarks on classical vehicle routing and packing problems. |
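To make the contrast concrete, here is a generic illustration using standard vehicle-routing formulations (not Hexaly's internal reformulation): a compact arc-based model with big-M sequencing constraints versus the extended set-partitioning model over routes that column generation and branch-and-price target.

```latex
% Compact model: arc binaries x_{ij} plus load variables u_i, with big-M
% (MTZ-style) constraints -- quadratically many binaries and a weak LP bound:
u_j \ge u_i + q_j - Q\,(1 - x_{ij}) \qquad \forall\, i \ne j, \qquad q_j \le u_j \le Q.

% Extended model: one binary lambda_r per feasible route r in R -- exponentially
% many columns, but a much tighter LP relaxation, priced out by column generation:
\min \sum_{r \in R} c_r \lambda_r
\quad \text{s.t.} \quad \sum_{r \in R} a_{ir}\,\lambda_r = 1 \;\; \forall\, i,
\qquad \lambda_r \in \{0,1\}.
```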
4:25pm-4:50pm | Break |
4:50pm-6:30pm
3801 Swartz Center |
10-Minute Flash Talk Competition |
Selected students will present 10-minute flash talks in a competition judged by the attendees, with cash prizes for the winners! |
6:30pm-9:00pm
PNC room |
Dinner |
Everyone is welcome to join us for a free dinner catered by a local Pittsburgh restaurant. We might go out for drinks afterward as well. |
9:30am-10:30am
3808 |
Breakfast |
All are welcome to a free breakfast and coffee to start the day. |
10:30am-11:15am
3801 Swartz Center |
Decision-Dependent Distributionally Robust Optimization with Applications to Offline Dynamic Pricing |
Huiwen Jia, Assistant Professor of Industrial Engineering and Operations Research, UC Berkeley | |
We consider stochastic programs in which the distribution of the random variables depends on the decision variables and is only observable through a finite offline dataset. To mitigate out-of-sample risks, we formulate a decision-dependent distributionally robust optimization (DD-DRO) problem and propose a systematic method for constructing decision-dependent ambiguity sets intended to contain the true distribution, using the Wasserstein metric and interpolation techniques. Leveraging this framework, we establish finite-sample performance guarantees for the DD-DRO solution and show that the optimality gap can be bounded in terms of the sample size. Furthermore, we show that under mild conditions, the DD-DRO problem can be reformulated as an optimization program with finitely many convex constraints, which can be solved with global optimization approaches. We demonstrate the practical effectiveness of our approach through an offline dynamic pricing application, where we derive a pricing strategy with guaranteed expected revenue. |
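In generic notation (a hedged sketch of the problem shape, not necessarily the authors' exact formulation), a decision-dependent DRO problem with a Wasserstein ambiguity ball takes the form below.

```latex
\min_{x \in \mathcal{X}} \;
\sup_{Q \,:\, W\!\left(Q,\, \widehat{P}_x\right) \le \varepsilon}
\mathbb{E}_{\xi \sim Q}\!\left[ f(x, \xi) \right],
% where W is the Wasserstein distance, \widehat{P}_x is the decision-dependent
% distribution estimated from the offline data (interpolated across decisions x),
% and the radius \varepsilon is calibrated so the ambiguity ball contains the
% true distribution with high probability.
```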
11:15am-12:00pm
3801 Swartz Center |
Real-time Personalization with Simple Transformers |
Lin An, 5th Year PhD Candidate at Tepper School of Business, Carnegie Mellon University | |
Real-time personalization has advanced significantly in recent years, with platforms utilizing machine learning models to predict user preferences based on rich behavioral data on each individual user. Traditional approaches usually rely on embedding-based machine learning models to capture user preferences and then reduce the final optimization task to a nearest-neighbor search, which can be performed extremely fast. However, these models struggle to capture complex user behaviors, which are essential for making accurate recommendations. Transformer-based models, on the other hand, are known for their practical ability to model sequential behaviors and have therefore been used intensively in personalization in recent years to overcome these limitations. However, optimizing recommendations under transformer-based models is challenging due to their complicated architectures. In this work, we address this challenge by considering a specific class of transformers, called simple transformers, showing its ability to represent complex user preferences, and developing efficient algorithms for real-time personalization. We show that simple transformers are capable of capturing complex user preferences. We then develop an algorithm that enables fast optimization of recommendation tasks based on simple transformers. Our algorithm achieves near-optimal performance in sub-linear time. Finally, we demonstrate the effectiveness of our approach through an empirical study on datasets from Spotify and Trivago. Our experimental results show that (1) simple transformers can model and predict user preferences substantially more accurately than non-transformer models and nearly as accurately as more complex transformers, and (2) our algorithm completes simple-transformer-based recommendation tasks quickly and effectively. |
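Because "simple transformers" refers to the specific class studied in this work, the sketch below is only a generic single-head attention scorer over a user's behavioral history, with random placeholder embeddings and weights. It is meant to illustrate the kind of computation being optimized, not the paper's architecture or algorithm.

```python
import numpy as np

def attention_scores(history, items, W_q, W_k, W_v, w_out):
    """Score candidate items for one user with a single attention head.

    Generic illustration only (not the "simple transformer" class from the
    talk). `history` holds past-item embeddings (t x d); `items` holds
    candidate-item embeddings (m x d).
    """
    keys = history @ W_k.T           # keys for the user's past items
    values = history @ W_v.T         # values summarizing past behavior
    scores = []
    for q in items:                  # each candidate embedding acts as the query
        logits = keys @ (W_q @ q)    # attention logits over the history
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()     # softmax
        context = weights @ values   # attention-weighted summary of the history
        scores.append(w_out @ np.concatenate([context, q]))
    return np.array(scores)          # recommend the argmax item

# Toy usage with random placeholder embeddings and weights (d = 8).
rng = np.random.default_rng(1)
d = 8
history = rng.normal(size=(5, d))
items = rng.normal(size=(20, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
w_out = rng.normal(size=2 * d)
print(int(np.argmax(attention_scores(history, items, W_q, W_k, W_v, w_out))))
```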
12:00pm-2:00pm | Lunch |
Lunch on your own! We welcome you to explore restaurants around the area. Please ask us if you would like any recommendations! |
2:00pm-3:30pm
PNC room |
Poster Session |
Students will set up posters in an open space. Winners will be determined by both formal judges and attendees' popular votes! |
3:30pm-4:15pm
3801 Swartz Center |
Duality, the Value Function, and Multiobjective Optimization |
Ted Ralphs, Professor of Industrial and Systems Engineering, Lehigh University | |
In the first part of the talk, we present a generalized version of the Farkas Lemma that provides a unifying framework for understanding the relationships between solution algorithms for discrete optimization problems with multiple objectives, multiple decision-makers, and/or multiple stages. A well-known geometric interpretation of Farkas' certificate of infeasibility for a linear optimization problem is as a hyperplane separating a given right-hand-side vector from a convex cone. This interpretation generalizes in a natural way to the MILP setting and provides a geometric way of understanding MILP duality and related concepts. This geometric point of view also provides insight into algorithms for a variety of optimization problems. In particular, the second part of the talk will focus on how this point of view led to the development of a class of algorithms for constructing the efficient frontier of a general multiobjective mixed integer linear optimization problem with an arbitrary number of objectives using a generalized cutting plane algorithm. |
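For reference, one standard statement of the classical Farkas lemma that the talk generalizes, together with the separating-hyperplane reading mentioned above:

```latex
% Exactly one of the two systems below has a solution:
\text{(I)}\;\; A x = b,\; x \ge 0
\qquad \text{or} \qquad
\text{(II)}\;\; A^{\top} y \ge 0,\; b^{\top} y < 0.
% A solution y of (II) is a certificate of infeasibility for (I): the hyperplane
% { z : y^T z = 0 } separates b from the convex cone generated by the columns of A.
```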
4:15pm-5:00pm
3801 Swartz Center |
Learning to cover: online learning and optimization with irreversible decisions |
Michael Lingzhi Li, Assistant Professor of Business Administration, Harvard Business School | |
We define an online learning and optimization problem with discrete and irreversible decisions contributing toward a coverage target. In each period, a decision-maker selects facilities to open, receives information on the success of each one, and updates a classification model to guide future decisions. The goal is to minimize facility openings under a chance constraint reflecting the coverage target, in an asymptotic regime characterized by a large target number of facilities m → ∞ but a finite horizon. We prove that, under statistical conditions, the online classifier converges to the Bayes-optimal classifier at a rate of at best O(1/√n). Accordingly, we formulate our online learning and optimization problem with a generalized learning rate r > 0 and residual error 1−p. We derive an asymptotically optimal algorithm and an asymptotically tight lower bound. The regret grows as Θ(m^{(1−r)/(1−r^T)}) if p = 1 (perfect learning) or as Θ(max(m^{(1−r)/(1−r^T)}, √m)) otherwise; in particular, the regret rate is sub-linear and converges exponentially fast to its infinite-horizon limit. We extend this result to a more complicated facility location setting in a bipartite facility-customer graph with a target on customer coverage. Throughout, constructive proofs identify a policy featuring limited exploration initially and fast exploitation later on, once uncertainty has been mitigated. These results uncover the benefits of limited online learning and optimization through pilot programs prior to full-fledged expansion. |
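In generic notation (a hedged sketch, not necessarily the authors' exact model), the single-pool version of the objective described above is a chance-constrained covering problem solved while learning online which candidates are likely to succeed:

```latex
\min \; N
\quad \text{s.t.} \quad
\mathbb{P}\!\left( \sum_{i=1}^{N} Z_i \ge m \right) \ge 1 - \delta,
% where Z_i \in \{0,1\} indicates whether the i-th opened facility succeeds,
% m is the coverage target, and \delta is the allowed failure probability;
% openings are irreversible and outcomes are revealed period by period.
```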
5:00pm-5:30pm
3801 Swartz Center |
Closing Remarks |
5:30pm-7:30pm
PNC room |
Happy Hour |
Free refreshments to celebrate a wonderful conference. The party may continue past 7:30pm outside of the building. |