Stateful Posted Pricing with Vanishing Regret via Dynamic Deterministic Markov Decision Processes
Speaker
Yuval Emek
Date
12/01/2021, 11:30 - 13:00
Bar-Ilan University - Department of Economics
Economics.Dept@mail.biu.ac.il
Place
Zoom https://us02web.zoom.us/j/82536086839
Affiliation
Technion - Israel Institute of Technology
Abstract
In this talk, a rather general online problem called dynamic resource allocation with capacity constraints (DRACC) is introduced and studied in the realm of posted price mechanisms. This problem subsumes several applications of stateful pricing, including but not limited to posted prices for online job scheduling and matching over a dynamic bipartite graph. As existing online learning techniques do not yield vanishing-regret mechanisms for this problem, we develop a novel online learning framework defined over deterministic Markov decision processes with dynamic state transition and reward functions. We then prove that if the Markov decision process admits an oracle that can simulate any given policy from any initial state with bounded loss (a condition satisfied in the DRACC problem), then the online learning problem can be solved with vanishing regret. Our proof technique is based on a reduction to online learning with switching cost, in which an online decision maker incurs an extra cost every time she switches from one arm to another. If time permits, we will demonstrate this connection and further show how DRACC can be used in our proposed applications of stateful pricing.
Based on joint work with Ron Lavi, Rad Niazadeh, and Yangguang Shi.
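The reduction mentioned in the abstract targets the classical setting of online learning with switching cost. As a self-contained illustration of that setting only (not of the talk's own mechanism, which is not specified here), the sketch below implements the well-known "shrinking dartboard" variant of multiplicative weights: the current arm is kept with probability w_{t+1}(arm)/w_t(arm), which preserves the Hedge marginal distribution while making switches rare. All names and parameter values are illustrative assumptions.

```python
import math
import random

def shrinking_dartboard(loss_seqs, eta, seed=0):
    """Illustrative sketch of online learning with switching cost.

    loss_seqs[i][t] is the loss of arm i at round t, assumed in [0, 1].
    Returns (total loss incurred, number of arm switches).
    """
    rng = random.Random(seed)
    K, T = len(loss_seqs), len(loss_seqs[0])
    w = [1.0] * K              # multiplicative weights, initially uniform
    arm = rng.randrange(K)     # start on a uniformly random arm
    total_loss, switches = 0.0, 0
    for t in range(T):
        total_loss += loss_seqs[arm][t]
        # standard Hedge-style update: weights shrink with incurred loss
        new_w = [w[i] * (1.0 - eta) ** loss_seqs[i][t] for i in range(K)]
        # keep the current arm with probability new_w[arm]/w[arm] (<= 1);
        # otherwise resample proportionally to the new weights -- this
        # keeps the marginal play distribution equal to Hedge's while
        # switching arms only rarely
        if rng.random() > new_w[arm] / w[arm]:
            r = rng.random() * sum(new_w)
            acc, new_arm = 0.0, K - 1
            for i in range(K):
                acc += new_w[i]
                if r <= acc:
                    new_arm = i
                    break
            if new_arm != arm:
                switches += 1
            arm = new_arm
        w = new_w
    return total_loss, switches
```

On a simple instance where one arm is consistently better, the learner's total loss stays close to the best arm's while the switch count remains a small fraction of the horizon; it is this rarity of switches that makes such algorithms useful when every switch incurs an extra cost.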
To view the seminar recording, click here
Last Updated Date : 12/01/2021