Smooth Calibration, Leaky Forecasts, and Nash Dynamics

Speaker
Sergiu Hart
Date
31/10/2017, 11:30 - 13:00
Place
Economics building (504), faculty lounge on the first floor
Affiliation
Hebrew University
Abstract

Paper: http://www.ma.huji.ac.il/hart/abs/calib-eq.html

Joint work with Dean P. Foster

How good is a forecaster? Assume for concreteness that every day the
forecaster issues a forecast of the type "the chance of rain tomorrow is
30%." A simple test one may conduct is to calculate the proportion of
rainy days out of those days on which the forecast was 30%, and compare it
to 30%; and do the same for all other forecasts. A forecaster is said to
be _calibrated_ if, in the long run, the differences between the actual
proportions of rainy days and the forecasts are small, no matter what the
weather really was. The classical result of Foster and Vohra (1998) is
that calibration can always be guaranteed by randomized forecasting
procedures (a short proof will be provided).
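In code, the test above amounts to the following score (a minimal Python sketch; the function name and data layout are ours, not the paper's): for each forecast value, take the gap between the empirical rain frequency and the forecast, and average these gaps weighted by how often each value was issued.

from collections import defaultdict

def calibration_score(forecasts, outcomes):
    """Average gap |empirical rain frequency - forecast|, weighted by
    how often each forecast value was issued."""
    days_with = defaultdict(int)   # forecast value -> days it was issued
    rainy_with = defaultdict(int)  # forecast value -> rainy days among them
    for p, a in zip(forecasts, outcomes):
        days_with[p] += 1
        rainy_with[p] += a
    T = len(forecasts)
    return sum(n * abs(rainy_with[p] / n - p)
               for p, n in days_with.items()) / T

# Example: "30%" was issued 4 times, with 1 rainy day (25%), gap 0.05.
print(calibration_score([0.3, 0.3, 0.3, 0.3], [0, 1, 0, 0]))  # 0.05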

We propose to smooth out the calibration score, which measures how good a
forecaster is, by combining nearby forecasts. While regular calibration
can be guaranteed only by randomized forecasting procedures, we show that
smooth calibration can be guaranteed by deterministic procedures. As a
consequence, it does not matter if the forecasts are leaked, i.e., made
known in advance: smooth calibration can nevertheless be guaranteed (while
regular calibration cannot). Moreover, our procedure has finite recall, is
stationary, and all forecasts lie on a finite grid. To construct it, we
also deal with the related setups of online linear regression and weak
calibration. Finally, we show that smooth calibration yields uncoupled
finite-memory dynamics in n-person games ("smooth calibrated learning") in
which the players play approximate Nash equilibria in almost all periods.
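One plausible way to implement the smoothing, as a rough illustration: replace the exact grouping by forecast value with a kernel-weighted average over nearby forecasts. The triangular kernel and the bandwidth delta below are our assumptions for illustration; the paper works with a general Lipschitz smoothing function.

def smooth_calibration_score(forecasts, outcomes, delta=0.1):
    """Kernel-smoothed analogue of the calibration score above.
    Each day's forecast is checked against a weighted average of the
    gaps on days with *nearby* forecasts (triangular kernel with
    bandwidth delta; both are illustrative assumptions)."""
    T = len(forecasts)
    total = 0.0
    for t in range(T):
        # weight of day s when evaluating day t's forecast
        weights = [max(0.0, 1.0 - abs(forecasts[t] - forecasts[s]) / delta)
                   for s in range(T)]
        w_sum = sum(weights)  # >= 1, since day t itself has weight 1
        gap = sum(w * (outcomes[s] - forecasts[s])
                  for s, w in enumerate(weights)) / w_sum
        total += abs(gap)
    return total / T

As the bandwidth shrinks, only identical forecasts are combined and the regular calibration score is roughly recovered; with a positive bandwidth the score varies smoothly in the forecasts, which is what allows deterministic procedures to guarantee it.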

We will also discuss a new "integral" approach to calibration.

Last updated: 04/12/2022