Social Learning with Model Misspecification: A Framework and A Robustness Result

Daniel Hauser, Aislinn Bohren

Research output: Working paper › Scientific

Abstract

We explore model misspecification in an observational learning framework. Individuals learn from private and public signals and from the actions of others. An agent's type specifies her model of the world. Misspecified types have incorrect beliefs about the signal distribution, about how other agents draw inference, and/or about others' payoffs. We establish that the correctly specified model is robust, in that agents with approximately correct models almost surely learn the true state asymptotically. We develop a simple criterion to identify the asymptotic learning outcomes that arise when misspecification is more severe. Depending on the nature of the misspecification, learning may be correct or incorrect, or beliefs may fail to converge. Different types may asymptotically disagree, despite observing the same sequence of information. This framework captures behavioral biases such as confirmation bias, the false consensus effect, partisan bias, and correlation neglect, as well as models of inference such as level-k and cognitive hierarchy.
Original language: English
Publisher: UNIVERSITY OF PENNSYLVANIA PRESS
Number of pages: 103
Status: Published - 2018
Ministry of Education publication type: D4 Published development or research reports or studies
