Social Learning with Model Misspecification: A Framework and A Robustness Result

Daniel Hauser, Aislinn Bohren

Research output: Working paper (scientific)


We explore model misspecification in an observational learning framework. Individuals learn from private and public signals and from the actions of others. An agent's type specifies her model of the world. Misspecified types have incorrect beliefs about the signal distribution, about how other agents draw inferences, and/or about others' payoffs. We establish that the correctly specified model is robust, in that agents with approximately correct models almost surely learn the true state asymptotically. We develop a simple criterion to identify the asymptotic learning outcomes that arise when misspecification is more severe. Depending on the nature of the misspecification, learning may be correct, learning may be incorrect, or beliefs may fail to converge. Different types may asymptotically disagree despite observing the same sequence of information. This framework captures behavioral biases such as confirmation bias, the false consensus effect, partisan bias, and correlation neglect, as well as models of inference such as level-k and cognitive hierarchy.
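The robustness result described above can be illustrated with a minimal simulation sketch. This is not the paper's model: it assumes a binary state, i.i.d. binary signals, and purely private Bayesian updating (no social observation), with all parameter values chosen for illustration. A correctly specified agent and an agent whose believed signal precision is only approximately correct both drive their posterior toward the true state.

```python
# Minimal sketch (illustrative, NOT the paper's model): sequential Bayesian
# learning about a binary state theta in {0, 1} from i.i.d. binary signals.
# Signals match the state with probability q_true; the agent updates as if
# that probability were q_believed, so q_believed != q_true is a simple
# form of misspecification about the signal distribution.
import math
import random

def learn(q_true, q_believed, n_signals, seed=0):
    """Return the final posterior probability placed on the true state."""
    rng = random.Random(seed)
    log_odds = 0.0  # log P(theta=1)/P(theta=0), starting from a flat prior
    step = math.log(q_believed / (1 - q_believed))
    for _ in range(n_signals):
        # True state is theta = 1, so a signal equals 1 with prob q_true.
        s = 1 if rng.random() < q_true else 0
        log_odds += step if s == 1 else -step
    return 1 / (1 + math.exp(-log_odds))

correct = learn(q_true=0.7, q_believed=0.7, n_signals=2000)
nearly_correct = learn(q_true=0.7, q_believed=0.6, n_signals=2000)
print(correct, nearly_correct)
```

Because the expected log-odds increment stays positive whenever the believed precision is on the correct side of 1/2, both agents' posteriors converge to the true state, echoing (in this much simpler setting) the robustness of approximately correct models.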
Original language: English
Number of pages: 103
Publication status: Published - 2018
MoE publication type: D4 Published development or research report or study


  • Model Misspecification
  • Social Learning
  • Bounded Rationality


