- I experimentally study social learning in the simple two-armed bandit problem.
- The two-armed bandit setting allows me to confront different theories of social learning.
- Results are in line with counting heuristics, but not with Bayesian reasoning.
- Results call for incorporating count heuristics into the theory of social learning.
- Results help to explain failures of technology and practice adoption, and poor investments.
Abstract: I conduct an experimental investigation of observational (social) learning in a simple two-armed bandit framework in which models based on Bayesian reasoning and on non-Bayesian count heuristics yield different predictions. Agents can choose between two alternatives with different probabilities of providing a reward. They act in a sequence and must make a choice in order to see its outcome. They can base their decision on the choices of their predecessors and on the outcomes of their own choices. The experimental results follow neither the Bayesian Nash Equilibrium nor the naïve herding model (BRTNI): subjects follow and cascade on choices that contain no information about the state of the world, and therefore sustain losses when learning from others. I also test the Quantal Response Equilibrium and the robustness of this theory.
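The sequential-choice setting described in the abstract can be sketched as a minimal simulation. This is an illustrative toy model of a pure count heuristic, not the paper's experimental design: all parameter values (number of agents, reward probabilities) are assumptions, agents act only once each, and the heuristic simply imitates the majority of predecessors' choices. It shows how such a heuristic can cascade on the first agent's uninformative random choice.

```python
import random

def simulate_cascade(n_agents=20, p_arm0=0.7, p_arm1=0.3, seed=0):
    """Toy sequential two-armed bandit with a count-heuristic rule.

    Each agent observes predecessors' choices (not their payoffs),
    picks the arm chosen most often so far, and randomizes on a tie.
    Parameters are illustrative, not the paper's design.
    """
    rng = random.Random(seed)
    probs = [p_arm0, p_arm1]   # arm 0 is objectively better here
    history = []               # publicly observed sequence of choices
    payoffs = []               # each agent's privately observed outcome
    for _ in range(n_agents):
        counts = [history.count(0), history.count(1)]
        if counts[0] != counts[1]:
            choice = 0 if counts[0] > counts[1] else 1  # imitate the majority
        else:
            choice = rng.randrange(2)                   # no information: randomize
        reward = 1 if rng.random() < probs[choice] else 0
        history.append(choice)
        payoffs.append(reward)
    return history, payoffs
```

Under this rule the first agent's random (and hence uninformative) choice breaks the tie for agent two, after which every later agent imitates the growing majority: the whole sequence locks onto a choice that carries no information about which arm is better, which is the kind of uninformative cascade the abstract reports.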
Igor Asanov, Bandit cascade: A test of observational learning in the bandit problem, Journal of Economic Behavior & Organization, Volume 189, 2021, Pages 150-171, ISSN 0167-2681, https://doi.org/10.1016/j.jebo.2021.06.006.