People reason about uncertainty with deliberately incomplete models, including only the most relevant variables. How do people hampered by different, incomplete views of the world learn from each other? We introduce a model of “model-based inference.” Model-based reasoners partition an otherwise hopelessly complex state space into a manageable model. We find that unless the differences in agents’ models are trivial, interactions will often not lead agents to have common beliefs, and indeed the correct-model belief will typically lie outside the convex hull of the agents’ beliefs. However, if the agents’ models have enough in common, then interacting will lead agents to similar beliefs, even if their models also exhibit some bizarre idiosyncrasies.