Paper Title
Agreement and Statistical Efficiency in Bayesian Perception Models
Paper Authors
Paper Abstract
Bayesian models of group learning have been studied in Economics since the 1970s and, more recently, in computational linguistics. The models from Economics postulate that agents maximize utility in their communication and actions. These models do not explain the ``probability matching'' phenomena observed in many experimental studies. To account for these observations, Bayesian models that do not formally fit into the economic utility-maximization framework were introduced. In these models, individuals sample from their posteriors when communicating. In this work we study the asymptotic behavior of such models on connected networks with repeated communication. Perhaps surprisingly, even though individual agents are not utility maximizers in the classical sense, we establish that the individuals ultimately agree, and furthermore we show that the limiting posterior is Bayes optimal. We explore the interpretation of our results in terms of Large Language Models (LLMs). In the positive direction, our results can be interpreted as stating that interaction between different LLMs can lead to optimal learning. However, we provide an example showing how misspecification may lead LLM agents to be overconfident in their estimates.
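To make the "sample from their posteriors" communication rule concrete, the following is a minimal toy simulation, not the paper's model: five agents on a complete (hence connected) network estimate a coin bias from private flips, and in each round every agent announces a draw from its Beta posterior (probability matching) instead of a utility-maximizing point estimate. The update rule that absorbs a neighbor's announcement as one fractional pseudo-observation is a heuristic stand-in for the paper's Bayesian update, and all parameters (`true_theta`, `n_private`, `n_rounds`) are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

true_theta = 0.7                       # hidden coin bias the agents try to learn
n_agents, n_private, n_rounds = 5, 20, 200

# Each agent starts from a Beta(1, 1) prior plus its own private coin flips.
heads = rng.binomial(n_private, true_theta, size=n_agents)
alpha = 1.0 + heads.astype(float)
beta = 1.0 + (n_private - heads).astype(float)

initial_spread = np.ptp(alpha / (alpha + beta))  # max-min of posterior means

for _ in range(n_rounds):
    # Probability matching: each agent announces a *sample* from its
    # posterior rather than its posterior mean.
    announcements = rng.beta(alpha, beta)
    # Heuristic update: absorb each neighbor's announcement as one
    # fractional pseudo-observation (NOT the paper's exact update rule).
    for i in range(n_agents):
        others = np.delete(announcements, i)
        alpha[i] += others.sum()
        beta[i] += (1.0 - others).sum()

final_means = alpha / (alpha + beta)
final_spread = np.ptp(final_means)
print(f"spread of posterior means: {initial_spread:.3f} -> {final_spread:.3f}")
```

Under this toy dynamic the agents' posterior means contract toward a common value near the pooled estimate of the bias, illustrating the qualitative claim of the abstract: repeated posterior-sampling communication on a connected network drives the group toward agreement.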