Paper title
Meaning without reference in large language models
Paper authors
Paper abstract
The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning whatsoever, we argue that they likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from conceptual role. Because conceptual role is defined by the relationships between internal representational states, meaning cannot be determined from a model's architecture, training data, or objective function, but only by examination of how its internal states relate to each other. This approach may clarify why and how LLMs are so successful and suggest how they can be made more human-like.