arXiv:2308.09830

Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis

Published on Aug 18, 2023
Abstract

This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the trade-offs and challenges associated with each approach.
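To make the neuro-symbolic approach more concrete, below is a minimal Python sketch of the two information flows the abstract describes: bottom-up learning that distills symbolic representations from LLM output, and top-down guidance that injects those representations back into prompt engineering. Everything here is an illustrative assumption rather than the authors' implementation: query_llm is a hypothetical stand-in for any LLM client, and the "IF ... THEN ..." rule format and extraction heuristic are invented for the example. The "think step by step" instruction in the prompt gestures at the chain-of-thought prompting the modular approach builds on.

from dataclasses import dataclass, field

def query_llm(prompt: str) -> str:
    """Stand-in for a call to any LLM API (hypothetical stub)."""
    raise NotImplementedError("plug in a real model client here")

@dataclass
class SymbolicStore:
    """Top (symbolic) level: explicit rules distilled from LLM outputs."""
    rules: list[str] = field(default_factory=list)

    def add_rule(self, rule: str) -> None:
        if rule and rule not in self.rules:
            self.rules.append(rule)

    def as_prompt_guidance(self) -> str:
        """Top-down: render stored rules as constraints for the next prompt."""
        if not self.rules:
            return ""
        return ("Follow these known rules:\n"
                + "\n".join(f"- {r}" for r in self.rules) + "\n")

def bottom_up_extract(llm_answer: str) -> list[str]:
    """Bottom-up: keep lines that look like explicit 'IF ... THEN ...' rules
    (a naive heuristic, purely for illustration)."""
    return [line.strip() for line in llm_answer.splitlines()
            if line.strip().upper().startswith("IF ")]

def neuro_symbolic_step(store: SymbolicStore, task: str) -> str:
    # Top-down guidance: symbolic rules shape the prompt sent to the LLM layer.
    prompt = (store.as_prompt_guidance()
              + f"Task: {task}\n"
              + "Think step by step, and state any general rules you used "
              + "as 'IF ... THEN ...'.")
    answer = query_llm(prompt)
    # Bottom-up learning: rules extracted from the answer enrich the symbolic layer.
    for rule in bottom_up_extract(answer):
        store.add_rule(rule)
    return answer

Over repeated calls the symbolic store grows, which is one way to read the paper's claim that the two layers offset each other's weaknesses: the LLM layer supplies flexible generalization, while the accumulated explicit rules make the agent's behavior more inspectable and consistent across tasks.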
