
Memory Integration: Enabling Your Agent to Remember and Learn

Introduction

  • Importance of Memory Integration in AI Agents
  • Overview of Memory Mechanisms in AutoGPT

Section 1: Understanding Memory Integration

  • Concept of Memory in AI Agents
  • Types of Memory: Short-term vs. Long-term (see the sketch below)
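
To make the short-term vs. long-term distinction concrete, here is a minimal sketch in Python. The AgentMemory class and its method names are hypothetical, not part of the Forge SDK: short-term memory is a bounded window of recent entries sized to fit the LLM context, while long-term memory is an archive that persists across steps.

```python
from collections import deque


class AgentMemory:
    """Illustrative split between short-term and long-term memory."""

    def __init__(self, short_term_capacity: int = 10):
        # Oldest entries fall off automatically once the window is full.
        self.short_term = deque(maxlen=short_term_capacity)
        # Unbounded archive; Section 2 swaps this for a vector store.
        self.long_term: list[str] = []

    def remember(self, entry: str) -> None:
        # Every observation lands in both the recent window and the archive.
        self.short_term.append(entry)
        self.long_term.append(entry)

    def recall_recent(self) -> list[str]:
        """Working context to include in the next LLM call."""
        return list(self.short_term)

    def recall_archive(self, keyword: str) -> list[str]:
        """Naive keyword retrieval over everything ever remembered."""
        return [e for e in self.long_term if keyword.lower() in e.lower()]
```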

Section 2: Implementing Memory in Your Agent

  • Setting up Memory Structures in the Forge Environment (sketched after this list)
  • Utilizing Agent Protocol for Memory Integration
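
Forge has shipped a vector-database-backed memory store, but its module paths and class names have varied between versions, so the sketch below talks to chromadb directly (a dependency Forge already uses) rather than relying on a specific Forge class. The store_memory and retrieve_memories helpers and the ./agent_memory path are illustrative choices, not a fixed Forge API:

```python
import chromadb

# Persist embeddings on disk so memories survive agent restarts.
client = chromadb.PersistentClient(path="./agent_memory")


def store_memory(task_id: str, text: str, doc_id: str) -> None:
    """Embed and persist one piece of agent experience."""
    # One collection per task mirrors how Forge scopes artifacts to a task_id.
    collection = client.get_or_create_collection(name=f"task-{task_id}")
    collection.add(documents=[text], ids=[doc_id])


def retrieve_memories(task_id: str, query: str, n_results: int = 3) -> list[str]:
    """Return the stored snippets most semantically similar to `query`."""
    collection = client.get_or_create_collection(name=f"task-{task_id}")
    results = collection.query(query_texts=[query], n_results=n_results)
    documents = results.get("documents") or [[]]
    return documents[0]
```

Because retrieval is by embedding similarity rather than exact match, the agent can surface relevant experience even when later steps phrase things differently.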

Section 3: Developing Learning Mechanisms

  • Creating Learning Algorithms for Your Agent
  • Implementing Learning Mechanisms using Task and Artifact Schemas (sketched below)
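
A simple but effective learning loop hangs off the Agent Protocol's step execution: recall experience before acting, act, then record the outcome for future steps. The sketch below reuses store_memory and retrieve_memories from Section 2; the Step dataclass is a plain-Python stand-in for the Agent Protocol's Step schema (in Forge these are pydantic models), and call_llm is a placeholder for your model call:

```python
import uuid
from dataclasses import dataclass


@dataclass
class Step:
    # Minimal stand-in for the Agent Protocol's Step schema.
    task_id: str
    input: str
    output: str = ""
    is_last: bool = False


async def call_llm(prompt: str) -> str:
    # Placeholder: swap in your provider's chat-completion call.
    return f"(model response to {len(prompt)} chars of prompt)"


async def execute_step(task_id: str, step_input: str) -> Step:
    """One learning iteration: recall, act, then record what happened."""
    # 1. Recall: pull experience stored by earlier steps of this task.
    lessons = retrieve_memories(task_id, query=step_input)

    # 2. Act: prepend recalled lessons to the prompt.
    context = "\n".join(lessons)
    prompt = f"{context}\n\n{step_input}" if context else step_input
    output = await call_llm(prompt)

    # 3. Learn: persist the outcome so later steps can retrieve it.
    store_memory(
        task_id,
        f"input: {step_input}\nresult: {output}",
        doc_id=uuid.uuid4().hex,
    )
    return Step(task_id=task_id, input=step_input, output=output)
```

Artifacts produced along the way can be recorded the same way, keyed by task_id, so that a later step can ask "what did I already build for this task?" instead of regenerating work.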

Section 4: Testing and Optimizing Memory Integration

  • Employing AGBenchmark for Memory Testing (example after this list)
  • Optimizing Memory for Enhanced Performance and Efficiency
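
AGBenchmark exercises the agent end-to-end over the Agent Protocol, including memory-oriented challenges; alongside it, a focused unit test can verify recall in isolation. Here is a hypothetical pytest example against the Section 2 helpers (the stored facts and the test name are invented for illustration):

```python
def test_semantic_recall():
    # After storing two unrelated facts, a semantically related query
    # should surface the relevant one, even with no keyword overlap.
    task_id = "bench-001"
    store_memory(task_id, "The API key lives in the .env file", "m1")
    store_memory(task_id, "The capital of France is Paris", "m2")

    hits = retrieve_memories(task_id, "where are credentials stored?", n_results=2)

    assert any(".env" in hit for hit in hits), "relevant memory was not recalled"
```

Semantic assertions are inherently fuzzy, so prefer membership checks over exact-ranking checks. For optimization, the knobs that matter most are the embedding model, the number of results retrieved per step, and how aggressively old memories are summarized before storage.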

Section 5: Best Practices in Memory Integration

  • Tips and Strategies for Effective Memory Integration
  • Avoiding Common Pitfalls in Memory Development

Conclusion

  • Recap of the Tutorial
  • Future Directions in Memory Integration

Additional Resources

The following reading list is excerpted from: The Rise and Potential of Large Language Model Based Agents: A Survey. Zhiheng Xi (Fudan University) et al. arXiv. [paper] [code]

Memory capability
Raising the length limit of Transformers
  • [2023/05] Randomized Positional Encodings Boost Length Generalization of Transformers. Anian Ruoss (DeepMind) et al. arXiv. [paper] [code]
  • [2023/03] CoLT5: Faster Long-Range Transformers with Conditional Computation. Joshua Ainslie (Google Research) et al. arXiv. [paper]
  • [2022/03] Efficient Classification of Long Documents Using Transformers. Hyunji Hayley Park (University of Illinois) et al. arXiv. [paper] [code]
  • [2021/12] LongT5: Efficient Text-To-Text Transformer for Long Sequences. Mandy Guo (Google Research) et al. arXiv. [paper] [code]
  • [2019/10] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. Mike Lewis (Facebook AI) et al. arXiv. [paper] [code]
Summarizing memory
  • [2023/08] ExpeL: LLM Agents Are Experiential Learners. Andrew Zhao (Tsinghua University) et al. arXiv. [paper] [code]
  • [2023/08] ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. Chi-Min Chan (Tsinghua University) et al. arXiv. [paper] [code]
  • [2023/05] MemoryBank: Enhancing Large Language Models with Long-Term Memory. Wanjun Zhong (Harbin Institute of Technology) et al. arXiv. [paper] [code]
  • [2023/04] Generative Agents: Interactive Simulacra of Human Behavior. Joon Sung Park (Stanford University) et al. arXiv. [paper] [code]
  • [2023/04] Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System. Xinnian Liang (Beihang University) et al. arXiv. [paper] [code]
  • [2023/03] Reflexion: Language Agents with Verbal Reinforcement Learning. Noah Shinn (Northeastern University) et al. arXiv. [paper] [code]
  • [2023/05] RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. Wangchunshu Zhou (AIWaves) et al. arXiv. [paper] [code]
Compressing memories with vectors or data structures
  • [2023/07] Communicative Agents for Software Development. Chen Qian (Tsinghua University) et al. arXiv. [paper] [code]
  • [2023/06] ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory. Chenxu Hu (Tsinghua University) et al. arXiv. [paper] [code]
  • [2023/05] Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory. Xizhou Zhu (Tsinghua University) et al. arXiv. [paper] [code]
  • [2023/05] RET-LLM: Towards a General Read-Write Memory for Large Language Models. Ali Modarressi (LMU Munich) et al. arXiv. [paper] [code]
  • [2023/05] RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. Wangchunshu Zhou (AIWaves) et al. arXiv. [paper] [code]
Memory retrieval
  • [2023/08] Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents. Ziheng Huang (University of California, San Diego) et al. arXiv. [paper]
  • [2023/08] AgentSims: An Open-Source Sandbox for Large Language Model Evaluation. Jiaju Lin (PTA Studio) et al. arXiv. [paper] [project page] [code]
  • [2023/06] ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory. Chenxu Hu (Tsinghua University) et al. arXiv. [paper] [code]
  • [2023/05] MemoryBank: Enhancing Large Language Models with Long-Term Memory. Wanjun Zhong (Harbin Institute of Technology) et al. arXiv. [paper] [code]
  • [2023/04] Generative Agents: Interactive Simulacra of Human Behavior. Joon Sung Park (Stanford University) et al. arXiv. [paper] [code]
  • [2023/05] RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. Wangchunshu Zhou (AIWaves) et al. arXiv. [paper] [code]

Appendix

  • Examples of Memory Integration Implementations
  • Glossary of Memory-Related Terms