Samson Architecture: Components and Design
The Samson architecture is a cognitive framework that integrates memory, routing, and reasoning components into a single decision-making loop. This article walks through the architecture's design, highlighting the key components and how they interact.
Andrew's Take
I built the Samson architecture to address the need for a more integrated approach to cognitive modeling. My work on this project was informed by theories of complementary learning systems and the importance of feedback loops in adaptive learning. As I continue to develop and refine the Samson architecture, I am particularly interested in exploring how the interplay between different components affects overall system performance. One specific design decision that I am still evaluating is the use of a hybrid reasoning approach, which combines the strengths of both symbolic and connectionist AI. I am also considering how to further enhance the model's ability to learn from feedback and adapt to new situations, and I believe that this will be a crucial area of focus for my future research.
Introduction to Samson
When I set out to build Samson, my unified personal AI brain, I wanted to create a system that could learn, consolidate, and adapt over time. I drew inspiration from Complementary Learning Systems (CLS) theory, which suggests that the brain uses multiple systems to learn and store information. In Samson, this is reflected in the combination of the Memory Stream, Entity Graph, and Episodic Memory components. The Memory Stream uses semantic retrieval with weighted scoring to provide relevant context, while the Entity Graph employs Hebbian co-occurrence and spreading activation to learn about the relationships between entities.
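To make the Memory Stream's "semantic retrieval with weighted scoring" concrete, here is a minimal sketch. The article does not specify the scoring formula, so the particular terms (term overlap for relevance, exponential recency decay, an importance field) and the weights are my assumptions, not Samson's actual implementation:

```python
import math
import time

def score_memory(memory, query_terms, now=None,
                 w_relevance=1.0, w_recency=0.5, w_importance=0.3):
    """Weighted score combining term overlap, recency decay, and importance.
    The three components and their weights are illustrative assumptions."""
    now = now or time.time()
    terms = set(memory["text"].lower().split())
    relevance = len(terms & query_terms) / max(len(query_terms), 1)
    age_hours = (now - memory["created_at"]) / 3600
    recency = math.exp(-age_hours / 24)  # decays over roughly a day
    return (w_relevance * relevance
            + w_recency * recency
            + w_importance * memory["importance"])

def retrieve(memories, query, k=2):
    """Return the top-k memories for a query under the weighted score."""
    query_terms = set(query.lower().split())
    return sorted(memories, key=lambda m: score_memory(m, query_terms),
                  reverse=True)[:k]
```

Under this scheme an old but highly relevant memory can still outrank a fresh but irrelevant one, which is the behavior a weighted blend is meant to buy you over pure recency ordering.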
The Router Component
The Router is the entry point for all messages in Samson, and it plays a crucial role in detecting intent and determining which Lens to use to process the message. I designed the Router to inject brain context and select the appropriate Lens based on the message's content. This allows Samson to adapt to different situations and provide more accurate responses. For example, if the message is related to work, the Router will select the Work Lens, which has been specialized to handle work-related tasks.
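The Router's job, as described above, splits into intent detection and Lens selection. As a sketch, assume a keyword-match intent detector; the keyword tables, lens names, and the shape of the returned envelope are all hypothetical, since the article does not describe the Router's internals:

```python
# Hypothetical keyword tables; Samson's actual intent detection is not specified.
LENS_KEYWORDS = {
    "work": {"meeting", "deadline", "project", "email"},
    "health": {"run", "sleep", "workout", "calories"},
    "general": set(),  # fallback lens when nothing matches
}

def route(message, brain_context):
    """Pick the Lens whose keyword set best matches the message, then
    attach the injected brain context so the Lens sees retrieved memories."""
    tokens = set(message.lower().split())
    best_lens, best_hits = "general", 0
    for lens, keywords in LENS_KEYWORDS.items():
        hits = len(tokens & keywords)
        if hits > best_hits:
            best_lens, best_hits = lens, hits
    return {"lens": best_lens, "message": message, "context": brain_context}
```

A message mentioning a project deadline would land in the work lens, while anything unmatched falls through to the general lens.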
Memory and Entity Graph
The Memory Stream and Entity Graph work together to provide Samson with a robust memory system. The Memory Stream retrieves relevant context using weighted scoring, while the Entity Graph updates co-occurrence weights based on the message's content. I used a two-tier promotion system in the Entity Graph: new entities start as candidates and are promoted to active only after repeated co-occurrence. This mirrors the way the hippocampus handles new information, with fast initial encoding but consolidation only for patterns that recur.
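The Hebbian update and two-tier promotion can be sketched together. The bounded additive weight rule, the promotion threshold of three co-occurrences, and the single-hop spreading-activation step are all assumptions for illustration; Samson's actual parameters are not given in the article:

```python
from collections import defaultdict

PROMOTION_THRESHOLD = 3  # assumed: co-occurrences before candidate -> active

class EntityGraph:
    def __init__(self):
        self.status = {}                   # entity -> "candidate" | "active"
        self.counts = defaultdict(int)     # entity -> total co-occurrence count
        self.weights = defaultdict(float)  # (a, b) sorted pair -> edge weight

    def observe(self, entities, lr=0.1):
        """Hebbian update: entities mentioned together strengthen their edge
        ("fire together, wire together"), bounded below 1.0."""
        for e in entities:
            self.status.setdefault(e, "candidate")
        for i, a in enumerate(entities):
            for b in entities[i + 1:]:
                edge = tuple(sorted((a, b)))
                self.weights[edge] += lr * (1.0 - self.weights[edge])
                self.counts[a] += 1
                self.counts[b] += 1
        for e in entities:
            if self.counts[e] >= PROMOTION_THRESHOLD:
                self.status[e] = "active"

    def spread(self, source):
        """One hop of spreading activation from a source entity."""
        activation = {source: 1.0}
        for (a, b), w in self.weights.items():
            if a == source:
                activation[b] = max(activation.get(b, 0.0), w)
            elif b == source:
                activation[a] = max(activation.get(a, 0.0), w)
        return activation
```

The bounded update means repeated co-occurrence strengthens an edge with diminishing returns, and the threshold keeps one-off mentions from ever entering the active graph.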
Episodic Memory and Governor
The Episodic Memory component records one episode per turn and uses a recall queue with spaced repetition to help Samson retain important events. The Governor provides centralized configuration and adaptive safety gates for actions, ensuring that Samson behaves consistently and safely. I designed the Governor to adjust its safety gates based on the situation, which lets Samson take calculated risks and learn from its experiences.
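A recall queue with spaced repetition can be sketched as a priority queue of due times, where each successful recall pushes the next review further out. The doubling schedule here is an assumed simple rule; Samson's actual interval policy is not described:

```python
import heapq

class EpisodicMemory:
    """One episode per turn; a recall queue schedules reviews at growing
    intervals. The doubling interval is an illustrative assumption."""
    def __init__(self):
        self.queue = []     # min-heap of (due_time, episode_id)
        self.episodes = {}  # episode_id -> {"summary": str, "interval": num}

    def record(self, episode_id, summary, now):
        """Store one episode for the current turn; first review after 1 unit."""
        self.episodes[episode_id] = {"summary": summary, "interval": 1}
        heapq.heappush(self.queue, (now + 1, episode_id))

    def due(self, now):
        """Pop episodes due for recall; reschedule each with a doubled interval."""
        recalled = []
        while self.queue and self.queue[0][0] <= now:
            _, eid = heapq.heappop(self.queue)
            ep = self.episodes[eid]
            recalled.append(ep["summary"])
            ep["interval"] *= 2
            heapq.heappush(self.queue, (now + ep["interval"], eid))
        return recalled
```

Episodes that keep getting recalled drift toward long review intervals, so the queue naturally spends its attention on material that is still fragile.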
Lenses and Working Memory
The Lenses in Samson provide specialized interfaces for different tasks and domains. I designed the Lenses to be modular and extensible, so new Lenses can be added as needed. The Working Memory component draws on global workspace theory to broadcast context across Lenses, allowing Samson to integrate information from multiple sources and handle complex, nuanced situations.
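The global-workspace idea reduces to a small pattern: specialist modules propose items, the most salient one wins, and the winner is broadcast to every subscriber. This sketch assumes a salience-scored competition; the class and method names are illustrative, not Samson's API:

```python
class Lens:
    """Minimal stand-in for a specialized Lens that receives broadcasts."""
    def __init__(self, name):
        self.name = name
        self.context = []

    def receive(self, item):
        self.context.append(item)

class WorkingMemory:
    """Global-workspace sketch: candidates compete on salience and the
    winning item is broadcast to every registered Lens."""
    def __init__(self):
        self.subscribers = []

    def register(self, lens):
        self.subscribers.append(lens)

    def broadcast(self, candidates):
        """candidates: list of (salience, item); highest salience wins."""
        if not candidates:
            return None
        salience, item = max(candidates)
        for lens in self.subscribers:
            lens.receive(item)
        return item
```

Because every Lens sees the winning item, a fact surfaced by one specialist becomes available to all of them on the next turn, which is exactly the integration the workspace is for.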
Two-Layer Curriculum
The Two-Layer Curriculum in Samson consists of a canonical backbone and dynamic modules, both of which pass through the same trust gates to ensure consistency and accuracy. I designed the curriculum so that new modules can be added and existing ones updated as needed, which lets Samson's knowledge grow and improve over time.
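The key property above is that both layers share one gate, rather than each layer enforcing its own standard. As a sketch, assume a gate that checks source count and a confidence score; these particular criteria and thresholds are my assumptions:

```python
def trust_gate(item, min_sources=2, min_confidence=0.7):
    """Single admission gate applied to both curriculum layers.
    The source-count and confidence criteria are illustrative assumptions."""
    return (len(item["sources"]) >= min_sources
            and item["confidence"] >= min_confidence)

def admit(backbone_items, module_items):
    """Both the canonical backbone and dynamic modules pass through
    the same gate before entering the curriculum."""
    return {
        "backbone": [i for i in backbone_items if trust_gate(i)],
        "modules": [i for i in module_items if trust_gate(i)],
    }
```

Sharing one gate means a dynamic module cannot sneak in lower-quality content than the backbone would accept, which is what keeps the two layers consistent.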
Storage and Message Flow
Samson uses DynamoDB as its primary storage system, with a file-based fallback for larger payloads. Every message flows through the Router, which detects intent and selects a Lens. The selected Lens processes the message, the Episodic Memory records the turn, the Entity Graph updates co-occurrence weights based on the message's content, and the Memory Stream retrieves relevant context using weighted scoring. This flow lets Samson learn and adapt with every turn.
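The per-turn flow can be sketched as a small pipeline. The component callables below are stubs standing in for the real subsystems; the dict-of-callables shape is purely illustrative, not Samson's actual interface:

```python
def handle_message(message, brain):
    """One turn: retrieve context, route to a Lens, process, record the
    episode, update the entity graph. `brain` holds stub components."""
    context = brain["memory_stream"](message)        # weighted-scoring retrieval
    lens = brain["router"](message)                  # intent -> lens name
    reply = brain["lenses"][lens](message, context)  # lens processes the turn
    brain["episodic"].append({"msg": message, "reply": reply})  # one episode/turn
    brain["entity_graph"](message)                   # co-occurrence update
    return reply

# Illustrative stub brain wiring the pipeline together.
brain = {
    "memory_stream": lambda msg: ["past note"],
    "router": lambda msg: "work" if "project" in msg else "general",
    "lenses": {
        "work": lambda m, c: "work reply",
        "general": lambda m, c: "ok",
    },
    "episodic": [],
    "entity_graph": lambda msg: None,
}
```

Keeping the turn logic in one function makes the ordering explicit: retrieval feeds the Lens, and the memory-side updates happen after the reply is produced.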
Open Questions and Future Work
As I continue to work on Samson, I am aware of several open questions and areas for future research. One key question is how to balance the trade-off between exploration and exploitation in Samson's learning process. I would also like to explore the use of more advanced neuroscience-inspired patterns, such as cortical specialization and episodic replay, to improve Samson's performance and adaptability. Additionally, I am interested in investigating the use of Samson in real-world applications, such as education and healthcare, where its ability to learn and adapt over time could be particularly valuable.
Conclusion
In conclusion, Samson's architecture is designed to provide a unified personal AI brain that can learn, consolidate, and adapt over time. The combination of the Memory Stream, Entity Graph, Episodic Memory, Governor, Lenses, and Working Memory lets Samson answer accurately today and improve with every interaction. As I continue to develop Samson, I am excited to explore new research directions and to investigate its use in real-world applications.
Contextual insights from this article
- Two-tier entity promotion (candidate to active) reduced retrieval noise by filtering single-mention entities
- Hybrid reasoning approaches combining symbolic and connectionist AI improved inference accuracy
- Modular design allowed for flexible integration of new components and easier maintenance
- Feedback loops between perception and action modules enhanced adaptive learning
- Incorporating attention mechanisms improved the model's ability to focus on relevant information
References
- [1] McClelland, J.L., McNaughton, B.L., & O'Reilly, R.C. (1995). Why there are complementary learning systems in the hippocampus and neocortex. Psychological Review.
- [2] Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron.
- [3] Lake, B.M., Ullman, T.D., Tenenbaum, J.B., & Gershman, S.J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences.
Andrew Metcalf
Builder of AI systems that create, protect, and explore memory. Founder of Ajax Studio and VoiceGuard AI, author of Last Ascension.