Update comments #2
Conversation
022113b to ec57b19
README.md (Outdated)

```diff
- This repo provides a simple example of a long-term memory service you can build and deploy using LangGraph.
+ Memory is a powerful way to improve and personalize applications, allowing storage of information (e.g., a user-specific profile or memories) that can be used to inform responses or decisions across multiple interactions. This template provides a simple example of a long-term memory service you can build and deploy using LangGraph.
```
"specific" probably unnecessary
README.md (Outdated)

```diff
- The memory graph handles debouncing when processing individual conversations (to help deduplicate work) and supports continuous updates to a single "memory schema" as well as "event-based" memories that can be fetched by recency and filtered.
+ (1) `Chatbot Graph`: This is a simple chatbot that interacts with a user.
```
I think I'd re-order as:
- Memory graph: ... creating and re-contextualizing memories ... This is developer facing.
- Memory Storage: provided through LangGraph's BaseStore... (etc. - link to base store somehow)
- Chatbot Graph: This is a simple example chatbot that shows how to connect to your memory service. Users interact with this bot.
README.md (Outdated)

```diff
- ### Test in LangGraph Studio
+ ### Try out in LangGraph Studio
```
README.md (Outdated)

```diff
- If you want to test locally, [install the LangGraph Studio desktop app](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#download).
+ If you want to test locally, [open this template in LangGraph Studio](https://langgraph-studio.vercel.app/templates/open?githubUrl=https://github.com/langchain-ai/memory-template).
```
README.md (Outdated)

```
This chat bot reads from your memory graph's `Store` to easily list extracted memories.

### Memory Storage

The LangGraph API comes with a built-in memory storage layer that can be used to store and retrieve information across threads.
```
I'd link to BaseStore conceptual doc or ref doc
README.md (Outdated)

```
The central points are that it:

1. It is accessible to both the `chatbot` and the `memory_graph` in all nodes.
```
(since they are running in the same deployment)
README.md (Outdated)

```
1. It is accessible to both the `chatbot` and the `memory_graph` in all nodes.
2. It provides an interface for storing (`put` method) and retrieving (`search` method) memories in a namespaced manner.

Learn more about the Memory Storage layer [here](https://langchain-ai.github.io/langgraph/how-tos/memory/shared-state/).
```
I think say LangGraph's storage layer
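To make the namespaced `put`/`search` interface concrete, here is a toy, dict-backed stand-in; this is a sketch of the shape only, not LangGraph's actual `BaseStore` (which adds persistence, filtering, and search options), and the namespace values used are illustrative:

```python
# Toy stand-in for a namespaced memory store with the put/search
# interface described above. Namespaces are tuples (e.g. keyed by user),
# so both the chatbot and the memory graph can share one store while
# keeping each user's memories separate.
class ToyStore:
    def __init__(self):
        self._data = {}  # namespace tuple -> {key: value}

    def put(self, namespace: tuple, key: str, value: dict) -> None:
        # Store or overwrite one memory under (namespace, key).
        self._data.setdefault(namespace, {})[key] = value

    def search(self, namespace: tuple) -> list:
        # Retrieve all memories stored under a namespace.
        return list(self._data.get(namespace, {}).values())


store = ToyStore()
# Illustrative namespaces: one per user keeps memories isolated.
store.put(("memories", "user-123"), "mem-1", {"content": "Prefers dark mode"})
store.put(("memories", "user-456"), "mem-1", {"content": "Lives in Paris"})
print(store.search(("memories", "user-123")))  # → [{'content': 'Prefers dark mode'}]
```

Because the two graphs run in the same deployment, they see the same store instance, which is what makes this cross-graph sharing work.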
README.md (Outdated)

```
The `chatbot` graph, defined in [graph.py](./src/chatbot/graph.py), has two nodes, `bot` and `schedule_memories`.

The `chatbot` is invoked with a `user_id` supplied by configuration.
```
We have a default user ID for easy testing
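One hypothetical way a node could read the configurable `user_id` with a fallback default for easy testing; the helper name and default value below are illustrative assumptions, not the template's actual code:

```python
# Hypothetical sketch: read user_id from a LangGraph-style config dict,
# where user-supplied values are nested under "configurable".
DEFAULT_USER_ID = "default-user"  # assumed default, not taken from the template

def get_user_id(config: dict) -> str:
    # Fall back to the default so the graph works without any config.
    return config.get("configurable", {}).get("user_id", DEFAULT_USER_ID)

print(get_user_id({"configurable": {"user_id": "alice"}}))  # → alice
print(get_user_id({}))  # → default-user
```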
README.md (Outdated)

```
## How it works

This chat bot reads from your memory graph's `Store` to easily list extracted memories.

### Memory Storage
```
These sections can be more concise I think. It reads like bullet points but the bullet points aren't sufficiently punchy/high-entropy
README.md (Outdated)

```diff
- Connecting to this type of memory service typically follows an interaction pattern similar to the one outlined below:
+ The `memory_graph` graph, defined in [graph.py](./src/memory_graph/graph.py) incoperates two different concepts:
```

Suggested change:

```diff
- The `memory_graph` graph, defined in [graph.py](./src/memory_graph/graph.py) incoperates two different concepts:
+ The `memory_graph` graph, defined in [graph.py](./src/memory_graph/graph.py) incorporates two different concepts:
```
Maybe just me, but when something leads with "it uses 2 concepts" it feels hard & abstract. "Why" does it use these concepts? How are they going to relate? It feels very abstract.
I learn better if it goes by data flow/needs, something like:
The memory service needs to know two things to properly organize memories for your bot:
- What should each memory look like? (to focus on what is most relevant to your application context)
- How to update each memory? (should we continuously update a fixed schema, or should we update or insert more atomic memories to search for later)
We configure this behavior using memory schemas. Each schema tells the graph the structure of a single memory and how to manage that memory when new information is received.
The graph has a couple defaults to get you started. Let's check them out below:
... (show the two things; explain the different types)
... then, as you're explaining the mechanics of each type, explain trustcall
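The two memory types described above (a single continuously-patched schema vs. insert-style atomic memories) can be sketched roughly like this; the field names and update rules are illustrative assumptions, not the template's actual schemas:

```python
from dataclasses import dataclass, field

# "Patch"-style memory: one document per user, with a fixed schema
# that is continuously updated as new information arrives.
@dataclass
class UserProfile:
    name: str = ""
    preferences: dict = field(default_factory=dict)

    def patch(self, updates: dict) -> None:
        # Fold new information into the single fixed schema.
        for key, value in updates.items():
            if key == "preferences":
                self.preferences.update(value)
            else:
                setattr(self, key, value)


# "Insert"-style memory: many atomic entries, appended over time
# and searched/filtered later rather than updated in place.
events: list = []

def insert_event(content: str) -> None:
    events.append({"content": content})


profile = UserProfile()
profile.patch({"name": "Alice", "preferences": {"theme": "dark"}})
profile.patch({"preferences": {"language": "en"}})  # later update merges in
insert_event("Asked about deployment options")
print(profile)
```

Each schema, in other words, fixes both the structure of a memory and the update rule (patch vs. insert) the graph applies when new information is received.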