
Update comments #2

Merged 6 commits into main on Oct 5, 2024
Conversation

rlancemartin (Contributor)

No description provided.

README.md Outdated
Comment on lines 6 to 7

This repo provides a simple example of a long-term memory service you can build and deploy using LangGraph.
Memory is a powerful way to improve and personalize applications, allowing storage of information (e.g., a user-specific profile or memories) that can be used to inform responses or decisions across multiple interactions. This template provides a simple example of a long-term memory service you can build and deploy using LangGraph.
Contributor:

"specific" probably unnecessary

README.md Outdated

The memory graph handles debouncing when processing individual conversations (to help deduplicate work) and supports continuous updates to a single "memory schema" as well as "event-based" memories that can be fetched by recency and filtered.
(1) `Chatbot Graph`: This is a simple chatbot that interacts with a user.
Contributor:

I think I'd re-order as:

  1. Memory graph: ... creating and re-contextualizing memories ... This is developer-facing.
  2. Memory Storage: provided through LangGraph's BaseStore... (etc. - link to base store somehow)
  3. Chatbot Graph: This is a simple example chatbot that shows how to connect to your memory service. Users interact with this bot.
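The debouncing behavior the README mentions (deferring memory processing until a conversation goes quiet, so a burst of messages triggers only one run) can be sketched as below. This is an illustrative stand-in, not the template's actual implementation; the class name, delay value, and callback are all hypothetical.

```python
import threading

class Debouncer:
    """Defer processing until no new events arrive for `delay` seconds.

    Each call to trigger() cancels any pending timer and starts a new one,
    so only the last call in a burst actually fires the callback.
    """

    def __init__(self, delay, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self, *args):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay, self.callback, args)
            self._timer.start()

# Three rapid "new message" events result in a single processing run.
processed = []
d = Debouncer(0.1, lambda conv_id: processed.append(conv_id))
for _ in range(3):
    d.trigger("conversation-1")
d._timer.join()   # wait for the pending timer (convenience for this demo)
print(processed)  # ['conversation-1']
```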

README.md Outdated

### Test in LangGraph Studio
Contributor:

Suggested change
### Test in LangGraph Studio
### Try out in LangGraph Studio

README.md Outdated

If you want to test locally, [install the LangGraph Studio desktop app](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#download).
Contributor:

Suggested change
If you want to test locally, [install the LangGraph Studio desktop app](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#download).
If you want to test locally, [open this template in LangGraph Studio](https://langgraph-studio.vercel.app/templates/open?githubUrl=https://github.com/langchain-ai/memory-template).

README.md Outdated
This chat bot reads from your memory graph's `Store` to easily list extracted memories.
### Memory Storage

The LangGraph API comes with a built-in memory storage layer that can be used to store and retrieve information across threads.
Contributor:

I'd link to BaseStore conceptual doc or ref doc

README.md Outdated

The central points are that:

1. It is accessible to both the `chatbot` and the `memory_graph` in all nodes.
Contributor:

(since they are running in the same deployment)

README.md Outdated
1. It is accessible to both the `chatbot` and the `memory_graph` in all nodes.
2. It provides an interface for storing (`put` method) and retrieving (`search` method) memories in a namespaced manner.

Learn more about the Memory Storage layer [here](https://langchain-ai.github.io/langgraph/how-tos/memory/shared-state/).
Contributor:

I think say LangGraph's storage layer
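The namespaced `put`/`search` contract described in the excerpt can be illustrated with a toy in-memory store. LangGraph's real `BaseStore` lives in the `langgraph` package and has a richer API; the stand-in below is a simplified sketch of the idea only, and its class name and method semantics are assumptions.

```python
class ToyStore:
    """Minimal stand-in for a namespaced key-value memory store."""

    def __init__(self):
        self._data = {}  # (namespace, key) -> value

    def put(self, namespace, key, value):
        """Store `value` under a (namespace tuple, key) pair."""
        self._data[(namespace, key)] = value

    def search(self, namespace):
        """Return all (key, value) pairs in the given namespace."""
        return [(k, v) for (ns, k), v in self._data.items() if ns == namespace]

store = ToyStore()
# Namespacing by user keeps one user's memories separate from another's,
# while remaining reachable from any node in any graph sharing the store.
store.put(("memories", "user-123"), "profile", {"name": "Ada"})
store.put(("memories", "user-456"), "profile", {"name": "Grace"})
print(store.search(("memories", "user-123")))  # [('profile', {'name': 'Ada'})]
```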

README.md Outdated

The `chatbot` graph, defined in [graph.py](./src/chatbot/graph.py), has two nodes, `bot` and `schedule_memories`.

The `chatbot` is invoked with a `user_id` supplied by configuration.
Contributor:

We have a default user ID for easy testing
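The pattern discussed here (a per-invocation `user_id` with a fallback default for easy testing) can be sketched as follows. The config shape mimics LangGraph's `configurable` dict convention, but the helper function and default value are hypothetical, not the template's code.

```python
DEFAULT_USER_ID = "default-user"  # hypothetical default for local testing

def get_user_id(config):
    """Read user_id from an invocation config, falling back to a default."""
    return (config or {}).get("configurable", {}).get("user_id", DEFAULT_USER_ID)

print(get_user_id({"configurable": {"user_id": "user-123"}}))  # user-123
print(get_user_id({}))                                         # default-user
print(get_user_id(None))                                       # default-user
```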

README.md Outdated

## How it works

This chat bot reads from your memory graph's `Store` to easily list extracted memories.
### Memory Storage
Contributor:

These sections can be more concise I think. It reads like bullet points but the bullet points aren't sufficiently punchy/high-entropy

README.md Outdated

Connecting to this type of memory service typically follows an interaction pattern similar to the one outlined below:
The `memory_graph` graph, defined in [graph.py](./src/memory_graph/graph.py) incoperates two different concepts:
Contributor:

Suggested change
The `memory_graph` graph, defined in [graph.py](./src/memory_graph/graph.py) incoperates two different concepts:
The `memory_graph` graph, defined in [graph.py](./src/memory_graph/graph.py), incorporates two different concepts:

Contributor:

Maybe just me, but when something leads with "it uses 2 concepts" it feels hard & abstract. "Why" does it use these concepts? How are they going to relate? It feels very abstract.

I learn better if it goes by data flow/needs, something like:

The memory service needs to know two things to properly organize memories for your bot:

  • What should each memory look like? (to focus on what is most relevant to your application context)
  • How to update each memory? (should we continuously update a fixed schema, or should we update or insert more atomic memories to search for later)

We configure this behavior using memory schemas. Each schema tells the graph the structure of a single memory and how to manage that memory when new information is received.

The graph has a couple defaults to get you started. Let's check them out below:
... (show the two things; explain the different types)

... then as you're explaining the mechanics of each type, explain trustcall
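The reviewer's two questions (what should each memory look like, and how should it be updated) map naturally onto schema definitions. The dataclasses below are an illustrative sketch of a continuously-patched profile schema versus insert-style atomic event memories; the names and fields are assumptions, not the template's real schemas.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """'Patch' style: one document per user, continuously updated in place."""
    name: str = ""
    interests: list = field(default_factory=list)

@dataclass
class EventMemory:
    """'Insert' style: atomic memories appended over time, searched later."""
    content: str
    timestamp: float

# Patch: merge new information into the single existing profile.
profile = UserProfile()
profile.name = "Ada"
profile.interests.append("chess")

# Insert: each new fact becomes its own record, fetched by recency.
events = [EventMemory("Prefers morning meetings", 1728000000.0)]
events.append(EventMemory("Asked about LangGraph", 1728003600.0))
latest = max(events, key=lambda e: e.timestamp)
print(latest.content)  # Asked about LangGraph
```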

@hinthornw hinthornw merged commit e2ec497 into main Oct 5, 2024
0 of 2 checks passed