
[Question]: how to add memory to existing RAG #17323

Open
JINO-ROHIT opened this issue Dec 19, 2024 · 6 comments
Labels
question Further information is requested

Comments

@JINO-ROHIT

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

I'm trying to add a memory component to my already existing RAG pipeline. How can I do this? Here is my existing code:

```python
vector_retriever_metadata = index.as_retriever(similarity_top_k=2)
retriever_metadata = RecursiveRetriever(
    "vector",
    retriever_dict={"vector": vector_retriever_metadata},
    node_dict=all_nodes_dict,
    verbose=True,
)

response_synthesizer = get_response_synthesizer(
    response_mode="compact", service_context=service_context, streaming=True
)

query_engine = RetrieverQueryEngine.from_args(
    retriever_metadata, response_synthesizer=response_synthesizer
)

nodes = retriever_metadata.retrieve(prompt)
response = query_engine.query(prompt)

full_response = ""
for text in response.response_gen:
    full_response += text
```

@JINO-ROHIT JINO-ROHIT added the question Further information is requested label Dec 19, 2024

dosubot bot commented Dec 19, 2024

To add a memory component to your existing RAG setup, you can integrate Activeloop's Deep Memory feature. Here's a step-by-step guide on how to do this:

  1. Install Required Packages:

    %pip install llama-index-vector-stores-deeplake
    %pip install llama-index-llms-openai
  2. Set Up Environment:

    import nest_asyncio
    import os
    import getpass
    
    nest_asyncio.apply()
  3. Install Additional Dependencies:

    !pip install deeplake beautifulsoup4 html2text tiktoken openai llama-index python-dotenv
  4. Create and Populate Vector Store:

    from llama_index.core import VectorStoreIndex, StorageContext
    from llama_index.vector_stores.deeplake import DeepLakeVectorStore
    from llama_index.llms.openai import OpenAI
    
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API token: ")
    os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("Enter your ActiveLoop API token: ")
    
    token = os.getenv("ACTIVELOOP_TOKEN")
    
    vector_store = DeepLakeVectorStore(
        dataset_path="hub://activeloop-test/deeplake_docs_deepmemory2",
        overwrite=False,
        runtime={"tensor_db": True},
        token=token,
    )
  5. Create Modules and Initialize Deep Memory Retriever:

    from llama_index.core.node_parser import SimpleNodeParser

    def create_modules(vector_store, docs=[], populate_vector_store=True):
        if populate_vector_store:
            node_parser = SimpleNodeParser.from_defaults(chunk_size=512)
            nodes = node_parser.get_nodes_from_documents(docs)
        else:
            nodes = []
    
        for idx, node in enumerate(nodes):
            node.id_ = f"node_{idx}"
    
        llm = OpenAI(model="gpt-4")
        storage_context = StorageContext.from_defaults(vector_store=vector_store)
        return storage_context, nodes, llm
    
    storage_context, nodes, llm = create_modules(docs=docs, vector_store=vector_store)
    
    vector_index = VectorStoreIndex(nodes, storage_context=storage_context)
    deep_memory_retriever = vector_index.as_retriever(similarity_top_k=4, deep_memory=True)
  6. Integrate Deep Memory with Your Existing Setup:
    Replace your current retriever with the deep_memory_retriever:

    retriever_metadata = RecursiveRetriever(
        "vector",
        retriever_dict={"vector": deep_memory_retriever},
        node_dict=all_nodes_dict,
        verbose=True,
    )

This setup will allow you to integrate a memory component into your RAG system using Activeloop's Deep Memory, enhancing retrieval accuracy and efficiency [1].

To continue talking to Dosu, mention @dosu.



@JINO-ROHIT
Author

Not this. I want to add memory so the system remembers my previous conversational context.

@JINO-ROHIT
Author

@dosu @logan-markewich is it possible to convert a parent-child retriever to have memory?


dosubot bot commented Dec 20, 2024

To add memory to your existing RAG setup to remember previous context, you can utilize the memory classes available in the LlamaIndex framework. These include GPTIndexMemory, GPTIndexChatMemory, and others like ChatMemoryBuffer and VectorMemory [1][2][3].

To convert a parent-child retriever to have memory, you would need to integrate these memory components into your retriever setup. Here’s a general approach:

  1. Choose a Memory Class: Depending on your needs, select a memory class such as GPTIndexChatMemory for chat applications or VectorMemory for vector-based memory storage.

  2. Integrate Memory with Retriever: Modify your retriever to utilize the chosen memory class. This involves saving context during retrieval and using it to enhance future queries.

  3. Implement Context Retention: Use methods like save_context to store relevant information and load_memory_variables to retrieve it when needed.

  4. Modify Query Engine: Ensure your query engine is capable of handling the additional context provided by the memory component.

This approach will allow your system to retain and utilize context across interactions, enhancing the capabilities of your RAG setup.
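The steps above can be sketched as a minimal, framework-free memory buffer. This is a hypothetical illustration of the `save_context`/`load_memory_variables` pattern only; the class `SimpleChatMemory` and helper `query_with_memory` are invented for this sketch and are not LlamaIndex APIs (in practice you would reach for a class like `ChatMemoryBuffer` instead):

```python
# Hypothetical sketch: retain prior turns and prepend them to the next prompt.
class SimpleChatMemory:
    def __init__(self, max_turns=10):
        self.max_turns = max_turns
        self.turns = []  # list of (user, assistant) message pairs

    def save_context(self, user_msg, assistant_msg):
        # Store the exchange, keeping only the most recent max_turns pairs.
        self.turns.append((user_msg, assistant_msg))
        self.turns = self.turns[-self.max_turns:]

    def load_memory_variables(self):
        # Render the stored history as plain text for the next prompt.
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)


def query_with_memory(query_engine, memory, prompt):
    # Prepend prior context so the engine sees the conversation so far,
    # then record the new exchange for future queries.
    history = memory.load_memory_variables()
    full_prompt = f"{history}\nUser: {prompt}" if history else prompt
    response = query_engine(full_prompt)
    memory.save_context(prompt, response)
    return response
```

Here `query_engine` is any callable mapping a prompt string to a response string; wrapping the `query_engine.query` call from the snippet above in such a callable would be the integration point.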


@JINO-ROHIT
Author

@dosu can you take my code snippet and convert it?


dosubot bot commented Dec 20, 2024

This is a tough nut to crack, could really use your help on this, @logan-markewich.

