System Info
Python 3.12
pandasai 2.4.0
🐛 Describe the bug
Hello, I am using PandasAI in my project and want to compare the retrieval accuracy of different models. I have run into a problem: when I send the same query against the same Excel sheet to different models, such as ChatGPT and a local LLM, the result from the first model seems to influence the result from the second. For example, if the first model answers incorrectly, the second model returns the exact same wrong answer, and it responds almost instantly, as if it had memorized the wrong answer instead of performing a fresh retrieval. Is this normal, and how can I avoid it?

PandasAI has a caching mechanism that is likely causing the behavior you're experiencing. The cache stores the results of previous queries and replays them for matching prompts, even when a different LLM is configured, which is why the second model repeats the first model's wrong answer and responds almost instantly. Caching is enabled by default and is intended to speed up repeated queries and reduce API call costs. To avoid this in a model comparison, disable the cache by setting the enable_cache parameter to False when creating the PandasAI object. You can also clear the cache by calling the clear_cache() method, or destroy it using the destroy() method of the Cache class, which removes all cache files [1][2][3].
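For a side-by-side comparison, the safest setup is to clear any existing cache once and then run every model with caching disabled. Below is a minimal sketch against the pandasai 2.x SmartDataframe API; the file name, query, API key, and local endpoint are placeholders, and LocalLLM assumes your local model is exposed through an OpenAI-compatible server such as Ollama, so adjust it to your setup.

```python
import pandas as pd
import pandasai
from pandasai import SmartDataframe
from pandasai.llm import OpenAI
from pandasai.llm.local_llm import LocalLLM

# Wipe any answers cached by earlier runs so neither model starts
# with a pre-recorded result.
pandasai.clear_cache()

df = pd.read_excel("data.xlsx")                  # placeholder: your Excel sheet
query = "Which product had the highest sales?"   # placeholder query

# The two LLMs under test. LocalLLM's api_base and model name are
# assumptions for an Ollama-style OpenAI-compatible endpoint.
llms = {
    "chatgpt": OpenAI(api_token="YOUR_OPENAI_API_KEY"),
    "local": LocalLLM(api_base="http://localhost:11434/v1", model="llama3"),
}

for name, llm in llms.items():
    # enable_cache=False forces a fresh run for each model, so one
    # model's cached answer can never leak into the other's result.
    sdf = SmartDataframe(df, config={"llm": llm, "enable_cache": False})
    print(f"{name}: {sdf.chat(query)}")
```

With this setup, each model generates and executes its own code for the query, so a wrong answer from one model can no longer be replayed for the other. If your pandasai version does not expose the top-level clear_cache() helper, deleting the cache files on disk should have the same effect.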