conversational failed #1405

Open
HAL9KKK opened this issue Oct 21, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@HAL9KKK

HAL9KKK commented Oct 21, 2024

System Info

OS version: windows 10
Python version: 3.12.7
The current version of pandasai being used: 2.3.0

🐛 Describe the bug

It seems the pipeline is not able to correctly categorize the conversation:

Here is the config code for the Agent:

```python
config = {"llm": llm, "verbose": True, "direct_sql": False, "enable_cache": True, "response_parser": StreamlitResponse2, "conversational": True}
```
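For completeness, this is roughly how the Agent is created with that config (a minimal sketch of my setup; the Azure credentials are placeholders and the exact AzureOpenAI arguments may differ by pandasai version):

```python
import pandas as pd
from pandasai import Agent
from pandasai.llm import AzureOpenAI  # assumption: argument names below may vary by version

# Placeholder Azure OpenAI credentials -- replace with your own deployment details
llm = AzureOpenAI(
    api_token="<AZURE_OPENAI_KEY>",
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_version="2024-02-01",
    deployment_name="<gpt-deployment>",
)

df = pd.DataFrame({"country": ["Italy", "France"], "sales": [100, 200]})

config = {
    "llm": llm,
    "verbose": True,
    "direct_sql": False,
    "enable_cache": True,
    # "response_parser": StreamlitResponse2,  # my custom Streamlit parser, defined elsewhere in the app
    "conversational": True,
}

agent = Agent([df], config=config)
agent.chat("hello")         # works: returns a plain conversational answer
agent.chat("how are you?")  # fails: the pipeline tries to generate code
```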

Here is the pipeline log: the first attempt with "hello" is handled correctly, but with "how are you?" the pipeline tries to generate code instead of skipping those steps:

```
2024-10-21 19:03:01 [INFO] Question: hello
2024-10-21 19:03:01 [INFO] Running PandasAI with azure-openai LLM...
2024-10-21 19:03:01 [INFO] Prompt ID: 0e905df9-dec5-4279-9c2e-cd16bff1b15a
2024-10-21 19:03:01 [INFO] Executing Pipeline: GenerateChatPipeline
2024-10-21 19:03:01 [INFO] Executing Step 0: ValidatePipelineInput
2024-10-21 19:03:01 [INFO] Executing Step 1: CacheLookup
2024-10-21 19:03:01 [INFO] Using cached response
2024-10-21 19:03:01 [INFO] Executing Step 2: PromptGeneration
2024-10-21 19:03:01 [INFO] Executing Step 2: Skipping...
2024-10-21 19:03:01 [INFO] Executing Step 3: CodeGenerator
2024-10-21 19:03:01 [INFO] Executing Step 3: Skipping...
2024-10-21 19:03:01 [INFO] Executing Step 4: CachePopulation
2024-10-21 19:03:01 [INFO] Executing Step 4: Skipping...
2024-10-21 19:03:01 [INFO] Executing Step 5: CodeCleaning
2024-10-21 19:03:01 [INFO] 
Code running:

result = {'type': 'string', 'value': 'Hello! How can I assist you today?'}
print(result)

2024-10-21 19:03:01 [INFO] Executing Step 6: CodeExecution
2024-10-21 19:03:01 [INFO] Executing Step 7: ResultValidation
2024-10-21 19:03:01 [INFO] Answer: {'type': 'string', 'value': 'Hello! How can I assist you today?'}
2024-10-21 19:03:01 [INFO] Executing Step 8: ResultParsing
2024-10-21 19:03:07 [INFO] Question: how are you?
2024-10-21 19:03:07 [INFO] Running PandasAI with azure-openai LLM...
2024-10-21 19:03:07 [INFO] Prompt ID: 03e3f142-d68f-48b7-9475-f7e163e49234
2024-10-21 19:03:07 [INFO] Executing Pipeline: GenerateChatPipeline
2024-10-21 19:03:07 [INFO] Executing Step 0: ValidatePipelineInput
2024-10-21 19:03:07 [INFO] Executing Step 1: CacheLookup
2024-10-21 19:03:07 [INFO] Executing Step 2: PromptGeneration
2024-10-21 19:03:07 [INFO] Using prompt:
....
2024-10-21 19:03:07 [INFO] Executing Step 3: CodeGenerator
2024-10-21 19:03:11 [ERROR] Pipeline failed on step 3: No code found in the response
```


dosubot bot commented Oct 21, 2024

The issue you're experiencing seems to be related to the pipeline not correctly categorizing conversational inputs, specifically when it should skip steps for general greetings like "How are you?".

In the PandasAI pipeline, steps like PromptGeneration and CodeGenerator are typically skipped if the response is found in the cache, as determined by the is_cached method [1]. However, it seems that the pipeline is not correctly identifying "How are you?" as a conversational input that should be skipped.

To address this, you might want to check the logic that determines when to skip these steps for conversational inputs. Ensure that the configuration or logic responsible for identifying and categorizing conversational inputs is correctly implemented. This might involve reviewing how the conversational parameter is being used in your setup, as its expected behavior is not clearly documented in the available information.
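Purely as an illustration of the skip pattern described above (this is not the actual pandasai source; the names `context`, `found_in_cache`, and the step class here are assumptions), a cache-aware pipeline step usually guards its `execute` method like this:

```python
class PromptGeneration:
    """Illustrative sketch of a pipeline step that skips itself when a cached answer exists."""

    def execute(self, context: dict) -> dict:
        # Hypothetical flag set by the earlier CacheLookup step; if the answer
        # was already cached, there is nothing to generate.
        if context.get("found_in_cache"):
            print("Executing Step 2: Skipping...")
            return context
        # Otherwise build the code-generation prompt as usual.
        context["prompt"] = f"Write pandas code to answer: {context['query']}"
        return context
```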

If the issue persists, consider reviewing the implementation of the SemanticChatPipeline and related components like SemanticResultParser, which are responsible for parsing and categorizing conversational inputs [2][3]. Adjustments in these areas might help in correctly categorizing and skipping steps for conversational inputs.

To continue talking to Dosu, mention @dosu.

@dosubot dosubot bot added the bug Something isn't working label Oct 21, 2024
@HAL9KKK

HAL9KKK commented Oct 22, 2024

In the second question, "how are you?", the pipeline tries to generate and execute code, and of course it fails. What is the pipeline logic that is supposed to understand that this is only a conversation rather than a request to perform some analysis of the data?


dosubot bot commented Oct 22, 2024

The PandasAI pipeline uses caching to determine when to skip certain steps like PromptGeneration and CodeGenerator. These steps are skipped if the context indicates that the result is already cached. However, the specific logic for distinguishing between conversational inputs and analytical requests isn't clearly documented in the available information.

To address the issue where the pipeline incorrectly attempts to generate code for a conversational input like "How are you?", you should review the configuration and logic that categorizes inputs. This might involve checking how the conversational parameter is implemented and ensuring that it correctly influences the pipeline's behavior. Additionally, examining components like the SemanticChatPipeline and SemanticResultParser, which are responsible for parsing and categorizing inputs, could provide insights into how the pipeline differentiates between conversational and analytical requests. Adjustments in these areas might help in correctly categorizing and skipping steps for purely conversational inputs.
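As a practical workaround until the categorization logic is clarified, small talk can also be intercepted at the application level before it ever reaches `agent.chat` (a hypothetical helper, not a pandasai feature):

```python
import re

# Hypothetical application-level guard: answer obvious small talk directly
# instead of sending it through the code-generation pipeline.
SMALL_TALK = re.compile(r"^\s*(hi|hello|hey|how are you|thanks?|thank you)\b", re.IGNORECASE)

def safe_chat(agent, query: str):
    if SMALL_TALK.match(query):
        return "I'm doing well, thanks! Ask me something about your data."
    return agent.chat(query)
```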

To continue talking to Dosu, mention @dosu.
