Firecrawl started throwing the error: 'Failed to invoke tool: unhashable type: 'dict'.' #8792
Hey @ugyuji! I'm here to help you with any bugs, questions, or contributions you have for our project. Let's squash those bugs together! 😊

I found a similar issue that might help you resolve the error "Failed to invoke tool: unhashable type: 'dict'". Here is the relevant part of the code for reference:

class CrawlTool(BuiltinTool):
    def _invoke(self, user_id: str, tool_parameters: dict[str, Any]) -> ToolInvokeMessage:
        app = FirecrawlApp(
            api_key=self.runtime.credentials["firecrawl_api_key"], base_url=self.runtime.credentials["base_url"]
        )
        scrapeOptions = {}
        payload = {}
        wait_for_results = tool_parameters.get("wait_for_results", True)
        payload["excludePaths"] = get_array_params(tool_parameters, "excludePaths")
        payload["includePaths"] = get_array_params(tool_parameters, "includePaths")
        payload["maxDepth"] = tool_parameters.get("maxDepth")
        payload["ignoreSitemap"] = tool_parameters.get("ignoreSitemap", False)
        payload["limit"] = tool_parameters.get("limit", 5)
        payload["allowBackwardLinks"] = tool_parameters.get("allowBackwardLinks", False)
        payload["allowExternalLinks"] = tool_parameters.get("allowExternalLinks", False)
        payload["webhook"] = tool_parameters.get("webhook")
        scrapeOptions["formats"] = get_array_params(tool_parameters, "formats")
        scrapeOptions["headers"] = get_json_params(tool_parameters, "headers")
        scrapeOptions["includeTags"] = get_array_params(tool_parameters, "includeTags")
        scrapeOptions["excludeTags"] = get_array_params(tool_parameters, "excludeTags")
        scrapeOptions["onlyMainContent"] = tool_parameters.get("onlyMainContent", False)
        scrapeOptions["waitFor"] = tool_parameters.get("waitFor", 0)
        scrapeOptions = {k: v for k, v in scrapeOptions.items() if v not in (None, "")}
        payload["scrapeOptions"] = scrapeOptions or None
        payload = {k: v for k, v in payload.items() if v not in (None, "")}
        crawl_result = app.crawl_url(url=tool_parameters["url"], wait=wait_for_results, **payload)
        return self.create_json_message(crawl_result)

Additionally, you might find it helpful to look at similar issues and discussions.
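For context on the error class itself (an illustration only, not a confirmed diagnosis of this specific report): Python raises TypeError: unhashable type: 'dict' whenever a dict is hashed, for example when it is added to a set or tested for membership against a set. The tuple-based filter in the excerpt above compares with == and is safe, but a set-based variant of the same check hashes each value and would fail on a dict-valued entry such as scrapeOptions. A minimal sketch with a made-up payload:

# Illustration only: how "unhashable type: 'dict'" typically surfaces in code shaped
# like the excerpt above. The payload contents here are hypothetical.
payload = {"limit": 5, "scrapeOptions": {"formats": ["markdown"]}}

# Tuple membership compares with ==, so dict values pass through fine:
filtered_ok = {k: v for k, v in payload.items() if v not in (None, "")}

# Set membership hashes each candidate first, so the dict-valued entry raises
# TypeError: unhashable type: 'dict':
filtered_bad = {k: v for k, v in payload.items() if v not in {None, ""}}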
I can't reproduce it. Can you upload your workflow DSL?
Same question here. The workflow I used is the official template <Interview Preparation (Conditional Parallel)>.
Now I can reproduce it in the cloud version.
Here's the DSL content:

app:
  description: ''
  icon: 🤖
  icon_background: '#FFEAD5'
  mode: workflow
  name: Test Workflow
  use_icon_as_answer_icon: false
kind: app
version: 0.1.2
workflow:
  conversation_variables: []
  environment_variables: []
  features:
    file_upload:
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - local_file
        - remote_url
    opening_statement: ''
    retriever_resource:
      enabled: false
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        isInIteration: false
        sourceType: start
        targetType: tool
      id: 1712630129285-source-1727334022276-target
      source: '1712630129285'
      sourceHandle: source
      target: '1727334022276'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: tool
        targetType: end
      id: 1727334022276-source-1713020453724-target
      source: '1727334022276'
      sourceHandle: source
      target: '1713020453724'
      targetHandle: target
      type: custom
      zIndex: 0
    nodes:
    - data:
        desc: ''
        selected: false
        title: Start
        type: start
        variables:
        - label: url
          max_length: 256
          options: []
          required: true
          type: text-input
          variable: url
      height: 90
      id: '1712630129285'
      position:
        x: 30
        y: 427
      positionAbsolute:
        x: 30
        y: 427
      selected: true
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        desc: ''
        outputs:
        - value_selector:
          - '1727334022276'
          - text
          variable: output
        selected: false
        title: End
        type: end
      height: 90
      id: '1713020453724'
      position:
        x: 638
        y: 427
      positionAbsolute:
        x: 638
        y: 427
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        desc: ''
        provider_id: firecrawl
        provider_name: firecrawl
        provider_type: builtin
        selected: false
        title: Crawl
        tool_configurations:
          allowBackwardLinks: 0
          allowExternalLinks: 0
          excludePaths: null
          excludeTags: null
          formats: null
          headers: null
          ignoreSitemap: 1
          includePaths: null
          includeTags: null
          limit: 5
          maxDepth: 2
          onlyMainContent: 0
          waitFor: null
          wait_for_results: 1
          webhook: null
        tool_label: Crawl
        tool_name: crawl
        tool_parameters:
          url:
            type: mixed
            value: '{{#1712630129285.url#}}'
        type: tool
      height: 454
      id: '1727334022276'
      position:
        x: 334
        y: 427
      positionAbsolute:
        x: 334
        y: 427
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    viewport:
      x: 307.80000000000007
      y: -8.799999999999955
      zoom: 0.7

The same error can be seen in both local and cloud environments.
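If it helps with triage, here is a small sketch (assuming the DSL above is saved locally as test_workflow.yml, a hypothetical filename) that loads the workflow with PyYAML and lists the Crawl node's configured values, flagging any that cannot be hashed, since those are the only candidates for an unhashable-type error:

# Triage sketch: inspect the Crawl node's tool_configurations from the DSL above.
import yaml

with open("test_workflow.yml", encoding="utf-8") as f:
    dsl = yaml.safe_load(f)

for node in dsl["workflow"]["graph"]["nodes"]:
    data = node["data"]
    if data.get("type") == "tool" and data.get("tool_name") == "crawl":
        for key, value in data["tool_configurations"].items():
            try:
                hash(value)
                hashable = True
            except TypeError:
                hashable = False
            print(f"{key}: {value!r} (hashable={hashable})")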
I'm also facing the same problem. When will a new version be published?
Self Checks
Dify version
0.8.3
Cloud or Self Hosted
Cloud, Self Hosted (Docker)
Steps to reproduce
✔️ Expected Behavior
No response
❌ Actual Behavior
The Firecrawl Crawl tool fails with the error: Failed to invoke tool: unhashable type: 'dict'.