I have looked at #293 and #289, but those issues are slightly different. We have a crawler library based on node-crawler that performs computationally intensive crawling tasks and writes to different output sources, depending on the current task.
Due to the nature of our tasks, it is crucial that crawlers can be interrupted, pick up work at the same point later on, and still write consistent output files when they stop.
Think of a JSON output file, for example, to which an array of objects is written, one object per URL. If the crawler stops, a final closing bracket must be written to the output file to ensure the file is valid JSON.
We achieved this using the preRequest hook and a flag:
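Roughly, a simplified sketch of that pattern (the aborted flag, file handling, and names are illustrative; we also assume that passing an error to done in preRequest skips the request, which may depend on the node-crawler version):

const Crawler = require('crawler');
const fs = require('fs');

let aborted = false;
let first = true;
const out = fs.createWriteStream('result.json');
out.write('[');

const crawler = new Crawler({
    preRequest: (options, done) => {
        if (aborted) {
            // Once aborted, skip everything that is still queued.
            // (Assumption: passing an error to done skips the request;
            // exact semantics may differ between node-crawler versions.)
            return done(new Error('Crawler aborted'));
        }
        done();
    },
    callback: (error, res, done) => {
        if (!error && !aborted) {
            // One object per crawled URL.
            out.write((first ? '' : ',') + '\n' + JSON.stringify({ url: res.options.uri }));
            first = false;
        }
        done();
    },
});

crawler.on('drain', () => {
    // Always close the array so the output file stays valid JSON.
    out.end('\n]\n');
});

// External stop signal, e.g. Ctrl+C.
process.on('SIGINT', () => {
    aborted = true;
});

crawler.queue(['https://example.com/']);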
This works, but it's not optimal: requests may still be queued while we're in the aborted state, depending on the implementation. Additionally, there's no way to abort in-flight requests.
To tackle this issue, I'd like to suggest implementing support for the AbortController API, which can be used in browsers to abort ongoing fetch requests and has been implemented in recent Node.js versions (with a polyfill available, too). Implementation-wise, one could steal from node-fetch:
// Wrap http.request into fetch
const send = (options.protocol === 'https:' ? https : http).request;
const { signal } = request;
let response = null;

const abort = () => {
    const error = new AbortError('The operation was aborted.');
    reject(error);
    if (request.body && request.body instanceof Stream.Readable) {
        request.body.destroy(error);
    }
    if (!response || !response.body) {
        return;
    }
    response.body.emit('error', error);
};

if (signal && signal.aborted) {
    abort();
    return;
}
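For reference, this is roughly what the API looks like from the calling side (the URL and the timeout trigger are just placeholders; in the crawler, abort() would be driven by whatever decides to stop):

const fetch = require('node-fetch');
// AbortController is global in recent Node.js; older versions can use a polyfill.

const controller = new AbortController();
// Placeholder trigger: abort after 5 seconds. In the crawler this would be
// replaced by the actual "stop now" condition.
const timer = setTimeout(() => controller.abort(), 5000);

fetch('https://example.com/', { signal: controller.signal })
    .then((res) => res.text())
    .then((body) => console.log(`Received ${body.length} bytes`))
    .catch((err) => {
        if (err.name === 'AbortError') {
            console.log('Request was aborted');
        } else {
            throw err;
        }
    })
    .finally(() => clearTimeout(timer));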
Maybe it would make sense to swap request for node-fetch altogether at this point? I'm currently entangled in another project, but I'll set a reminder to get back to this issue and see what I can come up with ✌️
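To make that idea a bit more concrete, a purely hypothetical sketch of an internal request step that uses node-fetch and carries an AbortSignal (performRequest and its return shape are invented for illustration, not node-crawler's actual code):

const fetch = require('node-fetch');

// Every request issued by the crawler would receive the same signal, so a single
// controller.abort() cancels all in-flight requests at once.
async function performRequest(options, signal) {
    const res = await fetch(options.uri, {
        method: options.method || 'GET',
        headers: options.headers,
        signal,
    });
    return {
        statusCode: res.status,
        headers: res.headers.raw(),
        body: await res.text(),
    };
}

// Usage sketch:
//   const controller = new AbortController();
//   performRequest({ uri: 'https://example.com/' }, controller.signal);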
Thanks. I had a chance to use node-fetch in a much smaller project and found that I couldn't manage the tasks very well and had to rely on the Promise itself. What do you think about this?
I'm happy to help with this.