Anyway, could you add scraping/parsing of the gallery/query pages you can download?
It would just need to load the HTML and look for the same URLs the download system supports,
unless you make some sort of URL class to identify the correct URLs in a list.
This way the user doesn't need to open all the issues in tabs before sending; instead they could refine queries and send everything in a single bulk.
EDIT: now that I think about it, this could also be done on the downloader side for a smoother and better interaction. Oh well.
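To illustrate the idea: a minimal sketch of scraping a gallery page for item URLs. The site, the `/g/<id>` URL shape, and `ITEM_URL_RE` are all hypothetical placeholders; the real filter would reuse whatever URL patterns the download system already accepts.

```python
import re
from html.parser import HTMLParser

# Hypothetical pattern for downloadable item pages; the downloader's
# actual supported-URL regexes would go here instead.
ITEM_URL_RE = re.compile(r"https?://example\.com/g/\d+")

class LinkExtractor(HTMLParser):
    """Collect every href found in an <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def scrape_item_urls(gallery_html: str) -> list[str]:
    """Return the links on a gallery page that look like downloadable items."""
    parser = LinkExtractor()
    parser.feed(gallery_html)
    # Keep first-seen order but drop duplicates, so one bulk send
    # covers the whole refined query.
    seen, out = set(), []
    for url in parser.links:
        if ITEM_URL_RE.match(url) and url not in seen:
            seen.add(url)
            out.append(url)
    return out
```

The user would then feed the resulting list straight into the bulk send, instead of opening every item in a tab first.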
The download system doesn't really have any concept of "supported" URLs; it will try to download anything that's thrown at it (potentially going through a downloader plugin if its regex matches the URL).
I don't really plan to add a system like Hydrus' URL classes, as I feel it's a lot of overhead for little gain. (It's useful when you want to dump a whole Danbooru page and not have to open every image, but for doujins? Meh.)
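The plugin dispatch described above might look something like this sketch. The plugin names and URL patterns are invented for illustration, not taken from the actual codebase:

```python
import re

# Hypothetical plugin table: each downloader plugin registers a regex,
# first match wins.
PLUGINS = [
    (re.compile(r"https?://danbooru\.donmai\.us/posts/\d+"), "danbooru"),
    (re.compile(r"https?://example\.com/g/\d+"), "example-gallery"),
]

def pick_downloader(url: str) -> str:
    """Route a URL to the first plugin whose regex matches it."""
    for pattern, name in PLUGINS:
        if pattern.match(url):
            return name
    # No concept of "unsupported": anything else is still attempted
    # by a generic downloader.
    return "generic"
```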