Add emule/amule support #178
So it's a website similar to getcomics, but for international comics? And the links in the second image, are they direct download links (clicking them downloads them in the browser)? Is it possible to search on the website? Can I get a link to the website? I can build a scraper similar to the one used for getcomics...
You need to create a free account to access the links. The site is ddunlimited.net. An example link:
ed2k://|file|Julia.Speciale.01.-..Il.caso.del.convegno.insanguinato.(SBE.2015-08).[c2c.found].cbr|43348457|C69A9D5FE0875BBC683BCA14AE8D1B20|h=KS7IEZR7LSRW7DX6PPHCPKV36OY7DSYH|/
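For reference, the ed2k `file` link above is pipe-delimited: the scheme, the word `file`, the filename, the size in bytes, the MD4 hash, and then optional extra fields such as the `h=...` AICH root hash. A minimal parser sketch (the field names in the returned dict are my own, not from any spec):

```python
from urllib.parse import unquote

def parse_ed2k(link: str) -> dict:
    """Parse an ed2k://|file|...|/ link into its pipe-delimited fields."""
    prefix = 'ed2k://|file|'
    if not link.startswith(prefix):
        raise ValueError('Not an ed2k file link')
    # Strip the scheme and the trailing '|/' before splitting on '|'
    parts = link[len(prefix):].rstrip('/|').split('|')
    return {
        'filename': unquote(parts[0]),
        'size': int(parts[1]),   # file size in bytes
        'hash': parts[2],        # MD4 hash of the file
        'extra': parts[3:],      # optional fields, e.g. 'h=...' AICH hash
    }

link = ("ed2k://|file|Julia.Speciale.01.-..Il.caso.del.convegno.insanguinato."
        "(SBE.2015-08).[c2c.found].cbr|43348457|C69A9D5FE0875BBC683BCA14AE8D1B20"
        "|h=KS7IEZR7LSRW7DX6PPHCPKV36OY7DSYH|/")
info = parse_ed2k(link)
print(info['filename'], info['size'])
```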
So I need to require users to make an account, I need to make a web scraper, a new download client for a rare protocol, and all for international comics that aren't even guaranteed to be on CV? Doesn't sound like it's worth the effort...
The protocol is not rare; it's just as old as torrent and very much used in Europe. While many non-US comics are not on ComicVine, most of the mainstream ones are. I haven't tried Kapowarr, but any other *arr uses external clients to download, like sab or qbit. aMule is open source, so what would be the problem with adding it?
Do you think this page is a good way to find series on the website? https://ddunlimited.net/viewtopic.php?p=10000038 Then once we've found the right series, search on that page for the right download. This avoids the search rate limit that the website has. I also have the logging in working, so together with this we'd have the searching done. Then check if aMule has an API and we should have all the information we need to make this work. Also, could you give a volume that is both on CV and has downloads on ddunlimited? Helps with testing stuff out.
Hi! The one you linked is only one of the 4 main libraries, check this pic.
Fumettografie = just a redundant library where characters or similar are collected from other libraries.
So if you only search in Comics, only US comics translated into Italian will be found. You should link at least Fumetti/Comics/Manga.
Fumetti section: https://ddunlimited.net/viewtopic.php?t=3821676
Manga section: https://ddunlimited.net/viewtopic.php?t=191929
About the Comics section, keep in mind that US comics outside the US are often collected into anthology volumes, with 2 or 3 series inside, so there is no direct relation or numbering. For instance, this Spider-Man series, depending on the period, contained a second series of Hulk, Iron Man, or something else inside. Anyway, it's indexed on CV: https://ddunlimited.net/viewtopic.php?t=3420747
I've written a basic version of a ddunlimited client that works well and fast. It logs in, finds the series and extracts the ed2k download links. The only thing left is a client that interacts with the download client. The only clients I can find online are aMule and eMule. I cannot find anything about an API though, which I need to control the clients. Assuming you have one of them installed, can you tell me whether you have a web interface or whether it's a direct application (something that you had to install on your PC using an installer)?

Also, do you happen to know why the filenames often include a number at the end? It's messing up the algorithm that extracts data from the filename. An example is:

Client Code

```python
from asyncio import create_task, gather, run
from time import time
from typing import Dict, List
from urllib.parse import unquote_plus

from aiohttp import ClientSession, FormData
from bs4 import BeautifulSoup

USERNAME = ""
PASSWORD = ""
SEARCH_TERM = "L'Uomo Ragno (Editoriale Corno)"

dd_base = 'https://ddunlimited.net'


class DDUnlimited:
    def __init__(self, session: ClientSession) -> None:
        self.session = session

    async def login(self, username: str, password: str) -> None:
        form = FormData()
        form.add_field('username', username)
        form.add_field('password', password)
        form.add_field('redirect', 'index.php')
        form.add_field('login', 'Login')
        r = await self.session.post(
            f'{dd_base}/ucp.php',
            params={'mode': 'login'},
            data=form
        )
        # A successful login redirects away from ucp.php
        if "/ucp.php" in str(r.url):
            raise ValueError("Login failed")

    async def __fetch_topic(self, topic_id: str) -> str:
        async with self.session.get(
            f'{dd_base}/viewtopic.php?t={topic_id}'
        ) as response:
            return await response.text()

    async def _get_series_list(self) -> Dict[str, str]:
        # Cache the series list for 24 hours
        if (
            not hasattr(self, 'series_list')
            or (self.series_fetch_time + 86400) < time()
        ):
            tasks = [
                create_task(self.__fetch_topic(t))
                for t in ('3639202', '3639235')
            ]
            pages = await gather(*tasks)
            self.series_list = {}
            for page in pages:
                soup = BeautifulSoup(page, 'html.parser')
                for r in soup.select('a.postlink-local'):
                    self.series_list[r.get_text()] = r["href"].split(  # type: ignore
                        't='
                    )[-1]
            self.series_fetch_time = time()
        return self.series_list

    async def search(self, query: str) -> List[str]:
        series_list = await self._get_series_list()
        # ==============
        # This matching should be improved
        # Currently requires exact match
        # Should instead be a matching algorithm
        # ==============
        if query not in series_list:
            return []
        series_output = BeautifulSoup(
            await self.__fetch_topic(series_list[query]),
            'html.parser'
        )
        results = [
            unquote_plus(a['href'])  # type: ignore
            for a in series_output.select('a[href^="ed2k://|file|"]')
        ]
        return results


async def get_ddunlimited_search_results(
    username: str,
    password: str,
    query: str
) -> List[str]:
    async with ClientSession(headers={'User-Agent': 'Kapowarr'}) as session:
        dd = DDUnlimited(session)
        await dd.login(username, password)
        return await dd.search(query)


results = run(get_ddunlimited_search_results(
    USERNAME, PASSWORD, SEARCH_TERM
))
for r in results:
    print()
    print(r)
```
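The exact-match limitation flagged in the comment inside `search` could be addressed with stdlib fuzzy matching; a minimal sketch (the 0.6 cutoff is an arbitrary choice, and the sample titles are illustrative):

```python
from difflib import get_close_matches
from typing import List, Optional

def match_series(query: str, series_titles: List[str]) -> Optional[str]:
    """Return the closest series title to the query, or None if nothing is close."""
    # Normalize case so 'uomo ragno' can match "L'Uomo Ragno"
    norm = {title.lower(): title for title in series_titles}
    matches = get_close_matches(query.lower(), list(norm), n=1, cutoff=0.6)
    return norm[matches[0]] if matches else None

titles = [
    "L'Uomo Ragno (Editoriale Corno)",
    "Julia (Sergio Bonelli Editore)",
]
print(match_series("uomo ragno editoriale corno", titles))
```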
On my Docker server I have aMule, which is the *nix version, and on Windows I have eMule. For the one on Docker, this is my compose as an example:
I don't know about an API; I think you should look into the ED2K protocol. I don't know if this is useful: About the numbers at the end of the filenames: unfortunately metadata tagging is not a thing in the EU, so most scanners put the info in the filename. In that case 2.0, 2.1, etc. is the revision number of the scan. Sometimes a user notices that a comic is missing a page or has some other problem, and the scanner fixes it by re-releasing a new version. EDIT: I've found a couple of links that may be interesting
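Given the explanation above (a trailing `2.0`, `2.1`, etc. is a scan revision, not an issue number), one way to guard the filename parser is to strip such a trailing token before extracting issue data. A sketch, assuming the revision always appears as the last dot-separated token before the extension (the example filenames are made up):

```python
import re

# Matches a trailing scan-revision token like '2.0' or '2.1' sitting
# just before the file extension, e.g. 'Series.012.2.1.cbr'.
REVISION_RE = re.compile(r'[. ]\d\.\d+(?=\.(?:cbr|cbz|pdf)$)', re.IGNORECASE)

def strip_scan_revision(filename: str) -> str:
    """Remove a trailing scan-revision number from a comic filename."""
    return REVISION_RE.sub('', filename)

print(strip_scan_revision('Serie.Esempio.012.2.1.cbr'))  # Serie.Esempio.012.cbr
print(strip_scan_revision('Plain.Name.001.cbr'))         # unchanged
```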
Well, you're in luck: it looks like aMule has an API and web UI built in. The UI you use with your Docker container is just a skin over the standard UI that is included. I've spun up an instance myself and I can see the API calls being made, which means that I can make a client for it. I know about the rest of the ed2k format and its meanings (hash, etc.); it was purely about the apparent revision number. It's going to be some time before I implement this in Kapowarr, but it looks like it is going to happen.
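For the client integration itself, the eventual shape would likely mirror how *arr apps wrap qBittorrent or SABnzbd: a small interface the rest of the app talks to. A purely hypothetical sketch of what an aMule client wrapper could look like; none of these names come from Kapowarr's actual codebase, and the stub does not perform real network calls:

```python
from abc import ABC, abstractmethod

class DownloadClient(ABC):
    """Hypothetical minimal interface a download client wrapper could satisfy."""

    @abstractmethod
    def add_download(self, link: str) -> str:
        """Queue a download; return an ID used to track it."""

    @abstractmethod
    def get_progress(self, download_id: str) -> float:
        """Return completion as a fraction between 0.0 and 1.0."""

    @abstractmethod
    def remove_download(self, download_id: str) -> None:
        """Cancel/remove the download from the client."""

class AMuleClient(DownloadClient):
    """Stub: a real version would talk to aMule's built-in web interface."""

    def __init__(self, base_url: str, password: str) -> None:
        self.base_url = base_url
        self.password = password
        self._queue: dict = {}

    def add_download(self, link: str) -> str:
        # A real implementation would submit the ed2k link to aMule here.
        download_id = str(len(self._queue) + 1)
        self._queue[download_id] = link
        return download_id

    def get_progress(self, download_id: str) -> float:
        return 0.0  # stub: no real transfer state tracked

    def remove_download(self, download_id: str) -> None:
        self._queue.pop(download_id, None)
```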
Super! That is GREAT news!! Thank you!!
Hi, sorry for intruding, but did you take a look at this: https://github.com/vexdev/amarr ? It uses aMule to emulate a torrent client such as qBittorrent in order to use it with Radarr and Sonarr... I tried it with Kapowarr but it doesn't work (the program says "Invalid instance; not Qbittorrent"). With Radarr and Sonarr it works.
This is very interesting, thank you. When the *arr subs were on Reddit I read many times that the devs were not going to add support for eMule, and now you come up with this! I will add it to Radarr+Sonarr ASAP. I wonder why they chose AdunanzA, which is an old mod for a very specific ISP network.
@pavipollo yeah I've seen it. I'm planning on just adding native support for the client in Kapowarr, instead of requiring something like that. |
Despite the mainstream attention that torrent and Usenet get, in Europe (and South America, AFAIK) eMule is still a BIG thing. With the diffusion of fiber, it's the most secure and moderately quick tool to download anything. So among the usual movies and series, comics, especially in Italy, are basically released ONLY to eMule (I mean 100% of comics); then a limited part of that is cross-seeded to very short-lived torrents. But on eMule they live on and on, and if not, most of them can be asked for a reshare on the main Italian hub, ddunlimited.net, which has existed since the times of eDonkey, if you are old enough to remember.
The problem with eMule is that you cannot automate it: you are forced either to check the release pages or to launch a manual search for each series.
I've been using Mylar for US comics because those are the only ones I can find on torrent/Usenet/getcomics, but I must manually download all the Italian (or French, Spanish, Dutch) ones from eMule.
If Kapowarr added emule/amule support, it would be a killer application for non-US users.
To give you an idea, each series is well sorted, and each issue has its verified ed2k link.
It would be a matter of monitoring the desired pages for new links.
There is a catch: many non-US comics are not present on ComicVine. For those series there should be something like a blind scan: the user sets the series to be monitored, and Kapowarr just grabs it and adds it to the library as-is.
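The page-monitoring idea above could be a simple poll-and-diff loop: fetch the series topic periodically, extract its ed2k links, and treat anything not seen before as new. A sketch with a stubbed fetcher standing in for a real topic scrape (the fetcher and sample links are placeholders):

```python
from typing import Callable, List, Set

def find_new_links(
    fetch_links: Callable[[], List[str]],
    seen: Set[str]
) -> List[str]:
    """Return links not seen before, and record them as seen."""
    new = [link for link in fetch_links() if link not in seen]
    seen.update(new)
    return new

seen: Set[str] = set()
# First poll: everything is new
first = find_new_links(lambda: ['ed2k://|file|a|1|X|/'], seen)
# Second poll: only the newly appeared link is reported
second = find_new_links(
    lambda: ['ed2k://|file|a|1|X|/', 'ed2k://|file|b|2|Y|/'], seen
)
print(first)   # ['ed2k://|file|a|1|X|/']
print(second)  # ['ed2k://|file|b|2|Y|/']
```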