
[Feature Request] Allow exclusion of URLs from crawler #2208

Open
brocktaylor7 opened this issue Aug 15, 2022 · 10 comments


@brocktaylor7
Contributor

Is your feature request related to a problem? Please describe.

When using the crawler, there is an input that allows for the use of discovery patterns. However, this appears to be limited to including URLs and not excluding them.

Describe the solution you'd like

There is interest in adding an additional parameter to exclude URLs via regex (or other options) in the same manner as the discovery pattern.

Some possible use cases that were mentioned:
- Exclude URLs that include query parameters (everything after /?)
- Exclude URLs that are in the domain but fall under a specific subdomain or path (like example.com/blog/....)
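If exclusion were exposed as a regex-based input, the two use cases above could be expressed with ordinary JavaScript/TypeScript regexes. This is only a sketch of what such patterns might look like; the `shouldExclude` helper and the idea of an exclusion input are hypothetical, since the crawler does not support this today:

```typescript
// Hypothetical exclusion checks for the two use cases above.
// Use case 1: exclude URLs that carry query parameters (everything after /?).
const hasQueryParams: RegExp = /\?.+$/;

// Use case 2: exclude URLs under a specific path, e.g. example.com/blog/...
const underBlogPath: RegExp = /^https?:\/\/example\.com\/blog\//;

const shouldExclude = (url: string): boolean =>
    hasQueryParams.test(url) || underBlogPath.test(url);

console.log(shouldExclude('https://example.com/page?id=7'));  // true
console.log(shouldExclude('https://example.com/blog/post1')); // true
console.log(shouldExclude('https://example.com/docs/intro')); // false
```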

Describe alternatives you've considered

Additional context

This is user feedback that we received for the ADO Extension. We've had at least 3 teams ask us about this feature.
Original issue can be found here: microsoft/accessibility-insights-action#1019

@ghost

ghost commented Aug 15, 2022

This issue has been marked as ready for team triage; we will triage it in our weekly review and update the issue. Thank you for contributing to Accessibility Insights!

@ghost ghost removed the status: new label Aug 15, 2022
@sfoslund sfoslund removed their assignment Aug 15, 2022
@ferBonnin

This needs more information in order to understand the work required. @lamaks can you help us understand better if there is an exclusion functionality already existing that needs to be wired up or if this would be something new that needs to be built?

@ferBonnin

ferBonnin commented Aug 23, 2022

Per conversation with Maxim: the functionality doesn't exist yet and needs implementation. However, we do support regular expressions that can be used to exclude URLs by pattern (the discoveryPatterns parameter), which might help. Currently the discovery pattern is for including URLs, but like any regular expression it can be tailored to match a restricted range/pattern, so it will select a subset of URLs based on the expression rules.
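To illustrate the workaround being described: a discovery pattern works by positive matching, so a narrower regex implicitly excludes everything it does not name. A minimal sketch of such a pattern, evaluated as a plain regex independent of the crawler itself (the example.com URLs are made up):

```typescript
// A discovery pattern tailored to only match URLs under /docs/ on example.com.
// Any crawled link that fails this test would simply not be followed.
const discoveryPattern: RegExp = /^https:\/\/example\.com\/docs\/.*$/;

console.log(discoveryPattern.test('https://example.com/docs/getting-started')); // true
console.log(discoveryPattern.test('https://example.com/blog/announcement'));    // false
```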

@ferBonnin

@brocktaylor7 would this approach help for this request?

@ghost

ghost commented Aug 23, 2022

The team requires additional author feedback; please review their replies and update this issue accordingly. Thank you for contributing to Accessibility Insights!

@ghost

ghost commented Aug 27, 2022

This issue has been automatically marked as stale because it is marked as requiring author feedback but has not had any activity for 4 days. It will be closed if no further activity occurs within 3 days of this comment. Thank you for contributing to Accessibility Insights!

@ferBonnin

Per offline conversation, leaving this as needs investigation, since the workaround of using discovery patterns for exclusion is fairly limited to certain scenarios.

@ghost

ghost commented Aug 29, 2022

This issue requires additional investigation by the Accessibility Insights team. When the issue is ready to be triaged again, we will update the issue with the investigation result and add "status: ready for triage". Thank you for contributing to Accessibility Insights!

@andrewluebke-ms

Hi! My team is attempting to implement the workaround of excluding links via the discovery pattern, but we are having some trouble. I've detailed an example below that we would expect to work but doesn't (we verified the pattern with a regex tester). Could you give us some feedback on where we might be going wrong and/or some examples of exclusionary regexes in the discovery pattern?

We are using the Azure DevOps Extension v3.

Here is a throwaway example using google.com that is similar to what we've tried:
Dynamic Site URL: https://google.com
Discovery pattern: https://google.com/(?!.*FilterMeOut).*

Expected:
https://google.com (link is scanned and crawled)
https://google.com/FilterMeOut (link is skipped)
https://google.com/ValidPath (link is scanned and crawled)

Actual:
https://google.com (link is scanned and crawled)
https://google.com/FilterMeOut (link is skipped)
https://google.com/ValidPath (link is skipped)
(only the first URL is scanned and nothing else is discovered)

The only scenario that fully crawls all our web pages is when we use the URL format provided in the docs:
https://google.com/[.*]. However, we have quite a lot of links that we do not want crawled, so excluding them is critical for us.
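For reference, the negative-lookahead pattern above does behave as expected when evaluated as a standard JavaScript regex, which suggests the discrepancy lies in how the crawler interprets the pattern rather than in the regex itself. A minimal check:

```typescript
// The discovery pattern from the example above, as a plain JS regex.
const pattern: RegExp = /https:\/\/google\.com\/(?!.*FilterMeOut).*/;

console.log(pattern.test('https://google.com/FilterMeOut')); // false (skipped, as expected)
console.log(pattern.test('https://google.com/ValidPath'));   // true  (would be crawled)
```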

Thanks!

@brocktaylor7
Contributor Author

Hello @andrewluebke-ms,

We use a third-party library (Apify) to handle the crawling portion of the extension. The discoveryPatterns input is passed directly into Apify's PseudoUrl class, so any functionality that will or won't work will be determined by the implementation within Apify.

Here is the documentation for Apify's PseudoUrl, which includes their documentation on "special directives" (regexes) that can be passed in via our discoveryPatterns input: https://sdk.apify.com/docs/2.3/api/pseudo-url

There are a few rules about how things should be escaped to be handled properly in the Apify library, so it may be worth looking through their documentation to ensure that the regex you're passing in matches what they expect to receive.
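As a rough illustration of those escaping rules, based on my reading of the PseudoUrl docs (not Apify's actual source): text outside `[...]` is matched as an escaped literal, while text inside `[...]` is taken as raw regex. A hypothetical sketch of that conversion, purely to show why `https://google.com/[.*]` behaves differently from a bare regex:

```typescript
// Illustrative approximation of PseudoUrl-style pattern conversion.
// Literals outside [...] are escaped; content inside [...] is kept as raw
// regex. This is NOT Apify's implementation, just a sketch of the idea.
const escapeLiteral = (s: string): string =>
    s.replace(/[.*+?^${}()|\\\/]/g, '\\$&');

const pseudoUrlToRegExp = (purl: string): RegExp => {
    const source = purl.replace(
        /\[([^\]]*)\]|([^\[]+)/g,
        (_m, raw: string | undefined, literal: string | undefined) =>
            raw !== undefined ? raw : escapeLiteral(literal ?? ''),
    );
    return new RegExp(`^${source}$`);
};

const re = pseudoUrlToRegExp('https://google.com/[.*]');
console.log(re.test('https://google.com/anything'));  // true
console.log(re.test('https://example.com/anything')); // false
```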

The PseudoUrl mechanism is meant for positive matching, not exclusion, and in my own testing it seems to be fairly quirky about what does and doesn't work. For example, I found that capture groups don't consistently work the way I would expect them to in a standard JavaScript regex, but Apify's docs don't clearly outline why that would be the case.

We hope to be able to implement a solution for excluding URLs in the future that is much more robust and built to do what we're asking it to do, rather than trying to use a mechanism for a purpose it wasn't meant for. We have this on our radar, but currently don't have an ETA for when a better solution will be in place.

@DaveTryon DaveTryon moved this from Needs triage to Old backlog in Accessibility Insights Jul 27, 2023
@DaveTryon DaveTryon moved this from Old backlog to Tabled in Accessibility Insights Jul 28, 2023