contribution at 14-07-2020 #18645
base: develop
Conversation
Q-back commented Jul 14, 2020 (edited)
- includes all changes from autocomplete_js improvements #18630
- adds SOAP plugin
… username input and login button
…tton CSS selectors
…_js.has_active_session if chrome belongs to outer scope
… so we return all buttons in parent node
…er exception handling during js login process
…entedChrome.focus() method
…s' into holm-master
Feature/improve report autocomplete js See merge request holmsecurity/w3af!111
SOAP plugin Closes #18 See merge request holmsecurity/w3af!113
w3af/plugins/crawl/soap.py
Outdated
import sys
from cStringIO import StringIO

import zeep
Did you add this to the requirements file(s)?
fixed
w3af/plugins/crawl/soap.py
Outdated
operation_url = port.binding_options['address']
if self.verbosity_level == 1 and operation_url not in discovered_urls:
    self._report_all_operations_for_url(service, operation_url)
    discovered_urls.add(operation_url)
What about HTTP request parameters? I don't see any handling of POST-body nor query string parameters.
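To make the parameter concern concrete, here is a minimal, hedged sketch: the function name is mine, zeep's private _operations attribute is used purely for illustration, and create_message() is called without arguments, so a real implementation would fill in sample values. It renders each operation's SOAP envelope, which is what would have to end up in the POST body of a fuzzable request.

from lxml import etree
import zeep


def soap_request_bodies(wsdl_url):
    """Yield (endpoint_url, operation_name, envelope_xml) for every operation.

    Sketch only: the rendered envelope is what audit plugins would need as a
    POST body in order to have parameters to mutate.
    """
    client = zeep.Client(wsdl_url)

    for service in client.wsdl.services.values():
        for port in service.ports.values():
            endpoint = port.binding_options['address']

            # _operations is a private zeep attribute, used here only to
            # enumerate the operation names exposed by this binding.
            for op_name in port.binding._operations:
                try:
                    # create_message() renders the SOAP envelope. For
                    # simplicity the default service proxy is used and no
                    # arguments are passed, so required elements stay empty.
                    envelope = client.create_message(client.service, op_name)
                except Exception:
                    continue

                yield endpoint, op_name, etree.tostring(envelope)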
I recommend moving the WSDL parsing to the parsers path, where all the other parsers live. The WSDL parser will eventually grow and we don't want it here in the plugin.
I'm not 100% sure that we need this plugin. If you add the WSDL parser to:
https://github.com/andresriancho/w3af/blob/master/w3af/core/data/parsers/document_parser.py#L37
and w3af finds the WSDL URL, it will be able to parse it in web_spider and return fuzzable HTTP requests, which can then be translated into a WSDL mutant, similar to:
https://github.com/andresriancho/w3af/blob/master/w3af/core/data/fuzzer/mutants/json_mutant.py
https://github.com/andresriancho/w3af/blob/master/w3af/core/data/fuzzer/mutants/xmlrpc_mutant.py (this one doesn't work)
The user would just have to add the WSDL to the target URL(s) of w3af, without even having to enable a plugin and set the target there. Another benefit of this approach is that the user doesn't even need to know there is a WSDL endpoint: w3af finds it and analyzes it.
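A rough sketch of that registration step, under assumptions: the exact attribute name and the existing entries in document_parser.py are from memory and may differ from the linked line; only the WSDLParser entry is new.

# Sketch of w3af/core/data/parsers/document_parser.py
from w3af.core.data.parsers.doc.wsdl import WSDLParser


class DocumentParser(object):

    # Parsers are tried in order and the first one whose can_parse()
    # accepts the HTTP response wins, so WSDL documents would be picked
    # up automatically whenever web_spider downloads one.
    PARSERS = [WSDLParser,
               # ... existing parsers (HTML, PDF, SWF, ...) ...
              ]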
wsdl_parser created.
…ckling errors for wsdl, refactored MultiProcessingDocumentParser to be less coupled to filesystem, improved plugin_testing_tools.py
@@ -91,3 +93,4 @@ def _handle_exception(self, strategy, e):
             e,
             self.debugging_id)
        om.out.debug(msg % args)
+       om.out.error(traceback.format_exc())
This was just added for debugging?
It was added on purpose, as I think it's good to have at least the traceback saved when an exception is silenced. If I didn't add this, there would be no way to find the traceback, since the exception is silenced at this point. Showing only the exception name is not enough in many cases. How should this one be handled, then?
I'm not sure about the context here, or whether it will work, but we have an exception handler that will catch "all exceptions" and save them to a text file in /tmp.
Another alternative, if the scan can continue without a major reduction in functionality and the user doesn't need to know about this issue, is to also use debug?
"we have an exception handler that will catch 'all exceptions' and save them to a text file in /tmp"
Not this time. The exception is not propagated in this place, it's simply silenced.
Changed to debug according to your recommendation in c3e0eec.
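For reference, a hedged sketch of how the handler body would read after that change; the surrounding names are taken from the diff above, everything else is assumed.

import traceback

# ... inside _handle_exception(), after the existing debug message ...
om.out.debug(msg % args)

# Keep the full traceback, but at debug level: the exception is silenced
# here, so this is the only place the stack trace can be preserved without
# alarming the user.
om.out.debug(traceback.format_exc())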
w3af/core/data/parsers/doc/wsdl.py
Outdated
def can_parse(http_resp):
    url = http_resp.get_uri()
    try:
        wsdl_client = zeep.Client(str(url))
This will send an HTTP request to the URL without using the w3af HTTP client. This means that, for example, HTTP proxy configurations are not going to be respected. For some cases that will break the feature. Is there a way to create a wsdl_client by sending the WSDL URL and HTTP response content?
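One possible shape for such a change, sketched under assumptions: the helper name is hypothetical, get_body() is assumed to return the response body, and zeep accepting a file-like object as the WSDL location should be verified against the zeep version in use.

from cStringIO import StringIO

import zeep
from zeep.transports import Transport


def wsdl_client_from_response(http_resp, session=None):
    # Reuse the body that w3af already downloaded instead of letting zeep
    # fetch the URL itself. Encoding handling is simplified for the sketch.
    wsdl_file = StringIO(http_resp.get_body().encode('utf-8'))

    # Imports/includes inside the WSDL are still fetched by zeep's own
    # transport, so a requests session configured with w3af's proxy
    # settings would have to be passed in for those as well.
    transport = Transport(session=session) if session is not None else Transport()

    return zeep.Client(wsdl_file, transport=transport)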
Fixed in a81845f.
@staticmethod
def get_name():
    return 'wsdl_parser'
Maybe I read too quickly, but where is the parser returning the requests? You should have something similar to:
https://github.com/andresriancho/w3af/blob/master/w3af/core/data/parsers/doc/open_api/main.py#L201
https://github.com/andresriancho/w3af/blob/master/w3af/core/data/parsers/doc/open_api/main.py#L250
While OpenAPI().get_api_calls() is a special method used only by the open_api crawl plugin, I dropped the SOAP plugin as suggested in this comment. So the wsdl parser only provides the standard get_references method, which is inherited from BaseParser and used by web_spider. The get_references method returns results as URL instances, not FuzzableRequest instances. Anyway, I guess a WSDLMutant is missing in my current flow.
Please let me know if the above is the right approach.
If your parser only returns URLs, then unless there is a SOAP endpoint which takes query string parameters... you won't be able to find any vulnerabilities based on the output of this parser.
I strongly recommend you add some code that will allow you to retrieve fuzzable requests (or was it mutants? I forgot) from the parser.
Maybe you need a custom plugin after all? The OpenAPI document parser has its own plugin to extract the HTTP requests:
https://github.com/andresriancho/w3af/blob/feature/js/w3af/plugins/crawl/open_api.py
https://github.com/andresriancho/w3af/blob/feature/js/w3af/plugins/crawl/open_api.py#L217
Now that we might have two parsers (openapi, wsdl) with their own plugins (eeek), maybe it is time to do an analysis and understand whether we need to generalize all this into web_spider? Not sure... but worth thinking about it.
Fixed in 1a063e2. I've added a get_fuzzable_requests method to BaseParser, as WSDLParser is the second parser (after OpenAPI) that produces fuzzable requests. BaseParser.get_fuzzable_requests returns an empty list by default instead of raising NotImplementedError, as I don't see the point of raising an exception in the base class (which is essentially an interface or abstract class) and then overriding it in every subclass to return an empty list. It's better to provide a handy default in the base class.
In the future we may also want to refactor OpenAPI to implement get_fuzzable_requests and drop the openapi plugin, but that would break compatibility with the current release by deleting one of the plugins, so I'd rather do it later, ideally after moving to a semantic versioning scheme.
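A minimal sketch of that design; only the empty-list default reflects the comment above, while the WSDLParser internals shown here are hypothetical.

class BaseParser(object):
    # ... existing interface: get_references(), etc. ...

    def get_fuzzable_requests(self):
        # Handy default: most parsers produce no fuzzable requests, so an
        # empty list avoids NotImplementedError boilerplate in every subclass.
        return []


class WSDLParser(BaseParser):

    def get_fuzzable_requests(self):
        # Hypothetical: self._fuzzable_requests would be filled while parsing
        # the WSDL, one request per discovered SOAP operation.
        return list(self._fuzzable_requests)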
… accepts url as str in GET and POST, and headers as dict
77eeef9 to a81845f
…g. Removed unused import
…est() method to BaseParser