== Simple Storage Service (S3) Overview == Simple Storage Service (S3) is among the most popular [1][2] of the AWS services and provides a scalable solution for object storage via web APIs [3]. S3 buckets can be employed for many different purposes and in many different scenarios.
The security of an S3 bucket comes down to its Access Control List (ACL) [4], which defines access to buckets and objects.

=== S3 prerequisite === Before getting started with the actual identification of S3 buckets, it is important to give a little background on S3 nomenclature:

  • a bucket is a container of objects;
  • objects are contained in S3 buckets and are essentially files;
  • when creating a bucket, it is possible to specify its region, i.e., in which part of the world the bucket will be physically located.

S3 bucket names are globally unique, meaning that two different users cannot own buckets with the same name.

Assuming a bucket named mybucket, it is possible to access the bucket via the following URLs:

https://s3-[region].amazonaws.com/[bucketname]/

Where [region] depends on the one selected during bucket creation.

https://[bucketname].s3.amazonaws.com/

S3 also provides the possibility of hosting static HTML content, thus making an S3 bucket behave as a static HTML web server. If this option is selected for a bucket, the following URL can be used to access the HTML code contained in that bucket:

https://[bucketname].s3-website-[region].amazonaws.com/
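
For example, assuming a hypothetical bucket named mybucket created in the us-east-2 region, the three URL styles above become:

https://s3-us-east-2.amazonaws.com/mybucket/
https://mybucket.s3.amazonaws.com/
https://mybucket.s3-website-us-east-2.amazonaws.com/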

=== Identifying S3 buckets === The first step when testing a web application that makes use of AWS S3 buckets is identifying the bucket(s) itself, meaning the URL that can be used to interact with the bucket. '''Note:''' it is not necessary to know the region of an S3 bucket to identify it. Once the name is found, it is possible to cycle through the available regions to find the one the bucket actually resides in.

==== HTML Inspection ==== The web application might expose the URL of the S3 bucket directly within the HTML code. To search for S3 buckets within HTML code, inspect the code looking for substrings such as:

s3.amazonaws
amazonaws
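
For instance, with a copy of the page source saved locally, a quick search might look like the following (the site/ directory is a hypothetical download of the application's pages):

# search downloaded HTML for S3 URL fragments
grep -rE "s3\.amazonaws|amazonaws" site/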

==== Brute force / Educated guessing ==== A brute-force approach, possibly based on a word-list of common words along with specific words coming from the domain under test, might be useful in identifying S3 buckets. As described in [...], S3 buckets are identified by a predefined and predictable schema that can be useful for bucket identification. With an automated tool it is possible to test multiple URLs in search of S3 buckets starting from a word-list.

In OWASP ZAP (v2.7.0) the fuzzer feature can be used for testing:

  • with OWASP ZAP up and running, navigate to https://s3.amazonaws.com/bucket to generate a request to https://s3.amazonaws.com/bucket in the Sites panel;
  • from the Sites panel, right click on the GET request and select Attack -> Fuzz to configure the fuzzer;
  • select the word bucket from the request to tell the fuzzer to fuzz in that location;
  • click Add and Add again to specify the payloads the fuzzer will use;
  • select the type of payload, which can be a list of strings given to ZAP directly or loaded from a file;
  • finally, press Add to add the payloads, OK to confirm the settings, and Start Fuzzer to start fuzzing.

If a bucket is found, ZAP will show a response with status code 301 Moved Permanently; if the bucket does not exist, the response status will be 404 Not Found.
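
The same check can be sketched outside of ZAP with a simple shell loop; wordlist.txt is a hypothetical file of candidate bucket names:

# probe each candidate name and report anything that is not a 404
while read -r name; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://s3.amazonaws.com/${name}")
  if [ "${code}" != "404" ]; then
    echo "possible bucket: ${name} (HTTP ${code})"
  fi
done < wordlist.txt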

==== Google Dork ==== Google Dorks [5] can also be used to search for S3 bucket URLs. The inurl: directive provided by Google can be used to search for common bucket names, as shown in the following list:

inurl:s3.amazonaws.com/legacy/
inurl:s3.amazonaws.com/uploads/
inurl:s3.amazonaws.com/backup/
inurl:s3.amazonaws.com/mp3/
inurl:s3.amazonaws.com/movie/
inurl:s3.amazonaws.com/video/
inurl:s3.amazonaws.com

More Google Dorks [6] can be used for S3 bucket identification.

==== Bing reverse IP ==== Microsoft's Bing search engine can help identify S3 buckets thanks to its ability to search domains by IP address. Given the IP address of a known S3 endpoint, the ip:[IP] search operator can be used to retrieve other S3 buckets resolving to the same IP.
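
As an illustrative first step, resolve a bucket's endpoint and feed the returned address to Bing's ip: operator (mybucket is a hypothetical name):

# resolve the endpoint, then search Bing for ip:[returned address]
nslookup mybucket.s3.amazonaws.com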

==== DNS Caching ==== Many services maintain a DNS cache that users can query to map IP addresses to domain names and vice versa. By taking advantage of such services it is possible to identify S3 buckets. The following list shows some services worth noting:

https://findsubdomains.com/
https://www.robtex.com/
https://buckets.grayhatwarfare.com/ (created specifically to collect AWS S3 buckets)

For example, https://findsubdomains.com makes it possible to retrieve S3 buckets by searching for subdomains of s3.amazonaws.com.

=== S3 buckets permissions ===

An S3 bucket provides a set of five permissions that can be granted at the bucket level or at the object level. S3 bucket permissions can be tested in two ways: via HTTP requests or with the aws command line tool.

READ

  • At bucket level, allows listing the objects in the bucket.
  • At object level, allows reading the content as well as the metadata of the object.

WRITE

  • At bucket level, allows creating, overwriting, and deleting objects in the bucket.
  • At object level, allows editing the object itself.

READ_ACP

  • At bucket level, allows reading the bucket's Access Control List.
  • At object level, allows reading the object's Access Control List.

WRITE_ACP

  • At bucket level, allows setting the Access Control List for the bucket.
  • At object level, allows setting an Access Control List for the object.

FULL_CONTROL

  • At bucket level, is equivalent to granting the READ, WRITE, READ_ACP, and WRITE_ACP permissions.
  • At object level, is equivalent to granting the READ, WRITE, READ_ACP, and WRITE_ACP permissions.

==== READ Permission ==== Via HTTP, try to access the bucket by requesting the following URL:

https://[bucketname].s3.amazonaws.com
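
When the bucket grants public READ, such a request is answered with an XML ListBucketResult document enumerating the objects; otherwise an AccessDenied error is returned. A minimal check with curl (mybucket is a hypothetical name):

curl -s "https://mybucket.s3.amazonaws.com/"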

It is also possible to use the aws command line tool [7] to list the content of a bucket:

aws s3 ls s3://[bucketname] --no-sign-request

'''Note:''' the --no-sign-request flag specifies not to use credentials to sign the request.

==== WRITE Permission ==== Via the aws command line tool it is possible to write to a bucket [8]:

aws s3 cp localfile s3://[bucketname]/file.txt --no-sign-request

A bucket that allows arbitrary file upload will answer with a message showing that the file has been uploaded, such as the following:

upload: Pictures/ec2-s3.png to s3://mindeds3test01/file.txt

'''Note:''' the --no-sign-request flag specifies not to use credentials to sign the request.
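
On an authorized engagement, the test upload can be cleaned up with the corresponding delete operation, which also relies on the WRITE permission:

aws s3 rm s3://[bucketname]/file.txt --no-sign-request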

==== READ_ACP Permission ==== Via the aws command line tool it is possible to test READ_ACP for both an S3 bucket [9] and single objects [10].

Testing a bucket for READ_ACP:

aws s3api get-bucket-acl --bucket [bucketname] --no-sign-request

Testing a single object for READ_ACP:

aws s3api get-object-acl --bucket [bucketname] --key index.html --no-sign-request

Both commands output JSON showing the ACL policies for the specified resource.

'''Note:''' the --no-sign-request flag specifies not to use credentials to sign the request.
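
The following is an illustrative, trimmed example of that JSON; a Grantee pointing at the global AllUsers group means the ACL grants that permission to anyone:

{
    "Owner": {
        "DisplayName": "owner",
        "ID": "..."
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        }
    ]
}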

==== WRITE_ACP Permission ==== Via the aws command line it is possible to test WRITE_ACP for both an S3 bucket [11] and single objects [12].

Testing a bucket for WRITE_ACP:

aws s3api put-bucket-acl --bucket [bucketname] [ACLPERMISSIONS] --no-sign-request

Testing a single object for WRITE_ACP:

aws s3api put-object-acl --bucket [bucketname] --key file.txt [ACLPERMISSIONS] --no-sign-request

Neither command displays any output when the operation succeeds.

'''Note:''' the --no-sign-request flag specifies not to use credentials to sign the request.
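
As an illustration of what [ACLPERMISSIONS] might look like, the following hypothetical (and destructive: run it only with authorization) test attempts to grant READ to everyone:

aws s3api put-bucket-acl --bucket [bucketname] --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers --no-sign-request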

==== Any authenticated AWS client ==== Finally, S3 permissions used to include a peculiar grant named "any authenticated AWS client". This grant allows any AWS account holder, regardless of identity, to access the bucket. The option is no longer offered, but there are still buckets with this permission enabled. To test for this type of permission, create an AWS account and configure it locally with the aws command line tool:

aws configure

Try to access the bucket with the same commands described above; the only difference is that the --no-sign-request flag should be replaced with --profile [PROFILENAME], where [PROFILENAME] is the name of the profile created with the configure command.
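
For example, the READ test from above becomes:

aws s3 ls s3://[bucketname] --profile [PROFILENAME]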

== External Resources == This is a collection of additional external resources related to testing S3 buckets.

== References ==

  1. The Most Popular AWS Products of 2018, https://www.2ndwatch.com/blog/popular-aws-products-2018/
  2. Top 10 AWS services you should know about (2019 Edition), https://www.clickittech.com/aws/top-10-aws-services/
  3. Amazon S3 - Wikipedia, https://en.wikipedia.org/wiki/Amazon_S3
  4. S3 Access Control List (ACL), https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
  5. Google Hacking - Wikipedia, https://en.wikipedia.org/wiki/Google_hacking
  6. Google hacking Amazon Web Services Cloud front and S3, https://it.toolbox.com/blogs/rmorril/google-hacking-amazon-web-services-cloud-front-and-s3-011613
  7. aws s3 ls documentation, https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html
  8. aws s3 cp documentation, https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
  9. aws s3api get-bucket-acl documentation, https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-acl.html
  10. aws s3api get-object-acl documentation, https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object-acl.html
  11. aws s3api put-bucket-acl documentation, https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
  12. aws s3api put-object-acl documentation, https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html