This document enumerates a few examples of common error messages, what they mean, and how to resolve them.
Mountpoint is optimized for applications that need high read throughput to large objects, potentially from many clients at once, and to write new objects sequentially from a single client at a time. To achieve this, Mountpoint does not implement all the features of a POSIX file system and this may affect compatibility with your application. For more detailed information, please refer to Mountpoint's semantics documentation. This document aims to capture errors from the latest versions of Mountpoint. If you are using an older version of Mountpoint, please refer to older versions of this document.
A great first step for troubleshooting Mountpoint is to inspect its logs, which are emitted to journald by default. See the logging documentation for more details on how to access Mountpoint logs.
Mountpoint supports writing to a file sequentially. Random writes, or 'out-of-order' writes, will return an error to FUSE, which may surface in applications as the error message "Invalid argument".
For example, the following command seeks one block (512 bytes, `dd`'s default block size) into the file and writes a single block using `dd`:

```
$ dd if=/dev/random of=out seek=1 count=1
dd: writing to 'out': Invalid argument
```
In Mountpoint's logs, a warning message similar to the one below will be emitted:
```
WARN write{req=52 ino=49 fh=3 offset=512 length=512 name="out"}:
mountpoint_s3::fuse: write failed: upload error: out of order write NOT supported by Mountpoint, aborting the upload; expected offset 0 but got 512
```
To work around this, write your file to a temporary location such as `/tmp/`, and copy or move it to the mounted directory after the file is written.
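For example, a workflow like the following sketch avoids out-of-order writes by producing the file on local disk first, then copying it sequentially into the mount (the paths and mount point below are illustrative):

```
# Perform any writes, including random writes, against local disk first...
$ dd if=/dev/random of=/tmp/out seek=1 count=1
# ...then copy the finished file sequentially into the mounted directory.
# /mnt/s3-bucket is a hypothetical mount point.
$ cp /tmp/out /mnt/s3-bucket/out && rm /tmp/out
```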
Trying to open an existing file for writing using Mountpoint without the `--allow-overwrite` flag will fail with the error `Operation not permitted`.
For example, overwriting an existing file in Mountpoint would result in the following error:
$ echo "Overwriting a file..." > existing-file.txt
operation not permitted: existing-file.txt
Log entries for overwriting a file look like one of the following, depending on the Mountpoint version:
```
WARN setattr{req=11 ino=2 name="existing-file.txt"}:
mountpoint_s3::fuse: setattr failed: inode error: inode 2 (full key "existing-file.txt") is a remote inode and its attributes cannot be modified
```

```
WARN setattr{req=11 ino=2 name="existing-file.txt"}:
mountpoint_s3::fuse: setattr failed: file overwrite is disabled by default, you need to remount with --allow-overwrite flag and open the file in truncate mode (O_TRUNC) to overwrite it
```
If you want to overwrite files using Mountpoint, pass the `--allow-overwrite` CLI flag when mounting the bucket, and open files in truncate mode (`O_TRUNC`) to overwrite them.
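For example, a mount command like the following enables overwrites; the bucket name and mount point are hypothetical. Note that shell redirection with `>` opens the file with `O_TRUNC`, so the second command succeeds:

```
# Mount with overwrites enabled; my-bucket and /mnt/s3-bucket are placeholders.
$ mount-s3 --allow-overwrite my-bucket /mnt/s3-bucket
$ echo "Overwriting a file..." > /mnt/s3-bucket/existing-file.txt
```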
Trying to delete a file using Mountpoint without the `--allow-delete` CLI flag, for example `test-file.txt`, will emit an `Operation not permitted` error as follows:
```
$ rm test-file.txt
rm: cannot remove 'test-file.txt': Operation not permitted
```
In Mountpoint's logs, a message similar to the one below should be emitted:
```
WARN unlink{req=8 parent=1 name="test-file.txt"}:
mountpoint_s3::fuse: unlink failed: Deletes are disabled. Use '--allow-delete' mount option to enable it.
```
In order to delete files using Mountpoint, you must opt in using the `--allow-delete` CLI flag.
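For example, mounting with the flag allows the earlier `rm` to succeed (the bucket name and mount point are hypothetical):

```
# Mount with deletes enabled; my-bucket and /mnt/s3-bucket are placeholders.
$ mount-s3 --allow-delete my-bucket /mnt/s3-bucket
$ rm /mnt/s3-bucket/test-file.txt
```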
To learn more about how file deletion works in Mountpoint, please review the file deletion section of Mountpoint's semantics documentation.
A file system directory cannot contain both a file and a directory of the same name. If your bucket's directory structure would result in this state, only the directory will be accessible. The object will not be accessible.
For example, if a bucket contains the following object keys:
- `a`
- `a/b`
When listing the contents of a mount directory `mnt` for a bucket containing the objects listed above, only the directory `a` would be returned:
```
$ ls mnt/
a
```
Mountpoint logs will show an entry like this:
```
WARN readdir{req=5 ino=1 fh=2 offset=17}:
mountpoint_s3::inode::readdir::ordered: file 'a' (full key "a") is omitted because another directory 'a' exist with the same name
```
When listing the contents of an S3 Express One Zone directory bucket, both a file and directory of the same name may be shown without the above warning being emitted. However, only the directory remains accessible to other file operations. This issue is tracked in #725.
For more details on how Mountpoint maps S3 object keys to files and directories, see the semantics documentation.
Renaming a file or a directory inside the mounted directory is not supported by Mountpoint. Attempting to rename a file or directory will return an error:
```
$ mv hello.txt new_hello.txt
mv: cannot move 'hello.txt' to 'new_hello.txt': Function not implemented
```
Mountpoint logs should show the following message:
```
rename{req=120 parent=1 name="hello.txt" newparent=1 newname="new_hello.txt"}:
mountpoint_s3::fuse: rename failed: operation not supported by Mountpoint
```
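As a workaround, assuming the mount was created with `--allow-delete`, you can copy the file and then delete the original. Note that this re-uploads the object rather than performing a rename:

```
# Copy then delete; requires the mount to allow deletes.
$ cp hello.txt new_hello.txt
$ rm hello.txt
```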
Objects in the Glacier Flexible Retrieval and Glacier Deep Archive storage classes, and in the non-instant access tiers of the S3 Intelligent-Tiering storage class, are not accessible with Mountpoint. When trying to access objects in these storage classes, Mountpoint's logs will show entries like:
```
WARN lookup{req=6 ino=1 name="class_GLACIER"}:
mountpoint_s3::inode: objects in the GLACIER and DEEP_ARCHIVE storage classes are only accessible if restored
```
To access objects in these storage classes with Mountpoint, restore or copy them to another storage class first. To learn more about working with archived objects, see the S3 User Guide.
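For instance, a restore could be initiated with the AWS CLI before accessing the object through the mount. This is a sketch: the bucket name, key, and restore parameters are illustrative, and restores from these storage classes can take minutes to hours to complete:

```
# Initiate a temporary restore (7 days, Standard retrieval tier) of an archived object.
$ aws s3api restore-object --bucket my-bucket --key class_GLACIER \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'
```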
Mountpoint does not support modifying metadata such as file modification times and file size.
For example, attempting to update the file modification time using `touch` will result in the following error:
```
$ touch -a -m -t 201512180130.09 init.txt
touch: init.txt: Operation not permitted
```
Mountpoint logs should contain an error similar to below:
```
WARN setattr{req=4 ino=21 name="init.txt"}:
mountpoint_s3::fuse: setattr failed: inode error: inode 21 (full key "init.txt") is a remote inode and its attributes cannot be modified
```
Mountpoint by default resolves endpoints for requests in virtual-hosted style. If your storage provider does not support virtual-hosted-style bucket addressing, you may receive the following error:
```
Error: Failed to create S3 client

Caused by:
    0: initial ListObjectsV2 failed for bucket my-bucket in region us-east-1
    1: Client error
    2: Unknown CRT error
    3: CRT error 1059: aws-c-io: AWS_IO_DNS_INVALID_NAME, Host name was invalid for dns resolution.
Error: Failed to create mount process
```
In this case, try the `--force-path-style` CLI option when mounting the bucket using Mountpoint.
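For example (the endpoint URL, bucket name, and mount point below are hypothetical):

```
# Force path-style addressing for providers without virtual-hosted-style support.
$ mount-s3 --force-path-style --endpoint-url https://storage.example.com my-bucket /mnt/s3-bucket
```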
For more details on how Mountpoint handles endpoints, please see our configuration documentation.
The Amazon S3 data model is a flat structure, with no hierarchy of subdirectories.
Mountpoint automatically infers a directory structure for your bucket by treating the `/` separators in your object keys (after any configured prefix) as delimiters between directories.
If all the files within a prefix are deleted, the prefix itself and the corresponding directory cease to exist.
In this case, it is expected that Mountpoint will no longer show the directory or be able to create new files within it. You can recreate the directory with `mkdir` and then continue creating new files within it. Alternatively, you can prevent a directory from disappearing by creating an empty, hidden file (for example, `.keep`) inside it.
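For example (the mount point and directory name below are illustrative):

```
# Recreate a directory that disappeared after its last file was deleted...
$ mkdir /mnt/s3-bucket/logs
# ...and keep it from disappearing again by adding an empty hidden file.
$ printf '' > /mnt/s3-bucket/logs/.keep
```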
For more details on how Mountpoint maps S3 object keys to files and directories, see the semantics documentation.
If you're seeing slower performance than expected for things like opening files or reading file attributes, there are some actions that can be taken to mitigate this.
Mountpoint supports Linux file system operations using FUSE.
To communicate with Mountpoint, the Linux kernel refers to files and directories using inode numbers and sends many requests with reference to the parent directory.
As an example, opening the file `a/c.txt` will cause the kernel to look up `a`, then look up `c.txt` within `a`, and then finally open `c.txt`.
The deeper the file system structure, the more lookups incurred.
Each lookup may incur requests to S3.
There are two recommendations for this scenario:
- Use `--prefix <PREFIX>` to reduce the depth of the file system structure. For example, if all files to be accessed are in `a/b/`, then use `--prefix a/b/` to avoid any lookups for those two parts of the path. See the example mount command after this list.
- If your bucket content isn't expected to change or your application can tolerate stale file system content, leverage metadata caching, which allows lookups to be cached. Learn more in our configuration documentation.
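As a sketch, both recommendations can be combined in a single mount command; the bucket name, mount point, and TTL value below are hypothetical:

```
# Mount only the a/b/ prefix and cache metadata for 60 seconds.
$ mount-s3 --prefix a/b/ --metadata-ttl 60 my-bucket /mnt/s3-bucket
```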
If you're seeing slower throughput than expected (i.e., significantly slower than the network bandwidth available to your EC2 instance type), there may be a few areas to investigate.
If you're only reading from one file at a given time, the network interface may not be fully saturated.
Mountpoint supports Linux file system operations using FUSE.
Operations on files pass through several subsystems including the Linux VFS layer as well as the Mountpoint process.
These steps are serial, and so reading from a single file sequentially will be bounded by CPU performance rather than the available network throughput.
When possible, we recommend reading from files in parallel with multiple file handles (multiple calls to `open`) to maximize throughput.
**Note:** Some common utilities like `cp` will operate on files one-by-one, even when used with flags like `--recursive`. In these instances, we recommend parallelizing the operations with tools such as GNU Parallel.
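For example, a sketch using GNU Parallel to copy files out of the mount with several concurrent jobs (the paths and job count are illustrative):

```
# Copy every file under the mount with up to 8 parallel cp invocations.
# -print0 and -0 handle filenames containing whitespace safely.
$ find /mnt/s3-bucket/data -type f -print0 | parallel -0 -j 8 cp {} /local/destination/
```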
Request retries may also introduce delays in processing requests. We recommend reviewing Mountpoint logs to confirm if requests may be failing and incurring retries. For example, throttled requests being retried may introduce latency to file system requests. Learn more about how to use Mountpoint logging in our logging documentation. To further debug throttling errors, see the throttling errors section of this page.
When looking at the logs, throttling errors will appear as failed requests with `http_status=503` or `http_status=429`. For example:
```
[WARN] lookup{req=20094 ino=109 name="***"}:
list_objects{id=16589 bucket=*** continued=false delimiter=/ max_keys=1 prefix=***}: mountpoint_s3_client::s3_crt_client:
request failed request_type=Default http_status=503 range=None duration=426.995805ms ttfb=Some(7.681499ms) request_id=***

[WARN] open{req=20158 ino=1706 pid=1759}:
list_objects{id=16643 bucket=*** continued=false delimiter=/ max_keys=1 prefix=***}: mountpoint_s3_client::s3_crt_client:
request failed request_type=Default http_status=503 range=None duration=314.021865ms ttfb=Some(8.180981ms) request_id=***
```
The 503 and 429 status codes mean that request limits have been exceeded. Mountpoint itself does not do any throttling; these errors are returned by S3 or by dependent services, such as STS, which is used to provide credentials.
Amazon S3 automatically scales to high request rates. Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned Amazon S3 prefix. You can reduce the impact of throttling errors by distributing objects across multiple prefixes in your bucket.
By default, Mountpoint retries throttled requests up to a total of 10 attempts. You can increase this default by setting the `AWS_MAX_ATTEMPTS` environment variable.
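For example (the attempt count, bucket name, and mount point are illustrative):

```
# Allow up to 15 attempts per request for this mount.
$ AWS_MAX_ATTEMPTS=15 mount-s3 my-bucket /mnt/s3-bucket
```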
For more details on optimizing Amazon S3 performance and avoiding throttling errors, see the S3 best practices documentation.
Amazon S3 supports object uploads of sizes up to 5 TiB and a maximum part count of 10,000, as documented in the S3 User Guide. Mountpoint uses a part size of 8 MiB by default, which is optimal for reading and writing most objects; however, this limits the maximum upload size to 80,000 MiB (78.1 GiB).
If you encounter this limit, you may see error messages in your application similar to the one shown below for `dd`:

```
dd: error writing ‘/mnt/s3-bucket/200GiB-file’: File too large
```
You should also see an error message in Mountpoint's logs:
```
WARN write{req=100 ino=5 fh=2 offset=83886080000 length=1048576 pid=100 name="200GiB-file"}:
mountpoint_s3::fuse: write failed: upload error: object exceeded maximum upload size of 83886080000 bytes
```
For workloads uploading files larger than 78.1 GiB, we recommend configuring a larger part size using the `--write-part-size <MiB>` command-line argument.
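For example, a 64 MiB part size raises the limit to 640,000 MiB (about 625 GiB); the bucket name and mount point below are hypothetical:

```
# 64 MiB parts x 10,000 parts = 640,000 MiB maximum object size.
$ mount-s3 --write-part-size 64 my-bucket /mnt/s3-bucket
```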
For more information, see Mountpoint's configuration documentation.
If you are seeing errors such as "No signing credentials available, see CRT debug logs" or "No signing credentials found", this means that Mountpoint was unable to load credentials to sign the request it was attempting to make. This could be due to no credentials being available or an error with the credentials provider.
We recommend reviewing the AWS Common Runtime (CRT) client logs, enabled with the `--debug-crt` command-line argument, to identify the issue. The output is verbose; however, it should be possible to identify the error coming from the AWS CRT.
Review Mountpoint's logging documentation for more information on how to configure logging.
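For example, a sketch enabling CRT debug logging to a local directory (the log directory, bucket name, and mount point are hypothetical):

```
# Write verbose logs, including CRT output, to /tmp/mountpoint-logs.
$ mount-s3 --debug-crt --log-directory /tmp/mountpoint-logs my-bucket /mnt/s3-bucket
```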
As an example, the logs below show errors emitted when using an invalid `credential_source` property in an AWS profile.
```
DEBUG awscrt::AWSProfile: Creating profile collection from file at "/home/ec2-user/.aws/config"
INFO awscrt::AuthCredentialsProvider: static: profile my-profile has role_arn property is set to arn:aws:iam::111122223333:role/my-iam-role, attempting to create an STS credentials provider.
DEBUG awscrt::AuthCredentialsProvider: static: computed session_name as aws-common-runtime-profile-config-19
INFO awscrt::AuthCredentialsProvider: static: credential_source property set to SomeWrongSource
ERROR awscrt::AuthCredentialsProvider: static: invalid credential_source property: SomeWrongSource
```
For more information on how to configure the AWS credentials Mountpoint uses, see Mountpoint's configuration documentation.