
[Security] It's really easy to remove a protector. Can this be abused by ransomware? #251

Closed
Redsandro opened this issue Sep 11, 2020 · 19 comments

@Redsandro

Redsandro commented Sep 11, 2020

It's really easy to remove a protector/policy even if the protector is in the unlocked state.

fscrypt metadata destroy --protector=MOUNTPOINT:ID
fscrypt metadata destroy --policy=MOUNTPOINT:ID

Can this be abused by ransomware? Should we require admin rights (a sudo round trip) for those actions, so that if the user steps into a malicious buffer overflow or is tricked into running something, it gets stopped at the administrator elevation prompt? Or can we think of another solution?

@Redsandro
Author

@Maryse47 commented:

It's just as easy to remove any other file, or even all files. sudo is not a boundary if the user account has access to it and the attacker has arbitrary code execution. They can just run something in a loop while waiting for your credentials.

@Redsandro
Author

If the user has sudo rights. If the attacker catches your password. If the attacker specifically targets the subset of distributions that use sudo, as opposed to distributions where admin rights are acquired otherwise.

@Maryse47 If one makes enough assumptions, no security is relevant. I agree that the question could be: Is this relevant enough? In my opinion it is, or at least seems to be, from my limited understanding.

First, I don't believe it's fair to equate the removal of one little key, which instantly makes all your most important (because encrypted) files across all shares, possibly even offline ones, unreadable, with actual file removal. I have been the idiot who accidentally removed a huge parent directory rather than a small child directory almost a dozen times in my life. With a Ctrl+C, and one time a quick unplug of the computer when it froze, the damage was always limited. And in dated directory names (like photography), rm attacks the oldest directories first, meaning even the most lax backup policy leaves you likely to have a backup of the oldest files.

Second, a malicious actor being able to destroy your files is not the same as being exploitable or open to blackmail because a malicious actor can hold your files hostage.

Since this discussion will be similar to the famous FileZilla master password discussion, perhaps we can move past the questions of relevance, and think about possible solutions.


Now, for your entertainment, the FileZilla discussion as quoted from Bleeping Computer on Slashdot:

Following years of criticism and user requests, the FileZilla FTP client is finally adding support for a master password that will act as a key for storing FTP login credentials in an encrypted format.

Users have been requesting this feature for a decade, since 2007, and they have asked for it many times since then. All their requests fell on deaf ears and were met with refusal from the FileZilla maintainer.

By encrypting its saved FTP logins, FileZilla will finally thwart malware that scrapes the sitemanager.xml file and steals FTP credentials, which were previously stolen in plain text.

In November 2016, a user frustrated with the maintainer's stance forked the FileZilla FTP client and added support for a master password via a spin-off app called FileZilla Secure.

The author of FileZilla Secure took this action after his computer was infected with malware, and the malware stole the FileZilla password trove, a file named sitemanager.xml. Because FileZilla didn't store passwords in an encrypted format, the attacker had access to all the user's FTP credentials, stored as plain text inside the sitemanager.xml file.

The FileZilla maintainer has since merged the master password functionality back into FileZilla. Someone thanked the developer for adding this feature (after filing a request for it 9 years earlier), and he replied: "I'm glad you like a feature that doesn't even increase security."

@Redsandro Redsandro changed the title It's really easy to remove a protector. Can this be abused by ransomware? [Security] It's really easy to remove a protector. Can this be abused by ransomware? Sep 11, 2020
@josephlr
Member

josephlr commented Sep 11, 2020

I'm leaning towards requiring root permissions to remove fscrypt metadata. Two concerns:

  • We need to make sure unprivileged users can still create encryption metadata. I think this is possible with some weird Linux permissions, but I'm not sure.
  • Updating passwords on protectors might be much harder now, depending on how I implemented it (I've forgotten).

@Maryse47

If one makes enough assumptions, no security is relevant. I agree that the question could be: Is this relevant enough? In my opinion it is, or at least seems to be, from my limited understanding.

You make assumptions that make your proposal seem relevant, while in reality it isn't. You assume that when an attacker gets arbitrary code execution on your system, they go for your protector file instead of, for example, removing any of your files, encrypting any of your files, sending any of your files to a remote host, or anything else among the infinite number of ways they can harm you. You want to block exactly one door while leaving the other 1000 doors open and feel more secure.

rm attacks the oldest directories first, meaning even the most lax backup policy leaves you likely to have a backup of the oldest files

What if I told you that you can back up your protector and have a good laugh at an attacker who was stupid enough to break into your system only to take one file hostage?

The current fscrypt threat model is to make sure your encrypted files cannot be decrypted without the key, not to prevent someone from making a mess of your system. The latter is impossible for fscrypt to defend against, and any attempt to do so will look like security theater. I would recommend looking at sandbox solutions instead.

Since this discussion will be similar to the famous FileZilla master password discussion, perhaps we can move past the questions of relevance, and think about possible solutions.

No, this discussion won't be similar to FileZilla's encryption of its own files, especially as we are talking about a tool whose only job is to encrypt stuff.

@Redsandro
Author

You want to block exactly one door while leaving the other 1000 doors open and feel more secure.

fscrypt creates exactly one new door in this context, and I suggest we block this door. The other doors are not within the scope of this project.

@ebiggers
Collaborator

It's really easy to remove a protector/policy even if the protector is in the unlocked state.

Just FYI, this was possible with eCryptfs too, which the Ubuntu installer used for home directory encryption for years, and I don't think people really complained about it:

scp /home/.ecryptfs/$USER/.ecryptfs/* $RANSOMWARE_SERVER
rm -f /home/.ecryptfs/$USER/.ecryptfs/*

Ubuntu did encourage people to back up the eCryptfs metadata files, though. We could make fscrypt recommend (in command-line output and in documentation) backups more strongly, and provide some convenience commands to backup and restore metadata.

Also note that if fscrypt was running as root when it created the metadata files, e.g. if you ran sudo fscrypt encrypt dir --user=foo, then currently the metadata files will be owned by root and therefore be undeletable, as requested. It's somewhat incidental that it works that way (and someone could just as easily argue that this is broken and that a user should always own their login protector, etc.), but if you find the current behavior useful and you'd like to take advantage of it, you certainly could.

We need to make sure unprivileged users can still create encryption metadata. I think this is possible with some weird Linux permissions, but I'm not sure.

I don't think this is possible without forcing all metadata creation to go through a setuid binary. For the metadata files to be undeletable and unmodifiable by a user, they'd need to be owned by a different user. But if the user creates a file, they will own it. And on Linux, a user doesn't have permission to chown their files to someone else.
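A minimal sketch of that ownership rule (the filename below is a made-up stand-in, not a real fscrypt metadata file):

```shell
# On Linux, whoever creates a file owns it, and only root may chown it
# away to another user. So metadata created by an unprivileged user is
# always owned by, and therefore deletable by, that user.
touch demo_protector
[ "$(stat -c '%u' demo_protector)" = "$(id -u)" ] && echo "creator owns the file"
# prints "creator owns the file"
rm -f demo_protector
```

An unprivileged `chown root:root demo_protector` would fail with EPERM, which is exactly why a setuid helper would be needed to create root-owned metadata on a user's behalf.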

Updating passwords on protectors might be much harder now, depending on how I implemented it (I've forgotten).

If a user has permission to write to the protector file, then they could just overwrite it with garbage too.

Since this discussion will be similar to the famous FileZilla master password discussion, perhaps we can move past the questions of relevance, and think about possible solutions.

FWIW, these cases are quite different. The FileZilla thing was about protecting a file containing FTP passwords, which could be used to log into remote servers, and that file could be exposed via a route other than local malware -- such as a path traversal bug or compromise of a backup. Those types of things really happen -- I've fixed path traversal bugs in software before, where anyone could download any file from a server's filesystem, etc. I'm not sure why it was so controversial. With fscrypt, on the other hand, the metadata files have a very different purpose. The concern isn't about them being leaked, but rather being deleted.

@Redsandro
Author

Redsandro commented Sep 11, 2020

@ebiggers commented:

Just FYI, this was possible with eCryptfs too

I believe you are not entirely correct. That directory was only used by the home-encryption mechanism built on top of eCryptfs. It contained mount-path metadata and a keyring signature, and was used for automatically unlocking the home directory after logging in, comparable to the fscrypt login protector.

However, the unwrapped passphrase was printed to a text file in your home directory, and Ubuntu kept reminding you to write it down as a recovery passphrase.

Now the big difference is that, unlike the recovery protector, this unwrapped passphrase does not require metadata. With all metadata removed, you could simply mount the files using this passphrase alone. I used manual eCryptfs separately from home encryption for a long time, sharing a directory over Dropbox and reading it on all computers using the passphrase alone. It was not possible to remove the metadata and render all encrypted files useless.

Also note that if fscrypt was running as root when it created the metadata files, e.g. if you ran sudo fscrypt encrypt dir --user=foo, then currently the metadata files will be owned by root and therefore be undeleteable, as requested.

So I can chown all metadata to root and chmod to 644 for my users without breaking anything?

This could be an opportunity. Would something like this imaginary output be something to think about?

$ fscrypt encrypt /mnt/data/police_reports

You are attempting to encrypt a directory without root privileges.
It is *highly recommended* to use root privileges to set up an encrypted directory. E.g.:

sudo fscrypt encrypt /mnt/data/police_reports --user=redsandro

This way, your data protectors become write protected and cannot accidentally or 
maliciously be removed without acquiring root privileges.

Are you sure you want to continue _without_ root privileges? [y/N]

We could make fscrypt recommend (in command-line output and in documentation) backups more strongly, and provide some convenience commands to backup and restore metadata.

If nothing else can realistically be done, I would recommend this. Give a user a recovery passphrase, and they will assume that's all they need in case of trouble. Only if they ever lose their metadata, for whatever reason, will they discover that the recovery passphrase unexpectedly no longer works.

Perhaps (also) add a paragraph to fscrypt_recovery_readme.txt:

(...)

Copy this passphrase to a safe place if you want to still be able to unlock this
directory if you re-install your system or connect this storage media to a
different system (which would result in your login protector being lost).

This directory is encrypted using the following metadata:
policy "xyzxyz"
login protector "uvwuvw"
recovery protector "abcabc"

For absolute recoverability, you should back up policy "xyzxyz" and protector "abcabc",
located at "/mnt/data/.fscrypt", in case your metadata becomes damaged.
The recovery passphrase *will not work* without this metadata.

You can run [convenience commands to backup metadata].
In case of emergency, run [convenience commands to restore metadata].

Perhaps damaged is a good word. I hadn't thought about it before, but I can imagine a user with a 50 GB encrypted home will be a bit salty when 200 bytes of metadata get corrupted and all files become unreadable.

@ebiggers
Collaborator

It looks like you're right about eCryptfs being recoverable from the "mount passphrase" only.

It's strange because that means that the KDF to go from the eCryptfs mount passphrase to the actual encryption key is unsalted. And the "mount passphrase" isn't normally a passphrase the user specifies, but rather a value that gets unwrapped via the real passphrase. So it might as well just be the key itself.

In fact the eCryptfs documentation "highly advises" providing a salt. I guess that no one actually does that!

Anyway, the closest equivalent for fscrypt would be if we provided the ability to directly export the policy key for a directory. It would be a 64-byte key, so 128 hex characters. It could be used to unlock the directory (or create a policy and protector file for it) without access to any other files.
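For scale, the arithmetic behind that "128 hex characters" figure can be sketched in shell (this only illustrates the size of such a key; it is not an fscrypt command, as no export feature exists at this point):

```shell
# A 64-byte key hex-encodes to exactly 128 characters
# (2 hex digits per byte). Generate 64 random bytes and
# render them as a single hex string.
key=$(head -c 64 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "${#key}"    # prints 128
```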

Of course, once you unlock your directory after a hypothetical ransomware attack deleted/encrypted its fscrypt metadata files, you'll probably find that the ransomware also encrypted all the files in the directory too :-)

@Redsandro
Author

@ebiggers commented:

It's strange because that means that the KDF to go from the eCryptfs mount passphrase to the actual encryption key is unsalted. And the "mount passphrase" isn't normally a passphrase the user specifies, but rather a value that gets unwrapped via the real passphrase. So it might as well just be the key itself.

In fact the eCryptfs documentation "highly advises" providing a salt. I guess that no one actually does that!

Exactly! It was a known security risk for years. Ubuntu never really implemented a salt, and your metadata could be cracked using John the Ripper. I started storing most of my files outside my home directory in custom encrypted eCryptfs directories.

I don't believe full disk encryption is a good workaround, for various reasons, and I don't understand why Canonical or any other party didn't choose to fix/update/fork eCryptfs rather than shoot a mosquito (protect my word processor document) with a cannon (encrypt the universe with LUKS). Perhaps they're just buying valuable developer time, since ZFS on Linux (ZoL) 0.8.3 landed in Ubuntu 20.04 and it includes native encryption that works quite beautifully. You can even boot from ZFS now.

So yes, for the servers where we use RAIDZ arrays to store our media footage, ZFS with native encryption is really nice and fast. But for laptops, home directories, SSDs, ext4, documents, and pictures, we really like something native, lightweight, and file-based. fscrypt is really the winner on portable devices and desktops, in my opinion.

the closest equivalent for fscrypt would be if we provided the ability to directly export the policy key for a directory. It would be a 64-byte key, so 128 hex characters. It could be used to unlock the directory (or create a policy and protector file for it) without access to any other files.

This would be a good option! No need to back up files and figure out where to keep them; just a string that you can keep in your private password manager, or in a safe. Or, in a small business environment, the IT manager could keep the hex recovery keys and the other personnel wouldn't have to worry about it.

The metadata description could follow the hex key after whitespace, similar to RSA keys, so that when you paste the line back into fscrypt, the newly created protector gets the same description as the exported hex key.
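Splitting such a line back into key and description is trivial with shell parameter expansion. A sketch (the line format and values are hypothetical, not an fscrypt feature; the key is shortened here for readability, as a real one would be 128 hex characters):

```shell
# Hypothetical "key<space>description" line, like an SSH public key line.
line="0123abcd policy xyzxyz, recovery protector abcabc"
key=${line%% *}      # everything before the first space: the key material
label=${line#* }     # everything after the first space: the description
echo "$key"          # prints 0123abcd
echo "$label"        # prints policy xyzxyz, recovery protector abcabc
```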

Of course, once you unlock your directory after a hypothetical ransomware attack deleted/encrypted its fscrypt metadata files, you'll probably find that the ransomware also encrypted all the files in the directory too :-)

At least then it wasn't fscrypt's door that was left open. 💪

@Maryse47

Maryse47 commented Sep 11, 2020

I still don't see why you won't just back up protectors instead of making such complicated schemes. You want this project to highly recommend something for all users, but you are the only person actually recommending it so far.

At least then it wasn't fscrypt's door that was left open

You seem unable to realize that you still lost in this scenario and the attacker won, so the proposed solution achieved nothing. fscrypt doesn't make your system less secure.

@ebiggers
Collaborator

As discussed above, implementing this feature (requiring root to remove protectors) would require that all fscrypt metadata creation be done using a setuid binary, as far as I can tell. Adding a setuid binary would likely be counterproductive security-wise, as bugs in setuid binaries can be exploited to escalate privileges. It would be really nice to avoid adding a setuid binary.

Therefore, I am going to close this issue, in favor of encouraging users to back up their fscrypt metadata (and their files). Documentation was recently added that describes how to do this now (https://github.com/google/fscrypt#backup-restore-and-recovery), and there is already an open issue for dedicated backup and recovery commands which would make this easier (#125).

@hirak99

hirak99 commented Mar 11, 2022

Why not make the two directories under .fscrypt non-writable by group and others (e.g. mode 755) by default?
In fact, it looks like the files are already 644; it's the directories which are 777. That would be a simple change that also resolves this bug.

I find it weird that we leave something so critical at the filesystem root and give it mode 777.

Non-root users being able to lock out critical files can lead to inadvertent catastrophes.

This allows any non-root user to run the equivalent of sudo rm -rf / (as far as all encrypted directories go).

I still don't see why you won't just back up protectors instead of making such complicated schemes.

The problem with backing up is that automating a script that backs up every drive that uses fscrypt is itself complicated enough. And without an automated backup, you risk getting out of sync with new keys.
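The kind of backup loop being described might look like this; it is run here against a mock tree with made-up names, while a real script would enumerate the actual mountpoints that use fscrypt:

```shell
# Mock a mountpoint with fscrypt metadata (paths are illustrative only).
mkdir -p demo_mnt/.fscrypt/protectors demo_mnt/.fscrypt/policies
touch demo_mnt/.fscrypt/protectors/uvwuvw demo_mnt/.fscrypt/policies/xyzxyz

# Copy each metadata tree into a per-mount backup directory.
backup="fscrypt-metadata-backup"
for mnt in demo_mnt; do          # real usage: / /home /mnt/data ...
    [ -d "$mnt/.fscrypt" ] || continue
    mkdir -p "$backup/$mnt"
    cp -a "$mnt/.fscrypt" "$backup/$mnt/"
done

ls "$backup/demo_mnt/.fscrypt/protectors"   # prints uvwuvw
```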

Honestly, I think a better alternative to backing up is to run sudo chmod og-w <mount>/.fscrypt/*. It seems the best thing would be to make that the default (and until then, this is the interim solution I am planning to use).
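What that interim hardening does to the directory modes, sketched against a mock directory (real usage targets the actual <mount>/.fscrypt path and needs sudo):

```shell
# Mock a world-writable metadata directory, then drop group/other write.
mkdir -p demo/.fscrypt/protectors
chmod 777 demo/.fscrypt/protectors
chmod og-w demo/.fscrypt/protectors
stat -c '%a %n' demo/.fscrypt/protectors   # prints "755 demo/.fscrypt/protectors"
```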

I surmise that your goal is to allow every end user the ability to create system-level keys. If that's the case, maybe instead we should allow every user to create their own ~/.fscrypt (in addition to /.fscrypt, which remains read-only).

In my view, the priorities should be as follows -

  1. [P0] Allow superusers to create and manage filesystem-spanning keys readable by all
  2. [P0] Do NOT allow standard users to delete or tinker with these keys, i.e. revoke their write access
  3. [P2] Maybe allow standard users to also create their own keys

I'd in fact argue that (3) is not very useful, and it definitely does not trump (2).
Both LUKS and eCryptfs, for example, require root access to create keys. All home users of Linux distros will have superuser access. So we are not gaining much by allowing anyone to create or touch root-level keys, while we are weakening the filesystem.

@hirak99

hirak99 commented Mar 11, 2022

Hold on a minute... it looks like I was mistaken; what I was concerned about is not an issue.

I see the sticky bit set on the directories under /.fscrypt/*. So no one other than whoever created a protector will be able to erase it, which is great :)

Sorry to necro the old thread. I was searching about fscrypt security, and came across this thread.

@ebiggers
Collaborator

The directories have always had the sticky bit, so users can't delete other users' policies and protectors. I think you misunderstand this issue; it was complaining about users being able to delete their own policies and protectors.

Anyway, it's worth noting that fscrypt v0.3.3 does offer a choice of non-world-writable metadata directories for another reason (not allowing users to exhaust filesystem space). Though, login protectors are still owned by the user, not root.
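The sticky bit mentioned above works the same way as on /tmp: on a world-writable directory with mode 1777, anyone can create entries, but only an entry's owner (or the directory's owner, or root) may delete or rename it. A scratch-directory sketch (the path is made up):

```shell
# Create a world-writable directory with the sticky bit, as used for
# /tmp and for the fscrypt metadata directories discussed above.
mkdir -p demo_meta
chmod 1777 demo_meta
stat -c '%a' demo_meta   # prints 1777
```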

@hirak99

hirak99 commented Mar 11, 2022

Yes, I did misunderstand. Sorry about that!

Also, yes, I saw the commit "filesystem: create metadata files with mode 0600" and the currently pending pull request, and saw that there are going to be more options for changing the .fscrypt directory. Can't wait for this to come to Arch =)


@Redsandro you might want to consider always using sudo fscrypt encrypt when encrypting. This ensures that users will have access to the protector, but will not be able to delete it.

(And for existing protectors, sudo chown root:root protectors/* ensures they are not accidentally or maliciously deleted.)

@ebiggers
Collaborator

Can't wait for this to come to Arch =)

Arch Linux already packages fscrypt v0.3.3.

Though keep in mind that just upgrading won't change the permissions on existing files and directories.

@Redsandro
Author

Redsandro commented Mar 12, 2022

@hirak99 commented:

@Redsandro you might want to consider always using sudo fscrypt encrypt when encrypting. This ensures that users will have access to the protector, but will not be able to delete it.

Yes exactly. That's why I proposed the following:

Would something like this imaginary output be something to think about?

$ fscrypt encrypt /mnt/data/police_reports

You are attempting to encrypt a directory without root privileges.
It is *highly recommended* to use root privileges to set up an encrypted directory. E.g.:

sudo fscrypt encrypt /mnt/data/police_reports --user=redsandro

This way, your data protectors become write protected and cannot accidentally or 
maliciously be removed without acquiring root privileges.

Are you sure you want to continue _without_ root privileges? [y/N]

If I understand correctly, @ebiggers recommended against this at the time because it could lead to privilege escalation attacks. I may be misparaphrasing here, because it was a while ago and I have not really taken a second look at this since. I figured I'd take another look the next time I do a clean install on a new machine, which hasn't happened yet.

@ebiggers
Collaborator

Being able to encrypt directories as a non-root user is a feature, not a bug. There are certainly some trade-offs, though.

@hirak99

hirak99 commented Mar 13, 2022

+1
I didn't realize that you had already proposed this, @Redsandro; sorry.
But I strongly agree it is good as is.

It's great that powerusers can protect it.

It should be expected that anything you do without sudo can also be destroyed by anyone who gets your access (and that includes your encryption keys and your data).
