[Security] It's really easy to remove a protector. Can this be abused by ransomware? #251
> If the user has sudo rights. If the hacker catches your password. If the hacker targets the subset of sudo distributions specifically, as opposed to distributions where admin rights are acquired otherwise.

@Maryse47 If one makes enough assumptions, no security is relevant. I agree that the question could be: is this relevant enough? In my opinion it is, or at least it seems to be, from my limited understanding.

First, I don't believe it's fair to equate the removal of one little key, which instantly makes all your most important (because encrypted) files across all shares (possibly even offline ones) unreadable, with actual file removal. I have been that idiot who accidentally removed a huge parent directory rather than a small child directory almost a dozen times in my life. With a Ctrl+C, and one time a quick unplug of the computer when it froze, the damage was always limited, especially in dated directory trees (like photography).

Second, a malicious actor being able to destroy your files is not the same as being exploitable or open to blackmail because a malicious actor can hold your files hostage.

Since this discussion will be similar to the famous FileZilla master password discussion, perhaps we can move past the questions of relevance and think about possible solutions. For your entertainment, see the FileZilla discussion as quoted from Bleeping Computer on Slashdot.
I'm leaning towards requiring root permissions to remove fscrypt metadata. Two concerns: […]
You make assumptions that make your proposal look relevant, while in reality it isn't. You assume that when an attacker gets arbitrary code execution on your system, they go for your protector file instead of, for example, removing any of your files, encrypting any of your files, sending any of your files to a remote host, or anything else among the infinite number of ways they can harm you. You want to block exactly one door while leaving the other 1000 doors open, and feel more secure.
What if I told you that you can back up your protector and have a good laugh at the attacker who was so stupid as to break into your system only to take one file hostage? The current fscrypt threat model is to make sure your encrypted files cannot be decrypted without the key, not to prevent someone from making a mess of your system. The latter is impossible for fscrypt to defend against, and any attempt to do so will look like security theater. I would recommend looking for some sandbox solutions instead.
No, this discussion won't be similar to FileZilla's encryption of its own files, especially as we are talking about a tool whose only job is to encrypt stuff.
Just FYI, this was possible with eCryptfs too, which the Ubuntu installer used for home directory encryption for years, and I don't think people really complained about it:
Ubuntu did encourage people to back up the eCryptfs metadata files, though. We could make […]. Also note that if […]
I don't think this is possible without forcing all metadata creation to go through a setuid binary. For the metadata files to be undeletable and unmodifiable by a user, they'd need to be owned by a different user. But if the user creates a file, they will own it. And on Linux, a user doesn't have permission to chown their files to someone else.
If a user has permission to write to the protector file, then they could just overwrite it with garbage too.
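To make the ownership point above concrete, here is a minimal Go sketch (illustrative only, not fscrypt code): an unprivileged process creates a file and then tries to give it away to root, which the kernel refuses without CAP_CHOWN.

```go
// Minimal sketch: an unprivileged user cannot "give away" a file.
// Run as a normal user; the chown to uid/gid 0 fails with EPERM,
// which is why user-created metadata can't be made root-owned
// (and thus undeletable by its creator) after the fact.
package main

import (
	"fmt"
	"os"
)

func main() {
	f, err := os.CreateTemp("", "protector-")
	if err != nil {
		panic(err)
	}
	f.Close()
	defer os.Remove(f.Name())

	if err := os.Chown(f.Name(), 0, 0); err != nil {
		fmt.Println("chown failed as expected:", err) // "operation not permitted"
	}
}
```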
FWIW, these cases are quite different. The FileZilla thing was about protecting a file containing FTP passwords, which could be used to log into remote servers, and that file could be exposed via a route other than local malware -- such as a path traversal bug or compromise of a backup. Those types of things really happen -- I've fixed path traversal bugs in software before, where anyone could download any file from a server's filesystem, etc. I'm not sure why it was so controversial. With […]
@ebiggers commented:
I believe you are not entirely correct. This directory was only used for the home encryption mechanism built on top of eCryptfs. It contained mount path metadata and a keyring signature. It was used for the automatic unlocking of the home directory after logging in, comparable to the […]. However, the unwrapped passphrase was printed to a text file in your home directory, and Ubuntu kept reminding you to write it down as a recovery passphrase. Now the big difference is that, unlike the recovery protector, this unwrapped passphrase does not require metadata. With all metadata removed, you could simply mount the files using this passphrase alone. I've used manual […]
So I can […]. This could be an opportunity. Would something like this imaginary output be something to think about? […]
If nothing else can realistically be done, I would recommend this. Give a user a recovery passphrase, and they will assume that's all they need in case of trouble. Only if they ever lose their metadata, for whatever reason, will they find out that the recovery passphrase unexpectedly no longer works. Perhaps (also) add a paragraph to […]
Perhaps "damaged" is a good word. I hadn't thought about it before, but I can imagine a user with a 50 GB encrypted home directory will be a bit salty when 200 bytes of metadata get corrupted and all their files become unreadable.
It looks like you're right about eCryptfs being recoverable from the "mount passphrase" only. It's strange, because that means the KDF that goes from the eCryptfs mount passphrase to the actual encryption key is unsalted. And the "mount passphrase" isn't normally a passphrase the user specifies, but rather a value that gets unwrapped via the real passphrase, so it might as well just be the key itself. In fact, the eCryptfs documentation "highly advises" providing a salt. I guess no one actually does that! Anyway, the closest equivalent for […]. Of course, once you unlock your directory after a hypothetical ransomware attack has deleted/encrypted its […]
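To illustrate what a salt buys you here, a generic salted-KDF sketch in Go (PBKDF2 is used for illustration; this is not eCryptfs's actual scheme): with a random salt, the same passphrase no longer maps to one precomputable key.

```go
// Generic salted KDF sketch (PBKDF2-SHA256) -- illustrative only,
// not eCryptfs's actual key derivation. Without a salt, the mapping
// passphrase -> key is fixed, so the passphrase is effectively the
// key itself and can be attacked with precomputed tables.
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"

	"golang.org/x/crypto/pbkdf2"
)

func main() {
	passphrase := []byte("correct horse battery staple")

	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}

	// Same passphrase + different salt => different key, so the salt
	// (stored in metadata) becomes necessary for recovery.
	key := pbkdf2.Key(passphrase, salt, 600_000, 32, sha256.New)
	fmt.Printf("salt=%x\nkey=%x\n", salt, key)
}
```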
@ebiggers commented:
Exactly! It was a known security risk for years. Ubuntu never really implemented the salt, and your metadata could be cracked using John the Ripper. I started storing most of my files outside my home directory in custom encrypted eCryptfs directories. I don't believe full disk encryption is a good workaround, for various reasons, and I don't understand why Canonical or any other party didn't choose to fix/update/fork eCryptfs rather than shoot the mosquito (protect my Word document) with a cannon (encrypt the universe with LUKS). Perhaps they're just buying valuable developer time, since ZFS on Linux (ZoL) 0.8.3 landed in Ubuntu 20.04 and it includes native encryption that works quite beautifully. You can even boot from ZFS now. So yes, for the servers where we use RAIDZ arrays to store our media footage, ZFS with native encryption is really nice and fast. But for laptops, home directories, SSDs, ext4, documents, and pictures, we really like something native, lightweight, and file-based.
This would be a good option! No need to back up some files and figure out where to keep them; just a string that you can keep in your private password manager. Or in a safe. Or, in a small business environment, the IT manager could keep the hex recovery keys and the other personnel wouldn't have to worry about it. The metadata description could follow the hex key after whitespace, similar to RSA keys, so that when you paste the line back into […] (see the sketch below).
At least then it wasn't fscrypt's door that was left open. 💪
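A sketch of how such a line could be parsed (purely hypothetical format; nothing in fscrypt implements this, and `parseRecoveryLine` is a made-up helper), splitting on the first whitespace like an OpenSSH public key and its comment:

```go
// Hypothetical recovery-line format: "<hex key> <free-form description>".
// Not implemented by fscrypt; this just sketches the proposal above.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func parseRecoveryLine(line string) (key []byte, desc string, err error) {
	fields := strings.SplitN(strings.TrimSpace(line), " ", 2)
	key, err = hex.DecodeString(fields[0])
	if err != nil {
		return nil, "", fmt.Errorf("bad hex key: %w", err)
	}
	if len(fields) == 2 {
		desc = fields[1] // everything after the key is a human-readable label
	}
	return key, desc, nil
}

func main() {
	key, desc, err := parseRecoveryLine(
		"8f3a2b1c4d5e6f708192a3b4c5d6e7f8 laptop home recovery")
	if err != nil {
		panic(err)
	}
	fmt.Printf("key=%x desc=%q\n", key, desc)
}
```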
I still don't see why you won't just back up protectors instead of making such complicated schemes. You want this project to highly recommend something for all users, but you are the only person actually recommending it so far.
You seem unable to realize that you still lose in this scenario, and the attacker wins, so the proposed solution achieves nothing. fscrypt doesn't make your system less secure.
As discussed above, implementing this feature (requiring root to remove protectors) would require that all metadata creation go through a setuid binary. Therefore, I am going to close this issue, in favor of encouraging users to back up their metadata.
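For anyone following that advice, a minimal backup sketch in Go, assuming the default `/.fscrypt` metadata location (fscrypt keeps metadata per filesystem, so repeat for each mountpoint you use it on; a plain `cp -r` does the same job):

```go
// Minimal metadata-backup sketch. Assumes the default /.fscrypt location;
// fscrypt stores metadata per filesystem, so run this for each mountpoint.
package main

import (
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

func copyTree(src, dst string) error {
	return filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		target := filepath.Join(dst, rel)
		if d.IsDir() {
			return os.MkdirAll(target, 0o700) // keep the backup private
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(target, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	dst := filepath.Join(os.Getenv("HOME"), "fscrypt-metadata-backup")
	if err := copyTree("/.fscrypt", dst); err != nil {
		panic(err)
	}
}
```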
Why not make the two directories under .fscrypt mode 644, i.e., not group/world writable, by default? I find it weird that we leave something so critical in the root directory and give it 777 permissions. Non-root users being able to lock out critical files can lead to inadvertent catastrophes. This allows any non-root user to run the equivalent of […]
The problem with backing up is that automating a script that backs up every drive which uses fscrypt is complicated enough, and without an automated backup, you risk getting out of sync with new keys. Honestly, I think a better alternative to backing up is to run […]

I surmise that your goal is to allow every end user the ability to create a system-level key. If that's the case, maybe instead we should allow every user to create their own […]

In my view, the priorities should be as follows: […]
I'd in fact argue that (3) is not very useful, and definitely does not trump (2).
Hold on a minute... it looks like I was mistaken; what I was concerned about is not an issue. I see the sticky bit set on the directories under /.fscrypt/*. So no one other than whoever created a protector will be able to erase it, which is great :) Sorry to necro the old thread. I was searching for information about fscrypt security and came across this thread.
The directories have always had the sticky bit, so users can't delete other users' policies and protectors. I think you misunderstand this issue -- it was complaining about users being able to delete their own policies and protectors. Anyway, it's worth noting that […]
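You can see the sticky bit with `ls -ld /.fscrypt/protectors` (the trailing `t` in the mode string), or check it programmatically; a small Go sketch, assuming the default metadata paths:

```go
// Check the sticky bit on the fscrypt metadata directories (default
// paths assumed). In a sticky, world-writable directory, only a file's
// owner or root may delete or rename entries -- the same rule as /tmp.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, dir := range []string{"/.fscrypt/protectors", "/.fscrypt/policies"} {
		info, err := os.Stat(dir)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: sticky=%v perm=%v\n",
			dir, info.Mode()&os.ModeSticky != 0, info.Mode().Perm())
	}
}
```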
Yes, I did misunderstand. Sorry about that! Also, yes, I saw the commit "filesystem: create metadata files with mode 0600" and the currently pending pull request. I saw that there are going to be more options for changing the .fscrypt directory. Can't wait for this to come to Arch =) @Redsandro you might want to consider the option of always using […]. (And on existing protectors, […])
Arch Linux already packages […]. Though keep in mind that just upgrading won't change the permissions on existing files and directories.
Yes, exactly. That's why I proposed the following: […]
If I understand correctly, @ebiggers recommended against this at the time because it could lead to privilege escalation attacks. I may be misparaphrasing here, because it was a while ago and I have not really taken a second look at this since. I figured I'd take another look by the time I do a clean install on a new machine, which hasn't happened yet.
Being able to encrypt directories as a non-root user is a feature, not a bug. There are certainly some trade-offs, though.
+1 It's great that power users can protect it. It should be expected that anything you do without sudo can also be destroyed by anyone who gets your access (and that includes your encryption keys and your data).
It's really easy to remove a protector/policy, even if the protector is in the unlocked state.
Can this be abused by ransomware? Should we enforce admin rights for those actions, so that if a user steps into a malicious buffer overflow or is tricked into running something, it will be stopped at the administrator elevation prompt? Or can we think of another solution?