Multiple users or ability to access subfolders in buckets #4
Howdy my friend. At your service! I'm always delighted to partner to get problems solved, so realize that I am here and reading/thinking about all posts. Am I sensing two puzzles in the post? One about the authentication message and one about user permissions? I'm thinking about the latter one first. As it currently stands, the daemon runs as a single GCP service identity. This means that all interactions with GCS are done under the auspices of a single identity: whoever you are, if you pass the SFTP login, then you will be reading and writing to GCS using one single identity. What might we do to resolve this? Option 1 or Option 2: both of these are available (I believe) for immediate deployment on GCP through the GCP Marketplace.
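To make "one single identity" concrete, here is a minimal hypothetical sketch (illustrative names and paths only, not the project's actual source): the daemon builds one Storage client from one service-account key at startup, and every SFTP session's reads and writes go through it.

```typescript
// Hypothetical sketch (not the project's actual code): the daemon holds ONE
// Storage client built from ONE service-account key, so GCS only ever sees
// that single identity, regardless of which SFTP user logged in.
import { Storage } from "@google-cloud/storage";

const storage = new Storage({ keyFilename: "/path/to/service-account.json" });
const bucket = storage.bucket("my-bucket");

// Every SFTP session's reads end up in a call shaped roughly like this one:
async function readFile(fileName: string): Promise<Buffer> {
  const [contents] = await bucket.file(fileName).download();
  return contents;
}
```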
That's commitment! Such a fast reply. (Edit: forgot to THANK YOU)
It should be possible to code it up. I have to think about it some. Let's make sure that I have understood the requirements. Let's imagine you have two GCP users, user "A" and user "B". Let's now also imagine you have a GCS bucket called gs://my-bucket. I'm assuming that there might be two folders ... /user-A-files and /user-B-files. We want user "A" to log in and do work against their folder and not be allowed to do work against user "B"'s ... and vice versa. I'm assuming that the security of "who is allowed to do what" (otherwise known as authorization) is going to be governed by GCS security and GCP IAM. As such, the puzzle for the implementation of this project becomes "How do we make requests to GCS as user 'A' vs user 'B', as opposed to the current story, which is a configured constant user?". The only way I could see that ever working is if the daemon logically performs requests "as" those users, which means that the daemon has to "become" those users (even transiently). I have never researched (and hence attempted) such logic before, so I don't know what would be involved (yet). My guess is that we would have to learn the passwords for the users, which would mean that we would have to supply the passwords via the SFTP login. IF this were implemented (and I'm not committing to that :-) ... does this sound/feel like a solution to the puzzle for your needs?
I love open-source projects, omg. Because if so: since, as you said, this is not a vendor product and it is not meant to be one, I don't think that could be an issue, as I don't need 1,000 users. Opening a specific port per user would mean launching a daemon per user per port, but against one single bucket. Do you think this workaround would be OK?
It would be relatively easy to constrain file access to forced root folders on a per-instance basis. When a request is made to access file "X", that could be remapped to "/userA/X". We would also have to disable relative folders (e.g. "..") so we can't access "../userB/Y". How would you be asking your users to connect? Via a userid/password or a public key? If via userid/password, one option would be that we could supply a configuration file that maps users to passwords and root directories.
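To sketch what that could look like (purely hypothetical names; nothing below exists in the project today): a small configuration maps each user to a password and a root folder, and every incoming path is forced under that root, with ".." components that would escape it rejected.

```typescript
// Purely hypothetical sketch -- illustrative names, not the project's real code.
import * as path from "path";

interface UserEntry {
  password: string;    // in a real deployment this would be a hash, not plain text
  rootFolder: string;  // e.g. "user-A-files"
}

// Imagined configuration mapping users -> passwords and root folders.
const users: Record<string, UserEntry> = {
  "user-a": { password: "secret-a", rootFolder: "user-A-files" },
  "user-b": { password: "secret-b", rootFolder: "user-B-files" },
};

// Remap a requested path "X" to "<rootFolder>/X" and refuse anything
// (such as "../user-B-files/Y") that would escape the user's root.
function mapPath(userId: string, requested: string): string {
  const root = users[userId].rootFolder;
  const mapped = path.posix.normalize(path.posix.join(root, requested));
  if (mapped !== root && !mapped.startsWith(root + "/")) {
    throw new Error("access outside the user's root folder is not allowed");
  }
  return mapped;
}

// Example: mapPath("user-a", "X") === "user-A-files/X"
//          mapPath("user-a", "../user-B-files/Y") throws
```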
OK, I believe you found the way. User/password seems the easiest here.
The "authenticates on the fly" was what I originally had in mind but I don't know how to do that yet (nor know if its possible). The demon would appear to have to know the userid/password for a user and then authenticate with GCP as that user and then run the GCS APIs as that identity once authorized by GCP. I've never walked that path ... it may be possible or it may not. To be continued on that front. In the story with "remapping" paths, the demon would still run as one identity that would have full authority over the bucket. That would still be a single identity. |
I don't mind maintaining an open port/SFTP process per user tbh; this sounds like a great workaround already.
Sadly, it's not going to be one single place (as the code currently exists). Scattered throughout the code (search on the string "bucket") you will find lots of places where GCS bucket access is performed. It seems all of these take a file name as input. Each place would have to be found and replaced with a call to a function that does not yet exist: today the raw file name goes straight into the bucket call, and each call site would have to be changed to route the name through the new remapping function that we would then add.
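For illustration, a purely hypothetical before/after sketch (readFileToday, readFileRemapped and mapPath are made-up names, and the @google-cloud/storage calls are just an assumption about how the bucket access might look; the real call sites are whatever the "bucket" search turns up):

```typescript
// Hypothetical before/after sketch -- none of these names exist in the project.
import { Bucket } from "@google-cloud/storage";

// The remapping helper sketched a few comments above.
declare function mapPath(userId: string, requested: string): string;

// Today (roughly): the raw SFTP path goes straight to the bucket.
async function readFileToday(bucket: Bucket, fileName: string): Promise<Buffer> {
  const [contents] = await bucket.file(fileName).download();
  return contents;
}

// After the change: the same call site routes the path through the new
// remapping function before touching the bucket.
async function readFileRemapped(bucket: Bucket, userId: string, fileName: string): Promise<Buffer> {
  const [contents] = await bucket.file(mapPath(userId, fileName)).download();
  return contents;
}
```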
Now ... I haven't thought all this through yet ... so if you do decide to fork and play, go for it and enjoy. But I need to think more on this whole notion before I'll accept a pull request to merge it all back in if you make changes. I'd prefer to get the design/thinking right rather than make the daemon have hundreds of options that contradict each other.
Oh wow, for sure!
Since this is a pet project and not one for work, it has to be done outside of work. Next week I have to resit one of the GCP certifications (they expire after 2 years), so I have to swot all weekend; otherwise I could have studied then. As it stands, we might be looking at a few weeks out. I'm assuming this is for a personal project or some small/medium sized business? If it were corporate, I'd have said: email your business details to [email protected] and I could reach out to your Google colleagues and maybe get approval to carve out some work hours based on that.
It's actually for one of my clients! You are a great developer; this is what I love the most about open source, kudos! I'll type you an email then, with my corporate address from this client, and we can continue from there?
Hi!
Thanks for your package!
Although I encounter the
No supported authentication methods available (server sent: )
issue in every SFTP client apart from the shell (I don't understand why FileZilla sends a "none" authentication method though) ... I wondered: since you can't launch more than one instance of the daemon on the same port (I thought of maintaining a bucket per user), but you also can't give a subdirectory as a bucket (I tried; it gave me a
Couldn't read directory: Failure
error), how can I use this to manage a single instance of the daemon for multiple users?