Fix no buckets #2
base: master
Conversation
What do you think the buckets method should return if there are no buckets? Currently this method returns a hash of info, which still works if @buckets is empty, but I'm not so sure that's very perlish... I'd like the return of ->buckets to evaluate to false in boolean context if there are no buckets. Perhaps this should return a buckets object with a ->has_buckets method or similar, so that you don't have to do @{$storage->buckets->{buckets}} just to discover you have no buckets. I'm merging this now but would be interested in your thoughts.

My company pulled resources from this project last month, so I haven't been able to spend contrib time (and I have several other owed patches on other open source stuff promised from further back), so I'm happy to see some activity. Let me know if you can catch up on IRC sometime and do some planning. If not, I'll commit my todo and project plan for this for your review.
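The ->has_buckets idea above could be sketched roughly as follows. This is a hypothetical illustration, not the actual Google::Storage code: the package name Google::Storage::BucketList and its methods are invented here. The trick is Perl's overload pragma, which lets the result object itself be false in boolean context when the account has no buckets:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical result object: false in boolean context when empty.
package Google::Storage::BucketList;
use overload 'bool' => sub { $_[0]->has_buckets }, fallback => 1;

sub new {
    my ($class, %args) = @_;
    return bless { buckets => $args{buckets} || [] }, $class;
}

sub has_buckets { scalar @{ $_[0]->{buckets} } > 0 }
sub buckets     { @{ $_[0]->{buckets} } }

package main;

my $empty = Google::Storage::BucketList->new(buckets => []);
my $full  = Google::Storage::BucketList->new(buckets => ['logs']);

# Callers can now write: if ($storage->buckets) { ... }
print 'empty: ', ($empty ? 'true' : 'false'), "\n";   # empty: false
print 'full: ',  ($full  ? 'true' : 'false'), "\n";   # full: true
```

With this shape, `@{$storage->buckets->{buckets}}` becomes `$storage->buckets->buckets`, and the emptiness check is simply `if ($storage->buckets)`.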
Currently the buckets method returns either the status_line (on failure) or a hashref (on success). If it were to go the Net::Amazon::S3 way, Google/Storage.pm would have an internal 'err' attribute which would be set on failure; each method would catch its own errors, set it on failure, and return undef. Other modules, as I'm sure you're well aware, die on failure. The beauty of modules like Net::Amazon::S3 (and, I'm hopeful, Google::Storage) is that they do not die unexpectedly, but usually perform according to the API, perlishly returning undef on failure. I can cook up something for the above if you want; I'd use Try::Tiny internally for handling any external exceptions.
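The "set err, return undef" convention described above could look something like this. The comment suggests Try::Tiny; this sketch uses plain eval instead to stay dependency-free, and the class and method names (My::Client, _fetch_buckets) are stand-ins invented for illustration:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

package My::Client;   # stand-in for Google::Storage

sub new { bless { err => undef }, shift }
sub err { $_[0]->{err} }

sub buckets {
    my ($self) = @_;
    $self->{err} = undef;                    # clear any stale error
    my $result = eval { $self->_fetch_buckets };
    if ($@) {
        $self->{err} = $@;                   # record the failure...
        return undef;                        # ...and perlishly return undef
    }
    return $result;
}

# Simulate a transport/API failure for the sake of the example.
sub _fetch_buckets { die "403 Forbidden\n" }

package main;

my $client = My::Client->new;
my $list   = $client->buckets;
print defined $list ? "ok\n" : 'failed: ' . $client->err;
```

Callers then check `defined $client->buckets` and consult `$client->err` for details, rather than wrapping every call in their own eval.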
Ach, hit comment too soon. I'm on freenode and irc.perl.org as mfontani, mostly via an irssi running detached. If there's a channel you want to use, I can join and discuss whenever I'm actively at the PC and connected.
I agree with the sentiment that gs should not throw exceptions so readily; that's not a particularly perlish way to handle 'not found' style situations. I think some of the stuff in the current repo was stuff I put in as part of testing, and perhaps it doesn't belong.

I think the thing that made me pause on this code (besides losing work support, moving, and sending my wife to Italy for the semester :) ) was that instead of just cloning Net::Amazon::S3, I suddenly wanted to take a step back and wonder whether there wasn't some better way to express this problem. Part of me was thinking this could be done in the form of a toolkit for building persisted blobs/data objects. The idea is that there'd be something to subclass, or perhaps a Moose-style role that you apply to anything that needs to be saved and served over the web. I also wanted to spend a bit more time pondering the direction Google was taking. Although I guess in the end you will still need a low-level interface, so finishing this from a low-level perspective isn't so bad either.

I'm typically on irc.perl.org in channels like #catalyst, #catalyst-dev, #dbix-class, #moose, and similar. I'm on New York City time; why not catch up later this week and plan a bit? I will poke around and see who else is interested.
I agree there is possibly a need for an all-encompassing module which just sends data "to the cloud", with a number of back-ends (S3, CloudFiles, Google Storage, scp, rsync, whatnot). You'd then instantiate this "cloud sender" object, state the backend(s) to use, and just use the object's methods with no need to know the intricacies of each back-end, other than needing to have the proper authentication bits set up.

These backends could possibly be implemented as roles (traits?), but that relies on having one common interface to all the "real" back-ends (Net::Amazon::S3, Google::Storage, etc.). Currently there's no common API for, say, S3 and CloudServers: this "cloud sender" object would have to be shipped with wrappers around the most common back-ends which expose the same interface. They would all need to provide the same objects (with a specific, defined interface) for buckets (directories?), files, and upload/download. This assumes, though, that all back-ends have an inherent concept of bucket/file or directory/file.

For that to work, the "cloud sender" module has to rest on the shoulders of "real" back-end modules like this one (Google::Storage) or Net::Amazon::S3, which have their own API, do the real work, and have no need to provide an interface common to any other module. ++ for the idea; it would be good to have such a module. I feel Google::Storage does not need to be that module, though: its only concern should be to provide {up,down}load of buckets and files, with a defined interface any other module (including the "cloud sender" one) can use. What are your thoughts?

I'll have irssi join one of the above channels, so see you there if I'm online and available when you're there also -- I'm on GMT/BST.
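The delegation shape described above (one "cloud sender" front object, interchangeable backend wrappers sharing one interface) could be sketched like this. Everything here is hypothetical: the Cloud::Sender name and the in-memory backend are invented for illustration, the latter standing in for a real wrapper around Net::Amazon::S3 or Google::Storage:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical front object: delegates to whichever backend wrapper
# was configured. Every wrapper is expected to expose the same
# upload($container, $name, $data) method.
package Cloud::Sender;

sub new {
    my ($class, %args) = @_;
    return bless { backend => $args{backend} }, $class;
}

sub upload { my $self = shift; return $self->{backend}->upload(@_) }

# Stand-in backend wrapper; a real one would wrap S3/GS/CloudFiles.
package Cloud::Sender::Backend::Memory;

sub new { bless { store => {} }, shift }

sub upload {
    my ($self, $container, $name, $data) = @_;
    $self->{store}{$container}{$name} = $data;
    return 1;
}

sub get { my ($self, $c, $n) = @_; return $self->{store}{$c}{$n} }

package main;

my $sender = Cloud::Sender->new(
    backend => Cloud::Sender::Backend::Memory->new,
);
$sender->upload('my-bucket', 'hello.txt', "hi\n");
print "uploaded\n";
```

The point of the sketch is the division of labour argued for above: Cloud::Sender knows only the shared interface, while each wrapper adapts one real back-end module to it.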
The map{} failed when there were no buckets in the user's account (as was my case).
This only map{}s when $buckets is defined, making the tests pass on an account with or without defined buckets.