I think there should be a function that goes through all the recordings in an endaq.cloud.com account and, on your local machine, computes the attribute values for any files that don't have them populated (because they are big files that timed out during the original computation in the cloud).
The idea is that you can easily iterate over the files with the endaq.cloud package and check whether the attributes are present. If they aren't, the function would download the file, use the processing functions @CrepeGoat wrote (the ones that compute attributes in the cloud) to calculate them locally, and then use the endaq.cloud package again to set the newly calculated attribute values in the cloud. A rough sketch of what I'm imagining is below.
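Note this is only a sketch, not a proposed API: the EndaqCloud method names used here (get_file_table, download_file, set_attributes), the constructor signature, and the compute_attributes helper are all placeholders/assumptions for whatever the actual endaq.cloud package and @CrepeGoat's processing functions expose.

```python
# Rough sketch only. The EndaqCloud methods used here (get_file_table,
# download_file, set_attributes) and compute_attributes() are placeholders /
# assumptions about the endaq.cloud API and @CrepeGoat's (not yet public) code.
import endaq.cloud
from my_processing_module import compute_attributes  # placeholder for @CrepeGoat's functions


def backfill_cloud_attributes(api_key: str) -> None:
    """Compute and upload attributes for any cloud recordings missing them."""
    cloud = endaq.cloud.core.EndaqCloud(api_key)   # assumed constructor
    files = cloud.get_file_table()                 # assumed: DataFrame, one row per recording

    for file_id, row in files.iterrows():
        if row.get("attributes"):                  # attributes already populated, skip
            continue

        local_path = cloud.download_file(file_id)      # assumed download helper
        attributes = compute_attributes(local_path)    # compute locally instead of in the cloud
        cloud.set_attributes(file_id, attributes)      # assumed helper to push values back up
```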
Right now, anyone who wants to work with the attributes of big files simply can't. For example, I can't put my anomaly detection datasets into the custom cloud report script I've made, since all of those files are too large to have attributes assigned.
The idea is that this would be a simple, low-dev-time solution compared to working around the limitations of the Lambda functions (which, I've been told, is where the problem with calculating attributes on big files lies).
The one hurdle I see is that the processing scripts @CrepeGoat made aren't public; they would need to be made public and added to this open-source repo to make the functionality I've just described a reality.