The starter pack broker keeps track of provisioned services in-memory and is blissfully ignorant of any previous doings when restarted. At a minimum, a persistent table to map instance IDs between the marketplace and the service provider worlds would be very useful.
Is there a best practice for keeping state and metadata of service instances in the scope of a specific broker? Would having the broker annotate the Service Catalog ServiceInstance objects be a reasonable approach?
I've tried keeping state in a Kubernetes custom resource. It's proven a convenient way to achieve persistence, but this is probably not the ideal use of CRDs, and, speaking from experience, introducing another entity with its own lifecycle is a sure way to end up with inconsistency issues.
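For concreteness, here is a minimal sketch of the "persistent instance table" idea using a plain ConfigMap via client-go. The namespace, ConfigMap name, and helper function are made up for illustration; nothing like this exists in the starter pack itself:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// saveInstanceMapping records a marketplace instance ID -> provider instance ID
// mapping in a ConfigMap, so the broker can recover the mapping after a restart.
// The namespace and ConfigMap name are placeholders for this sketch.
func saveInstanceMapping(ctx context.Context, client kubernetes.Interface, instanceID, providerID string) error {
	const ns, name = "broker-system", "broker-instance-map"

	cm, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// First write: create the ConfigMap with a single entry.
		cm = &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
			Data:       map[string]string{instanceID: providerID},
		}
		_, err = client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
		return err
	}
	if err != nil {
		return err
	}

	// Subsequent writes: add or overwrite the entry and update in place.
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data[instanceID] = providerID
	_, err = client.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := saveInstanceMapping(context.Background(), client, "instance-123", "provider-abc"); err != nil {
		panic(err)
	}
}
```

On restart, the broker would read the same ConfigMap back into its in-memory map before serving requests.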
Minibroker uses configmaps and labels on related k8s resources to persist data. If that weren't enough, I would lean towards using CRDs over a separate data store, because otherwise I'd have to back up 2 databases. 😀
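To illustrate the label side of that approach (this is not Minibroker's actual code, and the label key is made up), the idea is roughly: tag every resource created for an instance with its instance ID, then query by label selector at deprovision time instead of relying on in-memory state:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// findInstanceResources lists everything the broker created for a given
// service instance by label, so cleanup after a restart does not depend
// on in-memory bookkeeping. The label key is illustrative only.
func findInstanceResources(ctx context.Context, client kubernetes.Interface, ns, instanceID string) error {
	selector := fmt.Sprintf("broker.example.com/instance-id=%s", instanceID)

	cms, err := client.CoreV1().ConfigMaps(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	secrets, err := client.CoreV1().Secrets(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	fmt.Printf("instance %s: %d configmaps, %d secrets\n", instanceID, len(cms.Items), len(secrets.Items))
	return nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := findInstanceResources(context.Background(), client, "broker-system", "instance-123"); err != nil {
		panic(err)
	}
}
```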