Currently, all TOSCA nodes are deployed in a single WF task (via code, for simplicity).
To support bigger and more complex templates, that iteration should be done at the WF level (which would also enable single-node failure handling and retrying).
This enhancement is also related to the Parameter Sweep feature, which would generate very large templates (~10k nodes).
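A minimal sketch of the difference, using hypothetical `ToscaNode`/`NodeDeployer` types rather than the orchestrator's real classes: with one WF task looping over all nodes in code, a single failure aborts (and re-runs) the whole template, whereas per-node tasks would let the engine retry only the node that failed.

```java
import java.util.List;

// Hypothetical sketch (not the orchestrator's real classes) contrasting the
// current "one task deploys everything" approach with per-node tasks.
public class DeployTaskSketch {

    record ToscaNode(String name) {}

    interface NodeDeployer {
        void deploy(ToscaNode node) throws Exception;
    }

    // Current behaviour: one WF task loops over every node in code,
    // so any failure fails the whole task (and the whole template).
    static void deployAllInOneTask(List<ToscaNode> nodes, NodeDeployer deployer) throws Exception {
        for (ToscaNode node : nodes) {
            deployer.deploy(node);
        }
    }

    // Proposed direction: one WF task per node, so the engine can retry
    // just the failed node instead of redeploying the entire template.
    static void deploySingleNodeTask(ToscaNode node, NodeDeployer deployer, int maxRetries) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                deployer.deploy(node);
                return;
            } catch (Exception e) {
                if (attempt >= maxRetries) {
                    throw e;
                }
            }
        }
    }
}
```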
- Workflows have been improved by using DeploymentMessage as the main parameter of the DeploymentService interface methods, allowing greater customization of the deployment (e.g. the chosen CloudProvider, OneData settings, etc.).
- Workflows now support Deploy/Poll/Undeploy task iteration so that the underlying command can work on a subset of the TOSCA nodes (e.g. creating one Job at a time to avoid transaction timeouts), as sketched below.
Related to #48, #51, #53, #55. Fixes #44.
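A rough, hypothetical sketch of the idea (field and method names are made up; the real DeploymentMessage carries more state): a single serializable message is passed to the DeploymentService/provider methods, carrying per-deployment choices such as the chosen CloudProvider and OneData settings, plus the progress the Deploy/Poll iteration needs to handle one subset of TOSCA nodes per task invocation.

```java
import java.io.Serializable;
import java.util.List;

// Hypothetical, simplified stand-ins; names and fields are illustrative only.
public class WorkflowIterationSketch {

    // The single parameter object threaded through the workflow tasks.
    public static class DeploymentMessage implements Serializable {
        String deploymentId;
        String chosenCloudProvider;   // provider selected for this deployment
        String oneDataSettings;       // OneData configuration, if any
        List<String> nodeIds;         // TOSCA nodes still to be processed
        int nextNodeIndex;            // progress cursor kept across invocations
        boolean createComplete;       // set when every node has been submitted
    }

    static final int NODES_PER_INVOCATION = 100; // illustrative chunk size

    // One Deploy-task invocation: handle only a subset of the nodes, record
    // progress in the message, and tell the WF whether to run the task again.
    static boolean deployChunk(DeploymentMessage message) {
        int from = message.nextNodeIndex;
        int to = Math.min(from + NODES_PER_INVOCATION, message.nodeIds.size());
        for (String nodeId : message.nodeIds.subList(from, to)) {
            submitJob(nodeId, message.chosenCloudProvider); // e.g. create one Job at a time
        }
        message.nextNodeIndex = to;
        message.createComplete = (to == message.nodeIds.size());
        return !message.createComplete; // true -> the workflow re-invokes the task
    }

    static void submitJob(String nodeId, String cloudProvider) {
        // placeholder for the real per-node deployment call
    }
}
```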
Chronos jobs are created/polled/deleted in chunks, instead of one per WF task invocation, to improve performance (avoiding serialization/deserialization overhead).
The `orchestrator.chronos.jobChunkSize` property allows this behavior to be customized.
See #48.
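For example, assuming a Spring-style configuration property (only the property name comes from this change; the surrounding class is a hypothetical sketch), the chunk size could be injected and used to partition the jobs before each create/poll/delete round:

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical sketch: partition Chronos jobs into chunks of
// orchestrator.chronos.jobChunkSize before each create/poll/delete round.
@Component
public class ChronosJobChunker {

    @Value("${orchestrator.chronos.jobChunkSize}")
    private int jobChunkSize;

    // Split the job ids into chunks so each WF task invocation handles one
    // chunk instead of a single job (less serialization/deserialization).
    public List<List<String>> partition(List<String> jobIds) {
        List<List<String>> chunks = new ArrayList<>();
        for (int from = 0; from < jobIds.size(); from += jobChunkSize) {
            int to = Math.min(from + jobChunkSize, jobIds.size());
            chunks.add(jobIds.subList(from, to));
        }
        return chunks;
    }
}
```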