request: process groups in a single yaml file #131
Comments
Hi @colemickens, have you considered using the merge functionality of process-compose?
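(For context, a minimal sketch of the merge approach as I understand it: process-compose can be pointed at several config files, with later files overriding and extending earlier ones. File names and process names below are illustrative; verify the exact invocation against the process-compose docs.)

```yaml
# process-compose.yaml — shared base (illustrative names)
processes:
  db:
    command: ./start-db.sh
  api:
    command: ./start-api.sh
    depends_on:
      db: { condition: process_started }
```

```yaml
# process-compose.scenario-a.yaml — scenario-specific overrides, merged on top
# of the base, e.g. (assumed invocation):
#   process-compose -f process-compose.yaml -f process-compose.scenario-a.yaml
processes:
  api:
    environment:
      - "SCENARIO=a"
  scenario-a-worker:
    command: ./start-worker.sh
```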
Agreed with @F1bonacc1, that should be the correct way to do it.
While I appreciate the answer, the UX for all of this feels... undesirable compared to what I was suggesting. I'm thinking about hacking on this myself and seeing what I come up with, or maybe an alternative approach as a zellij plugin. I really don't mean to sound ungrateful, and I wish I could better articulate why I feel the way I do about the UX. Maybe I'll come up with some more examples and try out how it feels.
I think I have a similar situation. I have multiple vertical slices that we run separately (we use …):

```yaml
group1:
  command: open http://localhost:1080/group1
  depends_on:
    process1: { condition: process_started }
    process2: { condition: process_started }
group2:
  command: open http://localhost:1080/group2
  depends_on:
    process1: { condition: process_started }
    process3: { condition: process_started }
```

Could this be an alternative approach? This allows for a simple …
Another option that might work for you, in case you prefer to use a single file, is to use the …

And for the LB configuration: …
No worries :) This approach was recommended mainly because it's how this kind of service composition is done in … If we're talking about end-user UX, I almost always create some kind of alias for these invocations, as there is no problem that one more layer of abstraction doesn't solve :) See my ci-test invocation @ https://github.com/schemamap/schemamap#day-to-day-operations
We're already approaching 3 duplicated process-compose files with very tiny adjustments between them, and then layering on more complexity with optionally disabled commands. This is quickly becoming untenable. @F1bonacc1, can you give an indication of whether you'd be open to merging this feature if I take a weekend and write it? (Not committing to this, and don't let me stop someone else; I'm very busy for the next couple of months.)
I'd suggest that helping to tame duplication in config files should be out of scope for process-compose. Instead of cramming more features into a single tool, it's better to delegate to other tools more suitable for the job. One such tool is https://github.com/Platonic-Systems/process-compose-flake, which uses the Nix language for managing PC configurations. If you're into Nix, then the next step up would be https://github.com/juspay/services-flake.
@adrian-gierakowski great suggestions, including at least one I know I haven't seen before. Indeed, we're a Nix shop, so I've been considering looking into devenv.sh, which I think uses Nix in exactly this way to abstract/generate process-compose files. Maybe throwing Nix at it is the better direction, given that there's other "stuff" we want to configure that could be driven from a single Nix config too. Appreciate the suggestions; I'll definitely look into those more before adding features to p-c.
An example of what devenv.sh-based processes look like: https://github.com/schemamap/schemamap/blob/main/devenv.nix#L74-L90
Feature Request
We have different scenarios for which we run our app:
For now, we add in default-disabled commands, or copy/paste/edit the process-compose file to enable these various scenarios.
However, I can imagine a concept of "process groups" (a group of pre-declared tasks, maybe) that would allow us to DRY this up and have these scenarios/configs declared in a single YAML file.
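To make the request concrete, here is a purely hypothetical sketch of what such a declaration could look like; the `groups` key and any group-selection flag do not exist in process-compose today, and all names are illustrative:

```yaml
processes:
  db:
    command: ./start-db.sh
  api:
    command: ./start-api.sh
    depends_on:
      db: { condition: process_started }
  e2e-tests:
    command: ./run-e2e.sh
    depends_on:
      api: { condition: process_started }

# Hypothetical: named groups selecting subsets of the pre-declared processes,
# started with something like `process-compose up --group dev` (not a real flag).
groups:
  dev: [db, api]
  ci: [db, api, e2e-tests]
```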
Use Case:
Proposed Change:
Who Benefits From The Change(s)?
Users
Alternative Approaches
Lots of separate files, using advanced YAML features, inclusions, etc. All of these are subpar options, in my view.
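(For reference, the "advanced YAML features" route usually means anchors/aliases to deduplicate repeated fragments within one file. This is plain YAML resolved by the parser itself, so it should work regardless of the consumer, but it only removes repetition; it does not select which processes run. Process names are illustrative:)

```yaml
processes:
  api:
    command: ./start-api.sh
    depends_on: &common-deps            # define the shared fragment once
      db: { condition: process_started }
  worker:
    command: ./start-worker.sh
    depends_on: *common-deps            # reuse it verbatim
```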