WIP: support resource.reserve configuration parameter #6547

Open
wants to merge 8 commits into master

Conversation

@grondo (Contributor) commented Jan 10, 2025

This WIP PR addresses #6315 and #5240 by adding a new config key resource.reserve, which allows specification of a set of cores to hold back from the scheduler, along with the optional ranks on which to reserve them. Cores are specified in the form:

cores[@ranks] [cores[@ranks]]...

Where cores and the optional ranks are idsets. If ranks is not specified, the same cores are reserved on all ranks. To reserve different sets of cores on different ranks, multiple specifications may be used, separated by whitespace.
For example, 0 would withhold core 0 on all ranks, 0-1@0 0 would reserve cores 0-1 on rank 0 and core 0 on all other ranks, and so on.
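
For illustration, assuming the key is placed in the [resource] table as described in the commits below (a sketch, not copied verbatim from this PR), the second example might be configured as:

    [resource]
    reserve = "0-1@0 0"

This would reserve cores 0-1 on rank 0 and core 0 on every other rank, and hold them back from the scheduler.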

Marking this as a WIP because it is just one possible approach. We could also easily extend the existing resource.exclude config key to allow a similar syntax. There are some possible drawbacks to that:

  • exclude is encoded as an idset in the resource.status response, so we'd always have to condense the excluded resource set into an idset for backwards compatibility.
  • exclude allows specification of a hostlist, which doesn't make as much sense for resource reservation (though it could easily be handled).
  • The common case for resource.exclude is perhaps different from that for resource.reserve -- e.g. reserving the same set of cores across all nodes of a job would be slightly more awkward if only the resource.exclude syntax were supported (since specification of ranks would be obligatory).

However, it does seem awkward to have two config keys that do similar things.

The other question here is how to present the reserved resources in either flux resource status (which currently does not include sub-rank resources in default output anyway) or flux resource list.

Problem: rlist_copy_internal() is declared static, which means all
copy implementations must be defined directly in rlist.c.

Expose rlist_copy_internal() in rlist_private.h to allow copy
implementations to be placed in separate source files.

Problem: There is no way to copy a set of cores from an rlist, but this
would be useful when creating a set of cores to reserve for the OS.

Add a new function rlist_copy_core_spec() which can copy cores from
an rlist object using a specification of the form:

 cores[@ranks] [cores[@ranks]]...

where `cores` and the optional `ranks` are idsets of the cores/ranks
to include in the copy.
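
A minimal usage sketch (the prototype is not shown here, so the signature below is an assumption based on this commit message, not the actual API):

    /* Copy cores 0-1 on rank 0 plus core 0 on all other ranks out of 'rl'.
     * The rlist_copy_core_spec() signature is assumed for illustration.
     */
    struct rlist *reserved = rlist_copy_core_spec (rl, "0-1@0 0");
    if (!reserved)
        return -1;  /* invalid spec or allocation failure */
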
Problem: There are no unit tests of the new function
rlist_copy_core_spec().

Add unit tests.
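
For example, a libtap-style check along these lines (flux-core unit tests use libtap; the calls and the expected failure behavior here are illustrative assumptions, not the actual tests in this commit):

    struct rlist *rl = rlist_from_R (R);  /* R: a test resource set */
    struct rlist *cp = rlist_copy_core_spec (rl, "0-1@0 0");
    ok (cp != NULL,
        "rlist_copy_core_spec accepts multiple cores@ranks specs");
    rlist_destroy (cp);
    ok (rlist_copy_core_spec (rl, "foo@bar") == NULL,
        "rlist_copy_core_spec rejects an invalid spec");
    rlist_destroy (rl);
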
Problem: It would be useful to have an rlist diff routine that modifies
the rlist argument in place, but this code is embedded in rlist_diff(),
which returns a new rlist.

Split rlist_subtract() out of rlist_diff() and make it public. Have
rlist_diff() call rlist_subtract() internally.
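
A rough sketch of the resulting structure (signatures assumed, not copied from the patch):

    /* Remove the resources in 'b' from 'a' in place (assumed signature). */
    int rlist_subtract (struct rlist *a, const struct rlist *b);

    /* rlist_diff() keeps its existing interface: copy 'a', then subtract
     * 'b' from the copy.  rlist_copy() stands in for whatever copy helper
     * the real code uses (hypothetical name).
     */
    struct rlist *rlist_diff (const struct rlist *a, const struct rlist *b)
    {
        struct rlist *result = rlist_copy (a);
        if (!result || rlist_subtract (result, b) < 0) {
            rlist_destroy (result);
            return NULL;
        }
        return result;
    }
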
Problem: There is no way to reserve cores so they are not used for
scheduling.

Add a 'reserve' subsystem to the core resource module that takes a
reserved set of cores via the 'cores[@ranks]' spec. Nothing is done
with the reserve subsystem at this point.

Problem: The `rlist` output format has `+` and `:` transposed, which
results in an extraneous `+` in the output.

Fix the format.

Problem: There is no way to reserve a set of cores so they cannot
be used by the scheduler of a Flux instance.

Add support for a new config key `resource.reserve`, which takes a
string of the form `cores[@ranks]`, where `cores` and the optional
`ranks` are idsets specifying the cores to reserve and the ranks
on which to reserve them. If ranks is not specified, the spec
applies to all ranks.

The reserved set of cores is subtracted from the resource set before
it is handed off to the scheduler. The reserved resource set is also
removed from the status response used by `flux resource list`.
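
Roughly, the handoff looks like this (a sketch only; variable names and error handling are assumptions, not the actual implementation):

    /* 'all' is the instance resource set; 'reserve_spec' is the value of
     * the resource.reserve config key, e.g. "0-1@0 0".
     */
    struct rlist *reserved = rlist_copy_core_spec (all, reserve_spec);
    if (!reserved || rlist_subtract (all, reserved) < 0)
        goto error;  /* invalid spec or internal error */
    /* 'all' now excludes the reserved cores and is what the scheduler
     * and `flux resource list` see */
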
Problem: The `reserve` key in the `[resource]` table is not documented.

Document it.

@garlick (Member) commented Jan 21, 2025

This seems workable to me.

I can't really think of a case where a system instance would want to reserve different cores on different node types and run into awkwardness specifying those as ranks. If that ever does come up, hostlist support might be nice.

@grondo (Contributor, Author) commented Jan 21, 2025

It would be good to get feedback from @ryanday36 on whether this is the correct approach for what's needed in the near term. I'm not sure whether the requirement is to let users reserve cores for the system in their batch jobs (with the default being no reservation), or to reserve cores in the system instance and then give users an option to opt out.

I'd hate to add a new config key (and whole new way to select resources) if it isn't going to end up useful.

@ryanday36

My understanding of what we want with the "OS" or "system" cores is that no user tasks run on the system cores unless the user explicitly asks for them. So, this resource.reserve approach would work as long as there is also some sort of --all-resources flag to the flux run type commands that would allow the users to specify that they want to run on reserved resources.

@grondo (Contributor, Author) commented Jan 22, 2025

So, this resource.reserve approach would work as long as there is also some sort of --all-resources flag to the flux run type commands that would allow the users to specify that they want to run on reserved resources.

In this case the resources are excluded from the scheduler so there is no way to ask for them, except perhaps by launching a new instance (i.e. via either flux batch or flux alloc), setting the resource.rediscover option for that instance, then using flux run or flux submit in the new instance.

@garlick's idea of a partition might be slightly more suitable, except that, AIUI, the proposed partitions are not allowed to overlap, so you would not be able to run a job across all partitions. That may be easily resolved, however.

In any event, I think this requirement implies that the scheduler needs to know about reserved resources specifically, so withholding them like exclude won't work.

@garlick (Member) commented Jan 22, 2025

except perhaps by launching a new instance (i.e. via either flux batch or flux alloc), setting the resource.rediscover option for that instance, then using flux run or flux submit in the new instance.

Would that be all that bad? I think we decided the partition idea was going to take significantly more effort.

@ryanday36

In this case the resources are excluded from the scheduler so there is no way to ask for them, except perhaps by launching a new instance (i.e. via either flux batch or flux alloc), setting the resource.rediscover option for that instance, then using flux run or flux submit in the new instance.

Having users get a new instance with resource.rediscover would probably work. I can't actually think of a use case where a user would need some flux run invocations to see the reserved cores and others not, and if one did come up, it would probably involve a pretty savvy user. Those users could get their instance with resource.rediscover and then manage things themselves.

@grondo (Contributor, Author) commented Jan 23, 2025

Well, perhaps this is ready for a review then. Is the current interface ok? Happy to take suggestions on improvements.
