
[Serving Catalog] Add llama3.1-405b vLLM GKE support with LWS #11

Open · wants to merge 9 commits into base: main

Conversation

Edwinhr716

No description provided.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 2, 2024
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Edwinhr716
Once this PR has been reviewed and has the lgtm label, please assign ahg-g for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Sep 2, 2024
@Edwinhr716 Edwinhr716 changed the title Add llama3.1-405b vLLM support Add llama3.1-405b vLLM GKE support Sep 2, 2024
serving-catalog/core/lws/base/kustomization.yaml (outdated, resolved)
serving-catalog/core/lws/base/leaderworkerset.yaml (outdated, resolved)
until ray status --address $LWS_LEADER_ADDRESS:6380; do
sleep 5;
done
entrypoint.sh: |-
@jjk-g can we propose to vLLM to create containers that embed this logic? We can create an issue on the vLLM repo.

Contributor:

+1, vLLM support for creating the Ray cluster via their container is preferred.

Author:

Issue has been created on the vLLM repo: vllm-project/vllm#8302

Author:

Since the issue hasn't been addressed, would it be better to merge this PR now and modify it later once a multi-host vLLM image is created?
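For readers following the thread, here is a minimal sketch of the kind of leader/worker startup logic being discussed, i.e. what the vLLM issue proposes baking into the container. The `ROLE` discriminator and the flag values are illustrative assumptions, not the PR's exact script; `LWS_LEADER_ADDRESS` is the variable LWS injects, and port 6380 matches the hunk above.

```bash
#!/bin/bash
# Illustrative sketch of a multi-host vLLM entrypoint on LWS (not the PR's exact script).
# LWS injects LWS_LEADER_ADDRESS; ROLE is an assumed leader/worker discriminator.
if [ "$ROLE" = "leader" ]; then
  # Leader: start the Ray head on the port the workers poll (6380 above),
  # then launch the OpenAI-compatible vLLM server across the Ray cluster.
  ray start --head --port=6380
  python3 -m vllm.entrypoints.openai.api_server \
    --model "$MODEL_ID" \
    --pipeline-parallel-size "$PIPELINE_PARALLEL_SIZE"
else
  # Worker: wait until the head is reachable, then join it and block.
  until ray status --address "$LWS_LEADER_ADDRESS:6380"; do
    sleep 5
  done
  ray start --address "$LWS_LEADER_ADDRESS:6380" --block
fi
```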

serving-catalog/core/lws/vllm/llama3-405b/gke/README.md (outdated, resolved)
key: hf_api_token
- name: MODEL_ID
valueFrom:
configMapKeyRef:

Using a ConfigMap inflates the LWS YAML; assuming we can bake the logic into the container, do we still need it?

Contributor:

We don't need it, although in practice I liked defining the environment variables shared across leaders and workers in one place.
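For illustration, the pattern being weighed looks roughly like this; `vllm-multihost-config` and `model_id` are the names used in the PR, while the model value and surrounding structure are abbreviated placeholders:

```yaml
# configmap.yaml — one place for values shared by leader and workers
apiVersion: v1
kind: ConfigMap
metadata:
  name: vllm-multihost-config
data:
  model_id: meta-llama/Meta-Llama-3.1-405B-Instruct   # illustrative value
```

Both the leader and worker pod templates then consume it the same way:

```yaml
env:
- name: MODEL_ID
  valueFrom:
    configMapKeyRef:
      name: vllm-multihost-config
      key: model_id
```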

@jjk-g (Contributor) left a comment:

Nice!

One serving-catalog style nit: let's have file extensions for resources be .yaml the first time the resource is introduced in the hierarchy (in a base), and .patch.yaml in overlays that patch existing base resources.
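Concretely, the convention reads roughly like this (paths are illustrative, not the PR's exact layout):

```yaml
# base/kustomization.yaml — resource introduced here, so plain .yaml
resources:
- leaderworkerset.yaml
```

```yaml
# gke/kustomization.yaml — overlay patches the base resource, so .patch.yaml
resources:
- ../base
patches:
- path: leaderworkerset.patch.yaml
```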



@Edwinhr716 (Author):

Switched names to .patch.yaml for files that patch existing resources.

serving-catalog/core/lws/vllm/base/configmap.yaml (outdated, resolved)
serving-catalog/core/lws/README.md (outdated, resolved)
@@ -0,0 +1,5 @@
# LeaderWorkerSet (lws)

To run the workloads in this directory, you will need to install the lws controller. Instructions on how to do so can be found here:
@skonto commented on Oct 17, 2024:

Should this be the only way, or should we use StatefulSets as well so that we provide more options for folks who do not want to deploy LWS?

Author:

Adding StatefulSets as well is outside the scope of this PR. I'll edit the title of the PR to reflect that it only covers an example using lws.
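Returning to the README hunk above: installing the lws controller typically amounts to applying the released manifests. A hedged sketch, assuming the standard release layout on the lws repo; check the releases page for the current tag:

```bash
# Illustrative install; substitute the current lws release tag.
LWS_VERSION=v0.3.0
kubectl apply --server-side -f \
  "https://github.com/kubernetes-sigs/lws/releases/download/${LWS_VERSION}/manifests.yaml"
```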

name: vllm-multihost-config
key: model_id
- name: PIPELINE_PARALLEL_SIZE
value: $(LWS_GROUP_SIZE)

Should the user use leaderworkerset.sigs.k8s.io/size to inject it instead? Is there a recommended value for the specific deployment?

@Edwinhr716 (Author) commented on Oct 17, 2024:

I'm not sure I understand the question correctly. PIPELINE_PARALLEL_SIZE corresponds to the number of nodes the model is deployed on, so it is the same value as leaderworkerset.sigs.k8s.io/size.

The value of $(LWS_GROUP_SIZE) is the same as leaderworkerset.sigs.k8s.io/size.
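For comparison, the annotation-based injection the question alludes to would be a downward API reference. This is a sketch assuming lws sets the leaderworkerset.sigs.k8s.io/size annotation on the pods; it yields the same value the injected $(LWS_GROUP_SIZE) variable already carries:

```yaml
env:
- name: PIPELINE_PARALLEL_SIZE
  valueFrom:
    fieldRef:
      # Downward API: read the group size from the pod annotation set by lws.
      fieldPath: metadata.annotations['leaderworkerset.sigs.k8s.io/size']
```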

@Edwinhr716 Edwinhr716 changed the title Add llama3.1-405b vLLM GKE support Add llama3.1-405b vLLM GKE support with LWS Oct 17, 2024
@Edwinhr716 Edwinhr716 changed the title Add llama3.1-405b vLLM GKE support with LWS [Serving Catalog] Add llama3.1-405b vLLM GKE support with LWS Oct 17, 2024