
[Perf] Reduce peak memory usage of llama #10339

Merged: 1 commit into vllm-project:main on Nov 15, 2024

Conversation

@andoorve (Collaborator) commented Nov 14, 2024

Maintaining multiple names here causes both tensors to stay refcounted, which increases peak memory. This manifests as more blocks stacked on top of each other in the memory profile:

[image: memory profile showing stacked allocation blocks]

Because vLLM sizes the KV cache from the peak memory it measures during profiling, this change increases the number of available KV cache blocks, especially with longer context lengths. I will follow up with a more detailed investigation in another PR/issue that discusses this in more depth. However, I'm opening this PR now since it is a well-contained, low-risk change. We can apply it to more models once it has been reviewed.
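A minimal standalone sketch of the effect (not vLLM code; assumes PyTorch with a CUDA device, and the shapes, weight names, and function names are made up for illustration). Keeping the projection output under its own name holds it resident through the final matmul, while rebinding a single name lets it be freed first:

    import torch
    import torch.nn.functional as F

    hidden, inter, tokens = 4096, 11008, 4096

    def mlp_two_names(x, w_up, w_down):
        gate_up = x @ w_up     # stays referenced until the function returns
        x = F.silu(gate_up)
        x = x @ w_down         # gate_up is still alive during this matmul
        return x

    def mlp_reused_name(x, w_up, w_down):
        x = x @ w_up
        x = F.silu(x)          # rebinding x drops the projection output here
        x = x @ w_down
        return x

    if torch.cuda.is_available():
        w_up = torch.randn(hidden, inter, device="cuda", dtype=torch.float16)
        w_down = torch.randn(inter, hidden, device="cuda", dtype=torch.float16)
        inp = torch.randn(tokens, hidden, device="cuda", dtype=torch.float16)
        for fn in (mlp_two_names, mlp_reused_name):
            torch.cuda.synchronize()
            torch.cuda.reset_peak_memory_stats()
            fn(inp, w_up, w_down)
            torch.cuda.synchronize()
            print(f"{fn.__name__}: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB peak")

With these shapes the second variant should report a lower peak, roughly by the size of the final matmul's output, because the wide intermediate is released before that output is allocated.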

@andoorve andoorve marked this pull request as ready for review November 14, 2024 18:38

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR.
  • Enable auto-merge.

🚀

Maintaining multiple names here will cause both to be refcounted which increases the peak memory. This will manifest as more blocks on top of each other in the memory profile.

Signed-off-by: andoorve <[email protected]>
@mgoin (Member) commented Nov 14, 2024

Great idea! We could apply this to many other models.

@andoorve andoorve requested a review from mgoin November 14, 2024 20:12
@mgoin mgoin added the "ready" label (ONLY add when PR is ready to merge/full CI is needed) Nov 14, 2024
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) November 14, 2024 23:38
@@ -90,8 +90,8 @@ def __init__(
self.act_fn = SiluAndMul()

def forward(self, x):
gate_up, _ = self.gate_up_proj(x)
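The hunk above is truncated in this view. Based on the PR description and the discussion below, a hedged reconstruction of the before/after pattern in LlamaMLP.forward (the exact merged diff may differ slightly):

    # Before: gate_up stays bound until the function returns, so the wide
    # projection output is still resident while down_proj allocates its result.
    def forward(self, x):
        gate_up, _ = self.gate_up_proj(x)
        x = self.act_fn(gate_up)
        x, _ = self.down_proj(x)
        return x

    # After: rebinding x at each step drops the previous intermediate as soon
    # as the next result is bound, lowering the profiled peak.
    def forward(self, x):
        x, _ = self.gate_up_proj(x)
        x = self.act_fn(x)
        x, _ = self.down_proj(x)
        return x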
Member commented:

I think torch.compile can do something similar, without renaming variables.

To keep the original semantics, maybe adding del x would be more intuitive.

andoorve (Collaborator, author) replied:

I think torch.compile can do something similar, without renaming variables.

Yes, it can completely alleviate this problem, even when we consider cross-function refcounting, which I'll cover in my investigation write-up.

To keep the original semantics, maybe adding del x would be more intuitive.

I think you might mean del gate_up in this case? Yes, we can indeed add dels and keep more descriptive variable names. I just kept it as x to avoid adding extra dels and to match the style of the rest of the function.
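For reference, a sketch of the alternative discussed above (not what was merged): keep the descriptive name and release tensors explicitly with del once they are no longer needed.

    def forward(self, x):
        gate_up, _ = self.gate_up_proj(x)
        del x          # drop this frame's reference to the input (the caller may still hold one)
        x = self.act_fn(gate_up)
        del gate_up    # free the wide projection output before down_proj runs
        x, _ = self.down_proj(x)
        return x

Both approaches release the intermediates at essentially the same points; the merged change simply avoids the extra del statements by reusing x.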

@DarkLight1337 DarkLight1337 merged commit b2e0ad3 into vllm-project:main Nov 15, 2024
63 checks passed
@andoorve andoorve deleted the llama-memory branch November 15, 2024 00:56
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024
mfournioux pushed a commit to mfournioux/vllm that referenced this pull request Nov 20, 2024
rickyyx pushed a commit to rickyyx/vllm that referenced this pull request Nov 20, 2024
tlrmchlsmth pushed a commit to neuralmagic/vllm that referenced this pull request Nov 23, 2024
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
Projects: None yet
4 participants