
Update doc for server arguments #2742

Open · wants to merge 18 commits into `main` from `feature/server-arguments-docs`
Conversation


@simveit simveit commented Jan 5, 2025

Motivation

As explained here, the current backend documentation needs an update, which we implement in this PR.

Checklist

  • Update documentation as needed, including docstrings or example tutorials.

@simveit simveit force-pushed the feature/server-arguments-docs branch 2 times, most recently from fbc1a63 to abb44cf Compare January 6, 2025 19:13
@simveit simveit force-pushed the feature/server-arguments-docs branch from abb44cf to 0a288d7 Compare January 6, 2025 19:18
Collaborator

@zhaochenyang20 zhaochenyang20 left a comment


I love the detailed and educational docs for the parameters. I have two suggestions:

  1. We are documenting the official usage, so we can move the educational parts to other unofficial repos, like my ML sys tutorial. 😂
  2. Keep things concise. If we want to explain a concept, one sentence of educational explanation plus a link to the details would be better.

## Model and tokenizer
Collaborator


Cool. But for the docs, always keep one first-order heading (`#`) and several second-order headings (`##`); do not use fourth-order headings (`####`).

Author


Adjusted to include Server Arguments title.

Collaborator


Perfect


* `tp_size`: This parameter is important if we have multiple GPUs and the model doesn't fit on a single GPU. *Tensor parallelism* means we distribute the model weights over multiple GPUs. Note that this technique is mainly aimed at *memory efficiency* rather than *higher throughput*, since inter-GPU communication is needed to obtain the final output of each layer. For a better understanding of the concept, see for example [this tutorial](https://pytorch.org/tutorials/intermediate/TP_tutorial.html#how-tensor-parallel-works).

* `stream_interval`: If we stream output to the user, this parameter determines the interval at which streaming is performed. The interval length is measured in tokens.
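As a sketch, the two parameters above might be combined at launch time like this. The model path and values are illustrative placeholders, and the flag spellings assume the CLI exposes these parameters in the `--tp-size` / `--stream-interval` form:

```shell
# Hedged sketch: shard the model across 2 GPUs with tensor parallelism
# and send streamed output back in chunks of 8 tokens.
# Model path and values are placeholders, not recommendations.
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --tp-size 2 \
  --stream-interval 8
```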
Collaborator


I am not so sure about this. Could you double-check it and make it clearer?

Author


I will look this up more carefully. For now I have left it as a TODO and will come back to it at the end.

Collaborator


Cool!


* `random_seed`: Can be used to enforce deterministic behavior.

* `constrained_json_whitespace_pattern`: When using the `Outlines` grammar backend, we can use this to allow JSON with syntactic newlines, tabs, or multiple spaces.
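A hedged launch sketch combining the two parameters above. The seed and the whitespace regex are illustrative values, not documented defaults:

```shell
# Hedged sketch: fix the random seed for reproducible sampling, and let the
# Outlines grammar backend accept newlines, tabs, and repeated spaces
# between JSON tokens. The regex below is an illustrative example only.
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --random-seed 42 \
  --constrained-json-whitespace-pattern "[\n\t ]*"
```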
Collaborator


I think we can create a `##` section for the constraint decoding parameters.

Author


In general I think we could restructure the whole section. I suggest doing that after I have included all the parameters.

@simveit simveit force-pushed the feature/server-arguments-docs branch from 58efd67 to b939c56 Compare January 8, 2025 16:51
Collaborator

@zhaochenyang20 zhaochenyang20 left a comment


Perfect! Thanks so much for the help!

* `dist_init_addr`: The TCP address used for initializing PyTorch’s distributed backend (e.g. `192.168.0.2:25000`).
* `nnodes`: Total number of nodes in the cluster.
* `node_rank`: Rank (ID) of this node among the `nnodes` in the distributed setup.
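The three distributed-setup arguments above can be sketched as a two-node launch. The same command runs on each node, changing only `--node-rank`; the model path, address, and sizes are placeholders:

```shell
# Hedged sketch of a two-node tensor-parallel deployment.
# Node 0 hosts the TCP address used for distributed initialization.
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --tp 8 \
  --dist-init-addr 192.168.0.2:25000 \
  --nnodes 2 \
  --node-rank 0
# On the second node, run the identical command with --node-rank 1.
```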


## Model override args in JSON
Collaborator


Better to call this `## Constraint Decoding`.

Collaborator

@zhaochenyang20 zhaochenyang20 left a comment


Amazing work! We are close to the end!

## Model and tokenizer
Collaborator


Perfect

Collaborator

@zhaochenyang20 zhaochenyang20 left a comment


Great, we are close to the end. Are there any parameters left? If not, after fixing these parameters, we can let yineng review.

Collaborator

@zhaochenyang20 zhaochenyang20 left a comment


Great! We made it!

@@ -66,7 +66,7 @@ In this document we aim to give an overview of the possible arguments when deplo
* `watchdog_timeout`: Adjusts the watchdog thread’s timeout before killing the server if batch generation takes too long.
* `download_dir`: Use to override the default Hugging Face cache directory for model weights.
* `base_gpu_id`: Use to adjust the first GPU used when distributing the model across the available GPUs.

* `allow_auto_truncate`: Automatically truncate requests that exceed the maximum input length.
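A hedged sketch combining the flags from this hunk; all values are illustrative placeholders rather than recommended settings:

```shell
# Hedged sketch: kill the server if a batch stalls past 600 s, cache weights
# under a custom directory, start placement from GPU 1, and silently truncate
# over-long requests instead of rejecting them.
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --watchdog-timeout 600 \
  --download-dir /data/hf-cache \
  --base-gpu-id 1 \
  --allow-auto-truncate
```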
Collaborator


Great!

@zhaochenyang20 zhaochenyang20 marked this pull request as ready for review January 18, 2025 22:20
@zhaochenyang20
Collaborator

@zhyncs Wait for a final go over.
