[ci][distributed] shorten wait time if server hangs (vllm-project#7098)
youkaichao authored Aug 3, 2024
1 parent ed812a7 commit 69ea15e
Showing 1 changed file with 1 addition and 1 deletion.
tests/utils.py (1 addition, 1 deletion)

@@ -50,7 +50,7 @@ def _nvml():

 class RemoteOpenAIServer:
     DUMMY_API_KEY = "token-abc123"  # vLLM's OpenAI server does not need API key
-    MAX_SERVER_START_WAIT_S = 600  # wait for server to start for 60 seconds
+    MAX_SERVER_START_WAIT_S = 120  # wait for server to start for 120 seconds

     def __init__(
         self,
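For context, a constant like `MAX_SERVER_START_WAIT_S` is typically consumed by a bounded polling loop that checks server readiness and gives up once the deadline passes, so that a hung server fails the CI job quickly instead of stalling it. Below is a minimal sketch of such a loop; `wait_for_server` and its `is_ready` callback are hypothetical names for illustration, not the actual vLLM test helper.

```python
import time


def wait_for_server(is_ready, max_wait_s=120, poll_interval_s=0.5):
    """Poll is_ready() until it returns True or max_wait_s elapses.

    Hypothetical sketch of the bounded-wait pattern that a constant
    like MAX_SERVER_START_WAIT_S enforces: shrinking max_wait_s is
    what shortens the hang time when the server never comes up.
    """
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        if is_ready():  # e.g. a health-check HTTP request in a real harness
            return True
        time.sleep(poll_interval_s)
    raise RuntimeError(f"Server failed to start within {max_wait_s} seconds")
```

With this shape, the commit's change from 600 to 120 simply moves the `deadline`, turning a 10-minute CI stall into a 2-minute one when the server hangs.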
