[Bug] Infinite loop when generated token count nears context_window_size/prefill_chunk_size #3057

Open · gesanqiu opened this issue Dec 6, 2024 · 0 comments
Labels: bug (Confirmed bugs)

gesanqiu (Contributor) commented Dec 6, 2024

🐛 Bug

To Reproduce

Steps to reproduce the behavior:

I compiled my model with the following commands:

mlc_llm convert_weight /home/tsbj/rubik_v0.0.0.25/ --quantization q4f16_1 -o mtk-weights/rubik_v0.0.0.25
mlc_llm gen_config /home/tsbj/rubik_v0.0.0.25/ --quantization q4f16_1 --conv-template rubik --context-window-size 4096 --sliding-window-size 4096 --prefill-chunk-size 4096 -o mtk-weights/rubik_v0.0.0.25/
mlc_llm compile mtk-weights/rubik_v0.0.0.25/mlc-chat-config.json --device cuda -o mtk-weights/rubik_v0.0.0.25/ts-rubik-rw-1_4b-q4f16-cuda.so

However, I believe this should reproduce with other models as well; you just need to:

  1. Run model with mlc-llm serve.
  2. Find a query that can't stop normally until the context reaches the compiled prefill_chunk_size length, e.g. one that produces repeated output.
  3. Send a /v1/chat/completions API request with max_tokens=4096 (see the sketch after this list).
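
For reference, here is a minimal request sketch, assuming mlc_llm serve is listening on http://127.0.0.1:8000 with its OpenAI-compatible API; the model name and prompt below are placeholders for my local setup:

import requests

# Hypothetical local endpoint/model; adjust to your own `mlc_llm serve` instance.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "rubik_v0.0.0.25",
        "messages": [{"role": "user", "content": "<a prompt that triggers repeated output>"}],
        "max_tokens": 4096,
    },
    timeout=600,
)
print(resp.json())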

Expected behavior

The output should stop with finish_reason=length when total_tokens reaches 4096.
However, I added some logs in cpp/serve/engine_actions/new_request_prefill.cc, cpp/serve/engine_actions/batch_decode.cc, and cpp/serve/model.cc. They show that after roughly 3954 batch_decode forward steps, the engine falls back to a full-context-length new_request_prefill forward, and once the context length reaches 4088 it enters an infinite forward loop, blocking the engine.
The full log is in the attached file: nohup.txt
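
Continuing the request sketch above, this is the check I expected to pass (but the request never returns, so it is never reached):

# Expected: the response ends with finish_reason == "length"
# and usage.total_tokens == 4096; instead the request hangs.
body = resp.json()
print(body["choices"][0]["finish_reason"])  # expected: "length"
print(body["usage"]["total_tokens"])        # expected: 4096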

Environment

  • Platform (e.g. WebGPU/Vulkan/IOS/Android/CUDA): CUDA
  • Operating system (e.g. Ubuntu/Windows/MacOS/...): Ubuntu
  • Device (e.g. iPhone 12 Pro, PC+RTX 3090, ...): Jetson Orin
  • How you installed MLC-LLM (conda, source): source
  • How you installed TVM-Unity (pip, source): source
  • Python version (e.g. 3.10): 3.8
  • GPU driver version (if applicable): -
  • CUDA/cuDNN version (if applicable): CUDA11.4
  • TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
root@tegra-ubuntu:/home/nvidia# python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"
USE_NVTX: OFF
USE_GTEST: OFF
SUMMARIZE: OFF
TVM_DEBUG_WITH_ABI_CHANGE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: 11.4
USE_LIBBACKTRACE: OFF
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: OFF
BUILD_DUMMY_LIBTVM: ON
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
TVM_LOG_BEFORE_THROW: OFF
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_MSCCL: OFF
USE_NNAPI_RUNTIME: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: OFF
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: OFF
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 79a69ae4a92c9d4f23e62f93ce5b0d90ed29e5ed
USE_VULKAN: OFF
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2024-11-11 00:56:50 -0500
USE_HIPBLAS: OFF
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: /usr/local/cuda
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: OFF
USE_NNPACK: OFF
LLVM_VERSION: NOT-FOUND
USE_MRVL: OFF
USE_OPENCL: ON
COMPILER_RT_PATH: 3rdparty/compiler-rt
USE_NNAPI_CODEGEN: OFF
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_FLASHINFER: OFF
USE_CUBLAS: OFF
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_NVSHMEM: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: OFF
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: /usr/bin/aarch64-linux-gnu-g++
HIDE_PRIVATE_SYMBOLS: ON
  • Any other relevant information:

Additional context

My questions are:

  1. Why does a decode forward run back into a prefill forward?
  2. Is the second infinite loop caused by mlc-llm not having a default value for max_single_sequence_length? (See the sketch below.)
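
To illustrate question 2, here is a hedged sketch of the engine-level backstop I would expect; the names are hypothetical and this is not MLC-LLM's actual code:

def should_stop(total_tokens, max_tokens, max_single_sequence_length=None):
    """Hypothetical stopping check; illustration only, not MLC-LLM internals."""
    # Per-request budget: this is what finish_reason="length" reflects.
    if total_tokens >= max_tokens:
        return "length"
    # Engine-level backstop: if max_single_sequence_length has no default,
    # this guard never fires and generation can loop near the context window.
    if max_single_sequence_length is not None and total_tokens >= max_single_sequence_length:
        return "length"
    return None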
gesanqiu added the bug (Confirmed bugs) label Dec 6, 2024