Error while importing import flexflow.serve as ff
#18
Comments
I have a similar problem after building and installing with pip. What is your machine configuration?

cd /content/FlexFlow/ && pip install -r requirements.txt
pip install flexflow

import flexflow.serve as ff
# FlexFlow initialisation
ff.init(
    num_gpus=1,
    memory_per_gpu=14000,
    zero_copy_memory_per_node=8000,
    tensor_parallelism_degree=2,
    pipeline_parallelism_degree=1,
    num_cpus=2,
    profiling=True
)

# Specify the LLM
llm = ff.LLM("meta-llama/Llama-2-7b-hf")

# Specify a list of SSMs (just one in this case)
ssms = []
ssm = ff.SSM("JackFram/llama-68m")
ssms.append(ssm)

# Create the sampling configs
generation_config = ff.GenerationConfig(
    do_sample=False, temperature=0.9, topp=0.8, topk=1
)

# Compile the SSMs for inference and load the weights into memory
for ssm in ssms:
    ssm.compile(generation_config)

# Compile the LLM for inference and load the weights into memory
llm.compile(generation_config,
            max_requests_per_batch=16,
            max_seq_length=256,
            max_tokens_per_batch=128,
            ssms=ssms)

llm.start_server()
result = llm.generate("Here are some travel tips for Tokyo:\n")
# llm.stop_server()  # This invocation is optional

Output:

/usr/local/lib/python3.10/dist-packages/flexflow/serve/__init__.py in <module>
15 from typing import Optional
16 from ..type import *
---> 17 from flexflow.core import *
18 from .serve import LLM, SSM, GenerationConfig, GenerationResult
19
/usr/local/lib/python3.10/dist-packages/flexflow/core/__init__.py in <module>
32 else:
33 # print("Using cffi flexflow bindings.")
---> 34 from .flexflow_cffi import *
35
36 ff_arg_to_sysarg = {
/usr/local/lib/python3.10/dist-packages/flexflow/core/flexflow_cffi.py in <module>
36 )
37 from flexflow.config import *
---> 38 from .flexflowlib import ffi, flexflow_library
39
40
/usr/local/lib/python3.10/dist-packages/flexflow/core/flexflowlib.py in <module>
18 from .flexflow_cffi_header import flexflow_header
19
---> 20 from legion_cffi import ffi
21 from distutils import sysconfig
22
ModuleNotFoundError: No module named 'legion_cffi'
@hygorjardim Ubuntu 22.04, CUDA 12.3, GPU compute capability 7.0.
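For anyone else comparing setups, these values can be read off the standard OS/NVIDIA tools (a sketch; the compute_cap query field assumes a reasonably recent NVIDIA driver):

lsb_release -a                                                       # OS release
nvcc --version                                                       # CUDA toolkit version
nvidia-smi --query-gpu=name,driver_version,compute_cap --format=csv  # GPU model, driver, compute capability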
@goelayu Here is my output from config.linux; some versions are a little different from yours. I am running in a Google Colab environment.
I solved this problem in 3 steps:
Then I reinstalled setuptools:
I modified the file in
It then builds successfully. Finally, add the directory containing legion_cffi.py to the PYTHONPATH, for example:
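A minimal sketch of that last step, assuming the build dropped legion_cffi.py under the Legion Python bindings directory (the path below is a placeholder; substitute the directory from your own build):

# Placeholder path: point this at the directory your build wrote legion_cffi.py into
export PYTHONPATH="/path/to/FlexFlow/build/deps/legion/bindings/python:$PYTHONPATH"
# Verify the module now resolves before retrying the flexflow import
python -c "import legion_cffi; print(legion_cffi.__file__)"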
I installed FlexFlow using the pip command:
pip install flexflow
While importing it with
import flexflow.serve as ff
I get the error ModuleNotFoundError: No module named 'legion_cffi'
I doubt this is expected behavior?
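For what it's worth, a quick way to check what the pip wheel actually installed and whether legion_cffi resolves at all (a diagnostic sketch, not a fix):

python -c "import flexflow; print(flexflow.__file__)"                 # which flexflow is being imported
python -c "import legion_cffi" || echo "legion_cffi is not on the Python path"
find /usr/local/lib/python3.10 -name "legion_cffi.py" 2>/dev/null     # look for a copy installed elsewhere under site/dist-packages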
I am able to get past this error by installing FlexFlow from source; however, I then run into the following error while running the
ff.init
command. I believe I am using the latest version of ucx, so I am not sure why I am getting this error.
Command used for configuration while building from source:
FF_CUDA_ARCH=70 FF_USE_PYTHON=ON FF_LEGION_NETWORKS=ucx UCX_DIR=/software/ucx-1.15.0/install ./config/config.linux
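If it helps with the UCX question, the version the configure step actually picks up can be checked against the install passed in UCX_DIR (ucx_info ships with UCX and prints the version and build configuration):

/software/ucx-1.15.0/install/bin/ucx_info -v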
Any tips on overcoming the ModuleNotFoundError without having to build from source? And any ideas on how to avoid the ucx version error?