【Hackathon 7th No.1】 Integrate PaddlePaddle as a New Backend #704
base: master
Conversation
Thanks! Regarding the error, is this only on the most recent PySR version, or the previous one as well? There was a change in how multithreading was handled in juliacall, so I wonder if it's related. Just to check, are you launching this with Python multithreading or multiprocessing? I am assuming it is a bug somewhere in PySR, but I just want to check whether there's anything non-standard in how you are launching things. Another thing to check: does it change if you import paddle first, before PySR? Or does the import order not matter? There is a known issue with PyTorch and JuliaCall where importing torch first prevents an issue with LLVM symbol conflicts (since Torch and Julia are compiled against different LLVM libraries, if I remember correctly). Numba has something similar. Not sure if Paddle does too.
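A minimal sketch of the import-order experiment suggested above (this exact script is not in the thread; the data and settings are placeholders):

```python
# Import paddle before PySR, mirroring the torch/JuliaCall import-order
# workaround for LLVM symbol conflicts mentioned above.
import numpy as np
import paddle  # imported first on purpose

from pysr import PySRRegressor

X = np.random.randn(100, 5)
y = X.sum(axis=1)

model = PySRRegressor(niterations=5)
model.fit(X, y)  # check whether the crash still occurs with this import order
```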
I believe the issue is indeed related to importing Paddle, as the error only occurs when …
Quick follow-up: did you also try 0.19.4? There were some changes to juliacall that were integrated in PySR 0.19.4.
It looks like Julia uses LLVM 18: https://github.com/JuliaLang/julia/blob/647753071a1e2ddbddf7ab07f55d7146238b6b72/deps/llvm.version#L8 (or LLVM 17 on the previous version), whereas Paddle uses LLVM 11. I wonder if that is causing one of the issues. Can you try running some generic juliacall code as shown in the guide here: https://juliapy.github.io/PythonCall.jl/stable/juliacall/? Hopefully we can boil this down to a simpler example of the crash. I would be surprised if it affects only PySR (and not Julia more broadly), but it could very well be just PySR.
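A generic juliacall smoke test along the lines of the linked guide might look like the sketch below (this exact snippet is not from the thread):

```python
# Basic juliacall sanity check: evaluate Julia code from Python,
# without PySR involved.
from juliacall import Main as jl

jl.println("Hello from Julia")              # simple I/O through Julia
total = jl.seval("sum(i^2 for i in 1:10)")  # evaluate a Julia expression
print(total)  # -> 385

sqrt2 = jl.seval("sqrt(2.0)")               # call a Julia Base function
print(sqrt2)
```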
It still occurs.
I created a minimal reproducible example, as shown in the code below.

```python
import numpy as np
import pandas as pd
from pysr import PySRRegressor
import paddle

paddle.disable_signal_handler()

X = pd.DataFrame(np.random.randn(100, 10))
y = np.ones(X.shape[0])

model = PySRRegressor(
    progress=True,
    max_evals=10000,
    model_selection="accuracy",
    extra_sympy_mappings={},
    output_paddle_format=True,
    # multithreading=True,
)
model.fit(X, y)
```

Interestingly, when running … To summarize, when the environment variable …
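For context, the juliacall environment variables discussed in this thread must be set before juliacall (or PySR) is imported, since juliacall reads them when it starts the embedded Julia runtime. A minimal sketch, assuming the values being compared are the ones mentioned later in the thread:

```python
import os

# Must be set before juliacall/PySR is imported.
os.environ["PYTHON_JULIACALL_THREADS"] = "auto"        # or "1" to force a single Julia thread
os.environ["PYTHON_JULIACALL_HANDLE_SIGNALS"] = "yes"  # let Julia install its signal handlers

import paddle
paddle.disable_signal_handler()

from pysr import PySRRegressor  # juliacall picks up the settings above
```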
Are you able to build an MWE that only uses juliacall, rather than PySR? We should hopefully be able to boil it down even further. Maybe you could do something like

```python
from juliacall import Main as jl
import paddle

paddle.disable_signal_handler()
jl.seval("""
Threads.@threads for i in 1:5
    println(i)
end
""")
```

which is about the simplest way of using multithreading in Julia.
It works as expected:

```
➜ python test.py (base)
1
2
3
4
5
```
Since those numbers appear in order, it seems like you might not have multithreading turned on for Julia? Normally they will appear in some random order, as each is printed by a different thread. (Note the environment variables I mentioned above.)
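One way to confirm how many threads the embedded Julia runtime actually has (a sketch, not taken from the thread):

```python
import os
os.environ["PYTHON_JULIACALL_THREADS"] = "auto"  # must be set before the import below

from juliacall import Main as jl

# If this prints 1, Julia was started single-threaded, and the in-order
# output above is expected regardless of paddle.
print(jl.seval("Threads.nthreads()"))
```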
```
➜ PYTHON_JULIACALL_HANDLE_SIGNALS=yes PYTHON_JULIACALL_THREADS=auto python test.py (base)
```
Excuse me, is the current problem that paddle does not support multithreading for Julia without setting PYTHON_JULIACALL_HANDLE_SIGNALS=yes PYTHON_JULIACALL_THREADS=auto? Running the following, without paddle:

```python
from juliacall import Main as jl
jl.seval("""
Threads.@threads for i in 1:5
    println(i)
end
""")
```

the result is also 1 2 3 4 5.
This PR introduces PaddlePaddle backend support to the PySR library as part of my contribution to the 7th PaddlePaddle Hackathon.
Key Changes:
Related Issues:
Issue Encountered:
During my integration work, I discovered that the method SymbolicRegression.equation_search causes termination by a signal, as shown in the following code snippet: …

Interestingly, this error does not occur when I set PYTHON_JULIACALL_THREADS=1. Below are some of the steps I took to investigate the issue.

After the termination, I checked dmesg, which returned the following: …

I also used gdb --args python -m pysr test paddle to inspect the stack trace. The output is as follows: …

To be honest, I'm not very familiar with Julia, but it seems that the issue is related to multithreading within the SymbolicRegression library. I ran similar tests with the Torch backend, and below are the results: …
I am encountering some challenges understanding the internal behavior of the SymbolicRegression library. I would greatly appreciate any guidance or suggestions on how to resolve this issue.
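For reference, a sketch of the PYTHON_JULIACALL_THREADS=1 workaround mentioned above (the surrounding script is a placeholder, not the actual test code):

```python
import os

# Force the embedded Julia runtime to a single thread; this has to happen
# before PySR / juliacall is imported.
os.environ["PYTHON_JULIACALL_THREADS"] = "1"

import numpy as np
import paddle
paddle.disable_signal_handler()

from pysr import PySRRegressor

X = np.random.randn(100, 10)
y = np.ones(X.shape[0])

# With a single Julia thread the signal-related termination reported above
# does not occur, at the cost of losing parallel search.
model = PySRRegressor(output_paddle_format=True)
model.fit(X, y)
```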