memory leak from boosters with CUDA #207
Could you try to reproduce with XGBoost.jl only? It's hard for me to imagine how the wrapper can cause this: it's really not doing anything during training other than calling the update function from the library; though it's been a while since I've worked on this package so my memory might be hazy. |
I guess my question is: what does XGBoost.jl do, fundamentally, when training or running inference on large data? Is there a way to manually tell it to partition the data more finely? |
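For context, a minimal sketch of the usage pattern being discussed, using the same tuple-based `xgboost` call that appears in the snippets below; the keyword choices are illustrative only:

```julia
using XGBoost

# The wrapper hands the full training set to the library as a single DMatrix and
# then repeatedly calls the library's update routine; as far as the thread
# establishes, there is no built-in way to partition the data more finely from Julia.
X = randn(100_000, 3)
y = randn(100_000)
bst = xgboost((X, y); num_round = 10, tree_method = "hist", eta = 0.08)
```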
```julia
function cross_train(df_all; model = XGBoostClassifier(; tree_method="hist", eta=0.08, max_depth=7, num_round=90), Nfolds=5)
    y, eventNumber, X = unpack(df_all, ==(:proc), ==(:eventNumber), ∈(BDT_input_names))
    eventNumber_modN = mod1.(eventNumber, Nfolds)
    machines = []
    @views for ith_fold in 1:Nfolds
        fold_mask = eventNumber_modN .!= ith_fold
        fold_y = y[fold_mask]
        fold_X = X[fold_mask, :]
        mach_bdt = machine(model, fold_X, fold_y)
        fit!(mach_bdt)

        push!(machines, mach_bdt)
    end
    machines
end
```

I think my problem is that each MLJ machine holds a reference to the data, and the data probably got moved to CUDA, so it's now effectively a memory leak. |
The library itself doesn't exactly give us a huge amount of leeway: basically you have to put your entire dataset into memory and it runs on it. There is a little bit of utility for larger datasets, but last I looked it was pretty half-baked, at least within the C API. I wonder if there is memory leaking when creating lots of |
Uh. You could try:

```julia
mach = restore!(serializable(mach))
```

Methods exported by MLJ and MLJBase, I think. |
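For reference, a rough sketch of how that suggestion could slot into the fold loop from the snippet above; it reuses the variable names from `cross_train` and assumes `serializable` and `restore!` are in scope via MLJ/MLJBase, as suggested:

```julia
using MLJ  # serializable and restore! are exported by MLJ / MLJBase (per the suggestion above)

mach_bdt = machine(model, fold_X, fold_y)
fit!(mach_bdt)

# keep only the serializable part of the machine (the fitted parameters),
# dropping the machine's reference to the training data
push!(machines, restore!(serializable(mach_bdt)))
```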
According to JuliaAI/MLJBase.jl#750 it doesn't fully remove the reference to the data somehow? But it looks like it helps quite a lot at least, comparing memory usage with the "fix" versus without the "fix".
|
I'm not 100% sure, but I doubt the cited issue applies here. |
Could you please share what's half-baked? I can help take a look. |
Anthony's solution worked for MLJ, but if I use XGBoost directly it now hits a CUDA OOM:

```julia
function cross_train(df_all; model = (; tree_method="hist", eta=0.08, max_depth=7, num_round=90), Nfolds=5)
    y, eventNumber, X = unpack(df_all, ==(:proc), ==(:eventNumber), ∈(BDT_input_names))
    eventNumber_modN = mod1.(eventNumber, Nfolds)
    machines = []
    @views for ith_fold in 1:Nfolds
        fold_mask = eventNumber_modN .!= ith_fold
        fold_y = y[fold_mask]
        fold_X = X[fold_mask, :]
        # mach_bdt = machine(model, fold_X, fold_y; cache=false)
        bst = xgboost((fold_X, fold_y); model...)
        # fit!(mach_bdt)

        # remove reference to data
        # mach = restore!(serializable(mach_bdt))
        push!(machines, bst)
        GC.gc()
    end
    machines
end

# calling with
model = (;
    tree_method = "gpu_hist",
    nthread = 10,
    eta = 0.2,
    max_depth = 10,
    scale_pos_weight = 1,
    max_delta_step = 1,
    num_round = 2,
)
```
|
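One thing that might be worth trying here, in addition to the `GC.gc()` already in the loop (this is a general CUDA.jl pattern, not something XGBoost documents for this case): ask the CUDA.jl memory pool to return freed blocks to the driver between folds.

```julia
using CUDA

# inside the fold loop, after the booster for this fold has been stored:
GC.gc()          # give finalizers of unreachable GPU-backed objects a chance to run
CUDA.reclaim()   # ask the CUDA.jl memory pool to hand freed blocks back to the driver
```

This will not release memory that live booster objects still cache (see the discussion further down), but it helps separate pool behaviour from a genuine leak.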
I suspect the thing returned by |
Sorry, this is a little off topic, but if you use EvoTrees.jl, the pure Julia gradient tree boosting implementation, then I expect these sorts of issues either do not occur, or are readily diagnosed and mitigated. EvoTrees.jl may not have all the bells and whistles you're used to but it has had a considerable number of dev-hours put into it (compared to your average Julia package - shout out to @jeremiedb) and is still actively maintained. |
Yeah, I know EvoTrees and am a big fan of the package, as well as of Jeremie (we met at this year's JuliaCon). Although it looks like Jeremie has been on a long holiday since the summer, going by the GitHub activity. |
No long holiday on my end unfortunately, just more busy with our firm :) |
@trivialfis Sorry, please don't take that comment (about accommodation for large datasets being half-baked) too seriously; I should have qualified it better. It's been a good while since I've looked at it. I recall that the last time I looked at it I had a lot of trouble, but at this point there are a lot of reasons that could have been the case, including me possibly doing something wrong. Suffice it to say that if I return to it and find something that I consider a problem, I will open an issue, but off the top of my head I don't even know that that's needed. |
No worries, feel free to ping me. We have made some more progress recently on GPU memory usage, external memory, etc. I was wondering what you ran into. I'm quite excited about the upcoming release and would like to fix any remaining issues you've hit. |
Can you reproduce with my code above? All you need is a big dataframe; partition it in the loop and keep the returned object. You should see that you end up with residual VRAM usage. |
I can look into it tomorrow, but before that, is there a Julia equivalent of https://github.com/dmlc/xgboost/blob/0e48cdcb361aee8459b4d1bbe6ff7bfdbbba4138/python-package/xgboost/training.py#L190 ? It's there to clear out all training-related cache in the booster object. Without a memory pool, GPU memory allocation is quite slow, so we keep a lot of caches there. |
@ExpandingMan Could you please help integrate the workaround into the jl package:

```julia
using CUDA
using XGBoost

X = cu(randn(100000, 3))
y = randn(100000)

vec = Any[]
for i in 1:10000
    booster = XGBoost.xgboost((X, y), num_round=10)  # no need to use `DMatrix`
    bin = XGBoost.serialize(booster)
    bst2 = Booster(DMatrix[])
    XGBoost.deserialize!(bst2, bin)
    push!(vec, bst2)
end
```

The trick is simply copying the model so that the new model doesn't have any cache in it. |
Am I to understand that the booster objects need to be GC'd to avoid leaking memory? The wrapper as it currently sits doesn't really recognize this concept. @trivialfis would you be willing to open a more detailed issue about what exactly is going on here? |
Let me try to explain it here since it's the cause of this issue. Internally, the booster has some GPU caches in it, like the buffer for storing gradients, and we need to release these buffers somehow. Right now, the only way to do it outside of |
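On the Julia side, the closest analogue to that seems to be making sure the `Booster` actually gets finalized once it is no longer needed. A sketch, under the assumption that the wrapper registers a finalizer on `Booster` that frees the underlying C handle:

```julia
using CUDA, XGBoost

X = randn(100_000, 3)
y = randn(100_000)

bst = xgboost((X, y); num_round = 2, tree_method = "gpu_hist")
# ... use bst ...

# Assumption: Booster carries a finalizer that calls the library's free routine,
# so running it now (rather than waiting for GC) should also drop the GPU caches
# described above.
finalize(bst)
CUDA.reclaim()   # then shrink the CUDA.jl memory pool
```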
Thanks for the explanation. So it seems this wrapper is pretty broken with CUDA right now and this needs to be implemented. I should be able to get around to it this week, but if anyone else gets to it first I can merge it. I think the solution for the time being will be to implement @trivialfis code in |
@trivialfis just so I understand, do you consider this a bug in |
You are correct. I think it's not too difficult to have a reset function in the booster object for eliminating most caches. I will keep track of this. |
@ExpandingMan The PR is merged: dmlc/xgboost#11042. |
Great, thanks! It'll show up in the next patch release I take it? |
It will be part of the next major release. |
It will take a while before the next release; for now, the easiest workaround is simply copying the model and ditching the old one. |
This happens with less and less free memory each time, so something is leaking memory... I'm using XGBoost via MLJ, any clue?