This repository has been archived by the owner on Aug 26, 2022. It is now read-only.
Any plan to support parallel computation for cluster points? #11
Computing the density peak points costs too much time for big datasets.
Answered by freesinger on Jul 6, 2022
Hi @timberUp, thanks for your interest.

density_peak_clustering/data_process.py, Line 61 in d3ee8b9

You may check this line and change the 3rd parameter to an integer value in [1, 100). The smaller the parameter, the less time elapsed, but the precision will also be reduced.

There is no parallel computation plan for this since this repo has been half-archived for now. You could investigate parallelism mechanisms in Python. Hope it helps~
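Since the exact parameter at data_process.py line 61 isn't reproduced here, here is a minimal sketch of the second suggestion only: using Python's built-in `multiprocessing` to parallelize the pairwise distance computation, which is usually the slowest step in density peak clustering. This is not part of this repo; the function names (`_row_block_distances`, `parallel_distance_matrix`) are hypothetical, and it assumes NumPy and SciPy are available.

```python
# Minimal sketch (not from this repo): parallelize the pairwise distance
# matrix with a process pool. Function names here are hypothetical.
from multiprocessing import Pool

import numpy as np
from scipy.spatial.distance import cdist


def _row_block_distances(args):
    # Distances from one block of rows to the full dataset.
    block, data = args
    return cdist(block, data)


def parallel_distance_matrix(data, n_workers=4):
    # Split the rows into blocks, compute each block's distances in a
    # separate process, then stack the results back together.
    blocks = np.array_split(data, n_workers)
    with Pool(n_workers) as pool:
        parts = pool.map(_row_block_distances, [(b, data) for b in blocks])
    return np.vstack(parts)


if __name__ == "__main__":
    points = np.random.rand(5000, 2)
    dist = parallel_distance_matrix(points, n_workers=4)
    print(dist.shape)  # (5000, 5000)
```

Note that each worker receives its own pickled copy of `data`, so memory use grows with the worker count; for very large datasets, shared memory (e.g. `multiprocessing.shared_memory`) or a library like joblib would avoid that duplication.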
Answer selected by timberUp