This repository has been archived by the owner on Feb 8, 2024. It is now read-only.
Hello,
I am trying to compute a farthest point sampling of my data set.
It contains ~10000 frames, and I want to select 1000 points.
Currently I am running on a cluster with 10 nodes (400 CPUs), and in 11 hours it has computed only the first ~130 points.
I am using the following flags: --kernel rematch -n 12 -l 8 --gamma 0.1 -c 4.5 --peratom --distance
and my system contains 38 atoms.
Is this the expected performance? Is there any trick to accelerate the calculation, beyond reducing the number of considered atoms and/or the size of the data set?
Regards,
Yair
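(For context on where the time goes: greedy farthest point sampling must, for every new point, know the distance from each remaining frame to its nearest already-selected frame, so each of the 1000 selections scans all ~10000 frames. glosim's actual implementation differs in detail, but the greedy scheme can be sketched generically, here assuming a precomputed distance matrix:)

```python
import numpy as np

def farthest_point_sampling(dist, n_samples, start=0):
    """Greedy farthest point sampling on a precomputed distance matrix.

    Each iteration picks the frame farthest from the set already selected,
    which requires one pass over all frames per selected point.
    """
    selected = [start]
    # distance from every frame to its nearest selected frame so far
    min_dist = dist[start].copy()
    for _ in range(n_samples - 1):
        nxt = int(np.argmax(min_dist))          # farthest remaining frame
        selected.append(nxt)
        min_dist = np.minimum(min_dist, dist[nxt])
    return selected

# toy example: 5 points on a line; FPS spreads the picks out
pts = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
dist = np.abs(pts[:, None] - pts[None, :])
print(farthest_point_sampling(dist, 3))  # → [0, 4, 3]
```

When the pairwise distances come from an expensive kernel such as rematch, each of those scans triggers many kernel evaluations, which is where the hours go.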
Use --kernel fastavg; rematch has to do a ton of calculations. If you really need rematch (which tends to be better at disambiguating similar structures), make sure you have compiled the Cython library for sinkhorn; otherwise glosim will fall back on a Python version, which brings slow to another level.
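Concretely, with the flags from the original post, the suggestion amounts to swapping the kernel (the other options shown are the poster's own settings, kept unchanged):

```shell
# original (slow): --kernel rematch -n 12 -l 8 --gamma 0.1 -c 4.5 --peratom --distance
# suggested faster run:
glosim.py input.xyz --kernel fastavg -n 12 -l 8 -c 4.5 --peratom --distance
```

Note that `input.xyz` and the script name are placeholders; --gamma only applies to the rematch kernel, so it is dropped here.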