Could you clarify timeouts for the Query? #413
Comments
Do you know what causes the timeouts? Do you have an unstable cluster/network? I have a bit of trouble reproducing this issue. We have found a case in which, in some default configurations, adding a new node to the cluster could exhaust the max retries, but I presume that's not what you are observing here.
@khaf the cluster is stable. It is connected via a 10 Gb local network and there are no issues except long partition scans.
Can you also include your
@khaf sure
And the
A 3-node setup with multicast.
An estimated ~6 million of ~60 million records in the set. ExpressionFilter isn't set.
Thanks for the detailed info. I'm on it; it may take a couple of days though.
Hi. Looks like I have a similar problem. A 3-node cluster (aerospike-server:5.7.0.24) in k8s. Local network. 1+ billion records in the set. In-memory storage.
Reading the results in 10 threads. After processing 560 million records, I got an error:
I ran the app many times; it scans normally until 560 million records and then always breaks at the same point, so a full scan never finishes. The cluster is stable and all nodes are alive. I tried running the scan at different times, when the cluster was not under high load.
+1, same problem for us.
We have encountered the same issue |
I don't understand how to manage timeouts for Query.
I set 20 retries with a sleep of 2 seconds, and it still times out.
Per-record processing time is 150-250 ms with 300 goroutines.
What should I change to increase the timeout on the Aerospike side? After 20 retries, i.e. after roughly 60-70 seconds of running, the code fails.
AS: Aerospike Community Edition build 5.6.0.5
Client: v6.13.0