I am syncing from the intermediate database to gp and the replication lag keeps growing. How can I improve performance here?
Reading the table ids from the bin_data table in that database works fine, and binlog_miner is capturing the changes over.
The binlog_loader program is very slow.
It turned out to be a problem with the intermediate database on our server. After dropping a 28 GB table in the intermediate gp (the source MySQL has a fairly heavy read/write load), replication works fine again with the default load_batch = 10 and load_batch_gap = 10, and the load stays low. But since the sync volume is large, do we really have to manually clean up the tables in the intermediate gp every so often?
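For reference, a minimal sketch of what a scheduled cleanup of the intermediate database could look like, run from cron or similar. The thread does not name the staging tables, so the DSN and table names below are placeholders, and it only shows the mechanics of truncating them; whether a given table is safe to truncate depends on what binlog_loader has already applied.

```python
# Minimal sketch, assuming the intermediate Greenplum speaks the PostgreSQL
# protocol and is reachable with psycopg2. The DSN and table names are
# placeholders, not taken from this issue; only truncate staging data that
# binlog_loader has already applied to the target.
import psycopg2

INTERMEDIATE_DSN = "host=transit-gp dbname=transit user=sync"  # placeholder DSN
STAGING_TABLES = ["binlog_staging_orders"]                     # hypothetical names


def truncate_staging_tables():
    """Reclaim space in the intermediate database by truncating staging tables."""
    conn = psycopg2.connect(INTERMEDIATE_DSN)
    try:
        # The connection context manager wraps one transaction:
        # commit on success, rollback on error.
        with conn, conn.cursor() as cur:
            for table in STAGING_TABLES:
                cur.execute("TRUNCATE TABLE {};".format(table))
    finally:
        conn.close()


if __name__ == "__main__":
    truncate_staging_tables()
```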
The way to increase incremental sync throughput is to raise the amount of data processed per batch and lower the frequency, so that each pass handles more data. If there are many tables, we also recommend lowering the frequency.
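A hedged example of the kind of adjustment being suggested. Only the parameter names load_batch and load_batch_gap come from this thread (the defaults quoted above are 10 and 10); the values here are purely illustrative, so check the binlog_loader documentation for the exact semantics of each parameter before changing them.

```
# Illustrative values only; consult the binlog_loader docs for exact semantics.
load_batch = 100      # process more data in each batch
load_batch_gap = 60   # run batches less frequently
```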