
[server/kv] Fix out-of-order exception after deleting a non-existent row #238

Open
wants to merge 2 commits into main

Conversation

luoyuxia
Collaborator

Purpose

Linked issue: close #230

Although the change log is empty, we should still try to append the empty log so that the sequence id can be recognized by the WriterStateManager.
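To illustrate the idea behind this fix, here is a minimal sketch (hypothetical names, not the actual WriterStateManager API) of per-writer batch-sequence tracking: the server remembers the last batch sequence per writer, and any gap in the sequence fails the check.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of per-writer batch-sequence tracking
// (hypothetical names, not the real WriterStateManager API).
class WriterStateSketch {
    // writerId -> last appended batch sequence; absent means "no batch seen yet" (-1).
    private final Map<Long, Integer> lastSeq = new HashMap<>();

    /** Validates that batchSeq directly follows the last one, then records it. */
    void append(long writerId, int batchSeq) {
        int current = lastSeq.getOrDefault(writerId, -1);
        if (batchSeq != current + 1) {
            throw new IllegalStateException(
                    "Out of order batch sequence for writer " + writerId
                            + " : " + batchSeq + " (incoming batch seq.), "
                            + current + " (current batch seq.)");
        }
        lastSeq.put(writerId, batchSeq);
    }
}
```

If batch seq 0 (the delete of a non-existent row, producing an empty change log) is never appended, the next batch arrives with seq 1 while the server still holds -1, matching the exception reported in #230. Appending even an empty log keeps the sequence contiguous.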

Tests

FlussTableITCase#testDeleteNotExistRow

API and Format

Documentation

Member

@wuchong wuchong left a comment


This fix doesn't work. When I enable logging locally, there are many ERROR messages:

7540 [fluss-netty-server-worker-thread-1] WARN  com.alibaba.fluss.server.log.LocalLog [] - Trying to roll a new log segment with start offset 0 =max(provided offset = Optional[0], LEO = 0) while it already exists and is active with size 0., size of offset index: 0.
...
8090 [ReplicaFetcherThread-0-2] ERROR com.alibaba.fluss.server.replica.fetcher.ReplicaFetcherThread [] - Unexpected error occurred while processing data for bucket TableBucket{tableId=0, bucket=0} at offset 0
com.alibaba.fluss.exception.OutOfOrderSequenceException: Out of order batch sequence for writer 0 at offset 0 in table-bucket TableBucket{tableId=0, bucket=0} : 1 (incoming batch seq.), -1 (current batch seq.)
8093 [ReplicaFetcherThread-0-2] ERROR com.alibaba.fluss.server.replica.fetcher.ReplicaFetcherThread [] - Unexpected error occurred while processing data for bucket TableBucket{tableId=0, bucket=0} at offset 0
com.alibaba.fluss.exception.OutOfOrderSequenceException: Out of order batch sequence for writer 0 at offset 0 in table-bucket TableBucket{tableId=0, bucket=0} : 1 (incoming batch seq.), -1 (current batch seq.)
18368 [fluss-scheduler-0-thread-1] INFO  com.alibaba.fluss.server.replica.Replica [] - Shrink ISR From [2, 0, 1] to [2]. Leader: (high watermark: 0, end offset: 1, out of sync replicas: [0, 1])
18636 [coordinator-event-thread] INFO  com.alibaba.fluss.server.zk.ZooKeeperClient [] - Updated LeaderAndIsr{leader=2, leaderEpoch=0, isr=[2], coordinatorEpoch=0, bucketEpoch=1} for bucket TableBucket{tableId=0, bucket=0} in Zookeeper.
18702 [fluss-netty-client(NIO)-12-1] INFO  com.alibaba.fluss.server.replica.Replica [] - ISR updated to [2] and bucket epoch updated to 1 for bucket TableBucket{tableId=0, bucket=0}
...

This fix breaks the replica leader and triggers a switch to a new leader (with an empty writer state); that's why the subsequent upserts work. However, this is not a proper fix.

appendInfo.startOffsetOfMaxTimestamp(),
validRecords);
// if there are records to append
if (appendInfo.lastOffset() >= appendInfo.firstOffset()) {
Member


Skipping the log write is problematic, because the writer state of the replicas gets out of sync with the leader.

Besides, this if condition doesn't take effect, because the method returns at the beginning when appendInfo.shallowCount() == 0.
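The desync can be sketched as follows (hypothetical names; the point is that the leader advances its in-memory writer state while the skipped batch never reaches the replicated log, so a follower rebuilding its state from the fetched log ends up behind):

```java
import java.util.List;

// Sketch of why skipping the log write desyncs follower writer state
// (hypothetical names, not the actual replica-fetcher code).
class ReplicaDesyncSketch {
    /** Replays a log of batch sequences and returns the resulting writer state. */
    static int replay(List<Integer> log) {
        int current = -1; // -1 means "no batch seen yet"
        for (int seq : log) {
            if (seq != current + 1) {
                throw new IllegalStateException(
                        "Out of order: incoming " + seq + ", current " + current);
            }
            current = seq;
        }
        return current;
    }

    public static void main(String[] args) {
        // Batch seq 0 is an empty change log: suppose the leader advances its
        // in-memory writer state to 0 but skips the log write, so the log the
        // follower fetches starts directly at batch seq 1.
        List<Integer> fetchedLog = List.of(1);

        // The follower rebuilds writer state purely by replaying the fetched
        // log: it sees seq 1 while its state is still -1 and fails, which is
        // the ReplicaFetcherThread error shown in the logs above.
        try {
            replay(fetchedLog);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```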

Collaborator Author


I hadn't noticed that the replica's writer state would get out of sync...
Btw, this method won't return at the beginning, because appendInfo.shallowCount() won't be equal to 0 here.

0L,
0,
0,
false);
Member


cc @swuferhong , do you remember which cases this was introduced to fix?

Besides, could you help review this PR? How do the Kafka client and server handle the sequence id when the produced messages are empty?

Collaborator Author


It's used to fix the LogSegmentOffsetOverflowException thrown when appending an empty log in LogSegment#ensureOffsetInRange, since the largestOffset will be 0. So I skipped appending the empty record to avoid that, but it causes the replica's writer state to go out of sync.

Let's see how Kafka solves this...
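For reference, the failure described above can be sketched as an offset-range check of roughly this shape (hypothetical names and logic, not the actual LogSegment#ensureOffsetInRange code): an empty append that defaults largestOffset to 0 falls below the segment's base offset and trips the check.

```java
// Sketch of an offset-range check of the kind described above
// (hypothetical names, not the actual LogSegment implementation).
class OffsetRangeSketch {
    /**
     * A segment stores offsets relative to its base offset, so the relative
     * offset must be non-negative and fit in an int.
     */
    static void ensureOffsetInRange(long baseOffset, long largestOffset) {
        long relative = largestOffset - baseOffset;
        if (relative < 0 || relative > Integer.MAX_VALUE) {
            throw new IllegalArgumentException(
                    "largest offset " + largestOffset
                            + " out of range for segment with base offset " + baseOffset);
        }
    }
}
```

An empty batch reporting largestOffset = 0 against a segment whose base offset is, say, 5 yields a negative relative offset and throws; that is the situation the skipped append was working around.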

Development

Successfully merging this pull request may close these issues.

[Bug] OutOfOrderSequenceException may be thrown when write to primary key table