Peer to chaincode optimization proposal RFC #58
Conversation
Have you investigated the performance of a shim that simply defers all writes until the end of execution?
Yes, this will achieve better results than the current implementation, though doing it with a single message is still better, since you avoid sending multiple messages.

After all, it is not very hard to implement this RFC, which could bring significant gains without breaking current functionality or transaction semantics, let alone the programming model.
There was an old Jira item to implement SetStateMultipleKeys() and GetStateMultipleKeys() to set and get keys in batch, for precisely the reason you mention. It was even implemented at the ledger level, just not at the chaincode programming model level. I don't think there was any particular reason it didn't get implemented; it simply wasn't prioritized for development. But let's also check with @manish-sethi to see if there were any other considerations that delayed the implementation...

The main reason for this is to maintain the sanity of a transaction as intended at the application layer, since a transaction is defined as a sequence of reads and writes. For instance, if a transaction sets different values for the same key, the result could differ across peers if parallelism were allowed. Even if the result on different peers ends up the same, unless we prohibit setting the value of the same key more than once in a transaction, in the worst case parallelism in the shim could produce a different result than the chaincode intended.

Yes, that was the intention. It was always a matter of priority. If we expose SetStateMultipleKeys() and GetStateMultipleKeys() to the chaincode, the simple cases should benefit easily.
```proto
message ChangeStateBatch {
  repeated StateKV kvs = 1;
}

message StateKV {
  enum Type {
    UNDEFINED = 0;
    PUT_STATE = 9;
    DEL_STATE = 10;
    PUT_STATE_METADATA = 21;
    PURGE_PRIVATE_DATA = 23;
  }

  string key = 1;
  bytes value = 2;
  string collection = 3;
  StateMetadata metadata = 4;
  Type type = 5;
}
```
Please note that this RFC suggests sending anything that changes state in batches.
I think it's cool.
Well, we can revive this line of work; this RFC is precisely about that. Once we reach agreement, we can just do it. I do think there is general consensus to proceed, given that in the past there were even plans to do it, not to mention that the ledger already supports it.
@denyeart, what should the next steps be? Would you like this RFC to be presented at the community call? Would you like us to vote? Something else? Note that this is a straightforward, non-intrusive change that doesn't break any APIs while still bringing impact.
I think the original vision for GetStateMultipleKeys() and SetStateMultipleKeys() was for the chaincode developer to be able to control getting and setting multiple keys at once. Whereas I believe your proposal would be transparent to the chaincode developer, that is, the updates would still be one at a time from the chaincode developer perspective, but at runtime they would be cached in the shim until the end of chaincode execution, at which point they would all be 'flushed' to the peer all at once, right before the peer builds the final read/write set. Is my understanding correct?

I do agree the transparent batching makes more sense for ledger updates. For gets I still think the original intent makes sense: allow the chaincode developer to get N keys at once, so that they can use the values in the subsequent chaincode logic. However, I think batch get is lower priority and not the intent of this RFC, so I'm ok with just doing the transparent batching for the updates as part of this RFC.

Question: do we need to consider splitting the batch into smaller pieces? For example, if chaincode attempts to update 100000 keys, should we send 10 batches of 10000 each? This may help to stay under limits such as grpc max message size (default 100MB), but on the other hand such large transactions would likely cause problems regardless, due to the ultimately large write set. I'd prefer to keep things as simple as possible; just checking your thoughts.

In terms of next steps, RFCs require approval from 3 maintainers. For relatively straightforward RFCs like this one, we can just utilize github. If you think there are more considerations to discuss, we could add an agenda item to the November 20th contributor meeting.
Yeah, that's right.
This behavior will be configurable in core.yaml via two new parameters, UsePutStateBatch and MaxSizePutStateBatch. The thing is, I know this will be an improvement in my case, but I don't know all the Fabric use cases, so the UsePutStateBatch parameter will make it possible to disable the new behavior. Alternatively, MaxSizePutStateBatch could be set to a very large value (e.g. 1000000000) so that the batch is never split. A benchmark test will also be added, which you can use as an example to research which parameters work best in your particular case.
I agree with the overall proposal. However, if the goal is to keep the behavior consistent with the current implementation, I recommend an exercise of carefully reviewing the transaction simulator code, as some write operations involve reading the current state as well. In summary, we should assess whether it is feasible to maintain the current behavior in all cases, or discuss alternatives. A simple alternative could be to limit this optimization to straightforward cases, or to allow modifying the behavior so that any error is propagated at the end rather than at the specific step where it occurs. Also, in some cases we would have to limit ourselves to returning a more general error, as we would have lost the actual sequence of reads and writes.
That's an excellent point. |
Yes, and hence it changes the error-handling behavior from the current implementation. For example, if you make changes only on the shim side of the code, a user will get this error even though in their chaincode they may have performed a write before the query. So I suggest going through the lower-level code and capturing all behavior changes, and any changes to error handling at the lower level that may be needed. IMO, this should be part of the RFC.
Doesn't it work both ways? It seems to me that operations that cannot be performed in one transaction cause an error regardless of the order.
Yes, it does, and in the current implementation the appropriate error is returned. If you make changes only on the shim side, the wrong error will be returned. In that case, it would be more appropriate to return a more general error than a wrong one.
You got that right: the main intent was to make batched update operations transparent, and hence the suggested modifications are the least intrusive way to get to that point.

Yes, we need to split it, since we cannot send unlimited proto messages; the batch has to respect that limitation. Honestly, I also do not think this is too difficult to implement.
I have to admit I am not familiar with this part of the code at a very intimate level. I do agree @manish-sethi has a good point, and therefore we need to make sure we list all such cases. That being said, since the proposal suggests keeping state updates within the shim, validation of this sort could be completed on the shim side as well, preserving the error-handling semantics.
@denyeart what would be the next steps?
The feedback from Manish certainly complicates things. The authors of the RFC need to incorporate the feedback and propose a plan for how it will be addressed. Alternatively, the RFC could be scaled back; for example, instead of saving all writes for the end, the chaincode could send bursts of updates to the peer. Potentially give the chaincode developer the ability to control the batching by calling an operation that sets multiple keys at a time (as was envisioned years ago with SetStateMultipleKeys). This would keep things most simple. Or as a compromise, perhaps the chaincode shim itself could manage the batches. As an example, if the chaincode does one GetState(), then 100 PutState(), and a final GetState(), the chaincode could send to the peer in 3 operations: the first GetState(), then a single batch containing the 100 PutState() updates, then the final GetState().
I would be in favor of one of these partial batching approaches. This would allow keeping the checks and error-message generation on the peer side, which will reduce the scope, complexity, and risk of the change. It will also help with consistency across the various chaincode shim language implementations. Especially now that the original developers aren't as active, we need to manage the scope and risk of such changes. I'm not sure how such counter-proposals will impact the RFC authors' use cases, however. The RFC authors will therefore need to synthesize Manish's and my responses, together with their use-case requirements, and come back with an updated proposal.
I don't yet understand what you're referring to. Can you give me a specific case, e.g. "this was sent, and this message was received"? I'll write a corresponding test and think about it.
I can honestly say that we already have the code ready, and for our use cases we got a good speedup. I will put links here to the commits with the changes we want to propose, after this RFC PR is approved. pfi79/fabric-protos@1c27d38
@pfi79 Basically what Manish is saying is that if you change the order of the reads and writes, you may get different behavior or a different error message. To address this, you could shift all these checks and error messages from peer side to shim side. However, that would be a lot of work and we'd likely miss something. I would prefer to keep the order of reads and writes the same as it is currently (the order as seen in the chaincode), so that the behavior remains the same. What I'm suggesting is that you could still batch consecutive writes, so that the order of reads and writes remains the same, while still getting the performance improvements of batching. Of course, the chaincode would have to be written intelligently to do consecutive writes rather than read-write-read-write. |
Let's say Manish is right. |
By the way, above I provided commits of my code variant. But the situation Manish is talking about did not show up.
Sorry Manish, I don't understand your example. |
Here is one example that is mentioned in the documentation:

If chaincode does a write and then queries private data, you will get an error based on this check. If we hold all writes until the end, you won't get this error. You will likely still get an error based on the other checks, but it would be a different error that doesn't match the developer's view of the chaincode context.
@denyeart Thanks for the example. The same situation can also arise with, for example, GetStateWithPagination and PutState.
OK, I think a sane alternative might be to piggyback batches of updates (essentially write operations) onto the get/read operations: basically, not to wait until the end, but to send all pending writes along with the get command, which has to communicate with the peer anyway. This obviously has a drawback in cases with a lot of interleaved reads and writes, where you will lose the benefit of the batching. Another possible solution might be to have a set of explicit APIs for beginning and flushing a write batch.
Of course, this is a slightly more intrusive approach, but on the other hand it introduces a clear separation between the two approaches and doesn't assume anything implicitly, which might be good overall for the end-user experience. @denyeart @manish-sethi thoughts?
While this is indeed an important thing, we should try to avoid bringing breaking changes with new versions or at least avoid making people change a bunch of legacy code just because we decided to optimize something. Refactoring legacy code to accommodate new errors and programming models might nullify all the benefits of batching. |
I'm sorry, but I disagree. I'm not sure there are breaking changes here. So far all the examples have been chaincode transactions that don't work, that is, that always return errors. So I would have to imagine that somewhere in production there is a chaincode installed with a transaction that calls a contradictory method, i.e. one that always ends with an error. The only place that could be is in chaincode tests. Can you show an example of a working transaction where this would matter?
That could be a problem right there: the user executes GetState and receives an error from a PutState operation that was sent along with the read operation.
We could add our variant but keep the old behavior available in the settings. Then we won't touch the legacy code, and others can switch to the new mode deliberately via core.yaml.
@denyeart @manish-sethi The proposal is currently written so that the new behavior is enabled automatically when new components are detected. I want to support @C0rWin's suggestion: the new mode should be switched on not automatically, but manually, by calling the stub.StartBatch() (or BatchBegin) method. The mode would be switched off either by calling the stub.FinishBatch() (or BatchEnd) method, or at the end of transaction execution. The only observation on the suggestion is that yes, we send in batches, but we apply the values one by one. Therefore results := stub.FinishBatch() should return only one error. Then legacy software will be safe until the new mode is enabled in the chaincodes by changing the code. Would such a change to the proposal suit you?
@pfi79 I would be in favor of explicit batching from a chaincode developer command. This way, if there are any issues, the developer can easily enable/disable the batching at the chaincode level and compare the results to determine whether an issue is related to batching, without needing to update the peer configuration. It would also make sure the developer understands the update model. For example, with automatic batching, a developer watching the peer logs may be confused to see all the reads executing prior to the writes. If they explicitly add the batch command, they will understand that the writes are batched and processed at the end.

Finally, for existing users it would also allow an incremental transition to the new approach. For any chaincode that writes many keys they may want to enable the explicit batching; for other chaincodes they could leave it as-is, without batching enabled. Since the batching is only related to writes, maybe the command should be called StartWriteBatch().
@pfi79 @C0rWin @manish-sethi There is a Fabric contributor meeting scheduled for tomorrow November 20th. We could discuss during the meeting if you are able to attend. |
- Added a new RFC proposing a batching mechanism to reduce communication overhead.
- Defined new `ChangeStateBatch` message type to encapsulate multiple state changes.
- Outlined the process flow, including negotiation and batch handling logic.
- Included sample chaincode illustrating the issue with the current communication protocol.
- Provided benchmarks comparing performance before and after applying batching.
- Listed required repository changes in the correct order:
  - github.com/hyperledger/fabric-protos
  - github.com/hyperledger/fabric-chaincode-go
  - github.com/hyperledger/fabric
- Ensured backward compatibility by using a negotiation mechanism in `Ready` messages.
- Added example handler code for processing batched state changes.

Signed-off-by: Artem Barger <[email protected]>
@denyeart, @manish-sethi, @satota2 guys, updated RFC as per our conversation during the community call. Please review. |
```go
// FinishWriteBatch flushes the current batch and sends all collected write operations to the peer.
// Returns a serialized set of results for each operation with the corresponding status.
func (stub *ChaincodeStub) FinishWriteBatch() ([]BatchResult, error) {
```
Sorry, but I'm confused by []BatchResult.
What will be written there?
Usually a change operation returns an error or nil.
The batch will be applied on the peer sequentially, and if any operation ends with an error, processing of the whole batch stops immediately with that error.
Therefore []BatchResult will consist either of all nils, or of some number of nils and one error.
Wouldn't it be better to replace []BatchResult with error?
That was only a suggestion. I think you have a point: having only an error should indeed be enough of an indication, given that, implementation-wise, we won't continue trying to execute the batch once we stumble on an error.
@denyeart can we move forward to the voting and last call stage?