
Peer to chaincode optimization proposal RFC #58

Open
wants to merge 1 commit into base: main

Conversation

@C0rWin
Contributor

C0rWin commented Oct 27, 2024

  • Added a new RFC proposing a batching mechanism to reduce communication overhead.
  • Defined new ChangeStateBatch message type to encapsulate multiple state changes.
  • Outlined the process flow, including negotiation and batch handling logic.
  • Included sample chaincode illustrating the issue with current communication protocol.
  • Provided benchmarks comparing performance before and after applying batching.
  • Listed required repository changes in the correct order:
    • github.com/hyperledger/fabric-protos
    • github.com/hyperledger/fabric-chaincode-go
    • github.com/hyperledger/fabric
  • Ensured backward compatibility by using a negotiation mechanism in Ready messages.
  • Added example handler code for processing batched state changes.

@yacovm
Contributor

yacovm commented Oct 27, 2024

I think the reason that a series of PutStates is so slow is that the chaincode shim waits for a response from the peer, which makes no sense, as the peer doesn't really do anything smart with PutState commands, except for aggregating them in the write-set.

Have you investigated the performance of a shim that simply defers all PutStates to the end of its execution and sends them all one by one to the peer, without expecting an acknowledgement? I think this should be considered, as it will probably speed things up as well, will not require a new message, and will probably be more backward compatible (a new shim would still work with an old peer, because it would not wait for the response and could probably just ignore it).
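For what it's worth, a minimal sketch of that deferral idea (hypothetical types and a placeholder send function, not the actual shim API): writes are recorded locally and only streamed out at the end, without blocking on per-message acknowledgements.

```go
package shimsketch

// Hypothetical sketch only; not the actual fabric-chaincode-go shim code.
type deferredWrite struct {
	key   string
	value []byte
}

type bufferingStub struct {
	pending []deferredWrite
}

// PutState records the write locally instead of doing a send-and-wait round trip.
func (s *bufferingStub) PutState(key string, value []byte) error {
	s.pending = append(s.pending, deferredWrite{key: key, value: value})
	return nil
}

// flush streams all buffered writes one by one without waiting for acknowledgements;
// send stands in for writing a message to the peer stream.
func (s *bufferingStub) flush(send func(deferredWrite) error) error {
	for _, w := range s.pending {
		if err := send(w); err != nil {
			return err
		}
	}
	s.pending = nil
	return nil
}
```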

@C0rWin
Contributor Author

C0rWin commented Oct 27, 2024

I think the reason that a series of PutStates is so slow is that the chaincode shim waits for a response from the peer, which makes no sense, as the peer doesn't really do anything smart with PutState commands, except for aggregating them in the write-set.

Have you investigated the performance of a shim that simply defers all PutStates to the end of its execution and sends them all one by one to the peer, without expecting an acknowledgement? I think this should be considered, as it will probably speed things up as well, will not require a new message, and will probably be more backward compatible (a new shim would still work with an old peer, because it would not wait for the response and could probably just ignore it).

Yes, this will achieve better results compared to the current implementation, but doing it with a single message is still better, since you avoid sending multiple messages.

@pfi79

pfi79 commented Oct 28, 2024

I think the reason that a series of PutStates is so slow is that the chaincode shim waits for a response from the peer, which makes no sense, as the peer doesn't really do anything smart with PutState commands, except for aggregating them in the write-set.

Have you investigated the performance of a shim that simply defers all PutStates to the end of its execution and sends them all one by one to the peer, without expecting an acknowledgement? I think this should be considered, as it will probably speed things up as well, will not require a new message, and will probably be more backward compatible (a new shim would still work with an old peer, because it would not wait for the response and could probably just ignore it).

  1. We tried sending everything at the end, but still waiting for a response to each message. The result is the same as it is now.
  2. In fabric-chaincode-go, there is protection against parallel operation (no waiting). Why this is done is not quite clear to me, so I am wary of getting into this mechanism.
  3. Judging by the code, the prohibition of parallel requests within one transaction is needed so that parallel requests from different transactions, and the distribution of their responses, work correctly.

@C0rWin
Contributor Author

C0rWin commented Oct 28, 2024

After all, it is not very hard to implement this RFC, which could bring significant gains without breaking current functionality or transaction semantics, or even the programming model.

@denyeart
Contributor

There was an old Jira item to implement SetStateMultipleKeys() and GetStateMultipleKeys() to set and get keys in batch for precisely the reason you mention. It was even implemented at ledger level, just not at the chaincode programming model level. I don't think there was any reason it didn't get implemented, it simply wasn't prioritized for development.

But let's also check with @manish-sethi to see if there were any other considerations that delayed the implementation...

@manish-sethi
Contributor

  1. In fabric-chaincode-go, there is protection against parallel operation (no waiting). Why this is done is not quite clear to me, so I am wary of getting into this mechanism.

The main reason for this is to preserve the sanity of a transaction as intended at the application layer, since a transaction is defined as a sequence of reads and writes. For instance, if a transaction sets different values for the same key, the result could differ across peers if parallelism were allowed. Even if the result on different peers ends up the same, unless we prohibit setting the value of the same key more than once in a transaction, in the worst case parallelism in the shim could produce a different result than the chaincode intended.
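A toy illustration of that point in plain Go (not the shim): if two writes of different values to the same key were allowed to run in parallel, the surviving value would depend on goroutine scheduling, so different peers could simulate different write-sets.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	writeSet := make(map[string][]byte)
	var mu sync.Mutex
	var wg sync.WaitGroup

	// Two "parallel" writes of different values to the same key.
	for _, v := range [][]byte{[]byte("a"), []byte("b")} {
		wg.Add(1)
		go func(val []byte) {
			defer wg.Done()
			mu.Lock()
			writeSet["key1"] = val // last writer wins, but "last" depends on scheduling
			mu.Unlock()
		}(v)
	}
	wg.Wait()
	fmt.Printf("key1 = %s\n", writeSet["key1"]) // may print "a" or "b"
}
```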

@manish-sethi
Contributor

There was an old Jira item to implement SetStateMultipleKeys() and GetStateMultipleKeys() to set and get keys in batch for precisely the reason you mention. It was even implemented at ledger level, just not at the chaincode programming model level. I don't think there was any reason it didn't get implemented, it simply wasn't prioritized for development.

But let's also check with @manish-sethi to see if there were any other considerations that delayed the implementation...

Yes, that was the intention. It was always a matter of priority. If we expose SetStateMultipleKeys() and GetStateMultipleKeys() to the chaincode, the simple cases should benefit easily.
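As a rough sketch of what that could look like at the chaincode programming-model level (the method names below are assumptions modeled on the ledger-level APIs, not existing stub methods):

```go
// Hypothetical extension of the chaincode stub interface.
type MultiKeyStub interface {
	// SetStateMultipleKeys writes several key/value pairs in a single shim-to-peer exchange.
	SetStateMultipleKeys(kvs map[string][]byte) error
	// GetStateMultipleKeys reads several keys in a single exchange, returning values
	// in the same order as the requested keys.
	GetStateMultipleKeys(keys []string) ([][]byte, error)
}
```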

Comment on lines +80 to +99
```proto
message ChangeStateBatch {
  repeated StateKV kvs = 1;
}

message StateKV {
  enum Type {
    UNDEFINED = 0;
    PUT_STATE = 9;
    DEL_STATE = 10;
    PUT_STATE_METADATA = 21;
    PURGE_PRIVATE_DATA = 23;
  }

  string key = 1;
  bytes value = 2;
  string collection = 3;
  StateMetadata metadata = 4;
  Type type = 5;
}
```

Please note that this RFC suggests sending anything that changes state in batches.
I think it's cool.
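For illustration only, a rough sketch (not the RFC's actual handler code) of how a shim might accumulate such state changes and flush them as a single ChangeStateBatch; the Go types merely mirror the proto messages above, and the send function is a placeholder.

```go
package shimsketch

// Go mirrors of the proto messages above; real code would use the generated
// types from fabric-protos.
type StateKVType int

const (
	PutState StateKVType = iota
	DelState
	PutStateMetadata
	PurgePrivateData
)

type StateKV struct {
	Key        string
	Value      []byte
	Collection string
	Type       StateKVType
}

type ChangeStateBatch struct {
	KVs []StateKV
}

type batcher struct {
	pending ChangeStateBatch
}

// add records a state change without any peer round trip.
func (b *batcher) add(kv StateKV) {
	b.pending.KVs = append(b.pending.KVs, kv)
}

// flush sends everything collected so far as a single message; send stands in
// for the real shim-to-peer stream send.
func (b *batcher) flush(send func(ChangeStateBatch) error) error {
	if len(b.pending.KVs) == 0 {
		return nil
	}
	err := send(b.pending)
	b.pending.KVs = nil
	return err
}
```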

@C0rWin
Contributor Author

C0rWin commented Oct 29, 2024

There was an old Jira item to implement SetStateMultipleKeys() and GetStateMultipleKeys() to set and get keys in batch for precisely the reason you mention. It was even implemented at ledger level, just not at the chaincode programming model level. I don't think there was any reason it didn't get implemented, it simply wasn't prioritized for development.

But let's also check with @manish-sethi to see if there were any other considerations that delayed the implementation...

Well, we can revive this line of work; this RFC is precisely about that. Once we reach an agreement, we can just do it. I do think there is general consensus to proceed, given that in the past there were even plans to do it, not to mention that the ledger already supports it.

@C0rWin
Contributor Author

C0rWin commented Oct 31, 2024

@denyeart, what should the next steps be? Would you like this RFC to be presented at the community call? Would you like us to vote? Something else? Note that this is a straightforward, non-intrusive change: it does not break any APIs while bringing real impact.

@denyeart
Contributor

denyeart commented Nov 4, 2024

@C0rWin

I think the original vision for GetStateMultipleKeys() and SetStateMultipleKeys() was for the chaincode developer to be able to control getting and setting multiple keys at once. Whereas I believe your proposal would be transparent to the chaincode developer, that is, the updates would still be one at a time from the chaincode developer perspective, but at runtime they would be cached in the shim until the end of chaincode execution, at which point they would all be 'flushed' to the peer all at once right before the peer builds the final read/write set. Is my understanding correct?

I do agree the transparent batching makes more sense for ledger updates. For gets I still think the original intent makes sense, allow the chaincode developer to get N keys at once, so that they can use the values in the subsequent chaincode logic. However I think batch get is lower priority and not the intent of this RFC, so I'm ok with just doing the transparent batching for the updates as part of this RFC.

Question - do we need to consider splitting the batch into smaller pieces? For example if chaincode attempts to update 100000 keys should we send 10 batches of 10000 each? This may help to stay under limits such as grpc max message size (default 100MB), but on the other hand such large transactions would likely cause problems regardless due to the ultimate large write set. I'd prefer to keep things as simple as possible, just checking your thought.

In terms of next steps, RFCs require approval from 3 maintainers. For relatively straightforward RFCs like this one, we can just utilize github. If you think there are more considerations to discuss we could add an agenda item to the November 20th contributor meeting.

@pfi79

pfi79 commented Nov 5, 2024

Is my understanding correct?

Yeah, that's right.

@pfi79

pfi79 commented Nov 5, 2024

Question - do we need to consider splitting the batch into smaller pieces? For example if chaincode attempts to update 100000 keys should we send 10 batches of 10000 each? This may help to stay under limits such as grpc max message size (default 100MB), but on the other hand such large transactions would likely cause problems regardless due to the ultimate large write set. I'd prefer to keep things as simple as possible, just checking your thought.

This behavior will be configurable in core.yaml, via the new parameters UsePutStateBatch and MaxSizePutStateBatch.

The thing is, I know this will be an improvement in my case. But I don't know all the Fabric use cases, so the UsePutStateBatch parameter makes it possible to disable the new behavior. Alternatively, the MaxSizePutStateBatch parameter could be set very high (e.g. 1000000000) so that the writes are never split into batches.

A benchmark test will also be added, which you can use as an example to determine which parameter values work best in your particular case.
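A minimal sketch of how such a split could work, reusing the StateKV type from the sketch earlier in this thread and assuming MaxSizePutStateBatch is treated as an approximate byte budget per batch (the exact accounting is an assumption):

```go
// splitBatch groups entries into chunks whose approximate encoded size stays
// under maxBytes, so no single batch message exceeds the gRPC message limit.
func splitBatch(kvs []StateKV, maxBytes int) [][]StateKV {
	var batches [][]StateKV
	var current []StateKV
	size := 0
	for _, kv := range kvs {
		entrySize := len(kv.Key) + len(kv.Value) + len(kv.Collection)
		if len(current) > 0 && size+entrySize > maxBytes {
			batches = append(batches, current)
			current, size = nil, 0
		}
		current = append(current, kv)
		size += entrySize
	}
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches
}
```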

@manish-sethi
Contributor

I agree with the overall proposal. However, if the goal is to keep the behavior consistent with the current implementation, I recommend an exercise of reviewing the transaction simulator code carefully, as some write operations involve reading the current state as well. For example, SetState causes a read for the existing metadata. Private data write operations validate the existence of a collection name here. Similarly, in the current code, certain write operations and read queries are mutually exclusive within a single transaction, as seen here and here.

In summary, we should assess whether it is feasible to maintain the current behavior in all cases, or discuss alternatives. A simple alternative could be to limit this optimization to straightforward cases, or to allow modifying the behavior so that any error is propagated at the end rather than at the specific step where it occurs. Also, in some cases we would have to limit ourselves to returning a more general error, as we would have lost the actual sequence of reads and writes.

@pfi79

pfi79 commented Nov 6, 2024

I agree with the overall proposal. However, if the goal is to keep the behavior consistent with the current implementation, I recommend an exercise of reviewing the transaction simulator code carefully, as some write operations involve reading the current state as well. For example, SetState causes a read for the existing metadata. Private data write operations validate the existence of a collection name here. Similarly, in the current code, certain write operations and read queries are mutually exclusive within a single transaction, as seen here and here.

That's an excellent point.
We don't go that deep with the change; only the handler.go file is changed.
All the checks you're talking about are deeper, so there's no way to bypass them.

@manish-sethi
Contributor

That's an excellent point. We don't go that deep with the change; only the handler.go file is changed. All the checks you're talking about are deeper, so there's no way to bypass them.

Yes, and hence it changes the error handling behavior from the current implementation. For example, if you make changes only on the shim side of the code, a user will get this error whereas in his chaincode he may have performed a write before the query. So I suggested going through the lower-level code and capturing all behavior changes, and any necessary changes to the error handling at the lower level. IMO, this should be part of the RFC.

@pfi79

pfi79 commented Nov 6, 2024

That's an excellent point. We don't go that deep with the change; only the handler.go file is changed. All the checks you're talking about are deeper, so there's no way to bypass them.

Yes, and hence it changes the error handling behavior from the current implementation. For example, if you make changes only on the shim side of the code, a user will get this error whereas in his chaincode he may have performed a write before the query. So I suggested going through the lower-level code and capturing all behavior changes, and any necessary changes to the error handling at the lower level. IMO, this should be part of the RFC.

Doesn't it work both ways?

it seems to me that operations that cannot be performed in one transaction cause an error without necessarily following the order.
That is, it doesn't matter whether “query” or “set” is executed first.

@manish-sethi
Contributor

That's an excellent point. We don't go that deep with the change; only the handler.go file is changed. All the checks you're talking about are deeper, so there's no way to bypass them.

Yes, and hence it changes the error handling behavior from the current implementation. For example, if you make changes only on the shim side of the code, a user will get this error whereas in his chaincode he may have performed a write before the query. So I suggested going through the lower-level code and capturing all behavior changes, and any necessary changes to the error handling at the lower level. IMO, this should be part of the RFC.

Doesn't it work both ways?

it seems to me that operations that cannot be performed in one transaction cause an error without necessarily following the order. That is, it doesn't matter whether “query” or “set” is executed first.

Yes, it does, and in the current implementation, the appropriate error is returned. If you make changes in the shim side only, the wrong error will be returned. In that case, it will be more appropriate to return a more general error than a wrong error.

@C0rWin
Contributor Author

C0rWin commented Nov 7, 2024

I do agree the transparent batching makes more sense for ledger updates. For gets I still think the original intent makes sense, allow the chaincode developer to get N keys at once, so that they can use the values in the subsequent chaincode logic. However I think batch get is lower priority and not the intent of this RFC, so I'm ok with just doing the transparent batching for the updates as part of this RFC.

You got that right; the main intent was to make batched update operations transparent, and hence the suggested modifications take the least intrusive path to get there.

Question - do we need to consider splitting the batch into smaller pieces? For example if chaincode attempts to update 100000 keys should we send 10 batches of 10000 each? This may help to stay under limits such as grpc max message size (default 100MB), but on the other hand such large transactions would likely cause problems regardless due to the ultimate large write set. I'd prefer to keep things as simple as possible, just checking your thought.

Yes, we need to split it, since we cannot send arbitrarily large proto messages, so the batch has to respect that limitation. Honestly, I also do not think this is too difficult to implement.

@C0rWin
Contributor Author

C0rWin commented Nov 7, 2024

That's an excellent point. We don't go that deep with the change; only the handler.go file is changed. All the checks you're talking about are deeper, so there's no way to bypass them.

Yes, and hence it changes the error handling behavior from the current implementation. For example, if you make changes only on the shim side of the code, a user will get this error whereas in his chaincode he may have performed a write before the query. So I suggested going through the lower-level code and capturing all behavior changes, and any necessary changes to the error handling at the lower level. IMO, this should be part of the RFC.

Doesn't it work both ways?
it seems to me that operations that cannot be performed in one transaction cause an error without necessarily following the order. That is, it doesn't matter whether “query” or “set” is executed first.

Yes, it does, and in the current implementation, the appropriate error is returned. If you make changes in the shim side only, the wrong error will be returned. In that case, it will be more appropriate to return a more general error than a wrong error.

I have to admit I am not familiar with this part of the code at a very intimate level. I do agree @manish-sethi has a good point, and therefore we need to make sure we list all such cases.

That being said, since the proposal suggests keeping state updates within the shim, validation of this sort could also be performed on the shim side, preserving the error handling semantics.

@C0rWin
Contributor Author

C0rWin commented Nov 12, 2024

@denyeart what would be the next steps?

@denyeart
Contributor

@denyeart what would be the next steps?

The feedback from Manish certainly complicates things. The authors of the RFC need to incorporate the feedback and propose a plan for how it will be addressed.

Alternatively the RFC could be scaled back, for example instead of saving all writes for the end, the chaincode could send bursts of updates to the peer. Potentially give the chaincode developer the ability to control the batching by calling an operation that sets multiple keys at a time (as was envisioned years ago with SetStateMultipleKeys). This would keep things most simple.

Or as a compromise, perhaps the chaincode shim itself could manage the batches. As an example, if the chaincode does one GetState(), then 100 PutState(), and a final GetState(), the chaincode could send to the peer in 3 operations:

  • first GetState(), sent interactively when it is hit in the chaincode
  • the 100 PutStates(), collected on shim side and then sent as a batch upon hitting the next non-PutState() operation.
  • the final GetState(), sent interactively when it is hit in the chaincode

I would be in favor of one of these partial batching approaches. This would allow for keeping the checks and error message generation on the peer side, which will reduce the scope, complexity, and risk of the change. It will also help with consistency across various chaincode shim language implementations. Especially now that the original developers aren't as active, we need to manage the scope and risk of such changes.

I'm not sure how such counter-proposals will impact the RFC author's use cases however. The RFC authors will therefore need to synthesize Manish and my responses, together with their use case requirements, and come back with an updated proposal.
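A sketch of the flush-on-non-write behaviour described above, again with placeholder send/get functions: the point is only that any read acts as a batch boundary, so the peer still sees reads and writes in the order the chaincode issued them.

```go
// partialBatcher buffers consecutive writes; any read first flushes the buffer,
// preserving the read/write ordering that the peer-side checks rely on.
// StateKV and PutState are the types from the earlier sketch.
type partialBatcher struct {
	pending []StateKV
	send    func([]StateKV) error            // stands in for sending a write batch
	get     func(key string) ([]byte, error) // stands in for a GetState round trip
}

func (p *partialBatcher) PutState(key string, value []byte) error {
	p.pending = append(p.pending, StateKV{Key: key, Value: value, Type: PutState})
	return nil
}

func (p *partialBatcher) GetState(key string) ([]byte, error) {
	// Hitting a non-PutState operation: flush buffered writes first.
	if len(p.pending) > 0 {
		if err := p.send(p.pending); err != nil {
			return nil, err
		}
		p.pending = nil
	}
	return p.get(key)
}
```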

@pfi79

pfi79 commented Nov 13, 2024

That's an excellent point. We don't go that deep with the change; only the handler.go file is changed. All the checks you're talking about are deeper, so there's no way to bypass them.

Yes, and hence it changes the error handling behavior from the current implementation. For example, if you make changes only on the shim side of the code, a user will get this error whereas in his chaincode he may have performed a write before the query. So I suggested going through the lower-level code and capturing all behavior changes, and any necessary changes to the error handling at the lower level. IMO, this should be part of the RFC.

Doesn't it work both ways?
it seems to me that operations that cannot be performed in one transaction cause an error without necessarily following the order. That is, it doesn't matter whether “query” or “set” is executed first.

Yes, it does, and in the current implementation, the appropriate error is returned. If you make changes in the shim side only, the wrong error will be returned. In that case, it will be more appropriate to return a more general error than a wrong error.

I don't yet understand what you're talking about. Can you give me a specific case?

Something like: previously this was sent and this message was received in response; now, with your suggestion, it will be different, and that's the problem.

I'll write a corresponding test and think about it.

For now I really don't understand what you're describing.

@pfi79

pfi79 commented Nov 13, 2024

I can honestly say that we already have the code ready, and for our use cases we have achieved a good speedup.

Here are links to the commits with the changes we want to propose once the PR in fabric-rfcs is approved:

pfi79/fabric-protos@1c27d38
pfi79/fabric-chaincode-go@8954ab1
pfi79/fabric@43e186d

@denyeart
Contributor

@pfi79 Basically what Manish is saying is that if you change the order of the reads and writes, you may get different behavior or a different error message. To address this, you could shift all these checks and error messages from peer side to shim side. However, that would be a lot of work and we'd likely miss something. I would prefer to keep the order of reads and writes the same as it is currently (the order as seen in the chaincode), so that the behavior remains the same. What I'm suggesting is that you could still batch consecutive writes, so that the order of reads and writes remains the same, while still getting the performance improvements of batching. Of course, the chaincode would have to be written intelligently to do consecutive writes rather than read-write-read-write.

@pfi79

pfi79 commented Nov 13, 2024

@pfi79 Basically what Manish is saying is that if you change the order of the reads and writes, you may get different behavior or a different error message. To address this, you could shift all these checks and error messages from peer side to shim side. However, that would be a lot of work and we'd likely miss something. I would prefer to keep the order of reads and writes the same as it is currently (the order as seen in the chaincode), so that the behavior remains the same. What I'm suggesting is that you could still batch consecutive writes, so that the order of reads and writes remains the same, while still getting the performance improvements of batching. Of course, the chaincode would have to be written intelligently to do consecutive writes rather than read-write-read-write.

Let's say Manish is right.
Can you or he provide a concrete example of this violation?
I've examined the existing code and I can't think of an example that violates things the way Manish describes.
Most likely I've overlooked something or don't know something; that's why I'm asking for an example of such a violation.

@pfi79

pfi79 commented Nov 13, 2024

By the way, above I provided commits of my code variant, but the situation Manish is talking about did not come up.
So we don't have tests for it. I am ready to write a test that would clearly show the importance of execution order and check it against my changes.
Or is there no such situation, or is it just not that important?

@pfi79

pfi79 commented Nov 13, 2024

For example, if you make changes only on the shim side of the code, a user will get this error whereas in his chaincode he may have performed a write before the query.

Sorry Manish, I don't understand your example.
Could you please describe it in more detail?
Then I will write a corresponding test, which we will take apart and see what is wrong.

@denyeart
Contributor

Here is one example that is mentioned in documentation:
https://hyperledger-fabric.readthedocs.io/en/latest/private-data-arch.html#querying-private-data

Chaincode that executes key range queries or JSON queries and updates data in a single transaction is not supported, as the query results cannot be validated on the peers that don’t have access to the private data.

If chaincode does write then query private data you will get an error based on this check:
https://github.com/hyperledger/fabric/blob/6c2e87c0824604c7cd7a8f4f93b2c5876af76875/core/ledger/kvledger/txmgmt/txmgr/tx_simulator.go#L234

If we hold all writes until the end, you won't get this error. You will likely still get an error based on the other checks, but it would be a different error that doesn't match the developer's view of the chaincode context.
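Concretely, the problematic sequence looks roughly like this in chaincode (the shim methods are real; the behaviour described in the comments follows the explanation above):

```go
package chaincodesketch

import "github.com/hyperledger/fabric-chaincode-go/shim"

// writeThenQuery writes private data and then runs a private data range query
// in the same transaction.
func writeThenQuery(stub shim.ChaincodeStubInterface) error {
	// 1. Write private data.
	if err := stub.PutPrivateData("collectionA", "key1", []byte("v1")); err != nil {
		return err
	}
	// 2. Range query on private data in the same transaction. Today the peer
	// rejects this because a write has already been performed; if writes were
	// held back in the shim until the end, the peer would see the query first
	// and the developer would get a different error than they do today.
	iter, err := stub.GetPrivateDataByRange("collectionA", "key1", "key9")
	if err != nil {
		return err
	}
	defer iter.Close()
	return nil
}
```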

@pfi79

pfi79 commented Nov 13, 2024

Here is one example that is mentioned in documentation: https://hyperledger-fabric.readthedocs.io/en/latest/private-data-arch.html#querying-private-data

Chaincode that executes key range queries or JSON queries and updates data in a single transaction is not supported, as the query results cannot be validated on the peers that don’t have access to the private data.

If chaincode does write then query private data you will get an error based on this check: https://github.com/hyperledger/fabric/blob/6c2e87c0824604c7cd7a8f4f93b2c5876af76875/core/ledger/kvledger/txmgmt/txmgr/tx_simulator.go#L234

If we hold all writes until the end, you won't get this error. You will likely still get an error based on the other checks, but it would be a different error that doesn't match the developer's view of the chaincode context.

@denyeart Thanks for the example. A similar case can arise with, for example, GetStateWithPagination and PutState.

  1. A transaction that should be executed will be executed. A transaction that should not be executed will not be executed. I believe this is the most important thing.
  2. The above problem can be addressed in the documentation: the fact that the order of operations can change, and that chaincode developers may see errors other than the expected ones, should be documented, together with an explanation of why and of how to think about chaincode execution in the new way.
  3. I think chaincode developers write code so that transactions are executed, not terminated with an error. The error variants presented here are caught in the first steps of chaincode development; after that they never appear again.

@C0rWin
Contributor Author

C0rWin commented Nov 14, 2024

Or as a compromise, perhaps the chaincode shim itself could manage the batches. As an example, if the chaincode does one GetState(), then 100 PutState(), and a final GetState(), the chaincode could send to the peer in 3 operations:

  • first GetState(), sent interactively when it is hit in the chaincode
  • the 100 PutStates(), collected on shim side and then sent as a batch upon hitting the next non-PutState() operation.
  • the final GetState(), sent interactively when it is hit in the chaincode

I would be in favor of one of these partial batching approaches. This would allow for keeping the checks and error message generation on the peer side, which will reduce the scope, complexity, and risk of the change. It will also help with consistency across various chaincode shim language implementations. Especially now that the original developers aren't as active, we need to manage the scope and risk of such changes.

OK, I think that a sane alternative might be to piggyback batches of updates (essentially write operations) with the get/read operations, basically not to wait until the end but rather send all writes with the get command, which anyway has to communicate with the peer.

This obviously has a drawback in cases where you have a lot of interleaved reads and writes, where you will lose the benefit of the batching.

Another possible solution might be to have a set of explicit APIs something like:

stub.StartBatch()

Put/Get

results := stub.FinishBatch() // returns a serialized set of results for each Put/Get operation with the corresponding status

Of course, this is a slightly more intrusive approach, but on the other hand it introduces a clear separation between the two modes and doesn't assume anything implicitly, which might be good overall for the end-user experience.

@denyeart @manish-sethi thoughts?

@C0rWin
Contributor Author

C0rWin commented Nov 14, 2024

  1. A transaction that should be executed will be executed. A transaction that should not be executed will not be executed. I believe this is the most important thing.

While this is indeed an important thing, we should try to avoid bringing breaking changes with new versions or at least avoid making people change a bunch of legacy code just because we decided to optimize something. Refactoring legacy code to accommodate new errors and programming models might nullify all the benefits of batching.

@pfi79

pfi79 commented Nov 14, 2024

While this is indeed an important thing, we should try to avoid bringing breaking changes with new versions or at least avoid making people change a bunch of legacy code just because we decided to optimize something. Refactoring legacy code to accommodate new errors and programming models might nullify all the benefits of batching.

I'm sorry, but I disagree.

I'm not sure there are breaking changes here. So far all the examples have been with chaincode transactions that don't work, that is, that always return errors.

That is, I would have to imagine that somewhere in production there is a chaincode installed with a transaction that calls a contradictory sequence of methods, i.e. one that always ends with an error.

The only place it can be is in chaincode tests.

Can you show an example of a working transaction where this would be important?

@pfi79

pfi79 commented Nov 14, 2024

OK, I think that a sane alternative might be to piggyback batches of updates (essentially write operations) with the get/read operations, basically not to wait until the end but rather send all writes with the get command, which anyway has to communicate with the peer.

That could be a problem right there.

The user executes GetState and receives an error for a PutState operation that was sent along with the read operation.

@pfi79

pfi79 commented Nov 14, 2024

We could add our variant but keep the old behavior available in the settings; then we won't touch the legacy code.

Others would switch to this mode deliberately via core.yaml.

@pfi79

pfi79 commented Nov 19, 2024

@denyeart @manish-sethi
Colleagues,
Let's say we have legacy software for which order is important and which nobody will rewrite.

As currently written, the proposal automatically enables the new behavior when new components are encountered.

I want to support @C0rWin's suggestion: the new mode should be switched on not automatically but manually, by calling the stub.StartBatch() (or BatchBegin) method. The mode would be switched off either by calling the stub.FinishBatch() (or BatchEnd) method, or at the end of transaction execution.

My only remark on the suggestion: yes, we send in batches, but the values are applied one by one. Therefore results := stub.FinishBatch() should return only a single error.

Then the legacy software remains safe until the new mode is enabled in the chaincodes by changing the code.

Would such a change to the proposal suit you?

@denyeart
Contributor

denyeart commented Nov 19, 2024

@pfi79 I would be in favor of explicit batching from a chaincode developer command. This way if there are any issues developer can easily enable/disable the batching at the chaincode level and compare the results to determine if an issue is related to batching, without needing to update the peer configuration.

It would also make sure the developer understands the update model. For example, with automatic batching, a developer watching the peer logs may be confused to see all the reads executing prior to the writes. If they explicitly add the batch command they will understand that the writes are batched and processed at the end.

Finally, for existing users it would also allow for incremental transition to the new approach. For any chaincode that writes many keys they may want to enable the explicit batching. For other chaincodes they could leave it as-is without batching enabled.

Since the batching is only related to writes, maybe the command should be called stub.StartWriteBatch()?
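From the chaincode developer's point of view, usage might then look something like this (StartWriteBatch/FinishWriteBatch are the proposed names, not yet part of the shim API; FinishWriteBatch is shown returning just an error for simplicity, and the exact return type is discussed further below):

```go
package chaincodesketch

// WriteBatchStub is a hypothetical stub interface extending the usual
// ChaincodeStubInterface with the proposed write-batch methods.
type WriteBatchStub interface {
	PutState(key string, value []byte) error
	StartWriteBatch()
	FinishWriteBatch() error
}

// putMany writes many keys inside one explicit write batch.
func putMany(stub WriteBatchStub, kvs map[string][]byte) error {
	stub.StartWriteBatch() // begin buffering writes in the shim
	for k, v := range kvs {
		if err := stub.PutState(k, v); err != nil {
			return err
		}
	}
	// FinishWriteBatch flushes all buffered writes to the peer in one (or a few) messages.
	return stub.FinishWriteBatch()
}
```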

@denyeart
Contributor

@pfi79 @C0rWin @manish-sethi There is a Fabric contributor meeting scheduled for tomorrow November 20th. We could discuss during the meeting if you are able to attend.

@C0rWin
Contributor Author

C0rWin commented Nov 20, 2024

@denyeart, @manish-sethi, @satota2, I have updated the RFC as per our conversation during the community call. Please review.


// FinishWriteBatch flushes the current batch and sends all collected write operations to the peer.
// Returns a serialized set of results for each operation with the corresponding status.
func (stub *ChaincodeStub) FinishWriteBatch() ([]BatchResult, error) {

Sorry, but I'm confused by []BatchResult.
What will be written there?
Usually a change operation returns an error or nil.
The batch will be applied on the peer sequentially, and if at least one operation ends with an error, processing of the whole batch ends with an error immediately.
Therefore []BatchResult would consist of either all nils, or some number of nils and one error.

Wouldn't it be better to replace []BatchResult with error?

Contributor Author


That was only a suggestion. I think you have a point: returning only an error should indeed be enough of an indication, given that, implementation-wise, we won't continue executing the batch once we stumble into an error.
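Following that exchange, the signature in the excerpt above would presumably simplify to something like this (a sketch, not the final API):

```go
// FinishWriteBatch flushes the current batch and sends all collected write
// operations to the peer; since the peer stops processing the batch at the
// first failing operation, a single error result is sufficient.
func (stub *ChaincodeStub) FinishWriteBatch() error {
	// ... send the accumulated batch and return the first error, if any ...
	return nil
}
```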

@C0rWin
Contributor Author

C0rWin commented Nov 25, 2024

@denyeart can we move forward to the voting and last call stage?
