ZeroTrust - In the _splitWithdrawRequest() function, there exists an issue that causes both the from and to requestId to be 0 #41
Comments
One comment was left on this issue during the judging contest. 0xmystery commented:
Escalate

Let's take a look at some of the key lines of code:

```solidity
function _splitWithdrawRequest(address _from, address _to, uint256 vaultShares) internal {
@>  WithdrawRequest storage w = VaultStorage.getAccountWithdrawRequest()[_from];
    if (w.vaultShares == vaultShares) {
        // If the resulting vault shares is zero, then delete the request. The _from account's
        // withdraw request is fully transferred to _to
@>      delete VaultStorage.getAccountWithdrawRequest()[_from];
    }

    // Ensure that no withdraw request gets overridden, the _to account always receives their withdraw
    // request in the account withdraw slot.
    WithdrawRequest storage toWithdraw = VaultStorage.getAccountWithdrawRequest()[_to];
    require(toWithdraw.requestId == 0 || toWithdraw.requestId == w.requestId, "Existing Request");

    // Either the request gets set or it gets incremented here.
@>  toWithdraw.requestId = w.requestId;
    toWithdraw.vaultShares = toWithdraw.vaultShares + vaultShares;
    toWithdraw.hasSplit = true;
}
```

Because `w` is a storage reference, after the `delete` it reads from zeroed storage: `w.requestId` is 0, so `toWithdraw.requestId` is also set to 0.

```solidity
function getAccountWithdrawRequest() internal pure returns (mapping(address => WithdrawRequest) storage store) {
    assembly { store.slot := ACCOUNT_WITHDRAW_SLOT }
}

function getSplitWithdrawRequest() internal pure returns (mapping(uint256 => SplitWithdrawRequest) storage store) {
    assembly { store.slot := SPLIT_WITHDRAW_SLOT }
}
```

`WithdrawRequest` and `SplitWithdrawRequest` are different data structures:

```solidity
struct WithdrawRequest {
    uint256 requestId;
    uint256 vaultShares;
    bool hasSplit;
}

struct SplitWithdrawRequest {
    uint256 totalVaultShares; // uint64
    uint256 totalWithdraw; // uint184?
    bool finalized;
}
```

Because both `VaultStorage.getAccountWithdrawRequest()[_from].requestId` and `VaultStorage.getAccountWithdrawRequest()[_to].requestId` end up as 0, the entry `VaultStorage.getSplitWithdrawRequest()[<correct requestId>]` can no longer be reached from either account's request. As a result, funds cannot be withdrawn from third-party protocols, leading to a loss of funds.
You've created a valid escalation! To remove the escalation from consideration: Delete your comment. You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
Same issue as #6.

Please see my comment on #6.

Agree it's a valid issue; planning to accept the escalation and duplicate with #6.

Result:
Escalations have been resolved successfully! Escalation status:
ZeroTrust
High
In the _splitWithdrawRequest() function, there exists an issue that causes both the from and to requestId to be 0
Summary
In the _splitWithdrawRequest() function, there exists an issue that causes both the _from and _to accounts' requestId to be set to 0.
Vulnerability Detail
`w` is a storage reference to the WithdrawRequest of `_from`. When `w.vaultShares` equals `vaultShares`, the `_from` account's WithdrawRequest is deleted, meaning its storage is reset to 0. Because `w` still points at that storage, `w.requestId` now reads as 0, so `toWithdraw.requestId` is set to 0.
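The underlying storage-aliasing behavior can be shown in isolation. This is a standalone sketch (not project code; the contract and struct names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract StorageAliasDemo {
    struct Req { uint256 requestId; }
    mapping(address => Req) internal reqs;

    function demo(address a) external returns (uint256) {
        reqs[a].requestId = 42;
        Req storage r = reqs[a]; // r aliases the mapping entry in storage
        delete reqs[a];          // zeroes that same storage slot
        return r.requestId;      // reads 0, not 42
    }
}
```

This is exactly what happens to `w` in `_splitWithdrawRequest()`: the `delete` zeroes the slot that `w` still points at, so the later read `w.requestId` returns 0.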
Funds cannot be retrieved from third-party protocols, leading to a loss of funds.
Impact
Funds cannot be retrieved from third-party protocols, leading to a loss of funds.
Code Snippet
https://github.com/sherlock-audit/2024-06-leveraged-vaults/blob/14d3eaf0445c251c52c86ce88a84a3f5b9dfad94/leveraged-vaults-private/contracts/vaults/common/WithdrawRequestBase.sol#L205
Tool used
Manual Review
Recommendation
Cache the required fields (e.g. `requestId`) in memory before the storage entry is deleted.
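A minimal sketch of the fix, assuming the VaultStorage API from the snippet above (the local variable name is illustrative):

```solidity
function _splitWithdrawRequest(address _from, address _to, uint256 vaultShares) internal {
    WithdrawRequest storage w = VaultStorage.getAccountWithdrawRequest()[_from];
    // Cache the request id in memory BEFORE any storage deletion, so the
    // later writes do not read from zeroed storage through the alias `w`.
    uint256 fromRequestId = w.requestId;

    if (w.vaultShares == vaultShares) {
        delete VaultStorage.getAccountWithdrawRequest()[_from];
    }

    WithdrawRequest storage toWithdraw = VaultStorage.getAccountWithdrawRequest()[_to];
    require(toWithdraw.requestId == 0 || toWithdraw.requestId == fromRequestId, "Existing Request");

    // Use the cached value, not the (possibly deleted) storage reference.
    toWithdraw.requestId = fromRequestId;
    toWithdraw.vaultShares = toWithdraw.vaultShares + vaultShares;
    toWithdraw.hasSplit = true;
}
```

With the cached `fromRequestId`, the `_to` account's request keeps pointing at the correct `SplitWithdrawRequest` entry even when the `_from` request is fully transferred and deleted.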
Duplicate of #6