
Commit

Aggregate typo fixes
trianglesphere committed Dec 13, 2023
1 parent fe8e121 commit d820a8a
Showing 19 changed files with 23 additions and 23 deletions.
2 changes: 1 addition & 1 deletion docs/handbook/pr-guidelines.md
@@ -14,7 +14,7 @@

## Overview

-This document contains guidelines best practices in PRs that should be enforced as much as possible. The motivations and goals behind these best practices are:
+This document contains guidelines and best practices in PRs that should be enforced as much as possible. The motivations and goals behind these best practices are:

- **Ensure thorough reviews**: By the time the PR is merged, at least one other person—because there is always at least one reviewer—should understand the PR’s changes just as well as the PR author. This helps improve security by reducing bugs and single points of failure (i.e. there should never be only one person who understands certain code).
- **Reduce PR churn**: PRs should be quickly reviewable and mergeable without much churn (both in terms of code rewrites and comment cycles). This saves time by reducing the need for rebases due to conflicts. Similarly, too many review cycles are a burden for both PR authors and reviewers, and results in “review fatigue” where reviews become less careful and thorough, increasing the likelihood of bugs.
2 changes: 1 addition & 1 deletion indexer/api/api.go
@@ -130,7 +130,7 @@ func (a *APIService) Addr() string {
func (a *APIService) initDB(ctx context.Context, connector DBConnector) error {
db, err := connector.OpenDB(ctx, a.log)
if err != nil {
return fmt.Errorf("failed to connect to databse: %w", err)
return fmt.Errorf("failed to connect to database: %w", err)
}
a.dbClose = db.Closer
a.bv = db.BridgeTransfers
2 changes: 1 addition & 1 deletion indexer/api/config.go
@@ -24,7 +24,7 @@ type DBConfigConnector struct {
func (cfg *DBConfigConnector) OpenDB(ctx context.Context, log log.Logger) (*DB, error) {
db, err := database.NewDB(ctx, log, cfg.DBConfig)
if err != nil {
return nil, fmt.Errorf("failed to connect to databse: %w", err)
return nil, fmt.Errorf("failed to connect to database: %w", err)
}
return &DB{
BridgeTransfers: db.BridgeTransfers,
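Both database hunks above wrap the connection error with `%w`, which keeps the underlying cause inspectable with `errors.Is`/`errors.As` further up the stack. A minimal, self-contained sketch of that idiom (the error value and function names here are hypothetical, not from the indexer):

```go
package main

import (
	"errors"
	"fmt"
)

// errConnRefused stands in for whatever the database driver would return.
var errConnRefused = errors.New("connection refused")

// openDB annotates the failure with %w, as the hunks above do, so callers
// can still match on the underlying cause after the message is wrapped.
func openDB() error {
	return fmt.Errorf("failed to connect to database: %w", errConnRefused)
}

func main() {
	err := openDB()
	fmt.Println(err)                            // failed to connect to database: connection refused
	fmt.Println(errors.Is(err, errConnRefused)) // true
}
```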
2 changes: 1 addition & 1 deletion indexer/docs/troubleshooting.md
@@ -20,7 +20,7 @@ Header traversal is a client abstraction that allows the indexer to sequentially
* This error occurs when the indexer is operating on a different block state than the node. This is typically caused by network reorgs and is the result of `l1-confirmation-count` or `l2-confirmation-count` values being set too low. To resolve this issue, increase the confirmation count values and restart the indexer service.

2. `the HeaderTraversal's internal state is ahead of the provider`
-* This error occurs when the indexer is operating on a block that the upstream provider does not have. This is typically occurs when resyncing upstream node services. This issue typically resolves itself once the upstream node service is fully synced. If the problem persists, please file an issue.
+* This error occurs when the indexer is operating on a block that the upstream provider does not have. This typically occurs when resyncing upstream node services. This issue typically resolves itself once the upstream node service is fully synced. If the problem persists, please file an issue.

### L1/L2 Processor Failures
The L1 and L2 processors are responsible for processing new blocks and system txs. Processor failures can spread and contaminate other downstream processors (i.e, bridge) as well. For example, if a L2 processor misses a block and fails to index a `MessagePassed` event, the bridge processor will fail to index the corresponding `WithdrawalProven` event and halt progress. The following are some common failure modes and how to resolve them:
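The `l1-confirmation-count`/`l2-confirmation-count` fix described in this troubleshooting doc amounts to trailing the chain tip by a fixed offset so recently reorged blocks are never indexed. A minimal sketch of that idea (a hypothetical helper, not the indexer's actual implementation):

```go
package main

import (
	"fmt"
	"math/big"
)

// safeQueryHeight trails the tip by confirmationCount blocks so that the
// indexer only processes headers that are unlikely to be reorged out.
func safeQueryHeight(latest *big.Int, confirmationCount uint64) *big.Int {
	safe := new(big.Int).Sub(latest, new(big.Int).SetUint64(confirmationCount))
	if safe.Sign() < 0 {
		return big.NewInt(0) // chain is still shorter than the confirmation window
	}
	return safe
}

func main() {
	// With the tip at block 1000 and 15 confirmations, index up to block 985.
	fmt.Println(safeQueryHeight(big.NewInt(1000), 15))
}
```

Raising the confirmation count widens that trailing window, which is why the docs recommend it when the indexer and node disagree about recent blocks, at the cost of some indexing latency.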
2 changes: 1 addition & 1 deletion indexer/e2e_tests/bridge_messages_e2e_test.go
@@ -66,7 +66,7 @@ func TestE2EBridgeL1CrossDomainMessenger(t *testing.T) {
require.Equal(t, aliceAddr, sentMessage.Tx.ToAddress)
require.ElementsMatch(t, calldata, sentMessage.Tx.Data)

-// (2) Process RelayedMesssage on inclusion
+// (2) Process RelayedMessage on inclusion
// - We dont assert that `RelayedMessageEventGUID` is nil prior to inclusion since there isn't a
// a straightforward way of pausing/resuming the processors at the right time. The codepath is the
// same for L2->L1 messages which does check for this so we are still covered
4 changes: 2 additions & 2 deletions indexer/etl/etl.go
@@ -131,8 +131,8 @@ func (etl *ETL) processBatch(headers []types.Header) error {
batchLog.Warn("mismatch in FilterLog#ToBlock number", "queried_to_block_number", lastHeader.Number, "reported_to_block_number", logs.ToBlockHeader.Number)
return fmt.Errorf("mismatch in FilterLog#ToBlock number")
} else if logs.ToBlockHeader.Hash() != lastHeader.Hash() {
batchLog.Error("mismatch in FitlerLog#ToBlock block hash!!!", "queried_to_block_hash", lastHeader.Hash().String(), "reported_to_block_hash", logs.ToBlockHeader.Hash().String())
return fmt.Errorf("mismatch in FitlerLog#ToBlock block hash!!!")
batchLog.Error("mismatch in FilterLog#ToBlock block hash!!!", "queried_to_block_hash", lastHeader.Hash().String(), "reported_to_block_hash", logs.ToBlockHeader.Hash().String())
return fmt.Errorf("mismatch in FilterLog#ToBlock block hash!!!")
}

if len(logs.Logs) > 0 {
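The hash comparison in this hunk is what actually detects a reorg: a provider can report a header with the expected number but a different hash. A short illustration of the invariant being enforced (a hypothetical helper using go-ethereum's `types.Header`, not the ETL's actual code):

```go
package etl

import (
	"github.com/ethereum/go-ethereum/core/types"
)

// sameBlock reports whether two headers refer to the same block. Matching
// numbers alone is insufficient after a reorg; the hash must match too,
// which is the same reasoning behind the FilterLog#ToBlock checks above.
func sameBlock(a, b *types.Header) bool {
	return a.Number.Cmp(b.Number) == 0 && a.Hash() == b.Hash()
}
```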
2 changes: 1 addition & 1 deletion indexer/processors/contracts/cross_domain_messenger.go
@@ -91,7 +91,7 @@ func CrossDomainMessengerSentMessageEvents(chainSelector string, contractAddress
default:
// NOTE: We explicitly fail here since the presence of a new version means finalization
// logic needs to be updated to ensure L1 finalization can run from genesis and handle
-// the changing version formats. Any unrelayed OVM1 messages that have been harcoded with
+// the changing version formats. Any unrelayed OVM1 messages that have been hardcoded with
// the v1 hash format also need to be updated. This failure is a serving indicator
return nil, fmt.Errorf("expected cross domain version 0 or version 1: %d", version)
}
2 changes: 1 addition & 1 deletion indexer/processors/contracts/legacy_ctc.go
@@ -39,7 +39,7 @@ func LegacyCTCDepositEvents(contractAddress common.Address, db *database.DB, fro
return nil, err
}

-// Enqueued Deposits do not carry a `msg.value` amount. ETH is only minted on L2 via the L1StandardBrige
+// Enqueued Deposits do not carry a `msg.value` amount. ETH is only minted on L2 via the L1StandardBridge
ctcTxDeposits[i] = LegacyCTCDepositEvent{
Event: &events[i].ContractEvent,
GasLimit: txEnqueued.GasLimit,
2 changes: 1 addition & 1 deletion op-chain-ops/crossdomain/legacy_withdrawal.go
@@ -41,7 +41,7 @@ func NewLegacyWithdrawal(msgSender, target, sender common.Address, data []byte,
}
}

-// Encode will serialze the Withdrawal in the legacy format so that it
+// Encode will serialize the Withdrawal in the legacy format so that it
// is suitable for hashing. This assumes that the message is being withdrawn
// through the standard optimism cross domain messaging system by hashing in
// the L2CrossDomainMessenger address.
2 changes: 1 addition & 1 deletion op-chain-ops/crossdomain/legacy_withdrawal_test.go
@@ -86,7 +86,7 @@ func init() {
if err := readStateDiffs(); err != nil {
panic(err)
}
-// Initialze the message passer ABI
+// Initialize the message passer ABI
var err error
passMessage, err = abi.JSON(strings.NewReader(passMessageABI))
if err != nil {
2 changes: 1 addition & 1 deletion op-chain-ops/genesis/config.go
@@ -686,7 +686,7 @@ func NewL1Deployments(path string) (*L1Deployments, error) {

var deployments L1Deployments
if err := json.Unmarshal(file, &deployments); err != nil {
return nil, fmt.Errorf("cannot unmarshal L1 deployements: %w", err)
return nil, fmt.Errorf("cannot unmarshal L1 deployments: %w", err)
}

return &deployments, nil
2 changes: 1 addition & 1 deletion op-preimage/README.md
@@ -1,6 +1,6 @@
# op-preimage

-`op-preimage` offers simple Go bindings to interact as client or sever over the Pre-image Oracle ABI.
+`op-preimage` offers simple Go bindings to interact as client or server over the Pre-image Oracle ABI.

Read more about the Preimage Oracle in the [specs](../specs/fault-proof.md).

2 changes: 1 addition & 1 deletion op-program/README.md
@@ -4,7 +4,7 @@ Implements a fault proof program that runs through the rollup state-transition t
This verifiable output can then resolve a disputed output on L1.

The program is designed such that it can be run in a deterministic way such that two invocations with the same input
-data wil result in not only the same output, but the same program execution trace. This allows it to be run in an
+data will result in not only the same output, but the same program execution trace. This allows it to be run in an
on-chain VM as part of the dispute resolution process.

## Compiling
4 changes: 2 additions & 2 deletions packages/contracts-bedrock/STYLE_GUIDE.md
@@ -1,6 +1,6 @@
# Smart Contract Style Guide

-This document providing guidance on how we organize and write our smart contracts. For cases where
+This document provides guidance on how we organize and write our smart contracts. For cases where
this document does not provide guidance, please refer to existing contracts for guidance,
with priority on the `L2OutputOracle` and `OptimismPortal`.

@@ -154,7 +154,7 @@ Test contracts should be named one of the following according to their use:
To minimize clutter, getter functions can be grouped together into a single test contract,
ie. `TargetContract_Getters_Test`.

-## Withdrawaing From Fee Vaults
+## Withdrawing From Fee Vaults

See the file `scripts/FeeVaultWithdrawal.s.sol` to withdraw from the L2 fee vaults. It includes
instructions on how to run it. `foundry` is required.
2 changes: 1 addition & 1 deletion packages/contracts-ts/CODE_GEN.md
@@ -3,7 +3,7 @@
Summary -

- This package is generated from [contracts-bedrock](../contracts-bedrock/)
-- It's version is kept in sync with contracts bedrock via the [changeset config](../../.changeset/config.json) e.g. if contracts-bedrock is `4.2.0` this package will have the same version.
+- Its version is kept in sync with contracts bedrock via the [changeset config](../../.changeset/config.json) e.g. if contracts-bedrock is `4.2.0` this package will have the same version.

## Code gen instructions

2 changes: 1 addition & 1 deletion proxyd/README.md
@@ -9,7 +9,7 @@ This tool implements `proxyd`, an RPC request router and proxy. It does the foll
5. Re-write requests and responses to enforce consensus.
6. Load balance requests across backend services.
7. Cache immutable responses from backends.
-8. Provides metrics the measure request latency, error rates, and the like.
+8. Provides metrics to measure request latency, error rates, and the like.


## Usage
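The feature list in this README describes an RPC router that fans requests out across backend services. As a toy illustration of just the load-balancing point (item 6), and not proxyd's actual implementation, a round-robin JSON-RPC reverse proxy can be sketched in a few lines of Go (the backend URLs and port are hypothetical):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical backend RPC endpoints to rotate across.
	backends := []*url.URL{
		mustParse("http://localhost:8545"),
		mustParse("http://localhost:8546"),
	}
	var next uint64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Pick the next backend in round-robin order and forward the request.
		i := atomic.AddUint64(&next, 1) % uint64(len(backends))
		httputil.NewSingleHostReverseProxy(backends[i]).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}
```

proxyd layers caching, consensus-aware rewriting, and metrics on top of this basic routing role.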
2 changes: 1 addition & 1 deletion specs/glossary.md
@@ -749,7 +749,7 @@ of even benign consensus issues.

The L2 block time is 2 second, meaning there is an L2 block at every 2s [time slot][time-slot].

-Post-[merge], it could be said the that L1 block time is 12s as that is the L1 [time slot][time-slot]. However, in
+Post-[merge], it could be said that the L1 block time is 12s as that is the L1 [time slot][time-slot]. However, in
reality the block time is variable as some time slots might be skipped.

Pre-merge, the L1 block time is variable, though it is on average 13s.
2 changes: 1 addition & 1 deletion specs/predeploys.md
@@ -286,7 +286,7 @@ used for depositing native L1 tokens into. These ERC20 contracts can be created
and implement the interface required by the `StandardBridge` to just work with deposits and withdrawals.

Each ERC20 contract that is created by the `OptimismMintableERC20Factory` allows for the `L2StandardBridge` to mint
-and burn tokens, depending on if the user is depositing from L1 to L2 or withdrawaing from L2 to L1.
+and burn tokens, depending on if the user is depositing from L1 to L2 or withdrawing from L2 to L1.

## OptimismMintableERC721Factory

6 changes: 3 additions & 3 deletions ufm-test-services/README.md
@@ -12,9 +12,9 @@ Starting from left to right in the above diagram:

1. Github Workflow files are created for each time interval Test Services should be ran
- All Test Services that should be ran for a specific time interval (e.g. 1 hour) should be defined in the same Github Workflow file
-2. Github will run a workflow at it's specified time interval, triggering all of it's defined Test Services to run
+2. Github will run a workflow at its specified time interval, triggering all of it's defined Test Services to run
3. `docker-compose.yml` builds and runs each Test Service, setting any environment variables that can be sourced from Github secrets
-4. Each Test Service will run it's defined tasks, generate it's metrics, and push them to an already deployed instance of Prometheus Pushgateway
+4. Each Test Service will run its defined tasks, generate its metrics, and push them to an already deployed instance of Prometheus Pushgateway
5. An already deployed instance of Prometheus will scrape the Pushgateway for metrics
6. An already deployed Grafana dashboard will query Prometheus for metric data to display

@@ -71,7 +71,7 @@ Starting from left to right in the above diagram:
# Runs every 1 day
0 12 * * * /usr/local/bin/docker-compose -f /path/to/docker-compose.yml --profile 1day up -d

-# Runs every 7 day
+# Runs every 7 days
0 12 */7 * * /usr/local/bin/docker-compose -f /path/to/docker-compose.yml --profile 7day up -d
```

