Problem: not enough db types tested for nodes #1710
Conversation
add goleveldb
Walkthrough
The changes in this pull request introduce a new validator configuration.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main    #1710   +/-  ##
=======================================
  Coverage   16.87%   16.88%
=======================================
  Files          72       72
  Lines        6163     6167     +4
=======================================
+ Hits         1040     1041     +1
- Misses       5000     5002     +2
- Partials      123      124     +1
```
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (5)
scripts/.env (1)
4-4: Security consideration: Review the handling of sensitive data.
While these appear to be test accounts, storing mnemonics in plaintext within the repository could pose a security risk if accidentally used in non-test environments. Consider:
- Using environment variables during CI/CD
- Documenting clearly that these are test-only credentials
- Adding checks to prevent usage in production
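As a sketch of the last point, a production guard could look like the following. This is purely illustrative: `CHAIN_ENV`, `TEST_MNEMONIC_PREFIXES`, and `check_no_test_credentials` are hypothetical names, not part of this repository.

```python
# Hypothetical guard: refuse to start when a known test-only mnemonic is
# present outside of a test environment. In a real entrypoint this would be
# called with dict(os.environ).
TEST_MNEMONIC_PREFIXES = ("test-only",)  # first words of known test mnemonics

def check_no_test_credentials(env: dict) -> None:
    if env.get("CHAIN_ENV") == "test":
        return  # test environments may use the bundled mnemonics
    for key, value in env.items():
        if key.endswith("_MNEMONIC") and value.startswith(TEST_MNEMONIC_PREFIXES):
            raise RuntimeError(f"{key} is a test-only mnemonic; refusing to start")

check_no_test_credentials({"CHAIN_ENV": "test", "VALIDATOR3_MNEMONIC": "test-only alpha beta"})
print("ok")
```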
integration_tests/test_rollback.py (1)
70-71: Documentation clearly specifies db types for each node
The comments effectively document the db types being tested for each node. Consider adding brief explanations of why these specific db types were chosen for testing.
Consider expanding the documentation:
```diff
- node3: test memiavl node
- node4: test iavl node
+ node3: test memiavl node (in-memory IAVL+ tree implementation)
+ node4: test iavl node (persistent IAVL+ tree implementation)
```
integration_tests/test_versiondb.py (1)
Line range hint 1-24: Update docstring to reflect the third node's role
The test's docstring should be updated to document the role of node2 in the version database migration test, particularly its database configuration and how it contributes to testing different database types.
Consider updating the docstring like this:
```diff
 def test_versiondb_migration(cronos: Cronos):
     """
     test versiondb migration commands.
-    node0 has memiavl and versiondb enabled while node1 don't,
+    node0 has memiavl and versiondb enabled, node1 uses rocksdb, and node2 uses goleveldb,
```
integration_tests/configs/default.jsonnet (1)
68-80: Consider documenting the database backend strategy.
Since we now have three different database backends (rocksdb, pebbledb, and goleveldb), it would be helpful to document:
- The rationale for testing different database backends
- Any performance characteristics or trade-offs
- Recommended usage scenarios for each backend
Would you like me to help create a documentation template for this?
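For orientation, the three backends under test can be sketched in jsonnet roughly as follows. The field names here are assumptions based on common Cosmos SDK configuration keys and are not verified against this repository's default.jsonnet:

```jsonnet
// Hypothetical sketch: three validators, each exercising a different backend.
{
  validators: [
    // node0: rocksdb with memiavl + versiondb enabled
    { 'app-db-backend': 'rocksdb', memiavl: { enable: true }, versiondb: true },
    // node1: pebbledb backend
    { 'app-db-backend': 'pebbledb' },
    // node2: goleveldb backend (added in this PR)
    { 'app-db-backend': 'goleveldb' },
  ],
}
```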
integration_tests/test_upgrade.py (1)
283-285: Consider refactoring duplicated node start commands.
The node start command is duplicated at lines 160-162. Consider extracting this into a helper function to maintain DRY principles and ensure consistent node management.
Example refactor:
```diff
+def start_all_nodes(supervisorctl):
+    supervisorctl(
+        "start",
+        "cronos_777-1-node0",
+        "cronos_777-1-node1",
+        "cronos_777-1-node2",
+    )

-c.supervisorctl(
-    "start",
-    "cronos_777-1-node0",
-    "cronos_777-1-node1",
-    "cronos_777-1-node2",
-)
+start_all_nodes(c.supervisorctl)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- integration_tests/configs/default.jsonnet (1 hunks)
- integration_tests/test_mempool.py (1 hunks)
- integration_tests/test_rollback.py (2 hunks)
- integration_tests/test_upgrade.py (2 hunks)
- integration_tests/test_versiondb.py (1 hunks)
- integration_tests/utils.py (1 hunks)
- scripts/.env (1 hunks)
🧰 Additional context used
🪛 GitHub Check: integration_tests (slow)
integration_tests/test_mempool.py
[failure] 64-64:
test_mempool
TypeError: 'int' object is not iterable
🔇 Additional comments (8)
scripts/.env (1)
4-4: Verify the integration of the new validator mnemonic.
The new VALIDATOR3_MNEMONIC appears to be properly formatted. Let's verify its integration with the test configurations.
✅ Verification successful
The new VALIDATOR3_MNEMONIC is properly integrated
The mnemonic is correctly integrated in both test configurations:
- Used in integration_tests/utils.py to create validator3's account using Account.from_mnemonic
- Properly configured in integration_tests/configs/default.jsonnet with the expected stake allocation and client configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the usage of VALIDATOR3_MNEMONIC in test configurations
# Expected: Find references to this mnemonic in test configuration files

# Search for references to the new validator in test configs
rg -l "VALIDATOR3_MNEMONIC" integration_tests/

# Check if the mnemonic is used in test configurations
fd -e jsonnet -e json . integration_tests/configs/ --exec grep -l "VALIDATOR3_MNEMONIC" {}
```
Length of output: 250
Script:
```shell
#!/bin/bash
# Let's check how VALIDATOR3_MNEMONIC is used in the found files
echo "=== Content from integration_tests/utils.py containing VALIDATOR3_MNEMONIC ==="
rg -A 5 -B 5 "VALIDATOR3_MNEMONIC" integration_tests/utils.py
echo -e "\n=== Content from integration_tests/configs/default.jsonnet containing VALIDATOR3_MNEMONIC ==="
rg -A 5 -B 5 "VALIDATOR3_MNEMONIC" integration_tests/configs/default.jsonnet
```
Length of output: 1455
integration_tests/test_mempool.py (1)
64-64: Fix incorrect assertion syntax causing TypeError
The current assertion misplaces the closing parenthesis, so the code attempts to subtract 1 from the dictionary items view object and raises a TypeError at runtime.
Apply this fix:
```diff
- assert len(all_pending) == len(KEYS.items() - 1)
+ assert len(all_pending) == len(KEYS.items()) - 1
```
Note: Since this test is marked as flaky (max_runs=5), please verify that the assertion consistently passes with the fix.
✅ Verification successful
Fix incorrect assertion syntax causing TypeError
The assertion is incorrect as it's trying to subtract 1 from the dictionary's items view object. Looking at the context:
- The ACCOUNTS dictionary has 6 entries (validator, validator2, validator3, community, signer1, signer2)
- The test explicitly excludes "signer1" in the send_txs call using [v for k, v in KEYS.items() if k != "signer1"]
- Therefore, all_pending should contain exactly 5 transactions
Fix:
```diff
- assert len(all_pending) == len(KEYS.items() - 1)
+ assert len(all_pending) == len(KEYS.items()) - 1
```
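The root cause can be confirmed with a minimal standalone snippet; the KEYS contents here are illustrative, mirroring the six account names listed above:

```python
# Minimal reproduction: a dict_items view supports `-` as set difference,
# which tries to iterate its operand, so `- 1` raises TypeError.
KEYS = {"validator": 0, "validator2": 1, "validator3": 2,
        "community": 3, "signer1": 4, "signer2": 5}

try:
    len(KEYS.items() - 1)  # buggy version
except TypeError as err:
    print(err)  # 'int' object is not iterable

# fixed: parenthesize correctly and subtract from the length instead
assert len(KEYS.items()) - 1 == 5
print("fixed assertion holds")
```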
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the KEYS dictionary structure to confirm expected length
rg -A 5 "KEYS\s*=" integration_tests/utils.py
```
Length of output: 291
Script:
```shell
#!/bin/bash
# Let's check the ACCOUNTS dictionary definition to understand the full context
rg -A 10 "ACCOUNTS\s*=" integration_tests/utils.py
```
Length of output: 667
Script:
```shell
#!/bin/bash
# Let's check the test context to understand how all_pending is populated and why we're excluding one key
rg -B 5 -A 5 "all_pending\s*=" integration_tests/test_mempool.py
```
Length of output: 1060
🧰 Tools
🪛 GitHub Check: integration_tests (slow)
[failure] 64-64:
test_mempool
TypeError: 'int' object is not iterable
integration_tests/test_rollback.py (2)
35-35: LGTM: Node 4 initialization added consistently
The addition of node 4 initialization follows the same pattern as node 3, maintaining consistency in the test setup.
73-73: Verify test configurations for nodes 3 and 4
The implementation correctly includes both nodes for testing different db types. However, we should verify that the corresponding configurations exist in the test setup.
✅ Verification successful
Let me gather more information to specifically check the validator configurations in the rollback test.
Test configurations for nodes 3 and 4 are properly defined
The rollback test configurations are correctly set up with both memiavl and iavl nodes:
- Node 3 (rollback-test-memiavl) is configured with memiavl enabled
- Node 4 (rollback-test-iavl) is configured with memiavl disabled (using iavl)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify that test configurations exist for both nodes
# Expected: Find configurations for both memiavl and iavl nodes

# Check for node configurations in the rollback test config
rg -A 5 "validator[34]" integration_tests/configs/rollback.jsonnet

# Check for db backend configurations
rg -A 5 "(memiavl|iavl)" integration_tests/configs/
```
Length of output: 3122
Script:
```shell
#!/bin/bash
# Check the complete rollback.jsonnet file to understand validator configurations
cat integration_tests/configs/rollback.jsonnet
```
Length of output: 424
integration_tests/test_versiondb.py (1)
75-81: LGTM! Verify node startup sequence
The changes correctly handle the addition of the third node by:
- Starting all three nodes simultaneously
- Dynamically waiting for all validator ports to be available
Let's verify the node configuration and startup sequence:
✅ Verification successful
Node startup sequence verified successfully
The changes are correct and properly configured:
- The base configuration in default.jsonnet shows three validators with different database configurations:
  - node0: memiavl + versiondb enabled (rocksdb)
  - node1: pebbledb backend
  - node2: goleveldb backend
- The test correctly starts all three nodes and waits for their ports to be available
- The dynamic port check using len(cronos.config["validators"]) ensures all validator nodes are ready
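The dynamic port wait can be sketched as below. This is an illustrative re-implementation, not the repository's actual wait_for_port helper, and the commented usage assumes helper names (ports.evmrpc_port, cronos.base_port) from the test utilities:

```python
import socket
import time

def wait_for_port(port: int, host: str = "127.0.0.1", timeout: float = 40.0) -> None:
    """Poll until a TCP port accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=1):
                return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"port {port} on {host} not ready after {timeout}s")
            time.sleep(0.1)

# Waiting on every configured validator would then look roughly like:
# for i in range(len(cronos.config["validators"])):
#     wait_for_port(ports.evmrpc_port(cronos.base_port(i)))
```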
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify node configuration and startup sequence

# Test 1: Check if node2 is properly configured in the validators list
rg -A 5 "validators" integration_tests/configs/

# Test 2: Verify the supervisor configuration for all nodes
rg -A 5 "cronos_777-1-node" integration_tests/configs/
```
Length of output: 5104
Script:
```shell
#!/bin/bash
# Let's try a different approach to verify the node configuration
# Check the test file itself for validator configuration
rg -B 5 -A 10 "def test_versiondb" integration_tests/test_versiondb.py

# Check supervisor configuration in the test
rg -B 5 -A 10 "supervisorctl" integration_tests/test_versiondb.py

# Look for any configuration setup in the test file
rg "config\[" integration_tests/test_versiondb.py
```
Length of output: 2046
Script:
```shell
#!/bin/bash
# Let's check the Cronos class configuration to understand the validator setup
# Check the network.py file for validator configuration
rg -B 5 -A 10 "class Cronos" integration_tests/network.py

# Look for validator configuration in the base configuration
cat integration_tests/configs/default.jsonnet
```
Length of output: 4430
integration_tests/configs/default.jsonnet (1)
68-80: LGTM! The validator configuration is consistent with existing patterns.
The new validator configuration properly introduces goleveldb as another database backend type for testing, maintaining consistency with the existing validator configurations.
Let's verify the environment variable setup and database backend support:
✅ Verification successful
Let me gather more information about the database backend support in the codebase.
Configuration is valid and environment variable is properly set
The verification confirms:
- The VALIDATOR3_MNEMONIC environment variable is properly defined in scripts/.env
- goleveldb is a supported database backend, evidenced by its presence in multiple core dependencies (versiondb, store, memiavl)
- The configuration follows the established pattern, alongside existing validators using rocksdb and pebbledb

Consider documenting the supported database backends in the documentation for better visibility.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify environment variable and database backend support

# Check if VALIDATOR3_MNEMONIC is defined in the .env file
echo "Checking for VALIDATOR3_MNEMONIC in .env file..."
grep -l "VALIDATOR3_MNEMONIC" scripts/.env || echo "Warning: VALIDATOR3_MNEMONIC not found in .env"

# Check if goleveldb is mentioned in documentation or code
echo "Checking for goleveldb support..."
rg -l "goleveldb" docs/ || echo "Warning: goleveldb not documented"
```
Length of output: 340
Script:
```shell
#!/bin/bash
# Check for goleveldb support in the entire codebase
echo "Checking for goleveldb support in the codebase..."
rg -l "goleveldb"

# Check if there are any existing database backend configurations
echo -e "\nChecking existing database backend configurations..."
rg -A 2 "db_backend.*:" integration_tests/configs/
```
Length of output: 1061
integration_tests/test_upgrade.py (1)
160-162: LGTM! Verify node2's configuration.
The addition of cronos_777-1-node2 to the test setup aligns with the PR objective of testing more database types.
Let's verify the configuration of node2:
✅ Verification successful
Configuration for node2 (cronos_777-1-node2) is properly set up
The configuration file shows that the third validator (which corresponds to node2) is correctly configured with:
- Database backend: goleveldb (both for Tendermint and application level)
- Proper staking and initial token allocation
- Correct broadcast mode settings
This aligns with the test setup's goal of testing different database types across nodes:
- node0: rocksdb with memiavl
- node1: pebbledb
- node2: goleveldb
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify node2's configuration matches the expected database type
# Look for goleveldb configuration in the node2 config files
rg -A 5 "goleveldb|db_backend" integration_tests/configs/
```
Length of output: 1381
Script:
```shell
#!/bin/bash
# Let's verify how the nodes are configured in the test setup
# by checking the node configuration assignments
rg -B 5 -A 10 "nodes = \[" integration_tests/configs/
```
Length of output: 55
Script:
```shell
#!/bin/bash
# Let's try a different approach to find node configurations
# Search for node configuration patterns in the test setup
rg -A 10 "'nodes':" integration_tests/configs/
```
Length of output: 54
Script:
```shell
#!/bin/bash
# Let's examine the entire default.jsonnet configuration file
# to understand the node setup structure
cat integration_tests/configs/default.jsonnet
```
Length of output: 3815
integration_tests/utils.py (1)
39-39: LGTM! Verify environment variable setup.
The addition of validator3 follows the established pattern and integrates well with the test utilities.
Let's verify the environment variable setup:
✅ Verification successful
Environment variable setup verified successfully
The VALIDATOR3_MNEMONIC environment variable is properly configured:
- Present in the scripts/.env file
- Contains a valid 24-word mnemonic phrase, following the BIP39 standard
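The word-count portion of that check can be sketched in Python; note this only mirrors the length check above, while full BIP39 validation would also verify the wordlist and checksum:

```python
# BIP39 mnemonics have 12, 15, 18, 21, or 24 words; this sketch checks only
# the word count, not the wordlist membership or checksum.
VALID_BIP39_LENGTHS = (12, 15, 18, 21, 24)

def mnemonic_length_ok(mnemonic: str) -> bool:
    return len(mnemonic.split()) in VALID_BIP39_LENGTHS

print(mnemonic_length_ok(" ".join(["abandon"] * 24)))  # True
print(mnemonic_length_ok(" ".join(["abandon"] * 13)))  # False
```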
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify VALIDATOR3_MNEMONIC is properly set in the environment file

# Check if VALIDATOR3_MNEMONIC is defined in .env file
rg -l 'VALIDATOR3_MNEMONIC' 'scripts/.env'

# Verify the mnemonic format (should be 12 or 24 words)
cat scripts/.env | grep 'VALIDATOR3_MNEMONIC' | awk -F '=' '{print $2}' | tr -d '"' | tr ' ' '\n' | wc -l
```
Length of output: 160
add goleveldb
PR Checklist:
- make
- make test
- go fmt
- golangci-lint run
- go list -json -m all | nancy sleuth

Thank you for your code, it's appreciated! :)
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
These updates enhance the robustness and flexibility of integration testing within the application.