Problem: not enough db types tested for nodes #1710

Merged: 3 commits merged into crypto-org-chain:main on Nov 29, 2024

Conversation

mmsqe (Collaborator) commented Nov 29, 2024

Add goleveldb as an additional db backend for the integration-test nodes.


PR Checklist:

  • Have you read the CONTRIBUTING.md?
  • Does your PR follow the C4 patch requirements?
  • Have you rebased your work on top of the latest master?
  • Have you checked your code compiles? (make)
  • Have you included tests for any non-trivial functionality?
  • Have you checked your code passes the unit tests? (make test)
  • Have you checked your code formatting is correct? (go fmt)
  • Have you checked your basic code style is fine? (golangci-lint run)
  • If you added any dependencies, have you checked they do not contain any known vulnerabilities? (go list -json -m all | nancy sleuth)
  • If your changes affect the client infrastructure, have you run the integration test?
  • If your changes affect public APIs, does your PR follow the C4 evolution of public contracts?
  • If your code changes public APIs, have you incremented the crate version numbers and documented your changes in the CHANGELOG.md?
  • If you are contributing for the first time, please read the agreement in CONTRIBUTING.md now and add a comment to this pull request stating that your PR is in accordance with the Developer's Certificate of Origin.

Thank you for your code, it's appreciated! :)

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a new validator with enhanced configurations for better integration testing.
    • Expanded the number of validator accounts available for testing.
  • Bug Fixes

    • Improved assertions in mempool tests to dynamically reflect the number of pending transactions.
    • Enhanced rollback and upgrade tests to include additional nodes, ensuring comprehensive validation.
  • Documentation

    • Added a new environment variable for the mnemonic of the new validator.

These updates enhance the robustness and flexibility of integration testing within the application.

coderabbitai bot (Contributor) commented Nov 29, 2024

Walkthrough

The changes in this pull request introduce a new validator configuration in the integration_tests/configs/default.jsonnet file, which includes specific attributes and utilizes goleveldb as its database backend. Additionally, modifications are made to several test files, enhancing the functionality and robustness of the tests by including a new validator and adjusting the node configurations. Assertions in the tests are updated to reflect these changes, ensuring that the tests remain relevant to the new setup.

Changes

  • integration_tests/configs/default.jsonnet: Added a new validator configuration with attributes coins, staked, and mnemonic, using goleveldb as the database backend.
  • integration_tests/test_mempool.py: Updated assertion in test_mempool to dynamically check the number of pending transactions based on the KEYS dictionary.
  • integration_tests/test_rollback.py: Modified post_init to include an additional call for node 4 and updated the node list in test_rollback to reflect this change, maintaining the original test logic.
  • integration_tests/test_upgrade.py: Enhanced the upgrade process by starting an additional node (cronos_777-1-node2) and refined the upgrade proposal handling logic with improved error checks.
  • integration_tests/test_versiondb.py: Updated test_versiondb_migration to start three nodes instead of two, and adjusted the waiting logic to ensure all nodes are initialized before assertions.
  • integration_tests/utils.py: Added a new entry for "validator3" in the ACCOUNTS dictionary, initialized with an environment variable for its mnemonic.
  • scripts/.env: Introduced a new environment variable VALIDATOR3_MNEMONIC with a specific mnemonic string value.
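
From the utils.py and scripts/.env rows above, the new account wiring plausibly looks like the sketch below. The Account.from_mnemonic call is confirmed by the review later in this thread; the import path and the hd-wallet toggle are assumptions for illustration, not taken from the diff.

# sketch of the integration_tests/utils.py addition, inferred from the summaries above
import os
from eth_account import Account  # assumed import; only Account.from_mnemonic is confirmed below

Account.enable_unaudited_hdwallet_features()  # eth_account requires this before mnemonic use

ACCOUNTS = {
    # ...existing entries: validator, validator2, community, signer1, signer2...
    "validator3": Account.from_mnemonic(os.environ["VALIDATOR3_MNEMONIC"]),
}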

Suggested reviewers

  • calvinaco
  • devashishdxt

Poem

🐇 In the meadow where the validators play,
A new friend joins us, hip-hip-hooray!
With coins and mnemonics, oh what a sight,
Together we'll hop through the day and the night.
So let’s test and roll back, let’s upgrade and cheer,
For every new change brings us closer, my dear! 🌼


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 3f86279 and bbe7c9e.

📒 Files selected for processing (2)
  • integration_tests/configs/default.jsonnet (1 hunks)
  • integration_tests/test_mempool.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • integration_tests/configs/default.jsonnet
  • integration_tests/test_mempool.py


codecov bot commented Nov 29, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 16.88%. Comparing base (a3c2f70) to head (bbe7c9e).
Report is 13 commits behind head on main.

Additional details and impacted files

Impacted file tree graph

@@           Coverage Diff           @@
##             main    #1710   +/-   ##
=======================================
  Coverage   16.87%   16.88%           
=======================================
  Files          72       72           
  Lines        6163     6167    +4     
=======================================
+ Hits         1040     1041    +1     
- Misses       5000     5002    +2     
- Partials      123      124    +1     

mmsqe marked this pull request as ready for review November 29, 2024 13:38
mmsqe requested a review from a team as a code owner November 29, 2024 13:38
mmsqe requested review from JayT106 and devashishdxt and removed request for a team November 29, 2024 13:38
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (5)
scripts/.env (1)

4-4: Security consideration: Review the handling of sensitive data.

While these appear to be test accounts, storing mnemonics in plaintext within the repository could pose a security risk if accidentally used in non-test environments. Consider:

  • Using environment variables during CI/CD
  • Documenting clearly that these are test-only credentials
  • Adding checks to prevent usage in production
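
A minimal sketch of the last point, assuming the tests can see the chain id (cronos_777-1 throughout this PR); the function name and where it would be called from are hypothetical:

import os

def assert_test_only_credentials(chain_id: str) -> None:
    # hypothetical guard: fail fast if test mnemonics are loaded outside the test chain
    if "VALIDATOR3_MNEMONIC" in os.environ and not chain_id.startswith("cronos_777"):
        raise RuntimeError("test-only mnemonics must not be used outside the test chain")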
integration_tests/test_rollback.py (1)

70-71: Documentation clearly specifies db types for each node

The comments effectively document the db types being tested for each node. Consider adding brief explanations of why these specific db types were chosen for testing.

Consider expanding the documentation:

-    node3: test memiavl node
-    node4: test iavl node
+    node3: test memiavl node (in-memory IAVL+ tree implementation)
+    node4: test iavl node (persistent IAVL+ tree implementation)
integration_tests/test_versiondb.py (1)

Line range hint 1-24: Update docstring to reflect the third node's role

The test's docstring should be updated to document the role of node2 in the version database migration test, particularly its database configuration and how it contributes to testing different database types.

Consider updating the docstring like this:

 def test_versiondb_migration(cronos: Cronos):
     """
     test versiondb migration commands.
-    node0 has memiavl and versiondb enabled while node1 don't,
+    node0 has memiavl and versiondb enabled, node1 uses rocksdb, and node2 uses goleveldb,
integration_tests/configs/default.jsonnet (1)

68-80: Consider documenting the database backend strategy.

Since we now have three different database backends (rocksdb, pebbledb, and goleveldb), it would be helpful to document:

  1. The rationale for testing different database backends
  2. Any performance characteristics or trade-offs
  3. Recommended usage scenarios for each backend

Would you like me to help create a documentation template for this?

integration_tests/test_upgrade.py (1)

283-285: Consider refactoring duplicated node start commands.

The node start command is duplicated at lines 160-162. Consider extracting this into a helper function to maintain DRY principles and ensure consistent node management.

Example refactor:

+def start_all_nodes(supervisorctl):
+    supervisorctl(
+        "start",
+        "cronos_777-1-node0",
+        "cronos_777-1-node1",
+        "cronos_777-1-node2"
+    )

-c.supervisorctl(
-    "start",
-    "cronos_777-1-node0",
-    "cronos_777-1-node1",
-    "cronos_777-1-node2"
-)
+start_all_nodes(c.supervisorctl)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between de33114 and 3f86279.

📒 Files selected for processing (7)
  • integration_tests/configs/default.jsonnet (1 hunks)
  • integration_tests/test_mempool.py (1 hunks)
  • integration_tests/test_rollback.py (2 hunks)
  • integration_tests/test_upgrade.py (2 hunks)
  • integration_tests/test_versiondb.py (1 hunks)
  • integration_tests/utils.py (1 hunks)
  • scripts/.env (1 hunks)
🧰 Additional context used
🪛 GitHub Check: integration_tests (slow)
integration_tests/test_mempool.py

[failure] 64-64: test_mempool

TypeError: 'int' object is not iterable

(the same failure is reported for each of the five flaky-test runs)
🔇 Additional comments (8)
scripts/.env (1)

4-4: Verify the integration of the new validator mnemonic.

The new VALIDATOR3_MNEMONIC appears to be properly formatted. Let's verify its integration with the test configurations.

✅ Verification successful

The new VALIDATOR3_MNEMONIC is properly integrated

The mnemonic is correctly integrated in both test configurations:

  • Used in integration_tests/utils.py to create validator3's account using Account.from_mnemonic
  • Properly configured in integration_tests/configs/default.jsonnet with the expected stake allocation and client configuration
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of VALIDATOR3_MNEMONIC in test configurations
# Expected: Find references to this mnemonic in test configuration files

# Search for references to the new validator in test configs
rg -l "VALIDATOR3_MNEMONIC" integration_tests/

# Check if the mnemonic is used in test configurations
fd -e jsonnet -e json . integration_tests/configs/ --exec grep -l "VALIDATOR3_MNEMONIC" {}

Length of output: 250


Script:

#!/bin/bash
# Let's check how VALIDATOR3_MNEMONIC is used in the found files
echo "=== Content from integration_tests/utils.py containing VALIDATOR3_MNEMONIC ==="
rg -A 5 -B 5 "VALIDATOR3_MNEMONIC" integration_tests/utils.py

echo -e "\n=== Content from integration_tests/configs/default.jsonnet containing VALIDATOR3_MNEMONIC ==="
rg -A 5 -B 5 "VALIDATOR3_MNEMONIC" integration_tests/configs/default.jsonnet

Length of output: 1455

integration_tests/test_mempool.py (1)

64-64: ⚠️ Potential issue

Fix incorrect assertion causing a TypeError

The assertion misplaces the subtraction: "- 1" is applied to the dictionary items view rather than to its length, so the view's set-difference operator receives an int and raises a TypeError at runtime.

Apply this fix:

-    assert len(all_pending) == len(KEYS.items() - 1)
+    assert len(all_pending) == len(KEYS.items()) - 1

Note: Since this test is marked as flaky (max_runs=5), please verify that the assertion consistently passes with the fix.
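
For illustration, a stand-in reproduction of the failure mode (not code from the PR):

KEYS = {"validator": "k1", "validator2": "k2", "signer1": "k3"}  # stand-in values
# len(KEYS.items() - 1) raises TypeError: 'int' object is not iterable, because
# subtraction on a dict items view is a set difference and expects an iterable
assert len(KEYS.items()) - 1 == 2  # the intended arithmetic: all keys except one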

✅ Verification successful

Fix incorrect assertion syntax causing TypeError

The assertion is incorrect as it's trying to subtract 1 from the dictionary's items view object. Looking at the context:

  1. ACCOUNTS dictionary has 6 entries (validator, validator2, validator3, community, signer1, signer2)
  2. The test explicitly excludes "signer1" in the send_txs call using [v for k, v in KEYS.items() if k != "signer1"]
  3. Therefore, all_pending should contain exactly 5 transactions

Fix:

-    assert len(all_pending) == len(KEYS.items() - 1)
+    assert len(all_pending) == len(KEYS.items()) - 1
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the KEYS dictionary structure to confirm expected length
rg -A 5 "KEYS\s*=" integration_tests/utils.py

Length of output: 291


Script:

#!/bin/bash
# Let's check the ACCOUNTS dictionary definition to understand the full context
rg -A 10 "ACCOUNTS\s*=" integration_tests/utils.py

Length of output: 667


Script:

#!/bin/bash
# Let's check the test context to understand how all_pending is populated and why we're excluding one key
rg -B 5 -A 5 "all_pending\s*=" integration_tests/test_mempool.py

Length of output: 1060

🧰 Tools
🪛 GitHub Check: integration_tests (slow)

[failure] 64-64: test_mempool

TypeError: 'int' object is not iterable

(the same failure is reported for each of the five flaky-test runs)
integration_tests/test_rollback.py (2)

35-35: LGTM: Node 4 initialization added consistently

The addition of node 4 initialization follows the same pattern as node 3, maintaining consistency in the test setup.


73-73: Verify test configurations for nodes 3 and 4

The implementation correctly includes both nodes for testing different db types. However, we should verify that the corresponding configurations exist in the test setup.

✅ Verification successful

Let me gather more information to specifically check the validator configurations in the rollback test.


Test configurations for nodes 3 and 4 are properly defined

The rollback test configurations are correctly set up with both memiavl and iavl nodes:

  • Node 3 (rollback-test-memiavl) is configured with memiavl enabled
  • Node 4 (rollback-test-iavl) is configured with memiavl disabled (using iavl)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that test configurations exist for both nodes
# Expected: Find configurations for both memiavl and iavl nodes

# Check for node configurations in the rollback test config
rg -A 5 "validator[34]" integration_tests/configs/rollback.jsonnet

# Check for db backend configurations
rg -A 5 "(memiavl|iavl)" integration_tests/configs/

Length of output: 3122


Script:

#!/bin/bash
# Check the complete rollback.jsonnet file to understand validator configurations
cat integration_tests/configs/rollback.jsonnet

Length of output: 424

integration_tests/test_versiondb.py (1)

75-81: LGTM! Verify node startup sequence

The changes correctly handle the addition of the third node by:

  1. Starting all three nodes simultaneously
  2. Dynamically waiting for all validator ports to be available

Let's verify the node configuration and startup sequence:

✅ Verification successful

Node startup sequence verified successfully

The changes are correct and properly configured:

  1. The base configuration in default.jsonnet shows three validators with different database configurations:
    • node0: memiavl + versiondb enabled (rocksdb)
    • node1: pebbledb backend
    • node2: goleveldb backend
  2. The test correctly starts all three nodes and waits for their ports to be available
  3. The dynamic port check using len(cronos.config["validators"]) ensures all validator nodes are ready
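
The dynamic wait described in point 3 might look like this sketch; wait_for_port and the ports helper names are assumptions based on the test utilities referenced elsewhere in this review, not verified from the diff:

from utils import wait_for_port  # assumed helper from integration_tests/utils.py
import ports  # hypothetical module exposing per-node port arithmetic

def wait_for_all_validators(cronos):
    # block until every configured validator accepts RPC connections
    for i in range(len(cronos.config["validators"])):
        wait_for_port(ports.rpc_port(cronos.base_port(i)))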
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify node configuration and startup sequence
# Test 1: Check if node2 is properly configured in the validators list
rg -A 5 "validators" integration_tests/configs/

# Test 2: Verify the supervisor configuration for all nodes
rg -A 5 "cronos_777-1-node" integration_tests/configs/

Length of output: 5104


Script:

#!/bin/bash
# Let's try a different approach to verify the node configuration

# Check the test file itself for validator configuration
rg -B 5 -A 10 "def test_versiondb" integration_tests/test_versiondb.py

# Check supervisor configuration in the test
rg -B 5 -A 10 "supervisorctl" integration_tests/test_versiondb.py

# Look for any configuration setup in the test file
rg "config\[" integration_tests/test_versiondb.py

Length of output: 2046


Script:

#!/bin/bash
# Let's check the Cronos class configuration to understand the validator setup

# Check the network.py file for validator configuration
rg -B 5 -A 10 "class Cronos" integration_tests/network.py

# Look for validator configuration in the base configuration
cat integration_tests/configs/default.jsonnet

Length of output: 4430

integration_tests/configs/default.jsonnet (1)

68-80: LGTM! The validator configuration is consistent with existing patterns.

The new validator configuration properly introduces goleveldb as another database backend type for testing, maintaining consistency with the existing validator configurations.

Let's verify the environment variable setup and database backend support:

✅ Verification successful

Let me gather more information about the database backend support in the codebase.


Configuration is valid and environment variable is properly set

The verification confirms:

  • The VALIDATOR3_MNEMONIC environment variable is properly defined in scripts/.env
  • goleveldb is a supported database backend, evidenced by its presence in multiple core dependencies (versiondb, store, memiavl)
  • The configuration follows the established pattern, alongside existing validators using rocksdb and pebbledb

Consider documenting the supported database backends in the documentation for better visibility.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify environment variable and database backend support

# Check if VALIDATOR3_MNEMONIC is defined in the .env file
echo "Checking for VALIDATOR3_MNEMONIC in .env file..."
grep -l "VALIDATOR3_MNEMONIC" scripts/.env || echo "Warning: VALIDATOR3_MNEMONIC not found in .env"

# Check if goleveldb is mentioned in documentation or code
echo "Checking for goleveldb support..."
rg -l "goleveldb" docs/ || echo "Warning: goleveldb not documented"

Length of output: 340


Script:

#!/bin/bash
# Check for goleveldb support in the entire codebase
echo "Checking for goleveldb support in the codebase..."
rg -l "goleveldb"

# Check if there are any existing database backend configurations
echo -e "\nChecking existing database backend configurations..."
rg -A 2 "db_backend.*:" integration_tests/configs/

Length of output: 1061

integration_tests/test_upgrade.py (1)

160-162: LGTM! Verify node2's configuration.

The addition of cronos_777-1-node2 to the test setup aligns with the PR objective of testing more database types.

Let's verify the configuration of node2:

✅ Verification successful

Configuration for node2 (cronos_777-1-node2) is properly set up

The configuration file shows that the third validator (which corresponds to node2) is correctly configured with:

  • Database backend: goleveldb (both for Tendermint and application level)
  • Proper staking and initial token allocation
  • Correct broadcast mode settings

This aligns with the test setup's goal of testing different database types across nodes:

  • node0: rocksdb with memiavl
  • node1: pebbledb
  • node2: goleveldb
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify node2's configuration matches the expected database type
# Look for goleveldb configuration in the node2 config files

rg -A 5 "goleveldb|db_backend" integration_tests/configs/

Length of output: 1381


Script:

#!/bin/bash
# Let's verify how the nodes are configured in the test setup
# by checking the node configuration assignments

rg -B 5 -A 10 "nodes = \[" integration_tests/configs/

Length of output: 55


Script:

#!/bin/bash
# Let's try a different approach to find node configurations
# Search for node configuration patterns in the test setup

rg -A 10 "'nodes':" integration_tests/configs/

Length of output: 54


Script:

#!/bin/bash
# Let's examine the entire default.jsonnet configuration file
# to understand the node setup structure

cat integration_tests/configs/default.jsonnet

Length of output: 3815

integration_tests/utils.py (1)

39-39: LGTM! Verify environment variable setup.

The addition of validator3 follows the established pattern and integrates well with the test utilities.

Let's verify the environment variable setup:

✅ Verification successful

Environment variable setup verified successfully

The VALIDATOR3_MNEMONIC environment variable is properly configured:

  • Present in scripts/.env file
  • Contains a valid 24-word mnemonic phrase, following the BIP39 standard
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify VALIDATOR3_MNEMONIC is properly set in the environment file

# Check if VALIDATOR3_MNEMONIC is defined in .env file
rg -l 'VALIDATOR3_MNEMONIC' 'scripts/.env'

# Verify the mnemonic format (should be 12 or 24 words)
cat scripts/.env | grep 'VALIDATOR3_MNEMONIC' | awk -F '=' '{print $2}' | tr -d '"' | tr ' ' '\n' | wc -l

Length of output: 160

mmsqe added this pull request to the merge queue Nov 29, 2024
Merged via the queue into crypto-org-chain:main with commit 3beaf73 Nov 29, 2024
35 checks passed