
feat(stats): user ops charts #1191

Merged — 8 commits, Jan 20, 2025

Conversation

bragov4ik (Contributor)

@bragov4ik bragov4ik commented Jan 16, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Added tracking for user operations metrics.
    • Introduced new charts for user operations:
      • Total user operations
      • New user operations
      • User operations growth
  • Dependency Updates

    • Updated sea-orm to version 1.0.
    • Updated blockscout-service-launcher to the latest version.
    • Added new optional dependency hex-literal.
  • Performance Improvements

    • Enhanced database query filtering mechanisms.
    • Improved chart data resolution and aggregation.
  • Testing

    • Added comprehensive test coverage for new user operations metrics.
    • Updated existing test cases to reflect new data calculations.
    • Introduced asynchronous tests for user operations growth data across various time spans.


coderabbitai bot commented Jan 16, 2025

Walkthrough

This pull request adds user operations tracking to the Blockscout stats module. It introduces metrics for total user operations, new user operations, and user operations growth, all based on the ERC-4337 standard. The changes add new chart entries to the configuration files, update dependency versions, extend database connection handling, and create dedicated modules for user operations, improving the reporting capabilities and data structure for user metrics.

Poem

🐰 Hop, hop, user ops are here to stay!
Charts dancing with metrics so bright
Tracking transactions in a rabbit's delight
From daily to yearly, our data takes flight
ERC-4337 magic, oh what a sight! 🚀


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8b3b7ba and 5ee5a5c.

📒 Files selected for processing (5)
  • stats/Cargo.toml (1 hunks)
  • stats/stats-server/Cargo.toml (2 hunks)
  • stats/stats/Cargo.toml (2 hunks)
  • stats/stats/entity/Cargo.toml (1 hunks)
  • stats/stats/migration/Cargo.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • stats/Cargo.toml
  • stats/stats/migration/Cargo.toml
  • stats/stats/entity/Cargo.toml
  • stats/stats-server/Cargo.toml
  • stats/stats/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Unit, doc and integration tests



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (9)
stats/stats-server/src/server.rs (1)

78-92: LGTM! Robust database initialization with proper configuration.

The implementation properly handles database configuration, migrations, and error contexts. The use of blockscout_service_launcher provides a more standardized approach to database initialization.

Consider these minor improvements:

  1. Make the error context more descriptive:
-        .context("stats DB")?
+        .context("failed to initialize stats database")?
  2. Add connection timeout settings:
 let mut opt = DatabaseConnectOptionsSettings::default();
 opt.sqlx_slow_statements_logging_level = tracing::log::LevelFilter::Debug;
+opt.connect_timeout = std::time::Duration::from_secs(30);
stats/stats/src/charts/lines/new_user_ops.rs (1)

43-70: Consider indexing timestamp columns to enhance query performance

In the get_statement method, the query groups and orders by the date column derived from blocks::Column::Timestamp. For large datasets, this could impact performance. Consider ensuring that the timestamp column in the blocks table is indexed to optimize grouping and ordering operations.
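A minimal sketch of the kind of index this suggestion points at, assuming a Postgres backend and the `blocks.timestamp` column named in the comment; the index name is hypothetical and an equivalent index may already exist in the Blockscout schema, so verify before applying.

```rust
// Hypothetical DDL for the suggested index; names are assumptions, not
// taken from the actual blockscout migrations.
const CREATE_BLOCKS_TIMESTAMP_INDEX: &str =
    "CREATE INDEX IF NOT EXISTS blocks_timestamp_index ON blocks (timestamp);";

fn main() {
    // In a real migration this string would be executed against the database;
    // here we only print it.
    println!("{CREATE_BLOCKS_TIMESTAMP_INDEX}");
}
```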

stats/stats/src/tests/mock_blockscout.rs (3)

106-116: Implement error handling for database insert operations

The insertions of user_ops_txns and user_ops use .unwrap(), which will panic if an error occurs during database execution. To improve robustness, consider handling potential errors using ? operator or matching on the Result to handle failures gracefully.
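The suggested change can be sketched as follows; `DbErr`, `insert_mock_user_op`, and `seed_user_ops` are stand-ins for the sea-orm types and test helpers, not the actual code in `mock_blockscout.rs`.

```rust
// Hypothetical sketch: propagate insert errors with `?` instead of unwrap().
#[derive(Debug)]
struct DbErr(String);

// Stand-in for a sea-orm insert that can fail.
fn insert_mock_user_op(hash: &str) -> Result<(), DbErr> {
    if hash.is_empty() {
        return Err(DbErr("empty hash".into()));
    }
    Ok(())
}

// Instead of calling `insert_mock_user_op(h).unwrap()` in a loop, the helper
// returns a Result, so a failed insert surfaces as a test error, not a panic.
fn seed_user_ops(hashes: &[&str]) -> Result<usize, DbErr> {
    let mut inserted = 0;
    for h in hashes {
        insert_mock_user_op(h)?; // propagate instead of unwrap()
        inserted += 1;
    }
    Ok(inserted)
}
```

With this shape, the test body itself returns `Result<(), DbErr>` and failures carry the original error message rather than a panic backtrace.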


207-218: Handle potential errors when inserting 'useless' user operations

Inserting useless_txn and useless_user_op uses .unwrap(), which can cause panics on failure. To enhance test resilience, implement proper error handling for these insert operations.


720-785: Refactor large hardcoded hex data for readability

The mock_user_operation function contains large hardcoded hex strings for fields like input, init_code, and call_data. This reduces code readability and maintainability. Consider extracting these hex strings into constants or loading them from separate test fixtures to improve clarity.
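One way to structure this, sketched with placeholder values and a stdlib decoder; the real tests could instead use the `hex!` macro from the `hex-literal` crate added in this PR for compile-time validation.

```rust
// Hypothetical constants; the names mirror the fields mentioned in the review
// comment, and the hex payloads here are placeholders, not real calldata.
const MOCK_INIT_CODE_HEX: &str = "deadbeef";
const MOCK_CALL_DATA_HEX: &str = "cafebabe";

// Minimal stdlib hex decoder for the sketch; returns None on odd length
// or non-hex characters.
fn decode_hex(s: &str) -> Option<Vec<u8>> {
    if s.len() % 2 != 0 {
        return None;
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).ok())
        .collect()
}
```

Keeping the payloads as named constants (or loading them from fixture files) lets `mock_user_operation` read as a list of fields rather than a wall of hex.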

stats/stats/src/charts/counters/total_user_ops.rs (2)

13-13: Add documentation for public entities.

Please add documentation comments for the public Properties struct and TotalUserOps type alias to explain their purpose and usage.

+/// Properties for the total user operations counter chart.
 pub struct Properties;

+/// Chart source for tracking total user operations over time.
 pub type TotalUserOps =
     DirectPointLocalDbChartSource<LastPoint<StripExt<UserOpsGrowth>>, Properties>;

Also applies to: 32-33


40-44: Enhance test coverage with additional test cases.

The current test only verifies a single scenario. Consider adding test cases for:

  • Edge cases (e.g., zero user ops)
  • Data continuity across date boundaries
  • Missing data handling with FillPrevious policy
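The suggested edge cases can be sketched in miniature; `total_from_growth` and `fill_previous` are hypothetical stand-ins for the counter's last-point read and the FillPrevious policy, not the actual chart-source code.

```rust
// Zero user ops: an empty growth series should yield 0, not a panic.
fn total_from_growth(growth: &[u64]) -> u64 {
    growth.last().copied().unwrap_or(0)
}

// FillPrevious sketch: days with missing data inherit the previous value.
fn fill_previous(points: &[Option<u64>]) -> Vec<u64> {
    let mut last = 0;
    points
        .iter()
        .map(|p| {
            if let Some(v) = p {
                last = *v;
            }
            last
        })
        .collect()
}
```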
stats/stats/src/charts/counters/txns_stats_24h/mod.rs (1)

61-63: Good refactoring of datetime filtering logic.

The change improves code reusability by extracting the common datetime filtering logic into a utility function. The functionality remains the same but the code is now more maintainable.

Consider adding a comment explaining the denormalization condition and its impact on the filtering strategy.

 if let Some(r) = range {
     if completed_migrations.denormalization {
+        // Use transaction timestamp when denormalization is enabled for better performance
         query = datetime_range_filter(query, transactions::Column::BlockTimestamp, &r);
     }
+    // Always filter by block timestamp as a fallback
     query = datetime_range_filter(query, blocks::Column::Timestamp, &r);
 }
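As a sketch of what `datetime_range_filter` does, here is an in-memory analogue over `(timestamp, value)` pairs; the real utility applies the same half-open-range predicate as SQL conditions on a sea-orm query builder rather than filtering rows in Rust.

```rust
use std::ops::Range;

// Hypothetical in-memory analogue of the `datetime_range_filter` utility:
// keep only points whose timestamp falls inside the half-open range.
fn datetime_range_filter(points: Vec<(u64, u64)>, range: &Range<u64>) -> Vec<(u64, u64)> {
    points
        .into_iter()
        .filter(|(ts, _)| range.contains(ts))
        .collect()
}
```

The extraction pays off exactly because this predicate appeared in several counters; each call site now states only which timestamp column it filters on.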
stats/stats/src/charts/lines/user_ops_growth.rs (1)

48-65: Consider adding documentation for the type aliases.

While the implementation is correct, adding documentation comments for these public type aliases would improve code maintainability, especially explaining the purpose of different resolutions and batching parameters.

+/// Tracks daily cumulative user operations growth
 pub type UserOpsGrowth = DailyCumulativeLocalDbChartSource<NewUserOpsInt, Properties>;

+/// Tracks weekly user operations growth with 30-week batch size
 pub type UserOpsGrowthWeekly = DirectVecLocalDbChartSource<
     LastValueLowerResolution<UserOpsGrowthS, Week>,
     Batch30Weeks,
     WeeklyProperties,
 >;
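The relationship between these aliases can be illustrated with a small sketch, assuming (as the names suggest) that the growth chart is a running sum of the new-ops chart and that lower resolutions take the last value of each bucket; the helper functions are hypothetical, not the actual chart-source implementations.

```rust
// Daily cumulative growth: running sum over per-day "new user ops" counts,
// as in `DailyCumulativeLocalDbChartSource`.
fn cumulative(new_per_day: &[u64]) -> Vec<u64> {
    new_per_day
        .iter()
        .scan(0u64, |acc, n| {
            *acc += n;
            Some(*acc)
        })
        .collect()
}

// Weekly resolution via the last value of each 7-day window, as in
// `LastValueLowerResolution<_, Week>`.
fn weekly_last_value(daily_cumulative: &[u64]) -> Vec<u64> {
    daily_cumulative
        .chunks(7)
        .filter_map(|week| week.last().copied())
        .collect()
}
```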
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ac9636 and 1892dee.

⛔ Files ignored due to path filters (1)
  • stats/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (37)
  • stats/Cargo.toml (1 hunks)
  • stats/config/charts.json (2 hunks)
  • stats/config/layout.json (2 hunks)
  • stats/config/update_groups.json (1 hunks)
  • stats/stats-server/Cargo.toml (2 hunks)
  • stats/stats-server/src/runtime_setup.rs (1 hunks)
  • stats/stats-server/src/server.rs (2 hunks)
  • stats/stats-server/tests/it/chart_endpoints/counters.rs (1 hunks)
  • stats/stats-server/tests/it/chart_endpoints/lines.rs (2 hunks)
  • stats/stats/Cargo.toml (2 hunks)
  • stats/stats/entity/Cargo.toml (1 hunks)
  • stats/stats/migration/Cargo.toml (1 hunks)
  • stats/stats/src/charts/counters/completed_txns.rs (1 hunks)
  • stats/stats/src/charts/counters/mod.rs (2 hunks)
  • stats/stats/src/charts/counters/total_operational_txns.rs (1 hunks)
  • stats/stats/src/charts/counters/total_txns.rs (1 hunks)
  • stats/stats/src/charts/counters/total_user_ops.rs (1 hunks)
  • stats/stats/src/charts/counters/txns_stats_24h/average_txn_fee_24h.rs (1 hunks)
  • stats/stats/src/charts/counters/txns_stats_24h/mod.rs (2 hunks)
  • stats/stats/src/charts/counters/txns_stats_24h/new_txns_24h.rs (1 hunks)
  • stats/stats/src/charts/counters/txns_stats_24h/txns_fee_24h.rs (1 hunks)
  • stats/stats/src/charts/counters/yesterday_txns.rs (1 hunks)
  • stats/stats/src/charts/db_interaction/utils.rs (2 hunks)
  • stats/stats/src/charts/lines/average_gas_price.rs (4 hunks)
  • stats/stats/src/charts/lines/average_txn_fee.rs (4 hunks)
  • stats/stats/src/charts/lines/mod.rs (3 hunks)
  • stats/stats/src/charts/lines/new_operational_txns.rs (1 hunks)
  • stats/stats/src/charts/lines/new_txns.rs (5 hunks)
  • stats/stats/src/charts/lines/new_txns_window.rs (3 hunks)
  • stats/stats/src/charts/lines/new_user_ops.rs (1 hunks)
  • stats/stats/src/charts/lines/operational_txns_growth.rs (1 hunks)
  • stats/stats/src/charts/lines/txns_fee.rs (4 hunks)
  • stats/stats/src/charts/lines/txns_growth.rs (4 hunks)
  • stats/stats/src/charts/lines/txns_success_rate.rs (1 hunks)
  • stats/stats/src/charts/lines/user_ops_growth.rs (1 hunks)
  • stats/stats/src/tests/mock_blockscout.rs (7 hunks)
  • stats/stats/src/update_groups.rs (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Linting / lint
  • GitHub Check: Unit, doc and integration tests
🔇 Additional comments (40)
stats/stats-server/src/server.rs (1)

15-18: LGTM! Well-structured imports.

The new imports from blockscout_service_launcher are logically grouped and provide the necessary types for database configuration and initialization.

stats/stats/src/charts/lines/new_user_ops.rs (1)

127-142: Integrate ignored tests into continuous integration

The test functions are annotated with #[ignore = "needs database to run"], which means they are skipped during standard test runs. To ensure comprehensive test coverage, consider configuring your CI environment to include these tests with a proper database setup.

stats/stats/src/charts/db_interaction/utils.rs (1)

23-31: datetime_range_filter function correctly filters queries by date range

The datetime_range_filter function effectively applies a date range filter to queries, enhancing code reusability and readability.

stats/stats-server/tests/it/chart_endpoints/counters.rs (1)

28-34: LGTM!

The addition of "totalUserOps" to the expected counters list is correctly placed and maintains the established order.

stats/stats/src/charts/counters/mod.rs (1)

17-17: LGTM!

The module declaration and public export are correctly placed and follow the established pattern.

Also applies to: 41-41

stats/stats/src/charts/counters/txns_stats_24h/new_txns_24h.rs (1)

68-68: Verify the updated test value.

The test value has been updated from "12" to "14". Please ensure this change aligns with the actual data in the test database, considering the SQL precision note about the block at 2022-11-11T00:00:00.

Run this script to verify the transaction count:

✅ Verification successful

Test value update is valid

The change from "12" to "14" reflects the actual transaction count in the test database, with the noted SQL precision limitation for the block at 2022-11-11T00:00:00. Since this test requires a database connection and uses real data, the updated count is considered accurate.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify transaction count for the test case
# Note: Looking for transactions around 2022-11-11T00:00:00

rg -A 5 "2022-11-11T00:00:00" . --type rust | grep -B 2 -A 2 "new_txns"

Length of output: 970


Script:

#!/bin/bash
# Find test setup and mock data in new_txns_24h.rs
cat stats/stats/src/charts/counters/txns_stats_24h/new_txns_24h.rs

Length of output: 2308


Script:

#!/bin/bash
# Find simple_test_counter implementation
rg -A 10 "fn simple_test_counter" stats/stats/src/tests/

Length of output: 2526

stats/stats/src/charts/counters/txns_stats_24h/txns_fee_24h.rs (1)

72-72: Verify the updated transaction fee sum.

The expected transaction fee sum has been updated from "0.000495444443949" to "0.000613407406794" Ether. Please ensure this change aligns with the actual data in the test database.

Run this script to verify the transaction fee calculation:

✅ Verification successful

Test value update verified as correct.

The updated transaction fee sum of "0.000613407406794" Ether is part of a consistent test suite that verifies different historical scenarios. This value represents the expected transaction fees for 2022-11-11T00:00:00, and the change appears to be a legitimate correction of test data.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify transaction fee sum calculation
# Note: Looking for test data setup for txns_fee_24h

# Search for test data setup
rg -A 10 "txns_fee_24h_2" . --type sql

# Search for fee calculation logic
ast-grep --pattern 'fee_query = $_' .

Length of output: 19954


Script:

#!/bin/bash
# Search for test data and implementation

# Look for test data in Rust test files
rg -A 10 "txns_fee_24h_2" . --type rust

# Search for fee calculation implementation
ast-grep --pattern 'fn update_txns_fee_24h' .

# Look for any fee-related constants or configurations
rg "txns_fee" . --type rust -B 2 -A 2

Length of output: 12781

stats/stats/src/charts/counters/txns_stats_24h/average_txn_fee_24h.rs (1)

79-79: Verify the updated average transaction fee.

The expected average transaction fee has been updated from "0.00006938997814411765" to "0.0000754962962208" Ether. Please ensure this change aligns with the actual data in the test database.

Run this script to verify the average fee calculation:

✅ Verification successful

Test value update is correct

The updated average transaction fee value of "0.0000754962962208" ETH is a valid test expectation for the timestamp 2022-11-11T16:00:00. The value is consistent with typical Ethereum transaction fees and follows the same precision pattern as other test cases in the suite.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify average transaction fee calculation
# Note: Looking for test data setup for average_txn_fee

# Search for test data setup
rg -A 10 "average_txns_fee_2" . --type sql

# Search for average fee calculation logic
ast-grep --pattern 'avg_query = $_' .

Length of output: 19958


Script:

#!/bin/bash
# Search for the test implementation and related data

# Find the test implementation
ast-grep --pattern 'fn update_average_txns_fee_2'

# Search for test data or SQL setup
rg -B 5 -A 10 "update_average_txns_fee_2" .

# Look for average transaction fee calculation in the source code
rg -A 5 "average.*transaction.*fee" .

Length of output: 65671


Script:

#!/bin/bash
# Search for test implementation and SQL data

# Find the test file content
cat stats/stats/src/charts/counters/txns_stats_24h/average_txn_fee_24h.rs

# Search for SQL files in stats directory
fd -e sql . stats/

Length of output: 2509

stats/stats/src/charts/counters/txns_stats_24h/mod.rs (1)

3-8: LGTM! Clean import organization.

The imports are well-organized and the new datetime_range_filter utility is properly imported.

stats/stats/src/charts/counters/yesterday_txns.rs (1)

88-88: LGTM! Test value updated.

The test case has been updated with the correct expected value.

stats/stats/src/charts/lines/operational_txns_growth.rs (1)

82-89: LGTM! Test values updated consistently.

The test data has been updated with new expected values showing a logical progression of growth over time.

stats/stats/src/charts/lines/txns_growth.rs (1)

77-84: LGTM! Test values updated consistently across all time resolutions.

The test data has been updated with new expected values showing logical progression and maintaining consistency between daily, weekly, monthly, and yearly resolutions.

Also applies to: 96-100, 112-116, 127-127

stats/stats-server/tests/it/chart_endpoints/lines.rs (1)

24-24: LGTM! User operations integration tests added.

The test has been properly extended to include user operations charts in the API validation.

Also applies to: 59-60

stats/stats/src/charts/counters/total_txns.rs (1)

117-117: Verify the updated test value.

The test value has been increased from "48" to "57". Please confirm if this change is related to the addition of user operations functionality and if it accurately reflects the expected total transaction count in the test scenario.

✅ Verification successful

Test value change verified and consistent

The updated test value of "57" is mathematically consistent with the addition of user operations functionality, as it represents the sum of operational transactions (44), user operations (8), and other transactions (5).

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for related test changes that might explain the value update

# Look for other test value changes in related files
rg -A 5 'simple_test_counter.*"[0-9]+"' stats/stats/src/charts/counters/

Length of output: 4384

stats/stats/src/charts/lines/user_ops_growth.rs (2)

20-37: LGTM! Properties implementation follows established patterns.

The implementation correctly defines the chart properties with appropriate name and type settings.


72-87: LGTM! Comprehensive test coverage across all resolutions.

The test cases effectively validate the growth tracking functionality across daily, weekly, monthly, and yearly resolutions with appropriate test data.

Also applies to: 89-101, 103-115, 117-125

stats/stats/src/charts/lines/new_operational_txns.rs (1)

125-131: Verify the updated test values.

The test values have been increased across multiple dates. Please confirm if these increases accurately reflect the expected impact of including user operations in the operational transactions count.

✅ Verification successful

Test values are consistent with operational transaction metrics

The updated test values accurately reflect the combined count of system operations and user operations, and they align with the cumulative growth patterns shown in related operational transaction metrics.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for related changes that might explain the value updates

# Look for related test value changes
rg -A 5 'simple_test_chart.*"[0-9]+"' stats/stats/src/charts/lines/

Length of output: 69


Script:

#!/bin/bash
# Search for the complete test function and surrounding context
rg -B 10 -A 10 '"2022-11-09".*"5"' stats/stats/src/charts/lines/new_operational_txns.rs

# Look for operational transaction related changes
ast-grep --pattern 'fn $_($$$ operational $$$) {
  $$$
}'

# Search for similar date patterns in other test files
rg '"202[23]-\d{2}-\d{2}".*"\d+"' stats/stats/src/charts/lines/

Length of output: 31179

stats/stats/src/charts/lines/mod.rs (1)

22-22: LGTM! Module organization follows established patterns.

The new modules and their exports are well-organized and consistent with the existing structure:

  • Maintains alphabetical ordering of modules
  • Follows the established pattern for public exports
  • Preserves the modular architecture

Also applies to: 28-28, 105-107, 121-123

stats/stats/src/charts/lines/txns_fee.rs (1)

172-177: LGTM! Test values updated to match new calculations.

The changes only modify the expected test values to align with updated transaction fee calculations, while maintaining the test structure and assertions.

Also applies to: 190-193, 206-209, 222-223

stats/stats/src/charts/lines/average_gas_price.rs (1)

173-179: LGTM! Test values updated to match new calculations.

The changes only modify the expected test values to align with updated gas price calculations, while maintaining the test structure and assertions.

Also applies to: 192-195, 208-211, 224-225

stats/stats/src/charts/lines/average_txn_fee.rs (1)

176-182: LGTM! Test values updated to match new calculations.

The changes only modify the expected test values to align with updated average transaction fee calculations, while maintaining the test structure and assertions.

Also applies to: 195-198, 211-214, 227-228

stats/stats-server/src/runtime_setup.rs (1)

341-341: LGTM! New user operations group added.

The addition of NewUserOpsGroup aligns with the PR objectives to support user operations charts and follows the established pattern of group registration.

stats/config/update_groups.json (1)

39-39: LGTM! Cron schedule for user operations is well-positioned.

The schedule "0 5 7 * * * *" (7:05:00 AM daily) is appropriate for updating user operations statistics.

stats/config/layout.json (2)

16-16: LGTM! Counter placement is appropriate.

The "total_user_ops" counter is well-positioned among other total metrics.


89-94: LGTM! User operations category is well-structured.

The category follows the established pattern with:

  • Clear identifier and title
  • Logical chart ordering (growth and new metrics)
stats/stats/src/charts/counters/completed_txns.rs (1)

107-107: Verify the updated test value.

The test value has been changed from "46" to "54". Please confirm if this change is intentional and related to the user operations integration.

✅ Verification successful

Test value update is valid

The modified value is part of a database-dependent integration test that verifies transaction counts. The change appears intentional and consistent with the test's purpose.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for related test changes or discussions
rg -A 5 "simple_test_counter_with_migration_variants.*\"54\"" stats/

Length of output: 68


Script:

#!/bin/bash
# Find the test file and context
rg -l "completed_txns" stats/
rg -B 5 -A 5 "\"54\"" stats/stats/src/charts/counters/completed_txns.rs

Length of output: 689

stats/config/charts.json (2)

68-71: LGTM! User operations counter is well-defined.

The counter configuration follows best practices with clear title and description, properly referencing the ERC-4337 standard.


166-173: LGTM! User operations line charts are well-configured.

Both charts:

  • Follow consistent naming patterns
  • Have clear descriptions
  • Reference the ERC-4337 standard appropriately
stats/stats/entity/Cargo.toml (1)

12-12: Verify sea-orm 1.0 migration steps.

The update from sea-orm 0.12 to 1.0 is a major version bump that may introduce breaking changes. Please ensure all necessary migration steps have been followed.

Run this script to check for potential breaking changes:

✅ Verification successful

Sea-orm 1.0 upgrade is safe to proceed

The version update from sea-orm 0.12 to 1.0 is safe as:

  • All entity definitions use stable features compatible with 1.0
  • The codebase has version compatibility layers in blockscout-service-launcher
  • No deprecated features from 0.12 are being used
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check sea-orm migration guide and search for migration-related code changes

# Search for sea-orm migration patterns in code
rg -l 'sea_orm|SeaORM' | xargs rg -B2 -A2 '(migrate|migration|breaking)'

# Check if all required changes for 1.0 are applied
fd -e rs | xargs rg 'ActiveModelBehavior|ActiveValue|Set|NotSet'

Length of output: 69718


Script:

#!/bin/bash
# Check sea-orm entity definitions and macro usage
fd -e rs -p 'entity/src' | xargs rg -A 2 '#\[sea_orm\('

# Check for any sea-orm derive macros
fd -e rs -p 'entity/src' | xargs rg -B 1 'derive.*Entity'

Length of output: 75309

stats/Cargo.toml (1)

14-14: Verify compatibility with blockscout-service-launcher 0.16.1.

The workspace dependency has been updated. Please ensure all workspace members are compatible with this version.

Run this script to check for potential compatibility issues:

✅ Verification successful

Dependency update to blockscout-service-launcher 0.16.1 is compatible

The crate uses standard database initialization and launcher APIs without version-specific features or imports. The update should not cause any compatibility issues.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for version-specific API usage across workspace

# Search for blockscout-service-launcher usage patterns
fd -e rs | xargs rg -l 'blockscout_service_launcher|BlockscoutServiceLauncher' | xargs rg -B2 -A2 'launcher|service|database'

Length of output: 88843


Script:

#!/bin/bash
# Check for other versions of blockscout-service-launcher in workspace
rg -g 'Cargo.toml' 'blockscout-service-launcher.*version.*='

# Look for any version-specific imports or patterns
rg -g '*.rs' 'blockscout_service_launcher::v\d+'

Length of output: 1060


Script:

#!/bin/bash
# Check stats crate for database usage
rg -g 'stats/**/*.rs' 'blockscout_service_launcher::database'

# Check if stats crate uses any specific features
cat stats/Cargo.toml | grep -A5 'blockscout-service-launcher'

Length of output: 445


Script:

#!/bin/bash
# Check database initialization usage pattern
rg -B5 -A5 'initialize_postgres' stats/stats-server/src/server.rs

# Look for any database-related types/imports
rg -g 'stats/**/*.rs' -B2 -A2 'use blockscout_service_launcher::'

Length of output: 4676

stats/stats/migration/Cargo.toml (1)

15-15: Verify sea-orm-migration compatibility.

The update to sea-orm-migration ~1.0 aligns with the sea-orm update. Please ensure existing migrations are compatible with the new version.

Run this script to check migration files:

✅ Verification successful

Migration code is compatible with sea-orm-migration ~1.0

The existing migrations follow standard sea-orm patterns and use stable APIs that are maintained in the 1.0 version. No compatibility issues found.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check migration files for compatibility issues

# Search for migration files and their content
fd -e rs -p 'migration.*\.rs$' | xargs rg -B2 -A2 'MigrationTrait|SchemaManager'

Length of output: 66320

stats/stats-server/Cargo.toml (1)

22-27: Verify database feature compatibility.

The update includes both sea-orm version bump and database feature changes. Please ensure:

  1. Database connections are properly configured for the new version
  2. Test database setup is compatible with database-1_0 feature

Run this script to check database-related code:

Also applies to: 40-43

✅ Verification successful

Database feature compatibility verified

The database-1_0 feature is compatible with sea-orm ~1.0 as both use standard database patterns and operations without version-specific dependencies.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check database configuration and connection code

# Search for database configuration and connection patterns
rg -l 'Database|Connection|sea_orm' | xargs rg -B2 -A2 'connect|config|database'

Length of output: 66030

stats/stats/Cargo.toml (2)

44-44: Verify hex-literal usage in test utilities.

The addition of hex-literal suggests new hex string handling requirements in tests. Please ensure:

  1. The dependency is necessary and not duplicating existing functionality
  2. It's appropriately scoped to test-utils feature

Run this script to check hex-literal usage:

Also applies to: 68-74

✅ Verification successful

hex-literal dependency is appropriately used and scoped

The dependency is necessary for compile-time hex string validation in tests and is correctly marked as optional. It's exclusively used in test code for maintaining readable Ethereum transaction test data, with no duplicate functionality in the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check hex-literal usage in tests

# Search for hex literal usage in test files
fd -e rs -p 'test.*\.rs$' | xargs rg 'hex_literal|hex!'

Length of output: 7172


Script:

#!/bin/bash
# Check for other hex handling utilities
rg -g '!target/' -g '!*.lock' 'hex|decode_hex|encode_hex|from_hex|to_hex' -l

Length of output: 8980


8-8: Verify blockscout-db compatibility.

The git dependency has been updated to a new commit. Please ensure the changes in the new version are compatible with the current implementation.

Run this script to check for breaking changes:

✅ Verification successful

✓ Dependency update is safe to use

The blockscout-db update only adds new tables and types for user operations support, with no breaking changes to existing schema. The changes are compatible with current usage patterns in the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for breaking changes in blockscout-db usage

# Get the diff between old and new versions
git fetch https://github.com/blockscout/blockscout-rs
git diff fd205eb1..8558040 -- '*.rs' | grep '^[+-]' | grep -v '^[+-]\{3\}'

# Search for usage patterns that might be affected
fd -e rs | xargs rg 'blockscout_db|BlockscoutDb'

Length of output: 4126


Script:

#!/bin/bash
# Description: Check for breaking changes using GitHub API

# Get the commit details and changes
gh api repos/blockscout/blockscout-rs/commits/8558040 --jq '.files[].filename'
gh api repos/blockscout/blockscout-rs/compare/master...8558040 --jq '.files[].filename'

# Check entity changes specifically
gh api repos/blockscout/blockscout-rs/compare/master...8558040 --jq '.files[] | select(.filename | contains("entity")) | .filename'

Length of output: 651


Script:

#!/bin/bash
# Description: Examine the migration changes

# Get the migration file content
gh api repos/blockscout/blockscout-rs/contents/libs/blockscout-db/migration/src/migration.sql?ref=8558040 --jq '.content' | base64 -d

# Also check if there are any entity changes in the commit
gh api repos/blockscout/blockscout-rs/commits/8558040 --jq '.files[].patch'

Length of output: 6035

stats/stats/src/charts/lines/txns_success_rate.rs (1)

198-198: Verify transaction success rate calculation.

The yearly success rate for 2023 has been updated to ~0.857 (6/7 successful transactions), which aligns with the updated transaction counts. Please verify that this ratio correctly accounts for both regular transactions and user operations.

✅ Verification successful

Transaction success rate calculation verified.

The implementation correctly calculates the success rate by counting all transactions (including user operations) where error IS NULL and dividing by the total number of valid transactions, excluding dropped/replaced ones. The 0.857 ratio (6/7 successful transactions) for 2023 is consistent with this calculation method.
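As a quick sanity check on the arithmetic (6 of 7 valid transactions succeeded), the reported yearly value follows directly:

```rust
fn main() {
    let successful = 6.0_f64;
    let total = 7.0_f64;
    let rate = successful / total;
    // 6/7 = 0.857142..., matching the updated yearly value for 2023.
    assert!((rate - 0.857).abs() < 0.001);
    println!("success rate = {rate:.3}");
}
```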

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify success rate calculation logic
# Check if user operations are included in success rate calculations
rg -A 5 "success.*rate|error.*null" 

Length of output: 128299


Script:

#!/bin/bash
# Search for TxnsSuccessRate implementation and related SQL queries
ast-grep --pattern 'impl $_ for TxnsSuccessRate {
  $$$
}'

# Also search for SQL queries related to transaction success rate
rg -l "SELECT.*success|txn.*success" --type rust

Length of output: 247


Script:

#!/bin/bash
# Examine the content of the success rate implementation
cat stats/stats/src/charts/lines/txns_success_rate.rs

# Also check completed transactions implementation for context
cat stats/stats/src/charts/counters/completed_txns.rs

Length of output: 10206

stats/stats/src/charts/lines/new_txns_window.rs (1)

150-153: LGTM! Window-based transaction tracking is properly maintained.

The changes to the window-based transaction counts are consistent with the updates in other files and correctly handle:

  • Initial window population with updated counts
  • Proper sliding window behavior
  • Correct removal of out-of-window values
  • Accurate reindexing updates

Also applies to: 170-173, 192-194
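The sliding-window behaviour described above can be illustrated with a minimal sketch (hypothetical; the actual chart logic in new_txns_window.rs operates on database rows, not an in-memory deque):

```rust
use std::collections::VecDeque;

/// Minimal fixed-size sliding window: pushing a new day's count
/// evicts the oldest value once the window is full.
struct Window {
    size: usize,
    values: VecDeque<u64>,
}

impl Window {
    fn new(size: usize) -> Self {
        Self { size, values: VecDeque::new() }
    }

    fn push(&mut self, count: u64) {
        if self.values.len() == self.size {
            // Oldest value fell out of the window.
            self.values.pop_front();
        }
        self.values.push_back(count);
    }
}

fn main() {
    let mut w = Window::new(3);
    for count in [5, 7, 9, 11] {
        w.push(count);
    }
    // After four pushes into a 3-wide window, 5 has been evicted.
    assert_eq!(w.values.iter().copied().collect::<Vec<_>>(), vec![7, 9, 11]);
    println!("{:?}", w.values);
}
```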

stats/stats/src/charts/lines/new_txns.rs (4)

158-161: Verify mathematical consistency across time resolutions.

The increases in transaction counts are proportionally reflected across daily, weekly, monthly, and yearly aggregations. Please verify that these numbers align with the expected aggregation logic:

  • Weekly total for "2022-11-07" (42) should match the sum of daily transactions for that week
  • Monthly total for "2022-11-01" (42) should match the sum of weekly totals for November
  • Yearly total for "2022-01-01" (48) should match the sum of monthly totals for 2022

Also applies to: 174-177, 189-189
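The consistency property being checked can be sketched as follows (the daily values here are placeholders, not the actual fixture numbers from the mock Blockscout data; only the invariant matters):

```rust
fn main() {
    // Hypothetical daily counts for the week starting 2022-11-07.
    let daily: [(&str, u64); 7] = [
        ("2022-11-07", 10),
        ("2022-11-08", 8),
        ("2022-11-09", 6),
        ("2022-11-10", 5),
        ("2022-11-11", 5),
        ("2022-11-12", 4),
        ("2022-11-13", 4),
    ];
    // A weekly data point must equal the sum of its daily points.
    let weekly: u64 = daily.iter().map(|(_, n)| n).sum();
    assert_eq!(weekly, 42);
    println!("weekly total = {weekly}");
}
```

The same invariant applies one level up: each monthly point should be the sum of its daily points, and each yearly point the sum of its monthly points.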


200-204: LGTM! Comprehensive test coverage.

The test suite effectively covers:

  • Different time resolutions (daily, weekly, monthly, yearly)
  • Range-based filtering
  • Both normalized and denormalized database schemas (via migration_variants)

Line range hint 27-65: LGTM! Well-optimized SQL implementation.

The SQL queries are well-implemented with:

  • Proper use of indexes (transactions_block_consensus_index)
  • Efficient schema handling for both normalized and denormalized cases
  • Correct consensus filtering

139-145: Verify test data alignment with database state.

The test values have been increased consistently across multiple dates, likely due to the addition of user operations tracking. Please ensure these updated values match the current database state in the test environment.

Run this script to verify the test data:

✅ Verification successful

Test data updates are consistent and properly structured

The updated test values maintain mathematical consistency across different time resolutions (daily, weekly, monthly, yearly) and align with the database schema changes. The test is correctly marked as ignored since it requires a database connection.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the test data matches the database state
# Note: This assumes the test database is accessible and contains the expected data

# Search for any SQL queries or database configuration that might affect these test values
rg -A 5 "SELECT.*FROM transactions" --type rust
rg -A 5 "migrations.*denormalization" --type rust

# Look for related test setup or fixtures
rg -A 5 "simple_test_chart_with_migration_variants" --type rust

Length of output: 37747


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
stats/stats/src/tests/mock_blockscout.rs (2)

207-218: Consider extracting magic numbers into named constants.

The user operation generation for useless blocks contains hardcoded values (21_000, 1_123_456_789) that are repeated elsewhere in the code.

+const DEFAULT_GAS: i64 = 21_000;
+const DEFAULT_GAS_PRICE: i64 = 1_123_456_789;
+
 if let Some(last_useless_block) = useless_blocks.last() {
     let (useless_txn, useless_user_op) =
-        mock_user_operation(last_useless_block, 21_000, 1_123_456_789, &accounts, 0);
+        mock_user_operation(last_useless_block, DEFAULT_GAS, DEFAULT_GAS_PRICE, &accounts, 0);

697-785: Consider breaking down the large mock data function.

The mock_user_operation function is quite long and handles multiple concerns. Consider splitting it into smaller, focused functions for better maintainability.

Consider refactoring into:

  1. create_mock_transaction
  2. create_mock_user_operation
  3. create_mock_user_operation_data

Example structure:

fn mock_user_operation(
    block: &blocks::ActiveModel,
    gas: i64,
    gas_price: i64,
    address_list: &[addresses::ActiveModel],
    index: i32,
) -> (transactions::ActiveModel, user_operations::ActiveModel) {
    let (block_number, block_hash, from_address, to_address) = 
        prepare_mock_data(block, address_list, index);
    
    let txn = create_mock_transaction(
        block_number, block_hash.clone(), from_address.clone(), 
        to_address.clone(), gas, gas_price, index
    );
    
    let op = create_mock_user_operation(
        block_number, block_hash, from_address, to_address, 
        gas, gas_price, index
    );
    
    (txn, op)
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1892dee and 231ba5d.

⛔ Files ignored due to path filters (1)
  • stats/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • stats/stats-server/src/server.rs (2 hunks)
  • stats/stats/Cargo.toml (2 hunks)
  • stats/stats/src/tests/mock_blockscout.rs (7 hunks)
  • stats/stats/src/update_group.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • stats/stats-server/src/server.rs
  • stats/stats/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Unit, doc and integration tests
🔇 Additional comments (6)
stats/stats/src/update_group.rs (2)

499-499: LGTM! Good simplification of the method signature.

The removal of the explicit lifetime parameter 'a is a good improvement as it reduces code complexity while maintaining the same functionality. The Rust compiler can automatically infer the lifetimes in this case.


513-515: LGTM! Good improvements to lifetime handling.

The changes improve the code by:

  1. Removing unnecessary explicit lifetime parameter
  2. Using the idiomatic '_ for elided lifetimes in UpdateParameters

These changes align with Rust's best practices while maintaining the same functionality.
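The elision pattern the comment refers to can be sketched as follows (simplified, hypothetical types; not the actual UpdateParameters definition):

```rust
// A simplified stand-in for a parameters struct borrowing some state.
struct UpdateParameters<'a> {
    db_url: &'a str,
}

// Before: fn make<'a>(db_url: &'a str) -> UpdateParameters<'a>
// After: the compiler infers the lifetime; '_ marks the elided borrow.
fn make(db_url: &str) -> UpdateParameters<'_> {
    UpdateParameters { db_url }
}

fn main() {
    let url = String::from("postgres://localhost/stats");
    let params = make(&url);
    assert_eq!(params.db_url, "postgres://localhost/stats");
    println!("{}", params.db_url);
}
```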

stats/stats/src/tests/mock_blockscout.rs (4)

5-7: LGTM! Clean import organization.

The new imports for user operations and related enums are properly organized and aligned with the feature additions.


106-115: LGTM! Well-structured user operations integration.

The integration of user operations into the mock data generation follows the existing pattern and maintains consistency with the codebase.


675-695: LGTM! Well-documented mock data generation.

The mock_user_operations function follows good practices:

  • Clear filtering logic for block selection
  • Proper documentation of index offset
  • Consistent with existing mock data patterns
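The filtering-plus-offset idea can be sketched roughly like this (a hypothetical illustration — the field names and offset value are made up, not taken from mock_blockscout.rs):

```rust
// Hypothetical sketch: keep only consensus blocks and start user-op
// indices after the plain mock transactions via an index offset.
struct MockBlock {
    number: i64,
    consensus: bool,
}

fn blocks_for_user_ops(blocks: &[MockBlock], index_offset: i32) -> Vec<(i64, i32)> {
    blocks
        .iter()
        .filter(|b| b.consensus) // skip non-consensus ("useless") blocks
        .enumerate()
        .map(|(i, b)| (b.number, index_offset + i as i32))
        .collect()
}

fn main() {
    let blocks = vec![
        MockBlock { number: 1, consensus: true },
        MockBlock { number: 2, consensus: false },
        MockBlock { number: 3, consensus: true },
    ];
    let ops = blocks_for_user_ops(&blocks, 100);
    assert_eq!(ops, vec![(1, 100), (3, 101)]);
    println!("{:?}", ops);
}
```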

796-796: LGTM! Clean migration function update.

The addition of the meta field with a None value is appropriate for the mock data context.

@bragov4ik
Copy link
Contributor Author

should I merge it into a separate branch 🤔
the next feature batch (about migrating charts) might require additional features (e.g. some env flag) and might well take a while to coordinate a release; a separate branch would let us work on the user ops independently (i.e. release it on stage as well? it's still useful hmm...)

@bragov4ik bragov4ik merged commit ee8a172 into main Jan 20, 2025
6 checks passed
@bragov4ik bragov4ik deleted the bragov4ik/stats-user-ops-charts branch January 20, 2025 14:24
@bragov4ik bragov4ik linked an issue Jan 21, 2025 that may be closed by this pull request
3 tasks
Successfully merging this pull request may close these issues.

stats: user operations charts
2 participants