
Feature: Correlation Between Lead Time and Pull Request Size #54

Merged: 1 commit, Nov 19, 2024

Conversation

AlexSim93
Owner

@AlexSim93 AlexSim93 commented Nov 19, 2024

Pull Request

Description

  • Added Correlation Graphs: Introduced graphs showing the correlation between lead time and pull request size.
  • New Metrics: Displayed the number of PRs without reviews and the number of PRs without approvals.
  • Removed Comment Count: Removed the display of comment counts from the contribution table.
  • Improved Time Calculation: Updated the calculation for time to review, approval, and merge. If an action did not occur, the closing time is no longer substituted. Instead, PRs without reviews or approvals are explicitly displayed.
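The revised fallback behavior can be sketched roughly like this (a minimal sketch; the names `diffInMinutes` and `firstReviewAt` are illustrative, not the project's actual identifiers):

```typescript
// Illustrative sketch only: when no review occurred, the metric stays
// undefined and the PR is counted as unreviewed, instead of falling back
// to the PR's closing time as the previous calculation did.
const diffInMinutes = (from: Date, to: Date): number =>
  Math.round((to.getTime() - from.getTime()) / 60_000);

const timeToReview = (
  createdAt: Date,
  firstReviewAt?: Date
): number | undefined =>
  firstReviewAt ? diffInMinutes(createdAt, firstReviewAt) : undefined;
```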

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Other (please specify)

How Has This Been Tested?

Generated reports for different months and teams and reviewed the output.

Checklist:

  • I have read the CONTRIBUTING document.
  • My code follows the code style of this project.
  • I have added tests to cover my changes.
  • My changes generate no new warnings.
  • I have updated the documentation accordingly.

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced new metrics for pull request tracking, including unreviewed and unapproved counts.
    • Added functionality to generate correlation graphs in issue comments.
    • Implemented size dependency XY chart generation for visualizing pull request metrics.
  • Bug Fixes

    • Adjusted PR size calculations for improved accuracy in categorization.
  • Documentation

    • Updated constants for better clarity in headers related to pull request statuses.
  • Refactor

    • Enhanced logic in various functions to streamline calculations and improve data handling.


coderabbitai bot commented Nov 19, 2024

Walkthrough

The pull request introduces substantial modifications to the Collection type in src/converters/types.ts, adding new optional properties for tracking pull request metrics. A new function, calcPRsize, is introduced to compute pull request sizes, utilizing a constant deletionCoefficient. The getPullRequestSize function is updated to leverage this new utility. Additional changes include enhancements to the preparePullRequestStats and preparePullRequestTimeline functions, which now incorporate new metrics and refined calculations. New utility functions for generating charts and reports are also added, improving the overall functionality of the codebase.
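Given the formula quoted later in this review (`additions + deletions * 0.2`), the new utility plausibly reduces to the following sketch; treating undefined inputs as zero is an assumption, not confirmed by the diff:

```typescript
// Sketch consistent with the review's description of
// src/converters/utils/calculations/calcPRsize.ts: deletions are weighted
// at 0.2x so they inflate PR size less than additions do.
const deletionCoefficient = 0.2;

const calcPRsize = (
  additions: number | undefined,
  deletions: number | undefined
): number => (additions ?? 0) + (deletions ?? 0) * deletionCoefficient;
```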

Changes

File path - change summary:
  • src/converters/types.ts - Added optional properties unreviewed?: number, unapproved?: number, and sizes?: Record<string, {...}> to the Collection type.
  • src/converters/utils/calculations/calcPRsize.ts - Added calcPRsize(additions: number | undefined, deletions: number | undefined).
  • src/converters/utils/calculations/constants.ts - Added constant export const deletionCoefficient = 0.2;.
  • src/converters/utils/calculations/getPullRequestSize.ts - Updated getPullRequestSize(additions: number | undefined, deletions: number | undefined) to use calcPRsize.
  • src/converters/utils/calculations/index.ts - Added export of deletionCoefficient from ./constants.
  • src/converters/utils/preparePullRequestStats.ts - Updated preparePullRequestStats(collection: Collection) to calculate the sizes property of the collection.
  • src/converters/utils/preparePullRequestTimeline.ts - Modified to use getPullRequestSize and calcPRsize; updated handling of timeToReview and timeToApprove; added the unreviewed and unapproved properties.
  • src/createOutput.ts - Posts comments with correlation graphs when SHOW_CORRELATION_GRAPHS is true.
  • src/view/utils/constants.ts - Updated additionsDeletionsHeader to "Additions / Deletions"; added unreviewedPrsHeader and unapprovedPrsHeader.
  • src/view/utils/createConfigParamsCode.ts - Added the "SHOW_CORRELATION_GRAPHS" parameter to the report settings.
  • src/view/utils/createDependencyMarkdown.ts - Added createDependencyMarkdown(data, users, references).
  • src/view/utils/createReviewTable.ts - Changed the PR size formula to additions + deletions * 0.2.
  • src/view/utils/createSizeDependencyXYChart.ts - Added createSizeDependencyXYChart(data, type, user).
  • src/view/utils/createTimelineMonthComparisonChart.ts - Tightened date filtering in createTimelineMonthComparisonChart.
  • src/view/utils/createTotalTable.ts - Added the unreviewedPrsHeader and unapprovedPrsHeader columns; updated the PR size formula.
  • src/view/utils/index.ts - Added export of createDependencyMarkdown from ./createDependencyMarkdown.
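Combining the size formula with the bucket thresholds quoted later in this review (0-50: xs, 51-200: s, 201-400: m, 401-700: l, 701+: xl) gives a sketch of the categorization logic; defaulting missing counts to zero is an assumption:

```typescript
type PullRequestSize = "xs" | "s" | "m" | "l" | "xl";

// Weighted size: deletions count at 0.2x, per the documented formula.
const calcPRsize = (additions = 0, deletions = 0): number =>
  additions + deletions * 0.2;

// Thresholds as stated in createTotalTable.ts's description string.
const getPullRequestSize = (
  additions?: number,
  deletions?: number
): PullRequestSize => {
  const size = calcPRsize(additions, deletions);
  if (size <= 50) return "xs";
  if (size <= 200) return "s";
  if (size <= 400) return "m";
  if (size <= 700) return "l";
  return "xl";
};
```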

Possibly related PRs

  • Feature: Added largest PRs list #53: The changes in the PullRequestTimelineInfo type in src/converters/types.ts are related to the modifications made to the Collection type in the main PR, as both involve enhancements to the data structures that track pull request metrics.

Poem

🐰 In the burrow where code does play,
New metrics hop and dance today.
With sizes tracked and graphs to show,
Pull requests bloom, watch them grow!
Unreviewed and unapproved, we cheer,
A brighter path for all, oh dear! 🌼



Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary or @coderabbitai review to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@AlexSim93
Owner Author

@coderabbitai review


coderabbitai bot commented Nov 19, 2024

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Outside diff range and nitpick comments (18)
src/converters/utils/calculations/calcPRsize.ts (2)

1-8: Add JSDoc documentation explaining the PR size calculation

This function implements an important metric for PR size calculation, but lacks documentation explaining the formula and the purpose of the deletion coefficient.

Add comprehensive documentation:

import { deletionCoefficient } from "./constants";

+/**
+ * Calculates the size of a pull request based on the number of additions and deletions.
+ * The formula weights deletions differently from additions using a coefficient to better
+ * reflect the actual complexity and review effort required.
+ *
+ * @param additions - The number of lines added in the PR
+ * @param deletions - The number of lines deleted in the PR
+ * @returns A numerical value representing the PR size
+ *
+ * @example
+ * calcPRsize(100, 50) // returns 100 + (50 * 0.2) = 110
+ */
export const calcPRsize = (

1-8: Add unit tests for the PR size calculation

The PR objectives mention that tests were not added. This utility function is critical for the PR size correlation feature and should be thoroughly tested.

Would you like me to help generate comprehensive unit tests covering:

  • Basic addition/deletion calculations
  • Edge cases (undefined inputs, zero values)
  • Negative value handling
  • Coefficient application accuracy
src/converters/utils/calculations/getPullRequestSize.ts (1)

Line range hint 10-24: Consider documenting size thresholds.

The size thresholds (50, 200, 400, 700) play a crucial role in PR categorization. Consider adding documentation explaining the rationale behind these thresholds and their impact on lead time correlation analysis.

 export const getPullRequestSize = (
   additions: number | undefined,
   deletions: number | undefined
 ): PullRequestSize => {
+  // Size thresholds based on weighted sum of additions and deletions:
+  // xs: ≤50 - Very small changes like typo fixes
+  // s:  ≤200 - Small feature changes or bug fixes
+  // m:  ≤400 - Medium-sized features or multiple bug fixes
+  // l:  ≤700 - Large features or significant refactoring
+  // xl: >700 - Very large changes that might need special attention
   const size = calcPRsize(additions, deletions);
src/converters/utils/calculations/index.ts (1)

13-13: Consider documenting the coefficient's purpose.

Since deletionCoefficient is used in PR size calculations, it would be helpful to document its purpose and impact on the size calculation formula in the constants file.

Would you like me to help generate the documentation for the constant?

src/view/utils/createConfigParamsCode.ts (1)

Line range hint 6-41: Consider grouping related configuration parameters.

The configuration parameters could be organized into logical groups for better maintainability. For example:

  • Repository/Organization settings
  • Time-related settings
  • Display/UI settings
  • Filtering settings

Consider applying this grouping:

 ${[
   // Repository settings
   "GITHUB_OWNERS_REPOS",
   "ORGANIZATIONS",
   "GITHUB_OWNER_FOR_ISSUE",
   "GITHUB_REPO_FOR_ISSUE",
   
   // Time settings
   "REVIEW_TIME_INTERVALS",
   "APPROVAL_TIME_INTERVALS",
   "MERGE_TIME_INTERVALS",
   "TIMEZONE",
   "CORE_HOURS_START",
   "CORE_HOURS_END",
   "HOLIDAYS",
   "REPORT_PERIOD",
   "REPORT_DATE_START",
   "REPORT_DATE_END",
   
   // Display settings
   "SHOW_STATS_TYPES",
   "SHOW_CORRELATION_GRAPHS",
   "USE_CHARTS",
   "TOP_LIST_AMOUNT",
   
   // Filtering settings
   "ASSIGNEES",
   "LABELS",
   "HIDE_USERS",
   "SHOW_USERS",
   "INCLUDE_LABELS",
   "EXCLUDE_LABELS",
   
   // Other settings
   "AGGREGATE_VALUE_METHODS",
   "PERCENTILE",
   "AMOUNT",
   "PERIOD_SPLIT_UNIT",
   "EXECUTION_OUTCOME",
   "ISSUE_NUMBER",
 ]
src/view/utils/createDependencyMarkdown.ts (1)

7-11: Add JSDoc documentation for better maintainability

While the types are well-defined, adding JSDoc documentation would improve maintainability by explaining:

  • The purpose of the function
  • The expected structure of the data parameter
  • The format requirements for users
  • The impact of references on the output

Example improvement:

+/**
+ * Creates a markdown string containing dependency charts for user data
+ * @param data - Nested record of collections containing user statistics
+ * @param users - Array of user IDs to generate charts for
+ * @param references - Optional array of reference links to include
+ * @returns Markdown string containing charts and references
+ */
 export const createDependencyMarkdown = (
src/view/utils/constants.ts (1)

10-11: Consider using full words instead of abbreviations.

While the constants are functionally correct and align with the PR objectives to track unreviewed and unapproved PRs, consider using "without" instead of "w/o" for better clarity:

-export const unreviewedPrsHeader = "PRs w/o review";
-export const unapprovedPrsHeader = "PRs w/o approval";
+export const unreviewedPrsHeader = "PRs without review";
+export const unapprovedPrsHeader = "PRs without approval";
src/view/utils/createSizeDependencyXYChart.ts (4)

1-5: Consider enhancing type safety for size categories

The sizes array could benefit from being defined as a union type or enum to ensure type safety throughout the codebase.

+type PRSize = "xs" | "s" | "m" | "l" | "xl";
-const sizes = ["xs", "s", "m", "l", "xl"];
+const sizes: PRSize[] = ["xs", "s", "m", "l", "xl"];

7-11: Consider simplifying the data structure

The nested Record type Record<string, Record<string, Collection>> suggests a complex data structure that might be difficult to maintain. Consider creating a dedicated interface to represent this structure more explicitly.

interface UserStats {
  [userId: string]: {
    [category: string]: Collection;
  };
}

13-15: Simplify title construction logic

The title construction logic is complex and could be simplified for better readability.

-    title: `Pull request's time/size graph(${
-      type === "percentile" ? parseInt(getValueAsIs("PERCENTILE")) : ""
-    }${type === "percentile" ? "th " : ""}${type}) ${user}`,
+    title: createChartTitle(type, user),

+function createChartTitle(type: StatsType, user: string): string {
+  const percentile = type === "percentile" ? `${parseInt(getValueAsIs("PERCENTILE"))}th ` : "";
+  return `Pull request's time/size graph(${percentile}${type}) ${user}`;
+}

7-75: Add unit tests for the chart creation utility

The PR objectives mention that tests were not added. This utility function would benefit from unit tests to verify:

  • Correct calculation of time conversions
  • Proper handling of null/undefined values in the data structure
  • Accurate chart configuration generation

Would you like me to help create unit tests for this utility function?

src/view/utils/createTotalTable.ts (1)

57-57: Consider documenting the new metrics in the description.

While the PR size calculation is well documented, consider adding explanations for the new "unreviewed" and "unapproved" metrics to help users understand these columns.

-        "**Reviews conducted** - number of reviews conducted. 1 PR may have only single review.\n**PR Size** - determined using the formula: `additions + deletions * 0.2`. Based on this calculation: 0-50: xs, 51-200: s, 201-400: m, 401-700: l, 701+: xl\n**Total reverted PRs** - The number of reverted PRs based on the branch name pattern `/^revert-d+/`. This pattern is used for reverts made via GitHub.",
+        "**Reviews conducted** - number of reviews conducted. 1 PR may have only single review.\n**Unreviewed PRs** - number of PRs that haven't received any reviews.\n**Unapproved PRs** - number of PRs that haven't received approval.\n**PR Size** - determined using the formula: `additions + deletions * 0.2`. Based on this calculation: 0-50: xs, 51-200: s, 201-400: m, 401-700: l, 701+: xl\n**Total reverted PRs** - The number of reverted PRs based on the branch name pattern `/^revert-d+/`. This pattern is used for reverts made via GitHub.",
src/converters/types.ts (2)

53-54: LGTM! Consider future extensibility.

The new optional properties align well with the PR objectives for tracking unreviewed and unapproved PRs. The implementation is type-safe and consistent with existing patterns.

Consider creating a separate interface for tracking metrics in the future if more tracking properties are added, to keep the Collection type manageable:

interface TrackingMetrics {
  unreviewed?: number;
  unapproved?: number;
  // future metrics
}

type Collection = {
  tracking?: TrackingMetrics;
  // other properties
}

86-108: LGTM! Consider extracting reusable types.

The structure effectively captures size-based metrics with proper typing for both raw data and statistical calculations. However, the nested structure could be more maintainable if broken down into smaller, reusable types.

Consider extracting the repeated time metric types into separate interfaces:

interface TimeMetrics {
  timeToApprove?: number;
  timeToReview?: number;
  timeToMerge?: number;
}

interface SizeMetrics {
  timeToApprove: number[];
  timeToReview: number[];
  timeToMerge: number[];
  percentile: TimeMetrics;
  average: TimeMetrics;
  median: TimeMetrics;
}

type Collection = {
  // ...other properties
  sizes?: Record<string, SizeMetrics>;
}

This refactoring would:

  1. Reduce repetition in the type definition
  2. Make it easier to maintain consistent time metric properties
  3. Allow reuse of these types in other parts of the codebase
src/createOutput.ts (2)

61-61: Consider using boolean configuration value.

The string comparison === "true" for configuration is error-prone and inconsistent with boolean configuration patterns.

Consider using a boolean value:

-if (getValueAsIs("SHOW_CORRELATION_GRAPHS") === "true") {
+const showCorrelationGraphs = getValueAsIs("SHOW_CORRELATION_GRAPHS") === "true";
+if (showCorrelationGraphs) {

61-77: Consider extracting comment creation logic to reduce duplication.

The correlation graphs feature follows a similar pattern to the timeline comparison feature above it. Consider extracting the common pattern to reduce code duplication and improve maintainability.

Consider creating a utility function:

async function createCommentFromMarkdown(
  issueNumber: number,
  markdown: string,
  title: string
): Promise<{ comment: any; title: string }> {
  const comment = await octokit.rest.issues.createComment({
    repo: getValueAsIs("GITHUB_REPO_FOR_ISSUE"),
    owner: getValueAsIs("GITHUB_OWNER_FOR_ISSUE"),
    issue_number: issueNumber,
    body: markdown,
  });
  return { comment, title };
}

Then use it for both timeline and correlation graphs:

if (monthComparison) {
  comments.push(
    await createCommentFromMarkdown(
      issue.data.number,
      monthComparison,
      "retrospective timeline"
    )
  );
}

if (showCorrelationGraphs) {
  const dependencyMarkdown = createDependencyMarkdown(/*...*/);
  if (dependencyMarkdown) {
    comments.push(
      await createCommentFromMarkdown(
        issue.data.number,
        dependencyMarkdown,
        "Correlation Graphs"
      )
    );
  }
}
src/converters/utils/preparePullRequestStats.ts (1)

37-60: Improve code readability and maintainability.

The current implementation could be improved in several ways:

  1. Use destructuring in the reducer parameters
  2. Extract the metrics calculation into a separate function
  3. Use more descriptive variable names

Consider refactoring to:

- sizes: Object.entries(collection?.sizes || {}).reduce(
-   (acc, el) => ({
-     ...acc,
-     [el[0]]: {
-       ...acc[el[0]],
-       percentile: {
-         timeToReview: calcPercentileValue(acc[el[0]]?.timeToReview),
-         timeToApprove: calcPercentileValue(acc[el[0]]?.timeToApprove),
-         timeToMerge: calcPercentileValue(acc[el[0]]?.timeToMerge),
-       },
-       median: {
-         timeToReview: calcMedianValue(acc[el[0]]?.timeToReview),
-         timeToApprove: calcMedianValue(acc[el[0]]?.timeToApprove),
-         timeToMerge: calcMedianValue(acc[el[0]]?.timeToMerge),
-       },
-       average: {
-         timeToReview: calcAverageValue(acc[el[0]]?.timeToReview),
-         timeToApprove: calcAverageValue(acc[el[0]]?.timeToApprove),
-         timeToMerge: calcAverageValue(acc[el[0]]?.timeToMerge),
-       },
-     },
-   }),
-   collection?.sizes || {}
- ),
+ const calculateMetrics = (data: typeof collection.sizes[string]) => ({
+   percentile: {
+     timeToReview: calcPercentileValue(data?.timeToReview),
+     timeToApprove: calcPercentileValue(data?.timeToApprove),
+     timeToMerge: calcPercentileValue(data?.timeToMerge),
+   },
+   median: {
+     timeToReview: calcMedianValue(data?.timeToReview),
+     timeToApprove: calcMedianValue(data?.timeToApprove),
+     timeToMerge: calcMedianValue(data?.timeToMerge),
+   },
+   average: {
+     timeToReview: calcAverageValue(data?.timeToReview),
+     timeToApprove: calcAverageValue(data?.timeToApprove),
+     timeToMerge: calcAverageValue(data?.timeToMerge),
+   },
+ });
+
+ sizes: Object.entries(collection?.sizes || {}).reduce(
+   (acc, [size, data]) => ({
+     ...acc,
+     [size]: {
+       ...data,
+       ...calculateMetrics(data),
+     },
+   }),
+   collection?.sizes || {}
+ ),
🧰 Tools
🪛 Biome

[error] 39-39: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)
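For reference, a spread-free variant that satisfies noAccumulatingSpread could look like this sketch; mergeSizes and SizeData are illustrative names, not part of the codebase:

```typescript
// Sketch: O(n) alternative to spreading the accumulator inside .reduce().
// A fresh result object is mutated in a loop, so there is no per-iteration
// copy of the accumulator; spreading each entry's own data stays O(1).
type SizeData = {
  timeToReview?: number[];
  timeToApprove?: number[];
  timeToMerge?: number[];
};

const mergeSizes = (
  sizes: Record<string, SizeData>,
  calculateMetrics: (data: SizeData) => object
): Record<string, object> => {
  const result: Record<string, object> = {};
  for (const [size, data] of Object.entries(sizes)) {
    result[size] = { ...data, ...calculateMetrics(data) };
  }
  return result;
};
```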

src/converters/utils/preparePullRequestTimeline.ts (1)

118-141: Refactor 'sizes' object construction for better readability

The nested ternary operations and spread syntax in the sizes object construction make the code difficult to read and maintain. Consider refactoring to improve clarity and reduce complexity.

For example, you can abstract the repetitive logic into a helper function:

const addTimeMetric = (metricName, timeValue) => {
  return timeValue !== undefined && timeValue !== null
    ? [
        ...(collection?.sizes?.[pullRequestSize]?.[metricName] || []),
        timeValue,
      ]
    : collection?.sizes?.[pullRequestSize]?.[metricName];
};

sizes: {
  ...(collection.sizes || {}),
  [pullRequestSize]: {
    ...(collection.sizes?.[pullRequestSize] || {}),
    timeToReview: addTimeMetric('timeToReview', timeToReview),
    timeToApprove: addTimeMetric('timeToApprove', timeToApprove),
    timeToMerge: addTimeMetric('timeToMerge', timeToMerge),
  },
},
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 2bb1d1a and 17361fd.

⛔ Files ignored due to path filters (4)
  • README.md is excluded by none and included by none
  • action.yml is excluded by none and included by none
  • build/index.js is excluded by !build/** and included by none
  • package.json is excluded by none and included by none
📒 Files selected for processing (16)
  • src/converters/types.ts (2 hunks)
  • src/converters/utils/calculations/calcPRsize.ts (1 hunks)
  • src/converters/utils/calculations/constants.ts (1 hunks)
  • src/converters/utils/calculations/getPullRequestSize.ts (1 hunks)
  • src/converters/utils/calculations/index.ts (1 hunks)
  • src/converters/utils/preparePullRequestStats.ts (1 hunks)
  • src/converters/utils/preparePullRequestTimeline.ts (5 hunks)
  • src/createOutput.ts (2 hunks)
  • src/view/utils/constants.ts (1 hunks)
  • src/view/utils/createConfigParamsCode.ts (1 hunks)
  • src/view/utils/createDependencyMarkdown.ts (1 hunks)
  • src/view/utils/createReviewTable.ts (1 hunks)
  • src/view/utils/createSizeDependencyXYChart.ts (1 hunks)
  • src/view/utils/createTimelineMonthComparisonChart.ts (1 hunks)
  • src/view/utils/createTotalTable.ts (3 hunks)
  • src/view/utils/index.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/converters/utils/calculations/constants.ts
🧰 Additional context used
🪛 Biome
src/converters/utils/preparePullRequestStats.ts

[error] 39-39: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

🔇 Additional comments (17)
src/converters/utils/calculations/calcPRsize.ts (1)

1-8: Verify the deletion coefficient value's impact

The deletion coefficient of 0.2 significantly reduces the weight of deletions in the PR size calculation. Let's verify this aligns with your team's needs.

✅ Verification successful

Let me gather more context about the PR size categorization to better understand the impact of the deletion coefficient.


Let me check the documentation to understand if there's any rationale provided for the deletion coefficient value.


The deletion coefficient of 0.2 is appropriate for this codebase

The documentation and usage patterns show that this PR size calculation is used for categorizing PRs into size buckets (xs/s/m/l/xl) for analytics purposes. The lower weight for deletions (0.2) is actually beneficial because:

  1. The correlation graphs in the documentation show that even smaller PRs (xs/s) have meaningful review times, suggesting the current size categorization is working well for analysis.
  2. The workload stats across multiple months show a good distribution of PR sizes, with most PRs falling into xs/s categories, which is a healthy pattern.
  3. The formula helps prevent large-scale deletions (like removing deprecated code) from artificially inflating PR sizes and skewing the analytics.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Analyze the impact of the deletion coefficient on PR sizes

# Find the constant definition
echo "Current deletion coefficient:"
rg "deletionCoefficient.*=.*0\.2" -A 2

# Find existing PR size calculations to understand typical values
echo "\nCurrent usage in codebase:"
rg "calcPRsize\(.*\)" -A 2

Length of output: 631


Script:

#!/bin/bash
# Find the complete PR size categorization logic
echo "PR size categorization logic:"
rg -A 10 "const size = calcPRsize" 

# Look for any tests or documentation about PR size calculation
echo "\nTests or documentation about PR size:"
rg -l "PR size|pull request size" --type md
rg -l "test.*PR.*size|test.*calcPR" --type ts

Length of output: 1155


Script:

#!/bin/bash
# Check documentation for any mention of deletion coefficient or PR size calculation
echo "Documentation about PR size calculation:"
cat README.md examples/periodReport.md examples/nPRsReport.md | grep -A 5 -B 5 "size\|deletion"

# Get the complete PR size calculation logic file
echo "\nComplete PR size calculation implementation:"
cat src/converters/utils/calculations/getPullRequestSize.ts

Length of output: 21626

src/converters/utils/calculations/getPullRequestSize.ts (2)

1-2: LGTM! Good separation of concerns.

The extraction of size calculation logic into a separate utility function improves code maintainability and reusability.


9-9: Verify the impact of the coefficient change.

The change from 0.5 to 0.2 for the deletion coefficient in calcPRsize significantly affects how deletions contribute to the PR size calculation. This could impact PR size categorization and consequently affect metrics and correlations.

✅ Verification successful

Impact of deletion coefficient change is limited and well-defined

The change from 0.5 to 0.2 for the deletion coefficient has a clear and contained impact:

  1. It affects the raw size points calculation (calcPRsize) which weighs deletions at 0.2x instead of 0.5x
  2. These size points are used to categorize PRs into fixed size buckets (xs: ≤50, s: ≤200, m: ≤400, l: ≤700, xl: >700)
  3. The only visualization using raw size points is the total table sorting, which is a non-critical display order

The change makes deletions count less towards the total size, which better reflects that deletions often require less review effort than additions. The fixed size thresholds remain unchanged, providing stable categorization boundaries.
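As a hypothetical numeric check of this claim, a deletion-heavy PR (say 100 additions, 1000 deletions) drops a full bucket under the new coefficient, assuming the documented thresholds (m: 201-400, l: 401-700):

```typescript
// Hypothetical before/after comparison of the deletion coefficient.
const sizeWith = (
  coefficient: number,
  additions: number,
  deletions: number
): number => additions + deletions * coefficient;

const oldSize = sizeWith(0.5, 100, 1000); // 600, lands in the "l" bucket
const newSize = sizeWith(0.2, 100, 1000); // 300, lands in the "m" bucket
```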

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Analyze the impact of coefficient change on PR size categorization
# Look for patterns of PR size usage in the codebase to understand potential impacts

# Find files using PR size categories
echo "Files potentially affected by PR size changes:"
rg -l "xs\"|s\"|m\"|l\"|xl\"" 

# Look for size-based logic or metrics
echo "\nSize-based logic or metrics:"
rg -A 3 "getPullRequestSize|calcPRsize"

# Check for any size-based configurations
echo "\nSize-based configurations:"
fd -e json -e yaml -e yml -e config | xargs rg "size|threshold"

Length of output: 12483


Script:

#!/bin/bash
# Let's analyze the specific implementation and usage of the PR size calculation

# Check the implementation of calcPRsize and its coefficient
echo "PR size calculation implementation:"
cat src/converters/utils/calculations/calcPRsize.ts

# Check the size thresholds and categorization
echo -e "\nPR size categorization logic:"
cat src/converters/utils/calculations/getPullRequestSize.ts

# Check where size points are used in visualization/reporting
echo -e "\nSize points usage in visualization:"
rg -A 3 "sizePoints" src/view/

# Check if there are any size-based configurations or thresholds
echo -e "\nSize configurations:"
cat src/converters/utils/calculations/constants.ts

Length of output: 1583

src/converters/utils/calculations/index.ts (1)

13-13: LGTM! Verify the usage of deletionCoefficient.

The export addition is clean and well-organized. Let's verify its usage in the codebase.

✅ Verification successful

Export is correctly used in PR size calculations

The deletionCoefficient is properly imported and used in calcPRsize.ts to weight deletions differently (0.2x) when calculating PR size. The constant is well-organized within the calculations module and its usage aligns with the PR's objectives.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of deletionCoefficient constant
# Expected: Should be used in PR size calculations

# Check for imports of deletionCoefficient
rg -l "import.*deletionCoefficient.*from.*calculations" 

# Check for direct usage of deletionCoefficient
rg "deletionCoefficient" --type ts

Length of output: 488

src/view/utils/createConfigParamsCode.ts (1)

20-20: LGTM! The new configuration parameter aligns with the PR objective.

The addition of "SHOW_CORRELATION_GRAPHS" follows the existing naming convention and integrates well with the configuration generation logic.

src/view/utils/createDependencyMarkdown.ts (2)

1-5: LGTM! Well-organized imports

The imports are logically organized and all appear to be necessary for the function's implementation.


23-23: LGTM! Clean output construction

The output construction is clean and handles whitespace appropriately.

src/view/utils/index.ts (2)

1-1: LGTM! The new export aligns with PR objectives.

The addition of createDependencyMarkdown follows the established patterns and naming conventions while supporting the new correlation graph feature.


1-1: Verify the implementation of createDependencyMarkdown

Let's verify the implementation of the newly exported function to ensure it properly handles the correlation graph generation.

✅ Verification successful

Let me gather more information about the implementation details of createDependencyMarkdown.


Implementation of createDependencyMarkdown is properly handling correlation graph generation

The implementation correctly:

  • Filters users with sufficient data points (more than 1 size category)
  • Creates XY charts showing time/size correlations for review, approval, and merge times
  • Handles different aggregation methods (mean, median, percentile)
  • Properly formats the output with references and charts
  • Converts time values from minutes to hours with appropriate rounding
  • Uses consistent color coding (gold, chartreuse, blueviolet) for different metrics
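
The minutes-to-hours conversion noted in the last bullet boils down to a single rounding expression (the helper name here is illustrative, not from the source):

```typescript
// Convert a duration in minutes to hours, rounded to two decimals,
// mirroring the Math.round((m / 60) * 100) / 100 pattern in the chart code.
const minutesToHours = (minutes: number): number =>
  Math.round((minutes / 60) * 100) / 100;

console.log(minutesToHours(90)); // → 1.5
console.log(minutesToHours(100)); // → 1.67
```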
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the implementation and usage of createDependencyMarkdown

# Check the implementation
echo "Checking createDependencyMarkdown implementation:"
ast-grep --pattern 'export function createDependencyMarkdown' -A 10

# Check its usage in createOutput
echo -e "\nChecking usage in createOutput:"
rg "createDependencyMarkdown" -A 5

# Verify the SHOW_CORRELATION_GRAPHS parameter handling
echo -e "\nChecking correlation graphs parameter handling:"
rg "SHOW_CORRELATION_GRAPHS" -A 5

Length of output: 10242


Script:

#!/bin/bash
# Check the full implementation of createDependencyMarkdown
cat src/view/utils/createDependencyMarkdown.ts

# Check the implementation of related utility functions
echo -e "\nChecking createSizeDependencyXYChart implementation:"
cat src/view/utils/createSizeDependencyXYChart.ts

Length of output: 3110

src/view/utils/createTimelineMonthComparisonChart.ts (1)

30-36: LGTM! The updated filtering logic aligns with PR objectives.

The modified date filtering now correctly handles PRs in different stages of the review process by including dates where any timing metric is present. This change supports the PR's goal of better tracking PRs without reviews or approvals.

Let's verify the impact of this change on the chart visualization:

✅ Verification successful

Based on the shell script results, I can now generate the final response:

The updated filtering logic is correct and properly handles partial timing data

The code inspection reveals that:

  • The function correctly processes the Collection type data structure which contains timing metrics
  • The filtering change aligns with the chart's usage in createOutput.ts
  • The OR condition in the filter is intentional as it supports tracking PRs at different stages (review/approve/merge)
  • The implementation properly handles the StatsType values ("average", "median", "percentile") with appropriate type safety
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential edge cases in the data that might affect visualization

# Look for instances where only some timing metrics are present
ast-grep --pattern 'timeToReview: $_' # Check review timing patterns
ast-grep --pattern 'timeToApprove: $_' # Check approval timing patterns
ast-grep --pattern 'timeToMerge: $_' # Check merge timing patterns

# Check for test cases or examples
rg -l 'createTimelineMonthComparisonChart' --type test

Length of output: 203


Script:

#!/bin/bash
# Let's try a different approach to verify the data structure and usage

# Find all files that might contain test data or usage of the chart
rg -l "createTimelineMonthComparisonChart"

# Look for type definitions or interfaces related to timing metrics
rg -A 5 "interface.*\{" --type typescript | grep -A 5 "time"

# Check for mock data or test data structures
rg -A 10 "const.*data.*=.*{" --type typescript | grep -A 10 "time"

# Look for any Jest/test files regardless of extension
fd -e test.ts -e test.tsx -e spec.ts -e spec.tsx

Length of output: 698


Script:

#!/bin/bash
# Let's fix the previous queries and look for more specific information

# Check the actual implementation and its usage
rg -A 5 "createTimelineMonthComparisonChart" src/view/utils/createTimelineMonthComparisonChart.ts

# Look for interfaces and types with corrected syntax
rg -A 5 "interface" src/view/utils/createTimelineMonthComparisonChart.ts
rg -A 5 "type" src/view/utils/createTimelineMonthComparisonChart.ts

# Check how the chart is used in createOutput
rg -A 5 "createTimelineMonthComparisonChart" src/createOutput.ts

# Look for test data structures with fixed syntax
rg "const.*data.*=" src/view/utils/createTimelineMonthComparisonChart.ts

Length of output: 2180

src/view/utils/createTotalTable.ts (3)

8-9: LGTM! Import changes align with new metrics.

The new imports for unreviewed and unapproved PR headers align well with the PR's objective to track PRs lacking reviews and approvals.


64-65: LGTM! Header changes match data structure.

The new headers are correctly positioned to match the corresponding data columns in tableRowsTotal.


28-29: Verify data structure compatibility.

The implementation looks correct, but let's verify that the Collection type properly includes these new fields.

✅ Verification successful

Data structure compatibility confirmed

The Collection type in src/converters/types.ts properly includes both unreviewed?: number and unapproved?: number fields, making the implementation fully type-safe.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify Collection type includes unreviewed and unapproved fields
# Expected: Find type definition with these fields

ast-grep --pattern 'type Collection = {
  $$$
  unreviewed?: number;
  unapproved?: number;
  $$$
}'

Length of output: 3450

src/view/utils/createReviewTable.ts (1)

61-61: Verify if size ranges need adjustment with new formula

The PR size calculation formula has been updated to use a lower deletion coefficient (0.2 instead of 0.5), which means PRs will generally be categorized as smaller sizes. Consider if the size ranges (0-50: xs, 51-200: s, etc.) should be adjusted accordingly to maintain a balanced distribution.

Let's analyze the impact of this change:
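To make the shift concrete, here is a sketch comparing the same PR under the old (0.5) and new (0.2) coefficients, assuming the xs/s cutoff of 50 points quoted above:

```typescript
// Size score under a given deletion coefficient.
const score = (additions: number, deletions: number, coefficient: number): number =>
  additions + deletions * coefficient;

// The same PR: 30 additions, 60 deletions.
console.log(score(30, 60, 0.5)); // → 60: above the 50-point cutoff, so "s"
console.log(score(30, 60, 0.2)); // → 42: now below the cutoff, so "xs"
```

PRs near category boundaries like this one will move down a bucket, which is why the ranges may warrant re-tuning.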

src/createOutput.ts (2)

7-7: LGTM! Import properly grouped with related utilities.

The addition of createDependencyMarkdown import aligns with the PR objectives and is correctly grouped with other view utility functions.


61-77: Verify correlation metrics alignment with PR objectives.

The implementation adds correlation graphs as specified in the PR objectives. However, let's verify that the createDependencyMarkdown function includes all required metrics:

  • Lead time vs PR size correlation
  • PRs without reviews
  • PRs without approvals
✅ Verification successful

Based on the analysis of the implementation, I can now provide the final response:

Correlation metrics implementation aligns with PR objectives

The implementation successfully includes all required correlation metrics:

  • Lead time vs PR size correlation through timeToMerge metric
  • Review time vs PR size correlation through timeToReview metric
  • Approval time vs PR size correlation through timeToApprove metric

The correlation graphs are generated for different PR sizes (xs, s, m, l, xl) and support multiple aggregation methods (percentile, average, median) as configured through AGGREGATE_VALUE_METHODS. The implementation is complete and properly integrated with the issue comment creation flow.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for the implementation of createDependencyMarkdown
ast-grep --pattern 'export function createDependencyMarkdown($_) {
  $$$
}'

Length of output: 77


Script:

#!/bin/bash
# Let's try searching for createDependencyMarkdown function with a more flexible pattern
ast-grep --pattern 'function createDependencyMarkdown'

# Also search for the file containing this function using ripgrep
rg "createDependencyMarkdown" -l

# Get the content of files containing this function for detailed analysis
rg "createDependencyMarkdown" -A 20

Length of output: 11874


Script:

#!/bin/bash
# Let's check the implementation of createSizeDependencyXYChart to understand what metrics are included
rg "createSizeDependencyXYChart" -A 20

# Also search for any configuration or constants related to correlation metrics
rg "AGGREGATE_VALUE_METHODS" -A 5

Length of output: 20777

src/converters/utils/preparePullRequestStats.ts (1)

37-60: Implementation aligns well with PR objectives.

The added code successfully implements the size-based metrics calculation needed for analyzing the correlation between lead time and pull request size. The structure allows for easy comparison of review, approval, and merge times across different PR sizes.

🧰 Tools
🪛 Biome

[error] 39-39: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

Comment on lines +3 to +8
export const calcPRsize = (
additions: number | undefined,
deletions: number | undefined
) => {
return (additions || 0) + (deletions || 0) * deletionCoefficient;
};

⚠️ Potential issue

Replace || with ?? to express nullish intent

The current implementation uses ||, which treats 0 as falsy. With a fallback of 0 the computed result happens to be identical, but ?? states the intent precisely (substitute only for undefined or null) and avoids subtle bugs if the fallback ever becomes non-zero.

Apply this diff to fix the issue:

export const calcPRsize = (
  additions: number | undefined,
  deletions: number | undefined
) => {
-  return (additions || 0) + (deletions || 0) * deletionCoefficient;
+  return (additions ?? 0) + (deletions ?? 0) * deletionCoefficient;
};
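
A quick illustration of how the two operators behave (with a fallback of 0 they agree; they diverge for any non-zero fallback):

```typescript
// `||` substitutes for any falsy value; `??` only for null/undefined.
const zero: number | undefined = 0;
const missing: number | undefined = undefined;

console.log(zero || 30);    // → 30 (0 is falsy, so || replaces it)
console.log(zero ?? 30);    // → 0  (?? keeps the legitimate 0)
console.log(missing ?? 30); // → 30 (undefined is nullish, so ?? substitutes)
```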

🛠️ Refactor suggestion

Add input validation for negative values

The function should validate inputs to ensure meaningful PR size calculations. Negative additions or deletions could lead to misleading metrics.

Consider this implementation:

export const calcPRsize = (
  additions: number | undefined,
  deletions: number | undefined
) => {
+  const validAdditions = Math.max(0, additions ?? 0);
+  const validDeletions = Math.max(0, deletions ?? 0);
-  return (additions || 0) + (deletions || 0) * deletionCoefficient;
+  return validAdditions + validDeletions * deletionCoefficient;
};

Comment on lines +12 to +21
const charts = users
.filter((user) => Object.keys(data[user]?.total?.sizes || {}).length > 1)
.map((user) => {
return getMultipleValuesInput("AGGREGATE_VALUE_METHODS")
.map((type) =>
createSizeDependencyXYChart(data, type as StatsType, user)
)
.join("\n");
})
.join("\n");

💡 Codebase verification

Type assertion is unnecessary and can be improved

The verification reveals that:

  1. getMultipleValuesInput is a generic string array utility that splits comma-separated values from environment/inputs
  2. StatsType is properly defined as "average" | "median" | "percentile" in types.ts
  3. The type assertion in the original code is unnecessary since we can validate the values at the input level

Here's a safer approach:

const methods = getMultipleValuesInput("AGGREGATE_VALUE_METHODS");
const validMethods = methods.filter((m): m is StatsType =>
  ["average", "median", "percentile"].includes(m)
);
if (validMethods.length !== methods.length) {
  throw new Error("Invalid aggregate value method");
}
return validMethods
  .map((type) => createSizeDependencyXYChart(data, type, user))
  .join("\n");
🔗 Analysis chain

Consider improving data processing robustness and type safety

A few suggestions to enhance the implementation:

  1. The filtering condition data[user]?.total?.sizes uses multiple optional chaining which could be made more explicit
  2. The type assertion type as StatsType suggests potential type safety issues

Consider this refactoring:

 const charts = users
-  .filter((user) => Object.keys(data[user]?.total?.sizes || {}).length > 1)
+  .filter((user) => {
+    const userSizes = data[user]?.total?.sizes;
+    return userSizes != null && Object.keys(userSizes).length > 1;
+  })
   .map((user) => {
-    return getMultipleValuesInput("AGGREGATE_VALUE_METHODS")
-      .map((type) =>
-        createSizeDependencyXYChart(data, type as StatsType, user)
-      )
+    const methods = getMultipleValuesInput("AGGREGATE_VALUE_METHODS") as StatsType[];
+    return methods
+      .map((type) => createSizeDependencyXYChart(data, type, user))
       .join("\n");
   })
   .join("\n");

Let's verify the possible values returned by getMultipleValuesInput:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for the definition and usage of AGGREGATE_VALUE_METHODS
rg -l "AGGREGATE_VALUE_METHODS"

# Search for StatsType definition to verify alignment
ast-grep --pattern 'type StatsType = $_'

Length of output: 582


Script:

#!/bin/bash
# Check the implementation of getMultipleValuesInput
rg -A 5 "getMultipleValuesInput.*=" 

# Check how AGGREGATE_VALUE_METHODS is defined/used
rg "AGGREGATE_VALUE_METHODS.*=" 

# Check createSizeDependencyXYChart implementation for type usage
ast-grep --pattern 'function createSizeDependencyXYChart($_: $_,$_: StatsType,$_: $_) {
  $$$
}'

Length of output: 9673

@@ -6,8 +6,9 @@ export const timeToMergeHeader = "Time to merge";
 export const totalMergedPrsHeader = "Total merged PRs";
 export const totalOpenedPrsHeader = "Total opened PRs";
 export const totalRevertedPrsHeader = "Total reverted PRs";
-export const additionsDeletionsHeader = "Additions/Deletions";
-export const reviewCommentsHeader = "Total comments";
+export const additionsDeletionsHeader = "Additions / Deletions";

💡 Codebase verification

Spacing around slashes is inconsistent in header constants

The codebase shows inconsistent spacing patterns around slashes in header constants:

  • "Additions / Deletions" (with spaces)
  • "Changes requested / Commented / Approved" (with spaces)
  • "PR size: xs/s/m/l/xl" (without spaces)

Consider standardizing the spacing pattern across all header constants for better consistency.

🔗 Analysis chain

LGTM! Consider checking spacing consistency.

The added spaces around the slash improve readability.

Let's verify spacing consistency in other similar headers:


Let's search for actual header constants with slashes to check their spacing pattern.


Let me fix the regex syntax:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistent spacing around slashes in header constants
# Expected: All similar headers should follow the same spacing pattern

rg -l "/.+" --type ts

Length of output: 3632


Script:

#!/bin/bash
# Search for string constants containing slashes in TypeScript files
rg -g "*.ts" '(const|let|var).*=.*".*\/.*"' -A 1

Length of output: 168


Script:

#!/bin/bash
# Search for string constants containing slashes in TypeScript files
rg -g "*.ts" "(const|let|var).*=.*['\"].*/.+['\"]" -A 1

Length of output: 1066

Comment on lines +19 to +33
max: Math.ceil(
Math.max(
...sizes.map((size) =>
Math.max(
...["timeToReview", "timeToApprove", "timeToMerge"].map(
(key) =>
data[user]?.total?.sizes?.[size]?.[type]?.[
key as "timeToReview" | "timeToApprove" | "timeToMerge"
] || 0
)
)
),
1
) / 60
),

🛠️ Refactor suggestion

Extract time conversion logic and simplify max calculation

The max value calculation is complex, and the minutes-to-hours conversion is repeated throughout the code (the stored timing values are minutes, as the rounding in the chart code shows).

const MINUTES_PER_HOUR = 60;

function convertMinutesToHours(minutes: number): number {
  return Math.round((minutes / MINUTES_PER_HOUR) * 100) / 100;
}

function getMaxTimeForSize(data: Record<string, Record<string, Collection>>, user: string, size: string, type: StatsType): number {
  const timeMetrics = ["timeToReview", "timeToApprove", "timeToMerge"] as const;
  return Math.max(...timeMetrics.map(metric => 
    data[user]?.total?.sizes?.[size]?.[type]?.[metric] || 0
  ));
}

Comment on lines +36 to +73
lines: [
{
color: "gold",
name: "Time\\ To\\ Review",
values: sizes.map(
(size) =>
Math.round(
((data[user]?.total?.sizes?.[size]?.[type]?.timeToReview || 0) /
60) *
100
) / 100
),
},
{
color: "chartreuse",
name: "Time\\ To\\ Approve",
values: sizes.map(
(size) =>
Math.round(
((data[user]?.total?.sizes?.[size]?.[type]?.timeToApprove || 0) /
60) *
100
) / 100
),
},
{
color: "blueviolet",
name: "Time\\ To\\ Merge",
values: sizes.map(
(size) =>
Math.round(
((data[user]?.total?.sizes?.[size]?.[type]?.timeToMerge || 0) /
60) *
100
) / 100
),
},
],

🛠️ Refactor suggestion

Reduce code duplication in line configurations

The line configurations contain repeated logic for time conversion and similar structure. Consider extracting common logic and using a configuration object.

interface LineConfig {
  metric: 'timeToReview' | 'timeToApprove' | 'timeToMerge';
  color: string;
  displayName: string;
}

const lineConfigs: LineConfig[] = [
  { metric: 'timeToReview', color: 'gold', displayName: 'Time\\ To\\ Review' },
  { metric: 'timeToApprove', color: 'chartreuse', displayName: 'Time\\ To\\ Approve' },
  { metric: 'timeToMerge', color: 'blueviolet', displayName: 'Time\\ To\\ Merge' }
];

function createLineData(config: LineConfig, data: Record<string, Record<string, Collection>>, user: string, type: StatsType) {
  return {
    color: config.color,
    name: config.displayName,
    values: sizes.map(size => 
convertMinutesToHours(data[user]?.total?.sizes?.[size]?.[type]?.[config.metric] || 0)
    )
  };
}

// Usage:
lines: lineConfigs.map(config => createLineData(config, data, user, type))

Comment on lines +61 to +77
if (getValueAsIs("SHOW_CORRELATION_GRAPHS") === "true") {
const dependencyComment = await octokit.rest.issues.createComment({
repo: getValueAsIs("GITHUB_REPO_FOR_ISSUE"),
owner: getValueAsIs("GITHUB_OWNER_FOR_ISSUE"),
issue_number: issue.data.number,
body: createDependencyMarkdown(data, users, [
{
title: "Pull Request report total",
link: `${issue.data.html_url}#`,
},
]),
});
comments.push({
comment: dependencyComment,
title: "Correlation Graphs",
});
}

🛠️ Refactor suggestion

Add error handling for the correlation graphs feature.

The new feature block lacks error handling for the API call and validation of the markdown generation.

Consider wrapping the code in a try-catch block and adding validation:

 if (getValueAsIs("SHOW_CORRELATION_GRAPHS") === "true") {
+  try {
+    const dependencyMarkdown = createDependencyMarkdown(data, users, [
+      {
+        title: "Pull Request report total",
+        link: `${issue.data.html_url}#`,
+      },
+    ]);
+    
+    if (!dependencyMarkdown) {
+      console.warn("Skipping correlation graphs: No data to display");
+      return;
+    }
+    
     const dependencyComment = await octokit.rest.issues.createComment({
       repo: getValueAsIs("GITHUB_REPO_FOR_ISSUE"),
       owner: getValueAsIs("GITHUB_OWNER_FOR_ISSUE"),
       issue_number: issue.data.number,
-      body: createDependencyMarkdown(data, users, [
-        {
-          title: "Pull Request report total",
-          link: `${issue.data.html_url}#`,
-        },
-      ]),
+      body: dependencyMarkdown,
     });
     comments.push({
       comment: dependencyComment,
       title: "Correlation Graphs",
     });
+  } catch (error) {
+    console.error("Failed to create correlation graphs:", error);
+    // Consider whether to re-throw or continue with other operations
+  }
 }

Comment on lines +37 to +60
sizes: Object.entries(collection?.sizes || {}).reduce(
(acc, el) => ({
...acc,
[el[0]]: {
...acc[el[0]],
percentile: {
timeToReview: calcPercentileValue(acc[el[0]]?.timeToReview),
timeToApprove: calcPercentileValue(acc[el[0]]?.timeToApprove),
timeToMerge: calcPercentileValue(acc[el[0]]?.timeToMerge),
},
median: {
timeToReview: calcMedianValue(acc[el[0]]?.timeToReview),
timeToApprove: calcMedianValue(acc[el[0]]?.timeToApprove),
timeToMerge: calcMedianValue(acc[el[0]]?.timeToMerge),
},
average: {
timeToReview: calcAverageValue(acc[el[0]]?.timeToReview),
timeToApprove: calcAverageValue(acc[el[0]]?.timeToApprove),
timeToMerge: calcAverageValue(acc[el[0]]?.timeToMerge),
},
},
}),
collection?.sizes || {}
),

⚠️ Potential issue

Address potential performance bottleneck in reducer.

The current implementation uses spread operator in the reducer which can lead to O(n²) time complexity as the accumulator grows with each iteration.

Consider using Object.assign or direct property assignment for better performance:

 sizes: Object.entries(collection?.sizes || {}).reduce(
-  (acc, el) => ({
-    ...acc,
-    [el[0]]: {
-      ...acc[el[0]],
-      percentile: {
-        timeToReview: calcPercentileValue(acc[el[0]]?.timeToReview),
-        timeToApprove: calcPercentileValue(acc[el[0]]?.timeToApprove),
-        timeToMerge: calcPercentileValue(acc[el[0]]?.timeToMerge),
-      },
-      // ... other calculations
-    },
-  }),
+  (acc, [size, data]) => {
+    acc[size] = {
+      ...data,
+      percentile: {
+        timeToReview: calcPercentileValue(data?.timeToReview),
+        timeToApprove: calcPercentileValue(data?.timeToApprove),
+        timeToMerge: calcPercentileValue(data?.timeToMerge),
+      },
+      // ... other calculations
+    };
+    return acc;
+  },
   collection?.sizes || {}
 ),

Committable suggestion skipped: line range outside the PR's diff.
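
The pattern in isolation (a generic sketch, not the project's exact types): mutating the accumulator and returning it keeps the reduce linear.

```typescript
// O(n) reduce: assign onto the accumulator instead of spreading it.
// `{ ...acc }` would copy every accumulated key on each iteration (O(n^2)).
const entries: [string, number][] = [["xs", 1], ["s", 2], ["m", 3]];

const doubled = entries.reduce<Record<string, number>>((acc, [key, value]) => {
  acc[key] = value * 2; // direct assignment, no spread
  return acc;
}, {});

console.log(doubled); // → { xs: 2, s: 4, m: 6 }
```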

🧰 Tools
🪛 Biome

[error] 39-39: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

Comment on lines +112 to +117
unreviewed: timeToReview
? collection?.unreviewed || 0
: (collection?.unreviewed || 0) + 1,
unapproved: timeToApprove
? collection?.unapproved || 0
: (collection?.unapproved || 0) + 1,

⚠️ Potential issue

Fix logical error in unreviewed and unapproved counters

Since 0 is a falsy value in JavaScript, if timeToReview or timeToApprove is 0, the ternary operator will incorrectly increment unreviewed or unapproved. This results in PRs that were reviewed or approved immediately being counted as unreviewed or unapproved.

To fix this, explicitly check for undefined or null:

     unreviewed: timeToReview !== undefined && timeToReview !== null
       ? collection?.unreviewed || 0
       : (collection?.unreviewed || 0) + 1,
     unapproved: timeToApprove !== undefined && timeToApprove !== null
       ? collection?.unapproved || 0
       : (collection?.unapproved || 0) + 1,
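
The pitfall in isolation: `!= null` is an equivalent, shorter guard that covers both `null` and `undefined` (function names here are illustrative):

```typescript
// timeToReview === 0 means "reviewed instantly", not "unreviewed",
// so the check must distinguish nullish from zero.
const wasReviewed = (timeToReview: number | null | undefined): boolean =>
  timeToReview != null; // loose != null matches both null and undefined

const wasReviewedBuggy = (t: number | null | undefined): boolean => Boolean(t);

console.log(wasReviewed(0));         // → true  (correct: reviewed at minute 0)
console.log(wasReviewedBuggy(0));    // → false (bug: instant review miscounted)
console.log(wasReviewed(undefined)); // → false (never reviewed)
```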
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
unreviewed: timeToReview
? collection?.unreviewed || 0
: (collection?.unreviewed || 0) + 1,
unapproved: timeToApprove
? collection?.unapproved || 0
: (collection?.unapproved || 0) + 1,
unreviewed: timeToReview !== undefined && timeToReview !== null
? collection?.unreviewed || 0
: (collection?.unreviewed || 0) + 1,
unapproved: timeToApprove !== undefined && timeToApprove !== null
? collection?.unapproved || 0
: (collection?.unapproved || 0) + 1,

Comment on lines +155 to +157
timeToReview: timeToReview || 0,
timeToApprove: timeToApprove ? timeToApprove - (timeToReview || 0) : 0,
timeToMerge: timeToMerge ? timeToMerge - (timeToApprove || 0) : 0,

⚠️ Potential issue

Ensure correct time calculations when time values are zero

If timeToReview, timeToApprove, or timeToMerge are 0, the current ternary checks will treat them as falsy and assign 0, potentially leading to incorrect time intervals.

Modify the checks to explicitly handle undefined or null values:

     timeToReview: timeToReview !== undefined && timeToReview !== null ? timeToReview : 0,
     timeToApprove:
-      timeToApprove ? timeToApprove - (timeToReview || 0) : 0,
+      timeToApprove !== undefined && timeToApprove !== null
+        ? timeToApprove - (timeToReview !== undefined && timeToReview !== null ? timeToReview : 0)
+        : 0,
     timeToMerge:
-      timeToMerge ? timeToMerge - (timeToApprove || 0) : 0,
+      timeToMerge !== undefined && timeToMerge !== null
+        ? timeToMerge - (timeToApprove !== undefined && timeToApprove !== null ? timeToApprove : 0)
+        : 0,
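
The decomposition being fixed here, sketched end to end with explicit nullish checks (types and helper name are illustrative):

```typescript
// Cumulative stage times (minutes from PR open) are turned into
// per-stage segments by subtracting the previous stage's cumulative value.
interface CumulativeTimes {
  timeToReview?: number;
  timeToApprove?: number;
  timeToMerge?: number;
}

const toSegments = ({ timeToReview, timeToApprove, timeToMerge }: CumulativeTimes) => ({
  timeToReview: timeToReview ?? 0,
  timeToApprove: timeToApprove != null ? timeToApprove - (timeToReview ?? 0) : 0,
  timeToMerge: timeToMerge != null ? timeToMerge - (timeToApprove ?? 0) : 0,
});

// Reviewed at minute 0, approved at 30, merged at 90:
console.log(toSegments({ timeToReview: 0, timeToApprove: 30, timeToMerge: 90 }));
// → { timeToReview: 0, timeToApprove: 30, timeToMerge: 60 }
```

Note that a `timeToReview` of 0 passes through intact instead of being discarded as falsy.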
@AlexSim93 AlexSim93 merged commit a112cf5 into v4 Nov 19, 2024
2 checks passed
@AlexSim93 AlexSim93 deleted the feature/size-time-dependency branch November 19, 2024 14:46
@coderabbitai coderabbitai bot mentioned this pull request Nov 21, 2024
9 tasks