
[SOAR-18536] palo alto cortex xdr #3027

Merged 9 commits into develop on Jan 6, 2025
Conversation

ablakley-r7 (Collaborator)

Proposed Changes

Description

Describe the proposed changes:

  • Update pagination decision logic to check alert limit

PR Requirements

Developers, verify you have completed the following items by checking them off:

Testing

Unit Tests

Review our documentation on generating and writing plugin unit tests

  • Unit tests written for any new or updated code

In-Product Tests

If you are an InsightConnect customer or have access to an InsightConnect instance, the following in-product tests should be done:

  • Screenshot of job output with the plugin changes
  • Screenshot of the changed connection, actions, or triggers input within the InsightConnect workflow builder

Style

Review the style guide

  • For dependencies, pin OS package and Python package versions
  • For security, set least privileged account with USER nobody in the Dockerfile when possible
  • For size, use the slim SDK images when possible: rapid7/insightconnect-python-3-38-slim-plugin:{sdk-version-num} and rapid7/insightconnect-python-3-38-plugin:{sdk-version-num}
  • For error handling, use of PluginException and ConnectionTestException
  • For logging, use self.logger
  • For docs, use changelog style
  • For docs, validate markdown with insight-plugin validate which calls icon_validate to lint help.md

Functional Checklist

  • Work fully completed
  • Functional
    • Any new actions/triggers include JSON test files in the tests/ directory created with insight-plugin samples
    • Tests should all pass unless it's a negative test. Negative tests follow the naming convention tests/$action_bad.json
    • Unsuccessful tests should fail by raising an exception (causing the plugin to die); successful tests should return an object
    • Add functioning test results to PR, sanitize any output if necessary
      • Single action/trigger insight-plugin run -T tests/example.json --debug --jq
      • All actions/triggers shortcut insight-plugin run -T all --debug --jq (use PR format at end)
    • Add functioning run results to PR, sanitize any output if necessary
      • Single action/trigger insight-plugin run -R tests/example.json --debug --jq
      • All actions/triggers shortcut insight-plugin run --debug --jq (use PR format at end)

Assessment

You must validate your work for reviewers:

  1. Run insight-plugin validate and make sure everything passes
  2. Run the assessment tool: insight-plugin run -A. For single action validation: insight-plugin run tests/{file}.json -A
  3. Copy (insight-plugin ... | pbcopy) and paste the output in a new post on this PR
  4. Add required screenshots from the In-Product Tests section

@@ -117,8 +117,7 @@ def get_alerts_palo_alto(self, state: dict, start_time: Optional[int], now: int,
 state[CURRENT_COUNT] = state.get(CURRENT_COUNT, 0) + results_count

 new_alerts, new_alert_hashes, last_alert_time = self._dedupe_and_get_highest_time(results, state)

-is_paginating = state.get(CURRENT_COUNT) < total_count
Collaborator: Is the issue here just an off-by-one error or something?

Collaborator: We probably never hit `<` on the finishing check, because on the last page the count would equal total_count. I think the old comparison was wrong, but the new one should still work.

@@ -117,8 +117,7 @@ def get_alerts_palo_alto(self, state: dict, start_time: Optional[int], now: int,
 state[CURRENT_COUNT] = state.get(CURRENT_COUNT, 0) + results_count

 new_alerts, new_alert_hashes, last_alert_time = self._dedupe_and_get_highest_time(results, state)

-is_paginating = state.get(CURRENT_COUNT) < total_count
+is_paginating = results_count >= alert_limit
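The new condition can be illustrated with a minimal standalone sketch. The helper name `should_paginate` is hypothetical — in the plugin the expression lives inline in `get_alerts_palo_alto` — but the comparison matches the diff:

```python
def should_paginate(results_count: int, alert_limit: int) -> bool:
    # A full page (results_count == alert_limit) suggests more alerts may
    # remain, so another page is requested; a short page means the task has
    # drained the available alerts and can stop paginating.
    return results_count >= alert_limit

print(should_paginate(100, 100))  # full page: keep paginating -> True
print(should_paginate(37, 100))   # short final page: stop -> False
```

Unlike the old `state.get(CURRENT_COUNT) < total_count` check, this decision depends only on the size of the page just fetched, not on a running count matching the server's reported total.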
Collaborator: Makes sense to me! Can we just log results_count in the pagination log so we can follow this? I'm guessing results_count will equal alert_limit each time. Although, reading the logic around alert_limit, it doesn't look like we ever pass it to the API call against Palo Alto, so we may need a follow-on ticket — I'm guessing at the minute their default API limit is 100.
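A minimal sketch of the suggested logging, assuming the `self.logger` convention from the style checklist above; the helper name and message wording are hypothetical:

```python
import logging

def pagination_log_message(results_count: int, alert_limit: int, is_paginating: bool) -> str:
    # Surfacing results_count next to the pagination decision lets a reviewer
    # confirm from the logs whether each page is actually hitting the limit.
    return (
        f"Pagination check: results_count={results_count}, "
        f"alert_limit={alert_limit}, is_paginating={is_paginating}"
    )

logger = logging.getLogger("palo_alto_cortex_xdr")
logger.info(pagination_log_message(100, 100, True))
```

In the plugin itself this would be a `self.logger.info(...)` call placed beside the `is_paginating` assignment.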

@ablakley-r7 ablakley-r7 merged commit 8cb6d65 into develop Jan 6, 2025
11 checks passed
ablakley-r7 added a commit that referenced this pull request Jan 6, 2025
* Update pagination decision in task

* Update unit test pagination

* testing unit test

* testing unit test

* testing unit test

* testing unit test

* testing unit test

* testing unit test

* Update logging