
fix(api): add runner load error handling to analyze cli #15400

Merged

Conversation

sanni-t (Member) commented Jun 12, 2024

Closes AUTH-486

Overview

Fixes the issues caught in #15211

Previously, RTP validation occurred as part of a simulating runner's run process. This run process executes the protocol inside a separate task, and any errors raised while running that task are wrapped and presented in the `RunResult` as an analysis error.

Now that RTP validation happens as part of the runner load process, we have to handle errors raised during loading in a similar way. This PR adds that missing error handling to the analyze CLI function.
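The shape of the fix can be sketched as follows. This is a minimal illustration with hypothetical names (`ErrorOccurrence`, `AnalysisResult`, `analyze` are stand-ins, not the actual Opentrons API): an error raised while the runner loads the protocol is caught and folded into the same failed-analysis shape that run-time errors already produce, instead of escaping as a traceback.

```python
# Hedged sketch, hypothetical names: catch load-time errors and report them
# as analysis errors, the same way run-time errors are reported.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ErrorOccurrence:
    errorType: str
    detail: str


@dataclass
class AnalysisResult:
    errors: List[ErrorOccurrence] = field(default_factory=list)
    commands: List[str] = field(default_factory=list)


def analyze(load_protocol: Callable[[], None]) -> AnalysisResult:
    """Run the (hypothetical) load step, converting any raised error into
    an analysis error rather than letting a traceback reach the user."""
    try:
        load_protocol()
    except Exception as exc:  # the real code would narrow this to load errors
        return AnalysisResult(
            errors=[ErrorOccurrence(type(exc).__name__, str(exc))]
        )
    return AnalysisResult()
```

With this shape, a protocol whose RTP definition fails validation during load still yields a structured result whose `errors` list drives the error pop-up, matching the behavior described in the test plan.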

Test Plan

  • Upload a protocol with incorrect RTP definition and verify that the in-app analysis fails with the correct error and no traceback in the error pop-up window.
  • Verify that the static data of the above protocol (metadata, requirements, and files) is populated in the failed analysis.
  • Verify that any other, non-RTP-related errors raised during the runner load process are handled just like they would have been in the previous stable software.

Review requests

  • does this approach make sense? Does it miss any edge cases?

Risk assessment

Medium. Updates a critical part of in-app analysis, but it is a fix and is isolated to the error handling of `runner.load` only.

@sanni-t sanni-t marked this pull request as ready for review June 12, 2024 16:18
@sanni-t sanni-t requested a review from a team as a code owner June 12, 2024 16:18
@sanni-t sanni-t requested review from y3rsh and sfoster1 June 12, 2024 18:47
```python
async def _do_analyze(protocol_source: ProtocolSource) -> RunResult:

    runner = await create_simulating_runner(
        robot_type=protocol_source.robot_type, protocol_config=protocol_source.config
    )
    return await runner.run(deck_configuration=[], protocol_source=protocol_source)
```

Review comment on the line `if isinstance(runner, PythonAndLegacyRunner):`
sanni-t (Member Author):
Oops! Forgot to load the non-python runner. Doing that now
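The point of the comment above can be sketched like this. All names here are hypothetical stand-ins (the real `PythonAndLegacyRunner` and JSON runner APIs differ): the load call must branch on the runner type so that non-Python (JSON) protocols get loaded too, not only Python ones.

```python
# Hedged sketch, not the actual Opentrons API: dispatch the load step on the
# runner's type so both Python/legacy and JSON runners are loaded before run.
class PythonAndLegacyRunner:
    def load(self, protocol_source, python_parse_mode):
        # Python runners take an extra parse-mode argument in this sketch.
        return ("python", protocol_source)


class JsonRunner:
    def load(self, protocol_source):
        return ("json", protocol_source)


def load_runner(runner, protocol_source):
    """Load the protocol with whichever runner type we were given."""
    if isinstance(runner, PythonAndLegacyRunner):
        return runner.load(protocol_source, python_parse_mode="normal")
    else:  # JSON runner: no parse mode to pass
        return runner.load(protocol_source)
```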

@SyntaxColoring (Contributor) left a comment:

This mostly makes sense to me. The duplication between here and ProtocolAnalyzer is unfortunate, and so is the tedium in constructing the error shape, but both make sense given where we are today.

A couple of minor points:

```python
)


@track_analysis
```
SyntaxColoring (Contributor):
Is this @track_analysis addition intentional, or is it a merge conflict mistake?

sanni-t (Member Author):
Oh! Good catch! Definitely a merge conflict mistake.

```python
commands=[],
state_summary=StateSummary(
    errors=[error_occ],
    status=EngineStatus.IDLE,
```
SyntaxColoring (Contributor):
We're supplying a placeholder EngineStatus.IDLE because that's needed to construct the returned RunResult. But the returned RunResult is, elsewhere in this file, mostly just thrown away and translated into the AnalyzeResults that we actually care about.

Can we avoid this throwaway placeholder thing by directly returning the AnalyzeResults from this function?

sanni-t (Member Author) commented Jun 13, 2024:

Trying to avoid creating this placeholder mostly ends up adding boilerplate or some odd branching logic, because when there is no error during runner load, the analysis result does need to be constructed from a RunResult.
I don't think the small placeholder is worth the trouble.
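The trade-off being discussed can be sketched like this, using simplified stand-in types (these mirror the names in the snippet above but are not the real Opentrons definitions). The placeholder `RunResult` with `EngineStatus.IDLE` exists only so the single downstream path that converts a `RunResult` into the final analysis output can stay unbranched.

```python
# Hedged sketch with simplified stand-in types: build a placeholder RunResult
# on load failure so the rest of the pipeline needs only one code path.
from dataclasses import dataclass
from enum import Enum
from typing import List


class EngineStatus(Enum):
    IDLE = "idle"


@dataclass
class StateSummary:
    errors: List[str]
    status: EngineStatus


@dataclass
class RunResult:
    commands: List[str]
    state_summary: StateSummary


def failed_run_result(error: str) -> RunResult:
    # EngineStatus.IDLE is a throwaway value; downstream consumers of a
    # failed analysis only read the errors list, never the status.
    return RunResult(
        commands=[],
        state_summary=StateSummary(errors=[error], status=EngineStatus.IDLE),
    )
```

The alternative (returning the final analysis shape directly from the error branch) would duplicate the result-construction logic in two places, which is the "bloat or funny branching" the comment refers to.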


codecov bot commented Jun 13, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 92.43%. Comparing base (7de5e83) to head (afb713d).
Report is 97 commits behind head on edge.

Additional details and impacted files

```
@@             Coverage Diff             @@
##             edge   #15400       +/-   ##
===========================================
+ Coverage   63.20%   92.43%   +29.23%
===========================================
  Files         287       77      -210
  Lines       14891     1283    -13608
===========================================
- Hits         9412     1186     -8226
+ Misses       5479       97     -5382
```
| Flag | Coverage Δ |
| --- | --- |
| g-code-testing | 92.43% <ø> (ø) |
| hardware | ? |
| shared-data | ? |
| system-server | ? |
| update-server | ? |
| usb-bridge | ? |

Flags with carried forward coverage won't be shown.

see 210 files with indirect coverage changes

@sanni-t sanni-t merged commit bb1bc23 into edge Jun 13, 2024
22 checks passed
@sanni-t sanni-t deleted the AUTH-486-in-app-analysis-failure-during-rtp-validation-is-buggy branch July 15, 2024 21:38