
feat(hardware): read tip update from firmware whenever a notification is received #13822

Merged

Conversation

ahiuchingau
Contributor

Overview

FLEX pipettes have tip sensing ability (thanks to tip presence sensors!); it would be nice if we could get real-time status of the attached tip from the pipette while the hardware controller is running.

This PR does it by creating a TipPresenceManager in the hardware controller that monitors tip updates from the pipettes. The tip presence manager is in charge of building a TipDetector dedicated to each attached pipette and passing any tip notification from the CAN bus to its listeners.

The tip presence manager currently serves two purposes:
(1) prompting the firmware to confirm tip presence after pick-up and drop-tip actions
(2) caching the latest tip update from the detectors, and presenting that info to the /instruments endpoint
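As a rough sketch of that cache-and-notify pattern (all names below are hypothetical stand-ins, not the PR's actual classes; the real code keys on OT3Mount and is driven by CAN bus notifications):

```python
from typing import Callable, Dict, List, Optional

# Hypothetical stand-ins for illustration only.
Mount = str
TipListener = Callable[[Mount, bool], None]


class TipPresenceManagerSketch:
    """Caches the latest tip update per mount and fans it out to listeners."""

    def __init__(self) -> None:
        self._last_state: Dict[Mount, Optional[bool]] = {}
        self._listeners: List[TipListener] = []

    def add_listener(self, listener: TipListener) -> None:
        self._listeners.append(listener)

    def on_tip_update(self, mount: Mount, present: bool) -> None:
        # cache the latest update so e.g. an /instruments handler can
        # read it later without another firmware round trip
        self._last_state[mount] = present
        for listener in self._listeners:
            listener(mount, present)

    def current_tip_state(self, mount: Mount) -> Optional[bool]:
        return self._last_state.get(mount)
```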

Changelog

  • the has_tip property of the OT3 pipette does not in fact reflect whether a tip is currently attached, but whether a tip length has been added, so we're changing the logic there
  • removing the tip_presence_detection_enabled feature flag and making tip presence detection the default behavior for all FLEX pipettes

What's next

This opens up an opportunity for the protocol engine to listen for unexpected tip changes during a protocol and issue the appropriate error/warning to the user.

@ahiuchingau ahiuchingau requested a review from a team as a code owner October 20, 2023 19:53
@ahiuchingau ahiuchingau force-pushed the RLIQ-366-support-caching-asynchronous-messages-from-firmware branch from 0e9d031 to 205015b Compare October 20, 2023 19:57
@codecov

codecov bot commented Oct 20, 2023

Codecov Report

Merging #13822 (037bf60) into edge (f202a74) will increase coverage by 0.02%.
Report is 58 commits behind head on edge.
The diff coverage is 90.42%.

Additional details and impacted files


@@            Coverage Diff             @@
##             edge   #13822      +/-   ##
==========================================
+ Coverage   70.66%   70.68%   +0.02%     
==========================================
  Files        2477     2503      +26     
  Lines       69695    70527     +832     
  Branches     8453     8600     +147     
==========================================
+ Hits        49248    49852     +604     
- Misses      18461    18598     +137     
- Partials     1986     2077      +91     
| Flag | Coverage Δ |
| --- | --- |
| app | 68.87% <ø> (+0.73%) ⬆️ |
| g-code-testing | 96.44% <ø> (ø) |
| hardware | 56.66% <91.11%> (-0.04%) ⬇️ |
| hardware-testing | ∅ <ø> (∅) |
| notify-server | 89.13% <ø> (ø) |
| protocol-designer | 45.51% <ø> (+0.16%) ⬆️ |
| shared-data | 72.98% <75.00%> (-0.99%) ⬇️ |
| step-generation | 84.99% <ø> (ø) |

Flags with carried forward coverage won't be shown.

| Files | Coverage Δ |
| --- | --- |
| ...entrons/hardware_control/backends/ot3controller.py | 67.85% <ø> (ø) |
| ...pentrons/hardware_control/backends/ot3simulator.py | 86.44% <ø> (ø) |
| api/src/opentrons/hardware_control/ot3api.py | 79.47% <ø> (ø) |
| api/src/opentrons/hardware_control/types.py | 96.55% <ø> (ø) |
| ...hardware/hardware_control/tip_presence/__init__.py | 100.00% <100.00%> (ø) |
| ...ns_hardware/hardware_control/tip_presence/types.py | 100.00% <100.00%> (ø) |
| ...-data/python/opentrons_shared_data/errors/codes.py | 93.42% <100.00%> (+0.17%) ⬆️ |
| .../python/opentrons_shared_data/errors/exceptions.py | 60.59% <66.66%> (+0.15%) ⬆️ |
| ...hardware/hardware_control/tip_presence/detector.py | 90.00% <90.00%> (ø) |

... and 175 files with indirect coverage changes

Member

@sfoster1 sfoster1 left a comment


This is awesome. I think there's two bigger changes we should make, and then I have some inline comments.

First, I think the debounce logic is slightly wrong. The thing we want to do is ignore bouncing around within a timeout. When we see the first tip change happen, we start a timer; we keep collecting tip state changes; and the end result is the last tip state value when the timer expires, with everything before that ignored. What we have here instead is that any tip state change after the first one is ignored.

Consider: the goal here is that if you take your finger and like flick the tip ejector shroud upwards and then it falls down, we don't tell the engine that a tip was removed and replaced within like 800 ms. We'd want to

  • see the "tip removed" event and start the timer
  • keep capturing events during the timer; the last one will be the "tip added" event as the shroud falls down, maybe bounces a couple times, and then settles
  • and so the result, a second later, will be "no change" because the sensor went from present to present.
  1. While we don't have to add the engine subscribing to the tip state in this PR, I think we should add the framework to route events up out of the hardware controller - add it to the hardware events type, have ot3api add a listener to the tip state detector, pass it through ot3controller, etc
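The debounce semantics described above (the first change starts a window, later changes only update the candidate value, and the settled value is compared against the pre-window state) could be sketched like this; class and method names are illustrative, not the PR's API:

```python
import asyncio
from typing import Callable, Optional


class DebouncedTipState:
    """Debounce as the review describes: report only the value that the
    sensor has settled on when the window expires, and report nothing at
    all if it matches the state from before the window started."""

    def __init__(
        self,
        initial: bool,
        on_settled: Callable[[bool], None],
        debounce_time: float = 1.0,
    ) -> None:
        self._state = initial
        self._on_settled = on_settled
        self._debounce_time = debounce_time
        self._pending: Optional[bool] = None
        self._timer: Optional[asyncio.Task] = None

    def notify(self, present: bool) -> None:
        # record the most recent value; only the first change of a window
        # starts the timer, later changes merely update the candidate
        self._pending = present
        if self._timer is None:
            self._timer = asyncio.get_running_loop().create_task(self._settle())

    async def _settle(self) -> None:
        await asyncio.sleep(self._debounce_time)
        final = self._pending
        self._timer = None
        self._pending = None
        # present -> present across the window is "no change"
        if final is not None and final != self._state:
            self._state = final
            self._on_settled(final)
```

With this shape, the "flick the ejector shroud" scenario (removed then re-seated within the window) settles back to present and produces no event at all.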

def tip_presence_manager(self) -> TipPresenceManager:
    return self._tip_presence_manager

async def update_tip_detector(self, mount: OT3Mount, sensor_count: int) -> None:
Member


probably want a way to remove one when a pipette is removed, right?

Contributor Author


When a pipette is removed, we'll pass sensor_count as 0, which then removes the attached detector

TipUpdateByMount = Dict[OT3Mount, Optional[bool]]


class TipDetectorNotFoundError(Exception):
Member


make it enumerated probably

Member


some kind of internal error code


async def build_detector(self, mount: OT3Mount, sensor_count: int) -> None:
    # clear detector if pipette does not exist
    if not sensor_count > 0:
Member


Suggested change
- if not sensor_count > 0:
+ if sensor_count <= 0:

nit, makes it easier to read for me
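A minimal sketch of that build/teardown flow, with `sensor_count <= 0` clearing the detector for the mount (class and method names here are illustrative stand-ins, not the PR's actual code):

```python
from typing import Dict, Optional


class Detector:
    """Stand-in for the PR's TipDetector."""

    def cleanup(self) -> None:
        # cancel tasks / release resources in the real implementation
        pass


class DetectorRegistry:
    """A sensor_count of 0 means the pipette was removed, so the
    detector for that mount is torn down instead of rebuilt."""

    def __init__(self) -> None:
        self._detectors: Dict[str, Optional[Detector]] = {}

    def build_detector(self, mount: str, sensor_count: int) -> None:
        if sensor_count <= 0:  # pipette removed: clear any existing detector
            existing = self._detectors.pop(mount, None)
            if existing is not None:
                existing.cleanup()
            return
        self._detectors[mount] = Detector()

    def get(self, mount: str) -> Optional[Detector]:
        return self._detectors.get(mount)
```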

log = logging.getLogger(__name__)

TipListener = Callable[[OT3Mount, bool], None]
TipDetectorByMount = Dict[OT3Mount, Optional[TipDetector]]
Member


we've done this a couple other places and it's been sort of normalized, but for my money

  • if you have a dynamic store of data with a consistent key type and a consistent value type, but no restriction on what value of key may be present, that should be a Dict
  • if you have a store of data with a consistent key type and a consistent value type, and you want to look up keys programmatically with strings, and some keys always need to be there, that should be a TypedDict with keys as literals
  • If you have a store of data where you don't need to access things programmatically, that's a dataclass

So for both _detectors and _last_state, where there have to be keys for all mounts - though really it's just the pipette mounts - at all times but the values can vary, to me that should be a TypedDict with key literals

Contributor Author


Yea that makes sense
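For illustration, the TypedDict-with-literal-keys shape could look like this (the mount names and field types are stand-ins for the OT3 pipette mounts, not the PR's actual definitions):

```python
from typing import Optional, TypedDict


class TipStateByMount(TypedDict):
    """Both pipette mounts are always present as keys; only the
    values vary. Missing or extra keys become type errors."""

    left: Optional[bool]
    right: Optional[bool]


state: TipStateByMount = {"left": None, "right": None}
state["left"] = True  # mypy checks that the key is a known literal
```

Compared to `Dict[OT3Mount, Optional[bool]]`, a lookup of a nonexistent mount key is then a static type error rather than a runtime KeyError.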

@@ -232,7 +231,6 @@ def reset_state(self) -> None:
        self._working_volume = float(self.liquid_class.max_volume)
        self._current_tip_length = 0.0
Member


nit: if we're going to do "has_tip_length" let's make the internal variable an optional rather than checking for value == 0.0
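A minimal sketch of that nit, tracking the internal tip length as an Optional so "no tip length" is None rather than the sentinel 0.0 (class and method names are illustrative):

```python
from typing import Optional


class PipetteSketch:
    """None means "no tip length recorded"; any float, including 0.0,
    means a length has been added."""

    def __init__(self) -> None:
        self._current_tip_length: Optional[float] = None

    @property
    def has_tip_length(self) -> bool:
        return self._current_tip_length is not None

    def add_tip_length(self, length: float) -> None:
        self._current_tip_length = length

    def remove_tip_length(self) -> None:
        self._current_tip_length = None
```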

NodeId(arb_id.parts.originating_node_id) == self._node
)

def __del__(self) -> None:
Member


I think we probably do not want this to be in a __del__ magic method. The __del__ magic method is called only when an object's reference count hits 0 or it's garbage collected, making it really only useful for cleaning up resources the interpreter doesn't know about.

Trying to use __del__ to remove other Python objects will probably just result in the object never being cleaned up. The tasks in self._tasks hold references to the object because they're bound methods that hold references to self. Unless the objects in tasks are explicitly destroyed, those references will remain live.

So I'm pretty sure that __del__ will never be called as implemented, because even if this object is popped out of the OT3Controller and gets decref'd there, the tasks will still hold refs, and the object won't be gc'd.

I think the right thing to do is have a cleanup() method that OT3Controller calls explicitly when we remove detectors, which cancels the tasks, which removes the references from the tasks, which will result (implicitly) in the object getting gc'd.
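A sketch of the explicit-cleanup pattern being suggested, under the assumption the object owns asyncio tasks created from its own bound methods (names here are illustrative):

```python
import asyncio
from typing import List


class TaskOwner:
    """The owner calls cleanup() explicitly; cancelling the tasks drops
    their bound-method references back to self, which is what finally
    lets the object be garbage collected."""

    def __init__(self) -> None:
        self._tasks: List[asyncio.Task] = []

    def start(self) -> None:
        # the task holds a reference to self via the bound method _run,
        # so a __del__ on this class would never fire while it is alive
        self._tasks.append(asyncio.get_running_loop().create_task(self._run()))

    async def _run(self) -> None:
        while True:
            await asyncio.sleep(1.0)

    def cleanup(self) -> None:
        # explicit teardown: cancel the tasks and drop our references
        for task in self._tasks:
            task.cancel()
        self._tasks.clear()
```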

    # create a task that ends up broadcasting this update
    self._create_reject_debounce_task(tip_change)

async def _broadcast(
Member


I don't love _broadcast as a name because broadcast is also a thing you do to can messages, I had to read this a couple times before I realized it was "broadcast to subscribers above me". can we call this like _update_subscribers() or something

    if subscriber not in self._subscribers:
        self._subscribers.append(subscriber)

def remove_subscriber(self, subscriber: TipChangeListener) -> None:
Member


can we implement this as a function object returned from add_subscriber()
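The function-object suggestion could look roughly like this; the listener signature is an assumption for illustration (the real TipChangeListener takes different arguments):

```python
from typing import Callable, List

TipChangeListener = Callable[[bool], None]  # illustrative signature


class Subscribable:
    """add_subscriber returns a closure that removes the subscriber,
    replacing a separate remove_subscriber method."""

    def __init__(self) -> None:
        self._subscribers: List[TipChangeListener] = []

    def add_subscriber(self, subscriber: TipChangeListener) -> Callable[[], None]:
        if subscriber not in self._subscribers:
            self._subscribers.append(subscriber)

        def unsubscribe() -> None:
            # safe to call more than once
            if subscriber in self._subscribers:
                self._subscribers.remove(subscriber)

        return unsubscribe

    def publish(self, present: bool) -> None:
        for subscriber in list(self._subscribers):
            subscriber(present)
```

One nicety of this shape is that the caller never needs to keep the subscriber object around just to unsubscribe later; it only keeps the returned callable.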

) -> None:
    # wait for the future to time out before broadcasting the update
    try:
        await asyncio.wait_for(fut, timeout=1.0)
Member


the 1.0 probably wants to be an instance variable called like debounce_time

    # wait for the future to time out before broadcasting the update
    try:
        await asyncio.wait_for(fut, timeout=1.0)
    # this tip change is rejected, end the task here
Member


it's less that we want to reject the change, and more that we want the final result to be the last thing we get during the debounce time. The debounce can never really end early.

@ahiuchingau ahiuchingau requested a review from a team as a code owner November 1, 2023 21:07
@ahiuchingau ahiuchingau requested a review from sfoster1 November 1, 2023 21:09
Member

@sfoster1 sfoster1 left a comment


Looks really good to me!

@ahiuchingau ahiuchingau merged commit 7ad184c into edge Nov 1, 2023
41 of 47 checks passed
@ahiuchingau ahiuchingau deleted the RLIQ-366-support-caching-asynchronous-messages-from-firmware branch November 1, 2023 22:20