Leverage CANopen & general improvements #232
The SDO protocol is a confirmed service, that is, any message we send to a CAN node generates a response. CANopen docs use the terms request/response for CAN reads, and indication/confirm for CAN writes. In the develop branch, CAN reads (from the CAN master's point of view) employ a tiny hardcoded delay, hoping to receive the expected response before it elapses. I noticed that this delay is too small for certain operations, perhaps always. On the other hand, CAN writes never expect a confirm message. Issue #223 adds a new wait-with-timeout mechanism. Similarly, such a mechanism has been applied to CiA 402 state machine transitions (switch on, enable, shutdown...) and has made several 1-2 second hardcoded delays vanish.
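The wait-with-timeout idea can be sketched with a condition variable: the sender blocks until the CAN read thread signals that the matching response arrived, or the timeout elapses. This is a minimal illustration, not the actual yarp-devices implementation; the class and method names are mine.

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical one-shot SDO transaction: pairs one request with one response.
class SdoTransaction
{
public:
    // Called by the sender after putting the request frame on the bus;
    // blocks until notifyResponse() fires or the timeout elapses.
    bool waitForResponse(std::chrono::milliseconds timeout)
    {
        std::unique_lock<std::mutex> lock(mtx);
        return cv.wait_for(lock, timeout, [this] { return responded; });
    }

    // Called from the CAN read thread when the matching response arrives.
    void notifyResponse()
    {
        {
            std::lock_guard<std::mutex> lock(mtx);
            responded = true;
        }
        cv.notify_one();
    }

private:
    std::mutex mtx;
    std::condition_variable cv;
    bool responded = false;
};
```

Unlike a fixed hardcoded delay, this returns as soon as the response lands, and the timeout only bounds the worst case.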
CanBusControlboard has always inherited from a … Issue #226 will allow multiple read/write threads, one per CAN bus.
The ControlBoardWrapper class queries the following data on each
All of these map to array-like methods (as opposed to single-joint and joint-group signatures). The only ones that entail a CAN network transfer and do not derive from the others are bolded. Both motor encoder reads and actual currents can fit in a single TPDO frame. I have yet to decide whether control modes should be handled via a local variable or await a TPDO event.
YARP queries for control modes are no longer forwarded to the CAN network, that is, no CAN request is generated upon …
According to the iPOS CAN user manual (2019), these are the relevant dictionary objects we (may) want to listen to via TPDO:
Certain error registers are forwarded from the drives within an EMCY message in case a fault condition arises, see section 4.1.4.1, Emergency message structures:
Technosoft support has confirmed that object 1002h, Manufacturer Status Register, can indeed be mapped to a PDO.
I'm going to remove the iPOS-product-code-to-drive-peak-current mappings so that this value is parsed from the .ini file. For future reference (using amperes):
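As a sketch of what such a per-drive configuration entry could look like — the section and key names below are assumptions for illustration, not the actual yarp-devices parameter names:

```ini
; Hypothetical .ini fragment: peak current declared per drive instead of
; being inferred from the iPOS product code.
[driver]
peakCurrent 32.0    ; amperes
```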
Makes sense!
Proposed transmit PDO mappings:
TPDO3 is meant for time-critical data streams. For now, I will set a tiny event timer value in async mode and no inhibit time in order to simulate the final desired behavior in synchronous operation (via the SYNC signal). Issue #223 will expand on that matter. There is room for two additional bytes, which could fit the output of a temperature sensor (wink wink @smcdiaz) streamed via 2058h.

TPDO1 and TPDO2 are both event-driven PDOs. I feel that grouping status-related stuff together is probably more efficient, given that such events are prone to fire at any time, as opposed to occasional emergency messages linked to fault states that hopefully don't arise that often. Hence, TPDO1 will host the statusword and the MSB half of the MSR (these are also state-related bits) as well as the current mode of operation. On TPDO2, MER/DER should fire on special events, and both provide related information. We are probably fine reusing the default TPDO configuration, i.e. no event timer and a 30 ms inhibit time. That is, if and only if a bit changes in either PDO, it will produce a message at least 30 milliseconds after the previous one. In case I decide to set an event timer of e.g. 1 second, those PDOs will send data with such a period, and more often if the inhibit time is set as well.

Finally, TPDO4 is vacant and might host IO events in the future.
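For reference, per CiA 301 each TPDO's communication parameters live in object 1800h + (n − 1): subindex 02h holds the transmission type, 03h the inhibit time (in multiples of 100 µs), and 05h the event timer (in milliseconds). A couple of tiny helpers can make those encodings explicit; the function names are mine, only the object layout and units come from the standard.

```cpp
#include <cstdint>

// Object index of TPDO n's communication parameter record (n = 1..4).
constexpr std::uint16_t tpdoCommIndex(int n)
{
    return 0x1800 + (n - 1);
}

// Inhibit time (subindex 03h) is expressed in multiples of 100 microseconds,
// so e.g. the default 30 ms inhibit time encodes as 300.
constexpr std::uint16_t inhibitTimeRaw(double milliseconds)
{
    return static_cast<std::uint16_t>(milliseconds * 10.0);
}
```

So "30 ms inhibit time on TPDO1" would translate to an SDO write of 300 to 1800h, subindex 03h.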
Oh.
(#232 (comment)) I should mark … Edit: considering #189 (comment), and given that there is no way to query instantaneous acceleration from the drive and we want this info, I'd stick to the current TPDO config.
Also, it is interesting to note that the iPOS firmware provides a low-pass filter that can be applied to a 16-bit variable, see object 2108h. By default, this is configured to filter the motor current and can be mapped to a PDO.
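To show the idea behind such a filter, here is a generic first-order low-pass sketch, analogous in spirit to the on-drive one; the drive's actual coefficients and fixed-point format are described in the iPOS manual and are not reproduced here.

```cpp
// Exponential moving average: state += alpha * (sample - state).
// alpha in (0, 1]; smaller alpha means stronger smoothing.
class LowPassFilter
{
public:
    explicit LowPassFilter(double alpha) : alpha(alpha) {}

    double update(double sample)
    {
        state += alpha * (sample - state);
        return state;
    }

private:
    double alpha;
    double state = 0.0;
};
```

With alpha = 1 the filter passes samples through unchanged; lowering alpha attenuates the jitter of a noisy current reading at the cost of lag.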
It is important to note that the default inhibit time (30 milliseconds) causes some crucial messages to be lost; e.g. this may occur when modes of operation change twice before this time elapses. It happened to me upon switching from idle mode to position control mode: I did process the statusword callback correctly, but another message that was supposed to be generated due to the mode transition got lost in the meantime (I presume the drive attempted to broadcast it before those 30 milliseconds elapsed). Edit: this problem had nothing to do with the alleged cause and has been solved at 5a55c83. Edit 2: see a460407 (attending to motion completion after a halt command is issued).
Drives are currently configured to stream joint state (encoder reads & current measurements) via TPDO3. This PDO is event-driven, with no event timer and the default inhibit time (for now) of 30 milliseconds. It means that any new encoder read or current measurement will be broadcast by the drive no sooner than 30 milliseconds after the last message sent. This copes well with our buffering mechanisms, and it is worth remembering that encoder reads are highly unstable, i.e. the drives steadily report minor variations (hence, in practice, TPDO3 will stream data at the inhibit time rate even if the motor is not commanded at all).

A question arises: is it convenient to SYNC this particular TPDO? If we take a look at how those reads and measurements are consumed, we notice that the only caller is controlboardwrapper2's periodic thread. Even if we achieve synchronous data streaming across all joints, those data will be consumed at a later time (milliseconds apart). Each drive will report a different acquisition time, so the question boils down to determining whether simultaneity is a pressing matter here. Client code (YARP-side) will always get the precise instant of data arrival via …

A positive answer yields another important question: since all synchronous TPDOs obey the same SYNC signal, and we are sure we want to tell TPDO3 to stream at the SYNC rate, how can other PDOs (RPDO/TPDO) play with this configuration so that a different rate is used for their specific case, in case of need? See: Super-Nyquist Theorem.
Synchronization helps design a CAN bus that copes well with traffic. Event-driven PDOs make this highly unpredictable. Also, syncing is not as hard as I initially thought.
PDOs can be configured in a cyclic synchronous manner, i.e. accept/stream data every 2/3/4... SYNC signals.
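This "every Nth SYNC" behavior maps directly onto the transmission type byte (subindex 02h of the PDO communication parameter record) defined by CiA 301: 0 means synchronous acyclic, 1-240 means cyclic every N SYNC signals, and 254/255 mean event-driven. A minimal sketch of that encoding (the helper name is mine):

```cpp
#include <cstdint>
#include <stdexcept>

// Returns the CiA 301 transmission type byte for a PDO that fires on
// every Nth SYNC signal. Values outside 1..240 are reserved for other
// modes (0 = sync acyclic, 254/255 = event-driven), so reject them.
std::uint8_t cyclicSyncTransmissionType(int everyNthSync)
{
    if (everyNthSync < 1 || everyNthSync > 240)
    {
        throw std::out_of_range("SYNC divisor must be in [1, 240]");
    }

    return static_cast<std::uint8_t>(everyNthSync);
}
```

So a joint-state TPDO3 at the full SYNC rate uses transmission type 1, while a slower status PDO could use 4 to fire every fourth SYNC.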
Chapter 6, "Factor group", in the iPOS CANopen user manual (2019) looks appealing:
This set of objects could replace several unit-conversion methods from the … However, I'm not implementing this. There are several places across our codebase that would need a "manual" conversion anyway; that is, the iPOS drives do not apply such unit conversions and sign multiplications in all the places we actually need them, hence the helper methods in …
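For context, the kind of "manual" conversion those helper methods perform looks roughly like this: user-facing degrees mapped to internal encoder counts and back, with a per-axis sign. The function and parameter names below are illustrative, not the actual yarp-devices API.

```cpp
#include <cstdint>

// Degrees -> internal encoder counts, given counts per revolution and
// an axis sign (+1/-1). One revolution spans 360 degrees.
std::int32_t degreesToInternal(double degrees, int countsPerRevolution, int sign = 1)
{
    return static_cast<std::int32_t>(sign * degrees * countsPerRevolution / 360.0);
}

// Internal encoder counts -> degrees, the inverse mapping.
double internalToDegrees(std::int32_t counts, int countsPerRevolution, int sign = 1)
{
    return sign * counts * 360.0 / countsPerRevolution;
}
```

The factor group objects would let the drive apply this scaling itself, but as noted above the conversion would still be needed host-side in several places.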
And:
TPDO3 is configured as cyclic synchronous (every 1 SYNC). The new … I chose not to make CanBusControlboard communicate with its wrapped CAN nodes (and vice versa), possibly via new methods in the …
Object 2103h, Number of encoder counts per revolution, reports our …
Anyway, dropping the …
Velocity, torque and current commands are now forwarded to the drive via synchronous RPDOs: f4970a2.
Event timer implemented and defaulted to 250 milliseconds at 79a475e and roboticslab-uc3m/teo-configuration-files@249edbe.
Currently, a …
Sadly, yarpmanager doesn't like a …
ASWJ, we'd better ask the drive for its current "control mode" and test against the …
We are good to go with our current setup, that is, treat absolute encoders as tightly coupled with their TechnosoftIpos counterpart per joint (cf. the Dextra hands, which could actually pose a nice use case for this MAS interface, if they had encoders per physical joint).
Project [CAN-TEO] gathers several tickets which aim to take full advantage of CANopen protocols with a brand-new application layer (#223). Apart from the usual refactoring and reduction of code bloat, it is expected to achieve new functionality and improve existing features. This ticket tracks said improvements.