[Rock-dev] Proposal: NDLCom and Rock

Sylvain Joyeux sylvain.joyeux at dfki.de
Wed Dec 11 09:39:38 CET 2013


On 12/09/2013 12:51 PM, Martin Zenzes wrote:
>> To the same effect, on the oroGen side, there should IMO be only one
>> task per bus (i.e. I/O) as well.
> What is an I/O (or bus)? Do you mean doing the "routing" between
> multiple busses at the oroGen level? This would move the routing code
> away from the C++ library and make this whole "ndlcom routing layer"
> heavily Rock-dependent. Not what anyone is looking for...
Not a problem:
  - at the oroGen level, the Ruby scripts will have to build a component
network that matches the bus topology (i.e. the burden will be placed
outside the C++ library). This can be provided by some Ruby extensions
to the tasks (so that one does not have to build the network completely
manually), or, when using syskit, by an extension to syskit.
  - you can still write a C++ class in the library to provide a
friendlier interface at the library level. Just separate the "control of
a single file descriptor" class (the Driver) from the class that does
the routing, so that one still has the choice at the oroGen level.
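To illustrate that split, here is a minimal sketch. Everything except
ndlcom::Message, readMessage() and writeMessage() is a made-up name, and
the iodrivers_base::Driver bits are quoted from memory, so take it as an
idea, not as the actual API:

    #include <iodrivers_base/Driver.hpp>
    #include <ndlcom/Message.hpp> // assumed header for ndlcom::Message
    #include <stdint.h>

    namespace ndlcom {
        // Controls exactly one file descriptor (one hardware interface).
        // It only knows how to frame/unframe NDLCom packets on that interface.
        class Driver : public iodrivers_base::Driver {
        public:
            Driver();
            // reads the next complete message available on this interface
            bool readMessage(Message& msg);
            // writes one message on this interface
            void writeMessage(Message const& msg);
        protected:
            // packet-boundary detection required by iodrivers_base
            // (signature from memory)
            int extractPacket(uint8_t const* buffer, size_t size) const;
        };

        // Optional, library-level convenience: owns several Driver instances
        // and forwards messages based on their receiverId. Kept separate from
        // Driver so that the oroGen integration can still use one Driver per
        // component and do the routing at the component-network level instead.
        class Router {
        public:
            void addInterface(Driver& driver);
            // sends msg on the interface behind which its receiverId lives
            void writeMessage(Message const& msg);
            // polls all interfaces, returns the next message addressed to us
            bool readMessage(Message& msg);
        };
    }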
> But I see one point: putting everything in one big task would lead to
> one process for all the (potentially blocking or slow) read/write
> system calls... increasing latency...
And making the error handling / status report a lot harder.

> --
>
> Now, some clarifications on my initial wording:
>>> - Provide received messages which are directed to one's own receiverId
>>> as `ndlcom::Message` at the `readMessage()` function to a caller
> The "ndlcom::Communication" is supposed to make NDLCom messages sent by
> an STM32 (for example) to the deviceId of the "ndlcom::Communication"
> available through the "readMessage()" or "readPayload()" functions,
> which are called from the oroGen-level update hook.
>>> - Accept `ndlcom::Message` at the `writeMessage()` function, to be sent
>>> to the correct hardware interface
> Some Rock task (or other calling code using the C++ library) calls
> "writePayload()" or "writeMessage()" at the oroGen level, and it is
> expected that the resulting NDLCom message is then sent on the correct
> interface (or bus) to be received by an STM32.
So ... these two are routing aspects, right?
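For what it's worth, at the oroGen level those two calls could surface
roughly like the sketch below (port names, the mCommunication member and
the bool return of readMessage() are assumptions on my side, not the
actual API):

    #include "Task.hpp"

    using namespace ndlcom; // assuming the oroGen project is also called ndlcom

    void Task::updateHook()
    {
        TaskBase::updateHook();

        ndlcom::Message msg;

        // everything the hardware sent to our own deviceId goes out on a port
        while (mCommunication.readMessage(msg))
            _messages_out.write(msg);

        // everything other components push to us is handed to writeMessage(),
        // which is expected to send it on the correct hardware interface
        while (_messages_in.read(msg) == RTT::NewData)
            mCommunication.writeMessage(msg);
    }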
> Using the iodrivers_base::IOListeners to mirror only messages which
> were read at a hardware interface, and mirroring the messages sent by
> the "writeMessage()" or "writePayload()" functions of
> "ndlcom::Communication", would do the job?
It should, in principle, yes. As far as I remember, the IOListeners also
have the ability to listen to the written data (i.e. you would only need
the IOListener).
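Something along these lines (the readData/writeData callback names are
from memory and may not match the actual iodrivers_base::IOListener
interface, so treat this purely as an illustration of the idea):

    #include <iodrivers_base/IOListener.hpp>
    #include <stddef.h>
    #include <stdint.h>
    #include <vector>

    // Gets a copy of every byte the driver reads from or writes to the device
    // and republishes it, e.g. as a raw packet on a mirroring output port
    class MirrorListener : public iodrivers_base::IOListener
    {
    public:
        // called by the driver with the raw bytes it just read from the device
        void readData(uint8_t const* data, size_t size)  { mirror(data, size); }
        // called by the driver with the raw bytes it just wrote to the device
        void writeData(uint8_t const* data, size_t size) { mirror(data, size); }

    private:
        void mirror(uint8_t const* data, size_t size)
        {
            std::vector<uint8_t> copy(data, data + size);
            // ... push 'copy' wherever the mirror should go ...
        }
    };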
>>> So, after *lotoftext*: this C++ library is finally to be embedded in a
>>> Rock task, providing the appropriate Rock ports and executing the
>>> needed hooks. Everything else, mapping certain NDLCom payloads to
>>> certain Rock types or providing services based on NDLCom messages, is
>>> done by other Rock tasks.
>> I would definitely not go for mirroring the MARS design. The MARS design
>> communicates "behind the scenes" between a core component and the
>> "device" components, constraining the device components to be in the
>> same process as the core, and in general giving headaches.
> Yes, I see the "same process" problem. But what do you mean by
> "general headaches"? What do you mean by "behind the scenes"?
behind the scenes: the communication between the core component and the
driver components uses its own communication channel. So: no
logging, no runtime inspection, a single process and no control over it
(one cannot cut the connection). You can't mock it (create fake
data streams and feed them to your driver components to test them), ...
headaches: this breaks the component-oriented model big time (components
are not isolated anymore), and therefore breaks the tools that assume that
you *are* following a component-based design (syskit). Moreover, it assumes
that the "core" object is a singleton (you just CANNOT have more than
one per process) - this point can be worked around, but at the cost
of complexity.
>> Fortunately for us, there are indeed precedents for this kind of hardware:
>> the CAN bus integration as well as the Schilling DTS (the latter is
>> not in Rock proper).
> But in these, there is only one hardware interface (bus)? NDLCom is
> supposed to have multiple of them.
Well ... you can have more than one interface. It just means having more
than one driver component. With e.g. two CAN busses, you create two
canbus::Task components.
>> The general design there is to split the handling of the protocol between:
>>   - a bus handling component, which represents a given bus and only does
>> the multiplexing/demultiplexing. It would create one port per deviceId
>> and only output, on this port, the raw data that is relevant for this
>> particular device. If your payload is only raw bytes, you could even use
>> iodrivers_base::RawPacket.
> Hm, OK. I can see your point in doing it like this. But I would argue that:
>
> - Much overhead for single messages? We are expecting around 2000
> messages per second in both directions, to be handled on a 700 MHz ARM
> while still doing the motion control and kinematics...
There would be little overhead if you run all components in a single
thread. If you *want* multiple threads, this design gives you
thread safety at no additional cost for the programmer (since the
isolation is already provided by the Rock component implementation), and
little or no runtime overhead compared to doing multi-threading "manually".
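For concreteness, the demultiplexing loop of such a bus-handling
component could look roughly like this (the ndlcom::Message fields, the
mDriver and mOutputs members and how the per-device ports get created are
all assumptions on my side; only iodrivers_base::RawPacket is the actual
Rock type):

    #include "Task.hpp"
    #include <iodrivers_base/RawPacket.hpp>
    #include <map>
    #include <stdint.h>

    void Task::updateHook()
    {
        TaskBase::updateHook();

        ndlcom::Message msg;
        while (mDriver.readMessage(msg))
        {
            iodrivers_base::RawPacket packet;
            packet.time = base::Time::now();
            packet.data = msg.payload; // assumed: payload as std::vector<uint8_t>

            // mOutputs maps a deviceId to the output port created for it
            // (e.g. in configureHook), so each device gets only its own data
            std::map<uint8_t, RTT::OutputPort<iodrivers_base::RawPacket>*>::iterator
                it = mOutputs.find(msg.senderId);
            if (it != mOutputs.end())
                it->second->write(packet);
        }
    }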
> - Also, as said before, this would move all the "routing" into Rock...
> Hence it will not be easily possible to reuse this NDLCom routing code
> outside of Rock (standalone GUI, bare-metal controller).
Since we were specifically talking about the Rock task integration in
this part of your mail, I thought I would focus on that ;-). As already
mentioned, you can still implement a class to do the auto-discovery and
routing in the library. But, personally, I am very wary of this, as the
hardware topology *is known*. I don't see the point of adding quite a
bit of complexity to the code (in addition to the overhead of discovery,
since you have to broadcast to devices that never talked to you) for
something the system designer *knows*.
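To illustrate what I mean: with a known topology, the whole "routing" can
be as small as a static table filled from configuration. All names below
are made up:

    #include <map>
    #include <stdexcept>
    #include <stdint.h>

    namespace ndlcom
    {
        class StaticRoutingTable
        {
        public:
            // declares that the device 'receiverId' is reachable through the
            // interface identified by 'interfaceIndex' (e.g. the index of the
            // corresponding Driver)
            void addRoute(uint8_t receiverId, int interfaceIndex)
            {
                mRoutes[receiverId] = interfaceIndex;
            }

            // returns the interface a message to 'receiverId' must be sent on
            int resolve(uint8_t receiverId) const
            {
                std::map<uint8_t, int>::const_iterator it = mRoutes.find(receiverId);
                if (it == mRoutes.end())
                    throw std::runtime_error("no route to the requested receiverId");
                return it->second;
            }

        private:
            std::map<uint8_t, int> mRoutes;
        };
    }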
> One component (this is a Rock task?) per device? For 25+ motors, 10+
> sensor boards? I thought about using one Rock task per data type, mapping
> "base/type/JointState" to "BLDCJointTelemetry" for example, with a fixed
> (configured) map from "rock-device-name" to "ndlcomId".
I would say "it depends".

For motors, with the new JointState-based datatypes, I would go for a
single component for all the motors that are on a given bus.

For e.g. an IMU, I would go for one task per device.

At the deployment level, I would deploy all the tasks for a single bus
in a single thread to remove the cost of context-switching.
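
As a very rough sketch of the "one component for all motors on a bus"
option (the BLDC/NDLCom-specific names and helpers are made up; only
base::samples::Joints is the actual Rock type):

    #include "Task.hpp"
    #include <base/samples/Joints.hpp>

    void Task::updateHook()
    {
        TaskBase::updateHook();

        ndlcom::Message msg;
        while (mCommunication.readMessage(msg))
        {
            if (!isJointTelemetry(msg)) // hypothetical helper
                continue;

            // fixed, configured map from the sender's deviceId to a joint index
            size_t idx = mDeviceToJointIndex[msg.senderId];
            BLDCJointTelemetry telemetry = decodeTelemetry(msg); // hypothetical

            mJoints.elements[idx].position = telemetry.position;
            mJoints.elements[idx].speed    = telemetry.speed;
        }

        mJoints.time = base::Time::now();
        _joint_samples.write(mJoints);
    }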

-- 
Sylvain Joyeux (Dr.Ing.)
Senior Researcher

Space & Security Robotics
Underwater Robotics

!!! Attention, new phone number !!!

Bremen site:
DFKI GmbH
Robotics Innovation Center
Robert-Hooke-Straße 5
28359 Bremen, Germany

Phone: +49 (0)421 178-454136
Fax:   +49 (0)421 218-454150
E-Mail: robotik at dfki.de

More information: http://www.dfki.de/robotik


