
RPC framework for the GLIB #156

lpetre-ulb opened this issue Oct 15, 2019 · 8 comments

@lpetre-ulb
Contributor

lpetre-ulb commented Oct 15, 2019

With the latest developments, the GLIB back-end board can now be used with the GE1/1 v3 electronics and the vanilla GEM software.

The GEM_AMC firmware has been ported to the GLIB FPGA and the CTP7 Zynq software is emulated with the container provided here: https://gitlab.cern.ch/lpetre/gem_glib_sw

The container image is built with the CERN GitLab CI and stored in the CERN GitLab registry. A docker-compose.yml file is provided as an example in the Git repository. The documentation is slightly outdated but should be usable; it will be updated and improved.

Improvements aimed at easing usage will be published as they become available. Bugs are also to be expected, although the latest tests were very encouraging.

This first beta version is open for discussion and suggestions.

IP address assignment

Unlike the CTP7, the GLIB can only be assigned a single IP address. Several mechanisms exist for setting it: derived from the µTCA slot number, statically assigned in the firmware or the EEPROM, set through the MMC via IPMI, or obtained via RARP.

The simplest IP addressing scheme is to use a prefix plus the µTCA slot number. This solution is perfectly fine as long as it is guaranteed that two GLIBs will never sit in the same slot of two different µTCA crates on the same subnetwork.

The other main difference is that the RPC server container (aka the CTP7 emulator) must also be given an IP address. For now, the GLIBs have the IP addresses 192.168.80.x and the containers the IP addresses 192.168.81.x, where x is the slot number in which the GLIB is installed.

As a consequence, the RPC calls and IPBus transactions use two different paths:

RPC:   DAQ machine -> container `gem-shelfXX-amcYY` (hostname) / `192.168.81.YY` -> controlhub (container or native service) -> GLIB (`192.168.80.YY`)
IPBus: DAQ machine -> `gem.shelfXX.amcYY` (connections.xml) -> controlhub (container or native service) -> GLIB (`192.168.80.YY`)
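
For reference, here is a sketch of what the matching uHAL entry could look like. This is a guess, not the actual file: the controlhub port 10203 and the IPBus UDP port 50001 are the usual defaults, and the hostname and address table path are placeholders.

```sh
# Hypothetical connections.xml entry for the IPBus path above.
cat > connections.xml <<'EOF'
<connections>
  <connection id="gem.shelfXX.amcYY"
              uri="chtcp-2.0://controlhub-host:10203?target=192.168.80.YY:50001"
              address_table="file://glib_address_table.xml" />
</connections>
EOF
```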

Container configuration

For now, the container is set up at ULB with two IP addresses on two interfaces:

  1. A macvlan interface to access the µTCA sub-network with the 192.168.80.x IP address.
  2. A veth network interface connected to a bridge which NATs to the public network.

The bridge network interface can obviously be removed if one does not need to access a public network from the container or if the gateway 192.168.0.180 is configured to NAT.

The macvlan has the advantage of allowing access from remote machines while running multiple containers on the same machine. However, the host IP address must be moved from the physical network interface (or the previously used virtual network interface) to a virtual macvlan interface, which is not supported by ifcfg configuration files.

If access is not required from remote machines, using a simple bridge might be an easier solution.
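
For concreteness, a minimal sketch of such a setup with the plain Docker CLI (the real setup uses the docker-compose.yml from the repository; the interface name, subnet, addresses and image name below are placeholders):

```sh
# Assumed: eth1 faces the µTCA network; subnet and addresses are placeholders.
docker network create -d macvlan \
    --subnet=192.168.80.0/23 \
    -o parent=eth1 utca
# Start the emulator with a fixed address on the macvlan network. A NAT'ed
# bridge network can additionally be attached with `docker network connect`
# if public network access is needed from the container.
docker run -d --name gem-shelf01-amc03 --network utca --ip 192.168.81.3 \
    <emulator-image>
```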

Compilation

Currently, xHAL, reedmuller-c and ctp7_modules are compiled inside the image, with the Makefiles modified so that the libraries build successfully. The output binaries are then copied by hand to their final location (in a CTP7-like filesystem structure).

Note that the optical.so RPC module cannot be built for any back-end other than the CTP7. Since it would have no utility on the GLIB, the module is simply disabled.

DAQ software

Except for a segfault (PR with a fix to come), the vanilla testConnectivity.py runs seamlessly. The last known issue with the GBT RX phase scan is now fixed in the firmware.

Update: a new memory error (double free) was found during the Sbit rate vs THR_ARM_DAC scan. A solution has been found; PR to come.

Speed

Since each memory-mapped transaction (~10 µs) is converted into an IPBus packet containing a single IPBus transaction (~100 µs), all operations run more slowly. The factor of 10 is however mitigated by the fact that time is not spent only on register transactions, but also on wait times, etc.

For example, testConnectivity.py is "only" 2-3 times longer with the GLIB than with the CTP7.
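
As a purely illustrative calculation (the split is an assumption, not a measurement): if a scan spends 30% of its CTP7 wall time in register transactions and 70% in fixed waits, the expected slowdown is roughly 0.3 × 10 + 0.7 ≈ 3.7, which is compatible with the observed factor of 2-3.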

Also, the libmemsvc <-> IPBus wrapper creates a lot of very small IP packets. On the Docker host used for the tests, the limiting factor seems to be the CPU, which is eaten up by controlhub. I think it is worth evaluating the performance on a faster computer.

Context

Support the GLIB for, among other things, QC7 operations. Currently, no back-end other than the CTP7 is supported.

@jsturdy
Contributor

jsturdy commented Oct 16, 2019

As a consequence, the RPC calls and IPBus transactions use two different paths:

I guess the only drawback of this is that the connections.xml file can't use fully geographic hostnames, like it does now, as the IPBus endpoint IP addresses don't match.

The requirement I'd want to impose would be that the container IP addressing scheme matches the current expectation of:

gem-shelfXX-amcYY 192.168.XX.40+YY

I have long been thinking something like:

systemctl start [email protected]

where glib would start the glib container (envisioning a future where this same service drives a variety of potential emulated devices) with a container IP address that corresponds to shelf 1 for an AMC in slot 3, thus corresponding to the existing hostname gem-shelf01-amc03 (mapped to 192.168.1.43), while the GLIB itself would get the 192.168.0.173 IP address.

This way, the container can have the "desired" IP address, while the GLIB itself keeps the "automatic" one (192.168.0.70+slot).
I think the trick here would be in dynamically updating the connections.xml file to ensure that the new card is correctly mapped (though this could be taken care of manually, as I doubt there will be frequent changes...)
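
For what it's worth, a sketch of what such a template unit could look like (the unit and helper script names are hypothetical, nothing of this exists yet):

```sh
# Hypothetical [email protected] template; the helper scripts would parse %i
# (e.g. "glib-shelf01-slot03") and start/stop the matching container,
# staying in the foreground so that systemd can track it.
cat > /etc/systemd/system/[email protected] <<'EOF'
[Unit]
Description=GEM back-end emulator container for %i
After=network-online.target

[Service]
ExecStart=/usr/local/bin/gem-emulator-start %i
ExecStop=/usr/local/bin/gem-emulator-stop %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start [email protected]
```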

If access is not required from remote machines, using a simple bridge might be an easier solution.

I think there is no reason to expect multiple remote machines would need to connect; the changes that are going to be required here will break the current topology in a way that is already pretty invasive. The setups that will be using this will, for the most part, not need to worry about utilizing the "correct" topology, so hacks can be added to adapt.

@lpetre-ulb
Contributor Author

As a consequence, the RPC calls and IPBus transactions use two different paths:

I guess the only drawback of this is that the connections.xml file can't use fully geographic hostnames, like it does now, as the IPBus endpoint IP addresses don't match.

It might be possible to keep fully geographic hostnames in connections.xml with a redirection inside the container. For example, the endpoint hostname can be gem-shelfXX-amcYY and, inside the container, the UDP packets can be redirected to the GLIB IP address with, e.g., socat. I haven't measured the impact this would have on the latency though.
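
A minimal sketch of such a redirection, assuming the IPBus endpoint is the usual UDP port 50001 (the target IP is a placeholder):

```sh
# Inside the container: forward IPBus UDP traffic arriving on the geographic
# hostname to the GLIB's real IP address.
socat UDP4-LISTEN:50001,fork,reuseaddr UDP4:192.168.80.3:50001
```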

Also, this does not fix the fact that the protocols are different between the CTP7s and the GLIBs.

The requirement I'd want to impose would be that the container IP addressing scheme matches the current expectation of:

gem-shelfXX-amcYY 192.168.XX.40+YY

Okay, that's a trivial change, and at worst it can be done on a per-setup basis.

I have long been thinking something like:

systemctl start [email protected]

where glib would start the glib container (envisioning a future where this same service drives a variety of potential emulated devices) with a container IP address that corresponds to shelf 1 for an AMC in slot 3, thus corresponding to the existing hostname gem-shelf01-amc03 (mapped to 192.168.1.43), while the GLIB itself would get the 192.168.0.173 IP address.

Great idea! I'll try to get it done with podman instead of Docker since the former is present by default in CC8 (and also available in EPEL for CC7). The "complicated" instance name requires a tiny wrapper script for parsing.
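
A possible sketch of the parsing part of that wrapper (the naming scheme and the derived addresses are only assumptions):

```sh
#!/bin/bash
# Hypothetical wrapper: parse an instance name such as "glib-shelf01-slot03"
# into device type, shelf and slot, and derive the container/GLIB addresses.
instance="$1"                                        # e.g. glib-shelf01-slot03
device="${instance%%-*}"                             # glib
shelf="${instance#*-shelf}"; shelf="${shelf%%-*}"    # 01
slot="${instance##*-slot}"                           # 03
container_ip="192.168.$((10#$shelf)).$((40 + 10#$slot))"  # 192.168.1.43
glib_ip="192.168.80.$((10#$slot))"                        # current GLIB scheme
echo "device=$device shelf=$shelf slot=$slot container=$container_ip glib=$glib_ip"
```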

I have one question though: how do you envision the container volume naming? If a GLIB (or any other emulator) were moved to another (virtual) slot, how would you keep the persistent data? Volumes, for now, cannot be renamed.

One possibility is to store a configuration for each emulator in /etc/gem-emulator/<emulator bird name>.conf and use it as the instance name in systemd. The geographical address, the emulated device, and the GLIB IP address (stored in the container volume for now) could then all be set from the host by the sysadmin.

This way, the container can have the "desired" IP address, while the GLIB itself keeps the "automatic" one (192.168.0.70+slot).

Should the GLIB have the 192.168.0.70+slot or the 192.168.0.170+slot IP address? The second choice creates a conflict with the manager machine 192.168.0.180 if a GLIB is plugged into slot 10 (the slot currently used at ULB). I would rather favor using a new /24 prefix for the GLIB-specific IP addresses.

I think the trick here would be in dynamically updating the connections.xml file to ensure that the new card is correctly mapped (though this could be taken care of manually, as I doubt there will be frequent changes...)

Dynamic updates are for sure a good idea, but I don't have a good solution for doing them. Maybe with a configuration management tool?

Also, connections.xml can probably be bind-mounted inside the containers. There is no use for it yet, but at some point it might be useful to have the standard IPBus connection available.

If access is not required from remote machines, using a simple bridge might be an easier solution.

I think there is no reason to expect multiple remote machines would need to connect; the changes that are going to be required here will break the current topology in a way that is already pretty invasive. The setups that will be using this will, for the most part, not need to worry about utilizing the "correct" topology, so hacks can be added to adapt.

At ULB the containers already run on a machine other than the DAQ machine, but this specific network configuration can be done on a per-setup basis.

However, I'm unsure how you would configure the routing if you use a NAT'ed bridge (like the Docker default). Indeed, if you assign IP addresses from 192.168.XX.0/24 to the containers, how do you configure the IP routes for the CTP7 plugged into the same crate XX? This will be the case at QC7, I think.

And using a bridge which includes the physical interface wouldn't bring anything over macvlan interfaces (except, in general, a slight performance loss).

@jsturdy
Contributor

jsturdy commented Oct 18, 2019

Should the GLIB have the 192.168.0.70+slot or the 192.168.0.170+slot IP address? The second choice creates a conflict with the manager machine 192.168.0.180 if a GLIB is plugged into slot 10 (the slot currently used at ULB). I would rather favor using a new /24 prefix for the GLIB-specific IP addresses.

Ah, good catch, it must be 192.168.0.160+slot (I believe..., as slot "15", the bench mode, gets 192.168.0.175, if I recall correctly).

This was just for the case where we don't change how the GLIB gets the slot-based IP address; I have nothing against some other mechanism (or a protected IP range).

@jsturdy
Contributor

jsturdy commented Oct 18, 2019

However, I'm unsure how you would configure the routing if you use a NAT'ed bridge (like the Docker default). Indeed, if you assign IP addresses from 192.168.XX.0/24 to the containers, how do you configure the IP routes for the CTP7 plugged into the same crate XX? This will be the case at QC7, I think.

Yeah, to be honest, I've never done anything special with docker networking. I was going to try out a couple of things I had read in the past to see how they'd play with the network topology in 904 and get back to this issue, but I suppose you've probably tried a number of different things already, which is how you settled upon this recommendation :-)

@jsturdy
Contributor

jsturdy commented Oct 18, 2019

I have one question though: how do you envision the container volume naming? If a GLIB (or any other emulator) were moved to another (virtual) slot, how would you keep the persistent data? Volumes, for now, cannot be renamed.

Can you expand a little on what's going on here that you want to address? What persistent data can't be shared between all cards of the same type, e.g. GLIB (OK, VFAT DAC calibrations for sure, but anything else?)
I'd be fine if we had some /data/gem-emulator/volume/gem-shelfXX-amcYY area that could be mounted as the persistent volume, though; bind mounting could happen fully on the host filesystem side for things that really should be common...
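
Something along these lines, perhaps (paths, mount points, and image name are placeholders; /mnt/persistent only mimics the CTP7 layout):

```sh
# Hypothetical invocation: a per-card persistent area plus a read-only
# host-side bind mount for the pieces that are common to all cards.
podman run -d --name gem-shelf01-amc03 \
    -v /data/gem-emulator/volume/gem-shelf01-amc03:/mnt/persistent \
    -v /opt/cmsgemos/rpcmodules:/mnt/persistent/rpcmodules:ro \
    <emulator-image>
```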

One possibility is to store a configuration for each emulator in /etc/gem-emulator/<emulator bird name>.conf and use it as the instance name in systemd. The geographical address, the emulated device, and the GLIB IP address (stored in the container volume for now) could then all be set from the host by the sysadmin.

I think the systemd script (well, it would have to call another script in my example, because the %i parsing would, as far as I know, have to be done outside of the systemd script) would handle everything except for possibly the GLIB IP (which would be set automatically in the slot-based mode, or could be done in the systemd script if we used some RARP or IPMI commands to set it to some specific value)

Maybe related, maybe not, but what I was hoping for from the emulator card users is that if there are multiple cards in their setup (host PC), they'd all use the same FW/SW combo, so this could be bind mounted from the host system.
However, this may end up being an unrealistic use case (I wanted this from the packaging perspective and from the interoperability standpoint: if the PC is running a particular version of xhal, then the cards being communicated with should need the same version, and bind mounting solved this trivially)

@jsturdy
Contributor

jsturdy commented Oct 18, 2019

I've also been bringing the packaging changes to the other repos, and to bring them here, I'll have to replicate the gymnastics of cms-gem-daq-project/xhal#133 on the legacyPreBoost branch, which shouldn't be a problem because that branch will not be merged into the templated RPC developments.

@jsturdy
Contributor

jsturdy commented Oct 21, 2019

OK, I have a version of xhal for legacyPreBoost that uses the new make strategy, and also a version of ctp7_modules that should easily allow the same set of reproducible builds.

  • ctp7_modules doesn't build for x86_64 because I don't have all the internals of the emulator (missing libmemsvc), but if you want to test that you can build the libs in the standard build environment, we can figure out how to make the dependencies available as a development package.

@lpetre-ulb
Contributor Author

Ah, good catch, it must be 192.168.0.160+slot (I believe..., as slot "15", the bench mode, gets 192.168.0.175, if I recall correctly).

This was just for the case where we don't change how the GLIB gets the slot-based IP address; I have nothing against some other mechanism (or a protected IP range).

You are right, the IP address in the example firmware is 192.168.0.160+slot, with slot=15 in bench mode. It is trivial to change the base IP address to something more convenient, so let me know what is easiest for 904 operation.

ip_addr_o  <= x"c0a8500"     & amc_slot_i;  -- 192.168.80.[0:15]

However, I'm unsure how you would configure the routing if you use a NAT'ed bridge (like the Docker default). Indeed, if you assign IP addresses from 192.168.XX.0/24 to the containers, how do you configure the IP routes for the CTP7 plugged into the same crate XX? This will be the case at QC7, I think.

Yeah, to be honest, I've never done anything special with docker networking. I was going to try out a couple of things I had read in the past to see how they'd play with the network topology in 904 and get back to this issue, but I suppose you've probably tried a number of different things already, which is how you settled upon this recommendation :-)

Well, it really depends on the constraints and the network topology you have in 904. The macvlan approach is effective and quite standard for getting a flat network. It fits very well into the ULB network, but it may be less convenient at 904.

I have one question though: how do you envision the container volume naming? If a GLIB (or any other emulator) were moved to another (virtual) slot, how would you keep the persistent data? Volumes, for now, cannot be renamed.

Can you expand a little on what's going on here that you want to address? What persistent data can't be shared between all cards of the same type, e.g. GLIB (OK, VFAT DAC calibrations for sure, but anything else?)
I'd be fine if we had some /data/gem-emulator/volume/gem-shelfXX-amcYY area that could be mounted as the persistent volume, though; bind mounting could happen fully on the host filesystem side for things that really should be common...

Here is what you can currently find in the container persistent volume:

[gemuser@gem-glib persistent]$ tree -L 2
.
├── config
│   ├── host_rsa
│   ├── host_rsa.pub
│   ├── libmemsvc.conf
│   ├── rpcsvc.acl
│   ├── ssh_host_ecdsa_key
│   ├── ssh_host_ecdsa_key.pub
│   ├── ssh_host_ed25519_key
│   ├── ssh_host_ed25519_key.pub
│   ├── ssh_host_rsa_key
│   └── ssh_host_rsa_key.pub
├── gembuild
│   ├── ctp7_modules
│   ├── reedmuller-c
│   └── xhal
├── gemdaq
│   ├── address_table.mdb
│   ├── bin
│   ├── fw
│   ├── lib
│   ├── scripts
│   ├── vfat3
│   └── xml
├── gemuser
│   ├── NominalValues-CFG_BIAS_CFD_DAC_1.txt
│   ├── NominalValues-CFG_BIAS_CFD_DAC_2.txt
│   ├── NominalValues-CFG_BIAS_PRE_I_BIT.txt
│   ├── NominalValues-CFG_BIAS_PRE_I_BLCC.txt
│   ├── NominalValues-CFG_BIAS_PRE_I_BSF.txt
│   ├── NominalValues-CFG_BIAS_PRE_VREF.txt
│   ├── NominalValues-CFG_BIAS_SD_I_BDIFF.txt
│   ├── NominalValues-CFG_BIAS_SD_I_BFCAS.txt
│   ├── NominalValues-CFG_BIAS_SD_I_BSF.txt
│   ├── NominalValues-CFG_BIAS_SH_I_BDIFF.txt
│   ├── NominalValues-CFG_BIAS_SH_I_BFCAS.txt
│   ├── NominalValues-CFG_HYST.txt
│   └── NominalValues-CFG_IREF.txt
├── rpcmodules
│   ├── amc.so
│   ├── calibration_routines.so
│   ├── daq_monitor.so
│   ├── extras.so
│   ├── gbt.so
│   ├── memhub.so
│   ├── memory.so
│   ├── optohybrid.so
│   ├── utils.so
│   └── vfat3.so
└── rpcsvc
    └── log

So, yes, most of the files could be shared between containers (RPC modules, libraries, address table) and others could be moved to better places (logs sent to syslog, libmemsvc.conf replaced by environment variables). Everything but the VFAT configuration files and the SSH keys could be bind-mounted. This solution is however not very convenient for development, for tests (I've been thinking for a long time about using the GLIB in the last step of an integration CI), or for GE1/1-GE2/1 (the RPC modules are different).

About the persistent area, what happens with /data/gem-emulator/volume/gem-shelfXX-amcYY (which I guess is bind-mounted, so not a volume) if two emulators are in the same shelf-slot pair in two different setups? Or is /data not on the NAS? If so, why not use the FHS /var/lib directory? And would the directory need to be renamed manually if an emulator changes position?

I think the systemd script (well, it would have to call another script in my example, because the %i parsing would, as far as I know, have to be done outside of the systemd script) would handle everything except for possibly the GLIB IP (which would be set automatically in the slot-based mode, or could be done in the systemd script if we used some RARP or IPMI commands to set it to some specific value)

Ok, this is what I thought.

About RARP and IPMI: RARP should be easy to use, but it would require listing all the GLIBs and collecting their MAC addresses, similarly to what is done for the CTP7s. IPMI is more complicated. First, all my attempts to set the GLIB IP address were unsuccessful; the IPMI commands always failed. Second, there is no "FPGA sensor" as with the UW MMC, so the software cannot be asked to send an IP address to the FPGA. If the GLIB is restarted after the container, it will not automatically get an IP address (the GLIB could get an IP inside the recover.sh script if the emulator could send commands to the MCH).
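
For the RARP option, a sketch of what the server side could look like (the MAC address, hostname and interface are placeholders, and the exact rarpd flags depend on the implementation):

```sh
# Map each GLIB MAC address to a hostname...
echo "08:00:30:xx:xx:xx  gem-glib-shelf01-slot03" >> /etc/ethers
# ...make that hostname resolve to the desired IP...
echo "192.168.80.3  gem-glib-shelf01-slot03" >> /etc/hosts
# ...and answer RARP requests on the interface facing the µTCA crate.
rarpd eth1
```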

Maybe related, maybe not, but what I was hoping for from the emulator card users is that if there are multiple cards in their setup (host PC), they'd all use the same FW/SW combo, so this could be bind mounted from the host system.
However, this may end up being an unrealistic use case (I wanted this from the packaging perspective and from the interoperability standpoint: if the PC is running a particular version of xhal, then the cards being communicated with should need the same version, and bind mounting solved this trivially)

Yes, it is partly related to the systemd script. I agree this would be really nice for updates (although it should be enough to update the container image) and interoperability. However, I strongly believe this bind mount should be optional. For example, at ULB, the container runs on a hypervisor where the CMS and GEM software is not installed at all.

This would also imply that the RPC modules are installed on the host PC so that they are kept in sync with the libraries.

I'll try to sketch a few use cases for tomorrow's meeting.
