RPC framework for the GLIB #156
I guess the only drawback of this is that the requirement I'd want to impose would be that the container IP addressing scheme matches the current expectation of:
I have long been thinking of something like this: the container gets the "desired" IP address, while the GLIB itself keeps the "automatic" one.
I think there is no reason to expect that multiple remote machines would need to connect; the changes that are going to be required here will break the current topology in a way that is already pretty invasive. The setups that will be using this will, for the most part, not need to worry about using the "correct" topology, so hacks can be added to adapt.
It might be possible to make the addressing fully geographic. Also, this does not fix the fact that the protocols are different between the CTP7s and the GLIBs.
Okay, that's a trivial change, and it can be done on a per-setup basis at worst.
Great idea! I'll try to get it done. I have one question though: how do you envision the container volume naming? If a GLIB (or any other emulator) were moved to another (virtual) slot, how would you keep the persistent data? Volumes, for now, cannot be renamed. One possibility is to store a configuration for each emulator in
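For illustration, a minimal sketch of one possible convention, assuming one named volume per physical card keyed by a stable identifier (the serial-number scheme, the image path, and the mount point are all assumptions, not the actual setup):

```sh
# One volume per card, keyed by serial number rather than slot, since
# Docker volumes cannot currently be renamed.
docker volume create glib-emulator-serial042

# Mount the same volume wherever the card currently sits; only the
# container name and IP address reflect the (virtual) slot.
docker run -d --name glib-emulator-slot03 \
    -v glib-emulator-serial042:/mnt/persistent \
    gitlab-registry.cern.ch/lpetre/gem_glib_sw
```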
Should the GLIB have the
Dynamic updates are for sure a good idea, but I don't have good ideas for how to do them. Maybe with a configuration management tool? Also,
At ULB, the containers already run on another machine than the DAQ machine. But this specific network configuration can be done on a per-setup basis. However, I'm unsure how you would configure the routing if you use a NAT'ed bridge (like the Docker default): if you assign the containers IP addresses from a NAT'ed subnet, the other machines cannot reach them directly. And using a bridge which includes the physical interface wouldn't bring anything over macvlan.
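For what it's worth, a sketch of the routed (non-NAT'ed) bridge alternative, assuming the container subnet `192.168.81.0/24` and a placeholder Docker-host address; both the bridge option and the static route would be needed so that traffic is routed instead of masqueraded:

```sh
# On the Docker host: create a bridge without masquerading, so the
# container addresses are visible on the wire instead of being NAT'ed.
docker network create -d bridge \
    -o com.docker.network.bridge.enable_ip_masquerade=false \
    --subnet 192.168.81.0/24 emu-bridge

# On the DAQ machine (and/or the gateway): route the container subnet
# through the Docker host (the IP address is a placeholder).
ip route add 192.168.81.0/24 via 192.168.0.42
```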
Ah, good catch. This was just for the case where we don't make any change to how the GLIB gets the slot-based IP address; I have nothing against some other mechanism (or a protected IP range).
Yeah, to be honest, I've never done anything special with
Can you expand a little on what's going on here that you want to address? What persistent data can't be shared between all cards of the same type, e.g., GLIB? (OK, VFAT DAC calibrations for sure, but anything else?)
Maybe related, maybe not, but what I was hoping for from the emulator card users is that if there are multiple cards in their setup (host PC), they'd all use the same FW/SW combo, so this could be bind mounted from the host system.
I've also been bringing the packaging changes to the other repos, and to bring them here, I'll have to replicate the gymnastics of cms-gem-daq-project/xhal#133 on the |
OK, I have a version of
You are right, the IP address in the example firmware is:

```vhdl
ip_addr_o <= x"c0a8500" & amc_slot_i; -- 192.168.80.[0:15]
```
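(For the record, the literal decodes as `0xC0` = 192, `0xA8` = 168, `0x50` = 80, with the last nibble taken from `amc_slot_i`, hence `192.168.80.[0:15]`.)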
Well, it really depends on the constraints and the network topology you have in 904. The
Here is what you can currently find in the container persistent volume:

```
[gemuser@gem-glib persistent]$ tree -L 2
.
├── config
│ ├── host_rsa
│ ├── host_rsa.pub
│ ├── libmemsvc.conf
│ ├── rpcsvc.acl
│ ├── ssh_host_ecdsa_key
│ ├── ssh_host_ecdsa_key.pub
│ ├── ssh_host_ed25519_key
│ ├── ssh_host_ed25519_key.pub
│ ├── ssh_host_rsa_key
│ └── ssh_host_rsa_key.pub
├── gembuild
│ ├── ctp7_modules
│ ├── reedmuller-c
│ └── xhal
├── gemdaq
│ ├── address_table.mdb
│ ├── bin
│ ├── fw
│ ├── lib
│ ├── scripts
│ ├── vfat3
│ └── xml
├── gemuser
│ ├── NominalValues-CFG_BIAS_CFD_DAC_1.txt
│ ├── NominalValues-CFG_BIAS_CFD_DAC_2.txt
│ ├── NominalValues-CFG_BIAS_PRE_I_BIT.txt
│ ├── NominalValues-CFG_BIAS_PRE_I_BLCC.txt
│ ├── NominalValues-CFG_BIAS_PRE_I_BSF.txt
│ ├── NominalValues-CFG_BIAS_PRE_VREF.txt
│ ├── NominalValues-CFG_BIAS_SD_I_BDIFF.txt
│ ├── NominalValues-CFG_BIAS_SD_I_BFCAS.txt
│ ├── NominalValues-CFG_BIAS_SD_I_BSF.txt
│ ├── NominalValues-CFG_BIAS_SH_I_BDIFF.txt
│ ├── NominalValues-CFG_BIAS_SH_I_BFCAS.txt
│ ├── NominalValues-CFG_HYST.txt
│ └── NominalValues-CFG_IREF.txt
├── rpcmodules
│ ├── amc.so
│ ├── calibration_routines.so
│ ├── daq_monitor.so
│ ├── extras.so
│ ├── gbt.so
│ ├── memhub.so
│ ├── memory.so
│ ├── optohybrid.so
│ ├── utils.so
│ └── vfat3.so
└── rpcsvc
    └── log
```

So, yes, most of the files could be shared between containers (the RPC modules, the libraries, the address table) and others moved to better places (the logs sent elsewhere, for instance). About the persistent area, what happens with
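In any case, to make such a split concrete, a hypothetical mount layout following the listing above (host paths and volume names are illustrative): the shareable pieces come read-only from the host, while only the card-specific remainder lives in per-card volumes:

```sh
# Shared between all containers of the same type: read-only bind mounts
# from the host. Card-specific data (VFAT calibrations in gemuser, SSH
# host keys and rpcsvc configuration in config) stays in per-card named
# volumes; logs would leave the persistent area entirely.
docker run -d \
    -v /opt/gemdaq/rpcmodules:/mnt/persistent/rpcmodules:ro \
    -v /opt/gemdaq/gemdaq:/mnt/persistent/gemdaq:ro \
    -v glib-emulator-serial042-user:/mnt/persistent/gemuser \
    -v glib-emulator-serial042-config:/mnt/persistent/config \
    gitlab-registry.cern.ch/lpetre/gem_glib_sw
```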
Ok, this is what I thought about RARP and IPMI. RARP should be easy to use, but it would require listing all the GLIBs and getting their MAC addresses, in a way similar to the CTP7s. IPMI is more complicated. First, all my attempts to set the GLIB IP address were unsuccessful; the IPMI commands always failed. Second, there is no "FPGA sensor" such as with the UW MMC, so the software cannot be asked to send an IP address to the FPGA. If the GLIB is restarted after the container, it will not automatically get an IP address (the GLIB could get an IP inside the
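Coming back to the RARP option, a sketch of what the listing could look like on the machine answering the requests (the MAC addresses and the interface name are made up); the `/etc/ethers` format below is the standard one, but the exact `rarpd` flags vary between implementations, so check the local man page:

```sh
# /etc/ethers: one entry per GLIB, MAC address -> hostname
# (the hostnames must then resolve, e.g. via /etc/hosts, to 192.168.80.x)
#   aa:bb:cc:dd:ee:02  gem-glib02
#   aa:bb:cc:dd:ee:03  gem-glib03

# Answer RARP requests on the µTCA-facing interface (name is a placeholder).
rarpd -e eno1
```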
Yes, it is partly related to that. This would also imply that the RPC modules are installed on the host PC so that they are kept in sync with the libraries. I'll try to sketch a few use cases for tomorrow's meeting.
With the latest developments, the GLIB back-end board can now be used with the GE1/1 v3 electronics and the vanilla GEM software.
The `GEM_AMC` firmware is ported to the GLIB FPGA, and the CTP7 Zynq software is emulated with the container provided here: https://gitlab.cern.ch/lpetre/gem_glib_sw

The container image is built with the CERN GitLab CI and stored in the CERN GitLab registry. A `docker-compose.yml` file is provided as an example in the Git repository.

The documentation is slightly outdated and is going to be updated; it should be usable, but needs further improvements. Improvements aiming at easing the usage will be published when available. Bugs must also be expected, although the latest tests were very encouraging.
This first beta version is open for discussion and suggestions.
IP address assignment
Contrary to the CTP7, the GLIB can only be assigned a single IP address. Several means exist for setting it: based on the µTCA slot number, statically assigned in the firmware or in the EEPROM, through the MMC via IPMI, or via RARP.
The simplest IP addressing scheme is to use a prefix plus the µTCA slot number. This solution is perfectly fine as long as it is guaranteed that two GLIBs are never in the same slot of two different µTCA crates on the same subnetwork.
The other main difference is that the RPC server container (a.k.a. the CTP7 emulator) must also be provided an IP address. For now, the GLIBs have the IP addresses `192.168.80.x` and the containers the IP addresses `192.168.81.x`, where `x` is the slot number in which the GLIB is installed.

As a consequence, the RPC calls and the IPBus transactions use two different paths:
Container configuration
For now, the container is set up at ULB with two IP addresses on two interfaces:

- A `macvlan` interface to access the µTCA sub-network with the `192.168.80.x` IP address.
- A `veth` network interface connected to a bridge which NATs to the public network.

The bridge network interface can obviously be removed if one does not need to access a public network from the container, or if the gateway `192.168.0.180` is configured to NAT.

The `macvlan` interface has the advantage of allowing access from remote machines even with multiple containers on the same machine. However, the host IP address must be moved from the physical network interface (or the previously used virtual network interface) to a virtual `macvlan` interface (not supported by `ifcfg` configuration files). If access from remote machines is not required, using a simple bridge might be an easier solution.
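For reference, a minimal sketch of such a setup with the Docker CLI (the parent interface name, the subnet, and the image path are assumptions, not the actual ULB configuration):

```sh
# Create a macvlan network on the physical interface that reaches the
# µTCA sub-network; a /23 covers both 192.168.80.x and 192.168.81.x.
docker network create -d macvlan \
    --subnet=192.168.80.0/23 \
    -o parent=eno1 utca

# Start the emulator for the GLIB in slot 2 with a fixed address.
docker run -d --name glib-emulator-slot02 \
    --network utca --ip 192.168.81.2 \
    gitlab-registry.cern.ch/lpetre/gem_glib_sw

# To let the host itself reach the containers, its address must sit on a
# macvlan interface on the same parent (the step ifcfg files cannot express).
ip link add host-shim link eno1 type macvlan mode bridge
```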
Compilation
Currently, `xHAL`, `reedmuller-c`, and `ctp7_modules` are compiled inside the image, with the Makefiles modified so that the libraries build successfully. The output binaries are then copied by hand to their final location (in a CTP7-like filesystem structure).
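For illustration, the kind of steps involved (the repository URLs are those of the project, but the build commands and install paths here are assumptions rather than the actual recipe):

```sh
# Fetch and build the three packages inside the image (the Makefile patches
# for the GLIB target are applied beforehand and are not shown here).
git clone https://github.com/cms-gem-daq-project/xhal.git && make -C xhal
git clone https://github.com/cms-gem-daq-project/reedmuller-c.git && make -C reedmuller-c
git clone https://github.com/cms-gem-daq-project/ctp7_modules.git && make -C ctp7_modules

# Copy the artifacts by hand into the CTP7-like tree (path is illustrative).
cp ctp7_modules/lib/*.so /mnt/persistent/rpcmodules/
```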
Note that the `optical.so` RPC module cannot be built for any back-end other than the CTP7. Since it would not have any use on the GLIB, the module is simply disabled.

DAQ software
Except for a segfault (PR with a fix to come), the vanilla `testConnectivity.py` runs seamlessly. The last known issue with the GBT RX phase scan is now fixed in the firmware.

Update: a new memory error (a double free) was found during the S-bit rate vs `THR_ARM_DAC` scan. A solution has been found; PR to come.
Speed
Since each memory-mapped transaction (~10 µs) is converted into an IPBus packet containing a single IPBus transaction (~100 µs), all operations run more slowly. The factor of 10 is however mitigated by the fact that time is not only spent in register transactions, but also in wait times, etc.
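As a back-of-the-envelope illustration (the numbers are made up for the example): a routine issuing 100,000 register transactions spends ~1 s on them at 10 µs each, versus ~10 s at 100 µs each; if the same routine also contains, say, 20 s of fixed wait times, the total only grows from ~21 s to ~30 s, i.e. a factor of ~1.4 rather than 10.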
For example, `testConnectivity.py` takes "only" 2-3 times longer with the GLIB than with the CTP7.

Also, the `libmemsvc` <-> `IPBus` wrapper creates a lot of very small IP packets. On the Docker host used for the tests, the limiting factor seems to be the CPU, eaten up by `controlhub`.
I think it is worth evaluating the performance with a faster computer.

Context
Support the GLIB for, among other things, QC7 operations. Currently, no back-end other than the CTP7 is supported.