This repository has been archived by the owner on Jan 31, 2022. It is now read-only.

[feature request] Compiling for x86 misses libmemsvc #166

Open
1 task done

jsturdy opened this issue Jan 15, 2020 · 9 comments

Comments

@jsturdy
Contributor

jsturdy commented Jan 15, 2020

Brief summary of issue

In order to compile for x86_64 (GLIB emulator), an equivalent of libmemsvc must be provided.
Currently no such library exists in ctp7_modules; @lpetre-ulb has a version that has been used for generating the GLIB emulator (lpetre-ulb/gem_glib_sw).
This needs to be included in the set of GEM tools, whether added here in ctp7_modules or provided independently as a dependency elsewhere.
@mexanick and I propose including it here in ctp7_modules to avoid multiplying the number of packages.

Types of issue

  • Feature request (request for change which adds functionality)

Expected Behavior

Compiling for x86_64 should work

Current Behavior

It doesn't: no x86_64 libmemsvc is available, so the build fails.

Steps to Reproduce (for bugs)

make

Possible Solution (for bugs)

Include the sources for an IPBus-based libmemsvc in ctp7_modules (this is how I resolved the issue for my testing: it was added to a server subdirectory, as seen in the ctp7_modules.mk file).
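
As a rough illustration, below is a minimal sketch of what such an IPBus-backed libmemsvc shim could look like; the connection file, device ID, address handling, and the exact memsvc_* prototypes are assumptions for illustration, not taken from the existing wrapper.

```cpp
// Hypothetical sketch only: an IPBus (uHAL) backend exposing the libmemsvc C API.
// The header name, prototypes, return conventions, connection file and device ID
// are assumptions; any memsvc <-> IPBus address translation is deliberately elided.
#include <algorithm>
#include <cstdint>
#include <string>
#include "uhal/uhal.hpp"
#include "libmemsvc.h"  // assumed to declare memsvc_handle_t and the memsvc_* prototypes

struct memsvc_handle {
    uhal::ConnectionManager manager;
    uhal::HwInterface hw;
    std::string last_error;
    memsvc_handle()
        : manager("file://connections.xml"),            // placeholder connection file
          hw(manager.getDevice("gem.shelf01.glib")) {}  // placeholder device ID
};

int memsvc_open(memsvc_handle_t *handle) {
    try { *handle = new memsvc_handle(); return 0; }
    catch (const std::exception &) { return -1; }
}

int memsvc_read(memsvc_handle_t handle, uint32_t addr, uint32_t words, uint32_t *data) {
    try {
        // Queue a block read and dispatch it: one IPBus round trip per call.
        uhal::ValVector<uint32_t> block = handle->hw.getClient().readBlock(addr, words);
        handle->hw.dispatch();
        std::copy(block.begin(), block.end(), data);
        return 0;
    } catch (const std::exception &e) {
        handle->last_error = e.what();
        return -1;
    }
}
```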

Your Environment

  • Version used: 366ee77
@jsturdy
Contributor Author

jsturdy commented Jan 15, 2020

If my "possible solution" is desired, I will add those changes and push that.

@lpetre-ulb
Contributor

First of all, the maintained version of the gem-emulator is on the CERN GitLab for its great CI integration and no longer on GitHub ;)

About the libmemsvc wrapper, I would prefer not to include it in the ctp7_modules, simply for a separation of concerns between repositories. The infrastructure/helpers would better live in xhal rather than in ctp7_modules, which implements the real actions.

I'd consider two other ideas:

  1. Create a sysroot image similar to what exists for the CTP7; the libmemsvc wrapper would then continue to live in gem-emulator. The mid-term goal is to include gem-emulator into the GEM_AMC repository since the container image and the firmware need to be kept in sync. One can consider the container as the Linux image of the CTP7.

  2. Add the libmemsvc wrapper in xhal. In fact, this would be the opportunity to abstract the memory accesses from libmemsvc. Indeed, there is no guarantee that the API will continue to be supported with the new ATCA board. Even if it is, the current wrapper is a hack which re-implements a lightly documented API. It also gives more freedom to implement register accesses having the desired behavior (throwing exceptions, ...).

An important point to take into consideration is that different GEM emulators on x86_64 might have different memory access interfaces. For example, IPBus is probably not the most efficient way to access the envisioned PCIe emulator.
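
To make the abstraction suggested in point 2 concrete, here is a minimal interface sketch; the class and method names are purely illustrative and do not correspond to an existing xhal API. Such an interface would also make it easy to swap in the different memory-access backends mentioned above.

```cpp
// Illustrative only: a generic memory-access interface that could live in xhal,
// with interchangeable backends per target. None of these names exist in xhal today.
#include <cstdint>
#include <vector>

class MemoryInterface {
public:
    virtual ~MemoryInterface() = default;
    // Implementations are free to throw exceptions on failure rather than
    // returning libmemsvc-style error codes.
    virtual std::vector<uint32_t> read(uint32_t addr, uint32_t words) = 0;
    virtual void write(uint32_t addr, const std::vector<uint32_t> &data) = 0;
};

// Possible backends: libmemsvc on the CTP7, uHAL/IPBus for the GLIB container,
// an mmap'ed PCIe BAR for a future PCIe emulator.
class MemsvcBackend : public MemoryInterface { /* wraps memsvc_read/write */ };
class IPBusBackend  : public MemoryInterface { /* wraps uHAL block transfers */ };
```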

Finally, a possible long-term solution would be to get rid of the container altogether. The multiple bind mounts seem to show that this is actually the goal. That solution would require: (1) removing all the SSH calls and the configuration files on the CTP7/GEM emulator, (2) no longer using IPBus, and (3) being able to change the rpcsvc listening port.

@jsturdy
Contributor Author

jsturdy commented Jan 15, 2020

> First of all, the maintained version of the gem-emulator is on the CERN GitLab for its great CI integration and no longer on GitHub ;)
>
> About the libmemsvc wrapper, I would prefer not to include it in the ctp7_modules, simply for a separation of concerns between repositories. The infrastructure/helpers would better live in xhal rather than in ctp7_modules, which implements the real actions.
>
> I'd consider two other ideas:
>
> 1. Create a sysroot image similar to what exists for the CTP7; the `libmemsvc` wrapper would then continue to live in `gem-emulator`. The mid-term goal is to include `gem-emulator` into the `GEM_AMC` repository since the container image and the firmware need to be kept in sync. One can consider the container as the Linux image of the CTP7.

So now ctp7_modules will require yet another build-time dependency?
If this can happen on the timescale of "already done", then have the GLIB (allowing for eventual non-GLIB x86_64 options) "peta stage" install to:

/opt/gem-peta-stage/glib/

as it would for the CTP7 peta stage:

/opt/gem-peta-stage/ctp7/usr/lib/libmemsvc.so
/opt/gem-peta-stage/ctp7/usr/lib/libmemsvc.so.1.0.1
/opt/gem-peta-stage/ctp7/usr/lib/libmemsvc.so.1
/opt/gem-peta-stage/ctp7/usr/include/libmemsvc.h

Accommodating this would require minimal changes to the makefiles.

> 2. Add the `libmemsvc` wrapper in `xhal`. In fact, this would be the opportunity to abstract the memory accesses from `libmemsvc`. Indeed, there is no guarantee that the API will continue to be supported with the new ATCA board. Even if it is, the current wrapper is a hack which re-implements a lightly documented API. It also gives more freedom to implement register accesses having the desired behavior (throwing exceptions, ...).

I don't know what benefit this offers over bundling the wrapper in ctp7_modules... it seems to me that it is more semantically/functionally part of the ctp7_modules side of the equation than the xhal side of the equation (and as far as I recall, it's also not needed outside of ctp7_modules)

> An important point to take into consideration is that different GEM emulators on x86_64 might have different memory access interfaces. For example, IPBus is probably not the most efficient way to access the envisioned PCIe emulator.

Possibly, but IPBus has long had PCIe integration, so rather than reinventing the wheel, using IPBus offers continuity with minimal additional development.

> Finally, a possible long-term solution would be to get rid of the container altogether. The multiple bind mounts seem to show that this is actually the goal. That solution would require: (1) removing all the SSH calls and the configuration files on the CTP7/GEM emulator, (2) no longer using IPBus, and (3) being able to change the rpcsvc listening port.

I can't speak to this right now; the containerized approach offered the long-desired emulator for a GLIB, and changing this now is definitely not something I'd advocate putting any time into in the near term.

Long story short, we need a solution on the timescale of "right now" and I see two possibilities:

  1. what I did initially to make things compile (this can eventually even be moved to any other location)
  2. what @lpetre-ulb proposes in his first point, but with an RPM ready to go now

@lpetre-ulb
Contributor

> So now ctp7_modules will require yet another build-time dependency?

Yes, but it makes sense for the ctp7_modules to depend on a sysroot for each supported target.

> If this can happen on the timescale of "already done", then have the GLIB (allowing for eventual non-GLIB x86_64 options) "peta stage" install to:
>
> /opt/gem-peta-stage/glib/
>
> as it would for the CTP7 peta stage:
>
> /opt/gem-peta-stage/ctp7/usr/lib/libmemsvc.so
> /opt/gem-peta-stage/ctp7/usr/lib/libmemsvc.so.1.0.1
> /opt/gem-peta-stage/ctp7/usr/lib/libmemsvc.so.1
> /opt/gem-peta-stage/ctp7/usr/include/libmemsvc.h
>
> Accommodating this would require minimal changes to the makefiles.

Since the container image already exists, it is trivial to extract a sysroot out of it. What files do you expect to be present in the sysroot? If /opt/gem-peta-stage/glib is supposed to be used for the --sysroot compiler option, it must include more files than just the libmemsvc files.

We obviously come back to the synchronization between the container base image and the host OS. Using a real --sysroot provides the best compatibility between the ctp7_modules and the container image.

Also, why is the sysroot still called "Peta stage"? It is not a staging area, and the "Peta" refers to Xilinx PetaLinux, which might or might not be relevant for future boards/emulators.

> I don't know what benefit this offers over bundling the wrapper in ctp7_modules... it seems to me that it is more semantically/functionally part of the ctp7_modules side of the equation than the xhal side of the equation (and as far as I recall, it's also not needed outside of ctp7_modules)

Actually, I'm still not sure what the xHAL repository is supposed to host... I had always thought of it as a support repository which contains everything required to build the RPC modules (xcompile and xhalarm) and interface them with the client (xhalcore).

> Possibly, but IPBus has long had PCIe integration, so rather than reinventing the wheel, using IPBus offers continuity with minimal additional development.

I would tend to agree. However, the register accesses in the ctp7_modules do not map well at all onto IPBus transactions, i.e. you can't dispatch batches of transactions. The overhead is huge.
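
For illustration, the difference is sketched below; the register names and the connection setup are placeholders, not registers used by ctp7_modules.

```cpp
// Illustration of the batching point with uHAL; register names are placeholders.
#include <cstdint>
#include "uhal/uhal.hpp"

void batched_vs_single(uhal::HwInterface &hw) {
    // Batched pattern: queue several reads, pay a single network round trip.
    uhal::ValWord<uint32_t> id  = hw.getNode("GEM_AMC.GEM_SYSTEM.BOARD_ID").read();
    uhal::ValWord<uint32_t> ver = hw.getNode("GEM_AMC.GEM_SYSTEM.RELEASE").read();
    hw.dispatch();

    // Per-register pattern (what independent register-access calls amount to):
    // every access dispatches on its own, so each one pays the full round trip.
    uhal::ValWord<uint32_t> again = hw.getNode("GEM_AMC.GEM_SYSTEM.BOARD_ID").read();
    hw.dispatch();
}
```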

Xilinx provides IP cores which map PCIe MMIOs to an AXI bus. This is extremely close to what is done on the CTP7. There is the question of how to map the PCIe address space to the user space, but that shouldn't be hard to solve (the CTP7 solution wouldn't work since the base address is dynamic; and it is dangerous anyway).
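
One common way to do that mapping from user space is sketched below, assuming the emulator exposes its BAR through the sysfs PCIe resource files; the device path is a placeholder, and the dynamic base address caveat above still applies.

```cpp
// Sketch: map a PCIe BAR into user space via sysfs and access it as 32-bit registers.
// The resource path is a placeholder, e.g. /sys/bus/pci/devices/0000:01:00.0/resource0.
#include <cstddef>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

volatile uint32_t *map_bar0(const char *resource, size_t size) {
    int fd = open(resource, O_RDWR | O_SYNC);
    if (fd < 0) return nullptr;
    void *base = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  // the mapping remains valid after closing the descriptor
    return base == MAP_FAILED ? nullptr : static_cast<volatile uint32_t *>(base);
}
```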

> Long story short, we need a solution on the timescale of "right now" and I see two possibilities:
>
> 1. what I did initially to make things compile (this can eventually even be moved to any other location)
> 2. what @lpetre-ulb proposes in his first point, but with an RPM ready to go now

Since the RPM content can be extracted from the container image, producing it should be doable quickly.

@jsturdy
Contributor Author

jsturdy commented Jan 17, 2020

Eliding everything except the core issue at this time.

> Since the container image already exists, it is trivial to extract a sysroot out of it. What files do you expect to be present in the sysroot? If /opt/gem-peta-stage/glib is supposed to be used for the --sysroot compiler option, it must include more files than just the libmemsvc files.

It isn't really, but mostly because I always felt that treating the glib container as different from a standard x86 build was unnecessary overhead...

> We obviously come back to the synchronization between the container base image and the host OS. Using a real --sysroot provides the best compatibility between the ctp7_modules and the container image.

Currently, what "compile"-time (container build, not ctp7_modules) dependencies are separate from my (possibly unrealistic) view of "run-time binds" into the container? Is it just libmemsvc that would potentially need to be recompiled if, e.g., uhal were updated on the host and the .so is bind-mounted into the container (again, my idealistic vision of having the container be the minimal bridge between host and HW)?

@lpetre-ulb
Contributor

> Since the container image already exists, it is trivial to extract a sysroot out of it. What files do you expect to be present in the sysroot? If /opt/gem-peta-stage/glib is supposed to be used for the --sysroot compiler option, it must include more files than just the libmemsvc files.

> It isn't really, but mostly because I always felt that treating the glib container as different from a standard x86 build was unnecessary overhead...

Since the glib emulator is a container, software should be built against what is present inside the image, even if the architecture is the same. It is possible to work against that, but it increases the possibility of issues, mainly at run-time.

If the glib sysroot is not a complete sysroot, it would probably be best to install the libmemsvc wrapper in another location, as a regular library. I'm not sure where the sources should live, but definitely not under a new repository.

Another possibility would be to build the RPMs for a given distribution and then install them in the container during its build. If there is an incompatibility, it should be caught by the package manager. However, the synchronization between the host and the container would be lost.

Bind-mounting executable files cannot provide any safety.

> We obviously come back to the synchronization between the container base image and the host OS. Using a real --sysroot provides the best compatibility between the ctp7_modules and the container image.

> Currently, what "compile"-time (container build, not ctp7_modules) dependencies are separate from my (possibly unrealistic) view of "run-time binds" into the container? Is it just libmemsvc that would potentially need to be recompiled if, e.g., uhal were updated on the host and the .so is bind-mounted into the container (again, my idealistic vision of having the container be the minimal bridge between host and HW)?

The whole root systems are independent, from libc and libstdc++ to xerces-c and lmdb. The chances are high for them to be the same version, or at least compatible, but it is impossible to provide any guarantee. Ensuring that the two systems (host and container) are always compatible goes against the container design.

I understand your idea of a minimal bridge between the host and the HW. It is a good idea for compatibility and system administration, but it doesn't fit well with a container architecture. The only solid implementation I can think of is the long-term solution proposed in my first reply. It might be possible to hack something together using Linux namespaces, but that would be very tricky to get right and keep stable.

@jsturdy
Contributor Author

jsturdy commented Jan 17, 2020

Yeah... OK, if you can put together the RPM, I can try plugging it into the builds to see how much more needs to change (xhal will now have to generate an additional package during the x86 build to provide the appropriate library for developing the container modules against).

If it looks like it will be too onerous, then I'll drop the x86 build from the makefile, as it's not currently a critical path item, and push the current changes for review.

@lpetre-ulb
Contributor

OK, that shouldn't be too complicated. The RPM should have the following structure:

/opt/gem-peta-stage/glib
├── lib -> usr/lib
├── lib64 -> usr/lib64
└── usr
    ├── include
    ├── lib
    └── lib64

(maybe without the symlinks)

@lpetre-ulb
Contributor

> Currently, what "compile"-time (container build, not ctp7_modules) dependencies are separate from my (possibly unrealistic) view of "run-time binds" into the container? Is it just libmemsvc that would potentially need to be recompiled if, e.g., uhal were updated on the host and the .so is bind-mounted into the container (again, my idealistic vision of having the container be the minimal bridge between host and HW)?
>
> The whole root systems are independent, from libc and libstdc++ to xerces-c and lmdb. The chances are high for them to be the same version, or at least compatible, but it is impossible to provide any guarantee. Ensuring that the two systems (host and container) are always compatible goes against the container design.

I've hit a first incompatibility while installing the GLIBs for the QC7 setup. The uHAL library versions were different between the container and the host. This shouldn't have been an issue, but the Python bindings are installed in /usr while the C++ binaries are installed in /opt (crazy filesystem organization...). Using the uHAL Python bindings proved to be impossible without updating cactuscore-uhal-tests:

[gemuser@glib-shelf02-slot11 ~]$ reset_gtx.py
Traceback (most recent call last):
  File "/usr/local/bin/reset_gtx.py", line 3, in <module>
    import uhal
  File "/usr/lib/python2.7/site-packages/uhal/__init__.py", line 14, in <module>
    exec('raise type(e), type(e)(e.message + msg_suffix), sys.exc_info()[2]')
  File "/usr/lib/python2.7/site-packages/uhal/__init__.py", line 5, in <module>
    from ._core import *
ImportError: /usr/lib/python2.7/site-packages/uhal/_core.so: undefined symbol: _ZN4uhal5tests22measureFileReadLatencyERKSsjjmb
N.B. ImportError (or derived exception) raised when uHAL's __init__.py tries to load python bindings library
     Maybe you need to add "/opt/cactus/lib", or some other path, to the "LD_LIBRARY_PATH" environment variable?

This reinforces my opinion that a container, designed as an isolated environment, should remain one. Binaries shouldn't be bind-mounted, but compiled specifically for the container.
