Rethinking the runner configuration interface: integration of the OSVVM methodology into VUnit #772
The advantage of generics over file inputs or similar methods is that such an input is a constant and can be used during elaboration. Other techniques might not be available early enough or might not be static enough for some use cases. A JSON parser doesn't need VHDL-2019, but an implementation might be easier and have less repeated code. While a JSON parser already exists and can be transformed into an improved variant for simulation (no fixed arrays, but dynamic memory allocation and pointer structures instead of index arrays), I consider YAML a better solution, especially as it is easier to write to a file. If INI is needed, I propose supporting TOML, which is a superset.
The VUnit dependency scanner recognizes configurations as it is today. So an intermediate (?) solution would be to combine our ability to control the testbench from a lower-level test control entity (see https://vunit.github.io/run/user_guide.html#distributed-testbenches) with a new feature that recognizes a testbench configuration as a test case for that testbench. All the user would have to do to a standard OSVVM testbench is to add the

Update:
Ouch, I didn't know that. Well, that is a problem.
WRT using file inputs, ActiveHDL really has a hard time staying rooted in the simulation start directory and seems to creep into its library directory any time you turn your back or during certain updates - was just wrestling with an issue in ActiveHDL 12 that was not there in 11.
@JimLewis In general tests should be given private directories to work from, or input and output files from different test cases will interfere with/overwrite each other. Just need a way to pass that info. Not knowing where the testbench is running from becomes another problem.
@LarsAsplund For me, all tests in the same test suite (testing similar functionality like a UART core or a particular package) have a unique name and unique files they operate on (often named similarly to their test name). Only when I switch to a new test suite do I need to run them out of a different directory. The only time I have had to worry about name conflicts is when I am reusing a test case on a similar design.
@LarsAsplund, I see two main challenges in your proposal:
That's why I proposed specifying what is the content/format/syntax of the

In other words, the challenge here is the complement of pyEDAA.Reports. Reports is about the outcome of running testsuites, tests, bins, etc., while Runner is about how to tell which of those to schedule. Hence the Runner API/specification needs, conceptually, to be a subset of Reports. With regard to where to execute the simulations, IMHO that is a completely different topic. VUnit can already run simulations using ActiveHDL and OSVVM. None of that is affected by the proposal in this issue. OSVVM's TCL plumbing uses a different approach from VUnit's Python plumbing in that regard; however, OSVVM's TCL is out of scope here because the point is precisely to run/use OSVVM through VUnit (using EDAA to treat
@umarcor The issue with ActiveHDL is not being able to run simulations within a given environment. It is getting the VHDL to write files in the expected locations when running that is interesting, especially if the VHDL code does not specify path information. WRT EDAA, why not run TCL using something like tkinter in Python?
@umarcor WRT environment variables, you have to convince vendors other than the one who has proactively implemented VHDL-2019 to implement the 2019 feature that allows environment variables to be read. That would be my only concern with that path. It would be nice though.
Not sure there needs to be one method for discovering testbenches or passing test configurations that everyone has to support. Hell, there might even be multiple ways of passing configurations for each methodology due to tool or language limitations. There should be multiple kinds of testbench objects, and the runner should dispatch to the testbench object to decide how to run those tests instead of baking this into the tool classes. I have suggested this previously. Each methodology should provide these classes so they don't have to be included in VUnit, but VUnit will need to be updated to defer decisions to the testbench objects where appropriate. That way VUnit can leave its current testbench discovery, configuration passing, and test result system alone.
I understand. However, the limitations of one simulator of one vendor for dealing with relative file paths is out of scope of this issue.
We don't want to depend on TCL. There are other people providing TCL based solutions, such as most vendors or OSVVM.
Python and C can access environment variables easily, and any of those can be used through the cosimulation interfaces supported by the tools already. Actually, by using a protected type, it would be possible to support multiple transport strategies. Anyway, in OSVVM's case, I believe it's better to start with just files, which is the most likely solution for communicating between TCL and VHDL.
@umarcor TCL can access environment variables. Is there something I am missing here? Setting them is nothing more than:

Does VUnit have a library that gives VHDL the ability to read environment variables? I would be open to reusing it - if it can address everything we need.
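For comparison, setting and reading an environment variable on the scripting side is equally trivial in Python (TCL's `set env(...)` is the equivalent); the variable name and value below are made up for illustration. As discussed above, the hard part is reading the variable from VHDL, which needs VHDL-2019 or cosimulation:

```python
# Sketch: an environment variable as a transport for a runner string.
# The name VUNIT_RUNNER_CFG is hypothetical, not an actual VUnit variable.
import os

os.environ["VUNIT_RUNNER_CFG"] = "enabled_test_cases : test_reset"

# A child process (e.g. the simulator) inherits the environment and,
# given VHDL-2019 or a cosimulation bridge, could read the same value.
assert os.environ.get("VUNIT_RUNNER_CFG") == "enabled_test_cases : test_reset"
```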
Actually, I mentioned the discovery but I left the details out of the discussion on purpose. I wanted someone else to bring it up!
Nevertheless, I believe that both @LarsAsplund and I agree on making an effort in that regard, as long as at least a few people are explicit about being willing to use it. We don't want all OSVVM/cocotb users to depend on VUnit, not at all. But we want to have some potential users for that solution (apart from ourselves).
It would be nice if each project supported any of the transport mechanisms (CLI, envvars or files); thus, the first of the diagrams in the figure above. However, that should be unrelated to the content: the actual configuration (list of tests to run and maybe attributes for those tests). Hence, each of the projects can start with using one transport mechanism only (the other diagrams in the figure).
I don't understand the need for multiple kinds of testbench objects. My understanding is that we need a single object with attributes. Then, we can set/get matching attributes in the Python part and in the VHDL/OtherPython part. In essence, all three frameworks run exactly the same simulations and use the tools in exactly the same manner. The only difference between VUnit, OSVVM or cocotb testbenches is what the code does inside the simulator; that is, after the execution of a testbench is started. Neither of them can do anything before run initialization. That's the reason for VUnit's Python, OSVVM's TCL or cocotb's makefiles to exist.
I agree with each framework/methodology providing a class for discovering the testbenches and the tests within them. That is needed by VUnit in order to support filtering the tests (in the VUnit CLI) and in order to know what the entrypoint is for each testbench. However, I'm not sure about the multiple testbench objects thing. The fundamental information that VUnit needs is whether each of the tests passed or failed. That is used for setting the overall exit code and for generating summaries (in the terminal or through an XUnit report). Any additional information can be processed in the post hook, through the results object. Hence, particular post-hook processing functions might be provided for cocotb and/or OSVVM, without necessarily modifying VUnit's codebase.
@JimLewis the problem is not setting environment variables. That is easy regardless of the language (TCL, Python, bash, C...). The problem is accessing envvars from VHDL. As we discussed, that requires VHDL 2019 or direct cosimulation (which is not standardized yet).
It does not. I could provide it through ghdl/ghdl-cosim (for GHDL), or through VUnit/cosim (for GHDL, and hopefully ModelSim/QuestaSim). However, that's "killing flies with cannon shots". Providing a library for reading envvars through co-simulation (as a workaround for the lack of VHDL 2019 support) would be a "tiny" project itself, such as JSON-for-VHDL; and it does not solve our problem: an agreement on the format/syntax of the runner interface/objects and which attributes it needs. I prefer to rely on JSON-for-VHDL, which exists already and meets the needs. In VUnit, the runner tells the testbench which tests to run and which parameters (top-level generic values) to use for each test, along with the location of the testbench source file. Then, the testbench produces an output that VUnit can use to know which specific tests failed and which passed. So, what are the equivalent requirements in OSVVM? You mentioned that you wanted to pass some parameters from TCL to VHDL. Which are those parameters? For the second part, I know the answer: you produce a YAML file which VUnit could read/use.
@umarcor What do you mean by discovering? Is your intent automating the discovery of all tests? That is going to be an interesting challenge.

For example, in the OSVVM AXI we have two flavors of AXI interfaces - our record based ports (CTI) and our internal signals which are connected to via external names (VTI). The internals of the designs are identical - in fact the differences are limited primarily to the entity declarations. Hence, the test framework for the CTI is in AXI4/Axi4/testbench. The testbench framework for the VTI is in AXI4/Axi4/testbenchVti. In both frameworks, the entity for the test sequencer (the part that implements a test) is named TestCtrl. Both versions of TestCtrl - either through ports or external name references - make the same names available to test architectures. Hence, each can and does run the same test case architectures.

Is it your thought that you are going to automate the discovery of these tests to run? OSVVM currently instead relies on the scripting to explicitly specify which tests to run. By explicitly listing the tests to run, the scripts can put the information into the YAML file that says: I am planning on running this test case. Hence, if for any reason the test case fails to run, the information that it was supposed to get results for a particular test case is already in the YAML file.
@JimLewis the discovery strategy is radically different for OSVVM and cocotb; hence, discussing both at the same time can be slightly misleading, although necessary. From cocotb's and Kaleb's point of view, testbenches are Python scripts/modules, and VUnit provides a Python-based infrastructure. By default, VUnit and cocotb will run in completely different instances of Python; however, because both are using the same language, there are more opportunities for "discovery". VUnit can import cocotb testbenches without executing them, and use Python's inspection features. It gets all the semantic information about what a module is, which functions it has, etc. In OSVVM's case, first we need to solve the problem of making
In OSVVM, there is one test case per file. It has a name. We don't have any means to run or report on a portion of a test. That is a cool feature of VUnit. I generally don't think that small. If I am testing AxiStream byte enables, I am going to write a test that tests all permutations of the byte enables and have to always run all of them - for better or worse.

What I see a need to communicate is what mode the test is running in. Debug - and hence, enable some extra flags for logging, vs regression - and hence, only log to files (in particular, turn off OSVVM mirror mode if it is on), vs artifact collection - a final simulation run, hence, turn on reporting for artifact collection that is not normally needed (OSVVM's

Curiously, many simulators allow you to run with multiple top-level structures. This may be a curious way to accomplish what I need to do. Load a regular simulation with a second top-level that simply changes settings. There could be multiple versions of the second top-level that could accomplish the different classes of settings that I need to make.

While the multiple-tops approach sounds interesting, I think the path that VUnit has been using may prove to be more flexible. I am not against setting top-level generics if all simulators can do it and they can reach beyond the configuration and set it for the test case entity that is specified by the configuration. If all simulators do that in a similar fashion, I think maybe we should codify it in the standard. Not the simulator part, but explicitly say that the language requires the simulators to provide a mechanism to do it - it would not be a change for existing vendors, but an expectation for anyone new - or to put pressure on certain vendors.
We are all coming to the same page now! 🎉 I am particularly happy because I did not expect to sort this out today 😄
That makes sense. I assumed you might have support for multiple tests per file because both VUnit and cocotb allow it. However, this is where the methodology differs. In VUnit and cocotb, the generation of parameter sets is done outside of VHDL, while OSVVM handles it inside. Conceptually, the inputs used for generating VUnit configurations in Python are the inputs that a single OSVVM test needs, because it does the generation internally. That equivalency is actually cool. However, you do have the concept of testsuites in OSVVM, which means you do have two levels of hierarchy for organising/calling the tests. Is that done by the location/name of the
Nice! This is the information we need. The attributes (Debug, flags, artifact collection) and the directory information. VUnit does provide the

Once we understand all of this information, we might write a prototype using JSON-for-VHDL on the VHDL side. That is not the ideal solution, but it is usable already, and we should be able to wrap it in a protected type easily. On the other side (TCL or Python), I think that VUnit supports attributes for testbenches already. I'd need to look into it more closely.
That is very interesting. I have never seen multiple top level units used for simulation. I think that GHDL supports
We did have some issues with VUnit's approach, because passing complex generics through the CLI is not without issues. Fortunately, the usage of JSON-for-VHDL along with basic encoding (in order to ensure a "simple" character set) proved to be quite robust. I am particularly happy with how that collaboration turned out, even though we should pay attention to JSON-for-VHDL's technical debt (when VHDL 2019 is supported by vendors).
I think this is sensible. Most simulators can handle strings and naturals at least. Negatives and reals might be more challenging. EDIT: See #588. Note that we do have a "standard" argparse or getopts in VHDL already: JSON-for-VHDL. I.e., accept the arguments as a JSON file or string, instead of "traditional" GNU arguments with
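The JSON-plus-encoding approach mentioned above can be sketched as follows. This only shows the principle of restricting the character set before passing a complex value on a simulator command line; the exact encoding used by VUnit and JSON-for-VHDL may differ, and the generic names are made up:

```python
# Sketch: serialise a complex generic as JSON, then hex-encode it so the
# resulting string contains only [0-9a-f] and survives any CLI quoting.
import json
from binascii import hexlify, unhexlify

generics = {"width": 32, "pattern": [1, 2, 3]}

encoded = hexlify(json.dumps(generics).encode("ascii")).decode("ascii")
assert encoded.isalnum()  # safe to pass as e.g. -gcfg=<encoded>

# The VHDL side (or this reverse sketch) decodes and parses it back.
decoded = json.loads(unhexlify(encoded).decode("ascii"))
assert decoded == generics
```

The design choice here is robustness over readability: a hex string is twice as long as the JSON, but it never needs escaping.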
@umarcor If you look at the OSVVM YAML results file, within a given test case, the VHDL code currently only writes out the Name, Status, and Results parts. As we discuss it, it may make sense to move the FunctionalCoverage to the VHDL code also - because it is easy enough to do there. The rest of the information comes from the scripts. The build name is the conjunction of the script name and the directory from which it runs (with a few modifications) - the intention is to get a unique name. For example, all of the VC have a RunAllTests.pro. If we were then to build the scripts from multiple VC separately, without the directory name, they would not be unique.
I think we should support providing

The fact that Active-HDL has an unreliable working directory is a problem, but Aldec is fairly responsive to bug reports, so that is probably something that could be solved much quicker than updating VHDL. @JimLewis Do you have an MVE showing this behavior?

To get somewhere without waiting for third parties, I started on a prototype which leaves VUnit untouched but enables discovery and execution of OSVVM testbenches with a reusable mechanism. It's not completed yet, but basically it uses VUnit's parsing capabilities to scan and find OSVVM tests. It targets the simple use case where the configuration is only used to determine what test controller architecture to instantiate. These architectures are the tests. Then we can use the VUnit preprocessor to transform the top-level test-controller component instantiation into a case generate statement based on the test/architecture name. Each branch of that case generate contains a direct entity instantiation of that test case. With the configurations out of the loop we can use top-level generics, and the test to run is provided that way and is controlled by a VUnit Python configuration. What basically happens is that VHDL configurations are transformed into VUnit Python configurations. Pros/cons:
@LarsAsplund MVE: Write a program that opens a file and writes to it. Now look for the file. :) Does running a test using a configuration disable a tool's ability to map a generic? I have never used a tool's capability to map a generic, but can it map a generic anywhere or does it have to be at the top level?
Looking at the Questa documentation, it will map any unset generic at any level in the design. So at least for QuestaSim, it looks possible to have a generic on TestCtrl and set it in the vsim call. |
@JimLewis Ok, I'll create an MVE and report the issue. The global generic setting that ModelSim and Questa use is unique to them, I think, so it wouldn't work generally. It's also too magic/implicit for my taste. It could easily lead to some very tricky bugs. In fact, I think it was a bug report on VUnit that made us discover this behavior. Today we include the top-level path when passing generics to ModelSim/Questa:
Looks like ModelSim/Questa and ActiveHDL/RivieraPRO would support paths to the top level. Since they are path based, you should be able to apply them when running with a configuration, and should be able to set something on any component; so in addition to your example, it should also support

Yes, I had noted the cuteness of being able to set all tpd generics by doing
@JimLewis While working with configuration support and the classic OSVVM test structure, it strikes me that I don't know if this is the only preferred OSVVM structure and, if so, why that is. Is there something lost from an OSVVM perspective if you were to flip things upside down? Rather than having a common top-level test harness with a set of different test controllers selected by a set of configurations, you could have the test controllers as top levels that instantiate the common test harness. That way there is no need for configurations, no need for a well-defined working directory in the simulator, and generics are fully supported.
@LarsAsplund I would need to sit with that thought for a while. Here are my initial thoughts: It also makes the usage of external names challenging. They would only be able to be aliased in a process or block statement that comes after the test harness instance. OTOH, I would have to classify my external name usage. All of the virtual transaction interfaces can be resolved in the test harness. Then what does that leave? In the OSVVM 2021.06 release, we are moving away from external names to access things like FIFOs declared in models (they were previously shared variables - they are now part of a singleton and we can easily pass the reference to the singleton). So I think the main use of external names is to interact with pieces of the design that are intentionally left out of a simulation.
I worked a bit more with the configuration support through pre-processing, and one thing that needs to be decided is how VHDL configurations should be treated in relation to VUnit testbenches, configurations, and test cases. One way of looking at it is that a VHDL configuration is a test case, since it is a way to reuse a test harness/fixture in the same way a VUnit test case does. OTOH, it is also a configuration of the testbench, which makes it similar to a VUnit configuration. It's also the top-level for the simulator, i.e. similar to a testbench. My initial approach is to treat it as a special case of VUnit configurations. In that way they are both parts of the same configuration concept and not two different things unfortunately given the same name.

Currently you need to do an import

```python
from vunit.vhdl_configuration import VHDLConfiguration
```

and then you create a `VHDLConfiguration` object:

```python
# The VHDLConfiguration object is created before the source files are added
vhdl_configuration = VHDLConfiguration(vu)

vu.add_library("lib").add_source_files(Path(__file__).parent / "*.vhd")

# The add method adds a VUnit configuration to a testbench for every
# VHDL configuration found for that testbench
vhdl_configuration.add()
```

I realized that it might be as easy to do this in a single step, so this might change. What's more important though is that if VHDL and VUnit configurations are both part of the same concept, we also need to decide how VUnit and VHDL configurations work side by side. For example, how is a generic passed to the entity selected by the VHDL configuration, and what is the user API? Currently, I can replace the call to the `add` method with:

```python
for testbench, configuration_name, generics in vhdl_configuration:
    testbench.add_config(name=configuration_name, generics=generics)
```

In fact, this is how the `add` method is implemented. With the generics dictionary exposed it is easy to add VUnit configurations to that. For example, let's say we have a `width` generic:

```python
for testbench, configuration_name, generics in vhdl_configuration:
    for width in [8, 32]:
        generics["width"] = width
        testbench.add_config(name=f"{configuration_name}_width={width}", generics=generics)
```

With this we get a configuration concept combining both VHDL and VUnit configurations:

```
lib.tb_with_vhdl_configurations.test_reset_width=8
lib.tb_with_vhdl_configurations.test_reset_width=32
lib.tb_with_vhdl_configurations.test_state_change_width=8
lib.tb_with_vhdl_configurations.test_state_change_width=32
```

In this case my testbench (`tb_with_vhdl_configurations`) has the VHDL configurations `test_reset` and `test_state_change`.

Another problem I see is that VUnit configurations support setting generics, but generics are not supported for VHDL configurations (at least not for years to come). Any attempt at combining the two configuration concepts through generics will build on a mechanism to fake generics. What we need is a shared information concept at a higher level, one that doesn't care if the exchange mechanism is based on generics, files or anything else. A configuration database like in UVM comes to mind. I've done configuration databases in the past using JSON, but a proper concept should also hide JSON. That is an implementation detail. Any thoughts?
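As a rough illustration of the configuration-database idea (the class, file name and keys below are all hypothetical, not a VUnit or UVM API), the point is that callers only see set/get, while the exchange mechanism, here a JSON file, stays an implementation detail:

```python
# Sketch: a minimal file-backed configuration database, keyed per test.
# Swapping the JSON file for generics or envvars would not change the API.
import json
import tempfile
from pathlib import Path

class ConfigDB:
    """Illustrative key/value store shared between script and simulation."""

    def __init__(self, path):
        self._path = Path(path)
        self._data = json.loads(self._path.read_text()) if self._path.exists() else {}

    def set(self, test_name, key, value):
        # Flush on every write so another process (the simulator) can read it.
        self._data.setdefault(test_name, {})[key] = value
        self._path.write_text(json.dumps(self._data))

    def get(self, test_name, key):
        return self._data[test_name][key]

tmpdir = Path(tempfile.mkdtemp())
db = ConfigDB(tmpdir / "config_db.json")
db.set("test_reset", "width", 8)
db.set("test_reset", "mode", "debug")

# A fresh instance reads the same data back from the file:
assert ConfigDB(tmpdir / "config_db.json").get("test_reset", "mode") == "debug"
```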
@JimLewis I tried to recreate that write/read problem you saw in ActiveHDL 12. Failed to do so. If I write and then read I get the same result and the file exists on the hard drive. Did you run through the GUI or was it a scripted run?

```vhdl
file_open(write_fptr, "test.txt", write_mode);
swrite(write_line, "Hello");
writeline(write_fptr, write_line);
file_close(write_fptr);

file_open(read_fptr, "test.txt", read_mode);
readline(read_fptr, read_line);
sread(read_line, read_str, strlen);
file_close(read_fptr);

check_equal(read_str(1 to strlen), "Hello");
```
I submitted the test case at: https://synthworks.com/WriteToFile.zip Your test checks: if I write a file in a given tool, can I read it back in that tool? That should work independent of the frame of reference a tool uses. What we need is:
This turns out to be trivial. For Active, it searches for the files in:
A simulator is going to write relative to where it is running from, and for Active, by default, this is the $dsn directory. Their response is that we need to set the environment variable:
It is unfortunate that the only way you would learn this is by reading the list of environment variables. |
@Paebbels, we need to carefully read this dialogue that Lars and Jim had during the last 3 weeks, in line with ProjectModel. Particularly, the semantics of "harness/fixture" that Lars mentioned in #772 (comment) are equivalent/similar to the discussion we had with @ktbarrett in the context of CoCoTb. |
@umarcor, @LarsAsplund, @JimLewis I'll push (soon) my extensions to ProjectModel on a side branch for discussion. Then we can discuss how it would relate to a project, design, fileset, etc. |
@JimLewis Ok, I see. I did a new test where I write from Python and read from VHDL and write from VHDL and read from Python. What I see is that everything happens in the current working directory (simulator output path) from which Python calls From a VUnit point of view the most important thing is that Python can write the
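The file-based exchange described above can be sketched as follows. The file name `runner.cfg` and its content are assumptions for illustration; the exact name is not spelled out in the comment above:

```python
# Sketch: the script side writes a runner configuration file into the
# simulator's working directory before the simulation starts; a simulator
# launched with that directory as cwd can then find it by relative name.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as simulator_output_path:
    cfg_file = Path(simulator_output_path) / "runner.cfg"
    cfg_file.write_text("enabled_test_cases : test_reset\n")
    # ... here the simulator would be started with cwd=simulator_output_path ...
    read_back = cfg_file.read_text()

assert read_back == "enabled_test_cases : test_reset\n"
```

This is exactly why a predictable working directory matters: a relative `file_open` in VHDL resolves against the simulator's cwd.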
Some IPs expect input files to be in the simulator directory. For that case VUnit provides

Anyway, there seem to be ways of doing this in a predictable manner, so this shouldn't be a blocker.
My next step now that this is cleared is to allow VUnit to use a testbench configuration as a top-level. In that case

Adding the configuration database concept to work around the generics limitation will be a separate step. Such a configuration database will drive the need for a JSON format or similar. With that in place, one could also change the format of
@umarcor @Paebbels Since VUnit belongs to the xUnit family of test frameworks we tend to use that terminology (or similar). See https://en.wikipedia.org/wiki/XUnit |
@JimLewis After looking a bit deeper into the VUnit simulator interface for Active-HDL, I see that we actually start the simulator in different directories depending on whether we're running in batch or GUI mode (this is the only supported simulator that has this behavior). I didn't write that code, but it's been there since we started to support Active-HDL in 2015, so it has always been like that.
@JimLewis Ok, now I see the same problem. It was different before but now it's even more different. The workaround to set
I've added initial support for VHDL configurations. See #784. It looks something like this:

I think this is a feature that can be merged, but the PR is still in draft since I want to add a few more things. See PR.
@umarcor @JimLewis @Paebbels #785 provides a solution for supporting configurations and is ready for review. If you're not interested in the details, I suggest you have a look at the examples that describe a few use cases. https://github.com/VUnit/vunit/tree/add-configuration-support/examples/vhdl/vhdl_configuration contains two use cases
https://github.com/VUnit/vunit/tree/add-configuration-support/examples/vhdl/vhdl_configuration/handling_generics_limitation contains examples of how to deal with the lack of generics support when using VHDL configurations. Both examples are about removing the VHDL configurations altogether
I also started to work on an example where the generics are placed in a file-based database using JSON4VHDL. Unfortunately, I ran into some problems. It turns out that one of the few bug fixes I did in the VHDL packages for the VHDL-2019 release (https://gitlab.com/IEEE-P1076/Packages-2019/-/merge_requests/11) hasn't been picked up by the vendors. Full circle... I will report this bug to the vendors and add such an example later.
@Paebbels I don't think all vendors will switch to VHDL-2019 in the near future so I made a workaround PR to JSON-4-VHDL: Paebbels/JSON-for-VHDL#14 |
@LarsAsplund The PR is merged. Thanks. |
@Paebbels Great, I will proceed and add such an example. Also realized that I didn't add support for many parallel threads. Now that the simulator reads the runner configuration from the current working directory all threads need to start from different directories or they will interfere with each other. |
The most popular open source VHDL verification projects are cocotb, OSVVM and UVVM, along with VUnit. As discussed in larsasplund.github.io/github-facts, there are some philosophical differences between them: OSVVM and UVVM are defined as "VHDL verification methodologies", cocotb is for (Python) co-routine co-simulation, and VUnit is a Python-aided framework of HDL utilities. Naturally, there are some overlapping capabilities because all of them provide basic features such as logging and building/simulation. Therefore, methodologies can be seen as bundles of utilities (frameworks), and some users might refer to using the VUnit runner as a methodology. Nonetheless, it is in the DNA of VUnit to be non-intrusive and allow users to pick features one by one, including reusing the methodologies they are used to.
Currently, it is possible to use OSVVM utilities/libraries in a VUnit environment. Although there are still some corner cases to fix (#754, #767, #768), it is usable already. In fact, some of VUnit's features do depend on OSVVM's core. However, it is currently not possible to use the OSVVM methodology as-is within VUnit. The OSVVM methodology uses top-level entities without generics or ports, and the entrypoints are VHDL configurations. Meanwhile, VUnit needs a top-level generic of type `string` in order to pass data from Python to VHDL.

Most simulators do support calling a configuration instead of an entity as the primary simulation unit. It should, therefore, be trivial to support OSVVM's configurations as entrypoints in VUnit. I am unsure whether VUnit's parser supports configurations in the parsing and dependency scanning features, but that should not be the main challenge anyway.
The main challenge we need to address is that passing generics to VHDL configurations is not supported by the language. If that were possible, the runner string might be forwarded to the entity within the configuration. For the next revision, we might propose enhancements in this regard, since revisiting the limitations of configurations is one of the expected areas to work on. Nevertheless, that will/would take several months or years until made available in simulators.
Yesterday, I had a meeting with @JimLewis and he let me know that he's been thinking about implementing some mechanism for passing data between the TCL scripts (`.pro` files) and the VHDL testbenches. We talked about `.ini`, `.yml` and `.json`, and I suggested using the latter because there is a JSON reader library available already: Paebbels/JSON-for-VHDL. In fact, JSON-for-VHDL is submoduled in VUnit, in order to pass very complex generics to the testbench.

I believe this is a good opportunity to document the syntax of VUnit's runner generic, make a public API from it, write a data model and provide multiple reader/writer implementations. @JimLewis said he has not put much thought into the data model yet, but he would be willing to include integration with VUnit in the scope when he works on it. Maybe there is no need for him to write a VHDL solution from scratch, and it can be based on JSON-for-VHDL + VUnit's runner package.
Enhance VUnit's simulator interfaces to support writing runner (CLI) arguments to a file or to an envvar
Currently, VUnit's runner expects to receive a string, which Python passes as a top-level generic. Actually, there is no limitation in VHDL for using an alternative method. The generic might be a path, and users might read the file in the testbench before passing it to the functions from the runner package. By the same token, the generic might point to a JSON file, and users might convert that to the string syntax expected by the runner. Hence, the main challenge is that VUnit's Python simulator interfaces do not support writing runner parameters to a file.
Well, that is not 100% correct: when GHDL is used, option `ghdl_e` prevents running the simulation and instead writes all the CLI arguments to a JSON file (#606); see `vunit/vunit/sim_if/ghdl.py`, lines 303 to 322 in 7879504.
Similarly, we might want to support passing runner arguments through an environment variable. In the context of integrating VUnit and cocotb, one of the requirements is specifying environment variables per test/testbench. That's because cocotb bootstraps an independent Python instance, and the UUT is the design; so data needs to be passed through the filesystem or envvars. In fact, this is a requested feature: #708. If we used the same mechanism for the runner, cocotb might re-implement VUnit's runner package in Python (which is "just" 1K lines of VHDL code for 100% compatibility). I believe that would allow plugging in cocotb's regression management. The remaining functionality would be for VUnit to "discover" the test cases before executing the simulations.
So, we might have an enumeration option to decide passing the runner string as a top-level generic, as an environment variable or as a file. Taking VHDL 2019 features into account, OSVVM and VUnit might end up using the envvar approach indeed.
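That enumeration can be sketched as a simple fallback chain on the receiving side. All names here are hypothetical, not VUnit API; the point is only that the same runner string can arrive through any of the three transports:

```python
# Sketch: resolve the runner string from one of three transports, in
# order of preference: top-level generic (here modelled as a CLI value),
# environment variable, or file in the working directory.
import os
from pathlib import Path

def get_runner_cfg(cli_value=None, env_var="RUNNER_CFG", file_path="runner.cfg"):
    if cli_value is not None:            # passed as a top-level generic
        return cli_value
    if env_var in os.environ:            # passed as an environment variable
        return os.environ[env_var]
    return Path(file_path).read_text()   # passed as a file

assert get_runner_cfg(cli_value="enabled_test_cases : all") == "enabled_test_cases : all"

os.environ["RUNNER_CFG"] = "from-env"
assert get_runner_cfg() == "from-env"
```

The content stays identical regardless of transport, which is the separation between transport and configuration argued for earlier in the thread.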
Enhance JSON-for-VHDL
The current implementation of JSON-for-VHDL is functional and synthesisable, but not optimal for simulation. Some years ago, @Paebbels and @LarsAsplund discussed writing an alternative implementation for simulation only, which would have fewer constraints and better performance. They also talked about using VHDL 2019 features. I don't remember if using those was a requirement for the optimised simulation-only implementation, or whether that could be done with VHDL 2008.
If we are to use JSON for sharing configuration parameters between VUnit's Python or OSVVM's TCL and VHDL, I guess we would make JSON-for-VHDL a priority dependency in the ecosystem.
Co-simulation
The run package might be enhanced to get its data from a foreign environment. By encapsulating the interaction with the "runner configuration model" in a protected type, we might provide various implementations. For instance, the VHDL testbench might query a remote service for which tests to run, and where to push the results.
Terminology: configurations
OSVVM uses VHDL configurations for composing the test harness and the test cases. At the same time, VUnit uses the term "configuration" to refer to a set of values for top-level generics. So, in Python, users can generate multiple sets of parameters for each testbench/testcase. That is not much of a conflict yet, because VHDL configurations are not used much in the VUnit ecosystem. However, if we are to improve the integration of OSVVM and VUnit, we might want to reconsider the naming.
/cc @Paebbels @JimLewis @ktbarrett @jwprice100