
Rethinking the runner configuration interface: integration of the OSVVM methodology into VUnit #772

Open
umarcor opened this issue Oct 27, 2021 · 40 comments
Labels: Enhancement, Simulator support, ThirdParty: cocotb, ThirdParty: JSON, ThirdParty: OSVVM

Comments

@umarcor (Member) commented Oct 27, 2021

The most popular open source VHDL verification projects are cocotb, OSVVM and UVVM, along with VUnit. As discussed in larsasplund.github.io/github-facts, there are some philosophical differences between them: OSVVM and UVVM are defined as "VHDL verification methodologies", cocotb is for (Python) co-routine co-simulation, and VUnit is a Python-aided framework of HDL utilities. Naturally, there are some overlapping capabilities because all of them provide basic features such as logging and building/simulation. Therefore, methodologies can be seen as bundles of utilities (frameworks), and some users might refer to using the VUnit runner as a methodology. Nonetheless, it is in the DNA of VUnit to be non-intrusive and allow users to pick features one by one, including reusing the methodologies they are used to.

Currently, it is possible to use OSVVM utilities/libraries in a VUnit environment. Although there are still some corner cases to fix (#754, #767, #768), it is usable already. In fact, some of VUnit's features do depend on OSVVM's core. However, it is currently not possible to use the OSVVM methodology as-is within VUnit. The OSVVM methodology uses top-level entities without generics or ports, and the entrypoints are VHDL configurations. Meanwhile, VUnit needs a top-level generic of type string in order to pass data from Python to VHDL.

Most simulators do support calling a configuration instead of an entity as the primary simulation unit. It should, therefore, be trivial to support OSVVM's configurations as entrypoints in VUnit. I am unsure whether VUnit's parser supports configurations in its parsing and dependency scanning features; but that should not be the main challenge anyway.

The main challenge we need to address is that passing generics to VHDL configurations is not supported in the language. If that were possible, the runner string might be forwarded to the entity within the configuration. For the next revision of the standard, we might propose enhancements in this regard, since revisiting the limitations of configurations is one of the expected areas of work. Nevertheless, it would take several months or years until such a change is made available in simulators.

Yesterday, I had a meeting with @JimLewis and he let me know that he's been thinking about implementing some mechanism for passing data between the TCL scripts (.pro files) and the VHDL testbenches. We talked about .ini, .yml and .json, and I suggested the latter because a JSON reader library is available already: Paebbels/JSON-for-VHDL. In fact, JSON-for-VHDL is submoduled in VUnit, in order to pass very complex generics to the testbench.
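
As an illustration of that existing path, here is a minimal run-script sketch in the spirit of VUnit's json4vhdl example. The encode_json/b16encode helpers are what I recall from vunit.json4vhdl, and the testbench/generic names are assumptions for illustration:

# Sketch: pass a complex record to a testbench as a JSON-encoded generic.
# Assumes VUnit's json4vhdl helpers (encode_json, b16encode); the testbench
# name and generic name are hypothetical.
from pathlib import Path
from vunit import VUnit
from vunit.json4vhdl import encode_json, b16encode

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files(Path(__file__).parent / "*.vhd")

# Nested data that would be painful to pass as individual generics.
tb_cfg = {"image_width": 320, "image_height": 240, "dump_debug_data": False}

tb = lib.test_bench("tb_example")
# Base16-encode to keep the generic within a "simple" character set.
tb.set_generic("encoded_tb_cfg", b16encode(encode_json(tb_cfg)))

vu.main()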

I believe this is a good opportunity to document the syntax of VUnit's runner generic, make a public API from it, write a data model and provide multiple reader/writer implementations. @JimLewis said he has not put much thought into the data model yet, but he would be willing to include integration with VUnit in the scope when he works on it. Maybe there is no need for him to write a VHDL solution from scratch; it could be based on JSON-for-VHDL plus VUnit's runner package.

Enhance VUnit's simulator interfaces to support writing runner (CLI) arguments to a file or to an envvar

Currently, VUnit's runner expects to receive a string, which Python passes as a top-level generic. Actually, there is no limitation in VHDL preventing an alternative method. The generic might be a path, and users might read that file in the testbench before passing its content to the functions from the runner package. By the same token, the generic might point to a JSON file, and users might convert it to the string syntax expected by the runner. Hence, the main challenge is that VUnit's Python simulator interfaces do not support writing runner parameters to a file.

Well, that is not 100% correct: when GHDL is used, option ghdl_e prevents running the simulation and instead writes all the CLI arguments in a JSON file (#606):

vunit/vunit/sim_if/ghdl.py, lines 303 to 322 at commit 7879504:

if not ghdl_e:
    cmd += sim
    if elaborate_only:
        cmd += ["--no-run"]
else:
    try:
        makedirs(output_path, mode=0o777)
    except OSError:
        pass
    with (Path(output_path) / "args.json").open("w", encoding="utf-8") as fname:
        dump(
            {
                "bin": str(Path(output_path) / f"{config.entity_name!s}-{config.architecture_name!s}"),
                "build": cmd[1:],
                "sim": sim,
            },
            fname,
        )
return cmd
That is used for building a design once (and generating an executable binary if the simulator is based on a compile & link model) and then executing it multiple times for co-simulation purposes: https://github.com/VUnit/cosim/tree/master/examples/buffer. Therefore, we might want to generalise this to all the simulator interfaces and make it optional to specify the name/location of the JSON file.
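
As a sketch of the consuming side, assuming the args.json layout shown above; the output path is a placeholder:

# Sketch: build once, then run the elaborated binary several times.
# Assumes the args.json layout dumped by the ghdl_e branch above;
# "some_output_path" is a placeholder.
import json
import subprocess
from pathlib import Path

args = json.loads((Path("some_output_path") / "args.json").read_text(encoding="utf-8"))

for _ in range(3):
    # "bin" is the elaborated executable; "sim" holds the simulation arguments.
    subprocess.run([args["bin"], *args["sim"]], check=True)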

Similarly, we might want to support passing runner arguments through an environment variable. In the context of integrating VUnit and cocotb, one of the requirements is specifying environment variables per test/testbench. That's because cocotb bootstraps an independent Python instance, and the UUT is the design; so data needs to be passed through the filesystem or envvars. In fact, this is a requested feature: #708. If we used the same mechanism for the runner, cocotb might re-implement VUnit's runner package in Python (which is "just" 1K lines of VHDL code for 100% compatibility). I believe that would allow plugging in cocotb's regression management. The remaining functionality would be for VUnit to "discover" the test cases before executing the simulations.

So, we might have an enumeration option to decide whether to pass the runner string as a top-level generic, as an environment variable or as a file. Taking VHDL 2019 features into account, OSVVM and VUnit might indeed end up using the envvar approach.

[Figure: diagrams of the runner_cfg transport options (top-level generic/CLI, environment variable, file)]

Enhance JSON-for-VHDL

The current implementation of JSON-for-VHDL is functional and synthesisable, but not optimal for simulation. Some years ago, @Paebbels and @LarsAsplund discussed writing an alternative implementation for simulation only, which would have fewer constraints and better performance. They also talked about using VHDL 2019 features. I don't remember whether those were a requirement for the optimised simulation-only implementation, or whether it could be done with VHDL 2008.

If we are to use JSON for sharing configuration parameters between VUnit's Python or OSVVM's TCL and VHDL, I guess we would make JSON-for-VHDL a priority dependency in the ecosystem.

Co-simulation

The run package might be enhanced to get its data from a foreign environment. By encapsulating the interaction with the "runner configuration model" in a protected type, we might provide various implementations. For instance, the VHDL testbench might query a remote service for which tests to run, and where to push the results.
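
In Python terms, the idea would look something like the sketch below; all class names are hypothetical, and this merely mirrors in Python what a VHDL protected type would encapsulate:

# Sketch of pluggable transports for the runner configuration model.
# All names are hypothetical.
import json
import os
from abc import ABC, abstractmethod


class RunnerConfigTransport(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Fetch the runner configuration from wherever it lives."""


class EnvVarTransport(RunnerConfigTransport):
    def read(self) -> dict:
        return json.loads(os.environ["VUNIT_RUNNER_CFG"])


class FileTransport(RunnerConfigTransport):
    def __init__(self, path):
        self._path = path

    def read(self) -> dict:
        with open(self._path, encoding="utf-8") as fptr:
            return json.load(fptr)

A remote-service implementation would be a third subclass with the same read method.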

Terminology: configurations

OSVVM uses VHDL configurations for composing the test harness and the test cases. At the same time, VUnit uses the term "configuration" to refer to a set of values for top-level generics; in Python, users can generate multiple sets of parameters for each testbench/test case. That does not conflict much yet, because VHDL configurations are not used much in the VUnit ecosystem. However, if we are to improve the integration of OSVVM and VUnit, we might want to reconsider the naming.

/cc @Paebbels @JimLewis @ktbarrett @jwprice100

@eine added the Enhancement, Simulator support, ThirdParty: cocotb, ThirdParty: JSON and ThirdParty: OSVVM labels on Oct 27, 2021
@Paebbels commented:

The advantage of generics over file inputs or similar methods is that such an input is a constant and can be used during elaboration. Other techniques might not be available early enough, or might not be static enough for some use cases.

A JSON parser doesn't need VHDL-2019, but an implementation might be easier and have less repeated code.

While a JSON parser already exists and can be transformed into an improved variant for simulation (no fixed arrays, but dynamic memory allocation and pointer structures instead of index arrays), I consider YAML a better solution, especially as it is easier to write to a file.

If INI is needed, I propose to support TOML, which is a superset.

@LarsAsplund (Collaborator) commented Dec 3, 2021

The VUnit dependency scanner recognizes configurations as it is today. So an intermediate (?) solution would be to combine our ability to control the testbench from a lower-level test control entity (see https://vunit.github.io/run/user_guide.html#distributed-testbenches) with a new feature that recognizes a testbench configuration as a test case for that testbench. All the user would have to do to a standard OSVVM testbench is add the runner_cfg generic to the testbench entity and the test controller entity. The test architectures would only have test_runner_setup and test_runner_cleanup. For me, the key difference between an OSVVM testbench and a VUnit testbench is that OSVVM uses a configuration to define a test case. The fact that OSVVM puts test control at a lower level, or that VUnit requires a runner_cfg, poses no conflict.

Update:

The main challenge we need to address is that passing generics to VHDL configurations is not supported in the language.

Ouch, I didn't know that. Well, that is a problem.

@JimLewis commented Dec 3, 2021

WRT using file inputs, ActiveHDL really has a hard time staying rooted in the simulation start directory and seems to creep into its library directory any time you turn your back, or during certain updates. I was just wrestling with an issue in ActiveHDL 12 that was not there in 11.

@LarsAsplund (Collaborator) commented Dec 4, 2021

@JimLewis In general, tests should be given private directories to work from, or input and output files from different test cases will interfere with/overwrite each other. We just need a way to pass that info. Not knowing where the testbench is running from becomes another problem.

@JimLewis commented Dec 4, 2021

@LarsAsplund For me, all tests in the same test suite (testing similar functionality like a UART core or a particular package) have a unique name and unique files they operate on (often named similar to their test name). Only when I switch to a new test suite do I need to run them out of a different directory.

The only time I have had to worry about name conflicts is when I am reusing a test case on a similar design.

@umarcor (Member, Author) commented Dec 4, 2021

@LarsAsplund, I see two main challenges in your proposal:

  • As you wrote in the update/edit, overriding the top-level generics of an entity when using a configuration as a primary unit is problematic:

    • The language does not support generics for configurations.
    • It is not specified that CLI arguments can/should bypass the configuration and be applied to the entity.
    • Overriding top-level generics through the CLI is non-standard indeed. That's why support is uneven among vendors.
    • Therefore, using files (along with a JSON/YAML reader) or using environment variables (along with VHDL 2019's capabilities) are the only LRM-compliant solutions at the moment. I do believe we should keep using the CLI for VUnit, because that is proven to be usable. However, I would be careful about doubling down on the CLI for "passing complex objects from Python to VHDL"; we should find more robust solutions for that.
  • Neither cocotb nor OSVVM is willing to require VUnit's libraries to be imported/included/used in the HDL testbenches.
    @ktbarrett was vocally very explicit about it, and @JimLewis is very implicit about it (by explicit omission).
    I do not agree with their criteria, but I believe we must allow those use cases if we are to integrate multiple methodologies into a common Python infrastructure.

That's why I proposed specifying the content/format/syntax of the runner_cfg string and how test_runner_cleanup communicates the results to Python.
In the cocotb case, that'd allow them to implement the functionality in Python, so that they can keep the HDL sources untouched. That is of critical relevance, because a lot of cocotb users adopt cocotb precisely to avoid any manipulation of HDL. Hence, the top level of the simulator is the UUT, not the testbench (which is Python).
In the OSVVM case, users might use VUnit's library themselves, on top of the references/templates from OSVVM. Actually, that is something they can do already, because OSVVM users are expected to manipulate HDL sources. However, that breaks one of the critical concepts of the OSVVM methodology. By using files or envvars, we can allow OSVVM to keep the essence that "primary units and top-level entities don't have generics". Then, it'd be up to Jim to reuse the "runner specification" in OSVVM's plumbing, or to reinvent the wheel.

In other words, the challenge here is the complement of pyEDAA.Reports: Reports is about the outcome of running testsuites, tests, bins, etc., while Runner is about telling which of those to schedule. Hence, the Runner API/specification conceptually needs to be a subset of Reports.


With regard to where to execute the simulations, IMHO that is a completely different topic. VUnit can already run simulations using ActiveHDL and OSVVM. None of that is affected by the proposal in this issue. OSVVM's TCL plumbing uses a different approach from VUnit's Python plumbing in that regard; however, OSVVM's TCL is out of scope here, because the point is precisely to run/use OSVVM through VUnit (using EDAA to treat .pro files as a DSL, independent of TCL).

@JimLewis commented Dec 4, 2021

@umarcor The issue with ActiveHDL is not being able to run simulations within a given environment. The interesting part is getting the VHDL code to write files in the expected locations when running, especially if the VHDL code does not specify path information.

WRT EDAA, why not run TCL using something like tkinter in Python?

@JimLewis commented Dec 4, 2021

@umarcor WRT environment variables, you have to convince the vendors, other than the one that has proactively implemented VHDL-2019, to implement the 2019 feature that allows environment variables to be read. That would be my only concern with that path. It would be nice though.

@ktbarrett (Contributor) commented:

Not sure there needs to be one method for discovering testbenches or passing test configurations that everyone has to support. Hell, there might even be multiple ways of passing configurations for each methodology, due to tool or language limitations. There should be multiple kinds of testbench objects, and the runner should dispatch to the testbench object to decide how to run those tests, instead of baking this into the tool classes. I have suggested this previously. Each methodology should provide these classes so they don't have to be included in VUnit, but VUnit will need to be updated to defer decisions to the testbench objects where appropriate. That way VUnit can leave its current testbench discovery, configuration passing, and test result system alone.

@umarcor (Member, Author) commented Dec 4, 2021

@JimLewis:

The issue with ActiveHDL is not being able to run simulations within a given environment. The interesting part is getting the VHDL code to write files in the expected locations when running, especially if the VHDL code does not specify path information.

I understand. However, the limitations of one vendor's simulator in dealing with relative file paths are out of the scope of this issue.

WRT EDAA, why not run TCL using something like tkinter in Python?

We don't want to depend on TCL. There are other people providing TCL-based solutions, such as most vendors or OSVVM.
Moreover, .pro files are not written to be used as a library/API. Therefore, we might run/execute the scripts through tkinter, but we would not be able to get/extract information from them. We need information, not execution.

WRT environment variables, you have to convince the vendors, other than the one that has proactively implemented VHDL-2019, to implement the 2019 feature that allows environment variables to be read. That would be my only concern with that path. It would be nice though.

Python and C can access environment variables easily, and any of those can be used through the cosimulation interfaces supported by the tools already. Actually, by using a protected type, it would be possible to support multiple transport strategies.

Anyway, in OSVVM's case, I believe it's better to start with just files, which is the most likely solution for communicating between TCL and VHDL.

@JimLewis commented Dec 4, 2021

@umarcor TCL can access environment variables. Is there something I am missing here? Setting them is nothing more than:
set ::env(name-to-set) value-to-set

Does VUnit have a library that gives VHDL the ability to read environment variables? I would be open to reusing it - if it can address everything we need.

@umarcor (Member, Author) commented Dec 4, 2021

@ktbarrett

Not sure there needs to be one method for discovering testbenches or passing test configurations that everyone has to support.

Actually, I mentioned the discovery but I left the details out of the discussion on purpose. I wanted someone else to bring it up!
As you say, we need three different test discovery classes/implementations:

  • Scan VHDL sources for VUnit testbenches and tests.
  • Scan VHDL sources for OSVVM testbench components and tests.
  • Scan (or import) Python sources for cocotb's testbenches and tests.

Nevertheless, I believe that both @LarsAsplund and I agree on making an effort in that regard, as long as at least a few people are explicit about wanting to use it. We don't want all OSVVM/cocotb users to depend on VUnit, not at all. But we want to have some potential users for that solution (apart from ourselves).
So far, neither cocotb, nor OSVVM, nor UVVM maintainers wanted to be clear about that in public. Conversely, a few users were very explicitly (and irrationally) against it. Hence, we are focusing on synthesis (EDAA) to drive the discussion slightly away from VUnit, while we raise awareness about the potential collaborations in this area. From my point of view, the discussion we are having in this issue is far beyond the scope of VUnit (umarcor.github.io/osvb/apis/runner). VUnit is the excuse for talking about it, because it's the most complete and usable Python solution today.

Hell, there might even be multiple ways of passing configurations for each methodology, due to tool or language limitations.

It would be nice if each project supported any of the transport mechanisms (CLI, envvars or files); thus, the first of the diagrams in the figure above. However, that should be unrelated to the content: the actual configuration (list of tests to run and maybe attributes for those tests). Hence, each of the projects can start with using one transport mechanism only (the other diagrams in the figure).
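
To make the transport/content split concrete, here is a hypothetical content model; the schema is invented for illustration and is not an agreed format:

# Hypothetical runner configuration content, independent of transport.
# The schema is invented for illustration only.
runner_cfg = {
    "tests": ["lib.tb_uart.test_send", "lib.tb_uart.test_receive"],
    "attributes": {
        "mode": "regression",  # e.g. debug / regression / artifact collection
        "output_path": "vunit_out/tests/tb_uart",
        "tb_path": "src/test",
    },
}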

There should be multiple kinds of testbench objects, and the runner should dispatch to the testbench object to decide how to run those tests, instead of baking this into the tool classes.

I don't understand the need for multiple kinds of testbench objects. My understanding is that we need a single object with attributes. Then, we can set/get matching attributes in the Python part and in the VHDL/OtherPython part. In essence, all three frameworks run exactly the same simulations and use the tools in exactly the same manner. The only difference between VUnit, OSVVM or cocotb testbenches is what the code does inside the simulator; that is, after the execution of a testbench is started. None of them can do anything before run initialization. That's the reason for VUnit's Python, OSVVM's TCL or cocotb's makefiles to exist.

I have suggested this previously. Each methodology should provide these classes so they don't have to be included in VUnit, but VUnit will need to be updated to defer decisions to the testbench objects where appropriate. That way VUnit can leave its current testbench discovery, configuration passing, and test result system alone.

I agree with each framework/methodology providing a class for discovering the testbenches and the tests within them. That is needed by VUnit in order to support filtering the tests (in the VUnit CLI) and in order to know the entrypoint for each testbench. However, I'm not sure about the multiple testbench objects. The fundamental information that VUnit needs is whether each of the tests passed or failed. That is used for setting the overall exit code and for generating summaries (in the terminal or through an XUnit report). Any additional information can be processed in the post hook, through the results object. Hence, particular post-hook processing functions might be provided for cocotb and/or OSVVM, without necessarily modifying VUnit's codebase.

@umarcor (Member, Author) commented Dec 4, 2021

TCL can access environment variables. Is there something I am missing here? Setting them is nothing more than: set ::env(name-to-set) value-to-set

@JimLewis the problem is not setting environment variables. That is easy regardless of the language (TCL, Python, bash, C...). The problem is accessing envvars from VHDL. As we discussed, that requires VHDL 2019 or direct cosimulation (which is not standardized yet).

Does VUnit have a library that gives VHDL the ability to read environment variables? I would be open to reusing it - if it can address everything we need.

It does not. I could provide it through ghdl/ghdl-cosim (for GHDL), or through VUnit/cosim (for GHDL, and hopefully ModelSim/QuestaSim). However, that's "killing flies with cannon shots". Providing a library for reading envvars through co-simulation (as a workaround for the lack of VHDL 2019 support) would be a "tiny" project itself, such as JSON-for-VHDL; and it does not solve our problem: an agreement on the format/syntax of the runner interface/objects and which attributes it needs. I prefer to rely on JSON-for-VHDL, which exists already and meets the needs.

In VUnit, the runner tells the testbench which tests to run and which parameters (top-level generic values) to use for each test, along with the location of the testbench source file. Then, the testbench produces an output that VUnit can use to know which specific tests failed and which passed. So, what are the equivalent requirements in OSVVM? You mentioned that you wanted to pass some parameters from TCL to VHDL. Which are those parameters? For the second part, I know the answer: you produce a YAML file which VUnit could read/use.

@JimLewis commented Dec 4, 2021

@umarcor What do you mean by discovering? Is your intent to automate the discovery of all tests? That is going to be an interesting challenge.

For example, in the OSVVM AXI we have two flavors of AXI interfaces - our record-based ports (CTI) and our internal signals which are connected to via external names (VTI). The internals of the designs are identical - in fact, the differences are limited primarily to the entity declarations. Hence, the test framework for the CTI is in AXI4/Axi4/testbench. The testbench framework for the VTI is in AXI4/Axi4/testbenchVti. In both frameworks, the entity for the test sequencer (the part that implements a test) is named TestCtrl. Both versions of TestCtrl - either through ports or external name references - make the same names available to test architectures. Hence, each can and does run the same test case architectures.

Is it your thought that you are going to automate the discovery of these tests to run?

OSVVM currently relies instead on the scripting to explicitly specify which tests to run. By explicitly listing the tests to run, the scripts can put the information into the YAML file that says: I am planning on running this test case. Hence, if for any reason the test case fails to run, the information that results were expected for that particular test case is already in the YAML file.

@umarcor (Member, Author) commented Dec 4, 2021

@JimLewis the discovery strategy is radically different for OSVVM and cocotb; hence, discussing both at the same time can be slightly misleading, although necessary.

From cocotb's and Kaleb's point of view, testbenches are Python scripts/modules, and VUnit provides a Python-based infrastructure. By default, VUnit and cocotb run in completely different instances of Python; however, because both use the same language, there are more opportunities for "discovery". VUnit can import cocotb testbenches without executing them, and use Python's inspection features. It gets all the semantic information about what a module is, which functions it has, etc.
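
As a rough discovery sketch (how cocotb marks its tests is an assumption here; checking for coroutine functions is a stand-in for inspecting the actual cocotb.test decorator):

# Sketch: import a cocotb testbench module without running a simulation
# and list candidate tests. Coroutine functions are a stand-in for the
# real @cocotb.test() marker; the file name is hypothetical.
import importlib.util
import inspect


def discover_tests(path):
    spec = importlib.util.spec_from_file_location("tb_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # imports the module, starts no simulator
    return [
        name
        for name, obj in inspect.getmembers(module)
        if inspect.iscoroutinefunction(obj)
    ]


print(discover_tests("test_my_design.py"))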

In OSVVM's case, first we need to solve the problem of making .pro scripts usable from Python; i.e., making OSVVM usable from any Python script/project, not only VUnit. That's something @Paebbels is working on in pyEDAA.ProjectModel. ProjectModel will be able to interpret .pro files and extract the same imperative information that the TCL plumbing has. Then, VUnit will not need to parse OSVVM's VHDL sources to find out which pieces compose a testbench or a test; it will just need to ensure that the testbenches/tests defined in the ProjectModel do exist. It might also check whether any other files exist which are not defined in the ProjectModel (depending on the usage of wildcards). If YAML files need to be partially pre-generated before starting the simulations, that is precisely the knowledge we need for the Runner.

@JimLewis commented Dec 4, 2021

@umarcor

In VUnit, the runner tells the testbench which tests to run and which parameters (top-level generic values) to use for each test, along with the location of the testbench source file.

In OSVVM, there is one test case per file. It has a name. We don't have any means to run or report on a portion of a test. That is a cool feature of VUnit. I generally don't think that small. If I am testing AxiStream byte enables, I am going to write a test that tests all permutations of the byte enables and have to always run all of them - for better or worse.

What I see a need to communicate is what mode the test is running in. Debug - and hence, enable some extra flags for logging; vs regression - and hence, only log to files (in particular, turn off OSVVM mirror mode if it is on); vs artifact collection - a final simulation run, hence, turn on reporting for artifact collection that is not normally needed (OSVVM's final level). It would also be handy to communicate results directory information - rather than the static ./results that we currently use - or to communicate the validated results directory (in the event we want to do file comparisons - static paths don't work so well here).

Curiously, many simulators allow you to run with multiple top level structures. This may be a curious way to accomplish what I need to do. Load a regular simulation with a second top-level that simply changes settings. There could be multiple versions of the second top-level that could accomplish the different classes of settings that I need to make.

While the multiple-tops sounds interesting, I think the path that VUnit has been using may prove to be more flexible.

I am not against setting top-level generics if all simulators can do it and they can reach beyond the configuration and set it for the test case entity that is specified by the configuration. If all simulators do that in a similar fashion, I think maybe we should codify it in the standard. Not the simulator part, but explicitly say that the language requires the simulators to provide a mechanism to do it - it would not be a change for existing vendors, but an expectation for anyone new - or a way to put pressure on certain vendors.

@umarcor (Member, Author) commented Dec 4, 2021

We are all coming to the same page now! 🎉 I am particularly happy because I did not expect to sort this out today 😄

In OSVVM, there is one test case per file. It has a name. We don't have any means to run or report on a portion of a test. That is a cool feature of VUnit. I generally don't think that small. If I am testing AxiStream byte enables, I am going to write a test that tests all permutations of the byte enables and have to always run all of them - for better or worse.

That makes sense. I assumed you might have support for multiple tests per file, because both VUnit and cocotb allow it. However, this is where the methodology kicks in. In VUnit and cocotb, the generation of parameter sets is done outside of VHDL, while OSVVM handles it inside. Conceptually, the inputs used for generating VUnit configurations in Python are the inputs that a single OSVVM test needs, because it does the generation internally. That equivalency is actually cool.

However, you do have the concept of testsuites in OSVVM, which means you do have two levels of hierarchy for organising/calling the tests. Is that done by the location/name of the .pro files? Or is it written in the .pro files explicitly (say, set testcasename whatever; run atestcase)? I'm trying to understand whether more than two levels might be required in the future. In VUnit, there are three levels, Library.Testbench.Test, even though the XUnit report flattens the first two.

What I see a need to communicate is what mode the test is running in. Debug - and hence, enable some extra flags for logging; vs regression - and hence, only log to files (in particular, turn off OSVVM mirror mode if it is on); vs artifact collection - a final simulation run, hence, turn on reporting for artifact collection that is not normally needed (OSVVM's final level). It would also be handy to communicate results directory information - rather than the static ./results that we currently use - or to communicate the validated results directory (in the event we want to do file comparisons - static paths don't work so well here).

Nice! This is the information we need: the attributes (Debug, flags, artifact collection) and the directory information. VUnit does provide tb_path and output_path already: http://vunit.github.io/run/user_guide.html?highlight=tb_path#special-paths. One of those might be reused as the results path in OSVVM, or an additional path might be passed as an attribute (specific to OSVVM).

Once we understand all of this information, we might write a prototype using JSON-for-VHDL on the VHDL side. That is not the ideal solution, but it is usable already, and we should be able to wrap it in a protected type easily. On the other side (TCL or Python), I think VUnit supports attributes for testbenches already; I'd need to look deeper into it.

Curiously, many simulators allow you to run with multiple top level structures. This may be a curious way to accomplish what I need to do. Load a regular simulation with a second top-level that simply changes settings. There could be multiple versions of the second top-level that could accomplish the different classes of settings that I need to make.

That is very interesting. I have never seen multiple top-level units used for simulation. I think that GHDL supports EntityName [ArchitectureName] or ConfigurationName only. I do know that @tmeissner tried having multiple top units for synthesis, in order to pass them through Yosys -> SymbiYosys, for formal verification purposes. However, I assumed that was just a shortcut for having multiple runs. That is, the multiple top units were/are completely unrelated. As far as I understand, your comment implies that multiple top units are used "at the same time"?

While the multiple-tops sounds interesting, I think the path that VUnit has been using may prove to be more flexible.

We did have some issues with VUnit's approach, because passing complex generics through the CLI is not without issues. Fortunately, the usage of JSON-for-VHDL along with basic encoding (in order to ensure a "simple" character set) proved to be quite robust. I am particularly happy with how that collaboration turned out, even though we should pay attention to JSON-for-VHDL's technical debt (when VHDL 2019 is supported by vendors).

I am not against setting top-level generics if all simulators can do it and they can reach beyond the configuration and set it for the test case entity that is specified by the configuration. If all simulators do that in a similar fashion, I think maybe we should codify it in the standard. Not the simulator part, but explicitly say that the language requires the simulators to provide a mechanism to do it - it would not be a change for existing vendors, but an expectation for anyone new - or a way to put pressure on certain vendors.

I think this is sensible. Most simulators can handle strings and naturals at least. Negatives and reals might be more challenging.
However, as far as I understand, accepting CLI arguments was added to VHDL 2019 already. Therefore, it might not make much sense to now specify that top-level generics can be overridden through the CLI. Instead, users can have constants in the entity/architecture which retrieve the values from the CLI. Maybe we should push for that VHDL 2019 feature to be adopted, and then have a "standard" argparse or getopts in VHDL. From the vendors' point of view, all they need to do is support strings, which they do already. Instead of interpreting the strings and resolving them to types (which they need to do at the moment), that would be deferred to the user, or to the argparse/getopts library.

EDIT

See #588.

Note that we do have a "standard" argparse or getopts in VHDL already: JSON-for-VHDL. I.e., accept the arguments as a JSON file or string, instead of "traditional" GNU arguments with -- and -.

@JimLewis commented Dec 4, 2021

@umarcor If you look at the OSVVM YAML results file, within a given test case, the VHDL code currently only writes out the Name, Status, and Results parts. As we discuss it, it may make sense to move the FunctionalCoverage to the VHDL code also - because it is easy enough to do there. The rest of the information comes from the scripts.

The build name is the conjunction of the script name and the directory from which it runs (with a few modifications) - the intention is to get a unique name. For example, all of the VC have a RunAllTests.pro. If we were then to build the scripts from multiple VC separately, the names would not be unique without the directory name.

@LarsAsplund (Collaborator) commented Dec 6, 2021

I think we should support providing runner_cfg as a file as an option. runner_cfg is part of the VUnit plumbing, and users do not care what it means or how it is imported into the testbench. However, using files to transfer generics as a general approach would tend to hide information that the user cares about and make the code less readable. Users should not have to make an either/or decision when it comes to using generics and configurations. That combo should be added to the language.

The fact that Active-HDL has an unreliable working directory is a problem, but Aldec is fairly responsive to bug reports, so that is probably something that could be solved much quicker than updating VHDL. @JimLewis Do you have an MVE showing this behavior?

To get somewhere without waiting for third parties, I started on a prototype which leaves VUnit untouched but enables discovery and execution of OSVVM testbenches with a reusable mechanism. It's not completed yet, but basically it uses VUnit's parsing capabilities to scan and find OSVVM tests. It targets the simple use case where the configuration is only used to determine what test controller architecture to instantiate. These architectures are the tests. Then we can use the VUnit preprocessor to transform the top-level test-controller component instantiation into a case-generate statement based on the test/architecture name. Each branch of that case generate contains a direct entity instantiation of that test case. With the configurations out of the loop, we can use top-level generics; the test to run is provided that way and is controlled by a VUnit Python configuration. What basically happens is that VHDL configurations are transformed into VUnit Python configurations.

Pros/cons:

  • It is based on code generation. The original source is not modified but it's still a clear sign of missing tool/language features that should be addressed.
  • It works on the typical OSVVM testbench (???) and is a reusable function that we can provide.
  • It allows for the use of generics. I imagine that people using VUnit on top of OSVVM style testbenches would use that feature sooner or later.
  • The VHDL configurations only serve as an identifier of OSVVM tests. I imagine that if VHDL were to support architecture selection from a generic string, such that there is no need for a case-generate statement, then the need for configurations and preprocessing would be removed.
  • Moving from this approach to one that builds on runner_cfg being passed to the top-level using a generic should be fairly transparent to the user.

@JimLewis commented Dec 6, 2021

@LarsAsplund MVE: Write a program that opens a file and writes to it. Now look for the file. :)
I filed a bug report (without an MVE); however, it has not been acknowledged yet, so I think others need to file a separate bug report, as I think that they like their current behavior.

Does running a test using a configuration disable a tool's ability to map a generic? I have never used a tool's capability to map a generic, but can it map a generic anywhere, or does it have to be at the top level?

@JimLewis commented Dec 6, 2021

Looking at the Questa documentation, it will map any unset generic at any level in the design. So, at least for QuestaSim, it looks possible to have a generic on TestCtrl and set it in the vsim call.

@LarsAsplund (Collaborator) commented:

@JimLewis Ok, I'll create an MVE and report the issue.

The global generic setting that ModelSim and Questa use is unique to them, I think, so it wouldn't work generally. It's also too magic/implicit for my taste; it could easily lead to some very tricky bugs. In fact, I think it was a bug report on VUnit that made us discover this behavior. Today we include the top-level path when passing generics to ModelSim/Questa: -g/tb_something/my_generic=some_value.

@JimLewis commented Dec 6, 2021

Looks like ModelSim/Questa and ActiveHDL/RivieraPRO would support paths to the top level. Since they are path-based, you should be able to apply them when running with a configuration, and you should be able to set something on any component. So, in addition to your example, it should also support -g/tb_something/TestCtrl_1/my_generic=some_value

Yes, I had noted the cuteness of being able to set all tpd generics by doing -gtpd=1 ns. That would be interesting to debug.

@LarsAsplund (Collaborator) commented:

@JimLewis While working with configuration support and the classic OSVVM test structure, it strikes me that I don't know if this is the only preferred OSVVM structure and, if so, why that is. Is there something lost from an OSVVM perspective if you were to flip things upside down? Rather than having a common top-level test harness with a set of different test controllers selected by a set of configurations, you could have the test controllers as top levels that instantiate the common test harness. That way there is no need for configurations, no need for a well-defined working directory in the simulator, and generics are fully supported.

@JimLewis commented:

@LarsAsplund I would need to sit with the thought of that a while.

Here are my initial thoughts:
Sometimes I use configurations for other stuff too. So it would not always address the issue.

It also makes the usage of external names challenging. They could only be aliased in a process or block statement that comes after the test harness instance.

OTOH, I would have to classify my external name usage. All of the virtual transaction interfaces can be resolved in the test harness. Then what does that leave? In the OSVVM 2021.06 release, we are moving away from external names for accessing things like FIFOs declared in models (they were previously shared variables - they are now part of a singleton, and we can easily pass the reference to the singleton). So I think the main use of external names is to interact with pieces of the design that are intentionally left out of a simulation.

@LarsAsplund (Collaborator) commented:

I worked a bit more with the configuration support through pre-processing, and one thing that needs to be decided is how VHDL configurations should be treated in relation to VUnit testbenches, configurations, and test cases. One way of looking at it is that a VHDL configuration is a test case, since it is a way to reuse a test harness/fixture in the same way a VUnit test case does. OTOH, it is also a configuration of the testbench, which makes it similar to a VUnit configuration. It's also the top level for the simulator, i.e. similar to a testbench.

My initial approach is to treat it as a special case of VUnit configurations. In that way, they are both parts of the same configuration concept, and not two different things unfortunately given the same name.

Currently you need to do an import

from vunit.vhdl_configuration import VHDLConfiguration

and then you create a VHDLConfiguration before adding files, and add the found VHDL configurations as VUnit configurations after the files have been added.

# The VHDLConfiguration object is created before the source files are added
vhdl_configuration = VHDLConfiguration(vu)

vu.add_library("lib").add_source_files(Path(__file__).parent / "*.vhd")

# The add method adds a VUnit configuration to a testbench for every
# VHDL configuration found for that testbench
vhdl_configuration.add()

I realized that it might be as easy to do this in a single step, so this might change. What's more important, though, is that if VHDL and VUnit configurations are both part of the same concept, we also need to decide how VUnit and VHDL configurations work side by side. For example, how is a generic passed to the entity selected by the VHDL configuration, and what is the user API? Currently, I can replace the call to the add method with this:

for testbench, configuration_name, generics in vhdl_configuration:
    testbench.add_config(name=configuration_name, generics=generics)

In fact, this is how add is implemented. The generics dictionary contains a single string generic that is set to one of the original VHDL configuration names. The generic then controls if-generate statements in the preprocessed code, selecting the entity/architecture to instantiate (this is what replaces the component instantiations that the VHDL configurations originally controlled).

With the generics dictionary exposed, it is easy to add VUnit configurations on top of it. For example, let's say we have a width generic and want to run all VHDL configurations with width set to different values. Then we can do like this:

for testbench, configuration_name, generics in vhdl_configuration:
    for width in [8, 32]:
        generics["width"] = width
        testbench.add_config(name=f"{configuration_name}_width={width}", generics=generics)

With this we get a configuration concept combining both VHDL and VUnit configurations.

lib.tb_with_vhdl_configurations.test_reset_width=8
lib.tb_with_vhdl_configurations.test_reset_width=32
lib.tb_with_vhdl_configurations.test_state_change_width=8
lib.tb_with_vhdl_configurations.test_state_change_width=32

In this case, my testbench (tb_with_vhdl_configurations) has two VHDL configurations (test_reset and test_state_change), and then I run these configurations with a width generic set to either 8 or 32.

Another problem I see is that VUnit configurations support setting generics, but generics are not supported for VHDL configurations (at least not for years to come). Any attempt at combining the two configuration concepts through generics will build on a mechanism to fake generics. What we need is a shared information concept at a higher level, one that doesn't care whether the exchange mechanism is based on generics, files or anything else. A configuration database like the one in UVM comes to mind. I've done configuration databases in the past using JSON, but a proper concept should also hide JSON; that is an implementation detail.
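
Something along these lines, as a rough sketch; the class and method names are made up, and the point is only that JSON stays hidden behind the API:

# Sketch of a configuration database that hides the exchange mechanism.
# All names are made up for illustration.
import json
from pathlib import Path


class ConfigDB:
    def __init__(self):
        self._entries = {}

    def set(self, key, value):
        self._entries[key] = value

    def get(self, key):
        return self._entries[key]

    def write(self, path):
        # JSON is just the current backend; callers never see it.
        Path(path).write_text(json.dumps(self._entries), encoding="utf-8")


db = ConfigDB()
db.set("width", 32)
db.set("runner_cfg", "...")  # runner_cfg becomes just another parameter
db.write("config_db.json")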

Any thoughts?

@LarsAsplund (Collaborator) commented:

@JimLewis I tried to recreate that write/read problem you saw in ActiveHDL 12, but failed to do so. If I write and then read, I get the same result, and the file exists on the hard drive. Did you run through the GUI, or was it a scripted run?

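    -- Excerpt: write_fptr/read_fptr, write_line/read_line, read_str and strlen
    -- are assumed to be declared in the enclosing process (std.textio types);
    -- check_equal comes from VUnit's check package.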
    file_open(write_fptr, "test.txt", write_mode);
    swrite(write_line, "Hello");
    writeline(write_fptr, write_line);
    file_close(write_fptr);

    file_open(read_fptr, "test.txt", read_mode);
    readline(read_fptr, read_line);
    sread(read_line, read_str, strlen);
    file_close(read_fptr);

    check_equal(read_str(1 to strlen), "Hello");

@JimLewis commented Dec 24, 2021

I submitted the test case at: https://synthworks.com/WriteToFile.zip

Your test checks whether, if I write a file in a given tool, I can read it back in that tool. That should work independently of the frame of reference a tool uses.

What we need is:

  1. If I write a file with scripts (TCL or Python) in a directory, can the VHDL program read it?

This turns out to be trivial. For Active, it searches for the files in:
a. the current directory,
b. the directory pointed to by the $dsn variable,
c. the $dsn/src directory.

  2. If I write a file from VHDL using any simulator, can I read it back with scripts (TCL or Python)?

A simulator is going to write relative to where it is running from, and for Active, by default, this is the $dsn directory.

Their response is that we need to set the environment variable:

set sim_working_folder [pwd]
vsim MyLib.WriteToFile

This variable sets the simulator working directory. By default, this variable is not set and the simulation starts in the $dsn directory. Otherwise, the working directory changes to the directory pointed to by the $sim_working_folder variable after the simulation is initialized, i.e. after the asim command completes. When the simulation terminates, the previous working directory is not restored. When both $sim_working_folder and $sim_working_folder_with_return are set, the $sim_working_folder variable takes priority unless it is set to an empty string.

It is unfortunate that the only way you would learn this is by reading the list of environment variables.

@umarcor (Member, Author) commented Dec 24, 2021

@Paebbels, we need to carefully read this dialogue that Lars and Jim have had during the last 3 weeks, in line with ProjectModel. Particularly, the semantics of "harness/fixture" that Lars mentioned in #772 (comment) are equivalent/similar to the discussion we had with @ktbarrett in the context of cocotb.

@Paebbels commented:

@umarcor, @LarsAsplund, @JimLewis I'll push (soon) my extensions to ProjectModel on a side branch for discussion. Then we can discuss how it would relate to a project, design, fileset, etc.

@LarsAsplund (Collaborator) commented:

@JimLewis Ok, I see. I did a new test where I write from Python and read from VHDL, and write from VHDL and read from Python. What I see is that everything happens in the current working directory (simulator output path) from which Python calls vsim. That is what I would expect, so I can't really reproduce your problems. I'm not surprised though. VUnit contains many workarounds to handle inconsistencies between different tool flows and between versions of tools.

From a VUnit point of view, the most important thing is that Python can write the runner_cfg information to a file. VUnit praxis is to use tb_path and output_path as the starting point for all file transfers, and these are provided by runner_cfg. There are a number of reasons for this:

  • If there are input files in the project repository, there needs to be a known location in that repository structure to which file paths can be related. For portability, the VHDL code must not make any assumptions on where the repository is installed or from where the simulator is run. tb_path is that fixed location, and it points to the directory where the testbench file is located.
  • Every test needs a private location for its output files that are not overwritten if other tests have the same output file names.
  • There must be a way for a user to produce output that is safe if there is a need to do a clean compile/build which removes the simulator output directory.

Some IPs expect input files to be in the simulator directory. For that case, VUnit provides simulator_output_path such that files can be copied dynamically to that location without any a priori assumptions on where the simulator is executed.

Anyway, there seem to be ways of doing this in a predictable manner, so this shouldn't be a blocker.

@LarsAsplund (Collaborator) commented:

My next step, now that this is cleared, is to allow VUnit to use a testbench configuration as a top level. In that case, runner_cfg will be stored in a file. A new null_runner_cfg will be defined; if test_runner_setup receives null_runner_cfg, it will look for the runner_cfg file. I will continue with the approach that a VHDL configuration is part of the more generic VUnit configuration concept.

Adding the configuration database concept to work around the generics limitation will be a separate step. Such a configuration database will drive the need for a JSON format or similar. With that in place, one could also change the format of runner_cfg, which is just another parameter in such a database.

@LarsAsplund (Collaborator) commented:

@umarcor @Paebbels Since VUnit belongs to the xUnit family of test frameworks we tend to use that terminology (or similar). See https://en.wikipedia.org/wiki/XUnit

@LarsAsplund (Collaborator) commented:

@JimLewis After looking a bit deeper into the VUnit simulator interface for Active-HDL, I see that we actually start the simulator in different directories depending on whether we're running in batch or GUI mode (this is the only supported simulator that has this behavior). I didn't write that code, but it's been there since we started to support Active-HDL in 2015, so it has always been like that.

@LarsAsplund (Collaborator) commented:

@JimLewis Ok, now I see the same problem. It was different before but now it's even more different. The workaround to set sim_working_folder works though.

@LarsAsplund (Collaborator) commented:

I've added initial support for VHDL configurations. See #784. It looks something like this

[Image: test listing showing VUnit configurations cfg1 and cfg2]

If cfg1 and cfg2 are used to select a test controller implementing a test case they would be named to describe those tests. That is the OSVVM use case. If configurations are used for something else, for example to select the DUT architecture, they would probably be named rtl and synth or similar. That is another potential use case.

I think this is a feature that can be merged but the PR is still in draft since I want to add a few more things. See PR.

@LarsAsplund (Collaborator) commented Jan 2, 2022

@umarcor @JimLewis @Paebbels #785 provides a solution for supporting configurations and is ready for review. If you're not interested in the details, I suggest you have a look at the examples that describe a few use cases.

https://github.com/VUnit/vunit/tree/add-configuration-support/examples/vhdl/vhdl_configuration contains two use cases

  • The OSVVM style of using VHDL configurations for tests.
  • An example using VHDL configurations for selecting DUT architecture

https://github.com/VUnit/vunit/tree/add-configuration-support/examples/vhdl/vhdl_configuration/handling_generics_limitation contains examples on how to deal with the lack of generics support when using VHDL configurations. Both examples are about removing the VHDL configurations altogether:

  • Flip the OSVVM style testbench upside down as discussed in a previous comment
  • Select DUT architecture by a generic-controlled generate statement

I also started to work on an example where the generics are placed in a file-based database using JSON-for-VHDL. Unfortunately, I ran into some problems. It turns out that one of the few bug fixes I did in the VHDL packages for the VHDL-2019 release (https://gitlab.com/IEEE-P1076/Packages-2019/-/merge_requests/11) hasn't been picked up by the vendors. Full circle...

I will bug report this to the vendors and add such an example later.

@LarsAsplund (Collaborator) commented:

@Paebbels I don't think all vendors will switch to VHDL-2019 in the near future, so I made a workaround PR to JSON-for-VHDL: Paebbels/JSON-for-VHDL#14

@Paebbels commented Jan 6, 2022

@LarsAsplund The PR is merged. Thanks.

@LarsAsplund (Collaborator) commented:

@Paebbels Great, I will proceed and add such an example. I also realized that I didn't add support for many parallel threads. Now that the simulator reads the runner configuration from the current working directory, all threads need to start from different directories, or they will interfere with each other.
