Feature: Requirement verification functionality #2009

Open
johanenglund opened this issue Nov 21, 2024 · 6 comments

johanenglund (Contributor) commented Nov 21, 2024:

Description

As a developer of a software product, I want to:

  • write test cases for the requirements
  • show test cases related to a requirement
  • show requirements related to a test case
  • provide a link to the implementation of test cases
  • show status of test cases for each software release (success or failure)
  • show the version of the tested software release

Some requirements are verified in the CI pipeline, while others are verified by hand.

Problem

Currently there is no direct support for feeding dynamic data into strictdoc at export time to facilitate the creation of a verification report. A feature, or a recommended process, that enables the creation of the required traceability is desired.

UIDs of test cases should ideally be preserved over the project life cycle. Redundant/duplicated information should be avoided.

Solution

?

stanislaw modified the milestones: 2024-Q4, 20XX-Backlog (Nov 21, 2024)
stanislaw (Collaborator) commented:

I have several thoughts regarding this:

  • At one extreme, linking to external tools could be seen as out of scope for StrictDoc, simply because there is a lot to do. Consider https://github.com/bmw-software-engineering/lobster, a tool by BMW that tries to become a hub for connecting multiple tools into a single aggregated report. The tool is not mature enough for me to consider interfacing with it, but the concept and the format they are creating are very promising.

  • At the same time, there are a lot of opportunities for doing everything within StrictDoc and having it consume data from multiple tools, just like Lobster does. This is very attractive, at a minimum, for the topic of verification reports.

  • With the verification report we have a bit of a chicken-and-egg situation: to link back to requirements, the requirements have to be published somewhere and traced to the tests. After the tests have run, a test report is obtained, and then what? Does that mean StrictDoc has to run again and include the test reports this second time?

  • Using Lobster as a reference for a possible implementation, we would very likely need to interface with your specific tool. Could you specify which tool you are using and which format it uses for producing test reports?

  • StrictDoc can also be driven from outside its command line, see its Python API sketch: https://strictdoc.readthedocs.io/en/latest/latest/docs/strictdoc_01_user_guide.html#14-Python-API. I could imagine maintaining a set of standalone scripts even without integrating them directly into StrictDoc's main program. For example, such a script for a tool XYZ could convert that tool's test report into StrictDoc format and export the resulting .sdoc files to a test/ folder. A normal StrictDoc invocation would then generate a larger tree with everything combined together. The advantage would be a dedicated XYZ script doing just that one job (a minimal sketch of this idea follows right after this list).
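
A minimal sketch of what such a standalone converter could look like, assuming a JUnit-style XML report as input. The emitted node and field names (TEST_RESULT, STATUS) and all file paths are placeholders chosen for illustration; this does not use any existing StrictDoc API:

# junit_to_sdoc.py -- hypothetical standalone converter script (not part of StrictDoc).
# Reads a JUnit-style XML test report and writes a generated .sdoc file into test/.
import xml.etree.ElementTree as ET
from pathlib import Path

def convert(junit_xml: str, output_dir: str = "test") -> Path:
    tree = ET.parse(junit_xml)
    lines = ["[DOCUMENT]", "TITLE: Generated test report", ""]
    for case in tree.iter("testcase"):
        # JUnit convention: classname plus test name, e.g. "TestCaseVc.Steering1".
        uid = f"{case.get('classname')}.{case.get('name')}"
        status = "Failed" if case.find("failure") is not None else "Passed"
        lines += [
            "[TEST_RESULT]",          # placeholder node name
            f"UID: {uid}/RESULT",     # placeholder UID scheme
            f"STATUS: {status}",
            "",
        ]
    out_path = Path(output_dir) / "generated_test_report.sdoc"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text("\n".join(lines))
    return out_path

if __name__ == "__main__":
    convert("junit_report.xml")

A normal strictdoc export run over the repository would then pick up the generated file together with the hand-written documents.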

It is a very interesting feature and it would be great to find a practical implementation path.

johanenglund (Contributor, Author) commented:

I've been playing around with some potential solutions to this issue that do not require strictdoc modifications.

At the moment I'm using XSLT to transform the JUnit XML, together with the doxygen tagfile XML, into an .sdoc.
I've created a TESTCASE grammar where each test case specified in .sdoc gets a UID; this UID is exported to the doxygen tagfile so that I can use it in the XSLT phase.

I'm using Google Test, so the .cc code can look like this:

/**
 * @brief Verify that the steering angle reacts to lateral displacement
 *
 * @relation(TestCaseVc.Steering1, scope=function)
 *
 */
TEST(TestCaseVc, Steering1)
{}

The corresponding UID for the test case specification in .sdoc looks like the one below, so that the test name in the JUnit file matches the strictdoc UID.

[TESTCASE]
UID: TestCaseVc.Steering1

For test cases in the JUnit XML that have a corresponding strictdoc test case, the XSLT creates a [LINK:]. I imagine that very low-level tests, e.g. parameter range checks, will not get a test case written in strictdoc.

The generated .sdoc is then included in the strictdoc export of the requirements repo via [DOCUMENT_FROM_FILE].

Attaching the Python for the transformation and the XSLT here in case anyone is interested.

transform.py.txt

verification_report.xslt.txt

The resulting RST renders like this for two tests, where one test case was defined in .sdoc and the other was not.

[screenshot of the rendered report]

Still some details to iron out.

johanenglund (Contributor, Author) commented:

One little problem I've encountered is that @relation does not seem to work for my custom-grammar test case, i.e. I do not get forward traceability from my .sdoc test case to the code. Not 100% sure that is strictly necessary, but it is something to think about.

stanislaw (Collaborator) commented:

I am heading off, but my last thought today is very similar to your message above (I only had 10 seconds to scan through it):

Maybe a good start for discussing this would be to think through a minimal, quickest-possible script or set of scripts that we could write to achieve what you need, then take it from there and develop it into a more general solution if needed.

I will read your message in detail tomorrow.

johanenglund (Contributor, Author) commented:

This evolved slightly today with a TEST_RESULT element, which is created by the XSLT transform in my CI pipeline from the JUnit output and the tagfile. This means that I get forward traceability from TEST_CASE to TEST_RESULT.

[GRAMMAR]
ELEMENTS:
- TAG: TEST_CASE
  FIELDS:
  - TITLE: MID
    TYPE: String
    REQUIRED: False
  - TITLE: UID
    TYPE: String
    REQUIRED: False
  - TITLE: METHOD
    TYPE: SingleChoice(Automatic, Manual)
    REQUIRED: True
  - TITLE: OBJECTIVE
    TYPE: String
    REQUIRED: True
  - TITLE: DESCRIPTION
    TYPE: String
    REQUIRED: True
  - TITLE: INPUT
    TYPE: String
    REQUIRED: False
  - TITLE: PASS_CRITERIA
    TYPE: String
    REQUIRED: False
  RELATIONS:
  - TYPE: Parent
- TAG: TEST_RESULT
  FIELDS:
  - TITLE: UID
    TYPE: String
    REQUIRED: False
  - TITLE: STATUS
    TYPE: SingleChoice(Passed, Failed)
    REQUIRED: True
  - TITLE: CONTENT
    TYPE: String
    REQUIRED: True
  RELATIONS:
  - TYPE: Parent
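
For illustration only, a hand-written TEST_CASE and a generated TEST_RESULT conforming to this grammar could look roughly like the fragment below. The requirement UID REQ-STEERING-1 and the result UID are invented, and the exact RELATIONS syntax may need adjusting to the StrictDoc version in use:

[TEST_CASE]
UID: TestCaseVc.Steering1
METHOD: Automatic
OBJECTIVE: Verify that the steering angle reacts to lateral displacement.
DESCRIPTION: Apply a lateral displacement and check the steering angle response.
RELATIONS:
- TYPE: Parent
  VALUE: REQ-STEERING-1

[TEST_RESULT]
UID: TestCaseVc.Steering1/RESULT
STATUS: Passed
CONTENT: Passed in CI for build 8d9abc2.
RELATIONS:
- TYPE: Parent
  VALUE: TestCaseVc.Steering1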

The report renders like below at the moment:

[screenshot of the rendered report]

johanenglund (Contributor, Author) commented:

Inclusion of results from manually performed tests is a bit trickier. I'm thinking I'll have to resort to creating result files named with the git hash, so that the CI can automatically find the correct file and include it, e.g. test_results_8d9abc2.sdoc. But where to store them and how to version them is the question.
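
A minimal sketch of how a CI step could locate such a file, assuming the manual results are stored in a manual_results/ directory of the same repository (the directory name and file naming scheme are assumptions):

# find_manual_results.py -- hypothetical helper for the CI pipeline.
# Looks for a manual test results file named after the current commit,
# e.g. manual_results/test_results_8d9abc2.sdoc.
import subprocess
from pathlib import Path

def find_manual_results(results_dir: str = "manual_results") -> Path | None:
    short_hash = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    candidate = Path(results_dir) / f"test_results_{short_hash}.sdoc"
    return candidate if candidate.exists() else None

if __name__ == "__main__":
    path = find_manual_results()
    print(path if path else "No manual results recorded for this commit.")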

I'd also need to figure out an easy and user-friendly way to author these manual test results from [TEST_CASE] nodes.
