Add options to import stdout/stderr from junit test as comment #228
Comments
Thanks @moenny!
@chrisfaragliaTestRail, what do you think?
No, neither stdout nor stderr is currently being processed. Here is an example:
```python
import pytest
import sys


class Test:
    def test_ok(self):
        assert True

    def test_skip(self):
        pytest.skip("skip test")

    def test_fail(self):
        pytest.fail("fail")

    def test_stderr_ok(self):
        print("print to stdout", file=sys.stdout)
        print("print to stderr", file=sys.stderr)
        assert True

    def test_stderr_fail(self):
        print("print to stdout", file=sys.stdout)
        print("print to stderr", file=sys.stderr)
        assert False
```
```shell
pytest-3 tests/test.py -rsv --capture=tee-sys -o junit_logging=all --junitxml report.xml
```

The resulting `report.xml`:

```xml
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
  <testsuite name="pytest" errors="0" failures="2" skipped="1" tests="5" time="0.039" timestamp="2024-05-06T12:16:27.539909" hostname="NBG37LNX">
    <testcase classname="tests.test.Test" name="test_ok" time="0.000">
      <system-out>--------------------------------- Captured Log ---------------------------------
--------------------------------- Captured Out ---------------------------------
</system-out>
      <system-err>--------------------------------- Captured Err ---------------------------------
</system-err>
    </testcase>
    <testcase classname="tests.test.Test" name="test_skip" time="0.000">
      <skipped type="pytest.skip" message="skip test">/home/[email protected]/git/zip/pytest/tests/test.py:9: skip test</skipped>
      <system-out>--------------------------------- Captured Log ---------------------------------
--------------------------------- Captured Out ---------------------------------
</system-out>
      <system-err>--------------------------------- Captured Err ---------------------------------
</system-err>
      <system-out>--------------------------------- Captured Log ---------------------------------
--------------------------------- Captured Out ---------------------------------
</system-out>
      <system-err>--------------------------------- Captured Err ---------------------------------
</system-err>
    </testcase>
    <testcase classname="tests.test.Test" name="test_fail" time="0.000">
      <failure message="Failed: fail">self = &lt;test.Test object at 0x7fc3dce440d0&gt;

    def test_fail(self):
&gt;       pytest.fail("fail")
E       Failed: fail

tests/test.py:12: Failed</failure>
      <system-out>--------------------------------- Captured Log ---------------------------------
--------------------------------- Captured Out ---------------------------------
</system-out>
      <system-err>--------------------------------- Captured Err ---------------------------------
</system-err>
    </testcase>
    <testcase classname="tests.test.Test" name="test_stderr_ok" time="0.000">
      <system-out>--------------------------------- Captured Log ---------------------------------
--------------------------------- Captured Out ---------------------------------
print to stdout
</system-out>
      <system-err>--------------------------------- Captured Err ---------------------------------
print to stderr
</system-err>
    </testcase>
    <testcase classname="tests.test.Test" name="test_stderr_fail" time="0.000">
      <failure message="assert False">self = &lt;test.Test object at 0x7fc3dd694520&gt;

    def test_stderr_fail(self):
        print("print to stdout", file=sys.stdout)
        print("print to stderr", file=sys.stderr)
&gt;       assert False
E       assert False

tests/test.py:22: AssertionError</failure>
      <system-out>--------------------------------- Captured Log ---------------------------------
--------------------------------- Captured Out ---------------------------------
print to stdout
</system-out>
      <system-err>--------------------------------- Captured Err ---------------------------------
print to stderr
</system-err>
    </testcase>
  </testsuite>
</testsuites>
```
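For illustration, the captured streams that the proposed option would pick up can be read straight out of such a report with the standard library. This is only a sketch, not trcli's actual parser; the helper name `collect_captured_output` is mine:

```python
import xml.etree.ElementTree as ET


def collect_captured_output(junit_xml: str) -> dict:
    """Map each testcase name to its combined <system-out>/<system-err> text."""
    captured = {}
    root = ET.fromstring(junit_xml)
    for case in root.iter("testcase"):
        parts = []
        for tag in ("system-out", "system-err"):
            # system-out/system-err are direct children of <testcase>,
            # and (as seen above for test_skip) may appear more than once.
            for node in case.findall(tag):
                if node.text and node.text.strip():
                    parts.append(node.text.strip())
        captured[case.get("name")] = "\n".join(parts)
    return captured


# Trimmed-down version of the report above, just to exercise the helper.
sample = (
    '<testsuites><testsuite name="pytest">'
    '<testcase classname="tests.test.Test" name="test_ok"/>'
    '<testcase classname="tests.test.Test" name="test_stderr_ok">'
    "<system-out>print to stdout</system-out>"
    "<system-err>print to stderr</system-err>"
    "</testcase></testsuite></testsuites>"
)

print(collect_captured_output(sample))
```

With the full report, the dict would hold the "Captured Out"/"Captured Err" blocks per test, which is exactly the material the feature wants to forward to TestRail.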
```shell
trcli -y \
  -h "$BASE_URL" \
  --project "$PROJECT" \
  --project-id "$PROJECT_ID" \
  --username "$USER" \
  --key "$APIKEY" \
  parse_junit \
  --title "$TITLE" \
  --suite-name "$SUITE" \
  --case-fields "template_id:5" \
  -f "$REPORT_XML"
```
Result:

<?xml version="1.0" encoding="UTF-8"?>
<run>
<id>R2585</id>
<name>Testrun capture stdout and stderr</name>
<description></description>
<config></config>
<createdon>2024-05-06T10:17:50Z</createdon>
<completed>false</completed>
<milestone></milestone>
<stats>
<passed>
<percent>40</percent>
<count>2</count>
</passed>
<blocked>
<percent>0</percent>
<count>0</count>
</blocked>
<untested>
<percent>0</percent>
<count>0</count>
</untested>
<retest>
<percent>20</percent>
<count>1</count>
</retest>
<failed>
<percent>40</percent>
<count>2</count>
</failed>
<status_unsupported>
<percent>0</percent>
<count>0</count>
</status_unsupported>
<status_in_review>
<percent>0</percent>
<count>0</count>
</status_in_review>
<status_function_incomplete>
<percent>0</percent>
<count>0</count>
</status_function_incomplete>
<status_not_testable>
<percent>0</percent>
<count>0</count>
</status_not_testable>
<status_not_a_bug>
<percent>0</percent>
<count>0</count>
</status_not_a_bug>
</stats>
<sections>
<section>
<name>pytest</name>
<description></description>
<tests>
<test>
<id>T71506</id>
<title>test_ok</title>
<template>Test Case (Automation)</template>
<type>Functional</type>
<priority>Medium</priority>
<estimate></estimate>
<references></references>
<custom>
<automation_id>tests.test.Test.test_ok</automation_id>
</custom>
<caseid>C35798</caseid>
<status>Passed</status>
<assignedto></assignedto>
<inprogress></inprogress>
<changes>
<change>
<createdon>2024-05-06T10:17:51Z</createdon>
<createdby>USER</createdby>
<status>Passed</status>
<assignedto></assignedto>
<comment></comment>
<version></version>
<elapsed></elapsed>
<defects></defects>
</change>
</changes>
</test>
<test>
<id>T71507</id>
<title>test_skip</title>
<template>Test Case (Automation)</template>
<type>Functional</type>
<priority>Medium</priority>
<estimate></estimate>
<references></references>
<custom>
<automation_id>tests.test.Test.test_skip</automation_id>
</custom>
<caseid>C35799</caseid>
<status>Retest</status>
<assignedto></assignedto>
<inprogress></inprogress>
<changes>
<change>
<createdon>2024-05-06T10:17:51Z</createdon>
<createdby>USER</createdby>
<status>Retest</status>
<assignedto></assignedto>
<comment>Type: pytest.skip
Message: skip test
Text: /home/[email protected]/git/zip/pytest/tests/test.py:9: skip test</comment>
<version></version>
<elapsed></elapsed>
<defects></defects>
</change>
</changes>
</test>
<test>
<id>T71508</id>
<title>test_fail</title>
<template>Test Case (Automation)</template>
<type>Functional</type>
<priority>Medium</priority>
<estimate></estimate>
<references></references>
<custom>
<automation_id>tests.test.Test.test_fail</automation_id>
</custom>
<caseid>C35800</caseid>
<status>Failed</status>
<assignedto></assignedto>
<inprogress></inprogress>
<changes>
<change>
<createdon>2024-05-06T10:17:51Z</createdon>
<createdby>USER</createdby>
<status>Failed</status>
<assignedto></assignedto>
<comment>Type:
Message: Failed: fail
Text: self = &lt;test.Test object at 0x7fc3dce440d0&gt;
def test_fail(self):
&gt; pytest.fail(&quot;fail&quot;)
E Failed: fail
tests/test.py:12: Failed</comment>
<version></version>
<elapsed></elapsed>
<defects></defects>
</change>
</changes>
</test>
<test>
<id>T71509</id>
<title>test_stderr_ok</title>
<template>Test Case (Automation)</template>
<type>Functional</type>
<priority>Medium</priority>
<estimate></estimate>
<references></references>
<custom>
<automation_id>tests.test.Test.test_stderr_ok</automation_id>
</custom>
<caseid>C35801</caseid>
<status>Passed</status>
<assignedto></assignedto>
<inprogress></inprogress>
<changes>
<change>
<createdon>2024-05-06T10:17:51Z</createdon>
<createdby>USER</createdby>
<status>Passed</status>
<assignedto></assignedto>
<comment></comment>
<version></version>
<elapsed></elapsed>
<defects></defects>
</change>
</changes>
</test>
<test>
<id>T71510</id>
<title>test_stderr_fail</title>
<template>Test Case (Automation)</template>
<type>Functional</type>
<priority>Medium</priority>
<estimate></estimate>
<references></references>
<custom>
<automation_id>tests.test.Test.test_stderr_fail</automation_id>
</custom>
<caseid>C35802</caseid>
<status>Failed</status>
<assignedto></assignedto>
<inprogress></inprogress>
<changes>
<change>
<createdon>2024-05-06T10:17:51Z</createdon>
<createdby>USER</createdby>
<status>Failed</status>
<assignedto></assignedto>
<comment>Type:
Message: assert False
Text: self = &lt;test.Test object at 0x7fc3dd694520&gt;
def test_stderr_fail(self):
print(&quot;print to stdout&quot;, file=sys.stdout)
print(&quot;print to stderr&quot;, file=sys.stderr)
&gt; assert False
E assert False
tests/test.py:22: AssertionError</comment>
<version></version>
<elapsed></elapsed>
<defects></defects>
</change>
</changes>
</test>
</tests>
</section>
</sections>
</run>
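In the run above, the comment field only carries failure and skip details; passing tests like test_stderr_ok end up with an empty comment even though output was captured. A minimal sketch of how a result comment could also carry the captured streams; the function name and section labels are my own illustration, not trcli internals:

```python
def build_comment(message: str, stdout: str = "", stderr: str = "") -> str:
    """Append captured stdout/stderr sections to an existing result comment."""
    parts = []
    if message:
        parts.append(message)
    if stdout.strip():
        parts.append("Captured stdout:\n" + stdout.strip())
    if stderr.strip():
        parts.append("Captured stderr:\n" + stderr.strip())
    return "\n\n".join(parts)


comment = build_comment(
    "Message: assert False",
    stdout="print to stdout",
    stderr="print to stderr",
)
print(comment)
```

For a passing test the message is empty, so the comment would consist only of the captured sections instead of staying blank.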
I can currently only provide code for JUnit.
What would you like the TestRail CLI to be able to do?
Add options to import system-out and system-err messages from a JUnit test as a test comment.
I.e.:
Why is this feature necessary on the TestRail CLI?
This makes test results easier to understand.
More details
Because I need this functionality, I have already started to implement it:
main...moenny:trcli:add-junit-option-system
I can provide tests on request.
Interested in implementing it yourself?
Yes