Task: Investigate using AzurePipelines.TestLogger for direct test logging via API #83

Open
NightOwl888 opened this issue Dec 31, 2023 · 1 comment
Labels
is:enhancement New feature or request pri:low up for grabs This issue is open to be worked on by anyone
Milestone

Comments

NightOwl888 (Owner) commented Dec 31, 2023

Our current test logging writes results to a local file on the build server in TRX format, and the TRX file is then uploaded to Azure Pipelines using the PublishTestResults@2 task. This works and lets us set a title per file upload, so we can include the test project name, target framework, OS, and build platform on the top-level element and see at a glance where a test failed.
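For reference, the upload step described above can be sketched as a PublishTestResults@2 task roughly like the following (illustrative only; the variable names for project, framework, OS, and platform are assumptions, not the actual pipeline's):

```yaml
# Sketch of the current approach: upload a TRX file with a descriptive title.
# Variable names ($(testProjectName), etc.) are illustrative placeholders.
- task: PublishTestResults@2
  condition: succeededOrFailed()
  inputs:
    testResultsFormat: 'VSTest'   # TRX files use the VSTest format
    testResultsFiles: '**/*.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)'
    testRunTitle: '$(testProjectName) - $(framework) - $(osName) - $(platform)'
```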

[Screenshot: Azure Pipelines test results list, with each upload titled by test project, target framework, OS, and build platform]

This works well, but because we have so many different test runs, and the upload tasks only run after the tests complete, pushing the test results adds a lot of extra time (mostly spent evaluating task conditions only to discover that the task is skipped because it is not the current configuration).

When we were on TeamCity, test results were automatically added to the job in real time (including running statistics). That let us start investigating a problem as soon as it appeared in the portal, instead of waiting for the whole run to finish and the file to upload before seeing anything. This could mean seeing the problem 5 minutes into the test cycle instead of 20 to 30 minutes in.

The AzurePipelines.TestLogger may do the trick, but I don't think it can report the test project name, target framework, OS, and build platform. So, we may need to fork it and see if we can find a way to add that info, and possibly submit a PR to them.

However, first we need to hook it up to see what information it currently provides.

Once we have a solution for this, it can also be used on ICU4N, RandomizedTesting, Spatial4n, and Morfologik.Stemming.

Not sure it can be used on Lucene.NET, though - we would need approval, and we currently don't have enough permissions to set up a new Azure DevOps pipeline, much less use a plugin for it. It would work for our "unofficial" pipelines, though.

@NightOwl888 NightOwl888 added is:enhancement New feature or request pri:low up for grabs This issue is open to be worked on by anyone labels Oct 13, 2024
@NightOwl888 NightOwl888 added this to the Future milestone Oct 13, 2024
NightOwl888 (Owner) commented:
I had a look at this. There are pros and cons with using it.

Pros

  1. Easy to use - simply add a reference and use the --logger argument on dotnet test
  2. Uploads all of the results inline, no need for a separate upload (except we will want to upload the TRX file as an artifact)
  3. It shows the OS and the Job Name in the record; the Job Name contains the platform and target framework. Results are grouped by test project, so the project name is shown as well, along with the exact OS version the tests ran on.
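To hook the logger up (pro #1 above), the wiring is just a package reference on each test project plus the `--logger` argument to `dotnet test`. A sketch as a pipeline step, assuming the AzurePipelines.TestLogger package is already referenced; per its README, the logger needs the pipeline's access token exposed as `SYSTEM_ACCESSTOKEN`:

```yaml
# Sketch: run tests with AzurePipelines.TestLogger reporting results inline.
# Assumes each test project references the AzurePipelines.TestLogger package.
- task: DotNetCoreCLI@2
  displayName: 'dotnet test (inline result reporting)'
  inputs:
    command: 'test'
    arguments: '--logger:AzurePipelines'
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)  # the logger uses this to call the API
```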

Cons

  1. Since the tests are grouped, the count shows significantly fewer tests than actually ran
  2. There is no way to customize the title of each group
  3. It doesn't provide access to the rest of the REST API, so there is no way to extend it (for example, adding attachments to tests)

Either way, we are still left holding the bag on deciding when to report a failure due to a crash or hang, but we can probably move parsing of the TRX file into the PowerShell background job, which would save several minutes.

On the other hand, calling the REST API directly is fairly straightforward once the permissions are worked out. However, no API endpoint accepts the TRX format directly: we would have to parse it and build the JSON payload ourselves, which is most practical in batches. If we go this route, we can probably borrow ideas from the PublishTestResults@2 task on how to implement it. It may even make sense to package it as a dotnet tool or a PowerShell module.
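As an illustration of what "parse it ourselves and create the JSON" in batches could look like (a sketch, not the project's code; the output field names follow the Azure DevOps Test Results REST API for `POST .../_apis/test/Runs/{runId}/results`, and the HTTP upload itself is omitted):

```python
# Sketch: convert TRX <UnitTestResult> elements into batches of result dicts
# shaped for the Azure DevOps Test Results REST API. Illustrative only.
import xml.etree.ElementTree as ET

# Default namespace used by TRX documents.
TRX_NS = {"t": "http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}


def trx_to_result_batches(trx_xml: str, batch_size: int = 100):
    """Parse a TRX document and yield batches of REST-API-shaped result dicts."""
    root = ET.fromstring(trx_xml)
    batch = []
    for r in root.findall(".//t:UnitTestResult", TRX_NS):
        # TRX durations are hh:mm:ss.fffffff; the API wants whole milliseconds.
        h, m, s = r.get("duration", "00:00:00").split(":")
        duration_ms = int((int(h) * 3600 + int(m) * 60 + float(s)) * 1000)
        batch.append({
            "testCaseTitle": r.get("testName"),
            "automatedTestName": r.get("testName"),
            "outcome": r.get("outcome"),  # e.g. Passed, Failed, NotExecuted
            "durationInMs": duration_ms,
        })
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```

Each yielded batch would then be serialized to JSON and POSTed to the test run; batching keeps the request count (and payload sizes) manageable for large test suites.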

And of course, we should keep a lookout for other options, as this seems like something that should be available, but last time I checked it was not.
