Importing test reports from an external system to Azure DevOps?

I know that in Azure DevOps I can create test cases for my features and I can collect those test cases in a test suite to keep track of test results for a particular release or iteration of my software.
I also know that at the code level I can integrate a test framework to run unit or integration tests. Depending on the technology stack and language I use, the frameworks differ (e.g., Mocha, JUnit, NUnit, pytest), but they usually produce a common format for test results, such as an XML test report using the JUnit schema.
Now, I run my unit tests in different tools (e.g., GitLab CI/CD or Jenkins) and I would like to link the test results from those unit and integration tests to the test cases that I am collecting in test suites on Azure DevOps.
If I understand this correctly, the link to bring this together is the test case ID on Azure DevOps, which I somehow need to correlate with the individual unit tests in my test framework.
Once my test suite has run in the external tool, how can I publish the test report (e.g., a JUnit XML file) to Azure DevOps and correlate the report with my test suite?
Is there a specific Azure DevOps API that I can use (e.g., https://learn.microsoft.com/en-us/rest/api/azure/devops/test/) to publish a JUnit test report?
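For orientation, here is a rough, hedged sketch of how the Test REST API linked above could be driven from an external CI tool, written in plain Java with the JDK's HttpClient: create an automated test run, post one result per unit test with a testCase id pointing at the corresponding Azure DevOps test case work item, and mark the run completed. The organization, project, test case ID, and the exact JSON field names are assumptions to verify against the API reference and the api-version you target; parsing the JUnit XML and extracting the run ID from the response are left out for brevity.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hedged sketch of the Test REST API flow (Runs - Create, Results - Add,
// Runs - Update); field names should be checked against the official docs.
public class PublishResultsSketch {
    static final String ORG = "my-org";          // assumption
    static final String PROJECT = "my-project";  // assumption
    static final String PAT = System.getenv("AZDO_PAT"); // personal access token
    static final String BASE = "https://dev.azure.com/" + ORG + "/" + PROJECT + "/_apis/test";

    static String call(HttpClient http, String method, String url, String json) throws Exception {
        String auth = Base64.getEncoder().encodeToString((":" + PAT).getBytes());
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .method(method, HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        return resp.body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Create an automated test run.
        String createRunResponse = call(http, "POST", BASE + "/runs?api-version=7.0",
                "{\"name\":\"External CI run\",\"automated\":true}");
        String runId = "123"; // replace with the "id" parsed from createRunResponse

        // 2. Add one result per unit test. testCase.id is the work item ID of the
        //    Azure DevOps test case that the unit test maps to -- the correlation
        //    you maintain yourself, e.g. encoded in the JUnit XML test names.
        call(http, "POST", BASE + "/runs/" + runId + "/results?api-version=7.0",
                "[{\"testCaseTitle\":\"LoginTest.validUser\",\"automatedTestName\":\"LoginTest.validUser\","
                + "\"outcome\":\"Passed\",\"durationInMs\":42,\"testCase\":{\"id\":4711}}]");

        // 3. Close the run.
        call(http, "PATCH", BASE + "/runs/" + runId + "?api-version=7.0",
                "{\"state\":\"Completed\"}");
    }
}
```

In a real pipeline, the JUnit XML produced by GitLab CI/CD or Jenkins would be parsed first, and each testcase element mapped to one result object in step 2.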

Related

Azure DevOps 'Run only impacted tests' doesn't work with xUnit tests?

I have an ASP.NET Web API project (.NET Framework 4.6.2) where I have written unit tests using xUnit.
I have created a build pipeline in Azure DevOps with 'Run only impacted tests' checked. Also, the Visual Studio Test version in my test assemblies is set to 2.*.
I have tried running my mock tests (no data-driven test cases) to check that only impacted tests run, but all the tests run.
I have had no luck finding a solution for this. Does anyone know how to set it up correctly?
I have read this article and configured the settings accordingly, but it still doesn't work.

Scope of Integration Testing in a CI CD workflow

The question is more about a fundamental understanding of a normal/ideal CI flow and the scope of integration testing within it.
As per my understanding, the basic CI/CD flow is:
Unit testing --> Integration testing --> Build artifact --> Deploy to Dev/Sandbox or any other subsequent environments.
So unit testing and integration testing collectively make sure the build is stable and ready to be deployed.
But recently we had a discussion in my team where we wanted to run integration tests on the deployed instances on Dev/Sandbox etc., so as to verify that the application is working fine after deployment.
And Microsoft's article on Build-Deploy-Test workflows suggests that this could be a possible way.
So, my questions are:
Are integration tests supposed to test the configuration of different environments?
Are integration tests supposed to be run before packaging or deploying the application?
Is some automated testing required to verify that the deployed application functions correctly in all environments?
If not integration tests, then what could be alternative solutions?
You're mixing up integration testing with system testing.
Integration testing checks that some components can work together (can be integrated). You may have integration tests to verify how the data-layer API operates with a database, or how the Web API responds to HTTP calls. You might not have the entire system completely working in order to do integration testing of its components.
Unlike integration tests, system tests require all the components to be implemented and configured. That is end-to-end testing (e.g., from a web request to a database record). This kind of testing requires the entire system to be deployed, which makes it more 'real' but also more expensive.
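To make the distinction concrete, here is a hedged JUnit 5 sketch of the two scopes. It assumes the H2 driver on the test classpath, and the deployed URL is a hypothetical placeholder, not something taken from the question.

```java
import org.junit.jupiter.api.Test;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TestScopeSketch {

    // Integration test: verifies that the data-layer SQL works against a real
    // (in-memory) database. Runs before packaging/deployment, in the CI stage.
    @Test
    void dataLayerTalksToDatabase() throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE users (name VARCHAR(50))");
            st.execute("INSERT INTO users VALUES ('alice')");
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM users")) {
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }
    }

    // System/end-to-end test: hits the application *after* it has been deployed
    // to Dev/Sandbox, so it also exercises that environment's configuration.
    // The URL below is a made-up placeholder.
    @Test
    void deployedApplicationAnswersHealthCheck() throws Exception {
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://dev-sandbox.example.com/health")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        assertEquals(200, resp.statusCode());
    }
}
```

The first kind belongs before packaging; the second only makes sense against a deployed instance, which is the build-deploy-test style the Microsoft article describes.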

How can I access the TestCategory attribute in the TFS API or TFS reports?

We have a rather complicated library of automated tests. We are using TestCategory and TestProperty to help us keep things clean. Now I want the ability to report on the categories after the tests run.
Does anyone know if the TestCategory or TestProperty in MSTest is accessible from the TFS warehouse or, better yet, through the TFS API?
The TestCategory is not available in the test results and thus isn't available via the warehouse or the API. TestCategory can be used to filter the tests that are run during a build, but that's it. So you could create different test builds based on test category and then report based on each build.
You could alternatively associate automation with your test cases, then organize your test cases into suites and organize your reports based on suites.

How to write unit tests for OpenLDAP?

I'm using the OpenLDAP library for C++ to implement some authentication and queries for an LDAP database. I want to write unit tests for my code.
My question is: is it done the same way as with SQL databases? For instance, with SQL, in each unit test you do something like this: drop the test DB, create a new one, add some users, assert your APIs, etc.
All in all, I want to know the convention for writing unit tests against an LDAP database.
If you're talking about unit tests, then you should mock your LDAP API and test only your code, not the LDAP API implementation. You can use Google Mock for your mocks.
But I think you're actually referring to integration tests, and for those the same strategy as with database integration tests applies. You set up the environment (bring up the server, populate the entries), assert that the code works against it, and then tear down that environment.
In Java I would use an in-memory LDAP server for integration tests; you could try to find one that you can embed and run entirely from memory in C/C++.
See What's the difference between unit, functional, acceptance, and integration tests?.
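For the record, here is what the in-memory approach looks like in Java, as a hedged sketch assuming JUnit 5 and the UnboundID LDAP SDK on the test classpath; the base DN and entries are made-up test data. In C/C++ the shape of the test would be the same: start an embedded or throwaway server, seed entries, run your code against it, shut it down.

```java
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchScope;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class LdapIntegrationSketch {

    @Test
    void findsProvisionedUser() throws Exception {
        // Set up: start an embedded LDAP server and populate test entries.
        InMemoryDirectoryServerConfig config =
                new InMemoryDirectoryServerConfig("dc=example,dc=com");
        config.addAdditionalBindCredentials("cn=admin,dc=example,dc=com", "secret");
        InMemoryDirectoryServer server = new InMemoryDirectoryServer(config);
        server.startListening();
        server.add("dn: dc=example,dc=com", "objectClass: top",
                   "objectClass: domain", "dc: example");
        server.add("dn: uid=alice,dc=example,dc=com",
                   "objectClass: inetOrgPerson",
                   "uid: alice", "cn: Alice", "sn: Example");

        LDAPConnection conn = server.getConnection();
        try {
            // Exercise/assert: in a real test your own LDAP code would run here;
            // we query directly only to keep the sketch short.
            SearchResult result =
                    conn.search("dc=example,dc=com", SearchScope.SUB, "(uid=alice)");
            assertEquals(1, result.getEntryCount());
        } finally {
            // Tear down the environment.
            conn.close();
            server.shutDown(true);
        }
    }
}
```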

Multiple test sets in Maven

I have written a REST server (in Java, using RESTEasy) with a unit test suite written in Scala. The test suite uses the mock server provided by RESTEasy and runs with every Maven build.
I would like to create a second functional test suite that calls an actual tomcat server and exercises each REST service. I do not want this new suite to run with every build, but only on demand, perhaps controlled with a command line argument to Maven.
Is it possible to create multiple independent test suites in a Maven project and disable some from automatic running, or do I need to create a separate Maven project for this functional suite? How can I segregate the different functional suite code if these tests are in the same project with the unit tests (different directories)? How do I run a selected suite with command line arguments?
I have never used it myself, but I am aware of Maven integration tests run by the Maven Failsafe plugin.
Whereas the Surefire plugin by default includes the tests named **/Test*.java, **/*Test.java, and **/*TestCase.java, the Failsafe plugin runs the **/IT*.java, **/*IT.java, and **/*ITCase.java tests.
The two test phases have different intentions, which seems to match part of your needs. It might be worth having a look.
Another approach would be to use Maven profiles and specify different Surefire includes for each profile.
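As an illustration of the Failsafe naming convention, here is a hedged Java sketch of a functional test that exercises a running Tomcat instance over HTTP; the IT suffix in the class name is what keeps it out of the Surefire unit-test run, and the URL and resource path are hypothetical. It assumes JUnit 5 and the maven-failsafe-plugin configured in the POM.

```java
import org.junit.jupiter.api.Test;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Matched by Failsafe's default **/*IT.java pattern, so a plain `mvn test`
// (Surefire) build skips it, while `mvn verify` runs it in the
// integration-test phase.
class CustomerResourceIT {

    // Hypothetical endpoint exposed by the REST server deployed on Tomcat.
    private static final String BASE_URL = "http://localhost:8080/api/customers";

    @Test
    void listCustomersReturns200() throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(BASE_URL)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```

If you only want such tests to run on demand, binding the Failsafe executions inside a Maven profile gives you a command-line switch, e.g. mvn verify -Pfunctional-tests (the profile name here is an assumption), while the Scala unit suite keeps running with every regular build.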