How to display detailed test results on Azure DevOps (Builds)?

I created a simple task on Azure Devops, to run Unit Tests on my project.
For that, I'm using the Visual Studio Test task and everything works fine.
The "problem" is that, when I navigate to build results and go to Test Results, I can only see the failed tests, or in case when all tests succeed, I can't see any.
I'm not sure if it's possible, but ideally I wanted to see all tests results, regardless if they failed or not, so that I can see it more analytical.
Thanks.

It seems there's a filter that controls that visibility: in the Tests tab of the build results, the outcome filter shows only failed (and aborted) tests by default, so you have to change it to also include passed tests before the full list appears.

Want to create a UI for the business developer team that contains test cases

I want to create a UI for the business developer team that exposes the test scripts I wrote on my PC in Selenium with C#.
Problem: the business developer team doesn't have Visual Studio installed on their PCs because they don't have licences for it. When they want to demo the application to a client, they need a UI that contains the test scripts I wrote in Selenium. How can I make those test scripts available on the business developers' laptops, which don't have Visual Studio?
For the client demo, a business developer would click a particular test case in the UI and run that feature. For example, if a business developer wants to show the Login feature from their laptop, that script should run.
How can I set up the connectivity between the business developer's laptop, which doesn't have Visual Studio, and the source laptop, which has Visual Studio and contains the test scripts?
It's a kind of R&D task; if anybody has an idea about this, let me know.
To do this, you are going to have to set up some kind of infrastructure to support it. You could achieve this by setting up a build server and creating a build plan per collection of test scripts. This lets the BDs run the tests, but they can't look into the results until the execution is done, so you will need proper reporting in place.
What you could also do is use a tool like BrowserStack. With BrowserStack (I am not affiliated with them whatsoever) you can make every executed test viewable through its recording function, and every step of every test is recorded. This way, you execute the test once; they log in to BrowserStack and check every test, test step, expected result, actual result, screenshots and a 30-day history of recordings for each and every test executed.
You don't need an IDE to run your tests. The person who wishes to run tests to show off some features of the site should be able to download your test codebase and run it locally from the command line. If they don't want to pass in a lot of flags at the command line, you could bundle the commands into batch or shell scripts.
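For example, a minimal sketch of such a script, assuming the Selenium test project has already been built, vstest.console.exe (part of the Visual Studio test tooling/agents) is on the PATH, and using hypothetical assembly and test names:

    REM run-login-demo.bat (names are hypothetical)
    REM Runs only the tests whose names contain "LoginTest" from the compiled Selenium assembly.
    vstest.console.exe SeleniumTests.dll /Tests:LoginTest
    pause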

What is the difference between test builds in TeamCity and test builds in Octopus?

I've been searching for this but found no answers.
I've created a unit test project in my solution.
Since we're planning to automate the testing, we don't know where to put it.
Will TeamCity run the tests or just build the test project? If the tests are run there, does that mean it's OK not to put the test project in Octopus?
You should run your tests in Team City, and fail the build if the tests fail.
Only if the tests pass should you allow the build artefacts to be sent to Octopus, which will then take care of deploying the software.
In general, tests should be run on the build server, but you may want to run integration tests from Octopus after the deployment has happened. An example of post-deploy testing would be something like Selenium smoke tests to ensure the deployment was successful and the application is running as expected (e.g. a website on IIS).
Generally you want tests to fail as early as possible (e.g. in UAT instead of production, in CI instead of Test/UAT, etc.).
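A minimal sketch of that gate as a build-step script, assuming the Octopus CLI (octo) is available and using hypothetical solution, package, server and API key values; in practice TeamCity's own test runner steps report results more cleanly, but the flow is the same: the push only happens when the test run exits successfully.

    REM Run the tests first; abort this step (and fail the build) if any test fails.
    dotnet test MySolution.sln || exit /b 1

    REM Only reached when the tests passed: push the package to Octopus, which handles deployment.
    octo push --package artifacts\MyApp.1.0.0.nupkg --server https://octopus.example.local --apiKey %OCTOPUS_API_KEY%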

Using Ant as a continuous testing tool

So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that describes using Eclipse and Ant to simulate this: on save, Ant runs the tests for anything that was modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction would be very much appreciated. I've searched but only found information that needs to be deciphered first.
Eclipse has a builder feature: you create an Ant builder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Infinitest, which provides a continuous testing plugin for Eclipse and IntelliJ, might also be helpful.
Ant is a build tool. IntelliJ can build your project for you, but then the build depends on IntelliJ, which means you can't build and distribute your application without it.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend upon each other, and let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you want done. For example, you might have a target called package that builds your jar or war. That target might depend upon another target called compile to compile the code. That target might in turn depend upon a code generation phase (for example if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
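A minimal build.xml sketch of that layout; the directory names and jar name are assumptions, not anything from the question:

    <project name="myapp" default="package" basedir=".">

        <!-- compile: create the work directory and compile the sources into it -->
        <target name="compile">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
        </target>

        <!-- package: depends on compile, so Ant compiles first, then builds the jar -->
        <target name="package" depends="compile">
            <jar destfile="build/myapp.jar" basedir="build/classes"/>
        </target>

    </project>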
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which can run the tests, and you specify whether you want to run almost all of your tests via the <batchtest> sub-element or, if you have a driver program, via the <test> element.
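For example, a test target along these lines (the JUnit jar versions and paths are assumptions, and the ant-junit library must also be available to Ant):

    <target name="test" depends="compile">
        <mkdir dir="build/test-reports"/>
        <junit printsummary="yes" haltonfailure="yes">
            <classpath>
                <pathelement location="build/classes"/>
                <pathelement location="lib/junit-4.12.jar"/>
                <pathelement location="lib/hamcrest-core-1.3.jar"/>
            </classpath>
            <formatter type="xml"/>
            <!-- batchtest picks up every compiled *Test class instead of listing them one by one -->
            <batchtest todir="build/test-reports">
                <fileset dir="build/classes" includes="**/*Test.class"/>
            </batchtest>
        </junit>
    </target>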
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if you have your Ant script able to run unit tests, the Continuous Integration engine not only can build your app, but then run the unit tests with each and every change that occurs.
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
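For instance, to launch it on a different port (the port number here is just an example):

    java -jar jenkins.war --httpPort=9090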
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even setup a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.

ColdFusion continuous integration

Let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following and what would be the best approach to achieve it.
Whenever a developer checks code into SVN, I would like to get all the new changes/files and do an automated build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first syntax checking, and second integration testing (whether the functionality works as expected). For the latter part some unit test tools would have to be used.
Can someone comment on their experience doing something similar for ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself but did have a look at it some time ago (more than 12 months) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds:
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all SQL scripts which make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table, so we can see what ran, how long it took and which build it was associated with (see the sketch after this list). If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
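For illustration, the kind of line appended after each imported script might look like this; the table, columns and values are hypothetical, and it assumes SQL Server:

    -- appended by the concatenation step after 0042_add_invoice_index.sql
    INSERT INTO runlog (script_name, build_number, executed_at)
    VALUES ('0042_add_invoice_index.sql', '2014.02.1', GETDATE());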
The build is requested manually through the build servers we have, but you click a button, make tea and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.

Deploying with team build and psexec -> integration tests error

I have a team build (upgrade template, TFS 2010, MSBuild) compiling and testing a WCF service. We use psexec and the Exec task to remotely install the service (WiX installer) on the web server, prior to running an integration test suite against it. However, sometimes our nightly build fails with a compilation error, of which I can only see the first 1024 bytes, and most of it is CSS styles. I've tried delaying the tests with sleeps, thinking it might be due to a long JIT, but all 600+ integration tests fail. In the build log the Exec task with psexec appears to execute synchronously, as expected, and returns exit code 0. Could anyone come up with a reason why this occurs now and then?
This doesn't sound like it's anything specific to TFS, MSBuild or psexec; it sounds like there may be an intermittent installation, configuration, or coding problem with the service. The point of CI and integration tests is to get early feedback on your process, and apparently something is wrong. The trick is drilling down into the problem and ruling out where the issue(s) reside.
Psexec claims the WiX deployment went fine, but did it? Are all files present? Were previous versions of the installation properly removed, or did they get upgraded correctly?
All 600 tests fail, but the tests don't contain the proper stack trace. Can you reproduce the problem outside of the tests? E.g. when the tests fail, can you run through a scenario manually, or run one of the existing tests with a debugger attached to see the same stack trace? One strategy might be to identify one or two specific tests that accurately validate the deployment: run just these tests after the deployment, and if they fail, abort the build and leave the server in a failed state for deeper analysis. This may require customizing the build template, but it may be worth the effort.
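As a sketch of that idea (the test names are hypothetical, and it assumes the MSTest command-line runner that fits a TFS 2010-era build, invoked from a script or an Exec step right after the install):

    REM Run a couple of named smoke tests immediately after the psexec install;
    REM a non-zero exit code here should fail the build before the full suite runs.
    mstest.exe /testcontainer:IntegrationTests.dll /test:Smoke_ServiceResponds /test:Smoke_CanAuthenticate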
Can you add logging to the WCF service? Better logging to the tests?
Lastly, CI as mentioned previously is about early feedback. The general rule is, "If something is painful then you should do it more often." Focus on pain points, isolate them and iteratively improve them. When the pain diminishes, focus on other pain points. In your case, consider running your "nightly" build in a rolling fashion - you'll find your intermittent problem after a few runs rather than weeks.