Test Results Analyzer Plugin for GitLab

Jenkins has a nice plugin for test results: the Test Results Analyzer Plugin (plugin url).
How can I attach a similar dashboard (with test results) to GitLab?

Here is what my googling turned up:
GitLab has a Pages feature, but it depends on artifact lifetime and only the latest result is shown. See also the JUnit test reports documentation:
https://docs.gitlab.com/ee/ci/junit_test_reports.html
On the other hand, there is a long-running open issue about this requirement for GitLab, besides lots of closed ones:
https://gitlab.com/gitlab-org/gitlab-ce/issues/34102
Here is GitLab's own take from their comparison page:
https://about.gitlab.com/comparison/gitlab-vs-jenkins.html
"Many languages use frameworks that automatically run tests on your code and create a report: one example is the JUnit format that is common to different tools. GitLab supports browsing artifacts and you can download reports, but we’re still working on a proper way to integrate them directly into the product."
I found a project that is a complete, fully supported solution. I have just started with it, and so far it is unbelievable: you just upload your reports to it, and there is a lot of plugin support.
http://reportportal.io/

Related

Concourse CI - Build results

I've not been able to find any way in Concourse to show a 'build summary page' as you get in Jenkins/TFS etc. In those tools you can see build history (OK/failures), build durations, unit test results, code coverage, various graphs etc. - but Concourse just has a build history made up of simple log files.
There doesn't seem to be any extensions system or other way to achieve this.
I'd prefer to use Concourse for the pipelines and build-in-containers approach, but it's a hard sell to developers who see it as a step backwards.
Thanks
Paul
The containers/workers of your plan/build are spun up and spun down. Everything you want to keep, you need to PUT to a resource. In the Concourse ecosystem there are many resources already created.
The results and output logs of your builds/jobs are available by default in the Concourse UI (ok/not-ok, duration). See for example: https://ci.concourse-ci.org/teams/main/pipelines/main/jobs/build-fly/builds/1273 In the job itself you can expand each step and see its logs, for example the output of your unit tests.
If you want to keep the Sonar report, you just install a resource like https://github.com/cathive/concourse-sonarqube-resource and simply run it after your unit tests using a put step (storing the report on your SonarQube server). So indeed you have no HTML report kept on that specific build, but you can put it wherever you need it by using or creating a resource. Very simple, and arguably nicer, because the whole Sonar overview of your project is where it belongs :)

Using Ant as a continuous testing tool

So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that uses Eclipse and Ant in order to simulate this: on save, Ant runs any tests that were modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction are very much appreciated. I've searched but only found information that needs to be deciphered first.
Eclipse has the builder feature: you create an Ant builder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Also, Infinitest, which provides a continuous testing plugin for Eclipse and IntelliJ, might be helpful.
Ant is a build tool. IntelliJ can build for you too, but then you need IntelliJ for every build, which means you can't build and distribute your application without IntelliJ.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend upon each other, and let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you specify to do. For example, you might have a target called package that will build your jar or war. That target might depend upon another target called compile to compile the code. That target in turn might depend upon a code generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which can run the tests, and you specify whether you want to run almost all of your tests via the <batchtest> nested element, or, if you have a test-suite driver, name it via the <test> element.
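To make that concrete, here is a minimal sketch of a build.xml along those lines. The project name, directory layout (src, build, dist) and the *Test.java naming pattern are assumptions for illustration, and the <junit> task needs the JUnit jar available to Ant.

<project name="myapp" default="package" basedir=".">

    <!-- compile: create the work directory and compile the sources -->
    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
    </target>

    <!-- test: run every *Test class via <junit>/<batchtest> and write XML reports -->
    <target name="test" depends="compile">
        <mkdir dir="build/test-reports"/>
        <junit printsummary="yes" haltonfailure="no">
            <classpath path="build/classes"/>
            <formatter type="xml"/>
            <batchtest todir="build/test-reports">
                <fileset dir="src" includes="**/*Test.java"/>
            </batchtest>
        </junit>
    </target>

    <!-- package: build the jar once compilation and the tests have run -->
    <target name="package" depends="test">
        <mkdir dir="dist"/>
        <jar destfile="dist/myapp.jar" basedir="build/classes"/>
    </target>
</project>

Running ant test (or just ant, since package depends on test) from the project directory exercises the whole chain; the same file can be run from IntelliJ's Ant tool window.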
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if you have your Ant script able to run unit tests, the Continuous Integration engine not only can build your app, but then run the unit tests with each and every change that occurs.
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even setup a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.

ColdFusion continuous integration

Let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following, and what the best approach to achieve it would be.
Whenever a developer checks code into SVN, I would like to get all the new changes/files and do an automatic build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first syntax checking, and second integration testing (whether functionality works as expected). For the latter part some unit testing tools would have to be used.
Can someone comment on their experience doing something similar for ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question, but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself but did have a look at it some time ago (more than 12 months) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
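For reference, such a Jenkins/Ant target is roughly shaped like the sketch below. This is a hypothetical outline only: the task definition class, attribute names, port and paths are illustrative, and the real ones should be taken from the MXUnit wiki page linked above.

<!-- Hypothetical outline: names and paths below are illustrative, not copied from the MXUnit docs -->
<taskdef name="mxunittask" classname="org.mxunit.ant.MXUnitAntTask" classpath="lib/mxunit-ant.jar"/>

<target name="run-mxunit-tests">
    <mkdir dir="test-results"/>
    <!-- point the runner at the CF server hosting the tests and collect JUnit-style XML for Jenkins -->
    <mxunittask server="localhost" port="8500" outputdir="test-results" verbose="true">
        <directory path="/var/www/myapp/tests" recurse="true"/>
    </mxunittask>
</target>

Jenkins then publishes the XML written to test-results with its JUnit test report step.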
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds:
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all SQL scripts which make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table, so we can see what ran, how long it took and which build it was associated with (see the sketch after this answer). If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
The build is requested manually through the build servers we have, but you click a button, make tea and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.
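If you wanted to script the concatenate-and-execute steps above with Ant, the core of it is just the built-in <concat> and <sql> tasks. A rough sketch, in which the file names, connection details and JDBC driver are all placeholders:

<!-- Sketch only: file names, credentials and the JDBC driver are placeholders -->
<target name="db-upgrade">
    <!-- concatenate the release's scripts (as listed in the manifest) into one upgrade script -->
    <concat destfile="build/upgrade.sql">
        <filelist dir="sql" files="001_create_tables.sql,002_add_indexes.sql"/>
    </concat>
    <!-- run the combined script against the target database, aborting on errors -->
    <sql driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
         url="jdbc:sqlserver://dbserver;databaseName=myapp"
         userid="deploy" password="secret"
         src="build/upgrade.sql"
         onerror="abort"
         classpath="lib/sqljdbc.jar"/>
</target>

The runlog entries the answer mentions would simply be extra INSERT statements appended after each file during the concatenation.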

Adopting Bamboo or TeamCity as native Windows C++ build automation/CI server?

At the moment, we are running our automated (not CI as such) builds via FinalBuilder through a very simple homegrown Apache interface that just launches the FB scripts on our server. (I like FinalBuilder, and will keep it, but its CI server, FinalBuilder Server, just doesn't cut it IMHO - especially since it doesn't support any "agent" concept at the moment to distribute builds across machines.)
We are doing native C++ development on Windows with a bit of .NET mixed in where it's needed and makes sense.
Our current FinalBuilder scripts do everything quite well, from creating nightly builds to full releases (build / automated translation / build / unit test / create setup / put created artifacts on a network share / ...), but our web interface, queuing abilities, user traceability and reporting are pretty limited.
I have looked around and it seems that TeamCity and Bamboo tick similar boxes, but most descriptions I can find cover only Java and/or .NET simple builds.
So my specific question is, given
several (20-30) complicated FinalBuilder Scripts that work to my satisfaction and that I will have to integrate into ("call" from) the new automation/"CI" server
Native Windows C++ and .NET projects
The actual build (= compiler invocation(s)) is done via a few Visual Studio solution files at the moment
Currently one build server machine, wishing to scale to 2-3 atm.
Using JIRA as issue tracker
using AccuRev as SCM
which tool is better suited, and why: TeamCity (currently 6.5) or Bamboo (currently 3.1).
(Note that I also hope to get some highly subjective answers on the TeamCity and Bamboo forums.)
On the TeamCity side: it integrates with JIRA, has an AccuRev plugin, and has good support for Visual Studio/C++ projects. It can also run arbitrary scripts.
You can trigger a build and obtain build results via an HTTP-based API. In the UI, you can see which changes have been built and in which build configurations. You can easily integrate any custom HTML reports into the TeamCity UI (no coding) and publish artifacts.
You should probably try both solutions and see which one is more suitable for you (with TeamCity you can use a fully functional server for free; the only limits are the number of build agents and the number of build configurations).
Disclaimer: I'm a TeamCity developer
I found Bamboo more credible than TeamCity. Here are my reasons:
The JIRA plugins for VS or Eclipse are Bamboo plug-ins too. :) No extra add-ins needed.
Better support for JIRA integration.
Nice user interface, like the one you are used to from JIRA.
Better integration with other Atlassian tools, such as FishEye.
Cheaper. A $10 license will suffice for your company.
More add-ons for Bamboo than for TeamCity, lots of plug-ins.
For completeness' sake: I ended up using Jenkins + FinalBuilder. :-)
I worked in a similar environment using FinalBuilder for build automation, AccuRev for source control and native Windows projects.
I ended up selecting Electric Commander as the best CI solution for the job. It is possible to reuse parts of the FinalBuilder scripts and call them from Electric Commander, but simply calling the FB script as one build step would mean missing out on some of Electric Commander's key advantages: real-time log file processing, the ability to parallelize right down to the individual step level, and data collection and reporting.
Electric Commander has an API that exposes all product functionality which can be used in combination with AccuRev triggers to achieve a very flexible solution.
Disclaimer - I liked Electric Commander so much I joined the company and am currently employed by Electric Cloud.
You can try Electric Commander by going to www.electric-cloud.com and clicking on "Try It!"

Continuous Integration: PowerShell vs. CI Server (CC.NET or Hudson)

So, a friend and I have been discussing continuous integration and bat/powershell scripts versus CI servers like CruiseControl.Net or Hudson.
The following PowerShell pseudo-script updates from SVN, builds using MSBuild, deploys/copies out, updates a build/revision number in the app, and emails on failed builds. The next step would be to add calls to MSTest and email the results when not successful.
# Update the working copy and build, capturing MSBuild output in a log file
svn update
msbuild > build_deploy_development_out_msbuild
# Write the current SVN revision plus a timestamp to a small HTML file
([xml](svn info --xml)).info.entry.commit.revision + "`r`n" + (Get-Date) > build_revision_number.html
# Look for a "Build Failed" line in the MSBuild log
$linenumber = Select-String build_deploy_development_out_msbuild -Pattern "Build Failed" | Select-Object LineNumber
# Send a notification email if a failure was found
$smtp = New-Object System.Net.Mail.SmtpClient -ArgumentList 'localhost'
if ($linenumber) { $smtp.Send("From:Email", "To:Email", "build failed", "build failed... some one must die!") }
This has led me to question the value of CI servers, when you can write your own shell scripts to accomplish the same goal using the specific tools of the project (build tool, source control, unit testing), i.e. MSBuild, NAnt, svn, git, NUnit, MSTest, etc.
I have not experienced the maintenance cost as of yet. I wanted to get others' opinions on rolling your own shell script versus CruiseControl.NET or Hudson. Please note, I do not have experience with CI servers, thus the question, so please don't take this as being critical of CI servers; I simply don't know the best answer, and thought I would ask the community.
Best wishes!
Pete Gordon
CI Servers give you several advantages:
Web access, usually with ability to integrate with existing authentication mechanisms (see Hudson's ActiveDirectory/LDAP support)
Tons of existing support for unit testing, zip archive creation, etc.
Hudson (and others) supports slave build nodes, for doing distributed CI tasks.
No need to maintain it yourself.
Some of these may not be things you need now but are you sure they aren't things you might need in the future?
I installed Hudson a couple of weeks ago to replace the current CruiseControl server. The greatest advantage I see in Hudson is that pretty much anybody can use it, while launching a parameterized build with CruiseControl (or a batch file) is still scary for a lot of people.
I usually write all my build scripts with Ant (because it's portable), insert a couple of parameters, and invoke them from Hudson.
Hudson gives your scripts great visibility (everything can be seen on the front page) and they are self-explanatory. Usually with a bash script, you need to write a readme (that nobody reads) and remember where the scripts are located.
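As an illustration of the "couple of parameters" part: an Ant script can declare defaults with <property> and Hudson simply overrides them with -D arguments on the command line (properties set on the command line win). The property names and paths below are made up.

<!-- Defaults for local runs; Hudson can override them, e.g. ant -Denv=prod deploy -->
<property name="env" value="dev"/>
<property name="deploy.dir" value="/tmp/deploy"/>

<target name="deploy">
    <echo message="Deploying ${env} build to ${deploy.dir}"/>
    <copy todir="${deploy.dir}">
        <fileset dir="dist"/>
    </copy>
</target>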
... or have both. Ayende (the creator of Rhino Mocks) did that recently: he wrote a CI server using PowerShell. Perhaps this provides new insights for your discussion.
For a year I tried to maintain custom-written Python scripts to do basic CI stuff: receiving notifications of commits via e-mail, checking out and building, sending back blames and congrats. Then, when it came to publishing this for use by everyone else on my team, it turned out rather unusable without monitoring, web access, etc.
Then I dived into Buildbot and found it truly beautiful. I set up basically the same process in a couple of days. The build script is a true Python object that is customizable on the master, from where it gets transferred to the slaves and executed there. It is built upon the Twisted framework, so there is lots of stuff out of the box ;)
The web UI is minimalistic, though sufficient.
Well, this is unpublished too, though I'm close to it this time %)
Below are my thoughts on a CI server versus PowerShell scripts:
Highly configurable. Plugins are available for all different kinds of version control, notifications and testing.
Logs. These are maintained wonderfully; failed and successful build logs are at your fingertips.
Scheduling. You can set up all kinds of scheduling, including triggering based on another successful build.
Security. You can give different groups the right to execute, view only, or see only certain projects.
Visibility. You can use a web dashboard or CCTray for different audiences.
Scalability. Easy to scale when needed.
Bottom line: if you have to maintain lots of builds for different environments and team projects, then a CI server is the way to go. Other than that, a simple PowerShell script is enough for small projects. Once the project grows, you can just hook the existing PowerShell script up to a CI server.
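On that last point: if the CI server you end up with drives its builds through a build file rather than calling PowerShell directly, a thin wrapper is usually enough. Here is a sketch of such a wrapper as an Ant target; the script name and arguments are assumptions:

<!-- Hands the real work to the existing PowerShell script and fails the build on a non-zero exit code -->
<target name="build">
    <exec executable="powershell.exe" failonerror="true">
        <arg value="-NoProfile"/>
        <arg value="-ExecutionPolicy"/>
        <arg value="Bypass"/>
        <arg value="-File"/>
        <arg value="build_deploy.ps1"/>
    </exec>
</target>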