I have some .cpp and header files on GitHub that I am currently building through two Jenkins jobs.
The first Jenkins job builds the tests and the second runs them. I could have combined both into one job, but I can't publish Google Test results with the JUnit plugin because of this issue: Append multiple xml test reports of google test. Separating the two jobs seemed clearer.
My solution was to make the test job share the build job's workspace and to trigger it once the build is stable.
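To make that concrete, the test job is roughly equivalent to the pipeline sketch below (illustration only; the node label, workspace path, binary name, and report locations are placeholders for my actual setup):

pipeline {
    agent {
        node {
            label 'build-node'
            // Reuse the build job's workspace so the freshly built test
            // binaries are already present.
            customWorkspace '/var/lib/jenkins/workspace/build-gtest'
        }
    }
    stages {
        stage('Run Google Tests') {
            steps {
                // Each test binary writes its own XML report.
                sh './build/unit_tests --gtest_output=xml:reports/unit_tests.xml'
            }
        }
    }
    post {
        always {
            junit 'reports/*.xml'   // publish whatever reports were produced
        }
    }
}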
I am new to Jenkins and don't know whether this is a typical workflow. Is this reasonable, or is there a better way to do it?
Related
I set up a build in VSTS that runs the Ant Migration Tool to deploy to a Salesforce org. I would like to somehow publish the unit test results to VSTS so I can leverage the VSTS test results overview. I can see the test results in the log output of the Ant task in the build job, but the VSTS overview seems more convenient. Is there a way to do this?
If you can get the test results file, then you can use the Test: Publish Test Results task to publish the results to VSTS.
Besides, you can check if this article helps: Versioning and Deploying Salesforce Metadata using TFS/VSTS
I have many Jenkins jobs that do things like this:

execute myProgram.exe to convert input.txt to output.txt
if (the conversion is successful) {
    trigger another Jenkins job
} else {
    send an e-mail to notify someone that the build failed
}

All of them are Freestyle projects.
I want to write unit test code to test both the success and failure cases of my Jenkins jobs.
If the build succeeds, the test code should check that output.txt's content is correct, without actually triggering the other Jenkins job.
If the build fails, the test code should check that the e-mail was actually sent to the recipient.
Is there any test framework for doing things like this?
It seems like I could find a solution here, but I couldn't find examples in that tutorial showing how to write unit tests that exercise existing Jenkins jobs.
Or should I use another tool (not Jenkins) for this kind of job?
How to test Jenkins Pipelines is currently an ongoing issue; see JENKINS-33925.
Though in that thread, you'll see that a couple of people have been working on solutions, e.g. https://github.com/macg33zr/pipelineUnit
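For example, a test written against the JenkinsPipelineUnit library looks roughly like this, assuming you move the job logic into a Jenkinsfile (a sketch only: the stubbed steps and the Jenkinsfile path are placeholders, and the exact API varies by library version):

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class ConversionJobTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Stub the pipeline steps so the script can run outside Jenkins.
        helper.registerAllowedMethod('bat', [String.class], { cmd -> 0 })
        helper.registerAllowedMethod('build', [Map.class], { args -> null })
        helper.registerAllowedMethod('mail', [Map.class], { args -> null })
    }

    @Test
    void triggers_downstream_job_on_successful_conversion() {
        runScript('Jenkinsfile')   // or loadScript(...), depending on the library version
        printCallStack()           // dumps the recorded step calls for inspection
        assertJobStatusSuccess()
    }
}

The helper records every stubbed call, so inside such a test you can also assert whether the downstream build step was invoked, or what mail was asked to send.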
You can use Jenkins Job Builder and describe your jobs in YAML files.
Make your configuration changes in a branch and continuously deploy them to a test Jenkins server.
That is pretty simple with different config files in Jenkins Job Builder.
Run your jobs on the test Jenkins master and merge to master after running them.
Jenkins Job Builder Docs
As a preface, our setup is somewhat unusual due to "legacy" reasons. It is fully possible that I am going against the grain with this. I would love to get an expert opinion on whether it is possible to get my current setup working or a suggestion on a different approach.
Environment
Java application with over 10K unit tests in JUnit. For legacy reasons the entire unit test run takes a long time (the ultimate goal is to fix the root of the problem, but this will not happen soon).
The application is broken up into multiple modules, with each module having its own unit tests. Executing tests module by module takes a reasonable amount of time, so if someone commits to a module's subtree and only that module's tests are executed, they get a result quickly.
Current Jenkins Setup
JUnit job
This is a single parameterized job that can run the tests for any module. The job takes as parameters the regexes selecting which tests to run and a parameter indicating which module it is running, for notification purposes. It checks out the whole repo tree and then runs the tests based on those parameters.
After the run completes, this job analyzes the JUnit results, publishes the report, and sends out email notifications.
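Conceptually, the job behaves like the following pipeline sketch (for illustration only; the parameter names, the Maven test invocation, the report path, and the email address are stand-ins for our real configuration):

pipeline {
    agent any
    parameters {
        string(name: 'TEST_PATTERN', defaultValue: 'ModuleA*Test',
               description: 'Which tests to run')
        string(name: 'MODULE_NAME', defaultValue: 'moduleA',
               description: 'Used only in notifications')
    }
    stages {
        stage('Run selected tests') {
            steps {
                checkout scm   // check out the whole repo tree
                // Placeholder for however the test subset is actually launched.
                sh "mvn test -Dtest='${params.TEST_PATTERN}'"
            }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/*.xml'
        }
        unsuccessful {
            mail to: 'team@example.com',
                 subject: "Unit tests failed for ${params.MODULE_NAME}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}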
Repo watchers
One repo watcher for each module. The watcher checks out only the repo subtree that it wants to monitor. When a change is detected, it triggers the JUnit job, telling it which tests to run and which component the run is for.
Question
In general the setup works well and does exactly what I need; however, it breaks a few of the nice, expected features of Jenkins and the JUnit plugin.
Because the same job keeps executing different subsets of the unit tests, the job-to-job comparison of test results provides no value. Without manually scanning across runs it is not possible to tell what changed in terms of new failures or newly fixed tests.
Something very similar happens to change history:
Each repo watcher runs on its own schedule. Suppose we have a change to module A and a change to module B, very close in time to each other. If watcher A triggers first, the JUnit job triggered by watcher A will "claim" both changes. When the JUnit job triggered by watcher B runs, it will not detect any new changes in the repo. This plays havoc with email notifications, as the second JUnit job does not know who broke the build.
At the end of the day, I believe I am looking for a way to establish a dependency relationship between non-sequential runs of the same Jenkins job, or alternatively a totally different approach.
Thank you!
Okay, so let's see if I get this right in basic language:
You want to track which failures are caused by which changeset?
In which case I would suggest the following (again in simple terms; you need to adapt this to your current setup):
Set up a job that manages results.
This job should be parameterized to take the name or change number and publish ALL the results.
This job should be triggered after each test run completes, to consolidate all the results.
Now, if a new test or failure is introduced, the same job can track it and email the person who caused the failure.
Jenkins is very powerful and very generic, so pretty much every scenario can be handled, if not with plugins then with Groovy. So I would suggest taking a pen, mapping the process out on a board, and building it as a process rather than just a single job.
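As a rough starting point, the consolidation job could look something like this sketch (job names, report paths, and the email address are placeholders, and it assumes each test run archives its JUnit XML as artifacts):

pipeline {
    agent any
    parameters {
        string(name: 'CHANGE_ID', defaultValue: '',
               description: 'Change / commit number this report covers')
    }
    stages {
        stage('Collect results') {
            steps {
                // Copy the JUnit XML archived by the per-module test job(s).
                copyArtifacts projectName: 'junit-module-tests',
                              selector: lastSuccessful(),
                              filter: '**/surefire-reports/*.xml',
                              optional: true
            }
        }
    }
    post {
        always {
            junit '**/surefire-reports/*.xml'   // one consolidated report and trend
        }
        unsuccessful {
            mail to: 'team@example.com',
                 subject: "New test failures for change ${params.CHANGE_ID}",
                 body: "Consolidated results: ${env.BUILD_URL}"
        }
    }
}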
I'm currently running a build job on Jenkins that generates a bunch of CUnit test executables. What I'd like to do is take those binaries and run them automatically on a bunch of other machines upon successful completion of the build.
For example: Run the build -> success -> trigger copy of EXEs to other machines -> run said EXEs -> gather output.
My question is whether this is possible to automate with Jenkins. I'm not entirely sure which direction I should be going in. My best guess is to configure a bunch of other jobs that trigger on successful completion of the build job; these jobs would retrieve the files in question from somewhere, run them, and report back.
Any input would be greatly appreciated.
In the post-build actions of your build job, mark the generated executables as artifacts. You can then use the Copy Artifact plugin to distribute the test executables to another test job (or more than one) that runs on a Jenkins build slave on the test machine(s). As you've mentioned, you can configure a successful build to trigger the test jobs. Based on other answers, it looks like CUnit generates an XML report of the test output that Jenkins can parse, so in the test job's post-build actions, configure the location of the test results.
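A rough sketch of the test-job side, written as a scripted pipeline (the job name, node label, file paths, and report pattern are placeholders, and it assumes the build job archived the executables with archiveArtifacts):

node('test-machine') {
    // Pull the executables archived by the latest successful build job run.
    copyArtifacts projectName: 'cunit-build',
                  selector: lastSuccessful(),
                  filter: 'build/tests/*.exe'

    // Run every test binary; each is assumed to write a CUnit XML report
    // (e.g. via CU_set_output_filename + CU_automated_run_tests).
    bat 'for %%f in (build\\tests\\*.exe) do "%%f"'

    // Publish whatever XML reports the binaries produced; swap in the xUnit
    // plugin if the raw CUnit XML format needs converting first.
    junit '**/*Results.xml'
}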
From a management perspective, it is easier if there is one test job because you don't have to figure out how to partition the executables and you can read the results in one report. But depending on your use case, it might make more sense to have separate test jobs if the tests require different environments or if it makes sense to partition the test results.
I'm trying to understand how to configure TFS Team Build to provide a CI solution for my project. I have a fairly common setup in which there are several categories of unit tests. For simplicity, let's say there are two categories:
Exchange2003
Exchange2007
Each test category needs particular software to be installed on the Build Agent so I would create two Build Agents, BuildAgentEx2003 and BuildAgentEx2007, with the obvious configurations.
Now when I kick off a CI build I want a few things to happen:
Exchange2003 tests to run on BuildAgentEx2003.
Exchange2007 tests to run on BuildAgentEx2007.
All test categories get run and their results are aggregated.
Is that supported, and if so, how would I configure it?
P.S. In reality, of course, the situation is much more complicated. I have a large matrix of test categories and build agents. Each Build Agent would typically be capable of running many different categories of unit tests, and each category of tests can be run by one or more Build Agents. The only requirement is that each category of tests be run once for each CI build.
Set up one CI build for building the code base. Set up one manually triggered build for every configuration you need.
After the CI build is successful, queue a new build for every configuration using TFSBuild.exe.
Pass the original build number to the queued builds as a parameter.
As the last step in the manual builds, publish the test results back to the CI build using MSTest.exe.
Team Build 2010 should support this scenario out of the box, although it will take some work to set up build agents and assign tags to them. But once you do that, you should be able to use distributed builds to build and run tests on particular build agents.
It would be much more complicated with Team Build 2008.