I have many Jenkins jobs that do things like:
execute myProgram.exe to convert input.txt to output.txt
if (the conversion is successful) {
trigger another jenkins job
} else {
send an e-mail to notify someone that the build failed
}
All of them are Freestyle projects.
I want to write unit tests covering both the success and failure cases of my Jenkins jobs.
If the build succeeds, the test code should check that output.txt's content is correct, while avoiding triggering the other Jenkins job.
If the build fails, the test code should check that the e-mail was actually sent to the recipient.
Is there any test framework for doing things like this?
It seems like I can find a solution here, but I couldn't find examples in that tutorial showing how to write unit tests that use existing Jenkins jobs.
Or should I use another tool (not Jenkins) for this kind of job?
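For the success case, the conversion itself can be covered by an ordinary unit test outside Jenkins, assuming the converter can be invoked locally. A minimal sketch in Python; myProgram.exe, input.txt and output.txt are the names from the question, the expected content "HELLO" is a placeholder, and the command is parameterised so a stub can stand in during testing:

```python
import subprocess
from pathlib import Path

# Names taken from the question; swap in the real command line.
DEFAULT_CMD = ["myProgram.exe", "input.txt", "output.txt"]

def run_conversion(workdir: Path, cmd=None) -> str:
    """Run the converter in workdir and return output.txt's content.

    check=True makes a non-zero exit status raise CalledProcessError,
    which corresponds to the job's failure branch.
    """
    subprocess.run(cmd or DEFAULT_CMD, cwd=workdir, check=True)
    return (workdir / "output.txt").read_text()
```

A test for the success case writes input.txt, calls run_conversion, and asserts on the returned content; the failure case asserts that CalledProcessError is raised and then checks the notification path separately (e.g. against a local SMTP stub).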
How to test Jenkins Pipelines is an ongoing issue; see JENKINS-33925.
Though in that thread, you'll see that a couple of people have been working on solutions, e.g. https://github.com/macg33zr/pipelineUnit
You can use Jenkins Job Builder and describe your jobs in YAML files.
Make your configuration changes in a branch and continuously deploy them to a test Jenkins server; this is straightforward with different config files in Jenkins Job Builder.
Run your jobs on the test Jenkins master, and merge to master once they pass.
Jenkins Job Builder Docs
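A rough sketch of what such a YAML description might look like for the job in the question (the downstream job name and recipient address are made up):

```yaml
# convert.yaml -- hypothetical Jenkins Job Builder description
- job:
    name: convert-input
    project-type: freestyle
    builders:
      - shell: 'myProgram.exe input.txt output.txt'
    publishers:
      - trigger:
          project: downstream-job
          threshold: SUCCESS
      - email:
          recipients: someone@example.com
```

Because the jobs live in version control as text, the test-server workflow above amounts to deploying the same YAML with a different config file.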
Related
I have some cpp/header files in Github that I am currently building through 2 Jenkins jobs.
The first Jenkins job builds the tests and the second runs them. I could have combined both into one job, but I can't publish Google Test results with JUnit due to this issue (Append multiple xml test reports of google test), so separating the jobs keeps things clearer.
My solution was making the test job share the same workspace as the build job and triggering it once the build is stable.
I am new to Jenkins and don't know if this is a typical workflow. Is this reasonable, or is there a better way to do it?
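As an aside, the XML-appending limitation can also be worked around before publishing by merging the per-suite reports into a single file the JUnit publisher accepts. A minimal sketch, assuming the reports are ordinary gtest JUnit XML files (the function and file names are made up):

```python
import xml.etree.ElementTree as ET

def merge_junit_reports(report_paths, merged_path):
    """Collect <testsuite> elements from several JUnit XML files
    into one <testsuites> root and write it to merged_path."""
    root = ET.Element("testsuites")
    for path in report_paths:
        src = ET.parse(path).getroot()
        # gtest writes either a bare <testsuite> or a <testsuites> wrapper.
        suites = [src] if src.tag == "testsuite" else list(src)
        for suite in suites:
            root.append(suite)
    ET.ElementTree(root).write(merged_path, encoding="utf-8",
                               xml_declaration=True)
```

Run as a final build step, this would let one job publish everything, though sharing the workspace between two chained jobs as described also works.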
I've seen many discussions on-line about Sonar web-hooks to send scan results to Jenkins, but as a CodePipeline acolyte, I could use some basic help with the steps to supply Sonar scan results (e.g., quality-gate pass/fail status) to the pipeline.
Is the Sonar web-hook the right way to go, or is it possible to use Sonar's API to fetch the status of a scan for a given code-project?
Our code is in Bitbucket. I'm working with the AWS admin, who will create the CodePipeline that fires when code is pushed to the repo. sonar-scanner will run, and then we'd like the pipeline to stop if the code does not pass the Quality Gate.
If I were to use a Sonar web-hook, I imagine the value for host would be, what, the AWS instance running CodeBuild?
Any pointers, references, examples welcome.
I created a PowerShell script for use with Azure DevOps that could probably be migrated to a shell script run in the CodeBuild activity:
https://github.com/michaelcostabr/SonarQubeBuildBreaker
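On the API side, SonarQube exposes a project's quality-gate status via its web API at api/qualitygates/project_status, so a build-breaker step can fetch that JSON and exit non-zero on anything but OK. A sketch under those assumptions (authentication handling and the URL layout of your server are not shown):

```python
import json
import sys
import urllib.request

def quality_gate_passed(payload: dict) -> bool:
    """True when an api/qualitygates/project_status response reports OK."""
    return payload["projectStatus"]["status"] == "OK"

def check_project(base_url: str, project_key: str) -> None:
    """Fetch the gate status; exit non-zero on failure so the
    CodeBuild step (and thus the pipeline stage) stops."""
    url = f"{base_url}/api/qualitygates/project_status?projectKey={project_key}"
    with urllib.request.urlopen(url) as resp:  # add an auth header for private servers
        payload = json.load(resp)
    if not quality_gate_passed(payload):
        sys.exit(f"Quality gate failed: {payload['projectStatus']['status']}")
```

Note that the scan is processed asynchronously after sonar-scanner finishes, so a real build breaker either polls until analysis completes or uses the web-hook instead.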
As a preface, our setup is somewhat unusual due to "legacy" reasons. It is fully possible that I am going against the grain with this. I would love to get an expert opinion on whether it is possible to get my current setup working or a suggestion on a different approach.
Environment
Java application with over 10K unit tests in JUnit. For legacy reasons the entire unit test run takes a long time (the ultimate goal is to fix the root of the problem, but this will not happen soon).
The application is broken up into multiple modules, each with its own unit tests. Executing tests module by module takes a reasonable amount of time, so if someone commits code to a module's repo subtree and only that module's tests are executed, they get a result quickly.
Current Jenkins Setup
JUnit job
This is the single parameterized job that can run tests for any module. The job takes as parameters the regexes selecting which tests to run, plus a parameter indicating which module it is running, for notification purposes. It checks out the whole repo tree and then runs the tests based on the parameters.
After the completion of the run this job does the analysis of JUnits, publishes the report and sends out email notifications.
Repo watchers
One repo watcher for each module. The watcher checks out only the repo subtree that it wants to monitor. When a change is detected it triggers the JUnit job telling it which tests to run and for what component this is.
Question
In general the setup works well and does exactly what I need; however, it breaks a few of the nice and expected features of Jenkins and the JUnit plugin.
Because the same job keeps executing different subsets of unit tests, job-to-job comparison of test results provides no value. Without manually scanning across jobs it is not possible to tell what changed in terms of new failures or new fixes.
Something very similar happens to change history:
Each repo watcher runs on its own schedule. Suppose we have a change to module A and a change to module B, very close in time. If watcher A triggers first, the JUnit job triggered by watcher A will "claim" both changes. When the JUnit job triggered by watcher B runs, it will not detect any new changes in the repo. This plays havoc with email notifications, as the second JUnit job does not know who broke the build.
At the end of the day, I believe I am looking for a way to establish a dependency relationship between non-sequential runs of the same Jenkins job, or alternatively a totally different approach.
Thank you!
Okay, so let's see if I've got this right in basic language:
You want to track which failures are caused by which changeset?
In that case I would suggest the following. Again, in simple terms; you will need to adapt this to your current setup.
Set up a job that manages results.
This job should be parameterized to take the name or change number, and should publish ALL the results.
It should be triggered after each test run completes, to consolidate all results.
Now, if a new test or failure is introduced, the same job can track it and email the person who caused the failure.
Jenkins is very powerful and very generic, so pretty much every scenario can be handled, if not with plugins then with Groovy. I would suggest taking a pen, mapping it out on a board, and designing a process out of it rather than just a single job.
I'd like to execute a script before I execute any project/build on a Jenkins server. I've checked out the BuildWrapper script and a couple of others.
These all seem to be based on a specific job, but I'd like to get this sorted to work across any job/build on this server.
You could add an upstream job that all jobs depend on.
Specifically, in the build steps of your job, you can choose "Trigger/Call builds on other projects", add ParentJob, and select "Block until the triggered projects finish their builds" before invoking your job's build steps.
I have a server running a proprietary language, on which I'm able to run "unit tests" in that language. I cannot install a Hudson slave on this machine, but would like these test results to appear in a Hudson job (to have at least some monitoring of code quality for this server code).
I'm currently trying to use web services to fetch the results and store them in the Hudson workspace, but I fear it is not the right solution.
What solutions can you advise?
I finally went down the web services path, although it was not easy.
There were a few steps along the way:
I created a Maven mojo in Groovy (see GMaven for more info) which, using GroovyWS, called a web service that creates a JUnit report from the test results.
Armed with this mojo, I created a Maven project that calls the web service and stores the junit.xml file in an output folder.
Finally, I created a Maven job in Hudson for this project and ran it regularly. Thanks to the JUnit reporting integration in Maven builds, my test results are visible as a graph in Hudson, and users can drill down to failing tests.
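The core of that transformation, turning raw results into JUnit XML that Hudson/Jenkins can chart, is quite small. A sketch in Python rather than a Groovy mojo; the (name, passed, message) input format here is an assumption standing in for whatever the web service returns:

```python
import xml.etree.ElementTree as ET

def to_junit_xml(suite_name, results):
    """Build a JUnit-style <testsuite> document from
    (name, passed, message) tuples."""
    failures = sum(1 for _, passed, _ in results if not passed)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, passed, message in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if not passed:
            ET.SubElement(case, "failure", message=message)
    return ET.tostring(suite, encoding="unicode")
```

Writing the returned string to junit.xml in the job's output folder is all the JUnit publisher needs.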
Not sure if these are possible but...
Maybe one option is, when the build job finishes, to execute a second build target or script that scps the test results from the remote server to the local build server so they appear in Hudson.
Or, if the platform allows, map a directory on the remote machine to the local file system using something like sshfs.
Karl
Yup, you can scp (or whatever) the results, in JUnit XML format, to the current workspace dir using a script task. Then add a "Publish JUnit test result report" post-build task and point it at the copied-in files.
Obviously, if the results are not in a JUnit-compatible format, you'll have to convert them first.
Sounds like you're on the right path though