Get current config in Jenkins DSL - jenkins-job-dsl

I want to access the config of the currently executing seed job from within my DSL script.
For example, I want to use the same SCM settings as my seed job for the jobs that I am creating.
How do I do this?

There is no built-in DSL way to do that. You need to have a look at the Jenkins API. To obtain the SCM settings of the currently executing job, do this:
// run from within the executing seed job's DSL script
hudson.model.Executor executor = hudson.model.Executor.currentExecutor()
hudson.model.FreeStyleBuild build = executor.currentExecutable
hudson.model.FreeStyleProject project = build.project
hudson.scm.SCM scm = project.scm

Related

Is there a way to specify ignoreExisting on pipelineJob?

Is there a way to specify ignoreExisting on pipelineJob? I don't see it listed in plugin/job-dsl/api-viewer/index.html but maybe I'm missing a way to do it.
In my setup all jobs are defined using Job DSL through the Configuration as Code module. All jobs defined by Job DSL are used to load pipelines, where all the configuration for the jobs lives. Since all of the job configuration is stored in the pipeline, I'd like to be able to define each job and have Job DSL not modify it again unless the job is removed.
The current behavior is that Job DSL overwrites any changes made to the job by the pipeline, which is not what I want. Is there any way around this? I thought ignoreExisting would do the trick, but it doesn't seem to be available on pipelineJob.

GCP Dataflow Job Deployment

I am trying to automate CI/CD of a Classic Template.
I created and staged the template on GCS following the documentation.
On code changes (bug fixes etc.), I intend to drain the existing job and create a new job with the same name.
To drain the existing job I need the JOB_ID, but I only have the JOB_NAME that I used when creating the job.
The only way I see is to use the list command to fetch the active jobs, then process the output to extract the job id for use in the drain command. That seems quite roundabout. Isn't there a way to drain a job by JOB_NAME, or at least get the JOB_ID from the JOB_NAME?
When you use the gcloud dataflow jobs run command to create the job, the response should return the JOB_ID, e.g. for a batch job:
id: 2016-10-11_17_10_59-1234530157620696789
projectId: YOUR_PROJECT_ID
type: JOB_TYPE_BATCH
That, together with gcloud dataflow jobs list as you mention, is the straightforward way to associate a JOB_NAME with a JOB_ID in automation. A way to achieve this with a Python script is described in this other post in the community.
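If you do go through gcloud dataflow jobs list, the JSON output can be filtered by name in a few lines. This is a minimal sketch, assuming output in the shape of gcloud dataflow jobs list --status=active --format=json; the sample data and the job_id_for_name helper are illustrative, not part of any API:

```python
import json

# Sample data mimicking the shape of:
#   gcloud dataflow jobs list --status=active --format=json
# (illustrative only; real output contains more fields)
sample_output = json.dumps([
    {"id": "2016-10-11_17_10_59-1234530157620696789",
     "name": "my-classic-template-job",
     "state": "JOB_STATE_RUNNING"},
    {"id": "2016-10-12_09_00_00-9876543210123456789",
     "name": "another-job",
     "state": "JOB_STATE_RUNNING"},
])

def job_id_for_name(list_json, job_name):
    """Return the id of the first listed job matching job_name, or None."""
    for job in json.loads(list_json):
        if job.get("name") == job_name:
            return job["id"]
    return None

print(job_id_for_name(sample_output, "my-classic-template-job"))
# → 2016-10-11_17_10_59-1234530157620696789
```

The returned id can then be fed straight into gcloud dataflow jobs drain.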
GCP provides a REST API to update a Dataflow job, so there is no need to explicitly drain the existing job and create a new one.
You can also do it via Python code. Refer to my GIST for the Python code.

Updating env variables and parameters via a task in a job in gocd

In GOCD we are able to define env variables and parameters for the entire pipeline, stage and individual jobs. Is it possible to update those already defined env variables and parameters via a task in a job when running the pipeline?
Would you like to update them permanently, or just for a specific execution?
For the former, I suppose you have to work with the config XML, using the Pipeline Config REST API.
For the latter, you can trigger the pipeline with the given environment variables set.
In either case, I would recommend the yagocd library, which lets you do both tasks with ease.
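For the per-execution case, GoCD's scheduling endpoint (POST /go/api/pipelines/&lt;name&gt;/schedule) accepts environment-variable overrides in its JSON body. A minimal sketch of building that payload in Python; the schedule_payload helper and the DEPLOY_ENV variable are hypothetical, and the exact Accept header to send depends on your GoCD API version:

```python
import json

def schedule_payload(variables):
    """Build the JSON body for POST /go/api/pipelines/<name>/schedule.
    The overrides apply only to the run triggered by that request."""
    return {
        "environment_variables": [
            {"name": name, "value": value} for name, value in variables.items()
        ]
    }

payload = schedule_payload({"DEPLOY_ENV": "staging"})
print(json.dumps(payload, indent=2))
```

The same shape works for overriding variables defined at pipeline, stage, or job level, since GoCD resolves the trigger-time values with higher precedence for that run.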

How to set build description in Jenkins JobDSL?

A Build Flow Plugin script can call build.setDescription() to set the build's description. Can something similar be done in a JobDSL script? Or would the script have to go through injecting an environment variable?
The Build Flow Plugin and the Job DSL Plugin are not necessarily comparable; they address different use cases. The Job DSL describes the static configuration of jobs, whereas the Build Flow DSL describes dynamic flow control between jobs.
That said, the Job DSL can configure the Description Setter Plugin as a post-build action:
job {
    ...
    publishers {
        ...
        buildDescription('', '${BRANCH}')
    }
}
See the Job DSL wiki for details: https://github.com/jenkinsci/job-dsl-plugin/wiki/Job-reference#build-description-setter
To set the description of the seed job (the job which runs the Job DSL scripts), you can print something to the console log using println and then use the Description Setter Plugin to parse the log and set the description. Or you can use the Jenkins API from the DSL script:
def build = hudson.model.Executor.currentExecutor().currentExecutable
build.description = 'whatever'

Automated Deploys With Embedded Jetty

I've taken a fancy to including Jetty with my applications instead of deploying to a container. But there's one big issue I've run into: how can I automate the deploy? When the container ran standalone, it was enough to copy the war file over the old one and it would get picked up. With Jetty as a dependency, I run it at the command line and Ctrl-C it when done. I can't think of an easy way to automate this. Is there a better solution than creating scripts to manage the job, stop the container and restart, keep track of the job id, etc.?
Look into setting up a DeploymentManager and AppProvider that suits your needs.
// === jetty deploy ===
DeploymentManager deployer = new DeploymentManager();
deployer.setContexts(contexts);
deployer.setContextAttribute(
        "org.eclipse.jetty.server.webapp.ContainerIncludeJarPattern",
        ".*/servlet-api-[^/]*\\.jar$");

WebAppProvider webapp_provider = new WebAppProvider();
webapp_provider.setMonitoredDirName(jetty_base + "/webapps");
webapp_provider.setDefaultsDescriptor(jetty_home + "/etc/webdefault.xml");
webapp_provider.setScanInterval(1);
webapp_provider.setExtractWars(true);
deployer.addAppProvider(webapp_provider);

server.addBean(deployer);
Full example can be found in LikeJettyXml.java