I need to create a number of different modes for running Jetty in my Gradle build.
They differ in system properties and classpath.
How can I do this using the Gradle Jetty plugin?
You can create multiple tasks of type JettyRun. To dynamically create the different task instances you can use Groovy syntactic sugar:
4.times { // this can be replaced by iterating over your different environment settings
    task "jettyRun${it}"(type: JettyRun) {
        // do your custom configuration here
    }
}
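Since the modes differ in system properties and classpath, a slightly fuller sketch could iterate over a map of per-environment settings. The map contents and property names below are illustrative, and because the container started by JettyRun runs inside the Gradle JVM, the system properties are set on that process just before the container starts:
// Illustrative per-environment settings; replace with your real modes.
def environments = [
    dev : [port: 8080, props: ['app.mode': 'dev']],
    test: [port: 8081, props: ['app.mode': 'test']]
]
environments.each { name, settings ->
    task "jettyRun${name.capitalize()}"(type: JettyRun) {
        httpPort = settings.port
        // the task's classpath property can also be adjusted here if the
        // classpath needs to differ between modes
        doFirst {
            // JettyRun runs in the Gradle JVM, so set the properties on that process
            settings.props.each { k, v -> System.setProperty(k, v) }
        }
    }
}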
bb,
Rene
I'm working on an application developed with Spring 5 (not Spring Boot!) that runs on Jetty. This application has a module that uses the liquibase-maven-plugin.
We generate an image from a Dockerfile (base image jetty:9-jre8), where we add the application (WAR file) to the Jetty application directory.
In some specific environments where I deploy the application, I want to be able to disable that Liquibase execution.
Is it possible to do so?
I've seen in the Spring Boot documentation that it's possible to do so by setting the property spring.liquibase.enabled (or liquibase.enabled on Spring 4) to false, but that doesn't seem to work:
I've tried defining it in the properties file, as an environment property, and as a Java option (-Dspring.liquibase.enabled=false).
The behavior is the same whether I deploy the container or run the Maven command locally: mvn jetty:run
Do you have any ideas or hints how to do this?
Thank you in advance
Well, I just discovered that it's possible to disable the execution of Liquibase by adding the Java option
-Dliquibase.shouldRun=false
For more details see here
I will keep this question anyway, in case someone has the same problem I did.
I am trying to build an application where some of the jobs should execute on a Windows OS and some on Linux, like CentOS/Ubuntu. While designing a pipeline, how can I ensure that all the Windows-related tasks/jobs go to a Windows agent and all the Linux-related jobs go to the appropriate Linux OS?
What would be the ideal approach to achieve this?
[Just to add more information]
You generally enable/add Go agents (which are installed on different operating systems) via the "AGENTS" tab in the Go dashboard. You can tag a Go agent with multiple resources using the "RESOURCES" button in that tab.
When you create a job in a stage (in a pipeline), you can specify the resources (i.e. which Go agent(s)) to be used to run this job in the "Resources" field of the "Job Settings" tab.
Based on the resources specified, the Go server will allocate the jobs to the matching Go agents.
Hope this helps.
I have several pipelines that must run on specific machines. I accomplish this using two methods.
The first is using environments. I assign specific pipelines to a specific environment (e.g. Development, QA, Production). I then assign an agent that has the ability to execute Development-related tasks/jobs to the Development environment. I assign agents that can execute QA-related tasks/jobs to the QA environment.
This ensures that pipelines that reside in the Development environment are executed only by agents that are also assigned to the Development environment.
If your agent has the ability to execute both Development and QA tasks, then assign the agent to both the Development and QA environments.
You can use this same concept to ensure that only Windows agents reside in a particular environment.
For example, you can have a Development-Win environment to which you assign pipelines and agents that should handle Windows builds, and another called Development-Linux to which you assign agents and pipelines that should handle Linux builds.
Since I have agents that can build for multiple environments (Dev, QA and Prod), I also use the "Resources" that are assigned under the "Job Settings" tab. If the job needs to execute on a Windows host, then I will assign a resource of "win2012". This tells me that the agent must have the "Windows 2012" resource in order to execute this particular task.
I assign the agent the resource "win2012" to indicate that it is a Windows 2012 machine.
With the combination of the environment configuration and the resource settings, the job will only be executed by an agent that both meets the resource requirements and is in the proper environment.
If you have multiple resource requirements for the task, you can assign each of them using a comma-separated list. Some of the resources that I use are msbuild, subversion, and sqlcmd. These tell me that the agent must have access to msbuild (to compile the code), subversion (to acquire the code from SVN), and sqlcmd (to execute SQL queries against SQL Server). I then mark agents that have these particular resources. Only an agent that meets all of these resource requirements will be assigned the task.
How might I make an environment variable available to Jetty using the Gradle plugin? Some of the code it runs in a servlet requires a particular environment variable to be set, but I can't figure out a good way to pass it to the Jetty process like you can for a JavaExec task (via the environment method).
A system property would also be acceptable. For example, if I were running some Java directly, I'd include -Dproperty.name=blah to pass it the property.name property.
We can do it for Test and JavaExec tasks... can we do it for the JettyRun task?
The container managed by the Jetty plugin runs in the Gradle process, so you need to set an environment variable or system property for that process.
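One option, then, is to set the property on the Gradle JVM itself. A minimal sketch, assuming the default jettyRun task and an illustrative property name; a systemProp.property.name=blah line in gradle.properties targets the same JVM, and an environment variable would instead have to be exported in the shell before launching Gradle:
// Minimal sketch: property name and value are placeholders. Because the Jetty
// container runs inside the Gradle JVM, setting the property on that JVM right
// before the container starts makes it visible to the servlet code.
jettyRun.doFirst {
    System.setProperty('property.name', 'blah')
}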
The Jetty plugin is also quite outdated and limited, partly for exactly the reason that it runs inside the Gradle process. I recommend giving the arquillian-gradle-plugin a try instead. We think that this plugin paves the way to better Gradle web container support.
We have multiple groups running their own TeamCity setups inside the firm. My group provides a set of generic libraries that other project groups use in their projects. We use TeamCity to push versions of our libraries to production. What I need is a way to automatically trigger builds on other groups' CI systems that depend on our libraries once we push a new version to production. I already have the scripts to upgrade to the latest version ready. Right now it is manual; I would like to automate it and have a new build of the dependent projects triggered once we release a version to production. I am looking for a way to push the trigger notification across TeamCity instances.
You can trigger TeamCity builds using an HTTP request, so you could modify your build script to make the required requests at the end of the build. This does have the downside that you need to hardcode the list of builds that need to be triggered on the remote servers into your build script.
The syntax for the HTTP request is:
http://<username>:<password>@<server address>/httpAuth/action.html?add2Queue=<buildTypeId>
For full details take a look at this page of the TeamCity documentation:
Accessing Server by HTTP
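For example, the request at the end of the build could be issued from a small Groovy script. A rough sketch; the server address, credentials and build configuration ID below are placeholders for your own values:
// Rough sketch of the trigger request; replace the placeholders with real values.
def buildTypeId = 'DependentProject_Build'
def url = new URL("http://ci.example.com/httpAuth/action.html?add2Queue=${buildTypeId}")
def conn = url.openConnection()
def auth = 'triggerUser:triggerPassword'.bytes.encodeBase64().toString()
conn.setRequestProperty('Authorization', "Basic ${auth}")
// reading the response code sends the request
println "TeamCity responded with HTTP ${conn.responseCode}"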
I have a list of Jenkins jobs that are independent, but it would be convenient if I could group them together to have them all run with the click of one button. Each of the projects is concerned with deployment, not compilation.
I've found the bulk-builder plugin, but to use bulk-builder it's necessary to specify a pattern each time you wish to invoke it.
I'm looking for a Jenkins plugin that will allow me to group projects together, and the Maven system seems to suggest this is possible: I'd make a top-level build job that sets up dependencies on each of the jobs I wish to run, and then I'd just need to run my top-level job.
If possible, has anyone found Maven to be useful in managing dependencies of anything but Java? Would I be able to use it in the way I'm expecting?
EDIT: These are all .NET projects.
In Jenkins, you can explicitly define other projects to build upon a successful build (search for "downstream"). This way, you can have one project trigger a bunch of others, effectively grouping them.
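If you would rather keep the grouping in code than in the job configuration UI, a Pipeline job can act as that single top-level button instead; a rough sketch, where the job names are placeholders for your existing deployment jobs:
// Rough sketch of a top-level Pipeline job; the downstream job names are
// placeholders and each is triggered with the built-in "build" step.
pipeline {
    agent any
    stages {
        stage('Deploy everything') {
            steps {
                build job: 'deploy-project-a'
                build job: 'deploy-project-b'
                build job: 'deploy-project-c'
            }
        }
    }
}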
Maven is a great tool but I wouldn't use it for this purpose.