How might I make an environment variable available to Jetty when using the Gradle plugin? Some of the servlet code it runs requires a particular environment variable to be set, but I can't figure out a good way to pass it to the Jetty process the way you can for a JavaExec task (via the environment method).
A system property would also be acceptable. For example, if I were running plain Java, I'd include -Dproperty.name=blah to set the property.name property.
We can do it for Test and JavaExec tasks... can we do it for the JettyRun task?
The container managed by the Jetty plugin runs in the Gradle process, so you need to set an environment variable or system property for that process.
The Jetty plugin is also quite outdated and limited, partly for exactly this reason: it runs inside the Gradle process. I recommend giving the arquillian-gradle-plugin a try instead. We think that plugin paves the way to better web container support in Gradle.
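For example, since jettyRun executes in the Gradle JVM, setting a system property on that JVM before the container starts should be enough for the servlet code to read it. A minimal sketch, assuming the Jetty plugin's jettyRun task and using the question's property.name key as a placeholder:

    // build.gradle -- sketch only; 'property.name' is a placeholder key
    // jettyRun runs inside the Gradle process, so a system property set here
    // is visible to code running in the container
    jettyRun.doFirst {
        System.setProperty('property.name', 'blah')
    }

Passing -Dproperty.name=blah on the gradle command line, or adding systemProp.property.name=blah to gradle.properties, should have the same effect, since both set the property on the JVM that runs the build.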
I'm working on an application developed with Spring 5 (not Spring Boot!) that runs on Jetty. This application has a module that uses the liquibase-maven-plugin.
We generate an image from a Dockerfile (base image jetty:9-jre8), where we add the application (WAR file) to the Jetty webapps directory.
In some specific environments where I deploy the application, I want to be able to disable the Liquibase execution.
Is it possible to do so?
I've seen in the Spring Boot documentation that it's possible to do so by setting the property spring.liquibase.enabled (or liquibase.enabled on Spring 4) to false, but that doesn't seem to work:
I've tried defining it in the properties file, as an environment property, and also as a Java option (-Dspring.liquibase.enabled=false).
The behavior is the same whether I deploy the container or run the Maven command locally: mvn jetty:run
Do you have any ideas or hints on how to do this?
Thank you in advance
Well, I just discovered that it's possible to disable the Liquibase execution by adding the JAVA_OPTIONS entry
-Dliquibase.shouldRun=false
For more details see here
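In the Docker setup described above, one way to pass that option is through the JAVA_OPTIONS environment variable, which the official jetty images append to the JVM command line; a minimal sketch, where the WAR name and paths are just examples:

    FROM jetty:9-jre8
    # Assumption: the official jetty image appends JAVA_OPTIONS to the JVM
    # command line, so the webapp's Liquibase integration sees the property
    ENV JAVA_OPTIONS="-Dliquibase.shouldRun=false"
    COPY target/myapp.war /var/lib/jetty/webapps/root.war

For the local case, the same property can be passed straight to Maven: mvn jetty:run -Dliquibase.shouldRun=false.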
I will keep this question anyway, in case someone has the same problem I did.
I have an existing Windows desktop application written in C++ that needs to add support for SNMP so that a few pieces of status information are available on some SNMP OIDs. I found the net-snmp project and have been trying to understand how this can best fit into the existing program.
Questions:
Do I need to run snmpd, or can I just integrate the agent code into my application? I would prefer that starting my application does everything necessary rather than worry about deploying and running multiple processes, but the documentation doesn't speak much about doing this. The net-snmp agent daemon tutorial has an option for running the sample code as the full-agent rather than sub-agent, but I'm not sure about any limitations of doing this.
What would the pros/cons be of running a full agent in my application vs. using snmpd and putting a subagent in my application? Is there a third option I should also consider?
If I can integrate the full agent into the existing program, how do I pass it a configuration file via the API? Can I avoid the config file altogether by passing these parameters in via function calls instead?
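For context, a minimal embedding sketch following the net-snmp agent tutorial's pattern looks roughly like the code below; the application name "my-app" is a placeholder, error handling is omitted, and the master/subagent switch is exactly the choice the questions above are about:

    #include <net-snmp/net-snmp-config.h>
    #include <net-snmp/net-snmp-includes.h>
    #include <net-snmp/agent/net-snmp-agent-includes.h>

    static volatile int keep_running = 1;  /* cleared by your shutdown logic */

    int main(void) {
        /* 0 = run a full master agent inside this process,
           1 = run as an AgentX subagent registering with an external snmpd */
        int agentx_subagent = 0;

        if (agentx_subagent)
            netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID,
                                   NETSNMP_DS_AGENT_ROLE, 1);

        SOCK_STARTUP;                   /* Winsock initialization on Windows    */
        init_agent("my-app");           /* initialize the agent library         */
        /* register the MIB objects exposing your status values here            */
        init_snmp("my-app");            /* read my-app.conf and start up        */
        if (!agentx_subagent)
            init_master_agent();        /* open the master agent's listen port  */

        while (keep_running)
            agent_check_and_process(1); /* block and serve SNMP requests        */

        snmp_shutdown("my-app");
        SOCK_CLEANUP;
        return 0;
    }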
So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that uses Eclipse and Ant to simulate this: on save, Ant runs any tests that were modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction are very much appreciated. I've searched but have only found information that needs to be decrypted first.
Eclipse has the builder feature: you create an Ant builder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Also, Infinitest, which provides a Continuous Testing plugin for Eclipse and IntelliJ, might be helpful.
Ant is a build tool. Although IntelliJ can build your project for you, that build only works inside IntelliJ, which means you can't build and distribute your application without it.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend upon each other, and let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you specify to do the work. For example, you might have a target called package that will build your jar or war. That target might depend upon another target called compile to compile the code. That target might depend upon a code generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which runs the tests; you can run a whole batch of tests via the nested <batchtest> element, or, if you have a driver program, specify it via the nested <test> element.
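A minimal sketch of such a build file, with directory names assumed and junit.jar expected to be available to Ant (for example in its lib directory):

    <project name="myapp" default="test">

        <target name="compile">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
        </target>

        <target name="test" depends="compile">
            <mkdir dir="build/reports"/>
            <junit printsummary="yes" haltonfailure="yes">
                <classpath path="build/classes"/>
                <formatter type="plain"/>
                <batchtest todir="build/reports">
                    <fileset dir="build/classes" includes="**/*Test.class"/>
                </batchtest>
            </junit>
        </target>

    </project>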
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if you have your Ant script able to run unit tests, the Continuous Integration engine not only can build your app, but then run the unit tests with each and every change that occurs.
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even setup a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.
I'm planning to deploy a web service based on LiftWeb (and Jetty) and was wondering what the most appropriate solution would be (in terms of the hosting process).
The first solution I can think of is a Linux daemon which runs Jetty and my service. Another option would be to run it from the command line (java ...).
I'd be glad to know if anybody has experienced difficulties with either of the above solutions, or has another alternative.
Thanks,
Gil
Jetty has a setuid option available as well; it is packaged in the jetty-hightide distribution and is also available as artifacts in Maven Central.
http://repo2.maven.org/maven2/org/mortbay/jetty/jetty-hightide/
My solution is to run sbt package in the console, and copy the WAR file to jetty/webapps/root.war.
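In shell terms that amounts to something like the following, where the Jetty install path and the generated WAR name are examples that depend on your setup:

    sbt package
    cp target/scala-2.*/mywebapp*.war /opt/jetty/webapps/root.war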
https://www.assembla.com/spaces/liftweb/wiki/Apache_and_Jetty_Configuration
I'm using TeamCity 7 as a CI server, and I have to test several Web Application plugins, mainly written in PHP. I'm familiar with Ant and *Unit, but I have an issue to solve: to properly test a plugin, my idea would be the following:
Cleanup the testing environment.
Install a clean copy of the Web Application which will host the plugin.
Install the plugin.
Enable the plugin.
Run the tests.
Obviously, running the tests on an installed environment is the easy part. Most tests are fired by directly calling the plugin's class methods, yet the framework must be configured, even with minimal settings, to allow calling its bootstrap file and performing the necessary initialization. I tried running the tests in an environment I prepared manually, and they run as expected.
The issue is now automating the installation of the standard Web Application, and, most importantly, its configuration. The basic steps are the following:
Unzip framework somewhere (done).
Create a Database (done).
Create a Database User and assign it proper privileges (done).
Run Web Application's setup.
The tricky part is that not all Web Applications implement a command line interface, such as drush for Drupal, hence I came up with two possible ways to complete the installation:
Simulate manual installation via CURL
Take note of the installation steps and the forms that need to be filled.
POST data to each of the forms using CURL.
I tried this method, still manually, with acceptable results. The Web Application gets installed as expected, and it can be used.
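As an illustration only: the URL, form fields and credentials below are hypothetical and depend entirely on the application's installer.

    # keep the installer's session cookie between steps
    curl -s -c cookies.txt -b cookies.txt \
         -d "db_host=localhost" \
         -d "db_name=plugin_test" \
         -d "db_user=tester" \
         -d "db_pass=secret" \
         "http://localhost/myapp/install.php?step=2"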
However, this requires a Web Server where the application can run. As far as I know, TeamCity Agents work in their own, randomly named directories, and anything "installed" in them cannot be accessed via HTTP requests.
Backup/Restore
Manually pre-install a Web Application and configure its basic settings.
Zip the Application's directory and a backup of its database.
Before running the tests, unzip the directory in Agent's working directory.
Restore the backed-up database. The application will now be "configured".
This method is a bit "rough", but it doesn't require a Web Server to be running. Although the Web Application won't be able to serve HTTP requests, that doesn't necessarily matter, as the tests will be run against the plugin's classes.
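A rough sketch of that preparation step, where the archive name, database name and credentials are placeholders:

    unzip -q webapp-preinstalled.zip -d ./webapp
    mysql -u tester -psecret plugin_test < webapp-db-dump.sql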
This method has two major drawbacks, though:
Tests involving interaction with the Web Application (e.g. hooks, event handlers, and so on) can't be run.
Since the Web Application and its database are pre-configured, their parameters will be the same at every run. Therefore, it wouldn't be possible to run two Agents at the same time, for example to test two different plugins.
I'm now wondering if there's a better solution, as both the above look less than optimal to me.
Please note that, although I'm using TeamCity, the CI Server itself should not be a big deal, as I'm running everything with Ant. Therefore, any suggestion, even one related to another CI Server, will be welcome (I know Hudson, CruiseControl and BuildMaster, and I can adapt a concept easily). Thanks.