LiftWeb in production - Jetty

I'm planning to deploy a web service based on LiftWeb (and Jetty) and was wondering what the most appropriate solution would be in terms of process hosting.
The first solution I can think of is a Linux daemon that runs Jetty and my service. Another option would be to run it from the command line (java ...).
I'd be glad to know if anybody has experienced difficulties with either of the above solutions or has another alternative.
Thanks,
Gil

Jetty has a setuid option available as well; it is packaged in the jetty-hightide distribution and is also available as artifacts in Maven Central.
http://repo2.maven.org/maven2/org/mortbay/jetty/jetty-hightide/
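If you go the daemon route, the Jetty distributions (including hightide) ship a bin/jetty.sh service script. A minimal sketch, assuming the distribution is unpacked at /opt/jetty and runs as a dedicated jetty user (paths and the user name are placeholders; jetty.sh reads its settings from /etc/default/jetty):

    sudo useradd --system --home /opt/jetty jetty
    sudo chown -R jetty:jetty /opt/jetty
    # tell jetty.sh where the installation lives and which user to run as
    echo 'JETTY_HOME=/opt/jetty' | sudo tee /etc/default/jetty
    echo 'JETTY_USER=jetty' | sudo tee -a /etc/default/jetty
    # register the bundled script as an init service and start it
    sudo ln -s /opt/jetty/bin/jetty.sh /etc/init.d/jetty
    sudo /etc/init.d/jetty start    # also supports stop, restart and check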

My solution is to run sbt package in the console and copy the resulting WAR file to jetty/webapps/root.war.
https://www.assembla.com/spaces/liftweb/wiki/Apache_and_Jetty_Configuration
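In other words, the whole deploy can be a two-liner along these lines (the WAR path depends on your project name and Scala version, so treat it as a placeholder):

    sbt package
    cp target/scala-2.*/mylift-app*.war /opt/jetty/webapps/root.war   # root.war is served at the root context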

Related

Disable Liquibase execution at startup using Jetty + Spring (not Spring Boot!)

I'm working on an application developed with Spring 5 (not Spring Boot!) that runs on Jetty. This application has a module that uses the liquibase-maven-plugin.
We generate an image from a Dockerfile (base image jetty:9-jre8), where we add the application (WAR file) to the Jetty webapps directory.
In some specific environments where I deploy the application, I want to be able to disable that execution.
Is it possible to do so?
I've seen in the Spring Boot documentation that it's possible to do so by setting the property spring.liquibase.enabled (or liquibase.enabled on Spring 4) to false, but that doesn't seem to work:
I've tried defining it in the properties file, as an environment property, and as a Java option (-Dspring.liquibase.enabled=false).
The behavior is the same when I deploy the container and when I run the Maven command locally: mvn jetty:run
Do you have any ideas or hints on how to do this?
Thank you in advance
Well, I just discovered that it's possible to disable the Liquibase execution by adding the Java option
-Dliquibase.shouldRun=false
For more details see here.
I will keep this question anyway, in case someone has the same problem I did.
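For this particular setup the flag can be passed either way the question mentions; a hedged sketch (the image name is a placeholder, and JAVA_OPTIONS is the variable the official jetty Docker images pass to the JVM, so double-check it for your tag):

    # in the container built from the jetty:9-jre8 base image
    docker run -e JAVA_OPTIONS="-Dliquibase.shouldRun=false" my-registry/my-app:latest
    # locally with the Jetty Maven plugin; the property set on the Maven JVM is visible to the webapp
    mvn jetty:run -Dliquibase.shouldRun=false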

Migrate from JBoss 5.1 to Jetty

Currently my web application is running on JBoss 5.1 and we need to migrate it to the Jetty server. I am looking for complete details and the steps to do the process... Thanks in advance
I don't think a step-by-step document along the lines you are looking for exists. How simple or hard the migration will be largely depends on your web application. If you are using EJBs, for instance, then you would also have to factor in the OpenEJB integration we have some support for; if it is a straight WAR file, then it is likely much easier. There are professional services from the Jetty developers available to help if you like; you can google Webtide for that. I would recommend going straight to Jetty 9.1, though; the new distribution layout makes it a fair amount easier to separate things out.
Current documentation is here though: http://www.eclipse.org/jetty/documentation/current/
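To illustrate the layout point: from 9.1 on, Jetty separates the unpacked distribution (jetty.home) from a per-application directory (jetty.base), which is what makes it easier to keep your own configuration and WARs apart. A minimal sketch, with paths and module names as examples:

    export JETTY_HOME=/opt/jetty-dist                 # the stock distribution, left untouched
    mkdir -p /opt/myapp-base && cd /opt/myapp-base    # per-application base directory
    java -jar $JETTY_HOME/start.jar --add-to-start=http,deploy   # enable modules; writes start.ini here
    cp /path/to/myapp.war webapps/
    java -jar $JETTY_HOME/start.jar                   # run Jetty with this base against the shared home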

Deploying a Clojure/ClojureScript application in production

I'm playing with Clojure/ClojureScript and I'm writing a web application. Everything is fine while I'm using ring as a development server.
The question is what container should I use for production? Should I use ring for production as well? Should I use Tomcat? Is there a recommended way to deploy a Clojure application? Can you point me to some documentation regarding this aspect?
Thanks!
There is nothing inherently different about deploying a Java servlet that was written in Java vs. Clojure, and all the Clojure web libraries and frameworks produce compatible servlets, so you have many deployment options.
We use Netty to run our Ring-based web application to great effect in production, simply by running "lein run" from a system service. Many others choose to use lein uberwar to produce a WAR file and host it on Tomcat. The specific hosting mechanism seems less relevant than the deployment process. All the JavaScript files are served from a CDN. Immutant is also a fun and very Clojure-oriented choice with a strong "enterprisey" feel to it.
What strikes me as most important is building a repeatable build, including deployment. Pallet is a great way to go though it's got a bit of a learning curve.
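For the "lein run from a system service" part, a minimal systemd sketch could look like the following (unit name, user, working directory and the use of lein trampoline are assumptions; many people run an uberjar here instead):

    [Unit]
    Description=Ring web application
    After=network.target

    [Service]
    User=webapp
    WorkingDirectory=/srv/myapp
    # trampoline avoids keeping the Leiningen JVM around alongside the app JVM
    ExecStart=/usr/local/bin/lein trampoline run
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now on the unit and the app survives reboots like any other service.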
There are a few options.
First one is easy: Heroku. They have a free tier that is ample for deployment and testing. I won't go into further detail on this, but I decided not to use Heroku anymore.
Another common option is Amazon AWS. I gather most apps on AWS use lein-beanstalk [sorry, no citation here]. Lein-beanstalk has been out for quite a while and appears to be well-maintained. It is also maintained by the same person who maintains Compojure.
I use a VPS. I set up the Linux server with Nginx and deploy with git. So, basically, my flow is: create the site, compile with lein uberjar, then deploy. I know that some people can and do use the Leiningen "lein ring server" incantation on their apps, and use many other configurations such as Maven, Tomcat, or deployment with Vagrant, but I just run java -jar myApp-xxxxx on the server and it works great.
As far as documentation goes, there does appear to be a dearth of material on Clojure deployment specifically. You sort of have to bang your head against the wall and figure it out the first time you go the VPS route. I found that almost none of my issues involved Clojure specifically.
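To make the VPS flow above concrete (server name, port and file names are examples, not the actual setup), Nginx usually just reverse-proxies to the port the uberjar listens on:

    # /etc/nginx/sites-available/myapp
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:3000;   # the embedded Jetty/http-kit port of the uberjar
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

The app itself is then just java -jar on the standalone jar, kept alive by whatever init system the box uses.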
In development I use:
lein ring server
Then, to compile it to a WAR file, I use:
lein ring uberwar
and I just drop the resulting WAR file into the webapps directory and it works fine. I use Jetty, by the way.
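Those lein ring commands come from the lein-ring plugin, which has to be declared in project.clj together with the Ring handler to wrap; a minimal sketch (versions, namespace, and handler name are placeholders):

    (defproject myapp "0.1.0"
      :dependencies [[org.clojure/clojure "1.8.0"]
                     [ring/ring-core "1.6.3"]
                     [compojure "1.6.1"]]
      :plugins [[lein-ring "0.12.5"]]
      ;; uberwar wraps this handler var in a servlet so the WAR runs on Jetty, Tomcat, etc.
      :ring {:handler myapp.core/handler})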

Continuous deployment of C/C++ executable to Linux production servers

I wonder if there is any best practice, or at least a more practical way, to deploy C/C++ executables to Linux-based production servers.
I have Jenkins up and running as a CI server, and I have created a main SVN module which contains multiple svn:externals. This module mainly serves as a pipeline of related C++ applications. (Perhaps I should post another question on whether svn:externals is the correct way to do this.)
So the main question is the deployment steps. I am planning to make all production servers Jenkins slaves with a parameterized config, for the purpose of building from SVN tags, and to use some scripts to copy all executables to, e.g., /opt/mytools/bin on multiple production servers.
Any recommendations?
The best deployment route is the one specified by your distribution, IMHO. That is, for Debian-based systems, bundle your applications into .deb files, put them into a repository, and let apt-get take care of the rest. This way you have a minimal impact on the production environment, and most admins are already familiar with the deployment scheme.
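A hand-rolled sketch of that route (package name, version, and paths are made up; real setups usually use debhelper and an internal apt repository, e.g. one managed with reprepro):

    # stage the files in the layout they should have on the target system
    mkdir -p mytools_1.0-1/opt/mytools/bin mytools_1.0-1/DEBIAN
    cp build/mytool mytools_1.0-1/opt/mytools/bin/
    # minimal control file describing the package
    cat > mytools_1.0-1/DEBIAN/control <<'EOF'
    Package: mytools
    Version: 1.0-1
    Architecture: amd64
    Maintainer: Build Team <build@example.com>
    Description: Internal C++ tools
    EOF
    dpkg-deb --build mytools_1.0-1      # produces mytools_1.0-1.deb
    # publish it to the repository, then on each server: apt-get update && apt-get install mytools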
I'm working through some of the same questions, and I'm finding that Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and Farley has been a good (technology agnostic) starting point - not perfect but it's pointed me in the right direction when I had no idea what to do next.
The Continuous Delivery book recommends setting up 'build pipelines' in which you run progressively more automated tests, with only the final manual tests and the deploy/rollback steps being triggered by a real person.
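Since Jenkins is already in place, such a pipeline can be expressed in Jenkins Pipeline syntax (which postdates this question); a sketch in which the SVN URL, build commands, and deploy target are all placeholders:

    // Jenkinsfile: build a chosen SVN tag, test it, and only deploy after a manual approval
    pipeline {
        agent any
        parameters {
            string(name: 'SVN_TAG', defaultValue: '1.0.0', description: 'SVN tag to build')
        }
        stages {
            stage('Checkout') {
                steps { sh "svn checkout https://svn.example.com/repo/tags/${params.SVN_TAG} src" }
            }
            stage('Build') { steps { sh 'cd src && make -j$(nproc)' } }
            stage('Test')  { steps { sh 'cd src && make check' } }
            stage('Deploy') {
                steps {
                    input 'Deploy this build to production?'          // the manual gate
                    sh 'scp src/build/mytool deploy@prod:/opt/mytools/bin/'
                }
            }
        }
    }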

Two VisualSVN Server instances pointing to the same SVN repo?

Would it be possible/safe to run two instances of VisualSVNServer pointing to the same repo?
I've searched around and haven't had any luck finding anything related specifically to this question. The only reason I ask is that we need to enable Windows authentication/integration over HTTP and SVN authentication over HTTPS, and it does not seem to be an option to run both within a single instance of VisualSVN Server.
If not, do you know of an alternative solution that would allow for this?
Edit: Received the following answer from VisualSVN Support
Thanks to Subversion's design, repositories are ready to be accessed by several server instances simultaneously. We haven't experimented a lot with such a configuration, but I think it's possible.
Do I understand correctly that you are going to store your repositories on network storage and run two VisualSVN Server instances on different machines?
Please take care with the server.pid file. In the current release, this file is stored in the repositories folder, so there will be a collision between the two instances of VisualSVN Server. We are going to fix this problem in the upcoming release.
You can easily relocate the server.pid to another destination by adding the following command to the "C:\Program Files\VisualSVN Server\conf\httpd-custom.conf" file:
PidFile "C:/Tmp/server.pid"
You can point two VisualSVN Server instances to the same repository without any problems if it is stored on an SMB share. It's a typical configuration for active/active or active/passive cluster setups.
I wouldn't do this because as far as I know, VisualSVN brings its own web server (Apache) and SVN binaries. I would expect locking issues when running two of each on the same repo, if it's possible at all. VisualSVN probably won't install twice at all.
This sounds like a case for a separate installation of SVN and Apache with custom configuration. I can't say whether what you want is possible, but I would expect it is. It's probably going to be fiddly, though; VisualSVN takes away a lot of the configuration hassle that you have when doing the setup manually. Questions about that would be appropriate to ask on Serverfault.com.
Apart from VisualSVN, there are other commercial wrappers as well. Maybe one of them is more flexible in this respect.
Update: Also, check this out: Supporting Multiple Repository Access Methods from the SVN book
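For the "separate Apache + SVN installation" idea from the previous answer, the configuration would look roughly like the sketch below, whether the two virtual hosts live in one Apache instance or two (this is not VisualSVN's generated config; paths, realms, and ports are assumptions, and Windows-integrated authentication needs an SSPI module such as mod_auth_sspi built for your Apache):

    # HTTP virtual host: integrated Windows authentication
    <VirtualHost *:80>
      <Location /svn>
        DAV svn
        SVNParentPath "C:/Repositories"
        AuthName "Subversion (Windows auth)"
        AuthType SSPI
        SSPIAuth On
        SSPIAuthoritative On
        Require valid-user
      </Location>
    </VirtualHost>

    # HTTPS virtual host: Subversion (basic) authentication against an htpasswd file
    <VirtualHost *:443>
      SSLEngine on
      # SSLCertificateFile / SSLCertificateKeyFile omitted for brevity
      <Location /svn>
        DAV svn
        SVNParentPath "C:/Repositories"
        AuthName "Subversion (SVN auth)"
        AuthType Basic
        AuthUserFile "C:/Repositories/htpasswd"
        Require valid-user
      </Location>
    </VirtualHost>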