Is there any way to run a few clustered Immutant 2-based apps without deploying to WildFly? I would like to test a distributed cache with two REPLs open, but I see no option in the Immutant docs for having these two sessions in one cluster.
It looks like Immutant 1.x had a --clustered option for lein.
For Immutant 2, clustering is only available when running inside WildFly. However, you can still get a REPL inside WildFly: create a "dev" war with the lein-immutant plugin, and it will start a REPL for you when deployed to WildFly. You create a dev war with:
lein immutant war --dev
(assuming you are using [lein-immutant "2.0.0"]). See the WildFly guide for instructions on starting a WildFly instance in clustered mode.
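As a sketch, the whole setup might look like this (the WildFly install path and node names are assumptions; standalone-ha.xml is WildFly's stock clustered profile, and the port offset lets both nodes run on one machine):

```shell
# build a dev war that opens an nREPL when deployed
lein immutant war --dev

# start two WildFly nodes in clustered (HA) mode;
# deploy the dev war to each node's deployments directory
$WILDFLY_HOME/bin/standalone.sh --server-config=standalone-ha.xml \
  -Djboss.node.name=node1
$WILDFLY_HOME/bin/standalone.sh --server-config=standalone-ha.xml \
  -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=100
```

Connecting a REPL client to each node then gives you the two sessions in one cluster.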
Recently I started a Django server on Azure App Service, and now I want to add ChromeDriver for web scraping. I have noticed that this requires installing some additional Linux packages (not Python packages) on the machine. The problem is that they get erased on every deployment. Does this mean I should switch to Docker?
A container works, but you can also try to pull down the additional packages in a custom startup file, without messing with the machine after each deployment:
https://learn.microsoft.com/en-us/azure/developer/python/tutorial-deploy-app-service-on-linux-04
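As a sketch, a custom startup script for App Service on Linux might reinstall the native packages on each start (the package names and the Django project name are assumptions; you would set this file as the app's Startup Command):

```shell
#!/bin/bash
# startup.sh: reinstall the native packages the scraper needs, then start the app.
# App Service runs this on every container start, so the packages survive deployments.
apt-get update
apt-get install -y chromium chromium-driver

# start the Django app (replace mysite with your actual project module)
gunicorn --bind=0.0.0.0:8000 mysite.wsgi
```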
I have 3 Spring Boot applications and want to deploy all of them on a single EC2 instance.
When I tried deploying the wars under tomcat/webapps, some applications would not work: the embedded Tomcat in Spring Boot uses port 8080, and the other web applications that already exist in Tomcat stopped working.
Another option I tried was changing server.port in the application.properties file and running the jar with java -jar app.jar.
This works, but only for one app at a time, and pressing Ctrl+C or Ctrl+Z, or closing the terminal (closing the SSH connection), stops the application.
During my search I found that this can be done with AWS Elastic Beanstalk, but I have already created a free-tier EC2 instance. Is there any way to make it work without changing the instance?
Can someone help me out?
Thanks
If you want to run your app using java -jar app.jar, add & to the end so the process runs in the background.
Using the command java -jar app.jar & you can run multiple apps in the background. The shell will print a PID (process ID) for each.
You can use this PID to kill the app later with kill -9 <pid>.
To check running processes you can use ps aux | grep java (searching for anything that contains "java").
For running multiple wars on one Tomcat, see: explicitly deploying multiple applications to Tomcat
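The background-process mechanics above can be sketched like this. To keep the snippet runnable anywhere, sleep 60 stands in for java -jar app.jar; the file names and port in the comment are assumptions:

```shell
# start two "apps" in the background and record each pid
# (in real use: nohup java -jar app1.jar --server.port=8081 > app1.log 2>&1 &)
sleep 60 &
echo $! > app1.pid
sleep 60 &
echo $! > app2.pid

# stop the first one later using its recorded pid
kill -9 "$(cat app1.pid)"
```

Prefixing the real command with nohup is what makes the process survive Ctrl+C and the SSH session closing; with different server.port values, all three apps can run side by side.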
I need to deploy my Spring Boot application on Compute Engine in Google Cloud Platform. I have already created an instance, and Apache and Maven have been installed through SSH. Furthermore, the war file has been uploaded into a bucket. Can anybody provide me with the remaining commands to deploy the war file on the Tomcat instance, or on any other cloud platform with Linux?
Thanks
Deploying to a Google Compute Engine instance is not substantially different from AWS, Azure, or any other Linux host provider.
You just need an SSH connection to the remote machine, and to install the software required to compile, build, zip, deploy, etc.
I will list some approaches, from basic (manual) to advanced (automated):
#1 Bash scripting
unzip and configure git
unzip and configure java
unzip and configure maven
unzip and configure tomcat (not required if Spring Boot is used)
configure the Linux host to open port 8080
create a script called /devops/pipeline.sh on your remote cloud Linux instance, with the following steps:
For war deployment :
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create war
mvn clean package
# stop tomcat before touching webapps, so the war is not hot-deployed mid-copy
bash /my/tomcat/shutdown.sh
# move the war to the tomcat deploy folder
cp target/my_app.war /my/tomcat/webapps
# start tomcat
bash /my/tomcat/startup.sh
Or spring-boot startup
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create jar
mvn clean package
# kill or stop the application (crude; better to track and kill by pid)
killall java
# start the application in the background so the script does not block
nohup java $JAVA_OPTS -jar $jar_file_name > app.log 2>&1 &
After pushing to git, just connect to your instance using SSH and execute
bash /devops/pipeline.sh
Improvements: parametrize the repository name, branch name, mvn profile, and database credentials; create a tmp/uuid folder on every execution and delete it after deploy; optimize start and stop of the application using the pid; etc.
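Those improvements might be sketched like this (all paths, the pid-file location, and the argument names are assumptions):

```shell
#!/bin/bash
# parametrized pipeline.sh sketch: bash pipeline.sh <git-url> [branch]
set -e
REPO_URL="$1"
BRANCH="${2:-master}"
WORKDIR="/tmp/build-$(date +%s)"   # fresh temp folder per execution

git clone --branch "$BRANCH" "$REPO_URL" "$WORKDIR"
cd "$WORKDIR"
mvn clean package

# stop the previous instance by its recorded pid instead of killall
[ -f /devops/app.pid ] && kill "$(cat /devops/app.pid)" 2>/dev/null || true

# keep the jar outside the temp folder so the folder can be deleted
cp target/*.jar /devops/app.jar
nohup java $JAVA_OPTS -jar /devops/app.jar > /devops/app.log 2>&1 &
echo $! > /devops/app.pid

cd / && rm -rf "$WORKDIR"          # delete the temp folder after deploy
```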
#2 Docker
Install docker on your remote cloud Linux instance
Create a Dockerfile with all the steps for the war or Spring Boot build (approach #1) and store it next to your source code (I mean in your git repository)
Perform a git push of your changes
Connect to your remote cloud linux instance using ssh:
Build your docker image: docker build -t my_app .
Delete the previous container and run a new version:
docker rm my_app -f
docker run -d --name my_app -p 8080:8080 my_app
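A minimal Dockerfile for the Spring Boot case might look like this (the base image and jar name are assumptions), written from the shell for brevity:

```shell
# write a minimal Dockerfile for a spring-boot jar
cat > Dockerfile <<'EOF'
FROM eclipse-temurin:17-jre
COPY target/my_app.jar /app/my_app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/my_app.jar"]
EOF
```

Then docker build -t my_app . and docker run as above.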
In the previous approaches, build operations are performed on the remote server, so several tools are needed on that server. In the following approaches, the build is performed on an intermediate server and only the deploy is executed on the remote server. This is a little better.
#3 Local build (an extra instance is required)
In this approach, the build is performed on the developer machine and the result is uploaded to some kind of repository. I advise using docker instead of just a war or jar compilation.
In order to build and upload the docker image, one of these docker registries is required:
Docker simple registry
Amazon Elastic Container Registry (ECR)
Docker hub.
Harbor.
JFrog Container Registry.
Nexus Container Registry.
Portus
Azure Container Registry.
Choose one and install it in a new server. Configure your developer machine and your remote server to point to your new docker registry.
Final steps are:
Perform a docker build on your developer machine. This will create a new docker image of your java application (Tomcat + war, or a Spring Boot jar)
Upload your local image to your new docker registry with something like:
docker push example.com/test-image
Connect to your remote cloud linux instance using ssh and just download the docker image
docker pull example.com/test-image
In the remote server, just start your new downloaded image with docker run...
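The full round trip through the registry might look like this (the registry host, image name, and tag are assumptions):

```shell
# on the developer machine: build, tag for the registry, push
docker build -t my_app .
docker tag my_app registry.example.com/my_app:1.0.0
docker push registry.example.com/my_app:1.0.0

# on the remote server: pull the pushed image and replace the running container
docker pull registry.example.com/my_app:1.0.0
docker rm -f my_app 2>/dev/null || true
docker run -d --name my_app -p 8080:8080 registry.example.com/my_app:1.0.0
```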
#4 Use a continuous integration server (an extra instance is required)
Same as #3, but not on the developer machine. All the steps are performed on another server, called the continuous integration server.
#4.1 Use a continuous integration server (an extra instance is required)
Install Jenkins or another Continuous integration server in the new instance
Configure plugins and other required things in Jenkins in order to enable its webhook URL: https://jrichardsz.github.io/devops/configure-webhooks-in-github-bitbucket-gitlab
Create a job in Jenkins to call the script of approach #1 or execute the docker commands of approach #2. If you can, approach #3 would be perfect.
Configure your SCM (GitHub, Bitbucket, GitLab, etc.) to point to the webhook URL published by Jenkins.
When you are ready to deploy, just push the code to your SCM; Jenkins will be notified and will execute the previously created job. As you can see, no human is required to deploy the application on the server (with the exception of the developer's push).
Note: At this point, you could migrate the scripts of approaches #1 and #2 to :
Jenkins pipeline script
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_scripted_pipeline_java_mvn_basic-js
Jenkins declarative pipeline
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_declarative_pipeline_hello_world-js
These are more advanced and scalable ways to capture, as code, all the commands and configurations required from the beginning to the deployment.
#5 Advanced (a sysadmin team or extra people and knowledge are required)
More instances and technologies will be required.
Kubernetes
Ansible
High availability / Load balancer
Backups
Configurations Management
And more automations
This becomes necessary when more and more web applications and microservices are required in your company/enterprise.
#6 SaaS
All the previous approaches can be simplified using world-class platforms like:
Jelastic
Heroku
OpenShift, etc.
I want to set up a unit-test environment for my product. I have a web application built on nginx in Lua which uses MySQL and Redis. I think Docker will be good for this, although I am new to Docker. My application runs on a CentOS server (the production server).
I am planning to set up separate containers for MySQL, Redis, and the web app, and then write a UT application (unit tests for Lua using the Busted framework) on my Mac (my development machine is a Mac) or a VM to test it. The UT application will talk to the nginx container, and nginx will use the MySQL and Redis containers. Is this good? If yes, can someone guide me on how to do this, maybe with a good link? If no, what would be a better way? I have already tried Vagrant, but that took too much time, which shouldn't happen in my UT case.
For an example of how we set up our project template, you may have a look at phundament/app and its testing setup.
We are using a dockerized GitLab installation with a customized runner, which is able to execute docker-compose.
Note! The runner itself is running on a separate Docker host.
We are using docker-compose.yml to define the services in a stack with adjustments for development and testing.
The CI configuration is optimized to handle multiple concurrent tests of isolated stacks, this is just done by specifying a custom COMPOSE_PROJECT_NAME.
Some in-depth documentation about our testing process and useful information about docker-compose and dockerized CI.
#testing README
#testing Docs
CI builds
Extending services and Compose files
Docker-in-Docker for CI?
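For the nginx + MySQL + Redis stack described in the question, a minimal docker-compose.yml might look like this (the image names, ports, and credentials are assumptions; openresty is one common way to get nginx with Lua support):

```yaml
version: "3"
services:
  web:
    image: openresty/openresty    # nginx with Lua support
    ports:
      - "8080:80"                 # the Busted tests talk to localhost:8080
    depends_on:
      - mysql
      - redis
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: app_test
  redis:
    image: redis:7
```

Start the stack with docker-compose up -d; concurrent test runs can be isolated by setting a distinct COMPOSE_PROJECT_NAME for each.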
Finally, Travis CI has also supported Docker for a while, but I haven't tested this approach at all.
If you are new to Docker-based CI, please look at Drone:
Official page
Github repo
Tutorial
There are some drawbacks to this solution (like the size of the images), but it will get you off the ground.
How can I configure a Tapestry5 project to run standalone (via java -jar) with embedded Jetty?
I'm looking for a short "recipe" regarding Tapestry5, Jetty, configuration of servlets/ handlers/ whatever is needed to connect the dots...
I've seen a few dots: How to Create an Executable War, Configuring Tapestry (ref Tapestry as servlet filter)
Edit: I'm thinking about a standalone running webapp due to server circumstances. It doesn't have to be embedded Jetty, but I can't rely on a stable appserver. Still looking for a recipe, though, so I don't spend much time on dead ends...
Also, I'd like for Jenkins (Hudson) to be able to stop and start the server automatically when deploying updates - I don't know if that influences what I can do with Jetty, f.ex.
Well, I believe this is a general "how to run a war" question. Assuming you indeed have a war, you can use Jetty or Winstone to "run" it; see:
http://winstone.sourceforge.net
and
http://www.enavigo.com/2008/08/29/deploying-a-web-application-to-jetty/
In the first case, you can directly do
java -jar winstone.jar --warfile=<warfile>
https://github.com/ccordenier/tapestry5-hotel-booking/ (check its Maven build)
http://tapestry.zones.apache.org:8180/tapestry5-hotel-booking/signin
I did some digging, and this is the short recipe I basically ended up following:
Start with the Maven Jetty plugin as configured in the pom.xml of the Tapestry 5 archetype
Add the stopKey and stopPort attribute to Maven Jetty plugin configuration
Let Jenkins CI run the Maven target jetty:stop and then clean install
Let Jenkins run the shell script mvn jetty:run &
Voila: my Java app is up and running with automatically updated code, without any app server.
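The plugin configuration from the recipe's first two steps might look roughly like this in pom.xml (the plugin coordinates match the Jetty 6-era plugin the Tapestry 5 archetype used; the version, stopKey, and stopPort values are assumptions):

```xml
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <version>6.1.26</version>
  <configuration>
    <!-- stopKey/stopPort let "mvn jetty:stop" shut the server down remotely,
         which is what the Jenkins jetty:stop step relies on -->
    <stopKey>secret</stopKey>
    <stopPort>9966</stopPort>
  </configuration>
</plugin>
```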