I have 3 Spring Boot applications and want to deploy them all on a single EC2 instance.
When I tried building WARs and deploying them under tomcat/webapps, some applications did not work, because the embedded Tomcat in Spring Boot uses port 8080, and the other web applications that already existed in Tomcat stopped working.
Another option I tried was changing server.port in the application.properties file and running the jar with java -jar app.jar.
This works, but only for one app at a time, and pressing Ctrl+C or Ctrl+Z or closing the terminal (closing the SSH connection) stops the application.
During my search I found that this can be done with AWS Elastic Beanstalk, but I have already created a free-tier EC2 instance. Is there any way to make this work without changing the instance?
Can someone help me out?
Thanks
If you want to run your app using java -jar app.jar, add & to the end to let the process run in the background.
Using the command java -jar app.jar & you can run multiple apps in the background. This will print a PID (process ID).
You can use this PID to kill the app later with kill -9 <pid>.
To check running processes you can use ps aux | grep java (we are searching for anything that contains "java").
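As a minimal sketch (the jar names and ports below are placeholders, not from the question), each app can get its own server.port and survive the SSH session by combining nohup with &:

# start each Spring Boot app on its own port, immune to the terminal closing
nohup java -jar app1.jar --server.port=8081 > app1.log 2>&1 &
nohup java -jar app2.jar --server.port=8082 > app2.log 2>&1 &
nohup java -jar app3.jar --server.port=8083 > app3.log 2>&1 &

nohup keeps the processes alive after the SSH connection closes, and redirecting output to one log file per app keeps them from writing over each other.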
For running multiple WARs on Tomcat, see: explicitly deploying multiple applications to Tomcat.
I have a Compute Engine instance with a Java Spring application.
How can I execute it on VM instance startup? If I start my VM, it should run my application automatically.
I looked at the article - google doc
(the document does a lot, but I only need to run the existing WAR artifact).
I added a few lines by editing the VM under Automation >> Startup script:
#!/bin/sh
cd /home/souin/p****my/build/libs
# Start server
java -jar py*****my-0.1.0-SNAPSHOT.war
It is not working.
Do I need to create a shell file inside the VM and call it from the startup script?
Please advise how to run my application.
We just migrated our Elastic Beanstalk environments from PHP 7.3 running on 64bit Amazon Linux to PHP 7.4 running on 64bit Amazon Linux 2, and we are seeing the following errors:
When deploying code to the environment with a Rolling deployment policy, we get 3-4 seconds of 502 Bad Gateway before the servers start working again. We did not see this downtime with the previous generation of Linux.
Also: the Application Load Balancer clears all sessions and signs out all users, even though stickiness is enabled with a load balancer generated cookie. We did not see this stickiness issue with the previous generation of Linux.
This happens on both Apache and Nginx.
Any ideas on how to resolve this?
I don't know about the Bad Gateway issue, but I recently ran into this loss of sessions issue. The sessions are lost because Amazon Linux 2 now uses the PHP-FPM service through systemd to host PHP. Session tracking in the default PHP/ElasticBeanstalk configuration is done by the use of files in the /tmp directory. However, systemd's "PrivateTmp" feature is enabled, which creates a unique directory for the PHP-FPM service to use when running. As soon as the PHP-FPM service is stopped, systemd deletes this special "private" /tmp, which deletes all the session files.
Whenever PHP ElasticBeanstalk deploys a new version, this PHP-FPM service is stopped and restarted, resulting in the loss of sessions.
There are a couple of options to address this issue:
-> Configure PHP to use something like memcached/redis/etc to manage sessions, instead of using the filesystem. This is probably the most secure solution (see the php.ini sketch after the hook script below).
Or,
-> Configure your Amazon Linux 2 ElasticBeanstalk instances to handle these session files in the /tmp directory proper, instead of the "private" tmp directory provided by systemd.
This can be achieved by adding the following post-deploy configuration script into your project under the path .platform/hooks/postdeploy/phpfpm_noprivatetmp.sh (note that platform hook scripts must be executable, e.g. chmod +x):
#!/bin/bash -e
# change PrivateTmp from true to false, then reload/restart the systemd service
sed -i 's/PrivateTmp=true/PrivateTmp=false/' /usr/lib/systemd/system/php-fpm.service
# wait a moment...
sleep 2
sudo systemctl daemon-reload
# wait a moment...
sleep 2
sudo systemctl restart php-fpm.service
This will disable the "PrivateTmp" feature in systemd, causing the session files to be stored in the "real" /tmp directory where they won't get deleted automatically. Deploying new versions of your site will no longer cause everyone to get logged out.
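For the first option, a minimal php.ini sketch (the Redis endpoint is a placeholder, and it assumes the phpredis extension is installed):

; store sessions in Redis instead of /tmp (hypothetical host/port)
session.save_handler = redis
session.save_path = "tcp://my-redis-host:6379"

With sessions in an external store they also survive instance replacement and scale-out, which file-based sessions never do.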
This question seems very basic, but I wasn't able to quickly find an answer at https://cloud.google.com/compute/docs/instances/create-start-instance. I'm running a MicroMDM server on a Google Cloud VM by connecting to it using SSH (from the VM instances page in the Google Cloud Console) and then running the command
> sudo micromdm serve
However, I notice that when I shut down my laptop, the server also stops, which is actually why I wanted to run the server in a VM in the first place.
What would be the recommended way to keep the server running? Should I use systemd or perhaps run the process as a Docker container?
When you run the service from the command line, you "attach" it to your shell process; when you terminate your SSH session, your job gets terminated as well.
To make a process run in the background, simply append the & at the end of the command, in your case:
sudo micromdm serve &
This way your server stays alive even after you quit your session (depending on your shell settings, you may also need nohup so the process ignores the hangup signal).
I also suggest adding that line to the instance startup script if you want the server to always be up, so that you don't have to run the command by hand each time :)
More on Compute Engine startup scripts here.
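A minimal startup-script sketch (assuming micromdm is on the PATH; Compute Engine startup scripts already run as root, so no sudo is needed):

#!/bin/sh
# start MicroMDM in the background at boot; flags and paths may differ in your install
micromdm serve &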
As described in the Using MicroMDM with systemd documentation, it is suggested to use systemd to run the MicroMDM service on Linux. First, on our Linux host, we create the micromdm.service file, then we move it to /etc/systemd/system/micromdm.service. Then we can start the service. This way systemd keeps the service running, and restarts it after the service fails or the server restarts.
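A minimal unit-file sketch (the binary path and flags are placeholders; the MicroMDM documentation has the authoritative version):

[Unit]
Description=MicroMDM server
After=network.target

[Service]
# hypothetical install path; adjust to where micromdm actually lives
ExecStart=/usr/local/bin/micromdm serve
Restart=on-failure

[Install]
WantedBy=multi-user.target

After placing the file, sudo systemctl daemon-reload followed by sudo systemctl enable --now micromdm starts the service and keeps it enabled across reboots.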
I need to deploy my Spring Boot application on Compute Engine in Google Cloud Platform. I have already created an instance, and Apache and Maven have been installed through SSH. Further, the WAR file has been uploaded into a bucket. Can anybody provide me with the remaining commands to deploy the WAR file on the Tomcat instance, or on any other cloud platform with Linux?
Thanks
Deploying on a Google Compute Engine instance is not substantially different from AWS, Azure, or any other Linux host provider.
You just need an SSH connection to the remote machine, and to install the required software to compile, build, package, deploy, etc.
I will list some approaches, from basic (manual) to advanced (automated):
#1 Bash scripting
unzip and configure git
unzip and configure java
unzip and configure maven
unzip and configure tomcat (this is not required if Spring Boot is used)
configure the Linux host to open port 8080
create a script called /devops/pipeline.sh on your remote cloud Linux instance, with the following steps:
For WAR deployment:
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create war
mvn clean package
# copy the war into the tomcat deploy folder
cp target/my_app.war /my/tomcat/webapps
# stop tomcat
bash /my/tomcat/shutdown.sh
# start tomcat
bash /my/tomcat/startup.sh
Or, for Spring Boot startup:
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create jar
mvn clean package
# kill or stop the application (kills every java process; see the pid-based sketch below for a safer option)
killall java
# start the application in the background so the script (and the ssh session) can exit
nohup java $JAVA_OPTS -jar $jar_file_name > app.log 2>&1 &
After pushing to git, just connect to your instance using SSH and execute:
bash /devops/pipeline.sh
Improvements: parametrize repository name, branch name, mvn profile, database credentials; create a tmp/uuid folder on every execution and delete it after deploy; optimize start and stop of the application using a pid; etc.
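A minimal sketch of the pid-based stop/start improvement (the file locations are placeholders):

# stop the previous instance, if any, using its recorded pid
if [ -f /devops/app.pid ]; then
  kill "$(cat /devops/app.pid)" || true
fi
# start the new build and record its pid
nohup java $JAVA_OPTS -jar "$jar_file_name" > /devops/app.log 2>&1 &
echo $! > /devops/app.pid

This avoids killall java, which would also take down any other Java application running on the same instance.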
#2 Docker
Install Docker on your remote cloud Linux instance
Create a Dockerfile with all the steps for the WAR or Spring Boot app (approach #1) and store it close to your source code (I mean in your git repository); a sketch appears after these steps
Perform a git push of your changes
Connect to your remote cloud Linux instance using SSH:
Build your docker image: docker build ...
Delete previous container and run a new version:
docker rm my_app -f
docker run -d --name my_app -p 8080:8080 my-image-name
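A minimal Dockerfile sketch for the Spring Boot variant (the base image and jar name are assumptions, mirroring the my_app naming above):

# run the jar produced by mvn clean package (hypothetical jar name)
FROM eclipse-temurin:17-jre
COPY target/my_app.jar /app/my_app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/my_app.jar"]

Built with docker build -t my-image-name . from the repository root.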
In the previous approaches, build operations are performed on the remote server. To do that, several tools are needed on that server. In the following approaches, the build is performed on an intermediate server and just the deploy is executed on the remote server. This is a little better.
#3 Local Build (an extra instance is required)
In this approach, the build is performed on the developer machine and the result is uploaded to some kind of repository. I advise you to use Docker instead of just a WAR or JAR compilation.
In order to build and upload the Docker image, one of these Docker registries is required:
Docker simple registry
Amazon Elastic Container Registry (ECR)
Docker Hub
Harbor
JFrog Container Registry
Nexus Container Registry
Portus
Azure Container Registry
Choose one and install it on a new server. Configure your developer machine and your remote server to point to your new Docker registry.
Final steps are:
Perform a docker build on your developer machine. This will create a new Docker image of your Java application (Tomcat + WAR, or Spring Boot JAR).
Upload your local image to your new Docker registry with something like:
docker push example.com/test-image
Connect to your remote cloud Linux instance using SSH and just download the Docker image:
docker pull example.com/test-image
On the remote server, just start your newly downloaded image with docker run...
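One step that is easy to miss (a sketch; my-image-name is the hypothetical local image from approach #2): the local image has to be tagged with the registry's name before docker push will route it to your registry.

# on the developer machine, before pushing
docker tag my-image-name example.com/test-image
docker push example.com/test-image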
#4 Use a continuous integration server (an extra instance is required)
Same as #3, but not on the developer machine. All the steps are performed on another server, called a continuous integration server.
#4.1 Use a continuous integration server (an extra instance is required)
Install Jenkins or another continuous integration server on the new instance
Configure plugins and other required things in Jenkins in order to enable a webhook URL: https://jrichardsz.github.io/devops/configure-webhooks-in-github-bitbucket-gitlab
Create a job in Jenkins to call the script of approach #1 or execute the Docker commands of approach #2. If you can, approach #3 would be perfect.
Configure your SCM (GitHub, Bitbucket, GitLab, etc.) to point to the webhook URL published by Jenkins.
When you are ready to deploy, just push the code to your SCM; Jenkins will be notified and will execute the previously created job. As you can see, no human is required to deploy the application to the server (with the exception of the developer push).
Note: At this point, you could migrate the scripts of approaches #1 and #2 to:
Jenkins pipeline script
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_scripted_pipeline_java_mvn_basic-js
Jenkins declarative pipeline
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_declarative_pipeline_hello_world-js
These are more advanced and scalable approaches for mapping all the commands and configurations required, from the beginning through the deployment.
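For flavor, a minimal declarative pipeline sketch (reusing the hypothetical /devops/pipeline.sh from approach #1):

pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'mvn clean package' }
    }
    stage('Deploy') {
      steps { sh 'bash /devops/pipeline.sh' }
    }
  }
}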
#5 Advanced (sysadmin team or extra people and knowledge are required)
More instances and technologies will be required.
Kubernetes
Ansible
High availability / Load balancer
Backups
Configuration management
And more automation
This will be necessary when more and more web applications and microservices are required in your company/enterprise.
#6 SaaS
All the previous approaches could be simplified using world-class platforms like:
Jelastic
Heroku
OpenShift, etc.
I have an AWS EC2 Instance running Ubuntu.
I've installed Parse Server and MongoDB on it. I noticed that whenever I close the terminal on my laptop, my Android app cannot reach the server.
So my question is: can I close the terminal window, leave the instance running on AWS, and still make use of my Parse Server?
I solved it using the nohup command:
$ nohup parse-server --appId APP_ID --masterKey MASTER_KEY --databaseURI DATABASE_URI &
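nohup keeps the process alive after the session closes, and by default its output goes to nohup.out in the directory where it was started. To stop the server later (a sketch, using the same ps-and-kill approach as in the answers above):

ps aux | grep parse-server
kill <pid>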