Why isn't Fabric running my deployment in parallel when I use the -P command line option?

I'm running Fabric on Debian out of a virtualenv to deploy a project to multiple remote Debian servers. When I run the command to deploy to a single server:
time .venv/bin/fabric server1 deploy
The server1 command sets the remote host. Deployment, which pulls code out of a repository and builds a virtualenv for the project on the remote server, takes about 7 minutes:
real 7m49.881s
user 0m52.883s
sys 0m18.345s
I configured passwordless SSH access to 3 servers and now run fabric with the -P option:
time .venv/bin/fabric -P parallel deploy
The parallel command assigns the 3 servers to env.hosts. Deployment now takes about three times as long:
real 22m22.259s
user 2m45.718s
sys 0m53.827s
I used the -P option after reading the Fabric documentation on parallel execution.
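For reference, Fabric 1.x (the env.hosts API used here) also accepts a -z flag to cap the size of the parallel process pool; the pool size of 3 below is just an example:
.venv/bin/fabric -P -z 3 parallel deploy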

Related

How to run bash script on Google Cloud VM?

I found this auto-shutdown script for VM instances on GCP and tried to add it to the VM's metadata.
Here's a link to that shutdown script.
The script is configured so that the VM shuts down after 20 minutes of idling, but it's been a few hours and it never shut down. Are there any more steps I have to take after adding the script to the VM metadata?
Startup scripts are executed when the VM boots. If you execute your "shutdown script" at boot, there will be nothing for it to do. Additionally, in order for this to work, a proper service has to be created, and that service will use this script to detect & shut down the VM in case it's idling.
So, even if the main ashutdown script was executed at boot, there was no idling yet and it did nothing. And since the service wasn't there to run it again, your instance will run indefinitely.
For this to work you need to install everything on the VM in question:
Download all three files to some directory in your vm, for example with curl:
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/ashutdown
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/ashutdown.service
curl -LJO https://raw.githubusercontent.com/GoogleCloudPlatform/ai-platform-samples/master/notebooks/tools/auto-shutdown/install.sh
Make install.sh executable: sudo chmod +x install.sh
Run it: sudo ./install.sh.
This should install & run the ashutdown service in your system.
You can check if it's running with service ashutdown status.
These instructions are for a Debian system, so if you're running CentOS or another flavour of Linux they may differ.
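If install.sh registered the service as a systemd unit (the case on recent Debian images), you can also verify it with the systemctl equivalents:
sudo systemctl status ashutdown
sudo systemctl enable ashutdown   # make sure it comes back after a reboot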

Deploying a Docker container fails on AWS Beanstalk

I have a simple application (Docker) that I want to deploy through AWS Elastic Beanstalk.
I have a zip archive with 2 files:
+simple.jar
+Dockerfile
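For reference, an archive with exactly that layout can be produced with something like this (the archive name is arbitrary):
zip application.zip simple.jar Dockerfile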
The Dockerfile contains these lines:
FROM openjdk:11
WORKDIR /usr/app
COPY ./ ./
EXPOSE 80
CMD ["java", "-jar", "simple.jar"]
I am getting this error during deployment:
[ERROR] An error occurred during execution of command [app-deploy] - [Run Docker Container]. Stop running the command. Error: open file failed with error open /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id: no such file or directory
A local build through "docker build" works and the application is available on port 80.
So I am not sure what to do or how to debug the problem more deeply.
So the problem was solved by changing the Platform branch from
"Docker running on 64bit Amazon Linux 2" to
"Docker running on 64bit Amazon Linux".
The two branches run different versions, but exactly why the container does not run on the first one needs deeper investigation.
I experienced this issue when my deployment had a ".ebextensions" directory. Removing the directory solved the issue.
AWS Notes
On Amazon Linux 2 platforms, instead of providing files and commands in .ebextensions configuration files, we highly recommend that you use Buildfile, Procfile, and platform hooks whenever possible to configure and run custom code on your environment instances during instance provisioning. For details about these mechanisms, see Extending Elastic Beanstalk Linux platforms.

How to keep a server process running on a Google Cloud VM?

This question seems very basic, but I wasn't able to quickly find an answer at https://cloud.google.com/compute/docs/instances/create-start-instance. I'm running a MicroMDM server on a Google Cloud VM by connecting to it using SSH (from the VM instances page in the Google Cloud Console) and then running the command
> sudo micromdm serve
However, I notice that when I shut down my laptop, the server also stops, which is actually why I wanted to run the server in a VM in the first place.
What would be the recommended way to keep the server running? Should I use systemd or perhaps run the process as a Docker container?
When you run the service from the command line, you "attach" it to your shell process; when you terminate your SSH session, your job gets terminated as well.
To make a process run in the background, simply append & at the end of the command, in your case:
sudo micromdm serve &
This way your server stays alive even after you quit your session. (If your shell is configured to send SIGHUP to background jobs on logout, prefix the command with nohup or use disown.)
I also suggest adding that line to the instance startup script if you want the server to always be up, so that you don't have to run the command by hand each time :)
More on Compute Engine startup scripts here.
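A minimal sketch of that, assuming the gcloud CLI and a hypothetical instance name:
gcloud compute instances add-metadata my-vm \
    --metadata startup-script='#! /bin/bash
micromdm serve &'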
As the Using MicroMDM with systemd documentation suggests, you can use systemd to run the MicroMDM service on Linux. First, on the Linux host, create the micromdm.service file, then move it to the location /etc/systemd/system/micromdm.service. Then start the service. This way systemd will keep the service running, and will restart it if it fails or the server restarts.
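A minimal sketch of such a unit file (the binary path is an assumption; adjust it to your installation):
# /etc/systemd/system/micromdm.service
[Unit]
Description=MicroMDM server
After=network.target

[Service]
ExecStart=/usr/local/bin/micromdm serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
After moving the file into place, reload systemd and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable --now micromdm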

How to deploy a java application on a cloud instance, from scratch to an advanced architecture?

I need to deploy my spring-boot application on Compute Engine in Google Cloud Platform. I have already created an instance, and Apache and Maven have been installed through SSH. Further, the war file has been uploaded into the bucket. Can anybody provide me with the remaining commands to deploy the war file on a tomcat instance, or on any other cloud platform with Linux?
Thanks
Deploying to a Google Compute Engine instance is not substantially different from deploying to AWS, Azure or another Linux host provider.
You just need an SSH connection to the remote machine and to install the required software to compile, build, zip, deploy, etc.
I will list some approaches, from basic (manual) to advanced (automated):
#1 Bash scripting
unzip and configure git
unzip and configure java
unzip and configure maven
unzip and configure tomcat (this is not required if spring-boot is used)
configure the Linux host to open port 8080
create a script called /devops/pipeline.sh in your remote cloud linux instance, with the following steps
For war deployment:
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create war
mvn clean package
# move war to deploy tomcat folder
cp target/my_app.war /my/tomcat/webapps
# stop tomcat
bash /my/tomcat/shutdown.sh
# start tomcat
bash /my/tomcat/startup.sh
Or, for spring-boot startup:
# get the source code
cd /tmp/folder/3dac58b7
git clone http://github.com/myrepo.git .
# create jar
mvn clean package
# kill or stop the application
killall java
# start the application
java $JAVA_OPTS -jar $jar_file_name
After pushing to git, just connect to your instance using SSH and execute:
bash /devops/pipeline.sh
Improvements: parametrize the repository name, branch name, mvn profile and database credentials; create a tmp/uuid folder on every execution and delete it after the deploy; optimize the start and stop of the application using its pid; etc.
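A sketch of the spring-boot variant with some of those improvements applied (the repository URL, paths and file names are placeholders):
#!/bin/bash
# /devops/pipeline.sh -- parametrized sketch; every name below is an example
REPO_URL=${1:?usage: pipeline.sh <repo-url> [branch]}
BRANCH=${2:-master}
WORKDIR=/tmp/folder/$(uuidgen)   # fresh folder on every execution

mkdir -p "$WORKDIR" && cd "$WORKDIR" || exit 1
git clone -b "$BRANCH" "$REPO_URL" .
mvn clean package

# stop the previous instance via its recorded pid instead of killall java
[ -f /devops/app.pid ] && kill "$(cat /devops/app.pid)" 2>/dev/null

# keep the jar outside the temporary folder so the folder can be removed
cp target/*.jar /devops/app.jar
nohup java $JAVA_OPTS -jar /devops/app.jar > /devops/app.log 2>&1 &
echo $! > /devops/app.pid

cd / && rm -rf "$WORKDIR"        # delete the tmp/uuid folder after deploy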
#2 Docker
Install docker in your remote cloud linux instance
Create a Dockerfile with all the steps for war or springboot (approach #1) and store it close to your source code (I mean in your git repository); a sketch is shown after these steps
Perform a git push of your changes
Connect to your remote cloud linux instance using ssh:
Build your docker image: docker build ...
Delete the previous container and run a new version:
docker rm my_app -f
docker run -d --name my_app -p 8080:8080 my-image-name
(the last argument is the image name used in docker build, not the container name)
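A minimal Dockerfile sketch for the spring-boot case (jar name and port are assumptions):
FROM openjdk:11
WORKDIR /usr/app
# copy the jar produced by mvn clean package
COPY target/my_app.jar ./
EXPOSE 8080
CMD ["java", "-jar", "my_app.jar"]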
In the previous approaches, build operations are performed on the remote server, so several tools need to be installed there. In the following approaches, the build is performed on an intermediate server and only the deploy is executed on the remote server. This is a little better.
#3 Local Build (an extra instance is required)
In this approach, the build is performed on the developer machine and the result is uploaded to some kind of repository. I advise using docker instead of just a war or jar compilation.
In order to build and upload the docker image, one of these docker registries is required:
Docker simple registry
Amazon Elastic Container Registry (ECR)
Docker Hub
Harbor
JFrog Container Registry
Nexus Container Registry
Portus
Azure Container Registry
Choose one and install it in a new server. Configure your developer machine and your remote server to point to your new docker registry.
Final steps are:
Perform a docker build on your developer machine. This will create a new docker image of your java application (tomcat + war, or springboot jar)
Upload your local image to your new docker registry with something like:
docker push example.com/test-image
Connect to your remote cloud linux instance using ssh and just download the docker image
docker pull example.com/test-image
On the remote server, just start your newly downloaded image with docker run...
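For instance, reusing the example image name from above:
docker run -d --name my_app -p 8080:8080 example.com/test-image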
#4 Use a continuous integration server (an extra instance is required)
Same as #3, but not on the developer machine: all the steps are performed on another server, called a continuous integration server.
#4.1 Use a continuous integration server (an extra instance is required)
Install Jenkins or another continuous integration server on the new instance
Configure plugins and other required things in jenkins in order to enable a webhook url: https://jrichardsz.github.io/devops/configure-webhooks-in-github-bitbucket-gitlab
Create a job in jenkins to call the script of approach #1 or execute the docker commands of approach #2. If you can, approach #3 would be perfect.
Configure your SCM (github, bitbucket, gitlab, etc) to point to the webhook url published by Jenkins.
When you are ready to deploy, just push the code to your SCM; jenkins will be notified and will execute the previously created job. As you can see, no human is required to deploy the application on the server (with the exception of the developer's push).
Note: At this point, you could migrate the scripts of approaches #1 and #2 to:
Jenkins pipeline script
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_scripted_pipeline_java_mvn_basic-js
Jenkins declarative pipeline
https://gist.github.com/jrichardsz/a62e3790c6db7654808528bd5e5a385f#file-jenkins_declarative_pipeline_hello_world-js
These are more advanced and scalable approaches to mapping all the commands and configurations required from the beginning to the deployment.
#5 Advanced (sysadmin team or extra people and knowledge are required)
More instances and technologies will be required.
Kubernetes
Ansible
High availability / Load balancer
Backups
Configurations Management
And more automations
This will be necessary when more and more web applications and microservices are required in your company/enterprise.
#6 SaaS
All the previous approaches could be simplified using WORLD CLASS platforms like:
Jelastic
Heroku
Openshift, etc

Stop detached strongloop application

I installed loopback on my server (Ubuntu), created an app, and used the command slc run to run it... everything works as expected.
Now I have one question and also one issue I am facing:
The question: I need to use the slc run command but keep the app "alive" after I close the terminal. For that I used the --detach option and it works. What I wanted to know is whether the --detach option is best practice, or whether I need to do it in a different way.
The issue: after I use --detach I don't really know how to stop it. Is there a command I can use to stop the process from running?
To stop a --detached process, go to the same directory it was run from and do slc runctl stop. There are a number of runctl commands, but stop is probably the one you are most interested in.
Best practice is a longer answer. The short version is: don't ever use --detach, and do use an init script to run your app and keep it running (probably Upstart, since you're on Ubuntu).
Using slc run
If you want to run slc run as an Upstart job you can install strong-service-install with npm install -g strong-service-install. This will give you sl-svc-install, a utility for creating Upstart and systemd services.
You'll end up running something like sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- slc run . which should create an Upstart job named my-app which will run your app as your uid from the app's root. Your app's stdout/stderr will be sent to /var/log/upstart/my-app.log. If you are using a version of Ubuntu older than 12.04 you'll need to specify --upstart 0.6 and your logs will end up going to syslog instead.
Using slc pm
Another, possibly easier, route is to use slc pm, which operates at a level above slc run and happens to be easier to install as an OS service. For this route you already have everything installed. Run sudo slc pm-install and a strong-pm Upstart service will be installed, as well as a strong-pm user to run it as, with a $HOME of /var/lib/strong-pm.
Where the PM approach gets slightly more complicated is that you have to deploy your app to it. Most likely this is just a matter of going to your app root and running slc deploy http://localhost:8701/, but the specifics will depend on your app. You can configure environment variables for your app, deploy new versions, and your logs will show up in /var/log/upstart/strong-pm.log.
General Best Practices
For either of the options above, I recommend not doing npm install -g strongloop on your server since it includes things like yeoman generators and other tools that are more useful on a workstation than a server.
If you want to go the slc run route, you would do npm install -g strong-supervisor strong-service-install and replace your slc run with sl-run.
If you want to go the slc pm route, you would do npm install -g strong-pm and replace slc pm-install with sl-pm-install.
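Following that substitution, the sl-svc-install example from above would become something like (same hypothetical paths):
sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- sl-run .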
Disclaimer
I work at StrongLoop and primarily work on these tools.
View the status of running apps using:
slc ctl status
Example output:
Service ID: 1
Service Name: app
Environment variables:
No environment variables defined
Instances:
Version Agent version Debugger version Cluster size Driver metadata
5.2.1 2.0.3 n/a 1 N/A
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling? Tracing? Debugging?
1.1.2708 2708 0
1.1.5836 5836 1 0.0.0.0:3001
Service ID: 2
Service Name: default
Environment variables:
No environment variables defined
Instances:
Version Agent version Debugger version Cluster size Driver metadata
5.2.1 2.0.3 n/a 1 N/A
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling? Tracing? Debugging?
2.1.2760 2760 0
2.1.1676 1676 1 0.0.0.0:3002
To stop the first app, use slc ctl stop:
slc ctl stop app
Service "app" hard stopped
What if I have to run the application as a cluster? Can I still do it via the created Upstart job?
Like:
sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- slc run --cluster 4 .
I tried doing this but /etc/init/my-app.conf does not show any information about the cluster.