I'm new to GitLab and AWS EC2 and still learning.
I have a pipeline on GitLab to deploy an n-tier app with two Angular front ends and a backend in Java Spring Boot + PostgreSQL.
I do the ng build on the GitLab runner and push the result to my EC2 instance with rsync. For the Spring part I generate the .tar on the EC2 instance with some caching, so it takes less than 20 s to fully update the backend; there isn't much of a problem there.
Each part is containerized with Docker under Docker Compose, and I have to run docker-compose build --no-cache to pick up fresh data.
It takes 3 min 38 s to fully deploy all parts of my app and around 2 min if I want to deploy only one part (for example, updating only one of my front ends with a specific job).
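For reference, a selective redeploy that leans on Docker's layer cache instead of --no-cache might look like this sketch (the host, path and service name are placeholders, not taken from my pipeline):
# Sync the freshly built Angular bundle, then rebuild and restart only that service.
$ rsync -az --delete dist/ ec2-user@<EC2_HOST>:/opt/app/front-one/dist/
$ ssh ec2-user@<EC2_HOST> 'cd /opt/app && docker-compose build front-one && docker-compose up -d --no-deps front-one'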
Is that too long, or is it an average time for CI/CD?
If you know a better or more standard way to push a containerized app to production on AWS EC2, feel free to tell me.
Thanks!
Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project that runs in a container on Google Cloud Run. Pushing my code to Cloud Build (installing dependencies, generating static pages and putting everything in a container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality that also needs some data from my PostgreSQL instance running on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine because the project can connect to my Cloud SQL Proxy. While running in Cloud Run this should also work, as Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker cloud builder to Cloud SQL, perhaps just like Cloud Run (fully managed) provides a mechanism that connects using the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials, etc. already live in Secrets Manager, so I should be able to use those details I guess 🤔
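(For reference, reading a value from Secret Manager inside a shell build step can be a one-liner; a minimal sketch, with the secret name as a placeholder:)
# Fetch the database password from Secret Manager during the build
DB_PASSWORD=$(gcloud secrets versions access latest --secret=my-db-password)
export DB_PASSWORD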
You can use whatever container you need to generate your static pages, and download the Cloud SQL Proxy inside it to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
App Engine has an exec wrapper which has the benefit of proxying Cloud SQL for you, so I use that to connect to the DB in Cloud Build (so do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means it will be pathologically slow to connect from anywhere else. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant permission for Cloud Build to access Cloud SQL (a sketch of that is at the end of this answer).
steps:
  - id: 'Connect to DB using appengine wrapper to help'
    name: gcr.io/google-appengine/exec-wrapper
    args:
      [
        '-i', # The image you want to connect to the db from
        '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
        '-s', # The postgres instance
        '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
        '-e', # Get your secrets here...
        'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
        '--', # And then the command you want to run, in my case a database migration
        'python',
        'manage.py',
        'migrate',
      ]
substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
* unless you're willing to pay more and get very stung by Beta software, in which case you can use Cloud Build workers (at the time of writing they are in Beta, anyway... I'll come back and update if they make it into production and fix the issues)
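On the permissions note above, a rough sketch of granting the default Cloud Build service account access to Cloud SQL (the project ID and project number are placeholders):
# Allow Cloud Build's service account to connect through the Cloud SQL Proxy
$ gcloud projects add-iam-policy-binding MY_PROJECT_ID \
    --member=serviceAccount:MY_PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
    --role=roles/cloudsql.client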
The ENV VARS (including DB connections) are not available during build steps.
However, you can use ENTRYPOINT (of Docker) to run commands when the container runs (after completing the build steps).
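As a rough illustration (the script name, migration command and start command below are placeholders, not taken from this answer), such an entrypoint script could look like:
#!/bin/sh
# entrypoint.sh: run DB migrations at container start, when the
# connection details are available, then hand off to the app.
set -e
./run-migrations.sh      # placeholder: your migration command
exec node server.js      # placeholder: your app's start command
The Dockerfile's ENTRYPOINT would then point at this script.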
I needed to run DB migrations when a new build was deployed (i.e. when the container starts running), and using ENTRYPOINT (pointing to a file/command) I was able to run the migrations (which require DB connection details that are not available during the build process).
The "how to" part is pretty brief and is located here: https://stackoverflow.com/a/69088911/867451
I have followed this guide to create a CI/CD pipeline with GitLab, Docker and AWS EC2. I managed to get it working with some modifications, but now the problem I have is the following:
The first time I push my code the pipeline works fine: images are stored in GitLab's Container Registry, then pulled and deployed on my EC2 instance.
If I push a new commit to my repo, when the deploy.sh script pulls images from the Container Registry I get this response: [MY_IMAGE] is up-to-date, when in reality it is not. In fact, if I run the command manually from the AWS machine it detects the new images and pulls them.
I tried tagging images with $CI_COMMIT_SHA, but no luck.
Is there any way I can make this work properly?
I figured it out: I wasn't setting the right environment variables in setup_env.sh. Now everything is OK.
As I am working with Docker, I need help creating a container or image from an existing AWS box. On my AWS box our application is installed and initialized.
Our application takes quite a while to initialize, so I want to deploy a container with the application already installed at the time the box launches. As I understand it, if I capture a Docker container it will already have my application initialized, so I can save the application initialization time.
I am launching the machine through Ansible in an AWS VPC, so I can pull the Docker container there.
Can anyone help me with how to do this?
With Thanks,
Ezhilmurugan M I
If you docker commit your changes into an image with a tag, you can then push to a registry, and then pull down the images on another server.
$ docker commit <hash or name> yourusername/red_panda
$ docker push yourusername/red_panda
On other host
$ docker pull yourusername/red_panda
You could also export the container's filesystem, transfer it however you want, and then import it as an image on the new server.
$ docker export red_panda > latest.tar
$ cat latest.tar | docker import - exampleimagelocal:new
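Note that docker export/import flattens the container's filesystem and drops the image metadata. If you want to move the image itself with its layers and metadata intact, docker save and docker load are the usual pairing; a quick sketch:
$ docker save yourusername/red_panda > red_panda_image.tar
# copy the tarball to the other host, then:
$ docker load < red_panda_image.tar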
I am looking for alternative methods of deploying a Play application to Elastic Beanstalk. It is a single-page app that relies on Ember.js. It would be nice to be able to edit the contents of the /public folder so I don't need to rebuild the Docker image every time something is fixed on the Ember side that doesn't affect the Play app itself.
I am currently using sbt's docker:stage command and zipping the generated docker folder along with this Dockerfile and Dockerrun.
Dockerfile
FROM java:latest
WORKDIR /opt/docker
ADD stage /
RUN ["chown", "-R", "daemon:daemon", "."]
EXPOSE 9000
USER daemon
ENTRYPOINT ["bin/myapp", "-Dconfig.resource=application-prod.conf"]
CMD []
Dockerrun
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{ "ContainerPort": "9000" }],
  "Volumes": []
}
Once I zip the file I upload it using the Beanstalk console. But this involves rebuilding the app every time a typo is fixed on the front end. It is annoying because it means all the updated front-end code has to wait until I get a chance to push it up so the boss can view it and give feedback. It would be nice if there was a way to have the /public folder (Play just serves /public/index.html) accessible so the front-end dev could access it directly for his edits.
Ideally I would like some method that can be used for both development and production. I don't know the requirements imposed by Beanstalk for it to properly spin up extra instances of the app when needed. Maybe something where, when the instance starts, it does git pull on the backend repo and git pull on the front-end repo, then runs my custom build script for Ember to generate the /dist folder, moves it into Play's /public folder and creates gzips of each file. Then it starts the Play app. Then the front-end dev could ssh into the development instance and do git pull and ember build as needed for his edits.
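For illustration, that refresh step might boil down to something like this sketch (the directory layout and asset paths are assumptions):
# On the development instance: pull the latest front end, rebuild it,
# and drop the output into Play's public folder with gzipped copies.
$ cd ~/frontend && git pull
$ ember build --environment=production
$ rsync -a --delete dist/ ~/backend/public/
$ gzip -k -f ~/backend/public/assets/*.js ~/backend/public/assets/*.css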
It would also be nice for the development instance to run the Play server using run or ~run so I can just do git pull and have it rebuild the backend.
Or maybe I am approaching this in the completely wrong way. I have never done any of this before so I am sort of guessing my way through all of it.
Thanks for any suggestions and pointers in the correct direction.
Adam
Edit
Since we are really only using Play as a RESTful API, would it be better to just run an nginx/Apache server on something like EC2 and then use Beanstalk to handle the Play app, without it serving any content besides API calls? I would assume the EC2 nginx instance could be pretty small, since only the first access would pull files from the HTTP server. After that it is all API calls. Then we run the Play app from Beanstalk so it can handle load balancing for the API. This at least saves me from rebuilding the image for front-end edits. Would this be a more correct setup?
I found fragmented instructions here and in some other places about deploying a Play 2 app on Amazon EC2, but I did not find any neat way to deploy using Beanstalk.
Play is a nice framework and AWS Elastic Beanstalk is one of the most popular services, so why is there no official instruction for doing this?
Has anyone found any better solution?
Deploying a Play2 app on elastic beanstalk is now easy with Docker Containers in combination with sbt's experimental docker feature.
In build.sbt specify the exposed docker ports:
dockerExposedPorts in Docker := Seq(9000)
You should automate the following steps, but you can try this out manually to test that it works (a short shell recap follows the list):
Generate a Dockerfile for the project by running the command: sbt docker:stage.
Go to the ./target/docker/ directory.
Create an elastic beanstalk Dockerrun.aws.json file with the following contents:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
Zip up everything in that directory, let's say into a file called play2-test-docker.zip. The zip file should contain the files: Dockerfile, Dockerrun.aws.json, and files/* directory.
Go to the AWS Beanstalk console and create a new application using m3.medium or any instance type with enough memory for the JVM to run. Any instance with too little memory will result in a JVM error.
Select "Docker Container" in the Predefined Configuration dropdown.
In the application selection screen, select "Upload" and select the zip file you created earlier. Launch the app and then go brew some tea. This can take a very long time. Minutes. Subsequent deployments of the same app version should be slightly quicker.
Once the app is running and green in the aws console, click on the app's url and you should see the welcome screen of the application (or whatever your index file is).
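For reference, the manual part of these steps roughly comes down to the following (assuming Dockerrun.aws.json has already been written into target/docker/):
$ sbt docker:stage
$ cd target/docker
$ zip -r play2-test-docker.zip .
# then upload play2-test-docker.zip through the Beanstalk console as described above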
Here's my solution that doesn't require any additional services/containers like Docker or Jenkins.
Create a dist folder in the root of your Play application's directory. Create a Procfile file containing the following contents and put it in the dist folder (EB requires port 5000):
web: ./bin/YOUR_APP_FILE_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
The YOUR_APP_FILE_NAME is the name of the executable in the bin directory, which is inside the .zip created by activator dist.
After running activator dist, you can just upload the created zip file into Elastic Beanstalk and it will automatically deploy the app. You also put whatever .ebextensions folders and configuration files you require for Elastic Beanstalk configuration into the dist folder. For example, I have dist/.ebextensions/nginx/conf.d/proxy.conf for NGINX reverse proxy settings and dist/.ebextensions/env.config for environment variables.
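The packaging step itself is just the following (the zip name depends on your app's name and version):
$ activator dist
# the deployable zip ends up under target/universal/
$ ls target/universal/*.zip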
Edit 2016: There's now a much better way to deploy your Playframework apps onto ElasticBeanstalk using the new Java SE containers.
Here's an article that walks you through deploying step by step using Jenkins to build and deploy your project:
https://www.davemaple.com/articles/deploy-playframework-elastic-beanstalk-jenkins/
You can use custom AMIs that I keep updated here:
https://github.com/davemaple/playframework-nginx-elastic-beanstalk
These run Nginx + Playframework and support standard zip files created using "activator dist".
We also saw this as being too much of a pain and have added native Play 2 support to Boxfuse to address this.
You can now simply do boxfuse run my-play-app-1.0.zip -env=prod and this will automatically:
create a minimal AMI tailor-made for your Play 2 app
create an elastic IP
create a security group with the correct permissions
launch an instance of your app
All future updates are performed as blue/green deployments with zero downtime.
This also works with Elastic Load Balancers and Auto-Scaling Groups and the Boxfuse free tier is designed to fit the AWS free tier.
You can read more about it here: https://boxfuse.com/blog/playframework-aws
Disclaimer: I'm the founder and CEO of Boxfuse
I had some problems with other solutions found here and there. I guess that the problem is that I'm developing on Play 2.4.
Anyway, I was able to deploy the app to Beanstalk using Typesafe Activator and Docker:
In build.sbt I added these lines:
import com.typesafe.sbt.packager.docker.{ExecCmd, Cmd}
// [...]
dockerCommands := Seq(
  Cmd("FROM", "java:openjdk-8-jre"),
  Cmd("MAINTAINER", "myname"),
  Cmd("EXPOSE", "9000"),
  Cmd("ADD", "stage /"),
  Cmd("WORKDIR", "/opt/docker"),
  Cmd("RUN", "[\"chown\", \"-R\", \"daemon\", \".\"]"),
  Cmd("RUN", "[\"chmod\", \"+x\", \"bin/myapp\"]"),
  Cmd("USER", "daemon"),
  Cmd("ENTRYPOINT", "[\"bin/myapp\", \"-J-Xms128m\", \"-J-Xmx512m\", \"-J-server\"]"),
  ExecCmd("CMD")
)
I went to the project's directory and ran this command in the terminal
$ ./activator clean docker:stage
I opened the [project]/target/docker directory and created the file Dockerrun.aws.json. This was its content:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
In the same target/docker directory, I tested the result, built, checked and ran the image:
$ docker build -t myapp .
$ docker images
$ docker run -p 9000:9000 myapp
As everything was ok, I zipped the content:
$ zip -r myapp.zip *
My zip file had Dockerfile, Dockerrun.aws.json and stage/* files
Finally, I created a new Beanstalk app and uploaded the zip created in the last step. I took care to select "Generic Docker" under "Predefined configuration" when I was creating the app.
Beanstalk only supports WAR deployment and Play doesn't officially support WAR deployment. If you want to use EC2 then you should instead just create an EC2 instance and follow the deployment instructions: http://www.playframework.com/documentation/2.2.x/ProductionDist
Deploying Play 2.x apps on AWS EC2 is quite awkward until you find a better way to do it. Ansible promises a great solution here, though you still need to work through setting up Ansible and its playbook, which should be worth the effort.
I found these reads only recently and have yet to apply them in my project. I hope the following will help you learn more:
Ansible + Play + AWS EC2
Read it to learn more about using Ansible to deploy Play on AWS.
Thanks!
I hope this helps you kick-start things. Please do share any knowledge you gain during the process, or any simpler way to solve this complicated deployment problem.