I have a Beanstalk environment which uses Docker.
Each time I push something, Jenkins builds and uploads my new snapshot to S3 (I use S3 to store my versions). Each version is a zip which contains my app and my Dockerfile.
Then I update my Beanstalk environment with the version I just uploaded (Beanstalk creates a new application version from the one uploaded to S3; if the version already exists it replaces it, which is useful for snapshots).
Everything works fine the first time I deploy the version.
But when I do it a second time, it continues to work, but it seems my latest version is not used: Docker does not rebuild my freshly updated app.
Why is this? Did I miss something? Here is my Dockerfile:
Basically, it seems the update-environment call refuses to redeploy the same version label - and that's why we always rely on ${maven.build.timestamp} and friends. Here's your retouched pom :]
Notice I'm using properties - that's the suggested style for the latest version (oops, someone forgot to update the docs).
I've decided to try it with the latest 1.4.0-SNAPSHOT. Here's what you should add to your profile:
<profiles>
  <profile>
    <id>awseb</id>
    <properties>
      <maven.deploy.skip>true</maven.deploy.skip>
      <beanstalker.region>eu-west-1</beanstalker.region>
      <beanstalk.applicationName>wisdom-demo</beanstalk.applicationName>
      <beanstalk.cnamePrefix>wisdom-demo</beanstalk.cnamePrefix>
      <beanstalk.environmentName>${beanstalk.cnamePrefix}</beanstalk.environmentName>
      <beanstalk.artifactFile>${project.basedir}/target/${project.build.finalName}.zip</beanstalk.artifactFile>
      <beanstalk.environmentRef>${beanstalk.cnamePrefix}.elasticbeanstalk.com</beanstalk.environmentRef>
      <maven.build.timestamp.format>yyyyMMddHHmmss</maven.build.timestamp.format>
      <beanstalk.s3Key>apps/${project.artifactId}/${project.version}/${project.artifactId}-${project.version}-${maven.build.timestamp}.zip</beanstalk.s3Key>
      <beanstalk.useLatestVersion>true</beanstalk.useLatestVersion>
      <beanstalk.versionLabel>${project.artifactId}-${project.version}-${maven.build.timestamp}</beanstalk.versionLabel>
      <beanstalk.applicationHealthCheckURL>/ping</beanstalk.applicationHealthCheckURL>
      <beanstalk.instanceType>m1.small</beanstalk.instanceType>
      <beanstalk.keyName>aldrin#leal.eng.br</beanstalk.keyName>
      <beanstalk.iamInstanceProfile>aws-elasticbeanstalk-ec2-role</beanstalk.iamInstanceProfile>
      <beanstalk.solutionStack>64bit Amazon Linux 2014.* running Docker 1.*</beanstalk.solutionStack>
      <beanstalk.environmentType>SingleInstance</beanstalk.environmentType>
    </properties>
    <build>
      <plugins>
        <plugin>
          <groupId>br.com.ingenieux</groupId>
          <artifactId>beanstalk-maven-plugin</artifactId>
          <version>1.4.0-SNAPSHOT</version>
          <executions>
            <execution>
              <id>default-deploy</id>
              <phase>deploy</phase>
              <goals>
                <goal>upload-source-bundle</goal>
                <goal>create-application-version</goal>
                <goal>put-environment</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
From the example above, just tweak your cnamePrefix and the last three properties to fit your setup.
So if you want to deploy, simply:
$ mvn -Pawseb deploy
Or, if you want to boot the latest deployed version from scratch (thus using useLatestVersion), simply do:
$ mvn -Pawseb -Dbeanstalk.versionLabel= beanstalk:create-environment
By setting the versionLabel to blank, you effectively activate the useLatestVersion behaviour: when no version is supplied, use the latest one.
Oh, a deployment failed?
Easy peasy:
$ mvn -Pawseb beanstalk:rollback-version
Thank you for your explanation and the link to the blog post.
I followed these step-by-step instructions and successfully deployed my first Wisdom application in a Docker container on AWS Elastic Beanstalk.
I then updated the Java source code, compiled with mvn package, tested locally, and deployed the new ZIP file again using the AWS Console.
My AWS Elastic Beanstalk environment was correctly updated.
So, it looks like the deployment problem you are observing lies in the Maven AWS Elastic Beanstalk plugin that deploys the code.
Manual deploys work correctly. Since this Maven plugin is a third-party, open-source project, I am not the right person to investigate this. I would suggest you contact the project maintainer and/or open an issue in their issue tracking system.
As a workaround, you can deploy manually (or script this procedure from your CI/CD environment):
Copy your artifact to your AWS Elastic Beanstalk bucket:
aws s3 --region <REGION_NAME> cp ./target/YOUR_ARTIFACTID-1.0-SNAPSHOT.zip s3://<YOUR_BUCKET_NAME>/20141128-210900-YOUR_ARTIFACTID-1.0-SNAPSHOT.zip
Create an application version with your zip file
aws elasticbeanstalk create-application-version --region <REGION_NAME> --application-name <YOUR_APPLICATION_NAME> --version-label 20141128-212100 --source-bundle S3Bucket=<YOUR_BUCKET_NAME>,S3Key=20141128-210900-YOUR_ARTIFACTID-1.0-SNAPSHOT.zip
Deploy that version
aws elasticbeanstalk update-environment --region <YOUR_REGION_NAME> --environment-name <YOUR_ENVIRONMENT_NAME> --version-label 20141128-212100
These three steps can be automated from Maven or Jenkins; I will leave that to you as an exercise :-)
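As an illustration, here is a minimal sketch of how these three steps could be wired together in a shell script; the region, bucket, application, environment, and artifact names are placeholders to substitute with your own:

#!/bin/bash
set -euo pipefail

# Placeholder values -- substitute your own region, bucket, application, environment and artifact.
REGION="eu-west-1"
BUCKET="my-beanstalk-bucket"
APPLICATION="my-application"
ENVIRONMENT="my-environment"
ARTIFACT="./target/my-artifact-1.0-SNAPSHOT.zip"

# Use a timestamp to build a unique version label so repeated deployments are never skipped.
VERSION_LABEL="$(date +%Y%m%d-%H%M%S)"
S3_KEY="${VERSION_LABEL}-$(basename "$ARTIFACT")"

# 1. Copy the artifact to the Elastic Beanstalk bucket.
aws s3 cp "$ARTIFACT" "s3://${BUCKET}/${S3_KEY}" --region "$REGION"

# 2. Create an application version pointing at the uploaded zip.
aws elasticbeanstalk create-application-version \
    --region "$REGION" \
    --application-name "$APPLICATION" \
    --version-label "$VERSION_LABEL" \
    --source-bundle "S3Bucket=${BUCKET},S3Key=${S3_KEY}"

# 3. Deploy that version to the environment.
aws elasticbeanstalk update-environment \
    --region "$REGION" \
    --environment-name "$ENVIRONMENT" \
    --version-label "$VERSION_LABEL"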
I have to update a website on AWS using serverless deploy.
This website was not created by me; it's the first time I've worked with serverless and AWS solutions.
I have the source code, deploy files, etc., from the last person in charge.
I run a before-deploy.js script to create all the local files and check them to see if the updates went OK. Everything's fine.
But any time I try to deploy using the simple command "serverless deploy", it fails and prints this error:
CREATE_FAILED: MainStaticSite (AWS::S3::Bucket)
“mywebsite.com” already exists
I don't really understand this error: I know the website already exists, but I just want to update it.
I tried more specific commands like:
serverless deploy -v --stage production --region eu-west-1
But this one only shows this output:
Framework Core: 3.10.1
Plugin: 6.2.0
SDK: 4.3.2
PS
And it doesn't update the website.
I changed the keys on AWS; maybe it's because of this?
It looks like it doesn't want to overwrite the existing files, but I have no idea why.
If someone has an answer or a lead, I'd be grateful.
Thank you :)
It appears that one must provide a full new task definition for each service update, even though most of the time new deployments consist exclusively of an update to one of the underlying Docker images.
While this is understandable as a core architectural choice, it is quite cumbersome. Is there a command-line option that makes this easier, given that the full JSON spec for task definitions is quite complex?
Right now developers need to provide complex scripts and deployment orchestration to achieve this relatively routine task in their CI/CD processes.
I see attempts at this here and here. These solutions do not appear to work in all cases (for example, for Fargate launch types).
I know that if the updated image re-uses the same tag this problem is made easier; however, in dev cultures that value reproducibility and auditability, that is simply not a reasonable option.
Is there no other option than to leverage both the AWS API & JSON manipulation libraries?
EDIT: It appears this project does a fairly good job: https://github.com/fabfuel/ecs-deploy
I found a few approaches:
As mentioned in my comment, use the ecs-deploy script per the GitHub link.
Create a task definition via the --generate-cli-skeleton option on the awscli.
Fill out all details except for the execution role ARN, task role ARN, and image.
These cannot be filled out because they will change per commit or per environment you want to deploy to.
Commit this skeleton to git, so it is part of your workspace on the CI.
Then use a JSON traversing/parsing library or utility such as jq (https://jqplay.org/) to replace the role ARNs and the image name at build time on the CI (a rough sketch of this follows at the end of this answer).
Use https://github.com/fabfuel/ecs-deploy.
If you want to update only the tag of an existing task definition:
ecs deploy <CLUSTER NAME> <SERVICE NAME> --region <REGION NAME> --tag <NEW TAG>
e.g. ecs deploy default web-service --region us-east-1 --tag v2.0
In your CI/CD you can use the git hash:
git rev-parse HEAD will return a hash like d63c16cd4d0c9a30524c682fe4e7d417faae98c9.
docker build -t image-name:$(git rev-parse HEAD) .
docker push image-name:$(git rev-parse HEAD)
And use the same tag on the task:
ecs deploy default web-service --region us-east-1 --tag $(git rev-parse HEAD)
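For the skeleton-plus-jq approach above, here is a minimal sketch of what the build-time substitution and deployment could look like. The file name task-definition-skeleton.json, the cluster, service and task family names, and the ARNs are placeholder assumptions, and the sketch assumes a single container definition in the skeleton:

#!/bin/bash
set -euo pipefail

# Values supplied by the CI build -- all placeholders, substitute your own.
IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/web:$(git rev-parse HEAD)"
TASK_ROLE_ARN="arn:aws:iam::123456789012:role/my-task-role"
EXECUTION_ROLE_ARN="arn:aws:iam::123456789012:role/my-execution-role"

# Fill in the image and role ARNs in the committed skeleton at build time.
jq --arg image "$IMAGE" \
   --arg taskRole "$TASK_ROLE_ARN" \
   --arg execRole "$EXECUTION_ROLE_ARN" \
   '.containerDefinitions[0].image = $image
    | .taskRoleArn = $taskRole
    | .executionRoleArn = $execRole' \
   task-definition-skeleton.json > task-definition.json

# Register the new revision and point the service at the latest revision of the family.
aws ecs register-task-definition --cli-input-json file://task-definition.json
aws ecs update-service --cluster my-cluster --service web-service \
    --task-definition my-task-family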
Is it a good approach to commit the .elasticbeanstalk/config.yml inside the git repo of a project which uses eb deploy?
We want to deploy using our CI, so we cannot use the interactive eb init.
What we are thinking of now is to define our dev, uat and prod environments inside that config.yml (if possible) and to point to the right environment using eb deploy.
We saw that we could perform eb init with all the necessary parameters in ebcli version 2, but not in version 3 anymore. So it seems the approach has changed?
Can someone explain how to deploy EB for multiple environments, without interaction?
We want to deploy using our CI and so we can not use the interactive eb init
You can suppress the interactive mode as follows:
eb init --platform <platform-name> --region <region-name> <application-name>
Is it a good approach to commit the .elasticbeanstalk/config.yml inside the git repo of a project which uses eb deploy?
Can someone explain how to deploy EB for multiple environments, without interaction?
By design, the EB CLI avoids committing the .elasticbeanstalk/ directory, since it can contain developer-specific information which, when committed to version control, can cause confusion; so it is usually best kept out of version control. That said, you are free to commit it; just ensure there's no sensitive information in it. Logs and saved configurations are usually stored in .elasticbeanstalk/.
You can copy the pertinent portions of the .elasticbeanstalk/config.yml file into a root-level file from which the CI could read information such as the environment name to use.
Locally, you could create a pre-commit Git hook that reads the default environment name from the .elasticbeanstalk/config.yml file into that root-level file -- let's call it .environment_config.sh. It could be a statement as simple as export BEANSTALK_ENVIRONMENT_NAME=<environment name from .elasticbeanstalk/config.yml> (a rough sketch of such a hook follows after the CI steps below).
On the CI server:
3.1. Ensure PWD is git init-ed. Systems such as Jenkins usually are git init-ed with the necessary branch, so CI can simply source .environment_config.sh at this point and load the name of the environment to deploy.
3.2. eb init --platform <platform-name> --region <region-name> <application-name>
3.3. eb use $BEANSTALK_ENVIRONMENT_NAME
3.4. eb deploy
(You could combine 3.3. and 3.4. by performing eb deploy $BEANSTALK_ENVIRONMENT_NAME instead; I just wanted to demonstrate the use of eb use)
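As an illustration of the pre-commit hook mentioned above, here is a rough sketch; it assumes the environment name appears in .elasticbeanstalk/config.yml on a line of the form environment: <name>, which may differ in your setup:

#!/bin/bash
# .git/hooks/pre-commit (sketch) -- regenerate the root-level .environment_config.sh
# from the developer-specific .elasticbeanstalk/config.yml before each commit.
set -euo pipefail

# Assumption: the default environment name is on a line such as "    environment: my-env".
ENV_NAME="$(awk '/^[[:space:]]*environment:/ { print $2; exit }' .elasticbeanstalk/config.yml)"

echo "export BEANSTALK_ENVIRONMENT_NAME=${ENV_NAME}" > .environment_config.sh
git add .environment_config.sh

On the CI server, steps 3.1-3.4 then reduce to sourcing .environment_config.sh and running eb init and eb deploy with that environment name, as shown above.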
The EB CLI is really meant to be used from a workstation. I think you'd be better off scripting your CI with the AWS CLI.
A deployment with eb deploy will archive your code in S3 (or CodeCommit), create a new application version then update the environment with the new version label. All of those operations are supported with AWS CLI commands.
Or, you could write your own deployment script in Python with boto3. That's an easy option too. That's basically what the EB CLI is.
I found fragmented instructions here and in some other places about deploying a Play 2 app on Amazon EC2, but I did not find any neat way to deploy using Beanstalk.
Play is a nice framework and AWS Beanstalk is one of the most popular services, so why are there no official instructions for doing this?
Has anyone found a better solution?
Deploying a Play 2 app on Elastic Beanstalk is now easy with Docker containers in combination with sbt's experimental Docker feature.
In build.sbt specify the exposed docker ports:
dockerExposedPorts in Docker := Seq(9000)
You should automate the following steps, but you can try this out manually to test that it works:
Generate a Dockerfile for the project by running the command: sbt docker:stage.
Go to the ./target/docker/ directory.
Create an Elastic Beanstalk Dockerrun.aws.json file with the following contents:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
Zip up everything in that directory, let's say into a file called play2-test-docker.zip. The zip file should contain the files: Dockerfile, Dockerrun.aws.json, and files/* directory.
Go to the AWS Beanstalk console and create a new application using m3.medium or any instance type with enough memory for the JVM to run. Any instance with too little memory will result in a JVM error.
Select "Docker Container" in the Predefined Configuration dropdown.
In the application selection screen, select "Upload" and select the zip file you created earlier. Launch the app and then go brew some tea. This can take a very long time. Minutes. Subsequent deployments of the same app version should be slightly quicker.
Once the app is running and green in the aws console, click on the app's url and you should see the welcome screen of the application (or whatever your index file is).
Here's my solution that doesn't require any additional services/containers like Docker or Jenkins.
Create a dist folder in the root of your Play application's directory. Create a Procfile file containing the following contents and put it in the dist folder (EB requires port 5000):
web: ./bin/YOUR_APP_FILE_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
The YOUR_APP_FILE_NAME is the name of the executable in the bin directory, which is inside the .zip created by activator dist.
After running activator dist, you can just upload the created zip file to Elastic Beanstalk and it will automatically deploy the app. You also put whatever .ebextensions folders and configuration files you require for Elastic Beanstalk configuration into the dist folder. For example, I have dist/.ebextensions/nginx/conf.d/proxy.conf for NGINX reverse proxy settings and dist/.ebextensions/env.config for environment variables.
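For illustration, a minimal dist/.ebextensions/env.config sketch could look like the following; the variable name and value are placeholders, not taken from the original setup:

# dist/.ebextensions/env.config (sketch) -- sets an environment variable for the app.
option_settings:
  aws:elasticbeanstalk:application:environment:
    APPLICATION_SECRET: change-me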
Edit 2016: There's now a much better way to deploy your Playframework apps onto ElasticBeanstalk using the new Java SE containers.
Here's an article that walks you through deploying step by step using Jenkins to build and deploy your project:
https://www.davemaple.com/articles/deploy-playframework-elastic-beanstalk-jenkins/
You can use custom AMIs that I keep updated here:
https://github.com/davemaple/playframework-nginx-elastic-beanstalk
These run Nginx + Playframework and support standard zip files created using "activator dist".
We also saw this as being too much of a pain and have added native Play 2 support to Boxfuse to address this.
You can now simply do boxfuse run my-play-app-1.0.zip -env=prod and this will automatically:
create a minimal AMI tailor-made for your Play 2 app
create an elastic IP
create a security group with the correct permissions
launch an instance of your app
All future updates are performed as blue/green deployments with zero downtime.
This also works with Elastic Load Balancers and Auto-Scaling Groups and the Boxfuse free tier is designed to fit the AWS free tier.
You can read more about it here: https://boxfuse.com/blog/playframework-aws
Disclaimer: I'm the founder and CEO of Boxfuse
I had some problems with other solutions found here and there. I guess that the problem is that I'm developing on Play 2.4.
Anyway, I was able to deploy the app to Beanstalk using Typesafe Activator and Docker:
In build.sbt I added these lines:
import com.typesafe.sbt.packager.docker.{ExecCmd, Cmd}

// [...]

dockerCommands := Seq(
  Cmd("FROM", "java:openjdk-8-jre"),
  Cmd("MAINTAINER", "myname"),
  Cmd("EXPOSE", "9000"),
  Cmd("ADD", "stage /"),
  Cmd("WORKDIR", "/opt/docker"),
  Cmd("RUN", "[\"chown\", \"-R\", \"daemon\", \".\"]"),
  Cmd("RUN", "[\"chmod\", \"+x\", \"bin/myapp\"]"),
  Cmd("USER", "daemon"),
  Cmd("ENTRYPOINT", "[\"bin/myapp\", \"-J-Xms128m\", \"-J-Xmx512m\", \"-J-server\"]"),
  ExecCmd("CMD")
)
I went to the project's directory and ran this command in the terminal
$ ./activator clean docker:stage
I opened the [project]/target/docker directory and created the file Dockerrun.aws.json. This was its content:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
In the same target/docker directory, I tested the result, built, checked and ran the image:
$ docker build -t myapp .
$ docker images
$ docker run -p 9000:9000 myapp
As everything was ok, I zipped the content:
$ zip -r myapp.zip *
My zip file contained the Dockerfile, Dockerrun.aws.json, and the stage/* files.
Finally, I created a new Beanstalk app and uploaded the zip created in the last step. I took care to select "Generic Docker" under "Predefined configuration" when I was creating the app.
Beanstalk only supports WAR deployment and Play doesn't officially support WAR deployment. If you want to use EC2 then you should instead just create an EC2 instance and follow the deployment instructions: http://www.playframework.com/documentation/2.2.x/ProductionDist
Deploying Play 2.* apps on AWS EC2 is quite painful until you find a better way to do it. Ansible looks like a promising solution for this; you still need to work through setting up Ansible and its playbook, but it should be worth the effort.
I found these reads very recently and have yet to apply them in my project. I hope the following will help you learn more:
Ansible + play + aws ec2
Read it to learn more about using Ansible to deploy Play on AWS.
Thanks!
Hope this helps you kick-start things. Please do share any knowledge you gain during the process, or any simpler way to solve this complicated deployment problem.
I am trying to launch a MapReduce job in an Amazon Elastic MapReduce cluster. My MapReduce job does some pre-processing before generating the map/reduce tasks. This pre-processing requires third-party libs such as javacv and opencv. Following Amazon's documentation, I have included those libraries in HADOOP_CLASSPATH, such that I have a line HADOOP_CLASSPATH= in hadoop-user-env.sh in /home/hadoop/conf/ on the master node. According to the documentation, the entry in this script should be included in hadoop-env.sh. Hence, I assumed that HADOOP_CLASSPATH now has my libs on the classpath. I did this in a bootstrap action. However, when I launch the job, it still throws a class-not-found exception pointing to a class in the jar which is supposed to be on the classpath.
Can someone tell me where I am going wrong? By the way, I am using Hadoop 2.2.0. In my local infrastructure, I have a small bash script that exports HADOOP_CLASSPATH with all the libs included in it and calls hadoop jar -libjars .
I solved this with an AWS EMR bootstrap task to add a jar to the hadoop classpath:
Uploaded my jar to S3
Created a bootstrap script to copy the jar from S3 to the EMR instance and add the jar to the classpath:
#!/bin/bash
hadoop fs -copyToLocal s3://my-bucket/libthrift-0.9.2.jar /home/hadoop/lib/
echo 'export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/home/hadoop/lib/libthrift-0.9.2.jar"' >> /home/hadoop/conf/hadoop-user-env.sh
Saved that script as "add-jar-to-hadoop-classpath.sh" and uploaded it to S3.
My "aws emr create-cluster" command adds the bootstrap script with this argument: --bootstrap-actions Path=s3://my-bucket/add-jar-to-hadoop-classpath.sh
When the EMR spins up the instance will have the file /home/hadoop/conf/hadoop-user-env.sh created and my MR job was able to instantiate the thrift classes in the jar.
UPDATE: I was able to instantiate the Thrift classes from the MASTER node, but not from the CORE node. I SSHed into the CORE node; the lib was properly copied to /home/hadoop/lib and my HADOOP_CLASSPATH setting was there, but I was still getting class-not-found errors at runtime when the mapper tried to use Thrift.
The solution ended up being to use the maven-shade-plugin and embed the Thrift jar:
<plugin>
  <!-- Use the maven shade plugin to embed the thrift classes in our jar.
       Couldn't get the HADOOP_CLASSPATH on AWS EMR to load these classes even
       with the jar copied to /home/hadoop/lib and the proper env var in
       /home/hadoop/conf/hadoop-user-env.sh -->
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <artifactSet>
          <includes>
            <include>org.apache.thrift:libthrift</include>
          </includes>
        </artifactSet>
      </configuration>
    </execution>
  </executions>
</plugin>
When your job is executed, the "controller" logfile contains the actual command line that was executed. This could look something like:
2014-06-02T15:37:47.863Z INFO Fetching jar file.
2014-06-02T15:37:54.943Z INFO Working dir /mnt/var/lib/hadoop/steps/13
2014-06-02T15:37:54.944Z INFO Executing /usr/java/latest/bin/java -cp /home/hadoop/conf:/usr/java/latest/lib/tools.jar:/home/hadoop:/home/hadoop/hadoop-tools.jar:/home/hadoop/hadoop-tools-1.0.3.jar:/home/hadoop/hadoop-core-1.0.3.jar:/home/hadoop/hadoop-core.jar:/home/hadoop/lib/*:/home/hadoop/lib/jetty-ext/* -Xmx1000m -Dhadoop.log.dir=/mnt/var/log/hadoop/steps/13 -Dhadoop.log.file=syslog -Dhadoop.home.dir=/home/hadoop -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,DRFA -Djava.io.tmpdir=/mnt/var/lib/hadoop/steps/13/tmp -Djava.library.path=/home/hadoop/native/Linux-amd64-64 org.apache.hadoop.util.RunJar <YOUR_JAR> <YOUR_ARGS>
The log is located on the master node in /mnt/var/lib/hadoop/steps/ - it's easily accessible when you SSH into the master node (this requires specifying a key pair when creating the cluster).
I've never really worked with what's in HADOOP_CLASSPATH, but if you define a bootstrap action to just copy your libraries into /home/hadoop/lib, that should solve the issue, since /home/hadoop/lib/* is already on the classpath in the command line above.
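For example, a bootstrap action along the lines of this sketch should do it; the bucket name, prefix, and jar names are placeholders:

#!/bin/bash
# Bootstrap-action sketch: copy the extra libraries into /home/hadoop/lib,
# which is already on the classpath (see /home/hadoop/lib/* in the command line above).
hadoop fs -copyToLocal s3://my-bucket/libs/javacv.jar /home/hadoop/lib/
hadoop fs -copyToLocal s3://my-bucket/libs/opencv.jar /home/hadoop/lib/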