I am trying to set up continuous deployment using Jenkins and OpsWorks. I have configured Jenkins, but I don't know how to integrate Jenkins with OpsWorks to auto-deploy using a Chef cookbook.
Is there any plugin available for OpsWorks and Jenkins integration? (I think there is no plugin available from AWS; I don't know why.)
Can I have some steps/suggestions for writing a Chef cookbook to integrate OpsWorks with Jenkins?
I do this by calling the CLI tool in a Jenkins project. Something like this:
aws opsworks --region us-east-1 create-deployment --stack-id <your id> --app-id <your app id> --command "{\"Name\":\"deploy\"}"
You can find the IDs in your stack configuration.
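If you want the Jenkins build result to reflect the deployment outcome, here is a rough sketch of an "Execute shell" build step that triggers the deployment and then polls until it finishes (the stack and app IDs are placeholders, as above):
#!/bin/bash
set -e

# Trigger the deployment and capture its ID.
DEPLOYMENT_ID=$(aws opsworks --region us-east-1 create-deployment \
  --stack-id <your id> --app-id <your app id> \
  --command '{"Name":"deploy"}' \
  --query 'DeploymentId' --output text)

# Poll until the deployment succeeds or fails, so a bad deploy fails the Jenkins job.
while true; do
  STATUS=$(aws opsworks --region us-east-1 describe-deployments \
    --deployment-ids "$DEPLOYMENT_ID" \
    --query 'Deployments[0].Status' --output text)
  case "$STATUS" in
    successful) exit 0 ;;
    failed)     exit 1 ;;
    *)          sleep 15 ;;
  esac
done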
If you want to do continuous deployment, you can also use AWS CodeDeploy instead of Jenkins.
You can now integrate OpsWorks into a CodePipeline:
https://aws.amazon.com/about-aws/whats-new/2016/06/aws-codepipeline-adds-integration-with-aws-opsworks/
This lets you automate the release of updated application code and Chef cookbooks to your applications and instances running in OpsWorks.
This allows you to implement CD for your OpsWorks stack, with or without Jenkins.
I've created an AWS Elastic Beanstalk application and environment. However, I cannot for the life of me figure out how to deploy code to it. Most tutorials I read are for creating a new application directly from the CLI, but I already have one.
I've installed the AWS CLI tools. I created an SSH key pair for the environment and added it to my .ssh folder. I created an IAM profile and logged in with that in my terminal.
If I understand correctly, I need to do eb use [my environment name] so I can then eb deploy to it. But when I use eb list, nothing comes up. How can I connect to the environment that already exists on AWS?
I am using Linux (WSL on Windows). I'm also on the Free Tier of AWS.
You can either use AWS CLI commands or EB CLI commands to deploy your applications.
Configure the source, Git or S3. E.g. I have uploaded my_app.zip to my_bucket in S3.
Create a new application version. It is good practice to use the commit hash as the version label.
aws elasticbeanstalk create-application-version \
    --application-name <EB_APP_NAME> --version-label <version-label> \
    --source-bundle S3Bucket="my_bucket",S3Key=my_app.zip --auto-create-application
Update the environment to point to the new application version. The value of version-label should be the same as in the previous step.
aws elasticbeanstalk update-environment \
    --application-name <EB_APP_NAME> \
    --environment-name <EB_ENV_NAME> \
    --version-label <version-label>
The alternative is to use the EB CLI; eb deploy handles all three steps above. A minimal sketch of the flow is shown after the two steps below.
Initialize EB CLI using eb init.
Deploy using eb deploy.
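A minimal sketch of that flow for an application and environment that already exist (the environment name is a placeholder):
cd my-project/          # your application source directory
eb init                 # pick the region and the existing Beanstalk application
eb list                 # the application's environments should now show up
eb use my-existing-env  # set the default environment for this branch
eb deploy               # package the source and deploy it to that environment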
I'm trying to deploy a Kubernetes application to AWS EKS through Jenkins.
I visited a few blogs that mentioned Jenkins X, but Jenkins X needs to be configured separately, and as per our requirements we need to use our existing Jenkins for the K8s app deployment.
Note: AWS EKS and Jenkins are on separate machines (we are using our existing Jenkins). I may need to create a new EKS environment based on requirements.
Please suggest an AWS EKS plugin for Jenkins, if one exists, that can be used for deployment.
Otherwise:
Is there any way to create a custom Bash script (an automation script) for deploying a K8s application to AWS EKS?
My research so far: AWS provides API/SDK support only for creating and managing clusters, not for deploying the application into the K8s environment (that is done with kubectl).
We can probably create the cluster through the SDK, but how do we deploy a K8s application remotely (since Jenkins is running on another machine)?
Why not configure kubectl for Jenkins and deploy apps with a kubectl apply -f deployment.yaml command?
Once you have the kubectl config you can save it as a secret text credential. I had an assignment for an interview, and here is an example of such a deployment:
https://github.com/mtuktarov/hello
It uses shared lib:
https://github.com/mtuktarov/hello-jenkins-lib
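A minimal sketch of such a Jenkins shell step, assuming the kubeconfig was stored as a secret file credential and exposed to the build as KUBECONFIG_FILE (the deployment name hello-app is a placeholder):
#!/bin/bash
# Use the kubeconfig provided by the Jenkins credentials binding.
export KUBECONFIG="$KUBECONFIG_FILE"

# Apply the manifest and wait for the rollout to complete.
kubectl apply -f deployment.yaml
kubectl rollout status deployment/hello-app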
I finally completed this exercise by creating a Bash automation script, following these steps (a rough sketch of the script follows the list):
Created a Docker image with the application binary.
Created an EKS cluster using eksctl create cluster <PARAM>, which creates the EKS control plane and worker nodes.
Created a Kubernetes deployment file using the Docker image and deployed it with the kubectl apply <PARAM> command line.
Exposed the application using the kubectl expose <PARAM> CLI.
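A rough sketch of such a script (the repository URL, cluster name, deployment name and ports are placeholders, not the exact values I used):
#!/bin/bash
set -e

# 1. Build and push the application image (an ECR login step is needed before the push).
docker build -t <account>.dkr.ecr.<region>.amazonaws.com/my-app:latest .
docker push <account>.dkr.ecr.<region>.amazonaws.com/my-app:latest

# 2. Create the EKS control plane and worker nodes.
eksctl create cluster --name my-cluster --region <region> --nodes 2

# 3. Deploy the application from the Kubernetes deployment file.
kubectl apply -f deployment.yaml

# 4. Expose the application.
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080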
Latest update from the AWS EKS service:
AWS recently announced support for creating EKS worker nodes using the AWS SDK, so the EKS environment can now be created using the SDK itself.
===================
Update:
AWS now supports creating worker nodes through the UI and the AWS SDK.
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EKS.html#createNodegroup-property
The use case is this: a developer makes some code changes and the following things happen automatically:
the build runs, the application artifact is created, a Docker image is generated with the artifact, the image is pushed to the Docker registry, and the AWS ECS tasks and ECS services are updated.
I want to know what ways there are to achieve this automated update of AWS ECS services. So far I have implemented the AWS ECS update from a Jenkins build using:
1) running post-build AWS CLI scripts from Jenkins to update ECS
2) invoking an AWS Lambda function from a post-build action or pipeline step (I created a Lambda function in Java to implement that).
Please let me know the other ways we can achieve the above. Thanks.
I'm continuously deploying Docker containers from CircleCI to AWS ECS.
The outline of the deployment flow is as follows:
Build and tag a new Docker image
Login to AWS ECR and push the image
Update task definitions and services of ECS with ecs-deploy
ecs-deploy is a useful script that updates Docker images in ECS.
https://github.com/silinternational/ecs-deploy
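Roughly, those three steps translate into shell commands like these (the repository URL, cluster and service names are placeholders):
# 1. Build and tag a new Docker image.
docker build -t <account>.dkr.ecr.<region>.amazonaws.com/my-app:$CIRCLE_SHA1 .

# 2. Log in to AWS ECR and push the image.
$(aws ecr get-login --no-include-email --region <region>)
docker push <account>.dkr.ecr.<region>.amazonaws.com/my-app:$CIRCLE_SHA1

# 3. Update the task definition and service with ecs-deploy.
ecs-deploy -c my-cluster -n my-service \
  -i <account>.dkr.ecr.<region>.amazonaws.com/my-app:$CIRCLE_SHA1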
You could use a shell script that calls AWS CLI commands to create CloudFormation stacks, or directly call the create commands in the AWS CLI for the ECR repository, task definition, Events rule and target (for scheduling).
Then you just call this script in your terminal using ./setup.sh and it should execute all your commands at once.
aws ecr create-repository \
--repository-name tasks-${TASK_NAME}-${TASK_ENV} \
;
Or, if you want to set up your resources via CloudFormation templates, you can launch them using this command, as long as the template exists at file://name.yml:
aws cloudformation create-stack \
--stack-name stack-name \
--capabilities CAPABILITY_IAM \
--template-body file://name.yml \
--parameters \
ParameterKey=ParamName,ParameterValue=${PARAM_NAME} \
;
Take a look at Codefresh - https://docs.codefresh.io/docs/amazon-ecs
You can build your pipeline
Build Step
Push to Registry
Deploy to ECS
It's that easy.
While there are a ton of CI/CD tools out there, since I am early in my rollout, I decided to write a small script instead of having CI/CD pipelines do it.
Here is a one-click deploy script I wrote using the ecs-deploy script as a dependency to achieve a rolling deploy of a docker image to ECS.
You can run this locally from your dev or build/deployment box or use Jenkins or some local build tool.
#!/bin/bash
# automatically login to AWS
eval $(aws ecr get-login)
# build local docker image and push repo to AWS
docker build -t <yourlocaldockerimagetag> .
docker tag <yourlocaldockerimagetag>:latest <yourECSRepoURL>:latest
docker -D -l debug push <yourECSRepoURL>:latest
# deploy to ECS
ecs-deploy/ecs-deploy -m 50 -k <access-key> -s <secret-key> -r <aws-region> -c <cluster-name> -n <service-name> -i <yourECSRepoURL>:latest
Parameters:
cluster-name: Your cluster name in ECS
service-name: Your service name that you had created in ECS
yourECSRepoURL: ECS Repository URL
yourlocaldockerimagetag: Any local image tag name
access-key: your AWS access key for deployments
secret-key: your AWS secret key
Make sure you install ecs-deploy before this script.
The -m 50 tells it that it can deploy even if the number of running tasks drops to 50% of the desired count. Ideally you would have an extra node to do deployments, but if you can't afford that, this setting ensures that deployments continue to happen.
If you are also using an ELB (load balancer), the default deregistration delay for target groups is 5 minutes, which is a bit excessive. The deregistration delay is the time to wait for existing requests to complete BEFORE ECS sends a SIGTERM or SIGINT to your Docker container. You should lower this by going to Target Groups in the EC2 dashboard and clicking Edit Attributes. Otherwise your deployments may take forever.
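Alternatively, the same attribute can be lowered from the CLI (the target group ARN and the 30-second value are placeholders):
aws elbv2 modify-target-group-attributes \
  --target-group-arn <your-target-group-arn> \
  --attributes Key=deregistration_delay.timeout_seconds,Value=30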
I think nobody has mentioned CodePipeline from AWS; it integrates easily with many AWS services, including ECS and CodeCommit:
Push commit to CodeCommit Repo, triggering the pipeline execution.
(Optional) Configure a Manual Approval step that needs you to take an action before Build.
Run a CodeBuild Project that builds your Dockerfile and push the image to an ECR Repo.
Run a "Deploy" step that deploys to a specific ECS Service. It updates the services with a new Task Definition that points to the new ECR Image.
I have used this flow with Bitbucket as well; just configure a Bitbucket pipeline that pushes all new code to a CodeCommit repo as a preceding step.
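For reference, the CodeBuild project's build commands typically look something like this sketch (the repo URI and container name are placeholders); the ECS deploy action then picks up the generated imagedefinitions.json artifact:
# Log in to ECR, then build and push the image tagged with the resolved commit.
$(aws ecr get-login --no-include-email --region <region>)
docker build -t <repo-uri>:$CODEBUILD_RESOLVED_SOURCE_VERSION .
docker push <repo-uri>:$CODEBUILD_RESOLVED_SOURCE_VERSION

# The ECS deploy action expects imagedefinitions.json mapping container name to the new image URI.
printf '[{"name":"<container-name>","imageUri":"%s"}]' \
  "<repo-uri>:$CODEBUILD_RESOLVED_SOURCE_VERSION" > imagedefinitions.json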
Exactly as in @minamiyojo's and @astav's answers, we ended up gluing ecs-deploy to a template engine to power up our CD pipeline with a reusable component, which we have just open-sourced as well:
https://github.com/GuccioGucci/yoke
Please refer to the Motivation section in the README; hopefully this helps your scenario too.
I am developing an Elastic Beanstalk app. It is a Scala web application, built with sbt. I want to deploy the resulting WAR from the command line to an existing environment.
All I can find is the eb CLI which appears to require you to use git: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html
Is there not a way to simply specify a WAR and environment name to perform the deployment?
What is the best workaround otherwise? I can upload to S3 from the command line and then use the web app to choose that file, but it's a bit more painful than I wanted.
You can use Elastic Beanstalk CLI (eb) instead of AWS CLI. Just run eb create to create a new environment and eb deploy to update your environment.
You can set a specific artifact (your *.war file) by configuring the EB CLI (see: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html#eb-cli3-artifact):
You can tell the EB CLI to deploy a ZIP or WAR file that you generate
as part of a separate build process by adding the following lines to
.elasticbeanstalk/config.yml in your project folder.
deploy:
  artifact: path/to/buildartifact.zip
I found a way: use the aws CLI instead. First upload to S3 (I actually use s3cmd), then create an application version:
$ aws elasticbeanstalk create-application-version --application-name untaggeddb --version-label myLabel --source-bundle S3Bucket="bucketName",S3Key="key.war"
I believe the application version can then be deployed with update-environment also using the aws CLI.
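Something along these lines (the environment name is a placeholder):
$ aws elasticbeanstalk update-environment --environment-name my-env --version-label myLabel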
I want to integrate Atlassian Bamboo with AWS Elastic Beanstalk. Is there any way to do this?
It depends a bit on your Bamboo and Beanstalk config, as well as the type of application you are planning to deploy on AWS Beanstalk.
We did some things for Java Web Apps:
Since Bamboo understands maven, you can have a look at the following maven plugin:
http://beanstalker.ingenieux.com.br/beanstalk-maven-plugin/configurations-and-templates.html
We are using it for some environments to create WARs and upload them to Elastic Beanstalk. You can then create a Maven task in Bamboo to call the plugin.
If you downloaded and installed Bamboo on a machine you own yourself, you could use the Elastic Beanstalk command line interface (CLI).
This is probably the most powerful approach, but you need to install the CLI on the Bamboo instance. Then you can do almost anything. This approach should also work for other environments besides Java/Tomcat.
Another idea:
If you deploy to Beanstalk using git (i.e. you deploy by making a code change and pushing to Beanstalk), then you can also use the new "Deployment Project" feature in Bamboo to push the code once it passes all tests.
David's answer provides good options for cross-product usage of AWS Elastic Beanstalk (+1). Nowadays I'd recommend the excellent unified AWS Command Line Interface over the now-legacy AWS Elastic Beanstalk API Command Line Interface; see the respective AWS CLI commands for elasticbeanstalk.
If you are looking for a Bamboo specific solution, you might be interested in Utoolity's Tasks for AWS (Bamboo) add-on (commercial, see disclaimer), which provides three dedicated tasks, specifically:
AWS Elastic Beanstalk Application - create, update or delete AWS Elastic Beanstalk applications.
AWS Elastic Beanstalk Application Version - create, update or delete AWS Elastic Beanstalk application versions.
AWS Elastic Beanstalk Environment - create, update, rebuild, restart, swap or terminate AWS Elastic Beanstalk environments and specify configuration settings and advanced options.
Disclaimer: I'm the co-founder of this add-on's vendor, Utoolity.
In case you're interested in C# deployments:
What we do is simply start the awsdeploy tool (it should already be installed on the build server) with a link to the configuration script. I create the environment in Visual Studio, and when I redeploy the application once, I save the script. Once the script is on the build server, I reference it in the deployment configuration with awsdeploy /r c:\location\of\myscript.txt.
The package itself, which is referenced in the AWS deployment configuration script, is created at build time with the MSBuild /target:package command and defined as an artifact (the default location of the ZIP package is c:\build-dir\...\project\obj\debug\package, but this can be overridden).
Everything works pretty well so far, although I am having problems starting an elastic instance when none is available (e.g. for nightly builds).
Take a look at our repo: https://github.com/matzegebbe/docker-aws-login
With that snippet you are able to log in to AWS and push images.
Here is a simple Bamboo task script (of course you need Docker installed on the agents):
#!/bin/bash
docker images hellmann/awscli | grep -q awscli
[ "$?" -eq "0" ] && exit 0
cat <<'EOF' >> Dockerfile
FROM python
MAINTAINER Mathias Gebbe <mathias.gebbe@hellmann.net>
RUN pip install awscli --ignore-installed six
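# Placeholder credentials: replace AWS_ACCESS_KEY / AWS_SECRET_ACCESS_KEY below with real values before building.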
ENV aws_access_key_id AWS_ACCESS_KEY
ENV aws_secret_access_key AWS_SECRET_ACCESS_KEY
RUN mkdir /root/.aws/
RUN printf "[default]\nregion = eu-west-1\n" > /root/.aws/config
RUN printf "[default]\naws_access_key_id = ${aws_access_key_id}\naws_secret_access_key = ${aws_secret_access_key}\n" > /root/.aws/credentials
ENTRYPOINT ["/bin/bash","-c"]
CMD ["aws ecr get-login"]
EOF
docker build -t hellmann/awscli .
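# The container prints the "docker login ..." command produced by "aws ecr get-login"; the $(...) below executes it on the agent.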
$(docker run --rm hellmann/awscli)