I'm trying to migrate an existing cluster of processing workers that sit behind an SQS queue so that it is deployed using Elastic Beanstalk. Is there a way, using the eb CLI, to specify the queue either by name or ID?
My current command looks like this:
eb create -t worker -k my-key
I know it is possible in the UI, but that's not going to work with our CI pipeline.
You can't do that with an eb CLI option, but you can use a config file inside the .ebextensions directory. For example:
option_settings:
  - namespace: aws:elasticbeanstalk:sqsd
    option_name: WorkerQueueURL
    value: YOUR-QUEUE-URL
Here you can see related discussion: https://forums.aws.amazon.com/thread.jspa?messageID=706191
Here is documentation for the option: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elasticbeanstalksqsd
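To tie this back to the original eb create command, a minimal sketch of the CI-friendly workflow might look like the following; the file name sqsd.config and the queue URL are placeholders:
# sketch only: sqsd.config and the queue URL below are example names
mkdir -p .ebextensions
cat > .ebextensions/sqsd.config <<'EOF'
option_settings:
  - namespace: aws:elasticbeanstalk:sqsd
    option_name: WorkerQueueURL
    value: https://sqs.us-east-1.amazonaws.com/123456789012/my-worker-queue
EOF
eb create -t worker -k my-key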
I just followed this tutorial to learn how to use the eb command.
One thing I want to do is modify the Health Check Type of the Auto Scaling group created by Elastic Beanstalk to ELB, but I just can't find out how to do it.
Here's what I have done:
Change the Health Check Type of the environment dev-env to ELB through the AWS console.
Use eb config save dev-env --cfg my-configuration to save the configuration file locally.
The ELB health check type doesn't appear inside the .elasticbeanstalk/saved_configs/my-configuration.cfg.yml file, which means I must specify the health check type somewhere else.
Then I found another article saying that you can put the health check type inside the .ebextensions folder.
So I made a modification to eb-python-flask, the example app used in the tutorial.
Here's my modification of eb-python-flask.
I thought that running eb config put prod and then eb create prod2-env --cfg prod with my eb-python-flask would create an environment whose Auto Scaling group has a health check type of ELB. But I was wrong: the health check type created by the eb commands is still EC2.
Anyone know how to set the health check type programmatically?
I don't want to set it through the AWS console; that's inconvenient.
An ebextension like the one below will do it:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
I use the path .ebextensions/autoscaling.config
The eb create prod3-env --cfg prod command uses the git HEAD revision to create the zip file it uploads to Elastic Beanstalk.
This can be seen by running eb create --verbose prod3-env --cfg prod, which shows verbose output.
The reason I failed to apply my own configuration is that I didn't commit the config file to git before running eb create prod3-env --cfg prod.
After committing the code changes, I successfully deployed an Auto Scaling group whose health check type is ELB.
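A minimal sketch of the working sequence, using the same file and environment names as above:
# commit the ebextension first, since eb create packages the git HEAD revision
git add .ebextensions/autoscaling.config
git commit -m "Set ASG health check type to ELB"
# --verbose shows which files end up in the uploaded zip
eb create --verbose prod3-env --cfg prod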
The use case is: a developer makes some code changes and the things below happen automatically:
the build runs, an application artifact is created, a Docker image is generated from the artifact, the image is pushed to the Docker registry, and the AWS ECS task definitions and services are updated.
I want to know what ways there are to achieve this automated update of AWS ECS services. So far I have implemented the ECS update from a Jenkins build using:
1. post-build AWS CLI scripts run from Jenkins to update ECS (a rough sketch of this approach follows below)
2. a post-build action or pipeline step that invokes an AWS Lambda function; I have created a Lambda function in Java to implement that.
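For reference, the CLI-script approach in (1) typically boils down to two calls; this is only a sketch, and the cluster, service, and task definition names are placeholders:
# placeholder names throughout; taskdef.json is assumed to already reference the freshly pushed image tag
aws ecs register-task-definition --cli-input-json file://taskdef.json
# point the service at the latest revision of that task definition family
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --task-definition my-task-family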
Please let me know the other ways we can achieve the above. Thanks.
I'm continuously deploying Docker containers from CircleCI to AWS ECS.
The outline of the deployment flow is as follows:
Build and tag a new Docker image
Login to AWS ECR and push the image
Update task definitions and services of ECS with ecs-deploy
ecs-deploy is a useful script that updates Docker images in ECS.
https://github.com/silinternational/ecs-deploy
You could use a shell script that calls AWS CLI commands to create CloudFormation stacks, or directly call the create commands in the AWS CLI for the ECR repository, task definition, and Events rule and target (for scheduling).
Then you just call that script from your terminal with ./setup.sh and it executes all your commands at once. For example:
aws ecr create-repository \
  --repository-name tasks-${TASK_NAME}-${TASK_ENV} \
;
Or, if you want to set up your resources via CloudFormation templates, you can launch them using the following command, as long as the template exists at file://name.yml:
aws cloudformation create-stack \
  --stack-name stack-name \
  --capabilities CAPABILITY_IAM \
  --template-body file://name.yml \
  --parameters ParameterKey=ParamName,ParameterValue=${PARAM_NAME} \
;
Take a look at Codefresh: https://docs.codefresh.io/docs/amazon-ecs
You can build your pipeline:
Build Step
Push to Registry
Deploy to ECS
It's that easy.
While there are a ton of CI/CD tools out there, since I am early in my rollout, I decided to write a small script instead of having CI/CD pipelines do it.
Here is a one-click deploy script I wrote using the ecs-deploy script as a dependency to achieve a rolling deploy of a docker image to ECS.
You can run this locally from your dev or build/deployment box or use Jenkins or some local build tool.
#!/bin/bash
# automatically login to AWS
eval $(aws ecr get-login)
# build local docker image and push repo to AWS
docker build -t <yourlocaldockerimagetag> .
docker tag <yourlocaldockerimagetag>:latest <yourECSRepoURL>:latest
docker -D -l debug push <yourECSRepoURL>:latest
# deploy to ECS
ecs-deploy/ecs-deploy -m 50 -k <access-key> -s <secret-key> -r <aws-region> -c <cluster-name> -n <service-name> -i <yourECSRepoURL>:latest
Parameters:
cluster-name: Your cluster name in ECS
service-name: Your service name that you had created in ECS
yourECSRepoURL: ECS Repository URL
yourlocaldockerimagetag: Any local image tag name
access-key: your AWS access key for deployments
secret-key: your AWS secret key
Make sure you install ecs-deploy before running this script.
The -m 50 tells it that it can deploy even if the number of healthy nodes drops to 50%. Ideally you would have an extra node to do deployments, but if you can't afford that, this setting ensures that deployments continue to happen.
If you are also using an ELB (load balancer), note that the default deregistration delay for target groups is 5 minutes, which is a bit excessive. The deregistration delay is the time to wait for existing requests to complete BEFORE ECS sends a SIGTERM or SIGINT to your Docker container. You should lower it by going to Target Groups in the EC2 dashboard and clicking Edit Attributes; otherwise your deployments may take forever.
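If you prefer to script this rather than click through the console, the same attribute can be changed with the AWS CLI; the target group ARN and the 30-second value below are only examples:
# example values; substitute your own target group ARN and preferred delay
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/abcdef1234567890 \
  --attributes Key=deregistration_delay.timeout_seconds,Value=30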
I think nobody has mentioned CodePipeline from AWS; it integrates easily with many AWS services, including ECS and CodeCommit:
Push a commit to the CodeCommit repo, triggering the pipeline execution.
(Optional) Configure a manual approval step that requires you to take an action before the build.
Run a CodeBuild project that builds your Dockerfile and pushes the image to an ECR repo (a sketch of what such a build step runs is shown at the end of this answer).
Run a "Deploy" step that deploys to a specific ECS service. It updates the service with a new task definition that points to the new ECR image.
I have used this flow with Bitbucket as well; just configure a Bitbucket pipeline that pushes all new code to a CodeCommit repo as a preceding step.
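For reference, the CodeBuild step in this flow usually comes down to commands along these lines (shown as plain shell rather than a full buildspec; the repository URI, region, and container name are assumptions):
# placeholder account ID, region, repo and container names
REPO_URI=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app
eval $(aws ecr get-login --no-include-email --region us-east-1)
docker build -t ${REPO_URI}:${CODEBUILD_RESOLVED_SOURCE_VERSION} .
docker push ${REPO_URI}:${CODEBUILD_RESOLVED_SOURCE_VERSION}
# the ECS deploy action expects an imagedefinitions.json artifact naming the container and its new image
printf '[{"name":"my-container","imageUri":"%s"}]' \
  "${REPO_URI}:${CODEBUILD_RESOLVED_SOURCE_VERSION}" > imagedefinitions.json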
Exactly as in #minamiyojo's and #astav's answers, we ended up gluing ecs-deploy to a template engine to power our CD pipeline with some reusable components, which we just open-sourced as well:
https://github.com/GuccioGucci/yoke
Please refer to the Motivation section in the README; I hope this helps your scenario too.
So I have a Docker container running Jenkins and an EC2 Container Registry (ECR) on AWS. I would like to have Jenkins push images back to the registry.
To do this, I would like to automate the aws configure and get-login steps on container startup. I figured that I would be able to run:
export AWS_ACCESS_KEY_ID=*
export AWS_SECRET_ACCESS_KEY=*
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=json
I expected this to cause aws configure to complete automatically, but it did not work. I then tried creating the config files as per the AWS docs and repeating the process, which also did not work. I then tried using aws configure set, also with no luck.
I'm going bonkers here, what am I doing wrong?
There is no real need to issue aws configure as long as you populate the env vars:
export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb
# ... also export the region (and zone if you need it)
Then issue:
aws ecr get-login --region ${AWS_REGION}
and you will achieve the same desired AWS login status. As far as troubleshooting goes, I suggest you remote into your running container using:
docker exec -ti CONTAINER_ID_HERE bash
and then manually issue the above AWS-related commands interactively to confirm they run OK before putting them into your Dockerfile.
I am developing an Elastic Beanstalk app. It is a Scala web application, built with sbt. I want to deploy the resulting WAR from the command line to an existing environment.
All I can find is the eb CLI which appears to require you to use git: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html
Is there not a way to simply specify a WAR and environment name to perform the deployment?
What is the best workaround otherwise? I can upload to S3 from the command line and then use the web app to choose that file, but it's a bit more painful than I wanted.
You can use the Elastic Beanstalk CLI (eb) instead of the AWS CLI. Just run eb create to create a new environment and eb deploy to update your environment.
You can set a specific artifact (your *.war file) by configuring the EB CLI (see: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html#eb-cli3-artifact):
You can tell the EB CLI to deploy a ZIP or WAR file that you generate
as part of a separate build process by adding the following lines to
.elasticbeanstalk/config.yml in your project folder.
deploy:
  artifact: path/to/buildartifact.zip
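As a sketch, assuming your sbt build produces a WAR and the artifact entry in config.yml points at that file (the task and environment names here are only examples):
# example names; use whichever sbt task produces your WAR and your real environment name
sbt package
eb deploy my-existing-env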
I found a way: use the aws CLI instead. First upload to S3 (I actually use s3cmd), then create an application version:
$ aws elasticbeanstalk create-application-version --application-name untaggeddb --version-label myLabel --source-bundle S3Bucket="bucketName",S3Key="key.war"
I believe the application version can then be deployed with update-environment, also using the aws CLI.
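I haven't verified it, but that step would presumably look something like this (the environment name is a placeholder):
# untested sketch; the environment name is an example, the version label matches the one created above
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --version-label myLabel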
I'm banging my head against a wall trying to both install and then enable a service in Elastic Beanstalk. What I want to do is:
Install a service in /etc/init.d that points to my python app in /opt/python/current/app/
Have Elastic Beanstalk start and keep-alive the service, as specified in an .ebextensions/myapp.config file.
(Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services)
Here's my .ebextensions/myapp.config file:
container_commands:
  01_copy_service:
    command: "cp /opt/python/ondeck/app/my_service /etc/init.d/"
  02_chmod_service:
    command: "chmod +x /etc/init.d/my_service"
services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      files: [/etc/init.d/my_service]
This fails because services are run before container_commands. If I comment out services, deploy, then uncomment services, then deploy again, it will work. But I want to have a single-step deploy, because this will be an auto-scaling node.
Is there a solution? Thanks!
Nate, I have the exact same scenario as you and I solved it this way:
Drop the "services" section and add a "restart" command.
container_commands:
  ...
  03_restart_service:
    command: /sbin/service my_service restart
You can cause the service to restart after a command is run by using a commands: key under the services: key. The documentation for the services: key is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
I haven't done it myself, but I want to give you some ideas that should work. It's just a matter of convenience and workflow.
Since it is not really an application file, but rather an EC2 file, and it is unlikely to change often, you can do one of the following:
Use the files key's content to create the service init script. You can even have a specific config file just for that script.
Store the service init script on S3 and copy its contents down with a command.
Create a dummy service script, replace its contents with the deployed one via a container command, and add a dependency on that command to the service.
(This one is heavy.) Create a custom AMI and specify it in the Auto Scaling configuration.
Hope it helps.