AWS CodeDeploy redeployment from command line

Is it possible to retry/redeploy a previously successful deployment from the command line? I know a list of deployments can be fetched with
aws deploy list-deployments, but I didn't find any option to rerun a deployment using the deployment-id returned by this command.
There is an option to retry a previously run deployment from the console, though.

Redeployment is simply a matter of creating a new deployment that uses a previous revision of your application. Unfortunately, there is no dedicated redeploy command.
The docs include an example of how to redeploy the sample project:
aws deploy create-deployment --application-name HelloWorld_App --deployment-config-name CodeDeployDefault.OneAtATime --deployment-group-name HelloWorld_DepGroup --s3-location bucket=codedeploydemobucket,bundleType=zip,key=HelloWorld_App.zip
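If you already have the deployment-id of the deployment you want to repeat, one approach is to read that deployment's revision from get-deployment and feed it back into create-deployment. A rough sketch (the application name, deployment group, and deployment-id below are placeholders, and bundleType is assumed to be zip):
appName="HelloWorld_App"
groupName="HelloWorld_DepGroup"
deploymentId="d-XXXXXXXXX"
# Look up where the original revision lives in S3
bucket=$(aws deploy get-deployment --deployment-id "$deploymentId" --query 'deploymentInfo.revision.s3Location.bucket' --output text)
key=$(aws deploy get-deployment --deployment-id "$deploymentId" --query 'deploymentInfo.revision.s3Location.key' --output text)
# Re-deploy the same revision
aws deploy create-deployment --application-name "$appName" --deployment-group-name "$groupName" --s3-location bucket=$bucket,bundleType=zip,key=$key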

Here's a bash script that finds the last successful deployment and redeploys it. With a few simple changes you could:
deploy the last successful blue deployment to green (see the sketch after the script)
specify a deployment-config-name or description
add a wait deployment-successful followed by a notification or another action
applicationName="my-application-name"
deploymentGroupName="my-deployment-group"
lastSuccessfulDeployment=$(aws deploy list-deployments --application-name "$applicationName" --deployment-group-name "$deploymentGroupName" --include-only-statuses "Succeeded" --query 'deployments[0]' --output text)
s3Location=$(aws deploy get-deployment --deployment-id "$lastSuccessfulDeployment" --query 'deploymentInfo.revision.s3Location')
aws deploy create-deployment --application-name "$applicationName" --deployment-group-name "$deploymentGroupName" --s3-location "$s3Location"
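For example, the blue-to-green variant mentioned above could look roughly like this (a sketch; the blue and green deployment group names are assumptions, not values from the question):
blueGroup="my-blue-deployment-group"
greenGroup="my-green-deployment-group"
# Take the revision of the last successful blue deployment...
lastBlueDeployment=$(aws deploy list-deployments --application-name "$applicationName" --deployment-group-name "$blueGroup" --include-only-statuses "Succeeded" --query 'deployments[0]' --output text)
s3Location=$(aws deploy get-deployment --deployment-id "$lastBlueDeployment" --query 'deploymentInfo.revision.s3Location')
# ...and deploy it to the green deployment group
aws deploy create-deployment --application-name "$applicationName" --deployment-group-name "$greenGroup" --s3-location "$s3Location"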

Combining the other answers, I was able to make it work as below:
REGION=us-east-1
applicationName="MyApp"
deploymentGroupName="MyDeploymentGroup"
lastSuccessfulDeployment=$(aws deploy list-deployments --application-name $applicationName --deployment-group-name $deploymentGroupName --include-only-statuses "Succeeded" --query 'deployments[0]' --output text --region $REGION)
echo "lastSuccessfulDeployment: $lastSuccessfulDeployment"
s3LocationBucket=$(aws deploy get-deployment --deployment-id $lastSuccessfulDeployment --query 'deploymentInfo.revision.s3Location.bucket' --region $REGION --output text)
echo "s3LocationBucket: $s3LocationBucket"
s3LocationKey=$(aws deploy get-deployment --deployment-id $lastSuccessfulDeployment --query 'deploymentInfo.revision.s3Location.key' --region $REGION --output text)
echo "s3LocationKey: $s3LocationKey"
deploymentId=$(aws deploy create-deployment --application-name $applicationName --deployment-group-name $deploymentGroupName --deployment-config-name CodeDeployDefault.AllAtOnce --s3-location bucket=$s3LocationBucket,bundleType=zip,key=$s3LocationKey --region $REGION --query 'deploymentId' --output text)
echo "deploymentId: $deploymentId"
aws deploy wait deployment-successful --deployment-id $deploymentId --region $REGION
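If you also want a notification once the wait finishes, the last line could be extended along these lines (a sketch; the SNS topic ARN is a hypothetical placeholder):
TOPIC_ARN=arn:aws:sns:us-east-1:123456789012:deploy-notifications  # hypothetical topic
if aws deploy wait deployment-successful --deployment-id $deploymentId --region $REGION; then
  aws sns publish --region $REGION --topic-arn $TOPIC_ARN --message "Redeployment $deploymentId succeeded"
else
  # The waiter exits non-zero if the deployment fails or the wait times out
  aws sns publish --region $REGION --topic-arn $TOPIC_ARN --message "Redeployment $deploymentId failed or timed out"
fi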

Related

AWS CLI create ECR repo

I am trying to create an ECR repo with a list of tags from the AWS CLI. This is the command I am running; I need to pass in the tags as JSON:
aws ecr create-repository \
--repository-name alpha/sample-repo \
--region us-west-2 \
--tags [{"Key":"env","Value":"dev"},{"Key":"team","Value":"finance"}]
and I get the below error:
Error parsing parameter '--tags': Invalid JSON:
[Key:env]
What am I missing here? How can I make it work?
Try enclosing the tags in single quotes:
aws ecr create-repository \
--repository-name alpha/sample-repo \
--region us-west-2 \
--tags '[{"Key":"env","Value":"dev"},{"Key":"team","Value":"finance"}]'
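If you'd rather avoid JSON quoting altogether, the CLI's shorthand syntax for lists of structures should also work here:
aws ecr create-repository \
--repository-name alpha/sample-repo \
--region us-west-2 \
--tags Key=env,Value=dev Key=team,Value=finance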

does AWS CLI requires default profile as mandatory?

I am trying to fetch VPC details for all regions. I tried to run my script without a default profile, which results in the error "You must specify a region. You can also configure your region by running "aws configure"", even though I have my own profile configured with all the required details.
The same script works fine after configuring a default profile.
The question is: is a default profile mandatory for the AWS CLI?
My script
for region in `aws ec2 describe-regions --output text| cut -f4`
do
aws ec2 describe-vpcs --profile sam --region $region --output text --query 'Vpcs[*].{VpcId:VpcId,CidrBlock:CidrBlock}'
done
cat .aws/config
[profile sam]
output = json
region = us-east-1
If you don’t have a default profile configured, you can define the target profile with the --profile option.
aws ec2 describe-regions --profile profile-name
Another way is to set the AWS_PROFILE environment variable. This way you don’t have to explicitly add the option for every AWS CLI command.
export AWS_PROFILE=profile-name
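With the environment variable set, the loop from the question should then work without repeating --profile on every command (a sketch, assuming the profile is named sam as in the question):
export AWS_PROFILE=sam
for region in `aws ec2 describe-regions --output text | cut -f4`
do
aws ec2 describe-vpcs --region $region --output text --query 'Vpcs[*].{VpcId:VpcId,CidrBlock:CidrBlock}'
done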
It seems to be a bug in your script. I tried the below and it worked for me:
for region in `aws ec2 describe-regions --output text| cut -f4`
do
aws ec2 describe-vpcs --profile <myProfile> --region $region --output text --query 'Vpcs[*].{VpcId:VpcId,CidrBlock:CidrBlock}'
done
Found the issue: I needed to add --profile to the first line of code as well. It works fine now.
for region in `aws ec2 describe-regions --profile sam --output text | cut -f4`

Unable to tag EBS Volumes using UserData bash script

I have been trying to tag EBS Volumes attached to EC2 instances in the CloudFormation UserData section. Here was my first attempt:
Example 1:
AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
ROOT_DISK_ID=`aws ec2 describe-volumes \
--filter Name=attachment.instance-id,Values="${AWS_INSTANCE_ID}" \
--query "Volumes[].VolumeId" --region us-east-1 --out text`
aws ec2 create-tags --resources "${ROOT_DISK_ID}" \
--tags 'Key=VolumeTagName,Value=VolumeTagValue' --region us-east-1
This resulted in a "Template format error: Unresolved resource dependencies [AWS_INSTANCE_ID, ROOT_DISK_ID] in the Resources block of the template" error.
A post I came across mentioned that using ! when referencing the variable in the CloudFormation UserData script gets around this, so it now looks like this:
Example 2:
AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
ROOT_DISK_ID=`aws ec2 describe-volumes \
--filter Name=attachment.instance-id,Values="${!AWS_INSTANCE_ID}" \
--query "Volumes[].VolumeId" --region us-east-1 --out text`
aws ec2 create-tags --resources "${!ROOT_DISK_ID}" \
--tags 'Key=VolumeTagName,Value=VolumeTagValue' --region us-east-1
This gets around that error, yet still no tags appear on the volume attached to an instance launched with this template. If I SSH into the instance and run Example 1, it works just fine. Example 2 does not give me any errors to work with.
What am I doing wrong in bash that is specific to CloudFormation?
If I understand correctly, you're trying to create your script using CloudFormation and then execute it on the EC2 instance on startup. Using YAML, this is my UserData section:
UserData: !Base64
  Fn::Join:
    - ''
    - - "#!/bin/bash -xe \n"
      - "cat << 'EOF' > /home/ec2-user/script.sh \n"
      - "AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`\n"
      - "ROOT_DISK_ID=`aws ec2 describe-volumes "
      - "--filter Name=attachment.instance-id,Values=\"${AWS_INSTANCE_ID}\" "
      - "--query \"Volumes[*].[VolumeId]\" --region eu-west-1 --out text`\n"
      - "aws ec2 create-tags --resources \"${ROOT_DISK_ID}\" "
      - "--tags 'Key=MyAutoTagName,Value=MyAutoTagValue' --region eu-west-1\n"
      - "EOF\n"
      - "chmod +x /home/ec2-user/script.sh\n"
      - "/home/ec2-user/script.sh\n"
I changed the region to the one I'm using.
If I view the contents of my script.sh file, I get the below:
AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
ROOT_DISK_ID=`aws ec2 describe-volumes --filter Name=attachment.instance-id,Values="${AWS_INSTANCE_ID}" --query "Volumes[*].[VolumeId]" --region eu-west-1 --out text`
aws ec2 create-tags --resources "${ROOT_DISK_ID}" --tags 'Key=MyAutoTagName,Value=MyAutoTagValue' --region eu-west-1
The only difference I can see is the --query expression (your Volumes[].VolumeId versus my Volumes[*].[VolumeId]). I'm not sure what your UserData section looks like, so it may be an issue with escaping.
Using my UserData section above, the tag was created as soon as the instance was spun up and the UserData section ran.
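As an aside on the ${!VAR} syntax from Example 2: that escape only has meaning inside Fn::Sub, where it is rendered as a literal ${VAR} in the generated script (which would also explain the "Unresolved resource dependencies" error from Example 1, if Fn::Sub was used). For comparison, a minimal sketch of the Fn::Sub variant of the same UserData (same region and tag names as the answer above):
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -xe
    # Variables are escaped as ${!...} so Fn::Sub emits plain shell variables
    AWS_INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
    ROOT_DISK_ID=`aws ec2 describe-volumes --filter Name=attachment.instance-id,Values="${!AWS_INSTANCE_ID}" --query "Volumes[*].[VolumeId]" --region eu-west-1 --out text`
    aws ec2 create-tags --resources "${!ROOT_DISK_ID}" --tags 'Key=MyAutoTagName,Value=MyAutoTagValue' --region eu-west-1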

How to get recent Amazon Linux image in all regions?

#!/bin/bash
aws ec2 describe-images \
--owners self amazon \
--filters "Name=root-device-type,Values=ebs" \
--query 'Images[*].[ImageId,CreationDate]' \
| sort -k2 -r \
| head -n1
I have written a script to get the latest Amazon Linux image using the AWS CLI. When I run this script, I get the latest Amazon Linux image in my default region, eu-west-1. How can I modify the code to get the latest image in all regions?
Add --region <region_name> to your CLI command. Something like this:
aws ec2 describe-images --region eu-west-2 \
--owners self amazon \
--filters "Name=root-device-type,Values=ebs" \
--query 'Images[*].[ImageId,CreationDate]' \
| sort -k2 -r \
| head -n1
Instead of hardcoding region names, you can use the aws ec2 describe-regions command to get the list of regions and then run your query for each region.
https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-regions.html
Execute aws ec2 describe-regions --query "Regions[].{Name:RegionName}" --output text, which gives output such as
ap-south-1
eu-west-3
eu-west-2
eu-west-1
ap-northeast-3
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
us-east-2
us-west-1
us-west-2
Now loop through each region and execute the describe-images CLI command, for example as sketched below.
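A minimal sketch combining the two (untested; it adds --output text so the sort operates on plain text lines, and reuses the owners and filters from the question):
#!/bin/bash
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text)
do
  # Newest image first: sort by CreationDate (column 2) descending, keep the top line
  latest=$(aws ec2 describe-images \
    --region $region \
    --owners self amazon \
    --filters "Name=root-device-type,Values=ebs" \
    --query 'Images[*].[ImageId,CreationDate]' \
    --output text \
    | sort -k2 -r \
    | head -n1)
  echo "$region $latest"
done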

aws cli --query doesn't filter output in Windows command prompt

I am following the documentation to use the --query option in the AWS CLI, but it doesn't work for me at all. I have defined profiles because I have several accounts to pull data from. If I omit --query, it returns the data successfully. Any insight into this, please?
Thank you
> aws --version
aws-cli/1.14.8 Python/3.6.3 Windows/10 botocore/1.8.12
> aws ec2 describe-volumes --profile TEST1 --region us-east-1 --query 'Volumes[0]'
"Volumes[0]"
> aws ec2 describe-volumes --profile TEST1 --region us-east-1
{
"Volumes": [
{
"Attachments": [ ....
The Windows Command Prompt does not treat single quotes as quoting characters, so the CLI receives the query 'Volumes[0]' with the quotes included; JMESPath interprets that as a raw string literal and simply echoes it back. Change from single quotes to double quotes:
aws ec2 describe-volumes --profile TEST1 --region us-east-1 --query "Volumes[0]"
As soon as I switched to PowerShell, it worked successfully, although I am not sure why it requires PowerShell.
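For reference, a quick sketch of the two quoting styles (same profile and region as above); double quotes work in every shell, while single quotes only survive in shells that strip them, such as PowerShell or bash:
REM Command Prompt
aws ec2 describe-volumes --profile TEST1 --region us-east-1 --query "Volumes[0].VolumeId" --output text
# PowerShell / bash
aws ec2 describe-volumes --profile TEST1 --region us-east-1 --query 'Volumes[0].VolumeId' --output text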