AWS CLI create ECR repo - amazon-web-services

I am trying to create an ECR repository with a list of tags from the AWS CLI. The tags need to be passed in as JSON. This is the command I am running:
aws ecr create-repository \
--repository-name alpha/sample-repo \
--region us-west-2 \
--tags [{"Key":"env","Value":"dev"},{"Key":"team","Value":"finance"}]
and I am getting the error below:
Error parsing parameter '--tags': Invalid JSON:
[Key:env]
What am I missing here? How do I make it work?

Try enclosing the tags in single quotes so the shell passes the JSON through unmodified:
aws ecr create-repository \
--repository-name alpha/sample-repo \
--region us-west-2 \
--tags '[{"Key":"env","Value":"dev"},{"Key":"team","Value":"finance"}]'
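The quotes matter because, without them, the shell strips the inner double quotes and brace-expands the argument before the AWS CLI ever sees it, which is what produced the mangled [Key:env] in the error. A quick local sanity check (no AWS call involved) that the single-quoted JSON survives word splitting and quote removal intact:

```shell
# Single quotes pass the JSON through to the command unmodified.
tags='[{"Key":"env","Value":"dev"},{"Key":"team","Value":"finance"}]'
printf '%s\n' "$tags"
```

As an alternative, the CLI shorthand syntax avoids quoting JSON entirely: --tags Key=env,Value=dev Key=team,Value=finance (see aws ecr create-repository help for the syntax supported by your CLI version).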

Related

AWS CLI restore-from-cluster-snapshot doesn't find snapshot in account

I'm trying to restore a cluster from a snapshot using
aws redshift restore-from-cluster-snapshot --cluster-identifier my-cluster
--snapshot-identifier my-identifier --profile my-profile --region my-region
But I'm receiving
An error occurred (ClusterSnapshotNotFound) when calling
the RestoreFromClusterSnapshot operation: Snapshot not found: my-identifier
I checked the available snapshots using
aws redshift describe-cluster-snapshots --profile my-profile --region my-region
And my-identifier appears as an available snapshot.
Entering via Redshift console I'm also able to see the snapshots and was able to restore it from the UI.
Does anybody have any clues?
P.S.: Not sure if it's relevant, but it's a snapshot from another account that I shared with the account where I'm trying to restore the cluster
When restoring from a snapshot shared by another account, you must specify the owner account number so Redshift can locate the shared snapshot:
aws redshift restore-from-cluster-snapshot \
--profile myAwsCliProfile \
--snapshot-identifier mySnapshotName \
--owner-account 012345678910 \
--cluster-identifier my-new-redshift-cluster \
--number-of-nodes 6 \
--node-type ra3.16xlarge \
--port 5439 \
--region us-east-1 \
--availability-zone us-east-1d \
--cluster-subnet-group-name default \
--availability-zone-relocation \
--no-publicly-accessible \
--maintenance-track-name CURRENT
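Before restoring, it can help to confirm that the shared snapshot is actually visible to the restoring account. A minimal sketch, assuming the same placeholder snapshot name and owner account as above (describe-cluster-snapshots accepts an --owner-account filter for snapshots shared from another account):

```shell
# Print the status of a snapshot shared by another account; "available"
# means it can be restored. $1 = snapshot identifier, $2 = owner account.
shared_snapshot_status() {
  aws redshift describe-cluster-snapshots \
    --snapshot-identifier "$1" \
    --owner-account "$2" \
    --query 'Snapshots[0].Status' \
    --output text
}
```

Calling shared_snapshot_status mySnapshotName 012345678910 should print available if the restore command above can see the snapshot.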

AWS ECR Login with podman

Good morning/afternoon/night!
Can you help me, please?
I'm working with RHEL 8.2, and this version doesn't support Docker. I installed Podman and everything was OK until I used the following command:
$(aws ecr get-login --no-include-email --region us-east-1)
But it doesn't work because it relies on Docker (I thought it was purely an AWS CLI command).
The error is:
# $(aws ecr get-login --no-include-email --region us-east-1)
-bash: docker: command not found
I've been searching for an answer and some people used a command like this:
podman login -u AWS -p ....
I tried various flags and the image, but nothing is working!
What is the equivalent command for podman?
Thanks!
The above command is not specific to Docker; aws ecr get-login is an AWS CLI command that authenticates you against the private container image registry (ECR). Its output, however, is itself a docker login command, which is why bash complained that docker was not found when you ran it inside $( ). Note also that get-login has been replaced by get-login-password (get-login was removed in AWS CLI v2).
Run the command below to get a password for the container registry:
aws ecr get-login-password --region us-east-1
Then pipe the password into the command below:
podman login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
This is how the password from ECR is piped to podman using the AWS CLI. By the way, the username AWS is hard-coded on Amazon's side and never needs to be changed:
$ aws ecr get-login-password --region us-east-1 | \
podman login \
--username AWS \
--password-stdin \
<aws_account_id>.dkr.ecr.<region>.amazonaws.com
Podman will use the IAM credentials for the dev profile in ~/.aws/credentials to log into that AWS account:
[default]
aws_access_key_id = ********************
aws_secret_access_key = ****************************************
region = us-east-1
[dev]
aws_access_key_id = ********************
aws_secret_access_key = ****************************************
region = us-east-1
This is how real values can be looked up for profile dev:
$ export AWS_PROFILE=dev
$ AWS_ACCOUNT="$( aws sts get-caller-identity \
--query Account \
--output text
)"
$ AWS_REGION="$( aws configure get region )"
$ aws ecr get-login-password \
--region $AWS_REGION | \
podman login \
--password-stdin \
--username AWS \
$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com
The above is from my blog post on the subject.

AWS CodeDeploy redeployment from command line

Is it possible to retry/redeploy a previously successful deployment through the command line? I know a list of deployments can be fetched using
aws deploy list-deployments, but I didn't find any option to rerun a deployment using the deployment-id returned by this command.
There is an option to retry a previously run deployment from console though.
Redeployment simply means creating a new deployment that uses a previous revision of your application. Unfortunately, there is no dedicated redeploy command.
The docs include an example of how to redeploy a sample project:
aws deploy create-deployment \
--application-name HelloWorld_App \
--deployment-config-name CodeDeployDefault.OneAtATime \
--deployment-group-name HelloWorld_DepGroup \
--s3-location bucket=codedeploydemobucket,bundleType=zip,key=HelloWorld_App.zip
Here's a bash script that finds the last successful deployment and deploys that. With a few simple changes you could
deploy the last successful blue deployment to green
specify a deployment-config-name or description
add a wait deployment-successful then a notification or another action
applicationName="my-application-name"
deploymentGroupName="my-deployment-group"
lastSuccessfulDeployment=$(aws deploy list-deployments --application-name "$applicationName" --deployment-group-name "$deploymentGroupName" --include-only-statuses "Succeeded" --query 'deployments[0]' --output text)
s3Location=$(aws deploy get-deployment --deployment-id "$lastSuccessfulDeployment" --query 'deploymentInfo.revision.s3Location')
aws deploy create-deployment --application-name "$applicationName" --deployment-group-name "$deploymentGroupName" --s3-location "$s3Location"
Combining the other answers, I was able to make it work as below:
REGION=us-east-1
applicationName="MyApp"
deploymentGroupName="MyDeploymentGroup"
lastSuccessfulDeployment=$(aws deploy list-deployments --application-name $applicationName --deployment-group-name $deploymentGroupName --include-only-statuses "Succeeded" --query 'deployments[0]' --output text --region $REGION)
echo "lastSuccessfulDeployment: $lastSuccessfulDeployment"
s3LocationBucket=$(aws deploy get-deployment --deployment-id $lastSuccessfulDeployment --query 'deploymentInfo.revision.s3Location.bucket' --region $REGION --output text)
echo "s3LocationBucket: $s3LocationBucket"
s3LocationKey=$(aws deploy get-deployment --deployment-id $lastSuccessfulDeployment --query 'deploymentInfo.revision.s3Location.key' --region $REGION --output text)
echo "s3LocationKey: $s3LocationKey"
deploymentId=$(aws deploy create-deployment --application-name $applicationName --deployment-group-name $deploymentGroupName --deployment-config-name CodeDeployDefault.AllAtOnce --s3-location bucket=$s3LocationBucket,bundleType=zip,key=$s3LocationKey --region $REGION --query 'deploymentId' --output text)
echo "deploymentId: $deploymentId"
aws deploy wait deployment-successful --deployment-id $deploymentId --region $REGION

How to fix "could not find Image for 'kope.io/k8s...'" when running kops update cluster ${NAME} --yes

I am setting up a Kubernetes cluster on AWS. I run the following commands to create the cluster, and it fails when the final command, kops update cluster, is run.
COMMANDS
vim ~/.aws/config
Add the following text:
[default]
region = eu-west-2
kops delete cluster --name ${CLUSTER_NAME} --yes
export CLUSTER_NAME=example-1-kops1.k8s.local
export REGION=eu-west-2
export AWS_AVAILABILITY_ZONES=eu-west-2b
export KUBERNETES_VERSION=v1.14.1
export KOPS_STATE_STORE=s3://example-1-com-state-store
export KOPS_STATE_STORE_S3=example-1-com-state-store
aws ec2 describe-availability-zones --region $REGION
aws s3api create-bucket --bucket $KOPS_STATE_STORE_S3 --create-bucket-configuration LocationConstraint=$REGION
aws s3api put-bucket-versioning --bucket $KOPS_STATE_STORE_S3 --versioning-configuration Status=Enabled
kops create cluster --name=$CLUSTER_NAME \
--state=$KOPS_STATE_STORE --zones=$AWS_AVAILABILITY_ZONES \
--node-count=2 --node-size=t2.micro --master-size=t2.micro \
--ssh-public-key ~/.ssh/id_rsa-example-1.pub
kops update cluster ${CLUSTER_NAME} --yes
ERROR MESSAGE
error running task "LaunchConfiguration/nodes.example-1-kops1.k8s.local" (9m57s remaining to succeed): could not find Image for "kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13"
W0514 01:23:03.908405 21889 executor.go:130] error running task "LaunchConfiguration/master-eu-west-2b.masters.example-1-kops1.k8s.local" (9m57s remaining to succeed): could not find Image for "kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13"
Welcome to Stack Overflow. It looks like an intermittent issue with the image repository of owner 383156758163 (alias kope.io).
Simply put, the 'kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13' image did not exist in any AWS region at the time you were creating the kops cluster. I verified it with:
aws ec2 describe-images --owner 383156758163 --filters 'Name=name,Values=k8s-*-debian-stretch*' | grep k8s-1.12-debian-stretch-amd64
Update:
The image is showing up in describe-images's output now:
"ImageLocation": "383156758163/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13",
"Name": "k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13",
"ImageLocation": "383156758163/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-14",
"Name": "k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-14",
Note the image's CreationDate: "2019-05-14T08:57:47.000Z"
Please give it another try; it should work now.
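For reference, the same existence check can be written with a JMESPath query instead of grep. A small sketch (the function name is mine; the owner ID is the one from the answer above):

```shell
# Print the name and creation date of a kope.io AMI matching $1,
# or "None" when no such image exists.
check_kope_image() {
  aws ec2 describe-images \
    --owners 383156758163 \
    --filters "Name=name,Values=$1" \
    --query 'Images[0].[Name,CreationDate]' \
    --output text
}
```

For example: check_kope_image k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13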

How to get recent Amazon Linux image in all regions?

#!/bin/bash
aws ec2 describe-images \
--owners self amazon \
--filters "Name=root-device-type,Values=ebs" \
--query 'Images[*].[ImageId,CreationDate]' \
| sort -k2 -r \
| head -n1
I have written a script to get the latest Amazon Linux image using the AWS CLI. When I run this script, I get the latest image in my default region, eu-west-1. How can I modify the code to get the latest image in all regions?
Add --region <region_name> to your CLI command, something like this:
aws ec2 describe-images --region eu-west-2 \
--owners self amazon \
--filters "Name=root-device-type,Values=ebs" \
--query 'Images[*].[ImageId,CreationDate]' \
| sort -k2 -r \
| head -n1
Instead of hardcoding region names, you can use the aws ec2 describe-regions command to get the list of regions and run your query for each one.
https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-regions.html
Execute aws ec2 describe-regions --query "Regions[].{Name:RegionName}" --output text, which gives output like:
ap-south-1
eu-west-3
eu-west-2
eu-west-1
ap-northeast-3
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
us-east-2
us-west-1
us-west-2
Now loop through each region and execute the describe-images CLI command.
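Putting the two commands together, a minimal sketch (wrapped in a function of my own naming; CreationDate is ISO 8601, so the textual sort from the question still picks the newest image):

```shell
# For every region, print "region: <newest EBS-backed ImageId> <CreationDate>".
latest_image_per_region() {
  for region in $(aws ec2 describe-regions \
      --query 'Regions[].RegionName' --output text); do
    aws ec2 describe-images \
      --region "$region" \
      --owners self amazon \
      --filters "Name=root-device-type,Values=ebs" \
      --query 'Images[*].[ImageId,CreationDate]' \
      --output text | sort -k2 -r | head -n1 | sed "s/^/$region: /"
  done
}
```

Be aware this makes one describe-images call per region, so it can take a while across all regions.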