AWS Cognito delete all users from a user pool

How can we delete all users from a specific user pool in AWS Cognito using AWS CLI?

Try with the below:
aws cognito-idp list-users --user-pool-id $COGNITO_USER_POOL_ID |
jq -r '.Users | .[] | .Username' |
while read uname1; do
    echo "Deleting $uname1"
    aws cognito-idp admin-delete-user --user-pool-id $COGNITO_USER_POOL_ID --username $uname1
done
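Note that list-users returns at most 60 users per call, so on a large pool the loop must be rerun until it comes back empty. A minimal sketch of that outer loop, assuming bash and jq as above:
while true; do
    users=$(aws cognito-idp list-users --user-pool-id "$COGNITO_USER_POOL_ID" | jq -r '.Users[].Username')
    # stop once list-users returns no more users
    [ -z "$users" ] && break
    for u in $users; do
        aws cognito-idp admin-delete-user --user-pool-id "$COGNITO_USER_POOL_ID" --username "$u"
    done
done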

In order to speed up deletion, I modified @GRVPrasad's answer to use xargs -P, which farms deletions out to multiple processes.
aws cognito-idp list-users --user-pool-id $COGNITO_USER_POOL_ID | jq -r '.Users | .[] | .Username' | xargs -n 1 -P 5 -I % bash -c "echo Deleting %; aws cognito-idp admin-delete-user --user-pool-id $COGNITO_USER_POOL_ID --username %"
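Be aware that the Cognito admin APIs are throttled, so with a higher -P value you may start seeing TooManyRequestsException errors on large pools; lowering the parallelism (or retrying failures) helps.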

Here is a bash version based on @ajilpm's batch script:
# deleteAllUsers.sh
COGNITO_USER_POOL_ID=$1
aws cognito-idp list-users --user-pool-id $COGNITO_USER_POOL_ID |
jq -r '.Users | .[] | .Username' |
while read user; do
    aws cognito-idp admin-delete-user --user-pool-id $COGNITO_USER_POOL_ID --username $user
    echo "$user deleted"
done
You must have jq installed and remember to make the script executable: chmod +x deleteAllUsers.sh.
The user pool id can be provided as a command line argument: ./deleteAllUsers.sh COGNITO_USER_POOL_ID.

I created a script to do it from Windows CMD, assuming you have the AWS CLI installed and configured. It deletes users page by page, so you need to run it until all users are removed.
You need jq downloaded and its location added to the system PATH for the following to work.
---delete.bat---
@echo off
setlocal
for /f "delims=" %%I in ('aws cognito-idp list-users --user-pool-id %COGNITO_USER_POOL_ID% ^| jq -r ".Users | .[] | .Username"') do (
    aws cognito-idp admin-delete-user --user-pool-id %COGNITO_USER_POOL_ID% --username %%I
    echo %%I deleted
)
---delete.bat---

Sorry, I cannot add a comment. I had the same requirement, and a slight modification of the command mentioned by ajilpm worked on Windows 10 for me. You need to download jq.exe and keep it on the PATH for the command line.
---Start.bat---
@echo off
setlocal
for /f "delims=" %%I in ('aws cognito-idp list-users --user-pool-id us-west-2_O7rRBQ5rr --profile dev-hb ^| jq -r ".Users | .[] | .Username"') do (aws cognito-idp admin-delete-user --user-pool-id us-west-2_O7rRBQ5rr --username %%I --profile dev-hb)
---Start.bat---

With Python and boto3:
I use email as the username.
import boto3 as aws
import pandas as pd

client_cognito = aws.client('cognito-idp')
getProperties = pd.read_csv('CognitoUsers.csv', header=0)
usernames = getProperties['email']
for username in usernames:
    response = client_cognito.admin_delete_user(
        UserPoolId="us-east-1_xxxxxxxxx",
        Username=username,
    )
You need to be logged in to the AWS CLI with your AWS Access Key ID and AWS Secret Access Key.
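If the CLI is not configured yet, the interactive prompt is the quickest way (the profile name here is just an example):
aws configure --profile myprofile   # prompts for Access Key ID, Secret Access Key, region and output format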

Related

Why is the environment variable not working with the docker run command used in a Jenkins pipeline?

Below is my Deployment stage pipeline code.
stage('Deploy') {
    if (continueBuild) {
        println("Start Deployment");
        //Deploy step for liberty-web
        if ("${repo_name}" == 'enterprise-content-management/liberty-web') {
            if ("${deploy_env}" == "DEV") {
                def REACT_APP_CONFIGS = sh(script: "aws ssm get-parameter --region us-east-1 --name \"/liberty/config/liberty-web_dev/app.config\" | jq -r '.Parameter.Value'", returnStdout: true).trim().replaceAll('\n', '').replaceAll('\"', '\\\\"');
                def APP_SPECIFIC_CONFIG = sh(script: "aws ssm get-parameter --region us-east-1 --name \"/liberty/config/liberty-web_dev/app.appSpecificConfig\" | jq -r '.Parameter.Value'", returnStdout: true).trim().replaceAll('\n', '').replaceAll('\"', '\\\\"');
                print REACT_APP_CONFIGS
                print APP_SPECIFIC_CONFIG
                def CLOUDFRONT_DISTRIBUTION_ID = sh(script: "aws ssm get-parameter --region us-east-1 --name \"/liberty/config/liberty-web_dev/cloudfront.distribution.id\" | jq -r '.Parameter.Value'", returnStdout: true).trim()
                print CLOUDFRONT_DISTRIBUTION_ID
                def DEPLOYMENT_BUCKET = sh(script: "aws ssm get-parameter --region us-east-1 --name \"/liberty/config/liberty-web_dev/s3.bucket.name\" | jq -r '.Parameter.Value'", returnStdout: true).trim()
                print DEPLOYMENT_BUCKET
                writeFile file: 'build-web-dev.sh', text: "#!/usr/bin/env bash \n docker run --rm --env REACT_APP_CONFIGS=\"${REACT_APP_CONFIGS}\" --env APP_SPECIFIC_CONFIG=\"${APP_SPECIFIC_CONFIG}\" --name liberty-web -v /data/jenkins/workspace/liberty-web-deployment:/Project -w /Project node:12-alpine npm run build"
                sh 'cat build-web-dev.sh'
                sh 'bash build-web-dev.sh'
                sh "aws cloudfront create-invalidation --distribution-id ${CLOUDFRONT_DISTRIBUTION_ID} --paths \"/*\" && aws s3 sync build/ s3://${DEPLOYMENT_BUCKET}"
            }
        }
    }
}
This is a Node app. When I try to access the two env variables mentioned above (REACT_APP_CONFIGS, APP_SPECIFIC_CONFIG), only REACT_APP_CONFIGS works. The values of these params are stored in SSM in AWS. I tried putting the same value in both variables, but the result is the same. For example, in my Node app:
console.log(process.env.REACT_APP_CONFIGS) -> gives correct value
console.log(process.env.APP_SPECIFIC_CONFIG) -> undefined
What is the reason for this behaviour?

AWS sts assume role in one command

To assume an AWS role in the CLI, I do the following command:
aws sts assume-role --role-arn arn:aws:iam::123456789123:role/myAwesomeRole --role-session-name test --region eu-central-1
This gives me an output that follows this schema:
{
    "Credentials": {
        "AccessKeyId": "someAccessKeyId",
        "SecretAccessKey": "someSecretAccessKey",
        "SessionToken": "someSessionToken",
        "Expiration": "2020-08-04T06:52:13+00:00"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "idOfTheAssumedRole",
        "Arn": "theARNOfTheRoleIWantToAssume"
    }
}
And then I manually copy and paste the values of AccessKeyId, SecretAccessKey and SessionToken in a bunch of exports like this:
export AWS_ACCESS_KEY_ID="someAccessKeyId"
export AWS_SECRET_ACCESS_KEY="someSecretAccessKey"
export AWS_SESSION_TOKEN="someSessionToken"
To finally assume the role.
How can I do this in one go? I mean, without the manual intervention of copying and pasting the output of the aws sts ... command into the exports.
No jq, no eval, no multiple exports - using the printf built-in (i.e. no credential leakage through /proc) and command substitution:
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
$(aws sts assume-role \
--role-arn arn:aws:iam::123456789012:role/MyAssumedRole \
--role-session-name MySessionName \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
--output text))
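You can verify that the role was actually assumed afterwards:
aws sts get-caller-identity   # the Arn field should now show the assumed role session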
Finally, a colleague shared with me this awesome snippet that gets the work done in one go:
eval $(aws sts assume-role --role-arn arn:aws:iam::123456789123:role/myAwesomeRole --role-session-name test | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)\n"')
Apart from the AWS CLI, it only requires jq, which is usually available on any Linux desktop.
You can store an IAM Role as a profile in the AWS CLI and it will automatically assume the role for you.
Here is an example from Using an IAM role in the AWS CLI - AWS Command Line Interface:
[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
source_profile = user1
This is saying:
If a user specifies --profile marketingadmin
Then use the credentials of profile user1
To call AssumeRole on the specified role
This means you can simply call a command like this and it will assume the role and use the returned credentials automatically:
aws s3 ls --profile marketingadmin
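The profile can also be selected through the environment instead of --profile, so that every subsequent command assumes the role:
export AWS_PROFILE=marketingadmin
aws s3 ls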
Arcones's answer is good but here's a way that doesn't require jq:
eval $(aws sts assume-role \
--role-arn arn:aws:iam::012345678901:role/TrustedThirdParty \
--role-session-name=test \
--query 'join(``, [`export `, `AWS_ACCESS_KEY_ID=`,
Credentials.AccessKeyId, ` ; export `, `AWS_SECRET_ACCESS_KEY=`,
Credentials.SecretAccessKey, `; export `, `AWS_SESSION_TOKEN=`,
Credentials.SessionToken])' \
--output text)
I had the same problem and managed it using one of the runtimes available on my machine.
Once I obtained the credentials, I used this approach, even if it is not very elegant (I used the PHP runtime, but you could use whatever you have available in your CLI):
export AWS_ACCESS_KEY_ID=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->AccessKeyId;'`
export AWS_SECRET_ACCESS_KEY=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->SecretAccessKey;'`
export AWS_SESSION_TOKEN=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->SessionToken;'`
where credentials.json is the output of the assumed role:
aws sts assume-role --role-arn "arn-of-the-role" --role-session-name "arbitrary-session-name" > credentials.json
Obviously this is just one approach, particularly helpful if you are automating the process. It worked for me, but I don't know whether it's the best; it is certainly not the most direct.
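If jq is available, the same file-based approach works without the PHP runtime; a minimal equivalent:
export AWS_ACCESS_KEY_ID=$(jq -r .Credentials.AccessKeyId credentials.json)
export AWS_SECRET_ACCESS_KEY=$(jq -r .Credentials.SecretAccessKey credentials.json)
export AWS_SESSION_TOKEN=$(jq -r .Credentials.SessionToken credentials.json)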
You can use AWS config with an external credential source, following this guide: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html.
Create a shell script, for example assume-role.sh:
#!/bin/sh
aws sts --profile $2 assume-role --role-arn arn:aws:iam::123456789012:role/$1 \
--role-session-name test \
--query "Credentials" \
| jq --arg version 1 '. + {Version: $version|tonumber}'
In ~/.aws/config, configure profiles that use the shell script:
[profile desktop]
region=ap-southeast-1
output=json
[profile external-test]
credential_process = "/path/assume-role.sh" test desktop
[profile external-test2]
credential_process = "/path/assume-role.sh" test2 external-test
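Any command run with one of the external profiles then invokes the script transparently to obtain credentials, for example:
aws sts get-caller-identity --profile external-test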
In case anyone wants to use the credentials file for login:
#!/bin/bash
# Replace the variables with your own values
ROLE_ARN=<role_arn>
PROFILE=<profile_name>
REGION=<region>
# Assume the role
TEMP_CREDS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name "temp-session" --output json)
# Extract the necessary information from the response
ACCESS_KEY=$(echo $TEMP_CREDS | jq -r .Credentials.AccessKeyId)
SECRET_KEY=$(echo $TEMP_CREDS | jq -r .Credentials.SecretAccessKey)
SESSION_TOKEN=$(echo $TEMP_CREDS | jq -r .Credentials.SessionToken)
# Put the information into the AWS CLI credentials file
aws configure set aws_access_key_id "$ACCESS_KEY" --profile "$PROFILE"
aws configure set aws_secret_access_key "$SECRET_KEY" --profile "$PROFILE"
aws configure set aws_session_token "$SESSION_TOKEN" --profile "$PROFILE"
aws configure set region "$REGION" --profile "$PROFILE"
# Verify the changes have been made
aws configure list --profile "$PROFILE"
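From there the profile works like any other, for example:
aws s3 ls --profile "$PROFILE"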
Based on Nev Stokes's answer, if you want to add the credentials to a file using printf:
printf "
[ASSUME-ROLE]
aws_access_key_id = %s
aws_secret_access_key = %s
aws_session_token = %s
x_security_token_expires = %s" \
$(aws sts assume-role --role-arn "arn:aws:iam::<acct#>:role/<role-name>" \
--role-session-name <session-name> \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]" \
--output text) >> ~/.aws/credentials
If you prefer awk:
aws sts assume-role \
--role-arn "arn:aws:iam::<acct#>:role/<role-name>" \
--role-session-name <session-name> \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]" \
--output text | awk '
BEGIN {print "[ROLE-NAME]"}
{ print "aws_access_key_id = " $1 }
{ print "aws_secret_access_key = " $2 }
{ print "aws_session_token = " $3 }
{ print "x_security_token_expires = " $4}' >> ~/.aws/credentials
To update credentials in the ~/.aws/credentials file, run the sed command below before running one of the above commands (it deletes the profile header line and the four lines that follow it):
sed -i -e '/ROLE-NAME/,+4d' ~/.aws/credentials

Find and confirm unconfirmed users in Cognito using a bash script

I am trying to write a bash script to confirm all unconfirmed users in a Cognito User Pool. The documentation here says that I can use cognito:user_status to filter by state. So I wrote this code.
#!/bin/bash
USER_POOL_ID=pool_id
RUN=1
until [ $RUN -eq 0 ] ; do
    echo "Listing users"
    # Here is the problem after the --filter param. How should I query for the unconfirmed users?
    USERS=`aws --profile jaws-lap cognito-idp list-users --user-pool-id ${USER_POOL_ID} --filter 'cognito:user_status="unconfirmed"' | grep Username | awk -F: '{print $2}' | sed -e 's/\"//g' | sed -e 's/,//g'`
    if [ ! "x$USERS" = "x" ] ; then
        for user in $USERS; do
            echo "Confirming user $user"
            aws --profile jaws-lap cognito-idp admin-delete-user --user-pool-id ${USER_POOL_ID} --username ${user}
            echo "Result code: $?"
            echo "Done"
        done
    else
        echo "Done, no more users"
        RUN=0
    fi
done
The thing is that the --filter is not configured properly. How should I write the statement so I get the unconfirmed users?
Thanks.
This command worked for me:
aws cognito-idp list-users --user-pool-id xxx --filter 'cognito:user_status="CONFIRMED"'
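Note that the status value is uppercase. For the asker's actual goal, the filter value is UNCONFIRMED, and the call that confirms (rather than deletes) a user is admin-confirm-sign-up; a minimal sketch combining the two:
aws cognito-idp list-users --user-pool-id "$USER_POOL_ID" \
    --filter 'cognito:user_status="UNCONFIRMED"' \
    --query 'Users[].Username' --output text | tr '\t' '\n' |
while read -r user; do
    aws cognito-idp admin-confirm-sign-up --user-pool-id "$USER_POOL_ID" --username "$user"
done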

Copy docker image from one AWS ECR repo to another

We want to copy a docker image from a non-prod to a prod ECR account. Is it possible without pulling, retagging, and pushing it again?
No, you have to run these commands:
docker login OLD_REPO
docker pull OLD_REPO/IMAGE:TAG
docker tag OLD_REPO/IMAGE:TAG NEW_REPO/IMAGE:TAG
docker login NEW_REPO
docker push NEW_REPO/IMAGE:TAG
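For ECR specifically, the docker login step looks like this (account ID and region are placeholders):
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com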
I have written this program in Python to migrate all the images (or a specific image) from a repository to another region, or to another account in a different region:
https://gist.github.com/fabidick22/6a1962697357360f0d73e01950ae962b
Answer: No, you must pull, tag, and push.
I wrote a bash script for this today. You can specify the number of tagged images that will be copied.
https://gist.github.com/virtualbeck/a635ef6701991f2087384eab7edbb18b
A slight improvement (and maybe a couple of bug fixes) on this answer: https://stackoverflow.com/a/69905254/65706
set -e
################################# UPDATE THESE #################################
LAST_N_TAGS=10
SRC_AWS_REGION="us-east-1"
TGT_AWS_REGION="eu-central-1"
SRC_AWS_PROFILE="your_source_aws_profile"
TGT_AWS_PROFILE="your_target_aws_profile"
SRC_BASE_PATH="386151140899.dkr.ecr.$SRC_AWS_REGION.amazonaws.com"
TGT_BASE_PATH="036149202915.dkr.ecr.$TGT_AWS_REGION.amazonaws.com"
#################################################################################
URI=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryUri' --output text --region $SRC_AWS_REGION))
NAME=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryName' --output text --region $SRC_AWS_REGION))
echo "Start repo copy: `date`"
# source account login
aws --profile $SRC_AWS_PROFILE --region $SRC_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $SRC_BASE_PATH
# destination account login
aws --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $TGT_BASE_PATH
for i in ${!URI[@]}; do
    echo "====> Grabbing latest $LAST_N_TAGS from ${NAME[$i]} repo"
    # create ecr repo if one does not exist in destination account
    aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION describe-repositories --repository-names ${NAME[$i]} || aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION create-repository --repository-name ${NAME[$i]}
    for tag in $(aws ecr describe-images --repository-name ${NAME[$i]} \
        --profile $SRC_AWS_PROFILE --region $SRC_AWS_REGION \
        --query 'sort_by(imageDetails,& imagePushedAt)[*]' \
        --filter tagStatus=TAGGED --output text \
        | grep IMAGETAGS | awk '{print $2}' | tail -$LAST_N_TAGS); do
echo "START ::: pulling image ${URI[$i]}:$tag"
AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker pull ${URI[$i]}:$tag
AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker tag ${URI[$i]}:$tag $TGT_BASE_PATH/${NAME[$i]}:$tag
echo "STOP ::: pulling image ${URI[$i]}:$tag"
echo "START ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
# status=$(AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag)
# echo $status
AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag
echo "STOP ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
sleep 2
echo ""
done
# docker image prune -a -f #clean-up ALL the images on the system
done
echo "Finish repo copy: `date`"
echo "Don't forget to purge you local docker images!"
#Uncomment to delete all
#docker rmi $(for i in ${!NAME[#]}; do docker images | grep ${NAME[$i]} | tr -s ' ' | cut -d ' ' -f 3 | uniq; done) -f

AWS SSM Parameter Store

Is there any way to just nuke/remove all items in the AWS Parameter Store?
All the command-line options I found remove them either one by one or given a list of names.
I also tried using
aws ssm delete-parameters --cli-input-json test.json
where the test.json file looks like this:
{
    "Names": [
        "test1",
        "test2"
    ]
}
It still does not work.
Ideally, if I could use --query and use its output as is, that'd be great.
I'm using --query like so
aws ssm get-parameters-by-path --path / --max-items 2 --query 'Parameters[*].[Name]'
When you need to delete all parameters by path in AWS Systems Manager Parameter Store and there are more than 10 parameters, you have to deal with pagination.
Otherwise, the command will fail with the error:
An error occurred (ValidationException) when calling the DeleteParameters operation: 1 validation error detected: Value '[/config/application/prop1, ...]' at 'names' failed to satisfy constraint: Member must have length less than or equal to 10
The following Bash script deletes any number of parameters under a path from the AWS SSM Parameter Store. Since delete-parameters accepts at most 10 names at a time, and each pass deletes the page it just listed, it simply keeps deleting the first page of results until nothing is left:
#!/bin/bash
path=/config/application_dev/
while : ; do
    names=$(aws ssm get-parameters-by-path --path "$path" --query "Parameters[*].Name" --output text --max-items 10 | grep -v None)
    if [ -z "$names" ]; then
        break
    fi
    aws ssm delete-parameters --names $names
done
You can combine get-parameters-by-path with delete-parameters:
aws ssm delete-parameters --names `aws ssm get-parameters-by-path --path / --query Parameters[].Name --output text`
I tested it by creating two parameters, then running the above command. It successfully deleted my parameters.
Try this and execute it multiple times (--max-items 9 keeps each batch within the 10-name limit of delete-parameters):
aws ssm delete-parameters --names `aws ssm get-parameters-by-path --path / --recursive --query Parameters[].Name --output text --max-items 9`
Adding to the above: I had to delete around 400 params from the Parameter Store. I ran the below on the command line and it did it! (Change the 45 in the for loop to whatever number you like.)
for ((n=0;n<45;n++)); do
    aws ssm delete-parameters --names `aws ssm get-parameters-by-path --path / --recursive --query Parameters[].Name --output text --max-items 9`
done
This is my one-line solution for this:
$ for key in $(aws ssm get-parameters-by-path --path "/" --recursive | jq -r '.Parameters[] | .Name' | tr '\r\n' ' '); do aws ssm delete-parameter --name ${key}; done
NOTE: Be careful if you copy & paste this as it will remove everything under "/"