I'm trying to create a new user with the AWS CLI (I put the command together with the CLI command builder):
aws cognito-idp admin-create-user --user-pool-id us-west-2_####### --username test --user-attributes Name="name",Value="demo" Name="email",Value="demo@mail.com" Name="family_name",Value="test" Name="birthdate",Value="01-01-1999" Name="gender",Value="female" Name="middle_name",Value="demo" --region ap-southeast-2 --output json
It fails with this error:
An error occurred (UserLambdaValidationException) when calling the AdminCreateUser operation: PreSignUp failed with error cannot unpack non-iterable NoneType object.
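Note: the error message indicates that the failure comes from the pool's Pre sign-up Lambda trigger rather than from admin-create-user itself. As a first diagnostic step (only a suggestion, reusing the same pool ID), you can check which Lambda triggers are attached to the pool:
aws cognito-idp describe-user-pool --user-pool-id us-west-2_####### --query 'UserPool.LambdaConfig'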
I can filter on email addresses that start with "john@":
aws cognito-idp list-users --user-pool-id my-pool --filter "email ^= \"john@\"" --limit 20
Is it possible to filter on ends with "@gmail.com"?
Not at the moment. You can follow this issue to see if they add this feature in the future: https://github.com/aws/aws-sdk-js/issues/3136
Yes it is possible.
If you use --query instead of --filter you can query anything in the resulting response using JMESPath.
So to filter on @gmail.com you can do:
aws cognito-idp list-users --user-pool-id my-pool --query 'Users[?Attributes[?Name==`email` && contains(Value, `gmail.com`)]]'
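If you only need specific fields from the matching users, the same JMESPath expression can be extended; for example, projecting just the usernames (the projection is only an illustration):
aws cognito-idp list-users --user-pool-id my-pool --query 'Users[?Attributes[?Name==`email` && contains(Value, `gmail.com`)]].Username'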
To assume an AWS role in the CLI, I do the following command:
aws sts assume-role --role-arn arn:aws:iam::123456789123:role/myAwesomeRole --role-session-name test --region eu-central-1
This gives me output following this schema:
{
    "Credentials": {
        "AccessKeyId": "someAccessKeyId",
        "SecretAccessKey": "someSecretAccessKey",
        "SessionToken": "someSessionToken",
        "Expiration": "2020-08-04T06:52:13+00:00"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "idOfTheAssumedRole",
        "Arn": "theARNOfTheRoleIWantToAssume"
    }
}
And then I manually copy and paste the values of AccessKeyId, SecretAccessKey and SessionToken into a bunch of exports like this:
export AWS_ACCESS_KEY_ID="someAccessKeyId"
export AWS_SECRET_ACCESS_KEY="someSecretAccessKey"
export AWS_SESSION_TOKEN="someSessionToken"
To finally assume the role.
How can I do this in one go, i.e. without manually copying and pasting the output of the aws sts ... command into the exports?
No jq, no eval, no multiple exports - using the printf built-in (i.e. no credential leakage through /proc) and command substitution:
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
$(aws sts assume-role \
--role-arn arn:aws:iam::123456789012:role/MyAssumedRole \
--role-session-name MySessionName \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
--output text))
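A quick way to verify that the exported credentials are in effect (just a sanity check, not part of the snippet above):
aws sts get-caller-identity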
Finally, a colleague shared with me this awesome snippet that gets the work done in one go:
eval $(aws sts assume-role --role-arn arn:aws:iam::123456789123:role/myAwesomeRole --role-session-name test | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)\n"')
Apart from the AWS CLI, it only requires jq, which is usually installed on any Linux desktop.
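If you later want to drop the temporary credentials and fall back to your default ones, one way is to unset the variables:
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN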
You can store an IAM Role as a profile in the AWS CLI and it will automatically assume the role for you.
Here is an example from Using an IAM role in the AWS CLI - AWS Command Line Interface:
[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
source_profile = user1
This is saying:
If a user specifies --profile marketingadmin
Then use the credentials of profile user1
To call AssumeRole on the specified role
This means you can simply call a command like this and it will assume the role and use the returned credentials automatically:
aws s3 ls --profile marketingadmin
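If you don't want to pass --profile on every command, you can also export it once for the shell session:
export AWS_PROFILE=marketingadmin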
Arcones's answer is good but here's a way that doesn't require jq:
eval $(aws sts assume-role \
--role-arn arn:aws:iam::012345678901:role/TrustedThirdParty \
--role-session-name=test \
--query 'join(``, [`export `, `AWS_ACCESS_KEY_ID=`,
Credentials.AccessKeyId, ` ; export `, `AWS_SECRET_ACCESS_KEY=`,
Credentials.SecretAccessKey, `; export `, `AWS_SESSION_TOKEN=`,
Credentials.SessionToken])' \
--output text)
I had the same problem and managed it using one of the runtimes available in my CLI environment.
Once I obtained the credentials, I used this approach, even if it's not very elegant (I used the PHP runtime, but you could use whatever you have available in your CLI):
export AWS_ACCESS_KEY_ID=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->AccessKeyId;'`
export AWS_SECRET_ACCESS_KEY=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->SecretAccessKey;'`
export AWS_SESSION_TOKEN=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->SessionToken;'`
where credentials.json is the output of the assumed role:
aws sts assume-role --role-arn "arn-of-the-role" --role-session-name "arbitrary-session-name" > credentials.json
Obviously this is just one approach, particularly helpful if you are automating the process. It worked for me, but I don't know if it's the best; it's certainly not the most straightforward.
You can configure the AWS CLI to source credentials from an external process, following this guide: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html.
Create a shell script, for example assume-role.sh:
#!/bin/sh
# Usage: assume-role.sh <role-name> <source-profile>
aws sts --profile "$2" assume-role --role-arn "arn:aws:iam::123456789012:role/$1" \
    --role-session-name test \
    --query "Credentials" \
    | jq --arg version 1 '. + {Version: $version|tonumber}'
Then, in ~/.aws/config, configure profiles that use the shell script:
[profile desktop]
region=ap-southeast-1
output=json
[profile external-test]
credential_process = "/path/assume-role.sh" test desktop
[profile external-test2]
credential_process = "/path/assume-role.sh" test2 external-test
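Any command run with one of these profiles will then invoke the script transparently; for example (the S3 listing is just an illustration):
aws s3 ls --profile external-test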
In case anyone wants to use credentials-file login:
#!/bin/bash
# Replace the variables with your own values
ROLE_ARN=<role_arn>
PROFILE=<profile_name>
REGION=<region>
# Assume the role
TEMP_CREDS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name "temp-session" --output json)
# Extract the necessary information from the response
ACCESS_KEY=$(echo "$TEMP_CREDS" | jq -r .Credentials.AccessKeyId)
SECRET_KEY=$(echo "$TEMP_CREDS" | jq -r .Credentials.SecretAccessKey)
SESSION_TOKEN=$(echo "$TEMP_CREDS" | jq -r .Credentials.SessionToken)
# Put the information into the AWS CLI credentials file
aws configure set aws_access_key_id "$ACCESS_KEY" --profile "$PROFILE"
aws configure set aws_secret_access_key "$SECRET_KEY" --profile "$PROFILE"
aws configure set aws_session_token "$SESSION_TOKEN" --profile "$PROFILE"
aws configure set region "$REGION" --profile "$PROFILE"
# Verify the changes have been made
aws configure list --profile "$PROFILE"
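After the script has run, you can confirm the assumed-role identity with the new profile (just a verification step):
aws sts get-caller-identity --profile <profile_name>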
Based on Nev Stokes's answer: if you want to add the credentials to a file using printf:
printf "
[ASSUME-ROLE]
aws_access_key_id = %s
aws_secret_access_key = %s
aws_session_token = %s
x_security_token_expires = %s" \
$(aws sts assume-role --role-arn "arn:aws:iam::<acct#>:role/<role-name>" \
--role-session-name <session-name> \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]" \
--output text) >> ~/.aws/credentials
If you prefer awk:
aws sts assume-role \
--role-arn "arn:aws:iam::<acct#>:role/<role-name>" \
--role-session-name <session-name> \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]" \
--output text | awk '
BEGIN {print "[ROLE-NAME]"}
{ print "aws_access_key_id = " $1 }
{ print "aws_secret_access_key = " $2 }
{ print "aws_session_token = " $3 }
{ print "x_security_token_expires = " $4}' >> ~/.aws/credentials
To update existing credentials in the ~/.aws/credentials file, run the sed command below before running one of the commands above.
sed -i -e '/ROLE-NAME/,+4d' ~/.aws/credentials
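For reference, the section appended to ~/.aws/credentials ends up looking roughly like this (placeholder values; the profile name is whichever you used above):
[ASSUME-ROLE]
aws_access_key_id = someAccessKeyId
aws_secret_access_key = someSecretAccessKey
aws_session_token = someSessionToken
x_security_token_expires = 2020-08-04T06:52:13+00:00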
I am trying to store a new secret in AWS Secrets Manager using the AWS CLI.
In the console I get the option to create an "Other type of secrets" under "Select secret type", where I can choose the plaintext option under "Specify the key/value pairs to be stored in this secret".
I want to do that using the CLI.
Below is the synopsis of the CLI command:
aws secretsmanager create-secret
--name <value>
[--client-request-token <value>]
[--description <value>]
[--kms-key-id <value>]
[--secret-binary <value>]
[--secret-string <value>]
[--tags <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
You can use the --secret-string option for this.
For key/value pairs you can pass a JSON-formatted string and it will show up as key/value pairs in the console:
aws secretsmanager create-secret --name my-secret-kv-pairs --secret-string '{"foo":"bar"}'
If you just want plain text you can do:
aws secretsmanager create-secret --name my-secret-just-text --secret-string 'My random string'
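To read a secret back later (for example, to double-check what was stored), you can use get-secret-value:
aws secretsmanager get-secret-value --secret-id my-secret-just-text --query SecretString --output text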
I was looking through the AWS docs at https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html as I needed to update the address for a user. According to the docs, the attribute follows the OpenID spec, which defines address as a JSON object. However, it errors on anything that is not a string.
I'm using the aws cli and calling it like so:
aws cognito-idp admin-update-user-attributes --user-pool-id my_user_pool --username a@b.com --user-attributes Name=address,Value={"street_address": "123 Fake Street","locality": "Somewhere","postal_code":"AA1 1AA"}
The following also doesn't work:
aws cognito-idp admin-update-user-attributes --user-pool-id my_user_pool --username a@b.com --user-attributes Name=address,Value="123 Fake Street, Somewhere"
Parameter validation failed:
Invalid type for parameter UserAttributes[0].Value, value: ['123 Fake Street', 'Somewhere'], type: <class 'list'>, valid types: <class 'str'>
Am I inputting something wrong, or are the AWS docs incorrect and only strings are allowed?
I just ran into this issue as well. I couldn't find a solution in any of the AWS documentation, but if you escape the comma it works:
aws cognito-idp admin-update-user-attributes --user-pool-id my_user_pool --username a@b.com --user-attributes Name=address,Value="123 Fake Street\, Somewhere"
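Alternatively (this is just a sketch, not part of the original answer), since Cognito stores the address attribute as a plain string, you can pass the OpenID-style JSON object serialized as a string and use the JSON form of --user-attributes, which avoids the shorthand comma parsing entirely:
aws cognito-idp admin-update-user-attributes --user-pool-id my_user_pool --username a@b.com --user-attributes '[{"Name":"address","Value":"{\"street_address\":\"123 Fake Street\",\"locality\":\"Somewhere\",\"postal_code\":\"AA1 1AA\"}"}]'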