How to update credentials file of aws cli? - amazon-web-services

My AWS credentials file looks like this:
vagrant@vagrant:~/.aws$ cat credentials
[default]
aws_access_key_id = *****************
aws_secret_access_key = ***************
[mysubaccount1]
role_arn = arn:aws:iam::**********:role/OrganizationAccountAccessRole
source_profile = default
[mysubaccount2]
role_arn = arn:aws:iam::**********:role/OrganizationAccountAccessRole
source_profile = default
I need to see what options I have to update it from here.
I tried the command below to list the existing profiles so I could build a condition around them, but it gives an invalid-syntax error.
vagrant@vagrant:~/.aws$ aws configure list-profiles
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
list | get
set | add-model
Are there any aws configure commands or a bash script you can suggest to take the details coming from the Terraform output (when we create the account there) and update or add a new profile such as mysubaccount3?

The command is
aws configure list
not
aws configure list-profiles
(aws configure list-profiles was only added in AWS CLI v2; the error output above suggests you are on an older CLI.)
Also, you can easily edit the .aws/credentials file with any text editor and add/remove/update any entry there.
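If you need to add or update a profile non-interactively (for example from a Terraform output), aws configure set can write the entries for you instead of editing the file by hand. A rough sketch, assuming Terraform 0.14+ for -raw and an output named role_arn for the new account (that output name is just a placeholder):
#!/bin/bash
# Sketch: add a new profile such as mysubaccount3 without editing the ~/.aws files by hand.
PROFILE=mysubaccount3
ROLE_ARN=$(terraform output -raw role_arn)   # placeholder output name; adjust to your module

aws configure set role_arn "$ROLE_ARN" --profile "$PROFILE"
aws configure set source_profile default --profile "$PROFILE"

# Note: the CLI writes non-credential keys like role_arn to ~/.aws/config
# (under [profile mysubaccount3]) rather than ~/.aws/credentials, but the
# profile behaves the same when you pass --profile mysubaccount3.
aws configure list --profile "$PROFILE"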

Related

aws transfer update-server is throwing errors

I am using Terraform to create an AWS SFTP server and trying to use IP whitelisting to secure my server.
Terraform's aws_transfer_server resource supports only the endpoint types PUBLIC or VPC_ENDPOINT at this time, so I am using a null_resource to execute an aws command that updates the SFTP server after it is created. The Terraform snippet is below:
resource "null_resource" "update_sftp_server" {
provisioner "local-exec" {
command = <<EOF
aws transfer update-server --server-id ${aws_transfer_server.sftp.id} --endpoint-type VPC --endpoint-details SubnetIds="${join("\", \"", var.subnet_ids)}", AddressAllocationIds="${join("\", \"", toset(aws_eip.nlb.*.id))}", VPCEndpointID="${aws_vpc_endpoint.transfer.id}", VpcId="${var.vpc_id}"
EOF
}
depends_on = [aws_transfer_server.sftp, aws_vpc_endpoint.transfer]
}
This executes the aws command below:
aws transfer update-server --server-id s-######## --endpoint-type VPC --endpoint-details SubnetIds="subnet-#####", "subnet-#####", AddressAllocationIds="eipalloc-######", "eipalloc-######", VPCEndpointID="vpce-#######", VpcId="vpc-#####"
But I am getting the error below:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: AddressAllocationIds=eipalloc-######, eipalloc-######, VPCEndpointID=vpce-######, VpcId=vpc-######, subnet-######
Can someone help me understand why this error is thrown? My environment details are below:
Terraform v0.12.28
provider.aws v3.0.0
provider.null v2.1.2
aws-cli/2.0.33 Python/3.7.7 Windows/10 botocore/2.0.0dev37
Have you tried building your argument list without spaces? So that it looks like SubnetIds="subnet-#####","subnet-#####",AddressAllocationIds="eipalloc-######","eipalloc-######",VPCEndpointID="vpce-#######",VpcId="vpc-#####" ?
Otherwise, when the command line is broken up into tokens, most of those pieces will not be parsed as part of the --endpoint-details argument.
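An alternative that sidesteps the tokenization problem entirely is to pass --endpoint-details as JSON, which the CLI accepts for structure parameters. A rough sketch of the raw command (member names follow the Transfer Family EndpointDetails structure, where the VPC endpoint member appears to be spelled VpcEndpointId rather than VPCEndpointID, so verify them against the current CLI reference):
aws transfer update-server --server-id s-######## --endpoint-type VPC \
  --endpoint-details '{"SubnetIds":["subnet-#####","subnet-#####"],"AddressAllocationIds":["eipalloc-######","eipalloc-######"],"VpcEndpointId":"vpce-#######","VpcId":"vpc-#####"}'
If you keep this inside the Terraform heredoc, the quoting would of course need to be adjusted accordingly.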

AWS Translate | Asynchronous Batch Processing | CLI | describe-text-translation-job not valid command

I have installed and configured the AWS CLI on both my Windows 10 machine and an AWS EC2 Linux machine, and I also have one AWS Translate batch job in the Frankfurt AWS region. I am following this document to initiate the batch translation process using the CLI: https://docs.aws.amazon.com/translate/latest/dg/translate-dg.pdf
Now, suppose I am using this sample command
aws translate describe-text-translation-job --job-id xxxxxxx
I am getting this error everywhere:
[ec2-user@ip-xx-xx-xx-xx ~]$ aws translate describe-text-translation-job --job-id xxxxxxx
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
delete-terminology | get-terminology
import-terminology | list-terminologies
translate-text | help
It only shows five valid choices other than help, but as per the documentation there should be more:
https://docs.aws.amazon.com/cli/latest/reference/translate/index.html#cli-aws-translate
Why am I not getting these options?
describe-text-translation-job
start-text-translation-job
stop-text-translation-job
list-text-translation-jobs
It was just an AWS CLI version problem; I updated the version and now I get all the required subcommands.
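For anyone who lands here with the same symptom, a quick way to check the installed version and, on Linux, move to AWS CLI v2 is a sketch like the following, based on AWS's documented v2 installer (x86_64 paths assumed):
aws --version
# Install/upgrade to AWS CLI v2 on Linux (x86_64), per the official installer:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
aws --version
# then retry: aws translate describe-text-translation-job --job-id xxxxxxx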

Unable to describe my keys from aws cli to aws accounts

Whenever I try to describe my keys to confirm that I could access my AWS accounts from the AWS cli - I see this error:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Sometimes I see this error:
Unable to locate credentials. You can configure credentials by running "aws configure".
Can someone figure out this issue?
You need to configure the aws cli to use credentials. There are a couple ways to do this.
aws configure command
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
Environment Variables
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
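Whichever approach you use, a quick sanity check is:
$ aws configure list
$ aws sts get-caller-identity
configure list shows where each value is being read from (environment, config file, or not set at all), and sts get-caller-identity only succeeds if the CLI actually found working credentials.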

How to run aws configure on an Amazon AWS EC2 instance automatically, without interaction or prompts?

I'm trying to set up an Amazon AWS EC2 instance to talk to S3. The basic command is
aws configure
then follow the prompt to enter
AWS Access Key ID [None]: my-20-digit-id
AWS Secret Access Key [None]: my-40-digit-secret-key
Default region name [None]: us-east-1
Default output format [None]: text
However, what I really want is to run the command
aws configure
automatically and without interaction, i.e., with no prompts waiting for input.
I know there are files at
~/.aws/credentials
~/.aws/config
where I put those 4 key=value pairs. And the "credentials" file looks like
[default]
aws_secret_access_key = my-40-digit-secret-key
aws_access_key_id = my-20-digit-id
while the "config" file looks like
[default]
region = us-east-1
output = text
However, even with those files in ~/.aws/, when I go into ~/.aws/ and at the command line type and enter the command
aws configure
I still get the prompts asking me
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
If I don't enter valid values at the prompts, I won't be able to connect to S3, for example via the command
aws s3 ls s3://mybucket
I turned to the Amazon AWS documentation pages for help. This page mentions the option
"Command line options – region, output format and profile can be specified as command options to override default settings."
as the first option for aws configure
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
However, it didn't mention how to use the command line options. I tried something like this
aws configure --region us-east-1
but I still got
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
exactly as if I had not passed the option "--region us-east-1" at all.
If I try to
aws configure --aws_access_key_id my-20-digit-id --aws_secret_access_key my-40-digit-secret-key --region us-east-1
I get this
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
How can I run the command
aws configure
automatically, with no prompt and no interaction?
Please help! TIA
Edit, in response to helloV (since the formatting in the main post is much clearer than in a comment):
I tried the command helloV mentioned, but I got an error:
aws configure set aws_access_key_id my-20-digit-id
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
Thanks though.
Continuing with "aws configure set":
On another EC2 instance where I've already set up the connection to S3, I enter
aws configure set region us-east-1
runs and returns to command prompt ">"
aws configure set aws_access_key_id my-20-digit-id
runs and returns to command prompt ">"
aws configure set aws_secret_access_key my-40-digit-secret-key
runs and returns to command prompt ">"
aws configure
runs but comes with prompts and waits for interaction
AWS Access Key ID [****************ABCD]:
AWS Secret Access Key [****************1234]:
Default region name [us-east-1]:
Default output format [text]:
helloV:
here is what my screen looks like:
ubuntu@ip-11111:~/.aws$ more config
[default]
region = us-east-1
output = text
ubuntu@ip-11111:~/.aws$ more credentials
[default]
aws_secret_access_key = my-40-digit-secret-key
aws_access_key_id = my-20-digit-id
ubuntu@ip-11111:~/.aws$ aws s3 ls s3://
I got this
Unable to locate credentials. You can configure credentials by running "aws configure".
After this, I run
aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region us-east-1 config_file ~/.aws/config
It looks like it does not check the ~/.aws/credentials file, but the ~/.aws/config file does show up in the list.
These commands worked for me. If this doesn't work for you, try running aws configure interactively the first time.
aws --profile default configure set aws_access_key_id "my-20-digit-id"
aws --profile default configure set aws_secret_access_key "my-40-digit-secret-key"
I finally figured it out. Use export, such as
export AWS_ACCESS_KEY_ID=my-20-digit-id
export AWS_SECRET_ACCESS_KEY=my-40-digit-secret-key
export AWS_DEFAULT_REGION=us-east-1
then run
aws s3 ls s3://
and it would work. Don't run "aws configure", as others mentioned.
Thank you all.
You describe the file very well. Why not just create a file and put it in the right place? I just tried... it's exactly the same as running aws configure
UPDATE: You mention that you want to access S3 from an EC2 instance. In this case you shouldn't be using credentials at all. You should use roles instead.
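If you go the role route, a rough sketch of attaching an existing instance profile to a running instance and confirming the CLI picks it up (the instance ID and the profile name my-s3-role are placeholders):
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=my-s3-role
# On the instance itself, no ~/.aws/credentials file is needed afterwards:
aws sts get-caller-identity
aws s3 ls s3://mybucket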
The solution is that you actually don't have to run aws configure! After you run it for the first time and establish the credentials (~/.aws/credentials) and config (~/.aws/config) files, going forward you simply run the required aws command. I tested this with a cron job running an "aws s3 ls" command, and it worked without having to provide a configure command before it.
Follow these commands:
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
$ aws configure set default.region us-west-2
or
aws configure set aws_access_key_id <key_id> && aws configure set aws_secret_access_key <key> && aws configure set default.region us-east-1
For more details, see this link:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/set.html
I use something like this:
aws configure --profile my-profile-name <<-EOF > /dev/null 2>&1
${AWS_ACCESS_KEY_ID}
${AWS_SECRET_ACCESS_KEY}
${AWS_REGION}
text
EOF
Also, to clean up after the automated process without removing the ~/.aws/ directory (since some other credentials might be stored there), I run:
aws configure --profile my-profile-name <<-EOF > /dev/null 2>&1
null
null
null
text
EOF

NoSuchBucket error when running Kubernetes on AWS

Downloaded Kubernetes 1.1.8 from:
https://github.com/kubernetes/kubernetes/releases/download/v1.1.8/kubernetes.tar.gz
Followed the instructions at:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/aws.md
And got the following error:
kubernetes-1.1.8 > ./kubernetes/cluster/kube-up.sh
... Starting cluster using provider: aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: vivid
Uploading to Amazon S3
Creating kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149
make_bucket: s3://kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/
A client error (NoSuchBucket) occurred when calling the GetBucketLocation operation: The specified bucket does not exist
+++ Staging server tars to S3 Storage: kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/devel
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --region: expected one argument
AWS Console showed that the bucket was created but was empty.
It's probably a region issue; I'm guessing that the bucket is created in a different region from the one Kubernetes tries to access.
It looks like the aws command-line tool is confused about the region:
aws: error: argument --region: expected one argument
When it can't determine the region, it defaults to one of the us regions.
EDIT: the S3 sync is triggered by the script cluster/aws/util.sh.
The command executed is aws s3 sync --region ${s3_bucket_location} --exact-timestamps ${local_dir} "s3://${AWS_S3_BUCKET}/${staging_path}/".
You can add an echo ${s3_bucket_location} before the line above. It should give you more information on what the region is set to.
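One way to confirm the mismatch (a sketch using the bucket name from the output above) is to ask S3 where the bucket actually lives and compare that with whatever ${s3_bucket_location} expands to:
aws s3api get-bucket-location --bucket kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149
# An empty or null LocationConstraint in the response means the bucket is in us-east-1.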