I'm trying to upload some files from Bitrise CI to an AWS S3 bucket.
When I configure AWS on my laptop, I have no problem:
$ aws configure
AWS Access Key ID [None]: MY_KEY
AWS Secret Access Key [None]: MY_ACCESS_KEY
Default region name [None]: MY_REGION_NAME
Default output format [None]:
My problem is how to supply MY_KEY, MY_ACCESS_KEY, MY_REGION_NAME, and an empty value to those prompts from a script.
I tried to cheat my way around it like this, but I wasn't successful:
echo "[default]" > ~/.aws/config
echo "aws_access_key_id = MY_KEY" >> ~/.aws/config
echo "aws_secret_access_key = MY_ACCESS_KEY" >> ~/.aws/config
echo "region = MY_REGION_NAME" >> ~/.aws/config
cat ~/.aws/config
I'm getting the following error:
echo '[default]'
/tmp/bitrise316130716/step_src/._script_cont: line 16: /root/.aws/config: No such file or directory
You don't have to write the configuration into a file; you can supply the credentials as environment variables:
export AWS_ACCESS_KEY_ID=..
export AWS_SECRET_ACCESS_KEY=..
export AWS_DEFAULT_REGION=..
You can check how we implemented this in our amazon-s3-upload step.
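For example, a minimal upload script using only environment variables could look like this (the bucket name and file path are placeholders, not part of the original step):
export AWS_ACCESS_KEY_ID=MY_KEY
export AWS_SECRET_ACCESS_KEY=MY_ACCESS_KEY
export AWS_DEFAULT_REGION=MY_REGION_NAME
# the CLI picks these variables up automatically; no config file is needed
aws s3 cp ./my-artifact.zip s3://my-bucket/my-artifact.zip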
Thanks to the answer https://stackoverflow.com/a/3804645/513413
I changed my code above to the following, and I'm now able to upload my files to S3:
yes Y | sudo apt-get install awscli
printf 'MY_KEY\nMY_ACCESS_KEY\nMY_REGION_NAME\njson' | aws configure
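If you'd rather not hard-code the values, the same printf trick works with an explicit format string and shell variables (the variable names here are just examples):
printf '%s\n%s\n%s\n%s\n' "$MY_KEY" "$MY_ACCESS_KEY" "$MY_REGION_NAME" "json" | aws configure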
Related
How can I configure a temporary AWS CLI user if I already have a default user in the .aws/ path? If I could create a temp user, I could test my task without interfering with the default user.
You can use a profile, as below:
$ aws ec2 describe-instances --profile user1
Have a look at the AWS documentation on named profiles.
You can add a temporary user as follows:
export AWS_ACCESS_KEY_ID=<your AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<your AWS_SECRET_ACCESS_KEY>
export AWS_REGION=<your AWS_REGION>
With these values set, you can verify the identity with aws sts get-caller-identity, which returns something like this:
{
    "Account": "2*********4",
    "UserId": "A*****************V",
    "Arn": "arn:aws:iam::275*******04:user/s3ba*****ser"
}
Once you are done, unset the variables:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_REGION
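If the temporary user is only needed for a single command, you can also prefix the command with the variables so they never enter your shell environment (values are placeholders):
AWS_ACCESS_KEY_ID=<temp key> AWS_SECRET_ACCESS_KEY=<temp secret> AWS_REGION=<region> aws sts get-caller-identity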
You can use AWS CLI named profiles:
Create a new profile called temp and provide your temporary CLI user's credentials:
$ aws configure --profile temp
AWS Access Key ID [None]: xxxxxxxxxxxxx
AWS Secret Access Key [None]: yyyyyyyyyyyyy
Default region name [None]: eu-central-1
Default output format [None]: json
Use the newly created profile:
$ aws s3 ls --profile temp
Specifying --profile temp with each AWS call is not that handy, so consider either of these:
Specify a profile in an environment variable called AWS_PROFILE:
$ export AWS_PROFILE=temp
$ aws s3 ls
$ export AWS_PROFILE=another-profile
$ aws s3 ls
Use a profile switcher tool like awsp. With it, switching profiles (users/roles) is as easy as:
$ awsp temp
$ aws s3 ls
$ awsp another-profile
$ aws s3 ls
The good thing about awsp is that it supports auto-completion, so you can easily switch between profiles without memorizing their names. If you want to check the current profile, use:
$ awswho
Name        Value                 Type                     Location
----        -----                 ----                     --------
profile     temp                  env                      ['AWS_PROFILE', 'AWS_DEFAULT_PROFILE']
access_key  ****************DHGY  shared-credentials-file
secret_key  ****************O2pq  shared-credentials-file
region      eu-central-1          config-file              ~/.aws/config
I'm trying to set up an Amazon AWS EC2 instance to talk to S3. The basic command is
aws configure
then follow the prompt to enter
AWS Access Key ID [None]: my-20-digit-id
AWS Secret Access Key [None]: my-40-digit-secret-key
Default region name [None]: us-east-1
Default output format [None]: text
However, what I really want is to have the command
aws configure
run automatically without interaction, i.e., with no prompts waiting for input.
I know there are files at
~/.aws/credentials
~/.aws/config
where I put those 4 key=value pairs. The "credentials" file looks like
[default]
aws_secret_access_key = my-40-digit-secret-key
aws_access_key_id = my-20-digit-id
while the "config" file looks like
[default]
region = us-east-1
output = text
However, even with those files in ~/.aws/, when I cd into ~/.aws/ and, at the command line, type and enter the command
aws configure
I still get the prompts asking me:
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
If I don't enter valid values at the prompts, I won't be able to connect to S3, for example via the command
aws s3 ls s3://mybucket
I turned to the Amazon AWS documentation pages for help. This page mentions the following as the first configuration option for aws configure:
"Command line options – region, output format and profile can be specified as command options to override default settings."
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
However, it doesn't mention how to use the command line options. I tried something like this:
aws configure --region us-east-1
but I still got
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
exactly as if I had not passed --region us-east-1 at all.
If I try to
aws configure --aws_access_key_id my-20-digit-id --aws_secret_access_key my-40-digit-secret-key --region us-east-1
I get this
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
How can I run the command
aws configure
automatically, no prompt, no interaction.
Please help! TIA
Edit, in response to helloV (since the formatting in the main post is much clearer than in a comment):
I tried the command helloV mentioned, but I got an error:
aws configure set aws_access_key_id my-20-digit-id
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument subcommand: Invalid choice, valid choices are:
Thanks though.
Continue on "aws configure set"
On another EC2 instance where I've already set connection to s3, I enter
aws configure set region us-east-1
which runs and returns to the command prompt.
aws configure set aws_access_key_id my-20-digit-id
which runs and returns to the command prompt.
aws configure set aws_secret_access_key my-40-digit-secret-key
which runs and returns to the command prompt.
aws configure
runs but comes up with prompts and waits for interaction:
AWS Access Key ID [****************ABCD]:
AWS Secret Access Key [****************1234]:
Default region name [us-east-1]:
Default output format [text]:
helloV:
here is what my screen looks like:
ubuntu@ip-11111:~/.aws$ more config
[default]
region = us-east-1
output = text
ubuntu@ip-11111:~/.aws$ more credentials
[default]
aws_secret_access_key = my-40-digit-secret-key
aws_access_key_id = my-20-digit-id
ubuntu@ip-11111:~/.aws$ aws s3 ls s3://
I got this
Unable to locate credentials. You can configure credentials by running "aws configure".
After this, I run
aws configure list
Name        Value      Type         Location
----        -----      ----         --------
profile     <not set>  None         None
access_key  <not set>  None         None
secret_key  <not set>  None         None
region      us-east-1  config_file  ~/.aws/config
It looks like it does not check the ~/.aws/credentials file, though the ~/.aws/config file does show up in the list.
These commands worked for me. If they don't work for you, try going through the interactive aws configure once first:
aws --profile default configure set aws_access_key_id "my-20-digit-id"
aws --profile default configure set aws_secret_access_key "my-40-digit-secret-key"
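For completeness, the region and output format can be set the same way (the values here are just examples):
aws --profile default configure set region us-east-1
aws --profile default configure set output text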
I finally figured it out. Use export, such as:
export AWS_ACCESS_KEY_ID=my-20-digit-id
export AWS_SECRET_ACCESS_KEY=my-40-digit-secret-key
export AWS_DEFAULT_REGION=us-east-1
then run
aws s3 ls s3://
would work. Don't run "aws configure" as others mentioned.
Thank you all.
You describe the file very well, so why not just create the file and put it in the right place? I just tried it... it's exactly the same as running aws configure.
UPDATE: You mention that you want to access S3 from an EC2 instance. In that case you shouldn't be using credentials at all; you should use IAM Roles instead.
The solution is that you actually don't have to run aws configure! After you run it for the first time and establish the credentials (~/.aws/credentials) and config (~/.aws/config), going forward you simply run the required aws command. I tested this with a cron job running an "aws s3 ls" command, and it worked without needing a configure step before it.
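For illustration, a crontab entry like the following runs without any configure step, assuming ~/.aws/credentials exists for the crontab's owner (the absolute path to aws is an assumption; check yours with which aws):
0 2 * * * /usr/local/bin/aws s3 ls s3://mybucket >> /tmp/s3-ls.log 2>&1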
Follow these commands:
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
$ aws configure set default.region us-west-2
or
aws configure set aws_access_key_id <key_id> && aws configure set aws_secret_access_key <key> && aws configure set default.region us-east-1
For more details, see this link:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/set.html
I use something like this:
aws configure --profile my-profile-name <<-EOF > /dev/null 2>&1
${AWS_ACCESS_KEY_ID}
${AWS_SECRET_ACCESS_KEY}
${AWS_REGION}
text
EOF
Also, to clean up after the automated process without removing the ~/.aws/ directory (since some other credentials might be stored there), I run:
aws configure --profile my-profile-name <<-EOF > /dev/null 2>&1
null
null
null
text
EOF
When I pull a clean Alpine Linux Docker image, install the AWS CLI on it, and try to authenticate with aws ecr get-authorization-token --region eu-central-1, I keep getting the following error:
An error occurred (UnrecognizedClientException) when calling the
GetAuthorizationToken operation: The security token included in the
request is invalid.
I've already checked the timezone, which seems to be okay, and the command works properly on my local machine.
These are the commands I run to set up aws-cli:
apk add --update python python-dev py-pip
pip install awscli --upgrade
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Is there something obvious I'm missing?
You don't have permission to access those resources until you give the AWS CLI credentials. For that, follow the steps below.
Log into your AWS account, click on your account name, select My Security Credentials, click on Access Keys, and download the credentials.
Then open PowerShell as administrator and run the following:
$ aws configure
AWS Access Key ID [****************E5TA]: xxxxxxxxxx
AWS Secret Access Key [****************7gNT]: xxxxxxxxxxxxxx
It was an access issue after all! Turns out that if you create a new IAM user with full admin access it can't by default access the ECR registry you created using a different account. Using the IAM credentials from that other account resolved the issue.
In my case, my ~/.aws/credentials file had an old aws_session_token that was not updated by the aws configure CLI command. Once I opened the file with vi ~/.aws/credentials and deleted the aws_session_token entry, I no longer encountered the UnrecognizedClientException. I'm guessing that when aws_session_token is present in ~/.aws/credentials, the AWS CLI gives it priority over the access key id and secret access key.
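If you'd rather remove the stale entry non-interactively, a one-liner like this should also work (GNU sed assumed; it edits the file in place, so back it up first):
sed -i '/aws_session_token/d' ~/.aws/credentials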
Create a new account with AmazonEC2ContainerRegistryFullAccess permission.
Add this account to the ~/.aws/credentials file like this:
[ecr-user]
aws_access_key_id = XXX
aws_secret_access_key = XXX
Then use the following command:
aws ecr get-login-password --profile ecr-user
What worked for me:
on the first part of the pipe, add the parameter --profile <your-profile-name>,
and after that, provide that parameter in every ECR command; see the example below.
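For example (the region, profile name, and account ID are placeholders):
aws ecr get-login-password --region eu-central-1 --profile ecr-user | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com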
My issue was caused by the fact that I had deactivated my access key in the AWS IAM Management Console earlier, as part of an exercise I was doing. Once I reactivated it, the problem was resolved.
(Make sure you're in the right AWS region, too.)
I had the same error message; however, I was using session-based AWS access. The solution is to add all the keys given by AWS, including the session token:
aws_access_key_id="your-key-id"
aws_secret_access_key="your-secret-access-key"
aws_session_token="your-session-token"
Add them to ~/.aws/credentials under the profile you are using.
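You can also write all three values non-interactively with aws configure set (the profile name here is a placeholder):
aws configure set aws_access_key_id "your-key-id" --profile my-session-profile
aws configure set aws_secret_access_key "your-secret-access-key" --profile my-session-profile
aws configure set aws_session_token "your-session-token" --profile my-session-profile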
After a couple of hours, this is my conclusion:
If you want to use AWS_PROFILE, make sure the rest of the AWS environment variables are unset (not merely empty; they must be unset).
# remember the desired profile, clear every AWS_* variable, then restore only the profile
profile=$AWS_PROFILE
unset $(printenv | grep AWS_ | cut -f1 -d"=")
export AWS_PROFILE=${profile}
Then:
# with AWS CLI 1.x (get-login was removed in 2.x)
$(aws ecr get-login --no-include-email --region ${aws_region})
# with AWS CLI 2.x
registry=${aws_account_id}.dkr.ecr.${aws_region}.amazonaws.com
aws ecr get-login-password --region ${aws_region} | docker login --username AWS --password-stdin ${registry}
Resolved the issue after following the steps below:
Go to the AWS IAM Management Console
Generate credentials in the section "Access keys (access key ID and secret access key)"
Run the command aws configure and enter the same downloaded credentials (they are stored in C:\Users\{user}\.aws\credentials)
It wasn't working for me. Out of sheer desperation, I copied the lines starting with export, pasted them into the terminal, and pressed Enter.
After that I ran aws configure and filled in the details from https://MYCOMPANY.awsapps.com/start#/ >> Account >> "Command line or programmatic access".
Default region name: eu-north-1
Default output format: text
And then the login succeeded. Don't ask me why.
Open the file ~/.aws/credentials (or C:\Users\{user}\.aws\credentials on Windows).
It might look something like the following:
[default]
aws_access_key_id = XXXXX
aws_secret_access_key = XXXXX
aws_session_token = XXXXX
Update the aws_access_key_id and aws_secret_access_key with new values and remove the aws_session_token. You can also update aws_access_key_id and aws_secret_access_key via the aws configure command, but this doesn't remove the session token.
Try running echo $varname to see if the environment variables are set correctly:
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_DEFAULT_REGION
If they are set incorrectly, run unset varname:
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_DEFAULT_REGION
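To list every AWS-related variable currently set in one go (just a convenience, not required):
printenv | grep '^AWS_'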
In my case, the region I wanted to use was not enabled. I addressed it by enabling the region at Account > AWS Regions > Enable (and waiting patiently for a few minutes).
Update: --profile must be added; that solved it for me.
Calling the following AWS CLI command triggers the shell to prompt for a series of values:
$ aws configure --profile profilename
AWS Access Key ID [None]:
etc....
Is there any way to specify the parameters inline? E.g.:
$ aws configure --profile profilename --access-key=foo --access-secret=goo --region=bar
Thanx in adv,
Michael McD
Sort of. You can't do them all at once (aws configure help will show you there are no such options), but you can do them one at a time.
From aws configure set help:
Given an empty config file, the following commands:
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
$ aws configure set default.region us-west-2
$ aws configure set default.ca_bundle /path/to/ca-bundle.pem
$ aws configure set region us-west-1 --profile testing
$ aws configure set profile.testing2.region eu-west-1
$ aws configure set preview.cloudsearch true
will produce the following config file:
[default]
region = us-west-2
ca_bundle = /path/to/ca-bundle.pem
[profile testing]
region = us-west-1
[profile testing2]
region = eu-west-1
[preview]
cloudsearch = true
and the following ~/.aws/credentials file:
[default]
aws_access_key_id = default_access_key
aws_secret_access_key = default_secret_key
Note that you could also set the credentials temporarily as environment variables when running other aws commands. If that's interesting to you, see the documentation. You can't just set them and run aws configure --profile profilename though -- this will still prompt you.
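Putting the pieces together for this question, a fully non-interactive setup of profilename might look like this (foo, goo, and bar are the placeholder values from the question):
aws configure set aws_access_key_id foo --profile profilename
aws configure set aws_secret_access_key goo --profile profilename
aws configure set region bar --profile profilename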
There are lots of postings about copying from S3 to GS, but I'm stumped on this one. All the authentication seems to be correct; what am I missing?
$ gsutil -m cp -r s3://ws-logs gs://ws-logs
AccessDeniedException: 403 AccessDenied
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>659EC2ADB59407E2</RequestId><HostId>mTnCaiVt0elcnN7eljLvmiwqPwQui6Hr/VPSrR3pKT1TpjQaqRo3ZKeNloa9QU9DjzTcQE+Fc1g=</HostId></Error>
CommandException: 1 file/object could not be transferred.
$ gsutil -m cp -r test.txt gs://ws-logs
Copying file://test.txt [Content-Type=text/plain]...
$ aws s3 ls ws-logs
PRE dt=2012-12-01/
[ lots more entries ]
So, as you can see, I can create an entry in my gs bucket, and I can list my S3 directories. Is there something else I'm missing?
UPDATE
It works when I log directly into a VM on Google Cloud:
$ gcloud compute --project "my-sample" ssh --zone "us-central1-f" "random-machine"
[ Login ]
$ aws configure
AWS Access Key ID [None]: <ENTERED KEY>
AWS Secret Access Key [None]: <ENTERED KEY>
Default region name [None]: us-west-2
Default output format [None]: json
$ gsutil -m cp -R s3://ws-log gs://ws-log
[... Everything Copying Correctly ...]
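For reference, gsutil can also read S3 credentials from its own ~/.boto file instead of the AWS CLI configuration; a minimal sketch of the relevant section (key names as documented by gsutil, values are placeholders):
[Credentials]
aws_access_key_id = <YOUR S3 KEY>
aws_secret_access_key = <YOUR S3 SECRET>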