I have installed the latest versions of the AWS CLI v2 and Docker, and I have run "aws configure" and entered my access key and secret key. I have also verified that ~/.aws/config is correct and shows the right region and output format. My credentials in AWS are admin. I keep getting the following error:
'''Unable to locate credentials. You can configure credentials by running "aws configure".
Error: Cannot perform an interactive login from a non TTY device'''
This happens even though I have already run 'aws configure'. I am running the commands prefixed with 'sudo' as well. Any thoughts? Thank you for your time!
The aws configure command was being run as your local user, whereas the ecr command was being run with sudo.
If you run commands with sudo, they will not have access to your local user's config; they will instead default to the root user's.
Instead, ensure all commands are run as the same user.
Alternatively, if you want the CLI to use your user's config file even when it is not in the default location for the invoking user, you can specify its location via the AWS_CONFIG_FILE environment variable.
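For example, a minimal sketch of the ECR login flow from the question; the region, account ID, and home directory below are placeholders, not values from the question:

# Run both steps as the same (non-root) user:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Or, if sudo is unavoidable, point the CLI at your user's files explicitly:
sudo AWS_CONFIG_FILE=/home/youruser/.aws/config \
     AWS_SHARED_CREDENTIALS_FILE=/home/youruser/.aws/credentials \
     aws ecr get-login-password --region us-east-1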
I had an AWS account configured to work with the CLI. The free tier expired, so I set up another account. I created an IAM user, ran aws configure, and put in the credentials for that user. I have the default profile set up with that user's credentials as well.
From the CLI, if I run the command aws s3 ls it always shows the buckets from the old account. If I specify the profile using aws s3 ls --profile GrantM, then it lists the buckets from the correct account and IAM user.
The environment variables are set to the new user also. Can someone explain this, and how do I switch it to use my new account?
create or edit this file:
% vim ~/.aws/credentials
list as many access key/secret key pairs as you like:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
set an environment variable to select the pair of keys you want to use:
% export AWS_PROFILE=user1
do what you like:
aws s3api list-buckets # any aws cli command now using user1 pair of keys
more details:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
When you use aws configure without any additional arguments, it lets you amend the default profile, which is the one that is accessed when you specify no profile. By amending this you will not need to specify the --profile flag.
If you would like to amend a named profile instead, simply use aws configure --profile $PROFILE_NAME, which, just as above, replaces the credentials currently stored for that profile.
Alternatively, on Linux/macOS you can access your credentials in ~/.aws/credentials, or on Windows in %USERPROFILE%\.aws\credentials. You can modify these files to replace any values.
More information is available on the Named profiles documentation page.
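For example, updating the named profile from the question above looks something like this (since the profile already exists, the bracketed hints show the currently stored values, masked; the region and output shown are placeholders, and pressing Enter keeps a value):

% aws configure --profile GrantM
AWS Access Key ID [****************MPLE]:
AWS Secret Access Key [****************EKEY]:
Default region name [us-east-1]:
Default output format [json]: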
I would not mix environment variables and credentials profiles; you'll just get confused.
Remove the environment variables, ensure that the default profile in your ~/.aws/credentials file (or %USERPROFILE%\.aws\credentials on Windows) is set to the new credentials, then run aws s3 ls. If it's not what you expected, then run aws s3 ls --debug to work out what you did wrong.
According to Credentials — Boto 3 Docs documentation, the Environment Variables will be used in preference to the configuration files.
Therefore, I suggest you remove the credentials from your Environment Variables, and just use the configuration files.
Depending upon your operating system, you could use unset, or remove them from wherever you put them in the Environment Variables.
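For example, on Linux/macOS (a sketch; aws sts get-caller-identity simply reports which identity the CLI ends up resolving):

# Clear any credential-related variables from the current shell:
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_PROFILE

# Confirm which account/user the CLI now authenticates as:
aws sts get-caller-identity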
Run this in the terminal where you will be running the CLI commands:
export AWS_PROFILE='PROFILE_NAME'
Move this into your bashrc/zshrc file to make it permanent, or just add a default section to the .aws/config and .aws/credentials files. Run the following command and input the credentials you want:
aws configure
This works on macOS and Windows.
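For reference, note that in ~/.aws/config (unlike the credentials file) named profiles use a [profile NAME] header. A sketch with placeholder regions:

[default]
region = us-east-1
output = json

[profile user1]
region = eu-west-1
output = json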
I would like to use gsutil as a command in Ansible (2.5.X).
On the managed server I have already set up Cloud access (service account).
When I use gsutil on the machine, it works without problems.
But when I create a playbook on my management machine and try to run an SDK command, I have no access to the cloud and get permission denied errors.
I suspect that the SSH connection and environment are handled in a specific way by Ansible. Could someone help me with how to use SDK commands in Ansible?
- name: use ansible command
  command: >
    gsutil list gs://project.something.com
I know that there is a gs_storage module. But I do not know
where to look for gs_access_key in an already configured setup.
In .config/gcloud? I'm still learning the Cloud, so some of these
things are new to me. The Cloud access was set up using a .json key,
but I deleted that key from the managed machine afterwards (it shouldn't be left exposed).
Best Regards
Kamil
gsutil list would at least require the Viewer role assigned to the instance service account (or roles/storage.objectViewer, in case it should also be able to get files from a bucket). Providing Credentials as Module Parameters shows how to authenticate with the gcp_compute_instance module; also see the Cloud Storage IAM Roles and Cloud Storage Authentication (the scopes).
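Since Ansible logs in over SSH as a particular user, whose gcloud configuration may differ from the one you tested with interactively, it is also worth checking what that user actually sees on the managed host. A diagnostic sketch, not part of the roles discussion above:

# Run these as the same user Ansible connects with:
gcloud auth list                          # which accounts are known, and which is active
gcloud config list account                # the account the SDK will use
gsutil list gs://project.something.com    # reproduce the playbook's command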
My aim is to launch an instance such that a start-up script is triggered on boot to download some configuration files stored in AWS S3. Therefore, in the start-up script, I set the S3 bucket details and then trigger a config.sh in which aws s3 sync does the actual download. However, the aws command does not work: it is not found when the script executes.
User data
I have the following user data when launching an EC2 instance:
#!/bin/bash
# Set command from https://stackoverflow.com/a/34206311/919480
set -e -x
export S3_PREFIX='bucket-name/folder-name'
/home/ubuntu/app/sh/config.sh
The AWS CLI was installed with pip as described in the documentation.
Observation
I think the user data script is run with the root user ID. That is why, in the user data, I have /home/ubuntu/: $HOME did not resolve to /home/ubuntu/. In fact, the first command in config.sh is mkdir /home/ubuntu/csv, which creates a directory whose owner is root!
So, would it be right to conclude that the user data runs under the root user ID?
Resolution
Should I use the REST API to download instead?
Scripts entered as user data are executed as the root user, so do not use the sudo command in the script.
See: Running Commands on Your Linux Instance at Launch
One solution is to set the PATH environment variable to include the AWS CLI (and any other required paths) before executing the AWS CLI.
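For example, a sketch of the user data with that fix, assuming the CLI was installed for the ubuntu user with pip install --user and therefore lives in /home/ubuntu/.local/bin:

#!/bin/bash
set -e -x
# Make the user-local pip install location visible to root's script:
export PATH="$PATH:/home/ubuntu/.local/bin"
export S3_PREFIX='bucket-name/folder-name'
/home/ubuntu/app/sh/config.sh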
Solution
Given that the AWS CLI was installed without sudo pip, the CLI is not available to root. Therefore, to run it as the ubuntu user, I used the following user data script:
#!/bin/bash
su ubuntu -c '$HOME/app/sh/config.sh default'
In config.sh, the argument default is used to build the full S3 URI before invoking the CLI. However, the invocation was successful only with the full path $HOME/.local/bin/aws, despite the fact that aws can be accessed from a normal login shell.
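Inside config.sh, the call presumably ends up looking something like the sketch below; exactly how the default argument is turned into the S3 URI is my assumption, not stated in the post:

#!/bin/bash
# config.sh (sketch): non-login shells do not source ~/.profile, so
# ~/.local/bin is not on PATH and the CLI must be invoked by full path.
S3_PREFIX="bucket-name/$1"        # build the URI from the argument (assumption)
mkdir -p "$HOME/csv"
"$HOME/.local/bin/aws" s3 sync "s3://$S3_PREFIX" "$HOME/csv"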
I had a Jenkins 2.46 installation running on an EC2 box, associated to an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. use terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command from the instance's bash as the jenkins user still works fine (e.g. running sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists bucket files; running the same command in Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but I did not find anything relevant.
Jenkins was installed and updated through yum; the aws cli was installed using the bundled installer provided by AWS.
I'm trying to create a reusable delegation set to use as whitelisted nameservers for my domains, using the aws cli on Mac OS X. My AWS credentials (those of an IAM profile I created for that purpose, with full administrator privileges and the location set to us-east-1) were correctly entered during setup and accepted by the system.
When entering the command
$ aws route53 create-reusable-delegation-set --caller-reference [CALLER-REFERENCE] --hosted-zone-id [HOSTED_ZONE] --generate-cli-skeleton
the command succeeds (note that --generate-cli-skeleton only prints a request template and does not actually call the API) and I get the output:
{
    "CallerReference": "",
    "HostedZoneId": ""
}
But when I remove --generate-cli-skeleton and enter
aws route53 create-reusable-delegation-set --caller-reference [CALLER-REFERENCE] --hosted-zone-id [HOSTED_ZONE]
I get this:
An error occurred (InvalidClientTokenId) when calling the CreateReusableDelegationSet operation: The security token included in the request is invalid.
In reality, my IAM credentials, despite being valid and despite the profile I am using (donaldjenkins) having full administrator privileges, are systematically refused across all AWS services and for all commands, not just Route53.
I've been unable to pinpoint the cause of this despite extensive research. Any suggestions gratefully received.
Deleting my credentials file (Linux, macOS, or Unix: ~/.aws; Windows: %UserProfile%\.aws) and then running aws configure again worked for me.
The solution is to delete the existing credentials for the IAM user and issue new ones. For some reason the credentials recorded during the initial setup of the aws cli never worked properly, but overwriting them with new ones removed the issue instantly.
I had the same exact issue.
I'm running NodeJS in my local environment, and trying to deploy to Amazon using CodeDeploy and some other AWS tools.
What worked for me was to delete the current config and credentials files, regenerate a new key, and use that. This was after I originally installed the aws cli and added the keys; I had to add the keys again.
Depending on your folder structure, navigate to your home directory.
On a Mac, if you open a new terminal, it should show your current home directory: "/Users/YOURNAME"
cd .aws
rm -rf config
rm -rf credentials
After you do this, go back to your home directory, then run:
"aws configure".
Enter your Key and secret key.
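To double-check which credentials the CLI is now picking up, and from where, you can run:

aws configure list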
You can find more details under Quickly Configuring the AWS CLI here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration