Cannot read credentials from /.aws/credentials

I am trying to integrate the AWS PHP SDK with CodeIgniter, but it throws the following error:
An uncaught Exception was encountered
Type: Aws\Exception\CredentialsException
Message: Cannot read credentials from /.aws/credentials
Filename: /var/www/html/aws/Aws/Credentials/CredentialProvider.php
On the CLI I get this error:
-bash: /root/.aws/credentials: Permission denied
After I fixed the permissions, the CLI error went away, but the PHP error "Cannot read credentials from /.aws/credentials" remains.
Please help me solve this issue.
Thanks!

If you are using an IAM role attached to the EC2 instance, then there is no need for the following line:
'profile'=>'default',
I just removed the line above, which solved the error "Cannot read credentials from /.aws/credentials".
Issue using an IAM role with PHP SDK

When running code on another AWS service, you do not work with key and secret, as you would on your local machine. Take a look at the answer I gave on another question.
Basically, your EC2 instance is assigned a service role. Then you would attach one or more IAM policies to that role. The IAM policies will determine what AWS resources and actions your EC2 instance can access.
In your PHP code you would instantiate your client using CredentialProvider::defaultProvider(). If you were working with S3, for example, it would look like this:
use Aws\S3\S3Client;
use Aws\Credentials\CredentialProvider;

$s3 = new S3Client([
    'region'      => 'us-east-1',
    'credentials' => CredentialProvider::defaultProvider()
]);

When PHP is running under a service there is no "user", so PHP will not attempt to access /root/.aws/credentials. If you review the error, the path it is looking for is /.aws/credentials.
To solve this problem, create a new directory /.aws and copy the contents of /root/.aws into it.
Improvement:
You have installed the PHP SDK inside your website root folder, which makes its files accessible externally. SDKs should be installed outside of your web-accessible folders.

Related

Boto s3 permission issue

I've come across a very weird permission issue. I'm trying to upload a file to S3; here's my function:
import boto3

def UploadFile(FileName, S3FileName):
    session = boto3.session.Session()
    s3 = session.resource('s3')
    s3.meta.client.upload_file(FileName, "MyBucketName", S3FileName)
I did configure the AWS CLI on the server. This function works fine when I log into the server and run it from the Python interpreter, but it fails when called from my Django REST API with:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
No idea why the same function works when called from the interpreter and fails when called from Django. Both run in the same virtual environment. Any suggestions?
According to the boto3 docs, boto3 is looking for credentials in the following places:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
Note that many of these places are paths with "~" in them. "~" refers to the current user's home directory. Most likely, your REST API is running under a different system user than you are using to test your code.
The proper solution is to use IAM roles, as this allows your server to have S3 access without you needing to give it IAM credentials. However, if that doesn't work for your setup, you should put the IAM credentials in the /etc/boto.cfg file as that is user agnostic.
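A quick way to see which of these sources boto3 actually picked is to ask the session directly. Here is a minimal diagnostic sketch; run it both from your interactive shell and from inside the Django process to compare the two environments:
import boto3

# Ask boto3 which of the sources above it actually resolved.
# A mismatch between the shell and the Django process confirms
# the two are running as different users.
session = boto3.session.Session()
creds = session.get_credentials()
if creds is None:
    print("no credentials found")
else:
    # method is e.g. 'env', 'shared-credentials-file', or 'iam-role'
    print("source:", creds.method, "key ending:", creds.access_key[-4:])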

CLI command "describe-instances" throws error "An error occurred (AuthFailure) when calling the DescribeInstances operation"

I was able to install the CLI on a Windows Server 2016 AWS instance. When I try the "aws ec2 describe-instances" CLI command, I get the following error:
CLI command "describe-instances" throw error "An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS was not able to validate the provided access credentials"
In the .aws\config file I have the following content:
[default]
region = us-west-2
How can authorization fail when it took my access key ID and secret access key without any issue?
Verify that your system clock is in sync; AWS request signatures are time-sensitive, so clock skew can cause AuthFailure.
Use: ntpdate ntp.server
I deleted the two configuration files from the .aws directory and re-ran "aws configure".
That fixed the problem for me.
My steps:
Go to your .aws directory under Users, e.g. "c:\Users\Joe\.aws".
There are two files: config and credentials. Delete both files.
Rerun configure: "aws configure"
Note that when you run aws configure you will need the AWS access key and secret key. If you don't have them, you can just create new ones.
Steps:
Go to "My Security Credentials" under your account name in the AWS Console.
Expand the Access Keys panel.
Create a new access key.
When you first ran aws configure, it just populated the local credentials in %UserProfile%\.aws\credentials; it didn't validate them with AWS.
(aws-cli doesn't know what rights your user has until it tries to do an operation -- all of the access control happens on AWS's end. It just tries to do what you ask, and tells you if it doesn't have access, like you saw.)
That said, if you're running the CLI from an AWS instance, you might want to consider applying a role to that instance, so you don't have to store your keys on the instance.
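As a quick sanity check that actually round-trips to AWS (unlike aws configure), you can call STS. Here is a minimal boto3 sketch, equivalent to running aws sts get-caller-identity on the CLI:
import boto3

# Unlike `aws configure`, this call actually contacts AWS and fails
# immediately if the stored keys are invalid or inactive.
sts = boto3.client('sts')
identity = sts.get_caller_identity()
print(identity['Account'], identity['Arn'])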
My access and secret keys are correct, and my server time was fine. I got the error while using the ap-south-1 region; after I changed my region to us-west-2, it worked without any problem.
I tried setting that too in my Windows environment; it didn't work and I kept getting the error above.
So I set my environment variables:
SET AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
SET AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
and then tried running a command like "aws ec2 describe-instances".
I tried many things. Finally, just uninstalling and installing again (not repairing) did the trick. Just make sure to save a copy of your credentials (key and key ID) to use later when calling aws configure.

The AWS Access Key Id does not exist in our records

I created a new Access Key and configured that in the AWS CLI with aws configure. It created the .ini file in ~/.aws/config. When I run aws s3 ls it gives:
A client error (InvalidAccessKeyId) occurred when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
The AmazonS3FullAccess policy is also attached to the user. How can I fix this?
It might be that you still have old keys exported via environment variables (e.g. in your bash_profile); since environment variables take precedence over the credentials file, you get the error "the access key id does not exist".
Remove the old keys from your bash_profile and you should be good to go.
This happened to me once when I forgot I had credentials in my bash_profile; it gave me a headache for quite some time :)
It looks like values have already been set for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
If so, you will see them when executing the commands below.
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_ACCESS_KEY_ID
If you are using aws configure, you need to unset these variables.
To do so, execute the commands below.
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
You may need to add aws_session_token to the credentials file, along with aws_access_key_id and aws_secret_access_key (this applies when you are using temporary credentials).
None of the up-voted answers worked for me. Finally I passed the credentials inside the Python script, using the client API:
import boto3

client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN)
Please note that the aws_session_token argument is optional. Hard-coding credentials is not recommended for anything public, but it makes life easier for a simple trial.
For me, I was relying on IAM EC2 roles to give our machines access to specific resources.
I didn't even know there was a credentials file at ~/.aws/credentials until I rotated/removed some of our access keys in the IAM console to tighten our security, and that suddenly made one of the scripts stop working on a single machine.
Deleting that credentials file fixed it for me.
I made the mistake of setting my variables with quotation marks like this:
AWS_ACCESS_KEY_ID="..."
You may have configured AWS credentials correctly, but using these credentials, you may be connecting to some specific S3 endpoint (as was the case with me).
Instead of using:
aws s3 ls
try using:
aws --endpoint-url=https://<your_s3_endpoint_url> s3 ls
Hope this helps those facing a similar problem.
You can configure profiles in the ~/.aws/credentials file like this:
[<profile_name>]
aws_access_key_id = <access_key>
aws_secret_access_key = <access_key_secret>
If you are using multiple profiles, then use:
aws s3 ls --profile <profile_name>
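The same profiles also work from code. Here is a minimal boto3 sketch, where 'myprofile' stands in for your own profile name:
import boto3

# 'myprofile' is a placeholder for a profile name from your
# ~/.aws/credentials file; omit profile_name to use [default].
session = boto3.session.Session(profile_name='myprofile')
s3 = session.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])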
You may need to set the AWS_DEFAULT_REGION environment variable.
In my case, I was trying to provision a new bucket in Hong Kong region, which is not enabled by default, according to this:
https://docs.aws.amazon.com/general/latest/gr/s3.html
It's not directly related to the OP's question, but it is related to the topic, so in case anyone else like me gets trapped by this edge case:
I had to enable that region manually, before operating on that AWS s3 region, following this guide: https://docs.aws.amazon.com/general/latest/gr/rande-manage.html
I have been looking for information about this problem and I found this post. I know it is old, but I would like to leave a note in case anyone else has problems.
I installed the AWS CLI and opened it: it seems that you need to run aws configure to add the current credentials. Once that was changed, I could access it.
Looks like ~/.aws/credentials was not created. Try creating it manually with this content:
[default]
aws_access_key_id = sdfesdwedwedwrdf
aws_secret_access_key = wedfwedwerf3erfweaefdaefafefqaewfqewfqw
(On my test box, if I run an aws command without having a credentials file, the error is: Unable to locate credentials. You can configure credentials by running "aws configure".)
Can you try running these two commands from the same shell in which you are trying to run aws:
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
and then try the aws command again.
Another thing that can cause this, even if everything is set up correctly, is running the command from a Makefile. For example, I had a rule:
awssetup:
	aws configure
	aws s3 sync s3://mybucket.whatever .
When I ran make awssetup I got the error: fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records. But running the same commands from the command line worked.
Adding one more answer, since none of the above worked for me.
In the AWS console, check your credentials (My Security Credentials) and see whether you have entered the right ones.
Thanks to this discussion:
https://forums.aws.amazon.com/message.jspa?messageID=771815
This could happen because there is an issue with your AWS secret access key. After messing around with AWS Amplify, I ran into this issue. The quickest fix is to create a new AWS access key ID / secret access key pair and run aws configure again.
This worked for me. I hope it helps.
To those of you who run aws s3 ls and get this exception: make sure you have permissions for all regions under the given AWS account. When you run aws s3 ls, you are trying to list all the S3 buckets under the AWS account; therefore, if you don't have permissions for all regions, you'll get this exception: An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
Follow Describing your Regions using the AWS CLI for more info.
I had the same problem on Windows, using the JavaScript aws-sdk module. I had changed my IAM credentials, and the problem persisted even when I passed the new credentials through the update method, like this:
s3.config.update({
    accessKeyId: 'ACCESS_KEY_ID',
    secretAccessKey: 'SECRET_ACCESS_KEY',
    region: 'REGION',
});
After a while I found that the aws-sdk module had created a file inside the user folder on Windows, at this path:
C:\Users\User\.aws\credentials
The credentials inside this file take precedence over the data passed through the update method.
The solution for me was to write the new credentials into
C:\Users\User\.aws\credentials
rather than via s3.config.update.
Export the variables below, taking the values from the credentials file in the following directory:
path = .aws/
filename = credentials
export AWS_ACCESS_KEY_ID=AK###########GW
export AWS_SECRET_ACCESS_KEY=g#############################J
Hopefully this saves others from hours of frustration:
Call AWS.config.update({...}) before initializing the S3 client:
const AWS = require('aws-sdk');

AWS.config.update({
    accessKeyId: 'AKIAW...',
    secretAccessKey: 'ptUGSHS....'
});

const s3 = new AWS.S3();
Credits to this answer:
https://stackoverflow.com/a/61914974/11110509
I tried the steps below and they worked:
1. cd ~
2. cd .aws
3. vi credentials
4. Delete the lines
aws_access_key_id =
aws_secret_access_key =
by placing the cursor on each line and pressing dd (the vi command to delete a line). Delete both lines, then check again.
If you have an AWS Educate account and you get this problem:
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
The solution:
Go to your C:/ drive and search for the .aws folder inside your user folder in Windows.
Inside that folder you will find the "credentials" file; open it with Notepad.
Paste the whole key credential from your AWS account into that file and save it.
Now you are ready to use your AWS Educate account.
Assuming you already checked the Access Key ID and Secret... you might want to check the file team-provider-info.json, which can be found under the amplify/ folder:
"awscloudformation": {
    "AuthRoleName": "<role identifier>",
    "UnauthRoleArn": "arn:aws:iam::<specific to your account and role>",
    "AuthRoleArn": "arn:aws:iam::<specific to your account and role>",
    "Region": "us-east-1",
    "DeploymentBucketName": "<role identifier>",
    "UnauthRoleName": "<role identifier>",
    "StackName": "amplify-test-dev",
    "StackId": "arn:aws:cloudformation:<stack identifier>",
    "AmplifyAppId": "<id>"
}
The IAM role referred to here should be active in the IAM console.
If you get this error in an Amplify project, check that "awsConfigFilePath" is not configured in amplify/.config/local-aws-info.json
In my case I had to remove it, so my environment looked like the following:
{
  // **INCORRECT**
  // This will not use your profile in ~/.aws/credentials, but instead the
  // specified config file path
  // "dev": {
  //   "configLevel": "project",
  //   "useProfile": false,
  //   "awsConfigFilePath": "/Users/dev1/.amplify/awscloudformation/cEclTB7ddy"
  // },

  // **CORRECT**
  "dev": {
    "configLevel": "project",
    "useProfile": true,
    "profileName": "default"
  }
}
Maybe you need to activate your API keys in the web console; I just saw that mine were inactive for some reason...
Thanks, everyone. This helped me solve it.
Something somehow happened that changed the keys, and I didn't realize it, since everything was working fine until I connected to S3 from Spark... then the error started coming from the command line as well, even in aws s3 ls.
Steps to solve:
Run aws configure to check whether the keys are set up (verify from the last 4 characters and just keep pressing Enter).
In the AWS console, go to Users, click on the user, go to Security Credentials, and check whether the key is the same one showing up in aws configure.
If they are not the same, generate a new key and download the CSV.
Run aws configure and set up the new keys.
Try aws s3 ls now.
Change the keys in all places; in my case it was the configs in Cloudera.
I couldn't figure out how to get the system to accept my Vocareum credentials, so I took advantage of the fact that if you configure your instance to use IAM roles, the SDK automatically selects the IAM credentials for your application, eliminating the need to provide credentials manually.
Once a role with appropriate permissions was applied to the EC2 instance, I didn't need to provide any credentials.
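For illustration, here is a minimal boto3 sketch of that pattern; it assumes the instance role grants S3 list access:
import boto3

# No keys anywhere in the code or on disk: with an IAM role attached
# to the EC2 instance, boto3 falls through its provider chain to the
# instance metadata service and picks up temporary role credentials.
s3 = boto3.client('s3')
print([b['Name'] for b in s3.list_buckets()['Buckets']])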
Open the ~/.bash_profile file and edit the info with the new values that you received at the time of creating the new user:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=us-east-1
Afterward, run the command:
source ~/.bash_profile
This will enable the new keys for the local machine. Now we need to configure the info in the terminal as well. Run the command:
aws configure
Provide the new values as requested and you are good to go.
In my case, I was using aws configure.
However, I had hand-edited the .aws/config file to export the key ID and secret key environment variables.
This apparently caused a silent error, and I saw the error listed above.
I solved it by deleting the .aws directory and running aws configure again.
I have encountered this issue when trying to export RDS Postgres data to S3 following this official guide.
TL;DR Troubleshooting tips:
Reset RDS credentials using:
DROP EXTENSION aws_s3 CASCADE;
DROP EXTENSION aws_commons CASCADE;
CREATE EXTENSION aws_s3 CASCADE;
Delete and re-add the DB instance role used for the s3Export feature. Optionally reset the RDS credentials (previous action point) once again after that.
Below you will find more details on my case.
In particular, I have encountered:
[XX000] ERROR: could not upload to Amazon S3
Details: Amazon S3 client returned 'The AWS Access Key Id you provided does not exist in our records.'.
To be able to export to S3, the RDS DB instance should be configured to assume a role with permission to write to the S3 bucket; the guide describes these steps.
The cause of the error was that the aws_s3.query_export_to_s3 Postgres procedure was using some (cached?) invalid assumed credentials. I am still not sure which credentials it was using, but I managed to reproduce the same behaviour using the AWS CLI:
I assumed a role (aws sts assume-role),
and then tried to perform another action (aws s3 cp, in particular) with these credentials but without the session token (only AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, without AWS_SESSION_TOKEN).
This resulted in the same error from AWS CLI: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
In short: hard resetting RDS credentials helped.
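As an aside, the session-token half of this is easy to reproduce outside Postgres. Here is a minimal boto3 sketch of the same experiment; the role ARN is a placeholder:
import boto3

# Assume a role: the returned credentials are a triple and are only
# valid together with the session token.
sts = boto3.client('sts')
resp = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/s3-export',  # placeholder ARN
    RoleSessionName='repro')
creds = resp['Credentials']

# Correct: pass all three values, including the session token.
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'])

# Dropping aws_session_token above reproduces the
# "InvalidAccessKeyId ... does not exist in our records" error.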
I just found another cause/remedy for this error. I was getting it while running a PowerShell script; the error happened on an execution of Write-S3Object. I have been working with AWS for a while now and have run this script successfully before, but I had not run it in a while.
My usual method of setting AWS credentials is:
Set-AWSCredential -ProfileName <THE_PROFILE_NAME>
I tried the "aws configure" command and every other recommendation in this forum post. No luck.
Well, I am aware of the .aws\credentials file and took a look in there. I have only three profiles, with one being [default]. Everything was looking good, but then I noticed a new element in there, present in all 3 profiles, that I had not seen before:
toolkit_artifact_guid=64GUID3-GUID-GUID-GUID-004GUID236
(GUID redacting added by me)
Then I noticed that this element differed between the profile I was running with and the [default] profile, which was otherwise identical.
On a hunch I changed the toolkit_artifact_guid in [default] to match my target profile, and the error went away. I have no idea why.

AWS credentials not working - ~/.aws/credentials

I'm having a problem with my AWS credentials. I used the credentials file that I created at ~/.aws/credentials, just as described in the AWS docs. However, Apache just can't read it.
First, I was getting this error:
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials CredentialsInterface object.
Then I tried some solutions that I found on the internet. For example, I checked my HOME variable; it was /home/ubuntu. I also tried moving my credentials file to the /var/www directory, even though it is not my web server directory. Nothing worked; I was still getting the same error.
As a second solution, I saw that we could call the CredentialProvider directly and indicate the directory on the client.
https://forums.aws.amazon.com/thread.jspa?messageID=583216&#583216
The error changed but I couldn't make it work:
Cannot read credentials from /.aws/credentials
I also saw that we could use the default provider of the CredentialProvider instead of indicating a path.
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#using-credentials-from-environment-variables
I tried and I keep getting the same error:
Cannot read credentials from /.aws/credentials
Just in case you need this information: I'm using aws/aws-sdk-php (3.2.5), and the service I'm trying to use is AWS Elastic Transcoder. My EC2 instance is Ubuntu 14.04, running a Symfony application deployed using Capifony.
Before trying this on the production server, I tried it on a development server, where it works perfectly with only the ~/.aws/credentials file. This development server is an exact copy of the production server, except that it doesn't use Capifony for deployment (it is just a normal git clone of the project) and it has only one EBS volume, while the production server has one for the OS and one for the application.
Ah! And I also checked whether the permissions/owners of the credentials file were the same on both servers, and they are. I even tried 777 to see if it would change anything, but nothing.
Does anybody have an idea?
It sounds like you're doing it wrong. You do not need to deploy credentials to an EC2 instance in order to have that instance interact with other AWS services; in fact, you should never deploy credentials to an EC2 instance.
Instead, when you create your instance, you associate an IAM role with it. That role has policies that control access to the other AWS services.
You can create an empty role, launch the instance, and then modify the role later. (Originally a role could not be assigned after the instance had been launched; AWS now lets you add a role to a running instance.)
Either way, it is still considered a best practice not to deploy actual credentials to an EC2 instance.
If this can help someone: I managed to make my .ini file work this way:
$profile = 'default';
$path = '/mnt/app/www/.aws/credentials/default.ini';

$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = ElasticTranscoderClient::factory(array(
    'region' => 'eu-west-1',
    'version' => '2012-09-25',
    'credentials' => $provider
));
The CredentialProvider is explained on this doc:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#ini-provider
I still don't understand why my application can't read the file in the home directory (~/.aws/credentials/default.ini) on one server, while on the other it can.
If someone knows something about it, please let me know.
The SDK reads from a file located at ~/.aws/credentials, but it looks like you're saving a file at ~/.aws/credentials/default.ini. If you move the file, the error you were experiencing should be cleared up.
Two ways of solving this problem for me in Node.js.
This one reads my credentials from /home/{USER}/.aws/credentials using the default profile:
const aws = require('aws-sdk');
aws.config.credentials = new aws.SharedIniFileCredentials({profile: 'default'});
...
The hard-coded way:
var lambda = new aws.Lambda({
    region: 'us-east-1',
    accessKeyId: <KEY>,
    secretAccessKey: <KEY>
});

Packer amazon-ebs: AuthFailure

For some reason Packer fails to authenticate to AWS; using the plain aws client works, though, and my environment variables are correctly set:
AWS_ROLE_SESSION_NAME=...
AWS_SESSION_TOKEN=...
AWS_SECRET_ACCESS_KEY=...
AWS_ROLE=...
AWS_ACCESS_KEY_ID=...
AWS_CLI=...
AWS_ACCOUNT=...
AWS_SECURITY_TOKEN=...
I am authenticating using aws saml, and Packer gives me the following:
Error querying AMI: AWS was not able to validate the provided access credentials (AuthFailure)
The problem lies within the way Packer authenticates with AWS.
Packer is written in Go and uses goamz for authentication. When you create a config using aws saml, a couple of files are generated in ~/.aws: config and credentials.
It turns out this credentials file takes precedence over the environment variables, so if these credentials are incorrect and you rely on your environment variables, you will get the same error.
Since aws saml needs aws_access_key_id and aws_secret_access_key to be defined, deleting the credentials file would not suffice in this case.
We had to copy these values into ~/.aws/config and delete the credentials file; then Packer was happy to use our environment variables.
A ticket has been raised on GitHub for goamz so that the AWS CLI and Packer can have the same authentication behavior; feel free to vote it up if you have this issue too: https://github.com/mitchellh/goamz/issues/171