I created a server with a MinIO instance following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-object-storage-server-using-minio-on-ubuntu-18-04-es. Everything works fine, but when I change the credentials in the file at /etc/default/minio the server just crashes. I restarted MinIO but the problem continues. Only if I change the credentials back to the original ones does the server work again.
So what can I do to change my credentials periodically for security reasons?
MinIO stores the "accessKey" and "secretKey" in a "config.json" file, usually located at "/root/.mc/config.json" or "/root/.minio/config.json".
Change the current accessKey and secretKey values in config.json and restart the minio service.
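For illustration, a minimal sketch of the relevant part of an older MinIO server config.json (the exact layout depends on your MinIO version, so treat the field names below as an assumption to verify against your own file), followed by restarting the service set up in the tutorial:

{
  "version": "...",
  "credential": {
    "accessKey": "NEW_ACCESS_KEY",
    "secretKey": "NEW_SECRET_KEY"
  }
}

sudo systemctl restart minio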
I'm currently trying to connect to my enterprise's S3 (which is not Amazon Web Services) using boto3, and I get the following error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://s3.fr-par.amazonaws.com/my_buket...." which is absolutely not the endpoint given in the code.
import boto3

s3 = boto3.resource(service_name='s3',
                    aws_access_key_id='XXXXXX',
                    aws_secret_access_key='YYYYYYY',
                    endpoint_url='https://my_buket.s3.my_region.my_company_enpoint_url')

my_bucket = s3.Bucket(s3_bucket_name)
bucket_list = []
for file in my_bucket.objects.filter(Prefix='boston.csv'):
    bucket_list.append(file.key)
As can be seen in the error message, boto3 tries to connect to an amazonaws.com URL, which is not that of my enterprise. Finally, I want to point out that I am able to connect to my enterprise's S3 using MinIO (https://docs.min.io/), which indicates there are no errors in the aws_access_key_id, the aws_secret_access_key, and the endpoint_url I use with boto3.
I have executed the code in a Python 3.9 environment (boto3 1.22.1), an Anaconda 3.9 environment (boto3 1.22.0), and a Jupyter notebook, always with the same error. The OS is Ubuntu 20.04.4 LTS virtualized on Oracle VM VirtualBox.
https://my_buket.s3.my_region.my_company_enpoint_url is not the endpoint. The list of valid S3 endpoints is here. But normally you don't have to specify it explicitly. Boto3 will "know" which endpoint to use for each region.
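For example, when the bucket really does live in AWS you can create the resource with only a region and let boto3 derive the endpoint itself; the region name below is just an illustrative assumption:

import boto3

# boto3 resolves the regional endpoint (s3.eu-west-3.amazonaws.com) by itself.
s3 = boto3.resource('s3', region_name='eu-west-3')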
Since some people seem to have the same problem, I'm posting the solution I found.
For some reason the code in the question still doesn't work for me. Alternatively, I handle pointing to my enterprise's S3 by first creating a session and then creating the resource and client from it. Note that no bucket is indicated in endpoint_url.
Since there is no bucket in endpoint_url, you have access to all buckets associated with the credentials passed, and therefore the bucket has to be specified in the calls on the resource and client instances.
import boto3

session = boto3.Session(region_name=my_region)

resource = session.resource('s3',
                            endpoint_url='https://s3.my_region.my_company_enpoint_url',
                            aws_access_key_id='XXXXXX',
                            aws_secret_access_key='YYYYYY')

client = session.client('s3',
                        endpoint_url='https://s3.my_region.my_company_enpoint_url',
                        aws_access_key_id='XXXXXX',
                        aws_secret_access_key='YYYYYY')

# The bucket is named in the call itself, not in endpoint_url.
client.upload_file(path_to_local_file, bucket_name, upload_path,
                   Callback=call,
                   ExtraArgs=ExtraArgs)
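As a short follow-up sketch on the resource side (bucket_name is a placeholder and the boston.csv prefix is taken from the question), the bucket is again named explicitly in the call:

# List matching object keys through the resource created above.
my_bucket = resource.Bucket(bucket_name)
bucket_list = [obj.key for obj in my_bucket.objects.filter(Prefix='boston.csv')]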
I've come across a very weird permission issue. I'm trying to upload a file to S3; here's my function:
import boto3

def UploadFile(FileName, S3FileName):
    session = boto3.session.Session()
    s3 = session.resource('s3')
    s3.meta.client.upload_file(FileName, "MyBucketName", S3FileName)
I did configure the AWS CLI on the server. This function works fine when I log into the server and launch a Python interpreter, but it fails when called from my Django REST API with:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
No idea why the same function works when called from the interpreter but fails when called from Django. Both are in the same virtual environment. Any suggestions?
According to the boto3 docs, boto3 looks for credentials in the following places:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
Note that many of these places are paths containing "~", which refers to the current user's home directory. Most likely, your REST API is running under a different system user than the one you are using to test your code.
The proper solution is to use IAM roles, as this allows your server to have S3 access without you needing to give it IAM credentials. However, if that doesn't work for your setup, you should put the IAM credentials in the /etc/boto.cfg file, as that one is user-agnostic.
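A minimal diagnostic sketch, assuming you can temporarily drop it into a Django view or management command, to see which credential source and identity the web process actually resolves:

import boto3

session = boto3.session.Session()
creds = session.get_credentials()
# creds.method is e.g. 'shared-credentials-file', 'env' or 'iam-role'
print("credential source:", creds.method if creds else None)
print("caller identity:", boto3.client('sts').get_caller_identity()['Arn'])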
Using the boto3 library, I uploaded and downloaded files from AWS S3 successfully.
But after a few hours, the same code suddenly fails with InvalidAccessKeyId.
What I have done:
set ~/.aws/credentials
Set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
I tried the following solutions, but the error still happens:
adding quotes on config values
Am I missing anything? Thanks for your help.
You do not need to configure both .aws/credentials AND environment variables.
From Credentials — Boto 3 documentation:
The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
The fact that your credentials stopped working after a period of time suggests that they were temporary credentials created via the AWS Security Token Service, with an expiry time.
If you have the credentials in ~/.aws/credentials, there is no need to set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Environment variables are valid only for the session.
If you are using boto3, you can also specify the credentials when creating the client itself, as shown below.
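A minimal sketch of passing the credentials directly to the client (the values are placeholders; aws_session_token is only required when the keys are temporary STS credentials):

import boto3

s3 = boto3.client('s3',
                  aws_access_key_id='XXXXXX',
                  aws_secret_access_key='YYYYYY',
                  aws_session_token='ZZZZZZ')  # omit for long-lived IAM user keys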
The best way to configure AWS credentials is to install the AWS Command-Line Interface (CLI) and run aws configure from the bash console. It writes ~/.aws/credentials in this format:
[default]
aws_access_key_id = ***********
aws_secret_access_key = ************
I found this article about the same issue.
Amazon suggests generating a new key, and I did.
Then it works, but we don't know the root cause.
I suggest doing the same; it can save a lot of time when you hit this problem.
I'm having a problem with my AWS credentials. I used the credentials file that I created at ~/.aws/credentials, just as described in the AWS docs. However, Apache just can't read it.
First, I was getting this error:
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials CredentialsInterface object.
Then I tried some solutions that I found on the internet. For example, I checked my HOME variable; it was /home/ubuntu. I also tried moving my credentials file to the /var/www directory, even though it is not my web server directory. Nothing worked. I was still getting the same error.
As a second solution, I saw that we could call the CredentialsProvider directly and indicate the directory on the client:
https://forums.aws.amazon.com/thread.jspa?messageID=583216#583216
The error changed but I couldn't make it work:
Cannot read credentials from /.aws/credentials
I saw also that we could use the default provider of the CredentialsProvider instead of indicating a path.
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#using-credentials-from-environment-variables
I tried and I keep getting the same error:
Cannot read credentials from /.aws/credentials
Just in case you need this information, I'm using aws/aws-sdk-php (3.2.5). The service I'm trying to use is the AWS Elastic Transcoder. My EC2 instance is an Ubuntu 14.04. It runs a Symfony application deployed using Capifony.
Before trying on this production server, I tried it on a development server, where it works perfectly with only the ~/.aws/credentials file. That development server is an exact copy of the production server. However, it doesn't use Capifony for deployment; it is just a normal git clone of the project. And it has only one EBS volume, while the production server has one for the OS and one for the application.
Ah! And I also checked whether the permissions/owner of the credentials file were the same on both servers, and they are. I even tried chmod 777 to see if it changed anything, but nothing did.
Does anybody have an idea?
It sounds like you're doing it wrong. You do not need to deploy credentials to an EC2 instance in order to have that instance interact with other AWS services, and in fact you should never deploy credentials to an EC2 instance.
Instead, when you create your instance, you associate an IAM role with it. That role has policies that control access to the other AWS services.
You can create an empty role, launch the instance, and then modify the role later. (Originally you could not assign a role to an instance after it had been launched.)
Update: you can now attach a role to an instance after it has been launched.
It is still considered a best practice to not deploy actual credentials to an EC2 instance.
In case this helps someone, I managed to make my .ini file work by doing it this way:
use Aws\Credentials\CredentialProvider;
use Aws\ElasticTranscoder\ElasticTranscoderClient;

$profile = 'default';
$path = '/mnt/app/www/.aws/credentials/default.ini';

$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = ElasticTranscoderClient::factory(array(
    'region' => 'eu-west-1',
    'version' => '2012-09-25',
    'credentials' => $provider
));
The CredentialProvider is explained on this doc:
http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html#ini-provider
I still don't understand why my application can't read the file in the home directory (~/.aws/credentials/default.ini) on one server, while on the other it does.
If someone knows something about it, please let me know.
The SDK reads from a file located at ~/.aws/credentials, but it looks like you're saving a file at ~/.aws/credentials/default.ini. If you move the file, the error you were experiencing should be cleared up.
Two ways of solving this problem worked for me in Node.js.
This one gets my credentials from /home/{USER}/.aws/credentials using the default profile:
const aws = require('aws-sdk');
aws.config.credentials = new aws.SharedIniFileCredentials({ profile: 'default' });
// ...
The hardcoded way
const lambda = new aws.Lambda({
    region: 'us-east-1',
    accessKeyId: '<KEY>',
    secretAccessKey: '<KEY>'
});
For some reason Packer fails to authenticate to AWS. Using the plain aws CLI works, though, and my environment variables are set correctly:
AWS_ROLE_SESSION_NAME=...
AWS_SESSION_TOKEN=...
AWS_SECRET_ACCESS_KEY=...
AWS_ROLE=...
AWS_ACCESS_KEY_ID=...
AWS_CLI=...
AWS_ACCOUNT=...
AWS_SECURITY_TOKEN=...
I am authenticating using aws saml, and Packer gives me the following error:
Error querying AMI: AWS was not able to validate the provided access credentials (AuthFailure)
The problem lies within the way Packer authenticates with AWS.
Packer is written in Go and uses goamz for authentication. When creating a config using aws saml, a couple of files are generated in ~/.aws: config and credentials.
It turns out this credentials file takes precedence over the environment variables, so if those credentials are incorrect and you rely on your environment variables, you will get the same error.
Since aws-saml needs aws_access_key_id and aws_secret_access_key to be defined, deleting the credentials file would not suffice in this case.
We had to copy these values into ~/.aws/config and delete the credentials file; then Packer was happy to use our environment variables.
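For illustration, a sketch of what the resulting ~/.aws/config might look like (all values are placeholders; a session token line is only relevant while the SAML flow issues temporary credentials):

[default]
region = us-east-1
aws_access_key_id = XXXXXX
aws_secret_access_key = YYYYYY
aws_session_token = ZZZZZZ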
A ticket has been raised on GitHub for goamz so that the AWS CLI and Packer can have the same authentication behavior; feel free to vote it up if you have the issue too: https://github.com/mitchellh/goamz/issues/171