S3 credentials for a public bucket? - amazon-web-services

I have this snippet to upload a file on S3
import boto3

s3 = boto3.resource('s3')
s3.Object('bucketname', timestamped_filename).put(Body=open(FILE_SAVE_PATH, 'rb'))
My bucket has delete/upload permissions for everyone, so it does work on my Windows machine.
However, when I try to run the same code on my Mac it throws
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Is this behavior normal?
And what kind of credentials can I possibly provide if I'm accessing a public bucket?
Thank you.

When making an API call to AWS, valid credentials must be provided. These credentials are associated with an IAM User and grant access to AWS services.
When making API calls (or using the AWS Command-Line Interface (CLI)) from an Amazon EC2 instance, these credentials can be granted to the EC2 instance by assigning an IAM Role to the instance at launch time.
When making calls from a non-EC2 computer, credentials must be provided via a configuration file or environment variables.
It appears that your Windows machine is either an EC2 instance with a role, or it has a local configuration file with valid credentials; and it appears that your Mac has neither of these.
See: boto3 Credentials documentation
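For example, on the Mac you could run aws configure to create ~/.aws/credentials, or point boto3 at a named profile explicitly. A minimal sketch (the profile name below is a placeholder, not something from the question):

import boto3

# Picks up credentials from environment variables (AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY) or from ~/.aws/credentials, if either exists.
s3 = boto3.resource('s3')

# Or use a specific named profile from ~/.aws/credentials (placeholder name).
session = boto3.session.Session(profile_name='my-profile')
s3 = session.resource('s3')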

Related

Accessing S3 bucket from a script using IAM Role

We are trying to upload and display a file to and from an S3 bucket through our .Net script.
We are currently using the user's access key and secret key in our code, which is a bad practice.
Could anyone let me know if there is a way that we can use roles in place of these keys directly? If there is, then how?
As you're going to run this on EC2, the answer is yes: you can attach an IAM role to an EC2 host.
This is indeed the best practice for running your scripts on your EC2 host. Once the role is attached to the EC2 instance, your script will have access to all of the permissions that the instance has, as long as you do not provide an IAM key/secret in the SDK's credentials or have any of the credential environment variables set, as these will override the IAM role.
More information is available in the IAM roles for Amazon EC2 documentation.
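For illustration (shown in boto3, since the other examples on this page use Python; the .NET SDK resolves credentials through a similar chain), code running on an instance with a role attached needs no keys at all:

import boto3

# No access key or secret key in the code or the environment: on an EC2
# instance with an IAM role attached, the SDK fetches temporary credentials
# from the instance metadata service automatically.
s3 = boto3.client('s3')
s3.upload_file('report.pdf', 'my-bucket', 'reports/report.pdf')  # placeholder file and bucket names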
If you run your application on EC2, attach the role to the EC2 instance directly.
If you run it on your local server, save your credentials on that server by using the aws configure command.

Which IAM Policy require for EC2 to run boto3 to call aws service?

I have an EC2 instance running with a full EC2 access role, and it runs a script that uses the boto3 module to call some AWS services.
Which extra IAM permissions are required to run boto3, other than configuring the credentials file under the .aws folder?
Thanks
AN
boto3 is a Python library for making API calls to AWS. It is the AWS Python SDK.
Any API calls made to AWS must be made using AWS credentials. These credentials are associated with an IAM (Identity and Access Management) User. The User must be assigned the necessary permissions to allow them to make the call.
For example, if you wish to make an API call to create an Amazon SQS queue, the call must be made using credentials from an IAM User that has permission to call CreateQueue().
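As a sketch of that example (the queue name is a placeholder): boto3 itself needs no extra permission, but the call below fails with an AccessDenied error unless the credentials in use are allowed to perform sqs:CreateQueue.

import boto3

sqs = boto3.client('sqs')

# Succeeds only if the calling identity (user or instance role) has an IAM
# policy allowing the sqs:CreateQueue action.
response = sqs.create_queue(QueueName='my-example-queue')
print(response['QueueUrl'])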

Boto s3 permission issue

I've come across a very weird permission issue. I'm trying to upload a file to S3; here's my function:
import boto3

def UploadFile(FileName, S3FileName):
    session = boto3.session.Session()
    s3 = session.resource('s3')
    s3.meta.client.upload_file(FileName, "MyBucketName", S3FileName)
I did configure the aws-cli on the server. This function works fine when I log into the server and launch a Python interpreter, but it fails when called from my Django REST API with:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
No idea why the same function works when called from the interpreter and fails when called from Django. Both are in the same virtual environment. Any suggestions?
According to the boto3 docs, boto3 is looking for credentials in the following places:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
Note that many of these places are paths with "~" in them. "~" refers to the current user's home directory. Most likely, your REST API is running under a different system user than you are using to test your code.
The proper solution is to use IAM roles, as this allows your server to have S3 access without you needing to give it IAM credentials. However, if that doesn't work for your setup, you should put the IAM credentials in the /etc/boto.cfg file as that is user agnostic.
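One way to see which credentials (if any) the Django process is actually picking up is a small check like the following, run from inside that same process or system user (sts:GetCallerIdentity requires no special permissions):

import boto3

session = boto3.Session()
creds = session.get_credentials()
if creds is None:
    print("No credentials found for this user/process")
else:
    # 'method' reports the source, e.g. 'shared-credentials-file' or 'iam-role'
    print("Credential source:", creds.method)
    print("Caller identity:", session.client('sts').get_caller_identity()['Arn'])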

s3fs with aws ec2 instance and using instance profiles

As far as I can tell, the only way to mount an S3 bucket with s3fs is to use an accesskey:secretkey pair specified in a file (with various file locations supported).
However, if I'm on an EC2 instance in the same account, with an instance profile, I just want to use the instance profile credentials that are already available. Does anyone know of a way to use an instance profile, and not have to set credentials in the local file system? If not, is anyone working on supporting this feature going forward?
Thanks
Once/if you have a role that is attached to the EC2 instance, you can then add the following entry in /etc/fstab to automatically mount the S3 bucket on boot:
s3fs#bucketname /PATHtoLocalMount fuse _netdev,iam_role=nameofiamrolenoquotes
Naturally, you have to have s3fs installed (as you do, judging from the question), and the role policy must grant the appropriate (probably full) access to the S3 bucket. This is great in the sense that no IAM credentials need to be stored on the instance (= safer, because the role's access cannot be used outside the instance it is attached to, while long-term IAM credentials can).

Mounting AWS S3 bucket using AWS IAM roles instead of using a passwd file

I am mounting an AWS S3 bucket as a filesystem using s3fs-fuse. It requires a file which contains AWS Access Key Id and AWS Secret Access Key.
How do I avoid the access using this file? And instead use AWS IAM roles?
As per the Fuse Over Amazon document, you can specify the credentials using 4 methods. If you don't want to use a file, then you can set the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
Also, if your goal is to use an AWS IAM instance profile, then you need to run s3fs-fuse from an EC2 instance. In that case, you don't have to set these credential files/environment variables. This is because, if you attach the instance role and policy when creating the instance, the EC2 instance will get the credentials at boot time. Please see the section 'Using Instance Profiles' on page 190 of the AWS IAM User Guide.
There is an argument, -o iam_role=---, which helps you avoid the AccessKey and SecretAccessKey.
The full steps to configure this are given here:
https://www.nxtcloud.io/mount-s3-bucket-on-ec2-using-s3fs-and-iam-role/