As far as I can tell, the only way to mount an S3 bucket with s3fs is to supply an accesskey:secretkey pair in a credentials file (several file locations are supported).
However, if I'm on an EC2 instance in the same account as the bucket, with an instance profile attached, I just want to use the instance profile credentials that are already available. Does anyone know of a way to use an instance profile rather than storing credentials on the local file system? If not, is anyone working on supporting this feature going forward?
Thanks
Once/if you have a role attached to the EC2 instance, you can add the following entry to /etc/fstab to mount the S3 bucket automatically on boot:
s3fs#bucketname /PATHtoLocalMount fuse _netdev,iam_role=nameofiamrolenoquotes
Naturally, you have to have s3fs installed (as you do, judging from the question), and the role policy must grant the appropriate (probably full) access to the S3 bucket. This is great in the sense that no IAM credentials need to be stored on the instance, which is safer: the role's temporary credentials cannot be used outside the instance they are attached to, while long-lived IAM user credentials can.
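For reference, here is a minimal sketch of what granting that access could look like with boto3; the role name my-s3fs-role, the policy name, and the bucket name bucketname are placeholders, and an inline role policy is just one way to grant it:

import json
import boto3

iam = boto3.client('iam')

# Hypothetical inline policy granting the role full access to one bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::bucketname",    # bucket-level actions, e.g. ListBucket
            "arn:aws:s3:::bucketname/*",  # object-level actions, e.g. GetObject
        ],
    }],
}

iam.put_role_policy(
    RoleName="my-s3fs-role",              # placeholder: the role attached to the instance
    PolicyName="s3fs-bucket-access",
    PolicyDocument=json.dumps(policy),
)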
Related
We are trying to upload a file to, and display a file from, an S3 bucket through our .NET script.
We are currently embedding the user's access key and secret key in our code, which is bad practice.
Could anyone let me know if there is a way to use roles in place of these keys directly? If there is, how?
As you're going to run this on EC2, the answer is yes: you can attach an IAM role to an EC2 host.
This is indeed the best practice for running scripts on an EC2 host. Once the role is attached, your script will have all the permissions the role grants, as long as you do not pass an IAM key/secret to the SDK and do not have any of the credential environment variables set, since these override the IAM role.
More information is available in the IAM roles for Amazon EC2 documentation.
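The same credential-resolution behaviour is easy to see with boto3 (the .NET SDK follows the same chain); a minimal sketch, with my-bucket and the file names as placeholders:

import boto3

# No keys are passed anywhere in this script. On an EC2 instance with an
# IAM role attached, the SDK falls back to the instance-profile credentials,
# unless explicit keys or the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables are present, which take precedence over the role.
s3 = boto3.client('s3')
s3.upload_file('report.pdf', 'my-bucket', 'reports/report.pdf')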
If you run your application on EC2, attach the role to the instance directly.
If you run it on your local server, save your credentials on the server with the aws configure command.
I am trying to run a script from the OpenTraffic repository, and it needs access to some AWS S3 buckets. I am unable to figure out how to get access to a particular S3 bucket.
FYI:
OpenTraffic is an open-source platform for obtaining and analysing dynamic traffic data: https://github.com/opentraffic
The script I am trying to run:
https://github.com/opentraffic/reporter/blob/dev/load-historical-data/load_data.sh
The documentation (https://github.com/opentraffic/reporter/tree/dev/load-historical-data) says that, in order to run the above script, access is required to both s3://grab_historical_data and s3://reporter-drop-{prod, dev}.
You're accessing the S3 buckets from an r3.4xlarge EC2 instance, according to the documentation link you shared.
First, you have to create an IAM role for the EC2 instance with an S3 access policy attached to it.
Then create the EC2 instance and attach the IAM role to it at launch (a role can also be attached to a running instance later, but attaching at launch is simplest).
The role gives your EC2 instance access permission to the S3 buckets.
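Once the role is attached, a quick way to confirm the instance can reach the buckets is a sketch like this with boto3 (only grab_historical_data is named here; the reporter-drop-* bucket names can be added the same way):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')  # picks up the instance-profile credentials automatically

for bucket in ('grab_historical_data',):
    try:
        s3.head_bucket(Bucket=bucket)  # raises ClientError if access is denied
        print(bucket, 'accessible')
    except ClientError as err:
        print(bucket, err.response['Error']['Code'])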
We have not given anyone download rights through S3, but it is still possible to download data from an EMR cluster using scp.
Is it possible to give someone the cluster DNS, and make sure they can use the data in the cluster but not download it?
By default, EMR nodes assume the EC2 instance profile, the EMR_EC2_DefaultRole IAM role, to access resources in your account, including S3. The policies defined in this role decide what EMR has access to.
If that role allows s3:* or s3:Get* etc. on all resources (buckets and objects), then all nodes on the EMR cluster can download objects from all buckets in your account (given you do not have any restrictive bucket policies).
http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/
Yes: given that EMR has access to S3, if you share the private SSH key (.pem) file of the EMR/EC2 nodes with a user, they can use scp to copy data from EMR to their machine.
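If the goal is to narrow what the cluster can reach in the first place, one option is to replace the broad grant on the role with a read-only policy scoped to specific buckets. A hedged sketch with boto3 (the bucket name my-emr-data and the policy name are placeholders; note this does not stop scp copies by anyone who already holds the SSH key):

import json
import boto3

iam = boto3.client('iam')

read_only = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-emr-data",    # placeholder bucket
            "arn:aws:s3:::my-emr-data/*",
        ],
    }],
}

# Adds/overwrites an inline policy on the role the EMR nodes assume
iam.put_role_policy(
    RoleName='EMR_EC2_DefaultRole',
    PolicyName='s3-read-only',
    PolicyDocument=json.dumps(read_only),
)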
I have this snippet to upload a file to S3:
import boto3

s3 = boto3.resource('s3')
s3.Object('bucketname', timestamped_filename).put(Body=open(FILE_SAVE_PATH, 'rb'))
My bucket grants delete/upload permissions to everyone, so it does work on my Windows machine.
However, when I try to run the same code on my Mac, it throws:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Is this behavior normal?
And what kind of credentials can I possibly provide if I'm accessing a public bucket?
Thank you.
When making an API call to AWS, valid credentials must be provided. These credentials are associated with an IAM User and grant access to AWS services.
When making API calls (or using the AWS Command-Line Interface (CLI)) from an Amazon EC2 instance, these credentials can be granted to the EC2 instance by assigning an IAM Role to the instance at launch time.
When making calls from a non-EC2 computer, credentials must be provided via a configuration file or environment variables.
It appears that your Windows machine is either an EC2 instance with a role, or it has a local configuration file with valid credentials; and it appears that your Mac has neither of these.
See: boto3 Credentials documentation
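If the bucket really is public, another option is to skip the credential lookup entirely and make unsigned (anonymous) requests; a minimal sketch, assuming the bucket policy allows anonymous access to the objects involved (bucket and key names are placeholders):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# An unsigned client never looks for credentials at all
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
s3.download_file('bucketname', 'some-key.txt', '/tmp/some-key.txt')

The same Config can be passed to boto3.resource(); anonymous uploads would additionally require the bucket to permit public writes.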
I am mounting an AWS S3 bucket as a filesystem using s3fs-fuse. It requires a file containing an AWS Access Key ID and AWS Secret Access Key.
How do I avoid using this credentials file, and instead use AWS IAM roles?
As per the FUSE Over Amazon documentation, you can specify the credentials using four methods. If you don't want to use a file, you can set the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
Also, if your goal is to use an AWS IAM instance profile, you need to run s3fs-fuse on an EC2 instance. In that case, you don't have to set these credential files or environment variables at all: if you attach the instance role and policy when creating the instance, the EC2 instance receives temporary credentials at boot time. Please see the section 'Using Instance Profiles' on page 190 of the AWS IAM User Guide.
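Those boot-time credentials come from the instance metadata service, which is also where s3fs reads them when you point it at the role. A sketch of fetching them by hand, to see what is there (assumes IMDSv1 is enabled; IMDSv2 additionally requires a session token header):

import json
import urllib.request

BASE = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'

# The first request returns the role name, the second the temporary keys
role = urllib.request.urlopen(BASE, timeout=2).read().decode()
creds = json.loads(urllib.request.urlopen(BASE + role, timeout=2).read())
print(creds['AccessKeyId'], creds['Expiration'])  # rotated automatically before expiry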
There is an option, -o iam_role=---, which lets you avoid the AccessKey and SecretAccessKey entirely.
The full steps to configure this are given here:
https://www.nxtcloud.io/mount-s3-bucket-on-ec2-using-s3fs-and-iam-role/