I recently moved my app to Elastic Beanstalk, and I am running Symfony 3, which has a mandatory parameters.yml file that has to be populated with environment variables.
I'd like to wget the parameters.yml from a private S3 bucket, limiting access to my instances only.
I know I can set the environment variables directly on the environment, but some of the values are very sensitive, and environment variables get leaked into my logging system, which is very bad.
I also have multiple environments, such as workers, that use the same environment variables, and copy-pasting them is quite annoying.
So I am wondering whether it's possible to have the app wget it on deploy. I know how to do that part, but I can't seem to configure the S3 bucket to allow access only from my instances.
Yep, that definitely can be done; there are different ways of doing it depending on what approach you want to take. I would suggest using .ebextensions: create an IAM role -> grant that role access to your bucket -> after the package is unzipped on the instance -> copy the object from S3 using the instance role.
Create a custom IAM role using the AWS console or .ebextensions custom resources, and grant that role access to the objects in your bucket.
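For illustration, a minimal policy sketch that could be attached to that role (the bucket name, object key, and Sid are placeholders, not from the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadParametersFile",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-private-config-bucket/parameters.yml"
    }
  ]
}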
Using the above-mentioned .ebextensions, set the aws:autoscaling:launchconfiguration namespace in option_settings to specify the instance profile you created before.
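For example, a sketch of that setting in a .config file (the instance profile name is a placeholder):

option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: IamInstanceProfile
    value: my-app-instance-profile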
Again, using .ebextensions, use the container_commands option to run an aws s3 cp command.
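Something along these lines, assuming the same placeholder bucket as above and the usual Symfony 3 location of app/config/parameters.yml (container_commands run from the root of the extracted application bundle):

container_commands:
  01_fetch_parameters:
    command: "aws s3 cp s3://my-private-config-bucket/parameters.yml app/config/parameters.yml"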
I am on a federated account that only allows 60-minute access tokens. This makes using AWS difficult, since I constantly have to log back in with MFA, even for the AWS CLI on my machine. I'm fairly certain that any programmatic secret access key and token I generate would be useless after an hour.
I am writing a .NET program (.NET Framework 4.8) that will run on an EC2 instance to read and write from an S3 bucket. The documentation gives this example to initialize the AmazonS3Client:
// Before running this app:
// - Credentials must be specified in an AWS profile. If you use a profile other than
// the [default] profile, also set the AWS_PROFILE environment variable.
// - An AWS Region must be specified either in the [default] profile
// or by setting the AWS_REGION environment variable.
var s3client = new AmazonS3Client();
I've looked into Secrets Manager and Parameter Store, but that wouldn't help if the programmatic access keys go inactive after an hour. Perhaps there is another way to give the program access to S3 and the SDK...
If I cannot use access keys and tokens stored in a file, could I use the IAM access that the AWS CLI uses? For example, I can type aws s3 ls s3://mybucket into PowerShell to list and read files from S3 on the EC2 instance. Could the .NET SDK use the same credentials to access the S3 bucket?
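In case it helps, a minimal sketch of what that could look like with the AWS SDK for .NET; the bucket name, region, and class name are placeholders, and it assumes the instance role grants read access to the bucket:

using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class S3InstanceProfileExample
{
    static void Main()
    {
        // On EC2, the SDK's default credential chain falls back to the
        // instance-profile credentials served by the instance metadata
        // service (the same ones the CLI uses) and refreshes them before
        // they expire, so nothing needs to be stored on disk.
        var s3 = new AmazonS3Client(RegionEndpoint.USEast1);

        var response = s3.ListObjectsV2(new ListObjectsV2Request
        {
            BucketName = "mybucket"
        });

        foreach (S3Object obj in response.S3Objects)
            Console.WriteLine(obj.Key);
    }
}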
I have a VM (vm001) on Google Cloud, and I have added some users to it. As one of those users (user1), I want to copy a directory to a GCP bucket (bucket1). The following is what I do:
user1#vm001: gsutil cp -r dir_name gs://bucket1
but I get the following error:
[Content-Type=application/octet-stream]...ResumableUploadAbortException: 403 Access denied.
I know user1 does not have access to upload files to bucket1 and that I should use IAM to grant it permission, but I do not know how to do that for a user that exists only on the VM. This video shows how to give access using an email address, but I have not been able to see how to do it for users that already exist on the VM.
Note
I added user1 using adduser on the VM, and I do not know how to see it in my Google Cloud console in order to change its access.
I managed to replicate your error. There are two ways to transfer your files from your VM to your GCS bucket.
You can either create a new VM or use your existing one. Before finishing your setup, go to API and identity management > Cloud API access scopes. Search for Storage and set it to Read Write.
If you're not sure which access scope to set, you can select Allow full access to all Cloud APIs. Make sure that you restrict access by setting the following permissions on the service account under your GCS bucket (command-line equivalents are sketched at the end of this answer):
Storage Legacy Bucket Owner (roles/storage.legacyBucketOwner)
Storage Legacy Bucket Writer (roles/storage.legacyBucketWriter)
After that, I started my VM, refreshed my GCS bucket, ran gsutil cp -r [directory/name] gs://[bucket-name], and managed to transfer the files to my GCS bucket.
I followed the steps using this link on changing the service account and access scopes for an instance. Both steps worked out for me.
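If you prefer the command line to the console, a rough equivalent of the two steps above (zone, service-account email, and names are placeholders; exact flags can vary between gcloud versions):

# Stop the VM, widen its access scopes, then start it again
gcloud compute instances stop vm001 --zone=us-central1-a
gcloud compute instances set-service-account vm001 --zone=us-central1-a --scopes=storage-rw
gcloud compute instances start vm001 --zone=us-central1-a

# Grant the VM's service account write access on the bucket
gsutil iam ch serviceAccount:123456789-compute@developer.gserviceaccount.com:roles/storage.legacyBucketWriter gs://bucket1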
I have a requirement to access an S3 bucket on the AWS ParallelCluster nodes. I did explore the s3_read_write_resource option in the ParallelCluster documentation, but it is not clear how we can access the bucket. For example, will it be mounted on the nodes, or will users be able to access it by default? I did test the latter by trying to access a bucket I declared using the s3_read_write_resource option in the config file, but was not able to access it (aws s3 ls s3://<name-of-the-bucket>).
I did go through this GitHub issue about mounting an S3 bucket using s3fs. In my experience, accessing objects through s3fs is very slow.
So, my question is:
How can we access the S3 bucket when using the s3_read_write_resource option in the AWS ParallelCluster config file?
These parameters are used in ParallelCluster to add S3 permissions to the instance role that is created for the cluster instances. They're mapped to the CloudFormation template parameters S3ReadResource and S3ReadWriteResource, and later used in the CloudFormation template (for example, here and here). There's no special way of accessing the S3 objects.
To access S3 from a cluster instance, use the AWS CLI or any SDK. Credentials are obtained automatically from the instance role through the instance metadata service.
Please note that ParallelCluster doesn't grant permissions to list S3 objects.
Retrieving existing objects from the S3 bucket defined in s3_read_resource, as well as retrieving and writing objects to the S3 bucket defined in s3_read_write_resource, should work.
However, "aws s3 ls" or "aws s3 ls s3://name-of-the-bucket" requires additional permissions. See https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-listobjects-sync/.
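If you do need those listing commands, a sketch of the extra statement you could add to the instance role's policy (the bucket name is the same placeholder as in the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::name-of-the-bucket"
    }
  ]
}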
I wouldn't use s3fs: it's not supported by AWS, it has been reported to be slow (as you've already noticed), among other reasons.
You might want to check the fsx section instead. It can create and attach an FSx for Lustre filesystem, which can import/export files to/from S3 natively. You just need to set import_path and export_path in that section.
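A sketch of what that could look like in a (2.x-style) ParallelCluster config, with placeholder names and capacity:

[cluster default]
fsx_settings = fs

[fsx fs]
shared_dir = /fsx
storage_capacity = 1200
import_path = s3://name-of-the-bucket
export_path = s3://name-of-the-bucket/export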
As far as I can tell, the only way to mount an S3 bucket with s3fs is to use an accesskey:secretkey pair specified in a file (various file locations are supported).
However, if I'm on an EC2 instance in the same account as the S3 bucket, with an instance profile attached, I just want to use the instance profile credentials that are already available. Does anyone know of a way to use an instance profile and not have to store credentials on the local file system? If not, is anyone working on supporting this feature going forward?
Thanks
Once you have a role attached to the EC2 instance, you can add the following entry to /etc/fstab to automatically mount the S3 bucket on boot:
s3fs#bucketname /PATHtoLocalMount fuse _netdev,iam_role=nameofiamrolenoquotes
Naturally, you have to have s3fs installed (as you do, judging from the question), and the role policy must grant the appropriate (probably full) access to the S3 bucket. This is great in the sense that no IAM credentials need to be stored on the instance, which is safer: the role's access cannot be used outside the instance it is attached to, while stored IAM credentials can.
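For completeness, a sketch of such a role policy (the bucket name is a placeholder; scope the actions down if full access is more than you need):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucketname",
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}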
I am mounting an AWS S3 bucket as a filesystem using s3fs-fuse. It requires a file that contains an AWS access key ID and AWS secret access key.
How do I avoid using this file, and instead use AWS IAM roles?
As per the FUSE over Amazon documentation, you can specify the credentials using four methods. If you don't want to use a file, you can set the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
Also, if your goal is to use an AWS IAM instance profile, you need to run s3fs-fuse on an EC2 instance. In that case, you don't have to set these credential files/environment variables at all: if you attach the instance role and policy when creating the instance, the EC2 instance will get the credentials at boot time. Please see the section 'Using Instance Profiles' on page 190 of the AWS IAM User Guide.
There is an option, -o iam_role=---, which lets you avoid the access key and secret access key entirely.
The full steps to configure this are given here:
https://www.nxtcloud.io/mount-s3-bucket-on-ec2-using-s3fs-and-iam-role/
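For reference, a minimal invocation sketch using that option (role name, bucket, and mount point are placeholders):

s3fs bucketname /path/to/mount -o iam_role=nameofiamrole -o allow_other
# or let s3fs pick up the role from instance metadata:
s3fs bucketname /path/to/mount -o iam_role=auto -o allow_other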