AWS S3 Bucket Accessible from my Elastic Beanstalk Instance only - amazon-web-services

I have 1 S3 bucket and 1 Elastic Beanstalk instance. Currently my S3 bucket is public, so it is accessible from any domain, even from my localhost.
I want all my S3 bucket resources to be accessible only from the Elastic Beanstalk instance where my app is hosted/running. My app should be able to view these resources and upload new images/resources to this bucket.
I am sure somebody must have done this.

There are several ways to control access to S3. The best practice for making something privately accessible is to grant no access at all to your buckets/files in the bucket policy.
Instead, create an IAM role that has either full access to S3 or limited access to specific actions and specific buckets.
You can attach an IAM role to every EC2 instance and to every Elastic Beanstalk environment. The role's credentials are automatically served to your instances via the instance metadata service, which is a safe way to give special rights to your instances.
(Note: this is an AWS security best practice, since AWS handles the key rotation on your EC2 boxes.)
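For illustration, here is a minimal boto3 sketch of that setup. The bucket name my-app-assets is a placeholder, and aws-elasticbeanstalk-ec2-role is assumed to be the instance profile role your Elastic Beanstalk environment uses (yours may be named differently):

    import json
    import boto3

    # Scope the role's permissions to a single bucket (placeholder name).
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-app-assets",
                "arn:aws:s3:::my-app-assets/*",
            ],
        }],
    }

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName="aws-elasticbeanstalk-ec2-role",  # role attached to the EB instances
        PolicyName="app-bucket-access",
        PolicyDocument=json.dumps(policy),
    )

    # On the instance itself no keys are configured; boto3 picks up the role's
    # temporary credentials from the instance metadata service automatically.
    s3 = boto3.client("s3")
    s3.upload_file("local-image.png", "my-app-assets", "images/local-image.png")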

Related

Is it possible to test a security group policy that restricts S3 access only to a single VPC without creating an EC2?

The title is pretty self-explanatory: I want to lock down S3 access so it is only allowed from a particular VPC. Is there a way of doing this without creating an EC2 instance to test it?
I want to lock down S3 access to only be allowed from a particular VPC
The AWS docs have an entire chapter devoted to limiting access to S3 to a given VPC or VPC endpoint:
Controlling access from VPC endpoints with bucket policies
But if you apply these bucket policies, you will actually lock yourself out of the bucket: any access that does not come from within the VPC will be denied, so you will have to use an EC2 instance or some other resource inside the VPC to work with the bucket.
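As a rough illustration of what such a policy looks like (and why it locks everything else out), here is a sketch that applies a deny-unless-from-VPC-endpoint policy with boto3; the bucket name example-bucket and the endpoint ID vpce-1a2b3c4d are placeholders:

    import json
    import boto3

    # Deny every S3 action on the bucket unless the request arrives through
    # the given VPC endpoint. This also locks out the console and your own
    # credentials when used outside the VPC.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAccessUnlessFromVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}},
        }],
    }

    boto3.client("s3").put_bucket_policy(
        Bucket="example-bucket",
        Policy=json.dumps(policy),
    )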

Export Data from AWS EC2 windows instance hosted in one account to S3 to another account

I am trying to explore the AWS S3 side more and have a doubt: is it possible to export data from an EC2 Windows instance hosted in one account to an S3 bucket hosted in another AWS account? I know one way is to use external tools like TntDrive, where I can map S3 as a mounted drive and export the data. I am looking for another good solution if S3 provides one, so if someone knows, please share your suggestions.
I think all you would need on your EC2 instance is access to the AWS credentials of the other account; then you could copy your file from the EC2 instance to the second account's S3 bucket. Alternatively, you could grant the identity associated with the EC2 instance rights to an S3 bucket owned by the second account, and then just write to that bucket "yourself".
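A rough boto3 sketch of both options; the bucket name, file paths, and access keys below are placeholders, not real values:

    import boto3

    # Option 1: use credentials that belong to the second account.
    other_account = boto3.Session(
        aws_access_key_id="AKIA...SECOND_ACCOUNT",
        aws_secret_access_key="...",
    )
    other_account.client("s3").upload_file(
        r"C:\exports\data.zip", "second-account-bucket", "exports/data.zip"
    )

    # Option 2: the instance's own identity has been granted write access in the
    # other account's bucket policy; the ACL lets the bucket owner fully control
    # the object it receives.
    boto3.client("s3").upload_file(
        r"C:\exports\data.zip",
        "second-account-bucket",
        "exports/data.zip",
        ExtraArgs={"ACL": "bucket-owner-full-control"},
    )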

Should I use EC2 or Elastic Beanstalk when I am creating a new role where my EC2 / Beanstalk instances should have access to S3?

This link says
To create the IAM role
Open the IAM console.
In the navigation pane, select Roles, then Create New Role.
Enter a name for the role, then select Next Step. Remember this name, since you'll need it when you launch your Amazon EC2 instance.
On the Select Role Type page, under AWS Service Roles, select Amazon EC2.
On the Set Permissions page, under Select Policy Template, select Amazon S3 Read Only Access, then Next Step.
On the Review page, select Create Role.
But when you click "Create New Role", you are asked to "choose a service that will use this role".
a) As you launch an app in Elastic Beanstalk, which in turn creates an EC2 instance, should I select the EC2 service or the Elastic Beanstalk service?
You are creating an EC2 instance role, so the service to select is EC2, regardless of whether or not the instances are being spawned and managed by Elastic Beanstalk.
With an instance role, your instance has continuous access to a set of automatically-rotated temporary credentials that it can use to access whatever services the role policies grant access to.
Here, you are granting the EC2 service permission to actually obtain those temporary credentials on behalf of your instance.
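If it helps to see what the console steps amount to, here is a rough boto3 equivalent; the role and instance profile name my-app-s3-readonly is a placeholder:

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy: this is what "select Amazon EC2" in the console sets up.
    # It lets the EC2 service assume the role and fetch temporary credentials
    # on behalf of your instances.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="my-app-s3-readonly",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )
    iam.attach_role_policy(
        RoleName="my-app-s3-readonly",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )

    # EC2 (and therefore Elastic Beanstalk) attaches roles via an instance profile.
    iam.create_instance_profile(InstanceProfileName="my-app-s3-readonly")
    iam.add_role_to_instance_profile(
        InstanceProfileName="my-app-s3-readonly",
        RoleName="my-app-s3-readonly",
    )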
A rule of thumb with AWS: only create the resources you need, because AWS charges you for everything you use. With that said, if you only need an EC2 instance that can communicate with your S3 bucket, go with EC2 only. EC2 instances are essentially your base servers, and you can always link one to Elastic Beanstalk later if you want to use that service.
Note: if you eventually start using S3 to serve content to your users (e.g. images, videos, etc.), you should put CloudFront in front of it as your CDN to control things like caching, speed, and availability across regions.
Hope this helps.
The AWS document is merely an example (applying IAM to EC2). You don't need to follow it mechanically, because your case applies IAM to a different type of AWS service.

Understanding on concept of IAM role on EC2

I'm just getting in touch with EC2 and came across the IAM role concept. This question is to clear up my doubt about the level of restriction involved.
Let's say I have an EC2 instance with an attached IAM role, Role A, which has one policy, AmazonS3ReadOnlyAccess. Correct me if I'm wrong, but that means this particular instance is only allowed to perform S3 read-only operations.
Now say I created a user with programmatic access and the AmazonS3FullAccess policy.
If this user SSHes into the EC2 instance, can he write files to S3?
I am still unable to try it out myself, as I don't have a Linux machine and am still figuring out how to connect to EC2 using PuTTY.
Let's say I have an EC2 instance with an attached IAM role, Role A, which has one policy, AmazonS3ReadOnlyAccess. Correct me if I'm wrong, but that means this particular instance is only allowed to perform S3 read-only operations.
Yes
Now say I created a user with programmatic access and the AmazonS3FullAccess policy. If this user SSHes into the EC2 instance, can he write files to S3?
IAM users cannot SSH to EC2 instances using IAM user credentials. After provisioning an EC2 instance, you SSH to the server using regular operating-system user accounts (by default, the user and key pair created by AWS).
In addition, if a user SSHes into the EC2 instance and uses an IAM user's programmatic access credentials with the AWS CLI, the REST API, or the SDKs (it doesn't even have to be an EC2 instance; it could be your on-premises server), then as long as that IAM user has an S3 write policy, the CLI commands, API calls, or SDK code can write files to S3.
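A small sketch of that distinction, run on an instance whose role only has AmazonS3ReadOnlyAccess; the bucket name and access keys are placeholders:

    import boto3

    # 1) Using the instance role (read-only): this PutObject call is denied.
    role_s3 = boto3.client("s3")
    # role_s3.put_object(Bucket="example-bucket", Key="test.txt", Body=b"hi")  # AccessDenied

    # 2) Using the IAM user's programmatic access keys (AmazonS3FullAccess):
    #    the same call succeeds, because the request is signed with the user's
    #    credentials, not the instance role's.
    user_s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...EXAMPLE",
        aws_secret_access_key="...",
    )
    user_s3.put_object(Bucket="example-bucket", Key="test.txt", Body=b"hi")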
So, in summary:
Use IAM roles if you are running on an EC2 instance, for your CLI commands, SDK code, or REST API calls that access AWS resources.
If you are using a server on-premises or outside AWS, use an IAM user's programmatic access keys to do the same.
Insight on how IAM roles work internally with EC2
When you attach an IAM role to an EC2 instance, AWS periodically delivers temporary access credentials to that instance (which is a good security practice).
These credentials are accessible through the instance metadata URLs to the CLI, the REST API, and code using the SDKs inside the EC2 instance.
Note: using roles is more secure, since they rely on temporary credentials, whereas an IAM user's programmatic access uses long-lived credentials.
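To see those temporary credentials for yourself, here is a small sketch using the requests library against the instance metadata service; it assumes IMDSv2 is enabled, and normally you never read these by hand because the CLI and SDKs do it for you:

    import requests

    # IMDSv2: first fetch a session token, then the role credentials.
    token = requests.put(
        "http://169.254.169.254/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    ).text

    base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    headers = {"X-aws-ec2-metadata-token": token}

    role_name = requests.get(base, headers=headers).text
    creds = requests.get(base + role_name, headers=headers).json()

    # Contains AccessKeyId, SecretAccessKey, Token and an Expiration timestamp;
    # the SDKs and CLI fetch and refresh these automatically.
    print(creds["AccessKeyId"], creds["Expiration"])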

Supplying non-AWS credentials to EC2 instance on launch

We have an EC2 instance that comes up as part of an Auto Scaling configuration. The instance can retrieve AWS credentials using the IAM role assigned to it. However, it needs additional configuration to get started, some of which is sensitive (passwords to non-EC2 resources) and some of which is not (configuration parameters).
It seems the practice recommended by AWS is to store instance configuration in S3 and retrieve it at run time. The problem I have with this approach is that the configuration sits unprotected in an S3 bucket: an incorrect policy may expose it to parties who were never meant to see it.
What is a best practice for accomplishing my objective so that configuration data stored in S3 is also encrypted?
PS: I have read this question, but it does not address my needs.
[…] incorrect policy may expose it to parties who were never meant to see it.
Well, then it's important to ensure that the policy is set correctly. :) Your best bet is to automate your deployments to S3 so that there's no room for human error.
Secondly, you can always find a way to encrypt the data before pushing it to S3, then decrypt it on the instance when the machine spins up.
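As a rough sketch of that second point, here is one way to encrypt before uploading and decrypt on the instance, using the cryptography library's Fernet (symmetric) scheme; the bucket and object names are placeholders, and how the key is distributed to the instance (e.g. baked into the AMI) is left open:

    import boto3
    from cryptography.fernet import Fernet

    BUCKET = "my-config-bucket"      # placeholder bucket name
    KEY = "instance-config.enc"      # placeholder object key

    # On the developer box: encrypt the configuration before uploading.
    fernet_key = Fernet.generate_key()   # keep this secret; distribute it out of band
    plaintext = b"db_password=...\napi_token=..."
    ciphertext = Fernet(fernet_key).encrypt(plaintext)
    boto3.client("s3").put_object(Bucket=BUCKET, Key=KEY, Body=ciphertext)

    # On the instance (S3 credentials come from the IAM role automatically):
    obj = boto3.client("s3").get_object(Bucket=BUCKET, Key=KEY)
    config = Fernet(fernet_key).decrypt(obj["Body"].read())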
AWS does not provide clear guidance on this situation, which is a shame. This is how I am going to architect the solution:
The developer box encrypts the per-instance configuration blob using the private portion of an asymmetric keypair and places it in an S3 bucket.
Restrict access to the S3 bucket using an IAM policy.
Bake the public portion of the asymmetric keypair into the AMI.
Apply the IAM role to the EC2 instance and launch it from the AMI.
The EC2 instance is able to download the configuration from S3 (thanks to the IAM role) and decrypt it (thanks to having the public key available).
The private key is never sent to an instance, so it should not be compromised. If the public key is compromised (e.g. if the EC2 instance is rooted), then the attacker can decrypt the contents of the S3 bucket (but at that point they already have root access to the instance and can read the configuration directly from the running service).