Limit AWS to just S3 - amazon-web-services

Is there a way to disable access to all AWS services except S3? I have an account that will only use S3, and I am worried about unexpected charges from running EC2.
Alternatively, is there a way to create API keys for S3 access only?

You could create an IAM user and grant full permissions to S3 only (and, at most, read-only access to all other services). That way, even when using API keys, the user can only work with S3 and cannot create resources in any other service.
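For illustration, here is a minimal Python (boto3) sketch of setting this up, using the AWS-managed AmazonS3FullAccess policy; the user name is hypothetical:

```python
import boto3

iam = boto3.client("iam")

# Hypothetical user name; adjust to your own naming conventions.
user_name = "s3-only-user"

# Create the IAM user.
iam.create_user(UserName=user_name)

# Attach the AWS-managed policy that grants full access to S3 and nothing else.
iam.attach_user_policy(
    UserName=user_name,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# Generate API keys for this user; calls to services other than S3 will be denied.
keys = iam.create_access_key(UserName=user_name)["AccessKey"]
print("Access key:", keys["AccessKeyId"])
print("Secret key:", keys["SecretAccessKey"])
```

Since the user has no permissions beyond S3, these keys cannot be used to launch EC2 instances or create any other billable resources.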

Related

How to get S3 Secure Access from an Application?

How can we access, upload, and delete objects in the S3 bucket from the web URL securely?
We are accessing the objects in S3 from our application, but that bucket is public, which is not secure.
I have tried CloudFront with an OAI on the S3 bucket and making the bucket private, but access is denied when the application tries to upload an object to the bucket.
We want to upload and delete objects in the S3 bucket while keeping the bucket private, and we want to do this from the web application only, not from the CLI or any other tool. How can we achieve this?
Any suggestion will be appreciated. 
Thanks!
Your application can use an AWS SDK to communicate directly with AWS services.
Your application will require a set of credentials to gain access to resources in your AWS Account. This can be done in one of two ways:
If your application is running on an Amazon EC2 instance, assign an IAM Role to the instance
Otherwise, create an IAM User and store the AWS credentials on the application server by using the AWS CLI aws configure command
You can control the exact permissions and access given to the IAM Role / IAM User. This can include granting the application permission to access your Amazon S3 buckets. This way, the buckets can be kept private, but the application will be able to upload/download/delete objects in the bucket.
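As a rough sketch (not the only way to do it), here is what the application-side code might look like in Python with boto3, assuming the SDK resolves credentials from the instance's IAM Role or from aws configure; the bucket, key, and file names are hypothetical:

```python
import boto3

# boto3 automatically picks up credentials from the EC2 instance's IAM Role,
# or from the keys stored by `aws configure` on the application server.
s3 = boto3.client("s3")

bucket = "my-private-bucket"  # hypothetical bucket name

# Upload, download, and delete objects in the private bucket.
s3.upload_file("local/photo.jpg", bucket, "uploads/photo.jpg")
s3.download_file(bucket, "uploads/photo.jpg", "local/photo-copy.jpg")
s3.delete_object(Bucket=bucket, Key="uploads/photo.jpg")
```

The bucket itself stays private; only the application's IAM Role / IAM User is granted permission to it.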
To add to the previous answer, you can find many S3 SDK examples in the AWS GitHub repository located here:
https://github.com/awsdocs/aws-doc-sdk-examples
If you look under each programming language, you will find Amazon S3 examples. You can use the AWS SDK to perform actions on a bucket even when it is private. You can take security a step further and use encryption, as shown here:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/java/example_code/s3/src/main/java/aws/example/s3/S3EncryptV2.java
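The linked Java example uses the S3 encryption client (client-side encryption). As a simpler, related sketch in Python (boto3), you can also ask S3 to encrypt objects at rest with server-side encryption; the bucket and key names below are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-private-bucket"  # hypothetical bucket name

# Ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3).
with open("summary.csv", "rb") as data:
    s3.put_object(
        Bucket=bucket,
        Key="reports/summary.csv",
        Body=data,
        ServerSideEncryption="AES256",  # or "aws:kms" to use a KMS key
    )
```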
You can also interact with an Amazon S3 bucket from a web app by using the AWS SDK. Here is an example of building a Spring Boot web app that interacts with an Amazon S3 bucket by reading all of the objects in the bucket.
Creating an example AWS photo analyzer application using the AWS SDK for Java
It's bad practice to use long-term credentials; AWS recommends using short-term credentials obtained via STS. Here is an article that uses Python/Flask to upload a file into a private S3 bucket using STS/short-term credentials.
Connect on-premise Python application with AWS services using Roles
I could have listed all the steps in this post, but it would be a bit too long, hence the reference to the link above.
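As a rough illustration of the STS approach (this is a sketch, not the code from the linked article), using Python (boto3) with a hypothetical role ARN and bucket name:

```python
import boto3

# Exchange long-term keys for short-term credentials via STS (AssumeRole).
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/app-s3-upload-role",  # hypothetical role
    RoleSessionName="upload-session",
    DurationSeconds=900,
)["Credentials"]

# Use the temporary credentials to upload into the private bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("invoice.pdf", "my-private-bucket", "invoices/invoice.pdf")
```

The temporary credentials expire automatically, so a leaked key is far less damaging than a leaked long-term key.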

Use AWS keys to transfer data between organizations

I am trying to move client data from the client's S3 bucket (s3://client-bucket) to our organization's S3 bucket (s3://org-bucket). I was given access keys to the client's S3 bucket.
Using the AWS CLI I am able to access the client's S3 bucket and see all the files. I cannot, however, use aws s3 mv because the profile that has access to client-bucket does not have permissions set up for org-bucket.
I am not allowed to move the data through an intermediate public bucket because of security issues and the sensitivity of the data.
What is the best way to make this transfer work? Is there a way to set up a profile in the AWS CLI config/credentials with both the access keys for org-bucket and client-bucket?
The best way is to use the access keys from your own organization to access your client's S3 bucket. Since you need to copy objects directly via the CopyObject API, your IAM user/role needs access to both the S3 bucket in your org AND your client's S3 bucket. Therefore, your current approach doesn't work, and even AssumeRole would not work either. You can follow this guide to configure the proper resource-based policies in S3.
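For illustration, here is a minimal Python (boto3) sketch of that server-side copy, assuming the client has added a bucket policy granting your org's IAM user/role read access to s3://client-bucket; the bucket names come from the question, everything else is hypothetical:

```python
import boto3

# One identity (your org's IAM user or role) that can read s3://client-bucket
# and write to s3://org-bucket.
s3 = boto3.client("s3")

src_bucket = "client-bucket"
dst_bucket = "org-bucket"

# Copy every object server-side (CopyObject) without routing data
# through an intermediate public bucket.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=src_bucket):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=dst_bucket,
            Key=obj["Key"],
            CopySource={"Bucket": src_bucket, "Key": obj["Key"]},
        )
```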

How to store data using s3?

I am new to AWS and I want to integrate IAM into my AWS account.
I have gone through this link:
https://www.youtube.com/watch?v=KQheV84Ae40&list=PL_OdF9Z6GmVZCwyfd8n6_50jcE_Xlz1je&index=3
but I am not getting a proper idea of how it works.
Is there any example for that?
You can use IAM to create Users in your AWS Account.
You can then associate policies with those users, which grant them permission to use particular AWS services, such as Amazon S3. IAM is automatically integrated with every AWS service.
See: Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket | AWS Security Blog
The IAM service in AWS is used for user management; it helps you securely control access to AWS resources. In IAM you can create users and assign roles to them based on your needs. You can also create custom policies, and AWS provides many managed policies by default. Once you go through them, it is mostly self-explanatory.
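As a small, hypothetical example of how users and custom policies fit together, here is a Python (boto3) sketch that creates a user and attaches an inline policy scoped to a single S3 bucket (the user and bucket names are made up):

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical names; replace with your own user and bucket.
user_name = "app-user"
bucket = "my-app-bucket"

# Custom (inline) policy granting the user access to one bucket only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}

iam.create_user(UserName=user_name)
iam.put_user_policy(
    UserName=user_name,
    PolicyName="s3-single-bucket-access",
    PolicyDocument=json.dumps(policy),
)
```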

upload to s3 from ec2 without access key

Can you connect to S3 via s3cmd, or mount S3 to an EC2 instance, using IAM users and not access keys?
All the tutorials I see say to use access keys, but what if you can't create your own access keys (IT policy)?
There are two ways to access data in Amazon S3: Via an API, or via URLs.
Via an API
When accessing Amazon S3 via API (which includes code using an AWS SDK and also the AWS Command-Line Interface (CLI)), user credentials must be provided in the form of an Access Key and a Secret Key.
The aws and s3cmd utilities, and also software that mounts Amazon S3 as a drive, require access to the API and therefore require credentials.
If you have been given a login to an AWS account, you should be able to ask your administrators to also create credentials that are associated with your User. These credentials will have exactly the same permissions as your normal login (via Username/password), so it's strange that they would be disallowing it. They can be very useful for automating AWS activities, such as starting/stopping Amazon EC2 instances.
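For example, once an administrator has issued an Access Key / Secret Key for your IAM user and you have stored it with aws configure, the CLI, s3cmd, and the SDKs can all call the S3 API. A minimal Python (boto3) sketch, with hypothetical profile, bucket, and file names:

```python
import boto3

# Uses the credentials stored under a named profile by `aws configure --profile my-user`.
session = boto3.Session(profile_name="my-user")
s3 = session.client("s3")

# Upload a file via the S3 API.
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
```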
Via URLs
Objects stored in Amazon S3 can also be made available via a URL that points directly to the data, e.g. s3.amazonaws.com/bucket-name/object.txt
To provide public access to these objects without requiring credentials, either add permission to each object or create a Bucket Policy that grants access to content within the bucket.
This access method can be used to retrieve individual objects, but is not sufficient to mount Amazon S3 as a drive.
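As a rough sketch of the Bucket Policy option mentioned above, in Python (boto3) with the bucket name taken from the example URL (note that the bucket's Block Public Access settings may also need to allow this):

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "bucket-name"  # from the example URL above

# Bucket Policy granting anonymous read access to every object in the bucket,
# so URLs like s3.amazonaws.com/bucket-name/object.txt work without credentials.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(public_read_policy))
```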

Restricting AWS S3 bucket to only certain instances for /GET requests

Folks,
We have (sensitive) images and video stored in an S3 bucket. We would like to allow only our web server instances to access the data in this bucket via HTTP calls. What are our options?
Thanks
Create a policy that will restrict access:
http://docs.aws.amazon.com/AmazonS3/latest/dev/AccessPolicyLanguage_UseCases_s3_a.html
http://awspolicygen.s3.amazonaws.com/policygen.html
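For example, one common pattern (a sketch, not the only option) is a bucket policy that denies s3:GetObject to any request that does not originate from your web servers' IP addresses; a condition on a VPC endpoint is another variant. A Python (boto3) sketch with hypothetical bucket name and IP ranges:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-media-bucket"  # hypothetical bucket holding the images/videos
web_server_ips = ["203.0.113.10/32", "203.0.113.11/32"]  # hypothetical web server IPs

# Deny GET requests to everyone except requests coming from the web servers.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": web_server_ips}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

Be careful with broad Deny statements: they also apply to your own administrators unless their requests come from the listed IPs, so test the policy on a non-critical bucket first.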