How to handle bucket limits on AWS

We create an S3 bucket for each client to keep their data separated. The buckets are accessed by clients via Amazon's SFTP service.
I'm reading that there is a 1,000-bucket limit per account. What are the workarounds for this?

Related

Is copying an S3 bucket from one AWS account to another AWS account secure in transit?

I am looking to copy the contents of one S3 bucket to another S3 bucket in a different account.
I found the following tutorial and tested it with non-confidential files - https://medium.com/tensult/copy-s3-bucket-objects-across-aws-accounts-e46c15c4b9e1
I am wondering whether data transferred between accounts using this method is secure - as in encrypted in transit. Does AWS perform a direct copy, or does the computer running the sync act as the middleman, downloading the objects and then uploading them to the destination bucket?
I do have AES-256 (server-side encryption with Amazon S3-managed keys) enabled on the source S3 bucket.
I did see a recommendation about using AWS KMS, but it was not clear whether that would do what I need.
I just want to make sure the S3 transfer from one account to the other is secure!
When using the cp or sync commands to copy between buckets, the objects are always copied "within S3". The objects are not downloaded to and re-uploaded from your machine.
If you are copying data between buckets in the same region, the traffic stays entirely on the AWS "backplane", so it never goes out to the Internet or into a VPC. I believe that it is also encrypted while being copied.
If you are copying between regions, the data is encrypted as it travels across the AWS network between the regions. (Note: Data Transfer charges will apply.)
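As an illustration of that behaviour (the bucket and key names below are placeholders, not from the question), a copy issued with boto3's copy_object is a server-side COPY request, so the object data never passes through the machine running the script. For a cross-account copy, the caller's credentials must be allowed to read the source and write the destination, which usually means a bucket policy on one side.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key names for illustration.
s3.copy_object(
    CopySource={"Bucket": "source-bucket", "Key": "reports/data.csv"},
    Bucket="destination-bucket",
    Key="reports/data.csv",
    ServerSideEncryption="AES256",  # keep SSE-S3 on the copied object
)
```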
Since you're using the AWS CLI, it will default to HTTPS, according to the documentation:
By default, the AWS CLI sends requests to AWS services by using HTTPS on TCP port 443. To use the AWS CLI successfully, you must be able to make outbound connections on TCP port 443.
You can also ensure that no plaintext (non-HTTPS) requests can be performed at all by adding a Deny statement with the "aws:SecureTransport": "false" condition to the bucket policy.
Take a look at the "What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only?" documentation for an example bucket policy using this condition.
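For illustration only (the bucket name is a placeholder), a policy of that shape could be applied with boto3 like this; it denies every request to the bucket that does not use TLS:

```python
import json
import boto3

bucket = "example-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # Deny any request made without TLS.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```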

Limit number of file shares in AWS storage gateway for multiple local servers

There are 9 local on-premises servers, each with unique output. It seems that AWS Storage Gateway exposes one file share per S3 bucket.
That would mean a total of 9 shares with 9 S3 buckets.
Is there any way to do this using just one file share?
You can set up a single S3 bucket as your data hub on AWS and create an NFS file share for this bucket through Storage Gateway. Locally, create a sub-folder for each data source/destination under this common NFS share.
Have each server/process write to its own folder. All of these folders will then be replicated to the single S3 bucket.
Add a Lambda function on the S3 bucket that replicates each folder to its corresponding target S3 bucket (the way aws s3 sync would); a sketch of such a function is shown below.
This approach handles traffic for all your servers with a single bucket share. Storage Gateway only supports 10 file shares / 10 buckets per gateway instance; the approach above lets you go past that limit.
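A minimal sketch of such a Lambda, assuming the hub bucket invokes it on s3:ObjectCreated events. The folder-to-bucket mapping is invented for illustration, and instead of shelling out to aws s3 sync it performs a per-object server-side copy:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

# Hypothetical mapping from top-level folder in the hub bucket to target bucket.
TARGET_BUCKETS = {
    "server1": "client-bucket-1",
    "server2": "client-bucket-2",
    # ... one entry per on-premises server
}

def handler(event, context):
    """Triggered by s3:ObjectCreated:* on the hub bucket; copies each new
    object server-side into the bucket mapped to its top-level folder."""
    for record in event["Records"]:
        hub_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        prefix = key.split("/", 1)[0]
        target = TARGET_BUCKETS.get(prefix)
        if target is None:
            continue  # no mapping for this folder; skip it
        s3.copy_object(
            CopySource={"Bucket": hub_bucket, "Key": key},
            Bucket=target,
            Key=key,
        )
```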
All the best.

How would I know if an AWS key with S3 access got into the wrong hands?

We are assigning IAM access keys with only S3 access.
What would I have to do to make sure there aren't 1,000 unique IPs downloading from S3 with that key?
I want to be alerted if someone loses an S3 key and an attacker recursively downloads all the files in our S3 buckets.
There is no in-built mechanism to give you such a warning. However, here are some options to consider...
You could restrict access to a specific range of IP addresses (eg, a corporate network) so that the Amazon S3 bucket can only be accessed from those IP addresses. This is highly recommended if you know they will only be accessed from certain IP addresses.
You could enable Amazon S3 server access logging, which records each request made against objects in the bucket, and analyze those logs for unusual access patterns.
You could use AWS CloudTrail to track object-level operations and send them to Amazon CloudWatch Events, where a rule could count how often objects are being accessed. You could then create an Amazon CloudWatch Alarm to send a notification if the count of accesses exceeds a certain threshold.
You could use temporary credentials instead of permanent credentials. Rather than giving people credentials that last forever, have them authenticate against a system that issues time-limited credentials via the AWS Security Token Service (STS). These credentials automatically expire after a set period, so if they get 'into the wild' they are only usable until they expire.
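As a rough sketch of that last option (the role ARN, session name, and duration below are placeholders, not anything from the original post), temporary credentials can be obtained from STS and used in place of long-lived access keys:

```python
import boto3

sts = boto3.client("sts")

# Hypothetical role that grants only the S3 access the user needs.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/s3-download-role",
    RoleSessionName="client-download",
    DurationSeconds=3600,  # credentials expire after one hour
)

creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# This client works only until the credentials expire (~1 hour here).
```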

Limit aws to just s3

Is there a way to disable access to all AWS services except S3? I have an account that will only use S3, and I am worried about unexpected charges from someone running EC2.
Alternatively, is there a way to create API keys for S3 access only?
You can easily create an IAM user and allow full permissions to S3 and, if needed, only read-only access to all other services. That way, even when using API keys, the user can only use S3 and cannot create resources in any other service.
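For illustration, a sketch of an identity policy granting S3-only access, attached to a hypothetical user with boto3 (the user and policy names are placeholders). Since everything not explicitly allowed is denied by default, no other services are usable with these keys:

```python
import json
import boto3

iam = boto3.client("iam")

s3_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3Only",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        }
    ],
}

# Hypothetical user name; the inline policy grants S3 access and nothing else.
iam.put_user_policy(
    UserName="s3-only-user",
    PolicyName="s3-only",
    PolicyDocument=json.dumps(s3_only_policy),
)
```

If you also want the read-only view of other services mentioned above, you could additionally attach the AWS-managed ReadOnlyAccess policy to the same user.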

Restricting AWS S3 bucket to only certain instances for /GET requests

Folks,
We have (sensitive) images and video stored in an S3 bucket. We would like to allow only our web server instances to access the data in these buckets via HTTP calls. What are our options?
Thanks
Create a policy that will restrict access:
http://docs.aws.amazon.com/AmazonS3/latest/dev/AccessPolicyLanguage_UseCases_s3_a.html
http://awspolicygen.s3.amazonaws.com/policygen.html
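As one possible example of such a policy (the linked docs cover the general pattern; the bucket name and role ARN below are placeholders): keep the bucket private and grant s3:GetObject only to the IAM role attached to the web server instances.

```python
import json
import boto3

bucket = "sensitive-media-bucket"  # placeholder
web_role_arn = "arn:aws:iam::123456789012:role/web-server-role"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWebServersGet",
            "Effect": "Allow",
            "Principal": {"AWS": web_role_arn},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```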