Are S3 buckets region specific? - amazon-web-services

When we go to S3 in the AWS console, the region selector shows "Global" and says
"S3 does not require region selection."
But when we create a new bucket there, it asks for a region!
So are S3 buckets region-specific?

The user interface shows all your buckets, in all regions. But buckets exist in a specific region and you need to specify that region when you create a bucket.
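As a minimal sketch with boto3 (bucket name and region are hypothetical): the region is fixed at creation time, and for any region other than us-east-1 it has to be passed as a location constraint.

    import boto3

    # The bucket's region is chosen once, at creation time.
    s3 = boto3.client("s3", region_name="eu-west-1")
    s3.create_bucket(
        Bucket="my-example-bucket",
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )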

S3 buckets are region-specific. You can check http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for the list of endpoints by region.
From the documentation on creating an S3 bucket:
Amazon S3 creates buckets in a region you specify. You can choose any AWS region that is geographically close to you to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) or EU (Frankfurt) regions. For a list of Amazon S3 regions, go to Regions and Endpoints in the AWS General Reference.
Also, in the UI, if you look at the properties of each of your buckets, you will see its region.
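You can check a bucket's region programmatically too; a quick boto3 sketch (bucket name hypothetical). Note the API quirk that buckets in us-east-1 report None:

    import boto3

    s3 = boto3.client("s3")
    resp = s3.get_bucket_location(Bucket="my-example-bucket")
    # us-east-1 is reported as None; all other regions by name.
    print(resp["LocationConstraint"] or "us-east-1")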

Yes, S3 buckets are region-specific.
When you create a new bucket you need to select the target region for that bucket.
Hope it helps.

How it works now is that if you expect your content to load fast globally, you create a bucket in every region you want your data to load quickly from, enable Versioning, and use Replication to automatically duplicate content from one bucket to the other.
Click on one of your buckets, then go to Management, then go to 'Replication'.
Follow the instructions to set up a rule that will copy from one bucket to another (a scripted sketch of the same steps follows below).
Congratulations, you now have globally fast content.
I appreciate this may seem a little off-piste, but I think this is what we are all really looking to achieve.
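For reference, a rough boto3 sketch of those console steps, under the assumption that both buckets and an IAM role S3 can assume for replication already exist (all names hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Replication requires versioning on both source and destination.
    for bucket in ("my-source-bucket", "my-destination-bucket"):
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )

    # A single rule that replicates everything in the source bucket.
    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/my-replication-role",
            "Rules": [{
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }],
        },
    )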

There is a related answer that highlights one important point: although the console and the CLI show buckets from all regions, probably because bucket names must be globally unique, buckets are still tied to a region.
This matters, for example, when dealing with permissions. You may have Infrastructure as Code generalized with roles that grant permissions to all buckets for the current region. Although the CLI might give you the impression that a newly created bucket is visible in all regions, in reality you may end up with errors if you fail to explicitly grant access to a service running in one region that needs an S3 bucket in another region.
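A minimal sketch of the kind of region-scoped policy that answer describes, assuming a naming convention that embeds the region in the bucket name (all names hypothetical). S3 bucket ARNs carry no region field, so this sort of scoping only works through such conventions:

    import json
    import boto3

    iam = boto3.client("iam")

    # Grants access only to buckets following a "myapp-<region>-*" naming
    # convention; a bucket created in another region falls outside it.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::myapp-eu-west-1-*/*",
        }],
    }
    iam.put_role_policy(
        RoleName="my-service-role",
        PolicyName="region-scoped-s3",
        PolicyDocument=json.dumps(policy),
    )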

Related

Data Lakes - S3 and Databricks

I understand Data Lake Zones in S3 and I am looking at establishing 3 zones - LANDING, STAGING, CURATED. If I were in an Azure environment, I would create the Data Lake and have multiple folders as various zones.
How would I do the equivalent in AWS - would it be a separate bucket for each zone (s3://landing_data/, s3://staging_data/, s3://curated_data/) or a single bucket with multiple folders (i.e. s3://bucket_name/landing/..., s3://bucket_name/staging/...)? I understand that S3 buckets are simply containers.
Also, would I be able to mount multiple S3 buckets on Databricks AWS? If so, is there any reference documentation?
Is there any best/recommended approach given that we can read and write to S3 in multiple ways?
I looked at this as well:
S3 performance Best Practices
There is no single solution - the actual implementation depends on the amount of data, the number of consumers/producers, etc. You need to take into account AWS S3 limits, such as:
By default you may have only 100 buckets in an account, although this limit can be increased
You may issue 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix (directory) in a single bucket (although the number of prefixes is not limited)
You can mount each of the buckets, or individual folders, into the Databricks workspace as described in the documentation. But it's really not recommended from a security standpoint, as everyone in the workspace will have the same permissions as the role that was used for mounting. Instead, just use full S3 URLs in combination with instance profiles, as sketched below.
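A minimal sketch of the full-S3-URL approach in a Databricks notebook, assuming the cluster has an instance profile with access to the bucket (bucket, paths, and column name are hypothetical):

    # `spark` is provided by the Databricks notebook runtime; credentials
    # come from the instance profile attached to the cluster.
    landing = spark.read.json("s3://my-lake-bucket/landing/events/")

    cleaned = landing.dropDuplicates(["event_id"])
    cleaned.write.mode("overwrite").parquet("s3://my-lake-bucket/staging/events/")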

Amazon S3 redundancy over Availability Zones vs. over Regions

This post, https://aws.amazon.com/blogs/storage/architecting-for-high-availability-on-amazon-s3/, states the following:
Amazon S3 storage classes replicate their data on more than three Availability Zones (except for S3 One Zone-Infrequent Access).
What's the point of this article https://aws.amazon.com/blogs/startups/large-scale-disaster-recovery-using-aws-regions/ stating:
S3 snapshots: We rely on the cross s3 sync and this works like a charm. We are able to copy the data from our primary to the DR region within a matter of a few minutes.
The latter seems superfluous now, and it is from 2017, so maybe it is outdated? Or is the thrust that we should also be placing Amazon S3 copies across Regions? I see no such need, as the AZs within a Region are physically separated from each other. What am I missing?
S3 buckets are region specific. When you create a new bucket you need to select the target region for that bucket.
For DR reasons, you can keep backups in another region. Should the primary region fail in a way that the entire region is affected, then you could restore in the backup region.
Your DR strategy will depend on your use case, and your needs for returning services back to normal in case of region wide failure.
For example, let's say you rely on EC2/EBS to operate your service and those services suffer a region-wide outage for 5 hours. In order to recover your service you would need to move to a region where the resources are available. Assuming you need S3 data for operational processing, you would want to have that data ready in the target recovery region.
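A minimal sketch of seeding a DR bucket in another region with boto3 (all names hypothetical); in practice you would automate this with S3 Replication or a scheduled sync:

    import boto3

    # Client pinned to the backup region; the source bucket is resolved
    # by name, so this performs a cross-region server-side copy.
    s3 = boto3.client("s3", region_name="eu-west-2")
    s3.copy_object(
        CopySource={"Bucket": "my-primary-bucket", "Key": "backups/db.dump"},
        Bucket="my-dr-bucket",
        Key="backups/db.dump",
    )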
Storing data in multiple AZs in a region does not guarantee safety in case of an entire region failure. This is applicable to all regional services. The article you shared indeed mentions this, so it is not irrelevant:
The service that runs in HA is handled by hosts running in different availability zones but in the same geographical region. This approach, however, does not guarantee that our business will be up and running in case the entire region goes down.

Is it possible to replicate a specific S3 folder between 2 buckets?

Does anyone know if it is possible to replicate just a folder of a bucket between 2 buckets using the AWS S3 replication feature?
P.S.: I don't want to replicate the entire bucket, just one folder of the bucket.
If it is possible, what configuration do I need to add to filter that folder in the replication?
Yes. Amazon S3's Replication feature allows you to replicate objects at a prefix (say, folder) level from one S3 bucket to another, within the same region or across regions.
From the AWS S3 Replication documentation:
The objects that you want to replicate — You can replicate all of the objects in the source bucket or a subset. You identify a subset by providing a key name prefix, one or more object tags, or both in the configuration.
For example, if you configure a replication rule to replicate only objects with the key name prefix Tax/, Amazon S3 replicates objects with keys such as Tax/doc1 or Tax/doc2. But it doesn't replicate an object with the key Legal/doc3. If you specify both prefix and one or more tags, Amazon S3 replicates only objects having the specific key prefix and tags.
Refer to this guide on how to enable replication using the AWS console. Step 4 talks about enabling replication at the prefix level. The same can be done via CloudFormation and the CLI as well; a scripted sketch follows below.
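A minimal boto3 sketch of such a prefix-filtered rule, mirroring the Tax/ example from the documentation (bucket names and role ARN are hypothetical; versioning must already be enabled on both buckets):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/my-replication-role",
            "Rules": [{
                "ID": "replicate-tax-folder",
                "Status": "Enabled",
                "Priority": 1,
                # Only keys under Tax/ are replicated; Legal/ is not.
                "Filter": {"Prefix": "Tax/"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }],
        },
    )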
Yes, you can do this using the Cross-Region Replication feature. You can replicate objects either within the same region or to a different one. The replicated objects in the new bucket keep their original storage class, object names, and object permissions.
However, you can change the owner to the owner of the destination bucket.
Despite all of this, there are disadvantages to this feature:
You cannot replicate objects that were already present in the source bucket before you created the replication rule using CRR; only objects created after the replication rule is in place are replicated.
You cannot use SSE-C encryption in replication.
You can do this with the sync command, scoped to just the folder you want:
    aws s3 sync s3://SOURCE_BUCKET_NAME/FOLDER_NAME/ s3://NEW_BUCKET_NAME/FOLDER_NAME/
You must grant the destination account the permissions to perform the cross-account copy.
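A sketch of the kind of policy on the source bucket that grants a destination account read access for such a copy (the account ID is hypothetical; bucket placeholders kept from the command above):

    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::210987654321:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::SOURCE_BUCKET_NAME",
                "arn:aws:s3:::SOURCE_BUCKET_NAME/*",
            ],
        }],
    }
    boto3.client("s3").put_bucket_policy(
        Bucket="SOURCE_BUCKET_NAME", Policy=json.dumps(policy)
    )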

Does AWS S3 cross-region replication use same URL for multiple regions?

Using S3 cross-region replication, if a user downloads http://mybucket.s3.amazonaws.com/myobject, will it automatically download from the closest region, like CloudFront? So there is no need to specify the region in the URL, like http://mybucket.s3-[region].amazonaws.com/myobject?
http://aws.amazon.com/about-aws/whats-new/2015/03/amazon-s3-introduces-cross-region-replication/
Bucket names are global, and cross-region replication involves copying objects to a different bucket.
In other words, an example bucket in us-west-1 alongside an example bucket in us-east-1 is not valid, as there can only be one bucket named 'example' across all of S3.
That's implied in the announcement post: Mr. Barr is using buckets named jbarr and jbarr-replication.
Using S3 cross-Region replication will put your object into two (or more) buckets in two different Regions.
If you want a single access point that will choose the closest available bucket, then you want to use Multi-Region Access Points (MRAP).
MRAP makes use of Global Accelerator and puts bucket requests onto the AWS backbone at the closest edge location, which provides a faster, more reliable connection to the actual bucket. Global Accelerator also chooses the closest available bucket; if one bucket is not available, it serves the request from the other bucket, providing automatic failover.
You can also configure it in an active/passive configuration, always serving from one bucket until you initiate a failover.
The MRAP page in the AWS console even shows a graphical representation of your replication rules (a usage sketch follows below).
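A minimal sketch of fetching an object through an MRAP with boto3 (account ID and MRAP alias are hypothetical). MRAP requests are signed with SigV4A, which needs the optional AWS CRT extra (pip install "botocore[crt]"):

    import boto3

    s3 = boto3.client("s3")
    # An MRAP ARN is passed in place of a bucket name; S3 routes the
    # request to the closest available underlying bucket.
    resp = s3.get_object(
        Bucket="arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap",
        Key="myobject",
    )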
S3 has a global namespace, so the bucket name has to be unique globally and you don't put a region in the URL to access it.
When you create a bucket, you do need to specify a region, but that doesn't mean you need to put the region name in the URL when you access it. To speed up access from other regions, there are several options, such as:
-- Amazon S3 Transfer Acceleration with the same bucket name (see the sketch below).
-- Or set up another bucket with a different name in a different region, enable cross-region replication, and create an origin group with the two origins for CloudFront.
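A minimal sketch of enabling and then using Transfer Acceleration with boto3 (bucket and file names hypothetical):

    import boto3
    from botocore.config import Config

    s3 = boto3.client("s3")
    # One-time opt-in on the bucket itself.
    s3.put_bucket_accelerate_configuration(
        Bucket="my-example-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Clients then route transfers through the accelerate endpoint.
    s3_accel = boto3.client(
        "s3", config=Config(s3={"use_accelerate_endpoint": True})
    )
    s3_accel.upload_file("local.file", "my-example-bucket", "remote-key")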

Amazon S3 Global service concept

I have a client who wants confirmation that his data on S3 will only be saved in the UK. Amazon S3 is a global service, and even though I create the bucket in Ireland, I guess it is replicated to other regions as well to offer 11 nines of durability. That would clearly mean my client's data is copied outside the UK for replication on the AWS side.
Can anyone guide me through a solution for this, or correct me if I am wrong about the concept stated above?
Cheers!
S3 bucket names are globally unique but they exist wholly within one AWS region:
Objects belonging to a bucket that you create in a specific AWS region never leave that region, unless you explicitly transfer them to another region. For example, objects stored in the EU (Ireland) region never leave it.
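So for this client, creating the bucket in the London region keeps the data in the UK; a minimal boto3 sketch (bucket name hypothetical):

    import boto3

    # eu-west-2 is the London region; objects stored here stay there
    # unless you explicitly copy them elsewhere.
    s3 = boto3.client("s3", region_name="eu-west-2")
    s3.create_bucket(
        Bucket="my-uk-only-bucket",
        CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
    )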