Amazon S3 redundancy over Availability Zones vs. over Regions

This AWS blog post (https://aws.amazon.com/blogs/storage/architecting-for-high-availability-on-amazon-s3/#:~:text=Amazon%20S3%20maintains%20redundancy%20even%20within%20one%20of,can%20still%20access%20their%20data%20with%20no%20downtime) states the following:
Amazon S3 storage classes replicate their data on more than three
Availability Zones (except for S3 One Zone-Infrequent Access).
What's the point of this article https://aws.amazon.com/blogs/startups/large-scale-disaster-recovery-using-aws-regions/ stating:
S3 snapshots: We rely on the cross s3 sync and this works like a
charm. We are able to copy the data from our primary to the DR region
within a matter of few minutes.
The latter seems superfluous now, and it is from 2017, so maybe it is outdated? Or is the thrust that we should also be placing Amazon S3 copies across Regions? I see no such need, as the AZs within a Region are physically separated from each other. What am I missing?

S3 buckets are region-specific. When you create a new bucket, you need to select the target region for that bucket.
For DR reasons, you can keep backups in another region. Should the primary region fail in a way that affects the entire region, you could then restore in the backup region.
Your DR strategy will depend on your use case and on how quickly you need to return services to normal in case of a region-wide failure.
For example, let's say you rely on EC2/EBS to operate your service and those services suffer a region-wide outage for 5 hours. In order to recover your service, you would need to move to a region where the resources are available. Assuming you need S3 data for operational processing, you would want to have that data ready in the target recovery region.
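If you need that S3 data already sitting in the recovery region, S3 Cross-Region Replication is one way to keep it there. A minimal boto3 sketch, assuming hypothetical bucket names, regions, and an existing replication IAM role (all placeholders):
import boto3
primary = boto3.client("s3", region_name="us-east-1")  # assumed primary region
dr = boto3.client("s3", region_name="us-west-2")        # assumed DR region
# Versioning must be enabled on both buckets before replication can be configured.
primary.put_bucket_versioning(Bucket="my-primary-bucket", VersioningConfiguration={"Status": "Enabled"})
dr.put_bucket_versioning(Bucket="my-dr-bucket", VersioningConfiguration={"Status": "Enabled"})
# Replicate every new object written to the primary bucket into the DR bucket.
primary.put_bucket_replication(
    Bucket="my-primary-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder role ARN
        "Rules": [{
            "ID": "dr-copy",
            "Status": "Enabled",
            "Prefix": "",  # replicate the whole bucket
            "Destination": {"Bucket": "arn:aws:s3:::my-dr-bucket"},
        }],
    },
)
Note that replication only applies to objects written after the rule is in place; pre-existing objects need a one-off copy (see the sync example further down).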

Storing data in multiple AZs in a region does not guarantee safety if the entire region fails. This applies to all regional services. The article you shared indeed mentions this, so it is not irrelevant.
The service that runs in HA is handled by hosts running in different
availability zones but in the same geographical region. This approach,
however, does not guarantee that our business will be up and running
in case the entire region goes down

Related

Amazon QLDB Multi-Region Architecture

I am new to QLDB and seem to be finding slightly conflicting info on multi-Region architecture. I see that it has high availability in a given Region; however, it is unclear what happens when an entire Region goes down, or how to use it in a hot-hot multi-Region application.
Let's assume that an application is in US-East-2 and US-West-2 with latency routing rules. Each of these needs to write to and read from the same ledger. Is this possible, or would the ledger need to exist in a single Region, with only one Region having full access while the other has access only to a read-only copy (maybe in S3)?
As of 21 June 2021, QLDB ledgers are single-Region. Cross-Region business continuity is a need we have heard from other customers, and we take this feedback very seriously. I will come back to this answer in the future when there is an update.
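One workaround in the spirit of the question's "read-only copy (maybe in S3)" idea is to export the journal to S3 on a schedule and let S3 Cross-Region Replication copy the exports to the other Region. A rough boto3 sketch, with the ledger name, bucket, and role ARN as placeholders:
from datetime import datetime, timedelta, timezone
import boto3
qldb = boto3.client("qldb", region_name="us-east-2")  # assumed primary region
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)  # export the last hour of journal blocks
qldb.export_journal_to_s3(
    Name="my-ledger",  # placeholder ledger name
    InclusiveStartTime=start,
    ExclusiveEndTime=end,
    RoleArn="arn:aws:iam::123456789012:role/qldb-export-role",  # placeholder role ARN
    S3ExportConfiguration={
        "Bucket": "my-ledger-exports",  # placeholder bucket in the primary Region
        "Prefix": "journal/",
        "EncryptionConfiguration": {"ObjectEncryptionType": "SSE_S3"},
    },
)
The exported journal blocks are read-only history rather than a live ledger, so this only covers the read-only-copy part of the question.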

Amazon DynamoDB - geographically distributed?

I am new to AWS. Sorry if my question is basic, got stuck with this term.
The AWS Global Infrastructure page says "18 geographic Regions"; the term "geographic" is used along with Regions, which makes sense.
The third question in the DynamoDB FAQs says, "Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability."
Here, is "geographically distributed" referring to Regions or Availability Zones? I am a bit confused. If it is Regions, does it mean my data is going out of my country (if my country has only 1 Region)?
Please suggest.
"Geographically distributed" in this documentation refers to Availability Zones, not Regions. As per the AWS documentation, when you create a table in one region, it is replicated across Availability Zones in that region to ensure high availability. Any write to the table is propagated to the replicas. The AZs are interconnected with low-latency networks.
The data is stored on SSD disks and automatically replicated across
multiple Availability Zones in an AWS region, which brings the high
availability and your data is durable.
If you create a table in one region, a table with the same name can also be created in other regions; by default these are separate, independent tables.
If you want your table to be replicated in other regions, you must enable cross-region replication. For more details, refer to:
DynamoDB
All Things about DynamoDB
Almost every AWS service revolves around two notions of availability: multi-AZ (multiple data centers in a single region) and cross-region (different geographic locations across the globe), and DynamoDB is no exception. By default, DynamoDB is a multi-AZ service, which means your data is replicated across three data centers (at least two AZs), but for cross-region replication you need to enable DynamoDB global tables (which build on DynamoDB Streams).
Multi-Region Replication with DynamoDB
DynamoDB global tables are geographically distributed. They provide a fully managed solution for deploying a multi-region, multi-active database. As with every other geographically distributed database, global tables come with ReplicationLatency.
An important thing to note here is that DynamoDB does not offer cross-region strong consistency (in contrast with Cosmos DB, a similar offering from Azure).
From AWS documentation:
An application can read and write data to any replica table. If your
application only uses eventually consistent reads and only issues
reads against one AWS Region, it will work without any modification.
However, if your application requires strongly consistent reads, it
must perform all of its strongly consistent reads and writes in the
same Region. DynamoDB does not support strongly consistent reads
across Regions. Therefore, if you write to one Region and read from
another Region, the read response might include stale data that
doesn't reflect the results of recently completed writes in the other
Region.
Also, global tables are not to be confused with global indexes. Global (secondary) indexes get their name because they are used to fetch data across multiple DynamoDB partitions, not across regions.
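For illustration, a minimal boto3 sketch of both points, assuming a hypothetical orders table that already has DynamoDB Streams (NEW_AND_OLD_IMAGES) enabled, and using the current (2019.11.21) global tables flow:
import boto3
ddb_east = boto3.client("dynamodb", region_name="us-east-2")
# Add a replica in another Region, turning the table into a global table.
ddb_east.update_table(
    TableName="orders",  # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
# Strongly consistent reads must target a single Region (here, us-east-2);
# reads served from us-west-2 may briefly return stale data because of ReplicationLatency.
item = ddb_east.get_item(
    TableName="orders",
    Key={"pk": {"S": "order-123"}},  # placeholder key schema
    ConsistentRead=True,
)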
"Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability."
This specifically refers to the multi-AZ structure of DynamoDB, which helps achieve high availability for your table; e.g., if one Availability Zone is down, you will still be able to access your table.
To answer "my data is going out of my country(if my country has only
1 Region)."
Multi-region replication is not on by default; you need to use global tables and specify the regions to which you want to replicate. That means your data/table won't go to any other region until you specifically want it to.
For more on global tables, refer to https://aws.amazon.com/dynamodb/global-tables/

Amazon S3 Bucket replica location

I am confused about the Amazon S3 replica mechanism. In my understanding, by default Amazon S3 applies a 3-replica mechanism, in which there are 3 replicas of each object created in my S3 bucket, and all the replicas are stored in multiple Availability Zones within only ONE region, which I specified when creating the S3 bucket.
Is my understanding correct? If it's correct, is it possible to see where the replicas of an object are stored?
Thanks
You are pretty much correct. S3 replication works by replicating across at least 3 data centers, over at least two AZs within a single region (each availability zone can have multiple data centers).
The replication is part of S3, which is a managed service, meaning you just have to accept what they're telling you. Telling you where the replicas are wouldn't really serve any purpose, and AWS never really discloses the details of its infrastructure to anyone who doesn't need to know. Even if they told you the data was stored in Availability Zones 1 and 2, this is effectively meaningless information, as zone names are aliases, i.e. your Zone 1 probably isn't the same as my Zone 1.

What are possible ways to access Amazon S3 data if an S3 outage happens?

Can someone help me understand the S3 outage use case here?
The probability of an S3 outage is very low, but in case it happens, what are the ways we can access data that sits in S3?
I know there is one possibility, cross-region replication, but that works only for new files that I put in my S3 bucket after I enable it. What happens to old files? I know that if I upload all those historical files to the other region as well, then it works.
Then again, the same question: if both regions go down, then what?
I am sure others have thought about this. Any input on this?
From Protecting Data in Amazon S3:
Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. To help better ensure data durability, Amazon S3 PUT and PUT Object copy operations synchronously store your data across multiple facilities before returning SUCCESS. Once the objects are stored, Amazon S3 maintains their durability by quickly detecting and repairing any lost redundancy.
...
Backed with the Amazon S3 Service Level Agreement
Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year
Designed to sustain the concurrent loss of data in two facilities
So, if you're still not happy with all those statements, how can you access your data in an outage?
If your data is in only one region, and the region is not accessible, then your data is not accessible. Note, however, that an external network connectivity problem could prevent access to Amazon S3, yet Amazon S3 might still be accessible from Amazon EC2 instances in the same region.
Cross-region replication will copy your data to another Amazon S3 region. It requires versioning to be activated. To copy any files that existed prior to activating cross-region replication, use the sync command in the AWS Command-Line Interface (CLI), e.g.:
aws s3 sync s3://bucket1/folder s3://bucket2/folder
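If you prefer the SDK, a rough boto3 sketch of the same backfill (bucket names and regions are placeholders; unlike sync, this simple version copies everything rather than only missing or changed objects, and CopyObject tops out at 5 GB per object):
import boto3
src = boto3.client("s3", region_name="us-east-1")  # assumed source region
dst = boto3.client("s3", region_name="us-west-2")  # assumed destination region
paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="bucket1", Prefix="folder/"):
    for obj in page.get("Contents", []):
        # Server-side copy: S3 moves the bytes without routing them through the client.
        dst.copy_object(
            Bucket="bucket2",
            Key=obj["Key"],
            CopySource={"Bucket": "bucket1", "Key": obj["Key"]},
        )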
Each AWS region operates independently, so outages affecting multiple regions at once are presumably even less likely.
If you are feeling particularly paranoid, you could copy your data to another cloud provider (Azure, Google, Rackspace, etc). There are tools that can assist:
CloudBerry Cloud Migrator
AzureCopy
...and no doubt many more!

What is maximum Amazon S3 replication time on file upload?

Background
We use Amazon S3 in our project as a storage for files uploaded by clients.
For technical reasons, we upload a file to S3 with a temporary name, then process its contents and rename the file after it has been processed.
Problem
The 'rename' operation fails from time to time with a 404 (key not found) error, although the file being renamed was uploaded successfully.
Amazon docs mention this problem:
Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data centers.
If a PUT request is successful, your data is safely stored. However, information about the changes must replicate across Amazon S3, which can take some time, and so you might observe the following behaviors:
We implemented a kind of polling as workaround: retry the 'rename' operation until it succeeds.
The polling stops after 20 seconds.
This workaround works in most cases: the file gets replicated within a few seconds.
But sometimes — very rarely — 20 seconds are not enough; the replication in S3 takes more time.
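For reference, a minimal sketch of that polling workaround, assuming boto3; since S3 has no native rename, the 'rename' here is a copy followed by a delete, and the names are placeholders:
import time
import boto3
from botocore.exceptions import ClientError
s3 = boto3.client("s3")
def rename_with_retry(bucket, tmp_key, final_key, timeout=20, interval=1):
    deadline = time.time() + timeout
    while True:
        try:
            s3.head_object(Bucket=bucket, Key=tmp_key)  # is the new object visible yet?
            break
        except ClientError as err:
            if err.response["Error"]["Code"] != "404" or time.time() >= deadline:
                raise
            time.sleep(interval)
    s3.copy_object(Bucket=bucket, Key=final_key,
                   CopySource={"Bucket": bucket, "Key": tmp_key})
    s3.delete_object(Bucket=bucket, Key=tmp_key)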
Questions
What is the maximum time you observed between a successful PUT operation and complete replication on Amazon S3?
Does Amazon S3 offer a way to 'bypass' replication? (Query 'master' directly?)
Update: this answer uses some older terminology, which I have left in place, for the most part. AWS has changed the friendly name of "US-Standard" to be more consistent with the naming of other regions, but its regional endpoint for IPv4 still has the unusual name s3-external-1.amazonaws.com.
The us-east-1 region of S3 has an IPv4/IPv6 "dual stack" endpoint that follows the standard convention of s3.dualstack.us-east-1.amazonaws.com and if you are IPv6 enabled, this endpoint seems operationally-equivalent to s3-external-1 as discussed below.
The documented references to geographic routing of requests for this region seem to have largely disappeared, without much comment, but anecdotal evidence suggests that the following information is still relevant to that region.
Q. Wasn’t there a US Standard region?
We renamed the US Standard Region to US East (Northern Virginia) Region to be consistent with AWS regional naming conventions.
— https://aws.amazon.com/s3/faqs/#regions
Buckets using the S3 Transfer Acceleration feature use a global-style endpoint of ${bucketname}.s3-accelerate.amazonaws.com and it is not yet evident how this endpoint behaves with regard to us-east-1 buckets and eventual consistency, though it stands to reason that other regions should not be affected by this feature, if enabled. This feature improves transfer throughput for users who are more distant from the bucket by routing requests to the same S3 endpoints but proxying through the AWS "Edge Network," the same system that powers CloudFront. It is, essentially, a self-configuring path through CloudFront but without caching enabled. The acceleration comes from optimized network stacks and keeping the traffic on the managed AWS network for much of its path across the Internet. As such, this feature should have no impact on consistency, if you enable and use it on a bucket... but, as I mentioned, how it interacts with us-east-1 buckets is not yet known.
The US-Standard (us-east-1) region is the oldest, and presumably largest, region of S3, and does play by some different rules than the other, newer regions.
An important and relevant difference is the consistency model.
Amazon S3 buckets in [all regions except US Standard] provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES. Amazon S3 buckets in the US Standard region provide eventual consistency.
http://aws.amazon.com/s3/faqs/
This is why I assumed you were using US Standard. The behavior you described is consistent with that design constraint.
You should be able to verify that this doesn't happen with a test bucket in another region... but, because data transfer from EC2 to S3 within the same region is free and very low latency, using a bucket in a different region may not be practical.
There is another option that is worth trying; it has to do with the inner workings of US-Standard.
US Standard is in fact geographically-distributed between Virginia and Oregon, and requests to "s3.amazonaws.com" are selectively routed via DNS to one location or another. This routing is largely a black box, but Amazon has exposed a workaround.
You can force your requests to be routed only to Northern Virginia by changing your endpoint from "s3.amazonaws.com" to "s3-external-1.amazonaws.com" ...
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
... this is speculation on my part, but your issue may be exacerbated by geographic routing of your requests, and forcing them to "s3-external-1" (which, to be clear, is still US-Standard), might improve or eliminate your issue.
Update: The advice above has officially risen above speculation, but I'll leave it for historical reference. About a year after I wrote the above, Amazon indeed announced that US-Standard does offer read-after-write consistency on new object creation, but only when the s3-external-1 endpoint is used. They explain it as though it's a new behavior, and that may be the case... but it also may simply be a change in the behavior the platform officially supports. Either way:
Starting [2015-06-19], the US Standard Region now supports read-after-write consistency for new objects added to Amazon S3 using the Northern Virginia endpoint (s3-external-1.amazonaws.com). With this change, all Amazon S3 Regions now support read-after-write consistency. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Prior to this change, Amazon S3 buckets in the US Standard Region provided eventual consistency for newly created objects, which meant that some small set of objects might not have been available to read immediately after new object upload. These occasional delays could complicate data processing workflows where applications need to read objects immediately after creating the objects. Please note that in US Standard Region, this consistency change applies to the Northern Virginia endpoint (s3-external-1.amazonaws.com). Customers using the global endpoint (s3.amazonaws.com) should switch to using the Northern Virginia endpoint (s3-external-1.amazonaws.com) in order to leverage the benefits of this read-after-write consistency in the US Standard Region. [emphasis added]
https://forums.aws.amazon.com/ann.jspa?annID=3112
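With the SDK, pointing requests at that Northern Virginia endpoint instead of the global endpoint looks roughly like this (assuming boto3; bucket and key are placeholders):
import boto3
s3 = boto3.client("s3", endpoint_url="https://s3-external-1.amazonaws.com")
s3.put_object(Bucket="mybucket", Key="example-key", Body=b"hello")
obj = s3.get_object(Bucket="mybucket", Key="example-key")  # read-after-write via this endpoint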
If you are uploading a large number of files (hundreds per second), you might also be overwhelming S3's sharding mechanism. For very high numbers of uploads per second, it's important that your keys ("filenames") not be lexically sequential.
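One common way to avoid lexically sequential keys is to prepend a short hash of the name so uploads spread across the key space; the exact scheme below is only an illustration, not something S3 prescribes:
import hashlib
def spread_key(filename):
    # "upload-000123.dat" becomes something like "8f3a/upload-000123.dat"
    prefix = hashlib.md5(filename.encode("utf-8")).hexdigest()[:4]
    return prefix + "/" + filename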
Depending on how Amazon handles DNS, you may also want to try another alternate variant of addressing your bucket if your code can handle it.
Buckets in US-Standard can be addressed either with http://mybucket.s3.amazonaws.com/key ... or http://s3.amazonaws.com/mybucket/key ... and the internal implementations of these two could, at least in theory, differ in a way that is relevant to your issue.
As you noted, there is currently no guarantee of, or workaround for, eventual consistency directly from S3. In this talk from Netflix, the speaker mentions having seen a 7-hour (extremely rare, IMHO) consistency delay. They even created a consistency layer on top of S3, s3mper, which is open source and might help in your context.
Other than that, as @Michael - sqlbot suggested, US-Standard does not offer read-after-write consistency, and the observed consistency delays may be different there.