The application is using carrierwave in conjunction with the carrierwave-aws gem. It hit snags when migrating the Rails version (bumped up to 4.2) and the Ruby version (2.2.3) and redeploying to the same staging server.
The AWS bucket was initially created in the free tier, thus in Oregon, us-west-2. However, I have found that all the S3 files have a property which links to eu-west-1. Admittedly I've been tinkering around and considered using the eu-west-1 region, but I do not recall making any config changes - I'm not even sure that's allowed in the free tier...
So yes, I've had to configure my uploads initializer with:
config.asset_host = 'https://s3-eu-west-1.amazonaws.com/theapp'
config.aws_credentials = {
  region: 'eu-west-1'
}
Now the AWS console is accessible with a URL that includes region=us-west-2.
I do not understand how this came to be and am looking for suggestions.
In spite of appearances, an AWS account doesn't have a "home" (native) region.
The console defaults to us-west-2 (Oregon). Conventional wisdom suggests that this is a region where AWS has the most available/spare resources, lower operational costs, lower pricing for customers, and the fewest constraints for growth, so that when a user does not have enough information at hand to actively select a region where they deploy services, Oregon is used by default.
But for each account, no particular region has any special standing. If you switch regions in the console, the console will tend to open up to the same region next time.
Most AWS services -- EC2, SQS, SNS, and RDS, to name a few -- are strictly regional: the regions are independent and not connected together¹, in the interest of reliability and survivability. When you're in a given region in the console, you can only see EC2 resources in that region, SQS queues in that region, SNS topics in that region, etc. To see your resources in other regions, you switch regions in the console.
When making API requests to these services, you're using an endpoint in the region and your credentials also include the region.
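For example, with the AWS SDK for Java (the SDK that comes up later in this thread), each client is built for one region and only talks to that region's endpoint. A minimal sketch, using queue listing purely as an illustration:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class RegionalClients {
    public static void main(String[] args) {
        // A client bound to us-west-2 uses the us-west-2 endpoint and
        // only sees queues created in us-west-2.
        AmazonSQS sqsOregon = AmazonSQSClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .build();
        sqsOregon.listQueues().getQueueUrls().forEach(System.out::println);

        // Resources in another region need a second client built for that region.
        AmazonSQS sqsIreland = AmazonSQSClientBuilder.standard()
                .withRegion(Regions.EU_WEST_1)
                .build();
        sqsIreland.listQueues().getQueueUrls().forEach(System.out::println);
    }
}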
Other services are global, with centralized administration -- examples here are CloudFront, IAM, and Route 53 hosted zones. When you make requests to these services, you always use the region "us-east-1" because that's the home location of those services' central, global administration. These tend to be services designed to tolerate a partitioning event (where one part of the global network is isolated from another): administrative changes are replicated out around the world, but once the provisioning is replicated, the regional installations can operate autonomously without major service impacts. When you select these services in the console, you'll note that the region changes to "Global."
S3 is a hybrid that is different from essentially all of the others. When you select S3 in the console, you'll notice that the console region also changes to show "Global" and you can see all of your buckets, like other global services. S3 has independently operating regions, but a global namespace. The regions are logically connected and can communicate administrative messages among themselves and can transfer data across regions (but only when you do this deliberately -- otherwise, data remains in the region where you stored it).
Unlike the other global services, S3 does not have a single global endpoint that can handle every possible request.
Each time you create a bucket, you choose the region where you want the bucket to live. Subsequent requests related to that bucket have to be submitted to the bucket's region, and must be signed with authorization credentials for the correct region.
If you submit a request to another S3 region's endpoint for that bucket, you'll receive an error telling you the correct region for the bucket.
< HTTP/1.1 301 Moved Permanently
< x-amz-bucket-region: us-east-1
< Server: AmazonS3
<Error>
<Code>PermanentRedirect</Code>
<Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
<Bucket>...</Bucket>
<Endpoint>s3.amazonaws.com</Endpoint>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
Conversely, if you send an S3 request to the correct endpoint but using the wrong region in your authentication credentials, you'll receive a different error for similar reasons:
< HTTP/1.1 400 Bad Request
< x-amz-bucket-region: us-west-2
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>Error parsing the X-Amz-Credential parameter; the region 'eu-west-1' is wrong; expecting 'us-west-2'</Message>
<Region>us-west-2</Region>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
Again, this region is the region where you created the bucket, or the default "US Standard" (us-east-1). Once a bucket has been created, it can't be moved to a different region. The only way to "move" a bucket to a different region while keeping the same name is to remove all the files from the bucket, delete the bucket (you can't delete a non-empty bucket), wait a few minutes, and create the bucket again in the new region. During the few minutes that S3 requires before the name becomes globally available again after deleting a bucket, it's always possible that somebody else could take the bucket name for themselves... so choose your bucket region carefully.
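If you're not sure which region a bucket actually lives in (as in the question at the top of this thread), you can ask S3 directly with GetBucketLocation rather than waiting for a redirect error. A minimal sketch with the AWS SDK for Java v1; the bucket name is the one from the question, and it assumes the client you build is allowed to make this call (if not, the error responses above already carry the correct region in the x-amz-bucket-region header):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class WhereIsMyBucket {
    public static void main(String[] args) {
        // The client region here is just a starting point;
        // GetBucketLocation reports where the bucket was actually created.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        // Returns the location constraint, e.g. "eu-west-1",
        // or "US" for buckets created in us-east-1 ("US Standard").
        String location = s3.getBucketLocation("theapp");
        System.out.println("Bucket region: " + location);
    }
}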
S3 API interactions were edited and reformatted for clarity; some unrelated headers and other content were removed.
¹ not connected together seems (at first glance) to be contradicted in the sense that -- for example -- you can subscribe an SQS queue in one region to an SNS topic in another, you can replicate RDS from one region to another, and you can transfer EBS snapshots and AMIs from one region to another... but these back-channels are not under discussion here. The control planes of the services in each region are isolated and independent. A problem with the RDS infrastructure in one region might disrupt replication to RDS in another region, but would not impact RDS operations in the other region. An SNS outage in one region would not impact SNS in another. The systems and services sometimes have cross-region communication capability for handling customer-requested services, but each region's core operations for these regional services are independent.
Related
This https://aws.amazon.com/blogs/storage/architecting-for-high-availability-on-amazon-s3/#:~:text=Amazon%20S3%20maintains%20redundancy%20even%20within%20one%20of,can%20still%20access%20their%20data%20with%20no%20downtime states the following:
Amazon S3 storage classes replicate their data on more than three Availability Zone (except for S3 One Zone-Infrequent Access).
What's the point of this article https://aws.amazon.com/blogs/startups/large-scale-disaster-recovery-using-aws-regions/ stating:
S3 snapshots: We rely on the cross s3 sync and this works like a charm. We are able to copy the data from our primary to the DR region within a matter of few minutes.
The latter seems superfluous now and is from 2017, so maybe it is outdated? Or is the thrust that we should also be placing Amazon S3 copies across Regions? I see no such need, as the AZs within a Region are physically separated from each other. What am I missing?
S3 buckets are region specific. When you create a new bucket you need to select the target region for that bucket.
For DR reasons, you can keep backups in another region. Should the primary region fail in a way that the entire region is affected, then you could restore in the backup region.
Your DR strategy will depend on your use case, and your needs for returning services back to normal in case of region wide failure.
For example, let's say you rely on EC2/EBS to operate your service and those services suffer a region-wide outage for 5 hours. In order to recover your service you would need to move to a region where the resources are available. Assuming you need S3 data for operational processing, you would want to have that data ready in the target recovery region.
Storing data in multiple AZs in a region does not guarantee safety in case of an entire-region failure. This applies to all regional services. The article you shared indeed mentions this, so it is not irrelevant.
The service that runs in HA is handled by hosts running in different availability zones but in the same geographical region. This approach, however, does not guarantee that our business will be up and running in case the entire region goes down
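As this answer suggests, one way to keep those backups in another region is S3 cross-region replication. A minimal sketch with the AWS SDK for Java v1; bucket names and the IAM role ARN are placeholders, and it assumes versioning is already enabled on both buckets (a prerequisite for replication) and that the destination bucket already exists in the DR region:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketReplicationConfiguration;
import com.amazonaws.services.s3.model.ReplicationDestinationConfig;
import com.amazonaws.services.s3.model.ReplicationRule;
import com.amazonaws.services.s3.model.ReplicationRuleStatus;

public class EnableCrossRegionReplication {
    public static void main(String[] args) {
        // Client for the source bucket's region (illustrative names throughout).
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        ReplicationRule rule = new ReplicationRule()
                .withStatus(ReplicationRuleStatus.Enabled)
                .withPrefix("") // replicate everything in the bucket
                .withDestinationConfig(new ReplicationDestinationConfig()
                        .withBucketARN("arn:aws:s3:::my-dr-bucket-us-west-2"));

        BucketReplicationConfiguration config = new BucketReplicationConfiguration()
                .withRoleARN("arn:aws:iam::123456789012:role/s3-replication-role")
                .addRule("replicate-to-dr-region", rule);

        // Attach the replication configuration to the primary bucket.
        s3.setBucketReplicationConfiguration("my-primary-bucket", config);
    }
}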
What parameter does withRegion(Regions) of AmazonS3ClientBuilder take? The AWS documentation says "It sets the region to be used by the client."
Is it the region where our application is running, so that there would be minimum latency because it reads from an S3 bucket in the same region where the calling client is deployed?
Or is it the Region where the S3 bucket is present?
Sample line of code:
AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1).build();
Please don't do any guesswork. A URL (preferably docs.aws.amazon.com) to support your explanation will be highly appreciated.
https://docs.aws.amazon.com/general/latest/gr/rande.html
Some services, such as IAM, do not support regions; therefore, their endpoints do not include a region. Some services, such as Amazon EC2, let you specify an endpoint that does not include a specific region, for example, https://ec2.amazonaws.com. In that case, AWS routes the endpoint to us-east-1.
If a service supports regions, the resources in each region are independent. For example, if you create an Amazon EC2 instance or an Amazon SQS queue in one region, the instance or queue is independent from instances or queues in another region.
In this case, S3 buckets can be created in specific regions and there are multiple REST endpoints you can access. For S3, you must connect to the same region as the bucket (except for calls such as ListAllMyBuckets, which are region agnostic); for some other services you do not. So the region you pass to withRegion should be the region where the S3 bucket is present.
https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
As you point out, the Javadoc for AmazonS3ClientBuilder is incredibly vague, because it inherits the withRegion documentation from AwsClientBuilder, which is inherited by builders for services that support regions and those that do not.
To further add to the confusion, particularly when reading older advice scattered around the internet, it was possible in the past to access a bucket in any region from the same client with the S3 Java API (these calls may be slower). It is possible to revert to this behaviour with withForceGlobalBucketAccessEnabled:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#withForceGlobalBucketAccessEnabled-java.lang.Boolean-
Configure whether global bucket access is enabled for clients generated by this builder.
When global bucket access is enabled, the region to which a request is routed may differ from the region that is configured in AwsClientBuilder.setRegion(String) in order to make the request succeed.
The following behavior is currently used when this mode is enabled:
All requests that do not act on an existing bucket (for example, AmazonS3Client.createBucket(String)) will be routed to the region configured by AwsClientBuilder.setRegion(String), unless the region is manually overridden with CreateBucketRequest.setRegion(String), in which case the request will be routed to the region configured in the request.
The first time a request is made that references an existing bucket (for example, AmazonS3Client.putObject(PutObjectRequest)) a request will be made to the region configured by AwsClientBuilder.setRegion(String) to determine the region in which the bucket was created. This location may be cached in the client for subsequent requests acting on that same bucket.
Enabling this mode has several drawbacks, because it has the potential to increase latency in the event that the location of the bucket is physically far from the location from which the request was invoked. For this reason, it is strongly advised when possible to know the location of your buckets and create a region-specific client to access that bucket.
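Putting it together for the question's sample code: the region passed to withRegion should normally be the region where the bucket was created. A minimal sketch contrasting the region-specific client with the global-bucket-access fallback; the bucket names are illustrative:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class RegionSpecificVsGlobal {
    public static void main(String[] args) {
        // Preferred: a client pinned to the bucket's own region.
        AmazonS3 s3UsEast1 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();
        s3UsEast1.getObject("bucket-created-in-us-east-1", "some/key");

        // Fallback: let the SDK discover the bucket's region and re-route,
        // at the cost of an extra lookup and possibly higher latency.
        AmazonS3 s3Global = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .withForceGlobalBucketAccessEnabled(true)
                .build();
        s3Global.getObject("bucket-created-in-eu-west-1", "some/key");
    }
}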
Using S3 cross-region replication, if a user downloads http://mybucket.s3.amazonaws.com/myobject, will it automatically download from the closest region, like CloudFront? So there would be no need to specify the region in the URL, like http://mybucket.s3-[region].amazonaws.com/myobject?
http://aws.amazon.com/about-aws/whats-new/2015/03/amazon-s3-introduces-cross-region-replication/
Bucket names are global, and cross-region replication involves copying your objects to a different bucket.
In other words, an 'example' bucket in us-west-1 plus an 'example' bucket in us-east-1 is not valid, as there can only be one bucket named 'example'.
That's implied in the announcement post -- Mr. Barr is using buckets named jbarr and jbarr-replication.
Using S3 cross-Region replication will put your object into two (or more) buckets in two different Regions.
If you want a single access point that will choose the closest available bucket then you want to use Multi-Region Access Points (MRAP)
MRAP makes use of Global Accelerator and puts bucket requests onto the AWS backbone at the closest edge location, which provides a faster, more reliable connection to the actual bucket. Global Accelerator also chooses the closest available bucket. If a bucket is not available, it will serve the request from the other bucket, providing automatic failover.
You can also configure it in an active/passive configuration, always serving from one bucket until you initiate a failover
The MRAP page in the AWS console even shows you a graphical representation of your replication rules.
S3 is a global service in the sense that there is no need to specify the region when you access a bucket; the S3 bucket name has to be unique globally.
When you create an S3 bucket you need to specify the region, but that doesn't mean you need to put the region name in the URL when you access it. To speed up access from other regions, there are several options, like
-- Amazon S3 Transfer Acceleration with the same bucket name (see the sketch after this list).
-- Or set up another bucket with a different name in a different region, enable cross-region replication, and create an origin group with two origins for CloudFront.
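A minimal sketch of the Transfer Acceleration option with the AWS SDK for Java v1: acceleration is enabled once on the bucket, then clients that want the edge-routed endpoint opt in. The bucket name and region are illustrative:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;

public class TransferAccelerationExample {
    public static void main(String[] args) {
        // One-time setup: turn on Transfer Acceleration for the bucket.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();
        s3.setBucketAccelerateConfiguration(new SetBucketAccelerateConfigurationRequest(
                "my-bucket",
                new BucketAccelerateConfiguration(BucketAccelerateStatus.Enabled)));

        // Clients that want the accelerated (edge-routed) endpoint opt in here.
        AmazonS3 acceleratedClient = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .withAccelerateModeEnabled(true)
                .build();
        acceleratedClient.getObject("my-bucket", "some/key");
    }
}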
I have a client who wants confirmation that his data on S3 will only be saved in the UK. Amazon S3 is a global service, and even though I create the bucket in Ireland, I guess it is replicated to other regions as well to offer 11 nines of durability. That would mean my client's data is copied outside the UK as well for replication on AWS's side.
Can anyone guide me through the solution for this, please, or correct me if I am wrong with the above stated concept?
Cheers!
S3 bucket names are globally unique but they exist wholly within one AWS region:
Objects belonging to a bucket that you create in a specific AWS region never leave that region, unless you explicitly transfer them to another region. For example, objects stored in the EU (Ireland) region never leave it.
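In other words, the durability replication happens across Availability Zones within the chosen region, not across regions. If the requirement is literally "UK only", the bucket could be created in the London region (eu-west-2) rather than Ireland. A minimal sketch with the AWS SDK for Java v1; the bucket name is illustrative:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class CreateUkBucket {
    public static void main(String[] args) {
        // eu-west-2 is the London region; objects stored here stay here
        // unless you explicitly copy them elsewhere.
        AmazonS3 s3London = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.EU_WEST_2)
                .build();
        s3London.createBucket("my-uk-only-bucket");
    }
}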
Amazon's pricing documentation says
"Data Transfer OUT From Amazon S3 To Amazon EC2 in the Northern Virginia Region (is) $0.000 per GB"
Is there anything special I need to do to ensure that I don't get charged for transfers from S3 to EC2, assuming both are in the Northern Virginia Region? I.e., is executing a GET request from this EC2 instance to https://s3.amazonaws.com/{bucketName}/{resource} going to automatically be counted as 'free' without doing anything additional? Or is there a private IP address for S3 that I need to access in this scenario?
If the request comes from within the same region, you're likely to be redirected to the right endpoint and not going to be charged. If you want to be 100% sure, however, you can choose the correct endpoint yourself:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
This may be particularly useful if you use API and tools from 3rd parties, which sometimes use varying default endpoints, unless you override them explicitly.
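A minimal sketch of pinning the endpoint explicitly with the AWS SDK for Java v1, so an EC2 instance in us-east-1 talks to the us-east-1 S3 endpoint; the bucket name is illustrative:

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class SameRegionEndpoint {
    public static void main(String[] args) {
        // Explicit regional endpoint; the request is signed for and routed to
        // us-east-1, so S3 -> EC2 transfer within the region is not billed.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://s3.us-east-1.amazonaws.com", "us-east-1"))
                .build();
        s3.getObject("my-bucket-in-us-east-1", "some/key");
    }
}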
As long as your S3 buckets are in the same region as your EC2 machines, your data transfers are free. If you always deal with the same region, then you do not have to worry about the data transfer cost.
Data Transfer OUT From Amazon S3 To Amazon EC2 in the same region $0.000 per GB