In API Gateway I've created a custom domain, foo.example.com, which creates a CloudFront distribution with that CNAME.
I also want to create a wildcard domain, *.example.com, but when attempting to create it, CloudFront throws an error:
CNAMEAlreadyExistsException: One or more of the CNAMEs you provided
are already associated with a different resource
AWS in its docs states that:
However, you can add a wildcard alternate domain name, such as
*.example.com, that includes (that overlaps with) a non-wildcard alternate domain name, such as www.example.com. Overlapping domain
names can be in the same distribution or in separate distributions as
long as both distributions were created by using the same AWS account.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html#alternate-domain-names-wildcard
So I might have misunderstood this. Is it possible to accomplish what I've described?
This is very likely a side-effect of your API Gateway endpoint being configured as Edge Optimized instead of Regional: with an edge-optimized API, a hidden CloudFront distribution is provisioned automatically... however, the CloudFront distribution associated with your API is not owned by your account, but rather by an account associated with API Gateway.
Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution that is created and managed by API Gateway.
— Amazon API Gateway Supports Regional API Endpoints
This creates a conflict that prevents the wildcard distribution from being created.
Subdomains that mask a wildcard are not allowed to cross AWS account boundaries, because that would potentially allow traffic for a wildcard distribution's matching domains to be hijacked by creating a more specific alternate domain name. But, as you noted from the documentation, you can do this within your own account.
Redeploying your API as Regional instead of Edge Optimized is the likely solution. If you still want the edge optimization behavior, you can create another CloudFront distribution with that specific subdomain for use with the API. This would be allowed, because you would own the distribution. Regional APIs are still globally accessible.
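If you do go the Regional route, the endpoint type can be switched in place rather than redeploying from scratch. Here's a rough, untested boto3 sketch (the API ID is a placeholder, and depending on where your ACM certificate lives you may also need an additional /regionalCertificateArn patch operation):

```python
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Placeholder identifiers -- substitute your own.
REST_API_ID = "a1b2c3d4e5"
DOMAIN_NAME = "foo.example.com"

# Convert the REST API endpoint from edge-optimized to regional.
apigw.update_rest_api(
    restApiId=REST_API_ID,
    patchOperations=[{
        "op": "replace",
        "path": "/endpointConfiguration/types/EDGE",
        "value": "REGIONAL",
    }],
)

# Convert the custom domain name too, so API Gateway releases the
# CloudFront distribution it manages behind the edge-optimized domain.
apigw.update_domain_name(
    domainName=DOMAIN_NAME,
    patchOperations=[{
        "op": "replace",
        "path": "/endpointConfiguration/types/EDGE",
        "value": "REGIONAL",
    }],
)

# The regional target to point DNS (or your own distribution) at afterwards:
print(apigw.get_domain_name(domainName=DOMAIN_NAME)["regionalDomainName"])
```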
Yes, it is. But keep in mind that CNAMEs set on CloudFront distributions are validated to be globally unique, including the distributions created by API Gateway. So this means that you (or some other account) already have that CNAME set up somewhere. Currently there is no way to look up where the conflict is; you may need to raise a ticket with AWS support if you can't find it yourself.
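To at least rule out your own account, you can list the alternate domain names on all of your distributions and check for the conflicting name. A quick boto3 sketch (this only covers the account your credentials belong to):

```python
import boto3

cf = boto3.client("cloudfront")
wanted = "*.example.com"  # the CNAME you are trying to add

# Walk every distribution in this account and print any that already
# claims the alternate domain name in question.
for page in cf.get_paginator("list_distributions").paginate():
    for dist in page.get("DistributionList", {}).get("Items", []):
        aliases = dist.get("Aliases", {}).get("Items", [])
        if wanted in aliases:
            print(f"{dist['Id']} ({dist['DomainName']}) already uses {wanted}")
```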
When you navigate to a file uploaded to S3, you'll see its URL in a format such as this (in this example the bucket name is example and the file is hello.txt):
https://example.s3.us-west-2.amazonaws.com/hello.txt
Notice that the region, us-west-2, is embedded in the domain.
I accidentally tried accessing the same URL without the region, and noticed that it worked too:
https://example.s3.amazonaws.com/hello.txt
It seems much simpler to use these shorter URLs rather than the longer ones as I don't need to pass around the region.
Are there any advantages/disadvantages of excluding the region from the domain? Or are the two domains the same?
This is a deprecated feature of Amazon S3 known as Global Endpoints. Some regions support the global endpoint for backward compatibility purposes. AWS recommends that you use the standard endpoint syntax in the future.
For regions that support the global endpoint, your request is redirected to the standard endpoint. By default, Amazon routes global endpoint requests to the us-east-1 region. For buckets that are in supported regions other than us-east-1, Amazon S3 updates the DNS record for future requests (note that DNS updates require 24-48 hours to propagate). Amazon then redirects the request to the correct region using an HTTP 307 Temporary Redirect.
Are there any advantages/disadvantages of excluding the region from the domain? Or are the two domains the same?
The domains are not the same.
Advantages to using the legacy global endpoint: the URL is shorter.
Disadvantages: the request must be redirected and is, therefore, less efficient. Further, if you create a bucket in a region that does not support global endpoints, AWS will return an HTTP 400 Bad Request error response.
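If you want the short code without the redirect penalty, you can look the bucket's region up once and build the regional URL yourself. A small boto3 sketch using the bucket and key from the question:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example", "hello.txt"

# get_bucket_location returns None for us-east-1 (and the legacy value
# "EU" for some very old eu-west-1 buckets).
region = s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1"

print(f"https://{bucket}.s3.{region}.amazonaws.com/{key}")   # regional endpoint
print(f"https://{bucket}.s3.amazonaws.com/{key}")            # legacy global endpoint
```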
TL;DR: It is a best practice to use the standard (regional) S3 endpoint syntax.
I am trying to implement a green/blue AWS deploy of static files backed by S3 according to this (oldish) whitepaper.
In short, the idea is to create two separate CloudFront distributions which point to two separate folders in an S3 bucket. One is "green" and one "blue". After deploying one or the other, you then switch traffic over from green to blue or vice versa using weighted routing.
That is all well and good but the problem comes with using your own domain and linking a certificate.
In order to get CloudFront to serve the S3 files properly (over HTTPS with a certificate on your own domain), you need to enter the FQDN in the "Alternate Domain Names (CNAMEs)" field when configuring the CloudFront distribution. However, you cannot use the same name in multiple CloudFront distributions.
Therefore, I would need to use a different URL per CloudFront distribution, e.g. blue.mydomain.com and green.mydomain.com.
However, if I do this, then using weighted routing with a single A record in the associated Route53 entry would not work, as the name must match the "CNAMEs" entered in the CloudFront distribution to prevent SSL errors. Am I missing something? I could add my own reverse proxy or something, but I really don't want to do that.
TL;DR it seems like this whitepaper is impossible to implement as-is?
You can use a single CloudFront distribution with two S3 buckets as website origins and switch them while deploying the application. Another option: you can modify the viewer request with a Lambda@Edge/CloudFront Function in order to redirect the request to the right origin, or implement weighted routing.
Also, I suggest considering using *.domain_name for the blue distribution and app.domain_name for the other one, with an ACM certificate for *.domain_name. This allows you to use the same FQDN as an entry point for both.
Take into account the fact that CloudFront is a highly available, global AWS service; there is no point in including it in your blue/green deployment scheme. Lambda@Edge or CloudFront Functions might be really useful for switching between origins.
There is an example.
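Here is my own rough, untested sketch of such a Lambda@Edge origin-request handler; the bucket domains and the 90/10 split are placeholders:

```python
import random

# Placeholder origin domains for the blue and green buckets.
BLUE_ORIGIN = "blue-bucket.s3.us-east-1.amazonaws.com"
GREEN_ORIGIN = "green-bucket.s3.us-east-1.amazonaws.com"
GREEN_WEIGHT = 0.1  # send roughly 10% of requests to green


def handler(event, context):
    # Lambda@Edge origin-request trigger: choose an origin per request.
    request = event["Records"][0]["cf"]["request"]
    target = GREEN_ORIGIN if random.random() < GREEN_WEIGHT else BLUE_ORIGIN

    # Repoint the S3 origin at the chosen bucket; the Host header has to
    # match the origin domain or S3 will reject the request.
    request["origin"]["s3"]["domainName"] = target
    request["headers"]["host"] = [{"key": "Host", "value": target}]
    return request
```

Note that a per-request random choice gives no session stickiness; if users must stay on one version, you'd need a cookie or header check instead.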
My setup:
API Gateway - 10 APIs (api1, api2,...), all mapped to one custom domain name (api.xxx.com)
Route53 - api.xxx.com pointed to my CloudFront distribution
CloudFront - distribution created, api.xxx.com set as a CNAME
What I need to know - I would like to set the origin of this CloudFront distribution to this custom domain name, so I can call APIs like api.xxx.com/api1/endpoint, api.xxx.com/api2/endpoint. But how? I used the API Gateway domain name of my api.xxx.com custom domain name (xxxxxxx.execute-api.us-east-1.amazonaws.com) as the origin for the default behavior and assumed that requests to all 10 APIs would be routed correctly, but it's not happening.
What works: I created an origin using the invoke URL of api1 and assigned it to the default behavior. So now, when I call "https://api.xxx.com/endpoint", api1 gets called. That makes sense, but the problem is that I need the path to the API to be part of the URL, such as "https://api.xxx.com/api1/endpoint", so I can differentiate between them.
What doesn't work: I need several APIs set in the distribution so I can call them like "https://api.xxx.com/api1/endpoint" and so on. If I use the invoke URL as the origin for an API, I cannot also attach the API name to the URL; that returns a 403. I was hoping that if I used the "API Gateway domain name" of the custom domain name (after all, it has the format xxxxx.execute-api.us-east-1.amazonaws.com), I could then use the API names in the URL, but that doesn't work. I cannot even use this "API Gateway domain name" to call individual APIs through Postman. Could someone advise me on how to do it? How can I configure CloudFront so it can call various APIs and use their routes in the URL?
Finally found a solution, described in more detail in this discussion thread. My problem was that I was trying to point CloudFront directly at the custom domain name's target (xxxxxxxxxxxx.execute-api.us-east-1.amazonaws.com), but I should have used the "nice", readable address as the origin domain name and done the mapping in Route53.
Working setup:
In API Gateway, a custom domain name regional-api.xxx.com is created with endpoint type Regional (target domain xxxxxxxxxxxx.execute-api.us-east-1.amazonaws.com).
In Route53, A and AAAA records map regional-api.xxx.com to the Regional endpoint target domain name.
A CloudFront distribution is created that uses regional-api.xxx.com as the origin domain name and api.xxx.com as a CNAME.
In Route53, A and AAAA records map api.xxx.com to the domain name of the newly created CloudFront distribution (a boto3 sketch of these Route53 records follows below).
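For reference, the two Route53 records could be created with boto3 roughly like this (a sketch only; the hosted zone ID, regional target and distribution domain are placeholders, and where my setup used alias A/AAAA records for regional-api.xxx.com this sketch uses a plain CNAME to stay short):

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "ZXXXXXXXXXXXXX"                                 # your xxx.com zone
REGIONAL_TARGET = "d-abc123.execute-api.us-east-1.amazonaws.com"  # from the custom domain
CF_DOMAIN = "d1234abcdefg.cloudfront.net"                         # the new distribution
CF_ALIAS_ZONE = "Z2FDTNDATAQYW2"                                  # fixed zone ID for CloudFront aliases

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        {   # regional-api.xxx.com -> API Gateway regional endpoint
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "regional-api.xxx.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": REGIONAL_TARGET}],
            },
        },
        {   # api.xxx.com -> CloudFront distribution (alias A record)
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.xxx.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": CF_ALIAS_ZONE,
                    "DNSName": CF_DOMAIN,
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]},
)
```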
My setup is a bit different than yours, but it seems we want to accomplish the same goal.
I have four S3 buckets which I serve through CloudFront.
One bucket is the root website; the 3 other buckets contain 3 different admin panels.
For each S3 bucket I created a separate origin; I believe you should create an origin for each separate API.
I added two path patterns for each origin group; I believe for your APIs you can have one pattern per API. A path pattern could look like /api1/*, which points to the origin of api1.
Not sure if you tried adding origins for all your APIs.
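In other words, one origin per API plus one cache behavior per path pattern. A simplified fragment of the CloudFront DistributionConfig, written as a Python dict (the API IDs are placeholders, and a real config needs many more required fields):

```python
# Routing-relevant fragment of a CloudFront DistributionConfig (sketch only).
distribution_config_fragment = {
    "Origins": {"Quantity": 2, "Items": [
        {"Id": "api1-origin",
         "DomainName": "api1id.execute-api.us-east-1.amazonaws.com",
         "OriginPath": "/prod"},   # the stage, so it isn't needed in the viewer URL
        {"Id": "api2-origin",
         "DomainName": "api2id.execute-api.us-east-1.amazonaws.com",
         "OriginPath": "/prod"},
    ]},
    "CacheBehaviors": {"Quantity": 2, "Items": [
        # https://api.xxx.com/api1/* -> api1's origin
        {"PathPattern": "/api1/*", "TargetOriginId": "api1-origin",
         "ViewerProtocolPolicy": "redirect-to-https"},
        # https://api.xxx.com/api2/* -> api2's origin
        {"PathPattern": "/api2/*", "TargetOriginId": "api2-origin",
         "ViewerProtocolPolicy": "redirect-to-https"},
    ]},
}
```

Keep in mind that CloudFront forwards the matched prefix (/api1/...) to the origin, so the API's resources (or an edge function that rewrites the path) have to account for it.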
I've set up a static site on AWS with Route 53, ACM, CloudFront and S3. Although I can prevent direct access to the bucket's generated domain name via a bucket policy, so that access is only via my custom domain, e.g. www.example.com, I'm not sure how to do this for CloudFront, and currently the website can be accessed via the CloudFront domain name, e.g. 23324sdfff.cloudfront.net.
Is there a way to prevent access to the website via the CloudFront domain name, so that traffic can only access the site directly via www.example.com?
I think you could achieve that using Lambda@Edge.
Specifically, you could create a function for the viewer-request trigger. The function would inspect the request and then decide whether to allow or deny it.
Sadly, I don't have a concrete example addressing your specific use case, but the AWS docs provide a number of examples that could be useful to you.
Maybe there is an easier way that doesn't involve Lambda, but at present I'm not aware of one.
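Still, as a rough, untested starting point for the viewer-request approach (assuming the canonical domain is www.example.com), a Lambda@Edge handler could look like this:

```python
# Reject requests that arrive via the default *.cloudfront.net hostname
# instead of the custom domain. "www.example.com" is a placeholder.
ALLOWED_HOSTS = {"www.example.com"}


def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    host = request["headers"]["host"][0]["value"].lower()

    if host not in ALLOWED_HOSTS:
        # Generate a response at the edge instead of forwarding to the origin.
        return {
            "status": "403",
            "statusDescription": "Forbidden",
            "body": "Not available via this hostname.",
        }
    return request
```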
I was wondering how GetBucketLocation works. Is there a centralized store that saves all the bucket-to-location mappings? Buckets created in Regions launched before March 20, 2019 are reachable via https://bucket.s3.amazonaws.com. So if I have a bucket and I use https://bucket.s3.amazonaws.com/xxxxx to access it, will it query the centralized mapping store for the region and then route my request to the correct region?
There's a centralized database in us-east-1 and all the other regions have replicas of it. This is used for the GET bucket location API call as well as List Buckets.
But this isn't used for request routing.
Request routing is a simple system -- the database is DNS. There's a DNS record automatically created for every single bucket -- a CNAME to an S3 endpoint in the bucket's region.
There's also a *.s3.amazonaws.com DNS wildcard that points to us-east-1... so these hostnames work immediately when the new bucket is in us-east-1. Otherwise there's a delay until the specific bucket record is created, overriding the wildcard, and requests sent to that endpoint will arrive at us-east-1, which will respond with an HTTP redirect to an appropriate regional endpoint for the bucket.
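You can observe the per-bucket record with an ordinary DNS query. A small sketch using the third-party dnspython package (pip install dnspython); the bucket name is a placeholder:

```python
import dns.resolver  # third-party: dnspython

bucket = "example"  # placeholder bucket name
host = f"{bucket}.s3.amazonaws.com"

try:
    for rdata in dns.resolver.resolve(host, "CNAME"):
        print(f"{host} is a CNAME for {rdata.target}")
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    # No bucket-specific CNAME visible; the name is answered by the
    # wildcard and the request would land in us-east-1 first.
    print(f"no CNAME record returned for {host}")
```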
Why they might have stopped doing this for new regions is presumably related to scaling considerations, and the fact that it's no longer as useful as it once was. The ${bucket}.s3.amazonaws.com URL style became largely irrelevant when mandatory Signature Version 4 authentication became the rule for regions launched in 2014 and later, because you can't generate a valid Sig V4 URL without knowing the target region of the request. Signature V2 signing didn't require the region to be known to the code generating a signature.
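To make that concrete: a Sig V4 presigned URL bakes the region into its credential scope, so the signing code has to know the bucket's region up front. A minimal boto3 sketch with placeholder names:

```python
import boto3
from botocore.config import Config

# The client must be configured for the bucket's actual region.
s3 = boto3.client("s3", region_name="us-west-2",
                  config=Config(signature_version="s3v4"))

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example", "Key": "hello.txt"},  # placeholders
    ExpiresIn=300,
)

# The X-Amz-Credential query parameter ends in ".../us-west-2/s3/aws4_request".
print(url)
```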
S3 also didn't historically have consistent hostnames for regional endpoints. For example, in us-west-2, the regional endpoints used to be ${bucket}.s3-us-west-2.amazonaws.com but in us-east-2, the regional endpoints have always been ${bucket}.s3.us-east-2.amazonaws.com... did you spot the difference? After s3 there was a - rather than a ., so constructing a regional URL also required knowledge of the random rules for different regions. Even more random was that region-specific endpoints for us-east-1 were actually ${bucket}.s3-external-1.amazonaws.com unless, of course, you had a reason to use ${bucket}.s3-external-2.amazonaws.com. (There was a legacy reason for this -- it made sense at the time, but it was a long time ago.)
To their credit, they fixed this so that all regions now support ${bucket}.s3.${region}.amazonaws.com, and (also to their credit) the old URLs still work in older regions, even though standardization is now in place.