Making cross-domain requests from an almost-static site - amazon-web-services

I'm using Vue.js and almost everything I do is client-side, but for one thing I need to call the server side: checking whether a URL exists or not.
I don't want to make these requests from the browser, because fetching a different website directly from my scripts doesn't make sense; it would be like calling some arbitrary (possibly bad) website in the background without the user knowing. So I need to call a cloud function (Google Cloud) or an AWS Lambda, since I don't want to host the site on a server just for this one API call.
What would be the best way to accomplish this? I'm looking for something like: the website is at www.webapp.com and the cloud function is called at www.webapp.com/checkUrl.

If you choose the AWS platform, you can use S3, CloudFront, Route53, API Gateway and Lambda to accomplish your goal.
Step 01
Create an S3 bucket and upload your Vue.js frontend code
Enable static website hosting on the bucket from the S3 properties
Create a CloudFront distribution
Create a CloudFront origin pointing to your S3 bucket URL (you have to use the static website URL of the S3 bucket)
Set the default behaviour to point to the S3 origin ID
Step 02
Create your Lambda function (a minimal handler sketch follows this list)
Create an API Gateway
Add a new resource (GET/POST) pointing to your Lambda
Deploy your API
Go back to the CloudFront distribution and add an origin pointing to your API Gateway
In the Behaviours tab, create a new behaviour (e.g. /checkUrl) and point it to the origin ID of the API Gateway
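A minimal sketch of the Lambda handler behind /checkUrl, assuming a Node.js 18+ runtime (where the global fetch API is available); the query parameter name and response shape are illustrative choices, not something the steps above prescribe:
```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const url = event.queryStringParameters?.url;
  if (!url) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: "url query parameter is required" }),
    };
  }

  try {
    // A HEAD request is usually enough to tell whether the URL resolves and responds.
    const response = await fetch(url, { method: "HEAD", redirect: "follow" });
    return {
      statusCode: 200,
      body: JSON.stringify({ exists: response.ok, status: response.status }),
    };
  } catch {
    // DNS failure, refused connection, timeout, etc.
    return {
      statusCode: 200,
      body: JSON.stringify({ exists: false }),
    };
  }
};
```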
Step 03
Go to Route53 and create a new hosted zone
Set the NS records of the hosted zone in your domain configuration
Create a new record set (e.g. www.webapp.com) and point it to the DNS name of your CloudFront distribution
Update your CloudFront distribution's Alternate Domain Name to www.webapp.com
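With the /checkUrl behaviour in place, the Vue app can call the function on its own domain, so no cross-origin setup is needed. A small client-side sketch, reusing the query parameter and response shape assumed in the handler above:
```typescript
// Called from the Vue app. /checkUrl is served through the CloudFront behaviour
// created in Step 02, so the request stays on www.webapp.com.
export async function checkUrl(target: string): Promise<boolean> {
  const response = await fetch(`/checkUrl?url=${encodeURIComponent(target)}`);
  if (!response.ok) {
    throw new Error(`checkUrl failed with status ${response.status}`);
  }
  const data: { exists: boolean } = await response.json();
  return data.exists;
}
```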

Related

AWS: How to configure CloudFront for custom domain names

My setup:
API Gateway - 10 APIs (api1, api2,...), all mapped to one custom domain name (api.xxx.com)
Route53 - api.xxx.com pointed to my CloudFront distribution
CloudFront - distribution created, api.xxx.com set as a CNAME
What I need to know: I would like to set the origin of this CloudFront distribution to this custom domain name, so I can call APIs like api.xxx.com/api1/endpoint, api.xxx.com/api2/endpoint. But how? I used the API Gateway domain name of my api.xxx.com custom domain name (xxxxxxx.execute-api.us-east-1.amazonaws.com) as the origin for the default behavior and assumed that requests to all 10 APIs would be routed correctly, but that's not happening.
What works: I created an origin using the invoke URL of api1 and assigned it to the default behavior. So now, when I call "https://api.xxx.com/endpoint", api1 gets called. That makes sense, but the problem is that I need the path to the API to be part of the URL, such as "https://api.xxx.com/api1/endpoint", so I can differentiate between them.
What doesn't work: I need several APIs set up in the distribution so I can call them like "https://api.xxx.com/api1/endpoint" and so on. If I use an invoke URL as the origin for an API, I cannot also attach the API name to the URL; that returns 403. I was hoping that if I used the "API Gateway domain name" of the "Custom Domain Names" entry (after all, it has the format xxxxx.execute-api.us-east-1.amazonaws.com), I could then use the API names in the URL, but that doesn't work. I cannot even use this "API Gateway domain name" to call individual APIs through Postman. Could someone advise me on how to do this? How can I configure CloudFront so it can call various APIs and use their routes in the URL?
Finally found a solution, described in more detail in this discussion thread. My problem was that I was trying to use the link to the custom domain name (xxxxxxxxxxxx.execute-api.us-east-1.amazonaws.com) directly from CloudFront, but I should have used the "nice", readable address as the origin and done the redirect in Route53.
Working setup:
In API Gateway, Custom Domain Name regional-api.xxx.com is created, endpoint type Regional (xxxxxxxxxxxx.execute-api.us-east-1.amazonaws.com).
In Route53, A and AAAA records map regional-api.xxx.com to the Regional endpoint target domain name.
A CloudFront distribution is created that uses regional-api.xxx.com as the Origin Domain Name and api.xxx.com as a CNAME.
In Route53, A and AAAA records map api.xxx.com to the Domain name of a newly created CF distribution.
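For reference, a rough CDK sketch of that chain (the certificate ARN, hosted zone ID and domain names are placeholders, and the regional-api.xxx.com alias to the API Gateway regional endpoint is assumed to exist already):
```typescript
import * as cdk from "aws-cdk-lib";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as acm from "aws-cdk-lib/aws-certificatemanager";
import * as route53 from "aws-cdk-lib/aws-route53";
import * as targets from "aws-cdk-lib/aws-route53-targets";

class ApiEdgeStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // CloudFront certificates must live in us-east-1.
    const certificate = acm.Certificate.fromCertificateArn(
      this, "Cert", "arn:aws:acm:us-east-1:111111111111:certificate/placeholder");

    // CloudFront talks to the readable custom domain, not to the
    // xxxxxxxxxxxx.execute-api.us-east-1.amazonaws.com endpoint directly.
    const distribution = new cloudfront.Distribution(this, "ApiDistribution", {
      domainNames: ["api.xxx.com"],
      certificate,
      defaultBehavior: {
        origin: new origins.HttpOrigin("regional-api.xxx.com"),
        allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
        cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
      },
    });

    // api.xxx.com -> CloudFront.
    const zone = route53.HostedZone.fromHostedZoneAttributes(this, "Zone", {
      hostedZoneId: "ZPLACEHOLDER",
      zoneName: "xxx.com",
    });
    new route53.ARecord(this, "ApiAlias", {
      zone,
      recordName: "api",
      target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
    });
  }
}

new ApiEdgeStack(new cdk.App(), "ApiEdgeStack");
```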
My setup is a bit different than yours, but it seems we want to accomplish the same goal.
I have four S3 buckets which I serve through CloudFront.
One bucket is the root website; the three other buckets contain three different admin panels.
For each S3 bucket I created a separate origin; I believe you should create an origin for each separate API.
For each origin group I added two path patterns; I believe for your APIs you can have one pattern per API. A path pattern could look like /api1/*, which points to the origin of api1.
Not sure if you tried adding origins for all your APIs.
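In CDK terms, that layout could be sketched roughly like this (API IDs, the stage name and the path prefixes are placeholders):
```typescript
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";

// One origin per API; originPath carries the stage segment of each invoke URL.
const api1Origin = new origins.HttpOrigin("aaaa1111.execute-api.us-east-1.amazonaws.com", {
  originPath: "/prod",
});
const api2Origin = new origins.HttpOrigin("bbbb2222.execute-api.us-east-1.amazonaws.com", {
  originPath: "/prod",
});

// These props would be passed to `new cloudfront.Distribution(...)` inside a stack.
const multiApiProps: cloudfront.DistributionProps = {
  defaultBehavior: {
    origin: api1Origin,
    allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
    cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
  },
  additionalBehaviors: {
    // Note: CloudFront forwards the matched path as-is, so each API (or an edge
    // function that rewrites the path) has to accept the /api1 or /api2 prefix.
    "/api1/*": { origin: api1Origin, allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL },
    "/api2/*": { origin: api2Origin, allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL },
  },
};
```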

What is the best way to point a domain to an S3 bucket that doesn't have the domain as the bucket name

I'm new to AWS and all of its services. On my first go at it I started my project with an S3 bucket that was created by default by the Vue CLI setup. I've got a Cognito pool and API Gateway connected to this bucket, but now that I want to connect the project to a custom domain I just purchased, I realize the bucket name needs to match the root domain name. From what I understand, this means I would need to pull all non-AWS files from my Vue project, duplicate them, and either reconfigure the pre-existing connections or start all over.
I've got my custom domain set up with an empty S3 bucket, CloudFront, and Route 53, so that part is up and working, but now I'm not sure how to go about transferring this project between buckets.
So basically I started my project with Bucket1 and finished everything, including the Cognito pool and API Gateway. Now I have a custom domain I want to use, with CloudFront and Route 53 and a bucket named after the custom domain, and I want the project from Bucket1 to load for the new bucket.
Using CloudFront you can mitigate this issue.
Route 53 (DNS name) --> CloudFront URL --> S3 origin
As you have already created a bucket for website hosting, the steps below can help you.
Log in to the AWS console and search for CloudFront.
Click on Create distribution
Create a Web distribution
Select your existing bucket as the Origin Domain Name and complete the setup.
Update the DNS records for your domain to point your website's CNAME to your CloudFront distribution's domain name. You can find your distribution's domain name in the CloudFront console in a format that is similar to d1234abcd.cloudfront.net.
Wait for your DNS changes to propagate and for the previous DNS entries to expire.
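If you prefer to script it, a rough CDK sketch of the same chain could look like this (the bucket name, hosted zone ID, certificate ARN and domain are placeholders; the existing bucket keeps whatever name it already has):
```typescript
import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as acm from "aws-cdk-lib/aws-certificatemanager";
import * as route53 from "aws-cdk-lib/aws-route53";
import * as targets from "aws-cdk-lib/aws-route53-targets";

class SiteStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // The original bucket; its name does not have to match the domain.
    const siteBucket = s3.Bucket.fromBucketName(this, "SiteBucket", "bucket1-original-name");

    // Certificate for the custom domain (must be in us-east-1 for CloudFront).
    const certificate = acm.Certificate.fromCertificateArn(
      this, "Cert", "arn:aws:acm:us-east-1:111111111111:certificate/placeholder");

    const distribution = new cloudfront.Distribution(this, "SiteDistribution", {
      domainNames: ["customdomain.com"],
      certificate,
      defaultRootObject: "index.html",
      defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
    });

    // customdomain.com -> CloudFront; no bucket rename or file migration needed.
    const zone = route53.HostedZone.fromHostedZoneAttributes(this, "Zone", {
      hostedZoneId: "ZPLACEHOLDER",
      zoneName: "customdomain.com",
    });
    new route53.ARecord(this, "SiteAlias", {
      zone,
      target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
    });
  }
}

new SiteStack(new cdk.App(), "SiteStack");
```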
The typical AWS way to do this is to use CloudFront: the domain points to CloudFront, and CloudFront can point to any bucket name or other source location. Once you introduce CloudFront into the mix, the bucket name no longer needs to match the domain name.

Serve single page app under specific path with AWS

At the moment I have mydomain.com pointing to a Django EC2 instance with an ALB in front of it and CloudFront in front of the ALB.
Now I have the requirement of serving a React single-page app under a specific path, mydomain.com/my-specific-path (of course everything under HTTPS).
I tried everything I could to host my SPA in an S3 bucket and use CloudFront to redirect the calls to that S3 bucket, but it was impossible to serve the app over HTTPS that way (because of S3 website hosting and subfolders).
I am thinking now about setting a reverse proxy in front of my Django app. But I don't know if that is the best solution, and I don't know the best way to do it.
Could you please give me some insights about how to serve a SPA under a specific path?
Thank you in advance.
You need to:
1) Add your ALB as an origin to your CloudFront distribution
2) Add your S3 bucket website URL as an origin to your CloudFront distribution
Note: Adding S3 as an origin from the dropdown box that auto populates here will not work for hosting a website out of S3. This feature is for hosting static files only.
2a) Optionally lock your S3 bucket down to CloudFront using a condition in the bucket policy that checks for a header value that only CloudFront and your S3 bucket know
3) Set the default root object in your CloudFront distribution to be index.html
4) Upload your react app to a sub-folder in your S3 bucket, not in the root. This sub-folder must match the path you set on your React app origin in CloudFront
5) Set a default behaviour in your CloudFront distribution that points to your ALB
6) Set a behaviour in your CloudFront distribution that points my-specific-path/* to your S3 bucket origin
7) Terminate SSL on your CloudFront distribution using AWS Certificate Manager
This setup should give you SSL on both your Django app and your React app being hosted in S3.
I've got this running.
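For reference, a rough CDK sketch of those behaviours (the ALB DNS name, bucket, path and certificate ARN are placeholders; the S3 origin uses the bucket's static-website endpoint as noted in step 2, and website endpoints are HTTP-only):
```typescript
import * as cdk from "aws-cdk-lib";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as acm from "aws-cdk-lib/aws-certificatemanager";

class SpaStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const certificate = acm.Certificate.fromCertificateArn(
      this, "Cert", "arn:aws:acm:us-east-1:111111111111:certificate/placeholder");

    new cloudfront.Distribution(this, "SiteDistribution", {
      domainNames: ["mydomain.com"],
      certificate,
      defaultRootObject: "index.html",
      // Default behaviour: everything else goes to the Django app behind the ALB.
      defaultBehavior: {
        origin: new origins.HttpOrigin("my-alb-1234567890.us-east-1.elb.amazonaws.com"),
        allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
        cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
      },
      additionalBehaviors: {
        // The React app lives in the my-specific-path/ sub-folder of the bucket.
        "/my-specific-path/*": {
          origin: new origins.HttpOrigin("my-spa-bucket.s3-website-us-east-1.amazonaws.com", {
            protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
          }),
        },
      },
    });
  }
}

new SpaStack(new cdk.App(), "SpaStack");
```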

How to serve static web content from S3 backed by multiple buckets from different regions

I'm trying to serve static web content (HTML, CSS, and JS files) from S3 buckets. I know I can go to the bucket's properties tab and choose Use this bucket to host a website from the Static website hosting box. I'm sure this step will still be part of the solution I'm looking for, but it won't be all of it.
Here's what I'm trying to accomplish:
Deploying the same content to multiple regions and, based on availability and/or latency, serving the client from the appropriate one.
As for API Gateway, I know how to do this: I should create the same API Gateway (alongside the underlying Lambda functions) and custom domain names in all the regions, then create the same domain on Route 53 (of type CNAME) and choose Latency as the routing policy. One can also set up a health check for the record set so the availability of the API Gateway and Lambda functions is checked periodically.
Now I want to do the same for the S3 bucket and my static content, i.e. deploy the same content to different regions and somehow make Route 53 route the request to the closest available bucket. Previously, I was using CloudFront, but it seems to me that in this setup I can only introduce one bucket.
Does anyone know how I can serve my static content from multiple buckets? If you are going to suggest CloudFront, please tell me how you plan to use multiple buckets.
You can generate a certificate, set up a CloudFront distribution to grab the content from your bucket, and then point your domain to your distribution using Route53. You get free HTTPS, and you can also add several S3 buckets as origins for your distribution.
From AWS Docs:
After you configure CloudFront to deliver your content, here's what happens when users request your objects:
1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.
2. DNS routes the request to the CloudFront edge location that can best serve the request—typically the nearest CloudFront edge location in terms of latency—and routes the request to that edge location.
3. In the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user. If the files are not in the cache, it does the following:
3a. CloudFront compares the request with the specifications in your distribution and forwards the request for the files to the applicable origin server for the corresponding file type—for example, to your Amazon S3 bucket for image files and to your HTTP server for the HTML files.
3b. The origin servers send the files back to the CloudFront edge location.
3c. As soon as the first byte arrives from the origin, CloudFront begins to forward the files to the user. CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.
P.S. Keep in mind this is for static content only!
This is possible with CloudFront, using Lambda@Edge to change the origin based on the answer from Route 53.
Please refer to this blog post for sample Lambda@Edge code that does this:
https://aws.amazon.com/blogs/apn/using-amazon-cloudfront-with-multi-region-amazon-s3-origins/
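As a rough illustration of the idea (not the exact code from the post), an origin-request Lambda@Edge handler that swaps the S3 origin could look like the sketch below; the bucket names and the region map are placeholders:
```typescript
import type { CloudFrontRequestEvent, CloudFrontRequestResult } from "aws-lambda";

// Region where this Lambda@Edge replica is executing -> closest bucket endpoint.
const bucketByRegion: Record<string, { domainName: string; region: string }> = {
  "us-east-1": { domainName: "my-content-us-east-1.s3.us-east-1.amazonaws.com", region: "us-east-1" },
  "eu-west-1": { domainName: "my-content-eu-west-1.s3.eu-west-1.amazonaws.com", region: "eu-west-1" },
};
const fallback = bucketByRegion["us-east-1"];

export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequestResult> => {
  const request = event.Records[0].cf.request;
  const executionRegion = process.env.AWS_REGION ?? "us-east-1";
  const target = bucketByRegion[executionRegion] ?? fallback;

  if (request.origin?.s3) {
    // Point the request at the chosen bucket and keep the Host header in sync.
    request.origin.s3.domainName = target.domainName;
    request.origin.s3.region = target.region;
    request.headers["host"] = [{ key: "Host", value: target.domainName }];
  }

  return request;
};
```
The linked post chooses the bucket from a Route 53 answer rather than a hard-coded map, but the origin-swapping part is the same idea.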

One domain to multiple S3 buckets based on geolocation

We want to serve the images in our application as fast as possible. As we already have an AWS setup, we prefer to host our images in S3 buckets (but we are open to alternatives).
The challenge is routing the request to the closest S3 bucket.
Right now we use Amazon Route 53 with a geolocation routing policy to the closest EC2 instance, which redirects to the respective bucket. We find this inefficient, as the request goes
origin -> DNS -> EC2 -> S3, and we would prefer
origin -> DNS -> S3. Is it possible to bind two static-website S3 buckets to the same domain, where requests are routed based on geolocation?
PS: We have looked into CloudFront, but since many of the images are dynamic and are only viewed once, we would like the origin to be as close to the user as possible.
It's not possible to do this.
In order for an S3 bucket to serve files as a static website, the bucket name must match the domain that is being browsed. Due to this restriction, it's not possible to have more than one bucket serve files for the same domain because you cannot create more than one bucket with the same name, even in different regions.
CloudFront can be used to serve files from S3 buckets, and those S3 buckets don't need to have their names match the domain. So at first glance, this could be a workaround. However, CloudFront does not allow you to create more than one distribution for the same domain.
So unfortunately, as of this writing, geolocating is not possible from S3 buckets.
Edit for a deeper explanation:
Whether the DNS entry for your domain is a CNAME, an A record, or an ALIAS is irrelevant. The limitation is on the S3 side and has nothing to do with DNS.
A CNAME record will resolve example.com to s3.amazonaws.com to x.x.x.x and the connection will be made to S3. But your browser will still send example.com in the Host header.
When S3 serves files for webpages, it uses the Host header in the HTTP request to determine from which bucket the files should be served. This is because there is a single HTTP endpoint for S3. So, just like when your own web server is hosting multiple websites from the same server, it uses the Host header to determine which website you actually want.
Once S3 has the Host you requested, it compares it against the available buckets; the bucket name is what gets matched against the Host header.
So after a lot of research we did not find an answer to the problem. We did, however, update our setup. The scenario is that a user clicks a button and then views some images in an iOS app. When the user pushes the button, the request is geo-routed to the nearest EC2 instance for faster performance. Instead of returning the same image links in the EU and the US, we updated it so that clicking in the US gives you links to an American S3 bucket, and the same for Europe. We also put up two CloudFront distributions, one in front of each S3 bucket, to increase speed.