AWS CloudFront URL Mismatch

I have the following setup: CloudFront in account A and the ALB load balancer/web server in account B.
CloudFront has a domain for the user; the load balancer has its own domain and certificate for the connection between CloudFront and the load balancer, with security headers and so on.
The initial page loads fine, but all links and scripts use the URL of the load balancer. The web server thinks the client is connecting directly via the load balancer and therefore builds the URLs of links, scripts, etc. from the load balancer's domain. How can I tell the web server that the original URL is the CloudFront one? Is there a header I can set somewhere? The website is built with DNN (DotNetNuke), but I'm not the developer of the website...
Thanks and best
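If the application builds absolute links from the Host header it receives (which is what the behaviour described above suggests), one common approach is to have CloudFront pass the viewer's original Host header through to the ALB origin with an origin request policy, so the application sees the CloudFront domain instead of the load balancer's. Below is a minimal boto3 sketch of creating such a policy; the policy name is a placeholder, and it assumes the ALB listener/certificate setup can cope with the forwarded host:

    # Sketch: create a CloudFront origin request policy that forwards the viewer's
    # Host header to the ALB origin. Attach the returned policy ID to the
    # distribution's cache behaviour (OriginRequestPolicyId) afterwards.
    import boto3

    cloudfront = boto3.client("cloudfront")

    response = cloudfront.create_origin_request_policy(
        OriginRequestPolicyConfig={
            "Name": "forward-viewer-host",  # placeholder name
            "Comment": "Pass the original Host header through to the ALB origin",
            "HeadersConfig": {
                "HeaderBehavior": "whitelist",
                "Headers": {"Quantity": 1, "Items": ["Host"]},
            },
            "CookiesConfig": {"CookieBehavior": "all"},
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        }
    )
    print("Attach this policy ID to the cache behaviour:",
          response["OriginRequestPolicy"]["Id"])

Alternatively, CloudFront can add a custom origin header such as X-Forwarded-Host on the origin, and the web server (IIS URL Rewrite rules or the DNN portal alias settings) can be configured to honour it; which option fits depends on how the site generates its URLs.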

Related

(GCP Cloud CDN) bucket http works but https doesn't

I have set up a CDN by following this document: https://cloud.google.com/cdn/docs/setting-up-cdn-with-bucket
http (with port 80)
https (with port 443) with a Google-managed certificate
example.com is pointing to the load balancer's IP address (Google Domains)
the certificate says example.com is active
simple index.html is in the backend bucket
I can load http://example.com fine, but it is insecure. When I load https://example.com in Chrome I get the following:
This site can’t provide a secure connection
mydomain.com uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
Can somebody help me set up Cloud CDN with https using GCP storage?
EDIT: trying to add an AAAA record, following "How do you serve a static website using Google Cloud CDN, Google Cloud Storage, and a custom domain?"
Requester Pays was already off
The permission allUsers with Storage Object Viewer was already set
EDIT2: adding the AAAA record didn't work for me
EDIT3: Got rid of the AAAA record. It is working now... I guess it just takes quite a long time
Yes, it depends on your domain provider. Normally, the longest it takes is up to 78 hours.
You need to enable the HTTP-to-HTTPS redirect: configure the HTTP frontend of the load balancer, and there you have the option to enable the redirect.
I would assume that you did not add the external IP of the load balancer as one of the domains covered by your SSL certificate (and you shouldn't have to), so the page will not really load via https://<LoadBalancer-IP-address>
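For reference, the HTTP-to-HTTPS redirect mentioned above is implemented on the load balancer by giving the HTTP (port 80) frontend a URL map whose only job is to redirect to HTTPS; the console checkbox does the same thing. A rough sketch using the google-cloud-compute Python client, with the project and resource names as placeholders:

    # Sketch: create a URL map that only redirects HTTP to HTTPS, then attach it
    # to the target proxy behind the port-80 forwarding rule (e.g. with
    # TargetHttpProxiesClient.set_url_map()). Project and names are placeholders.
    from google.cloud import compute_v1

    PROJECT = "my-project"  # placeholder

    redirect_map = compute_v1.UrlMap(
        name="http-to-https-redirect",  # placeholder name
        default_url_redirect=compute_v1.HttpRedirectAction(
            https_redirect=True,                                 # switch the scheme to https
            redirect_response_code="MOVED_PERMANENTLY_DEFAULT",  # permanent (301) redirect
            strip_query=False,                                   # keep the query string
        ),
    )

    client = compute_v1.UrlMapsClient()
    client.insert(project=PROJECT, url_map_resource=redirect_map).result()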

Cloudfront domain defaults to HTTP when HTTPS is available

Similarly to other stacks, I have hosted a website using AWS services:
Registered domain on Route 53 (example.net)
Content is hosted on an S3 bucket
Got an SSL certificate using AWS Certificate Manager
Created a CloudFront distribution, pointed it to S3 and connected it to my domain with Route 53.
All of this works except for an issue at what seems to be the final hurdle. When I enter my domain URL, example.net, into the address bar, the connection isn't secure by default. I've illustrated the problem here.
I'm relatively new to hosting and can't find a solution relating to this. My thoughts are that I'm missing some CloudFront or Route 53 configuration, since another thing that doesn't work is connecting via www (I don't care about that issue as much). Any input is appreciated.
By default enabling HTTPS on a website doesn't disable HTTP. They are both available, on separate ports. That's why you have to type https:// in the browser's address bar to go directly to the HTTPS version of your website. You can get CloudFront to redirect all HTTP requests to HTTPS by following this guide.
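For anyone who wants to apply that setting outside the console: the guide's fix amounts to switching the cache behaviour's viewer protocol policy to redirect-to-https. A boto3 sketch of that change, assuming a single default cache behaviour, with the distribution ID as a placeholder:

    # Sketch: switch the distribution's default cache behaviour so CloudFront
    # answers plain-HTTP requests with a redirect to HTTPS.
    import boto3

    cloudfront = boto3.client("cloudfront")
    DISTRIBUTION_ID = "E1234567890ABC"  # placeholder

    # update_distribution needs the full current config plus its ETag.
    current = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
    config = current["DistributionConfig"]
    config["DefaultCacheBehavior"]["ViewerProtocolPolicy"] = "redirect-to-https"

    cloudfront.update_distribution(
        Id=DISTRIBUTION_ID,
        IfMatch=current["ETag"],
        DistributionConfig=config,
    )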

Google Domains to AWS Route53 HTTPS

I have a domain hosted through Google. I'm using Google Workspace for a lot of my day-to-day operations (e.g. Drive, Gmail, etc.). I'm using AWS for my application's infrastructure and business logic. I'm having trouble making my site support TLS. If you visit it now, you get this error in Chrome, and I can't seem to make HTTPS requests work.
I have my domain pointing to AWS via custom name servers.
My Route 53 hosted zone has the NS records listed.
I've tried to request a Certificate from AWS to make it work.
My problem is I don't know how to tell Google about it. How do you let Google know about the certificate so I can make my site HTTPS?
I believe approaching Google is not going to solve your issue, since in this case Google is only responsible for hosting your domain. The DNS setup is only responsible for routing requests to your site, not for making your site more secure.
I also noticed that you are exposing your site over HTTP rather than HTTPS, and that's why your site shows as not secure.
Is your site running on a web server, or is it hosted on S3 as a static website?
Note: you can't enable HTTPS on an S3 static website endpoint.
The workaround for the above problem is as follows:
Route 53 has an A record pointing to an ALB (configured with an ACM certificate) that distributes traffic to EC2 instances running your web application.
If anyone is still looking: I wanted to keep it cheap with a simple S3 static website. If you want to keep the S3 part, make a CloudFront distribution (if you haven't already).
Inside the CloudFront distribution, under the main settings, use a certificate you created in Certificate Manager.
Then head over to Route 53 (even if the domain is hosted via Google) and point the "A" record at the CloudFront distribution. NOTE: make sure the "Alternate Domain Names" field is filled in, or else it won't match.
Let it update for a minute or two and it will serve HTTPS.
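As a concrete illustration of the Route 53 step, here is a boto3 sketch that upserts the alias A record pointing the domain at the CloudFront distribution; the hosted zone ID, domain, and distribution domain name are placeholders (the CloudFront alias zone ID, however, is the fixed value AWS uses for all distributions):

    # Sketch: alias the domain to the CloudFront distribution in Route 53.
    import boto3

    route53 = boto3.client("route53")

    HOSTED_ZONE_ID = "Z0000000000000"            # placeholder: your hosted zone for example.com
    CLOUDFRONT_ALIAS_ZONE_ID = "Z2FDTNDATAQYW2"  # fixed zone ID used for all CloudFront aliases

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "Point domain at CloudFront",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": CLOUDFRONT_ALIAS_ZONE_ID,
                        "DNSName": "d1234abcd.cloudfront.net",  # placeholder distribution domain
                        "EvaluateTargetHealth": False,
                    },
                },
            }],
        },
    )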

Using cloudfront to handle static files and backend

I have a single-page application (made with Angular), which I am serving by pointing CloudFront to an S3 bucket. This is working well.
However, I want to run the backend of my website via the same domain. What I've done is add another origin to my CloudFront distribution which points to Elastic Beanstalk, where the Django app is running.
Then, I configured behaviors so that the path pattern /apiv1/* is handled by Django. This doesn't work, and I'm getting a 403 Forbidden error when trying to access my endpoints.
The behavior I'm looking for is as follows:
/ should point to index.html and load static files (this currently works)
/apiv1/... should point to Django. For example, to access a login endpoint I would have website.com/apiv1/api/login (as opposed to localhost/api/login on my machine).
Is this possible?
If anyone is doing something similar, here is a fix:
Add a subdomain - I added api.example.com which is a subdomain of example.com
Then, in Route 53, I configured api.example.com to point to the ELB via an alias record and requested an SSL certificate for the subdomain! Note: YOU MUST use HTTPS when making requests, hence the reason for the SSL certificate.
I simply changed the base URL in my Angular HTTP requests and it works.
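For anyone reproducing the subdomain approach, a rough boto3 sketch of the Route 53 side (the alias record that sends api.example.com to the load balancer in front of the backend); the load balancer name, hosted zone ID, and domain names are placeholders:

    # Sketch: look up the ALB and alias the api subdomain to it in Route 53.
    import boto3

    elbv2 = boto3.client("elbv2")
    route53 = boto3.client("route53")

    # The ALB's DNS name and canonical hosted zone ID are needed for the alias target.
    lb = elbv2.describe_load_balancers(Names=["my-backend-alb"])["LoadBalancers"][0]

    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # placeholder: hosted zone of example.com
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": lb["CanonicalHostedZoneId"],
                        "DNSName": lb["DNSName"],
                        "EvaluateTargetHealth": False,
                    },
                },
            }],
        },
    )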

Hosting React page on S3 and making REST api calls to server on Elastic Beanstalk

Background
I am trying to deploy a dummy application with a React frontend and a Django backend interacting via a REST API. I have done the following:
Used an S3 bucket to host a static website and deployed my React code to it
Put CloudFront in front of the S3 bucket - set up a certificate and pointed my domain name (from GoDaddy) at this address
Kicked off an Elastic Beanstalk environment following AWS's Python environment tutorial
Set up a Postgres RDS instance and linked the Django server with it
So now I can do the following
Access my frontend using https via my domain name (https://www.example.com)
Access the Django admin site using the Elastic Beanstalk URL and update items
i.e. each component is up and running
Problem
I am having trouble with:
Making a secure REST API call from the static page to the Elastic Beanstalk environment. Before I set up certificates I could easily make REST API calls.
The guides I can find usually involve putting a domain name on Elastic Beanstalk, which I imagine does not apply to my case (or does it?)
I tried to follow this FAQ and updated the load balancer configuration to accept HTTPS on 443 and forward to HTTP on port 80. But I am using the same certificate as the CloudFront one, which does not sound right to me.
Would appreciate help with
how to solve the above ssl connection issue
or is there a better architecture for what I'm trying to achieve here?
According to "Request a certificate in ACM for Elastic Beanstalk backend", it sounds like I have to use a subdomain, request a certificate for that subdomain, and use Route 53 to direct requests for that subdomain to the Elastic Beanstalk environment. Would that be the case?
Thank you in advance!
By default the EB URL will be HTTP only. To use HTTPS you need to deploy an SSL certificate on your ALB.
In order to do that you need a custom domain, because you can only associate an SSL certificate with domains that you control. Thus, normally you would get a domain (you seem to already have one from GoDaddy). So in this case you can set up a subdomain (e.g. api.my-domain.com) on GoDaddy. Then you can use AWS ACM to register a free public SSL certificate for api.my-domain.com.
Once the certificate is validated, using either the DNS (easier) or email technique, you deploy it on your ALB with an HTTPS listener. Obviously you will need to point api.my-domain.com at the EB environment's URL. You can also redirect HTTP traffic on your ALB from port 80 to 443 to always use HTTPS.
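A rough boto3 sketch of the listener setup described above, assuming an Application Load Balancer; all ARNs are placeholders, and if a port-80 listener already exists you would use modify_listener on it instead of creating a new one:

    # Sketch: terminate TLS on the ALB with the ACM certificate and redirect HTTP to HTTPS.
    import boto3

    elbv2 = boto3.client("elbv2")

    ALB_ARN = "arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-eb-alb/abc"    # placeholder
    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/my-eb-tg/def" # placeholder
    CERT_ARN = "arn:aws:acm:region:account:certificate/xyz"                                   # placeholder

    # HTTPS listener on 443 that forwards decrypted traffic to the EB instances.
    elbv2.create_listener(
        LoadBalancerArn=ALB_ARN,
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": CERT_ARN}],
        DefaultActions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
    )

    # HTTP listener on 80 whose only action is a permanent redirect to HTTPS.
    elbv2.create_listener(
        LoadBalancerArn=ALB_ARN,
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{
            "Type": "redirect",
            "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
        }],
    )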
Then in your front-end application you use only https://api.my-domain.com, not the original EB URL.
There can also be CORS issues alongside this, so you have to be wary of them as well.