Securing traffic from AWS CloudFront to Azure App Gateway

Please can someone advise whether what I'm trying to do is possible. Apologies in advance: I know a lot more about AWS than Azure, and I haven't been able to find any guidance online or to get past the issue by setting up the services and giving it a go.
I want to send SSL-secured subdomain traffic from AWS, where our primary domain is hosted, to Azure, where some dependent services and resources are hosted. We want to use AWS ACM for SSL management and renewals, removing any dependency on third parties or on Azure for this if at all possible.
I am able to set up a CloudFront distribution with an origin of an Azure Storage Account endpoint:
xxx.blob.core.windows.net
With an alternate domain name of a subdomain of the desired URL:
xxx.xxx.co.uk
I can secure this with a wildcard ACM SSL certificate, and the resulting images are all served securely.
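For reference, here is roughly what that working distribution looks like when expressed as a boto3 call. This is a minimal sketch: every name, ARN and domain below is a placeholder, and the wildcard ACM certificate has to live in us-east-1 for CloudFront to use it.

```python
# Hypothetical sketch of the blob-storage setup described above (boto3).
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "blob-origin-example",        # any unique string
    "Comment": "Subdomain fronting an Azure Storage endpoint",
    "Enabled": True,
    # The alternate domain name (the subdomain of our primary domain)
    "Aliases": {"Quantity": 1, "Items": ["images.example.co.uk"]},
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "azure-blob",
        "DomainName": "xxx.blob.core.windows.net",
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",    # CloudFront -> Azure over TLS
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "azure-blob",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized policy
    },
    "ViewerCertificate": {                            # the wildcard ACM certificate
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abc-123",
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
})
```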
I have also set up an Azure Static Web App and applied a custom domain to it:
xxx.xxx.co.uk
With the appropriate DNS/CloudFront configuration I can make traffic to that Azure SWA secure as well.
Is it possible to do the same with Azure App Gateway? Everything that I, or the developers working in Azure (a third party), have tried does not work; we mostly end up with 502 errors, depending on the configuration. With some CF/DNS configurations I can reach the correct resources/services, but only by clicking through an SSL warning.
Would adding a port 80 (non-HTTPS) listener for our subdomain on the App Gateway work?
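For what it's worth, this is roughly the origin block we've been experimenting with for the App Gateway (a sketch only; the hostname is a placeholder). Switching OriginProtocolPolicy to "http-only" is essentially what the port-80 listener idea would amount to.

```python
# Hypothetical CloudFront origin pointing at the App Gateway's public hostname.
app_gateway_origin = {
    "Id": "azure-appgw",
    "DomainName": "appgw.example-azure.net",   # placeholder App Gateway hostname
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        # "https-only" requires the gateway's listener to present a certificate
        # that is valid for the hostname CloudFront connects to;
        # "http-only" is what the port-80 listener idea would use instead.
        "OriginProtocolPolicy": "https-only",
        "OriginSslProtocols": {"Quantity": 1, "Items": ["TLSv1.2"]},
    },
}
```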

Related

How can I let my HTTPS frontend server connect with my HTTP rest API?

I have a React.js web app deployed via Google Firebase Hosting. I also have an Express REST API deployed on an AWS EC2 instance. So far I have been unable to get the React app to interact with the Express API because the API is served over HTTP. I tried to get all the SSL/certificate details figured out to enable HTTPS on the backend, but it seems it will not work because the certificate is not signed by a Certificate Authority.
Is there any workaround or other solution here? Thank you in advance.
A web browser will not accept a self-signed SSL certificate. In order to generate a legitimate SSL certificate you must first own a domain name.
You need to purchase a domain and point the domain or a subdomain at the EC2 instance. Then you need to create an SSL certificate that matches that domain name or subdomain, using a provider such as Let's Encrypt, whose certificates are accepted by modern web browsers.
Finally you will need to use that domain name in your API calls.
You could place a Load Balancer, or CloudFront distribution, or AWS API Gateway, in front of the EC2 server, at which point you could use a free AWS ACM SSL certificate.
If you don't want to purchase a domain name, you could still place CloudFront or API Gateway in front of the server and use their default endpoint which will also provide SSL.
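To make the ACM route concrete, here is a rough sketch (the subdomain is an assumption; request the certificate in us-east-1 if it will be attached to CloudFront):

```python
# Hypothetical sketch: request a free public ACM certificate for a subdomain
# you control, validated via a DNS record that you create yourself.
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # us-east-1 when used with CloudFront

response = acm.request_certificate(
    DomainName="api.example.com",   # placeholder subdomain
    ValidationMethod="DNS",
)
certificate_arn = response["CertificateArn"]

# describe_certificate shows the CNAME record ACM wants you to create
# (it can take a few seconds to appear); once that record exists in your DNS,
# the certificate is issued automatically.
details = acm.describe_certificate(CertificateArn=certificate_arn)
print(details["Certificate"]["DomainValidationOptions"])
```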

Google Domains to AWS Route53 HTTPS

I have a domain hosted through Google. I'm using Google Workspace for a lot of my day-to-day operations (e.g. Drive, Gmail, etc.). I'm using AWS for the infrastructure and business logic of my application. I'm having trouble making my site support TLS: if you visit it now, you get a security warning in Chrome, and I can't seem to make HTTPS requests work.
I have my domain pointing to AWS via custom name servers.
My Route 53 hosted zone has the NS records listed.
I've tried requesting a certificate from AWS to make it work.
My problem is I don't know how to tell Google about it. How do you let Google know about the certificate so I can make my site HTTPS?
I believe approaching Google is not going to solve your issue, since in this case Google is only responsible for hosting your domain. The DNS setup only routes requests to your site; it does not make the site itself more secure.
I also noticed that you are exposing your site over HTTP rather than HTTPS, and that is why it shows as not secure.
Is your site running on a web server, or is it hosted on S3 as a static website?
Note: you can't enable HTTPS on the S3 static website endpoint itself.
The workaround for the above problem is:
Route 53 has an A record pointing to an ALB (configured with an ACM certificate) that distributes traffic to the EC2 instances running your web application.
If anyone is still looking: I wanted to keep it cheap with a simple S3 static website. If you want to keep the S3 part, make a CloudFront distribution (if you haven't already).
Inside the CloudFront distribution, under the main settings, use a certificate you created in Certificate Manager.
Then head over to Route 53 (even if the domain is hosted via Google) and point the A record at the CloudFront distribution. NOTE: make sure the "Alternate Domain Names" field on the distribution is filled in, or else it won't match.
Let it update for a minute or two and the site will be served over HTTPS.
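If you prefer to do the Route 53 step programmatically, something like this sketch works (zone ID, domain and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets):

```python
# Hypothetical sketch: point an alias A record at the CloudFront distribution.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",                   # your hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",                   # must match an Alternate Domain Name
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",            # CloudFront's alias zone ID
                "DNSName": "d111111abcdef8.cloudfront.net",  # your distribution's domain
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```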

How to deploy frontend and backend with the same domain name?

I'm not very familiar with deployment and networking, as I'm primarily a frontend developer. I want to create a project with Laravel and React (separated, not integrated with Blade) and deploy them to AWS. I want to use Laravel only as an API server, and I'm planning to deploy it on EC2. If I host my React app on S3, how will it be possible for me to share the same domain with the API server running on the EC2 instance?
I know that I can have separate subdomains, like www.example.com for my React app and api.example.com for my API server. However, if I want to have www.example.com for my React app and www.example.com/api for my API server, what options do I have? And what resources can you recommend for getting up to speed on this topic? Thanks!
As you want to use both S3 and EC2, you need a service that can route to either endpoint based on a condition.
The best service for this would be CloudFront, which supports distribution to S3 and EC2 (as a custom origin).
To do this you would create your distribution with an origin for the S3 bucket and another for the API. As your API is served under the /api/* path, you would add that as the path pattern when adding the secondary origin via a behaviour.
CloudFront will then route any requests to /api/* paths to your EC2 origin.
I have found an article named How to route to multiple origins with CloudFront, which I hope explains the steps to accomplish this in greater detail.
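To make the shape of that configuration concrete, here is a hedged fragment of what the DistributionConfig could look like (origin IDs and the managed cache policy choices are illustrative assumptions, not requirements):

```python
# Hypothetical fragment of a CloudFront DistributionConfig: the S3 bucket is the
# default origin, and anything under /api/* is routed to the EC2 (custom) origin.
behaviors = {
    "DefaultCacheBehavior": {
        "TargetOriginId": "react-s3",               # the S3 bucket origin
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized
    },
    "CacheBehaviors": {"Quantity": 1, "Items": [{
        "PathPattern": "/api/*",
        "TargetOriginId": "laravel-api",            # the EC2 custom origin
        "ViewerProtocolPolicy": "redirect-to-https",
        "AllowedMethods": {                          # the API needs more than GET/HEAD
            "Quantity": 7,
            "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
            "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
        },
        "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad",  # managed CachingDisabled
    }]},
}
```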

Hosting React page on S3 and making REST api calls to server on Elastic Beanstalk

Background
I am trying to deploy a dummy application with a React frontend and a Django backend interacting via a REST API. I have done the following:
Used an S3 bucket to host a static website and deployed my React code to it
Put CloudFront in front of the S3 bucket, set up a certificate, and pointed my domain name (from GoDaddy) at this address
Kicked off an Elastic Beanstalk environment following the AWS Python environment tutorial
Set up a Postgres RDS instance and linked the Django server to it
So now I can do the following:
Access my frontend over HTTPS via my domain name (https://www.example.com)
Access the Django admin site via the Elastic Beanstalk URL and update items
i.e. each component is up and running
Problem
I am having trouble with:
Making a secure REST API call from the static page to the Elastic Beanstalk environment. Before I set up the certificates I could make REST API calls easily.
The guides I can find usually involve putting a domain name in front of Elastic Beanstalk, which I imagine does not apply to my case (or does it?)
I tried to follow this FAQ and updated the load balancer configuration to accept HTTPS on 443 and forward it to HTTP on port 80. But I am using the same certificate as the one on CloudFront, which does not sound right to me.
Would appreciate help with
how to solve the above SSL connection issue
or whether there is a better architecture for what I'm trying to achieve here
According to Request a certificate in ACM for Elastic Beanstalk backend, it sounds like I have to use a subdomain, request a certificate for that subdomain, and use Route 53 to direct requests for that subdomain to the Elastic Beanstalk environment. Would that be the case?
Thank you in advance!
By default the EB URL is HTTP only. To use HTTPS you need to deploy an SSL certificate on your ALB.
In order to do that you need a custom domain, because you can only associate an SSL certificate with a domain that you control. Normally you would get a domain (you seem to already have one from GoDaddy). So in this case you can set up a subdomain (e.g. api.my-domain.com) on GoDaddy. Then you can use AWS ACM to register a free public SSL certificate for api.my-domain.com.
Once the certificate is validated, using either the DNS (easier) or email technique, you deploy it on your ALB with an HTTPS listener. Obviously you will need to point api.my-domain.com at the EB environment's URL. You can also redirect HTTP traffic on your ALB from port 80 to 443 so that HTTPS is always used.
Then, in your front-end application, you use only https://api.my-domain.com, not the original EB URL.
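For illustration, the ALB side can be wired up with boto3 roughly like this (a sketch only; all ARNs are placeholders, and the same result can be achieved from the Elastic Beanstalk console or .ebextensions):

```python
# Hypothetical sketch: HTTPS listener carrying the ACM certificate, plus an
# HTTP listener that only redirects to 443.
import boto3

elbv2 = boto3.client("elbv2")

alb_arn = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-eb-alb/abc"
target_group_arn = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/eb-targets/def"
certificate_arn = "arn:aws:acm:eu-west-1:123456789012:certificate/abc-123"

# 443: terminate TLS with the ACM certificate and forward to the EB instances
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": certificate_arn}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
)

# 80: permanently redirect plain HTTP to HTTPS
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
    }],
)
```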
There can also be CORS issues alongside this, so be wary of them as well.

Use https in aws for flask api without purchasing domain name

I have made a Flask application to use only as an API. I have hosted it on AWS using nginx and gunicorn. I intend to use the API to run my Android application. There is a part in the application where I have to download something using the Android Download Manager, but it only downloads things hosted on HTTPS domains. So I want to make my application HTTPS instead of HTTP. But every tutorial shows me a way that requires a purchased domain. I don't have much information on this yet, but it seems I can't get an SSL certificate from Amazon without a purchased domain name (which seems pointless for an API). I just want to know how I can do this. How can I make my nginx server listen for HTTPS requests?
I have hosted it on AWS using nginx and gunicorn.
I think you need a domain name to get SSL on AWS.
Getting a certificate without a domain name is not allowed in AWS.
One part of HTTPS is encryption; the other part is identity verification. What you're asking for is impossible, since you are required to verify ownership of a domain name. Without that, no certificate authority will sign a certificate, and a self-signed certificate is not publicly valid. ACM (AWS Certificate Manager), an AWS service, will not allow you to create a certificate without a valid domain name.