AWS EC2 / Elastic Beanstalk | How to whitelist by domain?

I have an Elastic Beanstalk environment that runs a Node API. I also have an Angular web app hosted outside of AWS on the domain www.example.com.
How would I go about making it so that only calls made from 'www.example.com' can reach the AWS environment (the Node API)?
I am familiar with AWS security groups, but they don't handle domain whitelisting (only IPs). Since users of www.example.com will be coming from many different IPs, I need to whitelist by domain rather than by IP.
Any help would be greatly appreciated!

Short of blocking by IP, your only real option is to attach a WAF to the ALB in your Elastic Beanstalk environment.
With a WAF you can allow only traffic that matches a set of conditions; if your frontend includes a particular header when it makes requests to the backend, you can allow requests from those sources only.
Assuming the API requests are made from the browser by your frontend, they should carry a Referer header containing the URL of the page that issued the request. You could whitelist this domain in the WAF.
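If you go down the WAF route, a rough boto3 sketch is below. The ACL name, region, allowed origin and the ALB ARN are placeholders, and since the Referer header can be spoofed this is request filtering rather than hard security.

import boto3

# Sketch only: names, region and ARNs are placeholders.
wafv2 = boto3.client("wafv2", region_name="us-east-1")  # must be the ALB's region

acl = wafv2.create_web_acl(
    Name="frontend-referer-acl",
    Scope="REGIONAL",                      # REGIONAL scope is what ALBs use
    DefaultAction={"Block": {}},           # block anything that doesn't match a rule
    Rules=[{
        "Name": "allow-example-com-referer",
        "Priority": 0,
        "Action": {"Allow": {}},
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": b"https://www.example.com",
                "FieldToMatch": {"SingleHeader": {"Name": "referer"}},
                "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                "PositionalConstraint": "STARTS_WITH",
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowExampleComReferer",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "FrontendRefererAcl",
    },
)

# Attach the web ACL to the ALB created by your Elastic Beanstalk environment.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-eb-alb/abc123",
)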

Related

AWS: Manage Elastic Beanstalk API and CloudFront routes with Route 53

I'm wondering how to manage the routes of my architecture. To summarize, it is composed of:
An S3 static website served through the CloudFront CDN
An Elastic Beanstalk API (a Docker container running Django REST Framework on Python)
I usually just add a new record to my Route 53 hosted zone, but my goal here is to have the equivalent of Nginx locations with proxy_pass. For example, I would have:
<my_dns_record>/api that targets my Beanstalk API
<my_dns_record> that targets my static website on CloudFront
I thought about an API Gateway but I wonder if it's really the best way to structure the routing.
Does anyone have an idea how to achieve the desired behavior?
Thank you in advance for your help.
If I understand correctly, you want multiple apps (your S3 static site and your Elastic Beanstalk app) served under a single domain. Route 53 doesn't have any special features for this, since it is just a DNS service, and what you are describing is HTTP path routing.
You also shouldn't use API Gateway for this, as you would be placing your entire website behind an API Gateway when only your API really belongs behind one.
I would just add the Elastic Beanstalk API under CloudFront as well, as a second origin, and configure CloudFront to send requests at the /api path to the Elastic Beanstalk origin.
Alternatively, forget about having everything under the same domain, and use a subdomain api.yourdomain.com for your API. Using subdomains for your different services instead of path routing is a lot more flexible.
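For the subdomain route, a minimal boto3 sketch is below; the hosted zone ID, record name and Elastic Beanstalk CNAME are assumptions you would replace with your own values.

import boto3

route53 = boto3.client("route53")

# Point api.yourdomain.com at the Elastic Beanstalk environment's CNAME.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # hosted zone for yourdomain.com (placeholder)
    ChangeBatch={
        "Comment": "Route the API subdomain to Elastic Beanstalk",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.yourdomain.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "my-env.eba-abc123.eu-west-1.elasticbeanstalk.com"}
                ],
            },
        }],
    },
)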

AWS setup: moving single page static application frontend to S3 (from web server)

My current website (a single page app with CORS + API) is deployed on an AWS EC2 instance and served via an ALB (mostly for an easier HTTPS setup, as I only cover one region for now). The web server is configured to serve the single page application, but that is all it does for the frontend. I want to move the single page application to S3 instead and completely separate the backend from the frontend. The question is, what would be the most efficient AWS setup for this? I can come up with the following:
point the domain at the S3 bucket to serve the frontend files, and point API calls at the ALB's public DNS name
keep the domain pointed at the ALB as it is, route ports 80 & 443 to S3, change the API port and route that port to EC2
...
Any help appreciated.
If you're trying to completely separate the infrastructure for frontend and backend but keep the same domain you could make use of CloudFront.
In CloudFront you would create 2 origins within your distribution:
The default origin would be the S3 static website.
Then an additional origin which would point to the original ALB.
You would configure the behaviours of this CloudFront distribution so that when a path matches a specific pattern, e.g. api/*, traffic is forwarded to the ALB; anything that does not match falls through to the default S3 origin.
Take a look at the AWS article "Can I use a single CloudFront web distribution to serve content from multiple origins using multiple behaviors?", which covers a setup similar to what I have outlined.
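For reference, a boto3 sketch of such a two-origin distribution is below. The bucket website endpoint, the ALB DNS name and the cache settings are assumptions, and the legacy ForwardedValues/TTL fields are used instead of cache policies to keep the example short.

import boto3
import time

cloudfront = boto3.client("cloudfront")

# Placeholders for the S3 website endpoint and the ALB DNS name.
S3_WEBSITE = "my-frontend-bucket.s3-website-eu-west-1.amazonaws.com"
API_ALB = "my-eb-alb-123456.eu-west-1.elb.amazonaws.com"

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),   # any unique string
    "Comment": "Frontend on S3, API on the ALB under /api",
    "Enabled": True,
    "Origins": {
        "Quantity": 2,
        "Items": [
            {   # S3 static-website endpoints behave as custom (HTTP-only) origins
                "Id": "s3-frontend",
                "DomainName": S3_WEBSITE,
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only",
                },
            },
            {
                "Id": "api-alb",
                "DomainName": API_ALB,
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            },
        ],
    },
    # Everything that doesn't match another behaviour goes to the S3 frontend.
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-frontend",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
    },
    # Requests matching api/* are forwarded to the ALB, effectively uncached.
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [{
            "PathPattern": "api/*",
            "TargetOriginId": "api-alb",
            "ViewerProtocolPolicy": "redirect-to-https",
            "AllowedMethods": {
                "Quantity": 7,
                "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
                "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            },
            "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "all"}},
            "MinTTL": 0,
            "DefaultTTL": 0,
            "MaxTTL": 0,
        }],
    },
})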

Hosting React page on S3 and making REST api calls to server on Elastic Beanstalk

Background
I am trying to deploy a dummy application with a React frontend and a Django backend interacting via a REST API. I have done the following:
Used an S3 bucket to host the static website and deployed my React code to it
Put CloudFront in front of the S3 bucket - set up a certificate and pointed my domain name (from GoDaddy) at this address
Kicked off an Elastic Beanstalk environment following AWS's Python environment tutorial
Set up a Postgres RDS instance and linked the Django server to it
So now I can do the following:
Access my frontend over HTTPS via my domain name (https://www.example.com)
Access the Django admin site using the Elastic Beanstalk URL and update items
i.e. each component is up and running
Problem
I am having trouble with:
Making a secure REST API call from the static page to the Elastic Beanstalk environment. Before I set up the certificates, I could make REST API calls without a problem.
The guides I can find usually involve putting a domain name on the Elastic Beanstalk environment, which I imagine does not apply to my case (or does it?)
I tried to follow this FAQ and updated the load balancer configuration to accept HTTPS on 443 and forward to HTTP on 80. But I am using the same certificate as CloudFront, which does not sound right to me.
Would appreciate help with
how to solve the above ssl connection issue
or is there a better architecture for what I'm trying to achieve here?
According to Request a certificate in ACM for Elastic Beanstalk backend, it sounds like I have to use a subdomain, request a certificate for that subdomain, and use Route 53 to direct requests for that subdomain to the Elastic Beanstalk environment. Would that be the case?
Thank you in advance!
By default the EB URL is HTTP only. To use HTTPS you need to deploy an SSL certificate on your ALB.
In order to do that you need a custom domain, because you can only associate an SSL certificate with a domain that you control. Normally you would buy a domain (you seem to already have one from GoDaddy). So in this case you can set up a subdomain (e.g. api.my-domain.com) on GoDaddy. Then you can use AWS ACM to register a free public SSL certificate for api.my-domain.com.
Once the certificate is validated, using either the DNS (easier) or email method, you deploy it on your ALB with an HTTPS listener. Obviously you will need to point api.my-domain.com at the EB environment's URL. You can also redirect HTTP traffic from port 80 to 443 on your ALB so that HTTPS is always used.
Then in your front-end application you only use https://api.my-domain.com, not the original EB URL.
There can also be CORS issues alongside this, so be wary of those as well.
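A rough boto3 sketch of those steps follows; all ARNs and the domain are placeholders, and on Elastic Beanstalk you would normally apply the listener changes through the console or .ebextensions rather than raw API calls.

import boto3

acm = boto3.client("acm", region_name="eu-west-1")    # same region as the ALB
elbv2 = boto3.client("elbv2", region_name="eu-west-1")

# 1. Request a free public certificate for the API subdomain (DNS validation).
cert = acm.request_certificate(
    DomainName="api.my-domain.com",
    ValidationMethod="DNS",
)
cert_arn = cert["CertificateArn"]
# Add the CNAME validation record at your DNS provider and wait for the
# certificate status to become ISSUED before the next step.

# Placeholder ARNs for the Elastic Beanstalk environment's ALB and target group.
ALB_ARN = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-eb-alb/abc"
TG_ARN = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-eb-tg/def"

# 2. HTTPS listener on 443 terminating TLS with the new certificate.
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)

# 3. Redirect plain HTTP on 80 to HTTPS on 443. (If the environment already has
#    a port 80 listener, modify that listener instead of creating a new one.)
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
    }],
)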

Route 53 for AWS Elasticsearch domain gives certificate error

I have created an AWS Elasticsearch domain in Virginia and got an endpoint URL.
Now I want to configure Route 53 around it, so that a caller can keep using the same URL even if something changes in Elasticsearch or in the case of disaster recovery.
So,
Virginia Route 53 (1) -- points to -- Virginia Elasticsearch domain URL
Oregon Route 53 (2) -- points to -- Oregon Elasticsearch domain URL
Main Route 53 (3) -- points to -- Route 53 (1) or (2)
I have already created these, and also created and uploaded an SSL certificate with the correct SAN entries. But when I execute,
curl https://mainroute53/health
curl https://virginiaroute53/health
curl https://oregonroute53/health
I am getting this error,
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
But when I call the Elasticsearch URL directly, it works. So I understand this is an issue with the way I am using the certificate. Any help appreciated.
Your Elasticsearch endpoint will always present the Elasticsearch SSL certificate.
So when you create a Route 53 "alias" for it, you may be connecting to it via your custom DNS entry, but Elasticsearch will still serve the Elasticsearch SSL certificate.
Since the DNS name you're using does not match the SSL certificate, you get that error.
You could use curl's --insecure flag so that it does not check the SSL certificate; however, there are risks in doing that.
You can probably work around this by setting up a proxy server in front of the Elasticsearch domain, although it's kind of silly since there appears to also be an ELB inside the Elasticsearch domain. Ah well.
The domain Amazon ES creates for you includes the nodes in the Elasticsearch cluster and resources from several AWS services. When Amazon ES creates your domain, it launches instances into a service-controlled VPC. Those instances are fronted by Elastic Load Balancing (ELB), and the endpoint for the load balancer is published through Route 53. Requests to the domain pass through the ELB load balancer, which routes them to the domain’s EC2 instances.
https://aws.amazon.com/blogs/database/set-access-control-for-amazon-elasticsearch-service/
One way you can access Elasticsearch using your custom domain name is to use an API Gateway as an HTTP proxy. But then you have to deal with the authentication part, since the Cognito cookies for ES will be pointing to the original domain (*.es.amazonaws.com).
In my experience this is doable, and you should be able to use API Gateway (plus Custom Domain Names and Route 53) to achieve what you want (having a custom domain name in front of ES). It just requires some Cognito knowledge and, most likely, some coding (to handle the cookie problem).
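A minimal sketch of such a proxy, using an API Gateway HTTP API via boto3, is below; the Elasticsearch endpoint and names are placeholders, and the Cognito cookie handling mentioned above is not covered.

import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

ES_ENDPOINT = "https://vpc-es-randomstring.us-east-1.es.amazonaws.com"  # placeholder

# 1. An HTTP API that will front the Elasticsearch domain.
api = apigw.create_api(Name="es-proxy", ProtocolType="HTTP")

# 2. An HTTP_PROXY integration that passes the request path straight through.
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri=f"{ES_ENDPOINT}/{{proxy}}",
    PayloadFormatVersion="1.0",
)

# 3. Route every method and path to that integration.
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="ANY /{proxy+}",
    Target=f"integrations/{integration['IntegrationId']}",
)

# 4. Auto-deploying default stage.
apigw.create_stage(ApiId=api["ApiId"], StageName="$default", AutoDeploy=True)

# A custom domain name (with an ACM certificate) plus a Route 53 record can then
# be mapped to this API so callers use your own DNS name instead of the ES endpoint.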
You can use the HTTP endpoint instead of the HTTPS one,
i.e.
curl http://mainroute53/health
This works around the fact that AWS does not allow you to provide a custom domain certificate for its managed Elasticsearch service.
We had the same issue: we wanted to be redirected to Kibana via a friendlier DNS name, and we used the S3 bucket redirection solution described here.
The steps:
Create an S3 bucket with any name.
In the bucket properties, enable “Static website hosting”.
In the static website hosting properties, select the option to “Redirect requests”.
As the target domain, set the Kibana URL given by your Elasticsearch domain, e.g. https://vpc-es-randomstring.us-east-1.es.amazonaws.com/_plugin/kibana/
Set the protocol to https
Then follow the steps from Step 5 of the guide above.
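Roughly, the bucket part of those steps looks like this with boto3 (bucket name and Elasticsearch host are placeholders; note that the API's redirect target only takes a host name and protocol, not the /_plugin/kibana/ path).

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

BUCKET = "kibana-redirect-example"          # any bucket name you own (placeholder)
ES_HOST = "vpc-es-randomstring.us-east-1.es.amazonaws.com"

s3.create_bucket(Bucket=BUCKET)

# Enable static website hosting in "redirect requests" mode.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": ES_HOST,
            "Protocol": "https",
        }
    },
)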

Two different AWS applications on the same domain with different subfolders

Based on the issue CAS server cross subdomain ST ticket, I'm thinking about changing my applications' URLs.
I have two applications on the following subdomains:
https://ui.example.com - a static AngularJS application (JavaScript, HTML) hosted on Amazon S3
https://api.example.com - a Java Spring application hosted on Amazon EC2 instances (running Tomcat) behind Elastic Load Balancing, which distributes incoming traffic across multiple EC2 instances
Right now I need to change my applications' URLs to the following:
https://ui.example.com
https://ui.example.com/api
In other words, I need to make the api.example.com application available under the /api subfolder of my ui.example.com domain.
How can this be configured with AWS? Where in AWS do I need to make the appropriate changes and configuration?
You could set up an Nginx proxy in front of both servers, mapping the root path to S3 and the /api path to your EC2 instance. Or you could set up a CloudFront distribution (or use another CDN like CloudFlare) and map the different paths to different origin servers.
In general you have to put a proxy in front of all the servers sharing a domain name.