I have the following VPC setup with AWS Elastic Beanstalk:
Web App: public Load Balancer pointed to by my domain (proxied through Cloudflare), with EC2 instances in a private subnet.
API: private internal Load Balancer, with inbound access granted to the EC2 instances above via a Security Group.
Database: within the private subnet, accessible by the EC2 instances behind the API Load Balancer.
I would like to enable end-to-end HTTPS; AWS has good documentation here (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html).
I have followed this, albeit with my free Cloudflare domain certs. This seemed OK until I got the following error: 'SELF_SIGNED_CERT_IN_CHAIN' when my web app tries to connect to the internal API via https://internal-aweseb-dns.amazonaws.com (the DNS name of the internal API Load Balancer).
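For context: Node.js only trusts certificates that chain to a root CA in its bundled store, and Cloudflare Origin certificates chain to Cloudflare's own root, which is why Node reports SELF_SIGNED_CERT_IN_CHAIN. If end-to-end HTTPS were kept, the usual fix is to hand Node the issuing CA rather than disable verification. A minimal sketch, where the CA file path and hostname are placeholders (note the certificate must also cover the hostname you connect with, so the raw amazonaws.com name would likely still fail hostname verification even with the CA trusted):

```ts
import * as https from "node:https";
import * as fs from "node:fs";

// Agent that trusts the Cloudflare Origin CA for these connections.
// NOTE: the `ca` option REPLACES Node's default trust store for this agent;
// to append to the defaults instead, start Node with
// NODE_EXTRA_CA_CERTS=/path/to/cloudflare-origin-ca.pem
const agent = new https.Agent({
  ca: fs.readFileSync("./cloudflare-origin-ca.pem"), // hypothetical local path
});

https
  .get("https://internal-api.example.com/health", { agent }, (res) => {
    console.log("status:", res.statusCode);
  })
  .on("error", console.error);
```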
Questions
Is this the correct way to get end-to-end HTTPS? And
How do I resolve the above error (returned by Node.js)?
Thanks
In the end I came to this conclusion: I don't need end-to-end HTTPS when my instances are in a private subnet, because:
Once HTTPS is terminated at the Load Balancer, the internal requests travel over HTTP but not over the public internet. These requests cannot be seen by anyone outside the AWS network.
The data I am transmitting is not overly sensitive (just emails and user preferences), so there is no compliance/regulatory reason to enforce end-to-end HTTPS in a private network.
There is a small performance cost to HTTPS, since each new connection requires a TLS handshake.
I have additional security via Security Groups, which only allow internal traffic originating from the Load Balancer.
There are many suggestions that would guide you to configure your application to ignore the certificate when connecting via HTTPS... but that defeats the whole point of HTTPS (a secure, encrypted connection). You may as well just use HTTP instead of doing this.
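For reference, those suggestions usually amount to one of these two lines in Node.js:

```ts
import * as https from "node:https";

// What NOT to do: either line disables certificate validation entirely,
// so anyone on the network path can impersonate the API.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; // global kill switch
const insecureAgent = new https.Agent({ rejectUnauthorized: false }); // per-agent
```

Either one leaves the connection encrypted but unauthenticated, which is exactly the "may as well use HTTP" situation described above.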
After much research and discussion with AWS, I think using HTTP over an internal network is secure enough for 99% of use cases, and it is pretty standard in a lot of setups. So unless you actually need end-to-end encryption for your use case, I would advise doing this instead.
Hope this helps.
Related
I have a MERN application with the frontend hosted on Netlify. I currently have the backend hosted at onrender.com; however, this is quite slow, so I was looking for something with faster load times.
I have set up an EC2 instance on AWS and it is much faster, but I am struggling to enable HTTPS traffic.
The current setup:
EC2 instance set up and backend running (I have run it locally over HTTP and it works fine).
AWS: the security group allows HTTPS traffic.
The issue is that when I try to connect over HTTPS, it does not work.
I have tried various things, including ACM certificates (I have a certificate for my domain) and creating load balancers that would direct traffic to my instance, but I don't seem to be succeeding. Admittedly, I don't fully understand exactly what I need to do here.
The outcome I want is simply to interact with the backend, which is on an AWS Ubuntu instance, from my frontend over HTTPS.
Any help would be greatly appreciated.
If you are going the Load Balancer route, it should be fairly simple.
Yes, it is a good idea to use ACM to provision the certificate for you.
Make sure that the Security Groups are well configured:
In your case the Load Balancer should accept traffic on ports 80 and 443.
The instance security group should be open on whatever port the instance is configured to listen on; that depends on your implementation.
In the target group, make sure that you have configured the target port correctly (that is the open EC2 port, where the instance receives traffic), and also make sure that the health check is configured correctly.
I attached a quick summary of how this little architecture should look.
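A rough sketch of that architecture in AWS CDK (TypeScript); the VPC lookup, certificate ARN, instance ID, port 3000, and /health path are all placeholders, not values from the question:

```ts
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as elbv2 from "aws-cdk-lib/aws-elasticloadbalancingv2";
import * as targets from "aws-cdk-lib/aws-elasticloadbalancingv2-targets";
import * as acm from "aws-cdk-lib/aws-certificatemanager";

export class BackendStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = ec2.Vpc.fromLookup(this, "Vpc", { isDefault: true }); // placeholder
    const cert = acm.Certificate.fromCertificateArn(
      this,
      "Cert",
      "arn:aws:acm:region:account:certificate/..." // your ACM certificate ARN
    );

    const alb = new elbv2.ApplicationLoadBalancer(this, "Alb", {
      vpc,
      internetFacing: true,
    });

    // 443 terminates TLS with the ACM certificate; 80 only redirects to 443.
    const httpsListener = alb.addListener("Https", {
      port: 443,
      certificates: [cert],
    });
    alb.addListener("Http", {
      port: 80,
      defaultAction: elbv2.ListenerAction.redirect({ protocol: "HTTPS", port: "443" }),
    });

    // Forward to the port the backend actually listens on, with a health check.
    httpsListener.addTargets("Backend", {
      port: 3000,
      protocol: elbv2.ApplicationProtocol.HTTP,
      targets: [new targets.InstanceIdTarget("i-0123456789abcdef0")],
      healthCheck: { path: "/health" },
    });
    // The instance's security group must still allow traffic from the ALB's
    // security group on port 3000, per the notes above.
  }
}
```

The same four pieces (listeners, certificate, target port, health check) can of course be clicked together in the console instead.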
Need some serious help here, thanks a lot in advance!
I need to deploy a scalable three-tier web application on AWS, and I am having some doubts/trouble understanding the best way to design the architecture.
NOTE: As per my understanding, all the backend requests are made from the browser, after the frontend server serves the HTML/CSS/JS to the user.
Let me show you what I have come up with so far:
Assuming the above 'note':
Cons (as per my understanding):
All the backend routes will be exposed to the outside world.
Even though the backend servers are in a private subnet, now that they're being accessed via an external load balancer, the API endpoints could be accessed by users.
How would we route a request from one load balancer to another? From what I have seen, you can only route a request to an EC2 instance registered in the target group.
To overcome the cons of the above approach, I came up with this architecture instead:
Pros (as per my understanding):
The backend routes are safe (in a way) because we have a way of connecting internally from the frontend to the backend servers (if required).
Cons:
If the request is made from the browser, the endpoints are again exposed.
Solution that I found online:
REAL BIG DOUBT IN THIS LAST ONE
This breaks all the logic of my understanding that all requests are made by the browser, from the user to the backend, because here the requests to the backend are routed FROM the frontend servers.
QUESTIONS
What if the backend request (say login) is made by the user from the browser?
How will this work out in such a case?
Seems like you have done some good work here.
Let me start by making things easy for you:
Users only interact with the Load Balancer: If you want to keep it simple and not break off your frontend asset serving to an external service like CloudFront (which you should, if you are starting out), you will be hosting the application only via EC2 instances (the application origin, or simply origin). Your requests would look something like this:
Users <--> ALB <--> EC2
Notice how users never interact with the EC2 instances directly; it's always via the Application Load Balancer (ALB).
To oversimplify things, this is how HTTP operates: a request is made to a resource at an IP, and the response is sent back from that same resource or IP. So, as in your diagram, a request will not be answered directly by the EC2 instance but rather relayed back via the ALB.
You don't need a NAT gateway: NAT gateways exist to let resources in a private subnet access the internet. In this case, unless you want your application to access the internet, you don't need one. Many large-scale applications are actually locked down in part by not keeping this resource at all.
You are still protecting the origin: Given that only the ALB can be accessed over the internet and everything else is internal, you can structure things here in any way that you want. You could have a few internal microservices that are used internally without ever being exposed to end users. Note that here the request never leaves the VPC.
You can read more about this and build a sample application via the official docs here or access AWS tutorials here.
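To make the first point concrete, here is a minimal sketch of what the EC2 side might look like (Express and port 3000 are assumptions, not something from the question):

```ts
import express from "express";

const app = express();

// The ALB terminates the client connection, so the TCP peer is always the
// load balancer. Trusting the first proxy hop lets Express recover the real
// client IP and scheme from the X-Forwarded-* headers the ALB injects.
app.set("trust proxy", 1);

app.get("/health", (_req, res) => res.send("ok")); // target-group health check

app.get("/whoami", (req, res) => {
  res.json({ clientIp: req.ip, scheme: req.protocol });
});

// Listen on the target-group port (3000 is an assumption).
app.listen(3000, () => console.log("listening on 3000"));
```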
To me, #3 is the correct solution because it does not expose /api to end users (since you mention "I DO NOT want the users to directly access the /api"). In #1, I don't think you could limit access to /api to only the front-end servers, since security groups work on the whole load balancer, not per-target.
Also, being an Internet-facing load balancer, any requests from the front-end servers to the load balancer in #1 will be referencing the load balancer via public IP addresses. This will cause a 1c/GB charge to go "out of" the VPC and then back in again.
Only #3 correctly refers to back-end resources via private IP addresses. The internal load balancer will be referenced via private IP addresses.
We're behind a corporate proxy/firewall that can only consume static IP rules, not FQDNs.
For our project, we need to access the Google Speech-to-Text API: https://speech.googleapis.com. Outside of the corporate network, we use a gRPC stream over HTTP/2 to do that.
The ideal scenario looks like:
Corporate network -> static IP in GCP -> forwarded gRPC stream to speech.googleapis.com
What we have tried is creating a global static external IP, but we failed when configuring the Load Balancer, as it can only connect to VMs, not APIs.
Alternatively, we were thinking of using the output of nslookup speech.googleapis.com as IP address ranges and updating them daily, though that seems pretty 'dirty'.
I'm aware we could configure a Compute Engine resource/VM and forward the traffic, but that doesn't seem like an elegant solution either. Preferably, we would achieve this with existing GCP networking components.
Many thanks for any pointers!
Google does not publish a CIDR block for you to use, and you will have daily grief trying to whitelist IP addresses. Most of Google's API services are fronted by the Global Frontend (GFE), which routes traffic using HTTP Host headers rather than IP addresses, so pinning requests to specific IPs will eventually cause routing to fail.
Trying to look up the IP addresses is also an issue. DNS does not have to return all IP addresses for name resolution in every call, so a lookup might return one set of addresses now and a different set an hour from now. This is one example of the grief you will cause yourself by whitelisting IP addresses.
Solution: Talk to your firewall vendor.
Found a solution thanks to clever networking engineers from Google, posting here for future reference:
You can use a CNAME in your internal DNS to point *.googleapis.com to private.googleapis.com. In public DNS, that name resolves to a small range of public IP addresses (199.36.153.8/30) that are not reachable from the public internet, only through a VPN tunnel or Cloud Interconnect.
So if setting up a VPN tunnel to a project in GCP is possible (and it should be quite easy; see https://cloud.google.com/vpn/docs/how-to/creating-static-vpns), then this should solve the problem.
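A quick way to sanity-check the DNS part from a machine inside the corporate network (a sketch using Node's resolver; it assumes the machine is using the internal DNS with the CNAME applied):

```ts
import { resolve4 } from "node:dns/promises";

// With the internal CNAME in place, Google API hostnames should resolve to
// the 199.36.153.8/30 range, which is only reachable over the tunnel.
resolve4("speech.googleapis.com").then((ips) => {
  console.log(ips); // expect addresses like 199.36.153.8 .. 199.36.153.11
});
```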
I have an Apache server running the front end (Angular), which relies on an API (Express) hosted on the same instance as Apache. I don't want my API open to the public yet, but I need access to it from my front end, which shares the same IP. Things I've tried:
Setting the API base URL to 'localhost' doesn't seem to work.
Adding a security rule in AWS Security Groups to allow connections only from the same IP (to itself) doesn't work.
Is there any workaround for this?
Connections to the same IP are always open by default. You may need to use the private IP of the EC2 instance as your API base URL (you know the port better than I do). CORS should also be enabled for that private IP.
First of all, using Angular as the front end means the API must be publicly accessible (you just need to implement security), because the server only delivers the UI to the user; it is the user's browser that calls the API, not the Angular server.
You can set up another API, deployed on the same server and URL as your UI, which serves as a gateway to your "private API"; the private API itself can then be managed using Security Groups in AWS.
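One way to implement that same-server setup (a sketch; the port and paths are assumptions) is to bind Express to the loopback interface and let Apache reverse-proxy to it, so the API is never exposed on a public port:

```ts
import express from "express";

const app = express();

app.get("/api/hello", (_req, res) => res.json({ hello: "world" }));

// Binding to 127.0.0.1 means the API accepts connections only from this
// machine. Apache on the same instance can then reverse-proxy the path,
// e.g. with: ProxyPass "/api" "http://127.0.0.1:3000/api"
// so the browser reaches the API through the site's own origin.
app.listen(3000, "127.0.0.1", () => console.log("API on 127.0.0.1:3000"));
```

Since the browser then talks to the same origin as the UI, this also sidesteps CORS entirely.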
I replaced ${IP} with 172.17.0.1 so it can connect to the same EC2 instance after restarting, and added a rule for inbound connections from the same Security Group.
I have a secure web API in the AWS cloud and I'm trying to figure out the best way to put it behind a load balancer without compromising security.
Right now, all communications are conventionally encrypted end-to-end. The API server has a Let's Encrypt certificate, which is used to protect all messages exchanged with clients. Unless the encryption is broken, nobody besides the server and its clients can view the raw contents of messages.
If I start using a load balancer and allow multiple instances of my server to run concurrently, I'll have to give up on Let's Encrypt and use centralized certificate management (e.g. ACM). AWS conveniently supports attaching ACM-generated certificates to load balancer HTTPS listeners, which is especially useful for automatic renewal. However, the load balancer would then strip the encryption layer, and all communications with the instances of my server would be decrypted from that point on.
I'm not too comfortable having my raw data traveling in a public cloud. Still, I'd welcome a second opinion on this.
My question therefore is: Is it considered secure to have load balancer strip HTTPS encryption layer and forward all traffic as HTTP to internal server instances?
Since I can guess the answer, I would appreciate any suggestions on how to deploy load balancing securely.
I consider it secure because each AWS VPC is isolated from the others.
The traffic of one VPC cannot be captured in another VPC. Of course, whether AWS's VPC technology itself is secure remains to be seen, as others have said.
Also check out the documentation from Elastic Beanstalk about secure end-to-end encryption. It says that:
Terminating secure connections at the load balancer and using HTTP on the backend may be sufficient for your application. Network traffic between AWS resources cannot be listened to by instances that are not part of the connection, even if they are running under the same account.
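If you do terminate TLS at the load balancer, the backend can at least enforce that clients used HTTPS on the front leg, via the X-Forwarded-Proto header the ELB/ALB injects. A minimal sketch, assuming an Express backend on port 8080 (both assumptions, not details from the question):

```ts
import express from "express";

const app = express();
app.set("trust proxy", 1); // the load balancer is the one trusted proxy hop

// Redirect anything that reached the load balancer as plain HTTP, based on
// the X-Forwarded-Proto header the balancer sets on each request.
app.use((req, res, next) => {
  if (req.protocol !== "https") {
    return res.redirect(301, `https://${req.hostname}${req.originalUrl}`);
  }
  next();
});

app.get("/", (_req, res) => res.send("ok"));
app.listen(8080); // plain-HTTP backend port behind the balancer
```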