GCP: HTTPS termination. Why is the load balancer so expensive?

I want to use a GCP load balancer to terminate HTTPS and automatically manage HTTPS certificate renewal with Let's Encrypt.
The pricing calculator gives me $21.90/month for a single rule. Is this how much it would cost to do HTTPS termination for a single domain? Are there cheaper managed options on GCP?

Before looking at the price, or at another solution, look at what you need. Are you aware of the Global Load Balancer's capabilities?
It gives you a single anycast IP reachable all over the globe and routes each request to the region closest to your user to reduce latency. If a region is down, or your backend is at capacity (health checks failing), the request is routed to the next closest region.
It lets you rewrite your URLs, manage SSL certificates, cache your files in a CDN, scale with your traffic, deploy a security layer on top of it (such as IAP), and absorb DDoS attacks without impacting your backend.
And the price is for 5 forwarding rules, not just one.
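For the certificate-management part specifically, a Google-managed certificate on the global load balancer takes the place of Let's Encrypt and renews automatically. A minimal sketch of the HTTPS front end, assuming a backend service named web-backend already exists (all names are illustrative):

    # Google-managed certificate, provisioned and renewed automatically:
    gcloud compute ssl-certificates create web-cert \
        --domains=www.example.com --global

    # URL map, HTTPS proxy, and the single forwarding rule the pricing covers:
    gcloud compute url-maps create web-map --default-service=web-backend
    gcloud compute target-https-proxies create web-proxy \
        --url-map=web-map --ssl-certificates=web-cert
    gcloud compute forwarding-rules create web-https-rule \
        --global --target-https-proxy=web-proxy --ports=443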
Now, of course, you can do it differently.
You can use a regional solution. These are often free or affordable, but you don't get all the Global Load Balancer features.
If your backend is on Cloud Run or App Engine, HTTPS is already terminated and managed for you. Cloud Endpoints is a solution for Cloud Functions (and other API endpoints).
You can deploy and set up your own nginx with your own SSL certificate on a Compute Engine instance.
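For that self-managed route, certbot can handle the Let's Encrypt issuance and renewal. A sketch, assuming a Debian-based image where nginx already serves your domain:

    # Obtain a Let's Encrypt certificate and patch the nginx config:
    sudo apt-get install -y certbot python3-certbot-nginx
    sudo certbot --nginx -d www.example.com

    # certbot installs a timer that renews automatically; verify it with:
    sudo certbot renew --dry-run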
If you only want to serve static files, have a look at Firebase Hosting.

Related

Automatic redirection to static error site if web server is unavailable

I was reading an article about a warm recoverable web server with Compute Engine and Cloud Storage.
The article includes diagrams of the normal scenario and of the failover scenario.
The documentation states:
"In an outage, you update the external HTTP(S) Load Balancing configuration and fail over to a static site in Cloud Storage."
Can the change of external HTTP(S) Load Balancing configuration occur automatically based on some health checks? For example, if load balancer detects that website deployed on compute engine stopped responding, it automatically redirects the traffic to static site in Cloud Storage. Once web server starts working again, load balancer automatically redirects requests back to it. How can I achieve this?
Failover can be handled in many ways: at the DNS level (Cloud DNS, Cloudflare failover), via cloud orchestration (Deployment Manager, Terraform), or at the client side (a CDN approach). Since you want it at the load balancer level, let's see how that works.
As you can see in the chart in the GCP docs, traffic is split between TCP, UDP, HTTP, and HTTPS at the outermost load balancer level, assuming an external load balancer is your use case in the first place.
An internal HTTP(S) load balancer can fail over to healthy instances (GCE, GKE, GAE, etc.) in the same region. For external traffic you need an external load balancer as well. There are small differences between the external load balancer types (Global, Classic in the Premium tier, regional); check the docs for the details. Now let's see how a load balancer works with static sites hosted in Cloud Storage (GCP's S3 equivalent).
As described in the GCP docs, that is how a static site behind Cloud CDN is created. You then need further configuration to redirect traffic between your instances and the CDN; see link1 and link2.
The same docs also cover timeouts, fault handling, etc.
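The external HTTP(S) load balancer will not flip to a Cloud Storage backend on its own, but the manual switch the docs describe is small enough to automate, for example from a Cloud Function or a cron job driven by your own health probe. A sketch with illustrative names, assuming a URL map web-map and an instance backend web-backend:

    # One-time setup: a backend bucket pointing at the static failover site
    gcloud compute backend-buckets create static-failover \
        --gcs-bucket-name=my-static-site --enable-cdn

    # On outage: point the URL map's default at the bucket...
    gcloud compute url-maps set-default-service web-map \
        --default-backend-bucket=static-failover

    # ...and back to the instances once the web server recovers:
    gcloud compute url-maps set-default-service web-map \
        --default-service=web-backend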
Personally, I think Cloudflare failover may be the better option.

Which AWS service can I use to put SSL encryption in front of an instance in my VPN?

I have an instance in a VPN that some external consultants are working on. I need to expose the app they are developing to the internet, but I don't want them to have access to the private key of our SSL cert.
I am thinking I can put the SSL cert into ACM and then use some AWS component in front of the instance to handle the client connections and TLS encryption. I believe that an Application Load Balancer can do this - will this work, and is it the best and cheapest option? I don't actually need load balancing just yet, but may do in the future.
Yes, a load balancer is one of the options.
Another choice is using a CDN, CloudFront, for the SSL; you simply set the origin to the EC2 instance.
Depending on your use case, you need to consider the right caching policy (if applicable), though.
CloudFront charges by bandwidth, while a load balancer charges by the hour, so you need to consider the type of workload as well.
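Either way, the private key never leaves ACM, since ACM certificates are not exportable, which covers the consultant concern. A sketch of the load balancer option with placeholder ARNs, assuming a target group containing the instance already exists:

    # HTTPS listener on an existing ALB, terminating TLS with the ACM cert
    # and forwarding to the target group that contains the instance:
    aws elbv2 create-listener \
        --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/... \
        --protocol HTTPS --port 443 \
        --certificates CertificateArn=arn:aws:acm:...:certificate/... \
        --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/my-app/...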

How do I enable HTTPS for my Elastic Beanstalk Java application?

My instance is a single instance, no load balancer.
I cannot seem to add a load balancer to my existing app instance.
Other recommendations regarding Elastic Load Balancer are obsolete - there seems to be no such service in AWS.
I do not need caching or edge delivery - my application is entirely transactional APIs, so probably don't need CloudFront.
I have a domain name and a name server (external to AWS). I have a certificate (generated in Certificate Manager).
How do I enable HTTPS for my Elastic Beanstalk Java application?
CloudFront is the easiest and cheapest way to add SSL termination, because AWS will handle it all for you through its integration with Certificate Manager.
If you add an ELB, you have to run it 24/7, and it will double the cost of a single-instance server.
If you want to support SSL termination on the server itself, you're going to have to do that yourself (using your web container, such as Apache, nginx, Tomcat, or whatever you're running). It's not easy to set up.
Even if you don't need caching, CloudFront is going to be worth it just for handling your certificate (which is as simple as selecting the certificate from a drop-down).
I ended up using CloudFront.
That created a problem: cookies were not being passed through.
I created a custom cache policy to allow the cookies, and in doing so I also set the caching TTLs very low. This served my purposes.
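For reference, a sketch of such a cache policy (name and TTL values illustrative); CookieBehavior "all" both forwards the cookies and includes them in the cache key, and the low TTLs keep responses effectively uncached:

    cat > cookie-policy.json <<'EOF'
    {
      "Name": "allow-cookies-low-ttl",
      "MinTTL": 0,
      "DefaultTTL": 1,
      "MaxTTL": 5,
      "ParametersInCacheKeyAndForwardedToOrigin": {
        "EnableAcceptEncodingGzip": true,
        "EnableAcceptEncodingBrotli": true,
        "HeadersConfig": { "HeaderBehavior": "none" },
        "CookiesConfig": { "CookieBehavior": "all" },
        "QueryStringsConfig": { "QueryStringBehavior": "all" }
      }
    }
    EOF
    aws cloudfront create-cache-policy --cache-policy-config file://cookie-policy.json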

What is the benefit of adding AWS Cloudfront on top of AWS Application LB?

I attended an AWS training, and they explained that a good practice is to cache all dynamic content via CloudFront, setting TTL to 0, putting it in front of the Load Balancer. So it would look like:
Route 53 -> CloudFront -> Application LB
I cannot see any advantage of this architecture over pointing directly (for dynamic content only):
Route 53 -> Application LB
I do not see the point, since CloudFront will always send all traffic to the LB, so you end up with:
Two HTTPS negotiations (client <-> CloudFront, and CloudFront <-> LB)
No caching at all (it is dynamic content; it should not be cached, since that is what "dynamic" means)
No client IP at the LB, since it will only see the CloudFront IPs (I know this can be fixed, but then you will have issues with the next bullet)
As extra work, you need to keep updating your LB security groups to match the CloudFront IPs (for your region), since I guess you want to accept traffic only from your CloudFront distribution and not directly on the LB's public endpoint.
So, probably, I am missing something important about this Route 53 -> CloudFront -> Application LB architecture.
Any ideas?
Thanks!
Here are some of the benefits of having CloudFront on top of your ALB:
For a web application or other content that's served by an ALB in Elastic Load Balancing, CloudFront can cache objects and serve them directly to users (viewers), reducing the load on your ALB.
CloudFront can also help to reduce latency and even absorb some distributed denial of service (DDoS) attacks. However, if users can bypass CloudFront and access your ALB directly, you don't get these benefits. But you can configure Amazon CloudFront and your Application Load Balancer to prevent users from directly accessing the Application Load Balancer (Doc); there is a sketch of that configuration after this list.
Outbound data transfer charges from AWS services to CloudFront are $0/GB. The cost coming out of CloudFront is typically half a cent less per GB than data transfer for the same tier and Region. What this means is that you can take advantage of the additional performance and security of CloudFront by putting it in front of your ALB, AWS Elastic Beanstalk, S3, and other AWS resources delivering HTTP(S) objects for next to no additional cost (Doc).
The CloudFront global network, which consists of over 100 points of presence (POPs), reduces the time to establish viewer-facing connections because the physical distance to the viewer is shortened. This reduces overall latency for serving both static and dynamic content (Doc).
CloudFront maintains a pool of persistent connections to the origin, reducing the overhead of repeatedly establishing new connections. Over these connections, traffic between CloudFront and AWS origins is routed over a private backbone network for reliability and performance. This reduces overall latency for serving both static and dynamic content (Doc).
You can use geo restriction, also known as geo blocking, to prevent users in specific geographic locations from accessing content that you're distributing through a CloudFront distribution (Doc).
In other words, you can use CloudFront's features to add capabilities in front of your origin (ALB, Elastic Beanstalk, S3, EC2), but if you don't need these features, it is better not to add this layer to your architecture.
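As for locking the ALB down to CloudFront, the documented approach is to have CloudFront inject a secret custom header on origin requests and have the ALB forward only requests that carry it. A sketch with placeholder ARNs and an illustrative header name:

    # Forward only requests carrying the header CloudFront was configured to add:
    aws elbv2 create-rule \
        --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
        --priority 1 \
        --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"X-Origin-Verify","Values":["some-long-random-secret"]}}]' \
        --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/my-app/...

    # Make the listener's default action reject everything else:
    aws elbv2 modify-listener \
        --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
        --default-actions 'Type=fixed-response,FixedResponseConfig={StatusCode=403,ContentType=text/plain,MessageBody=Forbidden}'

On the security-group point from the question: AWS also publishes a managed prefix list for CloudFront's origin-facing addresses (com.amazonaws.global.cloudfront.origin-facing), which you can reference in the ALB's security group instead of maintaining the IP ranges by hand.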
CloudFront enables you to deliver content faster because CloudFront edge locations are closer to the requesting user and are connected to the AWS Regions through the AWS backbone network.
You can terminate SSL at CloudFront and have the load balancer listen on port 80.
CloudFront also lets you apply geo-location restrictions easily, in a couple of clicks.
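Both of those points map to a few fields in the distribution config. A sketch of the relevant fragments only (illustrative values, not a complete config):

    cat > dist-fragment.json <<'EOF'
    {
      "Origins": { "Quantity": 1, "Items": [ {
        "Id": "my-alb-origin",
        "DomainName": "my-alb-123456.us-east-1.elb.amazonaws.com",
        "CustomOriginConfig": {
          "HTTPPort": 80, "HTTPSPort": 443,
          "OriginProtocolPolicy": "http-only"
        }
      } ] },
      "DefaultCacheBehavior": { "ViewerProtocolPolicy": "redirect-to-https" },
      "Restrictions": {
        "GeoRestriction": { "RestrictionType": "blacklist", "Quantity": 1, "Items": ["RU"] }
      }
    }
    EOF

Viewers are forced onto HTTPS, CloudFront talks plain HTTP to the balancer on port 80, and the geo restriction blocks the listed countries.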
I think another reason you may want to use CloudFront in front of an ALB is a better experience with WAF (if you are already using WAF, or planning to, of course).
Even though WAF is available for both ALB and CloudFront, they use different scopes of the service, because CloudFront is a global service while an ALB exists per region.
That may mean more complex management and duplicated ACLs (and probably more cost).
CloudFront is really an amazing CDN (content delivery network) service, like Akamai and others. If your web application has lots of static content, such as media files or even your static assets, you can put it into an S3 bucket (another object storage service by AWS).
Once your static content is in the S3 bucket, you can create a CloudFront distribution with that bucket as the origin. This caches your content across AWS's many edge locations, so delivery to clients becomes fast.
Now, about the load balancer: it has its own purpose. Imagine you are running an application that gets unpredictable traffic; the load balancer (an Application or Classic Load Balancer) accepts requests from Route 53 and passes them on to your servers.
For high availability and scalability, we use an architecture like this:
Create an Auto Scaling group of EC2 instances, put them behind a load balancer, and scale per your scaling policy, for example: if CPU or memory utilization goes above 70%, launch another instance.
You can also set a routing policy on the load balancer to distribute traffic to your EC2 servers, for example round robin or based on availability.
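A sketch of that pattern with the CLI (illustrative names and ARNs); a target-tracking policy is the closest modern equivalent of the "above 70%, launch another instance" rule, since it keeps average CPU near the target:

    # Auto Scaling group registered with the ALB's target group:
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --launch-template LaunchTemplateName=web-template \
        --min-size 2 --max-size 10 \
        --vpc-zone-identifier "subnet-aaa,subnet-bbb" \
        --target-group-arns arn:aws:elasticloadbalancing:...:targetgroup/my-app/...

    # Scale out/in to hold average CPU around 70%:
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name web-asg \
        --policy-name cpu-70 \
        --policy-type TargetTrackingScaling \
        --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":70.0}'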
These are the best practices AWS recommends for a fault-tolerant and highly available architecture. I hope this helps you decide.
Please let me know if I can help with more suggestions.
Thanks, and happy learning!

AWS ALB per ECS Service vs. multiple services per ALB for a microservices architecture

Initially I thought that multiple services per ALB listener with different path patterns to distribute API calls appropriately was the obvious choice. In terms of health checks though (if one of those services goes down), I don't know of a smart way to divert traffic for just that service to a different region.
If I have an active-active setup with weighted Route 53 records that fail over on a health check, I don't see any other solution than to either cut off that entire ALB's traffic and divert it to another region, or ignore the one down service and continue to send traffic to the partially failing ALB.
Having a one-to-one mapping of ALBs to services fixes this problem, but it adds additional overhead in terms of cost and complexity.
What is the recommended pattern to follow for an active active microservices architecture?
If all of the services are accessed under a single hostname then the DNS of course must point to exactly one place, so rerouting is fundamentally an all-or-nothing prospect.
However, there's an effective workaround.
Configure a "secret" hostname for each service. ("Secret" in the sense that the client does not need to be aware of it.) We'll call these "service endpoints." The purpose of these hostnames is for routing requests to each service... svc1.api.example.com, svc2.api.example.com, etc.
Configure each of these DNS records to point to the primary or failover load balancer, with Route 53 entries and a Route 53 health check that specifically checks that one service for health at each balancer.
What you have at this point is a hostname for each service that will have a DNS answer that correctly points to the preferred, healthy endpoint.
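A sketch of one such pair of records for a single service endpoint (zone IDs, health check ID, and balancer names are all illustrative); the PRIMARY record is tied to a health check that probes svc1 specifically at the primary balancer:

    aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE123 \
        --change-batch '{
      "Changes": [
        { "Action": "UPSERT", "ResourceRecordSet": {
            "Name": "svc1.api.example.com", "Type": "A",
            "SetIdentifier": "svc1-primary", "Failover": "PRIMARY",
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "AliasTarget": { "HostedZoneId": "ZALBZONE1",
                             "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com",
                             "EvaluateTargetHealth": false } } },
        { "Action": "UPSERT", "ResourceRecordSet": {
            "Name": "svc1.api.example.com", "Type": "A",
            "SetIdentifier": "svc1-secondary", "Failover": "SECONDARY",
            "AliasTarget": { "HostedZoneId": "ZALBZONE2",
                             "DNSName": "failover-alb-456.us-west-2.elb.amazonaws.com",
                             "EvaluateTargetHealth": true } } }
      ]}'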
What you don't yet have is a way to ensure that client requests go to the right place.
For this, create a CloudFront distribution, with your public API hostname as an Alternate Domain Name. Define one CloudFront Origin for each of these service endpoints (leave "Origin Path" blank), then create a Cache Behavior for each service with the appropriate path pattern e.g. /api/svc1* and select the matching origin. Whitelist any HTTP headers that your API needs to see.
Finally, point DNS for your main hostname to CloudFront.
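In distribution-config terms, that is one origin per service endpoint plus one cache behavior per path pattern. A sketch of the relevant fragments (illustrative names, not a complete config):

    cat > api-distribution-fragment.json <<'EOF'
    {
      "Aliases": { "Quantity": 1, "Items": ["api.example.com"] },
      "Origins": { "Quantity": 2, "Items": [
        { "Id": "svc1", "DomainName": "svc1.api.example.com",
          "CustomOriginConfig": { "HTTPPort": 80, "HTTPSPort": 443,
                                  "OriginProtocolPolicy": "https-only" } },
        { "Id": "svc2", "DomainName": "svc2.api.example.com",
          "CustomOriginConfig": { "HTTPPort": 80, "HTTPSPort": 443,
                                  "OriginProtocolPolicy": "https-only" } }
      ] },
      "CacheBehaviors": { "Quantity": 2, "Items": [
        { "PathPattern": "/api/svc1*", "TargetOriginId": "svc1",
          "ViewerProtocolPolicy": "https-only" },
        { "PathPattern": "/api/svc2*", "TargetOriginId": "svc2",
          "ViewerProtocolPolicy": "https-only" }
      ] }
    }
    EOF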
The clients will automatically connect to their nearest CloudFront edge location, and CloudFront -- after matching the path pattern to discover where to send the request -- will check the DNS for that service-specific endpoint and forward the request to the appropriate balancer.
CloudFront, in this application is not a "CDN" per se, but rather a globally-distributed reverse proxy -- logically, a single destination for all your traffic, so no failover configuration is required on the main hostname for the API... so no more all-or-nothing routing. On the back-side of CloudFront, those service endpoint hostnames ensure that requests are routed to a healthy destination based on the Route 53 health checks. CloudFront respects the TTL of these DNS records and will not cache DNS responses that it shouldn't.