Equivalent of the AWS Route 53 service in GCP (Cloud DNS?) - google-cloud-platform

With AWS Route 53 I can set up a record that sends 50% of traffic to one URL and 50% to another, each pointing to a different version of the service. Is there something similar in GCP (Cloud DNS)?
The setup looks like the following:
100% traffic -> service.com -> 50% -> serviceA.com (version-featureRF)
                            -> 50% -> serviceB.com (version-featureCNN)
Update:
I had a look at
https://stackoverflow.com/a/32617722/3952994, but it doesn't explain how to set it up.

With Cloud DNS it is not possible to set up the configuration you mention the way it is with AWS Route 53.
To balance the load between different services you can use Cloud Load Balancing instead.
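For example, a 50/50 split can be expressed as weighted backend services in the load balancer's URL map. A minimal sketch, assuming two backend services named service-a and service-b already exist (all names are placeholders, and weighted routing requires a load balancer tier with advanced traffic management support):

```shell
# Sketch (untested): 50/50 traffic split via weighted backend services
# in a GCP URL map. "service-a"/"service-b" are assumed, pre-existing
# backend services; "split-map" is a placeholder name.
cat > url-map.yaml <<'EOF'
name: split-map
defaultRouteAction:
  weightedBackendServices:
  - backendService: global/backendServices/service-a
    weight: 50
  - backendService: global/backendServices/service-b
    weight: 50
EOF
gcloud compute url-maps import split-map --source=url-map.yaml --global
```

The split then happens at the HTTP layer on the load balancer, not in DNS, so clients keep resolving a single IP.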

Related

How can I redirect multiple paths in a domain to different AWS load balancers?

I want to host all my REST services on one single domain in AWS. (Each REST service is hosted in AWS Elastic Beanstalk, using EC2 and load balancers, so that each service can autoscale depending on usage.)
I would like to achieve something like this:
https://api.foo.com/product-service -> product REST service
https://api.foo.com/attribute-service -> attribute REST service
https://api.foo.com/login-service -> login REST service
...
But I'm pretty new to AWS, so I'm not sure how I can achieve this. Do you guys have any ideas?
You can't do this from Route 53, as R53 is not aware of any URL paths. If you want to use R53 for this, your domains must be:
product-service.api.foo.com
attribute-service.api.foo.com
login-service.api.foo.com
This would be the easiest way to achieve it, as you would just create alias records pointing to the different Elastic Beanstalk environments. Otherwise, I think you have to add CloudFront to your setup and associate different origins with your different API servers based on the path.
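One of those alias records can be sketched as a Route 53 change batch (untested; the hosted zone IDs, environment DNS name, and region are placeholders):

```shell
# Sketch (untested): alias record pointing product-service.api.foo.com at
# an Elastic Beanstalk environment. The AliasTarget HostedZoneId must be
# the EB hosted zone ID for the environment's region; all values here are
# placeholders.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "product-service.api.foo.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z117KPS5GTRQ2G",
        "DNSName": "product-env.us-east-1.elasticbeanstalk.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
aws route53 change-resource-record-sets \
  --hosted-zone-id ZEXAMPLE123 \
  --change-batch file://change-batch.json
```

Repeat with one record per service subdomain.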

How to setup a Google Cloud load balancer to allow pointing domains with cname records?

I have recently started exploring the Google Cloud Platform Cloud Load Balancer and Cloud CDN products.
I am interested in setting up a load balancer to accept requests from multiple customer-pointed domains and map to an internal service.
Currently, I am creating multiple front-ends for the load balancer which feature a single domain and one or more SSL certificates. This creates a new ephemeral IP address per front-end that customer domains can be pointed to via A records.
Instead, I would like to allow customers to point their domains to my service using CNAME records.
eg. demo.customerdomain.com -> CNAME service.mydomain.com.
Can anyone help me figure out the best way to do this?
I am not sure what benefits/risks this has in terms of security or caching, so if anyone has any input on that, I would be interested to hear it.
Thanks,
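One common approach is to reserve a single static global IP for the load balancer's forwarding rule, publish one stable hostname pointing at it, and let customers CNAME to that hostname. A sketch (untested; the address, rule, and proxy names are placeholders, and the HTTPS target proxy is assumed to already exist):

```shell
# Sketch (untested): stable IP for the LB front-end so a single hostname
# can be the CNAME target. "my-https-proxy" is an assumed existing proxy.
gcloud compute addresses create lb-ip --global
gcloud compute forwarding-rules create https-fr \
  --global \
  --address=lb-ip \
  --target-https-proxy=my-https-proxy \
  --ports=443
# Then create an A record: service.mydomain.com -> the reserved IP.
# Customers add: demo.customerdomain.com. CNAME service.mydomain.com.
```

Note that serving HTTPS for customer-owned hostnames still requires certificates covering those hostnames on your side.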

Splitting READ and WRITE traffic using Route 53

I have an API which is deployed on GCP (GKE + external LB) and AWS (EKS + ELB). DNS resolution is via Route 53.
Can Route 53 split the incoming traffic so that all READ operations (GET) go to GCP and all writes (PUT/POST, etc.) go to AWS?
Basically something like :
read.domain.com going to an external ipv4 address on gcp
write.domain.com going to AWS ELB
Thanks.
You can set up read.domain.com to resolve to the GCP IPs by just creating A records for it, and use alias records to point write.domain.com to your ELB.
What you can't do is DNS routing based on the HTTP method (PUT/POST/GET/...); that happens at another layer of the network stack, and DNS has no concept of it.
DNS resolves names to IP addresses (Layer 3 information), while HTTP methods only exist at the application layer.
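The DNS side of that split can be sketched as a single Route 53 change batch (untested; the zone IDs, IP address, and ELB DNS name are all placeholders):

```shell
# Sketch (untested): plain A record for the GCP side, alias record for the
# ELB side. The AliasTarget HostedZoneId must match the ELB's region; all
# values here are placeholders.
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "read.domain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "write.domain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-elb-123456.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets \
  --hosted-zone-id ZEXAMPLE123 \
  --change-batch file://change-batch.json
```

The GET-vs-PUT/POST distinction itself has to be enforced in the clients (or a proxy), since DNS only sees hostnames.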

Configuring Cloud Run services and GCS Buckets with a load balancer

I would like to set up an HTTPS load balancer to serve static content from a storage bucket and also route to a number of Cloud Run services.
The setup I am trying to achieve looks like the following:
// prod
api.example.com/serviceA -> cloud run: serviceA
api.example.com/serviceB -> cloud run: serviceB
cdn.example.com/cat.jpg -> storage bucket: cats
// dev
api-dev.example.com/serviceA -> cloud run: serviceA
api-dev.example.com/serviceB -> cloud run: serviceB
cdn-dev.example.com/cat.jpg -> storage bucket: cats-dev
The dev & prod environments are separated by projects in my case.
I have followed this guide as to how to setup this configuration.
However, I am unable to resolve the various services. It is not 100% clear how the following elements interact:
url map
url mask
custom domain mapping in Cloud Run itself vs. at the load balancer level
Please help!
Sure, it can be confusing!
URL map: it's plugged into the load balancer to route a URL to a backend (bucket, (un)managed instance group (MIG), or network endpoint group (NEG)). For example, /static/ is routed to a bucket and everything else to a serverless NEG.
URL mask: it's applied on the serverless NEG. You define a template on the URL to extract the service name (and tag) from the URL itself. It requires a dependency between your site structure and your Cloud Run service naming.
Summary
The URL map is the first routing pass, to the serverless NEG. The serverless NEG then applies the URL mask to route the request to the correct service.
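Putting that together, a minimal gcloud sketch (untested; all names, the region, and the bucket are placeholders, and the URL mask assumes the Cloud Run service name appears as the first path segment, as in api.example.com/serviceA):

```shell
# Sketch (untested): serverless NEG with a URL mask extracting the Cloud
# Run service name from the path. All names/regions are placeholders.
gcloud compute network-endpoint-groups create api-neg \
  --region=us-central1 \
  --network-endpoint-type=serverless \
  --cloud-run-url-mask="/<service>"
gcloud compute backend-services create api-backend --global
gcloud compute backend-services add-backend api-backend --global \
  --network-endpoint-group=api-neg \
  --network-endpoint-group-region=us-central1
# URL map: bucket as the default backend, api.example.com host routed
# to the serverless NEG's backend service.
gcloud compute url-maps create lb-map --default-backend-bucket=cats-bucket
gcloud compute url-maps add-path-matcher lb-map \
  --path-matcher-name=api-matcher \
  --default-service=api-backend \
  --new-hosts=api.example.com
```

A second load balancer (or a second host rule and NEG) in the dev project would cover the api-dev/cdn-dev hostnames, since the environments are project-separated.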
Custom domain mapping
On a single Cloud Run service you can map a custom domain to reach it directly. In this case there is no load balancing; you reach only that one service deployed on Cloud Run.
I hope that's clear!
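For that direct mapping, a one-line sketch (untested; the service name, domain, and region are placeholders, and domain mappings are a beta gcloud surface that is not available in every region):

```shell
# Sketch (untested): map a custom domain straight to one Cloud Run
# service, bypassing any load balancer. All values are placeholders.
gcloud beta run domain-mappings create \
  --service=serviceA \
  --domain=service-a.example.com \
  --region=us-central1
```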

GCP: HTTPS termination. Why is the load balancer so expensive?

I want to use a GCP load balancer to terminate HTTPS and automatically manage HTTPS certificate renewal with Let's Encrypt.
The pricing calculator gives me $21.90/month for a single rule. Is this how much it would cost to do HTTPS termination for a single domain? Are there cheaper managed options on GCP?
Before looking at the price, or at another solution, look at what you need. Are you aware of the Global Load Balancer's capabilities?
It offers you a single IP address reachable from all over the globe and routes each request to the region closest to the user, reducing latency. If a region is down, or your backend is at capacity (health checks failing), the request is routed to the next closest region.
It allows you to rewrite your URLs, manage SSL certificates, cache your files in a CDN, scale with your traffic, deploy a security layer on top of it (like IAP), and absorb DDoS attacks without impacting your backends.
And the price is for 5 forwarding rules, not only one.
Now, of course, you can do differently.
You can use a regional solution. Such solutions are often free or affordable, but you don't get all the Global Load Balancer features.
If your backend is on Cloud Run or App Engine, Cloud Endpoints is a solution, as it is for Cloud Functions (and other API endpoints).
You can deploy and set up your own nginx with your SSL certificate on a Compute Engine instance.
If you want to serve only static files, you can have a look at Firebase Hosting.
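The nginx-on-Compute-Engine option is roughly this much configuration (untested sketch; the domain, certificate paths, and backend port are placeholders, e.g. certificates issued by certbot/Let's Encrypt):

```shell
# Sketch (untested): minimal nginx TLS termination on a Compute Engine
# VM, proxying to a local backend on port 8080. All paths/names are
# placeholders; run as root on the VM.
cat > /etc/nginx/conf.d/tls.conf <<'EOF'
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
EOF
nginx -s reload
```

The trade-off is that you now own certificate renewal, scaling, and availability of that single VM yourself.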