Dynamic SSL allocation in GCP HTTP(S) Layer 7 load balancer - google-cloud-platform

I'm exploring GCP and I love the way it lets developers play with such costly infrastructure. I've learnt a lot so far and I'm no longer a beginner, but I have a case for which I can't find docs or examples, or I might be thinking in the wrong direction.
I want to build an auto-scaling hosting solution where users can:
Create an account
Create multiple websites [these websites are basically templates where users can define certain fields and the website is rendered in a specific manner | users are not allowed to upload files, just enter some data]
Connect a domain to a website [by putting an 'A' record DNS entry on their domain]
After that an SSL certificate is provisioned automatically by the platform and the website is up and running [somewhat like Firebase].
I could easily create such a project on one server with the following configuration [skipping simple steps like user auth etc.]:
I use Ubuntu 16.04 as my machine image with 4 GB RAM and a 10 GB persistent disk
Then I install nvm [a tool to manage Node.js versions]
After that I install a specific version of Node.js using nvm
I have written a simple JavaScript package in which I use an Express server to respond to client requests with some HTML
For managing SSL I use Let's Encrypt's certbot package
I use pm2 to run the JavaScript file as a service in the background
After accomplishing this I could see everything works the way I want it to.
Then I started exploring GCP's load balancers. There I learnt about the layer 4 and layer 7 LBs, and I implemented some hello-world tests [using startup scripts] in all possible configurations like:
Layer 7 HTTP
Layer 7 HTTPS
Layer 4 internal TCP
Layer 4 internal SSL
Here is the main problem I'm facing:
I can't find a way to dynamically allocate an SSL certificate for an incoming request to the load balancer.
In my case requests might be coming from any domain, so the GCP load balancer must have some sort of configuration to provision SSL for a specific domain [I have read that it can allocate an SSL certificate for up to 100 domains, but how could I automate things]. Or could there be a way that instead of requests being proxied [the LB generating a new request to the internal servers], requests are just redirected, so that the internal servers can handle the SSL management themselves?
I might be wrong somewhere in my understanding of the concepts. Please help me solve the problem. I want to build a Firebase Hosting clone on my own. Any kind of response is welcome 🙏🙏🙏

One way to do it would be to update your JS script to generate a Google-managed certificate for each new domain via gcloud:
gcloud compute ssl-certificates create CERTIFICATE_NAME \
--description=DESCRIPTION \
--domains=DOMAIN_LIST \
--global
and then apply it to the load balancer:
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
--ssl-certificates SSL_CERTIFICATE_LIST \
--global-ssl-certificates \
--global
Please be aware that it may take anywhere from 5 to 20 minutes for the Load Balancer to start using new certificates.
You can find more information here.
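For example, here is a rough sketch of automating those two gcloud calls whenever a user connects a new domain. It is written in Python for brevity (a Node script would shell out the same way via child_process); the certificate, proxy and domain names are hypothetical, and it assumes the gcloud CLI is installed and authenticated on the machine running it. Note that updating the target proxy replaces its whole certificate list, so the existing certificates have to be passed along too.

import subprocess

def add_managed_cert(domain, cert_name, target_proxy, existing_certs):
    # 1. Create a Google-managed certificate for the new domain.
    subprocess.run(
        ["gcloud", "compute", "ssl-certificates", "create", cert_name,
         "--domains=" + domain, "--global"],
        check=True,
    )
    # 2. Attach it to the HTTPS target proxy, keeping the certificates already in use.
    subprocess.run(
        ["gcloud", "compute", "target-https-proxies", "update", target_proxy,
         "--ssl-certificates", ",".join(existing_certs + [cert_name]),
         "--global-ssl-certificates", "--global"],
        check=True,
    )

# Hypothetical usage:
# add_managed_cert("shop.customer-a.com", "cert-shop-customer-a-com",
#                  "my-target-proxy", existing_certs=["cert-main"])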

Related

Traefik Best Practices/Capabilities For Dynamic Vanity Domain Certificates

I'm looking for guidance on the proper tools/tech to accomplish what I assume is a fairly common need.
If there exists a web service, https://www.ExampleSaasWebService.com/, and customers can add vanity domains/subdomains to white-label or resell the service and replace the domain name with their own, there needs to be a reverse proxy to terminate the vanity domains' TLS traffic and route it to the statically defined (HTTPS) back-end service on the non-vanity original domain (there is essentially one "back-end" server somewhere else on the internet, not on the local network, that accepts all incoming traffic no matter the incoming domain). Essentially:
"Customer A" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from example.customerA.com.
"Customer B" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from customerB.com and www.customerB.com.
etc...
I (surprisingly) haven't found anything that does this out of the box, but looking at Traefik (2.x) I'm seeing some promising capabilities and it seems like the most capable tool to accomplish this. Primarily because of the Let's Encrypt integration and the ability to reconfigure without a restart of the service.
I initially considered AWS's native certificate management and load balancing, but I see there is a limit of ~25 certificates per load balancer which seems like a non-starter. Presumably there could be thousands of vanity domains in place at any time.
Some of my Traefik specific questions:
Am I correct in understanding that you can get away without explicitly provisioning a generated list of vanity domains to produce TLS certificates for in the config files? They can be determined on the fly and provisioned from Let's Encrypt based on the SNI/headers of the incoming requests?
E.g. If a request comes to www.customerZ.com and there is not yet a certificate for that domain name, one can be generated on the fly?
I found this note on the OnDemand flag in the v1.6 docs, but I'm struggling to find the equivalent documentation in the (2.x) docs.
Using AWS services, how can I easily share "state" (config/dynamic certificates that have already been created) between multiple servers to share the load? My initial thought was EFS, but I see an EFS shared file system may not work because file-change watch notifications don't work on NFS-mounted file systems?
It seemed like it would make sense to provision an AWS NLB (with a static IP and an associated DNS record) that delivered requests to a fleet of 1 or more of these Traefik proxies with a universal configuration/state that was safely persisted and kept in sync.
Like I mentioned above, this seems like a common/generic need. Is there a configuration file sample or project that might be a good starting point that I overlooked? I'm brand new to Traefik.
When routing requests to the back-end service, will the original Host name still be identifiable somewhere in the headers? I assume it can't remain in the Host header, as the back-end receives requests on an HTTPS hostname as well.
I will continue to experiment and post any findings back here, but I'm sure someone has setup something like this already -- so just looking to not reinvent the wheel.
I managed to do this with Caddy. It's very important that you configure the ask, interval and burst options to avoid possible DDoS attacks.
Here's a simple reverse proxy example:
# https://caddyserver.com/docs/caddyfile/options#on-demand-tls
{
    # General Options
    debug
    on_demand_tls {
        # will check for "?domain=" and return 200 if domain is allowed to request TLS
        ask "http://localhost:5000/ask/"
        interval 300s
        burst 1
    }
}

# TODO: use env vars for domain name? https://caddyserver.com/docs/caddyfile-tutorial#environment-variables
qrepes.app {
    reverse_proxy localhost:5000
}

:443 {
    reverse_proxy localhost:5000
    tls {
        on_demand
    }
}
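Here is a minimal sketch of the ask endpoint the Caddyfile above points to (http://localhost:5000/ask/). Caddy calls it with ?domain=<hostname> and only requests a certificate if it gets a 200 back. The Flask app and the allowed-domain set below are hypothetical; replace the lookup with your own database of customer domains.

from flask import Flask, request

app = Flask(__name__)

# Hypothetical allow-list; in practice, look the domain up in your customer database.
ALLOWED_DOMAINS = {"example.customera.com", "customerb.com", "www.customerb.com"}

@app.route("/ask/")
def ask():
    domain = request.args.get("domain", "")
    if domain in ALLOWED_DOMAINS:
        return "", 200   # Caddy may provision a certificate for this domain
    return "", 404       # refuse anything else, which is what prevents abuse

if __name__ == "__main__":
    app.run(port=5000)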

How can I use SMTP in a Dockerized Django web app running on AWS

I have a Django web app running in a Docker container which in due course will run live in an AWS EC2 instance (but for development is running on my Windows 10 laptop).
As the web app wants to send emails for various reasons, I would like to know the simplest approach for this, both for AWS operation, presumably using Amazon's Simple Email Service (SES), and for my development.
Having read that the Docker container needed to include an SMTP relay, I spent some time setting up the neat-looking relay at https://hub.docker.com/r/turgon37/smtp-relay
But despite the claimed simplicity of the config setup, I had trouble with this (probably my fault, although the documentation was somewhat ambiguous in places, as it was obviously written by someone whose native language is not English!). More to the point, it occurred to me that the claim that an SMTP relay was needed in the container may have been nonsense, because surely one can just map container port 25 to the host's port 25?!
So in summary, has anyone had this requirement, i.e. for using SMTP in a Docker container, and what is the simplest approach?
TIA
Regards
John R
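For what it's worth, Django can talk to an SMTP endpoint directly, so a relay container isn't strictly necessary. Below is a minimal settings.py sketch assuming SES SMTP credentials (the region in the hostname and the environment variable names are placeholders), with the console backend swapped in for local development:

import os

EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "email-smtp.us-east-1.amazonaws.com"   # SES SMTP endpoint for your region
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = os.environ["SES_SMTP_USER"]        # SES SMTP credentials, not IAM keys
EMAIL_HOST_PASSWORD = os.environ["SES_SMTP_PASSWORD"]

# For local development on the laptop, just print emails to the console:
if os.environ.get("DJANGO_DEBUG") == "1":
    EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"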

Port mapping in Windows Server 2016 - Docker

I have been trying to set up Docker on Windows Server 2016 in an AWS instance to run an IIS program.
From this question, Cannot access an IIS container from browser - Docker, IIS has been set up inside a container and it is accessible from the host without port mapping.
However, if I want to allow other users from the Internet/intranet to access the website, after Googling it, I guess we do need port mapping...
The error I encountered with port mapping is given in the above question, so... I guess using nat is not the correct option. Therefore, my team and I tried to create another network (custom/bridge), following the instructions from
https://docs.docker.com/v17.09/engine/userguide/networking/#user-defined-networks
However, we cannot create such a network. Googled answer:
https://github.com/docker/for-win/issues/1960
My team guessed it's maybe because AWS blocks that option; if anyone can confirm this, please do.
Another thing I noticed: when we create an ECS instance in AWS, only the default (nat) network mode is accepted. So... is only nat supported on Windows Server?
Our objective: expose the container-hosted IIS application to the Internet/intranet on Windows Server 2016...
If anyone has any suggestion/advice, please tell me, many thanks.
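Assuming port mapping over the default nat network is what's needed in the end, here is a rough sketch using the Docker SDK for Python, equivalent to docker run -d -p 80:80 <image> (the image name is hypothetical). On AWS, the instance's security group also needs to allow inbound traffic on the published port before anyone on the Internet/intranet can reach it.

import docker

client = docker.from_env()

# Publish container port 80 on host port 80 over the default "nat" network.
container = client.containers.run(
    "my-iis-image",          # hypothetical Windows/IIS image
    detach=True,
    ports={"80/tcp": 80},
)
print(container.name, container.status)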

For Rasa Core and Rasa NLU, what should a reliable infrastructure look like?

My REST application is being developed with Python and Flask, and I am also using Rasa Core and Rasa NLU. Currently everything is on a single local development server. I would like to know the ideal recommendations for production.
A scenario that I imagined: handle all REST calls and the database structure on one server, keep Rasa Core together with a "micro" Python application on another server, and Rasa NLU on a third server.
But the question is: every user request would end up going through the 3 servers in a cascade, so I think all servers are subject to the same request bottleneck.
And what would be the ideal setup: 1 server with everything, or 3 servers? (on AWS)
To be the most scalable you can use a containerized solution with load balancing.
Rasa NLU has a public Docker container (or you could create your own). Use Docker & Kubernetes to scale out the NLU to however large you need your base.
Create separate Docker containers for your Rasa Core, connecting to the NLU load balancer for NLU translation. Use a load balancer here too if you need to.
Do the same for your REST application, connecting to the load balancer from #2.
This solution would allow you to scale your NLU and core separately however you need to as well as your REST application if you need to do that separately.
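To make the second point concrete, here is a sketch of how the core/REST side would call the NLU through whatever load-balanced address Kubernetes gives the NLU service. The service hostname is hypothetical, and /parse is the endpoint exposed by the (0.x-era) Rasa NLU HTTP server:

import requests

# Hypothetical Kubernetes service sitting in front of N Rasa NLU pods.
NLU_URL = "http://rasa-nlu.default.svc.cluster.local:5000"

def parse_message(text):
    # Ask the NLU cluster for intent/entity extraction; any pod can answer.
    resp = requests.post(NLU_URL + "/parse", json={"q": text}, timeout=5)
    resp.raise_for_status()
    return resp.json()

# print(parse_message("book a table for two"))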
I wrote a tutorial on this if you are interested here:

How do rolling updates on a website work

Let's say I have a website with 4 webapp servers and 1 HAProxy in front of them to do load balancing. Now I want to update my webapp with a new api/v2 and I start the rolling update. My webapp is doing HATEOAS, so let's assume that 1 instance got updated and it sent a link like api/v2/dothis to a client.
Now the client makes a request on this link and HAProxy directs it to the 3rd server in the cluster, which is still running the old webapp and doesn't know about api/v2.
How do people solve this problem in general? How do websites do rolling updates without disrupting the service?
Thanks in advance
You could use one of these options:
Option a: Once you have updated instance 1, shut down all the other instances so all the traffic goes to instance 1 (if this is even possible with the load you might expect; you could do this at a time when your one instance would be capable). Update instance 2 with the new webapp and bring it online, and continue like this with all the other instances.
Option b: Keep all the available resources in a place where all your servers can check whether a resource exists on another webapp instance if they do not have it themselves (yet).
I feel that option a would be best, since you would not have to maintain another server/system for brokerage.
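For option b, here is a rough sketch of what a not-yet-updated instance could do: if it doesn't know a resource (e.g. api/v2/dothis), it forwards the request to a peer that has already been upgraded instead of returning 404. Flask is used for brevity; the peer list and route names are hypothetical.

import requests
from flask import Flask, Response, jsonify, request

app = Flask(__name__)

# Peers that already run the new webapp version (hypothetical).
UPGRADED_PEERS = ["http://app1.internal:8080"]

# Routes this (old) instance can serve itself.
KNOWN_LOCAL_ROUTES = {"api/v1/dothis"}

@app.route("/<path:resource>")
def serve(resource):
    if resource in KNOWN_LOCAL_ROUTES:
        return jsonify({"handled_by": "local", "resource": resource})
    # Unknown here: try a peer that already has the new version.
    for peer in UPGRADED_PEERS:
        resp = requests.get(peer + "/" + resource, params=request.args, timeout=5)
        if resp.status_code != 404:
            return Response(resp.content, status=resp.status_code,
                            content_type=resp.headers.get("Content-Type"))
    return jsonify({"error": "resource not found"}), 404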