I'm exploring GCP and I love the way it lets developers play with such costly infrastructure. I've learnt a lot so far; I'm no longer a beginner, but I have a case for which I can't find docs or examples, or I might be thinking in the wrong direction.
I want to build an auto-scaling hosting solution where users can:
Create an account
Create multiple websites [these websites are basically templates where the user can define certain fields and the website is rendered in a specific manner | users are not allowed to upload files, just enter some data]
Connect a domain to a website [by putting an 'A' record DNS entry in their domain]
After that, an SSL certificate is provisioned automatically by the platform and the website is up and running [somewhat like Firebase]
I could easily create such a project on one server with the following configuration [skipping simple steps like user auth, etc.]:
I use an Ubuntu 16.04 machine with 4 GB RAM and a 10 GB persistent disk.
Then I install nvm [a version manager for Node.js].
After that I install a specific version of Node.js using nvm.
I have written a simple JavaScript package in which I use an Express server to respond to client requests with some HTML (sketched below).
For managing SSL I use Let's Encrypt's certbot.
I use pm2 to run the JavaScript file as a service in the background.
After accomplishing this, I could see that everything works the way I want it to.
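For context, a minimal sketch of that Express server (the site lookup and template fields here are placeholders, not my actual code):

// minimal sketch — the site lookup and template fields are placeholders
const express = require('express');
const app = express();

app.get('*', (req, res) => {
  // in the real app, the site would be looked up by req.hostname
  const site = { title: 'Example Site', body: 'Rendered from user-defined fields' };
  res.send(`<html><body><h1>${site.title}</h1><p>${site.body}</p></body></html>`);
});

app.listen(3000, () => console.log('listening on port 3000'));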
Then I started exploring GCP's load balancers. There I learnt about layer 4 and layer 7 LBs, and I implemented some hello-world tests [using startup scripts] in all the possible configurations, like:
Layer 7 HTTP
Layer 7 HTTPS
Layer 4 internal TCP
Layer 4 internal SSL
Here is the main problem I'm facing:
I can't find a way to dynamically allocate an SSL certificate to an incoming request at the load balancer.
In my case requests might come from any domain, so the GCP load balancer must have some sort of configuration to provision an SSL certificate for a specific domain [I have read that it can hold certificates for up to 100 domains, but how could I automate that?]. Or could there be a way that, instead of requests being proxied [the LB generating a new request to the internal servers], requests are just redirected, so that the internal servers can handle SSL management themselves?
I might be wrong somewhere in my understanding of the concepts. Please help me solve the problem. I want to build a Firebase Hosting clone on my own. Any kind of response is welcome 🙏🙏🙏
One way to do it would be to update your JS script to generate a Google-managed certificate for each new domain via gcloud (see the JS sketch after the commands):
gcloud compute ssl-certificates create CERTIFICATE_NAME \
    --description=DESCRIPTION \
    --domains=DOMAIN_LIST \
    --global
and then apply it to the load balancer:
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --ssl-certificates SSL_CERTIFICATE_LIST \
    --global-ssl-certificates \
    --global
Please be aware that it may take anywhere from 5 to 20 minutes for the Load Balancer to start using new certificates.
You can find more information here.
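For instance, a rough sketch of how the JS script might shell out to gcloud when a user connects a new domain (the naming scheme and callback shape are assumptions, not a tested setup):

// sketch: provision a Google-managed cert for a newly connected domain
const { execFile } = require('child_process');

function provisionCert(domain, callback) {
  // certificate names must be valid resource names; this sanitization is illustrative
  const certName = 'cert-' + domain.toLowerCase().replace(/[^a-z0-9-]/g, '-');
  execFile('gcloud', [
    'compute', 'ssl-certificates', 'create', certName,
    '--domains=' + domain,
    '--global',
  ], (err) => {
    if (err) return callback(err);
    // note: target-https-proxies update replaces the whole certificate list,
    // so the full, current list would have to be tracked and passed each time
    callback(null, certName);
  });
}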
I'm looking for guidance on the proper tools/tech to accomplish what I assume is a fairly common need.
If there exists a web service, https://www.ExampleSaasWebService.com/, and customers can add vanity domains/subdomains to white-label or resell the service under their own domain name, then there needs to be a reverse proxy that terminates TLS traffic for the vanity domains and routes it to the statically defined (HTTPS) back-end service on the original, non-vanity domain. (There is essentially one "back-end" server, somewhere else on the internet rather than the local network, that accepts all incoming traffic no matter the incoming domain.) Essentially:
"Customer A" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from example.customerA.com.
"Customer B" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from customerB.com and www.customerB.com.
etc...
I (surprisingly) haven't found anything that does this out of the box, but looking at Traefik (2.x) I'm seeing some promising capabilities and it seems like the most capable tool to accomplish this. Primarily because of the Let's Encrypt integration and the ability to reconfigure without a restart of the service.
I initially considered AWS's native certificate management and load balancing, but I see there is a limit of ~25 certificates per load balancer which seems like a non-starter. Presumably there could be thousands of vanity domains in place at any time.
Some of my Traefik specific questions:
Am I correct in understanding that you can get away without explicitly provisioning a list of vanity domains to produce TLS certificates for in the config files? That they can be determined on the fly and provisioned from Let's Encrypt based on the headers/SNI of incoming requests?
E.g. If a request comes to www.customerZ.com and there is not yet a certificate for that domain name, one can be generated on the fly?
I found this note on the OnDemand flag in the v1.6 docs, but I'm struggling to find the equivalent documentation in the (2.x) docs.
Using AWS services, how can I easily share "state" (config/dynamic certificates that have already been created) between multiple servers to share the load? My initial thought was EFS, but I see that an EFS shared file system may not work, because file-change watch notifications don't work on NFS-mounted file systems.
It seemed like it would make sense to provision an AWS NLB (with a static IP and an associated DNS record) that delivered requests to a fleet of 1 or more of these Traefik proxies with a universal configuration/state that was safely persisted and kept in sync.
Like I mentioned above, this seems like a common/generic need. Is there a configuration file sample or project that might be a good starting point that I overlooked? I'm brand new to Traefik.
When routing requests to the back-end service, will the original host name still be identifiable somewhere in the headers? I assume it can't remain in the Host header, since the back-end receives requests at an HTTPS hostname as well.
I will continue to experiment and post any findings back here, but I'm sure someone has set up something like this already -- so I'm just looking not to reinvent the wheel.
I managed to do this with Caddy. It's very important that you configure ask, interval, and burst to avoid possible DDoS attacks.
Here's a simple reverse proxy example:
# https://caddyserver.com/docs/caddyfile/options#on-demand-tls
{
    # General Options
    debug
    on_demand_tls {
        # the ask endpoint receives "?domain=" and must return 200 if the domain is allowed to request TLS
        ask "http://localhost:5000/ask/"
        interval 300s
        burst 1
    }
}

# TODO: use env vars for domain name? https://caddyserver.com/docs/caddyfile-tutorial#environment-variables
qrepes.app {
    reverse_proxy localhost:5000
}

:443 {
    reverse_proxy localhost:5000
    tls {
        on_demand
    }
}
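For reference, a minimal sketch of the ask endpoint that config points at — Caddy sends GET /ask/?domain=<hostname> and only issues a certificate if it gets a 200 back (the allow-list here is a placeholder):

// placeholder ask endpoint for on_demand_tls
const express = require('express');
const app = express();

const allowed = new Set(['qrepes.app']); // placeholder; use your real domain store

app.get('/ask/', (req, res) => {
  const domain = String(req.query.domain || '').toLowerCase();
  res.sendStatus(allowed.has(domain) ? 200 : 404);
});

app.listen(5000);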
OK, so I have three ELBs, and three subdomains in the same hosted zone. Each ELB load balances a different environment--one is prod, one is staging, one is a second staging environment. I've got three CNAMEs configured in Route 53, each directing to one of the ELBs, like this:
mysite.com = directs to ELB for prod (let's call it ProdELB)
staging.mysite.com = directs to StagingELB
newtest.staging.mysite.com = directs to NewStagingELB
The first two work fine; however, the last one won't work: it keeps getting mixed up with the second one. Whenever I type newtest.staging.mysite.com into my browser, the browser responds by loading a page from staging.mysite.com instead, as though it's somehow redirecting to the second ELB instead of the third one. But there's nothing in my Route 53 configuration telling it to do that.
This even happens if I try to load the ELB domain name directly; i.e. typing http://NewStagingELB.elb.aws.amazon.com in my browser also causes staging.mysite.com to load. Even loading one of the instance IPs directly causes my browser to load the staging.mysite.com site. What the heck is going on?
It's only the browser that does this: pinging newtest.staging.mysite.com returns the correct ELB. It's also not a cache or cookie issue or anything like that, because I've tried multiple browsers, including my cell phone over data.
How do I get newtest.staging.mysite.com to actually direct to the right ELB?
Ended up being a software-related problem: newtest's Tomcat server.xml Connector had proxyName and proxyPort settings that were still pointing at the second (staging) environment.
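For anyone hitting the same symptom, the stale settings were on the Connector in conf/server.xml and looked something like this (the values here are illustrative):

<!-- illustrative: a Connector still pinned to the staging proxy host -->
<Connector port="8080" protocol="HTTP/1.1"
           proxyName="staging.mysite.com"
           proxyPort="80" />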
I have a problem with EC2 and Route 53. I set up my website using Node with the KeystoneJS framework. It works fast on my local server, and it worked well before I bound the domain name with Route 53. But after I succeeded in binding the domain name, it loads very slowly. I used Pingdom to run a speed test and found the following problem:
View Pingdom website speed test result
The first request waits for at least 8 seconds. Does anyone know what's going wrong, and why I'm getting this? I don't think it's because of Node.js. My website is nexstartup.org (http://nexstartup.org).
I have a bunch of different websites, mostly random weekend projects that I'd like to keep on the web because they're still useful to me. They don't see more than 3-5 hits per day between all of them, though, so I don't want to pay for a server for each of them when I could probably fit them all on a single EC2 micro instance. Is that possible? They all run on different web servers, since I tend to experiment with a lot of new tech. I was thinking I could have each web server listen on a different port, then have incoming requests to app1.com get routed to port 3000 and requests to app2.com get routed to port 3001 and so on, but I don't know how I would go about setting that up.
I would suggest that what you are looking for is a reverse web proxy, which typically includes among its features the ability to understand portions of the request at layer 7, and direct the incoming traffic to the appropriate set of one or more back-end ip/port combinations based on what's observed in the request headers (or other aspects of the request).
Apache, Varnish, and Nginx all have this capacity, as does HAProxy, which is the approach that I use because it seems to be very fast and easy on memory, and thus appropriate for use on a micro instance... but that is not at all to imply that it is somehow more "correct" to use than the others. The principle is the same with any of those choices; only the configuration details are different. One service is listening to port 80, and based on the request, relays it to the appropriate server process by opening up a TCP connection to the appropriate destination, tying the ends of the two pipes together, and otherwise for the most part staying out of the way.
Here's one way (among several alternatives) that this might look in an HAProxy config file:
frontend main
    bind *:80
    use_backend app1svr if { hdr(host) -i app1.example.com }
    use_backend app2svr if { hdr(host) -i app2.example.com }

backend app1svr
    server app1 127.0.0.1:3001 check inter 5000 rise 1 fall 1

backend app2svr
    server app2 127.0.0.1:3002 check inter 5000 rise 1 fall 1
This says: listen on port 80 of all local IP addresses; if the "Host" header matches "app1.example.com" (-i makes the match case-insensitive), then use the "app1svr" backend configuration and send the request to that server; do something similar for app2.example.com. You can also declare a default_backend to use if none of the ACLs match; otherwise, if nothing matches, HAProxy returns "503 Service Unavailable", which is also what it returns if the requested back-end isn't currently running.
You can also configure a stats endpoint to show you the current state and traffic stats of your frontends and backends in an HTML table.
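A minimal stats section might look like this (the port and URI are arbitrary choices):

# illustrative stats endpoint; pick any free port and URI
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats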
Since the browser isn't connecting "directly" to the web server any more, you have to configure and rely on the X-Forwarded-For header inserted into the request headers to identify the browser's IP address, and there are other ways in which your applications may have to take the proxy into account, but this overall concept is exactly how web applications are typically scaled, so I don't see it as a significant drawback.
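Enabling that header is a single directive; one reasonable place for it is a defaults section:

defaults
    mode http
    # append an X-Forwarded-For header with the client IP to forwarded requests
    option forwardedfor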
Note these examples do use "Anonymous ACLs," of which the documentation says:
It is generally not recommended to use this construct because it's a lot easier
to leave errors in the configuration when written that way. However, for very
simple rules matching only one source IP address for instance, it can make more
sense to use them than to declare ACLs with random names.
— http://cbonte.github.io/haproxy-dconv/configuration-1.4.html
For simple rules like these, this construct makes more sense to me than explicitly declaring an ACL and then later using that ACL to cause the action that you want, because it puts everything together on the same line.
I use this to solve a different root problem that has the same symptoms -- multiple sites for development/test projects, but only one possible external IP address (which by definition means "port 80" can only go to one place). This allows me to "host" development and test projects on different ports and platforms, all behind the single external IP of my home DSL line. The only difference in my case is that the different sites are sometimes on the same machine as the haproxy and other times they're not, but the application seems otherwise identical.
Rerouting the way you show depends on the OS your server is hosted on. For Linux you would use iptables; for Windows you could use Windows Firewall. You would set all incoming connections on port 80 to be redirected to the desired port, e.g. 3000.
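On Linux, such a rule might look like this (assuming the app listens on port 3000):

# redirect inbound TCP port 80 to the local service on port 3000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000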
But instead of ports, you could use a different host name for each service, like:
app1.apps.com
app2.apps.com
and so on. You can configure this with redirects at your DNS host for apps.com. IMHO this is the best solution, if I understood you correctly.
Also, you can configure a single host to reroute to all the other sites, like:
app1.com:3001 -> apphost1.com
app1.com:3002 -> apphost2.com
Keep in mind that, in this case, all traffic will pass through app1.com.
You can easily do this. Set up a different hostname for each app you want to use, create a DNS entry that points to your micro instance, and create a name-based virtual host entry for each app.
Each virtual host entry should look something like:
<VirtualHost *>
    ServerName app1.example.com
    DocumentRoot /var/www/html/app1/
    DirectoryIndex index.html
</VirtualHost>
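Since each of your apps runs its own web server on a separate port (rather than serving static files from a DocumentRoot), the vhost would more likely proxy to the app instead — a sketch, assuming mod_proxy and mod_proxy_http are enabled and the app listens on port 3000:

<VirtualHost *>
    ServerName app1.example.com
    # hand requests for this hostname to the app's own server
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>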