Traefik Best Practices/Capabilities For Dynamic Vanity Domain Certificates

I'm looking for guidance on the proper tools/tech to accomplish what I assume is a fairly common need.
Suppose there is a web service, https://www.ExampleSaasWebService.com/, and customers can add vanity domains/subdomains to white-label or resell the service, replacing the domain name with their own. There then needs to be a reverse proxy that terminates TLS for the vanity domains and routes the traffic to the statically defined (HTTPS) back-end service on the original, non-vanity domain. (There is essentially one "back-end" server, somewhere else on the internet rather than the local network, that accepts all incoming traffic no matter the incoming domain.) Essentially:
"Customer A" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from example.customerA.com.
"Customer B" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from customerB.com and www.customerB.com.
etc...
I (surprisingly) haven't found anything that does this out of the box, but looking at Traefik (2.x) I'm seeing some promising capabilities, and it seems like the most capable tool to accomplish this, primarily because of its Let's Encrypt integration and its ability to reconfigure without a service restart.
I initially considered AWS's native certificate management and load balancing, but I see there is a limit of ~25 certificates per load balancer, which seems like a non-starter: presumably there could be thousands of vanity domains in place at any time.
Some of my Traefik-specific questions:
Am I correct in understanding that you can get away without explicitly provisioning a generated list of vanity domains to produce TLS certificates for in the config files? Can they instead be determined on the fly and provisioned from Let's Encrypt based on the SNI of the incoming requests?
E.g. if a request comes in for www.customerZ.com and there is not yet a certificate for that domain name, can one be generated on the fly?
I found this note on the OnDemand flag in the v1.6 docs, but I'm struggling to find the equivalent documentation in the (2.x) docs.
Using AWS services, how can I easily share "state" (config/dynamic certificates that have already been created) between multiple servers to share the load? My initial thought was EFS, but I see a shared EFS file system may not work, because file-change watch notifications don't work on NFS-mounted file systems?
It seemed like it would make sense to provision an AWS NLB (with a static IP and an associated DNS record) that delivers requests to a fleet of one or more of these Traefik proxies, with a universal configuration/state that is safely persisted and kept in sync.
Like I mentioned above, this seems like a common/generic need. Is there a configuration file sample or project that might be a good starting point that I overlooked? I'm brand new to Traefik.
When routing requests to the back-end service, will the original host name still be identifiable somewhere in the headers? I assume it can't remain in the Host header, as the back-end receives requests on an HTTPS hostname as well.
I will continue to experiment and post any findings back here, but I'm sure someone has set up something like this already, so I'm just looking to not reinvent the wheel.

I managed to do this with Caddy. It's very important that you configure the ask, interval, and burst options to avoid possible DDoS attacks.
Here's a simple reverse proxy example:
# https://caddyserver.com/docs/caddyfile/options#on-demand-tls
{
    # General options
    debug
    on_demand_tls {
        # Caddy calls this URL with "?domain=<hostname>"; return 200 if the
        # domain is allowed to request a TLS certificate
        ask "http://localhost:5000/ask/"
        interval 300s
        burst 1
    }
}

# TODO: use env vars for domain name? https://caddyserver.com/docs/caddyfile-tutorial#environment-variables
qrepes.app {
    reverse_proxy localhost:5000
}

:443 {
    reverse_proxy localhost:5000
    tls {
        on_demand
    }
}
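The ask endpoint only has to answer yes/no per domain. Here's a minimal sketch of what it might look like, assuming Flask on port 5000 (matching the Caddyfile above) and a hypothetical allowed_domains lookup:

from flask import Flask, request

app = Flask(__name__)

# Stand-in for a real lookup, e.g. a table of customer vanity domains.
allowed_domains = {"example.customerA.com", "customerB.com", "www.customerB.com"}

@app.route("/ask/")
def ask():
    # Caddy calls GET /ask/?domain=<hostname> before requesting a certificate;
    # a 200 response allows issuance, anything else denies it.
    domain = request.args.get("domain", "")
    if domain in allowed_domains:
        return "allowed", 200
    return "denied", 403

if __name__ == "__main__":
    app.run(port=5000)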

Related

How to enable cipher TLS_ECDHE_ECDSA on Windows Server 2019 with AWS Load Balancer

The website is on Windows Server 2019 behind an AWS Load Balancer with ELBSecurityPolicy-2016-08. This policy definitely has the ECDHE_ECDSA ciphers enabled; I have checked their docs. The SSL certificate is installed on the LB.
Listing the TLS cipher suites in PowerShell on Windows Server 2019 also shows these suites enabled, but when scanning the website domain with SSL Labs or Zenmap, these suites are not appearing:
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
or even these:
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
Any ideas? The website is ASP.NET Framework 4.7, but I hardly think that has anything to do with the ciphers. Any help will be appreciated. Thanks.
(Screenshots: Zenmap, AWS load balancer, PowerShell)
Meta: this isn't about programming, and I'm not sure 'how to operate cloud' counts as development, so I authorize deletion if this is voted off-topic.
Your server is irrelevant and nothing you set or change on it will affect client(s).
You don't tell us which AWS load balancer you use, but to operate at the HTTPS level it must be Application or Classic, and in either case, to do HTTPS it must terminate the SSL/TLS protocol -- in other words, the LB establishes one SSL/TLS connection with the client and decrypts the incoming request, parses it, and then optionally uses a separate SSL/TLS connection to the backend to re-encrypt, and reverses the process on the response: decrypt from the backend if necessary and re-encrypt to the client. See the line "SSL Offloading" well down in the table on that page; that's a jargon way of saying "the LB does the SSL/TLS for the client, your server does not".
Thus the settings in the LB, and only those, control the SSL/TLS seen by the client(s). ELBSecurityPolicy-2016-08, which is the default (and I'm guessing that might be why you used it), excludes all DHE-RSA ciphersuites. (To avoid confusion, note the AWS webpage uses the OpenSSL names for ciphersuites, where RSA-only key exchange is omitted from the name, whereas Zenmap/nmap uses the RFC names TLS_RSA_with_whatever.) It does allow ECDHE_ECDSA suites, but those will actually be negotiated, and thus seen by a scanner like Zenmap/nmap, only if you configure an ECDSA certificate and key -- which I bet you didn't.
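One way to verify that last point from the outside is to offer only ECDHE_ECDSA suites and see whether the handshake succeeds; it can only succeed if the LB presents an ECDSA certificate. A minimal Python sketch, not specific to AWS (the hostname is a placeholder):

import socket
import ssl

host = "www.example.com"  # placeholder for the site behind the load balancer

ctx = ssl.create_default_context()
# Offer only ECDHE-ECDSA suites (OpenSSL names). Cap at TLS 1.2, because
# TLS 1.3 cipher suites don't pin the authentication algorithm.
ctx.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256")
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

try:
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("Negotiated:", tls.cipher())
except ssl.SSLError as e:
    print("Handshake failed (likely no ECDSA certificate):", e)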

Dynamic SSL allocation in GCP HTTP(S) Layer 7 Load Balancer

I'm exploring GCP and I love the way it lets a developer play with such costly infrastructure. I've learned a great many things so far; I'm no longer a beginner, but I have this case that I'm unable to find docs or examples for, or I might be thinking in the wrong direction.
I want to build an auto-scaling hosting solution where users can:
Create an account
Create multiple websites [these websites are basically templates where the user can define certain fields and the website is rendered in a specific manner; users are not allowed to upload files, just some data entries]
Connect a domain to a website [by putting an 'A' record DNS entry on their domain]
After that, an SSL certificate is provisioned automatically by the platform and the website is up and running [somewhat like Firebase]
I could easily create such a project on one server with the following configuration [skipping simple steps like user auth etc.]:
I use Ubuntu 16.04 as my machine type with 4GB RAM and a 10GB persistent disk
Then I install nvm [a version manager for Node.js]
After that I install a specific version of Node.js using nvm
I have written a simple JavaScript package in which I use an Express server to respond to client requests with some HTML
For managing SSL I use Let's Encrypt's certbot package
I use pm2 to run the JavaScript file as a service in the background
After accomplishing this I could see everything works the way I want it to.
Then I started exploring GCP's load balancers. There I learned about the layer 4 and layer 7 LBs, and I implemented some hello-world tests [using startup scripts] in all possible configurations, like:
Layer 7 HTTP
Layer 7 HTTPS
Layer 4 internal TCP
Layer 4 internal SSL
Here is the main problem I'm facing:
I can't find a way to dynamically allocate an SSL certificate to an incoming request at the load balancer.
In my case requests might be coming from any domain, so the GCP load balancer must have some sort of configuration to provision SSL for a specific domain [I have read that it can allocate an SSL certificate for up to 100 domains, but how could I automate things?]. Or could there be a way that, instead of requests being proxied [the LB generating a new request to the internal servers], requests are just redirected so that the internal servers can handle the SSL management themselves?
I might be wrong somewhere in my understanding of the concepts. Please help me solve the problem. I want to build a Firebase Hosting clone on my own. Any kind of response is welcome 🙏🙏🙏
One way to do it would be to update your JS script to generate a Google-managed certificate for each new domain via gcloud:
gcloud compute ssl-certificates create CERTIFICATE_NAME \
    --description=DESCRIPTION \
    --domains=DOMAIN_LIST \
    --global

and then apply it to the load balancer:

gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --ssl-certificates SSL_CERTIFICATE_LIST \
    --global-ssl-certificates \
    --global
Please be aware that it may take anywhere from 5 to 20 minutes for the Load Balancer to start using new certificates.
You can find more information here.
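To automate that from the platform's code, one option is simply to shell out to gcloud whenever a user connects a domain. A hedged Python sketch (the proxy name and certificate naming scheme are placeholders; it assumes gcloud's value() format joins the list with semicolons, and that the update call replaces the proxy's whole certificate list):

import subprocess

def provision_domain(domain: str, proxy: str = "my-https-proxy") -> None:
    cert_name = "cert-" + domain.replace(".", "-")

    # 1. Create a Google-managed certificate for the new domain.
    subprocess.run(
        ["gcloud", "compute", "ssl-certificates", "create", cert_name,
         "--domains", domain, "--global"],
        check=True,
    )

    # 2. Read the certificates currently attached to the HTTPS proxy.
    out = subprocess.run(
        ["gcloud", "compute", "target-https-proxies", "describe", proxy,
         "--global", "--format=value(sslCertificates)"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    certs = [url.rsplit("/", 1)[-1] for url in out.split(";") if url]

    # 3. Re-attach the full list plus the new certificate; the update
    #    replaces the list, so the existing names must be included.
    subprocess.run(
        ["gcloud", "compute", "target-https-proxies", "update", proxy,
         "--ssl-certificates", ",".join(certs + [cert_name]),
         "--global-ssl-certificates", "--global"],
        check=True,
    )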

Clone http traffic to another port on same server transparently

I am experimenting with the following setup.
Clone/copy (but do not redirect) all incoming HTTP requests from port 80 to another port, say 8080, on the same machine. I have a simple NGINX + Lua based WAF listening on 8080. Essentially, I am running two instances of web servers here: one serving real requests, the other working on cloned traffic for detection purposes. I don't care about being able to block malicious requests, so I don't care about being inline.
I want to use the WAF for detection only, i.e. it should analyze all incoming requests, raise alerts, and drop each request after that. This won't hamper anything from the users' point of view, since port 80 is serving the real requests.
How can I clone traffic this way and just discard it after analysis is done? Is this feasible? If yes, please suggest any tools that can clone traffic with minimal performance hit.
Have a look at https://github.com/buger/gor
The example instructions are straightforward; you could add extra logging or certain forwards as well.
In current Nginx versions there is ngx_http_mirror_module, which mirrors requests to another endpoint and ignores the responses. See also this answer.
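To make the clone-and-discard idea concrete, here is a minimal stdlib-only Python sketch (ports are illustrative; a real setup would use gor or the nginx mirror module as suggested above). It relays each request to the real backend and fires a copy at the WAF in a thread whose response is ignored:

import threading
import http.client
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

REAL_BACKEND = ("127.0.0.1", 3000)  # serves the actual response
MIRROR = ("127.0.0.1", 8080)        # WAF; its response is discarded

class CloneProxy(BaseHTTPRequestHandler):
    def _relay(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else None

        # Fire-and-forget copy to the WAF.
        threading.Thread(target=self._mirror, args=(body,), daemon=True).start()

        # Forward to the real backend and relay its response unchanged.
        conn = http.client.HTTPConnection(*REAL_BACKEND)
        conn.request(self.command, self.path, body, dict(self.headers))
        resp = conn.getresponse()
        data = resp.read()
        self.send_response(resp.status)
        for name, value in resp.getheaders():
            if name.lower() not in ("transfer-encoding", "connection", "server", "date"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(data)

    do_GET = do_POST = do_PUT = do_DELETE = _relay

    def _mirror(self, body):
        try:
            conn = http.client.HTTPConnection(*MIRROR, timeout=2)
            conn.request(self.command, self.path, body, dict(self.headers))
            conn.getresponse().read()  # drain and discard
        except OSError:
            pass  # mirror failures must never affect real traffic

if __name__ == "__main__":
    # Listening on :80 requires privileges; pick a high port when testing.
    ThreadingHTTPServer(("", 80), CloneProxy).serve_forever()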

Hosting multiple websites on a single server

I have a bunch of different websites, mostly random weekend projects that I'd like to keep on the web because they are still useful to me. They don't see more than 3-5 hits per day between all of them though, so I don't want to pay for a server for each of them when I could probably fit them all on a single EC2 micro instance. Is that possible? They all run off different web servers, since I tend to experiment with a lot of new tech. I was thinking I could have each webserver serve on a different port, then have incoming requests to app1.com get routed to app1.com:3000 and requests to app2.com get routed to app2.com:3001 and so on, but I don't know how I would go about setting that up.
I would suggest that what you are looking for is a reverse web proxy, which typically includes among its features the ability to understand portions of the request at layer 7, and direct the incoming traffic to the appropriate set of one or more back-end ip/port combinations based on what's observed in the request headers (or other aspects of the request).
Apache, Varnish, and Nginx all have this capacity, as does HAProxy, which is the approach that I use because it seems to be very fast and easy on memory, and thus appropriate for use on a micro instance... but that is not at all to imply that it is somehow more "correct" to use than the others. The principle is the same with any of those choices; only the configuration details are different. One service is listening to port 80, and based on the request, relays it to the appropriate server process by opening up a TCP connection to the appropriate destination, tying the ends of the two pipes together, and otherwise for the most part staying out of the way.
Here's one way (among several alternatives) that this might look in an haproxy config file:
frontend main
    bind *:80
    use_backend app1svr if { hdr(host) -i app1.example.com }
    use_backend app2svr if { hdr(host) -i app2.example.com }

backend app1svr
    server app1 127.0.0.1:3001 check inter 5000 rise 1 fall 1

backend app2svr
    server app2 127.0.0.1:3002 check inter 5000 rise 1 fall 1
This says listen on port 80 of all local IP addresses; if the "Host" header contains "app1.example.com" (-i means case-insensitive) then use the "app1" backend configuration and send the request to that server; do something similar for app2.example.com. You can also declare a default_backend to use if none of the ACLs match; otherwise, if no match, it will return "503 Service Unavailable," which is what it will also return if the requested back-end isn't currently running.
You can also configure a stats endpoint to show you the current state and traffic stats of your frontends and backends in an HTML table.
Since the browser isn't connecting "directly" to the web server any more, you have to configure and rely on the X-Forwarded-For header inserted into the request headers to identify the browser's IP address, and there are other ways in which your applications may have to take the proxy into account, but this overall concept is exactly how web applications are typically scaled, so I don't see it as a significant drawback.
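As an aside on that last point, here is what reading the header looks like in a back-end application. A minimal sketch, assuming a Flask app (any framework reads the header the same way), and noting that HAProxy only inserts the header when configured to, e.g. with option forwardfor:

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Behind the proxy, remote_addr is the proxy's IP; the browser's address
    # is the first entry in X-Forwarded-For (when the proxy appends it).
    forwarded = request.headers.get("X-Forwarded-For", request.remote_addr)
    client_ip = forwarded.split(",")[0].strip()
    return f"Hello, {client_ip}"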
Note that the haproxy examples above do use "Anonymous ACLs," of which the documentation says:
It is generally not recommended to use this construct because it's a lot easier
to leave errors in the configuration when written that way. However, for very
simple rules matching only one source IP address for instance, it can make more
sense to use them than to declare ACLs with random names.
— http://cbonte.github.io/haproxy-dconv/configuration-1.4.html
For simple rules like these, this construct makes more sense to me than explicitly declaring an ACL and then later using that ACL to cause the action that you want, because it puts everything together on the same line.
I use this to solve a different root problem that has the same symptoms -- multiple sites for development/test projects, but only one possible external IP address (which by definition means "port 80" can only go to one place). This allows me to "host" development and test projects on different ports and platforms, all behind the single external IP of my home DSL line. The only difference in my case is that the different sites are sometimes on the same machine as the haproxy and other times they're not, but the application seems otherwise identical.
Rerouting the way you show depends on the OS your server is hosted on. For Linux you have to use iptables; for Windows you could use Windows Firewall. You would set all incoming connections on port 80 to be redirected to the desired port, e.g. 3000.
But instead of ports, you could use a different host name for each service, like:
app1.apps.com
app2.apps.com
and so on. You can configure this with redirects at your DNS hosting for apps.com. IMHO this is the best solution, if I understood you correctly.
Also, you can configure a single host to reroute to all the other sites, like:
app1.com:3001 -> apphost1.com
app1.com:3002 -> apphost2.com
Keep in mind that in this case all traffic will pass through app1.com.
You can easily do this. Set up a different hostname for each app you want to use, create a DNS entry that points to your micro instance, and create a name-based virtual host entry for each app.
Each virtual host entry should look something like:
<VirtualHost *>
    ServerName app1.example.com
    DocumentRoot /var/www/html/app1/
    DirectoryIndex index.html
</VirtualHost>

Coldfusion Amazon S3 support for file upload, does it connect to a specific IP?

I'm trying to use S3 as an off-site file location for a database backup. On my home dev machine this works just fine; I just do a dump out from MySQL and then
<cffile action = "copy"
source = "#backupPath##filename#"
destination = "s3://myID:myKey#myBucket/#filename#">
and all is good. However, the production server at work is behind a router/firewall controlled/managed by a 3rd party. I read somewhere that S3 needs port 843 open to work (and then lost that reference), but does the CF built-in function connect to a particular IP at Amazon, so that I could ask for that port to be opened for just that IP?
I see that you found some answers via comments on Ray Camden's blog post about the S3 functionality, with information contributed by Steven Erat, but for the sake of completeness here on Stack Overflow and for others who may find this question, here is that information:
By default, all communication between your CF server and S3 is done over HTTPS on port 443. There is a Java system property (s3service.https-only), which defaults to true; if you set it to false, communication will happen over HTTP instead of HTTPS. Sorry, I don't know how you might change it, unless maybe as a JVM argument.
The IP of any given bucket could be different (and possibly change over time), so you can't necessarily get by on opening a port for a single IP -- but luckily you shouldn't have to since it's all done over SSL/443.
What does use port 843 is the Amazon S3 console, an optional flash-based web interface for managing your bucket(s).
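Since everything rides over port 443, a quick connectivity check from the production box is usually all the firewall conversation you need. A minimal Python sketch (it uses the generic S3 endpoint name; individual bucket IPs vary over time, which is exactly why a single-IP firewall rule isn't reliable):

import socket

try:
    # Verify outbound HTTPS reachability to S3's API endpoint.
    socket.create_connection(("s3.amazonaws.com", 443), timeout=5).close()
    print("Outbound 443 to S3: OK")
except OSError as e:
    print("Blocked or unreachable:", e)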