ColdFusion Amazon S3 support for file upload, does it connect to a specific IP?

I'm trying to use S3 as an off-site file location for a database backup. On my home dev machine this works just fine; I just do a dump out from MySQL and then

<cffile action = "copy"
    source = "#backupPath##filename#"
    destination = "s3://myID:myKey@#myBucket#/#filename#">

and all is good. However, the production server at work is behind a router/firewall controlled/managed by a 3rd party. I read somewhere that S3 needs port 843 open to work (and then lost that reference), but does the CF built-in function connect to a particular IP at Amazon, so that I could ask for that port to be opened for just that IP?

I see that you found some answers via comments on Ray Camden's blog post about the S3 functionality, with information contributed by Steven Erat, but for the sake of completeness here on Stack Overflow and for others who may find this question, here is that information:
By default, all communication between your CF server and S3 is done over HTTPS on port 443. There is a Java system property (s3service.https-only) which defaults to true; if you set it to false, the communication is done over plain HTTP instead of HTTPS. Sorry, I don't know how you might change it, unless maybe as a JVM argument.
The IP of any given bucket could be different (and could change over time), so you can't necessarily get by on opening a port for a single IP -- but luckily you shouldn't have to, since it's all done over SSL/443.
What does use port 843 is the Amazon S3 console, an optional Flash-based web interface for managing your bucket(s).
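If you need to show the firewall admins that only outbound 443 is required, a minimal sketch along these lines (the bucket hostname is a placeholder) can confirm from the production box that HTTPS to the S3 endpoint is reachable:

import socket
import ssl

# Placeholder endpoint -- substitute your bucket's S3 hostname.
host = "mybucket.s3.amazonaws.com"

ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # If this prints, outbound HTTPS to S3 works through the firewall.
        print("Connected to", host, "using", tls.version(), tls.cipher()[0])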

Related

How to enable Cipher TLS_ECDHE_ECDSA on Windows server 2019 with AWS Load Balancer

The website is on Windows Server 2019 behind an AWS load balancer using ELBSecurityPolicy-2016-08. This policy definitely has the ECDHE_ECDSA ciphers enabled; I have checked the docs. The SSL certificate is installed on the LB.
Listing the TLS cipher suites in PowerShell on Windows Server 2019 also shows these suites enabled, but when scanning the website domain with SSL Labs or Zenmap, these suites do not appear:
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
or even these:
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
Any ideas? The website is ASP.NET Framework 4.7, but I doubt that has anything to do with the ciphers. Any help will be appreciated. Thanks.
(Screenshots attached: Zenmap, AWS load balancer, PowerShell.)
Meta: this isn't about programming, and I'm not sure 'how to operate cloud' counts as development, so I authorize deletion if this is voted off-topic.
Your server is irrelevant and nothing you set or change on it will affect client(s).
You don't tell us which AWS load balancer you use, but to operate at the HTTPS level it must be Application or Classic, and in either case, to do HTTPS it must terminate the SSL/TLS protocol. In other words, the LB establishes one SSL/TLS connection with the client, decrypts the incoming request, parses it, and then optionally uses a separate SSL/TLS connection to the backend to re-encrypt; it reverses the process on the response, decrypting from the backend if necessary and re-encrypting to the client. See the line "SSL Offloading" well down in the table on that page; that's a jargon way of saying "the LB does the SSL/TLS for the client, your server does not".
Thus the settings on the LB, and only those, control the SSL/TLS seen by the client(s). ELBSecurityPolicy-2016-08, which is the default (and I'm guessing that might be why you used it), excludes all DHE-RSA ciphersuites. (To avoid confusion, note the AWS webpage uses the OpenSSL names for ciphersuites, where RSA-only key exchange is omitted from the name, whereas Zenmap/nmap uses the RFC names TLS_RSA_with_whatever.) It does allow ECDHE_ECDSA suites, but those will actually be negotiated, and thus seen by a scanner like Zenmap/nmap, only if you configure an ECDSA certificate and key -- which I bet you didn't.
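If you want to verify that from the outside, a small sketch like the following (the hostname is a placeholder) attempts a TLS 1.2 handshake offering only ECDHE-ECDSA suites; with an RSA-only certificate on the LB it should fail with a handshake error:

import socket
import ssl

HOST = "www.example.com"   # placeholder -- the domain served by the load balancer

ctx = ssl.create_default_context()
# Pin to TLS 1.2 so the cipher restriction below actually applies
# (TLS 1.3 suite names do not encode the certificate type).
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256")

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Negotiated:", tls.cipher()[0])
except ssl.SSLError as exc:
    print("No ECDHE-ECDSA suite negotiated:", exc)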

What keeps accessing Google Cloud metadata on my instance

I have a Google Cloud Compute Engine instance running Ubuntu 18. We had Wireshark running while tracking another problem, and we noticed that every minute something accesses the metadata server. Three requests every minute:
GET /computeMetadata/v1/instance/virtual-clock/drift-token?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=False&timeout_sec=60&wait_for_change=True
GET /computeMetadata/v1/instance/network-interfaces/?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=True&timeout_sec=60&wait_for_change=True
GET /computeMetadata/v1/?alt=json&last_etag=XXXXXXXXXXXXXXXX&recursive=True&timeout_sec=77&wait_for_change=True
In all cases, Wireshark shows the source as the IP of my instance and the destination as 169.254.169.254, which is the Google metadata server.
None of the code we have written accesses that server. The first request makes me think this is some Google-specific software accessing the metadata, but I haven't been able to prove that. What is worrisome is that the response to the third request contains SSH keys. Also, every minute seems excessive.
I see another post talking about scripts in /usr/share/google, but I don't have that directory. I do see that google-fluentd is installed. I also see an installed snap for google-cloud-sdk. Could one of those be it? I don't recall installing them and, AFAIK, I am not using them, so if one of them is responsible, what is the harm in uninstalling it?
You do not have a problem to worry about. The metadata server is private to your instance. The Google VM guest environment software and Stackdriver (fluentd) are making requests to the metadata server to get credentials, detect changes (new SSH keys), set the clock, etc.
The IP address 169.254.169.254 is an IPv4 Link Local Address. Only your VM has a route to that network.
Compute Engine Guest Environment
Do not attempt to uninstall the Guest Environment. You can remove Stackdriver, but I do not recommend that. Stackdriver provides logging and monitoring features that are very useful.
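If you want to see for yourself what is being polled, you can query the metadata server directly from inside the VM. A minimal sketch of the same kind of call the guest environment makes (one of the paths from your capture):

import json
import urllib.request

# The metadata server is only reachable from inside the VM.
url = ("http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/"
       "?recursive=true&alt=json")
req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})  # header is mandatory
with urllib.request.urlopen(req, timeout=5) as resp:
    print(json.dumps(json.load(resp), indent=2))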

Traefik Best Practices/Capabilities For Dynamic Vanity Domain Certificates

I'm looking for guidance on the proper tools/tech to accomplish what I assume is a fairly common need.
Suppose there is a web service at https://www.ExampleSaasWebService.com/ and customers can add vanity domains/subdomains to white-label or resell the service under their own domain name. There needs to be a reverse proxy that terminates TLS for the vanity domains and routes the traffic to the statically defined (HTTPS) back-end service on the original, non-vanity domain (there is essentially one "back-end" server somewhere else on the internet, not on the local network, that accepts all incoming traffic no matter the incoming domain). Essentially:
"Customer A" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from example.customerA.com.
"Customer B" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from customerB.com and www.customerB.com.
etc...
I (surprisingly) haven't found anything that does this out of the box, but looking at Traefik (2.x) I'm seeing some promising capabilities and it seems like the most capable tool to accomplish this. Primarily because of the Let's Encrypt integration and the ability to reconfigure without a restart of the service.
I initially considered AWS's native certificate management and load balancing, but I see there is a limit of ~25 certificates per load balancer which seems like a non-starter. Presumably there could be thousands of vanity domains in place at any time.
Some of my Traefik specific questions:
Am I correct in understanding that you can get away without explicitly provisioning a generated list of vanity domains to produce TLS certificates for in the config files? Can they be determined on the fly and provisioned from Let's Encrypt based on the SNI/headers of the incoming requests?
E.g. If a request comes to www.customerZ.com and there is not yet a certificate for that domain name, one can be generated on the fly?
I found this note on the OnDemand flag in the v1.6 docs, but I'm struggling to find the equivalent documentation in the (2.x) docs.
Using AWS services, how can I easily share "state" (config/dynamic certificates that have already been created) between multiple servers to share the load? My initial thought was EFS, but I see a shared EFS file system may not work because file-change watch notifications don't work on NFS-mounted file systems?
It seemed like it would make sense to provision an AWS NLB (with a static IP and an associated DNS record) that delivered requests to a fleet of 1 or more of these Traefik proxies with a universal configuration/state that was safely persisted and kept in sync.
Like I mentioned above, this seems like a common/generic need. Is there a configuration file sample or project that might be a good starting point that I overlooked? I'm brand new to Traefik.
When routing requests to the back-end service, will the original host name still be identifiable somewhere in the headers? I assume it can't remain in the Host header, as the back-end receives requests on an HTTPS hostname as well.
I will continue to experiment and post any findings back here, but I'm sure someone has setup something like this already -- so just looking to not reinvent the wheel.
I managed to do this with Caddy. It's very important that you configure the ask, interval and burst options to avoid possible DDoS attacks.
Here's a simple reverse proxy example:
# https://caddyserver.com/docs/caddyfile/options#on-demand-tls
{
    # General Options
    debug
    on_demand_tls {
        # checks "?domain=" and returns 200 if the domain is allowed to request TLS
        ask "http://localhost:5000/ask/"
        interval 300s
        burst 1
    }
}

# TODO: use env vars for domain name? https://caddyserver.com/docs/caddyfile-tutorial#environment-variables
qrepes.app {
    reverse_proxy localhost:5000
}

:443 {
    reverse_proxy localhost:5000
    tls {
        on_demand
    }
}
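For completeness, here is a rough sketch of what the "ask" endpoint that Caddy polls might look like (the port, path, and allowed-domain set are placeholders; in practice you would look the domain up in your customer database). Caddy only issues a certificate when this endpoint answers 200 for the requested domain:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Placeholder allow-list; a real deployment would query the SaaS customer database.
ALLOWED = {"example.customerA.com", "customerB.com", "www.customerB.com"}

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        domain = parse_qs(parsed.query).get("domain", [""])[0]
        # Caddy calls GET /ask/?domain=<hostname>; 200 allows issuance, anything else blocks it.
        if parsed.path.rstrip("/") == "/ask" and domain in ALLOWED:
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 5000), AskHandler).serve_forever()

In the Caddyfile above the same localhost:5000 app also receives the proxied traffic, so in practice the ask route would just be one more endpoint on the back-end service.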

AWS Lightsail, Windows Server 2016 and SFTP

We are migrating to / experimenting with AWS. We have chosen Lightsail, as our needs are pretty simple and this seems like a great, simple, affordable option. With that said, we have hit an early roadblock: I cannot figure out how to set up SFTP (or alternatively FTPS) to transfer files to the server!
FWIW, I am a total AWS newbie. I have searched fairly extensively, and there are troves of information on how to do this on Lightsail w/ Linux, but nothing on Windows.
On our existing infrastructure we simply set up a third-party SSH server (it's called Bitvise, FYI) and opened port 22 for it (IP-restricted, etc.). We can then connect with our FTP client of choice (whether that be FileZilla or our IDEs, etc.). However, the same approach did not work on our Lightsail instance (no idea why)!
Does anyone have any idea how to do this? Any assistance is hugely appreciated. Thanks!
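One thing worth checking: Lightsail instances have their own firewall, managed on the instance's Networking tab in the Lightsail console, in addition to Windows Firewall, and port 22 may not be open there by default on a Windows instance. A quick sketch to confirm whether the SSH port is reachable at all from outside (the host is a placeholder):

import socket

HOST = "203.0.113.10"   # placeholder -- your Lightsail static IP or DNS name

try:
    with socket.create_connection((HOST, 22), timeout=5) as sock:
        # An SSH server announces itself immediately, e.g. b"SSH-2.0-..."
        print("Port 22 reachable, banner:", sock.recv(64))
except OSError as exc:
    print("Port 22 not reachable:", exc)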

RAILO - Configuring Amazon EC2 firewall to allow CFFTP

I have Railo (Railo 3.1.2.001 final) installed on an Amazon EC2 instance and everything seems to be working fine for the tests I have done. I can connect to MySQL and simple commands work. The applications I am planning to run on it make extensive use of CFFTP to pull files in from clients and process them. The OPEN command works fine and reports "succeeded" in both Active and Passive mode, but when I try to do anything (check for a file, put a file, download) I get: 500 Illegal PORT command.
My thought here is that the Amazon firewall is blocking some ports and something needs to be set up for this to work.
Anyone have any experience with this and can point me in the correct direction?
Thanks in advance,
Jeff
Do you connect from outside Amazon to the instance? If you do, check the security group and allow the IP/port for your application.
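Beyond the security group, "500 Illegal PORT command" is typically what an FTP server returns when active-mode FTP fails because the PORT command advertises an address/port the server cannot connect back to through the firewall/NAT in front of the instance, so forcing passive mode for the transfer commands is usually the first thing to try. A minimal sketch of the difference, shown here with Python's ftplib since the behaviour is the same regardless of client (host and credentials are placeholders):

from ftplib import FTP

ftp = FTP("ftp.example.com", timeout=10)   # placeholder host
ftp.login("user", "password")              # placeholder credentials

# Passive mode: the client opens the data connection, so the remote server
# never has to connect back through the EC2 firewall.
ftp.set_pasv(True)
print(ftp.nlst())                          # directory listing over the data channel

# Active mode (ftp.set_pasv(False)) asks the server to connect back to the
# client via the PORT command, which a firewall/NAT in front of the instance
# will usually break -- producing errors like the one above.
ftp.quit()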