Multiple server applications, one public IP on Amazon EC2

I have a single Windows Amazon EC2 instance with one public IP. The instance runs multiple web server EXEs, all of which sit on port 80, and I want a different domain name to point to each server. On my old dedicated server I achieved this simply by having multiple public IPs, but on Amazon EC2 I want to stick to just one public IP.
I am not using IIS, Apache, etc., otherwise life would be a lot simpler (I would simply bind hostnames accordingly). The web server executables perform unusual "utility" tasks as part of a range of other websites, but still need to be hosted on port 80. The only configuration each one offers is the address to bind to and the port number.
I have set up several private IPs and bound each server application to one of them. Is it possible to leverage some of the Amazon networking products to direct traffic to the correct private IP? For example, I tried setting up private DNS using Amazon Route 53, and internally at least this seems to point to the correct servers, but not (perhaps logically) when I try to access the sites externally.

In the absence of any other solutions, I decided to take the blunt-hammer approach and use a reverse proxy. The downside is that my servers now see every user's IP as 127.0.0.1, which is less than ideal, but better than nothing at all.
For my reverse proxy I used Redbird (a node.js library), but Nginx would also be an option. Both are free and open source.
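A minimal sketch of the Redbird setup, assuming the utility EXEs can be rebound from port 80 to loopback ports (they only take a bind address and port, so this should be possible; the hostnames and ports below are placeholders):

    // reverse-proxy.js - run with node after "npm install redbird"
    const redbird = require('redbird');

    // Redbird takes over the single public-facing port 80.
    const proxy = redbird({ port: 80 });

    // Route each hostname to the matching utility server.
    proxy.register('util-one.example.com', 'http://127.0.0.1:8081');
    proxy.register('util-two.example.com', 'http://127.0.0.1:8082');

Redbird, like most reverse proxies, can pass the original client address along in an X-Forwarded-For header, which mitigates the 127.0.0.1 problem, but only if the backend knows to read it.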

Related

AWS EC2 for QuickBooks

AWS and network noob. I've been asked to migrate QuickBooks Desktop Enterprise to AWS. This seems easy in principle, but I'm finding a lot of conflicting and confusing information on how best to do it. The requirements are:
Set up a Windows Server using AWS EC2
QuickBooks will be installed on the server, including a file share that users will map to.
Configure VPN connectivity so that the EC2 instance appears and behaves as if it were on prem.
Allow additional off site VPN connectivity as needed for ad hoc remote access
Cost is a major consideration, which is why I am doing this instead of getting someone who knows this stuff.
The on-prem network is very small - one Win2008R2 server (I know...) that hosts QB now and acts as a file server, 10-15 PCs/printers and a Netgear Nighthawk router with a static IP.
My approach was to first create a new VPC with a private subnet to contain the EC2 instance, and to set up a site-to-site VPN connection with the Nighthawk for the on-prem users. I'm unclear on whether I also need security group rules that only allow inbound traffic (UDP/TCP file sharing ports) from the static IP, or if the VPN negates that need.
I'm trying to test this one step at a time and have an instance set up now. I am remote and am using my current IP address in the security group rules for the test (no VPN yet). I set up the file share but am unable to access it from my computer. I can RDP and ping the instance, and have turned on the firewall rules to allow NetBIOS and SMB, but still nothing. I just read another thread that says I need to set up a storage gateway, but before I do that I wanted to check whether that is really required or if there's another/better approach. I have to believe this is a common requirement, but I seem to be missing something.
This is a bad approach for QuickBooks. Intuit explicitly recommends against using QuickBooks with a file share via VPN:
Networks that are NOT recommended
Virtual Private Network (VPN): connects computers over long distances via the Internet using an encrypted tunnel.
From here: https://quickbooks.intuit.com/learn-support/en-us/configure-for-multiple-users/recommended-networks-for-quickbooks/00/203276
The correct approach here is to host QuickBooks on the EC2 instance and let people RDP (Remote Desktop) into the EC2 Windows server to use it. Do not let them install QuickBooks on their client machines and access the QuickBooks data file over the VPN link; make them RDP directly to the QuickBooks server and work with it there.
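To the security-group question: with this approach only RDP needs to be reachable, and it can be locked to the office's static IP. A hedged sketch using the AWS SDK for JavaScript v3, where the region, group ID, and office IP are placeholders:

    // restrict-rdp.ts - allow RDP (TCP 3389) only from the office static IP.
    import { EC2Client, AuthorizeSecurityGroupIngressCommand } from "@aws-sdk/client-ec2";

    const client = new EC2Client({ region: "us-east-1" }); // placeholder region

    async function main() {
      await client.send(new AuthorizeSecurityGroupIngressCommand({
        GroupId: "sg-0123456789abcdef0", // placeholder security group ID
        IpPermissions: [{
          IpProtocol: "tcp",
          FromPort: 3389,
          ToPort: 3389,
          // Placeholder for the Nighthawk's static IP; /32 means that one address only.
          IpRanges: [{ CidrIp: "203.0.113.10/32", Description: "Office static IP" }],
        }],
      }));
    }

    main().catch(console.error);

With no file share exposed over the internet, the SMB/NetBIOS rules from the original plan become unnecessary.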

Self-hosted VPN with Pi-hole on AWS

I'm trying to create a setup where all of my (mobile and home) traffic is encrypted and ad-blocked. The idea is a setup wherein all of my traffic, when using the VPN client on my phone or PC, is routed through a custom OpenVPN setup running on an AWS EC2 instance. On its way out of the EC2 instance towards the public internet, I want a Pi-hole or equivalent DNS sinkhole filtering requests for blacklisted sites.
It's important that this is configured in such a way that I'm not running a public/open DNS resolver: only traffic coming through the OpenVPN tunnel (and therefore from an OpenVPN client using one of my keys) should be allowed.
Is this possible? Am I correctly understanding the functionality of all the parts?
How do I set this up? What concepts do I need to understand to make this work?
This tutorial seems like a good place to start. It uses Lightsail rather than EC2, but if you aren't planning to scale this up much, that might be simpler and cheaper.
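On the instance's firewall, the "no open resolver" requirement mostly means exposing only the OpenVPN port to the world and never opening DNS publicly; queries from VPN clients arrive inside the encrypted tunnel, so they never hit the security group at all. A sketch with the AWS SDK for JavaScript v3 (region and group ID are placeholders):

    // openvpn-only.ts - expose UDP 1194 and nothing else.
    import { EC2Client, AuthorizeSecurityGroupIngressCommand } from "@aws-sdk/client-ec2";

    const client = new EC2Client({ region: "us-east-1" }); // placeholder region

    async function main() {
      await client.send(new AuthorizeSecurityGroupIngressCommand({
        GroupId: "sg-0123456789abcdef0", // placeholder security group ID
        IpPermissions: [{
          // OpenVPN handshake: open to the world; clients still need your keys.
          IpProtocol: "udp",
          FromPort: 1194,
          ToPort: 1194,
          IpRanges: [{ CidrIp: "0.0.0.0/0", Description: "OpenVPN clients" }],
        }],
        // Port 53 is deliberately NOT opened: Pi-hole only ever answers
        // queries that have already been decrypted out of the tunnel, so it
        // is never reachable as a public resolver.
      }));
    }

    main().catch(console.error);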

Static IP to access GCP Machine Learning APIs via gRPC stream over HTTP/2

We're living behind a corporate proxy/firewall that can only consume static IP rules, not FQDNs.
For our project, we need to access the Google Speech-to-Text API: https://speech.googleapis.com. From outside the corporate network, we use a gRPC stream over HTTP/2 to do that.
The ideal scenario looks like:
Corporate network -> static IP in GCP -> forwarded gRPC stream to speech.googleapis.com
We tried creating a global static external IP, but failed when configuring the load balancer, as it can only connect to VMs, not to APIs.
Alternatively, we were thinking of using the output of nslookup speech.googleapis.com to get the IP address ranges and updating them daily, though that seems pretty 'dirty'.
I'm aware we can configure a compute engine resource / VM and forward the traffic, but this really doesn't seem like an elegant solution either. Preferably, we can achieve that with existing GCP networking components.
Many thanks for any pointers!
Google does not publish a CIDR block for you to use, so you will have daily grief trying to whitelist IP addresses. Most of Google's API services are fronted by the Global Frontend (GFE), which routes traffic based on HTTP Host headers rather than IP addresses, so IP-based rules will break.
Trying to look up the IP addresses is also an issue. DNS does not have to return all IP addresses for name resolution in every call, which means a DNS lookup might return one set of addresses now and a different set an hour from now. This is one example of the grief you will cause yourself by whitelisting IP addresses.
Solution: Talk to your firewall vendor.
Found a solution thanks to clever networking engineers from Google, posting here for future reference:
You can use a CNAME in your internal DNS to point *.googleapis.com to private.googleapis.com. In public DNS that record resolves to 199.36.153.8/30 (four IP addresses), a range that is not reachable from the public internet, only through a VPN tunnel or Cloud Interconnect.
So if setting up a VPN tunnel to a project in GCP is possible (and it should be quite easy, see https://cloud.google.com/vpn/docs/how-to/creating-static-vpns), then this should solve the problem.
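If the internal zone happens to live in Cloud DNS (as a private zone), the override can even be scripted; a sketch using the @google-cloud/dns client, where the zone name is a placeholder and the record values follow the private.googleapis.com documentation:

    // private-api-cname.ts - point speech.googleapis.com at private.googleapis.com.
    import { DNS } from "@google-cloud/dns";

    const dns = new DNS();
    const zone = dns.zone("internal-apis"); // placeholder private zone name

    async function main() {
      const cname = zone.record("cname", {
        name: "speech.googleapis.com.",   // trailing dot: fully qualified
        data: "private.googleapis.com.",
        ttl: 300,
      });
      await zone.addRecords(cname);
    }

    main().catch(console.error);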

How many domains can be associated with an EC2 instance simultaneously?

How many domains can be associated simultaneously with an EC2 instance running Windows Server 2012 with SQL Server Web edition?
We have 5*n domain names to host on these servers, where n is the number of versions we run in parallel.
The question isn't very clear, but with just one Elastic IP you can point a practically unlimited number of domain names at that IP. Then use Apache VirtualHosts, or the IIS equivalent, to serve the websites. Just point the required DNS records to the Elastic IP associated with the EC2 instance.
There are many possible limitations to this, like storage, memory, SSL certificates on the same IP, etc.
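The reason one IP goes so far is that HTTP clients send the target hostname in the Host header, and the web server picks the site from it; name-based virtual hosting is just that lookup. A bare-bones illustration in node.js (hostnames are placeholders; in practice Apache or IIS does this for you):

    // host-routing.ts - pick a response based on the HTTP Host header.
    import * as http from "http";

    const sites: Record<string, string> = {
      "www.example.com": "main site\n",   // placeholder hostnames
      "api.example.com": "api site\n",
    };

    http.createServer((req, res) => {
      // The Host header may include a port, e.g. "www.example.com:80".
      const host = (req.headers.host ?? "").split(":")[0];
      res.end(sites[host] ?? "unknown host\n");
    }).listen(80);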
No limit, as long as your server doesn't fall over.

Keeping some web services private and others public

Not sure of the best way of achieving something...
We've got a number of web services running on asp.net v3.5 on a couple of web servers. They all talk nicely to each other and to the public internet.
Now we'd like to keep some of these web services 'private', i.e. not available to the public internet, whilst leaving others accessible.
AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80, so it would drop any requests from the internet to the private web services. Sorted... I think?
Is this idea a reasonable solution? Or is there some drop dead simple IIS mechanism that I ought to use?
Thanks
SAL
You can restrict access to a site via a blacklist/whitelist in the IIS Control Panel (Directory Security tab). That's what I've done in the past to filter by IP address.
"AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80, so it would drop any requests from the internet to the private web services."
This is exactly the approach we take. We also have a VPN so that employees can access the site if they're working remotely.
You can put IP access restrictions onto any site/app you want. We have several internal web services that only allow access on the 10.x.x.x range for example.
It really depends on how secure you want the internal web services.
If you have sensitive data on the internal web services, you need to host them on a completely different server, even if you block outside access by assigning them a different port.
However, if you don't have an issue with sensitive data then assigning a different port, or IP-address, for internal and external users is a good way to go.
Besides using a separate port, you could restrict who may call the service (using IP address filtering, for example).
You could also require authentication from the caller of a web service, which should be easy to configure if you use Active Directory.
In any case, if you have a 'public' web service that is private as well, you may want to 'publish' it twice: once for public use (with a nice external URL) and once for internal use, so that your other internal services and/or clients do not have to go via the 'external' URL. You can then configure restrictions (client IP, authentication, etc.) differently for the different endpoints of the same service.
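IIS covers the IP restriction natively, but to show the shape of the idea, here is a rough application-level sketch of an internal-only check (the 10.x.x.x test mirrors the range mentioned above; the port is a placeholder):

    // private-service.ts - refuse callers outside the internal 10.x.x.x range.
    import * as http from "http";

    http.createServer((req, res) => {
      const ip = req.socket.remoteAddress ?? "";
      // Node often reports IPv4 callers as IPv6-mapped, e.g. "::ffff:10.1.2.3".
      const v4 = ip.replace(/^::ffff:/, "");
      if (!v4.startsWith("10.")) {
        res.statusCode = 403;
        res.end("forbidden\n");
        return;
      }
      res.end("private service response\n");
    }).listen(8080); // placeholder internal port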