Self-hosted VPN with PiHole on AWS

I'm trying to create a setup where all of my (mobile and home) traffic is encrypted and ad-blocked. The idea is this: all of my traffic, when using the VPN client on my phone or PC, is routed through a custom OpenVPN setup running on an AWS EC2 instance. On its way out of the EC2 instance towards the public internet, I want a PiHole or equivalent DNS sinkhole filtering requests for blacklisted sites.
It's important that this is configured in such a way that I'm not running a public/open DNS resolver - only traffic coming through the OpenVPN tunnel (and therefore coming from an OpenVPN client that is using one of my keys) should be allowed.
Is this possible? Am I correctly understanding the functionality of all the parts?
How do I set this up? What concepts do I need to understand to make this work?

This tutorial seems like a good place to start. It uses Lightsail rather than EC2, but if you aren't planning to scale this up much, that might be simpler and cheaper.
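Whichever guide you follow, the "no open resolver" requirement mostly comes down to never exposing port 53 to the internet. Below is a minimal sketch using boto3 (the security group ID, region and admin IP are placeholders, and 1194/udp is simply OpenVPN's default port): only OpenVPN and SSH are reachable from outside, while DNS queries reach the PiHole inside the encrypted tunnel.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[
        # OpenVPN handshake from anywhere - clients still need a valid key/cert.
        {"IpProtocol": "udp", "FromPort": 1194, "ToPort": 1194,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "OpenVPN clients"}]},
        # SSH restricted to your own address for administration.
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "admin only"}]},
    ],
)

# No inbound rule for port 53: DNS queries arrive inside the tunnel. As a
# second line of defence, configure PiHole to listen only on the VPN interface.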

Related

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all of the HTTP requests, including user authentication, storing relevant user data and serving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in golang) which are used by the "main" server to perform certain tasks but also expose some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine with an apache2 in front which directs the requests. So a user can reach each server via apache2 through its respective subdomain, but the servers also communicate with one another. When doing so, we currently just send the request to localhost:{PORT} since they all run on the same machine. They also all use the same MySQL server running on that machine.
We are now looking to make this more production-ready and to deploy it to AWS. The servers are currently not containerized, so a solution that requires containerization (ECS? K8s?) would most likely mean more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Set up the routing of requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you work inside a VPC (a default VPC is created for you in each region when the account is set up).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and no public access, and put them behind an ELB (Elastic Load Balancer). The ELB takes all the incoming traffic and distributes the load across the servers based on the endpoint.
The EC2 instances won't have public IPs, but the VPC (Virtual Private Cloud) allows your services to communicate with each other via their private IPs (something like 172.31.x.x). You can also give domain/subdomain names to these private IP addresses using the Route 53 service of AWS.
For example, you launch 2 servers:
Your Java application - on 172.31.1.1 (you name it xyz.myjavaapp.something.com in Route 53)
Your Angular application - on 172.31.1.2
The Angular application can reach your Java application at 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080.
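As a concrete sketch of the Route 53 piece (boto3, with a hypothetical private hosted zone ID; the name and IP match the example above):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # hypothetical private hosted zone
    ChangeBatch={
        "Comment": "Internal name for the Java application server",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "xyz.myjavaapp.something.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "172.31.1.1"}],
            },
        }],
    },
)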
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy a SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups so that only your servers can access it, and don't leave it open to the public internet.
An example of a VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
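A sketch of that rule with boto3 (the security group ID is a placeholder; port 3306 assumes a MySQL engine on RDS):

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234def567890",  # hypothetical RDS security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "IpRanges": [{"CidrIp": "172.31.0.0/16",
                      "Description": "VPC-internal servers only"}],
    }],
)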
Q. Set up the routing of requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example: if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, that split can be easily managed by the ALB's routing rules.
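For illustration, here is how that path-based routing might look with boto3 (the listener and target group ARNs are placeholders; one target group per endpoint is assumed):

import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:region:acct:listener/app/example/123/456"  # placeholder

# /getData -> target group containing the 2 servers
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/getData*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/getdata/abc"}],
)

# /doSomethingElse -> target group containing the single server
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/doSomethingElse*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/else/def"}],
)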
I would suggest you use at least two servers for critical services and load-balance them behind an ELB in a production environment.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. There is a learning curve, but the benefits outweigh it.
Feel free to ask questions.

VPN on EC2 to Heroku server

Hi there networking experts,
I have a Rails app hosted on Heroku, and I am looking to set up a VPN tunnel on a separate EC2 instance which will connect with a 3rd party.
3rd party <----(VPN tunnel)----> EC2 <----(HTTP/SSH)---> Heroku
Best case scenario would have been to set up the tunnel directly on our Heroku instance, but that doesn't seem possible according to some of these answers.
With my limited knowledge, I figured that the next best thing would be to set up a 'middle-man' EC2 instance with the capability to listen to the VPN tunnel as well as send HTTP requests to our Heroku server over SSH. The most important consideration in this integration would be security. I would like to encrypt end-to-end, and only decrypt on our Heroku server.
What would be the best practice for achieving something like this, if possible at all?
Thank you!
AWS has a managed VPN offering.
You configure a customer gateway for the client side, attach a virtual private gateway to your VPC, and the VPN connects the two. You can then set up routes which will allow them to connect securely to any services running inside your VPC.
A VPN in AWS can use static or dynamic routing. Static is generally simpler, especially if there is a limited IP range on the client side.
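A rough boto3 sketch of those pieces (all IDs, IPs and CIDRs below are placeholders; static routing is assumed, as suggested above):

import boto3

ec2 = boto3.client("ec2")

# The 3rd party's public VPN endpoint becomes the customer gateway.
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="198.51.100.20", BgpAsn=65000)

# Virtual private gateway attached to your VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpcId="vpc-0123456789abcdef0",
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)

# The VPN connection itself, using static routing for simplicity.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)

# Static route for the network behind the 3rd party's end of the tunnel.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
    DestinationCidrBlock="192.168.100.0/24",
)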

Multiple server applications, one public IP on Amazon EC2

I have a single Windows Amazon EC2 instance and one public IP. The instance is running multiple web server EXEs which all sit on port 80. I want to have different domain names which I want to point to each server. On my old dedicated server I achieved this simply by having different public IPs, but with Amazon EC2 I want to keep to just one public IP.
I am not using IIS, Apache, etc., otherwise life would be a lot simpler (I would simply bind hostnames accordingly). The web server executables perform unusual "utility" tasks as part of a range of other websites, but still need to be hosted on port 80. There is no configuration other than the address and port to bind to.
I have set up several private IPs and bound each server application to one of them. Is it possible to leverage some of the Amazon networking products to direct the traffic to the correct private IP? For example, I have tried setting up private DNS using Amazon Route 53, and internally at least this seems to point to the correct servers - but not (perhaps logically) when I try to access the site externally.
In the absence of any other solutions I decided to take the blunt-hammer approach and use a reverse proxy. The downside is that my servers now see every user IP as 127.0.0.1, which is less than ideal, but better than nothing at all.
For my reverse proxy I used Redbird (which runs on Node.js), but Nginx may also be an option. Both are free / open source.
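Redbird and Nginx both route on the Host header. For illustration only, here is a minimal Python sketch of the same idea (the hostnames and private IPs are made up); it also forwards the real client address in X-Forwarded-For, which is how the 127.0.0.1 downside can be worked around if the backend servers can read request headers:

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

# Host header -> backend EXE bound to its own private IP on port 80.
BACKENDS = {
    "app1.example.com": "http://172.31.1.11",
    "app2.example.com": "http://172.31.1.12",
}

class HostRouter(BaseHTTPRequestHandler):
    def do_GET(self):  # a real proxy would handle all methods, not just GET
        host = self.headers.get("Host", "").split(":")[0]
        backend = BACKENDS.get(host)
        if backend is None:
            self.send_error(502, "Unknown host")
            return
        # Pass the original client address through to the backend.
        req = Request(backend + self.path,
                      headers={"X-Forwarded-For": self.client_address[0]})
        with urlopen(req) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind the proxy to the private IP that is mapped to the public IP.
    ThreadingHTTPServer(("172.31.1.10", 80), HostRouter).serve_forever()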

Setting up a loadbalancer behind a proxy server on Google Cloud Compute engine

I am looking to build a scalable REST webservice on the Google Cloud Compute Engine but have a couple of requirements that I am not sure how best to implement.
Structure so far:
2 instances running a REST webservice connected to a MySQL cloud database
(the number of instances will scale up in the future)
A load balancer to split requests between the two or more instances.
This part is fine.
What I need next is that outbound traffic (POST requests from the instances to an external webservice) must come from a single IP address. I assume these requests cannot route back through the public IP of the load balancer?
I get the impression the solution is to route all requests from the instances through a 3rd instance running Squid. Is this the best way to do this? (side question)
Now to my main question:
I have been reading about ApiAxle which sounds like a nice proxy for Web Services, giving some good access control, throttling and reporting capabilities.
Can I have an instance running ApiAxle in front of a Google Cloud load balancer, which spreads the requests from the proxy across the backend instances that do the leg work and feeds the responses back through the ApiAxle proxy, so that everything goes through a single IP visible to clients using the API (letting me add new instances to the pool for capacity)?
And would the proxy be much of a bottleneck?
Thanks in advance.
/Dave
(New to this, so sorry if it's a stupid question - I can't find anything like this on the web.)
Sounds like you need to NAT your outbound traffic so it appears to come from one IP address. You need to do that via a third instance, since the Google load-balancing stack doesn't provide this; GCLB only handles inbound connections on the load-balanced IP.
You can set up source NAT using advanced routing, or you can use a proxy as you suggested.
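If you go the source-NAT route, the third instance essentially just needs IP forwarding plus a MASQUERADE rule. A rough sketch, using Python to drive the standard Linux tools (it assumes the NAT instance's outbound interface is eth0, that the instance was created with IP forwarding allowed, and that it runs with root privileges):

import subprocess

def run(cmd):
    # Echo then execute each command; raises if a command fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Let the kernel relay packets arriving from the backend instances.
run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

# Rewrite the source address of anything leaving eth0 to this instance's own
# address, so the external webservice sees a single origin IP.
run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", "eth0", "-j", "MASQUERADE"])

The backend instances then need a route (or default route) that points at this NAT instance.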

Open app from AWS instance

First of all, let me say that I'm new to AWS and don't know much about servers, but I'm trying to learn!
I have been given access to an AWS instance. I can access the server using SSH; it's an Ubuntu server.
There is an application deployed under /var/www/. I also have the public IP of the server, but when I try to access that public IP nothing loads, and I can't ping the IP either.
Am I doing something wrong? I'll note that I don't have much experience with servers.
You will need to check the security group rules associated with your EC2 instance: HTTP (and any other required protocols, such as ICMP if you want ping to work) needs to be opened there. Also check that the subnet routes out through an internet gateway correctly. Good luck - it's a steep but fast learning curve with AWS.
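For reference, opening HTTP and ping (ICMP echo) on the instance's security group can be done from the console or, as a sketch, with boto3 (the group ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-00aa11bb22cc33dd4",  # hypothetical security group of the instance
    IpPermissions=[
        # HTTP so the app deployed under /var/www/ is reachable in a browser.
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        # ICMP echo request (type 8, any code) so the instance answers ping.
        {"IpProtocol": "icmp", "FromPort": 8, "ToPort": -1,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)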