Keeping some web services private and others public - web-services

Not sure of the best way of achieving something...
We've got a number of web services running on asp.net v3.5 on a couple of web servers. They all talk nicely to each other and to the public internet.
Now we'd like to keep some of these web services 'private', i.e. make them unavailable to the public internet, whilst leaving others accessible.
AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80 so would drop any requests from the internet to the private web services. Sorted... I think?
Is this idea a reasonable solution? Or is there some drop dead simple IIS mechanism that I ought to use?
Thanks
SAL

You can restrict access to a site via a blacklist/whitelist in the IIS control panel (Directory Security tab). That's what I've done in the past to filter by IP address.

AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80 so would drop any requests from the internet to the private web services.
This is exactly the approach we take. We also have a VPN so that employees can access the site if they're working remotely.

You can put IP access restrictions onto any site/app you want. We have several internal web services that only allow access on the 10.x.x.x range for example.
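If you ever need to enforce the same kind of check in application code rather than in IIS, a minimal sketch in Python (the 10.0.0.0/8 range mirrors the example above; everything else is illustrative):
# Minimal application-level IP filter; only the 10.0.0.0/8 range comes from the
# answer above, the rest is an illustrative sketch.
import ipaddress

INTERNAL_RANGE = ipaddress.ip_network("10.0.0.0/8")

def is_internal_caller(remote_addr: str) -> bool:
    # True only when the caller's address falls inside the internal range.
    try:
        return ipaddress.ip_address(remote_addr) in INTERNAL_RANGE
    except ValueError:
        # Malformed address: treat the caller as external.
        return False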

It really depends on how secure you want the internal web services.
If you have sensitive data on the internal web services, you need to have them on a completely different server, even if you don't allow access to them from the outside by assigning them a different port.
However, if you don't have an issue with sensitive data then assigning a different port, or IP-address, for internal and external users is a good way to go.

Besides the port, you could restrict who is allowed to call the service (using IP address filtering, for example).
You could also require authentication from the caller of a web service, which should be easy to configure if you use Active Directory (see the sketch below).
In any case, if you have a 'public' web service that is also used privately, you may want to 'publish' it twice: once for the public (with a nice external URL) and once for internal use, so that your other internal services and/or clients do not have to go via the 'external' URL. You can then configure the restrictions (client IP, authentication, etc.) differently for each published endpoint of the same service.
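As an illustration of the authentication point, here is a hypothetical client-side call to a service that has been locked down to Windows/Active Directory authentication, using the third-party requests-ntlm package; the URL and credentials are made up:
# Hypothetical caller of an AD-protected web service (URL and account are made up).
import requests
from requests_ntlm import HttpNtlmAuth  # third-party package: requests-ntlm

response = requests.get(
    "https://internal.example.com/PrivateService.asmx/GetData",
    auth=HttpNtlmAuth("DOMAIN\\serviceuser", "password"),
)
response.raise_for_status()
print(response.text)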

Related

Restrict access to some endpoints on Google Cloud

I have a k8s cluster that runs my app (GCE as an ingress) and I want to restrict access to some endpoints ("/test/*") while keeping all other endpoints publicly available. I don't want to restrict by specific IPs, so that I keep some flexibility and can reach the restricted endpoints from any device, like my phone.
I considered IAP, but it restricts access to the whole service, whereas I only need it for some endpoints, so it seems like more than I need.
I have thought about a VPN, but I don't understand how to set it up, or whether it would even solve my problem.
I have heard about using a proxy, but it seems to me it can't fulfill my requirements (?)
The solution doesn't have to be super extensible or generic, because only a few people will use this feature.
I want the solution to be light, flexible, simple, and able to fulfill my needs at the same time. So if the existing solutions are complex, I would consider restricting access by IP, but I worry about how viable the restricted-IPs approach is in real life. In the sense: would it be too cumbersome to add the IP of my phone every time I change my location, and so on?
You can use API Gateway for that. It approximately meets your needs, though it's not especially flexible or simple.
But it's fully managed and can scale with your traffic.
For a more convenient solution, you have to use a software proxy (or API gateway), or go to the bank and use Apigee.
I set up OpenVPN.
It was a somewhat tedious process because of various small obstacles, but I encourage you to do the same.
Get a host (machine, cluster, or whatever) with a static IP.
Set up an OpenVPN instance. I use the Docker image https://hub.docker.com/r/kylemanna/openvpn/ (follow the instructions, but update the host with -u YOUR_IP).
Ensure that the VPN setup works from your local machine.
On the routes you need to protect, limit IP access to the VPN's address. Nginx example:
allow x.x.x.x;
deny all;
Make sure that nginx sees the client IP correctly. I had an issue where nginx was reporting the load balancer's IP as the client IP, so I had to mark it as a trusted proxy. See http://nginx.org/en/docs/http/ngx_http_realip_module.html
Test the setup.

How to direct traffic of users of certain countries to the right server?

The question is really about alternatives to Amazon Route 53 and Google Cloud DNS for directing traffic. As far as I understand from the descriptions of these services, they only work together with their providers' other services.
I'm trying to find a service that will let you determine the country of a user and, if necessary, redirect all of their traffic to the correct server.
For example, I have two servers: the main server with the application, and a proxy server. By default, I want to direct all users straight to the main server, but users from certain countries I want to route through the second one, the proxy server.
Please tell me how best to implement this. Perhaps you have better options for the implementation?
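For illustration only, the per-country routing decision being described could look roughly like this, sketched in Python with the MaxMind geoip2 library; the database path, country codes, and server URLs are all made-up placeholders:
# Hypothetical sketch of the per-country routing decision described above.
# Assumes a local MaxMind GeoLite2 country database; all names are illustrative.
import geoip2.database

PROXIED_COUNTRIES = {"XX", "YY"}  # replace with the ISO country codes you care about

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def pick_backend(client_ip: str) -> str:
    # Look up the caller's country and decide which server should handle them.
    country = reader.country(client_ip).country.iso_code
    if country in PROXIED_COUNTRIES:
        return "https://proxy.example.com"  # route through the proxy server
    return "https://app.example.com"        # default: main application server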

Streamlining Azure set up with app and DB on separate VMs

A Django app of mine (with a postgresql backend) is hosted over two separate Ubuntu VMs. I use Azure as my infrastructure provider, and the VMs are classic. Both are part of the same resource group, and map to the same DNS as well (i.e. they both live on xyz.cloudapp.net). Currently, I have the following database url defined in my app's settings.py:
DATABASE_URL = 'postgres://username:password@public_ip_address:5432/dbname'
The DB port 5432 is publicly open, and I'm assuming the above DB URL implies the web app is connecting to the DB as if it were on a remote machine. If so, that's not best practice: it has security repercussions, not to mention it adds anything from 20-30 milliseconds to a hundred milliseconds of latency to each query.
My question is, how does one program such a Django+postgres setup on Azure such that the database is only exposed on the private network? I want to keep the two-VM set up intact. An illustrative example would be nice - I'm guessing I'll have to replace the public ip address in my settings.py with a private IP? I can see a private IP address listed under Virtual machines(classic) > VMname > Settings > IP Addresses in the Azure portal. Is this the one to use? If so, it's dynamically assigned, thus wouldn't it change after a while? Looking forward to guidance on this.
In Classic (ASM) mode, the Cloud Service is the network security boundary and the Endpoints with ACLs are used to restrict access from the outside Internet.
A simple solution to secure access would be:
Ensure that the DB port (5432) is removed from the cloud service endpoint (to avoid exposing it to the entire Internet).
Get a static private IP address for the DB server.
Use the private IP address of the DB server in the connection string (see the example below).
Keep the servers in the same Cloud Service.
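For illustration, once the DB server has a static private address, the settings.py entry from the question would simply point at that address instead of the public one (10.0.0.5 below is purely an example):
# settings.py: same connection string as before, but using the DB VM's static
# private IP (10.0.0.5 is illustrative).
DATABASE_URL = 'postgres://username:password@10.0.0.5:5432/dbname'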
You can find detailed instructions here:
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-static-private-ip-classic-pportal/
This should work. But for future implementations, I would recommend the more modern Azure Resource Manager (ARM) model, where you can benefit from many nice new features, including virtual networks (VNets) that give you more fine-grained security.

Multiple server applications, one public IP on Amazon EC2

I have a single Windows Amazon EC2 instance and one public IP. The instance is running multiple web server EXEs which all sit on port 80. I want to have different domain names, each pointing to one of those servers. On my old dedicated server I achieved this simply by having different public IPs, but with Amazon EC2 I want to keep to just one public IP.
I am not using IIS, Apache, etc., otherwise life would be a lot simpler (I would simply bind hostnames accordingly). The web server executables perform unusual "utility" tasks as part of a range of other websites, but still need to be hosted on port 80. There is no configuration other than the address to bind to and the port number.
I have set up several private IPs and bound each server application to one of them. Is it possible to leverage some of the Amazon networking products to direct the traffic to the correct private IP? For example, I have tried setting up a private DNS using Amazon Route 53, and internally at least this seems to point to the correct servers, but not (perhaps logically) when I try to access the site externally.
In the absence of any other solutions, I decided to solve this with the blunt-hammer approach and use a reverse proxy. The downside is that my servers now see every user's IP as 127.0.0.1, which is less than ideal, but better than nothing at all.
For my reverse proxy I used Redbird (uses node.js) but Nginx may also be an option. Both are free / open source.
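One way to soften the 127.0.0.1 downside, assuming the proxy can be configured to add an X-Forwarded-For header (nginx can, and Redbird is built on node-http-proxy, which supports it), is to have the backend read that header. A minimal Python sketch of the idea, not the actual "utility" servers from the question:
# Illustrative backend that recovers the real client address from
# X-Forwarded-For when the reverse proxy supplies it.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Behind the proxy, self.client_address is the proxy itself (127.0.0.1);
        # the first entry in X-Forwarded-For is the original caller, if sent.
        forwarded = self.headers.get("X-Forwarded-For")
        client_ip = forwarded.split(",")[0].strip() if forwarded else self.client_address[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(client_ip.encode())

HTTPServer(("127.0.0.1", 8081), Handler).serve_forever()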

Accessing Windows Network Share from Web Service Securely

We have developed a RESTful Web Service which requires access to a Network share in order to read and write files. This is a public facing Web Service (running over SSL) which requires staff to log on using an assigned user name and password.
This web service will be running in a DMZ. It doesn't seem "right" to access a Network Share from a DMZ. I would venture a guess that the "secure" way to do this would be to provide another service inside the domain which only talks to our Web Service. That way, if anyone wanted to exploit it, they would have to find a way to do it via the Web Service, not through known system APIs.
Is my solution "correct"? Is there a better way?
Notes:
the Web Service does not run under IIS.
the Web Service currently runs under an account with access to the Network Share and access to a SQL database.
the Web Service is intended only for designated staff, not the public.
I'm a developer, not an IT professional.
What about some kind of VPN to reach the internal resources? There are some good solutions for this, and opening network shares to the Internet seems too big a risk to take.
That aside, when an attacker breaks into your DMZ host through those web services, he can break into your internal server using the same API, unless you can afford to create two completely different solutions.
When accessing the file servers from the DMZ directly, you would limit these connections using a firewall, so even after breaking into your DMZ host the attacker cannot do "everything", but can only read from (write to?) those servers.
I would suggest #2