How to direct traffic from users in certain countries to the right server? - amazon-web-services

This question is really about alternatives to Amazon Route 53 and Google Cloud DNS (traffic direction). As far as I understand from the descriptions of these services, they only work together with the rest of their respective platforms.
I'm trying to find a service that will let me determine a user's country and, if necessary, redirect all of their traffic to the correct server.
For example, I have two servers: the main server with the application and a proxy server. By default, I want to send all users directly to the main server, but route users from certain countries through the second one, the proxy server.
Please tell me the best way to implement this. Perhaps there are better approaches altogether?
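To make this concrete, here is the kind of thing I have in mind, sketched at the HTTP layer with nginx and its GeoIP module (the country codes and backend addresses are placeholders; I have not tested this exact config):
# http{} context
geoip_country /usr/share/GeoIP/GeoIP.dat;   # MaxMind country database

upstream main_app  { server 10.0.0.10; }    # the main application server
upstream via_proxy { server 10.0.0.20; }    # the proxy server

map $geoip_country_code $backend {
    default main_app;                # everyone goes straight to the app...
    DE      via_proxy;               # ...except selected countries (placeholder code)
}

server {
    listen 80;
    location / {
        proxy_pass http://$backend;  # works because $backend names a defined upstream
    }
}
Would an approach along these lines be reasonable, or is there a DNS-level service better suited to this?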

Related

Restrict access to some endpoints on Google Cloud

I have a k8s cluster that runs my app (GCE as an ingress), and I want to restrict access to some endpoints ("/test/*") while keeping all other endpoints publicly available. I don't want to restrict to specific IPs, so that I keep some flexibility and can reach the restricted endpoints from any device, like a phone.
I considered IAP, but it restricts access to the whole service when I need it only for some endpoints, so it's more than I need.
I have thought about a VPN, but I don't understand how to set one up, or whether it would even solve my problem.
I have heard about proxies, but it seems to me they can't fulfill my requirements(?)
The solution doesn't have to be especially extensible or generic, because only a few people will use this feature.
I want the solution to be light, flexible, and simple while still meeting my needs. If the available solutions are all complex, I would consider restricting access by IP, but I worry about how viable the IP-restriction approach is in real life: would it be too cumbersome to add my phone's IP every time I change location, and so on?
You can use API Gateway for that. It approximately meets your needs: it's not especially flexible or simple,
but it's fully managed and can scale with your traffic.
For a more convenient solution, you have to use a software proxy (or API Gateway), or go to the bank first and use Apigee.
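If you do try the software-proxy route, the essential part is small. A rough nginx sketch, assuming an upstream named backend for your app and a placeholder for your fixed egress IP (for example a VPN's):
location /test/ {
    allow 203.0.113.7;   # placeholder: the one address allowed in
    deny  all;
    proxy_pass http://backend;
}
location / {
    proxy_pass http://backend;   # everything else stays public
}
These blocks go inside the server {} that fronts your app.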
I set up OpenVPN.
It was not a painless process, thanks to various small obstacles, but I encourage you to do the same.
Get a host (machine, cluster, or whatever) with a static IP.
Set up an OpenVPN instance. I used Docker: https://hub.docker.com/r/kylemanna/openvpn/ (follow the instructions, but set the host with -u YOUR_IP).
Ensure the VPN setup works from your local machine.
On the routes you need to protect, limit IP access to the VPN's address. nginx example:
allow x.x.x.x;  # your VPN's egress IP
deny all;       # everyone else
Make sure that nginx sees the right client IP. I had an issue where nginx was taking the load balancer's IP as the client IP, so I had to mark it as trusted via the real_ip module (http://nginx.org/en/docs/http/ngx_http_realip_module.html); see the sketch after these steps.
Test the setup
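For reference, the real_ip fix mentioned above looked roughly like this (the address is a placeholder for your load balancer's):
set_real_ip_from x.x.x.x;        # placeholder: the load balancer's address, to be trusted
real_ip_header X-Forwarded-For;  # take the client IP from this header
real_ip_recursive on;            # skip over other trusted proxies in the chain
With this in place, the allow/deny rules apply to the real client address rather than the balancer's.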

Is this possible in API Gateway?

I've been asked to look into an AWS setup for my organisation, but this isn't my area of expertise, so it's a bit of a challenge. After doing some research, I'm hoping that API Gateway will work for us, and I'd really appreciate it if someone could tell me if I'm along the right lines.
The plan is:
We create a VPC with several private subnets. The EC2 instances in the subnets will host browser-based applications like Apache Guacamole, Splunk, etc.
We attach an API Gateway with a REST API to the VPC, which will allow users access only to the applications on 'their' subnet.
Users follow a link to the API Gateway from an external API, which will provide OAuth2 credentials.
The API Gateway REST API verifies their credentials and serves them a page with links to the private IP addresses for the services in 'their' subnet only. They can then click the links and open the Splunk, Guacamole, etc. browser pages.
I've also looked at Client VPN as a possible solution but my organisation wants users to be able to connect directly to the individual subnets from an existing API without having to download any other tools (this is due to differing levels of expertise of users and the need to work remotely). If there is a better solution which would provide the same workflow then I'd be happy to implement that instead.
Thanks for any help
This sounds like it could work in theory. My main concern would be whether Apache Guacamole, or any of the other services you are trying to expose, requires long-lived HTTP connections: API Gateway has a hard requirement that all requests complete within 29 seconds.
I would also suggest looking into exposing these services via a public Application Load Balancer instead of API Gateway; ALB has built-in OIDC authentication support. You'll need to look at the requirements of the specific services you are trying to expose to evaluate whether API Gateway or ALB is the better fit.
I would personally go about this by defining each of these environments with an Infrastructure-as-Code tool, such that you can create a new client environment by simply running your IaC tool with a few parameters, like the client ID and the domain name or subdomain you want to use. I would actually spin each one up in its own VPC, since it sounds like you want each client's environment to be isolated from the others.

Setting up a load balancer behind a proxy server on Google Cloud Compute Engine

I am looking to build a scalable REST web service on Google Cloud Compute Engine, but I have a couple of requirements that I am not sure how best to implement.
Structure so far:
2 instances running a REST web service connected to a MySQL cloud database
(number of instances to scale up in the future)
Load balancer to split requests between the two or more instances.
This part is fine.
What I need next is for outbound traffic (POST requests from the instances to an external web service) to come from a single IP address. I assume these requests cannot be routed back out through the public IP of the load balancer?
I get the impression the solution is to route all requests from the instances through a third instance running Squid. Is this the best way to do it? (side question)
Now to my main question:
I have been reading about ApiAxle, which sounds like a nice proxy for web services, offering good access control, throttling, and reporting capabilities.
Can I have an instance running ApiAxle in front of a Google Cloud load balancer that spreads requests across the backend instances doing the legwork, with responses fed back through the ApiAxle proxy? That way everything would go through a single IP visible to clients using the API, while letting me add new instances to the pool for capacity.
And would the proxy be much of a bottleneck?
Thanks in advance.
/Dave
(New to this, so sorry if it's a stupid question; I can't find anything like this on the web.)
Sounds like you need to NAT your outbound traffic so that it appears to come from one IP address. You need to do that via a third instance, since the Google load-balancing stack doesn't provide this: GCLB works only with inbound connections on the load-balanced IP.
You can set up source NAT using advanced routing, or you can use a proxy as you suggested; a sketch of the proxy variant is below.
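If the external endpoint is fixed, one simple variant of the proxy approach is a reverse proxy on that third instance: the backends call it instead of the external web service, so all outbound requests share its IP. A rough nginx sketch (the host names and port are placeholders, untested):
server {
    listen 8080;    # backends send their POSTs to http://egress-instance:8080/...
    location / {
        proxy_pass https://api.external-service.example/;    # the external web service
        proxy_set_header Host api.external-service.example;
    }
}
For arbitrary destinations you would still need source NAT or a forward proxy such as Squid.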

Accessing Windows Network Share from Web Service Securely

We have developed a RESTful Web Service which requires access to a Network Share in order to read and write files. This is a public-facing Web Service (running over SSL) which requires staff to log on with an assigned user name and password.
This web service will be running in a DMZ. It doesn't seem "right" to access a Network Share from a DMZ. I would venture a guess that the "secure" way to do this would be to provide another service inside the domain which talks only to our Web Service. That way, if anyone wanted to exploit it, they would have to find a way in via the Web Service, not through known system APIs.
Is my solution "correct"? Is there a better way?
Notes:
the Web Service does not run under IIS.
the Web Service currently runs under an account with access to the Network Share and access to a SQL database.
the Web Service is intended only for designated staff, not the public.
I'm a developer, not an IT professional.
What about some kind of VPN for reaching the internal resources? There are some neat solutions for this, and opening network shares to the internet seems too big a risk to take.
That aside, when an attacker breaks into your DMZ host through those web services, he can break into your internal server using the same API, unless you can afford to build two completely different solutions.
If you access the file servers from the DMZ directly, you would limit these connections with a firewall, so even after breaking into your DMZ host the attacker cannot do "everything", but only read (write?) to those servers.
I would suggest the second approach.

Keeping some web services private and others public

Not sure of the best way of achieving something...
We've got a number of web services running on ASP.NET 3.5 on a couple of web servers. They all talk nicely to each other and to the public internet.
Now we'd like to keep some of these web services 'private', i.e. make them unavailable to the public internet, while leaving others accessible.
AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80 so would drop any requests from the internet to the private web services. Sorted... I think?
Is this idea a reasonable solution? Or is there some drop dead simple IIS mechanism that I ought to use?
Thanks
SAL
You can restrict access to a site via a blacklist/whitelist in the IIS control panel (Directory Security tab). That's what I've done in the past to filter by IP address.
"AFAICS the simplest way to do this is simply to run the private services on a different port and keep the public ones on port 80. Our firewall only permits internet access via port 80 so would drop any requests from the internet to the private web services."
This is exactly the approach we take. We also have a VPN so that employees can access the site if they're working remotely.
You can put IP access restrictions onto any site/app you want. We have several internal web services that only allow access from the 10.x.x.x range, for example.
It really depends on how secure you want the internal web services to be.
If you have sensitive data on the internal web services, you need to host them on a completely separate server; keeping outsiders away merely by assigning them a different port is not enough.
However, if you don't have an issue with sensitive data, then assigning a different port or IP address for internal and external users is a good way to go.
Besides the port, you could restrict the caller (using IP address filtering, for example).
You could also require authentication from the caller of a web service, which should be easy to configure if you use Active Directory.
In any case, if you have a 'public' web service that is private as well, you may want to 'publish' it twice: once for the public (with a nice external URL) and once internally, so that your other internal services and clients do not have to go via the 'external' URL. You can then configure the restrictions (client IP, authentication, ...) differently for the different publications of the same service, as sketched below.
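To illustrate the double publication with a reverse proxy in front (nginx purely as an example; the same idea applies to IIS sites with different bindings, and all names and addresses are placeholders):
server {                              # internal publication
    listen 80;
    server_name service.internal.example;
    allow 10.0.0.0/8;                 # internal callers only
    deny  all;
    location / { proxy_pass http://127.0.0.1:8080; }
}

server {                              # external publication of the same backend
    listen 80;
    server_name service.example.com;  # the nice external URL
    location / {
        auth_basic           "restricted";            # require authentication here
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:8080;
    }
}
Each block fronts the same service but carries its own restrictions.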