Does a reverse proxy appliance in the LAN need to be PCI-DSS compliant?

We created a reverse proxy appliance (a bridge) that transmits all the data going in and out of the network; see the diagram below.
|------------------------LAN----------------------------|
User --- Access Point --- Switch ---- Proxy --- Gateway --- WAN
Assume that payments are made over that LAN, but none of the HTTPS data is stored or processed on the proxy.
Does the reverse proxy (running Ubuntu 14.04 with bridge-utils) need to be PCI-DSS compliant?

Scoping is a complicated part of PCI-DSS. The general rule is that if a device can connect to the CDE (cardholder data environment), it is in scope.
The easiest way to work this out is to determine exactly where your CDE lies and then isolate it behind an extremely restrictive firewall. As you open firewall ports to allow services (i.e. your reverse proxy) to reach the CDE, add those services to your scope. Alternatively, run a thought experiment: if a highly malicious actor took control of the device, could they redirect cardholder data somewhere else?
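As a rough illustration of that isolation step (a sketch only, with hypothetical interfaces and addresses), a default-deny rule set on the firewall in front of the CDE might look like this; every ACCEPT rule you add here represents a service that joins your PCI-DSS scope:

    # Default-deny forwarding into the CDE segment (hypothetical: eth1 = CDE side)
    iptables -P FORWARD DROP

    # Allow only the reverse proxy (hypothetical 10.0.1.10) to reach the payment
    # application subnet on 443; this puts the proxy in scope.
    iptables -A FORWARD -i eth0 -o eth1 -s 10.0.1.10 -d 10.0.2.0/24 -p tcp --dport 443 -j ACCEPT

    # Allow established return traffic
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT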

Related

Kubernetes: How to connect one pod to another on an arbitrary port - with or without services?

We are currently transitioning our apps to Kubernetes and I have two apps, appP and appH, that need to communicate with each other over a port that is unknown at start-up time.
Unlike most of our apps, they don't have a set port to communicate over. Before Kubernetes, a third-party app (out of my control) would tell appP to start processing an item, itemA, identified by a unique id, and it would also tell appH to handle the processed data produced by appP.
To coordinate communication between appP and appH, appH would generate a port based on the unique id and publish the host and port to connect to in an intermediate app (IA). appP, once done with its processing, queries IA for the connection information based on the unique id and sends the data over.
Now we have to adapt this to Kubernetes. Each app runs in its own deployment, as does the IA. So how can I set up appH to accept the connection over a port without being able to specify it in the Service definition?
Note: I've seen some posts saying that pods should be able to communicate with any other pods in the cluster regardless of the ports specified in the Service definition, but I can't find much confirming information on this and I don't have much free time on our cluster to bang my head against.
Would it just work fine as is? My biggest worry is the IP resolution. Currently appH determines its IP from the host it's running on (using Boost); I'm not sure how this resolves inside a container.
If not, my next thought would be to set up a headless Service with a selector for appH to allow for IP resolution. What I am unsure of then is whether I could have appP connect to <appH_Service>:<arbitrary_port>?
Would the Service even have to be headless in this scenario? I mostly say headless with a selector because I saw in one post that it is the only kind for which you don't need a port in the spec, and because I am unsure whether the connection would go through unless it was made to the actual pod's IP rather than the Service's.
Any info or clarification is appreciated. For the most part, I can't really change the architecture of these apps right now; I just have to get them talking to each other as is, and I haven't found much clear information on this type of case.
Note: We use Helm and CoreDNS, if anyone is curious.
The Kubernetes networking model is as follows: a Pod is a group of containers that share a single network identity (a cluster IP). Any port exposed by a container is thus automatically exposed on the Pod. The model requires that every Pod can communicate with every other Pod.
This means that your current design can work without modifications.
What Services bring to the table is a stable network identity for a group of Pods that is otherwise very volatile; I don't think that applies to your appP/appH coupling.
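If you still want DNS-based resolution of the appH pod instead of passing raw pod IPs around, a headless Service is one option. A sketch, assuming a hypothetical label app: apph on the appH pods:

    apiVersion: v1
    kind: Service
    metadata:
      name: apph-headless        # hypothetical name
    spec:
      clusterIP: None            # headless: DNS returns the pod IPs directly
      selector:
        app: apph                # assumed pod label
      # No port list is needed for a headless Service; appP resolves the name and
      # connects to <pod-ip>:<arbitrary_port> exactly as it did before Kubernetes.

With that in place, apph-headless.<namespace>.svc.cluster.local resolves to the pod IP(s), and appP can connect to whatever port IA advertises.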

Connection from external computer to computer in local network

In my P2P chat app I need to open a port that other computers can connect to, but the admin does not allow me a static IP or an open port. Then I found a network programming exercise that looked like a solution to this problem. The requirements are as follows:
"Write a program that tests the UPnP protocol by instructing the ADSL modem to open a NAT port mapping automatically.
In case you cannot control the modem, research and implement a NAT traversal technique to connect two clients that sit behind two different NATs on the internet (use an intermediary server to prime the connection)."
Can anyone tell me what an intermediary server for priming the connection is?
Check https://www.noip.com/ :P. Maybe this can solve some of your problems ^^
You can simply set up a DynDNS service. You will then have one external domain name that follows whatever IP address you currently have.
But the best way is to set up a SoftEther VPN solution, which can pass through almost any NAT. You can keep your application server inside the NATed subnetwork too, and the server registers itself with a common SoftEther registry that allows connections from anywhere.
If you want a smart solution embedded in your application, look at the techniques used for VoIP communications, such as ICE, STUN, and TURN. But those are not simple to implement.
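To answer the question about the "intermediary server": it is just a rendezvous point on the public internet. Both clients send it a packet from behind their NATs, it tells each one the public address of the other, and the clients then talk directly (UDP hole punching). A minimal sketch, with a hypothetical server address, no error handling, and no support for symmetric NATs:

    import socket

    # --- rendezvous server (runs on a public IP) ---
    def run_server(port=5005):
        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(("0.0.0.0", port))
        clients = []
        while len(clients) < 2:
            _, addr = srv.recvfrom(1024)      # addr is the client's public NAT address
            clients.append(addr)
        # Tell each client the other's public ip:port
        srv.sendto("{}:{}".format(*clients[1]).encode(), clients[0])
        srv.sendto("{}:{}".format(*clients[0]).encode(), clients[1])

    # --- each client (behind its own NAT) ---
    def run_client(server=("rendezvous.example.com", 5005)):   # hypothetical host
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"register", server)      # outbound packet creates the NAT mapping
        data, _ = sock.recvfrom(1024)         # server replies with the peer's address
        host, port = data.decode().split(":")
        peer = (host, int(port))
        sock.sendto(b"hello", peer)           # the first packets punch the hole on both sides
        print(sock.recvfrom(1024))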

Can we write a test application which fires requests at a server from all around the globe?

I have a running AWS application; I recently blocked access to it from Russia using the geolocation routing feature.
How can I write a test application that fires requests at my application from all around the globe?
As tedder42 pointed out in the comments, you need to originate the traffic from the geographic locations you are blocking in order to test this. You will need IPs in those locations, or you can use a proxy server in that region.
Here is a list of free proxies you might try: http://proxylist.hidemyass.com (there are other options around).
Configure your test app to have a list of proxies, and switch between them when sending the traffic.
Also, with geolocation there is nothing preventing somebody in those geographies from sending their traffic through a proxy in a geography you allow (i.e. geolocation is not 100% bulletproof; in fact the main reason people use it is to improve latency/throughput, not to block traffic).
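A minimal sketch of such a test, using the Python requests library with placeholder proxy addresses (you would substitute proxies that actually sit in the regions you want to test from):

    import requests

    TARGET = "https://your-app.example.com/"      # hypothetical endpoint under test

    # Placeholder proxies; pick real ones located in the blocked and allowed regions.
    PROXIES = [
        "http://203.0.113.10:8080",
        "http://198.51.100.25:3128",
    ]

    for proxy in PROXIES:
        try:
            resp = requests.get(
                TARGET,
                proxies={"http": proxy, "https": proxy},  # exit through this location
                timeout=10,
            )
            print(proxy, "->", resp.status_code)
        except requests.RequestException as exc:
            print(proxy, "-> failed:", exc)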

HTTP tunnel sample

Is it possible to create an HTTP tunnel in Delphi or C++?
My application connects to several HTTP servers that do not belong to the company I work for. Because of that, our users need to open firewall ports to allow those connections. I thought about creating a tunnel at my company and redirecting the HTTP requests made by my application through that tunnel. This way, my clients will only need to open one port, and the tunnel will handle all requests. All requests are made with POST or GET using Indy components.
EDIT: I can't use an HTTP proxy. Some of my users already have their own HTTP proxy, and it is impossible to connect to two different proxy servers at the same time.
Here is a free component; it is kind of old, but it works and you can take inspiration from it:
TGpHTTPProxy
Or you can try these samples:
https://sites.google.com/site/delphibasics/home/delphibasicssnippets/examplesocks4proxybyaphex
https://sites.google.com/site/delphibasics/home/delphibasicssnippets/multi-threadedhttpproxyserver
As Warren P. and Rob Kennedy suggest, you really just need a proxy server. Don't write a tunnel yourself; it's huge overkill and far from easy (writing a robust socket application is more time consuming than it first appears to be).
If you want something dead simple, look at datapipe.c or the netcat (nc) Unix command. SSH can create tunnels too (see the OpenSSH and PuTTY docs).
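For illustration only, with hypothetical hostnames: an SSH client can forward fixed local ports to each remote HTTP server, or open a dynamic SOCKS proxy that the application points all of its requests at, so the client firewall only has to allow the single SSH connection:

    # One fixed forwarding per remote HTTP server
    ssh -N -L 8081:server1.example.com:80 -L 8082:server2.example.com:80 user@tunnelhost.example.com

    # Or a dynamic SOCKS proxy for all destinations
    ssh -N -D 1080 user@tunnelhost.example.com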
Here is a free open source HTTP-Tunnel and UDP-Tunnel: http://barbatunnel.codeplex.com/

How to access a web service behind a NAT?

We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. It is installed on a server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses.
The problem comes with the installation. When we install this service, setting up port forwarding so the outside world can reach Tomcat always becomes a problem; most of the time the owner doesn't know the router password, and so on.
I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.
Set up an SSH tunnel from each client office to a central server. Basically the remote devices would connect to that central server on a port and the traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there's really no other way to accomplish it since I need SSL end to end (from device to office). I'm not sure of the performance implications, but I know it would work. I would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
Set up UPnP to try to configure the hole for me. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step.
Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of exactly how they work. We have access to a centralized server, which is required for authentication, if that makes it any easier.
What else should I be looking at to get this accomplished?
Is there no way this service can be hosted publicly by you or a hosting provider rather than with the customer?
I had a similar situation when I was developing kiosks. I never knew what type of network environment I'd have to deal with on the next installation.
I ended up creating a PPTP VPN to allow all the kiosks to connect to one server I hosted publicly. We then created a controller web service to expose access to the kiosks that were all connected via the VPN. I'm not sure how familiar you are with VPNs, but with the VPN connection I was able to completely circumvent the firewall in front of each kiosk by accessing the kiosk via its VPN-assigned IP.
Each kiosk node was incredibly easy to set up once I had a VPN server in place. It also brought management benefits and licensing revenue I originally didn't think about. With this infrastructure I was easily able to roll out services accessible via mobile phones.
Best of luck!
Solutions exist to "dynamically" access a software on a computer behind a NAT, but usually mostly for UDP communication.
The UDP hole punching technique is one of them. However, this isn't guranteed to work in every possible situation. If both sides of the communication are behind a "Symmetric Cone NAT" it won't.
You obivously can reduce the probability a client can't communicate using UPnP as a backup (or even primary) alternative.
I don't know Web Services enough and don't even know if using UDP for your webservice is an option (or if it is even possible).
Using the same technique for directly TCP is likely to fail (TCP connections aren't stateless - that causes a lot of problems here).
An alternative using the same technique, would be to set up some VPN based on UDP (just like OpenVPN), but as you stated, you'll have to manage keys, certificates, and so on. This can be automated (I did it) but still, it's not really trivial.
===EDIT===
If you really want to use TCP, you could create a simple "proxy" program on the client boxes which would serve as a relay.
You would have the following scheme:
Web Service on client boxes, behind a NAT
The "proxy" software on the same boxes, establishing an outgoing (thus non-blocked) TCP connection to your company servers
Your company servers host a web service as well, which requires something like a "client identifier" to redirect each request to the appropriate established TCP connection.
The proxy program interrogates the local web service and sends the response back to the company servers, which relay it to the original requester.
An alternative: you might ask the proxy software to directly connect to the requester to enhance performance, but then you might encounter the same NAT problems you're trying to avoid.
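A minimal sketch of that client-side relay under stated assumptions (hypothetical hosts and identifiers, a toy one-request-per-line framing, no TLS and no reconnection logic):

    import socket
    import urllib.request

    COMPANY_SERVER = ("relay.example.com", 9000)   # hypothetical company-side relay
    LOCAL_SERVICE = "http://localhost:8080"        # hypothetical local Tomcat service
    CLIENT_ID = b"office-42"                       # hypothetical client identifier

    # Outgoing connection from behind the NAT; nothing has to be opened inbound.
    conn = socket.create_connection(COMPANY_SERVER)
    conn.sendall(CLIENT_ID + b"\n")                # announce which office this is

    reader = conn.makefile("rb")
    for line in reader:                            # toy protocol: one request path per line
        path = line.decode().strip()
        try:
            with urllib.request.urlopen(LOCAL_SERVICE + path, timeout=10) as resp:
                body = resp.read()
        except Exception as exc:
            body = str(exc).encode()
        # Length-prefixed reply goes back over the same outbound connection; the
        # company server relays it to whichever device originally asked.
        conn.sendall(len(body).to_bytes(4, "big") + body)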
It's things like this that are the reason people are tunneling everything over http now, and why certain hardware vendors charge a small fortune for Layer 7 packet filtering.
This is a tremendous amount of work to fix one problem when the customer has at least three problems. Besides the one you've identified, if they don't know their own password, then who does? An administrator who doesn't work there anymore? That's a problem.
Second, if they don't know the password, that means they're almost certainly far behind on firmware updates to their firewall.
I think they should seriously consider doing a PROM reset on their firewall and reconfiguring from scratch (and upgrading the firmware while they're at it).
3 birds, one stone.
I had to do something similar in the past and I believe the best option is the first one you proposed.
You can do it the easy way, using ssh with its -R option, public key authentication, and a couple of scripts to check for connectivity. Don't forget the various keep-alive and timeout features of ssh.
Don't worry about performance. Use unprivileged users and ports if you can. Don't bother setting up a CA; the public key of each remote server is easier to maintain unless you are in the thousands.
Monitoring is quite simple. Each server should test the service on the central server. If it fails, either the tunnel is down or there's no connectivity. Restarting the tunnel will not do any harm in either case.
Or you can do it at the network level, using IPsec (strongSwan). This can be trickier to set up and it's the option I used, but I would use SSH the next time; it would have saved me a lot of time.
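A sketch of that ssh -R setup with hypothetical hosts, ports, and key paths (note that the central sshd needs GatewayPorts enabled for outside devices to reach the forwarded port, and a wrapper such as autossh or a cron check is commonly used to restart the tunnel):

    # Run on the office server as an unprivileged user with key-based auth.
    # Exposes the local Tomcat HTTPS port (8443) as port 10042 on the central server.
    ssh -N -R 10042:localhost:8443 \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
        -o ExitOnForwardFailure=yes \
        -i /home/tunnel/.ssh/id_ed25519 tunnel@central.example.com

    # Devices then connect to https://central.example.com:10042 and the traffic is
    # relayed, still TLS end to end, to Tomcat inside the office.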
+1 for going with an SSH tunnel. It's well known, widely available and not too hard to configure.
However, as you point out, you are running SSL already, so the SSH encryption is redundant. Instead of SSH you could just use a regular tunnelling proxy that provides the tunnelling without the encryption. I've used this one in the past and it has worked well, although I didn't load test it; it was used with just a handful of users.
Here's a blog from someone who used the tunnelling proxy to access his webcam from outside his firewall.
Set up an Apache in front of your Tomcat. This Apache should be visible from the internet, whereas the Tomcat should not be.
Configure Apache to forward all traffic to the Tomcat. This can easily be accomplished using mod_proxy (check out the ProxyPass and ProxyPassReverse directives).
Have your SSL certificate located in the Apache, so that all clients can talk HTTPS with the Apache server, which in turn talks plain HTTP with Tomcat.
No tunneling or other nastiness, and you will be surprised how easy it is to configure Apache to do this.
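A minimal sketch of that setup (hypothetical hostname, port, and certificate paths; requires mod_ssl, mod_proxy and mod_proxy_http):

    # Hypothetical public-facing virtual host terminating SSL
    <VirtualHost *:443>
        ServerName api.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/api.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/api.example.com.key

        # Forward everything to the internal Tomcat over plain HTTP
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>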
If you want a RESTful integration with the client server, a tunnel to a central server that works as a proxy seems the best approach.
But if this is not a hard requirement, you can let the central server handle the RESTful part and integrate the central server and client server with other middleware. Good candidates would be RMI or JMS. For example, an RMI connection initiated by the client allows the server to make RMI calls back to the client.
You could try to connect to a PC/server and tunnel all the data via Hamachi (free VPN software); you install this tool and it creates a reverse connection (from inside your NAT to the outside) so you can connect to it.
site: http://hamachi.cc/