How to access a web service behind a NAT? - web-services

We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device. So, the devices connecting to the server could come from any number of IP addresses.
The problem comes with the installation. When we install this service, setting up the port forwarding so the outside world can reach Tomcat always seems to become a problem. Most of the time the owner doesn't know the router password, and so on.
I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.
Set up an SSH tunnel from each client office to a central server. Basically the remote devices would connect to that central server on a port and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it, since I need SSL end to end (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
Set up UPnP to try and configure the hole for me. This would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.
Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of how exactly they work. We have access to a centralized server, which is required for authentication, if that makes it any easier.
What else should I be looking at to get this accomplished?

Is there no way this service can be hosted publicly by you or a hosting provider rather than with the customer?
I had a similar situation when I was developing kiosks. I never knew what type of network environment I'd have to deal with on the next installation.
I ended up creating a PPTP VPN to allow all the kiosks to connect to one server I hosted publicly. We then created a controller web service to expose access to the kiosks that were all connected via the VPN. I'm not sure how familiar you are with VPNs, but with the VPN connection I was able to completely circumvent the firewall in front of each kiosk by accessing the kiosk via its VPN-assigned IP.
Each kiosk node was incredibly easy to set up once I had the VPN server in place. It also brought management benefits and licensing revenue I originally didn't think about. With this infrastructure I was easily able to roll out services accessible via mobile phones.
Best of luck!

Solutions exist to "dynamically" access software on a computer behind a NAT, but usually only for UDP communication.
The UDP hole punching technique is one of them. However, this isn't guaranteed to work in every possible situation. If both sides of the communication are behind a symmetric NAT, it won't.
You can obviously reduce the probability that a client can't communicate by using UPnP as a backup (or even primary) alternative.
I don't know web services well enough to say whether using UDP for your web service is an option (or even possible).
Using the same technique directly for TCP is likely to fail (TCP connections aren't stateless, which causes a lot of problems here).
An alternative using the same technique would be to set up a VPN based on UDP (such as OpenVPN), but as you stated, you'll have to manage keys, certificates, and so on. This can be automated (I did it), but still, it's not really trivial.
===EDIT===
If you really want to use TCP, you could create a simple "proxy" program on the client boxes which would serve as a relay.
You would have the following schema:
Web Service on client boxes, behind a NAT
The "proxy" software on the same boxes, establishing an outgoing (thus non-blocked) TCP connection to your company servers
Your company servers host a web service as well, which requires something like a "Client Identifier" to redirect the request to the appropriate established TCP connection.
The proxy program queries the local web service and sends the response back to the company servers, which relay the response to the original requester.
An alternative: you might ask the proxy software to directly connect to the requester to enhance performance, but then you might encounter the same NAT problems you're trying to avoid.
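To make the relay idea above concrete, here is a minimal sketch in Python (the host names, ports, and the line/length framing are all assumptions for illustration, not a prescribed protocol). The point is the direction of the connection: the proxy program keeps one outgoing TCP connection to the company server, reads a request from it, queries the local web service, and ships the response back.

```python
# Sketch of the client-side relay: one outbound connection to the company
# server; each received line is treated as a path to fetch from the local
# web service, and the response body is sent back length-prefixed.
# RELAY_HOST/RELAY_PORT and the framing are illustrative placeholders.
import socket
import struct
import urllib.request

RELAY_HOST = "relay.example.com"          # company server (placeholder)
RELAY_PORT = 9000                         # relay port (placeholder)
LOCAL_SERVICE = "http://localhost:8080"   # the web service behind the NAT

def serve_forever():
    with socket.create_connection((RELAY_HOST, RELAY_PORT)) as relay:
        relay_file = relay.makefile("rb")
        while True:
            line = relay_file.readline()          # e.g. b"/api/orders\n"
            if not line:
                break                             # relay closed; reconnect elsewhere
            path = line.decode().strip()
            try:
                with urllib.request.urlopen(LOCAL_SERVICE + path) as resp:
                    body = resp.read()
            except Exception as exc:
                body = ("relay error: %s" % exc).encode()
            # Length-prefix the answer so the server can split responses apart.
            relay.sendall(struct.pack("!I", len(body)) + body)

if __name__ == "__main__":
    serve_forever()
```

In practice you would add reconnect logic, authentication of the relay connection, and real multiplexing, but the essential trick is that the office box only ever makes outbound connections.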

It's things like this that are the reason people are tunneling everything over http now, and why certain hardware vendors charge a small fortune for Layer 7 packet filtering.
This is a tremendous amount of work to fix one problem when the customer has at least three problems. Besides the one you've identified, if they don't know their own password, then who does? An administrator who doesn't work there anymore? That's a problem.
Second, if they don't know the password, that means they're almost certainly far behind on firmware updates to their firewall.
I think they should seriously consider doing a PROM reset on their firewall and reconfiguring from scratch (and upgrading the firmware while they're at it).
3 birds, one stone.

I had to do something similar in the past, and I believe the best option is the first one you proposed.
You can do it the easy way, using ssh with its -R option, public key authentication, and a couple of scripts to check for connectivity. Don't forget the various keep-alive and timeout features of ssh.
Don't worry about performance. Use unprivileged users and ports if you can. Don't bother setting up a CA; the public key of each remote server is easier to maintain unless you are in the thousands.
Monitoring is quite simple: each server should test the service on the central server. If it fails, either the tunnel is down or there's no connectivity. Restarting the tunnel will do no harm in either case.
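As a rough illustration of the "couple of scripts" part, here is a small Python watchdog built around a standard `ssh -R` invocation. The host names, ports, and key path are placeholders, and it assumes sshd on the central server has GatewayPorts enabled so the forwarded port is reachable from outside.

```python
# Watchdog sketch: keep a reverse SSH tunnel alive and restart it when the
# service stops answering through the central server.
# central.example.com, the ports, and the key path are placeholders.
# Assumes GatewayPorts is enabled on the central sshd so the forwarded
# port is reachable from the internet (and from this probe).
import socket
import subprocess
import time

SSH_CMD = [
    "ssh", "-N",
    "-R", "10443:localhost:8443",           # remote port -> local Tomcat
    "-o", "ExitOnForwardFailure=yes",
    "-o", "ServerAliveInterval=30",
    "-o", "ServerAliveCountMax=3",
    "-i", "/etc/myapp/tunnel_key",
    "tunnel@central.example.com",
]

def tunnel_ok():
    """Check that the forwarded port answers on the central server."""
    try:
        with socket.create_connection(("central.example.com", 10443), timeout=5):
            return True
    except OSError:
        return False

while True:
    proc = subprocess.Popen(SSH_CMD)
    time.sleep(10)                 # give ssh a moment to set up the forward
    while proc.poll() is None and tunnel_ok():
        time.sleep(60)
    proc.terminate()
    proc.wait()
    time.sleep(5)                  # back off briefly before reconnecting
```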
Or you can do it at the network level, using IPsec (strongSwan). This can be trickier to set up, and it's the option I used, but I would use SSH next time; it would have saved me a lot of time.

+1 for going with an SSH tunnel. It's well known, widely available and not too hard to configure.
However, as you point out, you are running SSL already, so the SSH encryption is redundant. Instead of SSH you could just use a regular tunneling proxy, which provides the tunneling without the encryption. I've used this one in the past, and it has worked well, although I didn't load test it - it was used with just a handful of users.
Here's a blog from someone who used the tunnelling proxy to access his webcam from outside his firewall.

Set up an Apache in front of your Tomcat. This Apache should be visible from the internet, where the Tomcat should not.
Configure Apache to forward all traffic to the Tomcat. This can easily be accomplished using mod_proxy (check out the ProxyPass and ProxyPassReverse directives).
Have your SSL certificate located in the Apache, so that all clients can talk HTTPS with the Apache server, which in turn talks plain HTTP with Tomcat.
No tunneling or other nastiness, plus you will be surprised how easy it is to configure Apache to do this.

If you want a RESTful integration with the client server, a tunnel to the central server that works as a proxy seems the best approach.
But if this is not a hard requirement, you can let the central server handle the RESTful stuff and integrate the central server and client server with other middleware. Good candidates would be RMI or JMS. For example, an RMI connection initiated by the client allows the server to make RMI calls to the client.

You could try connecting to a PC/server and tunneling all the data via Hamachi (free VPN software). You install this tool and it creates a reverse connection (from inside your NAT to the outside), so you can connect to it.
site: http://hamachi.cc/

Related

Implementing a server for licence management

I would like to implement the server side of licence management software. I use C++ on Linux.
When the software starts, it must connect to a server that checks privileges and allows/disallows running of some features.
My question is about the implementation of the communication between client and server across the internet:
The server will have a static IP on the internet, so is it enough to use a simple TCP/IP socket client that connects to a TCP/IP socket server (given the IP and port)?
I am familiar with socket communication, but less so with communication across the internet, so my question is whether this is the right approach or whether I need to use a different mechanism, such as an HTTP client/server or something else.
Regards
AFG
Here are some benefits to using HTTP as a transport:
Easier to get right, more likely to work in production: Yes, you will probably have to add additional dependencies to deal with HTTP (client and server side), but it's still preferable to yet another homegrown protocol, which you would have to implement, maintain, keep backwards compatible, and handle multiplatform issues for (e.g. endianness), and so on. In terms of implementation ease, an HTTP-based solution should be far easier in the common case (especially true if you build a REST-style service API for license checking).
More help available: HTTP as the foundation of the web is one of the most widely used technologies today. Most (all?) problems you will run into are probably publicly documented with solutions/workarounds.
Encryption 'for free': Encryption is already a solved problem (HTTPS/SSL), both with regard to transport as well as with regard to what you have to implement on your end, and it's just a matter of setting it up.
Server Authentication 'for free': HTTPS/SSL doesn't only solve encryption but also server authentication, so that the client can verify whether it's actually talking to the right service.
Guaranteed to work on the internet: HTTP/HTTPS traffic is common on the internet, so you won't run into routing problems or firewalls which are hard to traverse. This might be a problem when using your own protocol.
Flexibility out of the box: You also put less constraints on clients communicating with your server, as it's very simple to build a client in many different environments, as long as they can talk HTTP (and maybe SSL), and they know how to issue the request to your server (ie. what your service API looks like).
Easy to integrate with administrative webapp: If you want to allow users to manage their accounts associated with licenses in some way (update contact info etc.), then you might even combine the license server with that application. You can also build the license administration UI part into the same app if that's useful.
And as a last remark (this puts additional constraints on your client side HTTPS/SSL implementation): you can even use client side SSL certificates, which essentially allow authenticating the client to the server. Depending on how you use them, client side certificates are harder to manage, but they can be eg. expired, or revoked, so to some extent they actually are licenses (to connect to the server).
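To give a feel for how small the client side of such a check can be, here is a sketch in Python using only the standard library (the asker's C++ client would do the same thing with, say, libcurl). The URL, parameters, and response format are made up purely for illustration.

```python
# Sketch of a license check over HTTPS. The endpoint, parameters and JSON
# response shape are hypothetical; only the transport (an HTTPS POST) matters.
import json
import urllib.parse
import urllib.request

LICENSE_URL = "https://licensing.example.com/api/check"   # hypothetical endpoint

def check_license(license_key, feature):
    data = urllib.parse.urlencode({
        "key": license_key,
        "feature": feature,
    }).encode()
    req = urllib.request.Request(LICENSE_URL, data=data, method="POST")
    # HTTPS gives transport encryption and server authentication for free,
    # as long as the server certificate is validated (the default here).
    with urllib.request.urlopen(req, timeout=10) as resp:
        answer = json.load(resp)
    return bool(answer.get("allowed", False))

if __name__ == "__main__":
    print(check_license("ABC-123", "advanced-reporting"))
```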
HTTP is not a different mechanism. It is a protocol operated over TCP/IP connections.
The internet uses IP exclusively. On top of it you can use UDP, TCP or SCTP (well, UDP is not much of a session). TCP is the general choice.
Sockets are an operating system interface. They are the only interface to the network on most systems, but some systems have a different interface. They have nothing to do with the transport itself.
IP addresses are in practice tied to network topology, so I strongly discourage hardcoding the server's IP address anywhere. If you have to change network providers for any reason, you won't get the same IP address. Use DNS; it's just one gethostbyname call.
And don't forget to authenticate the server; even with a hardcoded IP it's too easy to redirect the traffic.
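For the DNS point, the lookup really is a one-liner. Shown here in Python for brevity with a placeholder hostname; in C it is the gethostbyname (or, preferably, getaddrinfo) call mentioned above.

```python
# Resolve the server's hostname at connect time instead of hardcoding its IP.
# "license.example.com" is a placeholder hostname.
import socket

addr_info = socket.getaddrinfo("license.example.com", 443,
                               proto=socket.IPPROTO_TCP)
family, socktype, proto, _, sockaddr = addr_info[0]
with socket.socket(family, socktype, proto) as sock:
    sock.connect(sockaddr)      # sockaddr already contains the resolved IP
    print("connected to", sockaddr)
```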

IIS binding and throughput, how do they work?

A consultant at work mentioned that you can have web services running on different endpoints and hence utilize the network better if you have more than one network card with different bandwidths.
Not being too network savvy: is he saying I can take my web service, tie it to one network card, and make sure clients make their calls to that network card to access it, since I have more bandwidth on that card?
Can I do this without changing the clients?
Also if my web service has a number of web methods and I want some web methods to run on a different network card, would I have to split the web service so that the web methods are on different web services? In other words I would have to write two web services?
Are you really maxing out your network such that you need to implement something like this? I would look into bottlenecks within the application first before going down this road.
If your network is the bottleneck, then perhaps moving your web service to a completely different server might be a better solution. It'll most likely be cleaner and easier to implement.
Having said that, it can probably be done, but it would be convoluted. The network cards would need to be on different networks; it wouldn't make sense if it's the same network. Each network card will have a different IP address assigned.
In IIS, you'll need to make sure that the site which houses your web service is configured for one particular IP address.
Can I do this without changing the clients?
Depends. You will need to make sure whoever is calling your web service does it using the IP address configured within IIS. That might mean either creating a DNS record that points to that particular IP address OR editing your clients to point to the right IP address.

Communication between two computers without opening ports, using a third computer to set up the connection

Let's say I have a server, and two clients connected to it. (via TCP, but it doesn't matter)
My goal is to allow a direct connection between those two clients. This is to allow direct voice contact between two players, for example, or any other client plugin they may have installed which doesn't need server interaction (like playing some kind of random game between the two). The server can be there to help set up the connection.
From duskwuff's answer, I got several leads:
http://en.wikipedia.org/wiki/STUN which describes an algorithm to do that, and
http://en.wikipedia.org/wiki/UDP_hole_punching
From those, I got more leads:
http://www.h-online.com/security/features/How-Skype-Co-get-round-firewalls-747197.html
http://nutss.gforge.cis.cornell.edu/stunt.php -- A possible STUN implementation with TCP
With time, I could surely work out something for my program. For now I'm using C++ and TCP (Qt Sockets or Boost sockets), but if needed I don't mind doing UDP in C and wrapping it.
The bounty is there for any programmer having experience with those in C and C++ that may give tips to make this easier, by linking to example programs, updated libraries, or any other useful information. A documented, flexible & working C++ TCP implementation would be the best but I'll take what I get!
Punching TCP holes in NAT is sometimes/often possible (it depends on the NAT behavior). This is not a simple subject to learn, but read the corresponding chapter about NAT traversal from Practical JXTA II (available online on Scribd) to understand the nature of the issues to solve.
Then, read this. It comes from the guy who wrote that: http://nutss.gforge.cis.cornell.edu/stunt.php (one of the links in your question).
I am not a C/C++ specialist, but the issues to solve are not language specific. As long as you have access to TCP from your code base, that's enough. Keep in mind that implementing UDP traversal is easier than TCP.
Hope these tips help.
P.S.: I am not aware of a C/C++ implementation of the solution. The code mentioned in Cornell's link is NOT operational as confirmed by the author. I tried to resuscitate it myself, but he let me know it was completely tweaked for research purposes and far from production ready.
I'm not aware of any way to reliably punch through firewalls for TCP, but there's a similar method for UDP traffic that's pretty well documented:
http://en.wikipedia.org/wiki/STUN
http://en.wikipedia.org/wiki/UDP_hole_punching
A few links to projects that might be of interest or helpful:
http://sourceforge.net/projects/stun/
http://udt.sourceforge.net/
http://www.telehash.org/
You're looking for a rendezvous server for NAT hole punching: a server that is publicly accessible (not behind a NAT/firewall, or behind one that is properly configured) to help computers behind NATs/firewalls establish a peer-to-peer connection.
UDP is more popular for NAT punching because it provides much better results than TCP. A clear and informative description of UDP NAT hole punching can be found here.
If you need reliable communication, you can use reliable protocols over UDP:
SCTP (libraries) - the standardized one, or
one of many custom protocols, e.g. RakNet (I used this library; it's quite mature and feature-rich and has a NAT punching implementation), ENet, or many others.
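To make the rendezvous/hole-punching flow concrete, here is a bare-bones UDP sketch in Python (no retries, no symmetric-NAT handling; the host name and port are placeholders). The public server just swaps the observed public addresses of the two peers; the peers then fire datagrams directly at each other from the same socket they registered with.

```python
# Bare-bones UDP hole punching sketch: a public rendezvous server pairs up
# two clients and tells each one the public (ip, port) of the other.
# Not production code: no reliability, no symmetric-NAT handling.
import socket
import sys

RENDEZVOUS = ("rendezvous.example.com", 9999)   # public server (placeholder)

def run_server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))
    waiting = None
    while True:
        _, addr = sock.recvfrom(64)             # any datagram means "register me"
        if waiting is None:
            waiting = addr
        else:
            # Tell each peer the other's public endpoint, then forget both.
            sock.sendto(f"{addr[0]}:{addr[1]}".encode(), waiting)
            sock.sendto(f"{waiting[0]}:{waiting[1]}".encode(), addr)
            waiting = None

def run_client():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"register", RENDEZVOUS)        # opens an outbound NAT mapping
    data, _ = sock.recvfrom(64)                 # peer's public ip:port
    host, port = data.decode().split(":")
    peer = (host, int(port))
    # Both sides send first; the outgoing packets punch the holes that let
    # the incoming ones through (works for most cone-type NATs).
    for _ in range(5):
        sock.sendto(b"hello", peer)
    sock.settimeout(5)
    print("peer says:", sock.recvfrom(64)[0])

if __name__ == "__main__":
    run_server() if sys.argv[1:] == ["server"] else run_client()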
Ephemeral ports won't magically eliminate the need to relay through the server, because they are only valid during the life of a session opened through a well-known service port. Basically, ephemeral ports depend on a server session.
You will need to use the server to relay communications between both clients, that is, act as a proxy server. One option would be to set up an SSH tunnel through an SSH proxy server, with the added benefit of security.
Still, this doesn't guarantee that the firewall won't block the connection. That depends on the firewall type and configuration. Most residential routers that act as firewalls block all incoming connections by default. This is normally fine because most of the time the computers behind the firewall act only as clients, which initiate connections to the outside. This setup varies, though: some restrict outgoing connections to well-known service ports like HTTP, HTTPS, FTP, SFTP, SSH, etc., and if your proxy server uses a port that isn't a well-known service port then the connection will be blocked.
But firewalls can be set up to block outgoing traffic as well; this is most common in corporate networks, which don't even allow direct connections to web servers and route everything through proxy servers in order to control resource usage.
You can also research the use of UPnP to open ports dynamically.
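If you do look at UPnP, the miniupnpc Python bindings make the request fairly short. A sketch, assuming the miniupnpc module is installed and the router has UPnP/IGD enabled; the port and description are placeholders.

```python
# Ask the internet gateway (via UPnP/IGD) to forward an external port to us.
# Requires the miniupnpc bindings and a router with UPnP enabled.
# Port 8443 and the description are placeholders.
import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200          # milliseconds to wait for gateway replies
found = upnp.discover()           # returns the number of devices discovered
if found == 0:
    raise SystemExit("no UPnP gateway found")
upnp.selectigd()                  # pick the Internet Gateway Device

print("external IP:", upnp.externalipaddress())
# Map external TCP port 8443 to port 8443 on this machine.
upnp.addportmapping(8443, "TCP", upnp.lanaddr, 8443,
                    "example service behind NAT", "")
```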

HTTP tunnel sample

Is it possible to create an HTTP tunnel in Delphi or C++?
My application connects to several HTTP servers that do not belong to the company I work for. Because of that, our users need to open their firewall ports to allow those connections. I thought about creating a tunnel at my company and redirecting the HTTP requests made by my application through this tunnel. That way, my clients will only need to open one port and the tunnel will handle all requests. All requests are made with POST or GET using Indy components.
EDIT: I can't use an HTTP proxy. Some of my users have already got their own HTTP proxy and it is going to be impossible to connect to two different proxy servers at the same time.
Here is a free component; it's kind of old, but it works, and you can get some inspiration from it:
TGpHTTPProxy
Or you can try these samples:
https://sites.google.com/site/delphibasics/home/delphibasicssnippets/examplesocks4proxybyaphex
https://sites.google.com/site/delphibasics/home/delphibasicssnippets/multi-threadedhttpproxyserver
As Warren P. and Rob Kennedy suggest, you really just need a proxy server. Don't write a tunnel yourself; it's huge overkill and it's far from easy (writing a robust socket application is more time consuming than it first appears to be).
If you want something dead simple, look for datapipe.c or the netcat (nc) Unix command. SSH can create tunnels too (look in the OpenSSH and PuTTY docs).
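For reference, the whole "datapipe" idea fits in a few dozen lines. Here is a rough Python equivalent (the listen port and target are placeholders) that just shovels bytes between a local listener and a fixed destination:

```python
# Rough Python equivalent of datapipe.c / piping with nc: accept connections
# on LISTEN_PORT and copy bytes in both directions to TARGET.
# Placeholders only; no limits, no TLS, not meant for production.
import socket
import threading

LISTEN_PORT = 8080
TARGET = ("internal-api.example.com", 80)   # placeholder destination

def pump(src, dst):
    """Copy bytes from src to dst until src closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client):
    upstream = socket.create_connection(TARGET)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", LISTEN_PORT))
listener.listen(5)
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```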
Here is a free open source HTTP-Tunnel and UDP-Tunnel: http://barbatunnel.codeplex.com/

Which one can I choose? SSH or AMQP?

My application runs in Windows and is implemented using C++/Qt.
The application will invoke another application deployed on a Linux server, which in turn will invoke some third-party tools. The Linux server application will send some status updates based on the progress of the third-party tools. Usually the third-party application will run for hours and the updates will be sent at various stages. The Linux server may also have to send some files in addition to the status updates, and the Windows client will also send some files required for running those third-party tools.
I planned to implement this with libssh2, since file transfers can be done and applications can be executed as well using libssh2_channel_exec(). Updates can be sent and received through non-blocking socket transfers. The transfers must also be secured and password authenticated, so I thought SSH would meet my requirements.
I also looked into Apache Qpid, which implements AMQP. Messaging seems more appropriate for my status updates since the updates are infrequent. But I am not so sure about the secured connection, password authentication, and also the application invocation.
So, which one should I choose between these two? Or is there a better option available? I am not very used to network programming, so any pointers or links regarding this are welcome.
Have you considered some web-based solutions like XML-RPC, REST, SOAP or others? Note that you can either have a constant network connection and stream updates, or just make your client ask for updates as often as it needs.
Also, I think that building a solution based on one of these protocols will make the coding easier - there is no need for low-level solutions when you have great libraries. As for the security part, I would consider the SSL that is part of HTTPS to be secure enough. Of course you can also do it hybrid style, for example an SSH tunnel to a secure server, using SSH key authorization.
But if you are sure you want SSH or AMQP, then use the first one - I think it has better security. Also, try not to use username/password authentication; instead use the keys mentioned above.
Start with SSH, and then consider layering other protocols on top. You can use SSH port forwarding to create a VPN connection to a server, and maybe that will make it easier to use something like AMQP or 0MQ.
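As a sketch of the SSH-first approach (shown with the paramiko library in Python rather than libssh2, but libssh2's channel_exec() gives the same flow from C++): connect with a key, run the remote tool, and stream its status lines back. The host, user, key path, and command are placeholders.

```python
# Sketch of running the remote job over SSH and streaming status updates.
# Uses paramiko here for brevity; libssh2 offers the same flow from C++.
# Host, user, key path and command are placeholders.
import paramiko

HOST = "linux-server.example.com"
USER = "builder"
KEY_FILE = "/home/me/.ssh/id_rsa"
REMOTE_CMD = "/opt/tools/run_job.sh --input /tmp/job.dat"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys in production
client.connect(HOST, username=USER, key_filename=KEY_FILE)

# Copy the input file up, then start the long-running tool.
sftp = client.open_sftp()
sftp.put("job.dat", "/tmp/job.dat")
sftp.close()

stdin, stdout, stderr = client.exec_command(REMOTE_CMD)
for line in stdout:                 # status updates arrive as the tool prints them
    print("status:", line.rstrip())

print("exit code:", stdout.channel.recv_exit_status())
client.close()
```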