Background: I've got a C++/Qt-based application that communicates with servers on the user's LAN. It uses non-blocking TCP and UDP sockets, and the networking is implemented via calls to the BSD sockets API (i.e. socket()/send()/recv()/select()/etc.). It all works well.
The other day, just for fun, I decided to recompile the application using emscripten, so that it could run as a WebAssembly app inside a web browser.
This worked surprisingly well -- within an hour or two, I had my app up and running inside Google Chrome. However, the app's usefulness in this configuration is severely limited by the fact that it isn't able to connect to any servers -- presumably this is because it is running in a restricted/sandboxed environment.
If I wanted to pursue this line of development beyond the clever-hack-demo stage and try to make it useful, I would need to find a way for my program to discover and connect to servers on the user's LAN.
My question is: is that functionality at all possible for an Emscripten/WebAssembly-based app to perform? If so, what steps would I need to take? (i.e. would it require upgrading the LAN's servers to handle WebSocket-based connections? Would it require adding some sort of proxy server running on the web server that the web page was served from? Is UDP even a thing in a web-app context? Are there other hoops that would also have to be jumped through?)
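For concreteness, here is the shape of the code I'd need to keep working; a minimal sketch (the function name and hard-coded address are placeholders). My understanding so far is that Emscripten can tunnel this kind of TCP socket over a WebSocket, which would mean each LAN server has to speak WebSocket, either natively or behind a bridge such as websockify:

    // Plain non-blocking BSD-socket connect, as in the native build.
    // Under Emscripten the runtime turns this into a WebSocket connection,
    // so the peer must understand WebSocket framing (e.g. via websockify).
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int connect_to_server(const char* ip, unsigned short port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        fcntl(fd, F_SETFL, O_NONBLOCK);  // non-blocking, as before

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        // Under Emscripten this returns with EINPROGRESS and completes
        // asynchronously, so an existing select() loop keeps working.
        connect(fd, (sockaddr*)&addr, sizeof(addr));
        return fd;
    }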
I use WebRequest in a client to consume a web service on the Internet. Each request is triggered in a separate thread.
It works well if the client is hosted in IIS, but most of the requests time out if the client is hosted in a Windows service.
When I tried to debug the problem using Fiddler, the WebRequest calls worked fine, as all traffic went through Fiddler's proxy at 127.0.0.1:8888.
Without Fiddler, the traffic goes to the Internet directly through a random port, and the timeout problem hits again.
The Windows service runs under the Local System account.
Why do I get timeouts when the client runs in a Windows service without a proxy?
Update: My original question wasn't clear. The requests are made concurrently (or at very short intervals). The problem turned out to be the connection limit in the ServicePoint class: by default, only two concurrent connections are allowed to the same external destination, while a local destination gets a limit of int.MaxValue. That's why Fiddler can magically fix the problem with its proxy: with Fiddler in the middle, the destination is 127.0.0.1, which counts as local. So I manually set ServicePointManager.DefaultConnectionLimit to 100, and the requests went out on the wire.
Adjusting HttpWebRequest Connection Timeout in C#
The most common source of problems that is "magically" fixed by running Fiddler is .NET code that fails to call Close() on the object returned by GetResponseStream() (or to dispose of it with a using block). Until the response stream is closed, the underlying connection isn't returned to the pool, and with the default limit of two connections per host, subsequent requests sit waiting for one to free up. See http://www.telerik.com/automated-testing-tools/blog/13-02-28/help-running-fiddler-fixes-my-app.aspx for more details.
We provide a couple of SOAP web services.
Yesterday our service was down: we couldn't access it from the outside (we couldn't even load the WSDL), but we could access it when connected to the server via Terminal Services.
The thing is, one of our partners was calling our web service with 130 simultaneous threads.
So I think the service was down because this partner was occupying all the available connections. And this limit seems to be imposed by .NET rather than the network, because I could still read a static file (txt) on the server from the outside, and the service accepted connections coming from the local IP.
Here is my question: how can I limit the simultaneous connection count for a single client? I know I can set a global limit in IIS Manager, and I can limit outgoing requests (the connectionManagement configuration), but I can't find an equivalent for incoming requests.
It's strange, because I'd think it's one of the first things you'd set up to prevent a DoS attack.
(.NET 3.5, IIS 6)
I'm not really up to speed on exactly what role(s) today's proxy servers can play, and I'm learning as I go, so go easy on me :-) I have a client/server system I have written using a homegrown protocol, and I need to enhance the client side so it can negotiate its way out of a proxy environment.
I have an existing client and server system written in C and C++ for speed, with a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows everything -- not a choice), sticking to Berkeley sockets, as it were, via wsock32 for efficiency. The clients connect to the server through a nonstandard port (using port 80 is an option for getting out of some environments, even though the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the client's participation in real-time conferences.
Our customer base is expanding into all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which allows the protocol to pass through a lot of environments, since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server, and my direct connections don't make it through. My old-school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly locally caching popular material for faster local access, and also allowing IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that, packets can come from either side at any time -- basically your typical custom client/server system for a specific niche.
My first reaction is "why can't they just add my server's addresses to their whitelist", but if there is a programmatic way to get through without requiring their IT staff's help, it is politically better and arguably a better solution anyway. Plus, maybe I'm still not understanding what proxy servers and proxy environments have grown into these days.
My first attempt at a solution was to use WinInet, with its various proxy capabilities, to establish a connection over port 80 to my non-standard-protocol server (which knows enough to recognize a simple HTTP-looking GET request and answer it with a simple HTTP response page, to get past environments that employ initial packet inspection (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use it in place of my software's existing SOCKET connection, hopefully without needing to change much more on the client side. It initially seemed to be my solution, but on further inspection it turns out the OS gets first chance at the received data on this socket: when I get notified of events via the standard select(...) call and query the size of the available data via ioctlsocket(), the call succeeds but reports 0 bytes available, the reads don't work, and it goes downhill from there.
Can someone tell me of a client-side library (commercial is fine) that will let me get past these proxy environments with as little user and IT-staff help as possible? From what I've read, the state of the art has grown past SOCKS, and I figure someone has to have solved this problem before me.
Thanks for reading my long-winded question,
Ripred
If your software can make an SSL connection on port 443, then you are 99% of the way there.
Typically HTTP proxies are set up to proxy SSL-on-443 (for the purposes of HTTPS). You just need to teach your software to use the HTTP proxy. Check the HTTP RFCs for the full details, but the Cliffs Notes version is:
Connect to the HTTP proxy on the proxy port;
Send to the proxy:
CONNECT your.real.server:443 HTTP/1.1\r\n
Host: your.real.server:443\r\n
User-Agent: YourSoftware/1.234\r\n
\r\n
Then parse the proxy's response, which will start with an HTTP status line, followed by HTTP headers, followed by a blank line. You'll then be talking with your destination (if the status code indicated success, anyway) and can start the SSL handshake.
In many corporate environments you'll have to authenticate with the proxy - this is almost always HTTP Basic Authentication, which is pretty easy - again, see the RFCs.
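For illustration, here is a minimal sketch of that handshake over a plain Berkeley-style socket (the function name and buffer sizes are placeholders; error handling and partial writes are glossed over, and a Windows build would pull in winsock2.h instead):

    #include <cstdio>
    #include <cstring>
    #include <sys/socket.h>
    #include <sys/types.h>

    // 'fd' is already connected to the proxy host/port.
    // Returns true once the tunnel to host:port is established.
    bool proxy_connect(int fd, const char* host, int port) {
        char req[512];
        snprintf(req, sizeof(req),
                 "CONNECT %s:%d HTTP/1.1\r\n"
                 "Host: %s:%d\r\n"
                 "User-Agent: YourSoftware/1.234\r\n"
                 "\r\n",
                 host, port, host, port);
        send(fd, req, strlen(req), 0);

        // Read until the blank line that ends the proxy's response headers.
        char buf[4096];
        size_t got = 0;
        while (got < sizeof(buf) - 1) {
            ssize_t n = recv(fd, buf + got, sizeof(buf) - 1 - got, 0);
            if (n <= 0) return false;
            got += (size_t)n;
            buf[got] = '\0';
            if (strstr(buf, "\r\n\r\n")) break;
        }
        // Status 200 means the tunnel is up; start the SSL handshake next.
        return strncmp(buf, "HTTP/1.", 7) == 0 && strstr(buf, " 200") != nullptr;
    }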
We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. It is installed on a server at the small business and is accessed from iPhones and other portable devices, so the devices connecting to the server could come from any number of IP addresses.
The problem comes with installation. When we install this service, setting up port forwarding so the outside world can reach Tomcat always seems to become a problem. Most of the time the owner doesn't know the router password, and so on.
I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.
Set up an SSH tunnel from each client office to a central server. The remote devices would connect to that central server on a port, and the traffic would be tunneled back to Tomcat in the office. It seems somewhat redundant to run SSL inside SSH, but there's really no other way to accomplish it, since I need SSL end-to-end (from device to office). I'm not sure of the performance implications, but I know it would work. I would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
Set up UPnP to open the hole for me. This would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step.
Come up with some type of NAT traversal scheme. I'm just not familiar with these and am uncertain how exactly they work. We have access to a centralized server, which is required for authentication, if that makes it any easier.
What else should I be looking at to get this accomplished?
Is there no way this service can be hosted publicly by you or a hosting provider rather than with the customer?
I had a similar situation when I was developing kiosks. I never knew what type of network environment I'd have to deal with on the next installation.
I ended up creating a PPTP VPN to allow all the kiosks to connect to one server I hosted publicly. We then created a controller web service to expose access to the kiosks, which were all connected via the VPN. I'm not sure how familiar you are with VPNs, but the VPN connection let me completely circumvent the firewall in front of each kiosk by accessing it via its VPN-assigned IP.
Each kiosk node was incredibly easy to set up once I had the VPN server running. It also brought management benefits and licensing revenue I originally hadn't thought about. With this infrastructure I was easily able to roll out services accessible via mobile phones.
Best of luck!
Solutions exist to "dynamically" reach software on a computer behind a NAT, but mostly for UDP communication.
The UDP hole punching technique is one of them. However, it isn't guaranteed to work in every situation: if both sides of the communication are behind symmetric NATs, it won't.
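To make the idea concrete, here is a minimal sketch of one peer's side of the punch (the function name, payload, and retry count are placeholders; a rendezvous server is assumed to have already told each peer the other's public endpoint, and both peers run this at the same time):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    void punch(int sock, const sockaddr_in& peer) {
        // Outgoing datagrams open a mapping in our own NAT; the peer's
        // datagrams, aimed at our public endpoint, can then get back in.
        const char probe[] = "punch";
        for (int i = 0; i < 10; ++i) {
            sendto(sock, probe, sizeof(probe), 0,
                   (const sockaddr*)&peer, sizeof(peer));

            char buf[64];
            sockaddr_in from = {};
            socklen_t fromlen = sizeof(from);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), MSG_DONTWAIT,
                                 (sockaddr*)&from, &fromlen);
            if (n > 0) return;  // a probe arrived: the hole is open
            sleep(1);
        }
    }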
You can obviously reduce the probability of a client being unable to communicate by using UPnP as a backup (or even primary) alternative.
I don't know web services well enough to say whether using UDP for your web service is even an option (or possible at all).
Using the same technique directly for TCP is likely to fail (TCP connections aren't stateless, which causes a lot of problems here).
An alternative using the same technique would be to set up a VPN that runs over UDP (such as OpenVPN), but as you stated, you'd have to manage keys, certificates, and so on. This can be automated (I have done it), but it's still not trivial.
===EDIT===
If you really want to use TCP, you could create a simple "proxy" program on the client boxes to serve as a relay.
You would have the following setup:
Web Service on client boxes, behind a NAT
The "proxy" software on the same boxes, establishing an outgoing (thus non-blocked) TCP connection to your company servers
Your company servers host a web service as well, which requires something like a "client identifier" to route each request to the matching established TCP connection.
The proxy program queries the local web service and sends the response back to the company servers, which relay it to the original requester (see the sketch below).
An alternative: you could have the proxy software connect directly to the requester to improve performance, but then you might run into the very NAT problems you're trying to avoid.
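As a rough illustration, the relay on the company servers boils down to a byte pump between the requester's connection and the client box's long-lived outbound connection. A minimal sketch (the function name and buffer size are placeholders; a and b are two already-connected sockets, and error handling is minimal):

    #include <sys/select.h>
    #include <unistd.h>

    void pump(int a, int b) {
        char buf[4096];
        for (;;) {
            fd_set rd;
            FD_ZERO(&rd);
            FD_SET(a, &rd);
            FD_SET(b, &rd);
            if (select((a > b ? a : b) + 1, &rd, nullptr, nullptr, nullptr) < 0)
                return;
            int from = FD_ISSET(a, &rd) ? a : b;  // whichever side has data
            int to   = (from == a) ? b : a;
            ssize_t n = read(from, buf, sizeof(buf));
            if (n <= 0 || write(to, buf, n) != n)
                return;  // one side closed (or a short write): tear down
        }
    }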
It's things like this that are the reason people are tunneling everything over HTTP now, and why certain hardware vendors charge a small fortune for Layer 7 packet filtering.
This is a tremendous amount of work to fix one problem when the customer has at least three problems. Besides the one you've identified, if they don't know their own password, then who does? An administrator who doesn't work there anymore? That's a problem.
Second, if they don't know the password, that means they're almost certainly far behind on firmware updates to their firewall.
I think they should seriously consider doing a PROM reset on their firewall and reconfiguring from scratch (and upgrading the firmware while they're at it).
3 birds, one stone.
I had to do something similar in the past, and I believe the best option is the first one you proposed.

You can do it the easy way, using ssh with its -R option, public-key authentication, and a couple of scripts to check for connectivity. Don't forget ssh's various keep-alive and timeout features.
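For example, something like this on each remote server (host names, ports, and the user are placeholders):

    # Expose the local service (8443) as port 9001 on the central server,
    # with keep-alives so a dead tunnel is noticed and can be restarted.
    ssh -N -R 9001:localhost:8443 tunnel@central.example.com \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
        -o ExitOnForwardFailure=yes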
Don't worry about performance. Use unprivileged users and ports if you can. Don't bother setting up a CA; the public key of each remote server is easier to maintain unless you are in the thousands.
Monitoring is quite simple: each server should test the service on the central server. If the test fails, either the tunnel is down or there's no connectivity, and restarting the tunnel does no harm in either case.
Or you can do it at the network level, using IPsec (strongSwan). This can be trickier to set up; it's the option I used, but I would use SSH next time, as it would have saved me a lot of time.
+1 for going with an SSH tunnel. It's well known, widely available, and not too hard to configure.
However, as you point out, you are already running SSL, so the SSH encryption is redundant. Instead of SSH you could use a regular tunneling proxy that provides the tunnelling without the encryption. I've used this one in the past and it worked well, although I didn't load-test it -- it was used with just a handful of users.
Here's a blog post from someone who used the tunnelling proxy to access his webcam from outside his firewall.
Set up an Apache in front of your Tomcat. The Apache should be visible from the internet, while the Tomcat should not be.
Configure Apache to forward all traffic to the Tomcat. This can easily be accomplished using mod_proxy (check out the ProxyPass and ProxyPassReverse directives).
Install your SSL certificate in Apache, so that all clients talk HTTPS to the Apache server, which in turn talks plain HTTP to Tomcat.
No tunneling or other nastiness -- plus, you will be surprised how easy it is to configure Apache to do this.
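For reference, here is a minimal sketch of that configuration (hostnames, paths, and ports are placeholders):

    <VirtualHost *:443>
        ServerName api.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/api.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/api.example.com.key

        # mod_proxy: forward everything to the local Tomcat.
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>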
If you want a RESTful integration with the client server, a tunnel to the central server acting as a proxy seems the best approach.
But if that is not a hard requirement, you can let the central server handle the RESTful side and integrate the central server and client server with other middleware. Good candidates would be RMI or JMS. For example, an RMI connection initiated by the client allows the server to make RMI calls back to the client.
You could connect to a PC or server and tunnel all the data via Hamachi (free VPN software). You install the tool and it creates a reverse connection (from inside your NAT to the outside), so you can connect to it.
site: http://hamachi.cc/