Create a SOCKS proxy that does nothing special - C++

I am trying to create a SOCKS proxy in C++ that runs as a background process on localhost.
If the user's browser is configured to use the proxy, I want all HTTP requests to be passed along through the normal TCP/IP stack, i.e. the browser will behave exactly as it normally would.
Eventually I will add another layer which will check to see if the requested resource matches certain criteria, and if so will handle the request differently. But for now I'm just trying to solve the basic problem... how to create a SOCKS proxy that doesn't change anything?

I would look into the Squid project, depending on what you need it for.
http://www.squid-cache.org/
GPL-licensed source.
It is insanely useful for many things.
Jacob

It is far easier to build an HTTP proxy than a SOCKS4/SOCKS5 one, as the HTTP protocol is human readable and the SOCKS protocols are not. Here is an example of an HTTP proxy I built for the experience some years ago. It used to work fine with old browsers; it is broken now because it cannot handle persistent connections, but it is still a good source for learning how a proxy works.
Maybe you would rather use an already existing HTTP proxy package like Squid.
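That said, a pass-through SOCKS proxy is not much code. Below is a minimal single-connection SOCKS5 sketch in C++, assuming POSIX sockets: it handles only the no-auth handshake and IPv4 CONNECT requests (real browsers often send domain names, ATYP 0x03, which it rejects) and then forwards bytes unchanged in both directions. Treat it as an illustration of the protocol, not production code:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstring>

// Shovel bytes between the browser and the destination until either side closes.
static void relay(int a, int b) {
    char buf[4096];
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(a, &fds);
        FD_SET(b, &fds);
        if (select((a > b ? a : b) + 1, &fds, nullptr, nullptr, nullptr) <= 0) return;
        int from = FD_ISSET(a, &fds) ? a : b;
        int to = (from == a) ? b : a;
        ssize_t n = recv(from, buf, sizeof buf, 0);
        if (n <= 0) return;
        if (send(to, buf, n, 0) != n) return;
    }
}

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); // localhost only
    addr.sin_port = htons(1080);                   // conventional SOCKS port
    bind(srv, (sockaddr*)&addr, sizeof addr);
    listen(srv, 16);

    for (;;) {
        int cli = accept(srv, nullptr, nullptr);
        if (cli < 0) continue;

        // 1. Greeting: VER NMETHODS METHODS...; reply "no authentication" (0x00).
        //    One recv() per message keeps the sketch short; real code must loop.
        unsigned char greet[258];
        if (recv(cli, greet, sizeof greet, 0) < 3) { close(cli); continue; }
        unsigned char ok[2] = {0x05, 0x00};
        send(cli, ok, 2, 0);

        // 2. Request: VER CMD RSV ATYP DST.ADDR DST.PORT; accept CONNECT + IPv4 only.
        unsigned char req[10];
        if (recv(cli, req, sizeof req, 0) != 10 || req[1] != 0x01 || req[3] != 0x01) {
            close(cli);
            continue;
        }

        // 3. Connect to the requested destination, completely unchanged.
        int dst = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in target{};
        target.sin_family = AF_INET;
        memcpy(&target.sin_addr, req + 4, 4);
        memcpy(&target.sin_port, req + 8, 2); // already in network byte order
        if (connect(dst, (sockaddr*)&target, sizeof target) != 0) {
            close(dst);
            close(cli);
            continue;
        }

        // 4. Success reply, then pass traffic through untouched.
        unsigned char rep[10] = {0x05, 0x00, 0x00, 0x01, 0, 0, 0, 0, 0, 0};
        send(cli, rep, 10, 0);
        relay(cli, dst); // this loop is where a filtering layer could hook in later
        close(dst);
        close(cli);
    }
}

The relay() loop is the natural seam for the later layer that checks whether a request matches your criteria and handles it differently.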

Related

Restful service in web application

I am new to RESTful web services. From what I have read about RESTful web services on the internet, I gather that REST works similarly to servlet + web service.
Our traditional web service stack looks like JSP -> Servlet -> Service -> DAO -> Database.
Will REST replace the Servlet in this hierarchy?
My ultimate goal is for my web application to support both mobile applications and normal browsers. Is it a good idea to use REST in that case? If not, in what situations should we use REST?
I hope my question is clear.
Please help me.
Thanks in advance.
There are many ways we can achieve machine-to-machine communication.
Web services also help applications built on different platforms communicate with each other.
For example, a .NET GUI can call a Java server-side program for data.
REST is one such approach, based on the HTTP protocol.
SOAP web services are heavyweight (using lots of XML), whereas REST is simple, and you can expose any of your APIs easily using REST.
A service exposed as a REST service can be invoked by a client using one of the HTTP verbs GET, POST, PUT, and DELETE, with their meanings the same as in HTTP.
RESTful web services expose the state of their resources.
'Employee' data, for example, can be queried and represented in any format (JSON, XML, ...) using REST.
REST won't replace the Servlet in your hierarchy; in fact, the HTTP-based REST methods are implemented on top of these servlets.
Please go through this URL: http://docs.oracle.com/javaee/6/tutorial/doc/gijqy.html
Using REST is not related to the browser experience on mobile or other devices. That depends entirely on the client-side technology used and your browser's compatibility with those technologies.
Using REST is a good idea for accessing data on the client side via simple AJAX calls.
REST means Representational State Transfer. It is a way of thinking about architecting network communication between client and server, with the focus being on transferring a resource from server to client and back again.
To understand the significance of this first consider a different architecture, Remote Procedure Call. This is where the client calls a function on the server as if the function existed on the client.
So you want to edit a photo that exists on the server. Your client is a photo editing app that uses RPC to achieve this. You want to blur the photo so your client calls the blur() function using RPC, and the server blurs the image and sends back the updated image. Then you want to rotate the image, so your client calls the rotate() function and the server rotates the image and sends the rotated image to your client.
You might have noticed two issues. Firstly, every time you carry out an action on the photo the server needs to do some work and send you back the updated image. This uses a lot of bandwidth.
Secondly, what happens if tomorrow the server developers (who might have nothing to do with the client developers) decide that rotate() is the wrong function name, that it should really be rotate_image(), and they update the server? Your client continues to call rotate(), but this now fails because no such function exists on the server.
REST is an alternative way of thinking about client/server communication. Instead of telling the server to carry out an action on the resource (e.g. rotate the photo), why not have the client simply get a representation of the resource, carry out all the actions it wants (blur, rotate, etc.) locally, and then send the new state of the resource back to the server?
If you did it this way the protocol to communicate between client and server can be kept very simple and will require very few updates. All you need is functions for the client to get the resource and functions to put it back on the server. The client will have to know how to blur the image and rotate the image, but it doesn't need to know how to tell the server to do this, it just needs a way of telling the server to save the updated image.
This means that the developers of the client can implement new features independently of the developers of the server. Very handy if the developers of the client have nothing to do with the server (the developers of Firefox have nothing to do with the New York Times website, and vice versa).
HTTP is one such protocol that follows this architecture pattern and it allows the web to grow as it has. There are a small set of verbs (functions) in HTTP and they are concerned only with transferring a representation of the resource back and forth between client and server.
Using HTTP, your photo client simply sends a GET message to the server to get the photo. The client can then do whatever it wants to the photo. When it is finished, it sends a PUT message with the updated photo to the server.
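To make that concrete, here is a hedged sketch of such a photo client in C++ using libcurl (the URL and file names are made up for illustration):

#include <curl/curl.h>
#include <cstdio>

// libcurl hands downloaded bytes to this callback; we append them to a file.
static size_t writeToFile(char* data, size_t size, size_t nmemb, void* userp) {
    return fwrite(data, 1, size * nmemb, (FILE*)userp);
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const char* url = "http://example.com/photos/42.png"; // placeholder resource

    // GET: fetch the current representation of the resource.
    CURL* curl = curl_easy_init();
    FILE* out = fopen("photo.png", "wb");
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToFile);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
    curl_easy_perform(curl);
    fclose(out);
    curl_easy_cleanup(curl);

    // ... blur/rotate photo.png locally, entirely on the client ...

    // PUT: send the edited bytes back as the resource's new state.
    curl = curl_easy_init();
    FILE* in = fopen("photo.png", "rb");
    fseek(in, 0, SEEK_END);
    long len = ftell(in);
    fseek(in, 0, SEEK_SET);
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); // libcurl issues PUT for uploads
    curl_easy_setopt(curl, CURLOPT_READDATA, in);
    curl_easy_setopt(curl, CURLOPT_INFILESIZE, len);
    curl_easy_perform(curl);
    fclose(in);
    curl_easy_cleanup(curl);

    curl_global_cleanup();
    return 0;
}

Note that the client never tells the server how to edit anything; it only transfers representations of the resource back and forth.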
Because there are no domain-specific actions in the protocol (blur, rotate, resize), this protocol can be used for any number of resources. HTTP doesn't care if the resource is an HTML document, a WAV file, a JavaScript script, or a PNG image. The client obviously cares, because it needs to understand the resource it gets, and the server might care as well. But the protocol between the client and server doesn't need to care. The only thing HTTP knows is that there is a Content-Type field in the HTTP header where the server can tell the client what type of resource this is.
This is powerful because it means you can update your client independently of updating your server, without updating the transfer protocol. HTTP hasn't been updated in years. HTML, on the other hand, is updated constantly, and web servers and web browsers are updated constantly (Chrome is on version 33). These updates can happen independently of each other because HTTP never (well, rarely) changes.
A web browser from 10 years ago can still communicate with a modern web server over HTTP to get a resource. The browser might not understand the resource, say it gets a WebM video that it can't understand, but it can still get this resource without the network communication failing.
Contrast that with the RPC example above, where the client/server communication will break if the server changes rotate() to rotate_image(). Every single client will have to be updated with this new function or it will fail when trying to talk to the server.
So REST is a way of thinking about client server communication, it is an architecture design/pattern. HTTP is a protocol that works under this way of thinking that focuses on simply transferring state of a resource between server and client.
Now it is important to understand that historically a lot of people, including web developers, didn't get this. So you got things like developers putting verbs into resource names to try and simulate Remote Procedure Call over HTTP. Things like
GET http://www.mywebsite.com/image/blur_image
And they would hard-code the URI /image/blur_image into their client and then try to make sure the guys developing the server never changed the URI blur_image. You are back to all the problems of RPC. As soon as the server guys move the resource blur_image (which is not really a resource to start with) to /image/blur_my_image, the client falls over, because it has that URI hard-coded as an action to perform, rather than simply getting /image and doing whatever it wants to it.
So there are lots of examples on the web of doing REST wrong. Anything that tightly couples client and server communication is doing REST wrong. Your client should be able to survive URIs changing, or Content-Types being updated, without falling over. It can complain that it doesn't understand a resource (e.g. Netscape Navigator 2.0 complaining it has no idea what an HTML5 document is), but it should not fall over because a URI has changed. This is the discoverability aspect of REST, which I haven't gone into much, but basically your client should be able to start at the root of the server, http://www.mywebsite.com, and if it understands the content types, it should be able to continue on to the resource it wants. You should never need to hard-code a URI into your client other than the root of the server.
I could write a book about this stuff (and many have), but I hope that serves as a good introduction about what REST actually is.
@javafan I just checked the mkyong example you provided. Please note that it is not a standard HTTP servlet implementation; it is the Jersey way of implementing REST. When you map all your URIs through the servlet com.sun.jersey.spi.container.servlet.ServletContainer and write classes with annotations such as @Path, the Jersey runtime environment does the necessary processing for you, such as converting the input and output objects to the required formats (JSON, XML, etc.) depending on your configuration. You can write a simple servlet, add methods annotated with @Path, and they will in turn be invoked when the corresponding request is made; the doGet and doPost methods are the standard servlet methods that process GET and POST requests by default. You can add other methods to the same servlet with more qualifiers to process your requests:
@GET, @Produces("xml"), etc.
I hope this helps.

C++: how to listen for HTTP requests

I'm new to C++.
I need to listen for HTTP requests.
Please point me to some good tutorials or examples.
Thanks
update:
Platform: Windows
Language: C++
I will explain more clearly what I need.
When the user clicks a row on this page: http://ucp-anticheat.org/monitor.html, an application automatically starts on the client machine.
I want to do the same thing.
I think that on the client side there is a service which listens for HTTP requests, and if the URL starts with steam:// the service automatically runs the application...
Do I need to listen for HTTP requests?
What is the best solution for my problem?
You can listen for HTTP requests through a web server like mongoose, which can be easily used from C++: http://code.google.com/p/mongoose/ . Here is a good example of using the mongoose web server: http://code.google.com/p/mongoose/source/browse/examples/hello.c
I'm not sure what you mean by 'client side'. If you mean the browser as your client, you can't control anything outside your browser. If you want to control a machine, you need the client machine to run your exe, which has the code to act based on your server's instructions.
You should create a simple server program: create a SOCKET listening on the default HTTP, HTTPS, etc. ports. Usually we do it inside a loop (doing a read on each iteration).
Now... it would be easier if you specified whether you are on a Unix-like OS or Windows, but from here you can google it. Look at sys/socket.h, or try "man 7 socket" on almost any Linux (at least the ones I know).
If you want to sniff traffic, you can google for specific apps around the web.
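Since you mentioned Windows, here is a minimal Winsock sketch of such a listener: it accepts connections on port 8080 (an arbitrary choice), prints each request line, and sends a fixed reply. A real server must parse headers properly and loop on recv; error handling is trimmed for brevity:

#include <winsock2.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET srv = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(srv, (sockaddr*)&addr, sizeof addr);
    listen(srv, SOMAXCONN);

    for (;;) {
        SOCKET cli = accept(srv, nullptr, nullptr);
        if (cli == INVALID_SOCKET) break;

        char buf[4096];
        int n = recv(cli, buf, sizeof buf - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            // The first line of an HTTP request looks like: GET /path HTTP/1.1
            char* eol = strstr(buf, "\r\n");
            if (eol) *eol = '\0';
            printf("request line: %s\n", buf);

            const char* resp =
                "HTTP/1.1 200 OK\r\n"
                "Content-Type: text/plain\r\n"
                "Content-Length: 2\r\n"
                "Connection: close\r\n\r\nok";
            send(cli, resp, (int)strlen(resp), 0);
        }
        closesocket(cli);
    }
    closesocket(srv);
    WSACleanup();
    return 0;
}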
If I get your question right, you want to be able to launch an application when someone clicks a link with a custom protocol, like steam:// or telnet://. You are looking for a protocol handler.
A simple way to register such an application is using the ftype program, as described here.
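For a steam://-style URL scheme specifically, the usual approach is a few registry keys under HKEY_CLASSES_ROOT. Purely as an illustration (the myapp scheme and handler path below are invented placeholders), this C++ sketch writes those keys:

#include <windows.h>
#include <cstring>

static void setValue(HKEY key, const char* name, const char* value) {
    RegSetValueExA(key, name, 0, REG_SZ,
                   (const BYTE*)value, (DWORD)strlen(value) + 1);
}

int main() {
    HKEY proto, cmd;
    // Needs to run elevated to write to HKEY_CLASSES_ROOT.
    RegCreateKeyExA(HKEY_CLASSES_ROOT, "myapp", 0, nullptr, 0,
                    KEY_WRITE, nullptr, &proto, nullptr);
    setValue(proto, nullptr, "URL:myapp Protocol"); // default value
    setValue(proto, "URL Protocol", "");            // marks the key as a URL scheme

    RegCreateKeyExA(proto, "shell\\open\\command", 0, nullptr, 0,
                    KEY_WRITE, nullptr, &cmd, nullptr);
    // "%1" receives the full URL that was clicked.
    setValue(cmd, nullptr, "\"C:\\myapp\\handler.exe\" \"%1\"");

    RegCloseKey(cmd);
    RegCloseKey(proto);
    return 0;
}

After running it once, clicking a myapp:// link in the browser will prompt to launch the handler with the clicked URL as its argument.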

HTTP connection through DMZ / proxy in C++

I want to connect from the webserver via a dedicated proxy to the intranet. I am not sure if it matters, but I want to send and receive XML. It would be great if I could use HTTP.
I know of one open port, 78xx, which I have successfully used with a TCP socket as described in this excellent tutorial.
Is it possible? Or does the answer depend on the actual proxy configuration? If it scans for the protocol and dislikes it, the traffic is going to be blocked!?
And what library would you recommend? I just found pion. Can I link it statically? It's almost impossible for me to install anything on the web server.
EDIT My question is probably two-fold:
First, I should add that there is an existing communication client+server, but the server is a mixup of the concrete socket and networking implementation and the API to the database, consisting of about 10 commands that I find hard to extend. So I am asking for a generic lib so that I can rewrite that API from scratch.
Second, I need session handling: the web application passes the user login data to that client, and a session-id is returned which is used for all further communication until it expires. That was the reason I asked for HTTP, but meanwhile I realized HTTP itself is stateless.
The answer is... in progress. I need to practice more with C++ TCP libs etc.
My post was unfortunately hard to understand; I had some confusion about all of this.
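For what it's worth, the usual way to layer sessions over stateless HTTP is a token the server issues at login which the client echoes on every later request. A hedged libcurl sketch of that pattern follows; the /login URI and the X-Session-Id header are invented names for illustration, not anything from the existing server:

#include <curl/curl.h>
#include <string>

// Collect response headers so we can fish out the session token.
static size_t onHeader(char* data, size_t size, size_t nmemb, void* userp) {
    ((std::string*)userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    // 1. POST credentials; the server answers with a session token header.
    std::string headers;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://intranet.example/login");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "user=web&pass=secret");
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, onHeader);
    curl_easy_setopt(curl, CURLOPT_HEADERDATA, &headers);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    // Naive parse of "X-Session-Id: <token>" out of the raw header block.
    std::string token;
    size_t pos = headers.find("X-Session-Id: ");
    if (pos != std::string::npos) {
        size_t end = headers.find("\r\n", pos);
        token = headers.substr(pos + 14, end - pos - 14);
    }

    // 2. All further requests carry the token until it expires.
    curl = curl_easy_init();
    curl_slist* hdrs = curl_slist_append(nullptr, ("X-Session-Id: " + token).c_str());
    curl_easy_setopt(curl, CURLOPT_URL, "http://intranet.example/query");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);

    curl_global_cleanup();
    return 0;
}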

Secure data transfer over HTTP with a custom server

I am pretty new to the security aspect of applications. I have a C++ Windows service (server) that listens on a particular port for HTTP requests. The HTTP requests can be made via AJAX or a C# client. Due to a scope change, we now have to secure this communication between the clients and the custom server written in C++.
Therefore I am looking for options to secure this communication. Can someone help me out with the possible approaches I can take to achieve this?
Thanks
Dpak
Given that you have an existing HTTP server (non-IIS) and you want to implement HTTPS (which is easy to screw up and hard to get right), you have a couple of options:
Rewrite your server as a COM object, and then put together an IIS webservice that calls your COM object to implement the webservice. With this done, you can then configure IIS to provide your webservice via HTTP and HTTPS.
Install a proxy server (Internet Security and Acceleration Server or Apache with mod_proxy) on the same host as your existing server and setup the proxy server to listen via HTTPS and then reverse proxy the requests to your service.
The second option requires little to no changes to your application; the first option is the better long-term architectural move.
Use HTTPS.
A good toolkit for securing your communication channel is OpenSSL.
That said, even with a toolkit, there are plenty of ways to make mistakes when implementing your security layer that can leave your data open to attack. You should consider using an existing https server and having it forward the requests to your server on the loopback channel.
It's reasonably easy to do this using either OpenSSL or Microsoft's SChannel SSPI interface.
How complex it is for you depends on how you've structured your server. If it's a traditional-style BSD sockets 'select' type server, then it should be fairly straightforward to take the examples from either OpenSSL or SChannel and get something working pretty quickly.
If you're using a more complex server design (async sockets, IOCP, etc.) then it's a bit more work, as the examples don't tend to show these things. I wrote an article for Windows Developer Magazine back in 2002, available here, which shows how to use OpenSSL with async sockets; that code can be adapted to work with overlapped I/O and IOCP-based servers if you need to.
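To give a flavour of the OpenSSL route, here is a minimal blocking TLS echo server sketch (OpenSSL 1.1+ API, POSIX sockets for brevity; cert.pem, key.pem, and port 8443 are placeholders, and error handling is stripped). In a real service you would accept on your existing port and hand the decrypted bytes to your current HTTP handling code:

// Build with: -lssl -lcrypto
#include <openssl/ssl.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    SSL_CTX* ctx = SSL_CTX_new(TLS_server_method());
    SSL_CTX_use_certificate_file(ctx, "cert.pem", SSL_FILETYPE_PEM);
    SSL_CTX_use_PrivateKey_file(ctx, "key.pem", SSL_FILETYPE_PEM);

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8443);
    bind(srv, (sockaddr*)&addr, sizeof addr);
    listen(srv, 16);

    for (;;) {
        int cli = accept(srv, nullptr, nullptr);
        SSL* ssl = SSL_new(ctx);
        SSL_set_fd(ssl, cli);
        if (SSL_accept(ssl) == 1) {               // TLS handshake
            char buf[4096];
            int n = SSL_read(ssl, buf, sizeof buf);
            if (n > 0) SSL_write(ssl, buf, n);    // echo the plaintext back
        }
        SSL_shutdown(ssl);
        SSL_free(ssl);
        close(cli);
    }
}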

Non-Transparent Proxy Caching of SSL

I asked the question before but didn't phrase it quite right. I'm using RESTful principles to build a secure web-app that uses both transport authentication/encryption and message level security.
The message level security is essentially client-independent (still encrypted though), and hence this allows the individual messages to be cached, or stored on an intermediary server without significant risk of exposing private data.
Transport level security is needed to authenticate both end-points using TLS client-authentication. The situation is analogous to having a central mainframe where messages originate, and caches at each branch where the clients are located. I want the client->cache and cache->mainframe connections to be secured using TLS and the individual X509 Certificates. Hence, the client will know it is talking to a proxy, and the mainframe will know it is talking to the proxy and not directly to the client.
Is there some way of doing this using HTTP standards, and not through some hack?
Essentially, I want the client to try and access the mainframe URI, to know it has to go through the proxy, and use TLS with the proxy (with the proxy having its own certificate), and then for the proxy to proceed to connect to the mainframe (with each having their own certificate) on behalf of the client. The proxy can cache the data the mainframe returns, and use that instead of having to connect to the mainframe each time.
Does anybody know proxy/caching software or a method that will allow this?
Would this get more responses on serverfault.com as it's essentially a server software/config question rather than a programming problem per se?
Basically, it sounds like you want a standard SSL reverse proxy with caching. You could do this without writing any code with Apache + mod_cache, configured as a reverse proxy.
The kicker is the message security. It'd only work if your requests are 100% cacheable based only on path/querystring, and if they were "unique by client" (e.g., a client ID in the querystring or something). Something tells me that one or both of these are not true. This would be pretty trivial to build in ASP.NET, or by extending mod_cache (basically just standard response caching, bucketed by the client cert thumbprint).