Block certain websites using C++

I am looking for a way to block certain websites using C++.
The agent is installed on the end user's machine, and if an unauthorized site is accessed, the browser must be redirected to an error page.
The hosts file cannot be used, because the user can modify it arbitrarily.
Here is a list of the approaches I have found so far.
(I prefer blocking through hooking rather than developing a network driver.)
Windows Filtering Platform (see the sketch below)
Sending a FIN packet to the server using WinPcap
DNS query control
The target browsers are IE, Chrome, and Edge.
What are the pros and cons of the above list?
Or is there any possible way other than those listed above?
Development time is short, so there is not much time for learning.
If you know of APIs, documentation, or code snippets I can refer to, please respond.
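For example, my current understanding of the Windows Filtering Platform option is roughly the following (an untested sketch that blocks one hard-coded IP; it needs administrator rights, and I would still have to resolve blocked domains to addresses):

    // Untested sketch: block outbound connections to one IPv4 address with the
    // Windows Filtering Platform. Requires administrator rights; the address
    // below is a documentation placeholder (203.0.113.10).
    #include <windows.h>
    #include <fwpmu.h>
    #pragma comment(lib, "fwpuclnt.lib")

    int main() {
        HANDLE engine = nullptr;
        FWPM_SESSION0 session = {};
        session.flags = FWPM_SESSION_FLAG_DYNAMIC;   // filters vanish when the session closes
        if (FwpmEngineOpen0(nullptr, RPC_C_AUTHN_WINNT, nullptr, &session, &engine) != ERROR_SUCCESS)
            return 1;

        FWPM_FILTER_CONDITION0 cond = {};
        cond.fieldKey = FWPM_CONDITION_IP_REMOTE_ADDRESS;
        cond.matchType = FWP_MATCH_EQUAL;
        cond.conditionValue.type = FWP_UINT32;
        cond.conditionValue.uint32 = 0xCB00710A;     // 203.0.113.10, host byte order

        FWPM_FILTER0 filter = {};
        filter.layerKey = FWPM_LAYER_ALE_AUTH_CONNECT_V4;
        filter.displayData.name = const_cast<wchar_t*>(L"Block unauthorized site");
        filter.action.type = FWP_ACTION_BLOCK;
        filter.numFilterConditions = 1;
        filter.filterCondition = &cond;

        UINT64 filterId = 0;
        FwpmFilterAdd0(engine, &filter, nullptr, &filterId);

        Sleep(INFINITE);                             // keep the dynamic session (and the block) alive
        FwpmEngineClose0(engine);
        return 0;
    }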
Thanks.

Related

DLL injection for browser alone

I want to be able to type www.mydomain.com into my web browser but have the actual traffic go to something.mydomain.com. I thought I could maybe inject a DLL into the browser process (firefox.exe). I tried some methods like hooking and DLL injection using CreateRemoteThread, but since I'm a newbie, especially when it comes to C++ or assembly-level languages, I couldn't understand much of it. The ones I could understand are no longer compatible with Win 7 or higher. Could someone help me by pointing me in the right direction?
All I want is to know how to intercept/manipulate an outgoing URL request from the browser. I found that TCP/IP first creates a socket using the socket() function and then calls connect(). Is there a way to intercept that?
I want this to be easy, simple, and compatible with Windows XP through 10. If it's not easy, I'm okay with building different code for different versions. If the code is cross-platform, that would be even more awesome.
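For reference, my understanding is that hooking connect() from an injected DLL is usually done with an API-hooking library such as Microsoft Detours; this untested sketch is roughly what I mean (the hook just forwards the call for now):

    // Untested sketch: hook connect() with Microsoft Detours from a DLL that has
    // already been injected into the browser process (e.g. via CreateRemoteThread).
    // Link against detours.lib and ws2_32.lib.
    #include <winsock2.h>
    #include <windows.h>
    #include <detours.h>
    #pragma comment(lib, "ws2_32.lib")

    static int (WINAPI* TrueConnect)(SOCKET, const sockaddr*, int) = connect;

    int WINAPI HookedConnect(SOCKET s, const sockaddr* name, int namelen)
    {
        // Inspect (or rewrite) the destination address here before forwarding.
        return TrueConnect(s, name, namelen);
    }

    BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID&)TrueConnect, HookedConnect);
            DetourTransactionCommit();
        } else if (reason == DLL_PROCESS_DETACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourDetach(&(PVOID&)TrueConnect, HookedConnect);
            DetourTransactionCommit();
        }
        return TRUE;
    }

Note that by the time connect() is called the hostname has already been resolved, so the hook only sees an IP address; rewriting a domain name would have to happen at the DNS lookup instead.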
I don't think what you want to do (or, more precisely, the way you want to do it) is possible without being the owner of the domain and setting up an HTTP redirect on the server.
Modifying the hosts file or setting up your own DNS server and having the machine or its router use that to resolve DNS queries is really the only way but...
Depending on the browser, this may not be possible. Current versions of Firefox and Chrome implement DNS prefetching, which essentially means they resolve and cache DNS entries ahead of time for faster page load times.
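For example, assuming something.mydomain.com resolved to the placeholder address 203.0.113.5, the hosts entry would be:

    203.0.113.5    www.mydomain.com

The browser would then connect to that address whenever www.mydomain.com is typed, although the server still sees www.mydomain.com in the Host header.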

Socket file transfer from webserver

So, I have a desktop application and I want it to be able to check a website for new versions of itself. I am completely new to sockets (Winsock and Berkeley), so before I invest time learning network programming, I want some guidance to point me in the right direction.
The application is pretty much going to download an installation file from its website. The connection will not be secure, as it doesn't matter whether users can see the traffic. Also, the application's website will most likely be hosted at GoDaddy (in case somebody wants to be specific).
So my questions are: What technology should I be looking into, FTP, TCP, or UDP? What are some things I should keep in mind regarding client/server communication when it comes to file transfer with a remote server? Does anybody know if GoDaddy allows this type of thing?
PS. If you think this might be a little too much to accomplish without enough theoretical/technical background, then please don't hesitate to recommend a book.
Use HTTP, and use a library to download a URL to a file. This should take 1-5 lines of code.
Why build a file transfer protocol yourself using sockets? Everything you need is built into HTTP, and there are pre-made clients and servers available.
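On Windows, for instance, a single call to URLDownloadToFile from urlmon does the whole job (untested sketch; the URL and the destination path are placeholders):

    #include <windows.h>
    #include <urlmon.h>
    #pragma comment(lib, "urlmon.lib")

    int main() {
        // Download the installer over plain HTTP to a local file.
        HRESULT hr = URLDownloadToFileW(
            nullptr,
            L"http://www.example.com/downloads/setup.exe",  // placeholder URL
            L"C:\\Temp\\setup.exe",                         // placeholder local path
            0,
            nullptr);
        return SUCCEEDED(hr) ? 0 : 1;
    }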

Socket Server vs. Standard Servers

I'm working on a project, a large part of which is server-side software. I started programming it in C++ using the sockets library, but one of my partners suggested that we use a standard server like IIS, Apache, or nginx.
Which is better in the long run? When I program it in C++, I have direct access to the raw requests, whereas with a standard server I need to use a scripting language to handle the requests. In any case, which is the better option, and why?
Also, when it comes to security against things like DDoS attacks, do the standard servers already have protection built in? If I wanted to implement it in my socket server, what would be the best way?
"Server side software" could mean lots of different things, for example this could be a trivial app which "echoes" everything back on a specific port, to a telnet/ftp server to a webserver running lots of "services".
So where in this gamut of possibilities does your particular application lie? Without further information, it's difficult to make any suggestions, but let's see..
1. Web services, i.e. your "server-side" requirement is to handle individual requests and respond after running some business logic. Typically communication is via SOAP/XML, and this is ideal if you have web-based clients (though nothing prevents you from accessing these services via standalone clients). Typically you host these on web servers, as you mentioned, and often they are easiest written in Java (I've yet to come across one that needed to be written in C++!).
2. Simple web site - slightly different from the above; responds to HTTP GET/POST requests and serves up static or dynamic content (I'm guessing this is not what you're after!).
3. Standalone server which responds to something specific; here you'd have to implement your own "messaging"/protocols etc., and the server will carry out a specific function on an incoming request and potentially send responses back. The key thing here is that the server does something specific and is not a generic container (at which point option 1 makes more sense!).
So where does your application lie? If 1 or 2, use Java or some scripting language (such as Perl/ASP/JSP etc.). If 3, you can certainly use C++, and if you do, use a suitable abstraction, such as boost::asio and Google Protocol Buffers - save yourself a lot of headache...
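To make that concrete, a standalone server built on boost::asio can be very small (untested sketch; the port and the one-line protocol are placeholders):

    #include <boost/asio.hpp>
    #include <string>

    using boost::asio::ip::tcp;

    // Accept one connection at a time, read one line per request, send a fixed reply.
    int main() {
        boost::asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));   // placeholder port

        for (;;) {
            tcp::socket socket(io);
            acceptor.accept(socket);

            boost::asio::streambuf request;
            boost::asio::read_until(socket, request, '\n');           // one line = one request

            std::string reply = "ACK\n";                              // business logic would go here
            boost::asio::write(socket, boost::asio::buffer(reply));
        }
    }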
With regards to security, of course bugs and security holes are found all the time; however, the good thing with these open-source projects is that the community will tackle and fix them. Let's just say you'll be safer using them than your own custom hand-rolled implementation; the likelihood that you'll be able to address all the issues they have encountered in the years they've been around is very small (no disrespect to your abilities!).
EDIT: Now that there's a little more info, here is one possible approach (this is what I've done in the past, and I've used Java most of the way...).
The client-facing server should be something reliable, especially if it's over the internet. Here I would use a proven product - something like Apache or IIS (depending on which technologies you have available). IMHO, I would go for JBoss AS - a really powerful and easily customisable piece of kit that integrates really nicely with lots of different things (all Java, of course!). You could then have a simple bit of Java which delegates to your actual server processes that do the work...
For the server processes you can use C++ if that's what you are comfortable with.
There is one key bit which I left out, and that is how the two parts (the client-facing server and the server processes) talk to each other. This is where you should look at an open-source messaging product (an even higher-level abstraction than asio or Protocol Buffers). Here I would look at something like ZeroMQ or Red Hat Messaging (both are MQ-style messaging products). The great advantage of this type of "messaging bus" is that there is no tight coupling between your servers; with your own hand-rolled implementation you'll be writing lots of boilerplate to get the interaction to work just right, whereas with something like MQ you'll have multi-platform communication without having to get into the details... You will save yourself a lot of time and bother if you elect to use something like that. (By the way, there are other messaging products out there, and some are easier to use - such as Tibco RV or EMS - however, they are commercial products and licenses will cost a lot of money!)
With a messaging solution your servers become trivial, as they simply handle incoming messages and send messages back out again, and you can focus on the business logic...
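To illustrate, the back-end ("server process") side of such a messaging bus, using ZeroMQ's plain C API, might look roughly like this (untested sketch; the port and the reply are placeholders):

    #include <zmq.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        // Reply socket: the back-end answers whatever requests arrive on the bus.
        void* ctx = zmq_ctx_new();
        void* responder = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(responder, "tcp://*:5555");          // placeholder port

        for (;;) {
            char request[256] = {};
            zmq_recv(responder, request, sizeof(request) - 1, 0);
            std::printf("got request: %s\n", request);

            const char* reply = "OK";                 // business logic would go here
            zmq_send(responder, reply, std::strlen(reply), 0);
        }

        zmq_close(responder);
        zmq_ctx_destroy(ctx);
        return 0;
    }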
my two pennies... :)
If you opt for the first solution in Nim's list (web services), I would suggest you have a look at WSO2's Web Services Framework for C++, Apache Axis C++, and the Axis2/C web services framework (if you are not restricted to C++). Web services might be the best solution for your requirement, as you can quickly build them and use them either as processing or proxy modules on the server side of your system.

Wait for a certain website to be accessed

My objective is to have an event that is triggered when a website is accessed.
Maybe through the window title, or the text in the window, or maybe even by reading the URL.
As of now I am aware of FindWindow(class, title);
However, all my attempts to put this call into a loop whose exit condition is the window appearing have been fruitless.
Any assistance would be very helpful.
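For clarity, the kind of loop I have been attempting looks roughly like this (untested; the window title is a placeholder, and FindWindow needs an exact title match):

    #include <windows.h>

    int main() {
        // Poll until a window with this exact title exists. Browser window titles
        // usually contain the page title plus the browser name.
        const wchar_t* title = L"Example Domain - Google Chrome";  // placeholder
        HWND hwnd = nullptr;
        while ((hwnd = FindWindowW(nullptr, title)) == nullptr) {
            Sleep(500);   // avoid busy-waiting
        }
        // The window appeared - trigger the event here.
        return 0;
    }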
That's not possible. At least if I understood you correctly.
You want to register a callback when ANY software on your machine accesses a specific website?
Just imagine a browser uses SSL; there is no way to detect this by listening to the traffic or anything similar.
However, if you want to be notified about all connections to a specific IP, then you could use sniffing mechanisms of your kernel or even redirect all traffic to this IP to a proxy you have set up with iptables or similar.
Windows has a sniffing library called WinPcap; on Linux you could use tcpdump.
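A minimal WinPcap/libpcap capture loop that watches traffic to and from a single address might look like this (untested sketch; the device name and the IP are placeholders, and on Windows the device name comes from pcap_findalldevs):

    #include <pcap.h>
    #include <cstdio>

    // Called once per captured packet that matches the filter.
    static void on_packet(u_char*, const struct pcap_pkthdr* header, const u_char*) {
        std::printf("packet to/from the watched host, %u bytes\n", header->len);
    }

    int main() {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t* handle = pcap_open_live("eth0", 65536, 1, 1000, errbuf);  // placeholder device
        if (!handle) { std::fprintf(stderr, "%s\n", errbuf); return 1; }

        bpf_program prog;
        pcap_compile(handle, &prog, "host 203.0.113.10", 1, 0);           // placeholder IP
        pcap_setfilter(handle, &prog);

        pcap_loop(handle, -1, on_packet, nullptr);   // runs until interrupted
        pcap_close(handle);
        return 0;
    }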
Though, more information about your problem would be nice.
Looking for window titles can be a bit problematic. I don't know how much control you have over the desktop, but you might consider building an addon for Firefox (or the equivalent in IE) to look for this particular site.
https://developer.mozilla.org/En/Extensions/Firefox
You might also consider building a simple local proxy server (depending on what you are doing) that looks for this site and performs some action. You would have to make sure all the browsers on the machine point to this local proxy to get it working correctly. See the link below for some discussion on a custom proxy server:
How to create a simple proxy in C#?

How to keep a C++ realtime server application with a modern web client interface?

I develop an industrial client/server application (C++) with strong real-time requirements.
I feel it is time to change the look of the client interface - which is developed in MFC - but I am wondering which would be the right choice.
If I go for a web client, is there any way to exchange data between C++ and JavaScript other than AJAX <-> web service <-> COM?
Requirements for the web client are: quick status refreshes, user commands, and tables.
My team had to make that same decision a few months ago...
The cool thing about making it a web application would be that it would be very easy to modify later on. Even the user of the interface (with a little know-how) could modify it to suit his/her needs. Custom software becomes just that much easier.
We went with a web interface, and AJAX seems the way to go; it was quite responsive.
On the other hand, depending on how strong your real time requirements are, it might prove difficult. We had the challenge of plotting real time data through a browser, we ended up going with a firefox plugin to draw the plot. If you're simply trying to display real time text data, it shouldn't be as big an issue.
Run some tests for your specific application and see what it looks like.
Something else to consider: if you are having a web page be an interface to your server, keep in mind you will need to figure out a way to update one client when another changes the state of the server, if you plan on allowing multiple interfaces to your server.
I usually build my applications in two parts:
Have the real heavy-duty application be CLI-only. The protocol used is usually text-based, composed of requests and answers.
Wrap a GUI around it as another process that talks to the CLI back-end.
The web interface is then just another GUI to wrap around it. It is also much easier to wrap a REST/JSON-based API around the CLI interface (just automatically translate the messages).
Debugging is also quite easy, since you can just dump the requests between the two components and reproduce bugs much more easily.
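As a made-up illustration (the command names are invented), a request/answer pair on the CLI back-end and its REST/JSON translation could look like:

    GUI -> back-end:   GET TEMP
    back-end -> GUI:   OK 42.5

    web client:        GET /api/temp   ->   {"temp": 42.5}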
Write an HTTP server into your server to handle the AJAX traffic. If you don't want to serve files, run your server on a non-standard port (e.g. 8081) and use a regular web server for the actual web page delivery. Then have your AJAX engine communicate with the server on the Bizarro port instead of port 80.
But it's not that hard to write the file server part, also. If you do that, you also get to generate web pages on-the-fly with your data pre-filled, if you want.
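A bare-bones version of that AJAX endpoint, written directly against Winsock, could look like this (untested sketch; the port number and the JSON payload are placeholders, and the request itself is not parsed):

    #include <winsock2.h>
    #include <string>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8081);                         // the "Bizarro" port
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, SOMAXCONN);

        for (;;) {
            SOCKET client = accept(listener, nullptr, nullptr);
            char request[2048];
            recv(client, request, sizeof(request), 0);       // request details ignored here

            // Answer every request with a fixed JSON status blob.
            std::string body = "{\"status\":\"running\",\"value\":42}";
            std::string response =
                "HTTP/1.1 200 OK\r\n"
                "Content-Type: application/json\r\n"
                "Content-Length: " + std::to_string(body.size()) + "\r\n"
                "Connection: close\r\n\r\n" + body;
            send(client, response.c_str(), (int)response.size(), 0);
            closesocket(client);
        }
    }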
Google Desktop Search does this now. When I search my desktop for 'foobar', the URL that opens is this:
http://127.0.0.1:4664/search?q=foobar&flags=68&num=10
In this case, the 4664 is the Bizarro port. (GoogleDesktop serves all the data here; it only uses the Bizarro port to avoid conflicts with any web server I might be running.)
You may want to consider where your data lives. If your application feeds a back-end database, you could write a web app and leave your C++ code intact -- the web application would be independent, offering up pages to web users and talking directly to the database. In this case you have as many options as you have indicated, and more.