I am no expert in network programming, although I do have some knowledge of Winsock. For any experts out there: is there a way I can capture data at the socket coming from an application on my machine and do something with it? For example, I send a message via MSN, but I want to capture it from a custom application before it actually gets sent.
Thanks.
You can certainly capture the packets. Tools like Wireshark are proof of that (have a look at the WinPCap library). Just keep in mind that you are capturing what an application sends, so if the application sends encrypted data using SSL/TLS or similar, that is what you are going to get. You won't be able to decrypt and view the original data without the security keys used.
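For the capture side, a minimal sketch with the pcap API (libpcap; WinPcap exposes the same interface on Windows) might look like the following. The device name and the port-1863 filter are placeholders for illustration, not something your setup is guaranteed to use:

#include <pcap.h>
#include <cstdio>

// Called once per captured packet; 'hdr->caplen' bytes of raw frame data
// were captured.
static void on_packet(u_char *, const struct pcap_pkthdr *hdr, const u_char *)
{
    std::printf("captured %u bytes\n", hdr->caplen);
}

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];
    // "eth0" is a placeholder; on Windows use the WinPcap device name instead.
    pcap_t *handle = pcap_open_live("eth0", 65535, 1 /* promiscuous */, 1000, errbuf);
    if (!handle) {
        std::fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    // Optional BPF filter so only the traffic of interest is delivered
    // (1863 was the classic MSN Messenger port).
    struct bpf_program prog;
    if (pcap_compile(handle, &prog, "tcp port 1863", 1, 0) == 0)
        pcap_setfilter(handle, &prog);

    pcap_loop(handle, -1, on_packet, nullptr);   // runs until interrupted
    pcap_close(handle);
}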
Altering and/or discarding packets, on the other hand, is much harder, requiring much lower level access to the system, but it is possible (see WinDivert, for example).
I am trying to write a simple client-server application where a client can send or broadcast a message to one or all clients in the network. The server stores all IP addresses that are connected to it, and broadcasts a new IP if a new client connects.
I'm not quite sure how to implement the sending of a single message to another client. Would I just have to send a TCP message to the server and put the desired recipient as data in the TCP layer which is then extracted by the server so it knows where to send it?
I also want to add encryption to the messages which would then no longer allow the server to read the data, so I'm not sure how to solve that!?
I am using C++ and Qt5 for the implementation.
"I'm not quite sure how to implement the sending of a single message to another client. Would I just have to send a TCP message to the server and put the desired recipient as data in the TCP layer which is then extracted by the server so it knows where to send it?"
In an ideal world, the clients could talk to each other directly, since they could find out the IP addresses of the other clients from the server (either via its broadcast or by requesting a list of IP addresses from the server). If all of your clients are running on the same LAN, that can work well.
Assuming you want your system to run on the general Internet, however, that won't work so well, since many/most clients will be behind various firewalls and so they won't accept incoming TCP connections. (There are some ways around that, but they require a very advanced understanding of how TCP works, and even then they only work in certain situations, so I don't recommend attempting them in a first project)
Therefore, for a reliable client->client messaging mechanism, your best bet is indeed to have the sending client send the message to the server, along with some short header that tells the server which other client(s) the message ought to be forwarded to. My own client/server messaging system works along these lines, and I've found it to work well.
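To make that concrete, here is a rough sketch of the server-side forwarding step in Qt. The one-line "recipient|payload" wire format is purely an assumption for illustration; any framing that lets the server pull the recipient out of a short header will do:

#include <QByteArray>
#include <QHash>
#include <QString>
#include <QTcpSocket>

// Clients currently registered with the server, keyed by whatever ID they
// logged in with (hypothetical bookkeeping, filled in elsewhere).
QHash<QString, QTcpSocket*> g_clients;

// Called when one full line (already stripped of its trailing newline) has
// arrived from 'from'. Expected (assumed) format: "recipient|payload".
void routeLine(QTcpSocket *from, const QByteArray &line)
{
    const int sep = line.indexOf('|');
    if (sep < 0)
        return;                                          // malformed, ignore

    const QString recipient = QString::fromUtf8(line.left(sep));
    const QByteArray payload = line.mid(sep + 1);        // opaque to the server

    QTcpSocket *target = g_clients.value(recipient, nullptr);
    if (target)
        target->write(payload + '\n');                   // forward as-is
    else
        from->write("ERR unknown recipient\n");
}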
"I also want to add encryption to the messages which would then no longer allow the server to read the data, so I'm not sure how to solve that!?"
Don't worry about adding encryption until you've got the basic non-encrypted functionality working first, since encryption will make things much more difficult to debug. That said, it's perfectly possible to pass encrypted/opaque data to the server, as long as the aforementioned header data (which tells the server where to forward the message to) is not encrypted, since the server will need to be able to read the header to know what to do with the encrypted payload. The trickier part will be when the receiving client gets the forwarded data from the server -- how will it know how to decrypt it? You'll need some external mechanism for clients to share keys (either symmetric keys or public/private keypairs): if you sent the encryption keys themselves through the server, there wouldn't be much point in encrypting anything, since the server could retain a copy of any keys it forwarded and use them to decrypt if it wanted to.
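When you do get to the encryption step, the "opaque payload, readable header" idea can be sketched roughly as below. It assumes libsodium purely as an example library, and a shared key that the clients exchanged out of band:

#include <sodium.h>
#include <cstdio>
#include <string>
#include <vector>

// Encrypt just the payload; returns nonce || ciphertext so the receiver can
// decrypt with crypto_secretbox_open_easy() and the same shared key.
std::vector<unsigned char> sealPayload(const std::string &plaintext,
                                       const unsigned char *key)
{
    std::vector<unsigned char> out(crypto_secretbox_NONCEBYTES +
                                   plaintext.size() + crypto_secretbox_MACBYTES);
    randombytes_buf(out.data(), crypto_secretbox_NONCEBYTES);          // fresh nonce
    crypto_secretbox_easy(out.data() + crypto_secretbox_NONCEBYTES,
                          reinterpret_cast<const unsigned char*>(plaintext.data()),
                          plaintext.size(),
                          out.data(), key);
    return out;
}

int main()
{
    if (sodium_init() < 0)
        return 1;

    unsigned char key[crypto_secretbox_KEYBYTES];
    randombytes_buf(key, sizeof key);   // stand-in for the key shared out of band

    // The routing header stays in the clear so the server can read it; only
    // the body after the separator is opaque.
    std::string header = "to:bob|";
    std::vector<unsigned char> body = sealPayload("hello bob", key);
    std::printf("header %zu bytes (plaintext), body %zu bytes (encrypted)\n",
                header.size(), body.size());
}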
I'm trying to build an application which allows a user to transfer files/directories from their computer to another computer over the LAN (TCP/IP), whenever they want, without any intervention from the receiving computer's user.
To deal with this, I think sockets would be the best alternative, because if I used FTP and left the receiving computer's FTP port open continually, that would be a vulnerability.
Is the use of sockets the best choice?
If yes, how do I send directories and non-text files through sockets?
When it comes to security, it's really a matter of "whatever you do, it will only be as safe as the keeping of the password/credentials needed to log in". Using FTPS or SSH protocols will encrypt the traffic between the machines, ensuring that nobody outside can "see" what the files are (or the passwords, etc.). SSH also has features to identify if the remote machine suddenly changes, so you can detect if somebody has introduced a "man in the middle" attack (that is, someone pretending to be the machine you are actually sending to).
As for sending non-text files, it shouldn't really be any different from sending text files in any case I'm aware of. Of course, if you use FTP, you need to set the protocol to "binary mode" before sending binary files, as some systems will otherwise "modify" the content (e.g. translating CR, LF and CRLF sequences to match the target - and a JPG image will certainly look quite weird when all the bytes with value 0x0A have been replaced with 0x0D 0x0A in the file...).
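If you go the raw-socket route rather than FTP, sending a binary file is just a matter of opening it in binary mode and pushing its bytes through the connection. A sketch with POSIX sockets follows (Winsock would use the same loop); the length-prefix framing is an assumption of this example, not something TCP gives you:

#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

// Keep calling send() until the whole buffer has gone out (send() may write less).
bool sendAll(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = send(fd, buf, len, 0);
        if (n <= 0)
            return false;                 // error or connection closed
        buf += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Send one file over an already-connected socket: 4-byte big-endian length,
// then the raw bytes. Opening in binary mode avoids any CR/LF translation.
bool sendFile(int fd, const char *path)
{
    std::ifstream in(path, std::ios::binary);
    if (!in)
        return false;

    std::vector<char> bytes((std::istreambuf_iterator<char>(in)),
                            std::istreambuf_iterator<char>());

    uint32_t size = htonl(static_cast<uint32_t>(bytes.size()));
    return sendAll(fd, reinterpret_cast<const char*>(&size), sizeof size) &&
           sendAll(fd, bytes.data(), bytes.size());
}

Directories are then just a matter of walking the tree on the sender and sending each file's relative path ahead of its contents, so the receiver knows where to recreate it.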
Of course, you could also set up a web server with suitable software on the receiving machine and use the HTTP/HTTPS protocol to upload files - with or without password protection (and with HTTPS, the password is safe as long as nobody outside the group of trust has access to the actual sending/receiving machine, as the traffic is encrypted).
There are literally several hundred other solutions. Without understanding MUCH more about exactly what problem you are trying to solve, it's hard to give very concrete suggestions.
You are going to need some sort of server on the receiving machine, as there is not normally any process listening and writing what it receives into the filesystem. Have a practice with netcat (also known as nc) before you write too much code.
I'm working on a project that has Jabber as its communication platform.
The thing is that I need clients (a lot of clients) to communicate with each other, not only for signalling but also to exchange data between them.
Imagine that client A has 3 services available. Client B could request that A start sending it info from each service (like a streaming service) until client B tells A to stop the services.
These services could send just one character at a 100 ms interval, or 1000 characters at a 100 ms interval, or even send some data only when it's needed.
When the info sent to B arrives, B has to know which service it corresponds to, what the action is, and the values (for example), so I'm using JSON over Jabber.
My problem is that I'm wasting a lot of bandwidth with the Jabber XMPP protocol just to send a message with a body like:
{"s":"x", "x":5} //each 100ms (5 represents any number)
I really don't want to have a parallel communication channel (like direct sockets), because Jabber already has all of that implemented, is easily scalable, avoids firewall problems, and sometimes I use HTTP communications (I'm using BOSH in this case).
I know that there is some compression I can do, but I'm wondering if you can recommend something else that wouldn't have such an amount of XML behind my message while still using Jabber.
Thanks a lot for your help.
Best Regards,
Eduardo
It sounds like, except for your significant data transfer, XMPP suits your application well.
As you probably know, XMPP was never designed or intended to be used as a big pipe for data transfer. Most applications that involve significant data transfer, such as file transfers and voice/video, use XMPP just for negotiation of a separate "out of band" stream. You say this might cause problems for you because of firewalls and web clients.
If your application is mostly transferring text, then you really should try out compression... it offers significant savings on bandwidth, if that's your most constrained resource. The downside is that it will take more client and server memory (around 300KB by default, but that can be reduced with marginal compression loss).
Alternatively you can look at tunneling your data base64-encoded using In-Band Bytestreams. I don't have your sample data, or know how you are wrapping them for transport, and this could come off worse or better. I would say it would come off better if you stripped out your JSON and made it into a more efficient binary format instead. Base64 data will not compress so well, and is roughly 33% larger than the raw data. The savings would be in being able to strip out JSON and any other extraneous wrappings, while keeping the data within the XMPP stream.
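As a rough illustration of "strip out the JSON", the sample body could be packed into a handful of bytes before it goes into the stream; the exact layout below is only an example:

#include <arpa/inet.h>
#include <array>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Pack one sample message as: 1 byte of service id + 4-byte value in network
// byte order. 5 bytes on the wire instead of the ~16-byte JSON text.
std::array<unsigned char, 5> packSample(char service, int32_t value)
{
    std::array<unsigned char, 5> buf{};
    buf[0] = static_cast<unsigned char>(service);
    const uint32_t be = htonl(static_cast<uint32_t>(value));
    std::memcpy(buf.data() + 1, &be, sizeof be);
    return buf;
}

int main()
{
    std::array<unsigned char, 5> msg = packSample('x', 5);
    std::printf("packed %zu bytes\n", msg.size());   // would still be base64'd for IBB
}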
In the end, scaling most applications is hard, whichever technologies you use. It primarily requires insight: you shouldn't change anything without testing it first, and you should be testing beforehand to find out what you ought to change. You should be analyzing your system for the primary bottlenecks (is it really the client's bandwidth?). Rarely in my experience has XML itself been the direct bottleneck. Ultimately, though, all these things are unique to your application, so it's not easy to give generic advice at scale.
No, XML is not trash. It's human-readable, very extensible, and can be compressed extremely well.
XMPP supports stream compression, and this stream compression (mostly zlib) works extremely well according to all my tests. So if it's important for you to optimize the number of bytes you send over the wire, or you are on low bandwidth, then use stream compression when you are on sockets. When you are on BOSH, you have to either use a server which supports HTTP compression or put a proxy in between to enable compression. But keep in mind that BOSH also has lots of overhead with all the HTTP headers.
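To get a feel for what zlib does with this kind of repetitive traffic, here is a quick stand-alone test using zlib's one-shot compress(). A real XMPP stream compressor keeps its dictionary across stanzas, so the savings on a live stream are usually at least this good:

#include <zlib.h>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    // 100 copies of a stanza roughly like the one in the question.
    std::string stream;
    for (int i = 0; i < 100; ++i)
        stream += "<message to='b@example.com'><body>{\"s\":\"x\", \"x\":5}</body></message>";

    std::vector<unsigned char> out(compressBound(stream.size()));
    uLongf outLen = out.size();
    if (compress(out.data(), &outLen,
                 reinterpret_cast<const Bytef*>(stream.data()), stream.size()) != Z_OK)
        return 1;

    std::printf("raw %zu bytes -> compressed %lu bytes\n", stream.size(), outLen);
}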
I have a certain application running on my computer. The same application can run on many computers on a LAN or in different places in the world. I want to communicate between them, so I basically want a P2P system. But I will always know which computers (specific IP addresses) will be peers. I just want peers to have join and leave functionality. The single most important aim is communication speed and the time required. I assume simple UDP multicast (if anything like that exists) between peers will be the fastest possible solution. I don't want to retransmit messages even if they are lost. Should I use an existing P2P library, e.g. libjingle, etc., or just create some basic framework from scratch, as my needs are pretty basic?
I think you're missing the point of UDP. It doesn't save time in the sense that a message gets to its destination faster; it's just that you're posting the message and don't care whether it arrives safely on the other side. On a WAN it will probably not arrive on the other side. UDP across networks is problematic, as it can be thrown out by any router on the way that is tight on bandwidth - there's no guarantee of delivery.
I wouldn't suggest using UDP outside the topology under your control.
As to P2P vs. directed sockets - the question is what it is that you need to move around. Do you need bi/multidirectional communication between all the peers, or are you talking to a single server from all the nodes?
You mentioned multicast - that would mean that you have some centralized source of data that transmits information and all the rest listen; in this case there's no benefit to P2P, and multicast, as a UDP protocol, may not work well across multiple networks. But you can use TCP connections to each of the nodes and "multicast" on your own, not through IGMP. You can (and should) use threading and non-blocking sockets if you're concerned about sending blocking you, and of course you can use QoS settings to "ask" routers to rush your traffic through.
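The "multicast on your own" part is little more than a loop over the connected peer sockets. A sketch with plain POSIX sockets follows; the threading or non-blocking refinements mentioned above would sit on top of this:

#include <sys/socket.h>
#include <cstddef>
#include <vector>

// Keep calling send() until the whole buffer has gone out.
static bool sendAll(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = send(fd, buf, len, 0);
        if (n <= 0)
            return false;
        buf += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// "Multicast" a message by writing it to every connected peer in turn;
// returns how many peers it was delivered to.
size_t broadcastToPeers(const std::vector<int> &peerSockets,
                        const char *msg, size_t len)
{
    size_t delivered = 0;
    for (size_t i = 0; i < peerSockets.size(); ++i)
        if (sendAll(peerSockets[i], msg, len))
            ++delivered;
    return delivered;
}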
You can use ZeroMQ to support all your network communication:
ZeroMQ is a simple library that encapsulates TCP and UDP for high-level communication.
For P2P you can use the different modes of 0mq:
PGM/EPGM mode to discover the members of the P2P network on your LAN (it uses multicast)
REQ/REP mode to ask a question of one member (see the sketch below)
PUSH/PULL mode to duplicate one resource on the net
Publish/Subscribe mode to transmit a file to all subscribers
Warning: ZeroMQ is hard to install on Windows...
And for the HMI, use Green Shoes?
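For what it's worth, the REQ/REP mode from the list above looks roughly like this with the ZeroMQ C API; the endpoint address is a placeholder:

#include <zmq.h>
#include <cstdio>
#include <cstring>

int main(int argc, char **argv)
{
    void *ctx = zmq_ctx_new();
    const bool answering = (argc > 1 && std::strcmp(argv[1], "rep") == 0);

    if (answering) {                                   // the member being asked
        void *rep = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(rep, "tcp://*:5555");                 // placeholder endpoint
        char q[256];
        int n = zmq_recv(rep, q, sizeof q - 1, 0);
        if (n > static_cast<int>(sizeof q) - 1) n = sizeof q - 1;   // may be truncated
        if (n >= 0) { q[n] = '\0'; std::printf("question: %s\n", q); }
        zmq_send(rep, "pong", 4, 0);
        zmq_close(rep);
    } else {                                           // the member asking
        void *req = zmq_socket(ctx, ZMQ_REQ);
        zmq_connect(req, "tcp://127.0.0.1:5555");
        zmq_send(req, "ping", 4, 0);
        char a[256];
        int n = zmq_recv(req, a, sizeof a - 1, 0);
        if (n > static_cast<int>(sizeof a) - 1) n = sizeof a - 1;
        if (n >= 0) { a[n] = '\0'; std::printf("answer: %s\n", a); }
        zmq_close(req);
    }
    zmq_ctx_destroy(ctx);
}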
I think you should succeed using multicast. Unfortunately I do not know of any library, but in case you have to do it from scratch, take a look at this:
http://www.tldp.org/HOWTO/Multicast-HOWTO.html
Good luck :-)
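In case it helps as a starting point, joining a group and receiving one datagram with plain POSIX sockets, along the lines of that HOWTO, looks roughly like this (group address and port are placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdio>
#include <cstring>

int main()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in local;
    std::memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5000);                      // placeholder port
    bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof local);

    // Join the (placeholder) group 239.1.2.3 on whatever interface the kernel picks.
    ip_mreq mreq;
    std::memset(&mreq, 0, sizeof mreq);
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.2.3");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    char buf[1500];
    ssize_t n = recvfrom(fd, buf, sizeof buf - 1, 0, nullptr, nullptr);
    if (n >= 0) {
        buf[n] = '\0';
        std::printf("got: %s\n", buf);
    }
}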
I want to set up a statistics monitoring platform to watch a specific service, but I'm not quite sure how to go about it. Processing the intercepted data isn't my concern, just how to go about intercepting it. One idea was to set up a proxy between the client application and the service so that all TCP traffic goes first to my proxy; the proxy would then delegate the intercepted messages to an awaiting thread/fork to pass the message on and receive the results. The other was to try and sniff the traffic between client and service.
My primary goal is to avoid any serious loss in transmission speed between client and service while still capturing 100% of the communications between them.
Environment: Ubuntu 8.04
Language: C/C++
In the background I was thinking of using a SQLite DB running completely in memory or a 20-25MB memcache daemon slaved to my process.
Update:
Specifically, I am trying to track the usage of keys for a memcache daemon, storing the # of set/get successes/failures per key. The idea is that most keys have some sort of separating character [`|_-#] to create a sort of namespace. The plan is to step in between the daemon and the client, split the keys apart by a configured separator, and record statistics on them.
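The splitting and counting step itself is straightforward once you have the key in hand. A sketch that treats |, _, - and # as the separator set and counts per namespace prefix (the key names are made up for illustration):

#include <cstdio>
#include <map>
#include <string>
#include <vector>

static const std::string kSeparators = "|_-#";   // the configured separator set

// Break a memcached key into its namespace parts.
std::vector<std::string> splitKey(const std::string &key)
{
    std::vector<std::string> parts;
    std::string::size_type start = 0, pos;
    while ((pos = key.find_first_of(kSeparators, start)) != std::string::npos) {
        parts.push_back(key.substr(start, pos - start));
        start = pos + 1;
    }
    parts.push_back(key.substr(start));
    return parts;
}

int main()
{
    std::map<std::string, unsigned long> getCount;
    const char *keys[] = { "user|42|profile", "user|43|profile", "session#abc" };

    for (size_t i = 0; i < sizeof keys / sizeof keys[0]; ++i)
        ++getCount[splitKey(keys[i])[0]];        // count by first namespace part

    for (std::map<std::string, unsigned long>::const_iterator it = getCount.begin();
         it != getCount.end(); ++it)
        std::printf("%s: %lu gets\n", it->first.c_str(), it->second);
}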
Exactly what are you trying to track? If you want a simple count of packets or bytes, or basic header information, then iptables will record that for you:
iptables -I INPUT -p tcp -d $HOST_IP --dport $HOST_PORT -j LOG $LOG_OPTIONS
If you need more detailed information, look into the iptables ULOG target, which sends each packet to userspace for analysis.
See http://www.netfilter.org for very thorough docs.
If you want to go the sniffer way, it might be easier to use tcpflow instead of tcpdump or libpcap. tcpflow will only output TCP payload so you don't need to care about reassembling the data stream yourself. If you prefer using a library instead of gluing a bunch of programs together you might be interested in libnids.
libnids and tcpflow are also available on other Unix flavours and do not restrict you to just Linux (unlike iptables).
http://www.circlemud.org/~jelson/software/tcpflow/
http://libnids.sourceforge.net/
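If you do try libnids, its canonical usage (based on the sample that ships with the library) has roughly the following shape; the memcached-specific parsing would go where the comment indicates:

#include <nids.h>
#include <cstdio>

// Called by libnids for every TCP stream event on the monitored interface.
static void tcp_callback(struct tcp_stream *ts, void **)
{
    if (ts->nids_state == NIDS_JUST_EST) {
        // Tell libnids we want the reassembled data in both directions.
        ts->server.collect++;
        ts->client.collect++;
        return;
    }
    if (ts->nids_state == NIDS_DATA) {
        // Whichever side produced new bytes this time around.
        struct half_stream *hlf = ts->client.count_new ? &ts->client : &ts->server;
        std::printf("stream data: %d new bytes\n", hlf->count_new);
        // hlf->data points at the reassembled payload; the memcached command
        // parsing and key statistics would go here.
    }
}

int main()
{
    if (!nids_init()) {
        std::fprintf(stderr, "nids_init failed: %s\n", nids_errbuf);
        return 1;
    }
    nids_register_tcp(tcp_callback);
    nids_run();                          // loops forever, invoking the callback
}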
You didn't mention one approach: you could modify memcached or your client to record the statistics you need. This is probably the easiest and cleanest approach.
Between the proxy and the libpcap approach, there are a couple of tradeoffs:
- If you do the packet capture approach, you have to reassemble the TCP streams into something usable yourself. OTOH, if your monitor program gets bogged down, it'll just lose some packets, it won't break the cache. Same if it crashes. You also don't have to reconfigure anything; packet capture is transparent.
- If you do the proxy approach, the kernel handles all the TCP work for you. You'll never lose requests. But if your monitor bogs down, it'll bog down the app. And if your monitor crashes, it'll break caching. You probably will have to reconfigure your app and/or memcached servers so that the connections go through the proxy.
In short, the proxy will probably be easier to code, but deploying it may be a royal pain, and it had better be perfect or it's taking down your caching. Changing the app or memcached seems like the sanest approach to me.
BTW: Have you looked at memcached's built-in statistics reporting? I don't think it's granular enough for what you want, but if you haven't seen it, take a look before doing actual work :-D
iptables provides libipq, a userspace packet queuing library. From the manpage:
Netfilter provides a mechanism for passing packets out of the stack for queueing to userspace, then receiving these packets back into the kernel with a verdict specifying what to do with the packets (such as ACCEPT or DROP). These packets may also be modified in userspace prior to reinjection back into the kernel.
By setting up tailored iptables rules that forward packets to libipq, in addition to specifying the verdict for them, it's possible to do packet inspection for statistics analysis.
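A rough sketch of such a libipq consumer, following the shape of the library's manpage example; the iptables QUEUE rule shown in the comment is only an example rule for memcached's default port:

#include <linux/netfilter.h>
#include <libipq.h>
#include <cstdio>

// Example rule that queues memcached traffic to userspace:
//   iptables -A INPUT -p tcp --dport 11211 -j QUEUE
// Every queued packet must get a verdict back or traffic will stall.

int main()
{
    unsigned char buf[2048];

    struct ipq_handle *h = ipq_create_handle(0, PF_INET);
    if (!h) {
        ipq_perror("ipq_create_handle");
        return 1;
    }
    if (ipq_set_mode(h, IPQ_COPY_PACKET, sizeof buf) < 0) {
        ipq_perror("ipq_set_mode");
        return 1;
    }

    for (;;) {
        if (ipq_read(h, buf, sizeof buf, 0) < 0)
            break;
        if (ipq_message_type(buf) != IPQM_PACKET)
            continue;

        ipq_packet_msg_t *m = ipq_get_packet(buf);
        // m->payload / m->data_len hold the raw IP packet: inspect it, update
        // the statistics, then hand it back to the kernel unchanged.
        ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
    }

    ipq_destroy_handle(h);
}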
Another viable option is to manually sniff packets by means of libpcap or a PF_PACKET socket with socket-filter support.