Server and client on same machine with no loopback - c++

I'm running a server and a client on the same machine (Linux).
How do I force the packets to go through the network (switch) and not through the loopback interface?
Thanks,
Michael

Since you're asking this on a programming site, I'll assume you have source code.
When you create the client-side socket, you can limit it to a specific interface. Usually you don't (you just call connect() without bind()ing it first) and let the OS pick the best outgoing interface, but this is not mandatory.
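For illustration, here is a minimal sketch of bind()ing a client socket to a specific local address before connect() (the addresses and port below are placeholders; use the address assigned to the NIC you want the traffic to leave from):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    // Bind to the address of the interface the traffic should leave from.
    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_port = 0;                         // any local port
    inet_pton(AF_INET, "192.168.1.10", &local.sin_addr);
    if (bind(fd, (sockaddr*)&local, sizeof(local)) < 0) { perror("bind"); return 1; }

    // Connect as usual; the kernel keeps the bound source address.
    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                // placeholder server port
    inet_pton(AF_INET, "192.168.1.20", &peer.sin_addr);
    if (connect(fd, (sockaddr*)&peer, sizeof(peer)) < 0) { perror("connect"); return 1; }

    close(fd);
    return 0;
}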

You can try setting the SO_BINDTODEVICE socket option on both the client and the server sockets and pass it the name of the external NIC as the parameter.
See: http://codingrelic.geekhold.com/2009/10/code-snippet-sobindtodevice.html for an example
I am not sure this is enough - there might be a sanity check in the kernel IP stack that drops packets whose Ethernet source and destination are both local. There might be a sysctl to disable this check, or you can compile your own kernel without the check for this specific test.
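As a rough sketch (assuming the external interface is named eth0 and the process has the privileges SO_BINDTODEVICE requires):

#include <sys/socket.h>
#include <cstring>
#include <cstdio>

// Tie the socket 'fd' to a specific network interface by name.
bool bind_to_device(int fd, const char* ifname) {
    // The option value is the NUL-terminated interface name string.
    if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                   ifname, std::strlen(ifname) + 1) < 0) {
        perror("setsockopt(SO_BINDTODEVICE)");
        return false;
    }
    return true;
}

// Hypothetical usage on both ends: bind_to_device(client_fd, "eth0");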

Maybe you should try connecting via a proxy server?

You can't, unless you have some device out there on a network whose job it is to send the data back to you. Normally, there is nothing that would do that. If you send the data out onto the network, you won't get it back.
If you have set something up to return the data to you, send the data to that, following whatever mechanism it supports.


0MQ - get message ip

First, I want to give thanks for that amazing lib! I love it. A client connects to a server. The server should save the client's IP and do stuff with it later on (I really need the IP). I found this answer: http://lists.zeromq.org/pipermail/zeromq-dev/2010-September/006381.html but I don't understand how to get the IP out of the message (an XREP)... I think I am only able to read the ID, while the IP is managed internally by 0MQ. The second suggestion there is to send the IP as part of the message, but I don't understand how to get the "public" IP. I also found this post: Get TCP address information in ZeroMQ
It says to bind a service to an ephemeral port and get back a full connection endpoint ("tcp://ipaddress:port").
I don't get how this works. Does he mean something like a web-service?
In my opinion, it would be best to get the IP out of 0MQ (it has the IP already). I would even adjust 0MQ for that, if somebody could point me to the place where the IP is saved; I couldn't find it. The socket types are not that important at the moment; I would prefer something REQ-REP like. Thank you!
Summary:
TL;DR: using the ZeroMQ API, you cannot get the IP address of the peer that sent a message.
Explanation:
ZeroMQ does not expose the peer's IP address because it is irrelevant to the message-based communication ZeroMQ is designed for. Even where it is possible for ZeroMQ to obtain the IP address of a client connecting to a server (for example, using the method described here), it is of little use. For a longer explanation, here is how it works inside ZeroMQ, and in any other server implementation.
The server side of a connection does not track connected clients with a hash table mapping IP to client, but by keeping track of connected "sockets" (socket descriptors): when a server accepts a connection (using accept()), it receives from the operating system a socket descriptor to use for communicating with that peer. All the server has to do is keep that descriptor around to read() from and write() to that client. Another client that connects to the server receives another socket descriptor.
To summarize: even if ZeroMQ were able to provide you with the IP of a connected peer, you should not depend on it. ZeroMQ hides connection management from you so you can focus on messaging. Connection management includes reconnections, which may result in a change of IP without changing the actual ZeroMQ socket connected on the other side.
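As a plain BSD-socket illustration of the descriptor-based model described above (not ZeroMQ internals; the port is a placeholder), notice that the server never looks at the client's IP; it just answers on the descriptor the request arrived on:

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5555);                         // placeholder port
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {
        // The peer address is deliberately ignored; the descriptor is all we need.
        int fd = accept(listener, nullptr, nullptr);
        if (fd < 0) break;

        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) write(fd, buf, n);                    // reply goes out the same descriptor
        close(fd);
    }
    close(listener);
    return 0;
}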
So here's an example of why you might want to get the IP address a message was delivered from: we have a server whose job it is to synchronize updates onto occasionally-connected clients (think mobile devices here, though this is an extreme example of a mobile device).
When the mobile unit comes onto the network, it sends a list of its firmware files to the server via a dealer-router connection. The server has a list of all applicable firmware files; if the client needs an update it will initiate an update via a separate mechanism.
Since the IPs for the devices can (and do) change, we need to know the IP address associated with the mobile device FOR THIS CONNECTION, i.e. right now.
Yes, we absolutely could have the client send its IP address in the message, but that's a waste of another n bytes of valuable satellite air time, and while not pure evil, it is certainly annoying. ZeroMQ already has this information; if it didn't, it wouldn't be able to generate replies. The address is in the socket data, so there's no reason the message couldn't (optionally, for all of you who use wired networks and think disconnects are the exception) include a reference to the socket structure so you can get the address out of it. There's no reason other than pedantic religiosity, which is far too common in ZeroMQ.
The way ZeroMQ is designed there's no information provided on the remote IP. As far as I know you have to manage this through your application by sending that information as a message of some sort.
The messages themselves use an IP-agnostic ID which has more to do with the instance of ZeroMQ running than any particular interface. This is because there may be more than one transport method and interface connecting the two instances.

Single socket sends and receives over both wlan and eth interface

Using C++ I create a single UDP socket, supplying both an IPv4 address and port. I run this on Ubuntu and have both a wlan0 and eth0 interface up and running. Apparently something decides that both interfaces should be used; I can appreciate that. Sending and receiving via different interfaces does create a kind of a pickle (NAT traversal???) for me, though. Using Wireshark I can see packets coming in, but my application does not register them.
To clarify:
I have a tracker which will supply me with a peer. The tracker will also contact that peer to send me a message. In order to overcome NAT traversal issues, I will send a puncture message.
The problem now is that the puncture message is sent over wlan (I am testing locally with two machines), whereas the messages from the peer are coming in over eth.
So I think the simplest solution would be to simply use one interface (or have both go over one socket).
EDIT:
I will try what is mentioned here on specifying a single interface.
@Barmar pointed out that UDP sockets may change interface when sendto is called with a destination address that would benefit from it.
I am still fuzzy on the reason for my problem though. Can someone explain why this is an issue in the first place?
EDIT2:
The above-mentioned solution of forcing one interface via the socket bind did not work. Apparently sendto will ignore this and still go out the other interface if it decides that will work better.
Does anyone know how to make sure that socket sticks to the interface it was assigned to?
If you need to ensure that UDP replies come from the same address that the request was sent to, the solution is to use multiple sockets. You open one socket for each IP of the server (this may be more than one socket per interface, because of interface aliases), and bind the socket to that IP. Then you use select() or poll() to wait for requests on all sockets at once. When a request comes in on a particular socket, you send the reply out through that same socket, and its source IP will match the original packet's destination.
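A rough sketch of that approach (one UDP socket per local IP, waiting on all of them with select(); the addresses and port are placeholders for the eth0 and wlan0 IPs):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <cstdint>
#include <vector>

int bind_udp(const char* ip, uint16_t port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    bind(fd, (sockaddr*)&addr, sizeof(addr));
    return fd;
}

int main() {
    // One socket per local IP (placeholder addresses).
    std::vector<int> socks = { bind_udp("192.168.1.10", 6000),
                               bind_udp("192.168.2.10", 6000) };
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        int maxfd = 0;
        for (int fd : socks) { FD_SET(fd, &rfds); maxfd = std::max(maxfd, fd); }
        if (select(maxfd + 1, &rfds, nullptr, nullptr, nullptr) < 0) break;

        for (int fd : socks) {
            if (!FD_ISSET(fd, &rfds)) continue;
            char buf[1500];
            sockaddr_in peer{};
            socklen_t len = sizeof(peer);
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, (sockaddr*)&peer, &len);
            // Reply on the socket the request arrived on, so the reply's source
            // address matches the destination the peer originally used.
            if (n > 0) sendto(fd, buf, n, 0, (sockaddr*)&peer, len);
        }
    }
    return 0;
}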

Routing sockets to another port

I have a system where I want to listen on a socket, wait for a client to connect, and then pass the connection to another application that I'll start as soon as the connection is established.
I do not have control over this other application and can only set the port it will listen on, but I want to have one process for each new client.
I've been searching for a solution, but I think I don't have the right terminology. However, I managed to find in Richard Stevens' "Unix Network Programming" something about the AF_ROUTE family of sockets, which may be combined with SOCK_RAW to route a connection to another IP and port. But there's very little documentation about how to use this, and it seems to require superuser privileges (which I want to avoid).
Maybe there's an easier solution but I'm probably using the wrong terms. Is it clear what I want to do?
I don't think you'll be able to just "pass" the socket like you want to, especially if you can't change and recompile "APP". Sockets include various administrative overhead (resource management, etc.) that is linked to the process that owns them. In addition, if you can't recompile APP, there is no way to make it bypass the steps involved in accepting a connection and simply have an already-open connection "handed" to it by your router.
However, have you considered simply using the router as a pass-through? Basically, have your "Router" process connect via sockets to each "APP" process it spawns, and simply echo whatever it receives from the appropriate client to the appropriate APP, and vice versa for APP to client.
This does add overhead, and you will have to manage a small mapping to keep track of which clients go to which apps, but it might work (assuming the APP or client aren't basing any behavior off of the IP address they are connected to, etc.). Assuming you can't recompile APP, there might not be too many other options.
The code for this is relatively simple. Your handler for data received from APP just looks up the socket for the appropriate client in your mapping, then does a non-blocking send of this data on it; likewise for the handler for data received from the client. Depending on how exactly the clients and APP behave, you may have to handle a bit of synchronization (if you receive from both simultaneously).
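A bare-bones sketch of that pass-through for one client/APP pair (the sockets are assumed to already be connected; select() shuttles bytes in both directions):

#include <sys/select.h>
#include <unistd.h>
#include <algorithm>

// Copy whatever arrives on 'from' to 'to'; returns false when 'from' closes or errors.
static bool forward(int from, int to) {
    char buf[4096];
    ssize_t n = read(from, buf, sizeof(buf));
    if (n <= 0) return false;
    return write(to, buf, n) == n;
}

// Relay between an accepted client socket and the connection to one APP instance.
void relay(int client_fd, int app_fd) {
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(client_fd, &rfds);
        FD_SET(app_fd, &rfds);
        if (select(std::max(client_fd, app_fd) + 1, &rfds, nullptr, nullptr, nullptr) < 0) break;
        if (FD_ISSET(client_fd, &rfds) && !forward(client_fd, app_fd)) break;  // client -> APP
        if (FD_ISSET(app_fd, &rfds) && !forward(app_fd, client_fd)) break;     // APP -> client
    }
    close(client_fd);
    close(app_fd);
}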

Rebind a socket to a different interface

Is there an existing Linux/POSIX C/C++ library or example code for how to rebind a socket from one physical interface to another?
For example, I have ping transmitting on a socket that is associated with a physical connection A and I want to rebind that socket to physical connection B and have the ping packets continue being sent and received on connection B (after a short delay during switch-over).
I only need this for session-less protocols.
Thank you
Update:
I am trying to provide a failover solution for use with PPP and Ethernet devices.
I have a basic script which can accomplish 90% of the functionality through use of iptables, NAT and the routing table.
The problem is when the failover occurs, the pings continue being sent on the secondary connection, however, their source IP is from the old connection.
I've spoken with a couple of people who work on commercial routers and their suggestion is to rebind the socket to the secondary interface.
Update 2:
I apologise for not specifying this earlier. This solution will run on a router. I cannot change the ping program because it will run on the client's computer. I used ping just as an example; any connection that is not session-based should be capable of being switched over. I tested this feature on several commercial routers and it does work. Unfortunately, their software is proprietary; however, from various conversations and testing, I found that they re-bind the sockets on failover.
Regarding your updated post: the problem is that changing the routing info is not going to change the source address of your ping, it will just force it out the second interface. This answer contains some relevant info.
You'll need to change the ping program. You can use a socket-per-interface approach and somehow inform the program when to fail over. Or you will have to close the socket and then bind to the second interface.
You can get the required interface info in a couple of ways, including calling ioctl() with the SIOCGIFCONF option and looping through the returned structures to get each interface's address info.
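A small sketch of that SIOCGIFCONF enumeration (IPv4 only; the fixed-size array is a simplification):

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    ifreq ifrs[16];                     // assume a handful of interfaces
    ifconf ifc{};
    ifc.ifc_req = ifrs;
    ifc.ifc_len = sizeof(ifrs);
    if (ioctl(fd, SIOCGIFCONF, &ifc) < 0) { perror("SIOCGIFCONF"); return 1; }

    int count = ifc.ifc_len / sizeof(ifreq);
    for (int i = 0; i < count; ++i) {
        auto* sin = (sockaddr_in*)&ifrs[i].ifr_addr;
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof(ip));
        printf("%s -> %s\n", ifrs[i].ifr_name, ip);      // e.g. "eth0 -> 192.168.1.10"
    }
    close(fd);
    return 0;
}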
I don't think that's quite a well-defined operation. The physical interfaces have different MAC addresses, so unless you have a routing layer mapping them (NAT or the like), they're going to have different IP addresses.
Ports are identified by a triple of <IP addr, port number, protocol>, so if your IP address changes, the port is going to change.
What are you really trying to do here?
I'm not at all sure what you're trying to accomplish, but I have a guess... Are you trying to do some kind of failover? If so, then there are indeed ways to accomplish that, but why not do it in the OS instead of the application?
On one end you can use CARP, and on the other you can use interface trunking/bonding (terminology varies) in failover mode.

Intercepting traffic to memcached for statistics/analysis

I want to set up a statistics monitoring platform to watch a specific service, but I'm not quite sure how to go about it. Processing the intercepted data isn't my concern, just how to intercept it. One idea was to set up a proxy between the client application and the service so that all TCP traffic goes first to my proxy; the proxy would then delegate the intercepted messages to an awaiting thread/fork to pass the message on and receive the results. The other was to try to sniff the traffic between client & service.
My primary goal is to avoid any serious loss in transmission speed between client & application but get 100% complete communications between client & service.
Environment: Ubuntu 8.04
Language: C/C++
In the background I was thinking of using a SQLite DB running completely in memory or a 20-25MB memcache daemon slaved to my process.
Update:
Specifically I am trying to track the usage of keys for a memcache daemon, storing the number of set/get successes and failures per key. The idea is that most keys have some sort of separating character [`|_-#] to create a sort of namespace. The plan is to step in between the daemon and the client, split the keys apart by a configured separator, and record statistics on them.
Exactly what are you trying to track? If you want a simple count of packets or bytes, or basic header information, then iptables will record that for you:
iptables -I INPUT -p tcp -d $HOST_IP --dport $HOST_PORT -j LOG $LOG_OPTIONS
If you need more detailed information, look into the iptables ULOG target, which sends each packet to userspace for analysis.
See http://www.netfilter.org for very thorough docs.
If you want to go the sniffer way, it might be easier to use tcpflow instead of tcpdump or libpcap. tcpflow will only output TCP payload so you don't need to care about reassembling the data stream yourself. If you prefer using a library instead of gluing a bunch of programs together you might be interested in libnids.
libnids and tcpflow are also available on other Unix flavours and do not restrict you to just Linux (unlike iptables).
http://www.circlemud.org/~jelson/software/tcpflow/
http://libnids.sourceforge.net/
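If you end up at the libpcap level rather than using tcpflow, a minimal capture loop looks roughly like this (the device name and the memcached port in the filter are assumptions, and stream reassembly is still up to you):

#include <pcap.h>
#include <cstdio>

// Called once per captured packet; here we only report the captured length.
static void on_packet(u_char*, const pcap_pkthdr* hdr, const u_char*) {
    printf("captured %u bytes\n", hdr->caplen);
}

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t* handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);  // placeholder device
    if (!handle) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    bpf_program prog;
    // Assumed memcached port; adjust the filter to your service.
    if (pcap_compile(handle, &prog, "tcp port 11211", 1, 0) == 0)
        pcap_setfilter(handle, &prog);

    pcap_loop(handle, -1, on_packet, nullptr);   // capture until interrupted
    pcap_close(handle);
    return 0;
}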
You didn't mention one approach: you could modify memcached or your client to record the statistics you need. This is probably the easiest and cleanest approach.
Between the proxy and the libpcap approach, there are a couple of tradeoffs:
- If you do the packet capture approach, you have to reassemble the TCP streams into something usable yourself. OTOH, if your monitor program gets bogged down, it'll just lose some packets, it won't break the cache. Same if it crashes. You also don't have to reconfigure anything; packet capture is transparent.
- If you do the proxy approach, the kernel handles all the TCP work for you. You'll never lose requests. But if your monitor bogs down, it'll bog down the app. And if your monitor crashes, it'll break caching. You probably will have to reconfigure your app and/or memcached servers so that the connections go through the proxy.
In short, the proxy will probably be easier to code, but deploying it may be a royal pain, and it had better be perfect or it will take down your caching. Changing the app or memcached seems like the sanest approach to me.
BTW: have you looked at memcached's built-in statistics reporting? I don't think it's granular enough for what you want, but if you haven't seen it, take a look before doing actual work :-D
iptables provides libipq, a userspace packet queuing library. From the manpage:
Netfilter provides a mechanism for passing packets out of the stack for queueing to userspace, then receiving these packets back into the kernel with a verdict specifying what to do with the packets (such as ACCEPT or DROP). These packets may also be modified in userspace prior to reinjection back into the kernel.
By setting up tailored iptables rules that forward packets to libipq, in addition to specifying the verdict for them, it's possible to do packet inspection for statistics analysis.
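A rough libipq sketch (it assumes a rule like iptables -A INPUT -p tcp --dport 11211 -j QUEUE is in place and the program runs as root; every queued packet is inspected and then accepted):

#include <libipq.h>
#include <linux/netfilter.h>
#include <cstdio>

int main() {
    ipq_handle* h = ipq_create_handle(0, PF_INET);
    if (!h) { ipq_perror("ipq_create_handle"); return 1; }

    // Ask the kernel to copy entire packets to userspace.
    if (ipq_set_mode(h, IPQ_COPY_PACKET, 65535) < 0) { ipq_perror("ipq_set_mode"); return 1; }

    unsigned char buf[65535];
    for (;;) {
        if (ipq_read(h, buf, sizeof(buf), 0) < 0) { ipq_perror("ipq_read"); break; }
        if (ipq_message_type(buf) != IPQM_PACKET) continue;

        ipq_packet_msg_t* m = ipq_get_packet(buf);
        // ... inspect m->payload / m->data_len here and update your statistics ...

        // Hand the packet back to the kernel unchanged.
        ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, nullptr);
    }
    ipq_destroy_handle(h);
    return 0;
}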
Another viable option is to sniff packets manually by means of libpcap or a PF_PACKET socket with socket-filter support.