I'm currently working on a UDP socket application and I need to build in support so that both IPv4 and IPv6 clients can send packets to a server.
I was hoping that someone could help me out and point me in the right direction; the majority of the documentation that I found was not complete. It'd also be helpful if you could point out any differences between Winsock and BSD sockets.
Thanks in advance!
The best approach is to create an IPv6 server socket that can also accept IPv4 connections. To do so, create a regular IPv6 socket, turn off the socket option IPV6_V6ONLY, bind it to the "any" address, and start receiving. IPv4 addresses will be presented as IPv6 addresses, in the IPv4-mapped format.
The major difference across systems is whether IPV6_V6ONLY is a) available, and b) turned on or off by default. It is turned off by default on Linux (i.e. allowing dual-stack sockets without setsockopt), and is turned on on most other systems.
In addition, the IPv6 stack on Windows XP doesn't support that option. In these cases, you will need to create two separate server sockets, and place them into select or into multiple threads.
The socket API is governed by IETF RFCs and should be the same on all platforms, including Windows, with respect to IPv6.
For IPv4/IPv6 applications it's ALL about getaddrinfo() and getnameinfo(). getaddrinfo() is a genius: it looks at DNS, port names, and the capabilities of the client to resolve the eternal question of "can I use IPv4, IPv6, or both to reach a particular destination?" Or, if you're going the dual-stack route and want it to return IPv4-mapped IPv6 addresses, it will do that too.
It provides ready-made sockaddr * structures that can be plugged directly into bind(), recvfrom(), and sendto(), and supplies the address family for socket(). In many cases this means no messy sockaddr_in(6) structures to fill out and deal with by hand.
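To illustrate (a sketch; the port number 4950 is arbitrary), a passive lookup for a dual-stack UDP server can look like this:

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;    /* IPv4 and/or IPv6, whatever the host supports */
    hints.ai_socktype = SOCK_DGRAM;   /* UDP */
    hints.ai_flags    = AI_PASSIVE;   /* wildcard address, suitable for bind() */

    int rv = getaddrinfo(NULL, "4950", &hints, &res);  /* NULL node + AI_PASSIVE = "any" */
    if (rv != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
        return 1;
    }
    /* Each result carries the family, socktype and a ready-made sockaddr for bind(). */
    for (struct addrinfo *p = res; p != NULL; p = p->ai_next)
        printf("family=%s\n", p->ai_family == AF_INET6 ? "AF_INET6" : "AF_INET");
    freeaddrinfo(res);
    return 0;
}
```

Looping over ai_next and binding each result is exactly the pattern the two-socket fallback uses on systems without dual-stack support.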
For UDP implementations I would be careful about setting up dual-stack sockets or, more generally, binding to all interfaces (INADDR_ANY). The classic issue is that, when addresses are not locked down to specific interfaces (see bind()) and the system has multiple interfaces, responses may leave from a different address than the one the request arrived on, at the whim of the OS routing table. This confuses application protocols, especially any with authentication requirements.
For UDP implementations where this is not a problem, or for TCP, dual-stack sockets can save a lot of time when IPv*-enabling your system. Be careful not to rely entirely on dual-stack where it's not absolutely necessary, as there is no shortage of reasonable platforms (old Linux, BSD, Windows 2003) deployed with IPv6 stacks that are not capable of dual-stack sockets.
I've been playing with this under Windows, and it actually does appear to be a security issue there: if you bind to the loopback address, the IPv6 socket is correctly bound to [::1], but the mapped IPv4 socket is bound to INADDR_ANY, so your (supposedly) safely local-only app is actually exposed to the world.
The RFCs don't really specify the existence of the IPV6_V6ONLY socket option, but, if it is absent, the RFCs are pretty clear that the implementation should behave as though the option is FALSE.
Where the option is present, I would argue that it should default to FALSE, but, for reasons passing understanding, BSD and Windows implementations default to TRUE. There is a bizarre claim that this is a security concern: an unknowing IPv6 programmer could bind to IN6ADDR_ANY thinking they were binding only to IPv6 and accidentally accept an IPv4 connection, causing a security problem. I think this is both far-fetched and absurd, in addition to being a surprise to anyone expecting an RFC-compliant implementation.
In the case of Windows, non-compliance won't usually be a surprise. In the case of BSD, this is unfortunate at best.
As Craig M. Brandenburg observes, getaddrinfo does all the heavy lifting to make dual IPv4/IPv6 possible. I have an experimental server and client on my localhost. I use this in the server:
memset(&hints, 0, sizeof hints); // always zero hints first
hints.ai_family = AF_INET6;
hints.ai_socktype = SOCK_DGRAM;
hints.ai_flags = AI_PASSIVE;
...
The client can then connect to the server using any kind of address:
memset(&hints, 0, sizeof hints); // always zero hints first
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_DGRAM;
host_port = "4950"; // whatever
// All of these work.
host_ip = "127.0.0.1"; // Pure IPv4 address
host_ip = "::ffff:127.0.0.1"; // IPv4 address expressed as IPv6
host_ip = "::1"; // Pure IPv6 address
host_ip = "localhost"; // Domain name
int rv = getaddrinfo(host_ip, host_port, &hints, &result);
...
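A compilable version of that client-side lookup (a sketch; family_of() is a made-up helper, and only the numeric-address cases are exercised so no DNS is needed):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

/* hypothetical helper: resolve one host string and return the family found */
static int family_of(const char *host_ip)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;    /* let getaddrinfo pick v4 or v6 */
    hints.ai_socktype = SOCK_DGRAM;
    if (getaddrinfo(host_ip, "4950", &hints, &res) != 0)
        return -1;
    int fam = res->ai_family;         /* first (preferred) result */
    freeaddrinfo(res);
    return fam;
}

int main(void)
{
    printf("pure v4: %s\n",   family_of("127.0.0.1")        == AF_INET  ? "ok" : "?");
    printf("mapped v4: %s\n", family_of("::ffff:127.0.0.1") == AF_INET6 ? "ok" : "?");
    printf("pure v6: %s\n",   family_of("::1")              == AF_INET6 ? "ok" : "?");
    return 0;
}
```

Note how the IPv4-mapped string comes back as an AF_INET6 result, which is precisely what lets a dual-stack server treat every client the same way.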
Short and simple question: I am new to boost::asio and I was wondering if it is possible to create a tcp::acceptor listening for both IPv4 and IPv6 connections together. The tutorials on boost's homepage show something like this:
_acceptor = new tcp::acceptor(_ioService, tcp::endpoint(tcp::v4(), 3456));
where the endpoint is always specified with a specific protocol. Is it not possible to listen for IPv4 and IPv6 on the same port at the same time?
If you create an IPv6 acceptor, it will accept both IPv4 and IPv6 connections if the IPV6_V6ONLY socket option is cleared. IPv4 addresses will be presented as IPv6 addresses, in the IPv4-mapped format.
Problems arise mainly around whether IPV6_V6ONLY is available and what its default value is (on or off), so I find it's better to set it explicitly to what you want.
Also Windows XP doesn't support the option at all.
So if you want to be compatible across systems, it's recommended to create two sockets, one for v4 and one for v6 with IPV6_V6ONLY set.
This may seem a weird question to some, but I've searched and didn't find any answer.
When I want a dual stack server, I need to listen on INADDR_ANY for IPv4 and to in6addr_any for IPv6.
If I have more than one network card then I need to choose whether I want to listen on all of them, or to specify which card to listen on.
For this exact purpose I'm using the getaddrinfo method with a configurable host_name. If host_name has not been configured, I call getaddrinfo with NULL and get the two "ANY" addresses. If I configure it with an IP (v6 or v4) I get only one address, which is also fine.
But when I use my hostname as the configured host_name, on a Windows machine I get 3 addresses from getaddrinfo: one IPv4 address and two IPv6 addresses. The first is shown by ipconfig as "Link-local IPv6 Address"; the second appears as "IPv6 Address" under the section "Tunnel adapter 6TO4 Adapter:".
The addresses are ordered like this:
IPv6 Link Local
IPv6 Address
IPv4
So if I'm listening on all the addresses, the dual stack is actually a triple stack. If I take the first IPv6 address (as was the convention in an IPv4 server with a configured host_name), I'm listening only on the "Link-local IPv6 Address", which is less accessible than the "IPv6 Address"; many clients can't connect to it, while they can connect to the IPv4 address.
Now let me complicate it further. I connected my cellphone over USB and activated USB tethering. When I resolve addresses with getaddrinfo I get 5 addresses, in this order:
USB IPv6 Link Local
Ethernet IPv6 Link Local
IPv6 Address
USB IPv4
Ethernet IPv4
So my questions are:
If it were IPv4 only, I would take only the first IPv4 address and not care about the rest. But when using IPv6, it looks like the last IPv6 address is the most appropriate. Is there any convention for this?
If I have a multi-network machine, I need to choose the first network and listen on both its IPv4 and IPv6 addresses, but here the results are mixed. Again, is there any convention?
Do I need to listen on all IPv6 addresses? In that case I will be listening on an IPv6 address for which I don't listen on the corresponding IPv4 address, and I hope to avoid that.
Thanks for any help or comment.
But please don't advise listening only on "ANY", since I can't.
Link-local addresses are valid only within a network segment and often just for your machine to the machine at the other end of the communication link. For instance, your USB link-local address will work only for communications between your phone and your computer but not beyond that; your link-local Ethernet IPv6 address will be usable from all machines on the same hub/switch but not beyond a router (somewhat similar to a private IPv4 address). If this is not your expected use case, I suggest that you simply ignore link-local addresses.
Auto-assigned link-local addresses are created with a very specific pattern and mask, so you can detect them programmatically. Link-local IPv6 addresses are in the fe80::/64 range (meaning the first bytes of the address are fe80:0000:0000:0000 and the 8 remaining bytes can be anything), and link-local IPv4 addresses range from 169.254.1.0 to 169.254.255.255.
Also note that all hosts configure all IPv6-capable interfaces with a link-local address, and will retain it even if they are assigned another address, so there's no getting away from it.
Old post, I know, but how did you resolve it in the end? I'm really interested to know.
For this I would recommend the option you want to avoid: bind to ANY, the wildcard "::" (i.e. bind(.., "::", ..)), and use firewall or packet-filter rules to rule out the connections you don't want.
Background of the question:
On our machine we have multiple network interfaces which lead to different networks. There are possibly overlapping IP addresses (e.g. two different machines in two different networks with the same IP address). So when we want to connect with a specific peer, we need to specify not only its IP address but also our network interface which leads to the proper network.
We want to write an application in C/C++ able to connect with specific peers via TCP.
Question:
I'm trying to make a TCP connection using socket with SO_BINDTODEVICE set. Here is a simplified snippet:
sockfd = socket(AF_INET, SOCK_STREAM, 0);
setsockopt(sockfd, SOL_SOCKET, SO_BINDTODEVICE, interface_name,
strlen(interface_name));
connect(sockfd, (sockaddr *) &serv_addr, sizeof(serv_addr));
(I know about struct ifreq, but it seems that only the first field in it (the ifr_name field) is in use, so I can pass only the name of the interface.)
If the forced interface is the same as the one chosen by the routing table, then everything works correctly.
If the forced interface is different, then (tested with Wireshark):
SYN is sent from the forced interface to the desired peer.
SYN,ACK from the desired peer is received on the forced interface.
ACK is not sent from the forced interface and the connection is not established. (And go to step 2.)
How can I check where the SYN,ACK or ACK is rejected by our system? And how do I correctly force a TCP socket to make a connection using a specific interface (against the routing table)?
Maybe there are some other, more convenient ways to create TCP connection on desired interface?
Thanks!
I know it isn't quite the answer you want, but you could disable the other interfaces and enable just the network you want. In your case it seems that you need all the interfaces, but I think this approach could help others. You can enable/disable a network interface with something like this:
enable
strcpy(ifr.ifr_name, "eth0"); // you could put any interface name beside "eth0"
res = ioctl(sockfd, SIOCGIFFLAGS, &ifr); // read the current flags first
ifr.ifr_flags |= IFF_UP; // set the "interface up" flag
res = ioctl(sockfd, SIOCSIFFLAGS, &ifr);
and to disable you just need to clear the flag instead; the rest of the code is the same:
ifr.ifr_flags &= ~IFF_UP;
Don't use SO_BINDTODEVICE. It's not supported on all platforms, and there's an easier way.
Instead bind the socket to the local IP address on the correct network that you want to use to connect to the remote side.
I.e.,
sockfd = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in sin;
memset(&sin, 0, sizeof(sin));
sin.sin_family = AF_INET;
sin.sin_port = 0; //Auto-determine port.
sin.sin_addr.s_addr = inet_addr("192.168.1.123"); //Your IP address on same network as peer you want to connect to
bind(sockfd, (sockaddr*)&sin, sizeof(sin));
Then call connect.
For the server side you'd do the same thing except specify a port instead of 0, then call listen instead of connect.
This is a problem with kernel configuration - on many distributions it is by default configured to reject incoming packets in this specific case.
I found the solution to this problem in this answer to another similar question:
To allow such traffic you have to set some variables on your machine (as root):
sysctl -w net.ipv4.conf.all.accept_local=1
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.your_nic.rp_filter=0
where your_nic is the network interface receiving the packet. Be sure to change both net.ipv4.conf.all.rp_filter and net.ipv4.conf.your_nic.rp_filter; it will not work otherwise (the kernel defaults to the most restrictive setting).
I am developing a gaming server using the Winsock2 API from Windows, just for now until porting it to Linux.
The main problem I have found is that I don't know how to differentiate gaming clients that come from the same router/network. Let's imagine 2 gamers that are on the same network, going to the Internet through the same router IP and port, for example IP 220.100.100.100 and port 5000. How can my C/C++ server differentiate both TCP connections and know that they are two different gamers?
Can I find any difference in the sockaddr_in struct that the socket fills in when accept(...) returns?
If the two clients (gamers) are behind a router, it is hard to differentiate them by IP address alone; their connections will, however, still arrive with different source ports.
If you are using TCP then it shouldn't be a problem. Each client connects over a unique socket so you know anything coming in on that socket is from them. When they first log on, I assume they supply some credentials (name, password) so just associate the credentials with the socket.
If you are using a connectionless protocol, like UDP, the first time they contact you, you give them a unique number or token. Next time they contact you, they must include their token in the message, so that you can identify them.
Am I missing something?
Couldn't you differentiate by testing the socket descriptor that the call to accept() returns?
I've got a little UDP example program written using IPv4. If I alter the code to IPv6, would I still be able to communicate with anyone using the listener with an IPv4 address? I was looking at porting examples at
http://ou800doc.caldera.com/en/SDK_netapi/sockC.PortIPv4appIPv6.html
I'm not sure if simply altering the code would ensure that it worked, or if I'd have to write it in dual-stack mode.
Yes and no... IPv6 uses completely different addressing, so you'll have to recode your app to use the alternative headers and structure sizes.
However, the IPv4 address range is representable within IPv6 as IPv4-mapped addresses; the syntax prefixes the standard dotted-quad with ::ffff: (e.g. ::ffff:10.11.12.13). You can also embed IPv4 addresses within IPv6 packets.
Not without the assistance of an IPv4/IPv6 gateway in the network, and even then communication will be limited by the typical problems introduced by network address translating gateways. The traditional advice for programmers facing decisions like this is to recommend supporting both IPv4 and IPv6 at the same time.
IPv4 and IPv6 are inherently incompatible with each other.
A few basic reasons:
the address space is completely different (IPv6 has 128 bit addresses, IPv4 has 32 bit addresses)
the protocol header of IPv6 looks nothing like the protocol header of IPv4; if you try to parse an IPv6 packet as IPv4 you'll get nonsense
The obvious result of these is that if you open an IPv6 socket you can't listen to it using an IPv4 socket.