I just noticed that libcurl does not set the SNI field when I use an IP address for an HTTPS call. I found this:
https://github.com/curl/curl/blame/master/lib/vtls/openssl.c
#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
  if((0 == Curl_inet_pton(AF_INET, hostname, &addr)) &&
#ifdef ENABLE_IPV6
     (0 == Curl_inet_pton(AF_INET6, hostname, &addr)) &&
#endif
     sni &&
     !SSL_set_tlsext_host_name(BACKEND->handle, hostname))
    infof(data, "WARNING: failed to configure server name indication (SNI) "
          "TLS extension\n");
#endif
According to the Curl_inet_pton documentation:
* inet_pton(af, src, dst)
* convert from presentation format (which usually means ASCII printable)
* to network format (which is usually some kind of binary format).
* return:
* 1 if the address was valid for the specified address family
* 0 if the address wasn't valid (`dst' is untouched in this case)
* -1 if some other error occurred (`dst' is untouched in this case, too)
As expected, it returns 1 for literal IP addresses (e.g. 192.160.0.1), so the 0 == comparison fails and SSL_set_tlsext_host_name is never called.
Why?
Because it's not allowed.
RFC 3546 "Transport Layer Security (TLS) Extensions" section 3.1 spells it out quite clearly:
Literal IPv4 and IPv6 addresses are not permitted in "HostName".
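If you are writing your own OpenSSL code, you can apply the same rule yourself. A minimal sketch, assuming ssl and hostname come from your existing connection setup (the helper name is mine, not curl's):

#include <arpa/inet.h>
#include <openssl/ssl.h>

/* Set SNI only when the host is not an IP literal, per RFC 3546/6066. */
static void maybe_set_sni(SSL *ssl, const char *hostname)
{
    struct in_addr a4;
    struct in6_addr a6;

    if (inet_pton(AF_INET, hostname, &a4) != 1 &&
        inet_pton(AF_INET6, hostname, &a6) != 1) {
        /* hostname is a real name, so SNI is allowed */
        SSL_set_tlsext_host_name(ssl, hostname);
    }
}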
Related
I'm using pcap to capture IP (both v4 and v6) packets on my router. It works just fine, but I've noticed that sometimes the EtherType of an Ethernet frame (LINKTYPE_ETHERNET) or of a Linux cooked capture encapsulation (LINKTYPE_LINUX_SLL) does not correctly indicate the version of the IP packet it contains.
I was expecting that a frame whose EtherType is 0x0800 (ETHERTYPE_IP) would contain an IPv4 packet with version == 4, and a frame whose EtherType is 0x86DD (ETHERTYPE_IPV6) would contain an IPv6 packet with version == 6.
Most of the time this holds, but sometimes it doesn't: I get a frame whose EtherType is ETHERTYPE_IP that somehow contains an IPv6 packet, or a frame whose EtherType is ETHERTYPE_IPV6 that contains an IPv4 packet.
I've heard of "IPv4 over IPv6" and "IPv6 over IPv4", but I don't know exactly how they work or whether they apply to my problem; otherwise I'm not sure what's causing this inconsistency.
EDIT
I think my actual question is whether such behavior is normal. If so, should I simply ignore the EtherType field and just check the version field in the IP header to determine whether it's IPv4 or IPv6?
According to my (limited) understanding, both IPv4 and IPv6 packets can appear behind an IPv4 (0x0800) Ethernet type (ethtype). This is related to transmitting IPv4 packets over IPv6. When the ethtype is 0x0800 but the IP header is version 6, the address in the IP header is an IPv4 address mapped into IPv6.
One example that shows this is the Linux UDP receive code, which checks for ethtype 0x0800 and then stores the IPv4 source address as a v4-mapped IPv6 address using ipv6_addr_set_v4mapped.
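If you decide to go by the header itself, the version is the top nibble of the first byte of the IP header. A minimal sketch, assuming payload already points just past the link-layer header:

#include <stdint.h>
#include <stddef.h>

/* Returns 4 or 6 for IPv4/IPv6, or -1 if there is no data. */
static int ip_version(const uint8_t *payload, size_t len)
{
    if (len < 1)
        return -1;
    return payload[0] >> 4;
}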
netstat -a -n confirms that neither the IP address nor the port is in use. When I use gdb and break in the method calling bind, I see that the correct IP address and port are being used, along with a reasonable socket address length of 16. This is for a UDP listener. The remote IP is static and read from a configuration file.
This is the code,
void CSocket::Bind(IpEndPoint& endPoint)
{
    int bindResult = bind(socketHandle, endPoint.GetSockAddrPtr(),
                          endPoint.GetAddrLength());
    if (bindResult < 0)
    {
        TRACE_ERROR("Failed to bind to socket. %s. IpAddress %s Port %d AddrLength %d",
                    strerror(errno), endPoint.GetIpAddressString(),
                    ntohs(endPoint.GetPort()), endPoint.GetAddrLength());
    }
}
this is from gdb,
Breakpoint 1, CSocket::Bind (this=0x819fa24, ipAddress="192.0.2.77",
port=4185) at Socket.cpp:126
and this is the TRACE_ERROR from the code above
ERROR: Failed to bind to socket. errno 99 (Cannot assign requested address).
IpAddress 192.0.2.77 Port 4185 AddrLength 16
I've been re-reading Beej's Guide to Network Programming but haven't found a clue. This is UDP, so no connection should be required in order to bind. The firewall is off. Where else should I be looking?
Following on what @Aconcagua said: you want to bind an address that is local, not merely one that is "not in use". You can't just make up a local address: either use INADDR_ANY to bind to any local address, or bind one that is assigned to one of your local interfaces. This is likely the problem. (bind sets the local address, connect sets the remote address -- or, with UDP, you can specify the remote address per packet with sendto.) – Gil Hamilton
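A minimal sketch of that fix, using INADDR_ANY so the listener binds on every local interface (port 4185 is taken from the trace above; error handling trimmed):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int open_udp_listener(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in local;
    memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY); /* any local address */
    local.sin_port = htons(4185);

    /* EADDRNOTAVAIL (errno 99) here would mean the address is not local */
    if (bind(fd, (struct sockaddr *)&local, sizeof local) < 0)
        return -1;
    return fd;
}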
I have a socket server that listens for and accepts connections from clients, which works as follows:
... do some pre-processing (socket, binds, etc)
// listen for clients
if (listen(sockfd, BACKLOG) == -1) {
    perror("listen");
    exit(1);
}

printf("server: waiting for connections...\n");

while (1) {  // main accept() loop
    sin_size = sizeof their_addr;
    new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
    if (new_fd == -1) {
        perror("accept");
        continue;
    }

    // do something .....
    .....
}
How can I restrict the server so it only accepts connection from specific IP addresses? For instance, I can create a text file containing a white list of IP addresses to accept, in the following format:
202.168.2.5 - 202.168.2.127
92.104.3.1 - 92.104.4.254
//and so on
So basically I want to reject connections from all the IP addresses not included in the whitelist. If the socket API does not support this, I am okay with accepting the connection first and then immediately closing the socket fd if the peer address is not in the whitelist. But how do I perform this? How can I check whether a specific IP address is within a range specified in my whitelist? Any examples would be appreciated.
You want to call getpeername to get the address information from the client. Then check if their IP address is found in the whitelist. If not, disconnect them.
In order to check whether their IP address lies within a given range, you want to convert the four address bytes into one number, with the first octet of the dotted quad as the most significant byte. You can do that with the following:
unsigned int n = bytes[0] << 24 | bytes[1] << 16 | bytes[2] << 8 | bytes[3];
If the lower bound of the address range is A, the upper bound is B, and the client's IP address is X, then they are whitelisted if (A <= X && X <= B).
If every range of IP addresses tests false, then they aren't on the whitelist and you should disconnect them.
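Putting it together, a minimal sketch of the range test (the in_range name is mine; the bounds are the first pair from the whitelist above):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>

static int in_range(struct in_addr addr, const char *lo_str, const char *hi_str)
{
    struct in_addr lo, hi;
    if (inet_pton(AF_INET, lo_str, &lo) != 1 ||
        inet_pton(AF_INET, hi_str, &hi) != 1)
        return 0;

    /* ntohl() yields the same number as the byte-shifting above */
    uint32_t x = ntohl(addr.s_addr);
    return ntohl(lo.s_addr) <= x && x <= ntohl(hi.s_addr);
}

/* usage, right after accept():
 *   if (!in_range(their_addr.sin_addr, "202.168.2.5", "202.168.2.127"))
 *       close(new_fd);
 */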
Not sure what the question is here, or rather what the problem is. The client's address will be in their_addr, so just search your whitelist for that. If not found, close. You will probably want to either convert their_addr into the same format as your whitelist entries, or possibly vice versa.
On Windows only, you can use WSAAccept() instead of accept(). WSAAccept() has a parameter that you can pass a callback function to. Before a new connection is accepted, the callback is invoked with the addresses and QOS values for that connection. The callback can then return CF_ACCEPT, CF_DEFER, or CF_REJECT as needed.
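A minimal Windows-only sketch (the is_whitelisted helper is hypothetical, standing in for the range check above):

#include <winsock2.h>

int is_whitelisted(struct in_addr addr); /* hypothetical range check */

int CALLBACK AcceptCondition(LPWSABUF lpCallerId, LPWSABUF lpCallerData,
                             LPQOS lpSQOS, LPQOS lpGQOS,
                             LPWSABUF lpCalleeId, LPWSABUF lpCalleeData,
                             GROUP *g, DWORD_PTR dwCallbackData)
{
    /* lpCallerId->buf holds the caller's sockaddr */
    struct sockaddr_in *peer = (struct sockaddr_in *)lpCallerId->buf;
    return is_whitelisted(peer->sin_addr) ? CF_ACCEPT : CF_REJECT;
}

/* usage:
 *   SOCKET new_fd = WSAAccept(sockfd, (struct sockaddr *)&their_addr,
 *                             &sin_size, AcceptCondition, 0);
 */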
Is there a way to set the network interface that DNS requests are bound to?
We have a project that requires a high-priority streaming session to go through one interface and all the other requests to be channeled through the second one.
Example: setting 'eth0' so that all the ares requests will go through 'eth0' and not 'wlan0'.
I was not able to find any API in c-ares (in the ares_init_options() API) that gives this option of setting the interface.
Can you please let me know if there is some way to achieve this, or if I missed something?
Thanks,
Arjun
If you have a fairly new c-ares (c-ares >= 1.7.4), check out ares.h (it's the only place I've actually found this referenced):
/* These next 3 configure local binding for the out-going socket
 * connection. Use these to specify source IP and/or network device
 * on multi-homed systems.
 */
CARES_EXTERN void ares_set_local_ip4(ares_channel channel,
                                     unsigned int local_ip);

/* local_ip6 should be 16 bytes in length */
CARES_EXTERN void ares_set_local_ip6(ares_channel channel,
                                     const unsigned char* local_ip6);

/* local_dev_name should be null terminated. */
CARES_EXTERN void ares_set_local_dev(ares_channel channel,
                                     const char* local_dev_name);
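A minimal usage sketch, assuming an already-initialized channel (note that on Linux ares_set_local_dev() relies on SO_BINDTODEVICE, which usually requires root privileges):

#include <ares.h>

static ares_channel make_pinned_channel(void)
{
    ares_channel channel = NULL;
    if (ares_init(&channel) != ARES_SUCCESS)
        return NULL;

    /* pin all lookups on this channel to eth0 */
    ares_set_local_dev(channel, "eth0");
    return channel;
}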
I am using BSD sockets over a WLAN. I have noticed that my server computer's IP address occasionally changes between connections. The problem is that I enter the IP address into my code as a literal string, so whenever it changes I have to go into the code and change it there. How can I change the code so that it will use whatever the IP is at the time? This is the call in the server code:
if ((status = getaddrinfo("192.168.2.2", port, &hints, &servinfo)) != 0)
and the client side is the same. I tried NULL for the address on both sides, but the client will not connect and just gives me a "Connection refused" error.
Thanks for any help.
Use a domain name that can be looked up in your hosts file or in DNS, rather than an IP address.
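For example, with a hosts file entry (the name here is just an illustration):

192.168.2.2    myserver.home

the call becomes getaddrinfo("myserver.home", port, &hints, &servinfo), and only the hosts file needs updating when the address changes.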
How about a command line parameter?
int main(int argc, char* argv[]) {
    const char* addr = "myfancyhost.domain.com"; /* default address */
    if (argc > 1) {
        addr = argv[1]; /* explicit address */
    }
    if ((status = getaddrinfo(addr, ...
Give your server a name, and use gethostbyname to find its address (and, generally, put the server name into a configuration file instead of hard-coding it, though hard-coding a default if you can't find the config file doesn't hurt).
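A minimal sketch of that lookup (the name "myserver" is a placeholder; gethostbyname() is the call named above, though getaddrinfo() is its modern replacement):

#include <netdb.h>
#include <netinet/in.h>
#include <string.h>

static int resolve_server(const char *name, struct sockaddr_in *sa)
{
    struct hostent *he = gethostbyname(name); /* hosts file or DNS */
    if (he == NULL)
        return -1;

    memset(sa, 0, sizeof *sa);
    sa->sin_family = AF_INET;
    memcpy(&sa->sin_addr, he->h_addr_list[0], he->h_length);
    return 0; /* sa now holds the server's current address */
}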