I have this simple program:
int main ()
{
/* INITIALIZING OPENSSL */
SSL_library_init();
SSL_load_error_strings();
ERR_load_BIO_strings();
OpenSSL_add_all_algorithms();
BIO *bio;
connectServerSSL(bio);
login(bio);
}
And this function:
void connectServerSSL (BIO *bio)
{
SSL_CTX * ctx = SSL_CTX_new(SSLv23_client_method());
SSL * ssl;
if(! SSL_CTX_load_verify_locations(ctx, NULL, "/etc/ssl/certs"))
{
callError(ERR_LOADCERT);
}
bio = BIO_new_ssl_connect(ctx);
BIO_get_ssl(bio, &ssl);
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
BIO_set_conn_hostname(bio, hostnamePort);
if(BIO_do_connect(bio) <= 0)
{
callError(ERR_CONNECTION);
}
if(SSL_get_verify_result(ssl) != X509_V_OK)
{
callError(ERR_VALIDCERT);
}
}
When I use this:
BIO_write(bio, request.c_str(), request.size())
In the function connectServerSSL it works fine.
But when I want to use it in some other function:
void login (BIO *bio)
{
BIO_write(bio, request.c_str(), request.size());
}
I get Segmentation fault (core dumped).
In C and C++, even pointers are passed by value. So you need to either change the parameter of connectServerSSL to a BIO *& or else redefine it in the C-style of things:
void connectServerSSL (BIO ** bio_ptr)
{
SSL_CTX * ctx = SSL_CTX_new(SSLv23_client_method());
SSL * ssl;
if(! SSL_CTX_load_verify_locations(ctx, NULL, "/etc/ssl/certs"))
{
callError(ERR_LOADCERT);
}
BIO * bio = BIO_new_ssl_connect(ctx);
BIO_get_ssl(bio, &ssl);
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
BIO_set_conn_hostname(bio, hostnamePort);
if(BIO_do_connect(bio) <= 0)
{
callError(ERR_CONNECTION);
}
if(SSL_get_verify_result(ssl) != X509_V_OK)
{
callError(ERR_VALIDCERT);
}
*bio_ptr = bio;
}
// Example usage:
void example()
{
BIO * bio;
connectServerSSL(&bio);
BIO_write(bio, request.c_str(), request.size());
}
Your connectServerSSL() function has a parameter named bio, and it writes only to this local copy, not to the bio variable in main(), which therefore remains uninitialized.
Change the signature of connectServerSSL() to BIO* connectServerSSL(void) and call it with bio = connectServerSSL(). You could also give the function a BIO** newbio parameter, call it with &bio, and set *newbio, which will update the bio variable in main().
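A minimal sketch of the return-value variant, reusing the question's callError(), hostnamePort and request (callError() is assumed to abort on failure):
BIO * connectServerSSL (void)
{
    SSL_CTX * ctx = SSL_CTX_new(SSLv23_client_method());
    SSL * ssl = NULL;
    if (!SSL_CTX_load_verify_locations(ctx, NULL, "/etc/ssl/certs"))
        callError(ERR_LOADCERT);
    BIO * bio = BIO_new_ssl_connect(ctx);      // created in a local variable...
    BIO_get_ssl(bio, &ssl);
    SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
    BIO_set_conn_hostname(bio, hostnamePort);
    if (BIO_do_connect(bio) <= 0)
        callError(ERR_CONNECTION);
    if (SSL_get_verify_result(ssl) != X509_V_OK)
        callError(ERR_VALIDCERT);
    return bio;                                // ...and handed back to the caller
}
// In main():
// BIO * bio = connectServerSSL();
// BIO_write(bio, request.c_str(), request.size());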
Some good habits to help avoid bugs like this: initialize your variables to default values, use assertions to check that your inputs are valid, and single-step through in a debugger. If you’d initialized the variable in main() to BIO* bio = NULL, or better yet, BIO* const bio = connectServerSSL(), then it would have been obvious in the debugger that it was still uninitialized on return.
I'm trying to create a basic server and client using OpenSSL and its BIOs but BIO_do_connect returns -1. ERR_get_error returns 0 after that.
I've tried to minimize the code below by just writing // check [condition]. In my real code I do the same thing with an if check and then print the error returned by ERR_get_error (so if the condition is true, I print an error message).
This is my code for the server:
// init OpenSSL
SSL_load_error_strings();
ERR_load_BIO_strings();
SSL_library_init();
OpenSSL_add_all_algorithms();
SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
SSL_CTX_set_default_passwd_cb(ctx, &myPasswordCallback);
int certState = SSL_CTX_use_certificate_file(ctx, "../certs/cert.pem", SSL_FILETYPE_PEM);
// check certState < 0
int keyState = SSL_CTX_use_PrivateKey_file(ctx, "../certs/key.pem", SSL_FILETYPE_PEM);
// check keyState < 0
BIO *serverBio = BIO_new_ssl(ctx, 0);
// check serverBio == nullptr
SSL *serverSsl = nullptr;
BIO_get_ssl(serverBio, &serverSsl);
// check serverSsl == nullptr
SSL_set_mode(serverSsl, SSL_MODE_AUTO_RETRY);
BIO *acceptBio = BIO_new_accept("6672");
// check acceptBio == nullptr
int setupAcceptResult = BIO_do_accept(acceptBio);
// check setupAcceptResult <= 0
int acceptResult = BIO_do_accept(acceptBio);
// check acceptResult <= 0
BIO *clientBio = BIO_pop(acceptBio);
// check clientBio == nullptr
BIO_free_all(clientBio);
BIO_free_all(acceptBio);
BIO_free_all(serverBio);
// cleanup OpenSSL
SSL_CTX_free(ctx);
EVP_cleanup();
ERR_free_strings();
This server runs fine but my client fails to connect to it:
// init OpenSSL
SSL_load_error_strings();
ERR_load_BIO_strings();
SSL_library_init();
OpenSSL_add_all_algorithms();
SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
SSL_CTX_set_default_passwd_cb(ctx, &myPasswordCallback);
int certState = SSL_CTX_use_certificate_file(ctx, "../certs/cert.pem", SSL_FILETYPE_PEM);
// check certState < 0
int keyState = SSL_CTX_use_PrivateKey_file(ctx, "../certs/key.pem", SSL_FILETYPE_PEM);
// check keyState < 0
BIO *clientBio = BIO_new_ssl_connect(ctx);
SSL *clientSsl = nullptr;
BIO_get_ssl(clientBio, &clientSsl);
// check clientSsl == nullptr
SSL_set_mode(clientSsl, SSL_MODE_AUTO_RETRY);
BIO_set_conn_hostname(clientBio, "localhost:6672");
long connectionState = BIO_do_connect(clientBio);
// check connectionState <= 0
// here it fails; connectionState is -1
long sslState = SSL_get_verify_result(clientSsl);
// check sslState != X509_V_OK
BIO_free_all(clientBio);
SSL_CTX_free(ctx);
EVP_cleanup();
ERR_free_strings();
I'm sorry for posting so much code. I didn't really find a complete example of OpenSSL server/client using BIOs.
Your server code is essentially this:
setup serverBio as SSL
create a new BIO acceptBio without SSL
accept the connection -> clientBio
free everything
The server is not doing any SSL handshake here, since serverBio is never used for the newly created TCP connection clientBio.
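One way to fix this, sketched here on the assumption that the rest of the setup (ctx, certificate, key) stays as in the question, is to let the accept BIO prepend the SSL BIO to every accepted connection with BIO_set_accept_bios():
BIO *serverBio = BIO_new_ssl(ctx, 0);               // server-mode SSL BIO
BIO *acceptBio = BIO_new_accept("6672");
// Duplicate and prepend the SSL BIO to each accepted connection.
// serverBio must not be freed separately after this call; it is freed
// together with acceptBio.
BIO_set_accept_bios(acceptBio, serverBio);
int setupAcceptResult = BIO_do_accept(acceptBio);   // bind + listen
// check setupAcceptResult <= 0
int acceptResult = BIO_do_accept(acceptBio);        // wait for a client
// check acceptResult <= 0
BIO *clientBio = BIO_pop(acceptBio);                // chain: SSL BIO -> socket BIO
// check clientBio == nullptr
int handshakeResult = BIO_do_handshake(clientBio);  // now the TLS handshake happens
// check handshakeResult <= 0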
Apart from that, I recommend testing your server and client against a known-good client and server first, so that you can figure out more quickly where the problem is. openssl s_client and openssl s_server provide such a test client and server. Packet capturing (Wireshark) also helps to find out what happens between server and client.
I have a very simple application that goes through a list of hostnames and connects to each of them on the HTTPS port to obtain fresh server data for client-identified data.
In order to obtain the data I use OpenSSL, but it seems to leak memory every time.
This is the class responsible for connecting and sending/receiving the SSL data:
class CConnector
{
public:
static std::string GetData (const std::string& strHostName)
{
// Initialize malloc, free, etc for OpenSSL's use
CRYPTO_malloc_init();
// Initialize OpenSSL's SSL libraries
SSL_library_init();
// Load all available encryption algorithms
OpenSSL_add_all_algorithms();
//
std::string strRequest="GET /\r\n";
// Set up a SSL_CTX object, which will tell our BIO object how to do its work
SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());
// Create our BIO object for SSL connections.
BIO* bio = BIO_new_ssl_connect(ctx);
// Create a SSL object pointer, which our BIO object will provide.
SSL* ssl = NULL;
// Failure?
if (bio == NULL)
{
CLogger::Instance()->Write(XLOGEVENT_LOCATION,CLogger::eState::ERROR, "BIO");
ERR_print_errors_fp(stderr);
if(ctx!=NULL)SSL_CTX_free(ctx);
if(bio!=NULL)BIO_free_all(bio);
return "";
}
// Makes ssl point to bio's SSL object.
BIO_get_ssl(bio, &ssl);
// Set the SSL to automatically retry on failure.
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
// We're connection to google.com on port 443.
std::string strHost = GetHostFromURL(strHostName);
strHost+=":https";
//
BIO_set_conn_hostname(bio, strHost.data());
// Same as before, try to connect.
if (BIO_do_connect(bio) <= 0)
{
CLogger::Instance()->Write(XLOGEVENT_LOCATION,CLogger::eState::ERROR, "cannot connect");
if(ctx!=NULL)SSL_CTX_free(ctx);
if(bio!=NULL)BIO_free_all(bio);
return "";
}
// Now we need to do the SSL handshake, so we can communicate.
if (BIO_do_handshake(bio) <= 0)
{
CLogger::Instance()->Write(XLOGEVENT_LOCATION,CLogger::eState::ERROR, "SSL Handshake");
if(ctx!=NULL)SSL_CTX_free(ctx);
if(bio!=NULL)BIO_free_all(bio);
return "";
}
// Create a buffer for grabbing information from the page.
char buf[1024];
memset(buf, 0, sizeof(buf));
// String that accumulates the full response.
std::string strOutput;
// BIO_puts sends a null-terminated string to the server.
BIO_puts(bio, strRequest.c_str());
int iChars = 0;
while (1)
{
iChars = BIO_read(bio, buf, sizeof(buf)-1);
// Close reading
if (iChars <= 0)
break;
// Terminate the string
buf[iChars] = 0;
// Add to the final output
strOutput.append(buf);
}
SSL_shutdown(ssl);
SSL_CTX_free(ctx);
BIO_free_all(bio);
return strOutput;
}
private:
};
And the main program calling the class method
while(1)
{
for(int a = 0; a < m_vHostNames.size(); a++)
{
std::string strOutput = CConnector::GetData(m_vHostNames[a]);
// Process the data
}
sleep(10000);
}
The debugger/profiler output:
Question:
Do I free the OpenSSL objects correctly, or is there something else required?
Thank you for any input into this.
I am using openssl and zmq to write a server and a client.
My client and server need mutual authentication.
But after I set SSL_CTX_set_verify(ssl_ctx, SSL_VERIFY_FAIL_IF_NO_PEER_CERT, NULL) on the server, the handshake always succeeds whether the client sends a certificate or not.
In addition, SSL_get_peer_certificate(tls->get_ssl_()) returns null and SSL_get_verify_result(tls->get_ssl_()) returns 0, which means X509_V_OK.
I am really confused and desperate now. Any suggestions or corrections?
This is part of my code:
OpenSSL_add_all_algorithms();
SSL_library_init();
SSL_load_error_strings();
ERR_load_BIO_strings();
const SSL_METHOD *meth;
SSL_CTX *ssl_ctx;
//**************************part of client************************
{
meth = SSLv23_client_method();
ssl_ctx = SSL_CTX_new(meth);
SSL_CTX_set_verify(ssl_ctx,SSL_VERIFY_PEER,NULL);
int rc1 = SSL_CTX_load_verify_locations(ssl_ctx, ".\\demoCA\\private\\server_chain.pem",".\\demoCA\\private\\");///
SSL_CTX_set_default_passwd_cb_userdata(ssl_ctx,"pw");
std::string cert_chain(".\\demoCA\\private\\client_chain.pem");
std::string cert(".\\demoCA\\private\\client_crt.pem");
std::string key(".\\demoCA\\private\\client_key.pem");
int code = SSL_CTX_use_certificate_chain_file(ssl_ctx,cert_chain.c_str());
if (code != 1)
{
std::cout<<"error1\n";
//throw TLSException("failed to read credentials.");
}
code = SSL_CTX_use_PrivateKey_file(ssl_ctx,key.c_str(),SSL_FILETYPE_PEM);
if (code != 1)
{
std::cout<<"error2\n";
//throw TLSException("failed to read credentials.");
}
if(!SSL_CTX_check_private_key(ssl_ctx))
{
std::cout<<"key wrong";
system("pause");
exit(0);
}
}
//*****************part of server****************************
{
meth = SSLv23_server_method();
ssl_ctx = SSL_CTX_new(meth);
SSL_CTX_set_verify(ssl_ctx,SSL_VERIFY_FAIL_IF_NO_PEER_CERT,NULL);
SSL_CTX_set_client_CA_list(ssl_ctx,SSL_load_client_CA_file(".\\demoCA\\private\\client_chain.pem"));//
SSL_CTX_set_default_passwd_cb_userdata(ssl_ctx,"pw");
std::string cert_chain(".\\demoCA\\private\\server_chain.pem");
std::string cert(".\\demoCA\\private\\server_crt.pem");
std::string key(".\\demoCA\\private\\server_key.pem");
int rc = SSL_CTX_use_certificate_file(ssl_ctx,cert.c_str(),SSL_FILETYPE_PEM);
if (rc!=1)
{
//throw TLSException("failed to read credentials.");
std::cout<<"error1\n";
}
rc = SSL_CTX_use_PrivateKey_file(ssl_ctx,key.c_str(),SSL_FILETYPE_PEM);
if (rc!=1)
{
//throw TLSException("failed to read credentials.");
std::cout<<"error2\n";
}
int rcode = SSL_CTX_check_private_key(ssl_ctx);
if(rcode!=1)
{
std::cout<<"key wrong";
system("pause");
//exit(0);
}
}
From the documentation of SSL_CTX_set_verify:
SSL_VERIFY_FAIL_IF_NO_PEER_CERT
Server mode: if the client did not return a certificate, the TLS/SSL handshake is immediately terminated with a "handshake failure" alert. This flag must be used together with SSL_VERIFY_PEER.
You did not use it together with SSL_VERIFY_PEER as described in the documentation and thus it has no effect.
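For example, on the server (keeping the question's variable name):
// Request a client certificate and fail the handshake if none is presented.
SSL_CTX_set_verify(ssl_ctx,
                   SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT,
                   NULL);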
I googled a lot and didn't get an answer, hence posting it here.
In the following C program (the server code) I want a Unix domain socket server listening at /tmp/unix-test-socket. My problem is that the client code successfully connects to the server. However, once it's connected and I have "accepted" the connection, the select call does NOT block.
So let me explain.
Initially the unix_domain_socket = 3
As soon as I get the first request, I accept the connection and store it in unix_domain_socket_connections[max_unix_domain_socket_connections]. The value of the new socket fd is 4.
When I run it, the server code goes into a loop because the select call always believes there is data on socket 4.
I run the CLIENT side as:
./unix-client "/tmp/unix-test-socket" SEND_DATA
Output from the SERVER side:
Client sent us a message!
Successfully accepted the new ION connection with fd 4!
[program_select_to_look_at_right_sockets]: Storing fd 4
Data Arrived on UNIX domain socket 4
length 10 SEND_DATA <-- I get the data sent by the client
[program_select_to_look_at_right_sockets]: Storing fd 4 *<-- Why isn't select blocking and why does it think there is still data on socket 4?*
Data Arrived on UNIX domain socket 4
[program_select_to_look_at_right_sockets]: Storing fd 4
Data Arrived on UNIX domain socket 4
SERVER CODE:
int unix_domain_socket = 0;
int max_unix_domain_socket_connections;
int unix_domain_socket_connections[2];
char *unix_domain_socket_name = "/tmp/unix-test-socket";
int open_unix_domain_server()
{
int socket_fd, result;
struct sockaddr_un name;
int client_sent_quit_message;
socklen_t socket_length;
max_unix_domain_socket_connections = 0;
memset((void *) &name, 0, sizeof(name));
socket_fd = socket(AF_LOCAL, SOCK_STREAM, 0);
name.sun_family = AF_UNIX;
strcpy(name.sun_path, unix_domain_socket_name);
socket_length = strlen(name.sun_path) + sizeof(name.sun_family);
/* Remove this socket if it already exists */
unlink(name.sun_path);
result = bind(socket_fd, (struct sockaddr *) &name, socket_length);
if (result < 0)
goto Error;
result = listen(socket_fd, MAX_UNIX_DOMAIN_SOCKETS);
return socket_fd;
Error:
printf("[%s] Error in either listen or bind!\n", __FUNCTION__);
return -1;
}
int accept_new_unix_domain_connection()
{
int client_fd;
struct sockaddr_un new_connection;
socklen_t new_conn_length = sizeof(new_connection);
memset((void *) &new_connection, 0, sizeof(new_connection));
client_fd = accept(unix_domain_socket, (struct sockaddr *) &new_connection,
&new_conn_length);
if (client_fd < 0)
{
printf("The following error occurred accept failed %d %d\n", errno,
unix_domain_socket);
}
unix_domain_socket_connections[max_unix_domain_socket_connections] =
client_fd;
max_unix_domain_socket_connections++;
return client_fd;
}
int check_if_new_client_is_unix_domain(fd_set readfds)
{
int unix_fd = 0;
for (unix_fd = 0; unix_fd < 2; unix_fd++)
{
if (FD_ISSET(unix_domain_socket_connections[unix_fd], &readfds))
{
printf("Data Arrived on UNIX domain socket %d\n",
unix_domain_socket_connections[unix_fd]);
return 1;
}
}
return 0;
}
int process_data_on_unix_domain_socket(int unix_socket)
{
int length = 0;
char* data_from_gridFtp;
/* First, read the length of the text message from the socket. If
read returns zero, the client closed the connection. */
if (read(unix_socket, &length, sizeof(length)) == 0)
return 0;
/* Allocate a buffer to hold the text. */
data_from_gridFtp = (char*) malloc(length + 1);
/* Read the text itself, and print it. */
recv(unix_socket, data_from_gridFtp, length, 0);
printf("length %d %s\n", length, data_from_gridFtp);
return length;
}
void program_select_to_look_at_right_sockets(fd_set *readfds, int *maxfds)
{
int unix_fd = 0;
FD_ZERO(readfds);
FD_SET(unix_domain_socket, readfds);
for (unix_fd = 0; unix_fd < 2; unix_fd++)
{
if (unix_domain_socket_connections[unix_fd])
{
printf("[%s]: Storing fd %d\n", __FUNCTION__,
unix_domain_socket_connections[unix_fd]);
FD_SET(unix_domain_socket_connections[unix_fd], readfds);
if (*maxfds < unix_domain_socket_connections[unix_fd])
*maxfds = unix_domain_socket_connections[unix_fd];
}
}
}
int main(int argc, char**argv)
{
int result, maxfds, clientfd, loop;
fd_set readfds;
int activity;
socklen_t client_len;
struct sockaddr_in client_address;
FD_ZERO(&readfds);
unix_domain_socket = open_unix_domain_server();
if (unix_domain_socket < 0)
return -1;
maxfds = unix_domain_socket;
FD_SET(unix_domain_socket, &readfds);
for (loop = 0; loop < 4; loop++)
{
program_select_to_look_at_right_sockets(&readfds, &maxfds);
activity = select(maxfds + 1, &readfds, NULL, NULL, NULL);
if (FD_ISSET(unix_domain_socket, &readfds))
{
printf("client sent us a message!\n");
clientfd = accept_new_unix_domain_connection();
if (clientfd < 0)
break;
}
else if (check_if_new_client_is_unix_domain(readfds))
{
process_data_on_unix_domain_socket(clientfd);
}
}
}
CLIENT CODE:
/* Write TEXT to the socket given by file descriptor SOCKET_FD. */
void write_text(int socket_fd, const char* text)
{
/* Write the number of bytes in the string, including
NUL-termination. */
int length = strlen(text) + 1;
send(socket_fd, &length, sizeof(length), 0);
/* Write the string. */
send(socket_fd, text, length, 0);
}
int main(int argc, char* const argv[])
{
const char* const socket_name = argv[1];
const char* const message = argv[2];
int socket_fd;
struct sockaddr_un name;
/* Create the socket. */
socket_fd = socket(PF_LOCAL, SOCK_STREAM, 0);
/* Store the server’s name in the socket address. */
name.sun_family = AF_UNIX;
strcpy(name.sun_path, socket_name);
/* Connect the socket. */
connect(socket_fd, (struct sockaddr *) &name, SUN_LEN(&name));
/* Write the text on the command line to the socket. */
write_text(socket_fd, message);
close(socket_fd);
return 0;
}
You will find that select() returns "ready for reading" if the far end has closed the connection. The rule for "ready for reading" is that it is true iff a read() would not block, and read() does not block when it returns 0 (end of file).
According to the Linux select() man page, there is a bug note related to this behaviour:
Under Linux, select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready.
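A sketch of how the read path in process_data_on_unix_domain_socket() could handle the closed connection, using the question's names (dropping the fd by zeroing its slot is an assumption about how unix_domain_socket_connections is meant to be maintained):
/* When read() returns 0 the peer has closed the connection: close the
   descriptor and stop select()ing on it, otherwise it stays "readable". */
int bytes_read = read(unix_socket, &length, sizeof(length));
if (bytes_read == 0)
{
    int unix_fd;
    close(unix_socket);
    for (unix_fd = 0; unix_fd < 2; unix_fd++)
    {
        if (unix_domain_socket_connections[unix_fd] == unix_socket)
            unix_domain_socket_connections[unix_fd] = 0;
    }
    return 0;
}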
On the other hand, I recommend you reconsider the strategy for handling activity and processing data inside the loop (irrelevant parts removed):
for (loop = 0; loop<4; loop++)
{
// ...
activity = select( maxfds + 1 , &readfds , NULL , NULL , NULL);
// ...
}
This will block on the first socket while the 2nd, 3rd, and 4th might be ready. At least use a timeout and check errno to handle the timeout case. See the select man pages for more info.
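For example (a minimal sketch; the five-second timeout is an arbitrary choice):
/* select() with a timeout so the loop cannot stall indefinitely on one socket. */
struct timeval tv;
tv.tv_sec = 5;
tv.tv_usec = 0;
activity = select(maxfds + 1, &readfds, NULL, NULL, &tv);
if (activity < 0 && errno != EINTR)
    perror("select");
else if (activity == 0)
    printf("select() timed out with no data\n");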
I am creating a Winsock UDP program. The code I am using is shown below.
I am always getting a port assignment error.
I am not able to understand why the allocated port is always zero. If someone can help me with this...
void UDPecho(const char *, const char *);
void errexit(const char *, ...);
#define LINELEN 128
#define WSVERS MAKEWORD(2, 0)
void main(int argc, char *argv[])
{
char *host = "localhost";
char *service = "echo";
WSADATA wsadata;
switch (argc) {
case 1:
host = "localhost";
break;
case 3:
service = argv[2];
/* FALL THROUGH */
case 2:
host = argv[1];
break;
default:
fprintf(stderr, "usage: UDPecho [host [port]]\n");
exit(1);
}
if (WSAStartup(WSVERS, &wsadata))
errexit("WSAStartup failed\n");
UDPecho(host, service);
WSACleanup();
exit(0);
}
void UDPecho(const char *host, const char *service)
{
char buf[LINELEN+1];
SOCKET s;
int nchars;
struct hostent *phe;
struct servent *pse;
struct protoent *ppe;
struct sockaddr_in sin, my_sin;
int type, status, client_port, size;
char *transport = "udp";
memset(&sin, 0, sizeof(sin));
sin.sin_family = AF_INET;
/* Map service name to port number */
if ( pse = getservbyname(service, transport) )
sin.sin_port = pse->s_port;
else if ( (sin.sin_port = htons((u_short)atoi(service)))== 0)
errexit("can't get \"%s\" service entry\n", service);
/* Map host name to IP address, allowing for dotted decimal */
if ( phe = gethostbyname(host) )
memcpy(&sin.sin_addr, phe->h_addr, phe->h_length);
else if ( (sin.sin_addr.s_addr = inet_addr(host)) == INADDR_NONE)
errexit("can't get \"%s\" host entry\n", host);
printf("Our target server is at address %s\n", inet_ntoa(sin.sin_addr));
printf("The size of an FD set is %d\n", sizeof(FD_SET));
/* Map protocol name to protocol number */
if ( (ppe = getprotobyname(transport)) == 0)
errexit("can't get \"%s\" protocol entry\n", transport);
/* Use protocol to choose a socket type */
if (strcmp(transport, "udp") == 0)
type = SOCK_DGRAM;
else
type = SOCK_STREAM;
/* Allocate a socket */
s = socket(PF_INET, type, ppe->p_proto);
if (s == INVALID_SOCKET)
errexit("can't create socket: %d\n", GetLastError());
size = sizeof(sin);
memset(&my_sin, 0, sizeof(sin));
getsockname (s, (struct sockaddr *) &my_sin, &size);
client_port = ntohs(my_sin.sin_port);
if (client_port != 0)
printf ("We are using port %2d\n", client_port);
else {
printf("No port assigned yet\n");
}
}
void errexit(const char *format, ...)
{
va_list args;
va_start(args, format);
vfprintf(stderr, format, args);
va_end(args);
WSACleanup();
exit(1);
}
UDP doesn't bind to the listening port until you either issue a sendto() or a bind() on the socket. The latter lets you select the port that you want to listen on. Sendto(), on the other hand, will pick an ephemeral port for you. I would expect that the port will remain zero until you do one of these two things.
Clarification
I looked into this a little more after some of the comments. According to the Single UNIX Specification, the result of calling socket() is an unbound socket. A socket is bound explicitly by calling bind() or implicitly by sendto().
Think of a socket's name as a tuple containing its (Address Family, Protocol, local IP Address, and local Port Number). The first two are specified in the socket() call and the last two by calling bind(). In the case of connectionless protocols, a call to sendto() on a disconnected socket will result in an implicit bind to an OS-chosen port number.
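A minimal sketch of the explicit route, reusing the question's variables (error handling omitted; binding to port 0 asks the stack to choose the port):
/* Bind to an OS-chosen port; afterwards getsockname() reports the real port. */
struct sockaddr_in local;
memset(&local, 0, sizeof(local));
local.sin_family = AF_INET;
local.sin_addr.s_addr = htonl(INADDR_ANY);
local.sin_port = htons(0);                  /* 0 = let the stack choose */
bind(s, (struct sockaddr *) &local, sizeof(local));
size = sizeof(my_sin);
getsockname(s, (struct sockaddr *) &my_sin, &size);
client_port = ntohs(my_sin.sin_port);       /* now a real, non-zero port */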
The most surprising thing is that the only reference that I can find to this behavior is in the remarks section of the Microsoft documentation for sendto().
If the socket is unbound, unique values are assigned to the local association by the system and the socket is then marked as bound. An application can use getsockname (Windows Sockets) to determine the local socket name in this case.
The Single UNIX Specification for getsockname() states:
If the socket has not been bound to a local name, the value stored in the object pointed to by address is unspecified.
It seems that a successful return with an unspecified result is the "standard" behavior... hmmm... The implementations that I have tried all return successfully with a socket address of 0.0.0.0:0 which corresponds to INADDR_ANY with an unspecified port. After calling either bind() or sendto(), getsockname() returns a populated socket address though the address portion might still be INADDR_ANY.