I am experimenting with IPv6 sockets, particularly the "dual stack" capability offered on Windows Vista and later (and apparently enabled by default on some Unix systems). I am finding that when I bind my server to a specific IP address, or to an address resolved from my local machine's hostname, I cannot accept a connection from an IPv4 client. When I bind to the wildcard address (INADDR_ANY), however, I can.
Please consider the following code for my server. You can see that I follow Microsoft's advice of creating an IPv6 socket, then setting the IPV6_V6ONLY flag to zero:
addrinfo* result, *pCurrent, hints;
memset(&hints, 0, sizeof hints); // Must do this!
hints.ai_family = AF_INET6;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE; // We intend to use the returned address in a call to bind(). (I know AI_PASSIVE is ignored when a node name is specified...)
int nRet = getaddrinfo("powerhouse", "82", &hints, &result);
if (nRet != 0)
    return -1;
SOCKET sock = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
int no = 0;
if (setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY, (char*)&no, sizeof(no)) != 0)
return -1;
if (bind(sock, result->ai_addr, result->ai_addrlen) == SOCKET_ERROR)
return -1;
if (listen(sock, SOMAXCONN) == SOCKET_ERROR)
return -1;
SOCKET sockClient = accept(sock, NULL, NULL);
Here is the code for my client. You can see I create an IPv4 socket and attempt to connect to my server:
addrinfo* result, *pCurrent, hints;
memset(&hints, 0, sizeof hints); // Must do this!
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
if (getaddrinfo("powerhouse", "82", &hints, &result) != 0)
return -1;
SOCKET sock = socket(result->ai_family, result->ai_socktype, result->ai_protocol);
int nRet = connect(sock, result->ai_addr, result->ai_addrlen);
The result from my connect() call is always error 10061 (WSAECONNREFUSED): connection refused.
If I change my server code to bind to :: (or, equivalently, pass a NULL host to getaddrinfo()), and change my client code to pass a NULL host in its getaddrinfo() call, then the IPv4 client can connect fine.
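In case it helps, this is roughly what the working wildcard variant of my server setup looks like (just a sketch, same port 82 as above, error checks trimmed):

// Wildcard variant: a NULL node name plus AI_PASSIVE resolves to the wildcard address (::)
addrinfo hints = {}, *result = NULL;
hints.ai_family = AF_INET6;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;
if (getaddrinfo(NULL, "82", &hints, &result) != 0)
    return -1;
SOCKET sock = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
int no = 0;
setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY, (char*)&no, sizeof(no)); // dual stack, set before bind
bind(sock, result->ai_addr, (int)result->ai_addrlen);                // binds to [::]:82
listen(sock, SOMAXCONN);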
Can anyone explain why this is, please? I have not read anywhere that we must specify a NULL host (i.e. bind to the wildcard address) to get dual-stack behaviour. Surely that can't be a requirement: what if I have a multihomed host and want to accept IPv4 connections on only some of the available IPs?
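To illustrate what I mean by that last point: on a multihomed box I was hoping to keep the socket dual-stack but bind it to just one of the machine's IPv4 addresses, written as an IPv4-mapped IPv6 literal. Something like the following sketch (the address is made up purely for illustration):

// Hypothetical: dual-stack socket bound to one specific IPv4 address,
// expressed in its IPv4-mapped form (::ffff:a.b.c.d). Address below is invented.
sockaddr_in6 addr = {};
addr.sin6_family = AF_INET6;
addr.sin6_port = htons(82);
inet_pton(AF_INET6, "::ffff:192.168.1.10", &addr.sin6_addr);
SOCKET sock = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
int no = 0;
setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY, (char*)&no, sizeof(no));
bind(sock, (sockaddr*)&addr, sizeof(addr));
listen(sock, SOMAXCONN);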
EDIT 15/05/2013:
This is the relevant documentation which has gotten me confused as to why my code fails:
From Dual-Stack Sockets for IPv6 Winsock Applications
"Windows Vista and later offer the ability to create a single IPv6
socket which can handle both IPv6 and IPv4 traffic. For example, a TCP
listening socket for IPv6 is created, put into dual stack mode, and
bound to port 5001. This dual-stack socket can accept connections from
IPv6 TCP clients connecting to port 5001 and from IPv4 TCP clients
connecting to port 5001."
"By default, an IPv6 socket created on Windows Vista and later only
operates over the IPv6 protocol. In order to make an IPv6 socket into
a dual-stack socket, the setsockopt function must be called with the
IPV6_V6ONLY socket option to set this value to zero before the socket
is bound to an IP address. When the IPV6_V6ONLY socket option is set
to zero, a socket created for the AF_INET6 address family can be used
to send and receive packets to and from an IPv6 address or an IPv4
mapped address. (emphasis mine)"
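For reference, my reading of that passage boils down to the following sequence (a minimal sketch using the port 5001 from the quoted example; WSAStartup and error handling omitted):

// Dual-stack listener as I understand the quoted documentation:
// create an AF_INET6 socket, clear IPV6_V6ONLY before bind, bind to ::, listen.
SOCKET s = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
int no = 0;
setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, (char*)&no, sizeof(no));
sockaddr_in6 addr = {};
addr.sin6_family = AF_INET6;
addr.sin6_port = htons(5001);   // port used in the documentation's example
addr.sin6_addr = in6addr_any;   // wildcard (::)
bind(s, (sockaddr*)&addr, sizeof(addr));
listen(s, SOMAXCONN);
// IPv4 clients should then appear via accept() as IPv4-mapped addresses (::ffff:a.b.c.d).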
See Question&Answers more detail:
os