but my problem is, this callback function is being executed after SSL_accept, but I have to choose and use the appropriate certificate before calling SSL_new, which is way before SSL_accept.
When you start your server, you provide a default SSL_CTX. This is used for non-SNI clients, like SSLv3 clients and TLS clients that don't utilize SNI (like Windows XP). The default context is needed because the callback is not invoked in those situations.
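For illustration, a minimal sketch of that setup might look like the following. The file names, CreateDefaultContext, and the omitted error handling are placeholders of mine; ServerNameCallback is the SNI callback shown later in this answer.

#include <openssl/ssl.h>

/* The SNI callback shown later in this answer. */
static int ServerNameCallback(SSL *ssl, int *ad, void *arg);

static SSL_CTX* CreateDefaultContext(void)
{
    /* Default context: used as-is for SSLv3 and non-SNI TLS clients. */
    SSL_CTX* ctx = SSL_CTX_new(SSLv23_server_method());
    if (ctx == NULL)
        return NULL;

    /* Default certificate and key (placeholder file names). */
    SSL_CTX_use_certificate_chain_file(ctx, "default-cert.pem");
    SSL_CTX_use_PrivateKey_file(ctx, "default-key.pem", SSL_FILETYPE_PEM);

    /* Register the SNI callback on the default context. It runs during
       the handshake (inside SSL_accept), after SSL_new has already been
       called with this default context. */
    SSL_CTX_set_tlsext_servername_callback(ctx, ServerNameCallback);

    return ctx;
}

Each accepted connection then starts from this default context (SSL_new on the default SSL_CTX), and the callback swaps in a different context only when the client actually sends a server name.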
Here are some examples to tickle the behavior using OpenSSL's s_client. To simulate a non-SNI client so that your get_ssl_servername_cb is not called, issue:
# SSLv3 client (SNI was added at TLSv1, so no SNI is sent)
openssl s_client -connect localhost:8443 -ssl3

# TLSv1 client that does not send SNI (for example, a Windows XP client)
openssl s_client -connect localhost:8443 -tls1
To simulate a SNI client so that your get_ssl_servername_cb is called, issue:

openssl s_client -connect localhost:8443 -tls1 -servername localhost
You can also avoid the certificate verification errors by adding -CAfile. This is from one of my test scripts (for testing DSS/DSA certificates on localhost):
printf "GET / HTTP/1.1
" | /usr/local/ssl/bin/openssl s_client
-connect localhost:8443 -tls1 -servername localhost
-CAfile pki/signing-dss-cert.pem
so my question is, how can I use the "SSL_CTX_set_tlsext_servername_callback" function for SNI?
See the OpenSSL source code at <openssl dir>/apps/s_server.c; or see How to implement Server Name Indication (SNI) on OpenSSL in C or C++?.
In your get_ssl_servername_cb (set with SSL_CTX_set_tlsext_servername_callback), you examine the server name. One of two situations occurs: either you already have an SSL_CTX for the server's name, or you need to create an SSL_CTX for the server's name.
Once you fetch the SSL_CTX from the cache or create a new SSL_CTX, you then use SSL_set_SSL_CTX to swap in the context. There's an example of swapping in the new context in the OpenSSL sources: see the code for s_server.c (in <openssl dir>/apps/s_server.c) and follow the trail of ctx2.
Here's what it looks like in one of my projects. IsDomainInDefCert determines if the requested server name is provided by the default server certificate. If not, GetServerContext fetches the needed SSL_CTX. GetServerContext pulls the needed certificate out of an app-level cache, or creates it and puts it in the app-level cache (GetServerContext also takes a reference on the SSL_CTX so the OpenSSL library does not delete it from under the app).
static int ServerNameCallback(SSL *ssl, int *ad, void *arg)
{
    UNUSED(ad);
    UNUSED(arg);

    ASSERT(ssl);
    if (ssl == NULL)
        return SSL_TLSEXT_ERR_NOACK;

    const char* servername = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);

    ASSERT(servername && servername[0]);
    if (!servername || servername[0] == '\0')
        return SSL_TLSEXT_ERR_NOACK;

    /* Does the default cert already handle this domain? */
    if (IsDomainInDefCert(servername))
        return SSL_TLSEXT_ERR_OK;

    /* Need a new certificate for this domain */
    SSL_CTX* ctx = GetServerContext(servername);

    ASSERT(ctx != NULL);
    if (ctx == NULL)
        return SSL_TLSEXT_ERR_NOACK;

    /* Useless return value */
    SSL_CTX* v = SSL_set_SSL_CTX(ssl, ctx);

    ASSERT(v == ctx);
    if (v != ctx)
        return SSL_TLSEXT_ERR_NOACK;

    return SSL_TLSEXT_ERR_OK;
}
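For completeness, here is a hypothetical sketch of what GetServerContext could look like. CacheLookup, CacheInsert, and LoadCertificateFor are made-up placeholders for the app-level cache and certificate loading, and SSL_CTX_up_ref is the OpenSSL 1.1.0+ way to take the extra reference mentioned above (older code used CRYPTO_add). My real implementation may differ.

#include <openssl/ssl.h>

/* Placeholders for an app-level cache and certificate loading. */
SSL_CTX* CacheLookup(const char* servername);
void     CacheInsert(const char* servername, SSL_CTX* ctx);
int      LoadCertificateFor(SSL_CTX* ctx, const char* servername);

static SSL_CTX* GetServerContext(const char* servername)
{
    /* App-level cache hit: return the existing context. */
    SSL_CTX* ctx = CacheLookup(servername);
    if (ctx != NULL)
        return ctx;

    /* Cache miss: create a context for this server name. */
    ctx = SSL_CTX_new(SSLv23_server_method());
    if (ctx == NULL)
        return NULL;

    /* Load the certificate and key for this server name. */
    if (!LoadCertificateFor(ctx, servername)) {
        SSL_CTX_free(ctx);
        return NULL;
    }

    /* Take a reference for the app-level cache so the OpenSSL
       library does not delete the context from under the app. */
    SSL_CTX_up_ref(ctx);
    CacheInsert(servername, ctx);

    return ctx;
}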
In the code above, ad and arg are unused parameters. I don't know what ad does because I don't use it. arg can be used to pass a context into the callback. I don't use arg either, but s_server.c uses it to print some debug information (the arg is a pointer to BIOs tied to stderr, and a few others, IIRC).
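If you do want to hand your own state to the callback, the companion macro SSL_CTX_set_tlsext_servername_arg sets the pointer that arrives as arg. A rough sketch, where ServerState and RegisterSniCallback are hypothetical names of mine:

#include <openssl/ssl.h>

/* Hypothetical state object passed to the callback through arg. */
struct ServerState {
    SSL_CTX* default_ctx;   /* the non-SNI fallback context       */
    /* ... app-level certificate cache, logging handles, etc. ... */
};

static struct ServerState g_state;

static int ServerNameCallback(SSL *ssl, int *ad, void *arg);  /* as above */

void RegisterSniCallback(SSL_CTX* default_ctx)
{
    g_state.default_ctx = default_ctx;

    /* The library passes &g_state as the third (arg) parameter
       every time ServerNameCallback is invoked. */
    SSL_CTX_set_tlsext_servername_callback(default_ctx, ServerNameCallback);
    SSL_CTX_set_tlsext_servername_arg(default_ctx, &g_state);
}

Inside the callback you would then cast arg back to struct ServerState* to reach the cache or the default context.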
For completeness, SSL_CTX objects are reference counted and can be re-used. A newly created SSL_CTX has a count of 1, which is delegated to the OpenSSL internal caching mechanism. When you hand the SSL_CTX to an SSL object, the count increments to 2. When the SSL object calls SSL_CTX_free on the SSL_CTX, the function decrements the reference count. If the context has expired and the reference count is 1, then the OpenSSL library will delete it from its internal cache.
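A minimal sketch of that counting, assuming no other references are held and with error handling omitted (SSL_new takes its own reference on the context, and SSL_free gives it back):

#include <openssl/ssl.h>

void ReferenceCountSketch(void)
{
    SSL_CTX* ctx = SSL_CTX_new(SSLv23_server_method());  /* refcount: 1 */
    SSL* ssl = SSL_new(ctx);                              /* refcount: 2 */

    /* ... SSL_accept, read/write, shutdown ... */

    SSL_free(ssl);       /* refcount drops back to 1 */
    SSL_CTX_free(ctx);   /* last reference released; context is destroyed */
}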