WebRTC DTLS-SRTP OpenSSL Server Handshake Failure - c++

Here is my procedure in OpenSSL server mode.
Initialization of the SSL and BIO variables:
map<int, SSL*> m_SSLMap;
map<int, BIO*> m_BioWriteMap;
map<int, BIO*> m_BioReadMap;
int InitializeServerNegotiationMode(int iFd)
{
    SSL *pServSslFd;
    BIO *pWb = NULL, *pRb = NULL;
    pServSslFd = SSL_new(m_pCtx);
    assert(pServSslFd);
    if (SSL_version(pServSslFd) == DTLS1_VERSION)
    {
        pWb = BIO_new(BIO_s_mem());
        pRb = BIO_new(BIO_s_mem());
        assert(pWb);
        assert(pRb);
        SSL_set_bio(pServSslFd, pRb, pWb);
        SSL_set_accept_state(pServSslFd);
    }
    // Store pointers; SSL/BIO objects must not be copied by value
    m_SSLMap[iFd] = pServSslFd;
    m_BioReadMap[iFd] = pRb;
    m_BioWriteMap[iFd] = pWb;
    return INITIALIZATION_SUCCESS;
}
Server-mode negotiation operations, run when DTLS data comes to the server:
int ServerModeDTLSNegotiation(int iChannel, const char *pBuff, const int iLen, int iFd)
{
    SSL *pServSslFd = m_SSLMap[iFd];
    BIO *pRbio = m_BioReadMap[iFd];
    BIO *pWbio = m_BioWriteMap[iFd];
    char buff[4096];
    memset(buff, 0, sizeof(buff));
    // Feed the incoming datagram to the read BIO, then drive the handshake
    BIO_write(pRbio, pBuff, iLen);
    if (!SSL_is_init_finished(pServSslFd))
    {
        SSL_do_handshake(pServSslFd);
    }
    // Anything the handshake produced is waiting in the write BIO
    int iNewLen = BIO_read(pWbio, buff, sizeof(buff));
    if (iNewLen > 0)
    {
        char *pNewData = new char[iNewLen];
        memcpy(pNewData, buff, iNewLen);
        // pNewData is not freed here; SendReply must take ownership or this leaks
        m_pEventHandler->SendReply(iChannel, (unsigned char *)pNewData, iNewLen);
    }
    else
    {
        printf("[DTLS]:: Handshaking response failed for this data\n");
        return -1;
    }
    return NEGOTIATION_SUCCESS;
}
Here I am attaching a Wireshark capture for a better view of the issue:
https://www.dropbox.com/s/quidcs6gilnvt2o/WebRTC%20DTLS%20Handshake%20Failure.pcapng?dl=0
Now, I am fairly confident about my initialization of the SSL_CTX variable, because sometimes the handshake negotiates successfully on every port, yet sometimes it fails for one or two ports. I have been working for 5 days on WebRTC DTLS server-mode negotiation with Google Chrome, but I haven't found the root cause of this problem.

The link for the TCP dump is not working.
Anyway, it seems your approach should work.
As it's a server program, it's almost certainly multi-threaded, and it is dangerous to initialize the SSL variables or to perform the handshake procedure without locking; many things can go wrong if these two methods are executed by multiple threads at once.
My suggestion is to add a locking mechanism to these methods, for example the sketch below.
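A minimal sketch, assuming C++11 and that both methods are members of the same class (m_sslMapMutex is a name introduced here for illustration):

#include <mutex>

std::mutex m_sslMapMutex; // guards m_SSLMap, m_BioReadMap and m_BioWriteMap

int InitializeServerNegotiationMode(int iFd)
{
    std::lock_guard<std::mutex> lock(m_sslMapMutex);
    // ... create the SSL/BIO objects and insert them into the maps ...
    return INITIALIZATION_SUCCESS;
}

int ServerModeDTLSNegotiation(int iChannel, const char *pBuff, const int iLen, int iFd)
{
    std::lock_guard<std::mutex> lock(m_sslMapMutex);
    // ... look up the SSL/BIO objects and drive the handshake as above ...
    return NEGOTIATION_SUCCESS;
}

A coarse single mutex is the simplest fix; per-fd locks would let handshakes on different connections proceed in parallel.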

Related

ZMQ_HEARTBEAT_TTL does not discard outgoing queue even if ZMQ_LINGER is set

I have a server which uses a ZMQ_ROUTER to communicate with ZMQ_DEALER clients. I set the ZMQ_HEARTBEAT_IVL and ZMQ_HEARTBEAT_TTL options on the client socket to make the client and server ping-pong each other. Besides, because of the ZMQ_HEARTBEAT_TTL option, the server will time out the connection if it does not receive any pings from the client within a time period, according to the zmq man page:
The ZMQ_HEARTBEAT_TTL option shall set the timeout on the remote peer for ZMTP heartbeats. If
this option is greater than 0, the remote side shall time out the connection if it does not
receive any more traffic within the TTL period. This option does not have any effect if
ZMQ_HEARTBEAT_IVL is not set or is 0. Internally, this value is rounded down to the nearest
decisecond, any value less than 100 will have no effect.
Therefore, what I expect the server to do is that, when it does not receive any traffic from a client within a time period, it will close the connection to that client and discard all the messages in the outgoing queue after the linger time expires. I created a toy example to check if my hypothesis is correct, and it turns out that it is not. The chain of events is as follows:
The server sends a bunch of data to the client.
The client receives and processes the data, which is slow.
All send commands return successfully.
While the client is still receiving the data, I unplug the internet cable.
After a few seconds (set by the ZMQ_HEARTBEAT_TTL option), the server starts sending FIN signals to the client, which are not being ACKed back.
The outgoing messages are not discarded (I check the memory consumption) even after a while. They are discarded only if I call zmq_close on the router socket.
So my question is, is this supposed to be how one uses the ZMQ heartbeat mechanism? If it is not, is there any solution for what I want to achieve? I figure I could do the heartbeat myself instead of using ZMQ's built-in one. However, even if I do, it seems that ZMQ does not provide a way to close a connection between a ZMQ_ROUTER and a ZMQ_DEALER, although another socket type, ZMQ_STREAM, provides a way to do this by sending an identity frame followed by an empty frame.
The toy example is below; any help would be appreciated.
Server's side:
#include <zmq.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
int main(int argc, char **argv)
{
    void *context = zmq_ctx_new();
    void *router = zmq_socket(context, ZMQ_ROUTER);
    int router_mandatory = 1;
    zmq_setsockopt(router, ZMQ_ROUTER_MANDATORY, &router_mandatory, sizeof(router_mandatory));
    int hwm = 0;
    zmq_setsockopt(router, ZMQ_SNDHWM, &hwm, sizeof(hwm));
    int linger = 3000;
    zmq_setsockopt(router, ZMQ_LINGER, &linger, sizeof(linger));
    char bind_addr[1024];
    sprintf(bind_addr, "tcp://%s:%s", argv[1], argv[2]);
    if (zmq_bind(router, bind_addr) == -1) {
        perror("ERROR");
        exit(1);
    }
    // Receive client identity (only 1)
    zmq_msg_t identity;
    zmq_msg_init(&identity);
    zmq_msg_recv(&identity, router, 0);
    zmq_msg_t dump;
    zmq_msg_init(&dump);
    zmq_msg_recv(&dump, router, 0);
    printf("%s\n", (char *) zmq_msg_data(&dump)); // hello
    zmq_msg_close(&dump);
    char buff[1 << 16];
    for (int i = 0; i < 50000; ++i) {
        if (zmq_send(router, zmq_msg_data(&identity),
                     zmq_msg_size(&identity),
                     ZMQ_SNDMORE) == -1) {
            perror("ERROR");
            exit(1);
        }
        if (zmq_send(router, buff, 1 << 16, 0) == -1) {
            perror("ERROR");
            exit(1);
        }
    }
    printf("OK IM DONE SENDING\n");
    // All send commands have returned successfully
    // While the client is still receiving data, I unplug the internet cable on the client machine
    // After a while, the server starts sending FIN signals
    printf("SLEEP before closing\n"); // At this point, the messages are not discarded (memory usage is high).
    getchar();
    zmq_close(router);
    zmq_ctx_destroy(context);
}
Client's side:
#include <zmq.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv)
{
    void *context = zmq_ctx_new();
    void *dealer = zmq_socket(context, ZMQ_DEALER);
    int heartbeat_ivl = 3000;
    int heartbeat_timeout = 6000;
    zmq_setsockopt(dealer, ZMQ_HEARTBEAT_IVL, &heartbeat_ivl, sizeof(heartbeat_ivl));
    zmq_setsockopt(dealer, ZMQ_HEARTBEAT_TIMEOUT, &heartbeat_timeout, sizeof(heartbeat_timeout));
    zmq_setsockopt(dealer, ZMQ_HEARTBEAT_TTL, &heartbeat_timeout, sizeof(heartbeat_timeout));
    int hwm = 0;
    zmq_setsockopt(dealer, ZMQ_RCVHWM, &hwm, sizeof(hwm));
    char connect_addr[1024];
    sprintf(connect_addr, "tcp://%s:%s", argv[1], argv[2]);
    zmq_connect(dealer, connect_addr);
    zmq_send(dealer, "hello", 6, 0);
    size_t size = 0;
    int i = 0;
    while (size < (1ll << 16) * 50000) {
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        if (zmq_msg_recv(&msg, dealer, 0) == -1) {
            perror("ERROR");
            exit(1);
        }
        size += zmq_msg_size(&msg);
        printf("i = %d, size = %ld, total = %ld\n", i, zmq_msg_size(&msg), size); // This causes the client to be slow
        // Somewhere in this loop I unplug the internet cable.
        // The client starts sending FIN signals as well as trying to reconnect. The recv command hangs forever.
        zmq_msg_close(&msg);
        ++i;
    }
    zmq_close(dealer);
    zmq_ctx_destroy(context);
}
PS: I know that setting the high-water mark to unlimited is bad practice; however, I figure the problem would be the same even with a low high-water mark, so let's ignore it for now.
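For reference, the ZMQ_STREAM disconnect mechanism mentioned above is just a two-frame send. A sketch, where stream is a socket of type ZMQ_STREAM and id/id_size hold a peer identity previously received from it:

// Force-close the TCP connection behind one ZMQ_STREAM peer
zmq_send(stream, id, id_size, ZMQ_SNDMORE);
zmq_send(stream, "", 0, 0); // zero-length frame = disconnect this peer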

C++ Creating multiple socket clients

I'm trying to build a simulation of multiple socket clients.
My server uses the following code to listen for multiple clients.
My sockets come from a very simple class derived from CAsyncSocket, and my environment is Windows MFC.
m_server.Create(....); // with the correct values
if (m_server.Listen()==FALSE)
and later, in the OnSocketAccept() function:
if (m_server.Accept(tempSock))
{
CSocketThread* pThread = (CSocketThread*)AfxBeginThread(RUNTIME_CLASS(CSocketThread), THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
...
My simulation app has the following code:
for (int i = 0; i < numOfClients; i++)
{
m_sConnected[i].Create();
int rVal = m_sConnected[i].Connect(csIPAddress.GetString(), m_port);
That doesn't work.
In Wireshark I can see that my clients (numOfClients = 10, for example) are connected, each with a different client source port.
But each new socket in m_sConnected[i] becomes NULL after the second connection, including m_sConnected[0].
Closing the sockets or destroying the simulation app causes socket closes on the server side for all open threads of the listening sockets.
What is the problem?
Can I use the same process/thread for all my socket clients?
Thanks,
UrAv.
Your problem is that you are not using the CSocketThread object the right way.
As mentioned in the Microsoft documentation, after the Accept call you need to do the following:
CSockThread* pSockThread = (CSockThread*)AfxBeginThread(RUNTIME_CLASS(CSockThread), THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
if (NULL != pSockThread) {
    // Detach the newly accepted socket and save
    // the SOCKET handle in our new thread object.
    // After detaching it, it should no longer be
    // used in the context of this thread.
    pSockThread->m_hConnected = sConnected.Detach();
    pSockThread->ResumeThread();
}
When you attach the socket to the thread, it will run.
Link to the Microsoft documentation:
https://msdn.microsoft.com/en-us/library/wxzt95kb.aspx
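For completeness, the receiving side of that handoff is typically done in the thread's InitInstance. A sketch, assuming CSockThread declares a SOCKET member m_hConnected and a CAsyncSocket-derived member m_sConnected:

BOOL CSockThread::InitInstance()
{
    // Re-attach the handle the listener thread detached; from here on
    // the socket is used only in this thread's context.
    m_sConnected.Attach(m_hConnected);
    return TRUE;
}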
Your solution has worked for me. I have used multiple threads to stress test the server in C++ under Linux. Pasting my code; it may be helpful to somebody. Experts can improve my code if they find any flaws in my handling. I know I am doing something wrong, but there was no other way to test the server, as no one had provided a solution for this until now. I am able to test the server for 100000 clients using this code. - Kranti
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <pthread.h> // for threading, link with -lpthread
void *connection_handler(void *);
#define PORT 9998
#define SERVER_IP "127.0.0.1"
#define MAXSZ 100
#define MAXSOCK 70000
int main()
{
    int sockfd[MAXSOCK]; // client sockets
    int new_socket[MAXSOCK], *new_sock;
    struct sockaddr_in serverAddress; // the clients will connect to this
    char msg1[MAXSZ];
    char msg2[MAXSZ];
    int NoOfClients = MAXSOCK;
    memset(msg2, 0, MAXSZ);
    pthread_t sniffer_thread[MAXSOCK];
    for (int i = 0; i < NoOfClients; i++) {
        // create socket
        sockfd[i] = socket(AF_INET, SOCK_STREAM, 0);
        // initialize the socket address
        memset(&serverAddress, 0, sizeof(serverAddress));
        serverAddress.sin_family = AF_INET;
        serverAddress.sin_addr.s_addr = inet_addr(SERVER_IP);
        serverAddress.sin_port = htons(PORT);
        // client connects to the server on PORT
        new_socket[i] = connect(sockfd[i], (struct sockaddr *)&serverAddress, sizeof(serverAddress));
        printf("connect returned %d\n", new_socket[i]);
        if (new_socket[i] < 0)
            continue; // connect failed, skip this client
        // pass the connected descriptor (not connect()'s return value) to the thread
        new_sock = (int *)malloc(sizeof(int));
        *new_sock = sockfd[i];
        // note: "if (p = pthread_create(...) < 0)" compared before assigning
        // because of operator precedence; the assignment must be parenthesized
        int p;
        if ((p = pthread_create(&sniffer_thread[i], NULL, connection_handler, (void *)new_sock)) != 0) {
            perror("could not create thread");
            return 1;
        }
    }
    return 0;
}
/*
 * This will handle the connection for each client
 */
void *connection_handler(void *socket_desc)
{
    // Get the socket descriptor
    int sock = *(int *)socket_desc;
    const char *message;
    printf("we are in the connection handler\n");
    // Send a message to the server
    message = "Greetings! I am your connection handler\n";
    int wlen = write(sock, message, strlen(message));
    printf("write length is %d\n", wlen);
    // Free the heap-allocated descriptor pointer (not the int value)
    free(socket_desc);
    //close(sock);
    return 0;
}

RPC: Port mapper failure - RPC: Unable to send on OpenMp Multithreading Application

I am trying to get NFS disk quotas via the RPC protocol. Using OpenMP, I am creating parallel UDP connections. The program runs without any error on 1 and 2 cores. When I increase the core count to 4, after a while the program gives the "RPC: Port mapper failure - RPC: Unable to send" error in the clntudp_create() function, which creates the UDP connection. I am using the following code for this job.
/*
 * Return codes:
 * 0 -> Success
 * 1 -> Couldn't resolve hostname
 * 2 -> UDP connection couldn't be initiated
 * 3 -> Query couldn't run
 * 4 -> Unknown error
 * 5 -> Malformed block device string
 */
long long int diskOfUser::getNfsQuota(string blockDevice, int uid)
{
    // some address-resolving operations
    std::string::size_type whereToSplit = blockDevice.find(':');
    if (whereToSplit == std::string::npos)
        return 5;
    std::string host = blockDevice.substr(0, whereToSplit);
    std::string path = blockDevice.substr(whereToSplit + 1, blockDevice.size() - whereToSplit);
    char *pathToGo = new char[path.size() + 1];
    strcpy(pathToGo, path.c_str());
    char *hostToGo = new char[host.size() + 1];
    strcpy(hostToGo, host.c_str());
    // variable definitions
    struct getquota_args *gqArgs = new struct getquota_args;
    struct getquota_rslt *gqRslt = new struct getquota_rslt; // quota result object
    struct sockaddr_in server_addr; // UDP connection address
    struct hostent *hp;
    struct timeval timeout, totTimeOut; // timeout values
    CLIENT *client = NULL; // UDP client
    int socket = RPC_ANYSOCK;
    timeout.tv_usec = 0;
    timeout.tv_sec = 6;
    totTimeOut.tv_sec = 25;
    totTimeOut.tv_usec = 0;
    // try to resolve the host
    if ((hp = gethostbyname(hostToGo)) == NULL)
    {
        delete[] pathToGo;
        delete[] hostToGo;
        return 1; // hostname couldn't be resolved
    }
    /* Try to start the UDP connection (problematic part) */
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(875);
    memcpy(&server_addr.sin_addr, hp->h_addr, hp->h_length);
    server_addr.sin_port = 0; // port 0 makes clntudp_create() consult the portmapper
    if ((client = clntudp_create(&server_addr, (u_long)100011, (u_long)1, timeout, &socket)) == NULL)
    {
        delete[] pathToGo;
        delete[] hostToGo;
        clnt_pcreateerror("error"); // prints "RPC: Port mapper failure - RPC: Unable to send"
        return 2; // UDP connection could not start
    }
    /* UDP connection created */
    // quota asking operations ...
}
Is there any way to make this job thread safe?
EDIT: "RPC: Unable to send; errno = Bad file descriptor" clnt_sperror() function is giving this error. I am starting four process with one core separately, program is running with no error. But 1 process with 4 core is always blowing up.
ANSWER:
/* Creating the UDP connection */
if ((client = clnt_create(hostToGo, (u_long)100011, (u_long)1, "UDP")) == NULL)
{
    delete[] pathToGo;
    delete[] hostToGo;
    return 2; // creating the UDP connection to rquotad failed
}
/* UDP connection created */
client->cl_auth = authunix_create_default(); // RPC authentication handle
gqArgs->gqa_uid = uid; // setting the arguments
gqArgs->gqa_pathp = pathToGo;
I reached my goal with the clnt_create() function. It is a thread-safe alternative to clntudp_create(). However, on the server side rquotad serializes all the requests and answers them one by one, so this is not fully optimal.
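For completeness, the "quota asking operations" elided above are typically a single clnt_call(). A sketch, assuming the declarations generated from rquota.x (RQUOTAPROC_GETQUOTA, xdr_getquota_args, xdr_getquota_rslt) are in scope:

enum clnt_stat rpcStat = clnt_call(client, RQUOTAPROC_GETQUOTA,
                                   (xdrproc_t)xdr_getquota_args, (char *)gqArgs,
                                   (xdrproc_t)xdr_getquota_rslt, (char *)gqRslt,
                                   totTimeOut);
if (rpcStat != RPC_SUCCESS)
{
    clnt_perror(client, "rquota call");
    return 3; // query couldn't run
}
// on success, gqRslt->status and the rquota fields hold the quota data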

Linux TCP Server Issue C++

I have been trying to figure out this problem for over a month now. I have nowhere else to turn.
I have a server that listens to many multicast channels (100ish). Each socket is its own thread. Then I have a client listener (single-threaded) that handles all incoming connections, disconnects, and client messaging within the same server. The idea is that a client connects, requests data from a multicast channel, and I send the data back to the client. The client stays connected and I relay the UDP data back to it. The client can request either UDP or TCP as the protocol for the data relay. At one point this was working beautifully for a couple of weeks. We did some code and kernel changes, and now we can't figure out what's gone wrong.
The server will run for hours and have hundreds of clients connected throughout the day. But at some point, randomly, the server will just stop. And by stop, I mean: all UDP sockets stop receiving/handling data (tcpdump shows data still coming to the box), and the client_listener thread stops receiving client packets. BUT!!! the main client_listener socket can still receive new connections and new disconnects. On a new connection, the main socket is able to send a "Connection Established" packet back to the client, but when the client responds, the select never returns.
I can post code if someone would like. If anyone has any suggestions on where to look, or if this sounds like something you've seen, please let me know.
If you have any questions, please ask.
Thank you.
I would like to share my TCP server code:
This is a single thread. It works fine for hours, and then I will only receive "New Connections" and "Disconnects". NO CLIENT PACKETS WILL COME IN.
int opt = 1;
int addrlen;
int sd;
int max_sd;
int valread;
int activity;
int new_socket;
char buffer[MAX_BUFFER_SIZE];
int client_socket[m_max_clients];
struct sockaddr_in address;
fd_set readfds;
for (int i = 0; i < m_max_clients; i++)
{
    client_socket[i] = 0;
}
if ((m_master_socket = socket(AF_INET, SOCK_STREAM, 0)) == 0)
    LOG(FATAL) << "Unable to create master socket";
if (setsockopt(m_master_socket, SOL_SOCKET, SO_REUSEADDR, (char*)&opt, sizeof(opt)) < 0)
    LOG(FATAL) << "Unable to set master socket";
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons(m_listenPort);
if (bind(m_master_socket, (struct sockaddr*)&address, sizeof(address)) != 0)
    LOG(FATAL) << "Unable to bind master socket";
if (listen(m_master_socket, SOMAXCONN) != 0)
    LOG(FATAL) << "listen() failed with err";
addrlen = sizeof(address);
LOG(INFO) << "Waiting for connections......";
while (true)
{
    FD_ZERO(&readfds);
    FD_SET(m_master_socket, &readfds);
    max_sd = m_master_socket;
    for (int i = 0; i < m_max_clients; i++)
    {
        sd = client_socket[i];
        if (sd > 0)
            FD_SET(sd, &readfds);
        if (sd > max_sd)
            max_sd = sd;
    }
    activity = select(max_sd + 1, &readfds, NULL, NULL, NULL);
    if ((activity < 0) && (errno != EINTR))
    {
        // int err = errno;
        // LOG(ERROR)<<"SELECT ERROR:"<<activity<<" "<<err;
        continue;
    }
    if (FD_ISSET(m_master_socket, &readfds))
    {
        if ((new_socket = accept(m_master_socket, (struct sockaddr*)&address, (socklen_t*)&addrlen)) < 0)
            LOG(FATAL) << "ERROR:ACCEPT FAILED!";
        LOG(INFO) << "New Connection, socket fd is (" << new_socket << ") client_addr:" << inet_ntoa(address.sin_addr) << " Port:" << ntohs(address.sin_port);
        for (int i = 0; i < m_max_clients; i++)
        {
            if (client_socket[i] == 0)
            {
                // try to set the socket to non-blocking, tcp nagle and keep alive
                if (!SetSocketBlockingEnabled(new_socket, false))
                    LOG(INFO) << "UNABLE TO SET NON-BLOCK: (" << new_socket << ")";
                if (!SetSocketNoDelay(new_socket, false))
                    LOG(INFO) << "UNABLE TO SET DELAY: (" << new_socket << ")";
                // if ( !SetSocketKeepAlive(new_socket,true) )
                //     LOG(INFO)<<"UNABLE TO SET KeepAlive: ("<<new_socket<<")" ;
                ClientConnection* con = new ClientConnection(m_mocSrv, m_udpPortGenerator, inet_ntoa(address.sin_addr), ntohs(address.sin_port), new_socket);
                if (con->login())
                {
                    client_socket[i] = new_socket;
                    m_clientConnectionSocketMap[new_socket] = con;
                    LOG(INFO) << "Client Connection Logon Complete";
                }
                else
                    delete con;
                break;
            }
        } // for
    }
    else
    {
        try
        {
            for (int i = 0; i < m_max_clients; i++)
            {
                sd = client_socket[i];
                if (FD_ISSET(sd, &readfds))
                {
                    if ((valread = recv(sd, buffer, sizeof(buffer), MSG_DONTWAIT | MSG_NOSIGNAL)) <= 0)
                    {
                        // remove from the fd listening set
                        LOG(INFO) << "RESET CLIENT_SOCKET:(" << sd << ")";
                        client_socket[i] = 0;
                        handleDisconnect(sd, true);
                    }
                    else
                    {
                        std::map<int, ClientConnection*>::iterator client_connection_socket_iter = m_clientConnectionSocketMap.find(sd);
                        if (client_connection_socket_iter != m_clientConnectionSocketMap.end())
                        {
                            client_connection_socket_iter->second->handle_message(buffer, valread);
                            if (client_connection_socket_iter->second->m_logoff)
                            {
                                LOG(INFO) << "SOCKET LOGGED OFF:" << sd;
                                client_socket[i] = 0;
                                handleDisconnect(sd, true);
                            }
                        }
                        else
                        {
                            LOG(ERROR) << "UNABLE TO FIND SOCKET DESCRIPTOR:" << sd;
                        }
                    }
                }
            }
        }
        catch (...)
        {
            LOG(ERROR) << "EXCEPTION CATCH!!!";
        }
    }
}
From the information given I would state the following:
Do not use a thread for each connection. Since you're on Linux, use epoll edge-triggered multiplexing; most newer web frameworks use this technology. For more info, check the C10K problem.
By eliminating threads from the equation you eliminate the possibility of deadlock and reduce the complexity of debugging and of worrying about thread-safe variables.
Ensure each connection is completely closed when finished.
Ensure that no new firewall rules have popped up in iptables since the upgrade.
Check any firewalls on the network to see if they are restricting certain types of activity (is your server on a new IP since the upgrade?).
In short, I would put my money on a thread deadlock and/or starvation. I've personally conducted experiments in which I created a multithreaded server vs a single-threaded server using epoll. The results were night and day: epoll blows away the multithreaded implementation (for I/O) and makes the code simpler to write, debug and maintain. A minimal edge-triggered skeleton is sketched below.
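This is only a sketch of the accept/read skeleton, reusing your m_master_socket name; error handling and the bookkeeping you already have (the ClientConnection map) are elided:

#include <sys/epoll.h>

int epfd = epoll_create1(0);
struct epoll_event ev;
ev.events = EPOLLIN | EPOLLET; // edge-triggered
ev.data.fd = m_master_socket;
epoll_ctl(epfd, EPOLL_CTL_ADD, m_master_socket, &ev);

struct epoll_event events[64];
while (true)
{
    int n = epoll_wait(epfd, events, 64, -1);
    for (int i = 0; i < n; i++)
    {
        if (events[i].data.fd == m_master_socket)
        {
            // accept() in a loop until it returns -1 with EAGAIN, set each
            // new descriptor non-blocking, then EPOLL_CTL_ADD it with
            // EPOLLIN | EPOLLET
        }
        else
        {
            // recv() in a loop until -1 with EAGAIN: an edge-triggered event
            // fires only on a readiness change, so the socket must be drained;
            // on 0 or a real error, EPOLL_CTL_DEL and close() the descriptor
        }
    }
}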

OpenSSL CRYPTO_malloc leaking, How do i free it up?

I have a very simple application that goes through a list of hostnames and connects to each one of them on the HTTPS port to obtain fresh server data for client-identified data.
To obtain the data I use OpenSSL, but it seems to leak memory every time.
The class responsible for connecting and sending/receiving the SSL data:
class CConnector
{
public:
    static std::string GetData(const std::string& strHostName)
    {
        // Initialize malloc, free, etc for OpenSSL's use
        CRYPTO_malloc_init();
        // Initialize OpenSSL's SSL libraries
        SSL_library_init();
        // Load all available encryption algorithms
        OpenSSL_add_all_algorithms();
        // Buffer that accumulates the server's response
        std::string strOutput;
        std::string strRequest = "GET /\r\n";
        // Set up a SSL_CTX object, which will tell our BIO object how to do its work
        SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());
        // Create our BIO object for SSL connections.
        BIO* bio = BIO_new_ssl_connect(ctx);
        // Create a SSL object pointer, which our BIO object will provide.
        SSL* ssl = NULL;
        // Failure?
        if (bio == NULL)
        {
            CLogger::Instance()->Write(XLOGEVENT_LOCATION, CLogger::eState::ERROR, "BIO");
            ERR_print_errors_fp(stderr);
            if (ctx != NULL) SSL_CTX_free(ctx);
            return "";
        }
        // Makes ssl point to bio's SSL object.
        BIO_get_ssl(bio, &ssl);
        // Set the SSL to automatically retry on failure.
        SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
        // We connect to the requested host on port 443 (https).
        std::string strHost = GetHostFromURL(strHostName);
        strHost += ":https";
        BIO_set_conn_hostname(bio, strHost.data());
        // Same as before, try to connect.
        if (BIO_do_connect(bio) <= 0)
        {
            CLogger::Instance()->Write(XLOGEVENT_LOCATION, CLogger::eState::ERROR, "cannot connect");
            if (ctx != NULL) SSL_CTX_free(ctx);
            if (bio != NULL) BIO_free_all(bio);
            return "";
        }
        // Now we need to do the SSL handshake, so we can communicate.
        if (BIO_do_handshake(bio) <= 0)
        {
            CLogger::Instance()->Write(XLOGEVENT_LOCATION, CLogger::eState::ERROR, "SSL Handshake");
            if (ctx != NULL) SSL_CTX_free(ctx);
            if (bio != NULL) BIO_free_all(bio);
            return "";
        }
        // Create a buffer for grabbing information from the page.
        char buf[1024];
        memset(buf, 0, sizeof(buf));
        // BIO_puts sends a null-terminated string to the server.
        BIO_puts(bio, strRequest.c_str());
        int iChars = 0;
        while (1)
        {
            iChars = BIO_read(bio, buf, sizeof(buf) - 1);
            // Done reading
            if (iChars <= 0)
                break;
            // Terminate the string
            buf[iChars] = 0;
            // Add to the final output
            strOutput.append(buf);
        }
        SSL_shutdown(ssl);
        SSL_CTX_free(ctx);
        BIO_free_all(bio);
        return strOutput;
    }
private:
};
And the main program calling the class method
while (1)
{
    for (int a = 0; a < m_vHostNames.size(); a++)
    {
        std::string strOutput = CConnector::GetData(m_vHostNames[a]);
        // Process the data
    }
    sleep(10000);
}
The debugger/profiler output:
Question:
Do I free the OpenSSL objects correctly, or is something else required?
Thank you for any input into this.
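A likely cause, for what it's worth: CRYPTO_malloc_init(), SSL_library_init() and OpenSSL_add_all_algorithms() allocate process-global tables, and GetData() re-runs them on every call; SSL_CTX_free()/BIO_free_all() never touch that global state. A minimal sketch, assuming the OpenSSL 1.0.x-era API the code is using (these cleanup calls became no-ops in 1.1.0):

// One-time library initialization, moved out of CConnector::GetData()
CRYPTO_malloc_init();
SSL_library_init();
OpenSSL_add_all_algorithms();

// ... the while(1) loop calling CConnector::GetData() runs here ...

// One-time global cleanup at process shutdown
EVP_cleanup();                 // frees tables added by OpenSSL_add_all_algorithms()
CRYPTO_cleanup_all_ex_data();  // frees ex_data bookkeeping
ERR_free_strings();            // frees error-string tables
ERR_remove_thread_state(NULL); // frees the calling thread's error queue

Whether this accounts for all of the growth is a guess without the profiler output, but re-running library initialization on every request is the first thing to eliminate.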