wpa_supplicant C API undefined behavior - C++

I managed to get the wpa_supplicant C API to work, but it behaves completely differently each time I restart my program.
The connection succeeds every time. Then the trouble begins:
Sometimes SCAN replies with an empty string but returns 0 (OK).
In another run it replies "OK\n" and returns 0. When I loop and wait for a return value of 0 and an "OK\n" reply, it runs forever with an empty reply and a 0 return.
In the rare cases where SCAN returns 0 and replies "OK\n", I move on and wait for SCAN_RESULTS to return 0. At this point it behaves completely randomly. Sometimes it replies with the whole scan results. Sometimes it replies with nothing but returns 0, and the scan results show up in my event pipeline.
Or, as in most cases: it returns 0 but does nothing. No reply, no events. Nothing.
For debugging I reduced my code to the snippet below and tried to figure out what is wrong. I have tried everything and I am somewhat frustrated with the documentation of the ctrl interface, which doesn't describe any workflow or give any tips. I'm sick of reverse engineering wpa_cli.c to figure out their flow.
I should add that the first PING mostly works fine. Every subsequent PING results in an empty string.
/* some includes */
wpa_ctrl* _wpac;

static void callback(char* rply, size_t rplylen) {
    std::cout << std::string(rply, rplylen) << std::endl;
}

bool ScanResults() {
    if (_wpac) {
        char rply[4096]; // same as in wpa_cli.c
        size_t rplylen;
        int retval = wpa_ctrl_request(_wpac, "SCAN_RESULTS", 12, rply, &rplylen, callback);
        if (retval == 0) {
            std::string rplystring = std::string(rply, rplylen);
            std::string message = std::string("wpa_ctrl(SCAN_RESULTS) replied: '").append(rplystring).append("' (").append(std::to_string(retval)).append(")");
            std::cout << message << std::endl;
            std::cout << std::string("wpa_ctrl(SCAN_RESULTS): Available (").append(std::to_string(retval)).append(")") << std::endl;
            return true;
        }
        else
            std::cout << std::string("wpa_ctrl(SCAN_RESULTS): Unavailable (").append(std::to_string(retval)).append(")") << std::endl;
        return false;
    }
    return false;
}

bool InitScan() {
    if (_wpac) {
        char rply[4096]; // same as in wpa_cli.c
        size_t rplylen;
        int retval = wpa_ctrl_request(_wpac, "SCAN", 4, rply, &rplylen, callback);
        if (retval == 0) {
            std::string rplystring = std::string(rply, rplylen);
            std::string message = std::string("wpa_ctrl(SCAN) replied: '").append(rplystring).append("' (").append(std::to_string(retval)).append(")");
            std::cout << message << std::endl;
            if (rplystring == "OK\n") {
                std::string message = std::string("wpa_ctrl(SCAN): Scan initiated (").append(std::to_string(retval)).append(")");
                std::cout << message << std::endl;
                return true;
            }
        }
        std::string message = std::string("wpa_ctrl(SCAN) failed: (").append(std::to_string(retval)).append(")");
        std::cout << message << std::endl;
    }
    return false;
}

int main() {
    std::string connection_string = std::string("/var/run/wpa_supplicant/").append(_interface);
    _wpac = wpa_ctrl_open(connection_string.c_str());
    if (!_wpac)
        return 1;
    /* Working attach as event listener omitted */
    while (!InitScan())
        sleep(1);
    while (!ScanResults())
        sleep(1);
    return 0;
}

Try doing something like this in the appropriate places in your code:
char rply[4096];
size_t rplylen = sizeof(rply);
static char cmd[] = "SCAN"; //maybe a bit easier to deal with since you need a command length
int retval = wpa_ctrl_request(_wpac, cmd, sizeof(cmd)-1, rply, &rplylen, NULL);
NULL, because I suspect you really don't need a callback routine. But put one in if you want to.
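For context on why that initialization matters: wpa_ctrl_request() uses *reply_len as an in/out parameter (buffer size on input, reply length on output), so an uninitialized rplylen can explain the random empty or truncated replies described above. A minimal sketch of a single request with that fixed; the helper name and the bare-bones error handling are illustrative only, not part of the question's code:

#include <string>
// plus wpa_ctrl.h from the wpa_supplicant sources

// Sketch: one control-interface request with the reply length initialized.
bool SendScanCommand(struct wpa_ctrl *ctrl)
{
    char rply[4096];
    size_t rplylen = sizeof(rply) - 1;   // IN: buffer size, OUT: actual reply length
    int retval = wpa_ctrl_request(ctrl, "SCAN", 4, rply, &rplylen, NULL);
    if (retval != 0)                     // -1 on error, -2 on timeout
        return false;
    rply[rplylen] = '\0';
    return std::string(rply, rplylen) == "OK\n";
}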

Related

ZMQ messages not being received

Please forgive me if I'm missing something simple; this is my first time doing anything with messaging, and I inherited this codebase from someone else.
I am trying to send a message from a Windows machine with an IP of 10.10.10.200 to an Ubuntu machine with an IP of 10.10.10.15.
I got the following result when running TCPView on the Windows machine, which makes me suspect that the problem lies on the Ubuntu machine. If I'm reading it right, my app on the Windows machine has created a connection on port 5556, which is what it is supposed to do. In case I'm wrong, I'll include the Windows code too.
my_app.exe 5436 TCP MY_COMPUTER 5556 MY_COMPUTER 0 LISTENING
Windows app code:
void
NetworkManager::initializePublisher()
{
    globalContext = zmq_ctx_new();
    globalPublisher = zmq_socket(globalContext, ZMQ_PUB);
    string protocol = "tcp://*:";
    string portNumber = PUBLISHING_PORT; // 5556
    string address = protocol + portNumber;
    char *address_ptr = new char[address.size() + 1];
    strncpy_s(address_ptr, address.size() + 1, address.c_str(), address.size());
    int bind_res = zmq_bind(globalPublisher, address_ptr);
    if (bind_res != 0)
    {
        cerr << "FATAL: couldn't bind to port[" << portNumber << "] and protocol [" << protocol << "]" << endl;
    }
    cout << " Connection: " << address << endl;
}

void
NetworkManager::publishMessage(MESSAGE msgToSend)
{
    // Get the size of the message to be sent
    int sizeOfMessageToSend = MSG_MAX_SIZE; //sizeof(msgToSend);
    // Copy IDVS message to buffer
    char buffToSend[MSG_MAX_SIZE] = "";
    // Pack the message id
    size_t indexOfId = MSG_ID_SIZE + 1;
    size_t indexOfName = MSG_NAME_SIZE + 1;
    size_t indexOfdata = MSG_DATABUFFER_SIZE + 1;
    memcpy(buffToSend, msgToSend.get_msg_id(), indexOfId - 1);
    // Pack the message name
    memcpy(buffToSend + indexOfId, msgToSend.get_msg_name(), indexOfName - 1);
    // Pack the data buffer
    memcpy(buffToSend + indexOfId + indexOfName, msgToSend.get_msg_data(), indexOfdata - 1);
    // Send message
    int sizeOfSentMessage = zmq_send(globalPublisher, buffToSend, MSG_MAX_SIZE, ZMQ_DONTWAIT);
    getSubscriptionConnectionError();
    // If message size doesn't match, we have an issue, otherwise, we are good
    if (sizeOfSentMessage != sizeOfMessageToSend)
    {
        int errorCode = zmq_errno();
        cerr << "FATAL: couldn't not send message." << endl;
        cerr << "ERROR: " << errorCode << endl;
    }
}
I can include more of this side's code if you think it's needed, but the error is popping up on the Ubuntu side, so I'm going to focus there.
The problem is that when I call zmq_recv it returns -1, and when I check zmq_errno I get EAGAIN (non-blocking mode was requested and no messages are available at the moment). I also checked with netstat and didn't see anything on port 5556.
First is the function that connects to the publisher, then the function that gets data, followed by main.
Ubuntu side code:
void *
connectoToPublisher()
{
    void *context = zmq_ctx_new();
    void *subscriber = zmq_socket(context, ZMQ_SUB);
    string protocol = "tcp://";
    string ipAddress = PUB_IP;     // 10.10.10.15
    string portNumber = PUB_PORT;  // 5556
    string address = protocol + ipAddress + ":" + portNumber;
    cout << "Address: " << address << endl;
    char *address_ptr = new char[address.size() + 1];
    strcpy(address_ptr, address.c_str());
    // ------ Connect to Publisher ------
    bool isConnectionEstablished = false;
    int connectionStatus;
    while (isConnectionEstablished == false)
    {
        connectionStatus = zmq_connect(subscriber, address_ptr);
        switch (connectionStatus)
        {
        case 0: // we are good.
            cout << "Connection Established!" << endl;
            isConnectionEstablished = true;
            break;
        case -1:
            isConnectionEstablished = false;
            cout << "Connection Failed!" << endl;
            getSubscriptionConnectionError();
            cout << "Trying again in 5 seconds..." << endl;
            break;
        default:
            cout << "Hit default connecting to publisher!" << endl;
            break;
        }
        if (isConnectionEstablished == true)
        {
            break;
        }
        sleep(5); // Try again
    }
    // by the time we get here we should have connected to the pub
    return subscriber;
}

static void *
getData(void *subscriber)
{
    const char *filter = ""; // Get all messages
    int subFilterResult = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, filter, strlen(filter));
    // ------ Get in main loop ------
    while (1)
    {
        // get messages from publisher
        char bufferReceived[MSG_MAX_SIZE] = "";
        size_t expected_messageSize = sizeof(bufferReceived);
        int actual_messageSize = zmq_recv(subscriber, bufferReceived, MSG_MAX_SIZE, ZMQ_DONTWAIT);
        if (expected_messageSize == actual_messageSize)
        {
            MESSAGE msg = getMessage(bufferReceived); // Uses memcpy to copy id, name, and data struct data from buffer into struct of MESSAGE
            if (strcmp(msg.get_msg_id(), "IDXY_00000") == 0)
            {
                DATA = getData(msg); // Uses memcpy to copy data from buffer into struct of DATA
            }
        } else
        {
            // Something went wrong
            getReceivedError(); // This just calls zmq_errno and couts the error
        }
        usleep(1);
    }
}

int main (int argc, char *argv[])
{
    // Doing some stuff...
    void *subscriber_socket = connectoToHeadTrackerPublisher();
    // Initialize Mux Lock
    pthread_mutex_init(&receiverMutex, NULL);
    // Initializing some variables...
    // Launch Thread to get updates from windows machine
    pthread_t publisherThread;
    pthread_create(&publisherThread, NULL, getData, subscriber_socket);
    // UI stuff
    zmq_close(subscriber_socket);
    return 0;
}
If you cannot provide a solution, then I will accept identifying the problem as a solution. My main issue is that I don't have the knowledge or experience with messaging or networking to correctly identify the issue. Typically if I know what is wrong, I can fix it.
OK, this has nothing to do with the signalling/messaging framework itself.
Your Ubuntu code instructs the ZeroMQ Context()-instance engine to create a new SUB-socket instance, and then has that socket _connect() (i.e. set up a tcp:// transport-class connection towards the peering counterparty) to an "opposite" access point at 10.10.10.15:5556, which is the Ubuntu localhost, while the intended PUB-side access point does not live on this Ubuntu machine at all but on the other, Windows host, whose IP:port# is 10.10.10.200:5556.
This seems to be the root cause of the problem, so change the address accordingly to match the physical layout and you may get the toys to work.
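In other words, a minimal sketch of the corrected subscriber setup, assuming the PUB socket really is bound on the Windows box and only the address needs to change; everything else can stay as in the question:

#include <zmq.h>
#include <iostream>

// Sketch only: the SUB side must connect to the publisher's address, i.e. the
// Windows machine (assumed to be reachable at tcp://10.10.10.200:5556),
// not to the Ubuntu host's own IP.
int main()
{
    void *context    = zmq_ctx_new();
    void *subscriber = zmq_socket(context, ZMQ_SUB);

    const char *pub_address = "tcp://10.10.10.200:5556";   // was "tcp://10.10.10.15:5556"
    if (zmq_connect(subscriber, pub_address) != 0)
        std::cerr << "zmq_connect failed: " << zmq_strerror(zmq_errno()) << std::endl;

    // Subscribe to everything; a blocking zmq_recv (flags = 0) avoids the EAGAIN
    // spin that ZMQ_DONTWAIT produces while no message has arrived yet.
    zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);
    char buffer[1024];
    int n = zmq_recv(subscriber, buffer, sizeof(buffer), 0);
    std::cout << "received " << n << " bytes" << std::endl;

    zmq_close(subscriber);
    zmq_ctx_term(context);
    return 0;
}

The blocking receive is only for the sketch; the question's ZMQ_DONTWAIT loop also works once the address is right, it just has to treat EAGAIN as "no message yet" rather than as an error.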

Creating a pseudo terminal in C++ that can be used by other programs

I have created a pseudo terminal in C++ using the following code:
int main(int, char const *[])
{
    int master, slave;
    char name[1024];
    char mode[] = "0777"; // I know this isn't good, it is for testing at the moment
    int access;

    int e = openpty(&master, &slave, &name[0], 0, 0);
    if (0 > e) {
        std::printf("Error: %s\n", strerror(errno));
        return -1;
    }
    if (0 != unlockpt(slave)) {
        perror("Slave Error");
    }
    access = strtol(mode, 0, 8);
    if (0 > chmod(name, access)) {
        perror("Permission Error");
    }
    //std::cout << "Master: " << master << std::endl;
    std::printf("Slave PTY: %s\n", name);

    int r;
    const char *prompt = "login: ";
    while (true) {
        std::cout << prompt << std::flush;
        r = read(master, &name[0], sizeof(name) - 1);
        checkInput(name);
        name[r] = '\0';
        std::printf("%s", &name[0]);
        std::printf("\n");
    }
    close(slave);
    close(master);
    return 0;
}
It works pretty well in the sense that from another terminal, I can do:
printf 'username' > /dev/pts/x
and it will appear and be processed as it should.
My question is: when I try to use screen on the slave, nothing appears in the screen terminal. Then when I type, it comes through to my slave one character at a time.
Does anyone know why this is, or how I can fix it?
I can provide more detail if required.
Thank you :)
Because you're not flushing the buffer after you use printf.
As Paul's answer already suggests, you need to flush the buffer.
To do so you can use the tcflush function.
The first argument is the file descriptor (an int) and the second can be one of the following:
TCIFLUSH   Flushes input data that has been received by the system but not read by an application.
TCOFLUSH   Flushes output data that has been written by an application but not sent to the terminal.
TCIOFLUSH  Flushes both input and output data.
For more information see: https://www.ibm.com/docs/en/zos/2.3.0?topic=functions-tcflush-flush-input-output-terminal
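Putting both suggestions together, a rough sketch of one iteration of the question's read loop; treat it as an illustration only, since whether screen then behaves also depends on line-discipline settings the question doesn't show:

#include <termios.h>   // tcflush()
#include <unistd.h>    // read()
#include <cstdio>

// Sketch: one pass of the echo loop with explicit flushing.
// 'master' is the pty master fd from openpty(); 'buf' is the caller's buffer.
void echoOnce(int master, char *buf, size_t bufsize)
{
    ssize_t r = read(master, buf, bufsize - 1);
    if (r <= 0)
        return;                     // error or EOF; nothing to echo
    buf[r] = '\0';
    std::printf("%s\n", buf);
    std::fflush(stdout);            // flush stdio's output buffer (Paul's point)
    tcflush(master, TCIFLUSH);      // discard input received but not yet read (this answer's tcflush)
}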

MHD_resume_connection() of libmicrohttpd not working properly with external select

I encountered some problems with MHD_suspend_connection() and MHD_resume_connection() in libmicrohttpd while using the external event loop. I then wrote the small example (without error handling) below. My question is: what am I doing wrong, or is it a bug in the library? As far as I understand the manual it should work; using external select with suspend/resume is explicitly allowed.
The problem is that connections are not resumed correctly. Processing of the connection does not continue right after calling MHD_resume_connection(). In some versions of my program it continued only after another request came in. In other versions later requests were not handled at all (the access handler, handle_access(), was never called). In some of these versions I got a response for the first request while stopping libmicrohttpd. When I enable MHD_USE_SELECT_INTERNALLY and remove my external loop (let it sleep), everything works.
I tested it on Debian (libmicrohttpd 0.9.37) and Arch (libmicrohttpd 0.9.50). The problem exists on both systems, though the behavior may differ a little.
#include <algorithm>
#include <csignal>
#include <cstring>
#include <iostream>
#include <vector>
#include <sys/select.h>
#include <microhttpd.h>

using std::cerr;
using std::cout;
using std::endl;

static volatile bool run_loop = true;
static MHD_Daemon *ctx = nullptr;
static MHD_Response *response = nullptr;
static std::vector<MHD_Connection*> susspended;

void sighandler(int)
{
    run_loop = false;
}

int handle_access(void *cls, struct MHD_Connection *connection,
                  const char *url, const char *method, const char *version,
                  const char *upload_data, size_t *upload_data_size,
                  void **con_cls)
{
    static int second_call_marker;
    static int third_call_marker;

    if (*con_cls == nullptr) {
        cout << "New connection" << endl;
        *con_cls = &second_call_marker;
        return MHD_YES;
    } else if (*con_cls == &second_call_marker) {
        cout << "Suspending connection" << endl;
        MHD_suspend_connection(connection);
        susspended.push_back(connection);
        *con_cls = &third_call_marker;
        return MHD_YES;
    } else {
        cout << "Send response" << endl;
        return MHD_queue_response(connection, 200, response);
    }
}

void myapp()
{
    std::signal(SIGINT, &sighandler);
    std::signal(SIGINT, &sighandler);

    ctx = MHD_start_daemon(MHD_USE_DUAL_STACK //| MHD_USE_EPOLL
                           | MHD_USE_SUSPEND_RESUME | MHD_USE_DEBUG,
                           8080, nullptr, nullptr,
                           &handle_access, nullptr,
                           MHD_OPTION_END);
    response = MHD_create_response_from_buffer(4, const_cast<char*>("TEST"),
                                               MHD_RESPMEM_PERSISTENT);

    while (run_loop) {
        int max;
        fd_set rs, ws, es;
        struct timeval tv;
        struct timeval *tvp;

        max = 0;
        FD_ZERO(&rs);
        FD_ZERO(&ws);
        FD_ZERO(&es);

        cout << "Wait for IO activity" << endl;
        MHD_UNSIGNED_LONG_LONG mhd_timeout;
        MHD_get_fdset(ctx, &rs, &ws, &es, &max);
        if (MHD_get_timeout(ctx, &mhd_timeout) == MHD_YES) {
            //tv.tv_sec = std::min(mhd_timeout / 1000, 1ull);
            tv.tv_sec = mhd_timeout / 1000;
            tv.tv_usec = (mhd_timeout % 1000) * 1000;
            tvp = &tv;
        } else {
            //tv.tv_sec = 2;
            //tv.tv_usec = 0;
            //tvp = &tv;
            tvp = nullptr;
        }
        if (select(max + 1, &rs, &ws, &es, tvp) < 0 && errno != EINTR)
            throw "select() failed";

        cout << "Handle IO activity" << endl;
        if (MHD_run_from_select(ctx, &rs, &ws, &es) != MHD_YES)
            throw "MHD_run_from_select() failed";

        for (MHD_Connection *connection : susspended) {
            cout << "Resume connection" << endl;
            MHD_resume_connection(connection);
        }
        susspended.clear();
    }

    cout << "Stop server" << endl;
    MHD_stop_daemon(ctx);
}

int main(int argc, char *argv[])
{
    try {
        myapp();
    } catch (const char *str) {
        cerr << "Error: " << str << endl;
        cerr << "Errno: " << errno << " (" << strerror(errno) << ")" << endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
I've compiled and run your sample on Windows and am seeing the same behavior w/ 0.9.51.
It's not a bug in microhttpd. The problem is that you are resuming a connection before queuing a response on it. The only code you have that creates a response relies on more activity on the connection so it's a catch-22.
The point of MHD_suspend_connection/MHD_resume_connection is to not block new connections while long-running work is going on. Thus typically after suspending the connection you need to kick off that work on another thread to continue while the listening socket is maintained. When that thread has queued the response it can resume the connection and the event loop will know it is ready to send back to the client.
I'm not sure of your other design requirements, but you may not need to implement external select at all. That is to say, suspend/resume does not require it (I've used suspend/resume just fine with MHD_USE_SELECT_INTERNALLY, for example).
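To make that pattern concrete, here is a minimal sketch (not the poster's code) of an access handler that suspends the connection, runs the slow work on a detached std::thread, queues the response from that thread, and only then resumes the connection. The handler name, the markers, and the two-second sleep are placeholders:

#include <microhttpd.h>
#include <thread>
#include <chrono>

static int handle_access_sketch(void *cls, struct MHD_Connection *connection,
                                const char *url, const char *method, const char *version,
                                const char *upload_data, size_t *upload_data_size,
                                void **con_cls)
{
    static int first_call_done;
    static int work_started;

    if (*con_cls == nullptr) {               // first callback for this request
        *con_cls = &first_call_done;
        return MHD_YES;
    }
    if (*con_cls == &work_started)           // called again after resume; response is queued
        return MHD_YES;

    *con_cls = &work_started;
    MHD_suspend_connection(connection);      // keep the daemon free for other clients

    std::thread([connection]() {
        // Placeholder for the long-running work.
        std::this_thread::sleep_for(std::chrono::seconds(2));

        static const char body[] = "TEST";
        MHD_Response *resp = MHD_create_response_from_buffer(
            sizeof(body) - 1, const_cast<char*>(body), MHD_RESPMEM_PERSISTENT);

        // Queue the response first, then resume, as described above; the event
        // loop then only has to send it out.
        MHD_queue_response(connection, MHD_HTTP_OK, resp);
        MHD_destroy_response(resp);
        MHD_resume_connection(connection);
    }).detach();

    return MHD_YES;
}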
I don't know if it has been mentioned, but you have a multi-threading bug and perhaps an "intent" bug, since the library may or may not use threads depending on other factors. You can check whether you are on multiple threads by printing the thread id from your functions. Your access handler pushes into the vector without mutex protection, and the event loop then immediately reads it and resumes, potentially from another thread. This also goes against the intent of suspend/resume, since suspend is really meant for work that takes "a long time". The gotcha is that you don't own the calling code, so you don't know when it is completely done; you can, however, age your retry with a timeval so you don't resume too soon (at least tv_usec + 1). The key point remains: you are using the vector from two or more threads without mutex protection.
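If the external loop is kept anyway, the shared vector at least needs locking. A minimal sketch of just that part, with everything else as in the question; the helper names are made up:

#include <microhttpd.h>
#include <mutex>
#include <vector>

static std::mutex susspended_mutex;               // guards the shared vector below
static std::vector<MHD_Connection*> susspended;   // same vector as in the question

// In the access handler, after MHD_suspend_connection(connection):
void remember_suspended(MHD_Connection *connection)
{
    std::lock_guard<std::mutex> lock(susspended_mutex);
    susspended.push_back(connection);
}

// In the event loop, instead of iterating the vector directly:
void resume_all_suspended()
{
    std::vector<MHD_Connection*> to_resume;
    {
        std::lock_guard<std::mutex> lock(susspended_mutex);
        to_resume.swap(susspended);               // take the batch under the lock
    }
    for (MHD_Connection *connection : to_resume)
        MHD_resume_connection(connection);        // MHD_resume_connection may be called from another thread
}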

Handling an SSL client not reading all data

I am trying to accomplish that my SSL server does not break down when a client does not collect all data (fixed with one minor bug), i.e. when the data is too long.
Basically what I'm trying to do is write in a non-blocking way. For that I found two different approaches:
First approach: using this code
int flags = fcntl(ret.fdsock, F_GETFL, 0);
fcntl(ret.fdsock, F_SETFL, flags | O_NONBLOCK);
and then creating the SSL connection with it.
Second approach: doing this directly after creating the SSL object using SSL_new(ctx):
BIO *sock = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
BIO_set_nbio(sock, 1);
SSL_set_bio(client, sock, sock);
Both of which have their downsides, but neither of which solves the problem.
The first approach seems to read in a non-blocking way just fine, but when I write more data than the client reads, my server crashes.
The second approach does not seem to do anything, so my guess is that I did something wrong or did not understand what a BIO actually does.
For more information, here is how the server writes to the client:
int SSLConnection::send(char* msg, const int size){
    int rest_size = size;
    int bytes_sent = 0;
    char* begin = msg;
    std::cout << "expected bytes to send: " << size << std::endl;
    while(rest_size > 0) {
        int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
        std::cout << "any error : " << ERR_get_error() << std::endl;
        std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
        if (tmp_bytes_sent < 0){
            std::cout << tmp_bytes_sent << std::endl;
            std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent) << std::endl;
        } else {
            bytes_sent += tmp_bytes_sent;
            rest_size -= tmp_bytes_sent;
            begin = msg + bytes_sent;
        }
    }
    return bytes_sent;
}
Output:
expected bytes to send: 78888890
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(means: hit <return> to close window)
EDIT: After people said that I need to catch errors appropriately, here is my new code:
Setup:
connection = SSL_new(ctx);
if (connection){
    BIO *sbio = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
    if (sbio) {
        BIO_set_nbio(sbio, false);
        SSL_set_bio(connection, sbio, sbio);
        SSL_set_accept_state(connection);
    } else {
        std::cout << "Bio is null" << std::endl;
    }
} else {
    std::cout << "client is null" << std::endl;
}
Sending:
int SSLConnection::send(char* msg, const int size){
    if(connection == NULL) {
        std::cout << "ERR: Connection is NULL" << std::endl;
        return -1;
    }
    int rest_size = size;
    int bytes_sent = 0;
    char* begin = msg;
    std::cout << "expected bytes to send: " << size << std::endl;
    while(rest_size > 0) {
        int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
        std::cout << "any error : " << ERR_get_error() << std::endl;
        std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
        if (tmp_bytes_sent < 0){
            std::cout << tmp_bytes_sent << std::endl;
            std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent) << std::endl;
            break;
        } else if (tmp_bytes_sent == 0){
            std::cout << "tmp_bytes are 0" << std::endl;
            break;
        } else {
            bytes_sent += tmp_bytes_sent;
            rest_size -= tmp_bytes_sent;
            begin = msg + bytes_sent;
        }
    }
    return bytes_sent;
}
Using a client that fetches 60 bytes, here is the output:
Output writing 1,000,000 Bytes:
expected bytes to send: 1000000
any error : 0
tmp_bytes_sent: 16384
any error : 0
tmp_bytes_sent: 16384
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(translates to: hit <RETURN> to close window)
Output writing 1,000 bytes:
expected bytes to send: 1000
any error : 0
tmp_bytes_sent: 1000
connection closed <- expected output
First, a warning: non-blocking I/O over SSL is a rather baroque API, and it's difficult to use correctly. In particular, the SSL layer sometimes needs to read internal data before it can write user data (or vice versa), and the caller's code is expected to handle that based on the error-code feedback it gets from the SSL calls it makes. It can be made to work correctly, but it's not easy or obvious: you are de facto required to implement a state machine in your code that echoes the state machine inside the SSL library.
Below is a simplified version of the logic that is required (it's extracted from the Write() method in this file, which is part of this library, in case you want to see a complete, working implementation):
enum {
    SSL_STATE_READ_WANTS_READABLE_SOCKET   = 0x01,
    SSL_STATE_READ_WANTS_WRITEABLE_SOCKET  = 0x02,
    SSL_STATE_WRITE_WANTS_READABLE_SOCKET  = 0x04,
    SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET = 0x08
};

// a bit-chord of SSL_STATE_* bits to keep track of what
// the SSL layer needs us to do next before it can make more progress
uint32_t _sslState = 0;

// Note that this method returns the number of bytes sent, or -1
// if there was a fatal error. So if this method returns 0 that just
// means that this function was not able to send any bytes at this time.
int32_t SSLSocketDataIO :: Write(const void *buffer, uint32 size)
{
    int32_t bytes = SSL_write(_ssl, buffer, size);
    if (bytes > 0)
    {
        // SSL was able to send some bytes, so clear the relevant SSL-state-flags
        _sslState &= ~(SSL_STATE_WRITE_WANTS_READABLE_SOCKET | SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET);
    }
    else if (bytes == 0)
    {
        return -1; // the SSL connection was closed, so return failure
    }
    else
    {
        // The SSL layer's internal needs aren't being met, so we now have to
        // ask it what its problem is, then give it what it wants. :P
        int err = SSL_get_error(_ssl, bytes);
        if (err == SSL_ERROR_WANT_READ)
        {
            // SSL can't write anything more until the socket becomes readable,
            // so we need to go back to our event loop, wait until the
            // socket select()'s as readable, and then call SSL_Write() again.
            _sslState |=  SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
            _sslState &= ~SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
            bytes = 0; // Tell the caller we weren't able to send anything yet
        }
        else if (err == SSL_ERROR_WANT_WRITE)
        {
            // SSL can't write anything more until the socket becomes writable,
            // so we need to go back to our event loop, wait until the
            // socket select()'s as writeable, and then call SSL_Write() again.
            _sslState &= ~SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
            _sslState |=  SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
            bytes = 0; // Tell the caller we weren't able to send anything yet
        }
        else
        {
            // SSL had some other problem I don't know how to deal with,
            // so just print some debug output and then return failure.
            fprintf(stderr, "SSL_write() ERROR!");
            ERR_print_errors_fp(stderr);
        }
    }
    return bytes; // Returns the number of bytes we actually sent
}
I think your problem is
rest_size -= bytes_sent;
You should do rest_size -= tmp_bytes_sent;
Also
if (tmp_bytes_sent < 0){
std::cout << tmp_bytes_sent << std::endl;
//its an error condition
return bytes_sent;
}
I don't know whether this will fix the issue, but the code you pasted has the issues mentioned above.
When I write more data than the client reads, my server crashes.
No it doesn't, unless you've violently miscoded something else that you haven't posted here. It either loops forever or it gets an error: probably ECONNRESET, which means the client has behaved as you described, you've detected it, and you should close the connection and forget about that client. Instead, you are just looping forever, trying to send the data over a broken connection, which can never succeed.
And when you get an error, there's not much use in just printing -1. Print the actual error, with perror(), or errno and strerror().
Speaking of looping forever, don't loop like this. SSL_write() can return 0, which you aren't handling at all: this will cause an infinite loop. See also David Schwartz's comments below.
NB you should definitely use the second approach. OpenSSL needs to know that the socket is in non-blocking mode.
Both of which have their downsides
Such as?
And as noted in the other answer,
rest_size -= bytes_sent;
should be
rest_size -= tmp_bytes_sent;
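For reference, a minimal sketch of the "second approach" with non-blocking mode actually switched on; note that the edited setup in the question passes false to BIO_set_nbio, which leaves the BIO blocking. The function wrapper and omitted error handling are illustrative only:

#include <openssl/ssl.h>
#include <openssl/bio.h>

// Sketch: attach a non-blocking socket BIO to a freshly accepted connection.
// 'fd' is the accepted TCP socket, 'ctx' an already-configured SSL_CTX.
SSL *make_nonblocking_ssl(SSL_CTX *ctx, int fd)
{
    SSL *ssl = SSL_new(ctx);
    BIO *sbio = BIO_new_socket(fd, BIO_NOCLOSE);
    BIO_set_nbio(sbio, 1);              // 1 = non-blocking (the question passes false here)
    SSL_set_bio(ssl, sbio, sbio);
    SSL_set_accept_state(ssl);
    return ssl;
    // With a non-blocking BIO, SSL_write()/SSL_read() can fail with
    // SSL_ERROR_WANT_READ / SSL_ERROR_WANT_WRITE; wait for the socket with
    // select() and retry, as in the Write() example above.
}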

C++ sockets: recv()/send() work only under gdb

EDIT: There was a logic error in Socket::CanReceive(): I was checking for input for only 1 millisecond. That's why everything worked while stepping in gdb.
I have a problem with C sockets. send()/recv() don't do anything when not run under the debugger. I can't even std::cout their return value; for some reason std::cout isn't working in my method, and I can't std::cerr errno either. There is no point in checking this in gdb, because there everything works perfectly. Wireshark doesn't log any packets in non-debug mode.
// b  - buffer
// s  - size
// sd - socket descriptor
int32_t TCP::Receive(char* b, uint32_t s)
{
    Error::Critical.SetErrorNumber(Error::List::NoError);
    if (!Socket::Validate(sd))
    {
        Error::Critical.SetErrorNumber(Error::List::InvalidSocket);
        return -1;
    }
    if (Disconnected())
    {
        Error::Critical.SetErrorNumber(Error::List::NotConnected);
        return -1;
    }
    if (!Socket::CanReceive(sd, readTimeout))
        return false;
    if (!b)
    {
        b = new char[s + 1];
        std::memset(b, '\0', s + 1);
    }
    int32_t bytes = recv(sd, b, s, 0);
    if (bytes == -1)
    {
        Error::Critical.SetErrorNumber(errno);
        std::cerr << errno << "\n";
        return false;
    }
    std::cout << bytes;
    return bytes;
}
Interestingly, gdb fails too when I don't step through: if I don't set a breakpoint in this method, it fails and Wireshark logs nothing. I thought it could be a timing issue, i.e. the server has no time to respond or something, but guess what? sleep() doesn't help in either method.
I'm not posting TCP::Send(), because there is only one line of difference.
You're not flushing the streams, so you don't see output. Change:
std::cerr << errno << "\n";
to
std::cerr << errno << std::endl;
and
std::cout << bytes;
to
std::cout << bytes << std::endl;
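Regarding the EDIT at the top (the real culprit turned out to be Socket::CanReceive() waiting only 1 millisecond), here is a rough sketch of what a select()-based readability check with a configurable timeout might look like; the free-function form is an assumption, since the question's class isn't shown:

#include <sys/select.h>
#include <sys/time.h>

// Sketch: wait up to timeoutMs milliseconds for the socket to become readable.
// Returns true if data is available, false on timeout or error.
bool CanReceive(int sd, int timeoutMs)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(sd, &readSet);

    timeval tv;
    tv.tv_sec  = timeoutMs / 1000;
    tv.tv_usec = (timeoutMs % 1000) * 1000;    // a 1 ms total wait is almost always too short

    return select(sd + 1, &readSet, nullptr, nullptr, &tv) > 0;
}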