is_master_ def:
volatile bool is_master_;
is_master_ is set to true by another thread, but it seems that the value doesn't flush (it doesn't cout the FATAL ERROR HAS OCCURRED...). If I add cout << "foo" <
void MasterSlaveSynchronize::validateSingleMaster(){
    if(is_master_){
        cout << "FATAL ERROR HAS OCCURRED BOTH MASTER";
        if(!is_leader_master_){
            cout << "CHOSE AS VICTIM IN MASTER-MASTER. SET THIS HOST AS SLAVE";
            is_master_ = false;
        }
    }
}
The caller code:
while(1){
    int n = recvfrom(sockId, buf, HEARBEAT_SIZE, 0, (struct sockaddr *) &from, &length);
    if (n < 0) {
        REGISTER_ERROR("Failed to receive heartbeat");
    } else {
        gettimeofday(&instance_->last_hearbeat_got_, NULL);
        instance_->validateSingleMaster();
    }
}
You wanted me to post my comment as an answer:
Maybe it does, but because you don't use a newline your output stream
doesn't flush.
This behaviour is explained reasonably well here:
Why does printf not flush after the call unless a newline is in the format string?
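For example, a minimal fix (using the same unqualified cout as the question's code) is to end the message with endl, which writes a newline and then flushes, or to flush explicitly:

cout << "FATAL ERROR HAS OCCURRED BOTH MASTER" << endl;    // '\n' plus a flush
// or, if you don't want the newline:
cout << "FATAL ERROR HAS OCCURRED BOTH MASTER" << flush;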
I'm following the tutorial (big code block near the bottom of that section) here: http://beej.us/guide/bgnet/output/html/multipage/advanced.html#select
And the main server code is like so:
while (true)
{
read_fds = master;
if (select(fd_max + 1, &read_fds, NULL, NULL, NULL) == -1)
{
cerr << "ERROR. Select failed" << endl;
return -1;
}
for (int i = 0; i <= fd_max; i++)
{
if (FD_ISSET(i, &read_fds))
{
if (i == welcome_socket)
{
cout << "NEW CONNECTION" << endl;
client_len = sizeof(struct sockaddr_in);
client_sock = accept(welcome_socket, (struct sockaddr *) &client_addr, &client_len);
if (client_sock != -1)
{
FD_SET(client_sock, &master);
if (client_sock > fd_max)
{
fd_max = client_sock;
}
}
}
else
{
int length, total_read = 0;
// CONNECTION CLOSED BY CLIENT
if (safe_recv(client_sock, &length, sizeof(int)) <= 0)
{
cout << "CONNECTION CLOSED" << endl;
close(i);
FD_CLR(i, &master);
}
else
{
char *message = (char *)memset((char *)malloc(length + 1), 0, length);
// while ((total_read += safe_recv(client_sock, message + total_read, length - total_read)) < length) {}
safe_recv(client_sock, message, length);
// RESPOND WITH MESSAGE
cout << "MESSAGE: " << message << endl;
write(client_sock, process(message), length);
free(message);
}
}
}
}
}
What I'm doing is first sending (from the client) the length of the string, then the string itself. Then the server sends back process(message).
When I only have 1 connection, I'm seeing correct behaviour. However, if one client is already connected and I connect a new one, what I'm seeing is:
1st client no longer sends or receives anything from server (concluded because nothing is printed to stdout on client side)
2nd client is working as expected
When the 2nd connection exits, the server counts that as both connections exiting (prints CONNECTION CLOSED twice)
I've tried to keep this very similar to the tutorial code. I've run the tutorial server, and that works as intended with several clients.
I'm new to network programming, so I apologise if this is a beginner problem or just something dumb I overlooked.
The code reads from and writes to only client_sock, and client_sock is replaced with the new socket in the accept handling portion of the code.
Most likely you want to interact with i rather than client_sock.
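A sketch of that fix, reusing the question's else branch with i substituted (safe_recv and process are assumed to behave as in the question):

else
{
    int length;
    // CONNECTION CLOSED BY CLIENT
    if (safe_recv(i, &length, sizeof(int)) <= 0)
    {
        cout << "CONNECTION CLOSED" << endl;
        close(i);
        FD_CLR(i, &master);
    }
    else
    {
        // calloc zeroes length + 1 bytes, so the buffer is always null-terminated
        char *message = (char *)calloc(length + 1, 1);
        safe_recv(i, message, length);
        // RESPOND WITH MESSAGE
        cout << "MESSAGE: " << message << endl;
        write(i, process(message), length);
        free(message);
    }
}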
I have created a pseudo terminal in C++ using the following code:
#include <pty.h>
#include <unistd.h>
#include <sys/stat.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <cerrno>
#include <iostream>

void checkInput(char *name); // defined elsewhere in my code

int main(int, char const *[])
{
    int master, slave;
    char name[1024];
    char mode[] = "0777"; // I know this isn't good, it is for testing at the moment
    int access;
    int e = openpty(&master, &slave, &name[0], 0, 0);
    if (0 > e) {
        std::printf("Error: %s\n", strerror(errno));
        return -1;
    }
    if (0 != unlockpt(slave))
    {
        perror("Slave Error");
    }
    access = strtol(mode, 0, 8);
    if (0 > chmod(name, access))
    {
        perror("Permission Error");
    }
    //std::cout << "Master: " << master << std::endl;
    std::printf("Slave PTY: %s\n", name);
    int r;
    const char *prompt = "login: ";
    while (true)
    {
        std::cout << prompt << std::flush;
        r = read(master, &name[0], sizeof(name) - 1);
        name[r] = '\0'; // null-terminate before the buffer is used
        checkInput(name);
        std::printf("%s", &name[0]);
        std::printf("\n");
    }
    close(slave);
    close(master);
    return 0;
}
It works pretty well in the sense that from another terminal, I can do:
printf 'username' > /dev/pts/x
and it will appear and be processed as it should.
My question is: when I try to use screen, nothing appears on the screen terminal. Then when I type, it comes through to my slave 1 character at a time.
Does anyone know why this is, or how I can fix it?
I can provide more detail if required.
Thank you :)
Because you're not flushing the buffer after you use printf.
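For instance, a minimal fix (a sketch against the question's loop) is to flush stdout after the printf calls:

std::printf("%s", &name[0]);
std::printf("\n");
std::fflush(stdout);   // stdio buffers output; flush so it appears immediately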
As Paul's answer already suggests, you need to flush the buffer.
To do so you can use the tcflush function.
The first argument is the file descriptor, and the second can be one of the following:
TCIFLUSH Flushes input data that has been received by the system but
not read by an application.
TCOFLUSH Flushes output data that has been written by an application
but not sent to the terminal.
TCIOFLUSH Flushes both input and output data.
For more information see: https://www.ibm.com/docs/en/zos/2.3.0?topic=functions-tcflush-flush-input-output-terminal
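For example, a sketch that discards input queued on the pty (assuming master is the descriptor returned by openpty):

#include <termios.h>

// throw away data received but not yet read from the master side
if (tcflush(master, TCIFLUSH) != 0)
    perror("tcflush");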
I've been learning sockets, and I have created a basic server that you can telnet into and type messages; when you press enter, the message is printed on the server.
Since it's telnet, every key press gets sent to the server. So I basically hold all sent bytes in a buffer, and when a carriage return ("\r\n") is received, I discard it and print out the client's current buffer. Then I clear the client's buffer.
My problem is that every once in a while (and I'm not quite sure how to replicate it), the first "line" of data I send gets an extra space tacked onto each character. For example, I'll type "Test" on the telnet client, but my server will receive it as "T e s t ". I always clear the receiving buffer before receiving any data. One obvious solution is just to remove all spaces server-side, but that would break my ability to send more than one word. Is this just an issue with my telnet, or is there something I can do on the server to fix this?
I am using the WinSock2 API and Windows 10 Telnet.
EDIT:
I have checked the hex value of the extra character, and it is 0x20.
EDIT:
Here is the code that receives and handles the incoming telnet data.
// This client is trying to send some data to us
memset(receiveBuffer, sizeof(receiveBuffer), 0);
int receivedBytes = recv(client->socket, receiveBuffer, sizeof(receiveBuffer), 0);
if (receivedBytes == SOCKET_ERROR)
{
FD_CLR(client->socket, &masterFDSet);
std::cerr << "Error! recv(): " << WSAGetLastError() << std::endl;
closesocket(client->socket);
client->isDisconnected = true;
continue;
}
else if (receivedBytes == 0)
{
FD_CLR(client->socket, &masterFDSet);
std::cout << "Socket " << client->socket << " was closed by the client." << std::endl;
closesocket(client->socket);
client->isDisconnected = true;
continue;
}
// Print out the hex value of the incoming data, for debug purposes
const int siz_ar = strlen(receiveBuffer);
for (int i = 0; i < siz_ar; i++)
{
std::cout << std::hex << (int)receiveBuffer[i] << " " << std::dec;
}
std::cout << std::endl;
std::string stringCRLF = "\r\n"; // Carriage return representation
std::string stringBS = "\b"; // Backspace representation
std::string commandBuffer = receiveBuffer;
if (commandBuffer.find(stringCRLF) != std::string::npos)
{
// New line detected. Process message.
ProcessClientMessage(client);
}
else if (commandBuffer.find(stringBS) != std::string::npos)
{
// Backspace detected,
int size = strlen(client->dataBuffer);
client->dataBuffer[size - 1] = '\0';
}
else
{
// Strip any extra dumb characters that might have found their way in there
commandBuffer.erase(std::remove(commandBuffer.begin(), commandBuffer.end(), '\r'), commandBuffer.end());
commandBuffer.erase(std::remove(commandBuffer.begin(), commandBuffer.end(), '\n'), commandBuffer.end());
// Add the new data to the clients data buffer
strcat_s(client->dataBuffer, sizeof(client->dataBuffer), commandBuffer.c_str());
}
std::cout << "length of data buffer is " << strlen(client->dataBuffer) << std::endl;
You have two major problems.
First, you have a variable, receivedBytes that knows the number of bytes you received. Why then do you call strlen? You have no guarantee that the data you received is a C-style string. It could, for example, contain embedded zero bytes. Do not call strlen on it.
Second, you check the data you just received for a \r\n, rather than the full receive buffer. And you receive data into the beginning of the receive buffer, not the first unused space in it. As a result, if one call to recv gets the \r and the next gets the \n, your code will do the wrong thing.
You never actually wrote code to receive a message. You never actually created a message buffer to hold the received message.
Your code, my comments:
memset(receiveBuffer, sizeof(receiveBuffer), 0);
You don't need this. You shouldn't need this. If you do, there is a bug later in your code. (As written it also does nothing: the value and size arguments are swapped, so zero bytes are set.)
int receivedBytes = recv(client->socket, receiveBuffer, sizeof(receiveBuffer), 0);
if (receivedBytes == SOCKET_ERROR)
{
FD_CLR(client->socket, &masterFDSet);
std::cerr << "Error! recv(): " << WSAGetLastError() << std::endl;
closesocket(client->socket);
client->isDisconnected = true;
continue;
You mean 'break'. You got an error. You closed the socket. There is nothing to continue.
}
else if (receivedBytes == 0)
{
FD_CLR(client->socket, &masterFDSet);
std::cout << "Socket " << client->socket << " was closed by the client." << std::endl;
closesocket(client->socket);
client->isDisconnected = true;
continue;
Ditto. You mean 'break'. The peer closed the connection and you closed the socket. There is nothing to continue.
}
// Print out the hex value of the incoming data, for debug purposes
const int siz_ar = strlen(receiveBuffer);
Bzzzzzzzzzzzzt. There is no guarantee there is a null anywhere in the buffer. You don't need this variable. The correct value is already present, in receivedBytes.
for (int i = 0; i < siz_ar; i++)
That should be for (int i = 0; i < receivedBytes; i++).
{
std::cout << std::hex << (int)receiveBuffer[i] << " " << std::dec;
}
std::cout << std::endl;
std::string stringCRLF = "\r\n"; // Carriage return representation
No. That is a carriage return (\r) followed by a line feed (\n), often called CRLF as indeed you have yourself in the variable name. This is the standard line terminator in Telnet.
std::string stringBS = "\b"; // Backspace representation
std::string commandBuffer = receiveBuffer;
Bzzt. This copy should be length-delimited by receivedBytes.
if (commandBuffer.find(stringCRLF) != std::string::npos)
As noted by @DavidSchwartz, you can't assume you got the CR and the LF in the same buffer.
{
// New line detected. Process message.
ProcessClientMessage(client);
}
else if (commandBuffer.find(stringBS) != std::string::npos)
{
// Backspace detected,
int size = strlen(client->dataBuffer);
client->dataBuffer[size - 1] = '\0';
This doesn't make any sense. You are using strlen() to tell you where the trailing null is, and then you're putting a null there. You also have the problem that there may not be a trailing null. In any case what you should be doing is removing the backspace and the character before it, which requires different code. You're also operating on the wrong data buffer.
}
else
{
// Strip any extra dumb characters that might have found their way in there
commandBuffer.erase(std::remove(commandBuffer.begin(), commandBuffer.end(), '\r'), commandBuffer.end());
commandBuffer.erase(std::remove(commandBuffer.begin(), commandBuffer.end(), '\n'), commandBuffer.end());
// Add the new data to the clients data buffer
strcat_s(client->dataBuffer, sizeof(client->dataBuffer), commandBuffer.c_str());
}
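Putting those points together, here is a sketch of the receive path (lineBuffer as a std::string member and ProcessLine are my inventions, not part of the question's code):

int receivedBytes = recv(client->socket, receiveBuffer, sizeof(receiveBuffer), 0);
if (receivedBytes > 0)
{
    // Append exactly the bytes that arrived; never assume a trailing null.
    client->lineBuffer.append(receiveBuffer, receivedBytes);

    // Handle any complete lines; an incomplete line stays buffered
    // until the rest of it (perhaps just the '\n') arrives later.
    std::string::size_type pos;
    while ((pos = client->lineBuffer.find("\r\n")) != std::string::npos)
    {
        std::string line = client->lineBuffer.substr(0, pos);
        client->lineBuffer.erase(0, pos + 2);
        ProcessLine(client, line);
    }
}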
I am trying to make sure that my SSL server does not break down when a client does not collect all data (fixed, it was one minor bug) or when the data is too long.
Basically what I'm trying to do is write in a non-blocking way. For that I found two different approaches:
First approach:
using this code
int flags = fcntl(ret.fdsock, F_GETFL, 0);
fcntl(ret.fdsock, F_SETFL, flags | O_NONBLOCK);
and creating the SSL connection with it.
Second approach:
Doing this directly after creating the SSL object using SSL_new(ctx):
BIO *sock = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
BIO_set_nbio(sock, 1);
SSL_set_bio(client, sock, sock);
Both of which have their downsides, but neither of which helps solving the problem.
The first approach seems to read in a non-blocking way just fine, but when I write more data than the client reads, my server crashes.
The second approach does not seem to do anything, so my guess is that I did something wrong or did not understand what a BIO actually does.
For more information, here is how the server writes to the client:
int SSLConnection::send(char* msg, const int size){
int rest_size = size;
int bytes_sent = 0;
char* begin = msg;
std::cout << "expected bytes to send: " << size << std::endl;
while(rest_size > 0) {
int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
std::cout << "any error : " << ERR_get_error()<< std::endl;
std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
if (tmp_bytes_sent < 0){
std::cout << tmp_bytes_sent << std::endl;
std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent)<< std::endl;
} else {
bytes_sent += tmp_bytes_sent;
rest_size -= tmp_bytes_sent;
begin = msg+bytes_sent;
}
}
return bytes_sent;
}
Output:
expected bytes to send: 78888890
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(means: hit <return> to close window)
EDIT: After people said that I need to catch errors appropriately, here is my new code:
Setup:
connection = SSL_new(ctx);
if (connection){
BIO * sbio = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
if (sbio) {
BIO_set_nbio(sbio, false);
SSL_set_bio(connection, sbio, sbio);
SSL_set_accept_state(connection);
} else {
std::cout << "Bio is null" << std::endl;
}
} else {
std::cout << "client is null" << std::endl;
}
Sending:
int SSLConnection::send(char* msg, const int size){
if(connection == NULL) {
std::cout << "ERR: Connection is NULL" << std::endl;
return -1;
}
int rest_size = size;
int bytes_sent = 0;
char* begin = msg;
std::cout << "expected bytes to send: " << size << std::endl;
while(rest_size > 0) {
int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
std::cout << "any error : " << ERR_get_error()<< std::endl;
std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
if (tmp_bytes_sent < 0){
std::cout << tmp_bytes_sent << std::endl;
std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent)<< std::endl;
break;
} else if (tmp_bytes_sent == 0){
std::cout << "tmp_bytes are 0" << std::endl;
break;
} else {
bytes_sent += tmp_bytes_sent;
rest_size -= tmp_bytes_sent;
begin = msg+bytes_sent;
}
}
return bytes_sent;
}
Using a client that fetches 60 bytes, here is the output:
Output writing 1,000,000 Bytes:
expected bytes to send: 1000000
any error : 0
tmp_bytes_sent: 16384
any error : 0
tmp_bytes_sent: 16384
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(translates to: hit <RETURN> to close window)
Output writing 1,000 bytes:
expected bytes to send: 1000
any error : 0
tmp_bytes_sent: 1000
connection closed <- expected output
First, a warning: non-blocking I/O over SSL is a rather baroque API, and it's difficult to use correctly. In particular, the SSL layer sometimes needs to read internal data before it can write user data (or vice versa), and the caller's code is expected to be able to handle that based on the error-codes feedback it gets from the SSL calls it makes. It can be made to work correctly, but it's not easy or obvious -- you are de facto required to implement a state machine in your code that echoes the state machine inside the SSL library.
Below is a simplified version of the logic that is required (it's extracted from the Write() method in this file, which is part of this library, in case you want to see a complete, working implementation).
enum {
SSL_STATE_READ_WANTS_READABLE_SOCKET = 0x01,
SSL_STATE_READ_WANTS_WRITEABLE_SOCKET = 0x02,
SSL_STATE_WRITE_WANTS_READABLE_SOCKET = 0x04,
SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET = 0x08
};
// a bit-chord of SSL_STATE_* bits to keep track of what
// the SSL layer needs us to do next before it can make more progress
uint32_t _sslState = 0;
// Note that this method returns the number of bytes sent, or -1
// if there was a fatal error. So if this method returns 0 that just
// means that this function was not able to send any bytes at this time.
int32_t SSLSocketDataIO::Write(const void *buffer, uint32_t size)
{
int32_t bytes = SSL_write(_ssl, buffer, size);
if (bytes > 0)
{
// SSL was able to send some bytes, so clear the relevant SSL-state-flags
_sslState &= ~(SSL_STATE_WRITE_WANTS_READABLE_SOCKET | SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET);
}
else if (bytes == 0)
{
return -1; // the SSL connection was closed, so return failure
}
else
{
// The SSL layer's internal needs aren't being met, so we now have to
// ask it what its problem is, then give it what it wants. :P
int err = SSL_get_error(_ssl, bytes);
if (err == SSL_ERROR_WANT_READ)
{
// SSL can't write anything more until the socket becomes readable,
// so we need to go back to our event loop, wait until the
// socket select()'s as readable, and then call SSL_write() again.
_sslState |= SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
_sslState &= ~SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
bytes = 0; // Tell the caller we weren't able to send anything yet
}
else if (err == SSL_ERROR_WANT_WRITE)
{
// SSL can't write anything more until the socket becomes writable,
// so we need to go back to our event loop, wait until the
// socket select()'s as writeable, and then call SSL_write() again.
_sslState &= ~SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
_sslState |= SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
bytes = 0; // Tell the caller we weren't able to send anything yet
}
else
{
// SSL had some other problem I don't know how to deal with,
// so just print some debug output and then return failure.
fprintf(stderr,"SSL_write() ERROR!");
ERR_print_errors_fp(stderr);
}
}
return bytes; // Returns the number of bytes we actually sent
}
I think your problem is
rest_size -= bytes_sent;
You should do rest_size -= tmp_bytes_sent;
Also
if (tmp_bytes_sent < 0){
std::cout << tmp_bytes_sent << std::endl;
//its an error condition
return bytes_sent;
}
I don't know whether this will fix the issue, but the code you pasted has the above-mentioned issues.
When I write more data, than the client reads, my server crashes.
No it doesn't, unless you've violently miscoded something else that you haven't posted here. It either loops forever or it gets an error: probably ECONNRESET, which means the client has behaved as you described, and you've detected it, so you should close the connection and forget about him. Instead of which, you are just looping forever, trying to send the data to a broken connection, which can never happen.
And when you get an error, there's not much use in just printing a -1. You should print the error, with perror() or errno or strerror().
Speaking of looping forever, don't loop like this. SSL_write() can return 0, which you aren't handling at all: this will cause an infinite loop. See also David Schwartz's comments below.
NB you should definitely use the second approach. OpenSSL needs to know that the socket is in non-blocking mode.
Both of which have their downsides
Such as?
And as noted in the other answer,
rest_size -= bytes_sent;
should be
rest_size -= tmp_bytes_sent;
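Putting both answers together, a sketch of the corrected loop (same structure as the posted send(); the WANT_READ/WANT_WRITE cases mean "try again once the event loop says the socket is ready"):

while (rest_size > 0) {
    int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
    if (tmp_bytes_sent > 0) {
        bytes_sent += tmp_bytes_sent;
        rest_size -= tmp_bytes_sent;
        begin = msg + bytes_sent;
        continue;
    }
    // 0 or negative: ask OpenSSL what actually happened
    int err = SSL_get_error(connection, tmp_bytes_sent);
    if (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
        break;                    // not fatal; retry later from the event loop
    ERR_print_errors_fp(stderr);  // fatal (e.g. the client reset the connection)
    return -1;
}
return bytes_sent;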
EDIT: There was a logic error in Socket::CanReceive(): I was checking for input for 1 millisecond. That's why everything worked while stepping in gdb.
I have a problem with C sockets. send()/recv() don't do anything in non-debug mode. I can't even std::cout their return value; for some reason std::cout isn't working in my method, and I can't std::cerr errno either. There is no point in checking it in gdb, because there everything works perfectly. Wireshark doesn't log any packets in non-debug mode.
//b - buffer
//s - size
//sd - socket descriptor
int32_t TCP::Receive(char* b, uint32_t s)
{
    Error::Critical.SetErrorNumber(Error::List::NoError);
    if (!Socket::Validate(sd))
    {
        Error::Critical.SetErrorNumber(Error::List::InvalidSocket);
        return -1;
    }
    if (Disconnected())
    {
        Error::Critical.SetErrorNumber(Error::List::NotConnected);
        return -1;
    }
    if (!Socket::CanReceive(sd, readTimeout))
        return false;
    if (!b)
    {
        b = new char [s + 1];
        std::memset(b, '\0', s + 1);
    }
    int32_t bytes = recv(sd, b, s, 0);
    if (bytes == -1)
    {
        Error::Critical.SetErrorNumber(errno);
        std::cerr << errno << "\n";
        return false;
    }
    std::cout << bytes;
    return bytes;
}
Interestingly, gdb fails too without stepping: if I don't set a breakpoint in this method, it fails and Wireshark logs nothing. I thought it could be an issue with timings, so that the server has no time to respond or something, but guess what? sleep() doesn't work in either method.
I'm not posting TCP::Send(), because there is only one line of difference.
You're not flushing the streams, so you don't see output. Change:
std::cerr << errno << "\n";
to
std::cerr << errno << std::endl;
and
std::cout << bytes;
to
std::cout << bytes << std::endl;
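If you would rather not flush at every call, a sketch using the standard unitbuf manipulator makes the stream flush after each insertion:

std::cout << std::unitbuf;   // declared in <ios>; flushes std::cout after every output operation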