Losing characters in TCP Telnet transmission - c++

I'm using Winsock to send commands through Telnet, but for some reason, when I try to send a string, a few characters occasionally get dropped. I use send:
int SendData(const string & text)
{
    send(hSocket, text.c_str(), static_cast<int>(text.size()), 0);
    Sleep(100);
    send(hSocket, "\r", 1, 0);
    Sleep(100);
    return 0;
}
Any suggestions?
Update:
I checked and the error still occurs even if all the characters are sent. So I decided to change the Send function so that it sends individual characters and checks if they have been sent:
void SafeSend(const string &text)
{
    char char_text[1];
    for(size_t i = 0; i < text.size(); ++i)
    {
        char_text[0] = text[i];
        while(send(hSocket, char_text, 1, 0) != 1);
    }
}
Also, it drops characters in a peculiar way, i.e. in the middle of the sentence. E.g.
set variable [fp]exit_flag = true
is sent as
ariable [fp]exit_flag = true
Or
set variable [fp]app_flag = true
is sent as
setrable [fp]app_flag = true

As mentioned in the comments, you absolutely need to check the return value of send, as it can return after sending only part of your buffer.
You nearly always want to call send in a loop similar to the following (not tested, as I don't have a Windows development environment available at the moment):
bool SendString(const std::string& text) {
    int remaining = static_cast<int>(text.length());
    const char* buf = text.data();
    while (remaining > 0) {
        int sent = send(hSocket, buf, remaining, 0);
        if (sent == SOCKET_ERROR) {
            /* Error occurred, check WSAGetLastError() */
            return false;
        }
        remaining -= sent;
        buf += sent;
    }
    return true;
}
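For example, the original SendData could then be rewritten on top of it, keeping its return convention (again a sketch, untested; returning SOCKET_ERROR on failure is my choice, not something from the question):
int SendData(const string & text)
{
    if (!SendString(text))
        return SOCKET_ERROR;
    // Telnet end-of-line; the Sleep(100) calls become unnecessary once
    // the loop guarantees the whole buffer is handed to the stack.
    if (!SendString("\r"))
        return SOCKET_ERROR;
    return 0;
}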
Update:
This is not relevant for the OP, but calls to recv should also be structured in the same way as above.
To debug further, Wireshark (or equivalent software) is excellent at tracking down the source of the problem.
Filter the packets you want to look at (it has lots of options) and check if they include what you think they include.
Also note that telnet is a protocol with numerous RFCs. Most of the time you can get away with just sending raw text, but it's not really guaranteed to work.
You mention that the Windows telnet client sends different bytes than yours; capture a minimal sequence from both clients and compare them. Use the RFCs to figure out what the other client does differently and why. You can use "View -> Packet Bytes" to bring up the data of the packet, so you can easily inspect and copy/paste the hex dump.
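If you do need to speak actual Telnet rather than raw text, the bare minimum is usually to recognize IAC (byte 255) sequences and refuse all option negotiation. A minimal sketch of the refusal reply, assuming the hSocket from the question (constants per RFC 854; return value check omitted for brevity):
// Refuse a Telnet option: answer DO with WONT, and WILL with DONT.
const unsigned char IAC = 255, DONT = 254, DO = 253, WONT = 252, WILL = 251;

void RefuseOption(unsigned char verb, unsigned char option)
{
    unsigned char reply[3] = {
        IAC,
        static_cast<unsigned char>(verb == DO ? WONT : DONT),
        option
    };
    send(hSocket, reinterpret_cast<const char*>(reply), 3, 0);
}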

Related

How to edit SIM800l library to ensure that a call is established

I use a SIM800l to make calls with an Arduino UNO using AT commands. With this library I make calls via the gprsTest.callUp(number) function. The problem is that it returns true even if the number is wrong or there is no credit.
It is clear from this part of the code of the GPRS_Shield_Arduino.cpp library why it happens: it doesn't check the return of ATDnumberhere;
bool GPRS::callUp(char *number)
{
    //char cmd[24];
    if(!sim900_check_with_cmd("AT+COLP=1\r\n","OK\r\n",CMD)) {
        return false;
    }
    delay(1000);
    //HACERR: remove the sprintf to save memory ???
    //sprintf(cmd,"ATD%s;\r\n", number);
    //sim900_send_cmd(cmd);
    sim900_send_cmd("ATD");
    sim900_send_cmd(number);
    sim900_send_cmd(";\r\n");
    return true;
}
The response to ATDnumberhere; over the software serial connection is:
If number is wrong
ERROR
If there is no credit
MO CONNECTED //instant response
+COLP: "003069XXXXXXXX",129,"",0,"" // after 3 sec
OK
If it is calling and no answer
MO RING //instant response, it is ringing
NO ANSWER // after some sec
If it is calling and hang up
MO RING //instant response
NO CARRIER // after some sec
If the receiver has not carrier
ATD6985952400;
NO CARRIER
If it is calling , answer and hang up
MO RING
MO CONNECTED
+COLP: "69XXXXXXXX",129,"",0,""
OK
NO CARRIER
The question is how to use the different returns for every case with this function gprsTest.callUp(number), or at least how to return true if it is ringing?
At first glance this library code seems better than the worst I have seen, but it still has some issues. The most severe is its final result code handling.
The sim900_check_with_cmd function is conceptually almost there; however, only checking for OK is in no way acceptable. It should check for every single possible final result code the modem might send.
From your output examples you have the following final result codes:
OK
ERROR
NO CARRIER
NO ANSWER
but there exist a few more as well. You can look at the code for atinout for an example of an is_final_result_code function (you can also compare with isFinalResponseError and isFinalResponseSuccess1 in ST-Ericsson's U300 RIL).
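For illustration only (this is a sketch of mine, not atinout's actual code), such a check could look roughly like this, covering the V.250 final result codes from your list plus the common extended error forms:
#include <string.h>

// Sketch: true if 'line' is a final result code. Covers OK, ERROR,
// BUSY, NO ANSWER, NO CARRIER, NO DIALTONE, plus the extended
// +CME ERROR / +CMS ERROR forms. Prefix matching is good enough
// for a sketch.
bool is_final_result_code(const char *line)
{
    static const char *codes[] = {
        "OK", "ERROR", "BUSY", "NO ANSWER", "NO CARRIER", "NO DIALTONE"
    };
    for (size_t i = 0; i < sizeof codes / sizeof codes[0]; i++) {
        if (strncmp(line, codes[i], strlen(codes[i])) == 0)
            return true;
    }
    return strncmp(line, "+CME ERROR:", 11) == 0
        || strncmp(line, "+CMS ERROR:", 11) == 0;
}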
The unconditional return true; at the end of GPRS::callUp is an error, but it might be deliberate due to a lack of ideas for implementing a better API so that the calling client could check the intermediate result codes. But that is such a wrong way to do it.
The library really should do all the stateful command line invocation and final result code parsing with no exceptions. Just doing parts of that in the library and leaving some of it up to the client is just bad design.
When clients want to inspect or act on intermediate result codes or information text that comes between the command line and the final result code, the correct way to do it is to let the library "deframe" everything it receives from the modem into individual complete lines, and for everything that is not a final result code provide this to the client through a callback function.
The following is from an unfinished update to my atinout program:
bool send_commandline(
    const char *cmdline,
    const char *prefix,
    void (*handler)(const char *response_line, void *ptr),
    void *ptr,
    FILE *modem)
{
    int res;
    char response_line[1024];

    DEBUG(DEBUG_MODEM_WRITE, ">%s\n", cmdline);
    res = fputs(cmdline, modem);
    if (res < 0) {
        error(ERR "failed to send '%s' to modem (res = %d)", cmdline, res);
        return false;
    }
    /*
     * Adding a tiny delay here to avoid losing input data which
     * sometimes happens when immediately jumping into reading
     * responses from the modem.
     */
    sleep_milliseconds(200);
    do {
        const char *line;
        line = fgets(response_line, (int)sizeof(response_line), modem);
        if (line == NULL) {
            error(ERR "EOF from modem");
            return false;
        }
        DEBUG(DEBUG_MODEM_READ, "<%s\n", line);
        if (prefix[0] == '\0') {
            handler(response_line, ptr);
        } else if (STARTS_WITH(response_line, prefix)) {
            handler(response_line + strlen(prefix) + strlen(" "), ptr);
        }
    } while (! is_final_result(response_line));
    return strcmp(response_line, "OK\r\n") == 0;
}
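Hypothetical usage of send_commandline (print_line and the modem FILE* are my illustration, not part of atinout); with an empty prefix the handler sees every line between the command and the final result code:
/* Sketch: a handler that just echoes each response line. */
static void print_line(const char *response_line, void *ptr)
{
    (void)ptr; /* unused in this toy handler */
    printf("modem: %s", response_line);
}

/* ... */
bool ok = send_commandline("ATD6985952400;\r", "", print_line, NULL, modem);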
You can use that as a basis for implementing proper handling. If you want to get error responses out of the function, add an additional callback argument and change the final return to
success = strcmp(response_line, "OK\r\n") == 0;
if (!success) {
    error_handler(response_line, ptr);
}
return success;
Tip: Read all of chapter 5 in the V.250 specification; it will teach you almost everything you need to know about command lines, result codes and response handling. Like, for instance, that a command line should be terminated with \r only, not \r\n.
1 Note that CONNECT is not a final result code; it is an intermediate result code, so the name isFinalResponseSuccess is strictly speaking not 100% correct.

libssh - send request multiple times and read answer from channel

I have created one session and one channel. A request (ssh_channel_request_exec) is supposed to be sent about every second, and I want to read the answer (ssh_channel_read) to this request. However, I could not find an example of how to make multiple requests; the API only contains an example of how to make a request, read the answer and close the channel.
When I try requesting and reading from the channel twice in sequence, ssh_channel_request_exec returns an error the second time. Is it necessary to open a new channel for each request?
int ret = ssh_channel_request_exec(m_channel, request.c_str());
if (ret != SSH_OK)
{
    ssh_channel_close(m_channel);
    ssh_channel_free(m_channel);
    return false;
}
std::stringstream sstream;
nbytes = ssh_channel_read(m_channel, buffer, sizeof(buffer), 0);
while (nbytes > 0)
{
    sstream.write(buffer, nbytes);
    nbytes = ssh_channel_read(m_channel, buffer, sizeof(buffer), 0);
}
if (nbytes < 0) // 'nbytes' becomes zero the first time; the channel is not closed the second time!
{
    ssh_channel_close(m_channel);
    ssh_channel_free(m_channel);
    return false;
}
response = sstream.str();
The API also contains an ssh_channel_send_eof function that is used after reading from the channel in an example. What is it good for? The API only says "Send an end of file on the channel. This doesn't close the channel. You may still read from it but not write."
A check with ssh_channel_is_eof (API: "Check if remote has sent an EOF") shows that the channel is not at EOF the first time, but the second time it is.
Yes. Non-interactive exec is intended for a single command to be executed, and therefore you can run only one per channel, as stated in the official documentation.
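So the usual pattern is one fresh channel per command. A sketch built from the documented libssh calls (error handling trimmed):
#include <libssh/libssh.h>
#include <sstream>
#include <string>

bool ExecOnce(ssh_session session, const std::string& request, std::string& response)
{
    ssh_channel channel = ssh_channel_new(session);
    if (channel == NULL)
        return false;
    if (ssh_channel_open_session(channel) != SSH_OK) {
        ssh_channel_free(channel);
        return false;
    }
    if (ssh_channel_request_exec(channel, request.c_str()) != SSH_OK) {
        ssh_channel_close(channel);
        ssh_channel_free(channel);
        return false;
    }
    std::stringstream sstream;
    char buffer[256];
    int nbytes = ssh_channel_read(channel, buffer, sizeof(buffer), 0);
    while (nbytes > 0) {
        sstream.write(buffer, nbytes);
        nbytes = ssh_channel_read(channel, buffer, sizeof(buffer), 0);
    }
    bool ok = (nbytes == 0); // a negative value means a read error
    ssh_channel_send_eof(channel);
    ssh_channel_close(channel);
    ssh_channel_free(channel);
    if (ok)
        response = sstream.str();
    return ok;
}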

Windows C++ Intermittent Socket Disconnect

I've got a server that uses a two-thread system to manage between 100 and 200 concurrent connections. It uses TCP sockets, as packet delivery guarantees are important (it's a communication system where missed remote API calls could FUBAR a client).
I've implemented a custom protocol layer to separate incoming bytes into packets and dispatch them properly (the library is included below). I realize the issues of using MSG_PEEK, but to my knowledge, it is the only system that will fulfill the needs of the library implementation. I am open to suggestions, especially if it could be part of the problem.
Basically, the problem is that, randomly, the server will drop the client's socket due to a lack of incoming packets for more than 20 seconds, despite the client successfully sending a keepalive packet every 4. I can verify that the server itself didn't go offline and that the connection of the users (including myself) experiencing the problem is stable.
The library for sending/receiving is here:
short ncsocket::send(wstring command, wstring data) {
    wstringstream ss;
    int datalen = ((int)command.length() * 2) + ((int)data.length() * 2) + 12;
    ss << zero_pad_int(datalen) << L"|" << command << L"|" << data;
    int tosend = datalen;
    short __rc = 0;
    do {
        int res = ::send(this->sock, (const char*)ss.str().c_str(), datalen, NULL);
        if (res != SOCKET_ERROR)
            tosend -= res;
        else
            return FALSE;
        __rc++;
        Sleep(10);
    } while (tosend != 0 && __rc < 10);
    if (tosend == 0)
        return TRUE;
    return FALSE;
}
short ncsocket::recv(netcommand& nc) {
    vector<wchar_t> buffer(BUFFER_SIZE);
    int recvd = ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, MSG_PEEK);
    if (recvd > 0) {
        if (recvd > 8) {
            wchar_t* lenstr = new wchar_t[4];
            memcpy(lenstr, buffer.data(), 8);
            int fulllen = _wtoi(lenstr);
            delete lenstr;
            if (fulllen > 0) {
                if (recvd >= fulllen) {
                    buffer.resize(fulllen / 2);
                    recvd = ::recv(this->sock, (char*)buffer.data(), fulllen, NULL);
                    if (recvd >= fulllen) {
                        buffer.resize(buffer.size() + 2);
                        buffer.push_back((char)L'\0');
                        vector<wstring> data = parsewstring(L"|", buffer.data(), 2);
                        if (data.size() == 3) {
                            nc.command = data[1];
                            nc.payload = data[2];
                            return TRUE;
                        }
                        else
                            return FALSE;
                    }
                    else
                        return FALSE;
                }
                else
                    return FALSE;
            }
            else {
                ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, NULL);
                return FALSE;
            }
        }
        else
            return FALSE;
    }
    else
        return FALSE;
}
This is the code for determining if too much time has passed:
if ((int)difftime(time(0), regusrs[i].last_recvd) > SERVER_TIMEOUT) {
    regusrs[i].sock.end();
    regusrs[i].is_valid = FALSE;
    send_to_all(L"removeuser", regusrs[i].server_user_id);
    wstringstream log_entry;
    log_entry << regusrs[i].firstname << L" " << regusrs[i].lastname << L" (suid:" << regusrs[i].server_user_id << L",p:" << regusrs[i].parent << L",pid:" << regusrs[i].parentid << L") was disconnected due to idle";
    write_to_log_file(server_log, log_entry.str());
}
The "regusrs[i]" is the currently iterated member of a vector I use to store socket descriptors and user information. The 'is_valid' check is there to tell if the associated user is an actual user - this is done to prevent the system from having to deallocate the member of the vector - it just returns it to the pool of available slots. No thread access/out-of-range issues that way.
Anyway, I started to wonder if the server itself was the problem. I'm testing on another server currently, but I wanted to see if another set of eyes could spot something out of place or clue me in on a concept with sockets and extended keepalives that I'm not aware of.
Thanks in advance!
I think I see what you're doing with MSG_PEEK, where you wait until it looks like you have enough data to read a full packet. However, I would be suspicious of this. (It's hard to determine the dynamic behaviour of your system just by looking at this small part of the source and not the whole thing.)
To avoid use of MSG_PEEK, follow these two principles:
When you get a notification that data is ready (I assume you're using select), then read all the waiting data from recv(). You may use more than one recv() call, so you can handle the incoming data in pieces.
If you read only a partial packet (length or payload), then save it somewhere for the next time you get a read notification. Put the packets and payloads back together yourself, don't leave them in the socket buffer.
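A sketch of that approach, with hypothetical names (OnReadable, HandlePacket) and the 8-byte (4 wide character) length prefix from your protocol; Windows-flavoured to match the question:
// Accumulate bytes in a connection-local buffer, then peel off complete
// packets; a partial packet simply stays in 'pending' until more data arrives.
std::vector<char> pending; // one per connection

void OnReadable(SOCKET sock)
{
    char chunk[4096];
    int n = ::recv(sock, chunk, sizeof(chunk), 0); // note: no MSG_PEEK
    if (n <= 0) { /* closed or error - tear the connection down */ return; }
    pending.insert(pending.end(), chunk, chunk + n);

    // The protocol above stores a zero-padded length in the
    // first 8 bytes (4 wide characters).
    while (pending.size() >= 8) {
        wchar_t lenstr[5] = {0};
        memcpy(lenstr, pending.data(), 8);
        int fulllen = _wtoi(lenstr);
        if (fulllen <= 0) { /* malformed stream - drop the connection */ return; }
        if (pending.size() < (size_t)fulllen)
            break; // incomplete packet: keep it for next time
        HandlePacket(pending.data(), fulllen); // hypothetical dispatcher
        pending.erase(pending.begin(), pending.begin() + fulllen);
    }
}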
As an aside, the use of new/memcpy/wtoi/delete is woefully inefficient. You don't need to allocate memory at all, you can use a local variable. And then you don't even need the memcpy at all, just a cast.
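For the length field that boils down to something like this sketch:
// A local, zero-initialized array instead of new/delete; unlike the
// new wchar_t[4] above it is also NUL-terminated for _wtoi.
wchar_t lenstr[5] = {0};
memcpy(lenstr, buffer.data(), 8); // the first 4 wide chars hold the length
int fulllen = _wtoi(lenstr);
// Or skip the copy entirely: _wtoi(buffer.data()) also works here,
// because parsing stops at the '|' separator.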
I presume you already assume that your packets can be no longer than 999 bytes in length.

Berkeley Socket Send returning 0 on successful non-blocking send

I am writing a non-blocking chat server. So far the server works fine, but I can't figure out how to correct for partial sends if they happen. The send(int, char*, int) function always returns 0 on a successful send and -1 on a failed one. Every doc/man page I have read says it should return the number of bytes actually fed to the network buffer. I have checked to be sure that I can send to the server and recv back the data repeatedly without problems.
This is the function I use to call send. While debugging, I tried both printing the return value to the console and breaking on return ReturnValue;. Same result: ReturnValue is always 0 or -1.
int Connection::Transmit(string MessageToSend)
{
    // check for send attempts on a closed socket
    // and return if it happens.
    if(this->Socket_Filedescriptor == -1)
        return -1;

    // Send a message to the client on the other end.
    // Note: the last parameter is a flag bit which
    // is used for declaring out-of-band data transmissions.
    ReturnValue = send(Socket_Filedescriptor,
                       MessageToSend.c_str(),
                       MessageToSend.length(),
                       0);
    return ReturnValue;
}
Why don't you try to send in a loop? For instance:
int Connection::Transmit(string MessageToSend)
{
    // check for send attempts on a closed socket
    // and return if it happens.
    if(this->Socket_Filedescriptor == -1)
        return -1;

    int expected = MessageToSend.length();
    int sent = 0;

    // Send a message to the client on the other end.
    // Note: the last parameter is a flag bit which
    // is used for declaring out-of-band data transmissions.
    while(sent < expected) {
        ReturnValue = send(Socket_Filedescriptor,
                           MessageToSend.c_str() + sent,  // Send from the correct location
                           MessageToSend.length() - sent, // Update how much remains
                           0);
        if(ReturnValue == -1)
            return -1; // Error occurred
        sent += ReturnValue;
    }
    return sent;
}
This way your code will continually try to send all the data until either an error occurs, or all data is sent successfully.
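One caveat, since you mention the server is non-blocking: on a non-blocking socket, send can also fail with errno set to EWOULDBLOCK/EAGAIN when the kernel's send buffer is full, and the loop above would report that as a hard error. A sketch of how the error branch could distinguish the two cases (assuming a select-based wait; needs errno.h and sys/select.h):
if(ReturnValue == -1)
{
    if(errno == EWOULDBLOCK || errno == EAGAIN)
    {
        // Not a real failure: the send buffer is full. Wait until
        // the socket is writable again, then retry the remainder.
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(Socket_Filedescriptor, &wfds);
        select(Socket_Filedescriptor + 1, NULL, &wfds, NULL, NULL);
        continue;
    }
    return -1; // genuine error
}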

Client application crash causes Server to crash? (C++)

I'm not sure if this is a known issue that I am running into, but I couldn't find a good search string that would give me any useful results.
Anyway, here's the basic rundown:
we've got a relatively simple application that takes data from a source (DB or file) and streams that data over TCP to connected clients as new data comes in. It's a relatively low number of clients; I would say at most 10 clients per server, so we have the following rough design:
client: connect to server, set to read (with the timeout set higher than the server heartbeat message frequency). It blocks on read.
server: one listening thread that accepts connections and then spawns a writer thread to read from the data source and write to the client. The writer thread is also detached (using boost::thread, so we just call the .detach() function). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers using a single Perl script and calling "fork" for each server process.
The problem(s):
at seemingly random times, the client will shut down with a "connection terminated (SUCCESFUL)" message, indicating that the remote server shut down the socket on purpose. However, when this happens the SERVER application ALSO closes, without any errors or anything. It just crashes.
Now, to further the problem, we have multiple instances of the server app being started by a startup script, running different files and different ports. When ONE of the servers crashes like this, ALL the servers crash out.
Both the server and client use the same "Connection" library created in-house. It's mostly a C++ wrapper for the C socket calls.
Here's some rough code for the write and read functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, connectionTimeout_read);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd, buf, amount, MSG_WAITALL);
    if(rec == 0)
        status = CONNECTION_CLOSED;
    else if(rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}
int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, -1);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if(epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size, 0);
    if(bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like it to NOT crash the server even if the client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while(server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if(s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout, "failed to accept a socket. error: %s\n", connection::getStatusString(s.getStatus()));
    }
    DATASOURCE* dataSource;
    dataSource = open_datasource(XXXX); /* edited */
    if(dataSource == NULL)
    {
        fprintf(stdout, "FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }
    boost::thread fileSender(Sender(s, dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE, SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;
while(running)
{
    memset(buf, '\0', BUFFERSIZE * sizeof(char));
    unsigned int readBytes = 0;
    while((readBytes = read_datasource(buf, sizeof(unsigned char), BUFFERSIZE, dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    socket.write(buf, readBytes);
    if(socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}
fprintf(stdout, "socket error: %s\n", connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout, "sender exiting...\n");
Any insights would be welcome! Thanks in advance.
You've probably got everything backwards... when the server crashes, the OS will close all of its sockets. So the server crash happens first and causes the client to get the disconnect message (a FIN flag in a TCP segment, actually); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also any scheduled tasks that all servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback if a client disconnects. Some internet providers have even been caught generating RST packets on connections that fail some test for suspicious traffic.
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close, not fwrite, not printf).
Check return values. Check for negative return value from write on a socket, but check all return values.
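For illustration, a minimal handler of that shape (a sketch; only async-signal-safe calls inside the handler):
#include <signal.h>
#include <string.h>
#include <unistd.h>

// Log with raw write() only; printf/fwrite are not async-signal-safe.
static void crash_handler(int sig)
{
    (void)sig;
    const char msg[] = "server: fatal signal caught\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    _exit(1);
}

// Installed early in main(), e.g.:
//   struct sigaction sa;
//   memset(&sa, 0, sizeof(sa));
//   sa.sa_handler = crash_handler;
//   sigaction(SIGSEGV, &sa, NULL);
//   sigaction(SIGABRT, &sa, NULL);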
Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was trying to do in reality; safe to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.