I have a problem with my WebSocketPP server: I want it to handle multiple clients.
Here is my onOpen method:
void Server::onOpen(
    Server* srv,
    WSServer* ws,
    websocketpp::connection_hdl& hdl)
{
    ServerPlayerTracker con;
    con.con = &hdl;
    con.protocolVersion = 0;
    con.verified = false;
    con.playerID = srv->playerCount++;
    con.roomID = 0;
    srv->players.push_back(con);
}
But on disconnection I have a problem: I can't find which player (by ID) disconnected. Here is my onClose method:
void Server::onClose(
    Server* srv,
    WSServer* ws,
    websocketpp::connection_hdl& hdl)
{
    for (int i = 0; i < srv->players.size(); i++)
    {
        if (srv->players[i].connected)
        {
            if ((*srv->players[i].con).lock() == hdl.lock())
            {
                printf("[!] Player disconnected with ID: %d\n",
                       srv->players[i].playerID);
                srv->players.erase(srv->players.begin() + i);
            }
        }
    }
}
On the line (*srv->players[i].con).lock() == hdl.lock() it throws an exception like
'this was 0xFFFFFFFFFFFFFFF7.' in file 'memory' line 75. I think it's a problem with converting the weak_ptr to a shared_ptr. Is there any way to fix that?
My comments seemed to be enough to fix the problem (see the comments).
For future reference, and to mark this question as answered, I have written this answer.
I'm not 100% sure what is (or is not) working in your current code, since it's quite different from the way connections are stored and retrieved in the example code (see the WebSocket++ GitHub/documentation example "associative storage").
Using that example, it should be rather easy to set up a multiple-client structure in the way the library creator intended.
For your specific error, I believe you're on the right track about the shared/weak pointer conversion.
The best solution would be to store the connections the way the example does.
Especially interesting is the con_list, which holds all connections.
It is a typedef, std::map<connection_hdl, connection_data, std::owner_less<connection_hdl>> con_list; con_list m_connections;, and it should enable you to store and retrieve connections (and their session data).
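A rough sketch of that pattern, adapted to your player fields (only ServerPlayerTracker and its members are taken from your code; the handler signatures follow the library's usual set_open_handler/set_close_handler form, everything else is assumed):

#include <cstdio>
#include <map>
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio> WSServer;
using websocketpp::connection_hdl;

struct ServerPlayerTracker {
    int  protocolVersion;
    bool verified;
    int  playerID;
    int  roomID;
};

// owner_less lets the weak_ptr-based connection_hdl be used as a map key.
typedef std::map<connection_hdl, ServerPlayerTracker,
                 std::owner_less<connection_hdl>> con_list;

con_list m_connections;
int playerCount = 0;

void onOpen(connection_hdl hdl) {
    ServerPlayerTracker con;
    con.protocolVersion = 0;
    con.verified = false;
    con.playerID = playerCount++;
    con.roomID = 0;
    m_connections[hdl] = con;          // the handle itself is the key
}

void onClose(connection_hdl hdl) {
    con_list::iterator it = m_connections.find(hdl);
    if (it != m_connections.end()) {
        printf("[!] Player disconnected with ID: %d\n", it->second.playerID);
        m_connections.erase(it);       // no manual weak_ptr comparison needed
    }
}

This avoids storing a pointer to the connection_hdl reference (which dangles once onOpen returns) and turns the lookup in onClose into a simple find().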
I use a SIM800L to make calls with an Arduino UNO via AT commands. Using this library I place calls with the gprsTest.callUp(number) function. The problem is that it returns true even if the number is wrong or there is no credit.
It is clear from this part of the GPRS_Shield_Arduino.cpp library code why this happens: it doesn't check the response to ATD<number>;
bool GPRS::callUp(char *number)
{
    //char cmd[24];
    if(!sim900_check_with_cmd("AT+COLP=1\r\n","OK\r\n",CMD)) {
        return false;
    }
    delay(1000);
    //HACERR: remove the sprintf to save memory ???
    //sprintf(cmd,"ATD%s;\r\n", number);
    //sim900_send_cmd(cmd);
    sim900_send_cmd("ATD");
    sim900_send_cmd(number);
    sim900_send_cmd(";\r\n");
    return true;
}
The response to ATD<number>; on the software serial connection is:
If the number is wrong:
ERROR
If there is no credit:
MO CONNECTED                            // instant response
+COLP: "003069XXXXXXXX",129,"",0,""     // after 3 sec
OK
If it is calling and there is no answer:
MO RING                                 // instant response, it is ringing
NO ANSWER                               // after some sec
If it is calling and the call is hung up:
MO RING                                 // instant response
NO CARRIER                              // after some sec
If the receiver has no carrier:
ATD6985952400;
NO CARRIER
If it is calling, the receiver answers and then hangs up:
MO RING
MO CONNECTED
+COLP: "69XXXXXXXX",129,"",0,""
OK
NO CARRIER
The question is how to make gprsTest.callUp(number) return differently for each of these cases, or at least how to make it return true only if it is ringing?
At first glance this library code seems better than the worst I have seen, but it still has some issues. The most severe is its final result code handling.
The sim900_check_with_cmd function is conceptually almost there; however, only checking for OK is in no way acceptable. It should check for every single possible final result code the modem might send.
From your output examples you have the following final result codes
OK
ERROR
NO CARRIER
NO ANSWER
but a few more exist as well. You can look at the code of atinout for an example of an is_final_result_code function (you can also compare with isFinalResponseError and isFinalResponseSuccess1 in ST-Ericsson's U300 RIL).
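As a rough illustration (this is not the actual atinout code, and the list can be extended), such a check could look like this:

#include <string.h>

// Sketch of a final result code check: the list covers the codes from the
// output above plus a few more from V.250 and 3GPP 27.007. Extend as needed.
static bool is_final_result_code(const char *line)
{
    static const char *codes[] = {
        "OK", "ERROR", "NO CARRIER", "NO ANSWER",
        "BUSY", "NO DIALTONE", "+CME ERROR:", "+CMS ERROR:"
    };
    for (unsigned i = 0; i < sizeof codes / sizeof codes[0]; i++) {
        if (strncmp(line, codes[i], strlen(codes[i])) == 0) {
            return true;
        }
    }
    return false;
}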
The unconditional return true; at the end of GPRS::callUp is an error, but it might be deliberate due to lack of ideas for implementing a better API so that the calling client could check the intermediate result codes. But that is such a wrong way to do it.
The library really should do all the stateful command line invocation and final result code parsing with no exceptions. Just doing parts of that in the library and leaving some of it up to the client is just bad design.
When clients want to inspect or act on intermediate result codes or information text that comes between the command line and the final result code, the correct way to do it is to let the library "deframe" everything it receives from the modem into individual complete lines, and for everything that is not a final result code provide this to the client through a callback function.
The following is from an unfinished update to my atinout program:
bool send_commandline(
    const char *cmdline,
    const char *prefix,
    void (*handler)(const char *response_line, void *ptr),
    void *ptr,
    FILE *modem)
{
    int res;
    char response_line[1024];

    DEBUG(DEBUG_MODEM_WRITE, ">%s\n", cmdline);
    res = fputs(cmdline, modem);
    if (res < 0) {
        error(ERR "failed to send '%s' to modem (res = %d)", cmdline, res);
        return false;
    }
    /*
     * Adding a tiny delay here to avoid losing input data which
     * sometimes happens when immediately jumping into reading
     * responses from the modem.
     */
    sleep_milliseconds(200);
    do {
        const char *line;
        line = fgets(response_line, (int)sizeof(response_line), modem);
        if (line == NULL) {
            error(ERR "EOF from modem");
            return false;
        }
        DEBUG(DEBUG_MODEM_READ, "<%s\n", line);
        if (prefix[0] == '\0') {
            handler(response_line, ptr);
        } else if (STARTS_WITH(response_line, prefix)) {
            handler(response_line + strlen(prefix) + strlen(" "), ptr);
        }
    } while (! is_final_result(response_line));
    return strcmp(response_line, "OK\r\n") == 0;
}
You can use that as a basis for implementing proper handling. If you want to
get error responses out of the function, add an additional callback argument and change the ending to
success = strcmp(response_line, "OK\r\n") == 0;
if (!success) {
    error_handler(response_line, ptr);
}
return success;
Tip: read all of chapter 5 in the V.250 specification; it will teach you almost everything you need to know about command lines, result codes and response handling. For instance, that a command line should be terminated with \r only, not \r\n.
1 Note that CONNECT is not a final result code, it is an intermediate result code, so the name isFinalResponseSuccess is strictly speaking not 100% correct.
I use Qt with QML and C++. In my application I use a database.
It all works if the database is reachable.
My problem is that I would like to check whether the database is reachable (like a ping).
I tried
db.setDatabaseName(dsn);
if(db.isValid())
{
    if(db.open())
    {
        //std::cout <<"Offene Datenbank";
        connected=true;
    }
    else
    {
        connected=false;
    }
}
else
{
    connected=false;
}
and return the connected value as the result. But that takes very long (maybe 30 seconds) if there is no connection. How can I check quickly whether I have a database connection?
Is there maybe a way to abort the .open() call after 5 seconds if it has not connected?
I think one easy solution is to just ping the database server. You can use platform-specific ways of pinging.
This would work on Linux:
int exitCode = QProcess::execute("ping", QStringList() << "-c" << "2" << serverIp);
if (exitCode == 0)
{
    // the server is reachable
}
else
{
    // the server is not reachable
}
I have studied this question a bit; here is what I found out.
The problem is the default database connection timeout: it is too long. Each database lets you change it to an acceptable value using its own API. In Qt there is one common database interface, QSqlDatabase, and it does not have such a method. You can set connection settings by calling its QSqlDatabase::setConnectOptions method, but it accepts only a predefined list of options (which you can look up in the Qt documentation).
For PostgreSQL there is an option connect_timeout, so you can write:
db.setConnectOptions("connect_timeout=5"); // Set to 5 seconds
For other databases there is no such parameter. The connection options of each database are parsed in its 'driver' class, which derives from QSqlDriver and lives in a 'driver' plugin library.
So, what you can do:
You can rewrite the database's driver so that it accepts a timeout option.
You can write separate code for each database, using its native API.
UPDATE
It turns out that ODBC has an SQL_ATTR_CONNECTION_TIMEOUT option.
UPDATE 2
qsql_odbc.cpp:713

} else if (opt.toUpper() == QLatin1String("SQL_ATTR_CONNECTION_TIMEOUT")) {
    v = val.toUInt();
    r = SQLSetConnectAttr(hDbc, SQL_ATTR_CONNECTION_TIMEOUT, (SQLPOINTER) v, 0);
https://msdn.microsoft.com/en-us/library/ms713605(v=vs.85).aspx
SQL_ATTR_CONNECTION_TIMEOUT (ODBC 3.0)
An SQLUINTEGER value
corresponding to the number of seconds to wait for any request on the
connection to complete before returning to the application. The driver
should return SQLSTATE HYT00 (Timeout expired) anytime that it is
possible to time out in a situation not associated with query
execution or login.
If ValuePtr is equal to 0 (the default), there is no timeout.
Should work fine...
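So, assuming the QODBC driver, a minimal sketch of putting that together (the 5-second values and the openWithTimeout name are just examples) could be:

#include <QSqlDatabase>
#include <QSqlError>
#include <QDebug>

bool openWithTimeout(const QString &dsn)
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QODBC");
    db.setDatabaseName(dsn);
    // SQL_ATTR_LOGIN_TIMEOUT bounds the connection attempt itself,
    // SQL_ATTR_CONNECTION_TIMEOUT bounds later requests on the connection.
    db.setConnectOptions("SQL_ATTR_LOGIN_TIMEOUT=5;SQL_ATTR_CONNECTION_TIMEOUT=5");
    if (!db.open()) {
        qDebug() << "open failed:" << db.lastError().text();
        return false;
    }
    return true;
}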
I suggest having a separate thread/class where you check the connection and emit a signal after some timeout if nothing happens (with a flag, knowConnection, recording whether we already found out if it is connected).
This code is not tested and was written off the top of my head; it may contain some errors.
/// db connection validator in separate thread
void validator::doValidate() {
    this->knowConnection = false;
    db.setDatabaseName(dsn);
    if(db.isValid())
    {
        QTimer::singleShot(1000, [this]() {
            if (!this->knowConnection) {
                emit connected(false);
                dm->connected = false;
            }
        });
        if(db.open())
        {
            //std::cout <<"Offene Datenbank";
            this->knowConnection = true;
            dm->connected = true;
            emit connected(true);
        }
        else
        {
            dm->connected = false;
            this->knowConnection = true;
            emit connected(false);
        }
    }
    else
    {
        dm->connected = false;
        this->knowConnection = true;
        emit connected(false);
    }
}
/// db manager in different thread
void dm::someDbFunction() {
    if (connected) {
        /// db logic
    }
}
/// in gui or whatever
MainWindow::MainWindow() : whatever, val(new validator(..)), .. {
    connect(val, SIGNAL(connected(bool)), this, SLOT(statusSlot(bool)));
    ....
}

void MainWindow::statusSlot(bool connected) {
    ui->statusBar->showMessage(connected ? "Connected" : "Disconnected");
}
I have written a client/server application where the server spawns multiple threads depending on the request from the client.
These threads are expected to send some data (strings) to the client.
The problem is that the data gets overwritten on the client side. How do I tackle this issue?
I have already read some other threads on a similar issue but was unable to find an exact solution.
Here is my client code to receive data.
while(1)
{
    char buff[MAX_BUFF];
    int bytes_read = read(sd,buff,MAX_BUFF);
    if(bytes_read == 0)
    {
        break;
    }
    else if(bytes_read > 0)
    {
        if(buff[bytes_read-1]=='$')
        {
            buff[bytes_read-1]='\0';
            cout<<buff;
        }
        else
        {
            cout<<buff;
        }
    }
}
Server thread code:
void send_data(int sd,char *data)
{
    write(sd,data,strlen(data));
    cout<<data;
}

void *calcWordCount(void *arg)
{
    tdata *tmp = (tdata *)arg;
    string line = tmp->line;
    string s = tmp->arg;
    int sd = tmp->sd_c;
    int line_no = tmp->line_no;
    int startpos = 0;
    int finds = 0;
    while ((startpos = line.find(s, startpos)) != std::string::npos)
    {
        ++finds;
        startpos+=1;
        pthread_mutex_lock(&myMux);
        tcount++;
        pthread_mutex_unlock(&myMux);
    }
    pthread_mutex_lock(&mapMux);
    int t=wcount[s];
    wcount[s]=t+finds;
    pthread_mutex_unlock(&mapMux);
    char buff[MAX_BUFF];
    sprintf(buff,"%s",s.c_str());
    sprintf(buff+strlen(buff),"%s"," occured ");
    sprintf(buff+strlen(buff),"%d",finds);
    sprintf(buff+strlen(buff),"%s"," times on line ");
    sprintf(buff+strlen(buff),"%d",line_no);
    sprintf(buff+strlen(buff),"\n",strlen("\n"));
    send_data(sd,buff);
    delete (tdata*)arg;
}
On the server side, make sure the shared resource (the socket, along with its associated internal buffer) is protected against concurrent access.
Define and implement an application-level protocol that the server uses so that the client can distinguish what the different threads sent.
As an additional note: one cannot rely on read()/write() reading/writing as many bytes as they were told to. It is essential to check their return values to learn how many bytes were actually read/written, and to loop around the calls until all the data intended to be read/written has been transferred; a minimal sketch of such a loop for the sending side follows.
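For example (a sketch only; the helper name write_all is made up here, and it could replace the plain write() call inside your send_data):

#include <string.h>
#include <unistd.h>

// Keep calling write() until the whole buffer has gone out (or an error occurs).
static bool write_all(int sd, const char *data, size_t len)
{
    size_t total = 0;
    while (total < len) {
        ssize_t n = write(sd, data + total, len - total);
        if (n <= 0) {
            return false;          // error or connection closed; caller decides what to do
        }
        total += (size_t)n;        // write() may accept fewer bytes than requested
    }
    return true;
}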
You should protect the socket with a mutex:
when a thread uses the socket, it should lock the mutex first (see any basic mutex example).
I can't help you more without the server code, because the problem is probably in the server.
I have a server application which sends some XOR-encrypted strings. I am reading them from my Qt client application. Sometimes the server is slower and I am not able to receive the entire string. I have tried something like the code below, but it gets stuck (see the comment in it). How can I wait until I have all the data? I tried bytesAvailable(), but then again I get stuck (infinite loop).
QTcpSocket * sock = static_cast<QTcpSocket*>(this->sender());
if (key == 0)
{
    QString recv(sock->readLine());
    key = recv.toInt();
    qDebug() << "Cheia este " << key;
    char * response = enc_dec("#AUTH|admin|admin",strlen("#AUTH|admin|admin"),key);
    sock->write(response);
}
else
{
    busy = true;
    while (sock->bytesAvailable() > 0)
    {
        unsigned short word;
        sock->read((char*)(&word),2);
        qDebug()<<word;
        //Sleep(100); if I do this then it works great!
        QByteArray bts = sock->read(word);
        while (bts.length() < word)
        {
            char bit;  // it gets stuck here
            if (sock->read(&bit,1) > 0)
                bts.append(bit);
            sock->flush();
        }
        char * decodat = enc_dec((char*)bts.data(),bts.length() - 2,key);
        qDebug() << decodat;
    }
}
I don't know what the meaning of key == 0 is, but you are almost certainly misusing available(), like almost everybody else who has ever called it, including me. It tells you how much data can be read without blocking. It has nothing to do with how much data may eventually be delivered down the connection, and the reason is that there are TCP APIs that can tell you the former, but not the latter. Indeed the latter doesn't have any real meaning, considering that the peer could keep writing from now until Doomsday. You should just block and loop until you have read the amount of data you need for the next piece of work.
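For example, one way to block and loop with QTcpSocket until a known amount of data has arrived (just a sketch; readExactly is a made-up helper, and in your case you would first read the 2-byte length and then pass that length in):

#include <QTcpSocket>
#include <QByteArray>

// Read exactly `need` bytes, waiting for more data whenever the buffer runs dry.
QByteArray readExactly(QTcpSocket *sock, qint64 need, int timeoutMs = 5000)
{
    QByteArray out;
    while (out.size() < need) {
        if (sock->bytesAvailable() == 0 && !sock->waitForReadyRead(timeoutMs)) {
            break;                 // timed out or the connection went away
        }
        out.append(sock->read(need - out.size()));
    }
    return out;
}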
I suggest you do the following:
QObject::connect(this->m_TCPSocket, SIGNAL(readyRead()), this, SLOT(processRecivedDatagrams()));
Some explanation:
It is convenient to create a class whose instance will manage the network;
It has a member which is a pointer to the TCP socket;
In the constructor, connect the socket's readyRead() signal, which is emitted when the needed data has been delivered, to the slot responsible for processing the received data; in this case that is processRecivedDatagrams(). Also implement this slot.
Mind that the class which manages the network has to inherit from QObject and must include the Q_OBJECT macro in its declaration for the MOC.
Update:
I also suggest storing the received data in a container like a stack or a queue; this will allow you to synchronize the sender and the receiver (the container acts like a buffer in this case).
// SLOT:
void Network::processRecivedDatagrams(void)
{
    if (!this->m_flagLocked)            // use analog of mutex
    {
        this->m_flagLocked = true;      // lock resource

        // QTcpSocket is a byte stream (there are no datagrams or
        // pendingDatagramSize() on TCP), so just append whatever has
        // arrived to a buffer member, e.g. a QByteArray m_buffer.
        m_buffer.append(m_TCPSocket->readAll());

        // Try to extract a string from the buffered data.
        QDataStream in(&m_buffer, QIODevice::ReadOnly);
        QString yourString;
        in >> yourString;

        this->m_flagLocked = false;     // unlock resource
    }
}
I'm not sure if this is a known issue that I am running into, but I couldn't find a good search string that would give me any useful results.
Anyway, here's the basic rundown:
we've got a relatively simple application that takes data from a source (DB or file) and streams that data over TCP to connected clients as new data comes in. It's a relatively low number of clients; I would say at most 10 clients per server, so we have the following rough design:
client: connects to the server and reads (with a timeout set higher than the server heartbeat message frequency). It blocks on read.
server: one listening thread that accepts connections and then spawns a writer thread to read from the data source and write to the client. The writer thread is also detached (using boost::thread, so we just call the .detach() function). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers using a single Perl script and call "fork" for each server process.
The problem(s):
At seemingly random times, the client will shut down with a "connection terminated (SUCCESSFUL)" message, indicating that the remote server shut down the socket on purpose. However, when this happens the SERVER application ALSO closes, without any errors or anything; it just crashes.
To make matters worse, we have multiple instances of the server app being started by a startup script, running different files and different ports. When ONE of the servers crashes like this, ALL of the servers crash out.
Both the server and the client use the same "Connection" library created in-house. It's mostly a C++ wrapper around the C socket calls.
Here's some rough code for the write and read functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd,epEvents,1,connectionTimeout_read);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if( fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd,buf,amount,MSG_WAITALL);
    if(rec == 0)
        status = CONNECTION_CLOSED;
    else if(rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}
int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd,epEvents,1,-1);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if(epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size,0);
    if(bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like the server NOT to crash even if the client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while(server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if(s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout,"failed to accept a socket. error: %s\n",
                connection::getStatusString(s.getStatus()));
    }
    DATASOURCE* dataSource;
    dataSource = open_datasource(XXXX); /* edited */
    if(dataSource == NULL)
    {
        fprintf(stdout,"FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }
    boost::thread fileSender(Sender(s,dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE,SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;
while(running)
{
    memset(buf,'\0',BUFFERSIZE*sizeof(char));
    unsigned int readBytes = 0;
    while((readBytes = read_datasource(buf,sizeof(unsigned char),BUFFERSIZE,dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    socket.write(buf,readBytes);
    if(socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}
fprintf(stdout,"socket error: %s\n",connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout,"sender exiting...\n");
You've probably got everything backwards... when the server crashes, the OS will close all its sockets. So the server crash happens first and causes the client to get the disconnect message (a FIN flag in a TCP segment, actually); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also any scheduled tasks that all servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback if a client disconnects. Some internet providers have even been caught generating RST packets on connections that fail some test for suspicious traffic.
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close; not fwrite, not printf).
Check return values. Check for a negative return value from write on a socket, but check all return values.
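A minimal sketch of such a handler (the names crashHandler and installHandlers are made up; the point is async-signal-safe calls only, so write() rather than printf()):

#include <csignal>
#include <unistd.h>

extern "C" void crashHandler(int /*sig*/)
{
    const char msg[] = "fatal signal caught, exiting\n";
    ssize_t n = write(STDERR_FILENO, msg, sizeof(msg) - 1);  // raw I/O only
    (void)n;       // nothing sensible to do here if the write fails
    _exit(1);
}

void installHandlers()   // call early in main()
{
    signal(SIGPIPE, SIG_IGN);        // writes to a dead socket then fail with EPIPE
    signal(SIGSEGV, crashHandler);
    signal(SIGBUS,  crashHandler);
    signal(SIGABRT, crashHandler);
}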
Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was actually trying to do; safe to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.