Converting pseudocode for a sink accessing the wireless medium into a Finite State Machine - C++

This pseudocode is for a sink that tries to access the wireless medium to send and receive data from sensors.
1. Set pc = 0.01
2. Send a polling packet
3. If no sensor responds to the polling packet, set pc = min(pc + 0.01, 1.0)
4. If a data packet is successfully received from one of the sensors, keep pc at its current value
5. If there is a collision between two or more sensors, as indicated by a corrupted data packet, set pc = pc / 2
6. Repeat from step 2
I have read the link on How to read a FSM diagram and it really helped me with the sensor part. But I am still confused about converting the above pseudocode into an FSM.
Can anyone suggest a link or ebook that gives a clear explanation of converting pseudocode into an FSM?

I'm not sure what you're really looking for here; coding this simply would be pretty straightforward, and this problem doesn't look like it deserves the full-blown table-driven FSM approach to me.
Here's some C-like pseudo-code:
#include <errno.h>
#include <math.h>       /* fmin() */
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>

double pc = 0.01;
int sensorsfd;

void loop(void) {
    for (;;) {
        fd_set readfds, writefds, exceptfds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);
        FD_ZERO(&exceptfds);
        FD_SET(sensorsfd, &readfds);

        struct timeval tv;
        tv.tv_sec = 0;
        tv.tv_usec = 1000; /* 0.001 seconds */

        int r;
        send_polling_packet();
        r = select(sensorsfd + 1, &readfds, &writefds, &exceptfds, &tv);
        if (r == -1 && errno == EINTR) {
            continue;
        } else if (r == -1) {
            perror("select() failed");
            exit(1);
        } else if (r == 0) {
            /* timeout expired: no sensor responded to the poll */
            pc = fmin(pc + 0.01, 1.0);
        } else if (r == 1) {
            /* sensorsfd won't block when reading it */
            packet_t p = read_packet(sensorsfd);
            /* should also handle _no packet_ if the sensors
               socket returns EOF */
            if (packet_corrupted(p)) {
                pc /= 2;
            } else {
                handle_packet(p);
            }
        } else {
            /* error in program logic */
        }
    }
}
Pseudo-code in the sense that I just wrote this and have no mechanism to test
it. If your program gets much more complicated than this, you would probably
want to encapsulate all the select(2) setup into a function of its own, and
possibly all the details of handling the packet from the sensor's socket.
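If you do want to see the same logic written out as an explicit FSM, here is a minimal sketch of one way to model it; the state and event names and the Sink struct are my own invention for illustration, not part of any standard recipe:

// Hypothetical illustration: the sink modelled as a (nearly trivial) state machine.
#include <algorithm>

enum class State { Polling };                          // only one interesting state
enum class Event { Timeout, GoodPacket, CorruptedPacket };

struct Sink {
    State state = State::Polling;
    double pc = 0.01;

    void on_event(Event e) {
        switch (state) {
        case State::Polling:
            switch (e) {
            case Event::Timeout:         pc = std::min(pc + 0.01, 1.0); break;
            case Event::GoodPacket:      /* keep pc unchanged */        break;
            case Event::CorruptedPacket: pc /= 2;                       break;
            }
            state = State::Polling;      // always go back to polling (repeat step 2)
            break;
        }
    }
};

Written out this way there is really only one state and three events, which is why a full table-driven FSM looks like overkill for this problem.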

Related

How to detect if I have something to read from a socket? (C++)

I'm trying to receive an answer from a client within a certain number of seconds. How could I do that? I mean, I have this code:
unsigned secondsElapsed = 0;
while (secondsElapsed <= TIMER) {
    char tBuffer[32];
    if (recv(clientSocket, tBuffer, sizeof(tBuffer), MSG_PEEK | MSG_DONTWAIT) == 0) {
        myPlayer->dcPlayer();
        // More stuff to do if the player is disconnected.
        // But if he is not disconnected and is typing, how can I check my socket
        // to see if I have an answer there to read? Otherwise I'll increment
        // secondsElapsed until it is equal to TIMER or until I get an answer
        // from my client.
    }
    usleep(1000000);
    secondsElapsed++;
}
So, the question is: how can I check whether my client has sent me an answer? If I try to read, my program will be stuck and I won't be able to increment secondsElapsed.
You have a number of options here. First, you could use non-blocking sockets (usually a not-so-great solution). The better option is to use an OS-provided polling/asynchronous notification mechanism - for example, on *nix you can choose from select, poll and epoll, while Windows has its own asynchronous event notification APIs, for example I/O completion ports, described here: https://msdn.microsoft.com/en-gb/library/windows/desktop/aa365198(v=vs.85).aspx
You can use select() (or pselect() or (e)poll() on *nix systems) to know when data is available before you then call recv() (or read()) to read it, e.g.:
char tBuffer[32];
float secondsElapsed = 0;
clock_t start = clock(), end;
do {
    fd_set rfd;
    FD_ZERO(&rfd);
    FD_SET(clientSocket, &rfd);

    struct timeval timeout;
    timeout.tv_sec = TIMER - secondsElapsed;
    timeout.tv_usec = 0;

    int ret = select(clientSocket + 1, &rfd, NULL, NULL, &timeout);
    if (ret == -1) {
        myPlayer->dcPlayer();
        break;
    }
    if (ret == 0) {
        // timeout ...
        break;
    }

    // data available, read it...
    ret = recv(clientSocket, tBuffer, sizeof(tBuffer), 0);
    if (ret <= 0) {
        myPlayer->dcPlayer();
        break;
    }

    // use tBuffer up to ret number of bytes...

    if (... /*no more data is expected*/) {
        break;
    }

    end = clock();
    secondsElapsed = end - start;      // time difference is now a float
    secondsElapsed /= CLOCKS_PER_SEC;  // this division is now floating point
} while (secondsElapsed <= TIMER);
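For completeness, here is a minimal sketch of the same wait written with poll() (from <poll.h>) instead of select(), as mentioned above; it assumes the same clientSocket, TIMER and secondsElapsed variables as the loop body shown above:

struct pollfd pfd;
pfd.fd = clientSocket;
pfd.events = POLLIN;

// Wait up to the remaining time (in milliseconds) for data to arrive.
int remaining_ms = (int)((TIMER - secondsElapsed) * 1000);
int ret = poll(&pfd, 1, remaining_ms);
if (ret == -1) {
    myPlayer->dcPlayer();              // poll() failed
} else if (ret == 0) {
    // timeout, no answer arrived in time
} else if (pfd.revents & POLLIN) {
    // data available, recv() will not block now
}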

libusb_get_string_descriptor_ascii() timeout error?

I'm trying to get the serial number of a USB device using libusb-1.0.
The problem I have is that sometimes the libusb_get_string_descriptor_ascii() function returns -7 (LIBUSB_ERROR_TIMEOUT) in my code, but other times the serial number is correctly written in my array and I can't figure out what is happening. Am I using libusb incorrectly? Thank you.
void EnumerateUsbDevices(uint16_t uVendorId, uint16_t uProductId) {
    libusb_context *pContext;
    libusb_device **ppDeviceList;
    libusb_device_descriptor oDeviceDescriptor;
    libusb_device_handle *hHandle;

    int iReturnValue = libusb_init(&pContext);
    if (iReturnValue != LIBUSB_SUCCESS) {
        return;
    }
    libusb_set_debug(pContext, 3);

    ssize_t nbUsbDevices = libusb_get_device_list(pContext, &ppDeviceList);
    for (ssize_t i = 0; i < nbUsbDevices; ++i) {
        libusb_device *pDevice = ppDeviceList[i];
        iReturnValue = libusb_get_device_descriptor(pDevice, &oDeviceDescriptor);
        if (iReturnValue != LIBUSB_SUCCESS) {
            continue;
        }
        if (oDeviceDescriptor.idVendor == uVendorId && oDeviceDescriptor.idProduct == uProductId) {
            iReturnValue = libusb_open(pDevice, &hHandle);
            if (iReturnValue != LIBUSB_SUCCESS) {
                continue;
            }
            unsigned char uSerialNumber[255] = {};
            int iSerialNumberSize = libusb_get_string_descriptor_ascii(hHandle, oDeviceDescriptor.iSerialNumber, uSerialNumber, sizeof(uSerialNumber));
            std::cout << iSerialNumberSize << std::endl; // Print size of serial number <--
            libusb_close(hHandle);
        }
    }
    libusb_free_device_list(ppDeviceList, 1);
    libusb_exit(pContext);
}
I see nothing wrong with your code. I would not worry too much about timeouts in the context of USB. It is a bus, after all, and can be occupied with different traffic.
As you may know, depending on the USB version, a portion of the bandwidth is reserved for control transfers. libusb_get_string_descriptor_ascii simply sends all the required control transfers to get the string. If any of those times out, it will abort. You can try to send these control transfers yourself and use bigger timeout values, but I guess the possibility of a timeout will always be there waiting for you (pun intended).
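As a rough illustration of that suggestion (not part of the original answer): the same string descriptor can be requested directly with libusb_control_transfer, which lets you choose the timeout yourself. The 5000 ms timeout and the hard-coded US English language ID 0x0409 are arbitrary values for this sketch; a robust version would read the supported language IDs from string descriptor 0 first.

// Sketch: fetch the serial number descriptor ourselves with a longer timeout.
unsigned char buffer[255];
const unsigned int timeout_ms = 5000;    // more generous than the library's default
const uint16_t langid = 0x0409;          // US English, assumed for this example
int len = libusb_control_transfer(
    hHandle,
    LIBUSB_ENDPOINT_IN,                  // bmRequestType: device-to-host, standard, device
    LIBUSB_REQUEST_GET_DESCRIPTOR,       // bRequest
    (LIBUSB_DT_STRING << 8) | oDeviceDescriptor.iSerialNumber,  // wValue: type + index
    langid,                              // wIndex
    buffer, sizeof(buffer), timeout_ms);
if (len > 0) {
    // buffer now holds a UTF-16LE string descriptor; convert to ASCII as needed.
}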
So it turns out my device was getting into weird states, possibly not being closed properly or the like. Anyway, calling libusb_reset_device(hHandle); just after the libusb_open() call seems to fix my sporadic timeout issue.
libusb_reset_device()

Windows C++ Intermittent Socket Disconnect

I've got a server that uses a two-thread system to manage between 100 and 200 concurrent connections. It uses TCP sockets, as guaranteed packet delivery is important (it's a communication system where missed remote API calls could FUBAR a client).
I've implemented a custom protocol layer to separate incoming bytes into packets and dispatch them properly (the library is included below). I realize the issues with using MSG_PEEK, but to my knowledge it is the only approach that will fulfill the needs of the library implementation. I am open to suggestions, especially if it could be part of the problem.
Basically, the problem is that, randomly, the server will drop the client's socket due to a lack of incoming packets for more than 20 seconds, despite the client successfully sending a keepalive packet every 4 seconds. I can verify that the server itself didn't go offline and that the connection of the users (including myself) experiencing the problem is stable.
The library for sending/receiving is here:
short ncsocket::send(wstring command, wstring data) {
    wstringstream ss;
    int datalen = ((int)command.length() * 2) + ((int)data.length() * 2) + 12;
    ss << zero_pad_int(datalen) << L"|" << command << L"|" << data;
    int tosend = datalen;
    short __rc = 0;
    do {
        int res = ::send(this->sock, (const char*)ss.str().c_str(), datalen, NULL);
        if (res != SOCKET_ERROR)
            tosend -= res;
        else
            return FALSE;
        __rc++;
        Sleep(10);
    } while (tosend != 0 && __rc < 10);
    if (tosend == 0)
        return TRUE;
    return FALSE;
}

short ncsocket::recv(netcommand& nc) {
    vector<wchar_t> buffer(BUFFER_SIZE);
    int recvd = ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, MSG_PEEK);
    if (recvd > 0) {
        if (recvd > 8) {
            wchar_t* lenstr = new wchar_t[4];
            memcpy(lenstr, buffer.data(), 8);
            int fulllen = _wtoi(lenstr);
            delete lenstr;
            if (fulllen > 0) {
                if (recvd >= fulllen) {
                    buffer.resize(fulllen / 2);
                    recvd = ::recv(this->sock, (char*)buffer.data(), fulllen, NULL);
                    if (recvd >= fulllen) {
                        buffer.resize(buffer.size() + 2);
                        buffer.push_back((char)L'\0');
                        vector<wstring> data = parsewstring(L"|", buffer.data(), 2);
                        if (data.size() == 3) {
                            nc.command = data[1];
                            nc.payload = data[2];
                            return TRUE;
                        } else
                            return FALSE;
                    } else
                        return FALSE;
                } else
                    return FALSE;
            } else {
                ::recv(this->sock, (char*)buffer.data(), BUFFER_SIZE, NULL);
                return FALSE;
            }
        } else
            return FALSE;
    } else
        return FALSE;
}
This is the code for determining if too much time has passed:
if ((int)difftime(time(0), regusrs[i].last_recvd) > SERVER_TIMEOUT) {
    regusrs[i].sock.end();
    regusrs[i].is_valid = FALSE;
    send_to_all(L"removeuser", regusrs[i].server_user_id);
    wstringstream log_entry;
    log_entry << regusrs[i].firstname << L" " << regusrs[i].lastname << L" (suid:" << regusrs[i].server_user_id << L",p:" << regusrs[i].parent << L",pid:" << regusrs[i].parentid << L") was disconnected due to idle";
    write_to_log_file(server_log, log_entry.str());
}
The "regusrs[i]" is using the currently iterated member of a vector I use to story socket descriptors and user information. The 'is_valid' check is there to tell if the associated user is an actual user - this is done to prevent the system from having to deallocate the member of the vector - it just returns it to the pool of available slots. No thread access/out-of-range issues that way.
Anyway, I started to wonder if it was the server itself was the problem. I'm testing on another server currently, but I wanted to see if another set of eyes could stop something out of place or cue me in on a concept with sockets and extended keepalives that I'm not aware of.
Thanks in advance!
I think I see what you're doing with MSG_PEEK, where you wait until it looks like you have enough data to read a full packet. However, I would be suspicious of this. (It's hard to determine the dynamic behaviour of your system just by looking at this small part of the source and not the whole thing.)
To avoid use of MSG_PEEK, follow these two principles:
When you get a notification that data is ready (I assume you're using select), then read all the waiting data from recv(). You may use more than one recv() call, so you can handle the incoming data in pieces.
If you read only a partial packet (length or payload), then save it somewhere for the next time you get a read notification. Put the packets and payloads back together yourself, don't leave them in the socket buffer.
As an aside, the use of new/memcpy/wtoi/delete is woefully inefficient. You don't need to allocate memory at all, you can use a local variable. And then you don't even need the memcpy at all, just a cast.
I presume you already assume that your packets can be no longer than 999 bytes in length.
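As a rough sketch of those two principles (my own illustration, not the original poster's code; handle_full_packet is a hypothetical dispatch helper), the receive side could accumulate bytes itself and peel off complete length-prefixed packets without MSG_PEEK:

// Accumulate received bytes and extract complete packets. Assumes the same wire
// format as above: an 8-byte (4 wide character) zero-padded length field,
// followed by "|command|data".
std::vector<char> pending;                           // persists between read notifications

void on_read_ready(SOCKET sock) {
    char chunk[4096];
    int n = ::recv(sock, chunk, sizeof(chunk), 0);
    if (n <= 0)
        return;                                      // closed or error; handle elsewhere
    pending.insert(pending.end(), chunk, chunk + n);

    // Peel off as many complete packets as are buffered.
    while (pending.size() >= 8) {
        wchar_t lenstr[5] = {};                      // local buffer, no new/delete needed
        memcpy(lenstr, pending.data(), 8);           // first 4 wide characters are the length
        int fulllen = _wtoi(lenstr);
        if (fulllen <= 0) {                          // malformed header: discard buffered data
            pending.clear();
            break;
        }
        if ((int)pending.size() < fulllen)
            break;                                   // wait for the rest of the packet
        handle_full_packet(pending.data(), fulllen); // hypothetical: parse command/payload here
        pending.erase(pending.begin(), pending.begin() + fulllen);
    }
}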

Linux poll on serial transmission end

I'm implementing RS485 on an ARM development board using a serial port and a GPIO pin for data enable.
I set data enable high before sending, and I want it set low after the transmission is complete.
It can be simply done by writing:
//fd = open("/dev/ttyO2", ...);
DataEnable.Set(true);
write(fd, data, datalen);
tcdrain(fd); //Wait until all data is sent
DataEnable.Set(false);
I wanted to change from blocking mode to non-blocking and use poll with the fd. But I don't see any poll event corresponding to 'transmission complete'.
How can I get notified when all data has been sent?
System: linux
Language: c++
Board: BeagleBone Black
I don't think it's possible. You'll either have to run tcdrain in another thread and have it notify the main thread, or use a timeout on poll and check yourself whether the output has drained.
You can use the TIOCOUTQ ioctl to get the number of bytes in the output buffer and tune the timeout according to baud rate. That should reduce the amount of polling you need to do to just once or twice. Something like:
enum { writing, draining, idle } write_state;

while (1) {
    int write_event, timeout = -1;
    ...
    if (write_state == writing) {
        poll_fds[poll_len].fd = write_fd;
        poll_fds[poll_len].events = POLLOUT;
        write_event = poll_len++;
    } else if (write_state == draining) {
        int outq;
        ioctl(write_fd, TIOCOUTQ, &outq);
        if (outq == 0) {
            DataEnable.Set(false);
            write_state = idle;
        } else {
            // 10 bits per byte, 1000 milliseconds in a second
            timeout = outq * 10 * 1000 / baud_rate;
            if (timeout < 1) {
                timeout = 1;
            }
        }
    }

    int r = poll(poll_fds, poll_len, timeout);
    ...
    if (write_state == writing && r > 0 && (poll_fds[write_event].revents & POLLOUT)) {
        DataEnable.Set(true); // Gets set even if already set.
        int n = write(write_fd, write_data, write_datalen);
        write_data += n;
        write_datalen -= n;
        if (write_datalen == 0) {
            write_state = draining;
        }
    }
}
Stale thread, but I have been working on RS-485 with a 16550-compatible UART under Linux and find that
tcdrain works - but it adds a delay of 10 to 20 ms. It seems to be polled.
The value returned by TIOCOUTQ seems to count bytes in the OS buffer, but NOT bytes in the UART FIFO, so it may underestimate the delay required if transmission has already started.
I am currently using CLOCK_MONOTONIC to timestamp each send, calculating when the send should be complete, then checking that time against the next send and delaying if necessary. Sucks, but seems to work.
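A minimal sketch of that timestamping approach (my own illustration; baud_rate is assumed, and the 10 bits per byte accounts for start and stop bits):

#include <time.h>

static struct timespec tx_done;                      // estimated end of the last transmission

// Call right after write(): estimate when the last bit will leave the wire.
void note_send(int nbytes, long baud_rate) {
    clock_gettime(CLOCK_MONOTONIC, &tx_done);
    long long ns = (long long)nbytes * 10 * 1000000000LL / baud_rate;
    tx_done.tv_sec  += ns / 1000000000LL;
    tx_done.tv_nsec += ns % 1000000000LL;
    if (tx_done.tv_nsec >= 1000000000L) {
        tx_done.tv_sec += 1;
        tx_done.tv_nsec -= 1000000000L;
    }
}

// Call before dropping data enable (or before the next send): sleep until that estimate.
void wait_until_sent(void) {
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &tx_done, NULL);
}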

Is poll() an edge triggered function?

I am responsible for a server that exports data over a TCP connection. With each data record that the server transmits, it requires the client to send a short "\n" acknowledgement message back. I have a customer who claims that the acknowledgement he sends is not read by the web server. The following is the code that I am using for I/O on the socket:
bool can_send = true;
char tx_buff[1024];
char rx_buff[1024];
struct pollfd poll_descriptor;
int rcd;

poll_descriptor.fd = socket_handle;
poll_descriptor.events = POLLIN | POLLOUT;
poll_descriptor.revents = 0;

while (!should_quit && is_connected)
{
    // if we know that data can be written, we need to do this before we poll the OS for
    // events. This will prevent the 100 msec latency that would otherwise occur
    fill_write_buffer(write_buffer);
    while (can_send && !should_quit && !write_buffer.empty())
    {
        uint4 tx_len = write_buffer.copy(tx_buff, sizeof(tx_buff));
        rcd = ::send(
            socket_handle,
            tx_buff,
            tx_len,
            0);
        if (rcd == -1 && errno != EINTR)
            throw SocketException("socket write failure");
        write_buffer.pop(rcd);
        if (rcd > 0)
            on_low_level_write(tx_buff, rcd);
        if (rcd < tx_len)
            can_send = false;
    }

    // we will use poll for up to 100 msec to determine whether the socket can be read or
    // written
    if (!can_send)
        poll_descriptor.events = POLLIN | POLLOUT;
    else
        poll_descriptor.events = POLLIN;
    poll(&poll_descriptor, 1, 100);

    // check to see if an error has occurred
    if ((poll_descriptor.revents & POLLERR) != 0 ||
        (poll_descriptor.revents & POLLHUP) != 0 ||
        (poll_descriptor.revents & POLLNVAL) != 0)
        throw SocketException("socket hung up or socket error");

    // check to see if anything can be written
    if ((poll_descriptor.revents & POLLOUT) != 0)
        can_send = true;

    // check to see if anything can be read
    if ((poll_descriptor.revents & POLLIN) != 0)
    {
        ssize_t bytes_read;
        ssize_t total_bytes_read = 0;
        int bytes_remaining = 0;
        do
        {
            bytes_read = ::recv(
                socket_handle,
                rx_buff,
                sizeof(rx_buff),
                0);
            if (bytes_read > 0)
            {
                total_bytes_read += bytes_read;
                on_low_level_read(rx_buff, bytes_read);
            }
            else if (bytes_read == -1)
                throw SocketException("read failure");
            ioctl(
                socket_handle,
                FIONREAD,
                &bytes_remaining);
        }
        while (bytes_remaining != 0);

        // recv() will return 0 if the socket has been closed
        if (total_bytes_read > 0)
            read_event::cpost(this);
        else
        {
            is_connected = false;
            closed_event::cpost(this);
        }
    }
}
I have written this code based upon the assumption that poll() is a level triggered function and will unblock immediately as long as there is data to be read from the socket. Everything that I have read seems to back up this assumption. Is there a reason that I may have missed that would cause the above code to miss a read event?
It is not edge triggered. It is always level triggered. I will have to read your code to answer your actual question though. But that answers the question in the title. :-)
I can see no clear reason in your code why you might be seeing the behavior you are seeing. But the scope of your question is a lot larger than the code you're presenting, and I cannot pretend that this is a complete problem diagnosis.
It is level triggered. POLLIN fires if there is data in the socket receive buffer when you poll, and POLLOUT fires if there is room in the socket send buffer (which there almost always is).
Based on your own assessment of the problem (that is, you are blocked on poll when you expect to be able to read the acknowledgement), you will eventually get a timeout.
If the customer's machine is more than 50ms away from your server, then you will always timeout on the connection before receiving the acknowledgement, since you only wait 100ms. This is because it will take a minimum of 50ms for the data to reach the customer, and a minimum of 50ms for the acknowledgement to return.
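To illustrate the level-triggered behaviour described above, here is a small self-contained sketch (my own, not from the answers): with one unread byte sitting in a socket's receive buffer, poll() reports POLLIN on every call until the byte is read, whereas an edge-triggered mechanism would report it only once.

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    write(sv[0], "x", 1);                    // leave one unread byte in sv[1]'s buffer

    struct pollfd pfd = { sv[1], POLLIN, 0 };
    for (int i = 0; i < 3; ++i) {
        int r = poll(&pfd, 1, 0);            // zero timeout: return immediately
        printf("poll #%d -> %d, POLLIN=%d\n", i, r, (pfd.revents & POLLIN) != 0);
    }
    // Prints POLLIN=1 three times because the data is still there each time.
    return 0;
}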