I’m working on an embedded application where I receive sensor values over UDP. The board I’m using runs a 2.4 kernel on an ARM processor. The problem is the following: once the socket's internal receive buffer is full, newly arriving values are dropped instead of replacing the oldest ones. So the internal buffer is not implemented as a circular buffer, which it should be, as I found out studying some articles. Can I somehow change the behaviour of the internal receive buffer?
I already found out that there is no way to "flush" that buffer from the application side. The best idea I’ve got is to check whether the receive buffer is full before receiving any packets and, if so, first read out all the old packets manually. Is there a better approach?
I hope it's somewhat clear what I mean; any help is appreciated.
The best idea I’ve got is to check whether the receive buffer is full before receiving any packets and, if so, first read out all the old packets manually.
I wouldn't bother checking whether the receive buffer is full. Instead, always read packets until no more are there and use the last one received, which contains the newest value.
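A minimal sketch of that drain loop (assuming sock is a bound, blocking UDP socket and each datagram fits in buf):

// Block for one datagram, then drain everything else that has queued up
// without blocking, so that buf ends up holding the newest datagram.
#include <sys/types.h>
#include <sys/socket.h>
#include <cerrno>

ssize_t recv_newest(int sock, char* buf, size_t buf_size)
{
    ssize_t newest = recv(sock, buf, buf_size, 0);       // wait for at least one datagram
    if (newest < 0)
        return -1;                                       // real error

    for (;;) {
        ssize_t n = recv(sock, buf, buf_size, MSG_DONTWAIT);
        if (n >= 0) {
            newest = n;                                  // a newer datagram overwrote the old one in buf
            continue;
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return newest;                               // queue drained; buf holds the newest value
        return -1;                                       // real error
    }
}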
Related
So I'm working with a camera which is connected to the computer by an Ethernet cable and, apparently, has to be accessed as a TCP/IP stream socket.
Basically, I want to take an image roughly every second. I noticed, though, that input data from the camera keeps coming in, while all I want is the most recent data from the camera and nothing else, i.e. only the most current image at that time.
What I've read so far is that I need to read the input data multiple times until I reach the "most current data". Is this really the only way to do it? I really don't like the idea of one process being busy all the time just to throw away the incoming data from the stream socket.
Can't I, in theory, decrease the input buffer size for the socket so that I can receive only one picture's worth of data? Then any further incoming data would simply be wasted, so when the input buffer is flushed it gets filled with the newest data, or something like that. (I mean, there has to be some limit on how much input data from the stream can pile up waiting to be processed/read, right? What happens when that limit is reached? Does the further data get thrown away, or is the buffer overwritten with the new data?)
Is that even possible? I'm a complete beginner at this, so I'm just theorizing. If something like that is possible, can anyone show an outline of how to code it? (I have to use the Boost.Asio library on Ubuntu for this.)
That would be very helpful!
Yes, it's the only way to do it.
The whole reason for using TCP is that it is a "reliable" protocol with guaranteed delivery, as opposed to UDP.
TCP's job is to deliver the data to the receiver, in the order it was sent, without losing anything. If the data cannot be delivered, the connection eventually gets broken when TCP gives up. But as long as there's an active connection, the receiver is going to get everything the sender sends.
If you don't want to receive some of the data that the sender sends, you must make whatever arrangements are appropriate with the sender for that to happen. TCP is not going to discard data just because the receiver doesn't want it.
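To make that concrete, here is a rough sketch of the usual pattern: a reader thread keeps consuming frames so nothing piles up in the kernel buffer, and the main loop samples only the most recent complete frame once a second. The address, port and FRAME_BYTES below are made-up placeholders; how a frame is actually delimited is defined by the camera's protocol.

// Keep-only-the-newest-frame sketch (synchronous Boost.Asio, one reader thread).
#include <boost/asio.hpp>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

int main()
{
    using boost::asio::ip::tcp;
    const std::size_t FRAME_BYTES = 640 * 480;                   // assumed fixed frame size

    boost::asio::io_service io;
    tcp::socket socket(io);
    socket.connect(tcp::endpoint(
        boost::asio::ip::address::from_string("192.168.0.10"), 5000));  // placeholder address/port

    std::mutex m;
    std::vector<char> latest(FRAME_BYTES);

    // Reader thread: reads every frame the camera sends, but only keeps the newest.
    std::thread reader([&] {
        std::vector<char> frame(FRAME_BYTES);
        for (;;) {
            boost::system::error_code ec;
            boost::asio::read(socket, boost::asio::buffer(frame), ec);  // blocks for one full frame
            if (ec)
                break;
            std::lock_guard<std::mutex> lock(m);
            latest.swap(frame);
        }
    });

    for (;;) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::vector<char> snapshot;
        {
            std::lock_guard<std::mutex> lock(m);
            snapshot = latest;                                    // the newest complete frame
        }
        // ... process snapshot ...
    }
}

The reader thread is cheap: it spends nearly all of its time blocked in read(), so "throwing away" old frames costs very little CPU.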
I'm experiencing a frustrating behaviour of Windows sockets that I can't find any info on, so I thought I'd try here.
My problem is as follows:
I have a C++ application that serves as a device driver, communicating with a serial device connected through a serial-to-TCP/IP converter.
The serial protocol requires a lot of single-byte messages to be exchanged between the device and my software. I noticed that these small messages are only sent about 3 times after startup, after which they are no longer actually transmitted (checked with Wireshark). All the while, the send() method keeps returning > 0, indicating that the message has been copied to its send buffer.
I'm using blocking sockets.
I discovered this issue because this particular driver eventually has to drop its connection when the send buffer is completely filled (select() fails because of this after about 5 hours, but it happens much sooner when I reduce the SO_SNDBUF size).
I checked, and noticed that when I call send with messages of 2 bytes or larger, transmission never fails.
Any input would be very much appreciated; I am out of ideas on how to fix this.
This is a rare case when you should set TCP_NODELAY so that the sends are written individually, not coalesced. But I think you have another problem as well. Are you sure you're reading everything that's being sent back? And acting on it properly? It sounds like an application protocol problem to me.
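For reference, a minimal sketch of turning Nagle off on an already-connected Winsock socket (the same setsockopt call works on POSIX with <netinet/tcp.h>):

// Disable Nagle's algorithm so small writes go out as individual segments.
#include <winsock2.h>

bool disable_nagle(SOCKET s)
{
    BOOL flag = TRUE;
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                      reinterpret_cast<const char*>(&flag), sizeof(flag)) == 0;
}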
EDIT!
I just read that read will block until the buffer is full. How on earth do I receive smaller packets without having to send 1 MB (my max buffer length) each time? What if I want to send arbitrary-length messages?
In Java you seem to be able to just send a char array without any worries, but in C++ with the Boost sockets I seem to either have to keep calling socket.read(...) until I think I have everything, or send my full buffer length of data, which seems wasteful.
Old original question for context.
Yet again Boost sockets have me completely stumped. I am using boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket; I used the Boost SSL example for guidance, but I have dedicated a thread to it rather than using the async calls.
The first socket.read_some(...) on the socket is fine and reads all the bytes. After that it reads 1 byte and then all the rest on the next socket.read_some(...) call, which had me really confused. I then noticed that read_some typically behaves this way, so I moved to boost::asio::read, since the socket does not have a member function read (which surprised me) but boost::asio has a free read function that takes a socket and a buffer. However, it permanently blocks.
// read blocking data method
// now:
bytesread = boost::asio::read(socket, buffer(readBuffer, max_length));   // permanently blocks, never seems to return
// was:
// bytesread = socket.read_some(buffer(readBuffer, max_length));         // after the 1st read it always reads one byte
//                                                                       // and needs another socket.read_some(...) call to read the rest
What do I need to do to make boost::asio::read(...) work?
Note: I have used Wireshark to make sure that the server is not sending the data broken up. The server is not at fault.
Read with read_some() in a loop, merging the buffers, until you get a complete application message. Assume you can get back anything between 1 byte and the full length of your buffer.
Regarding "knowing when you are finished": that goes into your application-level protocol, which could use delimited messages, fixed-length messages, a fixed-length header that carries the payload length, etc.
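A sketch of that loop for a delimiter-based protocol (the '\n' delimiter and chunk size are assumptions; substitute whatever framing your protocol actually uses, and note that anything read past the delimiter is simply dropped here rather than saved for the next message):

// Accumulate read_some() results until a complete, delimiter-terminated message arrives.
#include <boost/asio.hpp>
#include <string>

template <typename SyncReadStream>
std::string read_message(SyncReadStream& stream)
{
    std::string message;
    char chunk[512];
    for (;;) {
        std::size_t n = stream.read_some(boost::asio::buffer(chunk));  // returns 1..512 bytes (throws on error/EOF)
        message.append(chunk, n);
        std::string::size_type pos = message.find('\n');
        if (pos != std::string::npos)
            return message.substr(0, pos);   // complete message; bytes after '\n' are discarded in this sketch
    }
}

(Boost.Asio's boost::asio::read_until does essentially this bookkeeping for you if your messages are delimited.)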
I am writing an application on Ubuntu Linux in C++ to read data from a serial port. It works: my code calls select(), then ioctl(fd, FIONREAD, &bytes_avail) to find out how many bytes are available, and finally obtains the data using read().
My question is this: Every time select returns with data, the number of bytes available is reported as 8. I am guessing that this is a buffer size set somewhere and that select returns notification to the user when this buffer is full.
I am new to Linux as a developer (but not new to C++) and I have tried to research (without success) if it is possible to change the size of this buffer, or indeed if my assumptions are even true. In my application timing is critical and I need to be alerted whenever there is a new byte on the read buffer. Is this possible, without delving into kernel code?
You want the serial ioctl TIOCSSERIAL, which allows changing both the receive buffer depth and the send buffer depth (among other things). The maximums depend on your hardware; if a 16550A is in play, the maximum buffer depth is 14.
You can find code that does something similar to what you want to do here. The original link (http://www.groupsrv.com/linux/about57282.html) went bad; the new one will have to do until I write another example or find a better one.
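In the meantime, here is a rough sketch of the TIOCGSERIAL/TIOCSSERIAL round trip. Setting ASYNC_LOW_LATENCY is one commonly used tweak along these lines for getting bytes to userspace sooner; how much of the tuning is actually honoured depends on the driver.

// Read the driver's serial settings, request low-latency behaviour, write them back.
#include <linux/serial.h>
#include <sys/ioctl.h>

int set_low_latency(int fd)
{
    struct serial_struct ser;
    if (ioctl(fd, TIOCGSERIAL, &ser) < 0)   // fetch current settings
        return -1;
    ser.flags |= ASYNC_LOW_LATENCY;         // ask the driver for low-latency behaviour
    return ioctl(fd, TIOCSSERIAL, &ser);    // apply the modified settings
}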
You can try playing with the VMIN and VTIME values of the c_cc member of the termios struct.
Some info here, especially in section 3.2.
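For example, a minimal sketch (assuming the port is already open and in raw/non-canonical mode) that makes read() return as soon as a single byte arrives:

// VMIN = 1, VTIME = 0: read() blocks until at least one byte is available,
// then returns immediately with whatever has arrived.
#include <termios.h>

int return_per_byte(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) < 0)
        return -1;
    tio.c_cc[VMIN]  = 1;   // wait for at least one byte
    tio.c_cc[VTIME] = 0;   // no inter-byte timer
    return tcsetattr(fd, TCSANOW, &tio);
}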
I have an application that compresses data and sends it via a socket, and the received data is written out on a remote machine. During recovery, this data is decompressed and retrieved. Compression and decompression are done using zlib. But during decompression I randomly face the following problem:
zlib inflate() fails with the error Z_DATA_ERROR for binary files like .xls, .qbw, etc.
The application compresses data in blocks of, say, 1024 bytes in a loop, with data read from the file, and decompresses in the same way. From forum posts, I found that one reason for Z_DATA_ERROR is data corruption. For now, to catch this problem, we have introduced a CRC check comparing the compressed data that is sent with what is received.
Any possible reasons why this happens would be really appreciated! (It occurs randomly, and for the same file it works the next time around.) Is it because of incorrect handling of zlib inflate() and deflate()?
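For reference, this is roughly the shape of the compress loop (a simplified sketch, not the exact code; send_block() stands in for our socket send):

// One z_stream covers the whole file; each produced block is sent as-is, and
// the receiver must feed the blocks to inflate() in exactly this order.
#include <zlib.h>
#include <cstdio>

bool compress_file(FILE* in, bool (*send_block)(const unsigned char*, size_t))
{
    z_stream zs = {};                                  // zalloc/zfree/opaque default to Z_NULL
    if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
        return false;

    unsigned char inbuf[1024], outbuf[1024];
    int flush;
    do {
        zs.avail_in = fread(inbuf, 1, sizeof(inbuf), in);
        zs.next_in  = inbuf;
        flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;
        do {
            zs.next_out  = outbuf;
            zs.avail_out = sizeof(outbuf);
            deflate(&zs, flush);                       // cannot fail with a valid stream state here
            size_t produced = sizeof(outbuf) - zs.avail_out;
            if (produced > 0 && !send_block(outbuf, produced)) {
                deflateEnd(&zs);
                return false;
            }
        } while (zs.avail_out == 0);
    } while (flush != Z_FINISH);

    deflateEnd(&zs);
    return true;
}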
Note: If needed,will post the exact code snippet for further analysis!
Thanks...Udhai
You didn't mention if the socket was TCP or UDP; but based on the blocking and checksumming, I'm going out on a limb and guessing it's UDP.
If you're sending the compressed packets over UDP they could be received out-of-order on the other end, or the packets could be lost in transit.
Getting things like out-of-order and lost packets right ends up being a lot of work, all of which is handled for you by TCP: you get a simple pipe that guarantees the data arrives in order and as expected.
Also I'd make sure that the code on the receiving side is simple, and receives into buffers allocated on the heap and not on the stack (I've seen many a bug triggered by this).
Again, this is just an educated guess based on the detail of the question.