C++ blocking queue can't poll current data

I have run into an odd situation.
I have an Android app that calls C++ code.
In the C++ code I have two threads: one puts data into a blocking queue and the other takes data out of it.
The data structure is like:
typedef struct {
    int len;
    char* data;
} myStruct;
Each time I put a pointer to a myStruct into the queue and later take that pointer back out of the queue.
But sometimes the len I read from the dequeued data is huge, as in the log below:
Put Log Length: 128
Take Log Length: 128
Put Log Length: 171
Take Log Length: 171
Put Log Length: 73
Take Log Length: 73
Put Length: 99
Put Length: 72
Put Length: 124
Take Log Length: 72
......
Take Log Length: 2047249896
My blocking queue code is listed below:
#include "BlockingQueue.h"
template<typename T>
void BlockingQueue<T>::put(const T& task)
{
std::unique_lock<std::mutex> lock(mtx);
q.push_back( task );
isEmpty.notify_all();
}
template<typename T>
T BlockingQueue<T>::take()
{
std::unique_lock<std::mutex> lock(mtx);
isEmpty.wait(lock, [this]{return !q.empty(); });
T front( q.front() );
q.pop_front();
return front;
}
template<typename T>
bool BlockingQueue<T>::empty()
{
std::unique_lock<std::mutex> lock(mtx);
return q.empty();
}
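The BlockingQueue.h header is not shown in the question. For context, a minimal sketch of what it presumably declares, using the member names mtx, isEmpty and q from the definitions above, could look like this:

#include <condition_variable>
#include <deque>
#include <mutex>

template<typename T>
class BlockingQueue {
public:
    void put(const T& task);   // enqueue and wake a waiting consumer
    T take();                  // block until an element is available, then dequeue it
    bool empty();
private:
    std::mutex mtx;                  // guards q
    std::condition_variable isEmpty; // signalled when q becomes non-empty
    std::deque<T> q;                 // push_back/pop_front/front match the .cpp above
};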

Related

Task Synchronization for Command/Response server (C++/ESP32/FreeRTOS)

I want to synchronize two tasks in a command/response communication server. One task sends data to a serial port and another task receives data on the serial port. Received data should either be returned to the sender task or handled some other way.
I unsuccessfully tried using volatile bool flags, but have since found that this won't work in C++ (see When to use volatile with multi threading?).
So I am trying to use semaphores instead, but can't quite figure out how. Some (bad) pseudo-code using volatile bool is below. How/where should it be modified for semaphore give/take?
The actual code/platform is C++11 running on an ESP32 (ESP-IDF). Resources are very limited, so no C++ std:: libraries.
volatile bool responsePending = false;
volatile bool cmdAccepted = false;
char sharedBuffer[100];

// SENDER //
void Task1()
{
    char localBuffer[100];
    while (1)
    {
        responsePending = true;
        cmdAccepted = false;
        sendMessage();
        while (responsePending)
            sleep();
        strcpy(localBuffer, sharedBuffer);
        cmdAccepted = true; // signal Task2
    }
}

// RECEIVER //
void Task2()
{
    char localBuf[100];
    int fd = open();
    while (1)
    {
        if (select())
        {
            read(fd, localBuf);
            if (responsePending)
            {
                strcpy(sharedBuffer, localBuf);
                responsePending = false; // signal Task1
                while (!cmdAccepted)
                    sleep();
            }
            else
            {
                // Do something else with the received data
            }
        }
    }
}
Create a queue which holds a struct. One task waits on the serial port; when it receives data, it copies the message into a struct and puts the struct on the queue.
The other task waits on the queue; when there are items in it, it processes the struct.
Example:
struct queueData{
    char messageBuffer[100];
};

QueueHandle_t queueHandle;

void taskOne(){
    while(1){
        // Task one checks if it got serial data.
        if( gotSerialMsg() ){
            // create a struct
            queueData data;
            // copy the data to the struct
            strcpy( data.messageBuffer, getSerialMSG() );
            // send struct to queue ( waits indefinitely )
            xQueueSend(queueHandle, &data, portMAX_DELAY);
        }
        vTaskDelay(1); // Must feed other tasks
    }
}

void taskTwo(){
    while(1){
        // Check if the queue has an item
        if( uxQueueMessagesWaiting(queueHandle) > 0 ){
            // create a holding struct
            queueData data;
            // Receive the whole struct
            if (xQueueReceive(queueHandle, &data, 0) == pdTRUE) {
                // Struct holds message like: data.messageBuffer
            }
        }
        vTaskDelay(1); // Must feed other tasks
    }
}
The nice thing about passing structs through queues is that you can always add more fields to the struct later: booleans, ints, or anything else.
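The example assumes queueHandle has already been created before the tasks start; a minimal sketch of that setup (the queue length of 10 is an arbitrary choice) might be:

void setupQueue(){
    // Create the queue before starting the tasks: 10 slots, each big enough for one queueData.
    queueHandle = xQueueCreate(10, sizeof(queueData));
    if( queueHandle == NULL ){
        // Not enough FreeRTOS heap for the queue; handle the error here.
    }
}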

boost::asio: register fd for EPOLLIN / EPOLLOUT once and leave it registered

I have a tcp client which is serviced by a boost::asio::io_context running on a single thread. It is configured non-blocking.
Reads/writes to this client are only ever done within the context of this thread.
I am using async_wait to wait for the socket to become readable/writeable.
void Client::awaitReadable()
{
    _socket.async_wait(tcp::socket::wait_read, std::bind_front(&Client::onReadable, this));
}
Whenever the socket becomes readable, my onReadable callback is fired, and I read all available data until I receive asio::error::would_block.
void Client::onReadable(boost::system::error_code ec)
{
    if (!ec)
    {
        while (1) // drain the socket
        {
            const std::size_t len = _socket.read_some(_read_buf.writeBuf(), ec);
            if (ec)
                break;
            else
                _read_buf.advance(len);
        }
    }

    if (ec == asio::error::would_block)
    {
        const std::size_t read = _read_cb(*this, _read_buf.readBuf());
        _read_buf.dataRead(read);
        awaitReadable(); // I have to await readable again
    }
    else
    {
        onDisconnected(ec);
    }
}
Once I've drained the socket I then need to call awaitReadable again to re-register my onReadable callback.
This necessarily involves a call to epoll_ctl, which effectively changes absolutely nothing.
When writing to the socket, the process is similar.
First, if the socket is currently writeable, I attempt to send the data immediately. If, during the write, I receive asio::error::would_block, I buffer the remaining unsent data and call my awaitWriteable function.
void Client::write(Data buf)
{
    if (_writeable)
    {
        const auto [ sent, ec ] = doWrite(buf); // calls awaitWriteable if would_block
        if (ec == asio::error::would_block)
            _write_buf.add(buf.data() + sent, buf.size() - sent);
    }
    else
    {
        _write_buf.add(buf); // will be sent when socket becomes writeable
    }
}
The awaitWriteable function is very similar to the awaitReadable version
void Client::awaitWriteable()
{
    _socket.async_wait(tcp::socket::wait_write, std::bind_front(&Client::onWriteable, this));
}
When the socket becomes writeable again I will be notified, and I will write more data to the socket.
void Client::onWriteable(boost::system::error_code ec)
{
    if (!ec)
    {
        _writeable = true;
        if (!_write_buf.empty())
        {
            const auto [ sent, ec ] = doWrite(_write_buf.writeBuf());
            if (!ec)
                _write_buf.sent(sent);
        }
    }
    else
    {
        onDisconnected(ec);
    }
}
The actual writing is factored out into a separate function, as it is called both by the "synchronous write" function and by the onWriteable callback:
std::pair<std::size_t, boost::system::error_code> Client::doWrite(Data buf)
{
    boost::system::error_code ec;
    std::size_t sent = _socket.write_some(buf, ec);
    if (ec)
    {
        if (ec == asio::error::would_block)
            awaitWriteable();
        else
            onDisconnected(ec);
    }
    return {sent, ec};
}
So the way reads work is
awaitReadable.
when readable, read everything until would_block.
repeat.
and the way writes work is
once connected awaitWriteable.
when writeable, set a flag true, and if any data is pending, send as much as possible.
if the send results in would_block then awaitWriteable again.
when a client wants to send data, if the socket is currently writeable then "synchronously" send as much as possible.
if the send results in would_block then buffer any unsent data and awaitWriteable again.
Question:
I would like to register my socket file descriptor with epoll, and leave it registered forever.
Is there any way to side-step this need to continually call awaitReadable/awaitWriteable?
You're mixing sync/async primitives. So at least the blanket claim "It is configured non-blocking" is inaccurate, because Asio has to switch the blocking mode for you when you mix in synchronous primitives.
Note: not all Asio-aware IO objects support this. E.g. Beast's tcp_stream (and ssl_stream) objects explicitly do not support mixing synchronous and asynchronous operations.
This necessarily involves a call to epoll_ctl, which effectively changes absolutely nothing.
Have you checked? Because it's up to the service implementation to decide how your handlers are serviced. It might be the case that fds are added and removed from the pollfd set. It might do cleverer things. It might not even use (e)poll on a particular system.
Regardless, is there something stopping you from using read operations directly in a loop? You can even use composed read operations, such as asio::async_read_until or asio::async_read with a CompletionCondition.
E.g., to read incoming data in a loop, returning whenever 1024 bytes or more have been received:
void read_loop() {
    net::async_read(
        _socket, _read_buf, net::transfer_at_least(1024),
        [this](error_code ec, size_t xferred) {
            std::cout << "Received " << xferred //
                      << " (" << ec.message() << ")" << std::endl;
            if (!ec)
                read_loop();
        });
}
Here's a live demo reading itself:
Live On Coliru
#include <boost/asio.hpp>
#include <iostream>

namespace net = boost::asio;
using boost::system::error_code;
using net::ip::tcp;
using namespace std::chrono_literals;

struct Client {
    Client(net::any_io_executor ex, tcp::endpoint ep) : _socket(ex) {
        _socket.connect(ep);
        assert(_socket.is_open());
        std::cout << "Connected " << ep << " from " << _socket.local_endpoint() << "\n";
    }

    void read_loop() {
        net::async_read(
            _socket, _read_buf, net::transfer_at_least(1024),
            [this](error_code ec, size_t xferred) {
                std::cout << "Received " << xferred //
                          << " (" << ec.message() << ")" << std::endl;
                if (!ec)
                    read_loop();
            });
    }

    auto get_histo() const {
        std::array<unsigned, 256> histo{0};
        auto f = buffers_begin(_read_buf.data()),
             l = buffers_end(_read_buf.data());
        while (f != l)
            ++histo[uint8_t(*f++)];
        return histo;
    }

  private:
    net::streambuf _read_buf;
    tcp::socket _socket;
};

int main() {
    net::io_context ioc;
    Client c(ioc.get_executor(), {{}, 8989});
    c.read_loop();

    ioc.run_for(10s); // time limit for online compilers

    // do something witty with the result
    auto histo = c.get_histo();
    for (uint8_t ch : {'a', 'q', 'e', 'x'})
        std::cout << "Frequency of '" << ch << "' was " << histo[ch] << "\n";
}
Prints
Connected 0.0.0.0:8989 from 127.0.0.1:48730
Received 1024 (Success)
Received 447 (End of file)
Frequency of 'a' was 38
Frequency of 'q' was 2
Frequency of 'e' was 92
Frequency of 'x' was 8
In about 10ms.
BONUS: Profiling epoll_ctl calls
Here is the same program eating a dictionary on my machine, while counting calls to epoll_ctl:
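The original answer does not show the exact command used to collect the call counts; an invocation along these lines would produce a comparable summary:

# Hypothetical invocation (not shown in the original answer): run the demo
# under strace, counting only epoll_ctl calls and printing a summary at exit.
strace -c -e trace=epoll_ctl ./a.out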
Note how only 3 epoll_ctl calls are ever issued:
Connected 0.0.0.0:8989 from 127.0.0.1:52974
Received 1024 (Success)
Received 1024 (Success)
Received 2048 (Success)
Received 4096 (Success)
Received 8192 (Success)
Received 16384 (Success)
Received 16384 (Success)
Received 16384 (Success)
Received 49152 (Success)
...
Received 65536 (Success)
Received 53562 (Success)
Received 0 (End of file)
Frequency of 'a' was 65630
Frequency of 'q' was 1492
Frequency of 'e' was 90579
Frequency of 'x' was 2139
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  0.00    0.000000           0         3           epoll_ctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                     3           total
Summary
Measure. Use async primitives to do the scheduling for you. The only reason to use async_wait, in principle, is when you have to call third-party code that uses the socket's native_handle in response.
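For completeness, a minimal sketch of that remaining use case, reusing the Client names from the question (thirdPartyReadFd is a hypothetical stand-in for the external call):

// Sketch only: wait for readability, then hand the raw fd to third-party
// code that wants to do the read itself.
void Client::awaitThirdPartyRead()
{
    _socket.async_wait(tcp::socket::wait_read,
        [this](boost::system::error_code ec) {
            if (ec)
                return onDisconnected(ec);
            thirdPartyReadFd(_socket.native_handle()); // external code consumes the data
            awaitThirdPartyRead();                     // re-arm the wait
        });
}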

Unable to find the reason for "Broken Pipe" error while sending continuous data chunks through Beast websocket

I am working on streaming audio recognition with the IBM Watson speech-to-text web service API. I have created a websocket with the Boost (Beast 1.68.0) library in C++ (std 11).
I have successfully connected to the IBM server, and want to send 231,296 bytes of raw audio data to the server in the following manner:
{
    "action": "start",
    "content-type": "audio/l16;rate=44100"
}

websocket.binary(true);

<bytes of binary audio data 50,000 bytes>
<bytes of binary audio data 50,000 bytes>
<bytes of binary audio data 50,000 bytes>
<bytes of binary audio data 50,000 bytes>
<bytes of binary audio data 31,296 bytes>

websocket.binary(false);

{
    "action": "stop"
}
The expected result from the IBM server is:
{"results": [
{"alternatives": [
{ "confidence": xxxx,
"transcript": "call Rohan Chauhan "
}],"final": true
}], "result_index": 0
}
But I am not getting the desired result; instead, the error says "Broken pipe":
DataSize is: 50000 | mIsLast is : 0
DataSize is: 50000 | mIsLast is : 0
what : Broken pipe
DataSize is: 50000 | mIsLast is : 0
what : Operation canceled
DataSize is: 50000 | mIsLast is : 0
what : Operation canceled
DataSize is: 31296 | mIsLast is : 0
what : Operation canceled
Here is my code, which is an adaptation of the sample example given in the Beast library.
Foo.hpp
class IbmWebsocketSession: public std::enable_shared_from_this<IbmWebsocketSession> {
protected:
    char binarydata[50000];
    std::string TextStart;
    std::string TextStop;

public:
    explicit IbmWebsocketSession(net::io_context& ioc, ssl::context& ctx, SttService* ibmWatsonobj) :
            mResolver(ioc), mWebSocket(ioc, ctx) {
        TextStart = "{\"action\":\"start\",\"content-type\": \"audio/l16;rate=44100\"}";
        TextStop = "{\"action\":\"stop\"}";
    }

    /**********************************************************************
     * Desc : Send start frame
     **********************************************************************/
    void send_start(beast::error_code ec);

    /**********************************************************************
     * Desc : Send Binary data
     **********************************************************************/
    void send_binary(beast::error_code ec);

    /**********************************************************************
     * Desc : Send Stop frame
     **********************************************************************/
    void send_stop(beast::error_code ec);

    /**********************************************************************
     * Desc : Read the file for binary data to be sent
     **********************************************************************/
    void readFile(char *bdata, unsigned int *Len, unsigned int *start_pos, bool *ReachedEOF);
};
Foo.cpp
void IbmWebsocketSession::on_ssl_handshake(beast::error_code ec) {
    if (ec)
        return fail(ec, "connect");

    // Perform the websocket handshake
    ws_.async_handshake_ex(host, "/speech-to-text/api/v1/recognize",
            [Token](request_type& reqHead) {
                reqHead.insert(http::field::authorization, Token);
            },
            bind(&IbmWebsocketSession::send_start, shared_from_this(), placeholders::_1));
}

void IbmWebsocketSession::send_start(beast::error_code ec) {
    if (ec)
        return fail(ec, "ssl_handshake");

    ws_.async_write(net::buffer(TextStart),
            bind(&IbmWebsocketSession::send_binary, shared_from_this(), placeholders::_1));
}

void IbmWebsocketSession::send_binary(beast::error_code ec) {
    if (ec)
        return fail(ec, "send_start");

    readFile(binarydata, &Datasize, &StartPos, &IsLast);
    ws_.binary(true);
    if (!IsLast) {
        ws_.async_write(net::buffer(binarydata, Datasize),
                bind(&IbmWebsocketSession::send_binary, shared_from_this(), placeholders::_1));
    } else {
        IbmWebsocketSession::on_binarysent(ec);
    }
}

void IbmWebsocketSession::on_binarysent(beast::error_code ec) {
    if (ec)
        return fail(ec, "send_binary");

    ws_.binary(false);
    ws_.async_write(net::buffer(TextStop),
            bind(&IbmWebsocketSession::read_response, shared_from_this(), placeholders::_1));
}
void IbmWebsocketSession::readFile(char *bdata, unsigned int *Len, unsigned int *start_pos, bool *ReachedEOF) {
    unsigned int end = 0;
    unsigned int start = 0;
    unsigned int length = 0;

    // Creation of ifstream class object to read the file
    ifstream infile(filepath, ifstream::binary);
    if (infile) {
        // Get the size of the file
        infile.seekg(0, ios::end);
        end = infile.tellg();
        infile.seekg(*start_pos, ios::beg);
        start = infile.tellg();
        length = end - start;
    }

    if ((size_t) length < 150) {
        *Len = (size_t) length;
        *ReachedEOF = true;
        // cout << "Reached end of File (last 150 bytes)" << endl;
    } else if ((size_t) length <= 50000) { // Maximum bytes to send are 50000
        *Len = (size_t) length;
        *start_pos += (size_t) length;
        *ReachedEOF = false;
        infile.read(bdata, length);
    } else {
        *Len = 50000;
        *start_pos += 50000;
        *ReachedEOF = false;
        infile.read(bdata, 50000);
    }
    infile.close();
}
Any suggestions here?
From Boost's documentation, we have the following excerpt on websocket::async_write:
This function is used to asynchronously write a complete message. This
call always returns immediately. The asynchronous operation will
continue until one of the following conditions is true:
The complete message is written.
An error occurs.
So when you create your buffer object to pass to it, for example net::buffer(TextStart), the buffer you pass is only valid until the call returns. As the documentation says, the asynchronous write may still be operating on the buffer even after the function returns, but its contents are no longer valid if it referred to a local variable.
To remedy this you could make your TextStart static, or declare it as a member of your class and copy it into a boost::asio::buffer; there are plenty of examples on how to do that. Note I only mention TextStart in the IbmWebsocketSession::send_start function; the problem is pretty much the same throughout your code.
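For illustration, one common way to keep a temporary message alive for the whole duration of an async_write is to let the completion handler own it through a shared_ptr; a sketch only (not the OP's code), reusing ws_ and fail from above:

// Sketch: the shared_ptr keeps the string alive until the handler runs.
auto msg = std::make_shared<std::string>("{\"action\":\"stop\"}");
ws_.async_write(net::buffer(*msg),
        [self = shared_from_this(), msg](beast::error_code ec, std::size_t /*bytes*/) {
            if (ec)
                return fail(ec, "write");
            // msg (and the session) remained valid for the entire async operation.
        });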
From IBM Watson's API definition, initiating a connection requires a certain format, which can then be represented as a string. You have the string but are missing the proper format, due to which the connection is being closed by the peer and you end up writing to a closed socket, hence the broken pipe.
The initiate-connection message requires:
var message = {
    action: 'start',
    content-type: 'audio/l16;rate=22050'
};
This can be represented as the string TextStart = "action: 'start',\r\ncontent-type: 'audio\/l16;rate=44100'" according to your requirements.
Following on from the discussion in the chat, the OP resolved the issue by adding the code:
if (!IsLast) {
    ws_.async_write(net::buffer(binarydata, Datasize),
            bind(&IbmWebsocketSession::send_binary, shared_from_this(), placeholders::_1));
} else {
    if (mIbmWatsonobj->IsGstFileWriteDone()) { // checks for the file write completion
        IbmWebsocketSession::on_binarysent(ec);
    } else {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        IbmWebsocketSession::send_binary(ec);
    }
}
As came out in the discussion, the problem stemmed from the fact that bytes were being sent before the file write of those same bytes had completed. The OP now verifies this before attempting to send more bytes.

deadlock in producers/consumers queue

I am trying to solve an interesting problem, described below:
Write a console application which has N producers (N=1…10), M
consumers (M=1…10) and one data queue. Each producer and consumer is a
separate thread and all threads are working concurrently. Producer
thread sleeps 0…100 milliseconds randomly then it wakes up and
generates a random number between 1 and 100 and then puts this number
to data queue. Consumer thread sleeps 0…100 milliseconds randomly and
then wakes up and takes the number from the queue and saves it to the
output ‘data.txt’ file. All numbers are appended in the file and all
they are comma delimited (for example 4,67,99,23,…). When producer
thread puts the next number to data queue it checks the size of data
queue, and if it is >=100 the producer thread is blocked until the
number of elements gets <= 80. When consumer thread wants to take the
next number from data queue and no elements in it, consumer thread is
blocked until new element is added to data queue by a producer.
When we start application we need to insert the N (number of
producers) and the M (number of consumers) after which program starts
all threads. It should print current number of elements of data queue
in each second. When we stop program it should interrupt all producers
and wait for all consumers to save all queued data then program exits.
As part of the solution, I wrote the following thread-safe queue:
template <class T>
class global::safe_queue
{
private:
    sync* m_sync;
    size_t m_lcorner;
    size_t m_rcorner;
    std::queue<T> m_data;

public:
    safe_queue(sync* snc, size_t lcorn, size_t rcorn) :
        m_sync(snc),
        m_lcorner(lcorn),
        m_rcorner(rcorn) {}

    ~safe_queue()
    {
        m_sync->lock();
        while (m_data.size()){
            m_data.pop();
        }
        m_sync->unlock();
    }

    size_t size() const
    {
        m_sync->lock();
        size_t sz = m_data.size();
        m_sync->unlock();
        return sz;
    }

    size_t front() const
    {
        m_sync->lock();
        T item = m_data.front();
        m_sync->unlock();
        return item;
    }

    void push(T item)
    {
        m_sync->lock();
        while(m_data.size() >= m_rcorner) {
            m_sync->unlock();
            usleep(5);
            m_sync->lock();
            // conditional wait
            //m_sync->wait();
        }
        m_data.push(item);
        if(m_data.size() == 1) {
            m_sync->unlock();
            // conditional signal
            //m_sync->signal();
        }
        m_sync->unlock();
    }

    T pop()
    {
        m_sync->lock();
        while(m_data.size() == 0) {
            m_sync->unlock();
            usleep(5);
            m_sync->lock();
            // conditional wait
            //m_sync->wait();
        }
        T item = m_data.front();
        if(m_data.size() <= m_lcorner) {
            m_sync->unlock();
            // conditional signal
            // m_sync->signal();
        }
        m_data.pop();
        m_sync->unlock();
        return item;
    }
};
But as a result I get a deadlock. What is wrong?
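For comparison, here is a minimal sketch of the same bounded queue built on std::mutex and std::condition_variable, keeping the lcorner/rcorner hysteresis from the problem statement (an illustration only, not a drop-in replacement for the OP's sync class):

#include <condition_variable>
#include <mutex>
#include <queue>

// Sketch of a bounded queue: producers block once size reaches rcorner (e.g. 100)
// and resume only after the queue has drained to lcorner (e.g. 80).
template <class T>
class bounded_queue
{
public:
    bounded_queue(size_t lcorner, size_t rcorner)
        : m_lcorner(lcorner), m_rcorner(rcorner) {}

    void push(T item)
    {
        std::unique_lock<std::mutex> lock(m_mtx);
        if (m_data.size() >= m_rcorner)
            m_blocked = true; // producers stay blocked until the queue drains to lcorner
        m_not_full.wait(lock, [this] { return !m_blocked; });
        m_data.push(std::move(item));
        m_not_empty.notify_one(); // wake one waiting consumer
    }

    T pop()
    {
        std::unique_lock<std::mutex> lock(m_mtx);
        m_not_empty.wait(lock, [this] { return !m_data.empty(); });
        T item = std::move(m_data.front());
        m_data.pop();
        if (m_blocked && m_data.size() <= m_lcorner)
        {
            m_blocked = false;
            m_not_full.notify_all(); // let the blocked producers resume
        }
        return item;
    }

private:
    std::mutex m_mtx;
    std::condition_variable m_not_full;
    std::condition_variable m_not_empty;
    bool m_blocked = false;
    size_t m_lcorner;
    size_t m_rcorner;
    std::queue<T> m_data;
};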

How to discard data as it is sent with boost::asio?

I'm writing some code that reads from and writes to a serial device using the boost::asio classes. However, when sending several strings between programs, I've noticed that the receiving program reads the data in the order it was written to the serial port, not as it is being sent from the other program: if I start reading data some seconds later, I don't get the values being sent at that moment but those that were sent previously. I'm assuming this is caused by how I am setting up my boost::asio::serial_port:
int main(int argc, char const *argv[]){
    int baud = atoi(argv[1]);
    std::string pty = argv[2];
    printf("Virtual device: %s\n", pty.data());
    printf("Baud rate: %d\n", baud);

    boost::asio::io_service io;
    boost::asio::serial_port port(io, pty);
    port.set_option(boost::asio::serial_port_base::baud_rate(baud));

    // counter that writes to serial port in 1s intervals
    int val = 0;
    while (1){
        std::string data = std::to_string(val);
        data += '\n';
        std::cout << data;
        write(port, boost::asio::buffer(data.c_str(), data.size()));
        sleep(1);
        val++;
        data.clear();
    }
    port.close();
    return 0;
}
Is there a way to force past data to be discarded as soon as a new value is sent to the serial port (which I assume should be done on the write() part of the code)?
Boost.Asio does not provide a higher-level abstraction for flushing a serial port's buffers. However, this can often be accomplished by having platform specific calls, such as tcflush() or PurgeComm(), operate on a serial port's native_handle().
Each serial port has a receive and transmit buffer, and flushing operates on one or both of the buffers. For example, if two serial ports are connected (/dev/pts/3 and /dev/pts/4), and program A opens and writes to /dev/pts/3, then it can only flush the buffers associated with /dev/pts/3 (data received on /dev/pts/3 but not read, and data written to /dev/pts/3 but not transmitted). Therefore, if program B starts, opens /dev/pts/4, and wants to read non-stale data, then program B needs to flush the receive buffer for /dev/pts/4 after opening the serial port.
Here is a complete example running on CentOS. When the example runs as a writer, it will write a sequentially increasing number to the serial port once a second. When it runs as a reader, it will read five numbers, sleep for 5 seconds, and flush its read buffer every other iteration:
#include <iostream>
#include <vector>
#include <boost/asio.hpp>
#include <boost/thread.hpp>

/// @brief Different ways a serial port may be flushed.
enum flush_type
{
    flush_receive = TCIFLUSH,
    flush_send = TCOFLUSH,
    flush_both = TCIOFLUSH
};

/// @brief Flush a serial port's buffers.
///
/// @param serial_port Port to flush.
/// @param what Determines the buffers to flush.
/// @param error Set to indicate what error occurred, if any.
void flush_serial_port(
    boost::asio::serial_port& serial_port,
    flush_type what,
    boost::system::error_code& error)
{
    if (0 == ::tcflush(serial_port.lowest_layer().native_handle(), what))
    {
        error = boost::system::error_code();
    }
    else
    {
        error = boost::system::error_code(errno,
            boost::asio::error::get_system_category());
    }
}

/// @brief Reads 5 numbers from the serial port, then sleeps for 5 seconds,
///        flushing its read buffer every other iteration.
void read_main(boost::asio::serial_port& serial_port)
{
    std::vector<unsigned char> buffer(5);
    for (bool flush = false;; flush = !flush)
    {
        std::size_t bytes_transferred =
            read(serial_port, boost::asio::buffer(buffer));
        for (std::size_t i = 0; i < bytes_transferred; ++i)
            std::cout << static_cast<unsigned int>(buffer[i]) << " ";
        boost::this_thread::sleep_for(boost::chrono::seconds(5));
        if (flush)
        {
            boost::system::error_code error;
            flush_serial_port(serial_port, flush_receive, error);
            std::cout << "flush: " << error.message() << std::endl;
        }
        else
        {
            std::cout << "noflush" << std::endl;
        }
    }
}

/// @brief Write a sequentially increasing number to the serial port
///        every second.
void write_main(boost::asio::serial_port& serial_port)
{
    for (unsigned char i = 0; ; ++i)
    {
        write(serial_port, boost::asio::buffer(&i, sizeof i));
        boost::this_thread::sleep_for(boost::chrono::seconds(1));
    }
}

int main(int argc, char* argv[])
{
    boost::asio::io_service io_service;
    boost::asio::serial_port serial_port(io_service, argv[2]);
    if (!strcmp(argv[1], "read"))
        read_main(serial_port);
    else if (!strcmp(argv[1], "write"))
        write_main(serial_port);
}
Create virtual serial ports with socat:
$ socat -d -d PTY: PTY:
2014/03/23 16:22:22 socat[12056] N PTY is /dev/pts/3
2014/03/23 16:22:22 socat[12056] N PTY is /dev/pts/4
2014/03/23 16:22:22 socat[12056] N starting data transfer loop with
FDs [3,3] and [5,5]
Starting both the read and write examples:
$ ./a.out read /dev/pts/3 & ./a.out write /dev/pts/4
[1] 12238
0 1 2 3 4 noflush
5 6 7 8 9 flush: Success
14 15 16 17 18 noflush
19 20 21 22 23 flush: Success
28 29 30 31 32 noflush
33 34 35 36 37 flush: Success
As demonstrated in the output, numbers are only skipped in the sequence when the reader flushes its read buffer: 3 4 noflush 5 6 7 8 9 flush 14 15.
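The flush helper above is POSIX-specific (tcflush). For reference, the equivalent idea on Windows would use PurgeComm on the port's native handle; this is a sketch, not part of the original answer:

#include <windows.h>
#include <boost/asio.hpp>

// Windows sketch (untested assumption): PurgeComm clears the driver's receive
// and/or transmit buffers for the given handle.
void flush_serial_port_win(boost::asio::serial_port& serial_port,
                           boost::system::error_code& error)
{
    if (::PurgeComm(serial_port.lowest_layer().native_handle(),
                    PURGE_RXCLEAR | PURGE_TXCLEAR))
        error = boost::system::error_code();
    else
        error = boost::system::error_code(::GetLastError(),
            boost::asio::error::get_system_category());
}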