Queue - Server Buffer Behavior - C++

This question is from edX - UCSanDiegoX: ALGS201x Data Structure
Fundamentals, Programming Challenge 1-3: Network packet processing
simulation.
Hi everyone, I have a very long question; I sincerely appreciate anyone who spends his/her precious time on it. Only the Response Process(const Request &request) implementation is asked for in the question; the rest is given.
I am having difficulty understanding why the test case below fails. I get the correct answer when tracing it with paper and pen :), so maybe I made a silly mistake. Thank you.
Problem Description
Task:
You are given a series of incoming network packets, and your task is to simulate their processing. Packets arrive in some order. For each packet i, you know when it arrived (Ai) and the time it takes the processor to process it (Pi), both in milliseconds. There is only one processor, and it processes the incoming packets in the order of their arrival. Once the processor starts to process some packet, it doesn't interrupt or stop until it finishes processing that packet; the processing of packet i takes exactly Pi milliseconds.
The computer processing the packets has a network buffer of fixed size S. When packets arrive, they are stored in the buffer before being processed. However, if the buffer is full when a packet arrives (there are S packets which arrived earlier and the computer hasn't finished processing any of them), it is dropped and won't be processed at all. If several packets arrive at the same time, they are all stored in the buffer (though some of them may be dropped if the buffer runs out of space). The computer processes packets in the order of their arrival, and it starts processing the next available packet from the buffer as soon as it finishes processing the previous one. If at some point the computer is not busy and there are no packets in the buffer, it just waits for the next packet to arrive. Note that a packet leaves the buffer and frees its space in the buffer as soon as the computer finishes processing it.
Input Format:
The first line of input contains the size S of the buffer and the number n of incoming network packets. Each of the next n lines contains two numbers: the i-th line contains the time of arrival Ai and the processing time Pi (both in ms) of the i-th packet. It is guaranteed that the sequence of arrival times is non-decreasing. (However, it can contain exactly equal arrival times; in this case the packet which appears earlier in the input is considered to have arrived earlier.)
Output Format: For each packet, output either the moment of time when the processor began processing it, or -1 if the packet was dropped. (Output is in the same order as the input packets.)
Sample Runs
Sample 1
Input:
1 0
Output:
If there are no packets, you should not output anything
Sample 2
Input:
1 1
0 0
Output:
0
The only packet arrived at time 0, and the computer started processing it immediately.
Sample 3
Input:
1 2
0 1
0 1
Output:
0
1
The first packet arrived at time 0; the second packet also arrived at time 0, but was dropped because the network buffer has size 1 and was already full with the first packet. The first packet started processing at time 0; the second was not processed at all.
Sample 4
Input:
1 2
0 1
1 1
Output:
0
1
The first packet arrived at time 0; the computer started processing it immediately and finished at time 1. The second packet arrived at time 1, and the computer started processing it immediately.
Failing Test Case:
Inputs:
3 6
0 2
1 2
2 2
3 2
4 2
5 2
My Output:
0
2
4
-1
6
-1
Correct Output:
0
2
4
6
8
-1
The Code
#include <iostream>
#include <queue>
#include <vector>

struct Request {
    Request(int arrival_time, int process_time) :
        arrival_time(arrival_time),
        process_time(process_time)
    {}

    int arrival_time;
    int process_time;
};

struct Response {
    Response(bool dropped, int start_time) :
        dropped(dropped),
        start_time(start_time)
    {}

    bool dropped;
    int start_time;
};

class Buffer {
public:
    Buffer(int size) :
        size_(size),
        finish_time_()
    {}

    Response Process(const Request &request)
    {
        // Drop finish times of packets that completed before this arrival.
        while (!finish_time_.empty() && finish_time_.front() <= request.arrival_time)
            finish_time_.pop();

        if (finish_time_.empty())
        {
            // Buffer is empty: process the packet immediately.
            finish_time_.push(request.arrival_time + request.process_time);
            return Response(false, request.arrival_time);
        }
        else
        {
            // NOTE: this compares the buffer size against a time difference,
            // not against the number of packets currently in the buffer.
            if (size_ > (finish_time_.back() - request.arrival_time))
            {
                int before = finish_time_.back();
                finish_time_.push(before + request.process_time);
                return Response(false, before);
            }
            else
            {
                // Dropped; the start_time value (5) is arbitrary and unused.
                return Response(true, 5);
            }
        }
    }

private:
    int size_;
    std::queue<int> finish_time_;
};

std::vector<Request> ReadRequests()
{
    std::vector<Request> requests;
    int count;
    std::cin >> count;
    for (int i = 0; i < count; ++i) {
        int arrival_time, process_time;
        std::cin >> arrival_time >> process_time;
        requests.push_back(Request(arrival_time, process_time));
    }
    return requests;
}

std::vector<Response> ProcessRequests(const std::vector<Request> &requests, Buffer *buffer)
{
    std::vector<Response> responses;
    for (size_t i = 0; i < requests.size(); ++i)
        responses.push_back(buffer->Process(requests[i]));
    return responses;
}

void PrintResponses(const std::vector<Response> &responses)
{
    for (size_t i = 0; i < responses.size(); ++i)
        std::cout << (responses[i].dropped ? -1 : responses[i].start_time) << std::endl;
}

int main() {
    int size;
    std::cin >> size;
    std::vector<Request> requests = ReadRequests();
    Buffer buffer(size);
    std::vector<Response> responses = ProcessRequests(requests, &buffer);
    PrintResponses(responses);
    return 0;
}
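For what it's worth, the failing test points at the drop condition: size_ > (finish_time_.back() - request.arrival_time) compares the buffer size against a time difference, while the problem statement defines "full" by the number of packets still in the buffer, which after the pops is exactly finish_time_.size(). A minimal sketch of Process rewritten around that idea (same types as above, untested against the grader):

Response Process(const Request &request)
{
    // Packets whose processing finished at or before this arrival
    // have already left the buffer.
    while (!finish_time_.empty() && finish_time_.front() <= request.arrival_time)
        finish_time_.pop();

    // "Full" means S packets are still waiting or in progress.
    if ((int)finish_time_.size() >= size_)
        return Response(true, -1);

    // Start now if idle, otherwise when the last queued packet finishes.
    int start_time = finish_time_.empty() ? request.arrival_time
                                          : finish_time_.back();
    finish_time_.push(start_time + request.process_time);
    return Response(false, start_time);
}

On the failing test (S = 3), the packet arriving at time 3 then sees finish times {4, 6} after 2 is popped; the count 2 < 3, so it is accepted with start time 6, matching the expected output.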

Related

ALSA Capture missing frames

I have inherited a chunk of code that is using ALSA to capture an audio input at 8 kHz, 8 bits, 1 channel. The code looks rather simple: it sets channels to 1, rate to 8000, and period size to 8000. The goal of this program is to gather audio data in 30+ minute chunks at a time.
The main loop looks like
int retval;
snd_pcm_uframes_t numFrames = 8000;
while (!exit)
{
    // Gather data
    while( (unsigned int)(retval = snd_pcm_readi( handle, buffer, numFrames )) != numFrames )
    {
        if( retval == -EPIPE )
        {
            cerr << "overrun " << endl;
            snd_pcm_prepare( handle );
        }
        else if ( retval < 0 )
        {
            cerr << "Error : " << snd_strerror( retval ) << endl;
            break;
        }
    }
    // buffer processing logic here
}
We have been having behavioral issues (not getting the full 8K samples per second, and weird timing), so I added gettimeofday timestamps around the snd_pcm_readi loop to see how time was being used, and I got the following:
loop 1 : 1.017 sec
loop 2 : 2.019 sec
loop 3 : 0 (less than 1 ms)
loop 4 : 2.016 sec
loop 5 : .001 sec
... the 2-loop pattern continues (even runs 2.01x sec, odd runs 0-1 ms) for the rest of the run. This means I am actually getting on average less than 8000 samples per second (the loss appears to be about 3 seconds per 10 minutes of running). This does not sync well with the other gathered data. Also, we would have expected to process the data at about 1-second intervals, not have 2 back-to-back processing runs every 2 seconds or so.
As an additional check, I printed out the buffer values after setting the hardware parameters and I got the following:
Buffer Size : 43690
Periods : 5
Period Size : 8000
Period Time : 1000000
Rate : 8000
So in the end I have 2 questions:
1) Why do I get actual data at less than 8 kHz? (Possible theory: the actual hardware is not quite at 8 kHz, even if ALSA thinks it can do it.)
2) Why the 2 sec / 0 sec cycle on the reads, which should be 1 second each? And what can be done to get them to a real 1-second cycle?
Thanks for the help.
Dale Pennington
snd_pcm_readi() returns as many samples as are available, and it will not wait for more if the device is in non-blocking mode.
You only have retval samples. If you want to handle 8000 samples at once, call snd_pcm_readi() in a loop with the remaining part of the buffer.
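For example, a gather loop along these lines — a minimal sketch, assuming handle and buffer are set up as in the question and buffer is a byte array (8-bit mono, so one byte per frame):

snd_pcm_uframes_t wanted = 8000;
snd_pcm_uframes_t got = 0;
while (got < wanted)
{
    // Continue reading after the frames we already have instead of
    // overwriting the start of the buffer.
    snd_pcm_sframes_t n = snd_pcm_readi(handle, (char *)buffer + got, wanted - got);
    if (n == -EPIPE)            // overrun: recover and keep reading
        snd_pcm_prepare(handle);
    else if (n < 0)             // any other error: give up on this chunk
        break;
    else
        got += n;               // partial read: accumulate
}

Unlike the original loop, a short read here does not restart the transfer from the beginning of the buffer, so no frames are silently overwritten.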

Issue sending/Receiving vector of double over TCP socket (missing data)

I am trying to send data from a vector over a TCP socket.
I'm working with a vector that I fill with values from 0 to 4999, and then send it to the socket.
Client side, I'm receiving the data into a vector, then I copy its data to another vector until I have received all the data from the server.
The issue I'm facing is that when I receive my data, sometimes I get all of it, and sometimes I only receive correct data from 0 to 1625 and then get garbage data until the end (please see the image below). I have even received, for example, correct data from 0 to 2600, then garbage from 2601 to 3500, and finally correct data again from 3501 to 4999.
(The left column is the line number and the right column is the data.)
This is the server side :
vector<double> values2;
for(int i=0; i<5000; i++)
values2.push_back(i);
skt.sendmsg(&values2[0], values2.size()*sizeof(double));
The function sendmsg :
void Socket::sendmsg(const void *buf, size_t len){
    int bytes = -1;
    bytes = send(m_csock, buf, len, MSG_CONFIRM);
    cout << "Bytes sent: " << bytes << endl;
}
Client side :
vector<double> final;
vector<double> msgrcvd(4096);
do{
    bytes += recv(sock, &msgrcvd[0], msgrcvd.size()*sizeof(double), 0);
    cout << "Bytes received: " << bytes << endl;
    //Get rid of the trailing zeros
    while(!msgrcvd.empty() && msgrcvd[msgrcvd.size() - 1] == 0){
        msgrcvd.pop_back();
    }
    //Insert buffer content into final vector
    final.insert(final.end(), msgrcvd.begin(), msgrcvd.end());
}while(bytes < sizeof(double)*5000);
//Write the received data in a txt file
for(int i=0; i<final.size(); i++)
    myfile << final[i] << endl;
myfile.close();
The byte counts output are correct: the server outputs 40000 when sending the data and the client also outputs 40000 when receiving the data.
Removing the trailing zeros and then inserting the content of the buffer into a new vector is not very efficient, but I don't think that's the issue. If you have any clues on how to make it more efficient, that would be great!
I don't really know if the issue is when I send the data or when I receive it, and I also don't really get why sometimes (rarely) I get all the data.
recv receives bytes, and doesn't necessarily wait for all the data that was sent. So you can be receiving part of a double.
Your code works if you receive complete double values, but will fail when you receive part of a value. You should receive your data in a char buffer, then unpack it into doubles. (Possibly converting endianness if the server and client are different.)
#include <array>   // For std::array (needed in addition to the original include)
#include <cstring> // For memcpy

std::array<char, 1024> msgbuf;
double d;
char data[sizeof(double)];
int carryover = 0;
do {
    int b = recv(sock, &msgbuf[carryover], msgbuf.size() * sizeof(msgbuf[0]) - carryover, 0);
    bytes += b;
    b += carryover;
    const char *mp = &msgbuf[0];
    while (b >= (int)sizeof(double)) {
        // Copy one double's worth of bytes out of the receive buffer.
        char *bp = data;
        for (size_t i = 0; i < sizeof(double); ++i) {
            *bp++ = *mp++;
        }
        std::memcpy(&d, data, sizeof(double));
        final.push_back(d);
        b -= (int)sizeof(double);
    }
    carryover = b % sizeof(double);
    // Take care of the extra bytes: copy them down to the start of the buffer.
    for (int j = 0; j < carryover; ++j) {
        msgbuf[j] = *mp++;
    }
} while (bytes < sizeof(double) * 5000);
This uses type punning from What's a proper way of type-punning a float to an int and vice-versa? to convert the received binary data to a double, and assumes the endianness of the client and server are the same.
Incidentally, how does the receiver know how many values it is receiving? You have a mix of hard coded values (5000) and dynamic values (.size()) in your server code.
Note: code not compiled or tested
TL/DR:
Never ever send raw data via a network socket and expect it to be properly received/unpacked on the other side.
Detailed answer:
Networking is built on top of various protocols, and this is for a reason. Once you send something, there is no guarantee your counterparty is on the same OS and the same software version. There is no standard for how primitive types should be encoded on the byte level. There is no restriction on how many intermediate nodes could be involved in the data delivery, and each of your send() calls may traverse a different route. So, you have to formalize the way you send the data; then the other party can be sure what the proper way is to retrieve it from the socket.
Simplest solution: use a header before your data (see the sketch after the list below). So, you plan to send 5000 doubles? Then send a DWORD first which contains 40000 inside (5k elements, 8 bytes each -> 40k), and push all your 5k doubles right after that. Then your counterparty should read 4 bytes from the socket first, interpret them as a DWORD, and understand how many bytes will come after that.
Next step: you may want to send not only doubles, but ints and strings as well. That way, you have to expand your header so it can indicate:
Total size of the further data (the so-called payload size)
Kind of data (array of doubles, string, single int, etc.)
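For instance, the header idea might look like this on the sending side. This is a minimal sketch, assuming a connected TCP socket sock; send_all is a hypothetical helper that loops until every byte is written, and the 4-byte length header is sent in network byte order:

#include <arpa/inet.h>  // htonl
#include <sys/socket.h> // send
#include <cstdint>
#include <vector>

// send() may accept fewer bytes than requested, so loop until done.
static bool send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n <= 0)
            return false; // error or connection closed
        sent += n;
    }
    return true;
}

static bool send_doubles(int sock, const std::vector<double> &v)
{
    // Header: payload size in bytes (5000 doubles -> 40000).
    uint32_t payload = htonl(static_cast<uint32_t>(v.size() * sizeof(double)));
    return send_all(sock, reinterpret_cast<const char *>(&payload), sizeof(payload))
        && send_all(sock, reinterpret_cast<const char *>(v.data()), v.size() * sizeof(double));
}

The receiver first reads exactly 4 bytes, applies ntohl, and then keeps calling recv() until exactly that many payload bytes have arrived.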
Advanced solution:
Take a look on ready-to-go solutions:
ProtoBuf https://developers.google.com/protocol-buffers/docs/cpptutorial
Boost.Serialization https://www.boost.org/doc/libs/1_67_0/libs/serialization/doc/index.html
Apache Thrift https://thrift.apache.org
YAS https://github.com/niXman/yas
Happy coding!

MPI error in communicating with many to many processors

I am writing code where each processor must interact with multiple processors.
Ex: I have 12 processors, so processor 0 has to communicate with, say, 1, 2, 10 and 9. Let's call them the neighbours of processor 0. Similarly:
Processor 1 has to communicate with, say, 5 and 3.
Processor 2 has to communicate with 5, 1, 0, 10 and 11,
and so on.
The flow of data is 2-way, i.e. processor 0 must send data to 1, 2, 10 and 9 and also receive data from them.
Also, there is no problem with the tag calculation.
I have created code which works like this:
for(all neighbours)
{
    store data in vector<double> x;
    MPI_Send(x)
}
MPI_Barrier();
for(all neighbours)
{
    MPI_Recv(x);
    do work with x
}
Now I am testing this algorithm for different sizes of x and different arrangements of neighbours. The code works for some, but does not work for others; it simply ends in deadlock.
I have also tried:
for(all neighbours)
{
    store data in vector<double> x;
    MPI_Isend(x)
}
MPI_Test();
for(all neighbours)
{
    MPI_Recv(x);
    do work with x
}
The result is the same, although the deadlock is replaced by NaN in the result, as MPI_Test() tells me that some of the MPI_Isend() operations are not complete and it jumps immediately to MPI_Recv().
Can anyone guide me in this matter? What am I doing wrong? Or is my fundamental approach itself incorrect?
EDIT: I am attaching a code snippet for better understanding of the problem. I am basically working on parallelizing an unstructured 3D CFD solver.
I have attached one of the files, with some explanation. I am not broadcasting; I am looping over the neighbours of the parent processor to send the data across the interface (this can be defined as a boundary between two zones).
So, if I have 12 processors, and processor 0 has to communicate with, say, 1, 2, 10 and 9, then 0 is the parent processor and 1, 2, 10 and 9 are its neighbours.
As the file was too long and is part of the solver, to keep things simple I have only kept the MPI functions in it.
void Reader::MPI_InitializeInterface_Values() {
    double nbr_interface_id;
    Interface *interface;
    MPI_Status status;
    MPI_Request send_request, recv_request;
    int err, flag;
    int err2;
    char buffer[MPI_MAX_ERROR_STRING];
    int len;
    int count;
    for (int zone_no = 0; zone_no < this->GetNumberOfZones(); zone_no++) { // Number of zones per processor is 1, so basically each zone is an independent processor
        UnstructuredGrid *zone = this->ZoneList[zone_no];
        int no_of_interface = zone->GetNumberOfInterfaces();
        // int count;
        long int count_send = 0;
        long int count_recv = 0;
        long int max_size = 10000; // can be set from test case later
        int max_size2 = 199;
        int proc_no = FlowSolution::processor_number;
        for (int interface_no = 0; interface_no < no_of_interface; interface_no++) { // interface is defined as a boundary between two zones
            interface = zone->GetInterface(interface_no);
            int no_faces = interface->GetNumberOfFaces();
            if (no_faces != 0) {
                std::vector<double> Variable_send; // The vector which stores the data to be sent across the interface
                std::vector<double> Variable_recieve;
                int total_size = FlowSolution::VariableOrder.size() * no_faces;
                Variable_send.resize(total_size);
                Variable_recieve.resize(total_size);
                int nbr_proc_no = zone->GetInterface(interface_no)->GetNeighborZoneId(); // neighbour of parent processor
                int j = 0;
                nbr_interface_id = interface->GetShared_Interface_ID();
                for (std::map<VARIABLE, int>::iterator iterator = FlowSolution::VariableOrder.begin(); iterator != FlowSolution::VariableOrder.end(); iterator++) {
                    for (int face_no = 0; face_no < no_faces; face_no++) {
                        Face *face = interface->GetFace(face_no);
                        int owner_id = face->Getinterface_Original_face_owner_id();
                        double value_send = zone->GetInterface(interface_no)->GetFace(face_no)->GetCell(owner_id)->GetPresentFlowSolution()->GetVariableValue((*iterator).first);
                        Variable_send[j] = value_send;
                        j++;
                    }
                }
                count_send = nbr_proc_no * max_size + nbr_interface_id; // tag for data to be sent
                err2 = MPI_Isend(&Variable_send.front(), total_size, MPI_DOUBLE, nbr_proc_no, count_send, MPI_COMM_WORLD, &send_request);
            } // end of sending
        } // all the processors have sent data to their corresponding neighbours
        MPI_Barrier(MPI_COMM_WORLD);
        for (int interface_no = 0; interface_no < no_of_interface; interface_no++) { // loop over the neighbours of the current processor to receive data
            interface = zone->GetInterface(interface_no);
            int no_faces = interface->GetNumberOfFaces();
            if (no_faces != 0) {
                std::vector<double> Variable_recieve; // The vector which collects the data sent across the interface
                int total_size = FlowSolution::VariableOrder.size() * no_faces;
                Variable_recieve.resize(total_size);
                count_recv = proc_no * max_size + interface_no; // tag to receive data
                int nbr_proc_no = zone->GetInterface(interface_no)->GetNeighborZoneId();
                nbr_interface_id = interface->GetShared_Interface_ID();
                MPI_Irecv(&Variable_recieve.front(), total_size, MPI_DOUBLE, nbr_proc_no, count_recv, MPI_COMM_WORLD, &recv_request);
                /* Now some work is done using received data */
                int j = 0;
                for (std::map<VARIABLE, int>::iterator iterator = FlowSolution::VariableOrder.begin(); iterator != FlowSolution::VariableOrder.end(); iterator++) {
                    for (int face_no = 0; face_no < no_faces; face_no++) {
                        double value_recieve = Variable_recieve[j];
                        j++;
                        Face *face = interface->GetFace(face_no);
                        int owner_id = face->Getinterface_Original_face_owner_id();
                        interface->GetFictitiousCell(face_no)->GetPresentFlowSolution()->SetVariableValue((*iterator).first, value_recieve);
                        double value1 = face->GetCell(owner_id)->GetPresentFlowSolution()->GetVariableValue((*iterator).first);
                        double face_value = 0.5 * (value1 + value_recieve);
                        interface->GetFace(face_no)->GetPresentFlowSolution()->SetVariableValue((*iterator).first, face_value);
                    }
                }
                // Variable_recieve.clear();
            }
        } // end of receiving
    }
} // closing brace was missing in the snippet as posted
Working from the problem statement:
Processor 0 has to send to 1, 2, 9 and 10, and receive from them.
Processor 1 has to send to 5 and 3, and receive from them.
Processor 2 has to send to 0, 1, 5, 10 and 11, and receive from them.
There are 12 total processors.
You can make life easier if you just run a 12-step program:
Step 1: Processor 0 sends, others receive as needed, then the converse occurs.
Step 2: Processor 1 sends, others receive as needed, then the converse occurs.
...
Step 12: Profit - there's nothing left to do (because every other processor has already interacted with Processor 11).
Each step can be implemented as an MPI_Scatterv (some sendcounts will be zero), followed by an MPI_Gatherv. 22 total calls and you're done.
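One step of that scheme with the zero-count trick might look as follows; a minimal runnable sketch (assumptions: one double per neighbour, and the hard-coded neighbour list {1, 2, 9, 10} stands in for processor 0's real neighbours):

#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int root = 0;                 // step 1: processor 0 distributes
    int nbrs[] = {1, 2, 9, 10};   // neighbours of processor 0 (from the question)

    // sendcounts: 1 double for each neighbour of the root, 0 for everyone else.
    std::vector<int> sendcounts(size, 0), displs(size, 0);
    for (int i = 0; i < 4; ++i)
        if (nbrs[i] < size)
            sendcounts[nbrs[i]] = 1;
    for (int i = 1; i < size; ++i)
        displs[i] = displs[i - 1] + sendcounts[i - 1];

    std::vector<double> sendbuf(size, 3.14); // data the root hands out
    double recvval = 0.0;
    // Ranks with sendcounts[rank] == 0 simply receive nothing.
    MPI_Scatterv(sendbuf.data(), sendcounts.data(), displs.data(), MPI_DOUBLE,
                 &recvval, sendcounts[rank], MPI_DOUBLE, root, MPI_COMM_WORLD);
    // The converse direction (neighbours -> root) is the matching MPI_Gatherv.

    MPI_Finalize();
    return 0;
}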
There may be several possible reasons for a deadlock, so you have to be more specific. E.g., the standard says: "When standard send operations are used, then a deadlock situation may occur where both processes are blocked because buffer space is not available."
You should use both Isend and Irecv. The general structure should be:
MPI_Request req[2*n];
MPI_Irecv(..., &req[0]);
// ...
MPI_Irecv(..., &req[n-1]);
MPI_Isend(..., &req[n]);
// ...
MPI_Isend(..., &req[2*n-1]);
MPI_Waitall(2*n, req, MPI_STATUSES_IGNORE);
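As a self-contained illustration of that structure, here is a sketch where every rank exchanges one double with each of its neighbours; the ring-shaped neighbour list is an assumption that stands in for the solver's interface-derived neighbour lists:

#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Toy neighbour list: the previous and the next rank in a ring.
    std::vector<int> nbrs;
    nbrs.push_back((rank + size - 1) % size);
    nbrs.push_back((rank + 1) % size);
    int n = (int)nbrs.size();

    std::vector<double> sendbuf(n, rank), recvbuf(n);
    std::vector<MPI_Request> req(2 * n);

    for (int i = 0; i < n; ++i)   // post every receive first
        MPI_Irecv(&recvbuf[i], 1, MPI_DOUBLE, nbrs[i], 0, MPI_COMM_WORLD, &req[i]);
    for (int i = 0; i < n; ++i)   // then every send
        MPI_Isend(&sendbuf[i], 1, MPI_DOUBLE, nbrs[i], 0, MPI_COMM_WORLD, &req[n + i]);

    // Only touch the buffers after *all* operations have completed; this
    // is what prevents both the deadlock and the NaN-from-unfinished-sends
    // problems described in the question.
    MPI_Waitall(2 * n, req.data(), MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}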
By using MPI_Allgatherv, the problem can be solved. All I did was make the send counts such that only the processors I wanted to communicate with had a nonzero send count; the other processors had a send count of 0.
This solved my problem.
Thank you everyone for your answers!

can not work well with send() and recv(), the 1st byte of a short string got lost - c++

I am using send() and recv() in Linux with C++.
I am trying to make up some kind of protocol, and part of it works like this:
A connects to B, B creates a thread and waits
A sends "backup" to B
B sends "OK" to A
A sends (some string like) "20001" to B.
In the last (4th) step, A sends a short string to B, less than 10 bytes.
However, when A sends a "20001" to B, B receives a "0001": the first byte got lost, and I called recv() only once.
I checked the lengths: A sent 6 bytes, B received 18 or 19 bytes, and the buffer that B used is 20 bytes long. Some of the code:
send(datasock, conferenceid.c_str(), conferenceid.size()+1, 0); // A sends conferenceid; sent "20001" and returned 6 in the tests
char temp[20] = {0}; // B receives data
memset(temp, 0, 20);
recv(remote->sock_fd, temp, 20, 0); // got "0001" and returned 18 or 19 in the tests
The thing is, several hours ago, in some other part of my program, when a "10001" was sent, a "001" was received. Somehow it worked well hours later.
I am not familiar with network programming. Can someone tell me where I can find the lost bytes?
From man send
RETURN VALUE
On success, these calls return the number of characters sent. On error, -1 is returned, and errno is set appropriately.
send, as well as recv (and also the similar write and read), when used synchronously (meaning that the application will block while waiting for packets) and when the size of the packet is known, should be wrapped in a loop like this (example for write):
int write_all(int fd, const char *buf, int n)
{
    int pos = 0;
    while (pos < n)
    {
        int cnt = write(fd, buf + pos, n - pos);
        if (cnt < 0)
        {
            perror("write_all");
            exit(1);
        }
        if (cnt > 0)
        {
            pos += cnt;
        }
    }
    return 0;
}
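A matching loop for the receiving side might look like this; a sketch under the assumption that the receiver knows it expects exactly n bytes (add unistd.h for read and cstdlib for exit):

int read_all(int fd, char *buf, int n)
{
    int pos = 0;
    while (pos < n)
    {
        int cnt = read(fd, buf + pos, n - pos);
        if (cnt < 0)
        {
            perror("read_all");
            exit(1);
        }
        if (cnt == 0) // peer closed the connection
            break;
        pos += cnt;
    }
    return pos; // number of bytes actually read
}

Note that looping only fixes partial reads. Because TCP is a byte stream, consecutive short messages such as "OK" and "20001" can still arrive glued together in a single recv(), which would be consistent with B receiving 18 or 19 bytes at once; delimiting messages (e.g. with a length header) is still up to the protocol.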

fast reading constant data stream from serial port in C++.net

I'm trying to establish a SerialPort connection which transfers 16-bit data packages at a rate of 10-20 kHz. I'm programming this in C++/CLI. The sender just enters an infinite while loop after receiving the letter "s" and constantly sends 2 bytes with the data.
A problem with the sending side is very unlikely, since a simpler approach works perfectly but too slowly (in that approach, the receiver always sends an "a" first, and then gets 1 package consisting of 2 bytes; this leads to a speed of around 500 Hz).
Here is the important part of this working but slow approach:
public: SerialPort^ port;
in main:
Parity p = (Parity)Enum::Parse(Parity::typeid, "None");
StopBits s = (StopBits)Enum::Parse(StopBits::typeid, "1");
port = gcnew SerialPort("COM16",384000,p,8,s);
port->Open();
and then doing as often as wanted:
port->Write("a");
int i = port->ReadByte();
int j = port->ReadByte();
This is now the actual approach I'm working with:
static int values[1000000];
static int counter = 0;

void reader(void)
{
    SerialPort^ port;
    Parity p = (Parity)Enum::Parse(Parity::typeid, "None");
    StopBits s = (StopBits)Enum::Parse(StopBits::typeid, "1");
    port = gcnew SerialPort("COM16", 384000, p, 8, s);
    port->Open();
    unsigned int i = 0;
    unsigned int j = 0;
    port->Write("s"); // with this command, the sender starts to send constantly
    while(true)
    {
        i = port->ReadByte();
        j = port->ReadByte();
        values[counter] = j + (i*256);
        counter++;
    }
}
}
in main:
Thread^ readThread = gcnew Thread(gcnew ThreadStart(reader));
readThread->Start();
The counter increases (much more) rapidly, at a rate of 18472 packages/s, but the values are somehow wrong.
Here is an example:
The values should look like this, with the last 4 bits changing randomly (it's the signal of an analogue-digital converter):
111111001100111
Here are some values of the threaded solution given in the code:
1110011001100111
1110011000100111
1110011000100111
1110011000100111
So it looks like the connection reads the data in the middle of the package (to be exact: 3 bits too late). What can I do? I want to avoid a solution where this error is fixed later in the code while reading the packages, because I don't know if the shifting error gets worse when I edit the reading code later, which I most likely will.
Thanks in advance,
Nikolas
PS: If it helps, here is the code of the sender side (an ATmega168), written in C.
uint8_t active = 0;

void uart_puti16(uint16_t val) // function that writes the data to the serial port
{
    while ( !( UCSR0A & (1<<UDRE0)) ) // wait until serial port is ready
        nop(); // wait 1 cycle
    UDR0 = val >> 8; // write first byte to sending register
    while ( !( UCSR0A & (1<<UDRE0)) ) // wait until serial port is ready
        nop(); // wait 1 cycle
    UDR0 = val & 0xFF; // write second byte to sending register
}

in main:

while(1)
{
    if(active == 1)
    {
        uart_puti16(read()); // read is the function that gives a 16-bit data set
    }
}

ISR(USART_RX_vect) // interrupt handler for a received byte
{
    if(UDR0 == 'a') // if only 1 single data package is requested
    {
        uart_puti16(read());
    }
    if(UDR0 == 's') // to activate constant sending
    {
        active = 1;
    }
    if(UDR0 == 'e') // to deactivate constant sending
    {
        active = 0;
    }
}
At the given bit rate of 384,000 you should get 38,400 bytes of data (8 bits of real data plus 2 framing bits per byte) per second, or 19,200 two-byte values per second.
How fast is the counter increasing in each instance? I would expect any modern computer to keep up with that rate, whether using events or directly polling.
You do not show your simpler approach which is stated to work. I suggest you post that.
Also, set a breakpoint at the line
values[counter] = j + (i*256);
There, inspect i and j. Share the values you see for those variables on the very first iteration through the loop.
This is a guess based entirely on reading the documentation at http://msdn.microsoft.com/en-us/library/system.io.ports.serialport.datareceived.aspx#Y228. With this caveat out of the way, here's my guess:
Your event handler is being called when data is available to read, but you are only consuming two bytes of the available data. Your event handler may only be called every 1024 bytes, or something similar. You might need to consume all the available data in the event handler for your program to continue as expected.
Try to rewrite your handler to include a loop that reads until there is no more data available to consume.
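Such a draining loop might look like this in C++/CLI; a minimal sketch assuming port, values, and counter from the question, and that the stream starts on a packet boundary (the pending byte keeps the 2-byte pairs aligned across reads):

array<unsigned char>^ buf = gcnew array<unsigned char>(4096);
int pending = 0; // leftover byte from the previous read, if any
while (true)
{
    // Read whatever is buffered, not just two bytes at a time.
    int n = pending + port->Read(buf, pending, buf->Length - pending);
    int k = 0;
    for (; k + 1 < n; k += 2) // assemble 16-bit values, high byte first
    {
        values[counter] = buf[k + 1] + (buf[k] * 256);
        counter++;
    }
    pending = n - k; // 0 or 1 byte left over
    if (pending == 1)
        buf[0] = buf[k]; // carry the odd byte into the next read
}

This removes the per-byte ReadByte() overhead and keeps byte pairs aligned across reads; it does not by itself explain the bit-shifted values, which look more like a framing or baud-rate issue on the line.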