File size and buffer overshoot - C++

I have a function which opens a file from an SD card, uses the file size to set the size of a buffer, writes a block of information to that buffer, then does something with that information, as shown in this code:
const char *filename = "filename.txt";
uint16_t duration;
uint16_t pixel;
int q = 0;
int w = 0;
bool largefile;
File f;
int readuntil;
long large_buffer;

f = SD.open(filename);
if (f.size() > 3072) {
    w = 3072;
} else {
    w = f.size();
}
uint8_t buffer[w];
while (f.available()) {
    f.read(buffer, sizeof(buffer));
    while (q < sizeof(buffer)) {
        doStuffWithInformation(buffer[q++]);
    }
    q = 0;
}
f.close();
This works great with smaller files, but anything over the hard buffer limit of 3072 bytes (a value I arrived at empirically; it's just the amount of memory that can safely be committed to this function) runs into a problem. Larger files read fine until the last iteration of while(f.available()), which reads the end of the file but then keeps processing the buffer, whose tail end still holds data from the previous iteration that wasn't overwritten by the latest f.read(). How can I make sure the last iteration only works with the bytes that were actually written to the buffer on that pass? My only idea right now is to find the factors of the file size and set the buffer size to the largest factor below 3072, but that seems expensive to run every time this function is called. Is there an elegant solution staring me in the face?

Your program is not behaving correctly because f.read() is not guaranteed to fill the whole buffer. In fact, a short read is bound to happen on the last chunk of the file, unless the file size is an exact multiple of the buffer size (3072 in your case).
While the Arduino documentation (https://www.arduino.cc/en/Reference/FileRead) doesn't say so, SD's read function returns the number of bytes actually read. See the library source here: https://github.com/arduino-libraries/SD/blob/master/src/utility/SdFile.cpp, specifically int16_t SdFile::read(void* buf, uint16_t nbyte).
Knowing that, you should change your loop as follows (rewriting it as a for loop for better readability, which also lets you remove the definition of q above):
while (f.available()) {
    uint16_t sz = f.read(buffer, sizeof(buffer));
    for (uint16_t q = 0; q < sz; ++q) {
        doStuffWithInformation(buffer[q]);
    }
}
On a side note: now that you have this logic in place, it would make sense to do away with the variable-length array and use a fixed buffer of 512 bytes - the standard sector size on an SD card. Most likely it will yield the same read performance, and slightly better performance around sizeof, which becomes a compile-time constant rather than a run-time calculation. It also makes your program simpler. That gives the following code:
f = SD.open(filename);
...
uint8_t buffer[512];
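Putting both suggestions together, the whole routine might look like this (an untested sketch, reusing filename and doStuffWithInformation() from the question):

f = SD.open(filename);
uint8_t buffer[512]; // one SD sector; sizeof(buffer) is a compile-time constant
while (f.available()) {
    int sz = f.read(buffer, sizeof(buffer)); // may return less than 512 on the last chunk
    if (sz <= 0) {
        break; // end of file or read error
    }
    for (int q = 0; q < sz; ++q) {
        doStuffWithInformation(buffer[q]);
    }
}
f.close();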

Related

How to concurrently write to a file in C++ (in other words, what's the fastest way to write to a file)

I'm building a graphics engine, and I need to write the resulting image to a .bmp file. I'm storing the pixels in a vector<Color>, along with the width and the height of the image. Currently I'm writing the image as follows (I didn't write this code myself):
std::ostream &img::operator<<(std::ostream &out, EasyImage const &image) {
    // temporarily enable exceptions on the output stream
    enable_exceptions(out, std::ios::badbit | std::ios::failbit);
    // declare some struct-vars we're going to need:
    bmpfile_magic magic;
    bmpfile_header file_header;
    bmp_header header;
    uint8_t padding[] = {0, 0, 0, 0};
    // calculate the total size of the pixel data
    unsigned int line_width = image.get_width() * 3; // 3 bytes per pixel
    unsigned int line_padding = 0;
    if (line_width % 4 != 0) {
        line_padding = 4 - (line_width % 4);
    }
    // lines must be aligned to a multiple of 4 bytes
    line_width += line_padding;
    unsigned int pixel_size = image.get_height() * line_width;
    // start filling the headers
    magic.magic[0] = 'B';
    magic.magic[1] = 'M';
    file_header.file_size = to_little_endian(pixel_size + sizeof(file_header) + sizeof(header) + sizeof(magic));
    file_header.bmp_offset = to_little_endian(sizeof(file_header) + sizeof(header) + sizeof(magic));
    file_header.reserved_1 = 0;
    file_header.reserved_2 = 0;
    header.header_size = to_little_endian(sizeof(header));
    header.width = to_little_endian(image.get_width());
    header.height = to_little_endian(image.get_height());
    header.nplanes = to_little_endian(1);
    header.bits_per_pixel = to_little_endian(24); // 3 bytes or 24 bits per pixel
    header.compress_type = 0; // no compression
    header.pixel_size = pixel_size;
    header.hres = to_little_endian(11811); // 11811 pixels/meter or 300dpi
    header.vres = to_little_endian(11811); // 11811 pixels/meter or 300dpi
    header.ncolors = 0; // no color palette
    header.nimpcolors = 0; // no important colors
    // okay, that should be all the header stuff: let's write it to the stream
    out.write((char *) &magic, sizeof(magic));
    out.write((char *) &file_header, sizeof(file_header));
    out.write((char *) &header, sizeof(header));
    // okay, let's write the pixels themselves:
    // they are arranged left->right, bottom->top, b,g,r
    // this is the main bottleneck
    for (unsigned int i = 0; i < image.get_height(); i++) {
        // loop over all lines
        for (unsigned int j = 0; j < image.get_width(); j++) {
            // loop over all pixels in a line
            // we cast &color to char*; since the color fields are ordered blue,green,red,
            // they should be written automatically in the right order
            out.write((char *) &image(j, i), 3 * sizeof(uint8_t));
        }
        if (line_padding > 0)
            out.write((char *) padding, line_padding);
    }
    // okay, we should be done
    return out;
}
As you can see, the pixels are written one by one. This is quite slow; I put some timers in my program and found that the writing was my main bottleneck.
I tried to write entire (horizontal) lines at once, but I did not find how to do it (the best I found was this).
Secondly, I wanted to write to the file using multithreading (not sure if I need threading or multiple processes), using OpenMP. But that means I would need to specify which byte address to write to, I think, and I couldn't work that out.
Lastly, I thought about immediately writing to the file whenever I drew an object, but then I had the same issue with writing to specific locations in the file.
So, my question is: what's the best (fastest) way to tackle this problem? (I'm compiling this for Windows and Linux.)
The fastest method to write to a file is to use hardware assist. Write your output to memory (a.k.a. buffer), then tell the hardware device to transfer from memory to the file (disk).
The next fastest method is to write all the data to a buffer then block write the data to the file. If you want other tasks or threads to execute during your writing, then create a thread that writes the buffer to the file.
When writing to a file, the more data per transaction, the more efficient the write will be. For example, 1 write of 1024 bytes is faster than 1024 writes of one byte.
The idea is to keep the data streaming. A steady transfer, even at a slower rate, may finish sooner than burst write, delay, burst write, delay, etc.
Remember that the disk is essentially a serial device (unless you have a special hard drive). Bits are laid down on the platters using a bit stream. Writing data in parallel will have adverse effects because the head will have to be moved between the parallel activities.
Remember that if you use more than one core, there will be more traffic on the data bus. The transfer to the file will have to pause while other threads/tasks are using the data bus. So, if you can, block all tasks, then transfer your data. :-)
I've written programs that copy from slow memory to fast memory, then transferred from fast memory to the hard drive. That was also using interrupts (threads).
Summary
Fast writing to a file involves:
Keep the data streaming; minimize the pauses.
Write in binary mode (no translations, please).
Write in blocks (format into memory as necessary before writing the block).
Maximize the data in a transaction.
Use a separate writing thread if you want other tasks running "concurrently".
The hard drive is a serial device, not parallel. Bits are written to the platters in a serial stream.
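To make points 3 and 4 concrete for the question's pixel loop: the whole padded pixel block can be assembled in memory and written with a single call. A minimal sketch, assuming the same EasyImage accessors, BGR byte order, and the pixel_size/line_padding/padding values computed earlier in the question's code (requires <vector>):

std::vector<char> pixels;
pixels.reserve(pixel_size);
for (unsigned int i = 0; i < image.get_height(); i++) {
    for (unsigned int j = 0; j < image.get_width(); j++) {
        const char *p = (const char *) &image(j, i); // blue, green, red
        pixels.insert(pixels.end(), p, p + 3);
    }
    // keep each line aligned to a multiple of 4 bytes
    pixels.insert(pixels.end(), padding, padding + line_padding);
}
out.write(pixels.data(), pixels.size()); // one large transaction instead of one per pixel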

Element-wise shifting from smaller array to a larger array

I am programming an ESP32 in the Arduino framework. For my application, I need to create a buffer which will store information from both the present and the last time it was accessed. Here is what I am attempting to do.
// first buffer
char buffer1[4];
// second buffer
char buffer2[8];

void setup() {
    // setup
}

// buffer1 values will change with each iteration of loop() from external inputs
// buffer2 must store the most recent values of buffer1 plus the values of buffer1 from when loop() last ran
For example:
**loop first iteration**
void loop() {
    buffer1[0] = 1;
    buffer1[1] = 2;
    buffer1[2] = 3;
    buffer1[3] = 1;
    saveold(); // the function I'm trying to implement to save values to buffer2 in an element-wise way
}
//value of buffer2 should now be: buffer2 = {1,2,3,1,0,0,0,0}
**loop second iteration**
void loop() {
    buffer1[0] = 2;
    buffer1[1] = 3;
    buffer1[2] = 4;
    buffer1[3] = 2;
    saveold();
}
//value of buffer2 should now be: buffer2 = {2,3,4,2,1,2,3,1}
From what I've been able to understand through searching online, the saveold() function I'm trying to write should use some form of memmove for these array operations. I've tried to piece it together, but I always overwrite the values of buffer2 instead of shifting the new values in while retaining the old ones. This is all I've got:
void saveold() {
    memmove(&buffer2[0], &buffer1[0], (sizeof(buffer1[0]) * 4));
}
From my understanding, this copies buffer1, starting at index 0, to buffer2, starting at index 0, for 4 bytes (where 1 char = 1 byte).
Computer science is not my background, so perhaps there is some fundamental solution or strategy that I am missing. Any pointers would be appreciated.
You have multiple options to implement saveold():
Solution 1
void saveold() {
    // "shift" lower half into upper half, saving recent values (actually it's a copy)
    buffer2[4] = buffer2[0];
    buffer2[5] = buffer2[1];
    buffer2[6] = buffer2[2];
    buffer2[7] = buffer2[3];
    // copy current values
    buffer2[0] = buffer1[0];
    buffer2[1] = buffer1[1];
    buffer2[2] = buffer1[2];
    buffer2[3] = buffer1[3];
}
Solution 2
void saveold() {
    // "shift" lower half into upper half, saving recent values (actually it's a copy)
    memcpy(buffer2 + 4, buffer2 + 0, 4 * sizeof buffer2[0]);
    // copy current values
    memcpy(buffer2 + 0, buffer1, 4 * sizeof buffer1[0]);
}
Some notes
There are even more ways to do it. Anyway, choose the one you understand best.
Be sure that buffer2 is exactly double the size of buffer1.
memcpy() can be used safely if source and destination don't overlap. memmove() checks for overlaps and reacts accordingly.
&buffer1[0] is the same as buffer1 + 0. Feel free to use the expression you better understand.
sizeof is an operator, not a function. So sizeof buffer1[0] evaluates to the size of buffer1[0]. A common and widely accepted expression to calculate the number of elements in an array dimension is sizeof buffer1 / sizeof buffer1[0]. You only need parentheses if you take the size of a data type, like sizeof (int).
Solution 3
The last note leads directly to this improvement of solution 1:
void saveold() {
    // "shift" lower half into upper half, saving recent values
    size_t size = sizeof buffer2 / sizeof buffer2[0];
    for (size_t i = 0; i < size / 2; ++i) {
        buffer2[size / 2 + i] = buffer2[i];
    }
    // copy current values
    for (size_t i = 0; i < size / 2; ++i) {
        buffer2[i] = buffer1[i];
    }
}
To apply this knowledge to solution 2 is left as an exercise for you. ;-)
The correct way to do this is with buffer pointers, not by making hard-copy backups. Hard copies with memcpy are particularly bad on slow legacy microcontrollers such as AVR. I'm not quite sure which MCU the ESP32 has - it seems to be some oddball one from Tensilica. Anyway, this answer applies universally to any processor where you have more data than the CPU's data word length.
perhaps there is some fundamental solution or strategy that I am missing.
Indeed - it really sounds like what you are looking for is a ring buffer: an array of fixed size with one pointer to the beginning of the valid data and another to its end. You move the pointers, not the data. This is much more efficient, both in execution speed and RAM usage, than making naive hard copies with memcpy.
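To illustrate, here is a minimal index-based ring buffer sized for the eight chars in the question (an illustrative sketch, not a drop-in replacement for saveold()). Pushing buffer1's four values each iteration keeps the current set plus the previous one:

#include <stddef.h>

// Fixed-size ring buffer: push() overwrites the oldest element when full;
// get(0) returns the oldest valid element. Only the indices move, never the data.
template <size_t N>
struct RingBuffer {
    char data[N];
    size_t head = 0;  // index of the oldest element
    size_t count = 0; // number of valid elements

    void push(char c) {
        data[(head + count) % N] = c;
        if (count < N)
            ++count;
        else
            head = (head + 1) % N; // full: drop the oldest
    }

    char get(size_t i) const { // i = 0 addresses the oldest element
        return data[(head + i) % N];
    }
};

RingBuffer<8> history; // takes the role of buffer2 from the question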

seekg, relative or absolute position?

I need to read an array from a file. The array is not stored contiguously in the file; I have to jump "offset" bytes to get to the next element.
Which is more efficient, assuming that I read a very large file?
1) Use an incremental relative position.
2) Use an absolute position.
Option 1:
int var[N];
file.seekg(0);
for (int i = 0; i < N; i++) {
    file.read((char*) (var + i), sizeof(int));
    file.seekg(offset, ios_base::cur);
}
Option 2:
int var[N];
for (int i = 0; i < N; i++) {
    file.seekg(offset * i);
    file.read((char*) (var + i), sizeof(int));
}
read will already advance the position, so you don't need to seek inside the loop. Moreover, arrays are laid out contiguously in memory, so you can just say:
std::vector<int> var(N);
file.read(reinterpret_cast<char*>(var.data()), sizeof(int) * var.size());
auto res = file.gcount(); // number of bytes actually read
Just make sure to check the value of res and of file afterwards:
if (!file || res != static_cast<std::streamsize>(sizeof(int) * var.size()))
{
    // an error occurred
}
If you're reading from random parts of the file, it makes no difference how you seek (files are essentially "random access"). But be sure to run the above test after every single read to catch errors.
I'm 99.9% sure that it will make no difference at all, aside from correctness: offset needs to be adjusted for the fact that you've moved sizeof(int) bytes forward in the relative case but not in the absolute case. In both cases you do a seek, which moves the current position in the file. The filesystem code that handles it ultimately moves to an absolute position anyway, calculating it from the current one in the ios_base::cur case.
If it's REALLY important for you to know which is better, then benchmark the two options. But I'm pretty certain it makes absolutely no difference inside the actual seek function in the filesystem. It's just a large integer (probably 64 bits) keeping track of where in the file you will read (or write) next.
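To make the correctness point concrete: if stride (a name introduced here for illustration) is the number of bytes between the starts of consecutive elements, the two forms below land on the same position:

// Relative: read() already consumed sizeof(int) bytes, so skip only the rest.
file.seekg(stride - sizeof(int), std::ios_base::cur);

// Absolute: jump straight to the start of the i-th element.
file.seekg(static_cast<std::streamoff>(i) * stride);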

C++ Optimal Block Size For Reading From A File

I have a program that generates files containing random distributions of the character A - Z. I have written a method that reads these files (and counts each character) using fread with different buffer sizes in an attempt to determine the optimal block size for reads. Here is the method:
int get_histogram(FILE *fp, long *hist, int block_size, long *milliseconds, long *filelen)
{
    char *buffer = new char[block_size];
    bzero(buffer, block_size);

    struct timeb t;
    ftime(&t);
    long start_in_ms = t.time * 1000 + t.millitm;

    size_t bytes_read = 0;
    while (!feof(fp))
    {
        bytes_read += fread(buffer, 1, block_size, fp);
        if (ferror(fp))
        {
            return -1;
        }
        int i;
        for (i = 0; i < block_size; i++)
        {
            int j;
            for (j = 0; j < 26; j++)
            {
                if (buffer[i] == 'A' + j)
                {
                    hist[j]++;
                }
            }
        }
    }
    ftime(&t);
    long end_in_ms = t.time * 1000 + t.millitm;
    *milliseconds = end_in_ms - start_in_ms;
    *filelen = bytes_read;
    return 0;
}
However, when I plot bytes/second vs. block size (buffer size) using block sizes from 2 to 2^20, I get an optimal block size of 4 bytes -- which just can't be correct. Something must be wrong with my code, but I can't find it.
Any advice is appreciated.
Regards.
EDIT:
The point of this exercise is to demonstrate the optimal buffer size by recording the read times (plus computation time) for different buffer sizes. The file pointer is opened and closed by the calling code.
There are many bugs in this code:
It uses new[], which is C++.
It doesn't free the allocated memory.
It always loops over block_size bytes of input, not bytes_read as returned by fread().
Also, the actual histogram code is rather inefficient, since it seems to loop over each character to determine which character it is.
UPDATE: Removed claim that using feof() before I/O is wrong, since that wasn't true. Thanks to Eric for pointing this out in a comment.
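For reference, a sketch of the loop with those issues addressed (illustrative only; it keeps the original structure and assumes only the bytes 'A' through 'Z' matter, as in the problem statement):

char *buffer = new char[block_size];
// ... timing setup as before ...
size_t bytes_read = 0;
while (!feof(fp))
{
    size_t n = fread(buffer, 1, block_size, fp);
    if (ferror(fp))
    {
        delete[] buffer; // don't leak on the error path
        return -1;
    }
    bytes_read += n;
    for (size_t i = 0; i < n; i++) // only the bytes this fread() returned
    {
        if (buffer[i] >= 'A' && buffer[i] <= 'Z')
            hist[buffer[i] - 'A']++; // index directly instead of scanning the alphabet
    }
}
delete[] buffer; // release the allocation before returning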
You're not stating what platform you're running this on, or what compile-time parameters you use.
Of course, fread() involves some overhead, leaving user mode and returning. On the other hand, instead of updating the hist[] entry directly, you're looping through the whole alphabet for every byte. This is unnecessary and, without optimization, causes some overhead per byte.
I'd re-test this with hist[buffer[i] - 'A']++ (after checking that the byte is in range) or something similar.
Typically, the best timing would be achieved if your buffer size equals the system's buffer size for the given media.

Can I make this C++ code faster without making it much more complex?

Here's a problem I've solved from a programming problem website (codechef.com, in case anyone doesn't want to see this solution before trying the problem themselves). My solution processed the test data in about 5.43 seconds; others have solved the same problem with the same test data in 0.14 seconds, but with much more complex code. Can anyone point out specific areas of my code where I am losing performance? I'm still learning C++, so I know there are a million ways I could solve this problem, but I'd like to know whether I can improve my own solution with some subtle changes rather than rewriting the whole thing. If there are any relatively simple solutions of comparable length that would perform better than mine, I'd be interested to see them as well.
Please keep in mind I'm learning C++ so my goal here is to improve the code I understand, not just to be given a perfect solution.
Thanks
Problem:
The purpose of this problem is to verify whether the method you are using to read input data is fast enough to handle problems branded with the enormous Input/Output warning. You are expected to be able to process at least 2.5MB of input data per second at runtime. The time limit to process the test data is 8 seconds.
The input begins with two positive integers n k (n, k <= 10^7). The next n lines of input each contain one positive integer ti, not greater than 10^9.
Output
Write a single integer to output, denoting how many of the integers ti are divisible by k.
Example
Input:
7 3
1
51
966369
7
9
999996
11
Output:
4
Solution:
#include <iostream>
#include <stdio.h>
using namespace std;

int main() {
    // n is the number of integers to perform the calculation on
    // k is the divisor
    // inputnum is the number to be divided by k
    // total is the total number of inputnums divisible by k
    int n, k, inputnum, total;
    // initialize total to zero
    total = 0;
    // read in n and k from stdin
    scanf("%i%i", &n, &k);
    // loop n times and if k divides inputnum, increment total
    for (; n > 0; n--) {
        scanf("%i", &inputnum);
        if (inputnum % k == 0) total += 1;
    }
    // output the value of total
    printf("%i", total);
    return 0;
}
The speed is not determined by the computation; most of the time the program takes to run is consumed by I/O.
Add setvbuf calls before the first scanf for a significant improvement:
setvbuf(stdin, NULL, _IOFBF, 32768);
setvbuf(stdout, NULL, _IOFBF, 32768);
-- edit --
The alleged magic numbers are the new buffer size. By default, FILE uses a buffer of 512 bytes. Increasing this size decreases the number of times that the C++ runtime library has to issue a read or write call to the operating system, which is by far the most expensive operation in your algorithm.
By keeping the buffer size a multiple of 512, you eliminate buffer fragmentation. Whether the size should be 1024*10 or 1024*1024 depends on the system it is intended to run on. For 16-bit systems, a buffer size larger than 32K or 64K generally causes difficulty in allocating the buffer, and maybe in managing it. For any larger system, make it as large as is useful, depending on available memory and what else it will be competing against.
Lacking any known memory contention, choose buffer sizes of about the size of the associated files. That is, if the input file is 250K, use that as the buffer size. There are definitely diminishing returns as the buffer size increases. For the 250K example, a 100K buffer would require three reads, while a default 512-byte buffer would require 500 reads. Further increasing the buffer size so that only one read is needed is unlikely to make a significant performance improvement over three reads.
I tested the following on 28311552 lines of input. It's 10 times faster than your code. What it does is read a large block at once, then finish up to the next newline. The goal here is to reduce I/O costs, since scanf() is reading a character at a time; even with stdio, the buffer is likely too small.
Once the block is ready, I parse the numbers directly in memory.
This isn't the most elegant of codes, and I might have some edge cases a bit off, but it's enough to get you going with a faster approach.
Here are the timings (without the optimizer, my solution is only about 6-7 times faster than your original reference):
[xavier:~/tmp] dalke% g++ -O3 my_solution.cpp
[xavier:~/tmp] dalke% time ./a.out < c.dat
15728647
0.284u 0.057s 0:00.39 84.6% 0+0k 0+1io 0pf+0w
[xavier:~/tmp] dalke% g++ -O3 your_solution.cpp
[xavier:~/tmp] dalke% time ./a.out < c.dat
15728647
3.585u 0.087s 0:03.72 98.3% 0+0k 0+0io 0pf+0w
Here's the code.
#include <iostream>
#include <stdio.h>
using namespace std;

const int BUFFER_SIZE = 400000;
const int EXTRA = 30; // well over the size of an integer

void read_to_newline(char *buffer) {
    int c;
    while (1) {
        c = getc_unlocked(stdin);
        if (c == '\n' || c == EOF) {
            *buffer = '\0';
            return;
        }
        *buffer++ = c;
    }
}

int main() {
    char buffer[BUFFER_SIZE + EXTRA];
    char *startptr, *endptr;
    // n is the number of integers to perform the calculation on
    // k is the divisor
    // inputnum is the number to be divided by k
    // total is the total number of inputnums divisible by k
    int n, k, inputnum, total, nbytes;
    // initialize total to zero
    total = 0;
    // read in n and k from stdin
    read_to_newline(buffer);
    sscanf(buffer, "%i%i", &n, &k);
    while (1) {
        // Read a large block of values.
        // There should be one integer per line, with nothing else.
        // This might truncate an integer!
        nbytes = fread(buffer, 1, BUFFER_SIZE, stdin);
        if (nbytes == 0) {
            cerr << "Reached end of file too early" << endl;
            break;
        }
        // Make sure I read to the next newline.
        read_to_newline(buffer + nbytes);
        startptr = buffer;
        while (n > 0) {
            inputnum = 0;
            // I had used strtol but that was too slow:
            //   inputnum = strtol(startptr, &endptr, 10);
            // Instead, parse the integers myself.
            endptr = startptr;
            while (*endptr >= '0') {
                inputnum = inputnum * 10 + *endptr - '0';
                endptr++;
            }
            // *endptr might be a '\n' or '\0';
            // this might occur with the last field.
            if (startptr == endptr) {
                break;
            }
            // Skip the newline; go to the
            // first digit of the next number.
            if (*endptr == '\n') {
                endptr++;
            }
            // Test whether k divides this number.
            if (inputnum % k == 0) total += 1;
            // Advance to the next number.
            startptr = endptr;
            // Reduce the count by one.
            n--;
        }
        // Either we are done, or we need new data.
        if (n == 0) {
            break;
        }
    }
    // output the value of total
    printf("%i\n", total);
    return 0;
}
Oh, and it very much assumes the input data is in the right format.
Try replacing the if statement with total += ((inputnum % k) == 0);. That might help a little.
But I think you really need to buffer your input into a temporary array. Reading one integer from the input at a time is expensive. If you can separate data acquisition from data processing, the compiler may be able to generate optimized code for the mathematical operations.
The I/O operations are the bottleneck. Try to limit them whenever you can, for instance by loading all the data into a buffer or array with a buffered stream in one step.
Although your example is so simple that I hardly see what you can eliminate - assuming it's part of the question to do subsequent reading from stdin.
A few comments on the code: your example doesn't make use of any streams, so there's no need to include the iostream header. And you already load the C library elements into the global namespace by including stdio.h instead of the C++ version of the header, cstdio, so using namespace std is not necessary.
You can read each line with gets() and parse the strings yourself, without scanf(). (Normally I wouldn't recommend gets(), but in this case the input is well-specified.)
A sample C program to solve this problem:
#include <stdio.h>

int main() {
    int n, k, in, tot = 0, i;
    char s[1024];
    gets(s);
    sscanf(s, "%d %d", &n, &k);
    while (n--) {
        gets(s);
        in = s[0] - '0';
        for (i = 1; s[i] != 0; i++) {
            /* For each digit read, multiply the previous
               value of in by 10 and add the current digit */
            in = in * 10 + s[i] - '0';
        }
        tot += in % k == 0; /* evaluates to 1 if in%k is 0, 0 otherwise */
    }
    printf("%d\n", tot);
    return 0;
}
This program is approximately 2.6 times faster than the solution you gave above (on my machine).
You could try to read the input line by line and use atoi() for each input line. That should be a little faster than scanf, because you remove the "scan" overhead of the format string.
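A minimal sketch of that approach (untested; assumes one integer per line, as the problem statement specifies):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char line[32];
    int n, k, total = 0;
    if (!fgets(line, sizeof line, stdin)) return 1; /* first line holds n and k */
    sscanf(line, "%d %d", &n, &k);
    while (n-- > 0 && fgets(line, sizeof line, stdin)) {
        if (atoi(line) % k == 0) /* atoi() stops at the newline */
            total++;
    }
    printf("%d\n", total);
    return 0;
}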
I think the code is fine. I ran it on my computer in less than 0.3s
I even ran it on much larger inputs in less than a second.
How are you timing it?
One small thing you could do is remove the if statement.
Start with total = n, and then inside the loop:
total -= (inputnum % k != 0); // subtracts 0 if divisible, 1 if not
Though I doubt CodeChef will accept it, one possibility is to use multiple threads: one to handle the I/O, and another to process the data. This is especially effective on a multi-core processor, but can help even with a single core. For example, on Windows you could use code like this (no real attempt at conforming to CodeChef requirements -- I doubt they'll accept it with the timing data in the output):
#include <windows.h>
#include <process.h>
#include <iostream>
#include <time.h>
#include "queue.hpp"

namespace jvc = JVC_thread_queue;

struct buffer {
    static const int initial_size = 1024 * 1024;
    char buf[initial_size];
    size_t size;

    buffer() : size(initial_size) {}
};

jvc::queue<buffer *> outputs;

void read(HANDLE file) {
    // read data from specified file, put into buffers for processing.
    char temp[32];
    int temp_len = 0;
    int i;
    buffer *b;
    DWORD read;

    do {
        b = new buffer;
        // If we have a partial line from the previous buffer, copy it into this one.
        if (temp_len != 0)
            memcpy(b->buf, temp, temp_len);
        // Then fill the buffer with data.
        ReadFile(file, b->buf + temp_len, b->size - temp_len, &read, NULL);
        // Look for a partial line at the end of the buffer.
        for (i = read; b->buf[i] != '\n'; --i)
            ;
        // Copy the partial line to the holding area.
        memcpy(temp, b->buf + i, temp_len = read - i);
        // Adjust the size.
        b->size = i;
        // Put the buffer into the queue for the processing thread.
        // This transfers ownership.
        outputs.add(b);
    } while (read != 0);
}

// A simplified istrstream that can only read ints.
class num_reader {
    buffer &b;
    char *pos;
    char *end;
public:
    num_reader(buffer *buf) : b(*buf), pos(b.buf), end(pos + b.size) {}

    num_reader &operator>>(int &value) {
        int v = 0;
        // skip leading "stuff" up to the first digit.
        while ((pos < end) && !isdigit(*pos))
            ++pos;
        // read digits, create value from them.
        while ((pos < end) && isdigit(*pos)) {
            v = 10 * v + *pos - '0';
            ++pos;
        }
        value = v;
        return *this;
    }

    // return stream status -- only whether we're at the end
    operator bool() { return pos < end; }
};

int result;

unsigned __stdcall processing_thread(void *) {
    int value;
    int n, k;
    int count = 0;
    // Read first buffer: n & k followed by values.
    buffer *b = outputs.pop();
    num_reader input(b);
    input >> n;
    input >> k;
    while (input >> value && ++count < n)
        result += ((value % k) == 0);
    // Ownership was transferred -- delete buffer when finished.
    delete b;
    // Then read subsequent buffers:
    while ((b = outputs.pop()) && (b->size != 0)) {
        num_reader input(b);
        while (input >> value && ++count < n)
            result += ((value % k) == 0);
        // Ownership was transferred -- delete buffer when finished.
        delete b;
    }
    return 0;
}

int main() {
    HANDLE standard_input = GetStdHandle(STD_INPUT_HANDLE);
    HANDLE processor = (HANDLE)_beginthreadex(NULL, 0, processing_thread, NULL, 0, NULL);

    clock_t start = clock();
    read(standard_input);
    WaitForSingleObject(processor, INFINITE);
    clock_t finish = clock();

    std::cout << (float)(finish - start) / CLOCKS_PER_SEC << " Seconds.\n";
    std::cout << result;
    return 0;
}
This uses a thread-safe queue class I wrote years ago:
#ifndef QUEUE_H_INCLUDED
#define QUEUE_H_INCLUDED

namespace JVC_thread_queue {

template<class T, unsigned max = 256>
class queue {
    HANDLE space_avail;     // at least one slot empty
    HANDLE data_avail;      // at least one slot full
    CRITICAL_SECTION mutex; // protect buffer, in_pos, out_pos
    T buffer[max];
    long in_pos, out_pos;
public:
    queue() : in_pos(0), out_pos(0) {
        space_avail = CreateSemaphore(NULL, max, max, NULL);
        data_avail = CreateSemaphore(NULL, 0, max, NULL);
        InitializeCriticalSection(&mutex);
    }

    void add(T data) {
        WaitForSingleObject(space_avail, INFINITE);
        EnterCriticalSection(&mutex);
        buffer[in_pos] = data;
        in_pos = (in_pos + 1) % max;
        LeaveCriticalSection(&mutex);
        ReleaseSemaphore(data_avail, 1, NULL);
    }

    T pop() {
        WaitForSingleObject(data_avail, INFINITE);
        EnterCriticalSection(&mutex);
        T retval = buffer[out_pos];
        out_pos = (out_pos + 1) % max;
        LeaveCriticalSection(&mutex);
        ReleaseSemaphore(space_avail, 1, NULL);
        return retval;
    }

    ~queue() {
        DeleteCriticalSection(&mutex);
        CloseHandle(data_avail);
        CloseHandle(space_avail);
    }
};

}

#endif
Exactly how much you gain from this depends on the amount of time spent reading versus the amount of time spent on other processing. In this case, the other processing is sufficiently trivial that it probably doesn't gain much. If more time were spent on processing the data, multithreading would probably gain more.
2.5MB/sec is 400ns/byte.
There are two big per-byte processes, file input and parsing.
For the file input, I would just load it into a big memory buffer. fread should be able to read that in at roughly full disk bandwidth.
For the parsing, sscanf is built for generality, not speed. atoi should be pretty fast. My habit, for better or worse, is to do it myself, as in:
#define DIGIT(c) ((c) >= '0' && (c) <= '9')

bool parsInt(char* &p, int& num) {
    while (*p && *p <= ' ') p++; // scan over whitespace
    if (!DIGIT(*p)) return false;
    num = 0;
    while (DIGIT(*p)) {
        num = num * 10 + (*p++ - '0');
    }
    return true;
}
The loops, first over leading whitespace, then over the digits, should run nearly as fast as the machine can go, certainly far below 400ns/byte.
Dividing two large numbers is hard. Perhaps an improvement would be to first characterize k a little by looking at some of the smaller primes; let's say 2, 3, and 5 for now. If k is divisible by one of these primes, then inputnum must also be divisible by that prime, or inputnum is not divisible by k. Of course there are more tricks to play (you could use a bitwise AND of inputnum with 1 to test divisibility by 2), but I think just screening out by the low primes will give a reasonable speed improvement (worth a shot, anyway).
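For instance, a sketch of that pre-filter for the prime 2 (illustrative; k_is_even is a name introduced here, and whether this wins depends on how well the compiler already optimizes the modulo):

// Computed once, before the input loop:
bool k_is_even = (k % 2) == 0;

// Inside the loop, for each inputnum:
if (k_is_even && (inputnum & 1)) {
    // an odd number can't be divisible by an even k; skip the expensive modulo
} else if (inputnum % k == 0) {
    total += 1;
}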