I am trying to solve a school problem, and I have a working solution, but it should run faster and use less memory if possible. Can you please help me achieve that?
Problem statement: Read a natural number N and a string from a file, and output the same string N times to another file.
Example of input file:
3
dog
Example of output file:
dog
dog
dog
Restrictions:
1 ≤ n ≤ 50, and the length of the line to be read is at most 1,000,000 characters
Time limit: 0.27 seconds
This is what I tried (but run time exceeds the limit):
#include <fstream>
using namespace std;

ifstream cin("afisaren.in");
ofstream cout("afisaren.out");

short n;
char s[1000005];

int main() {
    cin >> n;
    cin >> s;
    while (n) {
        cout << s << '\n';
        n--;
    }
    cin.close();
    cout.close();
    return 0;
}
Generally, when given this type of problem, you should profile your own code to see which part is consuming what amount of time. This can mostly be done by adding a few calls to a timekeeping function before and after a piece of code, to see how long it was executing. However this is not so easy with your code, since one of the biggest problems (optimisation-wise) is your char s[1000005]; line. That memory will be allocated before your main() function starts executing, and how that happens is operating system dependent (or rather depends on the libc and compiler used).
So first, do not use pre-allocated char arrays. You're using C++! Why not simply read the text into a std::(w)string or any of the C++ classes that do dynamic memory allocation (and will not crash your program if the line length exceeds 1,000,000)?
And second, writing many small pieces through a C++ std::stream is inefficient: the stream's buffer is flushed to disk whenever it fills up (or when you force a flush, e.g. with std::endl), and that rarely lines up with the block size of the underlying file system. To optimize this, create a memory object (i.e. a std::string) and copy your text into it k times, where k = fs-block-size / text-length. The fs block size will most likely be 1024, 2048 or 4096 bytes. There are system calls to find that out, but performance will usually not suffer much when writing twice (or 4x) the fs block size, so you can safely assume 4096 for close-to-maximum performance.
Since the maximum number of repetitions is 1 ≤ n ≤ 50 and the line length is at most 1,000,000 characters (approx. 1 MB if ASCII), the maximum size of the output will be 50,000,000 characters. You could also build everything in memory and then write it in one call to write(). This would probably be the most efficient way in terms of disk activity, but obviously not in terms of memory consumption.
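A minimal sketch of that build-in-memory idea (file names taken from the question; everything else is illustrative):
#include <fstream>
#include <string>

int main() {
    std::ifstream in("afisaren.in");
    std::ofstream out("afisaren.out");

    int n;
    std::string s;
    in >> n >> s;

    std::string result;
    result.reserve((s.size() + 1) * n);   // at most ~50 MB given the constraints
    for (int i = 0; i < n; ++i) {
        result += s;
        result += '\n';
    }
    out.write(result.data(), static_cast<std::streamsize>(result.size()));
}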
I'm not a C++ expert, but I had a similar problem when I used C++-style file streams. After googling a bit, I tried switching to C-style file I/O and it boosted my performance a lot, because the C++ file streams copy the data through an internal buffer and that takes time. You can try the C-style functions, but mixing C I/O into C++ code is usually not recommended.
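For reference, a hedged sketch of the same program with C stdio (again using the file names from the question; the buffer size is chosen to fit the stated limits):
#include <cstdio>
#include <vector>

int main() {
    std::FILE* in  = std::fopen("afisaren.in", "r");
    std::FILE* out = std::fopen("afisaren.out", "w");
    if (!in || !out) return 1;

    int n = 0;
    std::fscanf(in, "%d ", &n);           // the trailing space skips the newline after N

    std::vector<char> line(1000005);
    if (std::fgets(line.data(), static_cast<int>(line.size()), in)) {
        for (int i = 0; i < n; ++i)
            std::fputs(line.data(), out); // the line read by fgets keeps its trailing '\n' (if any)
    }

    std::fclose(in);
    std::fclose(out);
}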
#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    int *arr = new int[n];
    for (int k = 0; n > k; k++)
    {
        cin >> *(arr + k);
    }
    long long sum1 = 0, sum2 = 0, sum3 = 0;
    for (int k = 0; n > k; k++)
    {
        sum1 = sum1 + *(arr + k);
        if (*(arr + k) % 2 == 0)
            sum2++;
        else
            sum3++;
    }
    cout << sum1 << " ";
    cout << sum3 << " ";
    cout << sum2;
    return 0;
}
You're given a sequence of N integers, your task is to print sum of them, number of odd integers, and number of even integers respectively.
Input
The first line of input contains an integer N (1≤N≤10⁵).
The second line of input contains N integers separated by a single space (1≤Ai≤10⁵).
Output
Print the sum of them, number of odd integers, and number of even integers respectively, separated by a space.
Examples
input
5
1 2 3 4 5
output
15 3 2
Is there a better algorithm for this code? I need it to take less execution time.
Where can I find better algorithms for any code?
Unless you need to re-use the N integers that you have stored in the array, there's no point in storing them. You can compute the sum as well as the number of odd/even integers as you read them in.
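A minimal sketch of that streaming approach (variable names are mine, not from the question):
#include <iostream>

int main() {
    int n;
    std::cin >> n;

    long long sum = 0;            // can reach 10^10 with the given limits
    int odd = 0, even = 0;
    for (int k = 0; k < n; k++) {
        int x;
        std::cin >> x;
        sum += x;
        (x % 2 == 0 ? even : odd)++;
    }
    std::cout << sum << " " << odd << " " << even << "\n";
}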
Additionally, double-check which variables actually need long long: with N and Ai both up to 10^5 the sum can reach 10^10, which overflows a 32-bit int, so long long is justified for the sum, but the odd/even counters fit comfortably in an int.
Further, whenever you are thinking about improving performance you should take a look at the big O, which in this case is O(N), where N is the number of integers you have. With N inputs there is generally very little you can do to improve on that from an algorithmic point of view. Maybe if we were talking about streams you could do some statistics, but otherwise this implementation is as good as it gets. In some other situations, while the worst case can't be improved, the average case can be; I don't think that is applicable here.
Then you should look at profiling the code. That way you have a clear understanding of where bottlenecks are. For your code, there's probably not too much that can be done reasonably.
If we're trying to squeeze every ounce of performance possible, adjusting the compiler flags can bring some performance gains. You should research these but I would not prioritize this over the above.
I would also improve how you name your variables, but this has no impact on performance.
Actually, C++ by default synchronizes cin/cout with the C way of doing I/O (printf/scanf), which slows down I/O by quite a lot.
Switching to printf/scanf, or adding something like ios::sync_with_stdio(0); at the start of main, should speed this up a few times.
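For example (the cin.tie line is a common companion tweak, not something required by the answer):
#include <iostream>

int main() {
    std::ios::sync_with_stdio(false);   // stop syncing iostreams with printf/scanf
    std::cin.tie(nullptr);              // don't flush cout before every cin read
    // ... read and write with cin/cout as before ...
}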
I have a relatively simple question to ask. There has been an ongoing discussion, across many programming languages, about which method provides the fastest file read, mostly debated between read() and mmap(). As someone who has also participated in these debates, I failed to find an answer to my current problem, because most answers deal with the situation where the file to read is huge (for example, how to read a 10 TB text file...).
But my problem is a bit different: I have lots of files, let's say 100 million, and I want to read the first 1-2 lines from each of them. Whether a file is 10 KB or 100 TB is irrelevant; I just want the first one or two lines from every file, so I want to avoid reading or buffering the unnecessary parts. My knowledge was not enough to thoroughly test which method is faster, or to discover all my options in the first place.
What I am doing right now (multithreaded for the moment):
for (const auto& p : std::filesystem::recursive_directory_iterator(path)) {
    if (!std::filesystem::is_directory(p)) {
        std::ifstream read_file(p.path().string());
        if (read_file.is_open()) {
            while (getline(read_file, line)) {
                // Get two lines here.
            }
        }
    }
}
What does C++, or the linux environment provide me in this situation ? Is there a faster or more efficient way to read small portions of millions of files ?
Thank you for your time.
Info: I have access to C++20 and Ubuntu 18.04
You can save one underlying call to fstat by not testing whether the path is a directory, relying on the is_open test instead:
#include <iostream>
#include <fstream>
#include <filesystem>
#include <string>

int main()
{
    std::string line, path = ".";
    for (const auto& p : std::filesystem::recursive_directory_iterator(path)) {
        std::ifstream read_file(p.path().string());
        if (read_file.is_open()) {
            std::cout << "opened: " << p.path().string() << '\n';
            while (getline(read_file, line)) {
                // Get two lines here.
            }
        }
    }
}
At least on Windows this code skips the directories. And as suggested in the comments, the is_open test can even be skipped, since getline doesn't read anything from a directory either.
Not the cleanest, but if it can save time it's worth it.
Any function in a program accessing a file under Linux will result in calling some "system calls" (for example read()).
All the other file-access functions available in a programming language (like fread(), fgets(), std::filesystem ...) call functions or methods which in turn end up calling those system calls.
For this reason you can't be faster than calling the system calls directly.
I'm not 100% sure, but I think in most cases, the combination open(), read(), close() will be the fastest method for reading data from the start of a file.
(If the data is not located at the start of the file, pread() might be faster than read(); I'm not sure.)
Note that read() does not read a certain number of lines but a certain number of bytes (e.g. into an array of char), so you have to find the end(s) of the line(s) "manually" by searching for the '\n' character(s) and/or the end of file in that array of char.
Unfortunately, a line may be much longer than you expect, so the first N bytes read from the file may not contain the first M lines and you have to call read() again.
In this case it depends on your system (e.g. file system or even hard disks) how many bytes you should read in each call to read() to get the maximum performance.
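A minimal sketch of that open()/read()/close() approach (the helper name and the 4096-byte read size are my choices; if the first two lines don't fit in one read, a second call would be needed):
#include <fcntl.h>
#include <unistd.h>
#include <string>
#include <string_view>

// Return everything up to (but excluding) the second '\n', read with raw system calls.
std::string first_two_lines(const char* path)
{
    constexpr size_t BUF_SIZE = 4096;          // assumed "good enough" read size
    char buf[BUF_SIZE];

    int fd = open(path, O_RDONLY);
    if (fd < 0) return {};
    ssize_t got = read(fd, buf, BUF_SIZE);
    close(fd);
    if (got <= 0) return {};

    std::string_view sv(buf, static_cast<size_t>(got));
    size_t first = sv.find('\n');
    if (first == std::string_view::npos)       // only one (possibly truncated) line present
        return std::string(sv);
    size_t second = sv.find('\n', first + 1);
    if (second == std::string_view::npos)
        second = sv.size();
    return std::string(sv.substr(0, second));
}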
Example: Let's say in 75% of all files, the first N lines are found in the first 512 bytes of the file; in the other 25% of all files, first N lines are longer than 512 bytes in sum.
On some computers, reading 1024 bytes at once might require nearly the same time as reading 512 bytes, but reading 512 bytes twice will be much slower than reading 1024 bytes at once; on such computers it makes sense to read() 1024 bytes at once: You save a lot of time for 25% of the files and you lose only very little time for the other 75%.
On other computers, reading 512 bytes is significantly faster than reading 1024 bytes; on such computers it would be better to read() 512 bytes: Reading 1024 bytes would save you only little time when processing the 25% of files but cost you very much time when processing the other 75%.
I think in most cases this "optimal value" will be a multiple of 512 bytes, because most modern file systems organize files in units that are a multiple of 512 bytes.
I was just typing something similar to Martin Rosenau's answer (when his popped up): an unstructured read of the maximum length of two lines. But I would go further: queue that text buffer together with the corresponding file name and let another thread parse/analyze it. If parsing takes about the same time as reading, you can save half of the time. If it takes longer (unlikely), you can use multiple parsing threads and save even more.
Side note - you should not parallelize the reading (I tried that).
It may be worth experimenting: can you open one file, read it asynchronously while proceeding to open the next one? I don't know if any OS can overlap those things.
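A very rough sketch of the reader/parser split suggested above, using a simple mutex-protected queue (all names are illustrative, not from the answer):
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

std::queue<std::pair<std::string, std::string>> work;   // (file name, raw first-chunk text)
std::mutex m;
std::condition_variable cv;
bool reader_done = false;

void reader()
{
    // For each file: read its first chunk into `text`, then enqueue it:
    // {
    //     std::lock_guard<std::mutex> lock(m);
    //     work.emplace(path, text);
    // }
    // cv.notify_one();
    {
        std::lock_guard<std::mutex> lock(m);
        reader_done = true;                              // signal "no more work coming"
    }
    cv.notify_all();
}

void parser()
{
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !work.empty() || reader_done; });
        if (work.empty()) return;                        // queue drained and reader finished
        auto item = std::move(work.front());
        work.pop();
        lock.unlock();
        // ... find and analyze the first two lines in item.second ...
    }
}

int main()
{
    std::thread r(reader), p(parser);
    r.join();
    p.join();
}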
I have to read in large txt files (1 GB) line by line, and I use fgets() to do so. Even with an empty while loop the execution takes extremely long (30 minutes) with 99% CPU utilization.
const int buffer_size = 30;
char buffer[buffer_size];

while (fgets(buffer, buffer_size, traceFile1) != NULL)
{
}
I did some reading, and apparently the overhead related to text parsing causes this. So the question is: is there any way to read in a txt file while avoiding it? I'm reading in traces for a network simulator, so each line typically looks like |Injection_cycle source destination|.
I've been searching for a while, so if anyone has a smart answer to this I would be absolutely delighted :)
1 GB = 1024 MB = 1048576 KB = 1073741824 B
30 min = 1800 seconds
So you are doing roughly 1073741824 / 1800 ≈ 596,000 character comparisons per second (checking whether the current character is '\n', whether the buffer is full, or whether EOF was reached), and making around the same number of memory assignments. On top of that there is a jump operation for the loop.
While that is not too fast, it's not too slow either. I've seen a few "critical cases" that fail to use the computer's memory system properly - how does the result scale for various file sizes?
I may be rather off about the reason, but I hope it helps.
I have a big file, nearly 800 MB, and I want to read it line by line.
At first I wrote my program in Python, using linecache.getlines:
lines = linecache.getlines(fname)
It takes about 1.2 s.
Now I want to port my program to C++.
I wrote this code:
std::ifstream DATA(fname);
std::string line;
vector<string> lines;
while (std::getline(DATA, line)) {
    lines.push_back(line);
}
But it's slow (it takes minutes). How can I improve it?
Joachim Pileborg mentioned mmap(), and on Windows CreateFileMapping() will work.
My code runs under VS2013: in "DEBUG" mode it takes 162 seconds,
but in "RELEASE" mode only 7 seconds!
(Great thanks to #DietmarKühl and #Andrew)
First of all, you should probably make sure you are compiling with optimizations enabled. This might not matter for such a simple algorithm, but that really depends on your vector/string library implementations.
As suggested by #angew, std::ios_base::sync_with_stdio(false) makes a big difference on routines like the one you have written.
Another, lesser, optimization would be to use lines.reserve() to preallocate your vector so that push_back() doesn't result in huge copy operations. However, this is most useful if you happen to know in advance approximately how many lines you are likely to receive.
Using the optimizations suggested above, I get the following results for reading an 800MB text stream:
20 seconds ## if average line length = 10 characters
3 seconds ## if average line length = 100 characters
1 second ## if average line length = 1000 characters
As you can see, the speed is dominated by per-line overhead. This overhead is primarily occurring inside the std::string class.
It is likely that any approach based on storing a large quantity of std::string will be suboptimal in terms of memory allocation overhead. On a 64-bit system, std::string will require a minimum of 16 bytes of overhead per string. In fact, it is very possible that the overhead will be significantly greater than that -- and you could find that memory allocation (inside of std::string) becomes a significant bottleneck.
For optimal memory use and performance, consider writing your own routine that reads the file in large blocks rather than using getline(). Then you could apply something similar to the flyweight pattern to manage the indexing of the individual lines using a custom string class.
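A minimal sketch of that direction, assuming C++17 is available (std::string_view stands in for the custom string class the answer mentions, and "data.txt" is a placeholder file name):
#include <fstream>
#include <iterator>
#include <string>
#include <string_view>
#include <vector>

int main()
{
    // Read the whole file into one buffer in a single pass.
    std::ifstream in("data.txt", std::ios::binary);
    std::string buffer((std::istreambuf_iterator<char>(in)),
                        std::istreambuf_iterator<char>());

    // Index each line as a view into the buffer: no per-line string allocation or copy.
    std::vector<std::string_view> lines;
    std::size_t start = 0;
    while (start < buffer.size()) {
        std::size_t end = buffer.find('\n', start);
        if (end == std::string::npos)
            end = buffer.size();
        lines.emplace_back(buffer.data() + start, end - start);
        start = end + 1;
    }
}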
P.S. Another relevant factor will be the physical disk I/O, which might or might not be bypassed by caching.
For C++ you could try something like this:
#include <fstream>
#include <string>
#include <vector>
#include <boost/algorithm/string.hpp>

using namespace std;

void do_some_operation(const vector<string>& arr);   // defined elsewhere

void processData(const string& str)
{
    vector<string> arr;
    boost::split(arr, str, boost::is_any_of(" \n"));
    do_some_operation(arr);
}

int main()
{
    const streamsize read_bytes = 45 * 1024 * 1024;
    const char* fname = "input.txt";
    ifstream fin(fname, ios::in);

    vector<char> memblock(read_bytes);       // allocate the buffer once, not per iteration
    while (fin) {
        fin.read(memblock.data(), read_bytes);
        streamsize got = fin.gcount();       // bytes actually read (the last chunk is shorter)
        if (got <= 0) break;
        string str(memblock.data(), got);    // construct from the exact byte count
        processData(str);                    // note: a line can still be split across two chunks
    }
    return 0;
}
I'm solving a problem which requires very fast input/output. More precisely, the input data file will be up to 15 MB. Is there a fast way to read/print integer values?
Note: I don't know if it helps, but the input file has the following form:
line 1: a number n
line 2..n+1: three numbers a,b,c;
line n+2: a number r
line n+3..n+4+r: four numbers a,b,c,d
Note 2: The input file will be stdin.
Edit: Something like the following isn't fast enough:
#include <cctype>
#include <cstdio>
#include <cstdlib>
#include <cstring>

void fast_scan(int &n) {
    char buffer[10];
    gets(buffer);
    n = atoi(buffer);
}

void fast_scan_three(int &a, int &b, int &c) {
    char buffval[3][20] = {}, buffer[60];   // zero-init so each token is null-terminated for atoi
    gets(buffer);
    int n = strlen(buffer);
    int buffindex = 0, curindex = 0;
    for (int i = 0; i < n; ++i) {
        if (!isdigit(buffer[i]) && !isspace(buffer[i])) break;
        if (isspace(buffer[i])) {
            buffindex++;
            curindex = 0;
        } else {
            buffval[buffindex][curindex++] = buffer[i];
        }
    }
    a = atoi(buffval[0]);
    b = atoi(buffval[1]);
    c = atoi(buffval[2]);
}
The general input/output optimization principle is to perform as few I/O operations as possible, reading/writing as much data per operation as possible.
So a performance-aware solution typically looks like this:
Read all data from the device into some buffer (using the principle mentioned above)
Process the data, generating the results into some buffer (in place or into another one)
Output the results from the buffer to the device (using the principle mentioned above)
E.g. you could use std::basic_istream::read to input data in big chunks instead of doing it line by line. The same idea applies to output: generate a single string as the result, adding line-feed symbols manually, and output it all at once.
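A rough sketch of that read-everything / write-everything pattern on stdin/stdout (the integer parsing shown is only illustrative):
#include <cctype>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

int main()
{
    std::ios_base::sync_with_stdio(false);

    // Read all of stdin into one buffer.
    std::string input((std::istreambuf_iterator<char>(std::cin)),
                       std::istreambuf_iterator<char>());

    // Pull every non-negative integer out of the buffer.
    std::vector<long long> values;
    std::size_t i = 0;
    while (i < input.size()) {
        if (!std::isdigit(static_cast<unsigned char>(input[i]))) { ++i; continue; }
        long long v = 0;
        while (i < input.size() && std::isdigit(static_cast<unsigned char>(input[i])))
            v = v * 10 + (input[i++] - '0');
        values.push_back(v);
    }

    // Build the whole output in memory and write it out once.
    std::string out;
    for (long long v : values) {
        out += std::to_string(v);
        out += '\n';
    }
    std::cout << out;
}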
If you want to minimize the physical I/O overhead, load the whole file into memory with a technique called memory-mapped files. I doubt you'll get a noticeable performance gain, though; parsing will most likely be a lot costlier.
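For what it's worth, a minimal Linux sketch of the memory-mapping approach (it assumes the input is a regular file, here a placeholder "input.txt", not a pipe, so it would not apply directly to stdin):
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <iostream>
#include <string_view>

int main()
{
    int fd = open("input.txt", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) return 1;

    // Map the whole file read-only; the kernel pages it in on demand.
    void* mem = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (mem == MAP_FAILED) return 1;

    std::string_view whole(static_cast<const char*>(mem), static_cast<std::size_t>(st.st_size));
    // ... parse `whole` here ...
    std::cout << "mapped " << whole.size() << " bytes\n";

    munmap(mem, st.st_size);
    close(fd);
}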
Consider using threads. Threading is useful for lots of things, but this is exactly the kind of problem that motivated the invention of threads.
The underlying idea is to separate the input, processing, and output, so these different operations can run in parallel. Do it right and you will see a significant speedup.
Have one thread doing close to pure input: it reads lines into a buffer of lines. Have a second thread do a quick pre-parse and organize the raw input into blocks. You have two things that need to be parsed: the line containing the number of lines with triples, and the line containing the number of lines with quads. This thread forms the raw input into blocks that are still mostly text. A third thread parses the triples and quads, re-forming the input into fully parsed structures. Since the data are now organized into independent blocks, you can have multiple instances of this third operation so as to better take advantage of the multiple processors on your computer. Finally, other threads operate on these fully parsed structures. Note: it might be better to combine some of these operations, for example the input and pre-parsing operations into one thread.
Put several input lines in a buffer, split them, and then parse them simultaneously in different threads.
It's only 15MB. I would just slurp the whole thing into a memory buffer, then parse it.
The parsing looks something like this, approximately:
#define DIGIT(c) ((c) >= '0' && (c) <= '9')

while (*p == ' ') p++;
if (DIGIT(*p)) {
    a = 0;
    while (DIGIT(*p)) {
        a *= 10; a += (*p++ - '0');
    }
}
// and so on...
You should be able to write this kind of code in your sleep.
I don't know if that's any faster than atoi, but it's not fussing around figuring out where numbers begin and end. I would stay away from scanf because it goes through the whole nine yards of interpreting its format string.
If you run this whole thing in a loop 1000 times and grab some stack samples, you should see that it is spending nearly 100% of its time reading in the file and generating output (which you didn't mention).
You just can't beat that.
If you do see that noticeable time is spent in the actual parsing, it might be possible to do overlapped I/O, but the machine would have to be really slow, or the I/O really fast (like from a solid state drive) before that would make sense.