Why is splitting a string slower in C++ than Python? - c++

I'm trying to convert some code from Python to C++ in an effort to gain a little bit of speed and sharpen my rusty C++ skills. Yesterday I was shocked when a naive implementation of reading lines from stdin was much faster in Python than C++ (see this). Today, I finally figured out how to split a string in C++ with merging delimiters (similar semantics to python's split()), and am now experiencing deja vu! My C++ code takes much longer to do the work (though not an order of magnitude more, as was the case for yesterday's lesson).
Python Code:
#!/usr/bin/env python
from __future__ import print_function
import time
import sys
count = 0
start_time = time.time()
dummy = None
for line in sys.stdin:
    dummy = line.split()
    count += 1
delta_sec = int(time.time() - start_time)
print("Python: Saw {0} lines in {1} seconds. ".format(count, delta_sec), end='')
if delta_sec > 0:
    lps = int(count/delta_sec)
    print(" Crunch Speed: {0}".format(lps))
else:
    print('')
C++ Code:
#include <iostream>
#include <string>
#include <sstream>
#include <time.h>
#include <vector>
using namespace std;
void split1(vector<string> &tokens, const string &str,
            const string &delimiters = " ") {
    // Skip delimiters at beginning
    string::size_type lastPos = str.find_first_not_of(delimiters, 0);
    // Find the first delimiter (end of the first token)
    string::size_type pos = str.find_first_of(delimiters, lastPos);
    while (string::npos != pos || string::npos != lastPos) {
        // Found a token, add it to the vector
        tokens.push_back(str.substr(lastPos, pos - lastPos));
        // Skip delimiters
        lastPos = str.find_first_not_of(delimiters, pos);
        // Find the next delimiter
        pos = str.find_first_of(delimiters, lastPos);
    }
}

void split2(vector<string> &tokens, const string &str, char delim=' ') {
    stringstream ss(str); //convert string to stream
    string item;
    while(getline(ss, item, delim)) {
        tokens.push_back(item); //add token to vector
    }
}

int main() {
    string input_line;
    vector<string> spline;
    long count = 0;
    int sec, lps;
    time_t start = time(NULL);

    cin.sync_with_stdio(false); //disable synchronous IO

    while(cin) {
        getline(cin, input_line);
        spline.clear(); //empty the vector for the next line to parse
        //I'm trying one of the two implementations, per compilation, obviously:
        // split1(spline, input_line);
        split2(spline, input_line);
        count++;
    };
    count--; //subtract for final over-read

    sec = (int) time(NULL) - start;
    cerr << "C++ : Saw " << count << " lines in " << sec << " seconds." ;
    if (sec > 0) {
        lps = count / sec;
        cerr << " Crunch speed: " << lps << endl;
    } else
        cerr << endl;
    return 0;
}
//compiled with: g++ -Wall -O3 -o split1 split_1.cpp
Note that I tried two different split implementations. One (split1) uses string methods to search for tokens and is able to merge consecutive delimiters as well as handle multiple delimiter characters (it comes from here). The second (split2) uses getline to read the string as a stream, doesn't merge delimiters, and only supports a single delimiter character (that one was posted by several StackOverflow users in answers to string-splitting questions).
I ran this multiple times in various orders. My test machine is a Macbook Pro (2011, 8GB, Quad Core), not that it matters much. I'm testing with a 20M line text file with three space-separated columns that each look similar to this: "foo.bar 127.0.0.1 home.foo.bar"
Results:
$ /usr/bin/time cat test_lines_double | ./split.py
15.61 real 0.01 user 0.38 sys
Python: Saw 20000000 lines in 15 seconds. Crunch Speed: 1333333
$ /usr/bin/time cat test_lines_double | ./split1
23.50 real 0.01 user 0.46 sys
C++ : Saw 20000000 lines in 23 seconds. Crunch speed: 869565
$ /usr/bin/time cat test_lines_double | ./split2
44.69 real 0.02 user 0.62 sys
C++ : Saw 20000000 lines in 45 seconds. Crunch speed: 444444
What am I doing wrong? Is there a better way to do string splitting in C++ that does not rely on external libraries (i.e. no boost), supports merging sequences of delimiters (like python's split), is thread safe (so no strtok), and whose performance is at least on par with python?
Edit 1 / Partial Solution?:
I tried making it a more fair comparison by having python reset the dummy list and append to it each time, as C++ does. This still isn't exactly what the C++ code is doing, but it's a bit closer. Basically, the loop is now:
for line in sys.stdin:
    dummy = []
    dummy += line.split()
    count += 1
The performance of python is now about the same as the split1 C++ implementation.
/usr/bin/time cat test_lines_double | ./split5.py
22.61 real 0.01 user 0.40 sys
Python: Saw 20000000 lines in 22 seconds. Crunch Speed: 909090
I'm still surprised that, even if Python is highly optimized for string processing (as Matt Joiner suggested), these C++ implementations are not faster. If anyone has ideas about how to do this in a more optimal way using C++, please share your code. (I think my next step will be trying to implement this in pure C, although I'm not going to trade off programmer productivity to re-implement my overall project in C, so this will just be an experiment for string-splitting speed.)
Thanks to all for your help.
Final Edit/Solution:
Please see Alf's accepted answer. Since python deals with strings strictly by reference and STL strings are often copied, performance is better with vanilla python implementations. For comparison, I compiled and ran my data through Alf's code, and here is the performance on the same machine as all the other runs, essentially identical to the naive python implementation (though faster than the python implementation that resets/appends the list, as shown in the above edit):
$ /usr/bin/time cat test_lines_double | ./split6
15.09 real 0.01 user 0.45 sys
C++ : Saw 20000000 lines in 15 seconds. Crunch speed: 1333333
My only small remaining gripe is regarding the amount of code necessary to get C++ to perform in this case.
One of the lessons from this issue and yesterday's stdin line-reading issue (linked above) is that one should always benchmark instead of making naive assumptions about languages' relative "default" performance. I appreciate the education.
Thanks again to all for your suggestions!

As a guess, Python strings are reference counted immutable strings, so that no strings are copied around in the Python code, while C++ std::string is a mutable value type, and is copied at the smallest opportunity.
If the goal is fast splitting, then one would use constant time substring operations, which means only referring to parts of the original string, as in Python (and Java, and C#…).
The C++ std::string class has one redeeming feature, though: it is standard, so that it can be used to pass strings safely and portably around where efficiency is not a main consideration. But enough chat. Code -- and on my machine this is of course faster than Python, since Python's string handling is implemented in C which is a subset of C++ (he he):
#include <iostream>
#include <string>
#include <sstream>
#include <time.h>
#include <vector>
using namespace std;
class StringRef
{
private:
    char const* begin_;
    int         size_;

public:
    int size() const { return size_; }
    char const* begin() const { return begin_; }
    char const* end() const { return begin_ + size_; }

    StringRef( char const* const begin, int const size )
        : begin_( begin )
        , size_( size )
    {}
};

vector<StringRef> split3( string const& str, char delimiter = ' ' )
{
    vector<StringRef> result;

    enum State { inSpace, inToken };

    State state = inSpace;
    char const* pTokenBegin = 0;    // Init to satisfy compiler.
    for( auto it = str.begin(); it != str.end(); ++it )
    {
        State const newState = (*it == delimiter? inSpace : inToken);
        if( newState != state )
        {
            switch( newState )
            {
            case inSpace:
                result.push_back( StringRef( pTokenBegin, &*it - pTokenBegin ) );
                break;
            case inToken:
                pTokenBegin = &*it;
            }
        }
        state = newState;
    }
    if( state == inToken )
    {
        result.push_back( StringRef( pTokenBegin, &*str.end() - pTokenBegin ) );
    }
    return result;
}

int main() {
    string input_line;
    vector<string> spline;
    long count = 0;
    int sec, lps;
    time_t start = time(NULL);

    cin.sync_with_stdio(false); //disable synchronous IO

    while(cin) {
        getline(cin, input_line);
        //spline.clear(); //empty the vector for the next line to parse
        //I'm trying one of the two implementations, per compilation, obviously:
        // split1(spline, input_line);
        //split2(spline, input_line);

        vector<StringRef> const v = split3( input_line );

        count++;
    };
    count--; //subtract for final over-read
    sec = (int) time(NULL) - start;
    cerr << "C++ : Saw " << count << " lines in " << sec << " seconds." ;
    if (sec > 0) {
        lps = count / sec;
        cerr << " Crunch speed: " << lps << endl;
    } else
        cerr << endl;
    return 0;
}
//compiled with: g++ -Wall -O3 -o split1 split_1.cpp -std=c++0x
Disclaimer: I hope there aren't any bugs. I haven't tested the functionality, but only checked the speed. But I think, even if there is a bug or two, correcting that won't significantly affect the speed.

I'm not providing any better solutions (at least performance-wise), but some additional data that could be interesting.
Using strtok_r (reentrant variant of strtok):
void splitc1(vector<string> &tokens, const string &str,
const string &delimiters = " ") {
char *saveptr;
char *cpy, *token;
cpy = (char*)malloc(str.size() + 1);
strcpy(cpy, str.c_str());
for(token = strtok_r(cpy, delimiters.c_str(), &saveptr);
token != NULL;
token = strtok_r(NULL, delimiters.c_str(), &saveptr)) {
tokens.push_back(string(token));
}
free(cpy);
}
Additionally using character strings for parameters, and fgets for input:
void splitc2(vector<string> &tokens, const char *str,
const char *delimiters) {
char *saveptr;
char *cpy, *token;
cpy = (char*)malloc(strlen(str) + 1);
strcpy(cpy, str);
for(token = strtok_r(cpy, delimiters, &saveptr);
token != NULL;
token = strtok_r(NULL, delimiters, &saveptr)) {
tokens.push_back(string(token));
}
free(cpy);
}
And, in some cases, where destroying the input string is acceptable:
void splitc3(vector<string> &tokens, char *str,
const char *delimiters) {
char *saveptr;
char *token;
for(token = strtok_r(str, delimiters, &saveptr);
token != NULL;
token = strtok_r(NULL, delimiters, &saveptr)) {
tokens.push_back(string(token));
}
}
The timings for these are as follows (including my results for the other variants from the question and the accepted answer):
split1.cpp: C++ : Saw 20000000 lines in 31 seconds. Crunch speed: 645161
split2.cpp: C++ : Saw 20000000 lines in 45 seconds. Crunch speed: 444444
split.py: Python: Saw 20000000 lines in 33 seconds. Crunch Speed: 606060
split5.py: Python: Saw 20000000 lines in 35 seconds. Crunch Speed: 571428
split6.cpp: C++ : Saw 20000000 lines in 18 seconds. Crunch speed: 1111111
splitc1.cpp: C++ : Saw 20000000 lines in 27 seconds. Crunch speed: 740740
splitc2.cpp: C++ : Saw 20000000 lines in 22 seconds. Crunch speed: 909090
splitc3.cpp: C++ : Saw 20000000 lines in 20 seconds. Crunch speed: 1000000
As we can see, the solution from the accepted answer is still fastest.
For anyone who would want to do further tests, I also put up a Github repo with all the programs from the question, the accepted answer, this answer, and additionally a Makefile and a script to generate test data: https://github.com/tobbez/string-splitting.

I suspect that this is because of the way std::vector gets resized during the process of a push_back() function call. If you try using std::list or std::vector::reserve() to reserve enough space for the sentences, you should get much better performance. Or you could use a combination of both, like below for split1():
void split1(vector<string> &tokens, const string &str,
const string &delimiters = " ") {
// Skip delimiters at beginning
string::size_type lastPos = str.find_first_not_of(delimiters, 0);
// Find first non-delimiter
string::size_type pos = str.find_first_of(delimiters, lastPos);
list<string> token_list;
while (string::npos != pos || string::npos != lastPos) {
// Found a token, add it to the list
token_list.push_back(str.substr(lastPos, pos - lastPos));
// Skip delimiters
lastPos = str.find_first_not_of(delimiters, pos);
// Find next non-delimiter
pos = str.find_first_of(delimiters, lastPos);
}
tokens.assign(token_list.begin(), token_list.end());
}
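For the reserve() suggestion, a minimal sketch along the lines of split2 (the bound of 8 tokens per line is just an assumed example, not something taken from the question's data):
#include <sstream>
#include <string>
#include <vector>

void split2_reserved(std::vector<std::string> &tokens, const std::string &str,
                     char delim = ' ') {
    tokens.reserve(8);              // assumed upper bound on tokens per line
    std::stringstream ss(str);
    std::string item;
    while (std::getline(ss, item, delim)) {
        tokens.push_back(item);     // no reallocation while the bound holds
    }
}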
EDIT: The other obvious thing I see is that Python variable dummy gets assigned each time but not modified. So it's not a fair comparison against C++. You should try modifying your Python code to be dummy = [] to initialize it and then do dummy += line.split(). Can you report the runtime after this?
EDIT2: To make it even more fair can you modify the while loop in C++ code to be:
while(cin) {
getline(cin, input_line);
std::vector<string> spline; // create a new vector
//I'm trying one of the two implementations, per compilation, obviously:
// split1(spline, input_line);
split2(spline, input_line);
count++;
};

I think the following code is better, using some C++17 and C++14 features:
// This code is untested as I write this post, but I'll test it
// when I'm free, and I sincerely welcome others to test and modify
// this code.
// C++17
#include <istream> // For std::istream.
#include <string_view> // new feature in C++17, sizeof(std::string_view) == 16 in libc++ on my x86-64 debian 9.4 computer.
#include <string>
#include <utility> // C++14 feature std::move.
template <template <class...> class Container, class Allocator>
void split1(Container<std::string_view, Allocator> &tokens,
std::string_view str,
std::string_view delimiter = " ")
{
/*
* The model of the input string:
*
* (optional) delimiter | content | delimiter | content | delimiter|
* ... | delimiter | content
*
* Using std::string::find_first_not_of or
* std::string_view::find_first_not_of is a bad idea, because it
* actually does the following thing:
*
* Finds the first character not equal to any of the characters
* in the given character sequence.
*
* Which means it does not treat your delimiters as a whole, but as
* a group of characters.
*
* This has 2 effects:
*
* 1. When your delimiter is not a single character, this function
* won't behave as you expected.
*
* 2. When your delimiter is just a single character, the function
* may have additional overhead due to the fact that it has to
* check every character against a range of characters; although
* there's only one, in order to assure correctness it still
* has an inner loop, which adds to the overhead.
*
* So, as a solution, I wrote the following code.
*
* The code below will skip the first delimiter prefix.
* However, if there's nothing between 2 delimiters, this code will
* still treat it as if there's something there.
*
* Note:
* Here I use the C++ std version of the substring search algorithm, but you
* can change it to Boyer-Moore, KMP (takes additional memory),
* Rabin-Karp or another algorithm to speed up your code.
*
*/
// Establish the loop invariant 1.
typename std::string_view::size_type
next,
delimiter_size = delimiter.size(),
pos = str.find(delimiter) ? 0 : delimiter_size;
// The loop invariant:
// 1. At pos, it is the content that should be saved.
// 2. The next pos of delimiter is stored in next, which could be 0
// or std::string_view::npos.
do {
// Find the next delimiter, maintain loop invariant 2.
next = str.find(delimiter, pos);
// Found a token, add it to the vector
tokens.push_back(str.substr(pos, next - pos));
// Skip delimiters, maintain the loop invariant 1.
//
// # next is the position of the next delimiter (or npos).
// Because when next == std::string_view::npos, the loop will
// terminate, so it doesn't matter even if the following
// expression have undefined behavior due to the overflow of
// argument.
pos = next + delimiter_size;
} while(next != std::string_view::npos);
}
template <template <class...> class Container, class traits, class Allocator2, class Allocator>
void split2(Container<std::basic_string<char, traits, Allocator2>, Allocator> &tokens,
std::istream &stream,
char delimiter = ' ')
{
std::basic_string<char, traits, Allocator2> item;
// Unfortunately, std::getline can only accept a single-character
// delimiter.
while(std::getline(stream, item, delimiter))
// Move item into token. I haven't checked whether item can be
// reused after being moved.
tokens.push_back(std::move(item));
}
The choice of container:
std::vector.
Assuming the initial size of the allocated internal array is 1 and the ultimate size is N, you will allocate and deallocate log2(N) times, and you will copy elements (2 ^ (log2(N) + 1) - 1) = (2N - 1) times. As pointed out in Is the poor performance of std::vector due to not calling realloc a logarithmic number of times?, this can give poor performance when the size of the vector is unpredictable and could be very large.
But, if you can estimate the size of it, this will be less of a problem.
std::list.
For every push_back, the time consumed is a constant, but it will probably take more time than std::vector for an individual push_back. Using a per-thread memory pool and a custom allocator can ease this problem.
std::forward_list.
Same as std::list, but occupies less memory per element. Requires a wrapper class to work due to the lack of a push_back API.
std::array.
If you know the limit of growth, then you can use std::array. Of course, you can't use it directly, since it doesn't have a push_back API, but you can define a wrapper (a sketch of such a wrapper follows this list). I think it's the fastest option here and can save some memory if your estimate is quite accurate.
std::deque.
This option allows you to trade memory for performance. There will be no (2 ^ (N + 1) - 1) copies of elements, just N allocations and no deallocation. Also, you'll have constant random access time, and the ability to add new elements at both ends.
According to std::deque-cppreference
On the other hand, deques typically have large minimal memory cost; a
deque holding just one element has to allocate its full internal array
(e.g. 8 times the object size on 64-bit libstdc++; 16 times the object size
or 4096 bytes, whichever is larger, on 64-bit libc++)
or you can use combo of these:
std::vector< std::array<T, 2 ^ M> >
This is similar to std::deque; the difference is just that this container doesn't support adding elements at the front. But it is still faster, due to the fact that it won't copy the underlying std::array (2 ^ (N + 1) - 1) times; it will just copy the pointer array (2 ^ (N - M + 1) - 1) times, allocate a new array only when the current one is full, and doesn't need to deallocate anything. By the way, you still get constant random access time.
std::list< std::array<T, ...> >
Greatly eases the pressure of memory fragmentation. It will only allocate a new array when the current one is full, and does not need to copy anything. You will still have to pay the price of an additional pointer compared to combo 1.
std::forward_list< std::array<T, ...> >
Same as combo 2, but costs the same memory as combo 1.
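As an illustration of the std::array wrapper mentioned above, here is a minimal sketch of a fixed-capacity container with a push_back-style interface; the capacity parameter and the omission of bounds checking are my assumptions, not part of the original answer:
#include <array>
#include <cstddef>

template <class T, std::size_t Capacity>
class fixed_vector {
    std::array<T, Capacity> data_;
    std::size_t size_ = 0;
public:
    // Assumes size_ < Capacity; a real implementation would check or assert.
    void push_back(const T &value) { data_[size_++] = value; }
    T &operator[](std::size_t i) { return data_[i]; }
    const T &operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return size_; }
    auto begin() { return data_.begin(); }
    auto end() { return data_.begin() + size_; }
};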

You're making the mistaken assumption that your chosen C++ implementation is necessarily faster than Python's. String handling in Python is highly optimized. See this question for more: Why do std::string operations perform poorly?

If you take the split1 implementation and change the signature to more closely match that of split2, by changing this:
void split1(vector<string> &tokens, const string &str, const string &delimiters = " ")
to this:
void split1(vector<string> &tokens, const string &str, const char delimiters = ' ')
You get a more dramatic difference between split1 and split2, and a fairer comparison:
split1 C++ : Saw 10000000 lines in 41 seconds. Crunch speed: 243902
split2 C++ : Saw 10000000 lines in 144 seconds. Crunch speed: 69444
split1' C++ : Saw 10000000 lines in 33 seconds. Crunch speed: 303030

void split5(vector<string> &tokens, const string &str, char delim=' ') {
enum { do_token, do_delim } state = do_delim;
int idx = 0, tok_start = 0;
for (string::const_iterator it = str.begin() ; ; ++it, ++idx) {
switch (state) {
case do_token:
if (it == str.end()) {
tokens.push_back (str.substr(tok_start, idx-tok_start));
return;
}
else if (*it == delim) {
state = do_delim;
tokens.push_back (str.substr(tok_start, idx-tok_start));
}
break;
case do_delim:
if (it == str.end()) {
return;
}
if (*it != delim) {
state = do_token;
tok_start = idx;
}
break;
}
}
}

I suspect that this is related to buffering on sys.stdin in Python, but no buffering in the C++ implementation.
See this post for details on how to change the buffer size, then try the comparison again:
Setting smaller buffer size for sys.stdin?
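On the C++ side, one way to test the buffering hypothesis is to hand std::cin a larger buffer before the first read. Whether this actually changes anything is implementation-defined, so treat it purely as an experiment; the 1 MiB size is an arbitrary choice of mine:
#include <iostream>

int main() {
    static char buffer[1 << 20];                        // 1 MiB, arbitrary test size
    std::cin.rdbuf()->pubsetbuf(buffer, sizeof buffer); // must be called before any input
    std::cin.sync_with_stdio(false);
    // ... read and split lines as in the question's code ...
}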

Related

Fast counting of nucleotide types in a large number of sequences

First, a bit of background about my question.
I work as a bioinformatician, which means that I do informatics treatment to try to answer a biological question. In my problem, I have to manipulate a file called a FASTA file which looks like this :
>Header 1
ATGACTGATCGNTGACTGACTGTAGCTAGC
>Header 2
ATGCATGCTAGCTGACTGATCGTAGCTAGC
ATCGATCGTAGCT
So a FASTA file is basically just a header, preceded by a '>' character, then a sequence on one or multiple lines that is composed of nucleotides. Nucleotides are characters that can take 5 possible values : A, T, C, G or N.
The thing I would like to do is count the number of times each nucleotide type appears so if we consider this dummy FASTA file :
>Header 1
ATTCGN
I should have, as a result :
A:1 T:2 C:1 G:1 N:1
Here is what I got so far :
ifstream sequence_file(input_file.c_str());
string line;
string sequence = "";
map<char, double> nucleotide_counts;
while(getline(sequence_file, line)) {
if(line[0] != '>') {
sequence += line;
}
else {
nucleotide_counts['A'] = boost::count(sequence, 'A');
nucleotide_counts['T'] = boost::count(sequence, 'T');
nucleotide_counts['C'] = boost::count(sequence, 'C');
nucleotide_counts['G'] = boost::count(sequence, 'G');
nucleotide_counts['N'] = boost::count(sequence, 'N');
sequence = "";
}
}
So it reads the file line by line, if it encounters a '>' as the first character of the line, it knows that the sequence is complete and starts to count. Now the problem I'm facing is that I have millions of sequences with several billions of nucleotides in total. I can see that my method is not optimized because I call boost::count five times on the same sequence.
Other things I have tried :
Parsing the sequence to increment a counter for each nucleotide type. I tried using a map<char, double> to map each nucleotide to a count, but this was slower than the boost solution.
Using std::count from the algorithm library, but this was too slow as well.
I searched the internet for solutions but every solution I found was good if the number of sequences was low, which is not my case. Would you have any idea that could help me speed things up ?
EDIT 1 :
I also tried this version but it was 2 times slower than the boost one :
ifstream sequence_file(input_file.c_str());
string line;
string sequence = "";
map<char, double> nucleotide_counts;
while(getline(sequence_file, line)) {
if(line[0] != '>') {
sequence += line;
}
else {
for(int i = 0; i < sequence.size(); i++) {
nucleotide_counts[sequence[i]]++;
}
sequence = "";
}
}
EDIT 2 : Thanks to everyone in this thread, I was able to obtain a speed up of about 30 times compared to the boost original solution. Here is the code :
#include <array> // std::array
#include <fstream> // std::ifstream
#include <string> // std::string
void count_nucleotides(std::array<double, 26> &nucleotide_counts, std::string sequence) {
for(unsigned int i = 0; i < sequence.size(); i++) {
++nucleotide_counts[sequence[i] - 'A'];
}
}
std::ifstream sequence_file(input_file.c_str());
std::string line;
std::string sequence = "";
std::array<double, 26> nucleotide_counts{}; // value-initialize the counters to zero
while(getline(sequence_file, line)) {
if(line[0] != '>') {
sequence += line;
}
else {
count_nucleotides(nucleotide_counts, sequence);
sequence = "";
}
}
In order of importance:
Good code for this task will 100% be I/O-bound. Your processor can count characters much faster than your disk can pump them to the CPU. Thus, the first question to me is: What is the throughput of your storage medium? What are your ideal RAM and cache throughputs? Those are the upper limits. If you've hit them, there's not much point in looking at your code further. It's possible that your boost solution is there already.
std::map lookups are relatively expensive. Yes, it's O(log(N)), but your N=5 is small and constant, so this tells you nothing. For 5 values, the map will have to chase about three pointers for every lookup (not to mention how impossible this is for the branch predictor). Your count solution has 5 map lookups and 5 traversals of each string, whereas your manual solution has a map lookup for every nucleotide (but only one traversal of the string).
Serious suggestion: Use a local variable for each counter (a sketch follows at the end of this answer). Those will almost surely get placed in CPU registers and are therefore essentially free. You won't ever pollute your cache with the counters that way, unlike with map, unordered_map, vector, etc.
Replacing abstraction by repetition like this is usually not a good idea, but in this case, it's pretty inconceivable that you'll ever need significantly more counters, so scalability is not an issue.
Consider std::string_view (which would require a different method of reading the file) to avoid creating copies of the data. You load the entire data into memory from disk and then, for each sequence, you copy it. That's not really necessary and (depending on how smart your compiler is) can bog you down. Especially since you keep appending to the string until the next header (which is more unnecessary copying - you could just count after every line).
If, for some reason, you are not hitting the theoretical throughputs, consider multithreading and/or vectorization. But I can't imagine this would be necessary.
By the way, boost::count is a thin wrapper around std::count at least in this version.
I think you did the right thing here though: writing good and readable code, then identifying it as a performance bottleneck and checking whether you can make it run faster (potentially by making it slightly more ugly).
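Putting the local-counter suggestion into code, a minimal sketch (the file name is hypothetical, and for brevity it counts over the whole file rather than per sequence as the question's code does):
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream sequence_file("input.fasta");          // hypothetical file name
    std::string line;
    long long a = 0, t = 0, c = 0, g = 0, n = 0;          // plain local counters
    while (std::getline(sequence_file, line)) {
        if (!line.empty() && line[0] == '>') continue;    // skip headers
        for (char ch : line) {
            switch (ch) {
                case 'A': ++a; break;
                case 'T': ++t; break;
                case 'C': ++c; break;
                case 'G': ++g; break;
                case 'N': ++n; break;
                default: break;                           // ignore anything else
            }
        }
    }
    std::cout << "A:" << a << " T:" << t << " C:" << c
              << " G:" << g << " N:" << n << '\n';
}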
If this is the main task you have to perform, you might have an interest in an awk solution. Various problems with FASTA files are very easily tackled with awk:
awk '/^>/ && c { for(i in a) if (i ~ /[A-Z]/) printf i":"a[i]" "; print "" ; delete a }
/^>/ {print; c++; next}
{ for(i=1;i<=length($0);++i) a[substr($0,i,1)]++ }
END{ for(i in a) if (i ~ /[A-Z]/) printf i":"a[i]" "; print "" }' fastafile
This outputs on your example:
>Header 1
N:1 A:7 C:6 G:8 T:8
>Header 2
A:10 C:10 G:11 T:12
note: I am aware that this is not C++, but it is often useful to show other means to achieve the same goal.
Benchmarks with awk:
testfile: http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz
unzipped size: 2.3G
total records: 5502947
total lines:
Script 0: (runtime: too long) The first mentioned script is utterly slow. Use only on small files
Script 1: (runtime: 484.31 sec) This is an optimised version where we do a targeted count:
/^>/ && f { for(i in c) printf i":"c[i]" "; print "" ; delete c }
/^>/ {print; f++; next}
{ s=$0
c["A"]+=gsub(/[aA]/,"",s)
c["C"]+=gsub(/[cC]/,"",s)
c["G"]+=gsub(/[gG]/,"",s)
c["T"]+=gsub(/[tT]/,"",s)
c["N"]+=gsub(/[nN]/,"",s)
}
END { for(i in c) printf i":"c[i]" "; print "" ; delete c }
Update 2: (runtime: 416.43 sec) Combine all the subsequences into a single sequence and count only once:
function count() {
c["A"]+=gsub(/[aA]/,"",s)
c["C"]+=gsub(/[cC]/,"",s)
c["G"]+=gsub(/[gG]/,"",s)
c["T"]+=gsub(/[tT]/,"",s)
c["N"]+=gsub(/[nN]/,"",s)
}
/^>/ && f { count(); for(i in c) printf i":"c[i]" "; print "" ; delete c; string=""}
/^>/ {print; f++; next}
{ string=string $0 }
END { count(); for(i in c) printf i":"c[i]" "; print "" }
Update 3: (runtime: 396.12 sec) Refine how awk finds its records and fields, and abuse this in a single go.
function count() {
c["A"]+=gsub(/[aA]/,"",string)
c["C"]+=gsub(/[cC]/,"",string)
c["G"]+=gsub(/[gG]/,"",string)
c["T"]+=gsub(/[tT]/,"",string)
c["N"]+=gsub(/[nN]/,"",string)
}
BEGIN{RS="\n>"; FS="\n"}
{
print $1
string=substr($0,length($1)); count()
for(i in c) printf i":"c[i]" "; print ""
delete c; string=""
}
Update 4: (runtime: 259.69 sec) Update the regex search in gsub. This creates a worthy speedup:
function count() {
n=length(string);
gsub(/[aA]+/,"",string); m=length(string); c["A"]+=n-m; n=m
gsub(/[cC]+/,"",string); m=length(string); c["C"]+=n-m; n=m
gsub(/[gG]+/,"",string); m=length(string); c["G"]+=n-m; n=m
gsub(/[tT]+/,"",string); m=length(string); c["T"]+=n-m; n=m
gsub(/[nN]+/,"",string); m=length(string); c["N"]+=n-m; n=m
}
BEGIN{RS="\n>"; FS="\n"}
{
print ">"$1
string=substr($0,length($1)); count()
for(i in c) printf i":"c[i]" "; print ""
delete c; string=""
}
Don't use a map if you want speed and can use an array. Also, std::getline can use a custom delimiter (instead of \n).
ifstream sequence_file(input_file.c_str());
string sequence = "";
std::array<int, 26> nucleotide_counts{}; // value-initialize to zero
// For one sequence
getline(sequence_file, sequence, '>');
for(auto&& c : sequence) {
++nucleotide_counts[c-'A'];
}
// nucleotide_counts['X'-'A'] contains the count of nucleotide X in the sequence
Demo
The reason why it's so slow is that you have indirect accesses all the time or 5 scans of the same string.
You don't need a map, use 5 integers, and increment them separately. Then it should be faster than the boost::count version because you don't traverse the string 5 times, and it will be faster than the map or the unordered_map increments because you won't have n indirect accesses.
so use something like:
switch(char)
{
case 'A':
++a;
break;
case 'G':
++g;
break;
}
...
As people in the comments suggested, try something like this:
enum eNucleotide {
NucleotideA = 0,
NucleotideT,
NucleotideC,
NucleotideG,
NucleotideN,
Size,
};
void countSequence(std::string line)
{
long nucleotide_counts[eNucleotide::Size] = { 0 };
if(line[0] != '>') {
for(int i = 0; i < line.size(); ++i)
{
switch (line[i])
{
case 'A':
++nucleotide_counts[NucleotideA];
break;
case 'T':
++nucleotide_counts[NucleotideT];
break;
case 'C':
++nucleotide_counts[NucleotideC];
break;
case 'G':
++nucleotide_counts[NucleotideG];
break;
case 'N':
++nucleotide_counts[NucleotideN];
break;
default :
/// error condition
break;
}
}
/// print results
std::cout << "A: " << nucleotide_counts[NucleotideA];
std::cout << "T: " << nucleotide_counts[NucleotideT];
std::cout << "C: " << nucleotide_counts[NucleotideC];
std::cout << "G: " << nucleotide_counts[NucleotideG];
std::cout << "N: " << nucleotide_counts[NucleotideN] << std::endl;
}
}
and call this function for every line of content. (I didn't test the code.)

How to cut off parts of a string, which every string in a collection has

My current problem is the following:
I have a std::vector of full path names to files.
Now I want to cut off the common prefix of all strings.
Example
If I have these 3 strings in the vector:
/home/user/foo.txt
/home/user/bar.txt
/home/baz.txt
I would like to cut off /home/ from every string in the vector.
Question
Is there any method to achieve this in general?
I want an algorithm that drops the common prefix of all strings.
I currently only have an idea which solves this problem in O(n*m), where n is the number of strings and m is the length of the longest string, by just going through every string with every other string char by char.
Is there a faster or more elegant way solving this?
This can be done entirely with std:: algorithms.
synopsis:
sort the input range if not already sorted. The first and last paths in the sorted range
will be the most dissimilar. Best case is O(N), worst case O(N + N.logN)
use std::mismatch to determine the largest common sequence between the
two most dissimilar paths [insignificant]
run through each path erasing the first COUNT characters where COUNT is the number of characters in the longest common sequence. O (N)
Best case time complexity: O(2N), worst case O(2N + N.logN) (can someone check that?)
#include <iostream>
#include <algorithm>
#include <string>
#include <vector>
std::string common_substring(const std::string& l, const std::string& r)
{
return std::string(l.begin(),
std::mismatch(l.begin(), l.end(),
r.begin(), r.end()).first);
}
std::string mutating_common_substring(std::vector<std::string>& range)
{
if (range.empty())
return std::string();
else
{
if (not std::is_sorted(range.begin(), range.end()))
std::sort(range.begin(), range.end());
return common_substring(range.front(), range.back());
}
}
std::vector<std::string> chop(std::vector<std::string> samples)
{
auto str = mutating_common_substring(samples);
for (auto& s : samples)
{
s.erase(s.begin(), std::next(s.begin(), str.size()));
}
return samples;
}
int main()
{
std::vector<std::string> samples = {
"/home/user/foo.txt",
"/home/user/bar.txt",
"/home/baz.txt"
};
samples = chop(std::move(samples));
for (auto& s : samples)
{
std::cout << s << std::endl;
}
}
expected:
baz.txt
user/bar.txt
user/foo.txt
Here's an alternate common_substring which does not require a sort. Time complexity is in theory O(N), but whether it's faster in practice you'd have to check:
std::string common_substring(const std::vector<std::string>& range)
{
if (range.empty())
{
return {};
}
return std::accumulate(std::next(range.begin(), 1), range.end(), range.front(),
[](auto const& best, const auto& sample)
{
return common_substring(best, sample);
});
}
update:
Elegance aside, this is probably the fastest way since it avoids any memory allocations, performing all transformations in-place. For most architectures and sample sizes, this will matter more than any other performance consideration.
#include <iostream>
#include <vector>
#include <string>
void reduce_to_common(std::string& best, const std::string& sample)
{
best.erase(std::mismatch(best.begin(), best.end(),
sample.begin(), sample.end()).first,
best.end());
}
void remove_common_prefix(std::vector<std::string>& range)
{
if (range.size())
{
auto iter = range.begin();
auto best = *iter;
for ( ; ++iter != range.end() ; )
{
reduce_to_common(best, *iter);
}
auto prefix_length = best.size();
for (auto& s : range)
{
s.erase(s.begin(), std::next(s.begin(), prefix_length));
}
}
}
int main()
{
std::vector<std::string> samples = {
"/home/user/foo.txt",
"/home/user/bar.txt",
"/home/baz.txt"
};
remove_common_prefix(samples);
for (auto& s : samples)
{
std::cout << s << std::endl;
}
}
You have to search every string in the list. However you don't need to compare all the characters in every string. The common prefix can only get shorter, so you only need to compare with "the common prefix so far". I don't think this changes the big-O complexity - but it will make quite a difference to the actual speed.
Also, these look like file names. Are they sorted (bearing in mind that many filesystems tend to return things in sorted order)? If so, you only need to consider the first and last elements. If they are probably or mostly ordered, then consider the common prefix of the first and last, and then iterate through all the other strings, shortening the prefix further as necessary.
You just have to iterate over every string. You can only avoid iterating over the full length of the strings needlessly by exploiting the fact that the prefix can only get shorter:
#include <iostream>
#include <string>
#include <vector>
std::string common_prefix(const std::vector<std::string> &ss) {
if (ss.empty())
// no prefix
return "";
std::string prefix = ss[0];
for (size_t i = 1; i < ss.size(); i++) {
size_t c = 0; // index after which the string differ
for (; c < prefix.length(); c++) {
if (prefix[c] != ss[i][c]) {
// strings differ from character c on
break;
}
}
if (c == 0)
// no common prefix
return "";
// the prefix is only up to character c-1, so resize prefix
prefix.resize(c);
}
return prefix;
}
void strip_common_prefix(std::vector<std::string> &ss) {
std::string prefix = common_prefix(ss);
if (prefix.empty())
// no common prefix, nothing to do
return;
// drop the common part, which are always the first prefix.length() characters
for (std::string &s: ss) {
s = s.substr(prefix.length());
}
}
int main()
{
std::vector<std::string> ss { "/home/user/foo.txt", "/home/user/bar.txt", "/home/baz.txt"};
strip_common_prefix(ss);
for (std::string &s: ss)
std::cout << s << "\n";
}
Drawing from the hints of Martin Bonner's answer, you may implement a more efficient algorithm if you have more prior knowledge on your input.
In particular, if you know your input is sorted, it suffices to compare the first and last strings (see Richard's answer).
i - Find the file which has the least folder depth (i.e. baz.txt) - its root path is home
ii - Then go through the other strings to see if they start with that root.
iii - If so then remove root from all the strings.
Start with std::size_t index=0;. Scan the list to see if characters at that index match (note: past the end does not match). If it does, advance index and repeat.
When done, index will have the value of the length of the prefix.
At this point, I'd advise you to write or find a string_view type. If you do, simply create a string_view for each of your strings str with start/end of index, str.size().
Overall cost: O(|prefix|*N+N), which is also the cost to confirm that your answer is correct.
If you don't want to write a string_view, simply call str.erase(str.begin(), str.begin()+index) on each str in your vector.
Overall cost is O(|total string length|+N). The prefix has to be visited in order to confirm it, then the tail of the string has to be rewritten.
Now the cost of the breadth-first is locality, as you are touching memory all over the place. It will probably be more efficient in practice to do it in chunks, where you scan the first K strings up to length Q and find the common prefix, then chain that common prefix plus the next block. This won't change the O-notation, but will improve locality of memory reference.
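A literal sketch of that column-wise scan (the container name is illustrative), assuming the prefix length is all you need before switching to a string_view or erase as described above:
#include <string>
#include <vector>

std::size_t common_prefix_length(const std::vector<std::string>& paths)
{
    if (paths.empty()) return 0;
    std::size_t index = 0;
    for (;; ++index) {
        if (index >= paths[0].size()) return index;    // past the end: stop
        const char c = paths[0][index];
        for (const auto& s : paths)
            if (index >= s.size() || s[index] != c)
                return index;                          // mismatch in this column
    }
}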
for(vector<string>::iterator itr=V.begin(); itr!=V.end(); ++itr)
itr->erase(0,6);

Recursive String Transformations

EDIT: I've made the main change of using iterators to keep track of successive positions in the bit and character strings and pass the latter by const ref. Now, when I copy the sample inputs onto themselves multiple times to test the clock, everything finishes within 10 seconds for really long bit and character strings and even up to 50 lines of sample input. But, still when I submit, CodeEval says the process was aborted after 10 seconds. As I mention, they don't share their input so now that "extensions" of the sample input work, I'm not sure how to proceed. Any thoughts on an additional improvement to increase my recursive performance would be greatly appreciated.
NOTE: Memoization was a good suggestion but I could not figure out how to implement it in this case since I'm not sure how to store the bit-to-char correlation in a static look-up table. The only thing I thought of was to convert the bit values to their corresponding integer but that risks integer overflow for long bit strings and seems like it would take too long to compute. Further suggestions for memoization here would be greatly appreciated as well.
This is actually one of the moderate CodeEval challenges. They don't share the sample input or output for moderate challenges but the output "fail error" simply says "aborted after 10 seconds," so my code is getting hung up somewhere.
The assignment is simple enough. You take a filepath as the single command-line argument. Each line of the file will contain a sequence of 0s and 1s and a sequence of As and Bs, separated by a white space. You are to determine whether the binary sequence can be transformed into the letter sequence according to the following two rules:
1) Each 0 can be converted to any non-empty sequence of As (e.g, 'A', 'AA', 'AAA', etc.)
2) Each 1 can be converted to any non-empty sequences of As OR Bs (e.g., 'A', 'AA', etc., or 'B', 'BB', etc) (but not a mixture of the letters)
The constraints are to process up to 50 lines from the file and that the length of the binary sequence is in [1,150] and that of the letter sequence is in [1,1000].
The most obvious starting algorithm is to do this recursively. What I came up with was for each bit, collapse the entire next allowed group of characters first, test the shortened bit and character strings. If it fails, add back one character from the killed character group at a time and call again.
Here is my complete code. I removed cmd-line argument error checking for brevity.
#include <iostream>
#include <fstream>
#include <string>
#include <iterator>
using namespace std;
//typedefs
typedef string::const_iterator str_it;
//declarations
//use const ref and iterators to save time on copying and erasing
bool TransformLine(const string & bits, str_it bits_front, const string & chars, str_it chars_front);
int main(int argc, char* argv[])
{
//check there are at least two command line arguments: binary executable and file name
//ignore additional arguments
if(argc < 2)
{
cout << "Invalid command line argument. No input file name provided." << "\n"
<< "Goodybe...";
return -1;
}
//create input stream and open file
ifstream in;
in.open(argv[1], ios::in);
while(!in.is_open())
{
string name;
cout << "Invalid file name. Please enter file name: ";
cin >> name;
in.open(name.c_str(), ios::in);
}
//variables
string line_bits, line_chars;
//reserve space up to constraints to reduce resizing time later
line_bits.reserve(150);
line_chars.reserve(1000);
int line = 0;
//loop over lines (<=50 by constraint, ignore the rest)
while((in >> line_bits >> line_chars) && (line < 50))
{
line++;
//impose bit and char constraints
if(line_bits.length() > 150 ||
line_chars.length() > 1000)
continue; //skip this line
(TransformLine(line_bits, line_bits.begin(), line_chars, line_chars.begin()) == true) ? (cout << "Yes\n") : (cout << "No\n");
}
//close file
in.close();
return 0;
}
bool TransformLine(const string & bits, str_it bits_front, const string & chars, str_it chars_front)
{
//using iterators so store current length as local const
//can make these const because they're not altered here
int bits_length = distance(bits_front, bits.end());
int chars_length = distance(chars_front, chars.end());
//check success rule
if(bits_length == 0 && chars_length == 0)
return true;
//Check fail rules:
//1. next bit is 0 but next char is B
//2. bits length is zero (but char is not, by previous if)
//3. char length is zero (but bits length is not, by previous if)
if((*bits_front == '0' && *chars_front == 'B') ||
bits_length == 0 ||
chars_length == 0)
return false;
//we now know that chars_length != 0 => chars_front != chars.end()
//kill a bit and then call recursively with each possible reduction of front char group
bits_length = distance(++bits_front, bits.end());
//current char group tracker
const char curr_char_type = *chars_front; //use const so compiler can optimize
int curr_pos = distance(chars.begin(), chars_front); //position of current front in char string
//since chars are 0-indexed, the following is also length of current char group
//start searching from curr_pos and length is relative to curr_pos so subtract it!!!
int curr_group_length = chars.find_first_not_of(curr_char_type, curr_pos)-curr_pos;
//make sure this isn't the last group!
if(curr_group_length < 0 || curr_group_length > chars_length)
curr_group_length = chars_length; //distance to end is precisely distance(chars_front, chars.end()) = chars_length
//kill the curr_char_group
//if curr_group_length = char_length then this will make chars_front = chars.end()
//and this will mean that chars_length will be 0 on the next recursive call.
chars_front += curr_group_length;
curr_pos = distance(chars.begin(), chars_front);
//call recursively, adding back a char from the current group until 1 less than starting point
int added_back = 0;
while(added_back < curr_group_length)
{
if(TransformLine(bits, bits_front, chars, chars_front))
return true;
//insert back one char from the current group
else
{
added_back++;
chars_front--; //represents adding back one character from the group
}
}
//if here then all recursive checks failed so initial must fail
return false;
}
They give the following test cases, which my code solves correctly:
Sample input:
1| 1010 AAAAABBBBAAAA
2| 00 AAAAAA
3| 01001110 AAAABAAABBBBBBAAAAAAA
4| 1100110 BBAABABBA
Correct output:
1| Yes
2| Yes
3| Yes
4| No
Since a transformation is possible if and only if copies of it are, I tried just copying each binary and letter sequence onto itself various times and seeing how the clock goes. Even for very long bit and character strings and many lines it has finished in under 10 seconds.
My question is: since CodeEval is still saying it is running longer than 10 seconds but they don't share their input, does anyone have any further suggestions to improve the performance of this recursion? Or maybe a totally different approach?
Thank you in advance for your help!
Here's what I found:
Pass by constant reference
Strings and other large data structures should be passed by constant reference.
This allows the compiler to pass a pointer to the original object, rather than making a copy of the data structure.
Call functions once, save result
You are calling bits.length() twice. You should call it once and save the result in a constant variable. This allows you to check the status again without calling the function.
Function calls are expensive for time critical programs.
Use constant variables
If you are not going to modify a variable after assignment, use the const in the declaration:
const char curr_char_type = chars[0];
The const allows compilers to perform higher order optimization and provides safety checks.
Change data structures
Since you may be performing inserts in the middle of a string, you should use a different data structure for the characters. The std::string data type may need to reallocate after an insertion AND move the letters further down. Insertion is faster with a std::list<char> because a linked list only swaps pointers. There may be a trade-off because a linked list needs to dynamically allocate memory for each character.
Reserve space in your strings
When you create the destination strings, you should use a constructor that preallocates or reserves room for the largest size string. This will prevent the std::string from reallocating. Reallocations are expensive.
Don't erase
Do you really need to erase characters in the string?
By using starting and ending indices, you overwrite existing letters without having to erase the entire string.
Partial erasures are expensive. Complete erasures are not.
For more assistance, post to Code Review at StackExchange.
This is a classic recursion problem. However, a naive implementation of the recursion would lead to an exponential number of re-evaluations of a previously computed function value. Using a simpler example for illustration, compare the runtime of the following two functions for a reasonably large N. Let's not worry about the int overflowing.
int RecursiveFib(int N)
{
if(N<=1)
return 1;
return RecursiveFib(N-1) + RecursiveFib(N-2);
}
int IterativeFib(int N)
{
if(N<=1)
return 1;
int a_0 = 1, a_1 = 1;
for(int i=2;i<=N;i++)
{
int temp = a_1;
a_1 += a_0;
a_0 = temp;
}
return a_1;
}
You would need to follow a similar approach here. There are two common ways of approaching the problem - dynamic programming and memoization. Memoization is the easiest way of modifying your approach. Below is a memoized fibonacci implementation to illustrate how your implementation can be sped up.
int MemoFib(int N)
{
static vector<int> memo(N + 1, -1);
if(N<=1)
return 1;
int& res = memo[N];
if(res!=-1)
return res;
return res = MemoFib(N-1) + MemoFib(N-2);
}
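Applied to the question's problem, a natural memo key is the pair of positions (index into the bit string, index into the letter string), which avoids the integer-overflow worry about encoding whole bit strings. A sketch under that assumption (function names are mine, and it has not been run against CodeEval's hidden tests):
#include <string>
#include <vector>

// Can bits[i..] be transformed into chars[j..] under the two rules?
// memo[i][j] is -1 (unknown), 0 (no) or 1 (yes).
bool CanTransform(const std::string& bits, std::size_t i,
                  const std::string& chars, std::size_t j,
                  std::vector<std::vector<signed char>>& memo)
{
    if (i == bits.size() || j == chars.size())
        return i == bits.size() && j == chars.size();
    signed char& m = memo[i][j];
    if (m != -1)
        return m != 0;
    const char c = chars[j];
    bool result = false;
    // A 0-bit may only become a run of 'A'; a 1-bit may become 'A's or 'B's.
    if (bits[i] == '1' || c == 'A') {
        // Either this character ends the current run (consume the bit) ...
        result = CanTransform(bits, i + 1, chars, j + 1, memo);
        // ... or the run continues with another identical character.
        if (!result && j + 1 < chars.size() && chars[j + 1] == c)
            result = CanTransform(bits, i, chars, j + 1, memo);
    }
    m = result ? 1 : 0;
    return result;
}

bool TransformLineMemo(const std::string& bits, const std::string& chars)
{
    std::vector<std::vector<signed char>> memo(
        bits.size(), std::vector<signed char>(chars.size(), -1));
    return CanTransform(bits, 0, chars, 0, memo);
}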
Your failure message is "Aborted after 10 seconds" -- implying that the program was working fine as far as it went, but it took too long. This is understandable, given that your recursive program takes exponentially more time for longer input strings -- it works fine for the short (2-8 digit) strings, but will take a huge amount of time for 100+ digit strings (which the test allows for). To see how your running time goes up, you should construct yourself some longer test inputs and see how long they take to run. Try things like
0000000011111111 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBAAAAAAAA
00000000111111110000000011111111 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBAAAAAAAA
and longer. You need to be able to handle up to 150 digits and 1000 letters.
At CodeEval, you can submit a "solution" that just outputs what the input is, and do that to gather their test set. They may have variations so you may wish to submit it a few times to gather more samples. Some of them are too difficult to solve manually though... the ones you can solve manually will also run very quickly at CodeEval too, even with inefficient solutions, so there's that to consider.
Anyway, I did this same problem at CodeEval (using VB of all things), and my solution recursively looked for the "next index" of both A and B depending on what the "current" index is for where I was in a translation (after checking stoppage conditions first thing in the recursive method). I did not use memoization but that might've helped speed it up even more.
PS, I have not run your code, but it does seem curious that the recursive method contains a while loop within which the recursive method is called... since it's already recursive and should therefore encompass every scenario, is that while() loop necessary?

Alternatives to standard functions of C++ to get speed optimization

Just to clarify that I also think the title is a bit silly. We all know that most built-in functions of the language are really well written and fast (some are even written in assembly). Still, maybe there is some advice for my situation. I have a small project which demonstrates the work of a search engine. In the indexing phase, I have a filter method to filter out unnecessary things from the keywords. It's here:
bool Indexer::filter(string &keyword)
{
// Remove all characters defined in isGarbage method
keyword.resize(std::remove_if(keyword.begin(), keyword.end(), isGarbage) - keyword.begin());
// Transform all characters to lower case
std::transform(keyword.begin(), keyword.end(), keyword.begin(), ::tolower);
// After filtering, if the keyword is empty or it is contained in stop words list, mark as invalid keyword
if (keyword.size() == 0 || stopwords_.find(keyword) != stopwords_.end())
return false;
return true;
}
At first sight, these functions (all are member functions of STL containers or standard functions) are supposed to be fast and not take much time in the indexing phase. But after profiling with Valgrind, the inclusive cost of this filter is ridiculously high: 33.4%. Three standard functions in this filter take most of the time for that percentage: std::remove_if takes 6.53%, std::set::find takes 15.07% and std::transform takes 7.71%.
So if there is anything I can do (or change) to reduce the time spent in this filter (like parallelizing or something like that), please give me your advice. Thanks in advance.
UPDATE: Thanks for all your suggestions. So in brief, what I need to do is:
1) Merge tolower and remove_if into one pass by writing my own loop.
2) Use unordered_set instead of set for a faster find.
Thus I've chosen Mark B's as the right answer.
First, are you certain that optimization and inlining are enabled when you compile?
Assuming that's the case, I would first try writing my own transformer that combines removing garbage and lower-casing into one step to prevent iterating over the keyword that second time.
There's not a lot you can do about the find without using a different container such as unordered_set as suggested in a comment.
Is it possible that, for your application, the filtering really just is a CPU-intensive part of the operation?
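A sketch of what such a combined transformer could look like, together with the unordered_set switch suggested in the comments; isGarbage is the question's own predicate, everything else here is illustrative and not from the original project:
#include <cctype>
#include <string>
#include <unordered_set>

bool isGarbage(char c);  // the question's predicate, assumed declared elsewhere

bool filter(std::string &keyword, const std::unordered_set<std::string> &stopwords)
{
    std::string::size_type out = 0;
    for (char ch : keyword) {
        if (!isGarbage(ch))          // single pass: filter and lower-case together
            keyword[out++] = static_cast<char>(
                std::tolower(static_cast<unsigned char>(ch)));
    }
    keyword.resize(out);
    return !keyword.empty() && stopwords.find(keyword) == stopwords.end();
}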
If you use a boost filter iterator you can merge the remove_if and transform into one, something like (untested):
keyword.erase(std::transform(boost::make_filter_iterator(!boost::bind(isGarbage), keyword.begin(), keyword.end()),
boost::make_filter_iterator(!boost::bind(isGarbage), keyword.end(), keyword.end()),
keyword.begin(),
::tolower), keyword.end());
This is assuming you want the side effect of modifying the string to still be visible externally; otherwise pass by const reference instead and just use count_if and a predicate to do it all in one.
You can build a hierarchical data structure (basically a tree) for the list of stop words that makes "in-place" matching possible. For example, if your stop words are SELECT, SELECTION, SELECTED you might build a tree:
|- (other/empty accept)
\- S-E-L-E-C-T- (empty, fail)
|- (other, accept)
|- I-O-N (fail)
\- E-D (fail)
You can traverse a tree structure like that simultaneously whilst transforming and filtering without any modifications to the string itself. In reality you'd want to compact the multi-character runs into a single node in the tree (probably).
You can build such a data structure fairly trivially with something like:
#include <iostream>
#include <map>
#include <memory>
class keywords {
struct node {
node() : end(false) {}
std::map<char, std::unique_ptr<node>> children;
bool end;
} root;
void add(const std::string::const_iterator& stop, const std::string::const_iterator c, node& n) {
if (!n.children[*c])
n.children[*c] = std::unique_ptr<node>(new node);
if (stop == c+1) {
n.children[*c]->end = true;
return;
}
add(stop, c+1, *n.children[*c]);
}
public:
void add(const std::string& str) {
add(str.end(), str.begin(), root);
}
bool match(const std::string& str) const {
const node *current = &root;
std::string::size_type pos = 0;
while(current && pos < str.size()) {
const std::map<char,std::unique_ptr<node>>::const_iterator it = current->children.find(str[pos++]);
current = it != current->children.end() ? it->second.get() : nullptr;
}
if (!current) {
return false;
}
return current->end;
}
};
int main() {
keywords list;
list.add("SELECT");
list.add("SELECTION");
list.add("SELECTED");
std::cout << list.match("TEST") << std::endl;
std::cout << list.match("SELECT") << std::endl;
std::cout << list.match("SELECTOR") << std::endl;
std::cout << list.match("SELECTED") << std::endl;
std::cout << list.match("SELECTION") << std::endl;
}
This worked as you'd hope and gave:
0
1
0
1
1
Which then just needs to have match() modified to call the transformation and filtering functions appropriately e.g.:
const char c = str[pos++];
if (filter(c)) {
const std::map<char,std::unique_ptr<node>>::const_iterator it = current->children.find(transform(c));
}
You can optimise this a bit (compact long single string runs) and make it more generic, but it shows how doing everything in-place in one pass might be achieved and that's the most likely candidate for speeding up the function you showed.
(Benchmark changes of course)
If a call to isGarbage() does not require synchronization, then parallelization should be the first optimization to consider (given of course that filtering one keyword is a big enough task, otherwise parallelization should be done one level higher). Here's how it could be done - in one pass through the original data, multi-threaded using Threading Building Blocks:
bool isGarbage(char c) {
return c == 'a';
}
struct RemoveGarbageAndLowerCase {
std::string result;
const std::string& keyword;
RemoveGarbageAndLowerCase(const std::string& keyword_) : keyword(keyword_) {}
RemoveGarbageAndLowerCase(RemoveGarbageAndLowerCase& r, tbb::split) : keyword(r.keyword) {}
void operator()(const tbb::blocked_range<size_t> &r) {
for(size_t i = r.begin(); i != r.end(); ++i) {
if(!isGarbage(keyword[i])) {
result.push_back(tolower(keyword[i]));
}
}
}
void join(RemoveGarbageAndLowerCase &rhs) {
result.insert(result.end(), rhs.result.begin(), rhs.result.end());
}
};
void filter_garbage(std::string &keyword) {
RemoveGarbageAndLowerCase res(keyword);
tbb::parallel_reduce(tbb::blocked_range<size_t>(0, keyword.size()), res);
keyword = res.result;
}
int main() {
std::string keyword = "ThIas_iS:saome-aTYpe_Ofa=MoDElaKEYwoRDastrang";
filter_garbage(keyword);
std::cout << keyword << std::endl;
return 0;
}
Of course, the final code could be improved further by avoiding data copying, but the goal of the sample is to demonstrate that it's an easily threadable problem.
You might make this faster by making a single pass through the string, ignoring the garbage characters. Something like this (pseudo-code):
std::string normalizedKeyword;
normalizedKeyword.reserve(keyword.size())
for (auto p = keyword.begin(); p != keyword.end(); ++p)
{
char ch = *p;
if (!isGarbage(ch))
normalizedKeyword.push_back(tolower(ch));
}
// then search for normalizedKeyword in stopwords
This should eliminate the overhead of std::remove_if, although there is a memory allocation and some new overhead of copying characters to normalizedKeyword.
The problem here isn't the standard functions, it's how you're using them. You are making multiple passes over your string when you only need one.
What you need to do probably can't be done with the standard algorithms straight up; you'll need help from Boost or to roll your own (one possible Boost route is sketched below).
You should also carefully consider whether resizing the string is actually necessary. Yeah, you might save some space, but it's going to cost you in speed; the resize alone may account for quite a bit of your operation's expense.
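For what it's worth, the Boost route could look like this: Boost.Range adaptors fuse the filtering and the lower-casing into a single pass over the string (a sketch; isGarbage() is the same predicate used elsewhere in this thread, and explicit functors are used because transformed wants a result_type):
#include <boost/range/adaptor/filtered.hpp>
#include <boost/range/adaptor/transformed.hpp>
#include <boost/range/algorithm/copy.hpp>
#include <cctype>
#include <iterator>
#include <string>

struct NotGarbage {
    bool operator()(char c) const { return !isGarbage(c); }
};
struct ToLower {
    typedef char result_type; // Boost.Range needs to know the transformed value type
    char operator()(char c) const { return (char)std::tolower((unsigned char)c); }
};

std::string normalize(const std::string &keyword)
{
    using namespace boost::adaptors;
    std::string out;
    out.reserve(keyword.size());
    boost::copy(keyword | filtered(NotGarbage()) | transformed(ToLower()),
                std::back_inserter(out));
    return out;
}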
Here's a way to combine the garbage removal and lower-casing into a single step. It won't work for multi-byte encoding such as UTF-8, but neither did your original code. I assume 0 and 1 are both garbage values.
bool Indexer::filter(string &keyword)
{
static char replacements[256] = {1}; // replacements[0] == 1 marks the table as not yet built; the rest are zero-initialized
if (replacements[0] == 1)
{
for (int i = 0; i < 256; ++i)
replacements[i] = isGarbage(i) ? 0 : ::tolower(i);
}
string::iterator tail = keyword.begin();
for (string::iterator it = keyword.begin(); it != keyword.end(); ++it)
{
unsigned int index = (unsigned int) *it & 0xff;
if (replacements[index])
*tail++ = replacements[index];
}
keyword.resize(tail - keyword.begin());
    // After filtering, if the keyword is empty or it is contained in stop words list, mark as invalid keyword
    if (keyword.size() == 0 || stopwords_.find(keyword) != stopwords_.end())
        return false;
    return true;
}
The largest part of your timing is the std::set::find, so I'd also try std::unordered_set to see if it improves things.
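If stopwords_ is currently declared as a std::set<std::string>, the change is only in the declaration; find() then costs one hash plus an equality check on average instead of O(log n) string comparisons:
#include <unordered_set>

std::unordered_set<std::string> stopwords_; // was (assumed): std::set<std::string> stopwords_;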
I would implement it with lower-level C functions, something like this (I haven't checked that it compiles), doing the replacement in place and not resizing the keyword.
Instead of using a set for garbage characters, I'd add a static table of all 256 characters (yeah, it will only work for ASCII), with 0 for all characters that are OK and 1 for those that should be filtered out. Something like:
static const char GARBAGE[256] = { 1, 1, 1, 1, 1, ...., 0, 0, 0, 0, 1, 1, ... };
Then, for each character at offset pos in a const char *str, you can just check if (GARBAGE[(unsigned char)str[pos]] == 1); (the cast keeps the index non-negative when char is signed).
This is more or less what an unordered set does, but with far fewer instructions. stopwords_ should be an unordered set if it isn't one already.
now the filtering function (I'm assuming ascii/utf8 and null terminated strings here):
bool Indexer::filter(char *keyword)
{
char *head = keyword;
char *tail = keyword;
while (*head != '\0') {
//copy non-garbage chars from head to tail, lowercasing them while at it
if (!GARBAGE[(unsigned char)*head]) {
*tail = tolower((unsigned char)*head);
++tail; //we only advance tail past non-garbage
}
//head always advances
++head;
}
*tail = '\0';
// After filtering, if the keyword is empty or it is contained in stop words list, mark as invalid keyword
if (tail == keyword || stopwords_.find(keyword) != stopwords_.end())
return false;
return true;
}

C++ Faster String Parsing?

I am trying to write a parser to parse this dataset: http://research.microsoft.com/en-us/projects/mslr/
I wrote the code below, but it is too slow: it takes almost a full minute to parse a few hundred megabytes of data.
I ran a profiler and it said most of the time was spent in boost::split (30%) and boost::lexical_cast (40%). Any suggestions on how to speed up the code?
Thanks.
std::ifstream train("letor/Fold1/train.txt", std::ifstream::in | std::ifstream::binary);
m_train.clear();
for (std::size_t i = 0; i < 10000 && train.good(); i++) {
std::size_t query;
boost::numeric::ublas::vector<double> features;
double label;
features.resize(get_feature_size(), false);
// 2 qid:1 1:3 2:3 3:0 4:0 5:3 6:1 7:1 8:0 9:0 10:1 11:156 12:4 13:0 14:7 15:167 16:6.931275 17:22.076928 18:19.673353 19:22.255383 20:6.926551 21:3 22:3 23:0 24:0 25:6 26:1 27:1 28:0 29:0 30:2 31:1 32:1 33:0 34:0 35:2 36:1 37:1 38:0 39:0 40:2 41:0 42:0 43:0 44:0 45:0 46:0.019231 47:0.75000 48:0 49:0 50:0.035928 51:0.00641 52:0.25000 53:0 54:0 55:0.011976 56:0.00641 57:0.25000 58:0 59:0 60:0.011976 61:0.00641 62:0.25000 63:0 64:0 65:0.011976 66:0 67:0 68:0 69:0 70:0 71:6.931275 72:22.076928 73:0 74:0 75:13.853103 76:1.152128 77:5.99246 78:0 79:0 80:2.297197 81:3.078917 82:8.517343 83:0 84:0 85:6.156595 86:2.310425 87:7.358976 88:0 89:0 90:4.617701 91:0.694726 92:1.084169 93:0 94:0 95:2.78795 96:1 97:1 98:0 99:0 100:1 101:1 102:1 103:0 104:0 105:1 106:12.941469 107:20.59276 108:0 109:0 110:16.766961 111:-18.567793 112:-7.760072 113:-20.838749 114:-25.436074 115:-14.518523 116:-21.710022 117:-21.339609 118:-24.497864 119:-27.690319 120:-20.203779 121:-15.449379 122:-4.474452 123:-23.634899 124:-28.119826 125:-13.581932 126:3 127:62 128:11089534 129:2 130:116 131:64034 132:13 133:3 134:0 135:0 136:0
std::string line;
getline(train, line);
boost::algorithm::trim(line);
std::vector<std::string> tokens;
boost::split(tokens, line, boost::is_any_of(" "));
assert(tokens.size() == 138);
label = boost::lexical_cast<double>(tokens[0]);
query = boost::lexical_cast<std::size_t>(tokens[1].substr(tokens[1].find(":") + 1, tokens[1].size()));
for (std::size_t i = 2; i < tokens.size(); i++) {
features[i - 2] = boost::lexical_cast<double>(tokens[i].substr(tokens[i].find(":") + 1, tokens[i].size()));
}
m_train.push_back(query, features, label);
train.peek();
}
If I understand your format correctly, each line starts with a number followed by colon-separated pairs. The first pair on each line has a special meaning and consists of a std::string and a size_t, while all other pairs consist of an index (which is ignored) and a double. There is no reason to use Boost for this at all: use IOStreams directly:
std::streamsize max(std::numeric_limits<std::streamsize>::max());
std::string line;
std::istringstream in;
for (std::size_t i(0); i < 1000 && std::getline(train, line); ++i) {
double label;
size_t query;
in.clear();
in.str(line);
if ((in >> label).ignore(max, ':') >> query) {
std::vector<double> features; // ublas::vector has no push_back; copy into one afterwards if needed
double feature;
while (in.ignore(max, ':') >> feature) {
features.push_back(feature);
}
assert(features.size() == 136);
m_train.push_back(query, features, label);
}
}
Note that this code is careful to check that reads are actually successful. Your code checked ahead of time whether the read would succeed, but that doesn't work reliably: for example, if your last line consisted of just a spurious space, your assert() would trigger, which is hardly what you want.
Chopping the string up multiple times requires memory allocations and deallocations. You could go with the good old strtod and char pointers to avoid splitting the string at all; that should take care of the 30% spent in string tokenization. As for the 40% spent converting strings to doubles, that probably cannot be improved much.
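A rough sketch of that approach for one line of this dataset (assuming the "label qid:N 1:v ... 136:v" layout and omitting error handling; the values go into a plain std::vector, which can be copied into a ublas vector afterwards if needed):
#include <cstdlib>
#include <cstring>
#include <vector>

void parse_line(const char *line, double &label, std::size_t &query,
                std::vector<double> &features)
{
    char *p = 0;
    label = std::strtod(line, &p);            // leading relevance label
    p = std::strchr(p, ':') + 1;              // skip " qid:"
    query = std::strtoul(p, &p, 10);
    features.clear();
    while ((p = std::strchr(p, ':')) != 0) {  // each remaining "index:value" pair
        features.push_back(std::strtod(p + 1, &p));
    }
}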
If you want to go for a quick, dirty, and amazingly ugly but probably fastest C-only solution, try this. The test below completed in about 35 seconds on an E8300 2.83 GHz CPU. It assumes that all lines have exactly the same format.
#include "stdio.h"
void main ()
{
const char* test_str = "2 qid:1 1:3 2:3 3:0 4:0 5:3 6:1 7:1 8:0 9:0 10:1 11:156 12:4 13:0 14:7 15:167 16:6.931275 17:22.076928 18:19.673353 19:22.255383 20:6.926551 21:3 22:3 23:0 24:0 25:6 26:1 27:1 28:0 29:0 30:2 31:1 32:1 33:0 34:0 35:2 36:1 37:1 38:0 39:0 40:2 41:0 42:0 43:0 44:0 45:0 46:0.019231 47:0.75000 48:0 49:0 50:0.035928 51:0.00641 52:0.25000 53:0 54:0 55:0.011976 56:0.00641 57:0.25000 58:0 59:0 60:0.011976 61:0.00641 62:0.25000 63:0 64:0 65:0.011976 66:0 67:0 68:0 69:0 70:0 71:6.931275 72:22.076928 73:0 74:0 75:13.853103 76:1.152128 77:5.99246 78:0 79:0 80:2.297197 81:3.078917 82:8.517343 83:0 84:0 85:6.156595 86:2.310425 87:7.358976 88:0 89:0 90:4.617701 91:0.694726 92:1.084169 93:0 94:0 95:2.78795 96:1 97:1 98:0 99:0 100:1 101:1 102:1 103:0 104:0 105:1 106:12.941469 107:20.59276 108:0 109:0 110:16.766961 111:-18.567793 112:-7.760072 113:-20.838749 114:-25.436074 115:-14.518523 116:-21.710022 117:-21.339609 118:-24.497864 119:-27.690319 120:-20.203779 121:-15.449379 122:-4.474452 123:-23.634899 124:-28.119826 125:-13.581932 126:3 127:62 128:11089534 129:2 130:116 131:64034 132:13 133:3 134:0 135:0 136:0";
const char* format = "%lf qid:%lf 1:%lf 2:%lf 3:%lf 4:%lf 5:%lf 6:%lf 7:%lf 8:%lf 9:%lf 10:%lf 11:%lf 12:%lf 13:%lf 14:%lf 15:%lf 16:%lf 17:%lf 18:%lf 19:%lf 20:%lf 21:%lf 22:%lf 23:%lf 24:%lf 25:%lf 26:%lf 27:%lf 28:%lf 29:%lf 30:%lf 31:%lf 32:%lf 33:%lf 34:%lf 35:%lf 36:%lf 37:%lf 38:%lf 39:%lf 40:%lf 41:%lf 42:%lf 43:%lf 44:%lf 45:%lf 46:%lf 47:%lf 48:%lf 49:%lf 50:%lf 51:%lf 52:%lf 53:%lf 54:%lf 55:%lf 56:%lf 57:%lf 58:%lf 59:%lf 60:%lf 61:%lf 62:%lf 63:%lf 64:%lf 65:%lf 66:%lf 67:%lf 68:%lf 69:%lf 70:%lf 71:%lf 72:%lf 73:%lf 74:%lf 75:%lf 76:%lf 77:%lf 78:%lf 79:%lf 80:%lf 81:%lf 82:%lf 83:%lf 84:%lf 85:%lf 86:%lf 87:%lf 88:%lf 89:%lf 90:%lf 91:%lf 92:%lf 93:%lf 94:%lf 95:%lf 96:%lf 97:%lf 98:%lf 99:%lf 100:%lf 101:%lf 102:%lf 103:%lf 104:%lf 105:%lf 106:%lf 107:%lf 108:%lf 109:%lf 110:%lf 111:%lf 112:%lf 113:%lf 114:%lf 115:%lf 116:%lf 117:%lf 118:%lf 119:%lf 120:%lf 121:%lf 122:%lf 123:%lf 124:%lf 125:%lf 126:%lf 127:%lf 128:%lf 129:%lf 130:%lf 131:%lf 132:%lf 133:%lf 134:%lf 135:%lf 136:%lf";
double data[138];
for (int i = 0; i < 500000; i++)
{
sscanf(test_str, format,
data+0, data+1, data+2, data+3, data+4, data+5,
data+6, data+7, data+8, data+9, data+10, data+11,
data+12, data+13, data+14, data+15, data+16, data+17,
data+18, data+19, data+20, data+21, data+22, data+23,
data+24, data+25, data+26, data+27, data+28, data+29,
data+30, data+31, data+32, data+33, data+34, data+35,
data+36, data+37, data+38, data+39, data+40, data+41,
data+42, data+43, data+44, data+45, data+46, data+47,
data+48, data+49, data+50, data+51, data+52, data+53,
data+54, data+55, data+56, data+57, data+58, data+59,
data+60, data+61, data+62, data+63, data+64, data+65,
data+66, data+67, data+68, data+69, data+70, data+71,
data+72, data+73, data+74, data+75, data+76, data+77,
data+78, data+79, data+80, data+81, data+82, data+83,
data+84, data+85, data+86, data+87, data+88, data+89,
data+90, data+91, data+92, data+93, data+94, data+95,
data+96, data+97, data+98, data+99, data+100, data+101,
data+102, data+103, data+104, data+105, data+106, data+107,
data+108, data+109, data+110, data+111, data+112, data+113,
data+114, data+115, data+116, data+117, data+118, data+119,
data+120, data+121, data+122, data+123, data+124, data+125,
data+126, data+127, data+128, data+129, data+130, data+131,
data+132, data+133, data+134, data+135, data+136, data+137);
}
}
C99 has vsscanf, which would make this look better. The format string can be pre-generated dynamically once, before the loop, depending on the dataset format (a sketch follows below). Make sure to check that the return value of sscanf is exactly 138 in this sample.
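A sketch of pre-generating that format string (std::snprintf and std::string only; the feature count is a parameter so it can follow the dataset):
#include <cstdio>
#include <string>

std::string make_format(int nfeatures)
{
    std::string fmt = "%lf qid:%lf";
    char pair[32];
    for (int i = 1; i <= nfeatures; ++i) {
        std::snprintf(pair, sizeof pair, " %d:%%lf", i); // e.g. " 17:%lf"
        fmt += pair;
    }
    return fmt;
}
// usage: const std::string fmt = make_format(136); then sscanf(line, fmt.c_str(), data+0, ...);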
EDIT: Dietmar Kühl's solution looks clean and should not be significantly slower, if at all, than a single sscanf. It is best to use the code above only as a benchmark reference.
It's hard to say without some experimentation, but...
I'd start by dropping boost::split. It's making a std::vector<std::string>, which in turn involves a lot of dynamic allocation and copying. What you probably want to do is write some sort of iterator over a string in which ++ advances to the next token, and * returns a pair of iterators defining the current token. This avoids the intermediate data structure.
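A minimal sketch of that idea (TextRange here is just a begin/end pair over the line; both names are illustrative, not from any library):
#include <string>

struct TextRange {
    std::string::const_iterator first, last;
    std::string::const_iterator begin() const { return first; }
    std::string::const_iterator end() const { return last; }
    bool empty() const { return first == last; }
};

class TokenIterator {
    std::string::const_iterator cur_, end_;
    TextRange tok_;
    void scan() {
        while (cur_ != end_ && *cur_ == ' ') ++cur_; // skip delimiters
        tok_.first = cur_;
        while (cur_ != end_ && *cur_ != ' ') ++cur_; // walk over the token
        tok_.last = cur_;
    }
public:
    explicit TokenIterator(const std::string &line)
        : cur_(line.begin()), end_(line.end()) { scan(); }
    bool done() const { return tok_.empty(); }
    const TextRange &operator*() const { return tok_; }
    TokenIterator &operator++() { scan(); return *this; }
};
// usage: for (TokenIterator it(line); !it.done(); ++it) { /* use *it */ }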
You can then define a << operator on this pair, something like:
std::ostream&
operator<<( std::ostream& dest, TextRange const& token )
{
std::copy( token.begin(), token.end(),
std::ostream_iterator<char>( dest ) );
return dest;
}
Reducing the time used by boost::lexical_cast will be more difficult. Basically, boost::lexical_cast uses << to insert the source into a std::stringstream and >> to extract it. You can write something similar, but using your own streambuf based on the pair of iterators (very simple), so that, given the pair of iterators, 1) you don't have to use << at all, since the pair of iterators becomes the istream, and 2) you completely avoid creating any intermediate std::string. (You do not want to reimplement the conversion routines in std::istream.)
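A minimal sketch of such a streambuf (it only points the get area at an existing character range, so nothing is copied; the caller must keep the underlying string alive while extracting):
#include <istream>
#include <streambuf>

class RangeBuf : public std::streambuf {
public:
    RangeBuf(const char *begin, const char *end) {
        char *b = const_cast<char *>(begin); // the get area is never written to here
        setg(b, b, const_cast<char *>(end));
    }
};

// usage sketch, for a non-empty TextRange tok over a contiguous std::string:
//   RangeBuf buf(&*tok.begin(), &*tok.begin() + (tok.end() - tok.begin()));
//   std::istream in(&buf);
//   double value;
//   in >> value; // no intermediate std::string, no std::stringstream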