My old trimming function:

string TailTrimString (const string & sSource, const char * chars) {
    size_t End = sSource.find_last_not_of(chars);
    if (End == string::npos) {
        // the string consists only of characters from "chars"
        return "";
    }
    if (End == sSource.size() - 1) {
        // nothing to trim
        return sSource;
    }
    return sSource.substr(0, End + 1);
}
I decided to replace it with Boost and wrote the trivial:

string TailTrimString (const string & sSource, const char * chars) {
    return boost::algorithm::trim_right_copy_if(sSource, boost::algorithm::is_any_of(chars));
}
And I was amazed to find out that the new function works much slower.
I've done some profiling, and I see that the function is_any_of is very slow.
Is it possible that boost's implementation works slower than my quite straightforward implementation? Is there anything I should use instead of is_any_of in order to improve the performance?
I also found a discussion on this matter in the boost mailing list, but I am still not sure how I can improve the performance of my code.
The boost version that I use is 1.38, which is quite old, but I guess this code hasn't changed much since then.
Thank you.
Is it possible that boost's implementation works slower than my quite straightforward implementation?
Of course.
Is there anything I should use instead of is_any_of in order to improve the performance?
Yeah -- your original code. You said nothing about it having a defect, or the reason why you re-implemented it using boost. If there was no defect in the original code, then there was no valid reason to chuck the original implementation.
Introducing Boost to a codebase makes sense. It brings a lot of functionality that can be helpful. But gutting a function for the sole purpose of using a new technology is a big rookie mistake.
EDIT:
In response to your comment:
I still don't understand why boost's performance is worse.
A hand-crafted function that is designed to do one specific job for one specific application will often be faster than a generic solution. Boost is a great library of generic tools that can save a lot of programming and a lot of defects. But it's generic: you may only need to trim your string in a specific way, but Boost handles everything. That takes time.
In answer to your question about the relative performance: std::string::find_last_not_of wraps C string routines (such as strcspn), and these are very fast. However, boost::algorithm::is_any_of uses (or probably used to use; I'd hazard that in later versions this has changed!) a std::set for the set of characters to look for, and does a lookup in that set for each character - which will not be anywhere near as fast!
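If you do want something faster than the per-character set lookup, a common trick is a 256-entry table built once from the trim set. A minimal sketch of the idea (an illustration only, not Boost's implementation; TailTrimFast is a made-up name):

#include <string>

std::string TailTrimFast(const std::string & sSource, const char * chars)
{
    // build the lookup once: true for every character that should be trimmed
    bool isTrimChar[256] = {};
    for (const unsigned char * p = (const unsigned char *)chars; *p; ++p)
        isTrimChar[*p] = true;

    // walk backwards while the current character is in the trim set
    std::size_t end = sSource.size();
    while (end > 0 && isTrimChar[(unsigned char)sSource[end - 1]])
        --end;
    return sSource.substr(0, end);
}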
EDIT: just to add an echo, your function works, it's not broken, it's not slow, so don't bother changing it...
In answer to your question about the relative performance:
You are using boost::algorithm::trim_right_copy_if which, according to the name, creates a copy of the input before trimming. Try using boost::algorithm::trim_right_if to see if that has better performance. This function will perform the operation in-place instead of on a new string.
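For example, a sketch of the wrapper using the in-place variant (note that trim_right_if mutates its argument, so the wrapper still makes one copy somewhere - here via the by-value parameter):

#include <boost/algorithm/string/trim.hpp>
#include <boost/algorithm/string/classification.hpp>
#include <string>

std::string TailTrimString(std::string sSource, const char * chars)
{
    // trims the by-value copy in place, then returns it
    boost::algorithm::trim_right_if(sSource, boost::algorithm::is_any_of(chars));
    return sSource;
}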
Related
istrstream was perfect for my needs - basically, take a fixed char buffer, and give me a simple way to extract lines with getline() and test for eof().
I'm switching our projects to C++17 compliance - which has deprecated istrstream - apparently because there are too many C++ programmers who cannot fathom fixed-buffer memory management (are you serious?!).
At any rate, the istringstream provides the same use semantics, but it imposes the need to now copy the entire fixed character buffer at construction time.
This is an anti-pattern.
What I am looking for is either a way to use a string_view in place of a string for the istringstream, or alternatively a better replacement for stringstream which itself handles an externally managed fixed buffer (it need only point into it; it never needs to worry about managing that resource, just as strstream did).
Currently, in VS 2017, this is illegal, and if I understand things correctly, it is illegal everywhere in the current state of the art of C++ (I'm sure you'll correct me if I'm wrong!):
std::string_view raw_view(reinterpret_cast<const char *>(raw_buffer.get()), raw_buffer.size());
std::istringstream raw_stream(raw_view);
So - ideas?
Note: Peter Sommerlad has a proposal for this exact idea before the C++ standards body:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0448r1.pdf
Continue using istrstream for the time being. It likely won't be removed until either P0448 (using std::span<char> as the source/destination of a stream buffer) or P0408 (the ability to move data into/out of stringstreams) is adopted by the standard. Either of those would serve your needs well.
That being said, if all you're trying to do is get substrings between \ns, it would be far more efficient (even with the above proposals) to just use a regex search. Or just a regular search, since you're just looking for \n. That would give you a pair of iterators that represents a line. Using iostreams for line-by-line processing of an already-loaded character buffer is overkill and will never be as efficient as the alternative.
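A minimal sketch of that plain-search approach, splitting an already-loaded buffer on '\n' with std::find (raw_buffer, raw_size and process() are illustrative names, not from the question):

#include <algorithm>
#include <cstddef>
#include <string_view>

void process(std::string_view line);   // hypothetical per-line handler

void for_each_line(const char * raw_buffer, std::size_t raw_size)
{
    const char * begin = raw_buffer;
    const char * end = raw_buffer + raw_size;
    while (begin != end) {
        // find the end of the current line (or the end of the buffer)
        const char * eol = std::find(begin, end, '\n');
        process(std::string_view(begin, std::size_t(eol - begin)));
        begin = (eol == end) ? end : eol + 1;   // skip the '\n'
    }
}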
Many other posts, like "Read whole ASCII file into C++ std::string", explain what some of the options are, but do not describe the pros and cons of the various methods in any depth. I want to know why one method is preferable over another.
All of these use std::fstream to read the file into a std::string. I am unsure of the costs and benefits of each method. Let's assume this is for the common case where the files being read are known to be of some smallish size that memory can easily accommodate; clearly reading a multi-terabyte file into memory is a bad idea no matter how you do it.
The most common way, after a few Google searches, to read a whole file into a std::string involves using std::getline and appending a newline character after each line. This seems needless to me, but is there some performance or compatibility reason why this is ideal?
std::string Results;
std::string Line;
std::ifstream ResultReader("file.txt");
while (std::getline(ResultReader, Line))
{
    Results += Line;
    Results.push_back('\n');
}
Another way I pieced together is to change the getline delimiter so it is something not in the file. The EOF char seems unlikely to be in the middle of the file, so it seems a likely candidate. This includes a cast, so there is at least one reason not to do it, but this does read the file at once with no string concatenation. Presumably there is still some cost for the delimiter checks. Are there any other good reasons not to do this?
std::string Results;
std::ifstream ResultReader("file.txt");
std::getline(ResultReader, Results, (char)std::char_traits<char>::eof());
The cast means that systems which define std::char_traits<char>::eof() as something other than -1 might have problems. Is this a practical reason not to choose this over other methods that use std::getline and string::push_back('\n')?
How do these compare to other ways of reading the file at once, like in this question: Read whole ASCII file into C++ std::string
std::ifstream ResultReader("file.txt");
std::string Results((std::istreambuf_iterator<char>(ResultReader)),
std::istreambuf_iterator<char>());
It would seem this would be best. It offloads almost all the work onto the standard library, which ought to be heavily optimized for the given platform. I see no need for checks other than stream validity and the end of the file. Is this ideal, or are there problems with it that are unseen?
Does the standard or do details of some implementation provide reasons to prefer some method over another? Have I missed some method that might prove ideal in a wide variety of circumstances?
What is a simplest, most idiomatic, best performing and standard compliant way of reading a whole file into an std::string?
EDIT - 2
This question has prompted me to write a small suite of benchmarks. They are MIT license and available on github at: https://github.com/Sqeaky/CppFileToStringExperiments
Fastest - TellSeekRead and CTellSeekRead - These have the system provide an easy way to get the size, and read the file in one go.
Faster - Getline Appending and Eof - The checking of chars does not seem to impose any cost.
Fast - RdbufMove and Rdbuf - The std::move seems to make no difference in release.
Slow - Iterator, BackInsertIterator and AssignIterator - Something is wrong with iterators and input streams. They work great in memory, but not here. That said, some of these are faster than others.
I have added every method suggested so far, including those in the links. I would appreciate it if someone could run this on Windows and with other compilers. I currently do not have access to a machine with NTFS, and it has been noted that this, along with compiler details, could be important.
As for measuring simplicity and idiomatic-ness, how do we measure these objectively? Simplicity seems doable, perhaps using something like LOC and cyclomatic complexity, but how idiomatic something is seems purely subjective.
What is a simplest, most idiomatic, best performing and standard compliant way of reading a whole file into an std::string?
Those are pretty much contradictory requests; one is likely to lessen the other. Simpler code won't be the fastest, or the most idiomatic.
After exploring this area for a while I've come to some conclusions:
1) the biggest performance penalty is the IO action itself - the fewer IO actions taken, the faster the code
2) memory allocations are also quite expensive, but not as expensive as the IO
3) reading as binary is faster than reading as text
4) using the OS API will probably be faster than C++ streams
5) std::ios_base::sync_with_stdio doesn't really affect the performance; it's an urban legend.
Using std::getline is probably not the best choice if performance is needed, for these reasons: it will make N IO actions and N allocations for N lines.
A compromise which is fast, standard and elegant is to get the file size, allocate all the memory in one go, then read the file in one go:

std::ifstream fileReader(<your path here>, std::ios::binary | std::ios::ate);
if (fileReader) {
    auto fileSize = fileReader.tellg();
    fileReader.seekg(0, std::ios::beg);
    std::string content(static_cast<std::size_t>(fileSize), '\0');
    fileReader.read(&content[0], fileSize);
}
Move the content around to prevent unneeded copies.
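For instance, a sketch that wraps the snippet above in a function, so the string is returned (and elided or moved) rather than copied (read_whole_file is an illustrative name):

#include <fstream>
#include <string>

std::string read_whole_file(const std::string & path)
{
    std::ifstream fileReader(path, std::ios::binary | std::ios::ate);
    if (!fileReader)
        return std::string();
    auto fileSize = fileReader.tellg();
    fileReader.seekg(0, std::ios::beg);
    std::string content(static_cast<std::size_t>(fileSize), '\0');
    fileReader.read(&content[0], fileSize);
    return content;   // NRVO / implicit move: no extra copy
}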
This website has a good comparison on several different methods for doing that. The one I currently use is:
std::string read_sequence() {
    std::ifstream f("sequence.fasta");
    std::ostringstream ss;
    ss << f.rdbuf();
    return ss.str();
}
If the lines in your text file are separated by newlines, this will keep them. If you want to remove them, for instance (which is my case most of the time), you can just add a call to something such as:
auto s = ss.str();
s.erase(std::remove_if(s.begin(), s.end(),
[](char c) { return c == '\n'; }), s.end());
There are two big difficulties with your question. First, the Standard doesn't mandate any particular implementation (yes, nearly everybody started with the same implementation; but they've been modifying it over time, and the optimal I/O code for NTFS, say, will be different than the optimal I/O code for ext4), so it is possible (although somewhat unlikely) for a particular approach to be fastest on one platform, but not another. Second, there's a little difficulty in defining "optimal"; I assume you mean "fastest," but that's not necessarily the case.
There are approaches that are idiomatic, and perfectly fine C++, but unlikely to give wonderful performance. If your goal is to end up with a single std::string, using std::getline(std::istream&, std::string&) is very likely to be slower than necessary. The std::getline() call has to look for the '\n', and you'll occasionally reallocate and copy the destination std::string. Even so, it's ridiculously simple, and easy to understand. That could be optimal from a maintenance perspective, assuming you don't need the absolute fastest performance possible. This will also be a good approach if you don't need the whole file in one giant std::string at one time. You'll be very frugal with memory.
An approach that is likely more efficient is to manipulate the read buffer:
std::string read_the_whole_file(std::istream& istr)
{
    std::ostringstream sstr;
    sstr << istr.rdbuf();
    return sstr.str();
}
Personally, I'm just as likely to use std::fopen() and std::fread() (and std::unique_ptr<FILE>) because, on Windows at least, you'll get a better error message when std::fopen() fails than when constructing a file stream object fails. I consider the better error message an important factor when deciding which approach is optimal.
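A minimal sketch of that std::fopen()/std::fread() approach, with std::unique_ptr closing the FILE* (error handling trimmed; read_file_stdio is an illustrative name):

#include <cstdio>
#include <memory>
#include <string>

std::string read_file_stdio(const char * path)
{
    // unique_ptr with fclose as its deleter guarantees the FILE* is closed
    std::unique_ptr<std::FILE, int (*)(std::FILE *)> f(std::fopen(path, "rb"), &std::fclose);
    if (!f)
        return std::string();   // real code would inspect errno here

    std::fseek(f.get(), 0, SEEK_END);
    long size = std::ftell(f.get());
    std::fseek(f.get(), 0, SEEK_SET);

    std::string content(static_cast<std::size_t>(size), '\0');
    std::fread(&content[0], 1, content.size(), f.get());
    return content;
}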
What's the best way to add multilanguage support to a C++ program?
If possible, the language should be read in from a plain text file containing something like key-value pairs (§WelcomeMessage§ "Hello %s!").
I thought of something like adding a localizedString(key) function that returns the string of the loaded language file. Are there better or more efficient ways?
//half-pseudo code
//somewhere, load the language key-value pairs into langfile[]
string localizedString(const string & key)
{
    //do something else here with the string, like parsing placeholders
    return langfile[key];
}

cout << localizedString("§WelcomeMessage§");
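A minimal compilable sketch of this idea, assuming a simple key=value line format for the language file (the file format and names here are illustrative):

#include <fstream>
#include <map>
#include <string>

std::map<std::string, std::string> langfile;

void loadLanguage(const std::string & path)
{
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        // each line is "key=translated text"
        auto eq = line.find('=');
        if (eq != std::string::npos)
            langfile[line.substr(0, eq)] = line.substr(eq + 1);
    }
}

std::string localizedString(const std::string & key)
{
    auto it = langfile.find(key);
    return it != langfile.end() ? it->second : key;   // fall back to the key itself
}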
Simplest way without external libraries:
// strings.h
enum
{
    LANG_EN_GB,
    LANG_EN_AU,
    MAX_LANGUAGES
};

enum
{
    STRING_HELLO,
    STRING_DO_SOMETHING,
    STRING_GOODBYE
};

// strings.c
const char* en_gb[] = {"Well, Hello", "Please do something", "Goodbye"};
const char* en_au[] = {"Morning, Cobber", "do somin'", "See Ya"};
const char** languages[MAX_LANGUAGES] = {en_gb, en_au};
This will give you what you want. Obviously you could read the strings from a file, e.g.:
// en_au.lang
STRING_HELLO,"Morning, Cobber"
STRING_DO_SOMETHING,"do somin'"
STRING_GOODBYE,"See Ya"
But you would need a list of string names to match to the string IDs, i.e.:
// parse_strings.c
struct PARSE_STRINGS
{
    const char* string_name;
    int string_id;
}
parse_strings[] = {{"STRING_HELLO", STRING_HELLO},
                   {"STRING_DO_SOMETHING", STRING_DO_SOMETHING},
                   {"STRING_GOODBYE", STRING_GOODBYE}};
The above should be slightly easier in C++, as you could use the enum class's toString() method (or whatever it is - can't be bothered to look it up).
All you then have to do is parse the language files.
I hope this helps.
PS: and to access the strings:
languages[current_language][STRING_HELLO]
PPS: apologies for the half-C, half-C++ answer.
Space_C0wb0w's suggestion is a good one. We currently use ICU successfully in our products.
Echoing your comment to his answer: It is indeed hard to say that ICU is "small, clean, uncomplicated". There is "accidental" complexity in ICU coming from its "Java-ish" interface, but a large part of the complexity and size simply comes from the complexity and size of the problem domain it is addressing.
If you don't need ICU's full power and are only interested in "message translation", you may want to look at GNU gettext, which, depending on your platform and licensing requirements, may be a "smaller, cleaner and less-complicated" alternative.
The Boost.Locale project is also an interesting alternative. In fact, its "Messages Formatting" functionality is based on the gettext model.
Since you are asking for the best way (and didn't mention the platform) I would recommend GNU Gettext.
Arguably it is the most complete and mature internationalization library for C/C++ programming.
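For a sense of scale, a minimal gettext-based program looks something like this ("myapp" and the locale directory are illustrative; the .mo catalogs are produced separately with gettext's tools):

#include <libintl.h>
#include <clocale>
#include <cstdio>

#define _(STRING) gettext(STRING)

int main()
{
    std::setlocale(LC_ALL, "");                    // pick up the user's locale
    bindtextdomain("myapp", "/usr/share/locale");  // where the .mo files live
    textdomain("myapp");
    std::printf(_("Hello %s!\n"), "world");        // looked up in myapp.mo
    return 0;
}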
Are there particular cases where native text manipulation is more desirable than regex?
In particular, in .NET?
Note:
Regex appears to be a highly emotive subject, so I am wary of asking such a question. This question is not inviting personal/professional opinions on regex, only specific situations where a solution including its use is not as good as language-native commands (including those whose underlying code uses regex), and why.
Also, note that desirable can mean performance or code readability; it does not mean panacea, as each solution to a problem has its benefits and limitations.
Apologies if this is a duplicate, I have searched SO for a similar question.
I prefer text manipulation over regular expressions to parse delimited string input. It's far simpler (for me at least) to issue a string split than to manage a regular expression.
Given some text:
value1, value2, value3
You can parse the line easily:
var values = myString.Split(',');
I'm sure there's a better way, but with regular expressions you'd have to do something like:
var match = Regex.Match(myString, "^([^,]*),([^,]*),([^,]*)$");
var value1 = match.Groups[1].Value;
...
When you can do it simply with native text manipulation, it is usually preferable (simpler to read & better performance) not to use regex.
Personal rule of thumb: if it's tricky or relatively longer to do it "manually" and the performance gain is negligible, don't. Else do.
Don't examples:
split
simple find & replace
long text
loop
existing native functions (like, in PHP, strrchr, ucwords...)
Using a regex basically means embedding a tiny program, written in a different programming language, in the middle of your program. I'll ignore the inefficiency of using a regex over native string manipulation, because it probably isn't relevant in most cases.
I prefer native text manipulation over regex any time native text manipulation will be easier to follow for other people. Which is true quite frequently, since plenty of the people around me are not strongly familiar with regex. Unless working with something that is very much about parsing (via regex) they should not need to be!
Regular expressions are usually slower, less readable, and harder to debug than native string manipulation.
The main case where I'll prefer regex over string manipulation is when I want to be able to have different ways to parse strings depending on the source, and the types of sources will increase over time. Native string manipulation is not really practical in this case. I've had cases where I've stuck a regex column in a database...
Regexes are very flexible and powerful, because they are in many ways similar to an eval() statement. That being said, depending on the implementation, they can be a bit slow. Normally this is not an issue; however, if they can be avoided in a particularly costly loop, that can boost performance.
That being said, I tend to use them, and only worry about performance when the app is "done" and I have real benchmarks to prove I need to tweak performance. I.e., avoid premature optimization.
Whenever the same result can be achieved with a reasonable amount of code.
Regular expressions are very powerful, but they tend to get hard to read. If you can do the same with simple string operations that usually means that the code gets easier to manage and maintain.
There is some overhead in setting up the object and parsing the expression. For simpler string manipulation you can get better performance with simple string methods.
Example:
Getting the file name from a file path (yes, I know that the Path class should be used for that, it's just an example...)
string name = Regex.Match(path, @"([^\\]+)$").Groups[0].Value;
vs.
string name = path.Substring(path.LastIndexOf('\\') + 1);
The second solution is straightforward and does the minimal work needed to get the result. The regular expression solution produces the same result, but it does more work to parse the string, and it produces a bunch of objects that are not needed for the result.
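To illustrate the setup cost mentioned above in C++ terms (this thread is mostly .NET, but the point is the same): construct the regex object once rather than re-parsing the pattern on every use. A sketch with std::regex (count_matches is a made-up example):

#include <cstddef>
#include <regex>
#include <string>
#include <vector>

std::size_t count_matches(const std::vector<std::string> & lines)
{
    static const std::regex number("[0-9]+");   // pattern parsed once, not per line
    std::size_t n = 0;
    for (const auto & line : lines)
        if (std::regex_search(line, number))
            ++n;
    return n;
}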
Regex parsing and execution requires the host language to defer processing to its regex "engine". This adds overhead, so for any instance where native string manipulation could be used, it is preferable for speed (and readability!).
I'll usually just use text manipulation for simple string replacements (e.g. replacing tokens in a template with actual values). You could certainly do this with Regex, but replacements are much easier.
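A sketch of that kind of token replacement with plain string methods (the "{name}" token syntax is illustrative):

#include <string>

void replaceAll(std::string & text, const std::string & token, const std::string & value)
{
    // restart the search after the inserted value, so a value that
    // happens to contain the token cannot cause an infinite loop
    for (std::string::size_type pos = text.find(token);
         pos != std::string::npos;
         pos = text.find(token, pos + value.size()))
        text.replace(pos, token.size(), value);
}

// Usage:
//   std::string t = "Hello {name}!";
//   replaceAll(t, "{name}", "world");   // t == "Hello world!"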
Yes. Example:
#include <string.h>

const char* basename (const char* path)
{
    const char* p = strrchr(path, '/');
    return (p != NULL) ? (p + 1) : path;
}
I just read about the FastFormat C++ i/o formatting library, and it seems too good to be true: Faster even than printf, typesafe, and with what I consider a pleasing interface:
// prints: "This formats the remaining arguments based on their order - in this case we put 1 before zero, followed by 1 again"
fastformat::fmt(std::cout, "This formats the remaining arguments based on their order - in this case we put {1} before {0}, followed by {1} again", "zero", 1);
// prints: "This writes each argument in the order, so first zero followed by 1"
fastformat::write(std::cout, "This writes each argument in the order, so first ", "zero", " followed by ", 1);
This looks almost too good to be true. Is there a catch? Have you had good, bad or indifferent experiences with it?
Is there a 'catch' with FastFormat?
Last time I checked, there was one annoying catch:
You can only use either the narrow string version or the wide string version of this library. (The functions for wchar_t and char are the same -- which type is used is a compile-time switch.)
With iostreams, stdio or Boost.Format you can use both.
Found one "catch", though for most people it will never manifest. From the project page:
Atomic operation. It doesn't write out statement elements one at a time, like the IOStreams, so has no atomicity issues
The only way I can see this happening is if it buffers the whole write() call's output itself, then writes it out to the ostream in one step. This means it needs to allocate memory, and if an object passed into the write() call produces a lot of output (several megabytes or more), it can consume up to twice that much memory in internal buffers (assuming it uses the grow-a-buffer-by-doubling-its-size-each-time trick).
If you're just using it for logging, and not, say, dumping huge amounts of XML, you'll never see this problem.
The only other "catch" I'm seeing is:
Highly portable. It will work with all good modern C++ compilers; it even works with Visual C++ 6!
So it won't work with an old C++ compiler, like cfront, whereas iostreams is backward compatible to the late 80's. Again, I'd be surprised if anyone ever had a problem with this.
Although FastFormat is a good library there are a number of issues with it:
Limited formatting support, in particular the following features are not supported:
Leading zeros (or any other non-space padding)
Octal/hexadecimal encoding
Runtime width/alignment specification
The library is quite big for the relatively small task of formatting, and has an even bigger dependency (STLSoft).
It looks pretty interesting indeed! Good tip regardless, and +1 for that!
I've been playing with it for a bit. The main drawback I see is that FastFormat supports fewer formatting options for the output. This is, I think, a direct consequence of the way the higher type safety is achieved, and a good tradeoff depending on your circumstances.
If you look in detail at his performance benchmark page, you'll notice that the good old C printf-family functions are still winning on Linux. In fact, the only test case where they perform poorly is the one that is essentially static string concatenation, where I would expect printf to be wasteful. Moreover, GCC provides static type-checking on printf-style function calls, so the benefit of type safety is reduced. So: if you are running on Linux and need the absolute best performance, FastFormat is probably not the optimal solution.
The library depends on a couple of environment variables, as mentioned in the docs.
That might be no biggie to some people, but I'd prefer my code to be as self-contained as possible. If I check it out from source control, it should work and compile. It won't if it requires you to set environment variables.