There have been a number of questions on similar topics, but none that I found explores the options in this way.
Often we need to wrap a legacy C API in C++ to use its very good functionality while protecting ourselves from its vagaries. Here we will focus on just one element: how to wrap legacy C functions which accept char* params. The specific example is an API (the graphviz lib) which accepts many of its params as char* without specifying whether that is const or non-const. There appears to be no attempt to modify them, but we can't be 100% sure.
The use case for the wrapper is that we want to conveniently call the C++ wrapper with a variety of "stringy" property names and values: string literals, strings, const strings, string_views, etc. We want to call it both singly during setup, where performance is non-critical, and in the inner loop, 100M+ times, where performance does matter. (Benchmark code at the bottom.)
The many ways of passing "strings" to functions have been explained elsewhere.
The code below is heavily commented for 4 options of the cpp_wrapper() function being called 5 different ways.
Which is the best / safest / fastest option? Is it a case of Pick 2?
#include <array>
#include <cassert>
#include <cstdio>
#include <string>
#include <string_view>
void legacy_c_api(char* s) {
// just for demo; we don't really know what's here.
// Specifically, we are not 100% sure whether the code attempts to write
// to the char*. It seems not, but the API is not `const char*` even though C
// supports that.
std::puts(s);
}
// the "modern but hairy" option
void cpp_wrapper1(std::string_view sv) {
// 1. nasty const_cast. Does the legacy API modify? It appears not, but we
// don't know.
// 2. Is the string_view '\0' terminated? Our wrapper API can't tell,
// so maybe an "assert" for debug-build checks? Nasty too?!
// Our use cases below are all fine, but the API is "not safe": UB?!
assert((int)*(sv.data() + sv.size()) == 0);
legacy_c_api(const_cast<char*>(sv.data()));
}
void cpp_wrapper2(const std::string& str) {
// 1. nasty const_cast. Does the legacy API modify? It appears not, but we
// don't know. Note that using .data() would not save the const_cast if the
// string is const.
// 2. The standard says std::string::c_str() is safe and null terminated.
// We can pass a string literal but we can't pass a string_view to it =>
// logical!
legacy_c_api(const_cast<char*>(str.c_str()));
}
void cpp_wrapper3(std::string_view sv) {
// the slow and safe way. Guaranteed to be '\0' terminated.
// str is non-const, so the legacy API can modify it if it wishes => no const_cast
// slow copy? not necessarily: if sv.size() < 16 bytes => SBO on the stack
auto str = std::string{sv};
legacy_c_api(str.data());
}
void cpp_wrapper4(std::string& str) {
// efficient API, by making the proper strings in the calling code,
// but it communicates the wrong thing altogether => effectively leaks the
// C API into the C++ code
legacy_c_api(str.data());
}
// std::array<std::string_view, N> is a good modern way to "store" a large array
// of "stringy" constants? They end up in the read-only data section of the ELF
// file (or equivalent). They ARE '\0' terminated, although the string_view loses
// that info. Used in the inner loop => 100M+ lookups and calls to legacy_c_api.
static constexpr const auto sv_colours =
std::array<std::string_view, 3>{"color0", "color1", "color2"};
// instantiating these non-const strings seems wrong / a waste (there are about
// 500 small constants). Potential heap allocation during static storage init?
// => exceptions cannot be caught... just the wrong model?
static auto str_colours =
std::array<std::string, 3>{"color0", "color1", "color2"};
int main() {
auto my_sv_colour = std::string_view{"my_sv_colour"};
auto my_str_colour = std::string{"my_str_colour"};
cpp_wrapper1(my_sv_colour);
cpp_wrapper1(my_str_colour);
cpp_wrapper1("literal_colour");
cpp_wrapper1(sv_colours[1]);
cpp_wrapper1(str_colours[2]);
// cpp_wrapper2(my_sv_colour); // compile error
cpp_wrapper2(my_str_colour);
cpp_wrapper2("literal_colour");
// cpp_wrapper2(sv_colours[1]); // compile error
cpp_wrapper2(str_colours[2]);
cpp_wrapper3(my_sv_colour);
cpp_wrapper3(my_str_colour);
cpp_wrapper3("literal_colour");
cpp_wrapper3(sv_colours[1]);
cpp_wrapper3(str_colours[2]);
// cpp_wrapper4(my_sv_colour); // compile error
cpp_wrapper4(my_str_colour);
// cpp_wrapper4("literal_colour"); // compile error
// cpp_wrapper4(sv_colours[1]); // compile error
cpp_wrapper4(str_colours[2]);
}
Benchmark code
Not entirely realistic yet, because the work in the C API is minimal and non-existent in the C++ client. In the full app I know that I can do 10M calls in <1s, so just changing between these two API abstraction styles looks like it might be a 10% change? Early days... needs more work. Note: that's with a short string which fits in the SBO. Longer ones with heap allocation just blow it out completely.
#include <benchmark/benchmark.h>
#include <numeric>  // std::accumulate
static void do_not_optimize_away(void* p) {
asm volatile("" : : "g"(p) : "memory");
}
void legacy_c_api(char* s) {
// do at least something with the string
auto sum = std::accumulate(s, s+6, 0);
do_not_optimize_away(&sum);
}
// ... wrapper functions as above: I focused on 1&3 which seem
// "the best compromise".
// Then I added wrapper4 because there is an opportunity to use a
// different signature when in main app's tight loop.
void bench_cpp_wrapper1(benchmark::State& state) {
for (auto _: state) {
for (int i = 0; i< 100'000'000; ++i) cpp_wrapper1(sv_colours[1]);
}
}
BENCHMARK(bench_cpp_wrapper1);
void bench_cpp_wrapper3(benchmark::State& state) {
for (auto _: state) {
for (int i = 0; i< 100'000'000; ++i) cpp_wrapper3(sv_colours[1]);
}
}
BENCHMARK(bench_cpp_wrapper3);
void bench_cpp_wrapper4(benchmark::State& state) {
auto colour = std::string{"color1"};
for (auto _: state) {
for (int i = 0; i< 100'000'000; ++i) cpp_wrapper4(colour);
}
}
BENCHMARK(bench_cpp_wrapper4);
Results
-------------------------------------------------------------
Benchmark Time CPU Iterations
-------------------------------------------------------------
bench_cpp_wrapper1 58281636 ns 58264637 ns 11
bench_cpp_wrapper3 811620281 ns 811632488 ns 1
bench_cpp_wrapper4 147299439 ns 147300931 ns 5
Correct first, then optimize if needed.
wrapper1 has at least two potential instances of undefined behavior: The dubious const_cast, and (in debug versions) possibly accessing an element past the end of an array. (You can create a pointer to one element past the last, but you cannot access it.)
wrapper2 also has a dubious const_cast, potentially invoking undefined behavior.
wrapper3 doesn't rely on any UB (that I see).
wrapper4 is similar to wrapper3, but exposes details you're trying to encapsulate.
Start by doing the most correct thing, which is to copy the strings and pass a pointer to the copy, which is wrapper3.
If performance is unacceptable in the tight loop, you can look at alternatives. The tight loop may use only a subset of the interfaces. The tight loop may be heavily biased toward short strings or long strings. The compiler might inline enough of your wrapper in the tight loop that it's effectively a no-op. These factors will affect how (and if) you solve the performance problem.
Alternative solutions might involve caching to reduce the number of copies made, investigating the underlying library enough to make some strategic changes (like changing the underlying library to use const where it can), or making an overload that exposes the char * and passes it straight through (which shifts the burden to the caller to know what's right).
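For example, the caching idea might be sketched like this (hypothetical names cached_c_str / cpp_wrapper_cached, assuming single-threaded use and a small, long-lived set of distinct values):
#include <string>
#include <string_view>
#include <unordered_map>
void legacy_c_api(char* s);
// Map each distinct value to a stable, null-terminated std::string so the
// copy (and any allocation) happens at most once per value.
char* cached_c_str(std::string_view sv) {
    static std::unordered_map<std::string, std::string> cache;
    auto it = cache.try_emplace(std::string{sv}, sv).first;
    return it->second.data();  // stays valid for as long as the cache lives
}
void cpp_wrapper_cached(std::string_view sv) {
    legacy_c_api(cached_c_str(sv));
}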
But all of that is implementation detail: Design the API for usability by the callers.
Is the string view '\0' terminated?
If it happens to point to a null-terminated string, then sv.data() may be null terminated. But a string_view does not need to be null terminated, so one should not assume that it is. Thus cpp_wrapper1 is a bad choice.
Does the legacy API modify? ... we don't know.
If you don't know whether the API modifies the string, then you cannot use const, so cpp_wrapper2 is not an option.
One thing to consider is whether a wrapper is necessary. The most efficient solution is to pass a char*, which is just fine in C++. If using const strings is a typical operation, then cpp_wrapper3 may be useful, but is it typical, considering the operations may modify the string? cpp_wrapper4 is more efficient than 3, but not as efficient as a plain char* if you don't already have a std::string.
You can provide all of the options mentioned above as overloads.
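For instance, a minimal sketch of such an overload set (set_colour is an illustrative name, not from the question's code):
#include <string>
#include <string_view>
void legacy_c_api(char* s);
// Caller already has a mutable std::string: pass its buffer straight through.
void set_colour(std::string& str) { legacy_c_api(str.data()); }
// Anything else "stringy" (literal, string_view, const string): make a
// null-terminated copy first; short values stay in the SSO buffer.
void set_colour(std::string_view sv) {
    std::string str{sv};
    legacy_c_api(str.data());
}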
I have just joined a team that has thousands of lines of code like:
int x = 0;
x=something();
short y=x;
doSomethingImportantWith(y);
The compiler gives nice warnings saying: Conversion of XX bit type value to "short" causes truncation. I've been told that there are no cases where truncation really happens, but I seriously doubt it.
Is there a nice way to insert checks into each case having the effect of:
if (x > std::numeric_limits<short>::max()) printNastyError(__FILE__, __LINE__);
before each assignment? Doing this manually will take more time and effort than I'd like to use, and writing a script that reads the warnings and adds this stuff to the correct file as well as the required includes seems overkill -- especially since I expect someone has already done this (or something like it).
I don't care about performance (really) or anything other than knowing when these problems occur so I can either fix only the ones that really matter, or so I can convince management that it's a problem.
You could try compiling and running it with the following ugly hack:
#include <limits>
#include <cstdlib>
template<class T>
struct IntWrapper
{
T value;
template<class U>
IntWrapper(U u) {
if(u > std::numeric_limits<T>::max())
std::abort();
if(U(-1) < 0 && u < std::numeric_limits<T>::min()) // for signed U only
std::abort();
value = u;
}
operator T&() { return value; }
operator T const&() const { return value; }
};
#define short IntWrapper<short>
int main() {
int i = 1, j = 0x10000;
short ii = i;
short jj = j; // this aborts
}
Obviously, it may break code that passes short as a template argument and likely in other instances, so undefine it where it breaks the build. And you may need to add operator overloads so that the usual arithmetic works with the wrapper.
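For example, a checked compound assignment could be bolted on as a free function, so that += also goes through the range check rather than the unchecked implicit conversion to T& (a sketch only; which operators you really need depends on the code base):
// Route `wrapped += value` back through the checking constructor.
template<class T, class U>
IntWrapper<T>& operator+=(IntWrapper<T>& lhs, U rhs) {
    lhs = IntWrapper<T>(lhs.value + rhs);  // re-runs the range check
    return lhs;
}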
You can probably write a plugin for GCC to detect those truncations and emit a call to a function that checks the conversion is safe. You can write those plugins in C or Python. If you prefer to use Clang, it also supports writing plugins.
I think the easiest way to do it would be to have the plugin convert the unsafe cast from int to short into a call to a function _convert_int_to_short_fail_if_data_loss(value). I'll leave it as an exercise for the reader how to write such a plugin.
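The runtime check itself is easy; it could look something like this (a sketch of the helper such a plugin might emit calls to, using the hypothetical name above):
#include <cstdio>
#include <cstdlib>
#include <limits>
// Hypothetical helper inserted by the plugin at each flagged conversion.
short _convert_int_to_short_fail_if_data_loss(int value) {
    if (value > std::numeric_limits<short>::max() ||
        value < std::numeric_limits<short>::min()) {
        std::fprintf(stderr, "truncation: %d does not fit in short\n", value);
        std::abort();
    }
    return static_cast<short>(value);
}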
In a C++ question about optimization and code style, several answers referred to "SSO" in the context of optimizing copies of std::string. What does SSO mean in that context?
Clearly not "single sign on". "Shared string optimization", perhaps?
Background / Overview
Operations on automatic variables ("from the stack", which are variables that you create without calling malloc / new) are generally much faster than those involving the free store ("the heap", which are variables that are created using new). However, the size of automatic arrays is fixed at compile time, but the size of arrays from the free store is not. Moreover, the stack size is limited (typically a few MiB), whereas the free store is only limited by your system's memory.
SSO is the Short / Small String Optimization. A std::string typically stores the string as a pointer to the free store ("the heap"), which gives similar performance characteristics as if you were to call new char [size]. This prevents a stack overflow for very large strings, but it can be slower, especially with copy operations. As an optimization, many implementations of std::string create a small automatic array, something like char [20]. If you have a string that is 20 characters or smaller (given this example, the actual size varies), it stores it directly in that array. This avoids the need to call new at all, which speeds things up a bit.
EDIT:
I wasn't expecting this answer to be quite so popular, but since it is, let me give a more realistic implementation, with the caveat that I've never actually read any implementation of SSO "in the wild".
Implementation details
At the minimum, a std::string needs to store the following information:
The size
The capacity
The location of the data
The size could be stored as a std::string::size_type or as a pointer to the end. The only difference is whether you want to have to subtract two pointers when the user calls size or add a size_type to a pointer when the user calls end. The capacity can be stored either way as well.
You don't pay for what you don't use.
First, consider the naive implementation based on what I outlined above:
class string {
public:
// all 83 member functions
private:
std::unique_ptr<char[]> m_data;
size_type m_size;
size_type m_capacity;
std::array<char, 16> m_sso;
};
For a 64-bit system, that generally means that std::string has 24 bytes of 'overhead' per string, plus another 16 for the SSO buffer (16 chosen here instead of 20 due to padding requirements). It wouldn't really make sense to store those three data members plus a local array of characters, as in my simplified example. If m_size <= 16, then I will put all of the data in m_sso, so I already know the capacity and I don't need the pointer to the data. If m_size > 16, then I don't need m_sso. There is absolutely no overlap where I need all of them. A smarter solution that wastes no space would look something a little more like this (untested, example purposes only):
class string {
public:
// all 83 member functions
private:
size_type m_size;
union {
class {
// This is probably better designed as an array-like class
std::unique_ptr<char[]> m_data;
size_type m_capacity;
} m_large;
std::array<char, sizeof(m_large)> m_small;
};
};
I'd assume that most implementations look more like this.
SSO is the abbreviation for "Small String Optimization", a technique where small strings are embedded in the body of the string class rather than using a separately allocated buffer.
As already explained by the other answers, SSO means Small / Short String Optimization.
The motivation behind this optimization is the undeniable evidence that applications in general handle far more short strings than long ones.
As explained by David Stone in his answer above, the std::string class uses an internal buffer to store contents up to a given length, which eliminates the need to dynamically allocate memory and makes the code faster.
This other related answer clearly shows that the size of the internal buffer depends on the std::string implementation, which varies from platform to platform (see benchmark results below).
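If you want a quick (if informal) way to see the threshold on your own platform, the capacity of a default-constructed string is a reasonable indicator:
#include <iostream>
#include <string>
int main() {
    // On typical implementations this prints the SSO buffer size,
    // e.g. 15 with libstdc++ and 22 with libc++ (not guaranteed by the standard).
    std::cout << std::string().capacity() << '\n';
}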
Benchmarks
Here is a small program that benchmarks the copy operation of lots of strings with the same length.
It starts printing the time to copy 10 million strings with length = 1.
Then it repeats with strings of length = 2. It keeps going until the length is 50.
#include <string>
#include <iostream>
#include <vector>
#include <chrono>
#include <cstdlib>   // rand()
static const char CHARS[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
static const int ARRAY_SIZE = sizeof(CHARS) - 1;
static const int BENCHMARK_SIZE = 10000000;
static const int MAX_STRING_LENGTH = 50;
using time_point = std::chrono::high_resolution_clock::time_point;
void benchmark(std::vector<std::string>& list) {
time_point t1 = std::chrono::high_resolution_clock::now();
// force a copy of each string in the loop iteration
for (const auto s : list) {
std::cout << s;
}
time_point t2 = std::chrono::high_resolution_clock::now();
const auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count();
std::cerr << list[0].length() << ',' << duration << '\n';
}
void addRandomString(std::vector<std::string>& list, const int length) {
std::string s(length, 0);
for (int i = 0; i < length; ++i) {
s[i] = CHARS[rand() % ARRAY_SIZE];
}
list.push_back(s);
}
int main() {
std::cerr << "length,time\n";
for (int length = 1; length <= MAX_STRING_LENGTH; length++) {
std::vector<std::string> list;
for (int i = 0; i < BENCHMARK_SIZE; i++) {
addRandomString(list, length);
}
benchmark(list);
}
return 0;
}
If you want to run this program, you should do it like ./a.out > /dev/null so that the time to print the strings isn't counted.
The numbers that matter are printed to stderr, so they will show up in the console.
I have created charts with the output from my MacBook and Ubuntu machines.
Note that there is a huge jump in the time to copy the strings when the length reaches a given point.
That's the moment when strings don't fit in the internal buffer anymore and memory allocation has to be used.
Note also that on the Linux machine, the jump happens when the length of the string reaches 16.
On the MacBook, the jump happens when the length reaches 23. This confirms that SSO depends on the platform implementation.
(chart: Ubuntu)
(chart: MacBook Pro)
I've discovered that std::strings are very slow compared to old-fashioned null-terminated strings, so much slower that they slow down my overall program by a factor of 2.
I expected STL to be slower, but I didn't realise it was going to be this much slower.
I'm using Visual Studio 2008, release mode. It shows assignment of a string to be 100-1000 times slower than char* assignment (it's very difficult to test the run-time of a char* assignment). I know it's not a fair comparison, a pointer assignment versus string copy, but my program has lots of string assignments and I'm not sure I could use the "const reference" trick in all places. With a reference counting implementation my program would have been fine, but these implementations don't seem to exist anymore.
My real question is: why don't people use reference counting implementations anymore, and does this mean we all need to be much more careful about avoiding common performance pitfalls of std::string?
My full code is below.
#include <string>
#include <iostream>
#include <time.h>
using std::cout;
void stop()
{
}
int main(int argc, char* argv[])
{
#define LIMIT 100000000
clock_t start;
std::string foo1 = "Hello there buddy";
std::string foo2 = "Hello there buddy, yeah you too";
std::string f;
start = clock();
for (int i=0; i < LIMIT; i++) {
stop();
f = foo1;
foo1 = foo2;
foo2 = f;
}
double stl = double(clock() - start) / CLOCKS_PER_SEC;
start = clock();
for (int i=0; i < LIMIT; i++) {
stop();
}
double emptyLoop = double(clock() - start) / CLOCKS_PER_SEC;
char* goo1 = "Hello there buddy";
char* goo2 = "Hello there buddy, yeah you too";
char *g;
start = clock();
for (int i=0; i < LIMIT; i++) {
stop();
g = goo1;
goo1 = goo2;
goo2 = g;
}
double charLoop = double(clock() - start) / CLOCKS_PER_SEC;
cout << "Empty loop = " << emptyLoop << "\n";
cout << "char* loop = " << charLoop << "\n";
cout << "std::string = " << stl << "\n";
cout << "slowdown = " << (stl - emptyLoop) / (charLoop - emptyLoop) << "\n";
std::string wait;
std::cin >> wait;
return 0;
}
Well there are definitely known problems regarding the performance of strings and other containers. Most of them have to do with temporaries and unnecessary copies.
It's not too hard to use it right, but it's also quite easy to Do It Wrong. For example, if you see your code accepting strings by value where you don't need a modifiable parameter, you Do It Wrong:
// you do it wrong
void setMember(string a) {
this->a = a; // better: swap(this->a, a);
}
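The alternatives hinted at in the comment would look something like this (Widget is just an illustrative holder for the member):
#include <string>
#include <utility>
using std::string;
struct Widget {
    string a;
    // Take by const reference: exactly one copy, into the member.
    void setMember(const string& s) { a = s; }
    // Or take by value and swap, as the comment suggests: the parameter copy
    // is then swapped into place cheaply.
    void setMemberSwap(string s) { std::swap(a, s); }
};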
Taking it by const reference, or doing the swap inside as sketched above, avoids yet another copy. The performance penalty increases for a vector or list in that case. However, you are definitely right that there are known problems. For example in this:
// let's add a Foo into the vector
v.push_back(Foo(a, b));
We are creating one temporary Foo just to add a new Foo into our vector. In a manual solution, you might create the Foo directly inside the vector. And if the vector reaches its capacity limit, it has to reallocate a larger memory buffer for its elements. What does it do? It copies each element separately to its new place using its copy constructor. A manual solution might behave more intelligently if it knows the type of the elements beforehand.
Another common problem is introduced temporaries. Have a look at this
string a = b + c + e;
There are loads of temporaries created, which you might avoid in a custom solution that you actually optimize for performance. Back then, the interface of std::string was designed to be copy-on-write friendly. However, with threads becoming more popular, transparent copy-on-write strings have problems keeping their state consistent. Recent implementations tend to avoid copy-on-write strings and instead apply other tricks where appropriate.
Most of those problems are solved however for the next version of the Standard. For example instead of push_back, you can use emplace_back to directly create a Foo into your vector
v.emplace_back(a, b);
And instead of creating copies in a concatenation above, std::string will recognize when it concatenates temporaries and optimize for those cases. Reallocation will also avoid making copies, but will move elements where appropriate to their new places.
For an excellent read, consider Move Constructors by Andrei Alexandrescu.
Sometimes, however, comparisons also tend to be unfair. Standard containers have to support the features they have to support. For example if your container does not keep map element references valid while adding/removing elements from your map, then comparing your "faster" map to the standard map can become unfair, because the standard map has to ensure that elements keep being valid. That was just an example, of course, and there are many such cases that you have to keep in mind when stating "my container is faster than standard ones!!!".
It looks like you're misusing char* in the code you pasted. If you have
std::string a = "this is a";
std::string b = "this is b";
a = b;
you're performing a string copy operation. If you do the same with char*, you're performing a pointer copy operation.
The std::string assignment operation allocates enough memory to hold the contents of b in a, then copies each character one by one. In the case of char*, it does not do any memory allocation or copy the individual characters one by one, it just says "a now points to the same memory that b is pointing to."
My guess is that this is why std::string is slower, because it's actually copying the string, which appears to be what you want. To do a copy operation on a char* you'd need to use the strcpy() function to copy into a buffer that's already appropriately sized. Then you'll have an accurate comparison. But for the purposes of your program you should almost definitely use std::string instead.
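For comparison, a "fair" char* version of that assignment would have to actually copy the characters (a small sketch, assuming the destination buffer is already appropriately sized):
#include <cstring>
int main() {
    const char* b = "this is b";
    char a[32];               // destination buffer, already big enough
    std::strcpy(a, b);        // copies the characters, like the std::string assignment
}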
When writing C++ code using any utility class (whether STL or your own) instead of, e.g., good old C null-terminated strings, you need to remember a few things.
If you benchmark without compiler optimisations on (esp. function inlining), classes will lose. They are not built-ins, even the STL ones. They are implemented in terms of method calls.
Do not create unnecessary objects.
Do not copy objects if possible.
Pass objects as references, not copies, if possible.
Use more specialised methods and functions and higher-level algorithms. E.g.:
std::string a = "String a";
std::string b = "String b";
// Use
a.swap(b);
// Instead of
std::string tmp = a;
a = b;
b = tmp;
And a final note: when your C-like C++ code starts to get more complex, you need to implement more advanced data structures like automatically expanding arrays, dictionaries, and efficient priority queues. And suddenly you realise that it's a lot of work and your classes are not really faster than the STL ones. Just more buggy.
You are most certainly doing something wrong, or at least not comparing "fairly" between STL and your own code. Of course, it's hard to be more specific without code to look at.
It could be that you're structuring your code using STL in a way that causes more constructors to run, or not re-using allocated objects in a way that matches what you do when you implement the operations yourself, and so on.
This test is testing two fundamentally different things: a shallow copy vs. a deep copy. It's essential to understand the difference and how to avoid deep copies in C++, since a C++ object, by default, provides value semantics for its instances (as is the case with plain old data types), which means that assigning one to the other is generally going to copy.
I "corrected" your test and got this:
char* loop = 19.921
string = 0.375
slowdown = 0.0188244
Apparently we should cease using C-style strings since they are soooo much slower! In actuality, I deliberately made my test as flawed as yours, by testing shallow copying on the string side vs. strcpy on the char* side:
#include <string>
#include <iostream>
#include <ctime>
#include <cstring>   // strcpy, strlen
#include <cstdlib>   // malloc, free
using namespace std;
#define LIMIT 100000000
char* make_string(const char* src)
{
return strcpy((char*)malloc(strlen(src)+1), src);
}
int main(int argc, char* argv[])
{
clock_t start;
string foo1 = "Hello there buddy";
string foo2 = "Hello there buddy, yeah you too";
start = clock();
for (int i=0; i < LIMIT; i++)
foo1.swap(foo2);
double stl = double(clock() - start) / CLOCKS_PER_SEC;
char* goo1 = make_string("Hello there buddy");
char* goo2 = make_string("Hello there buddy, yeah you too");
char *g;
start = clock();
for (int i=0; i < LIMIT; i++) {
g = make_string(goo1);
free(goo1);
goo1 = make_string(goo2);
free(goo2);
goo2 = g;
}
double charLoop = double(clock() - start) / CLOCKS_PER_SEC;
cout << "char* loop = " << charLoop << "\n";
cout << "string = " << stl << "\n";
cout << "slowdown = " << stl / charLoop << "\n";
string wait;
cin >> wait;
}
The main point is, and this actually gets to the heart of your ultimate question, you have to know what you are doing with the code. If you use a C++ object, you have to know that assigning one to the other is going to make a copy of that object (unless assignment is disabled, in which case you'll get an error). You also have to know when it's appropriate to use a reference, pointer, or smart pointer to an object, and with C++11, you should also understand the difference between move and copy semantics.
My real question is: why don't people use reference counting
implementations anymore, and does this mean we all need to be much
more careful about avoiding common performance pitfalls of
std::string?
People do use reference-counting implementations. Here's an example of one:
shared_ptr<string> ref_counted = make_shared<string>("test");
shared_ptr<string> shallow_copy = ref_counted; // no deep copies, just
// increase ref count
The difference is that string doesn't do it internally as that would be inefficient for those who don't need it. Things like copy-on-write are generally not done for strings either anymore for similar reasons (plus the fact that it would generally make thread safety an issue). Yet we have all the building blocks right here to do copy-on-write if we wish to do so: we have the ability to swap strings without any deep copying, we have the ability to make pointers, references, or smart pointers to them.
To use C++ effectively, you have to get used to this way of thinking involving value semantics. If you don't, you might enjoy the added safety and convenience, but at a heavy cost to the efficiency of your code (unnecessary copies are certainly a significant part of what makes poorly written C++ code slower than C). After all, your original test is still dealing with pointers to strings, not char[] arrays. If you were using character arrays and not pointers to them, you'd likewise need strcpy to swap them. With strings you even have a built-in swap method to do exactly what you are doing in your test efficiently, so my advice is to spend a bit more time learning C++.
If you have an indication of the eventual size of your vector you can prevent excessive resizes by calling reserve() before filling it up.
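A tiny sketch of that:
#include <string>
#include <vector>
int main() {
    std::vector<std::string> names;
    names.reserve(1000);                 // one allocation up front...
    for (int i = 0; i < 1000; ++i)
        names.push_back("name");         // ...no reallocations while filling
}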
The main rules of optimization:
Rule 1: Don't do it.
Rule 2: (For experts only) Don't do it yet.
Are you sure that you have proven that it is really the STL that is slow, and not your algorithm?
Good performance isn't always easy with STL, but generally, it is designed to give you the power. I found Scott Meyers' "Effective STL" an eye-opener for understanding how to deal with the STL efficiently. Read!
As others said, you are probably running into frequent deep copies of the string, and compare that to a pointer assignment / reference counting implementation.
Generally, any class designed towards your specific needs will beat a generic class designed for the general case. But learn to use the generic class well, learn to ride the 80:20 rule, and you will be much more efficient than someone rolling everything on their own.
One specific drawback of std::string is that it doesn't give performance guarantees, which makes sense. As Tim Cooper mentioned, STL does not say whether a string assignment creates a deep copy. That's good for a generic class, because reference counting can become a real killer in highly concurrent applications, even though it's usually the best way for a single threaded app.
They didn't go wrong. The STL implementation is, generally speaking, better than yours.
I'm sure that you can write something better for a very particular case, but a factor of 2 is too much... you really must be doing something wrong.
If used correctly, std::string is as efficient as char*, but with the added protection.
If you are experiencing performance problems with the STL, it's likely that you are doing something wrong.
Additionally, STL implementations are not standard across compilers. I know that SGI's STL and STLPort perform generally well.
That said, and I am being completely serious, you could be a C++ genius and have devised code that is far more sophisticated than the STL. It's not likely, but who knows, you could be the LeBron James of C++.
I would say that STL implementations are better than the traditional implementations. Also, did you try using a list instead of a vector? A vector is efficient for some purposes and a list for others.
std::string will always be slower than C-strings. C-strings are simply a linear array of memory. You cannot get any more efficient than that, simply as a data structure. The algorithms you use (like strcat() or strcpy()) are generally equivalent to the STL counterparts. The class instantiation and method calls will be, in relative terms, significantly slower than C-string operations (even worse if the implementation uses virtuals). The only way you could get equivalent performance is if the compiler does optimization.
                                  string   const string&   char*   Java string
-------------------------------------------------------------------------------
Efficient assignment              no **    yes             yes     yes
Thread-safe                       yes      yes             yes     yes
Memory management done for you    yes      no              no      yes
** There are 2 implementations of std::string: reference counting or deep-copy. Reference counting introduces performance problems in multi-threaded programs, EVEN for just reading strings, and deep-copy is obviously slower as shown above. See:
Why VC++ Strings are not reference counted?
As this table shows, 'string' is better than 'char*' in some ways and worse in others, and 'const string&' is similar in properties to 'char*'. Personally I'm going to continue using 'char*' in many places. The enormous amount of copying of std::string's that happens silently, with implicit copy constructors and temporaries makes me somewhat ambivalent about std::string.
A large part of the reason might be the fact that reference-counting is no longer used in modern implementations of STL.
Here's the story (someone correct me if I'm wrong): in the beginning, STL implementations used reference counting, and were fast but not thread-safe - the implementors expected application programmers to insert their own locking mechanisms at higher levels, to make them thread-safe, because if locking was done at 2 levels then this would slow things down twice as much.
However, the programmers of the world were too ignorant or lazy to insert locks everywhere. For example, if a worker thread in a multi-threaded program needed to read a std::string command-line parameter, then a lock would be needed even just to read the string, otherwise crashes could ensue. (Two threads increment the reference count simultaneously on different CPUs, but because of the race only one increment takes effect (+1); both later decrement (-2), so the reference count goes down to zero and the memory is freed while still in use.)
So implementors ditched reference counting and instead had each std::string always own its own copy of the string. More programs worked, but they were all slower.
So now, even a humble assignment of one std::string to another, (or equivalently, passing a std::string as a parameter to a function), takes about 400 machine code instructions instead of the 2 it takes to assign a char*, a slowdown of 200 times.
I tested the magnitude of the inefficiency of std::string on one major program, which had an overall slowdown of about 100% compared with null-terminated strings. I also tested raw std::string assignment using the following code, which said that std::string assignment was 100-900 times slower. (I had trouble measuring the speed of char* assignment). I also debugged into the std::string operator=() function - I ended up knee deep in the stack, about 7 layers deep, before hitting the 'memcpy()'.
I'm not sure there's any solution. Perhaps if you need your program to be fast, use plain old C++, and if you're more concerned about your own productivity, you should use Java.
#define LIMIT 800000000
clock_t start;
std::string foo1 = "Hello there buddy";
std::string foo2 = "Hello there buddy, yeah you too";
std::string f;
start = clock();
for (int i=0; i < LIMIT; i++) {
stop();
f = foo1;
foo1 = foo2;
foo2 = f;
}
double stl = double(clock() - start) / CLOCKS_PER_SEC;
start = clock();
for (int i=0; i < LIMIT; i++) {
stop();
}
double emptyLoop = double(clock() - start) / CLOCKS_PER_SEC;
char* goo1 = "Hello there buddy";
char* goo2 = "Hello there buddy, yeah you too";
char *g;
start = clock();
for (int i=0; i < LIMIT; i++) {
stop();
g = goo1;
goo1 = goo2;
goo2 = g;
}
double charLoop = double(clock() - start) / CLOCKS_PER_SEC;
TfcMessage("done", 'i', "Empty loop = %1.3f s\n"
"char* loop = %1.3f s\n"
"std::string loop = %1.3f s\n\n"
"slowdown = %f",
emptyLoop, charLoop, stl,
(stl - emptyLoop) / (charLoop - emptyLoop));