How should a string-accepting interface look? - C++

This is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string:
void f(const char* str); // (1)
The other way would be to use an std::string:
void f(const string& str); // (2)
It's also possible to write an overload and accept both:
void f(const char* str); // (3)
void f(const string& str);
Or even a template in conjunction with boost string algorithms:
template<class Range> void f(const Range& str); // (4)
My thoughts are:
(1) is not very C++-ish and may be less efficient if subsequent operations need to know the string length.
(2) is bad because now f("long very long C string"); invokes a construction of std::string which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C-string (like fopen) then it is just a waste of resources.
(3) causes code duplication, although one f can call the other depending on which implementation is more efficient. However, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*.
(4) doesn't work with separate compilation and may cause even larger code bloat.
Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail to the interface.
The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?
Edit: There is also a fifth option:
void f(boost::iterator_range<const char*> str); // (5)
which has the pros of (1) (doesn't need to construct a string object) and (2) (the size of the string is explicitly passed to the function).
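For illustration, (5) could be called roughly like this, assuming Boost.Range is available (the body of f here is only a placeholder):
#include <boost/range/iterator_range.hpp>
#include <cstring>
#include <iostream>
#include <string>

void f(boost::iterator_range<const char*> str) {
    // placeholder body: the length is available without scanning for a terminator
    std::cout << (str.end() - str.begin()) << " characters\n";
}

int main() {
    const char* cstr = "a C string";
    std::string s = "a std::string";
    f(boost::iterator_range<const char*>(cstr, cstr + std::strlen(cstr)));
    f(boost::iterator_range<const char*>(s.data(), s.data() + s.size()));
}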

If you are dealing with a pure C++ code base, then I would go with #2, and not worry about callers of the function that don't use it with a std::string until a problem arises. As always, don't worry too much about optimization unless there is a problem. Make your code clean, easy to read, and easy to extend.

There is a single guideline you can follow: use (2) unless you have very good reasons not to.
A const char* str as parameter does not make it explicit, what operations are allowed to be performed on str. How often can it be incremented before it segfaults? Is it a pointer to a char, an array of chars or a C string (i.e. a zero-terminated array of char)?

I don't really have a single hard preference. Depending on circumstances, I alternate between most of your examples.
Another option I sometimes use is similar to your Range example, but using plain old iterator ranges:
template <typename Iter>
void f(Iter first, Iter last);
which has the nice property that it works easily with both C-style strings (and allows the callee to determine the length of the string in constant time) as well as std::string.
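For illustration, here is a rough sketch of how the iterator-pair version can be called with both string kinds (the body of f is just a placeholder):
#include <cstring>
#include <iterator>
#include <string>

template <typename Iter>
void f(Iter first, Iter last) {
    // placeholder body: the callee can determine the length in constant time
    std::size_t len = static_cast<std::size_t>(std::distance(first, last));
    (void)len;  // real work would go here
}

void demo() {
    const char* cstr = "a C string";
    std::string s = "a std::string";
    f(cstr, cstr + std::strlen(cstr));  // C string: the caller supplies the end pointer
    f(s.begin(), s.end());              // std::string: iterators work directly
}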
If templates are problematic (perhaps because I don't want the function to be defined in a header), I sometimes do the same, but using char* as iterators:
void f(const char* first, const char* last);
Again, it can be trivially used with both C-strings and C++ std::string (as I recall, C++03 doesn't explicitly require strings to be contiguous, but every implementation I know of uses contiguously allocated strings, and I believe C++0x will explicitly require it).
So these versions both allow me to convey more information than the plain C-style const char* parameter (which loses information about the string length, and doesn't handle embedded nulls), in addition to supporting both of the major string types (and probably any other string class you can think of) in an idiomatic way.
The downside is of course that you end up with an additional parameter.
Unfortunately, string handling isn't really C++'s strongest side, so I don't think there is a single "best" approach. But the iterator pair is one of several approaches I tend to use.

For taking a parameter I would go with whatever is simplest and often that is const char*. This works with string literals with zero cost and retrieving a const char* from something stored in a std:string is typically very low cost as well.
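For illustration, a rough sketch of that point (f is declaration-only here and stands in for any function taking a const char*):
#include <string>

void f(const char* str);  // stand-in declaration for the interface discussed above

void demo(const std::string& stored) {
    f("a string literal");   // zero cost: the literal already is a const char*
    f(stored.c_str());       // typically very cheap: returns the stored buffer
}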
Personally, I wouldn't bother with the overload. In all but the simplest cases you will want to merge the two code paths and have one call the other at some point, or have both call a common function. It could be argued that having the overload hides whether one is converted to the other or not and which path has a higher cost.
Only if I actually wanted to use const features of the std::string interface inside the function would I have const std::string& in the interface itself and I'm not sure that just using size() would be enough of a justification.
In many projects, for better or worse, alternative string classes are often used. Many of these, like std::string, give cheap access to a zero-terminated const char*; converting to a std::string requires a copy. Requiring a const std::string& in the interface dictates a storage strategy even when the internals of the function don't need to specify one. I consider this to be undesirable, much like taking a const shared_ptr<X>& dictates a storage strategy, whereas taking X&, if possible, allows the caller to use any storage strategy for the passed object.
The disadvantages of a const char* are that, purely from an interface standpoint, it doesn't enforce non-nullness (although very occasionally the difference between a null parameter and an empty string is used in some interfaces - this can't be done with std::string), and a const char* might be the address of just a single character. In practice, though, the use of a const char* to pass a string is so prevalent that I would consider citing this as a negative to be a fairly trivial concern. Other concerns, such as whether the encoding of the characters is specified in the interface documentation (this applies to both std::string and const char*), are much more important and likely to cause more work.

The answer should depend heavily on what you are intending to do in f. If you need to do some complex processing with the string, approach 2 makes sense; if you simply need to pass it on to some other functions, then select based on those other functions (let's say, for argument's sake, you are opening a file - what would make most sense? ;) )

It's also possible to write an overload and accept both:
void f(const string& str) already accepts both because of the implicit conversion from const char* to std::string. So #3 has little advantage over #2.

I would choose void f(const string& str) if the function body does not do char-level analysis; that is, it does not refer to the char* of str.

Use (2).
The first stated problem with it is not an issue, because the string has to be created at some point regardless.
Fretting over the second point smells of premature optimization. Unless you have a specific circumstance where the heap allocation is problematic, such as repeated invocations with string literals, and those cannot be changed, then it is better to favor clarity over avoiding this pitfall. Then and only then might you consider option (3).
(2) clearly communicates what the function accepts, and has the right restrictions.
Of course, all 5 are improvements over foo(char*) which I have encountered more than I would care to mention.

Avoiding the func(char*) API on embedded

Note:
I heavily changed my question to be more specific, but I will keep the old question at the end of the post, in case it is useful to anyone.
New Question
I am developing an embedded application which uses the following types to represent strings:
string literals (null-terminated by default)
std::array<char,size> (not null-terminated)
std::string_view
I would like to have a function that accepts all of them in a uniform way. The only problem is that if the input is a string literal I have to count the size with strlen, which doesn't work in the other two cases, while size() works for those but not for case 1.
Should I use a variant, like std::variant<const char*, std::span<char>>? Would that be heavyweight, since it forces me to use std::visit? Would it even match all the different representations of strings correctly?
Old Question
Disclaimer: when I refer to a "string" in the following context I don't mean a std::string but just an abstract way to refer to an alphanumeric series.
In most cases when I have to deal with strings in C++ I use something like void func(const std::string &);, or in some cases without the const and the reference. Now, on an embedded app, I don't have access to std::string, and I tried to use std::string_view; the problem is that std::string_view, when constructed from a non-literal, is sometimes not null-terminated.
Edit: I changed the question a bit, as the comments provided some very helpful hints.
So even though y has a size in the example below:
std::array<char,5> x{"aa"} ;
std::string_view y(x.data());
I can't use y with a C API like printf("%s", y.data()) that relies on null termination:
#include <array>
#include <cstdio>
#include <string_view>

int main() {
    std::array<char, 5> x{"aaa"};
    std::string_view y(x.data());
    std::printf("%s", y.data());  // relies on null termination, which string_view does not guarantee
}
To summarize:
What can I do to implement a stack-allocated string that implicitly gets a static size from its constructors (from null-terminated strings, string literals, string_views and std::arrays) and is movable (or cheaply copyable)?
What would be the underlying type of my class? What would be the speed costs in comparison with the underlying type?
I think that you are looking at two largely and three subtly different semantics of char*.
Yes, all of them point at char, but the type-specific information on how to determine the length is not carried by that. Even in the ancient ancestor of C++ (not to say C...) a pointer to char was not always the same thing: even there, pointers to terminated and non-terminated sequences of characters could not be mixed.
In C++ the tool of overloading a function exists, and it seems to be the obvious solution for your problem. You can still implement that efficiently with only one (helper) function doing the actual work, based on explicit size information in a second parameter.
Overload the function which is "visible" on the API, with three versions for the three types. Have it determine the length in the appropriate way, then call the single helper function, providing that length.
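A rough sketch of what that could look like for the three types from your question (f and f_impl are just illustrative names; the definition of f_impl is assumed to live elsewhere):
#include <array>
#include <cstring>
#include <string_view>

namespace detail {
    void f_impl(const char* data, std::size_t size);  // the single function that does the real work
}

inline void f(const char* literal) {                   // null-terminated input: measure it once here
    detail::f_impl(literal, std::strlen(literal));
}

inline void f(std::string_view sv) {                   // the view already carries its size
    detail::f_impl(sv.data(), sv.size());
}

template <std::size_t N>
void f(const std::array<char, N>& arr) {               // fixed-size buffer, not null-terminated
    detail::f_impl(arr.data(), N);
}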

Why does C++20 format have no strong typedef for the format string?

C++20 introduces the following format function (locale and wstring_view versions ignored since they do not affect the question):
template<class... Args>
std::string format(std::string_view fmt, const Args&... args);
There is nothing wrong with this, but I wonder why there is not an overload that accepts a "strong typedef", something like
template<class... Args>
std::string format(std::format_string fmt, const Args&... args);
My guesses would be some or all of the following:
increased complexity of implementation
increased compilation time
code bloat
But I wonder if this was ever discussed during standardization.
The point of strong typedefs is to prevent this from working:
void takes_id(SomeIdType);
takes_id(42);
The point of format is to allow this to work:
format("User {} owes me {} points.", name, 100);
That is a string literal. Requiring a strong type means more burden on the users, having to write something like this:
format(format_string("User {} owes me {} points."), name, 100);
This isn't a burden on the typical strong typedef use case, since you will actually be trafficking in SomeIdTypes. You'll have a function that gives you a SomeIdType, you'll store a member of type SomeIdType. Basically, the amount of actual conversions will be fairly minimal... so on the call site you would just write takes_id(my_id) and the code mostly just looks the same, with added safety.
But the overwhelmingly common case for formatting is to use string literals, so that's a lot of added annotation.
The nominal benefit of strong typing is to catch users doing maybe something like this:
format(name, "User {} owes me {} points.", 100);
Or even:
format(name, 100);
The former seems unlikely to ever happen. The latter is certainly possible, if the first argument happens to be sufficiently string-like. But is this a sufficiently common problem as to force everyone to write more code? I don't think so.
Now, if string literals had their own distinct type from const char[N] (and I really wish they did), then it would be possible to create a type that is implicitly constructible from std::string_literal but needs to be explicitly constructed from std::string_view. And if that were a thing, then the API probably would've used that - since this would require no annotation in the common case and not using string literals seems sufficiently rare that requiring an explicit cast seems... fine?
Besides, on the question of safety, the issue isn't so much passing the wrong kind of string as actually being able to verify the value of it in its context:
format("User {} owes me {} points.", name);
We'd really like for this not to compile, even though we provided a format string in the correct spot. And it appears to be possible to do this. But we don't need strong typedefs for this either, we just need the ability to know if the format string is a constant expression or not.
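For illustration, here is a rough sketch of that idea (not the standard's actual machinery): a consteval constructor can only be invoked with arguments usable in a constant expression, such as string literals, so a run-time string is rejected at compile time.
#include <string>
#include <string_view>

struct checked_format_string {
    std::string_view str;

    template <class T>
    consteval checked_format_string(const T& s) : str(s) {
        // a real implementation would parse str here and fail the compilation
        // (e.g. by throwing) when the placeholders don't match the arguments
    }
};

std::string my_format(checked_format_string fmt /*, args... */);

// my_format("User {} owes me {} points.");  // fine: the constructor runs at compile time
// std::string runtime_fmt = read_somewhere();  // hypothetical string built at run time
// my_format(runtime_fmt);                   // error: not a constant expression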
To summarize, the answer to:
but I wonder why there is not an overload that accepts a "strong typedef"
is that this requires users to provide more call-site annotation while providing very minimal benefit. It would only catch incorrect uses in the rarest of cases, so it seems like a fairly bad trade-off.

Argument order for mixed const and non-const pass-by-reference

In keeping with the practice of using non-member functions where possible to improve encapsulation, I've written a number of classes that have declarations which look something like:
void auxiliaryFunction(
const Class& c,
std::vector< double >& out);
Its purpose is to do something with c's public member functions and fill a vector with the output.
You might note that its argument order resembles that of a Python member function, def auxiliaryFunction(self, out).
However, there are other reasonable ways of choosing the argument order: one would be to say that this function resembles an assignment operation, out = auxiliaryFunction(c). This idiom is used in, for example,
char* strcpy ( char* destination, const char* source );
What if I have a different function that does not resemble a non-essential member function, i.e. one that initializes a new object I've created:
void initializeMyObject(
const double a,
const std::vector<double>& b,
MyObject& out);
So, for consistency's sake, should I use the same ordering (mutable variable last) as I did in auxiliaryFunction?
In general, is it better to choose (non-const , const) over (const, non-const), or only in certain situations? Are there any reasons for picking one, or should I just choose one and stick with it?
(Incidentally, I'm aware of Google style guide's suggestion of using pointers instead of non-const references, but that is tangential to my question.)
The STL algorithms place output (non-const) parameters last. There you have a precedent for C++ that everyone should be aware of.
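For example, the standard algorithms put the output destination after the const inputs:
#include <algorithm>
#include <string>
#include <vector>

void demo(const std::string& in) {
    std::vector<char> out(in.size());
    // const input range first, non-const output destination last
    std::copy(in.begin(), in.end(), out.begin());
}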
I also tend to order arguments from important, to less important. (i.e. size of box goes before box-margin tweak value.)
(Note though: Whatever you do, be consistent! That's infinitely more important than choosing this or that way...)
A few points that may be of help:
Yes, I like the idea of following the standard library's argument-ordering convention as much as possible. Principle of least surprise. So, good to go. However, remember that the C standard library itself is not very well crafted, particularly if you look at the file-handling API. So beware -- learn from those mistakes.
const with basic arithmetic parameter types is rarely used; it'd be more of a surprise if you did use it.
The STL, even with its deficiencies, provides, IMO, a better example.
Finally, note that C++ has another advantage called Return Value Optimization, which is turned on by default in most modern compilers. I'd use that and rewrite your initializeMyObject, or, even better, use a class and constructors.
Pass by const reference rather than by value -- it saves a lot of copying overhead (both time and memory penalties).
So, your function signature should be more like this:
MyObject initializeMyObject(
    double a,
    const std::vector<double>& b);
(And I might be tempted to hide the std::vector<double> behind a typedef if possible.)
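A rough sketch of that rewrite; MyObject's members here are an assumption, just to make the example self-contained:
#include <vector>

struct MyObject {          // members assumed purely for illustration
    double a;
    std::vector<double> b;
};

MyObject initializeMyObject(double a, const std::vector<double>& b) {
    return MyObject{a, b};  // RVO / guaranteed copy elision (C++17) constructs this in the caller's object
}

// MyObject obj = initializeMyObject(0.5, {1.0, 2.0, 3.0});  // no output parameter needed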
In general, is it better to choose (non-const , const) over (const, non-const), or only in certain situations? Are there any reasons for picking one, or should I just choose one and stick with it?
Use a liberal dose of const whenever you can: for parameters, for functions. You are making a promise to the compiler (to be true to your design), and the compiler will help you with diagnostics every time you digress. Make the most of your language features and compilers. Further, learn about const& (const references) and their potential performance benefits.

Char array pointer vs string reference in params

I often see the following structure, especially in constructors:
MyClass::MyClass(const string &filename)
{
}
MyClass::MyClass(const char * const filename)
{
}
By stepping through with a debugger, I found that the second constructor is always called if I pass a hard-coded string.
Any idea:
1) Why the dual structure is used?
2) What is the speed difference?
Thanks.
Two constructors are needed because a NULL could otherwise be passed to MyClass::MyClass(const std::string &arg). Providing the second constructor saves you from a silly crash.
For example, you write a constructor for your class and make it take a const std::string& so that you don't have to check pointer validity, as you would with a const char*.
And everywhere in your code you're just using std::strings. At some point you (or another programmer) passes in a const char*. Here comes the nice part of std::string - it has a constructor that takes a char*, and that's very good, apart from the fact that std::string a_string(NULL) compiles without any problems but simply doesn't work.
That's where a second constructor like you've shown comes in handy:
MyClass::MyClass(const char* arg)
: m_string(arg ? arg : "")
{}
and it will make a valid std::string object if you pass it a NULL.
In this case I don't think you'd need to worry about any speed. You could try measuring, although I'm afraid you'd be surprised with how little difference (if any) there would be.
EDIT: I just tried std::string a_string(NULL);, it compiles just fine, and here's what happens when it is run on my machine (OS X + gcc 4.2.1) (I do recall I tried it on Windows some time ago, and the result was very similar if not exactly the same):
std::logic_error: basic_string::_S_construct NULL not valid
This is useful if the implementation deals with const char*s by itself but is mostly called by std::string users. These can call using the std::string API, which usually just calls c_str() and dispatches to the const char* implementation. On the other hand, if the caller already has a C string, no temporary or unneeded std::string needs to be constructed (which can be costly; for longer strings it's a heap allocation).
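A sketch of that dispatch pattern (process is just an illustrative name; the const char* version is assumed to be defined elsewhere):
#include <string>

void process(const char* s);                  // the real implementation, defined elsewhere

inline void process(const std::string& s) {   // thin convenience overload for std::string users
    process(s.c_str());                       // no temporary std::string is created
}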
Also, I once used it to resolve the following case:
My interface took std::string's, but had to be implemented in an external module, thus the STL binary versions of both the module AND the caller module had to match exactly, or it would have crashed (not really good for a portable library… ). So I changed the actual interface to use const char*, and added std::string overloads which I declared inline, so they weren't exported. It didn't break existing code, but resolved all my module boundary problems.
1) Why the dual structure is used?
The string reference version is required if std::string objects are to be used conveniently as parameters, as there is no implicit conversion from a std::string to a const char* const. The const char* const version is optional, as character arrays can be implicitly converted into std::strings, but it is more efficient, as no temporary std::string need be created.
2) What is the speed difference?
You will need to measure that yourself.
They are offered basically for convenience. Sometimes, if you call C functions, you get char* pointers; other times you get strings, so offering both constructors is just a convenience for the caller. As for speed, both have virtually the same speed, as they both pass a memory address to the constructor.

operator char* in STL string class

Why doesn't the STL string class have an overloaded char* operator built-in? Is there any specific reason for them to avoid it?
If there was one, then using the string class with C functions would become much more convenient.
I would like to know your views.
Following is the quote from Josuttis' STL book:
However, there is no automatic type conversion from a string object to a C-string. This is for safety reasons to prevent unintended type conversions that result in strange behavior (type char* often has strange behavior) and ambiguities (for example, in an expression that combines a string and a C-string it would be possible to convert string into char* and vice versa). Instead, there are several ways to create or write/copy in a C-string. In particular, c_str() is provided to generate the value of a string as a C-string (as a character array that has '\0' as its last character).
You should always avoid cast operators, as they tend to introduce ambiguities into your code that can only be resolved with the use of further casts, or, worse, that compile but don't do what you expect. A char*() operator would have lots of problems. For example:
string s = "hello";
strcpy( s, "some more text" );
would compile without a warning, but clobber the string.
A const version would be possible, but as strings must (possibly) be copied in order to implement it, it would have an undesirable hidden cost. The explicit c_str() function means you must always state that you really intend to use a const char *.
The string template specification deliberately allows for a "disconnected" representation of strings, where the entire string contents is made up of multiple chunks. Such a representation doesn't allow for easy conversion to char*.
However, the string template also provides the c_str method for precisely the purpose you want: what's wrong with using that method?
Around 1998-2002 this was a hot topic on C++ forums. The main problem is the zero terminator: the std::string specification allows a zero character as a normal character, but a char* C string doesn't.
You can use c_str instead:
string s("I like rice!");
const char* cstr = s.c_str();
I believe that in most cases you don't need the char*, and can work more conveniently with the string class itself.
If you need interop with C-style functions, using a std::vector<char> / <wchar_t> is often easier.
It's not as convenient, and unfortunately you can't O(1)-swap it with a std::string (now that would be a nice thing).
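For illustration, a rough sketch of that interop approach (snprintf stands in here for any C API that fills a char buffer):
#include <cstdio>
#include <string>
#include <vector>

std::string make_label(int id) {
    std::vector<char> buf(64, '\0');                       // contiguous, writable buffer
    std::snprintf(buf.data(), buf.size(), "item-%d", id);  // C API writes into the buffer
    return std::string(buf.data());                        // copied out; no O(1) swap into std::string
}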
In that respect, I much prefer the interface of MFC/ATL CString which has stricter performance guarantees, provides interop, and doesn't treat wide character/unicode strings as totally foreign (but ok, the latter is somewhat platform specific).