Is there a canonical / public / freely available variant of std::stringstream where I don't pay for a full string copy each time I call str()? (Possibly by providing a direct c_str() member in the ostream class?)
I've found two questions here:
C++ stl stringstream direct buffer access (Yeah, it's basically the same title, but note that its accepted answer doesn't fit this question at all.)
Stream from std::string without making a copy? (Again, the accepted answer doesn't match this question.)
And "of course" the deprecated std::strstream class does allow for direct buffer access, although it's interface is really quirky (apart from it being deprecated).
It also seems one can find several code samples explaining how to customize std::streambuf to allow direct access to the buffer -- I haven't tried it in practice, but it seems fairly easy to implement.
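For illustration, a minimal sketch of what such a customization could look like (untested and not what the linked samples necessarily do; append-only use is assumed, the names are mine, and note that the exposed range is not null-terminated, which ties into the discussion further down):
#include <sstream>
#include <cstddef>
// Sketch only: expose the put area of a std::stringbuf so the formatted
// characters can be read without copying. pbase()/pptr() are protected
// members of basic_streambuf, so a derived class may hand them out.
// The pointer is invalidated by further insertions, and the range is NOT
// null-terminated, so data() and size() must be used together.
struct direct_stringbuf : std::stringbuf {
    const char* data() const { return pbase(); }                       // start of the put area
    std::size_t size() const { return std::size_t(pptr() - pbase()); } // characters written so far
};
struct direct_ostream : std::ostream {
    direct_ostream() : std::ostream(&buf_) {}
    const char* data() const { return buf_.data(); }
    std::size_t size() const { return buf_.size(); }
private:
    direct_stringbuf buf_;
};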
My question here is really twofold:
Is there any deeper reason why std::[o]stringstream (or, rather, basic_stringbuf) does not allow direct buffer access, but only access through an (expensive) copy of the whole buffer?
Given that it seems easy, but not trivial, to implement this, is there any variant available via Boost or other sources that packages this functionality?
Note: The performance hit of the copy that str() makes is very measurable(*), so it seems weird to have to pay for this when the use cases I have seen so far really never need a copy returned from the stringstream. (And if I'd need a copy I could always make it at the "client side".)
(*): With our platform (VS 2005), the results I measure in the release version are:
// tested in a tight loop:
// variant stream: run time : 100%
std::stringstream msg;
msg << "Error " << GetDetailedErrorMsg() << " while testing!";
DoLogErrorMsg(msg.str().c_str());
// variant string: run time: *** 60% ***
std::string msg;
((msg += "Error ") += GetDetailedErrorMsg()) += " while testing!";
DoLogErrorMsg(msg.c_str());
So using a std::string with += (which obviously only works when I don't need custom/number formatting) is 40% faster than the stream version, and as far as I can tell this is only due to the completely superfluous copy that str() makes.
I will try to provide an answer to my first bullet,
Is there any deeper reason why std::ostringstream does not allow direct buffer access
Looking at how a streambuf / stringbuf is defined, we can see that the buffer character sequence is not NULL terminated.
As far as I can see, a (hypothetical) const char* std::ostringstream::c_str() const; function, providing direct read-only buffer access, can only make sense when the valid buffer range would always be NULL terminated -- i.e. (I think) when sputc would always make sure that it inserts a terminating NULL after the character it inserts.
I wouldn't think that this is a technical hindrance per se, but given the complexity of the basic_streambuf interface, I'm totally not sure whether it is correct in all cases.
As for the second bullet
Given that it seems easy, but not trivial to implement this, is there
any variant available via boost or other sources, that package this
functionality?
There is Boost.Iostreams and it even contains an example of how to implement an (o)stream Sink with a string.
I came up with a little test implementation to measure it:
#include <string>
#include <boost/iostreams/stream.hpp>
#include <libs/iostreams/example/container_device.hpp> // container_sink
namespace io = boost::iostreams;
namespace ex = boost::iostreams::example;
typedef ex::container_sink<std::wstring> wstring_sink;
struct my_boost_ostr : public io::stream<wstring_sink> {
    typedef io::stream<wstring_sink> BaseT;
    std::wstring result;

    my_boost_ostr() : BaseT(result)
    { }

    // Note: This is non-const for flush.
    // Suboptimal, but OK for this test.
    const wchar_t* c_str() {
        flush();
        return result.c_str();
    }
};
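For illustration, usage in the logging scenario from the original question might then look like this (a sketch; GetDetailedErrorMsgW() and DoLogErrorMsgW() stand in for hypothetical wide-character versions of the question's helpers):
my_boost_ostr msg;
msg << L"Error " << GetDetailedErrorMsgW() << L" while testing!";
DoLogErrorMsgW(msg.c_str()); // flush + direct pointer, no full string copy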
In the tests I did, using this with its c_str() helper ran slightly faster than a normal ostringstream with its copying str().c_str() version.
I do not include the measuring code here. Performance in this area is very brittle; make sure to measure your use case yourself! (For example, the constructor overhead of a string stream is non-negligible.)
Related
Is it possible to have a function like this:
const char* load(const char* filename_){
return
#include filename_
;
};
so you wouldn't have to hardcode the #include file?
Maybe with some macro?
I'm drawing a blank, guys. I can't tell if it's flat out not possible or if it just has a weird solution.
EDIT:
Also, the ideal is to have this as a compile time operation, otherwise I know there's more standard ways to read a file. Hence thinking about #include in the first place.
This is absolutely impossible.
The reason is - as Justin already said in a comment - that #include is evaluated at compile time.
Including files at run time would require a complete compiler "on board" the program. A lot of scripting languages support things like that, but C++ is a compiled language and works differently: compile time and run time are strictly separated.
You cannot use #include to do what you want to do.
The C++ way of implementing such a function is:
Find out the size of the file.
Allocate memory for the contents of the file.
Read the contents of the file into the allocated memory.
Return the contents of the file to the calling function.
It would be better to change the return type to std::string to ease the burden of dealing with dynamically allocated memory.
#include <fstream>
#include <string>

std::string load(const char* filename)
{
    std::string contents;

    // Open the file.
    std::ifstream in(filename);
    // If there is a problem in opening the file, deal with it.
    if ( !in )
    {
        // Problem. Figure out what to do with it.
    }

    // Move to the end of the file.
    in.seekg(0, std::ifstream::end);
    std::streamsize size = in.tellg();

    // Allocate memory for the contents.
    // Add an additional character for the terminating null character.
    contents.resize(static_cast<std::string::size_type>(size) + 1);

    // Rewind the file.
    in.seekg(0);

    // Read the contents.
    in.read(&contents[0], size);
    if ( in.gcount() != size )
    {
        // Problem. Figure out what to do with it.
    }

    contents[static_cast<std::string::size_type>(size)] = '\0';
    return contents;
}
PS
Using a terminating null character in the returned object is necessary only if you need to treat the contents of the returned object as a null-terminated string for some reason. Otherwise, it may be omitted.
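For completeness, a usage sketch (it assumes the load() definition above plus <iostream>; the file name is purely illustrative):
int main()
{
    // The file name is decided at run time, which is exactly what #include cannot do.
    std::string banner = load("banner.txt");
    std::cout << banner << '\n';
    return 0;
}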
I can't tell if it's flat out not possible
I can. It's flat out not possible.
The contents of the filename_ string are not determined until run time - the content is unknown when the preprocessor runs. Preprocessor macros are processed before compilation (or as the first step of compilation, depending on your perspective).
When the choice of the file name is determined at run time, the file must also be read at run time (for example using an fstream).
Also, the ideal is to have this as a compile time operation
The latest time you can affect the choice of included file is when the preprocessor runs. What you can use to affect the file is a preprocessor macro:
#define filename_ "path/to/file"
// ...
return
#include filename_
;
It is theoretically possible.
In practice, you're asking to write a PHP construct using C++. It can be done, as too many things can, but you need some awkward prerequisites.
A compiler has to be linked into your executable, because the operation you call "hardcoding" is essential for the code to be executed.
A (probably very fussy) linker, again built into your executable, to merge the new code and resolve any function calls etc. in both directions.
Also, the newly imported code would not be reachable by the rest of the program which was not written (and certainly not compiled!) with that information in mind. So you would need an entry point and a means of exchanging information. Then in this block of information you could even put pointers to code to be called.
Not all architectures and OSes will support this, because "data" and "code" are two concerns best left separate. Code is potentially harmful; think of it as nitric acid. External data is fluid and slippery, like glycerine. And handling nitroglycerine is, as I said, possible. Practical and safe are something completely different.
Once the prerequisites were met, you would have two or three nice extra functions and could write:
void *load(const char* filename, void *data) {
    // some "don't load twice" functionality is probably needed
    void *code = compile_source(filename);
    if (NULL == code) {
        // a get_last_compiler_error() would be useful
        return NULL;
    }
    if (EXIT_SUCCESS != invoke_code(code, data)) {
        // a get_last_runtime_error() would also be useful
        release_code(code);
        return NULL;
    }
    // it is now the caller's responsibility to release the code.
    return code;
}
And of course it would be a security nightmare, with source code left lying around and being imported into a running application.
Maintaining the code would be a different, but equally scary nightmare, because you'd be needing two toolchains - one to build the executable, one embedded inside said executable - and they wouldn't necessarily be automatically compatible. You'd be crying loud for all the bugs of the realm to come and rejoice.
What problem would be solved?
Implementing require_once in C++ might be fun, but you thought it could answer a problem you have. Which is it exactly? Maybe it can be solved in a more C++ish way.
A better alternative, also considering performance etc., is to compile a loadable module beforehand and load it at runtime.
If you need to perform small tunings to the executable, place parameters into an external configuration file and provide a mechanism to reload it. Once the modules conform to a fixed specification, you can even provide "plugins" that weren't available when the executable was first developed.
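As a rough sketch of the "compile a loadable module beforehand, load it at runtime" idea on Windows (the module name and the plugin_main entry point are made up; error handling is minimal):
#include <windows.h>
#include <iostream>

typedef int (*plugin_entry)(void* shared_data);

// Load a pre-built DLL at run time instead of #include-ing source.
int run_plugin(const char* module_name, void* shared_data)
{
    HMODULE mod = ::LoadLibraryA(module_name);
    if (!mod) {
        std::cerr << "could not load " << module_name << "\n";
        return -1;
    }
    plugin_entry entry =
        reinterpret_cast<plugin_entry>(::GetProcAddress(mod, "plugin_main"));
    int result = entry ? entry(shared_data) : -1;
    ::FreeLibrary(mod);
    return result;
}
On POSIX systems the same shape works with dlopen/dlsym/dlclose.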
Is it possible to get a pointer to the string stored in a stringstream object? Basically I want to copy the string from that location into another buffer.
I found that I can get the length from the code below:
myStringStreamObj.seekg(0, ios::end);
lengthForStringInmyStringStreamObj = myStringStreamObj.tellg();
I know I can always do myStringStreamObj.str().c_str(). However, my profiler tells me this code is taking time and I want to avoid it, hence I need a good alternative to get a pointer to that string.
My profiler also tells me that another part of the code, where I do myStringStreamObj.str(std::string()), is slow too. Can someone guide me on this as well?
Please note, I can't avoid the stringstream, as it's part of a big codebase which I can't change / don't have permission to change.
The answer is "no". The only documented API to obtain the formatted string contents is the str() method.
Of course, there's always a small possibility that whatever compiler or platform you're using might have its own specific non-standard and/or non-documented methods for accessing the internals of a stringstream object, which might be faster. Because your question did not specify any particular compiler or implementation, I must conclude that you are looking for a portable, standards-compliant answer; and the answer in that case is pretty much a "no".
I would actually be surprised if any particular compiler or platform, even some who might have a "reputation" for poisoning language standards <cough>, would offer any alternatives. I would expect that all implementations would prefer to keep the internal stringstream moving parts private, so that they can be tweaked and fiddled with, in future releases, without breaking binary ABI compatibility.
The only other possibility you might want to investigate is to obtain the contents of the stringstream using its iterators. This is, of course, an entirely different mechanism for pulling out what's in a stringstream, and is not as straightforward as just calling one method that hands you the string on a silver platter, so it's likely to involve a fairly significant rewrite of your code that works with the returned string. But it is possible that iterating over what's in the stringstream might turn out to be faster since, presumably, there will not be any need for the implementation to allocate a new std::string instance just for str()'s benefit and jam everything inside it.
Whether or not iterating will be faster in your case depends on how your implementation's stringstream works, and how efficient the compiler is. The only way to find out is to go ahead and do it, then profile the results.
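If you do want to try the iterator route, a minimal sketch might look like this (the names are illustrative; note that reading through the iterator consumes the stream's get position, so a seekg(0) is needed if the contents have to be read again):
#include <sstream>
#include <iterator>
#include <vector>

// Pull the characters straight out of the stringstream's buffer into a
// caller-owned buffer, bypassing the temporary string that str() builds.
void copy_out(std::stringstream& ss, std::vector<char>& buffer)
{
    buffer.assign(std::istreambuf_iterator<char>(ss),
                  std::istreambuf_iterator<char>());
    buffer.push_back('\0'); // only if a C string is needed afterwards
}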
I cannot provide a portable, standards-compliant method.
Although you can't get the internal buffer, you can provide your own.
According to the standard, setting the internal buffer of a std::stringbuf object has implementation-defined behaviour.
cplusplus.com: std::stringbuf::setbuf()
As it happens, in my implementation (GCC 4.8.2) the behaviour is to use the external buffer you provide instead of its internal std::string.
So you can do this:
#include <sstream>
#include <iostream>

int main()
{
    std::ostringstream oss;
    char buf[1024]; // this is where the data ends up when you write to oss
    oss.rdbuf()->pubsetbuf(buf, sizeof(buf));
    oss << "Testing" << 1 << 2 << 3.2 << '\0';
    std::cout << buf; // see the data
}
But I strongly advise that you only do stuff like this as a (very) temporary measure while you sort out something more efficient that is portable according to the standard.
Having said all that, I looked at how my implementation implements std::stringstream::str(), and it basically returns its internal std::string, so you get direct access, and with optimization turned on the function calls should be completely optimized away. So this should be the preferred method.
When trying to come up with an answer to this question, I wrote this little test-program:
#include <iostream>
#include <fstream>
#include <vector>
#include <iterator>
#include <algorithm>
void writeFile() {
    int data[] = {0,1,2,3,4,5,6,7,8,9,1000};
    std::basic_ofstream<int> file("test.data", std::ios::binary);
    std::copy(data, data+11, std::ostreambuf_iterator<int>(file));
}

void readFile() {
    std::basic_ifstream<int> file("test.data", std::ios::binary);
    std::vector<int> data(std::istreambuf_iterator<int>(file),
                          (std::istreambuf_iterator<int>()));
    std::copy(data.begin(), data.end(),
              std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;
}

int main()
{
    writeFile();
    readFile();
    return 0;
}
It works as expected, writing the data to the file, and after reading the file, it correctly prints:
0 1 2 3 4 5 6 7 8 9 1000
However, I am not sure if there are any pitfalls (endianness issues aside, you always have these when dealing with binary data). Is this allowed?
It works as expected.
I'm not sure what you are expecting...
Is this allowed?
That's probably not portable. Streams rely on char_traits and on facets which are defined in the standard only for char and wchar_t. An implementation can provide more, but my bet would be that you are relying on a minimal default implementation of those templates and not on a deliberate implementation for int. I'd not be surprised if more in-depth use led to problems.
Instantiating any of the iostream classes, or basic_string, on anything but char or wchar_t, without providing a specific custom traits class, is undefined behavior; most of the libraries I've seen do define it to do something, but that definition often isn't specified, and is different between VC++ and g++ (the two cases I've looked at). If you define and use your own traits class, some of the functionality should work.
For just about all of the formatted inserters and extractors (the << and >> operators), istream and ostream delegate to various facets in the locale; if any of these are used, you'll have to take steps to ensure that these work as well. (This usually means providing a new numpunct facet.)
Even if you only use the streambuf (as in your example), filebuf uses the codecvt facet. And an implementation isn't required to provide a codecvt, and if it does, can do pretty much whatever it wants in it. And since filebuf always writes and reads char to and from the file, this translation must do something. I'm actually rather surprised that your code worked, because of this. But you still don't know what was actually on the disk, which means you can't document it, which means that you won't be able to read it sometime in the future.
If your goal is to write binary data, your first step should be to define the binary format, then write read and write functions which implement it. Possibly using the iostream << and >> syntax, and probably using a basic_streambuf<char> for the actual input and output; a basic_streambuf<char> that you've carefully imbued with the "C" locale. Or rather than define your own binary format, just use an existing one, like XDR. (All of this paragraph supposes that you want to keep the data, and read it later. If these are just temporary files, for spilling temporary internal data to disk during a single run, and will be deleted at the end of the program execution, simpler solutions are valid.)
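As a rough illustration of the "define the binary format, then write read and write functions" advice (a sketch only: a 32-bit little-endian layout chosen here for the example, EOF handling omitted; the streambuf is assumed to be opened in binary mode and imbued with the "C" locale as described above):
#include <cstdint>
#include <streambuf>

// Write a 32-bit value in a fixed, documented byte order (little-endian),
// so the bytes on disk do not depend on the platform's int representation.
void write_u32(std::streambuf& sb, std::uint32_t value)
{
    for (int shift = 0; shift < 32; shift += 8)
        sb.sputc(static_cast<char>((value >> shift) & 0xFF));
}

std::uint32_t read_u32(std::streambuf& sb)
{
    std::uint32_t value = 0;
    for (int shift = 0; shift < 32; shift += 8)
        value |= std::uint32_t(static_cast<unsigned char>(sb.sbumpc())) << shift;
    return value;
}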
Rather simple question.
Where should I store error, exception, and user messages?
So far, I have always declared local strings inside the function where they are going to be used and did not bother.
e.g.
SomeClass::function1(...)
{
    std::string str1("message1");
    std::string str2("message2");
    std::string str3("message3");
    ...
    // some code
    ...
}
Suddenly I realized that construction & initialization are performed on each call, and this might be quite expensive. Would it be better to store them as static strings in the class, or even in a separate module?
Localization is not the case here.
Thanks in advance.
Why not just use a string constant when you need it?
SomeClass::function1(...)
{
    /* ... */
    throw std::runtime_error("The foo blortched the baz!");
    /* ... */
}
Alternately, you can use static const std::strings. This is appropriate if you expect to copy them to a lot of other std::strings, and your C++ implementation does copy-on-write:
SomeClass::function1(...)
{
    static const std::string str_quux("quux"); // initialized once, on the first call
    xyz.someMember = str_quux; // might not require an allocation+copy
}
If you expect to make lots of copies of these strings, and you don't have copy-on-write (or can't rely on it being present), you might want to look into using boost::flyweight.
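For illustration, a minimal boost::flyweight sketch (the names are mine; equal strings share a single stored copy, so copying the flyweight object around is cheap):
#include <boost/flyweight.hpp>
#include <string>

typedef boost::flyweight<std::string> shared_msg;

void report()
{
    shared_msg quux("quux");           // the string is interned once
    shared_msg another("quux");        // refers to the same shared value
    const std::string& s = quux.get(); // read access without a copy
    (void)another; (void)s;
}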
TBH it's probably best to ONLY construct error messages when they are needed (i.e. if something goes badly wrong, who cares if you get a slowdown?). If the messages are always going to appear, then it's probably best to define them statically to avoid having them initialised each time. Generally, though, I only display user messages in debug mode, so it's quite easy to not show them if you are trying to do a performance build. I then only construct them when they are needed.
While troubleshooting some performance problems in our apps, I found out that C's stdio.h functions (and, at least for our vendor, C++'s fstream classes) are threadsafe. As a result, every time I do something as simple as fgetc, the RTL has to acquire a lock, read a byte, and release the lock.
This is not good for performance.
What's the best way to get non-threadsafe file I/O in C and C++, so that I can manage locking myself and get better performance?
MSVC provides _fputc_nolock, and GCC provides unlocked_stdio and flockfile, but I can't find any similar functions in my compiler (CodeGear C++Builder).
I could use the raw Windows API, but that's not portable and I assume would be slower than an unlocked fgetc for character-at-a-time I/O.
I could switch to something like the Apache Portable Runtime, but that could potentially be a lot of work.
How do others approach this?
Edit: Since a few people wondered, I had tested this before posting. fgetc doesn't do system calls if it can satisfy reads from its buffer, but it does still do locking, so locking ends up taking an enormous percentage of time (hundreds of locks to acquire and release for a single block of data read from disk). Not doing character-at-a-time I/O would be a solution, but C++Builder's fstream classes unfortunately use fgetc (so if I want to use iostream classes, I'm stuck with it), and I have a lot of legacy code that uses fgetc and friends to read fields out of record-style files (which would be reasonable if it weren't for locking issues).
I'd simply not do I/O a char at a time where performance matters.
fgetc is almost certainly not reading a byte each time you call it (where by 'reading' I mean invoking a system call to perform I/O). Look somewhere else for your performance bottleneck, as this is probably not the problem, and using unsafe functions is certainly not the solution. Any lock handling you do will probably be less efficient than the handling done by the standard routines.
The easiest way would be to read the entire file in memory, and then provide your own fgetc-like interface to that buffer.
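A rough sketch of that idea (the names are made up; error handling omitted): slurp the file once through the locked stdio calls, then hand out characters with no locking at all.
#include <cstdio>
#include <vector>

class MemoryReader {
public:
    explicit MemoryReader(std::FILE* f) : pos_(0) {
        char chunk[4096];
        std::size_t n;
        while ((n = std::fread(chunk, 1, sizeof chunk, f)) > 0)
            data_.insert(data_.end(), chunk, chunk + n);
    }
    int getc() {                       // fgetc-like, but lock-free
        return pos_ < data_.size()
            ? static_cast<unsigned char>(data_[pos_++])
            : EOF;
    }
private:
    std::vector<char> data_;
    std::size_t pos_;
};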
Why not just memory-map the file? Memory mapping is portable (except in Windows Vista, which requires you to jump through hoops to use it now, the dumbasses). Anyhow, map your file into memory, and do your own locking/not-locking on the resulting memory location.
The OS handles all the locking required to actually read from the disk - you'll NEVER be able to eliminate this overhead. But your processing overhead, on the other hand, won't be affected by extraneous locking other than that which you do yourself.
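One portable-ish way to do that is sketched below with Boost.Iostreams, which already appears earlier in this thread (the function name is illustrative, and parsing is left as a stub):
#include <boost/iostreams/device/mapped_file.hpp>

// Map the whole file and walk it byte by byte; no stdio locking is involved.
void parse_records(const char* path)
{
    boost::iostreams::mapped_file_source file(path);
    const char* p   = file.data();
    const char* end = p + file.size();
    while (p != end) {
        char c = *p++; // character-at-a-time processing without a per-character lock
        (void)c;       // ...field parsing would go here
    }
}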
The multi-platform approach is pretty simple: avoid functions or operators where the standard specifies that they should use a sentry. sentry is an inner class of the iostream classes which ensures stream consistency for every output character, and in a multi-threaded environment it locks the stream-related mutex for each character being output. This avoids race conditions at a low level but still makes the output unreadable, since strings from two threads might be output concurrently, as the following example shows:
thread 1 should write: abc
thread 2 should write: def
The output might look like: adebcf instead of abcdef or defabc. This is because sentry is implemented to lock and unlock per character.
The standard defines it for all functions and operators dealing with istream or ostream. The only way to avoid this is to use stream buffers and your own locking (per string for example).
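For example, a sketch of the "own locking per string" idea (Boost.Thread is used here since Boost already appears below; the global mutex and the function name are illustrative):
#include <iostream>
#include <string>
#include <boost/thread.hpp>

boost::mutex log_mutex; // one mutex per shared stream

// Format the whole line first, then push it through the stream buffer in one
// locked step, so output from different threads cannot interleave per character.
void write_line(std::ostream& os, const std::string& line)
{
    boost::mutex::scoped_lock lock(log_mutex);
    os.rdbuf()->sputn(line.data(), static_cast<std::streamsize>(line.size()));
    os.rdbuf()->sputc('\n');
}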
I have written an app which outputs some data to a file and measures the speed. If you add a function here that outputs using the fstream directly, without using the buffer and flush, you will see the speed difference. It uses Boost, but I hope that is not a problem for you. Try removing all the stream buffers and see the difference with and without them. In my case the performance drawback was a factor of 2-3 or so.
The following article by N. Myers explains how locales and sentry in C++ IOStreams work. And you should definitely look up in the ISO C++ Standard which functions use a sentry.
Good Luck,
Ovanes
#include <vector>
#include <fstream>
#include <iterator>
#include <algorithm>
#include <iostream>
#include <cassert>
#include <cstdlib>
#include <boost/progress.hpp>
#include <boost/shared_ptr.hpp>
double do_copy_via_streambuf()
{
    const size_t len = 1024*2048;
    const size_t factor = 5;
    ::std::vector<char> data(len, 1);
    std::vector<char> buffer(len*factor, 0);
    ::std::ofstream
        ofs("test.dat", ::std::ios_base::binary|::std::ios_base::out);
    noskipws(ofs);
    std::streambuf* rdbuf = ofs.rdbuf()->pubsetbuf(&buffer[0], buffer.size());
    ::std::ostreambuf_iterator<char> oi(rdbuf);
    boost::progress_timer pt;
    for(size_t i=1; i<=250; ++i)
    {
        ::std::copy(data.begin(), data.end(), oi);
        if(0==i%factor)
            rdbuf->pubsync();
    }
    ofs.flush();
    double rate = 500 / pt.elapsed();
    std::cout << rate << std::endl;
    return rate;
}

void count_average(const char* op_name, double (*fct)())
{
    double av_rate=0;
    const size_t repeat = 1;
    std::cout << "doing " << op_name << std::endl;
    for(size_t i=0; i<repeat; ++i)
        av_rate+=fct();
    std::cout << "average rate for " << op_name << ": " << av_rate/repeat
              << "\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n"
              << std::endl;
}

int main()
{
    count_average("copy via streambuf iterator", do_copy_via_streambuf);
    return 0;
}
One thing to consider is to build a custom runtime. Most compilers provide the source to the runtime library (I'd be surprised if it weren't in the C++ Builder package).
This could end up being a lot of work, but maybe they've localized the thread support to make something like this easy. For example, with the embedded system compiler I'm using, it's designed for this - they have documented hooks to add the lock routines. However, it's possible that this could be a maintenance headache, even if it turns out to be relatively easy initially.
Another similar route would be to talk to someone like Dinkumware about using a 3rd party runtime that provides the capabilities you need.