I want to refactor some printf/sprintf/fprintf statements into ostream/sstream/fstream statements. The code in question pretty-prints a series of integers and floating-point numbers, using whitespace padding and fixed numbers of decimal places.
It seems to me that this would be a good candidate for a Martin Fowler-style writeup of safe, step-by-step refactorings, with important gotchas noted. The first step, of course, is to get the legacy code into a test harness, which I have done.
What slow and careful steps can I take to perform this refactoring?
If refactoring is not the goal in itself, you can avoid it altogether (well, almost) by using a formatting library such as tinyformat which provides an interface similar to printf but is type safe and uses IOStreams internally.
Basic mechanics of the conversion:
Convert each printf-style clause %w.pf or %w.pe, where w is the field width and p is the precision, into << setw(w) << setprecision(p) << fixed (or << scientific for the %e forms). A before/after sketch follows this list.
Convert each printf-style clause %wd or %wi, where w is the field width, into << setw(w).
Convert "\n" to endl where appropriate.
Process for printf:
Create a char[] (let's call it text) with enough total width.
Convert the printf(...) to sprintf(text, ...), and use cout << text to actually print the text.
Complete using the common instructions.
Process for fprintf:
Same as printf, but use the appropriate fstream instead of cout.
If you already have an opened C-style FILE object that you do not want to refactor at this time, it gets a little sticky (but can be done).
Complete using the common instructions.
Process for sprintf:
If the string being written to is only used to output to a stream in the current context, refer to one of the two refactorings above.
Otherwise, begin by creating a stringstream and streaming the contents of the char[] you are writing to into that. If you still need to extract a char* from it, you can call .str() on the stringstream and then .c_str() on the resulting string; note that the pointer is only valid as long as that string object is kept alive. (A sketch of this process appears after the common instructions below.)
Complete using the common instructions.
Common instructions:
Convert each clause one by one into C++-style.
Remove *printf and char[] declarations as necessary when finished.
Apply other refactorings, particularly "Extract Method" (Fowler, Refactoring) as necessary.
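As an illustration of the sprintf process, here is a hedged sketch of an intermediate step and of the finished state (the function names and the format itself are invented for the example):

#include <cstdio>
#include <iomanip>
#include <sstream>
#include <string>

// Intermediate step: keep the sprintf, but route the result through a
// string stream so callers already consume a C++ stream/string.
std::string format_row_step1(double value, int count)
{
    char text[64];
    std::sprintf(text, "%10.4f %6d\n", value, count);
    std::ostringstream out;
    out << text;
    return out.str();
}

// Final state: each clause converted, so the char[] and the sprintf go away.
std::string format_row_step2(double value, int count)
{
    std::ostringstream out;
    out << std::setw(10) << std::setprecision(4) << std::fixed << value
        << ' ' << std::setw(6) << count << '\n';
    return out.str();
}

Because the output of the two versions is identical, the test harness mentioned in the question can verify each small step.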
In my code the following line gives me data that performs the task it's meant for:
const char *key = "\xf1`\xf8\a\\\x9cT\x82z\x18\x5\xb9\xbc\x80\xca\x15";
The problem is that it gets converted at compile time according to rules that I don't fully understand. How does "\x" work in a string?
What I'd like to do is to get the same result but from a string exactly like that fed in at run time. I have tried a lot of things and looked for answers but none that match closely enough for me to be able to apply.
I understand that \x denotes a hex number. But I don't know in which form that gets 'baked out' by the compiler (gcc).
What does that ` translate into?
Does the "\a" do something similar to "\x"?
This is indeed provided by the compiler, but this part is not part of the standard library. That means you are left with 3 ways:
dynamically write a C++ source file that contains the string and writes it to its standard output. Compile it and (provided popen is available) run it from your main program and read its output. Pretty ugly, isn't it...
use the source of an existing compiler, or directly its internal libraries. Clang is probably a good starting point because it has been designed to be modular. But it could require a good amount of work to find where that damned specific point is coded and how to use it...
just mimic what the compiler does, and write your own parser by hand. It is not that hard, and it will teach you why tests are useful...
If it was not clear until here, I strongly urge you to use the third way ;-)
If you want to translate "escape" codes in strings that you get as input at run-time then you need to do it yourself, explicitly.
One way is to read the input into one string. Then copy the characters from that source string into a new destination string, one by one. If you see a backslash then you discard it, fetch the next character, and if it's an x you can use e.g. std::stoi to convert the next few characters into its corresponding integer value, and append that number to the destination string (either adding it with std::to_string, or using output string streams and the normal "output" operator <<).
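A minimal sketch of that copy loop (the function name unescape is made up here, and only \x, \a, \n and the pass-through case are handled) could look like this:

#include <stdexcept>
#include <string>

std::string unescape(const std::string& src)
{
    std::string dst;
    for (std::size_t i = 0; i < src.size(); ++i) {
        if (src[i] != '\\') { dst += src[i]; continue; }
        if (++i == src.size()) throw std::runtime_error("trailing backslash");
        if (src[i] == 'x') {
            std::size_t used = 0;
            int value = std::stoi(src.substr(i + 1, 2), &used, 16); // up to two hex digits
            dst += static_cast<char>(value);  // append the byte value, as the compiler does
            i += used;
        } else if (src[i] == 'a') {
            dst += '\a';                      // the "\a" (bell) asked about above
        } else if (src[i] == 'n') {
            dst += '\n';
        } else {
            dst += src[i];                    // e.g. "\\" becomes '\', "\"" becomes '"'
        }
    }
    return dst;
}

Note that the \x branch appends the converted value as a single byte, which is what the compiler does for the literal in the question (C's \x actually accepts more than two hex digits; limiting it to two keeps the sketch short).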
In my scanner.lex file I have this:
{Some rule that matches strings} return STRING; //STRING is enum
in my c++ file I have this:
if (yylex() == STRING) {
    cout << "STRING: " << yytext << endl;
}
Obviously with some logic to take the input from stdin.
Now if this program gets the input "Hello\nWorld", my output is "STRING: Hello\nWorld", while I would want my output to be:
Hello
World
The same goes for other escape characters such as \", \0, \x<hex_number>, \t, \\... But I'm not sure how to achieve this. I'm not even sure whether that's a flex issue or whether I can solve it using only C++ tools...
How can I get this done?
As @Some programmer dude mentions in a comment, there is an example of how to do this using start conditions in the Flex documentation. That example puts the escaping rules into a separate start condition; each rule is implemented by appending the unescaped text to a buffer. And that's the way it's normally done.
Of course, you might find an external library which unescapes C-style escaped strings, which you could call on the string returned by flex. But that would be both slower and less flexible than the approach suggested in the Flex manual: slower because it requires a second scan of the string, and less flexible because the library is likely to have its own idea of what escapes to handle.
If you're using C++, you might find it more elegant to modify that example to use a std::string buffer instead of an arbitrary fixed-size character array. You can compile a flex-generated scanner with C++, so there is no problem using C++ standard library objects in your scanner code.
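Roughly, the start-condition approach from the manual, adapted to a std::string buffer, might look like this (a sketch only: strbuf and SC_STRING are names invented here, and the rule set is deliberately incomplete):

%{
#include <string>
#include <cstdlib>
static std::string strbuf;   /* accumulates the unescaped text */
%}
%x SC_STRING
%%
\"                              { strbuf.clear(); BEGIN(SC_STRING); }
<SC_STRING>\\n                  { strbuf += '\n'; }
<SC_STRING>\\t                  { strbuf += '\t'; }
<SC_STRING>\\x[0-9a-fA-F]{1,2}  { strbuf += static_cast<char>(std::strtol(yytext + 2, nullptr, 16)); }
<SC_STRING>\\.                  { strbuf += yytext[1]; /* handles \" \\ and any other escaped char */ }
<SC_STRING>\"                   { BEGIN(INITIAL); return STRING; }
<SC_STRING>.                    { strbuf += yytext[0]; }
%%

The caller then reads the unescaped token text from strbuf instead of yytext.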
Depending on the various semantic value types you are managing, you will probably want to modify the yylex prototype to either use an additional reference parameter or a more structured return type, in order to return the token value to the caller. Note that while it is OK to use yytext before the next call to yylex, it's not generally considered good style since it won't work with most parsers: in general, parsers require the ability to look one or more tokens ahead, and thus yytext is likely to be overwritten by the time your parser needs its value. The flex manual documents the macro hook used to modify the yylex() prototype.
NOTE: I've seen the post What is the cin analougus of scanf formatted input? before asking this question, and it doesn't solve my problem. That post asks for the C++ way to do it, but as I explain below, it is sometimes inconvenient to use only the C++ way, and I have clear examples of that.
I am trying to read data from an istream object, and sometimes it is inconvenient to do it with C++-style mechanisms such as operator>>. For example, the data may be in the special form 123:456, so you have to imbue a locale that treats ':' as whitespace (which is very hacky, compared with %d:%d in scanf), or 00123 where you want to read it as a string and convert it as decimal rather than octal (as opposed to %d in scanf), and possibly many other cases.
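(For reference, the imbue trick mentioned above looks roughly like this; colon_is_space is a name made up just for the sketch:)

#include <iostream>
#include <locale>
#include <sstream>
#include <vector>

// A ctype facet that classifies ':' as whitespace, so that `in >> a >> b` skips it.
struct colon_is_space : std::ctype<char> {
    colon_is_space() : std::ctype<char>(make_table()) {}
    static const mask* make_table() {
        static std::vector<mask> table(classic_table(), classic_table() + table_size);
        table[':'] |= space;
        return table.data();
    }
};

int main()
{
    std::istringstream in("123:456");
    in.imbue(std::locale(in.getloc(), new colon_is_space));
    int a, b;
    if (in >> a >> b) std::cout << a << " " << b << "\n";
}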
The reason I chose istream as the interface is that it can be derived from and is therefore more flexible. For example, we can create in-memory streams, or customized streams generated on the fly, etc. A C-style FILE*, on the other hand, is very limited when it comes to creating customized streams, at least in a standard-compliant way.
So my question is: is there a way to do scanf-like data extraction on an istream object? I think fscanf internally reads character by character from the FILE* using fgetc, and istream provides a similar interface. So it would be possible to just copy and paste the code of fscanf and replace the FILE* with the istream object, but that's very hacky. Is there a smarter and cleaner way, or is there some existing work on this?
Thanks.
You should never, under any circumstances, use scanf or its relatives for anything, for three reasons:
Many format strings, including for instance all the simple uses of %s, are just as dangerous as gets.
It is almost impossible to recover from malformed input, because scanf does not tell you how far into the input (in characters) it had got when it hit something unexpected.
Numeric overflow triggers undefined behavior: yes, that means scanf is allowed to crash the entire program if a numeric field in the input has too many digits.
Prior to C++11, the C++ specification defined istream formatted input of numbers in terms of scanf, which means that last objection is very likely to apply to them as well! (In C++11 the specification is changed to use strto* instead and to do something predictable if that detects overflow.)
What you should do instead is: read entire lines of input into std::string objects with getline, hand-code logic to split them up into fields (I don't remember off the top of my head what the C++-string equivalent of strsep is, but I'm sure it exists) and then convert numeric strings to machine numbers with the strtol/strtod family of functions.
I cannot emphasize this enough: THE ONLY 100% RELIABLE WAY TO CONVERT STRINGS TO NUMBERS IN C OR C++, unless you are lucky enough to have a C++ runtime that is already C++11-conformant in this regard, IS WITH THE strto* FUNCTIONS, and you must use them correctly:
errno = 0;
char *ends;
result = strtoX(s, &ends, 10);   // strtoX stands for strtol/strtoul/strtod/...; omit the base for the float versions
if (s == ends || *ends || errno)
    parse_error();
(The OpenBSD manpages, linked above, explain why you have to do this fairly convoluted thing.)
(If you're clever, you can use ends and some manual logic to skip that colon, instead of strsep.)
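A sketch of that colon-skipping logic, for input shaped like the 123:456 example from the question (the function name parse_pair is invented):

#include <cerrno>
#include <cstdlib>
#include <string>

bool parse_pair(const std::string& line, long& a, long& b)
{
    const char *s = line.c_str();
    char *ends;

    errno = 0;
    a = std::strtol(s, &ends, 10);
    if (ends == s || *ends != ':' || errno) return false;  // first field must stop at ':'

    s = ends + 1;                                           // skip the colon
    errno = 0;
    b = std::strtol(s, &ends, 10);
    if (ends == s || *ends != '\0' || errno) return false;  // second field must consume the rest

    return true;
}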
I do not recommend mixing C++ I/O and C I/O. Not that they are really incompatible, but they can just plain interoperate wrongly.
For example, the Oracle docs recommend not mixing them: http://www.oracle.com/technetwork/articles/servers-storage-dev/mixingcandcpluspluscode-305840.html
But nothing stops you from reading data into a buffer and parsing it with standard C functions like sscanf.
...
string curString;
int a, b;
...
std::getline(inputStream, curString);
int sscanfResult = sscanf(curString.c_str(), "%d:%d", &a, &b);
if (2 != sscanfResult)
    throw "error";
...
But it won't help in some situations when your stream is just one long contiguous sequence of symbols (like a string turned into a memory stream).
Making your own fscanf from scratch, or porting(?) the original CRT function, actually isn't the worst possible idea. Just make sure you have tested it thoroughly (low-level custom char manipulation has always been a source of pain in C).
I've never really tried Boost.Spirit, and such parsing infrastructure could really be overkill for your project. But the Boost libraries are usually well tested and designed, so you could at least try it.
Based on @tmyklebu's comment, I implemented streamScanf, which wraps an istream as a FILE* via fopencookie: https://github.com/likan999/codejam/blob/master/Common/StreamScanf.cpp
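(The code in that repository is not reproduced here, but the underlying idea looks roughly like the following sketch; fopencookie is a glibc extension, and stream_read/wrap_istream are names made up for the example.)

#include <cstdio>
#include <iostream>
#include <istream>
#include <sstream>
#include <sys/types.h>

// Read callback: pull up to `size` bytes from the wrapped istream.
static ssize_t stream_read(void *cookie, char *buf, size_t size)
{
    std::istream &in = *static_cast<std::istream *>(cookie);
    in.read(buf, size);
    return in.gcount();
}

// Wrap an istream in a read-only FILE* so the *scanf family can be used on it.
static FILE *wrap_istream(std::istream &in)
{
    cookie_io_functions_t io = {};   // write/seek/close stay null
    io.read = stream_read;
    return fopencookie(&in, "r", io);
}

int main()
{
    std::istringstream src("123:456");
    FILE *fp = wrap_istream(src);
    int a = 0, b = 0;
    if (std::fscanf(fp, "%d:%d", &a, &b) == 2)
        std::cout << a << " " << b << "\n";
    std::fclose(fp);
    // Caveat: stdio buffers its reads, so the istream may be consumed further
    // than what fscanf actually matched.
}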
If I just want to print a string on screen, I can do that using those two ways:
printf("abc");
std::cout << "abc" << std::endl;
In cases like the examples shown above, is there an advantage to using printf over std::cout, or vice versa?
While cout is the proper C++ way, I believe that some people and companies (including Google) continue to use printf in C++ code because it is much easier to do formatted output with printf than with cout.
Here's an interesting example that I found here.
Compare:
printf( "%-20s %-20s %5s\n" , "Name" , "Surname" , "Id" );
and
cout << left << setw( 20 ) << "Name" << ' ' << setw( 20 ) << "Surname" << ' ' << right << setw( 5 ) << "Id" << endl;
printf and its associated friends are C functions. They work in C++, but do not have the type safety of C++ std::ostreams. Problems can arise in programs that use printf functions to format output based on user input (or even input from a file). For example:
int main()
{
    char a[] = {'1', '2', '3', '4'}; // a string that isn't 0-terminated
    int i = 50;
    printf("%s", a); // will keep printing characters until a 0 byte is found in memory
    printf("%s", i); // attempts to print a string, but the argument is actually an integer
}
C++ has much stronger type safety (and a std::string class) to help prevent problems like these.
I struggle with this very question myself. printf is in general easier to use for formatted printing, but the iostreams facility in C++ has the big advantage that you can create custom formatters for objects. I end up using both of them in my code as necessary.
The problem with using both and intermixing them is that printf and cout use separate output buffers; unless they are kept synchronized (std::ios::sync_with_stdio, which is on by default but often turned off for speed), or you run unbuffered or flush explicitly, the output can end up interleaved in the wrong order.
My main objection to C++ is that there is no fast output formatting facility similar to printf, so there is no way to easily control output for integer, hex, and floating point formatting.
Java had this same problem; the language ended up getting printf.
Wikipedia has a good discussion of this issue at http://en.wikipedia.org/wiki/Printf#C.2B.2B_alternatives_to_sprintf_for_numeric_conversion.
Actually for your particular example, you should have asked which is preferable, puts or cout. printf prints formatted text but you are just outputting plain text to the console.
For general use, streams (iostream, of which cout is a part) are more extensible (you can print your own types with them), and more generic in that you can write functions that print to any kind of stream, not just the console (or redirected output). You can get generic stream behaviour with printf too, by using fprintf, which takes a FILE* (and a FILE* is often not a real file), but this is more awkward.
Streams are "type safe" in that operator<< is overloaded on the type you are printing. printf is not type safe: because of its use of ellipses, you get undefined results if the arguments do not match the format string, and the compiler will not necessarily complain. You may even get a segfault / undefined behaviour if you miss a parameter or pass in a bad one (e.g. a number for %s, which gets treated as a pointer anyway); you could get that with cout too if it is used incorrectly.
printf does have some advantages, though: you can keep a format string as a template and reuse it for different data, even if that data is not in a struct, and formatting one variable does not "stick" for further output, because you specify the format for each variable. printf is also commonly said to be thread-safe, whereas cout is not.
Boost has combined the advantages of each with its boost::format library.
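A small sketch of boost::format in use (the format string is borrowed from the earlier Name/Surname example; the data values are made up):

#include <boost/format.hpp>
#include <iostream>

int main()
{
    // printf-like directives, but the arguments are fed in with % and checked at run time
    std::cout << boost::format("%-20s %-20s %5s\n") % "Name" % "Surname" % "Id";
    std::cout << boost::format("%-20s %-20s %5d\n") % "Alice" % "Liddell" % 42;
}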
printf has been borrowed from C and has some limitations. The most commonly mentioned limitation of printf is type safety: it relies on the programmer to correctly match the format string with the arguments. The second limitation, which again comes from the varargs mechanism, is that you cannot extend the behavior with user-defined types. printf knows how to print a fixed set of types, and that is all you will get out of it. Still, for the few things it can be used for, it is faster and simpler to format strings with printf than with C++ streams.
While most modern compilers are able to address the type-safety limitation and at least provide warnings (the compiler can parse the format string and check the arguments provided in the call), the second limitation cannot be overcome. Even in the first case, there are things the compiler cannot really help with, such as checking for null termination --but then again, the same problem arises with std::cout if you use it to print the same array.
On the other end, streams (including std::cout) can be extended to handle user defined types by means of overloaded std::ostream& operator<<( std::ostream&, type const & ) for any given user defined type type. They are type safe by themselves --if you pass in a type that has no overloaded operator<< the compiler will complain. They are, on the other hand, more cumbersome to produce formatted output.
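For example, a minimal sketch of that extension mechanism (Point is a made-up type):

#include <iostream>

struct Point { int x, y; };

// The overload mentioned above: streams can be taught about new types.
std::ostream& operator<<(std::ostream& os, Point const& p)
{
    return os << '(' << p.x << ", " << p.y << ')';
}

int main()
{
    std::cout << Point{3, 4} << '\n';   // prints (3, 4); there is no printf hook for this
}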
So what should you use? In general I prefer using streams, as overloading operator<< for my own types is simple and they can be used uniformly with all types.
Those two examples do different things. The latter will add a newline character and flush output (result of std::endl). std::cout is also slower. Other than that, printf and std::cout achieve the same thing and you can choose whichever you prefer. As a matter of preference, I'd use std::cout in C++ code. It's more readable and safer.
See this article if you need to format output using std::cout.
In general, you should prefer cout because it's more type-safe and more generic. printf isn't type-safe, nor is it generic at all. The only reason you might favour printf is speed: from memory, printf is many times faster than cout.
I'm wondering if there is a library like Boost Format, but which supports named parameters rather than positional ones. This is a common idiom in e.g. Python, where you have a context to format strings with that may or may not use all available arguments, e.g.
mouse_state = {}
mouse_state['button'] = 0
mouse_state['x'] = 50
mouse_state['y'] = 30
#...
"You clicked %(button)s at %(x)d,%(y)d." % mouse_state
"Targeting %(x)d, %(y)d." % mouse_state
Are there any libraries that offer the functionality of those last two lines? I would expect it to offer a API something like:
PrintFMap(string format, map<string, string> args);
In Googling I have found many libraries offering variations of positional parameters, but none that support named ones. Ideally the library has few dependencies so I can drop it easily into my code. C++ won't be quite as idiomatic for collecting named arguments, but probably someone out there has thought more about it than me.
Performance is important, in particular I'd like to keep memory allocations down (always tricky in C++), since this may be run on devices without virtual memory. But having even a slow one to start from will probably be faster than writing it from scratch myself.
The fmt library supports named arguments:
print("You clicked {button} at {x},{y}.",
arg("button", "b1"), arg("x", 50), arg("y", 30));
And as syntactic sugar you can even (ab)use user-defined literals to pass arguments:
print("You clicked {button} at {x},{y}.",
"button"_a="b1", "x"_a=50, "y"_a=30);
For brevity the namespace fmt is omitted in the above examples.
Disclaimer: I'm the author of this library.
I've always been critical of C++ I/O (especially formatting) because in my opinion it is a step backward with respect to C. Formats need to be dynamic, and it makes perfect sense, for example, to load them from an external resource such as a file or a parameter.
Until now, however, I had never actually tried to implement an alternative, and your question prompted me to make an attempt, investing some weekend hours on this idea.
Sure, the problem was more complex than I thought (for example, just the integer formatting routine is 200+ lines), but I think this approach (dynamic format strings) is more usable.
You can download my experiment from this link (it's just a .h file) and a test program from this link (test is probably not the correct term, I used it just to see if I was able to compile).
The following is an example
#include "format.h"
#include <iostream>
using format::FormatString;
using format::FormatDict;
int main()
{
    std::cout << FormatString("The answer is %{x}") % FormatDict()("x", 42);
    return 0;
}
It differs from the boost.format approach because it uses named parameters and because the format string and format dictionary are meant to be built separately (and, for example, passed around). Also, I think that formatting options should be part of the string (as with printf) and not in the code.
FormatDict uses a trick for keeping the syntax reasonable:
FormatDict fd;
fd("x", 12)
("y", 3.141592654)
("z", "A string");
FormatString is instead just parsed from a const std::string& (I decided to pre-parse format strings; a slower but probably acceptable approach would be to just pass the string and re-parse it each time).
The formatting can be extended for user defined types by specializing a conversion function template; for example
struct P2d
{
    int x, y;
    P2d(int x, int y)
        : x(x), y(y)
    {
    }
};

namespace format {
    template<>
    std::string toString<P2d>(const P2d& p, const std::string& parms)
    {
        return FormatString("P2d(%{x}; %{y})") % FormatDict()
            ("x", p.x)
            ("y", p.y);
    }
}
after that a P2d instance can be simply placed in a formatting dictionary.
Also it's possible to pass parameters to a formatting function by placing them between % and {.
For now I only implemented an integer formatting specialization that supports
Fixed size with left/right/center alignment
Custom filling char
Generic base (2-36), lower or uppercase
Digit separator (with both custom char and count)
Overflow char
Sign display
I've also added some shortcuts for common cases, for example
"%08x{hexdata}"
is a hex number with 8 digits padded with '0's.
"%026/2,8:{bindata}"
is a 24-bit binary number (as required by "/2") with digit separator ":" every 8 bits (as required by ",8:").
Note that the code is just an idea; for example, for now I simply prevented copies, when it would probably be reasonable to allow storing both format strings and dictionaries (for dictionaries, however, it's important to provide a way to avoid copying an object just because it needs to be added to a FormatDict; while IMO this is possible, it also raises non-trivial questions about lifetimes).
UPDATE
I've made a few changes to the initial approach:
Format strings can now be copied
Formatting for custom types is done using template classes instead of functions (this allows partial specialization)
I've added a formatter for sequences (two iterators). Syntax is still crude.
I've created a github project for it, with boost licensing.
The answer appears to be, no, there is not a C++ library that does this, and C++ programmers apparently do not even see the need for one, based on the comments I have received. I will have to write my own yet again.
Well, I'll add my own answer as well; not that I know of (or have coded) such a library, but to answer the "keep the memory allocations down" bit.
As always I can envision some kind of speed / memory trade-off.
On the one hand, you can parse "Just In Time":
class Formatter:
    def __init__(self, format): self._string = format
    def compute(self, context):
        for k, v in context.items():
            while self.__contains(k):
                left, variable, right = self.__extract(k)
                self._string = left + self.__replace(variable, v) + right
This way you don't keep a "parsed" structure at hand, and hopefully most of the time you'll just insert the new data in place (unlike Python, C++ strings are not immutable).
However it's far from being efficient...
On the other hand, you can build a fully constructed tree representing the parsed format. You will have several classes like: Constant, String, Integer, Real, etc... and probably some subclasses / decorators as well for the formatting itself.
I think however that the most efficient approach would be some kind of mix of the two:
explode the format string into a list of Constant, Variable
index the variables in another structure (a hash table with open-addressing would do nicely, or something akin to Loki::AssocVector).
There you are: you're done with only 2 dynamically allocated arrays (basically). If you want to allow the same key to be repeated multiple times, simply use a std::vector<size_t> as the value type of the index: good implementations should not allocate any memory dynamically for small vectors (VC++ 2010 doesn't for less than 16 bytes worth of data).
When evaluating the context itself, look up the instances. You then parse the formatter "just in time", check it against the current type of the value with which to replace it, and process the format.
Pros and cons:
- Just In Time: you scan the string again and again
- One Parse: requires a lot of dedicated classes, possibly many allocations, but the format is validated on input. Like Boost it may be reused.
- Mix: more efficient, especially if you don't replace some values (allow some kind of "null" value), but delaying the parsing of the format delays the reporting of errors.
Personally I would go for the One Parse scheme, trying to keep the allocations down using boost::variant and the Strategy Pattern as much as I could.
Given that Python itself is written in C and that formatting is such a commonly used feature, you might be able (ignoring copyright issues) to rip the relevant code out of the Python interpreter and port it to use STL maps rather than Python's native dicts.
I've written a library for this purpose; check it out on GitHub.
Contributions are welcome.