What is the strtok_s equivalent in VC7?

The strtok_s function exists in VC8 but not in VC7. So what's a function (or code) that does the equivalent of strtok_s in VC7?

Take a look at this MSDN page.
As far as I can tell, the security enhancements are: a) making strtok() reentrant (and thread-safe) by having it take a "context" parameter, and b) making it safe to use with NULL pointers. (The actual behaviors in the case of NULL parameters are listed in a table on the page I've linked.)
As for a VC7 alternative, you'll have to write (or import) one yourself. The NULL-safety is easy to add externally; you'll just have to be careful not to pass NULL strings where none are expected. But as far as reentrancy goes, there's no way for plain strtok() to provide that.
Take a look at this and this question. I believe POSIX also supplies a reentrant version of strtok() called strtok_r(); you can search for it. It would also be a good (and short) exercise to write an implementation yourself; it shouldn't take more than ~10 lines of code. A sketch follows.
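For a rough illustration, here is a minimal strtok_r-style implementation (a sketch of the classic POSIX semantics, not production code; the name my_strtok_r is made up):

#include <cstdio>
#include <cstring>

// Reentrant tokenizer sketch: `saveptr` carries the parsing state between
// calls instead of a hidden static variable, which is exactly what
// strtok_s/strtok_r add over plain strtok.
char* my_strtok_r(char* str, const char* delim, char** saveptr)
{
    if (str == nullptr)               // continue a previous tokenization
        str = *saveptr;
    str += std::strspn(str, delim);   // skip leading delimiters
    if (*str == '\0') {
        *saveptr = str;
        return nullptr;               // no more tokens
    }
    char* token = str;
    str += std::strcspn(str, delim);  // find the end of this token
    if (*str != '\0')
        *str++ = '\0';                // terminate the token in place
    *saveptr = str;
    return token;
}

int main()
{
    char buf[] = "a,b,c";
    char* save = nullptr;
    for (char* t = my_strtok_r(buf, ",", &save); t != nullptr;
         t = my_strtok_r(nullptr, ",", &save))
        std::puts(t);                 // prints a, b and c on separate lines
}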

Related

Which functions are affected by -fno-math-errno?

I have been excited by this post: https://stackoverflow.com/a/57674631/2492801 and I am considering using -fno-math-errno. But I would like to be sure that I do not harm the behaviour of the software I am working on.
Therefore I have checked the (rather large) codebase to see where errno is being used, and I wanted to decide whether these usages interfere with -fno-math-errno. But how do I do that? The documentation says:
-fno-math-errno
Do not set errno after calling math functions that are executed with a single instruction, e.g., sqrt...
But how can I know which math functions are executed with a single instruction? Is this documented somewhere? Where?
It seems as if the codebase I use relies on errno especially when calling strtol and when working with streams. I guess that strtol is not executed with a single instruction. Is it considered to be a math function at all? How can I be sure?
You can find the list of functions affected by -fno-math-errno in GCC's builtins.def (search for "ERRNO"). It seems that only some functions from the math.h header (cos, sin, exp, etc.) are affected. Treatment of other standard functions that use errno (strtol, etc.) will not change under this flag.
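To make the distinction concrete, here is a small sketch (my own illustration, not from GCC's documentation): the strtol check keeps working under -fno-math-errno, while the check after sqrt is exactly the kind the flag can break:

#include <cerrno>
#include <cmath>
#include <cstdio>
#include <cstdlib>

int main()
{
    // Unaffected by -fno-math-errno: strtol is not a math builtin,
    // so this overflow check remains reliable.
    errno = 0;
    long v = std::strtol("99999999999999999999", nullptr, 10);
    if (errno == ERANGE)
        std::puts("strtol overflow detected via errno");

    // Affected: with -fno-math-errno the compiler may emit a bare
    // square-root instruction that never touches errno, so this
    // check can silently stop working.
    errno = 0;
    double d = std::sqrt(-1.0);       // domain error
    if (errno == EDOM)
        std::puts("sqrt domain error seen via errno (may not fire)");

    std::printf("%ld %f\n", v, d);
}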

What is the equivalent standard function of the "AfxIsValidAddress" function?

I am working on an MFC project that shall be ported to a platform-independent environment, using std functions instead of MFC/AFX.
For example: std::string will be used instead of CString, and std::mutex instead of CMutex.
What is the platform-independent C++11 std:: equivalent of the MFC function "AfxIsValidAddress"?
There is nothing similar to AfxIsValidAddress() in the standard library, and it appears that the function doesn't actually do that much validation anyway.
See AfxIsValidAddress (and Others) Don’t Work as Advertised which says the function ends up just doing a check against NULL. It also has this to say about the family of valid address check functions:
There are several Win32 API similar in functionality: IsBadWritePtr, IsBadHugeWritePtr, IsBadReadPtr, IsBadHugeReadPtr, IsBadCodePtr, IsBadStringPtr. It has been known since at least 2004 that these functions are broken beyond repair and should never be used. The almighty Raymond Chen and Larry Osterman both discuss the reasons in detail, so just a short rehash: IsBad*Ptr all work by accessing the tested address and catching any thrown exceptions. Problem is that a certain few of these access violations (namely, those on stack guard pages) should never be caught – the OS uses them to properly enlarge thread stacks.
I think it is better to just follow standard C++ procedures to check that a pointer is not a nullptr, or better yet, to limit the use of pointers as much as possible. A sketch follows.
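As a hedged sketch of that advice (the helper names here are mine, purely illustrative): validate what you can actually know portably, which is nullness and ownership, rather than probing raw addresses:

#include <memory>
#include <stdexcept>

// The most a portable check can do is reject null.
void process(const int* p)
{
    if (p == nullptr)
        throw std::invalid_argument("process: null pointer");
    // ... use *p ...
}

// Better: make invalid states unrepresentable instead of checking for them.
void process_ref(const int& value);          // a reference cannot be null
void process_owned(std::unique_ptr<int> p);  // ownership is explicit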

C vs C++ file handling

I have been working in C and C++ and when it comes to file handling I get confused. Let me state the things I know.
In C, we use functions:
fopen, fclose, fwrite, fread, ftell, fseek, fprintf, fscanf, feof, fileno, fgets, fputs, fgetc, fputc.
FILE *fp for file pointer.
Modes like r, w, a
I know when to use these functions (Hope I didn't miss anything important).
In C++, we use functions / operators:
fstream f
f.open, f.close, f>>, f<<, f.seekg, f.seekp, f.tellg, f.tellp, f.read, f.write, f.eof.
Modes like ios::in, ios::out, ios::binary, etc...
So is it possible (recommended) to use C compatible file operations in C++?
Which is more widely used and why?
Is there anything other than these that I should be aware of?
Sometimes there's existing code expecting one or the other that you need to interact with, which can affect your choice, but in general the C++ versions wouldn't have been introduced if there weren't issues with the C versions that they could fix. Improvements include:
RAII semantics, which means e.g. fstreams close the files they manage when they leave scope (see the sketch after this list)
modal ability to throw exceptions when errors occur, which can make for cleaner code focused on the typical/successful processing (see http://en.cppreference.com/w/cpp/io/basic_ios/exceptions for API function and example)
type safety, such that how input and output is performed is implicitly selected using the variable type involved
C-style I/O has potential for crashes: e.g. int my_int = 32; printf("%s", my_int);, where %s tells printf to expect a pointer to an ASCIIZ character buffer but my_int appears instead. Firstly, the argument-passing convention may mean ints are passed differently from const char*s; secondly, sizeof(int) may not equal sizeof(const char*); and finally, even if printf extracts 32 as a const char*, at best it will just print random garbage from memory address 32 onwards until it coincidentally hits a NUL character. Far more likely, the process will lack permission to read some of that memory and the program will crash. Modern C compilers can sometimes validate the format string against the provided arguments, reducing this risk.
extensibility for user-defined types (i.e. you can teach streams how to handle your own classes)
support for dynamically sizing receiving strings based on the actual input, whereas the C functions tend to need hard-coded maximum buffer sizes and loops in user code to assemble arbitrary-sized input
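To illustrate a few of these points together (RAII, type-driven formatting, dynamically sized input), a small sketch; the file name is made up:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream in("data.txt");    // hypothetical input file
    if (!in)
        return 1;

    std::string line;
    while (std::getline(in, line))   // line grows to fit any input length
        std::cout << line << '\n';   // << is selected by the operand's type

    return 0;                        // `in` closes its file here (RAII)
}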
Streams are also sometimes criticised for:
verbosity of formatting, particularly "io manipulators" setting width, precision, base, padding, compared to the printf-style format strings
a sometimes confusing mix of manipulators that persist their settings across multiple I/O operations and others that are reset after each operation
lack of a convenience class for RAII pushing/saving and later popping/restoring the manipulator state
being slow, as Ben Voigt comments and documents here
The performance differences between printf()/fwrite style I/O and C++ IO streams formatting are very much implementation dependent.
Some implementations (Visual C++, for instance) build their IO streams on top of FILE * objects, and this tends to increase the run-time complexity of their implementation. Note, however, that there was no particular constraint to implement the library in this fashion.
In my own opinion, the benefits of C++ I/O are as follows:
Type safety.
Flexibility of implementation. Code can be written to do specific formatting or input to or from a generic ostream or istream object. The application can then invoke this code with any kind of derived stream object. If code that I have written and tested against a file now needs to be applied to a socket, a serial port, or some other kind of internal stream, you can create a stream implementation specific to that kind of I/O. Extending C-style I/O in this fashion is not even close to possible (see the sketch after this list).
Flexibility in locale settings: the C approach of using a single global locale is, in my opinion, seriously flawed. I have experienced cases where I invoked library code (a DLL) that changed the global locale settings underneath my code and completely messed up my output. A C++ stream allows you to imbue() any locale to a stream object.
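A sketch of that flexibility (the function and file names are mine): the formatting code is written once against the abstract std::ostream interface, then reused with any derived stream:

#include <fstream>
#include <iostream>
#include <sstream>

// Written once, against the abstract interface...
void write_report(std::ostream& os, int id, double value)
{
    os << "id=" << id << " value=" << value << '\n';
}

int main()
{
    write_report(std::cout, 1, 3.14);   // ...used with the console,

    std::ofstream file("report.txt");   // a (hypothetical) file,
    write_report(file, 2, 2.71);

    std::ostringstream buffer;          // or an in-memory stream.
    write_report(buffer, 3, 1.41);
    std::cout << buffer.str();
}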
An interesting critical comparison can be found here.
C++ FQA io
Not exactly polite, but it makes you think...
Disclaimer
The C++ FQA (a critical response to the C++ FAQ) is often considered by the C++ community a "stupid joke issued by a silly guy who doesn't even understand what C++ is or wants to be" (cit. from the FQA itself).
This kind of argumentation is often used to ignite (or escape from) religious battles between C++ believers, believers in other languages, and language atheists, each in his own humble opinion convinced of being somehow superior to the others.
I'm not interested in such battles; I just like to stimulate critical reasoning about the pro and con arguments. The C++ FQA, in this sense, has the advantage of placing the FQA and the FAQ one over the other, allowing an immediate comparison. And that is the only reason why I referenced it.
Following TonyD's comments below (thanks for them; they made me realize my intention needed clarification), it must be noted that the OP is not discussing just << and >> (I talk about them in my comments merely for brevity) but the entire function set that makes up the I/O models of C and C++.
With this idea in mind, think also of other "imperative" languages (Java, Python, D ...) and you'll see they all conform more to the C model than to the C++ one, sometimes even making it type-safe (which the C model is not, and that's its major drawback).
What my point is all about
At the time C++ came along as mainstream (1996 or so), the <iostream.h> library (note the ".h": pre-ISO) lived in a language where templates were not yet fully available and there was essentially no type-safe support for variadic functions (we had to wait until C++11 to get that), but which did have type-safe overloaded functions.
The idea of overloading << to return its first parameter over and over is, in fact, a way to chain a variable set of arguments using only a binary function that can be overloaded in a type-safe manner (see the sketch below). The extension of that idea to "state management functions" (like width() or precision()) through manipulators (like setw) appears as a natural consequence. These points, despite what you may think of the FQA author, are real facts. And it is also a matter of fact that the FQA is the only site I found that talks about this.
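A minimal sketch of that chaining mechanism (the Point type is mine):

#include <iostream>

struct Point { int x, y; };

// A binary, type-safe overload that returns its first parameter,
// which is what lets calls be chained: os << a << b << c.
std::ostream& operator<<(std::ostream& os, const Point& p)
{
    return os << '(' << p.x << ", " << p.y << ')';
}

int main()
{
    Point p{1, 2};
    std::cout << "point: " << p << '\n';  // each << returns std::cout
}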
That said, years later, when the D language was designed, offering variadic templates from the start, the writef function was added to the D standard library, providing a printf-like syntax while also being perfectly type-safe (see here).
Nowadays C++11 also has variadic templates, so the same approach can be put in place in just the same way.
Moral of the story
Both the C++ and C I/O models appear "outdated" with respect to a modern programming style.
C retains speed; C++ retains type safety and a "more flexible abstraction for localization" (though I wonder how many C++ programmers in the world are aware of locales and facets...) at a runtime cost (just trace with a debugger the << of a number as it goes through stream, buffer, locale and facet ... and all the related virtual functions!).
The C model is also easily extensible to parametric messages (the ones whose parameter order depends on the localization of the text they appear in) with positional format strings like
#1%d #2%i allowing scripting like "text #2%i text #1%d ..."
The C++ model has no concept of a "format string": the parameter order is fixed and intermixed with the text.
But C++11 variadic templates can be used to provide support that:
can offer both compile-time and run-time locale selection
can offer both compile-time and run-time parametric order
can offer compile-time parameter type safety
... all using a simple format-string methodology (a sketch follows this list).
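For instance, here is a bare-bones type-safe printf along the lines of the classic variadic-template example (a sketch of the idea only; locale and positional-order support would be layered on top):

#include <iostream>
#include <stdexcept>

// Base case: no arguments left; any remaining '%' is an error.
void safe_printf(const char* fmt)
{
    for (; *fmt; ++fmt) {
        if (*fmt == '%')
            throw std::runtime_error("missing argument");
        std::cout << *fmt;
    }
}

// Each '%' consumes one argument; the argument's static type selects
// the operator<< overload, so there is no %d/%s mismatch to get wrong.
template <typename T, typename... Rest>
void safe_printf(const char* fmt, const T& value, const Rest&... rest)
{
    for (; *fmt; ++fmt) {
        if (*fmt == '%') {
            std::cout << value;
            return safe_printf(fmt + 1, rest...);
        }
        std::cout << *fmt;
    }
    throw std::runtime_error("extra arguments");
}

int main()
{
    safe_printf("point (%, %)\n", 1, 2.5);  // prints: point (1, 2.5)
}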
Is it time to standardize a new C++ I/O model?

Length of a C string: std::strlen() vs. std::char_traits<char>::length()

Both are equivalent in that they return the length of the null-terminated character sequence. Are there reasons to prefer one over the other?
Use the simpler alternative. std::char_traits<char>::length is great and all, but for C strings it does the same thing and is much longer to write.
Do yourself a favour and avoid code bloat. I'm a huge fan of C++ functions over their C equivalents (e.g. I will never use std::strcpy or std::memcpy; there's a perfectly fine std::copy). But avoiding std::strlen is just silly.
One reason to use C++ functions exclusively is interface uniformity: for instance, both std::strcpy and std::memcpy have atrocious interfaces. However, std::strlen is a perfectly fine algorithm in the best tradition of C++. It doesn't generalise, true, but then neither do other class-specific free functions found in the standard library.
std::strlen() is a holdover from the C Standard Library and only operates on const char* (it is unsafe in that it has undefined behavior if the string is not null-terminated). If the string uses a wide character type (e.g. const unsigned short*), std::strlen() is useless.
std::char_traits<T>::length() will operate on whatever the type T is (e.g. if it is unsigned short, it will still operate properly). It also requires a null-terminated value, that is, the last value must be T(0); if the array of T passed to it is not null-terminated, the behavior is undefined as well.
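For illustration, here is a small side-by-side (my own example, not from either answer):

#include <cassert>
#include <cstring>
#include <string>

int main()
{
    // Equivalent for plain C strings:
    assert(std::strlen("hello") == 5);
    assert(std::char_traits<char>::length("hello") == 5);

    // char_traits generalizes to other character types:
    assert(std::char_traits<wchar_t>::length(L"hello") == 5);
    assert(std::char_traits<char16_t>::length(u"hello") == 5);
}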
In general, when dealing with strings, it is better to use std::string::length() instead of using C-style character strings.
Both Zack Howland and Konrad Rudolph have a point. Thanks. I accept both answers. The summarized reply would be:
There doesn't seem to be any reason except personal preference, either for shorter code or for the C++ standard library (I leave out generalization, since it wasn't the point of the question, as can be seen from the title).
std::strlen() is a C standard library compatibility function (even though it is part of ISO C++) that takes const char* as an argument. length() is a method of the std::string family of classes. So if you want to use strlen() on a std::string you'd have to write:
strlen(mystring.c_str())
which is less tidy than mystring.length(). Apart from that, there should be no tangible difference (for the char type, that is).

How to identify which forms are macros and which are functions while looking at a Clojure code?

Lisp/Clojure code has a consistent syntax, and that is a plus point, as one doesn't need to learn various different constructs.
But at times it is easier to understand a piece of code just from the distinct syntax being used, e.g. recognizing that this is a switch case or this is the pattern-matching construct, without actually reading the text.
I started out with Clojure a couple of months ago, and I have realized I can't understand code without reading the name of the form and then googling whether it is a macro or a function and how it works.
So it turns out that a piece of Clojure code, irrespective of the uniformity of its syntax, isn't uniform.
It may look like a function, but if it is actually a macro, then it might not evaluate all its arguments.
Is there a naming convention or indentation style that all macros use so it is easier for someone to grasp by the name what is going on ?
The most useful intuition in my opinion comes from understanding the purpose of a given operator / Var. Well-designed macros simply could not be written as functions and still offer the same functionality with the same syntax, for if they could, they would in fact be written as functions (see the "well-designed" part above!).1 So, if you're dealing with a construct which couldn't possibly be a regular function, then you know it isn't; otherwise it likely is.
Additionally, the usual ways of learning about the Vars exported by a library tell you whether you're dealing with a macro or a function up front. That is true of doc ((doc foo) says that foo is a macro near the top of its output if that is indeed the case), source (since it gives you the entire code) and M-. (jump to definition in Emacs with nrepl.el or swank-clojure; M-, jumps back). Documentation may be expected to mention what is a macro and what isn't (except that's not necessarily true of docstrings, since all usual ways of accessing a docstring already tell you whether you're dealing with a macro, as explained above).
If you're skimming a body of code with the intention of forming a rough understanding of what it probably does on the assumption that the various operators perform the functions suggested by their names, then either (1) the names are suggestive enough and you get an idea of what's intended by the code, so you don't even need to care which operators happen to be macros, or (2) the names are not suggestive enough, so you'll need to dive into the docs or the source for some of the operators anyway, and then the first thing you'll learn is which of them are registered as macros.
Finally, there is no single naming style for macros, although there are certain conventions specific to particular use cases. For example with-foo-style constructs tend to be convenience macros whose purpose is to simplify dealing with resources of type foo; dofoo-style constructs tend to be macros which take a body of expressions to be executed (how many times and with which additional context set up depends on the macro; the most basic member of this family, do, is actually a special form rather than a macro); deffoo-style constructs introduce new Vars or type-like entities.
It's worth pointing out that similar patterns are sometimes broken. For instance, most threading constructs (-> & Co.) are macros, but xml-> from clojure.data.zip.xml is a function. That makes perfect sense when one considers the functionality provided, which brings us back to the point about the purpose of an operator being the most useful source of intuition.
1 There might be some exceptions to this rule. One would expect these to be documented. Some projects are of course not documented at all (or very nearly so); here the issue goes away completely, since one must go to the source to make sense of things anyway.
There are two attributes that typically distinguish a macro (or sometimes special form) from a function:
When the form does some sort of binding (i.e. declaring new identifiers for later use)
When some of the arguments are evaluated lazily
Examples of the first case are let, letfn, binding and with-local-vars. Strangely though, defn is defined as a function, but I'm pretty sure it has something to do with Clojure's bootstrapping process (defn is defined before defmacro is defined).
Examples of the second would be and, or and lazy-seq. In all these constructs, the arguments are evaluated lazily by either putting them in conditional branches (like if) or moving them inside a function body.
Both of those attributes are really just manifestations of the macro manipulating the Clojure syntax. I don't think the threading macros (-> and ->>) fit very well into either of those categories, but the nil-safe versions (-?> and -?>>) kind of fall under having lazy arguments.
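To see the lazy-argument attribute concretely, here is a small illustration (my-and is a made-up function for contrast):

;; `and` is a macro: it expands into nested ifs, so evaluation
;; short-circuits and the throw below is never reached.
(and false (throw (Exception. "never evaluated")))
;; => false

;; A plain function cannot do that: its arguments are evaluated
;; before the call, so the commented line would throw.
(defn my-and [a b] (if a b a))
;; (my-and false (throw (Exception. "boom"))) ; throws before my-and runs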
As far as I know there is no enforced naming convention.
As a rule of thumb, functions are preferred wherever possible, but macros can sometimes be spotted when they follow the pattern def<something> for setting up a something or with-<resource> for doing something with an open resource.
Because of this, you may find Clojure's doc macro helpful. It will tell you whether a form is a macro/function/special form, as well as give its arg list and doc string (if present). For example,
(use 'clojure.repl)
(doc and)
will print the following to the REPL:
clojure.core/and
([] [x] [x & next])
Macro
Evaluates exprs one at a time, from left to right. If a form
returns logical false (nil or false), and returns that value and
doesn't evaluate any of the other expressions, otherwise it returns
the value of the last expr. (and) returns true.
Some editors (e.g. Emacs) will provide this documentation as a pop-up or on a key combination, which makes accessing (and reading) it much faster.