The docs on this are rather lacking, so I'm hoping the community can run a simple test and post the results here so that I, and anybody else, have a reference.
#include <cwchar>
#include <iostream>

int main() { std::cout << sizeof(std::mbstate_t) << '\n'; }
If you could post the results here and also mention which compiler you are using, I would be very grateful.
On VS2010 it's declared as typedef int mbstate_t; and its size is 4 bytes for both 32-bit and 64-bit builds.
I'm asking because mbstate_t is a member of streampos. I need to use this member to store the conversion state of an encoding. The minimum space I can get away with is 3 bytes, so I need to know whether any implementation is going to break my code.
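For concreteness, here's a minimal sketch of the access I have in mind; std::streampos is std::fpos<std::mbstate_t>, and state() is the standard accessor (buildConversionState is a placeholder for my encoding logic):

#include <cwchar>  // std::mbstate_t
#include <ios>     // std::streampos (std::fpos<std::mbstate_t>)

// Placeholder: returns the initial (zeroed) conversion state.
std::mbstate_t buildConversionState() { return std::mbstate_t{}; }

int main() {
    std::streampos pos;
    pos.state(buildConversionState()); // store the conversion state
    std::mbstate_t st = pos.state();   // and read it back
    (void)st;
}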
Thanks in advance.
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 on x86_64
size = 8
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 on armv7l
size = 8
You just want to know the results of sizeof?
Qt 5.1 with GCC x86 32bit under Debian:
size = 8
From the C11 specification (7.29.1/2):
mbstate_t
which is a complete object type other than an array type that can hold the conversion state
information necessary to convert between sequences of multibyte characters and wide
characters;
So while I was wrong in that it can be an array, it could be anything else (including a structure containing an array). The language in the specification doesn't say anything about how it should be implemented, just that it's "a complete object type other than an array type".
From the C++11 specification (multiple places, for example 21.2.3.1/4):
The type mbstate_t is defined in <cwchar> and can represent any of the conversion states that can occur in an implementation-defined set of supported multibyte character encoding rules.
In conclusion, you cannot rely on mbstate_t being an integer type or having a specific size. If you want to be portable, you have to let the standard library manage the state for you.
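To illustrate that last point, here's a minimal sketch that treats mbstate_t as completely opaque and lets std::mbrtowc manage it (the string and length are arbitrary):

#include <cwchar>
#include <iostream>

int main() {
    std::mbstate_t state{};  // zero-initialized: the initial conversion state
    const char* s = "hello";
    wchar_t wc;
    // std::mbrtowc updates `state` internally; the caller never inspects
    // its representation, so this stays portable.
    std::size_t n = std::mbrtowc(&wc, s, 5, &state);
    if (n != static_cast<std::size_t>(-1) && n != static_cast<std::size_t>(-2))
        std::wcout << wc << L'\n';
}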
If I compile a library containing a function whose behavior is undefined by the standard but guaranteed to work on a certain compiler, is the resulting binary portable to other compilers?
My thinking was that the library has already been compiled to assembly, so when another program calls the UB function, the machine code that was well defined for that compiler would simply execute.
What am I getting wrong here?
Do not look to the C++ standard for all your answers.
The C++ standard does not define the behavior when object modules compiled by different compilers are linked together. The jurisdiction of the C++ standard is solely single C++ implementations whose creators choose to conform to the C++ standard.
Linking together different object modules is covered by an application binary interface (ABI). If you compile two functions with two compilers that both conform to the same ABI and link them with a linker that conforms to the ABI, they generally should work together. There are additional details to consider, such as how various things in the language(s) bind to corresponding things in the ABI. For example, one compiler might map long to some 32-bit integer type in the ABI while the other compiler might map long to some 64-bit integer type, and this would of course interfere with the functions working together unless corresponding adjustments were made.
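A hedged sketch of the long mismatch described above; the file and function names are made up for illustration, and each snippet is meant to be built by a different compiler:

// lib.cpp -- built with a compiler/ABI where long is 64 bits (e.g. LP64)
extern "C" long scale(long x) { return x * 10; }

// main.cpp -- built with a compiler/ABI where long is 32 bits (e.g. LLP64)
extern "C" long scale(long x);  // same source text, different binary contract

#include <iostream>
int main() {
    // The caller passes a 32-bit long; the callee expects 64 bits.
    // Each translation unit is valid for its own compiler, but the
    // linked program is broken because the ABI bindings disagree.
    std::cout << scale(100000L) << '\n';
}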
I'm currently reading through Crash Course C++ and have a question regarding types.
If I declare and initialize both an int and a long long variable to the decimal value 4 on a 64-bit machine running Linux, does the compiler recognize the wasted bytes and change the underlying type? This is probably a no, since at some point that variable may take on a value that would overflow the smaller type (i.e. when going from 8 bytes to 4).
I've read a little about byte reordering of objects during compilation in C++: compilers can sometimes rearrange fields to minimize padding in memory. I'm just wondering whether a similar optimization happens for numeric types.
I do not think that the compiler will change the size of a variable. It might do so under the as-if rule, but if it can reliably do that, it means the variable is used in a very simple context, for example assigned (or initialized) once from a constant and then only used in the same compilation unit, with its address never taken (that last point is often referred to as odr-use, from the One Definition Rule).
But in that case, the compiler will simply optimize the variable out because it can use its value directly, so it will use no memory at all...
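A minimal sketch of that situation; with optimizations enabled, a typical compiler folds the arithmetic to a constant, so n gets no storage of any size (easy to confirm by inspecting the generated assembly):

#include <iostream>

int main() {
    long long n = 4;             // declared as an 8-byte type
    std::cout << n + 1 << '\n';  // an optimizer can fold this to the constant 5,
                                 // so n never needs any storage at all
}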
In C/C++ each library has its own data types for primitive types. For example:
byte, word64, DWORD, LWORD, uint, unsigned int, size_t, ...
However, in high-level languages such as Java and C#, everything is uniform (across all libraries).
In C/C++ programming, people (especially beginners) really get confused about which data type they should use. For example, byte is defined as unsigned char, so why do we need byte at all?
I think this could be a source of many memory leaks and other problems (such as vulnerabilities), at least for beginners. I still can't figure out the point of having so many such types when a minimal set of them would be enough.
The same problem exists for the null pointer, as we have:
NULL, 0, _null, ...
And all are defined as 0.
Update #1:
As @CoryKramer stated, the reason could be cross-platform compatibility and interoperability. So another question comes to mind: why don't the standards define a uniform, cross-platform, interoperable set of data types?
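For what it's worth, the standards do define such a set: the fixed-width aliases in <cstdint> (C99 and C++11). A minimal sketch:

#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t  b   = 0xFF;         // exactly 8 bits, where supported
    std::int32_t  n   = -42;          // exactly 32 bits
    std::uint64_t big = 1ull << 40;   // exactly 64 bits
    std::cout << +b << ' ' << n << ' ' << big << '\n'; // unary + prints b as a number
}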
A lot of C++ programs are written with the time() function, but as you probably know, on platforms where time_t is 32 bits this stops working after the year 2038, when the value overflows and goes negative. It's going to make a lot of programs completely unusable, so I'm just wondering: what is going to be the solution, and is anybody worried about this? Is there actually an alternative out there right now?
Also, do you think this is going to be a major problem or is not something really to worry about?
One question is answerable:
Is there actually an alternative out there right now?
Yes, since C++11 the std::chrono library provides time types that are specified to be good for roughly 500 years. Since they're nicely encapsulated, it shouldn't be too difficult to extend their range, if anything recognisable as C++ is still in use by then.
On most modern platforms, time_t has 64 bits, so even using that the problem can be avoided if you're careful to always assign the results to time_t variables, not int or whatever.
The other questions are purely speculative. I suspect the problem will be similar to Y2K - most programs will already do the right thing; others can be easily changed; and there will be some ancient systems churning away long after the developers have retired, the compilers discontinued, and the source code lost.
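A minimal sketch of using std::chrono in place of a raw time() call; the tick count lives in a 64-bit representation rather than a possibly 32-bit time_t:

#include <chrono>
#include <iostream>

int main() {
    auto now = std::chrono::system_clock::now();
    // Seconds since the epoch, held in a 64-bit representation.
    auto secs = std::chrono::duration_cast<std::chrono::seconds>(
                    now.time_since_epoch()).count();
    std::cout << secs << '\n';
}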
The time function returns a time_t value, and it's not specified how big the time_t type must be. Implementations will probably just change the time_t typedef so that it is at least 64 bits in size; I think this is already the case on most (or all) 64-bit machines. There is a chance this could break programs that depend on time_t being less than 64 bits, but I imagine that's very unlikely to be the case for something like time_t.
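If a codebase would rather fail the build on a narrow time_t than misbehave in 2038, a small compile-time check is one option; a sketch:

#include <ctime>

static_assert(sizeof(std::time_t) >= 8,
              "time_t is narrower than 64 bits: this platform is exposed to the 2038 problem");

int main() {}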
There are tons of examples of file reading on the web. Many use plain old C-style file reading; others use C++ facilities, but I have no idea whether it's just another average programmer writing a tutorial or really good, modern C++.
So, the question is: How does a good C++ programmer nowadays solve the following tasks?
Read some bytes representing a single, primitive type variable from a binary file.
Read an array of known primitive type of known (but not constant) length.
Read an array of bytes when the type is not yet known, but the length in bytes is known. For example, if the array is read from the file and then passed to a function which actually builds an object from it.
How does a good C++ programmer nowadays solve the following tasks?
1. Read some bytes representing a single, primitive type variable from a binary file.
Use std::istream::read if you want to "read some bytes representing a type". Use operator>> to read an instance of the type (you will have to implement this operator yourself for non-native types, but this is the way to do it).
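A minimal sketch of the std::istream::read case; input.bin is a made-up file name, and this assumes the file stores the value in the machine's native byte order:

#include <cstdint>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("input.bin", std::ios::binary);
    std::uint32_t value{};
    // Read exactly sizeof(value) raw bytes into the object's storage.
    in.read(reinterpret_cast<char*>(&value), sizeof value);
    if (in) std::cout << value << '\n';
}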
2. Read an array of known primitive type of known (but not constant) length.
std::vector<YourType> YourVector;                  // needs <vector>
std::size_t KnownElementsCount = 100;
std::copy_n(std::istream_iterator<YourType>{ in }, KnownElementsCount,
            std::back_inserter(YourVector));       // needs <algorithm>, <iterator>
If you want to read an array of values of unknown length:
std::vector<YourType> YourVector;
std::copy(std::istream_iterator<YourType>{ in }, std::istream_iterator<YourType>{},
          std::back_inserter(YourVector));
3. Read an array of bytes when the type is not yet known, but the length in bytes is known. For example, if the array is read from the file and then passed to a function which actually builds an object from it.
Use std::istream::read; then construct your object from the data.
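A sketch of that last case; byteCount and buildFromBytes are hypothetical names for the known length and the object-building function:

#include <cstddef>
#include <fstream>
#include <vector>

struct MyObject { /* ... */ };

// Hypothetical factory that interprets raw bytes as a MyObject.
MyObject buildFromBytes(const std::vector<char>&) { return {}; } // placeholder

MyObject load(std::ifstream& in, std::size_t byteCount) {
    std::vector<char> buffer(byteCount);  // length known up front
    in.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
    return buildFromBytes(buffer);        // the callee decides the actual type
}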