I am working on the piece of code below, and when I execute it I get a std::bad_alloc exception:
int _tmain(int argc, _TCHAR* argv[])
{
    FILE * pFile;
    size_t state;
    pFile = fopen("C:\\shared.tmp", "rb");
    if (pFile != NULL)
    {
        size_t rt = fread(&state, sizeof(int), 1, pFile);
        char *string = NULL;
        string = new char[state + 1];
        fclose(pFile);
    }
    return 0;
}
The line below is causing the exception to be thrown:
string = new char[state + 1];
Why is this happening, and how can I fix it?
You're passing the address of an uninitialized 64-bit (8-byte, on modern 64-bit systems) variable, state, and telling fread to read sizeof(int) bytes (32 bits, i.e. 4 bytes, on those same systems) from the file into this variable.
This will overwrite 4 bytes of the variable with the value read, but leave the other 4 uninitialized. Which 4 bytes it overwrites depends on the architecture (the least significant on Intel CPUs, the most significant on big-endian-configured ARMs), but the result will most likely be garbage either way, because 4 bytes were left uninitialized and could contain anything.
In your case, most likely they are the most significant bytes, and contain at least one non-zero bit, meaning that you then try to allocate far beyond 4GB of memory, which you don't have.
The solution is to make state a std::uint32_t (since you apparently expect the file to contain 4 bytes representing an unsigned integer; don't forget to include <cstdint>) and to pass sizeof(std::uint32_t). More generally, for every fread and similar call where you pass in a pointer and a size, make sure that the object the pointer points to actually has exactly the size you pass along. A size_t* together with sizeof(int) does not fulfill this requirement on 64-bit systems, and since the sizes of C++'s basic types are not guaranteed, you generally don't want to use them for binary I/O at all.
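A minimal sketch of that fix (plain main instead of _tmain for brevity; the file path comes from the question, and error handling is kept minimal):

#include <cstdint>
#include <cstdio>

int main()
{
    std::FILE* pFile = std::fopen("C:\\shared.tmp", "rb");
    if (pFile != NULL)
    {
        std::uint32_t state = 0;  // exactly the 4 bytes read below
        std::size_t rt = std::fread(&state, sizeof(std::uint32_t), 1, pFile);
        std::fclose(pFile);
        if (rt == 1)  // only use state if the read succeeded
        {
            char* buffer = new char[state + 1];
            // ... use the buffer ...
            delete[] buffer;
        }
    }
    return 0;
}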
There are various things you could improve in your C++ code, and a number of reasons why you end up with this behaviour:
First, the variable state is of type size_t, but your code attempts to initialize its value using fread(&state, sizeof(int), 1, pFile);. If sizeof(state) != sizeof(int), you have a problem. In particular, if sizeof(state) < sizeof(int), the fread call overwrites arbitrary memory beyond the storage for state, which is undefined behaviour.
Second, if sizeof(state) > sizeof(int), then state is only partially initialized and its actual value depends on both the initialized (by fread) and the uninitialized bits. So its value can be a large number and allocation may fail.
Third, if sizeof(state) == sizeof(int), it might simply be that the value read is too large, and the allocation fails because you run out of memory.
Fourth, the value you read from the file might use a different encoding or endianness. For example, if the value was written to the file in big-endian format but is read with fread on a little-endian CPU, the bytes end up in the wrong order. You might need to swap the bytes before using the value read.
I suggest you instead use some fixed-width integer type from <cstdint> (or <stdint.h> for pre-C++11), such as std::uint64_t for variable state, read the value using fread(&state, sizeof(state), 1, pFile);, and then byte-swap state if the endianness of your CPU doesn't match the endianness of the value stored in the file.
You should also decide on the maximum number of characters you are willing to allocate and error out if state is greater than that; in your situation, it almost certainly is.
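Putting those points together, something along these lines; the 64-bit width, the big-endian file format, the helper names, and the 1 MiB cap are all assumptions for illustration. Rather than swapping after the fact, this sketch assembles the value byte by byte, which works for either host endianness:

#include <cstdint>
#include <cstdio>

// Assemble a 64-bit value from bytes stored big-endian in the file;
// this is correct regardless of the host CPU's byte order.
static std::uint64_t from_big_endian(const unsigned char bytes[8])
{
    std::uint64_t v = 0;
    for (int i = 0; i < 8; ++i)
        v = (v << 8) | bytes[i];
    return v;
}

// Returns false on a short read or an absurdly large value.
bool read_state(std::FILE* pFile, std::uint64_t& state)
{
    unsigned char raw[8];
    if (std::fread(raw, sizeof raw, 1, pFile) != 1)
        return false;
    state = from_big_endian(raw);
    const std::uint64_t kMaxState = 1024 * 1024;  // arbitrary sanity cap
    return state <= kMaxState;
}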
It's easy to find popular conventions for C-style I/O. What's more difficult is finding explanations of why they are the way they are. It's common to see a read written like:
fread(buffer, sizeof(buffer), 1, ptr);
How should a programmer think about using the parameters size and n of fread()?
For example, if my input file is 100 bytes, should I opt for a larger size with fewer n or read more objects of a smaller size?
If the size-to-be-read and n exceed the byte-size of an input file, what happens? Are the excess bytes that were read composed, colloquially speaking, of "junk values"?
size_t fread(void * restrict ptr, size_t size, size_t n, FILE * restrict stream);
How should a programmer think about using the parameters size and n of fread()?
When reading into an array:
size is the size of the pointed-to type (the element size).
n is the number of elements.
some_type destination[max_number_of_elements];
size_t num_read = fread(destination, sizeof *destination, max_number_of_elements, inf);
printf("Number of elements read %zu\n", num_read);
if (num_read == 0) {
    if (feof(inf)) puts("Nothing read as at end-of-file");
    else if (ferror(inf)) puts("Input error occurred");
    else {
        // since sizeof *destination and max_number_of_elements cannot be 0 here,
        // something strange has occurred (UB somewhere prior?)
    }
}
For example, if my input file is 100 bytes, should I opt for a larger size with fewer n or read more objects of a smaller size?
In that case, use a size of 1 and a maximum count of 100:
#define MAX_FILE_SIZE 100
uint8_t destination[MAX_FILE_SIZE];
size_t num_read = fread(destination, sizeof *destination, MAX_FILE_SIZE, inf);
If the size-to-be-read and n exceed the byte-size of an input file, what happens?
The destination is not completely filled. Use the return value to determine how much was actually read.
Are the excess bytes that were read composed, colloquially speaking, of "junk values"?
No. Their values from before the fread() remain the same (as long as the return value was not 0 and ferror() is not set). If the destination was never initialized or assigned, then yes, it may be thought of as junk.
Having separate size and n parameters allows fread() to function as desired even when size * n would overflow size_t arithmetic. With current flat memory models, this is rarely needed.
First, the while (!feof(ptr)) is wrong and a really bad anti-pattern. There are situations where it can work, but it's almost always gratuitously more complicated than correct idiomatic usage. The return value of fread or other stdio read functions already tells you whether it succeeded, and you usually need to handle that immediately rather than waiting for the next loop iteration to start. If whatever resource you're learning from is teaching this while (!feof(ptr)) pattern, you should probably stop trusting it as a source for learning C.
Now, on to your specific question about the size and n arguments: having them separate is completely gratuitous and not useful. Just pass the desired length to read for one of them, and 1 for the other. If you want to be able to determine how many bytes were already read if you hit end-of-file or an error, you need to pass 1 for size and the requested number of bytes as n. Otherwise, if any read shorter than expected is an error, it sometimes makes sense to switch them; then the only possible return values are 1 and 0 (success and error, respectively).
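To make the two conventions concrete, a small illustrative function (the stream f is assumed to be open for reading):

#include <cstddef>
#include <cstdio>

void read_both_ways(std::FILE* f)
{
    unsigned char buf[1024];

    // size = 1, n = byte count: the return value counts bytes, so a
    // partial read near end-of-file can be measured exactly.
    std::size_t got = std::fread(buf, 1, sizeof buf, f);  // 0..1024

    // size = byte count, n = 1: the return value is 1 only for a
    // complete read, so any short read collapses to 0.
    std::size_t all = std::fread(buf, sizeof buf, 1, f);  // 0 or 1

    (void)got; (void)all;  // silence unused-variable warnings
}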
As for why it doesn't matter how you split the length between these two arguments: all the stdio read functions, including fread, are specified as if they happened via repeated calls to fgetc. It does not matter whether you make size*n such calls or n*size such calls, because multiplication commutes.
I would like to allocate some char buffers0, to be passed to an external non-C++ function, that have a specific alignment requirement.
The requirement is that the buffer be aligned to an N-byte1 boundary, but not to a 2N boundary. For example, if N is 64, then the pointer to this buffer p should satisfy ((uintptr_t)p) % 64 == 0 and ((uintptr_t)p) % 128 != 0, at least on platforms where pointers have the usual interpretation as a plain address when cast to uintptr_t.
Is there a reasonable way to do this with the standard facilities of C++11?
If not, is there is a reasonable way to do this outside the standard facilities2 which works in practice for modern compilers and platforms?
The buffer will be passed to an outside routine (adhering to the C ABI but written in asm). The required alignment will usually be greater than 16, but less than 8192.
Over-allocation or any other minor wasted-resource issues are totally fine. I'm more interested in correctness and portability than wasting a few bytes or milliseconds.
Something that works on both the heap and stack is ideal, but anything that works on either is still pretty good (with a preference towards heap allocation).
0 This could be with operator new[] or malloc or perhaps some other method that is alignment-aware: whatever makes sense.
1 As usual, N is a power of two.
2 Yes, I understand an answer of this type causes language-lawyers to become apoplectic, so if that's you just ignore this part.
Logically, to satisfy "aligned to N, but not 2N", we align to 2N then add N to the pointer. Note that this will over-allocate N bytes.
So, assuming we want to allocate B bytes, if you just want stack space, alignas would work, perhaps.
alignas(N*2) char buffer[B+N]; // aligned to 2N, with N bytes of slack
char *p = buffer + N;          // aligned to N, but not to 2N
If you want this expressed as a named type, std::aligned_storage might do (note that the buffer below is still an object with automatic storage duration; plain new on an over-aligned type is not guaranteed to honour its alignment until C++17):
typedef std::aligned_storage<B+N,N*2>::type ALIGNED_CHAR;
ALIGNED_CHAR buffer;
char *p = reinterpret_cast<char *>(&buffer) + N;
I've not tested either out, but the documentation suggests it should be OK.
You can use _aligned_malloc(nbytes,alignment) (in MSVC) or _mm_malloc(nbytes,alignment) (on other compilers) to allocate (on the heap) nbytes of memory aligned to alignment bytes, which must be an integer power of two.
Then you can use the trick from Ken's answer to avoid alignment to 2N:
void*ptr_alloc = _mm_malloc(nbytes+N,2*N);
void*ptr = static_cast<void*>(static_cast<char*>(ptr_alloc) + N);
/* do your number crunching */
_mm_free(ptr_alloc);
We must make sure to keep the pointer returned by _mm_malloc() for the later deallocation, which must be done via _mm_free().
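If you'd rather avoid the vendor-specific allocators, the same trick works with plain new[]. A portable sketch (the function name is mine; N is assumed to be a power of two):

#include <cstddef>
#include <cstdint>

// Over-allocate, round the raw address up to a 2N boundary, then add N:
// the result is aligned to N but deliberately not aligned to 2N.
// The caller keeps `raw` and releases the block with delete[] raw.
char* alloc_n_not_2n(std::size_t bytes, std::size_t n, char*& raw)
{
    raw = new char[bytes + 3 * n];  // enough slack to reach any 2N boundary
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(raw);
    addr = (addr + 2 * n - 1) & ~(std::uintptr_t(2 * n) - 1);  // round up to 2N
    return reinterpret_cast<char*>(addr) + n;  // now N mod 2N
}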
char b = 'a';
int *a = (int*)&b;
std::cout << *a;
What could be the content of *a? It is showing a garbage value. Can anyone please explain why?
Suppose char takes one byte in memory and int takes two bytes (the exact number of bytes depends on the platform, but char and int are usually not the same size). You set a to point to the same memory location as b. Dereferencing b considers only one byte, because b is of type char. Dereferencing a accesses two bytes and thus prints the integer stored at those locations. That's why you get garbage: the first byte is 'a', the second is a random byte; together they give you a random integer value.
Either the first or the last byte should be hex 61, depending on byte order. The other three bytes are garbage. It's best to change the int to an unsigned int and change the cout to hex.
I don't know why anyone would want to do this.
You initialize a variable with the datatype char. A char in C++ is 1 byte, while an int is larger (at least 2 bytes, typically 4). Your a points to the address of the b variable, and an address is just a number (commonly written in hexadecimal). Each time you run this program, b may land at a different address, so the extra bytes read through a will contain different leftover values, and you will see a different garbage number.
Think of it in byte blocks. A char occupies one byte (8 bits). If you cast the char's address to (int*), dereferencing it reads sizeof(int) bytes starting at that address, typically picking up 3 bytes beyond the char. Those extra bytes are random, which means you'll get a random integer. That's why you get a garbage value.
The code invokes undefined behavior; garbage is one form undefined behavior can take, but your program could also cause an access violation and crash, with worse consequences.
int *a = (int*)&b; initializes a pointer to int with the address of a char. Dereferencing this pointer will attempt to read an int from that address:
If the address is misaligned and the processor does not support misaligned accesses, you may get a system specific signal or exception.
If the address is close enough to the end of a segment that accessing beyond the first byte causes a segment violation, that's what you can get.
If the processor can read the sizeof(int) bytes at the address, only one of them will be 'a' (0x61 in ASCII), but the others have indeterminate values (aka garbage). As a matter of fact, reading from uninitialized memory can itself cause problems in some environments: under valgrind, for example, it will cause a warning to be displayed to the user.
All of the above is speculation; undefined behavior means anything can happen.
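For completeness, the well-defined way to examine an object's bytes is through an unsigned char pointer, which is permitted to alias any object:

#include <iostream>

int main()
{
    char b = 'a';
    // Inspecting an object's representation via unsigned char* is fine,
    // and here there is exactly one byte to look at, since sizeof(char) == 1.
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&b);
    std::cout << std::hex << static_cast<unsigned>(bytes[0]) << '\n';  // prints 61
    return 0;
}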
What would be the most efficient way to read a UInt32 value from an arbitrary memory address in C++? (Assuming Windows x86 or Windows x64 architecture.)
For example, consider having a byte pointer that points somewhere in memory to a block that contains a combination of ints, string data, etc., all mixed together. The following sample shows reading the various fields from this block in a loop.
typedef unsigned char* BytePtr;
typedef unsigned int UInt32;
...
BytePtr pCurrent = ...;
while ( *pCurrent != 0 )
{
    ...
    if ( *pCurrent == ... )
    {
        UInt32 nValue = *( (UInt32*) ( pCurrent + 1 ) ); // line A
        ...
    }
    pCurrent += ...;
}
If, at line A, pCurrent happens to contain a 4-byte-aligned address, reading the UInt32 should be a single memory read. If it contains a non-aligned address, more than one memory cycle may be needed, which slows the code down. Is there a faster way to read the value from non-aligned addresses?
I'd recommend memcpy into a temporary of type UInt32 within your loop.
This takes advantage of the fact that a four byte memcpy will be inlined by the compiler when building with optimization enabled, and has a few other benefits:
If you are on a platform where alignment matters (hpux, solaris sparc, ...) your code isn't going to trap.
On a platform where alignment matters, it may be worthwhile to check the address for alignment and then do either a regular aligned load or a set of four byte loads combined with bitwise ors. Your compiler's memcpy very likely does this the optimal way.
If you are on a platform where unaligned access is allowed and doesn't hurt performance (x86, x64, powerpc, ...), you are pretty much guaranteed that such a memcpy is the cheapest way to do this access.
If your memory was initially a pointer to some other data structure, your code may be undefined because of aliasing problems, because you are casting to another type and dereferencing that cast. Run time problems due to aliasing related optimization issues are very hard to track down! Presuming that you can figure them out, fixing can also be very hard in established code and you may have to use obscure compilation options like -fno-strict-aliasing or -qansialias, which can limit the compiler's optimization ability significantly.
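A minimal sketch of that recommendation (the function name is mine):

#include <cstdint>
#include <cstring>

// With optimization on, the fixed-size memcpy is inlined: a single 32-bit
// load on x86/x64, or safe byte-wise loads where alignment matters.
inline std::uint32_t ReadUInt32(const unsigned char* p)
{
    std::uint32_t value;
    std::memcpy(&value, p, sizeof value);  // well-defined for any alignment
    return value;
}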
Your code is undefined behaviour.
Pretty much the only "correct" solution is to only read something as a type T if it is a type T, as follows:
#include <algorithm>
#include <cstdint>
#include <iostream>

uint32_t n;
char * p = point_me_to_random_memory();
std::copy(p, p + 4, reinterpret_cast<char*>(&n));
std::cout << "The value is: " << n << std::endl;
In this example, you want to read an integer, and the only way to do that is to have an integer. If you want it to contain a certain binary representation, you need to copy that data to the address starting at the beginning of the variable.
Let the compiler do the optimizing!
UInt32 ReadU32(unsigned char *ptr)
{
    return static_cast<UInt32>(ptr[0]) |
           (static_cast<UInt32>(ptr[1]) << 8) |
           (static_cast<UInt32>(ptr[2]) << 16) |
           (static_cast<UInt32>(ptr[3]) << 24);
}
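A side effect worth noting: this function always interprets the four bytes as a little-endian value, regardless of the host CPU's byte order, so it is also the portable choice when the data format itself is little-endian.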
sizeof(char) always returns 1 with a 32-bit GCC compiler.
But since the basic block size with a 32-bit compiler is 4 bytes, how does a char occupy a single byte when the basic size is 4 bytes?
Consider the following:
struct st
{
int a;
char c;
};
sizeof(st) returns 8, which agrees with the default block size of 4 bytes (two blocks are allotted).
I can never understand why sizeof(char) returns 1 when it is allotted a block of size 4.
Can someone please explain this?
I would be very thankful for any replies explaining it!
sizeof(char) is always 1. Always. The 'block size' you're talking about is just the native word size of the machine - usually the size that will result in most efficient operation. Your computer can still address each byte individually - that's what the sizeof operator is telling you about. When you do sizeof(int), it returns 4 to tell you that an int is 4 bytes on your machine. Likewise, your structure is 8 bytes long. There is no information from sizeof about how many bits there are in a byte.
The reason your structure is 8 bytes long rather than 5 (as you might expect), is that the compiler is adding padding to the structure in order to keep everything nicely aligned to that native word length, again for greater efficiency. Most compilers give you the option to pack a structure, either with a #pragma directive or some other compiler extension, in which case you can force your structure to take minimum size, regardless of your machine's word length.
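For instance, with the pack pragma (supported by MSVC, GCC, and Clang), the padding disappears; a small sketch:

#pragma pack(push, 1)  // pack members with no padding
struct packed_st
{
    int a;
    char c;
};
#pragma pack(pop)
// sizeof(packed_st) == 5: minimal size, at the cost of potentially
// unaligned access to the int member a.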
char is size 1, since that's the smallest access size your computer can handle - for most machines an 8-bit value. The sizeof operator gives you the size of all other quantities in units of how many char objects would be the same size as whatever you asked about. The padding (see link below) is added by the compiler to your data structure for performance reasons, so it is larger in practice than you might think from just looking at the structure definition.
There is a wikipedia article called Data structure alignment which has a good explanation and examples.
It is structure alignment with padding: c uses 1 byte, and the remaining 3 bytes are unused.
Sample code demonstrating structure alignment:
struct st
{
int a;
char c;
};
struct stb
{
int a;
char c;
char d;
char e;
char f;
};
struct stc
{
int a;
char c;
char d;
char e;
char f;
char g;
};
std::cout<<sizeof(st) << std::endl; //8
std::cout<<sizeof(stb) << std::endl; //8
std::cout<<sizeof(stc) << std::endl; //12
The size of the struct is bigger than the sum of the sizes of its individual members because the 32-bit compiler pads it to a multiple of 4 bytes. These results may differ on other compilers, especially 64-bit ones.
First of all, sizeof returns a number of bytes, not bits. sizeof(char) == 1 tells you that a char is one byte long; it tells you nothing about how many bits are in that byte (eight, on virtually all platforms). All of the fundamental data types in C are at least one byte long.
Your structure returns a size of 8. This is a sum of three things: the size of the int, the size of the char (which we know is 1), and the size of any extra padding that the compiler added to the structure. Since many implementations use a 4-byte int, this would imply that your compiler is adding 3 bytes of padding to your structure. Most likely this is added after the char in order to make the size of the structure a multiple of 4 (a 32-bit CPU accesses data most efficiently in 32-bit chunks, and 32 bits is four bytes).
Edit: Just because the block size is four bytes doesn't mean that a data type can't be smaller than four bytes. When the CPU loads a one-byte char into a 32-bit register, the value will be sign- or zero-extended automatically (by the hardware, depending on the type's signedness) to fill the register. The CPU is smart enough to handle data in N-byte increments (where N is a power of 2), as long as it isn't larger than the register. When storing the data on disk or in memory, there is no reason to store every char as four bytes. The char in your structure only looked like it was four bytes long because of the padding added after it. If you changed your structure to have two char variables instead of one, you should see that the size of the structure stays the same (you added an extra byte of data, and the compiler added one byte less of padding).
All object sizes in C and C++ are defined in terms of bytes, not bits. A byte is the smallest addressable unit of memory on the computer. A bit is a single binary digit, a 0 or a 1.
On most computers, a byte is 8 bits (so a byte can store values from 0 to 255), although computers with other byte sizes exist.
A memory address identifies a byte, even on 32-bit machines. Addresses N and N+1 point to two subsequent bytes.
An int, which is typically 32 bits, covers 4 bytes, meaning that 4 different memory addresses exist that each point to part of the int.
In a 32-bit machine, all the 32 actually means is that the CPU is designed to work efficiently with 32-bit values, and that an address is 32 bits long. It doesn't mean that memory can only be addressed in blocks of 32 bits.
The CPU can still address individual bytes, which is useful when dealing with chars, for example.
As for your example:
struct st
{
int a;
char c;
};
sizeof(st) returns 8 not because all structs have a size divisible by 4, but because of alignment. For the CPU to read an integer efficiently, it must be located at an address that is divisible by the size of the integer (4 bytes). So an int can be placed at address 8, 12 or 16, but not at address 11.
A char only requires its address to be divisible by the size of a char (1), so it can be placed on any address.
So in theory, the compiler could have given your struct a size of 5 bytes... Except that this wouldn't work if you created an array of st objects.
In an array, each object is placed immediately after the previous one, with no padding. So if the first object in the array is placed at an address divisible by 4, then the next object would be placed at a 5 bytes higher address, which would not be divisible by 4, and so the second struct in the array would not be properly aligned.
To solve this, the compiler inserts padding inside the struct, so its size becomes a multiple of its alignment requirement.
Not because it is impossible to create objects that don't have a size that is a multiple of 4, but because one of the members of your st struct requires 4-byte alignment, and so every time the compiler places an int in memory, it has to make sure it is placed at an address that is divisible by 4.
If you create a struct of two chars, it won't get a size of 4. It will usually get a size of 2, because when it contains only chars, the object can be placed at any address, and so alignment is not an issue.
sizeof returns the value in bytes; you were talking about bits. 32-bit architectures are word-aligned and byte-addressed. It is irrelevant how the architecture stores a char internally, but to the compiler you must reference chars 1 byte at a time, even if the hardware moves them around in larger units.
This is why sizeof(char) is 1.
ints are 32 bits, hence sizeof(int) = 4; doubles are 64 bits, hence sizeof(double) = 8; etc.
Padding is added for optimisation, so the size of an object comes out as 1, 2, or a multiple of 4 bytes (roughly speaking, on x86). That's why padding is added to the 5-byte object but not to the 1-byte one. A single char doesn't have to be padded: it can be allocated in exactly 1 byte, and we can store it in space allocated with malloc(1). st cannot be stored in space allocated with malloc(5), because when an st struct is copied, the whole 8 bytes are copied.
It works the same way as using half a piece of paper: you use one part for a char and the other part for something else. The compiler hides this from you, since how a char is loaded into and stored from a 32-bit processor register depends on the processor.
Some processors have instructions to load and store only parts of the 32bit others have to use binary operations to extract the value of a char.
Addressing a char works because a char is, by definition, the smallest addressable unit of memory. On a 32-bit system, pointers to two different ints will be at least 4 address points apart; char addresses will be only 1 apart.