Acquire a long int from a binary file - casting

I have a binary file in which each 64-bit block is a number that I need. If I try to read it with:
FILE *ptr;
ptr = fopen("HH", "rb"); //r for read, b for binary
ulong val;
fseek(ptr, 0, SEEK_SET);
fread(&val, 8, 1, ptr);
it doesn't work (it gives random values, different every time), but it does work if I tell it to read only 3 bytes (in the sense that I get the right value for those three bytes - with little-endian encoding, which is the correct one here). I've also tried with the char type: it puts the right value of each byte into each element of my char array, but then I don't know how to convert that into a long int (I've managed to do it only element by element, but that is not what I need). Can you help me? Thanks!

I've found the mistake! It was just a problem with printf; the data was read correctly. I wrote:
printf ("%d", val);
instead of
printf ("%ld", val);
where %ld is the conversion specifier for long...
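For reference, a minimal self-contained sketch of the corrected read (the file name "HH" and the 8-byte block come from the question; the fixed-width uint64_t and the error checks are my additions):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main(void)
{
    FILE *ptr = fopen("HH", "rb"); // r for read, b for binary
    if (!ptr)
        return 1;
    uint64_t val; // fixed 64-bit width instead of ulong
    fseek(ptr, 0, SEEK_SET);
    if (fread(&val, sizeof val, 1, ptr) == 1)
        printf("%" PRIu64 "\n", val); // format specifier matches the type
    fclose(ptr);
    return 0;
}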

Related

Read a Fixed Number of (Binary) Bytes from an unsigned const char*

I have an unsigned const char* buffer in memory (comes from the network) that I need to do some stuff with. What stumps me right now is that I need to interpret the first two bytes as binary data, while the rest is ASCII. I have no problem reading the ASCII (I think), but I can't figure out how to read just the first two bytes of the unsigned array, and turn them into (say) an int. I was going to use reinterpret_cast, but the first two bytes are not null-terminated, and the only other help I could find was all about file IO.
In short, I have something like {0000000000001011}ABC Z123 XY0 5, where the characters outside the curly braces are read as ASCII, while the ones inside are supposed to be a single binary number, i.e. 11.
int c1 = buffer[0];
int c2 = buffer[1];
int number = (c1 << 8) + c2; // parentheses needed: + binds tighter than <<
const unsigned char* asciiData = buffer + 2;
I really don't get why the bytes have to be "null-terminated" for you to use reinterpret_cast. What I would do (and works so far in my projects) is:
uint16_t first_bytes = *(reinterpret_cast<const uint16_t*>(buffer));
That would get you the first two bytes in the buffer and assign the value to the first_bytes variable.
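If the two bytes arrive in network (big-endian) order, a sketch that avoids the alignment and endianness assumptions of the pointer cast is to combine the bytes explicitly (the buffer name comes from the question):
#include <cstdint>
// Combine the first two bytes as a big-endian 16-bit value;
// swap the two operands if the sender uses little-endian order.
uint16_t first_bytes = static_cast<uint16_t>((buffer[0] << 8) | buffer[1]);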

Use reinterpret_cast to convert binary data at an offset in the char array

I found this post:
Why is memcpy slower than a reinterpret_cast when parsing binary data?
where somebody uses reinterpret_cast to convert binary data to an integer. However (I presume) the number they are converting is at the 0th element in the char* array.
How could I use the above for situations where the binary number I want to convert is offset N bytes from the beginning of the char array?
I want to convert the binary number in as few CPU cycles as possible, hence my interest in reinterpret_cast and the above SO question.
Just add an offset to the byte array address.
Instead of casting x, cast x + 123.
But: Have you read the first line of the question (bold edit)?
TLDR: I forgot to enable compiler optimizations. With the
optimizations enabled the performance is (nearly) identical.
If you have a char *array initialized with your binary data, and length is its size in bytes, then you can simply do this:
for (size_t offset = 0; offset + sizeof(int) <= length; offset++)
{
    ... = *reinterpret_cast<const int*>(array + offset);
}
(Note that sizeof(array) on a pointer would only give the size of the pointer, not of the buffer, so the loop must be bounded by the actual buffer length.)
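For comparison, a sketch of the memcpy form at an arbitrary byte offset (the names buffer and offset and the int32_t target type are illustrative); as the quoted edit notes, with optimizations enabled it usually compiles down to the same single load as the cast:
#include <cstring>
#include <cstdint>
// Read a 32-bit value located 'offset' bytes into 'buffer',
// without alignment or strict-aliasing concerns.
int32_t read_i32_at(const char* buffer, std::size_t offset)
{
    int32_t value;
    std::memcpy(&value, buffer + offset, sizeof value);
    return value;
}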

c++ void* memory traversal

I'm trying to store a couple of ints in memory using a void* and then retrieve them, but it keeps throwing a "pointer of type 'void *' used in arithmetic" warning.
void *a = new char[4];
memset(a, 0 , 4);
unsigned short d = 7;
memcpy(a, (void *)&d, 2);
d=8;
memcpy(a+2, (void *)&d, 2); //pointer of type ‘void *’ used in arithmetic
/*Retrieving*/
unsigned int *data = new unsigned int();
memcpy(data, a, 2);
cout << (unsigned int)(*data);
memcpy(data, a+2, 2); //pointer of type ‘void *’ used in arithmetic
cout << (unsigned int)(*data);
The results are as per expectation but I fear that these warnings might turn into errors on some compiler. Is there another way to do this that I'm not aware of?
I know this is perhaps a bad practice in normal scenario but the problem statement requires that unsigned integers be stored and sent in 2 byte packets. Please correct me if I'm wrong but as per my understanding, using a char* instead of a void* would have taken up 3 bytes for 3-digit numbers.
a+2, with a being a pointer, means the pointer advanced by space for two items of the pointed-to type. E.g., if a were an int32_t *, a + 2 would mean "the address plus 8 bytes".
Since void * points to no particular type, the compiler cannot know what you mean by a + 2, because it does not know the size of the type being referred to.
The problem is that the compiler doesn't know what to do with
a+2
This instruction means "Move pointer 'a' forward by 2 * (sizeof-what-is-pointed-to-by-'a')".
If a is void *, the compiler doesn't know the size of the target object (there isn't one!), so it gives an error.
You need to do:
memcpy(data, ((char *)a)+2, 2);
This way, the compiler knows how to add 2 - it knows the sizeof(char).
Please correct me if I'm wrong but as per my understanding, using a char* instead of a void* would have taken up 3 bytes for 3-digit numbers.
Yes, you are wrong; that would be the case if you were transmitting the numbers as characters. 'char*' is just a convenient way of referring to 8-bit values, and since you are receiving pairs of bytes, you can treat the destination memory as chars to do simple math. It is fairly common for people to use 'char' arrays for network data streams.
I prefer to use something like BYTE or uint8_t to indicate clearly 'I'm working with bytes' as opposed to char or other values.
void* is a pointer to an unknown and, more importantly, incomplete type (void), which has no size. Because the size is unknown, offset arithmetic on it is not defined, so the compiler tells you it's invalid.
It is possible that your solution could be as simple as receiving the bytes from the network into a byte-based array. An int is 32 bits, 4 bytes. They're not "char" values, but the four bytes of a 4-byte integer.
#include <cstdint>
uint8_t buffer[4];
Once you know you've filled the buffer, you can simply say
uint32_t integer = *(reinterpret_cast<uint32_t*>(buffer));
Whether this is correct will depend on whether the bytes are in network or host order. I'm guessing you'll probably need (ntohl is declared in <arpa/inet.h> on POSIX systems):
uint32_t integer = ntohl(*(reinterpret_cast<uint32_t*>(buffer)));
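Putting that together, a minimal sketch of the asker's round trip using a byte-typed buffer, so the + 2 arithmetic is well defined (the values 7 and 8 and the 2-byte packets come from the question; identical byte order on both ends is assumed):
#include <cstdint>
#include <cstring>
#include <iostream>
int main()
{
    uint8_t a[4] = {};                // byte-typed buffer: a + 2 is well defined
    uint16_t d = 7;
    std::memcpy(a, &d, sizeof d);     // first 2-byte packet
    d = 8;
    std::memcpy(a + 2, &d, sizeof d); // second 2-byte packet, no void* arithmetic
    uint16_t out = 0;
    std::memcpy(&out, a, sizeof out);
    std::cout << out << '\n';         // 7
    std::memcpy(&out, a + 2, sizeof out);
    std::cout << out << '\n';         // 8
    return 0;
}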

Change byte in int - casting byte to an integer

I'm streaming data from a server. The server sends various big-endian variables, but it also sends single bytes (representing numbers). One of my SocketClient.read overloads accepts (int length, char* array). I want to pass a pointer to an integer variable to this function so that it ends up holding a 0-255 value (an unsigned byte).
What have I tried:
unsigned int UNSIGNED_BYTE;
socket.read(1, &((char*)&UNSIGNED_BYTE)[0]); //I am changing 1st byte of a variable - C++ uses little endian
//I know that function reads 6, and that is what I find in 1st byte
std::cout<<(int)((char*)&UNSIGNED_BYTE)[0]<<")\n"; //6 - correct
std::cout<<UNSIGNED_BYTE<<")\n"; //3435973638 -What the hell?
According to the above, I am changing the wrong part of the int. But what else should I change?
My class declaration and implementation:
/*Declares:*/
bool read(int bytes, char *text);
/*Implements:*/
bool SocketClient::read(int bytes, char *text) {
//boost::system::error_code error;
char buffer = 0;
int length = 0;
while(bytes>0) {
try
{
size_t len = sock.receive(boost::asio::buffer(&buffer, 1)); //Read a byte to a buffer
}
catch(const boost::system::system_error& ex) {
std::cout<<"Socket exception: "<<ex.code()<<'\n';
return false; //Happens when peer disconnects for example
}
if(byteEcho)
std::cout<<(int)(unsigned char)buffer<<' ';
bytes--; //Decrease amount to read
text[length] = buffer;
length++;
}
return true;
}
So firstly:
unsigned int UNSIGNED_BYTE;
Probably isn't a very helpful name, since I very much doubt the architecture you're using defines an int as an 8-bit unsigned integer; it's likely to be 32 or 64 bits in size on most modern compilers/architectures. Additionally, you're not initializing it to zero, and you later write to only part of it, leaving the rest as garbage.
Secondly:
socket.read(1, &((char*)&UNSIGNED_BYTE)[0])
Is reading 8 bits into a (probably) 32-bit memory location, and the correct end to put the 8 bits in is not down to C++ (as you say in your comment); it's actually down to your CPU, since endianness is a property of the CPU, not the language. Why not read the value into an actual char and then simply assign that to an int? The assignment will handle the conversion for you and make your code portable.
The problem was that I did not initialise the int. Though the 1st byte was changed, the other 3 bytes had random values.
This makes the solution very simple (and also makes my question likely to be closed as too localised):
unsigned int UNSIGNED_BYTE = 0;
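Alternatively, a sketch of the portable approach suggested above, reusing the read signature from the question (the success check is illustrative):
unsigned char byte = 0;
if (socket.read(1, reinterpret_cast<char*>(&byte)))
{
    unsigned int value = byte;  // plain assignment widens 0-255 correctly
    std::cout << value << '\n'; // no dependence on the host's endianness
}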

When to use unsigned char pointer

What is the use of unsigned char pointers? I have seen in many places that a pointer is cast to a pointer to unsigned char. Why do we do so?
We receive a pointer to int and then cast it to unsigned char*. But if we try to print an element of that array using cout, it does not print anything. Why? I do not understand. I am new to C++.
EDIT Sample Code Below
int Stash::add(void* element)
{
if(next >= quantity)
// Enough space left?
inflate(increment);
// Copy element into storage, starting at next empty space:
int startBytes = next * size;
unsigned char* e = (unsigned char*)element;
for(int i = 0; i < size; i++)
storage[startBytes + i] = e[i];
next++;
return(next - 1); // Index number
}
You are actually looking for pointer arithmetic:
unsigned char* bytes = (unsigned char*)ptr;
for(int i = 0; i < size; i++)
// work with bytes[i]
In this example, bytes[i] is equal to *(bytes + i), and it accesses the memory at the address bytes + (i * sizeof(*bytes)). In other words: if you have int* intPtr and you access intPtr[1], you are actually accessing the integer stored at bytes 4 to 7:
0 1 2 3
4 5 6 7 <--
The size of the type your pointer points to affects where it points after it is incremented or decremented. So if you want to iterate over your data byte by byte, you need a pointer to a type whose size is 1 byte (that's why unsigned char*).
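A small sketch of that difference (the array contents are illustrative; only the step sizes matter):
int values[2] = {1, 2};
int* intPtr = values;
unsigned char* bytePtr = reinterpret_cast<unsigned char*>(values);
// intPtr + 1 advances by sizeof(int) bytes (typically 4),
// bytePtr + 1 advances by exactly 1 byte.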
unsigned char is usually used for holding binary data, where 0 is a valid value and still part of your data. While working with a "naked" unsigned char* you'll probably have to keep track of the length of your buffer yourself.
char is usually used for holding the characters of a string, where 0 is equal to '\0' (the terminating character). If your buffer of characters is always terminated with '\0', you don't need to know its length, because the terminating character exactly specifies the end of your data.
Note that in both of these cases it's better to use some object that hides the internal representation of your data and takes care of memory management for you (see the RAII idiom). So it's a much better idea to use either std::vector<unsigned char> (for binary data) or std::string (for strings).
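For example, a minimal sketch of holding the same kind of binary data in a std::vector<unsigned char> (the contents are illustrative):
#include <vector>
std::vector<unsigned char> data = {0x00, 0x0B, 0x41, 0x42}; // zero bytes are legal data here
// data.size() carries the length, so no separate bookkeeping is needed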
In C, unsigned char is the only type guaranteed to have no trapping values, and which guarantees copying will result in an exact bitwise image. (C++ extends this guarantee to char as well.) For this reason, it is traditionally used for "raw memory" (e.g. the semantics of memcpy are defined in terms of unsigned char).
In addition, unsigned integral types in general are used when bitwise operations (&, |, >> etc.) are going to be used. unsigned char is the smallest unsigned integral type, and may be used when manipulating arrays of small values on which bitwise operations are used. Occasionally, it's also used because one needs the modulo behavior in case of overflow, although this is more frequent with larger types (e.g. when calculating a hash value). Both of these reasons apply to unsigned types in general; unsigned char will normally only be used for them when there is a need to reduce memory use.
The unsigned char type is usually used as a representation of a single byte of binary data. Thus, an array of unsigned char is often used as a binary data buffer, where each element is a single byte.
The unsigned char* construct will be a pointer to the binary data buffer (or to its first element).
I am not 100% sure what the C++ standard says precisely about the size of unsigned char, whether it is fixed at 8 bits or not. Usually it is. I will try to find it and post it.
After seeing your code
When you use something like void* input as a parameter of a function, you deliberately strip away information about the input's original type. This is a very strong suggestion that the input will be treated in a very general manner, i.e. as an arbitrary string of bytes. int* input, on the other hand, would suggest it will be treated as a "string" of signed integers.
void* is mostly used in cases where the input gets encoded, or treated bit- or byte-wise for whatever reason, since you cannot draw conclusions about its contents.
Then, in your function, you seem to want to treat the input as a string of bytes. But to operate on objects, e.g. to perform operator= (assignment), the compiler needs to know what to do. Since you declare the input as void*, an assignment such as *input = something would make no sense, because *input is of type void. To make the compiler treat the input's elements as the "smallest raw memory pieces", you cast it to the appropriate type, which is unsigned char.
The cout probably did not work because of a wrong or unintended type conversion. char* is treated as a null-terminated string, and it is easy to confuse the signed and unsigned versions in code. If you pass an unsigned char* to ostream::operator<< as a char*, it treats and expects the bytes as normal ASCII characters, where 0 means the end of the string, not the integer value 0. When you want to print the contents of memory, it is best to cast the pointers explicitly.
Also note that to print the memory contents of a buffer you need to use a loop, since otherwise the printing function would not know when to stop.
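A sketch of such a printing loop (the function name and parameters are illustrative); casting each byte to int keeps cout from interpreting it as a character:
#include <cstddef>
#include <iostream>
// Print 'length' bytes of a raw buffer as decimal values.
void printBytes(const unsigned char* bytes, std::size_t length)
{
    for (std::size_t i = 0; i < length; i++)
        std::cout << static_cast<int>(bytes[i]) << ' ';
    std::cout << '\n';
}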
Unsigned char pointers are useful when you want to access the data byte by byte. For example, a function that copies data from one area to another could need this:
void memcpy (unsigned char* dest, unsigned char* source, unsigned count)
{
for (unsigned i = 0; i < count; i++)
dest[i] = source[i];
}
It also has to do with the fact that the byte is the smallest addressable unit of memory. If you want to read anything smaller than a byte from memory, you need to get the byte that contains that information, and then select the information using bit operations.
You could very well copy the data in the above function using an int pointer, but that would copy chunks of 4 bytes, which may not be the correct behavior in some situations.
As for why nothing appears on the screen when you try to use cout, the most likely explanation is that the data starts with a zero byte, which in C++ marks the end of a character string.