I have an unsigned char array in C++, and I want to save its bits into a file.
So for example if I have the array below
arr[0] = 0;
arr[1] = 1;
arr[2] = 2;
I want to save this into the file so that the binary data of the file looks like this (without the spaces).
00000000 00000001 00000010
I have tried this code below but it doesn't seem to work. I give the program a blank file that is 0 bytes in size, and after the program runs the file remains 0 bytes in size (nothing gets written to it).
// Create unsigned char array
unsigned char *arrayToWrite= (unsigned char*)malloc(sizeof(unsigned char)*720);
// Populate it with some test values
arrayToWrite[0] = 0;
arrayToWrite[1] = 1;
arrayToWrite[2] = 2;
// Open the file I want to write to
FILE *dat;
dat = fopen("filePath.bin", "wb");
// Write array to file
fwrite(&arrayToWrite, sizeof(char)*720, 1, dat);
fclose(dat);
I would expect that after this program runs, the file "filePath.bin" would be 720 bytes long, filled with mostly 0's except for the first few positions I filled with test data.
What am I doing wrong? Any help would be much appreciated! Thank you!
The basic issue there is that you're passing the address of your arrayToWrite pointer variable, rather than the pointer to the array itself. Change fwrite(&arrayToWrite... to fwrite(arrayToWrite....
BTW, malloc() does NOT promise to give you zeroed memory. For that, use calloc() to allocate and zero memory or memset() to zero already-allocated memory. (Though all that is more a C thing; for C++ you'd be better off using something like std::vector instead of raw C arrays.)
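A minimal corrected version of the question's C-style code might look like this (same 720-byte array and file path as in the question, with only a bare error check):

// Allocate zeroed memory, set the test values, then write the array itself.
unsigned char *arrayToWrite = (unsigned char*)calloc(720, sizeof(unsigned char));
arrayToWrite[0] = 0;
arrayToWrite[1] = 1;
arrayToWrite[2] = 2;

FILE *dat = fopen("filePath.bin", "wb");
if (dat != NULL) {
    // Pass the pointer itself, not its address.
    fwrite(arrayToWrite, sizeof(unsigned char), 720, dat);
    fclose(dat);
}
free(arrayToWrite);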
In C++, this would be written differently:
std::vector<unsigned char> vectorToWrite = { 0, 1, 2 };
vectorToWrite.resize( 720 );
std::ofstream file( "filePath.bin", std::ios::out | std::ios::binary );
if( !file ) {
// error handling
}
file.write( reinterpret_cast<const char*>( vectorToWrite.data() ), vectorToWrite.size() * sizeof( decltype(vectorToWrite)::value_type ) );
Note: I put sizeof( decltype(vectorToWrite)::value_type ) there so that if you later change the vector's element type it will still work without further changes, but for char it can be omitted completely, as sizeof(char) is always equal to 1.
Does anyone know how to read in a file with raw encoding? So stumped.... I am trying to read in floats or doubles (I think). I have been stuck on this for a few weeks. Thank you!
File that I am trying to read from:
http://www.sci.utah.edu/~gk/DTI-data/gk2/gk2-rcc-mask.raw
Description of raw encoding:
http://teem.sourceforge.net/nrrd/format.html#encoding
- "raw" - The data appears on disk exactly the same as in memory, in terms of byte values and byte ordering. Produced by write() and fwrite(), suitable for read() or fread().
Info of file:
http://www.sci.utah.edu/~gk/DTI-data/gk2/gk2-rcc-mask.nhdr - I think the only things that matter here are the big-endian byte order (still trying to understand what that means from Google) and the raw encoding.
My current approach, uncertain if it's correct:
//Function ripped off from example of c++ ifstream::read reference page
void scantensor(string filename){
ifstream tdata(filename, ifstream::binary); // not sure if I should put ifstream::binary here
// other things I tried
// ifstream tdata(filename) ifstream tdata(filename, ios::in)
if(tdata){
tdata.seekg(0, tdata.end);
int length = tdata.tellg();
tdata.seekg(0, tdata.beg);
char* buffer = new char[length];
tdata.read(buffer, length);
tdata.close();
double* d;
d = (double*) buffer;
} else cerr << "failed" << endl;
}
/* P.S. I attempted to print the first 100 elements of the array.
Then I printed 100 other elements at some arbitrary array indices (e.g. 9,900 - 10,000). I kept increasing the number of 0's until I went out of bounds at 100,000,000 (I don't think that's how it works lol, but I was just playing around to see what happens).
Here's the part that makes me suspicious: ifstream has the different constructors I tried above.
the first 100 values are always the same.
if I use ifstream::binary, then I get some values for the 100 arbitrary printing
if I use the other two options, then I get -6.27744e+066 for all 100 of them
So for now I am going to assume that ifstream::binary is the correct one. The thing is, I am not sure if the file I provided is what binary files actually look like. I am also unsure if these are the actual numbers that I am supposed to read in or just casting gone wrong. I do realize that my casting from char* to double* can be unsafe; I got that from one of the other threads.
*/
I really appreciate it!
Edit 1: Right now the data being read in using the above method is apparently "incorrect" since in paraview the values are:
Dxx,Dxy,Dxz,Dyy,Dyz,Dzz
[0, 1], [-15.4006, 13.2248], [-5.32436, 5.39517], [-5.32915, 5.96026], [-17.87, 19.0954], [-6.02961, 5.24771], [-13.9861, 14.0524]
It's a 3 x 3 symmetric matrix, so 7 distinct values, 7 ranges of values.
The floats that I am currently parsing from the file right now are very large (i.e. -4.68855e-229, -1.32351e+120).
Perhaps somebody knows how to extract the floats from Paraview?
Since you want to work with doubles, I recommend reading the data from the file as a buffer of doubles:
const long machineMemory = 0x40000000; // 1 GB
FILE* file = fopen("c:\\data.bin", "rb");
if (file)
{
int size = machineMemory / sizeof(double);
if (size > 0)
{
double* data = new double[size];
size_t read = 0;
while ((read = fread(data, sizeof(double), size, file)) > 0)
{
// Process data here (read = number of doubles)
}
delete [] data;
}
fclose(file);
}
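One thing the loop above does not handle is that the .nhdr header in the question declares the data as big-endian; on a little-endian machine (most desktop PCs) the bytes of each double would also need to be reversed after reading. A rough sketch of one way to do that; the swapDouble helper is my own name for illustration, not part of the answer above:

#include <cstdint>
#include <cstring>

// Reverse the byte order of one double read from a big-endian file.
double swapDouble(double in)
{
    std::uint64_t bits;
    std::memcpy(&bits, &in, sizeof bits);
    std::uint64_t swapped = 0;
    for (int i = 0; i < 8; ++i)
        swapped = (swapped << 8) | ((bits >> (8 * i)) & 0xff);
    double out;
    std::memcpy(&out, &swapped, sizeof out);
    return out;
}

// Inside the fread loop above, on a little-endian machine:
// for (size_t i = 0; i < read; ++i)
//     data[i] = swapDouble(data[i]);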
I would like to construct a function that performs a file analysis, returning in an array the count (frequency) of every byte value from 0x00 to 0xFF.
So, I wrote this prototype:
// function prototype and other stuff
unsigned int counts[256] = {0}; // byte lookup table
FILE * pFile; // file handle
long fsize; // to store file size
unsigned char* buff; // buffer
unsigned char* pbuf; // later, mark buffer start
unsigned char* ebuf; // later, mark buffer end
if ( ( pFile = fopen ( FNAME , "rb" ) ) == NULL )
{
printf("Error");
return -1;
}
else
{
//get file size
fseek (pFile , 0 , SEEK_END);
fsize = ftell (pFile);
rewind (pFile);
// allocate space ( file size + 1 )
// I want file contents as string for populating it
// with pointers
buff = (unsigned char*)malloc( sizeof(char) * fsize + 1 );
// read whole file into memory
fread(buff,1,fsize,pFile);
// close file
fclose(pFile);
// mark end of buffer as string
buff[fsize] = '\0';
// set the pointers to beginning and end
pbuf = &buff[0];
ebuf = &buff[fsize];
// Here the Bottleneck
// iterate entire file byte by byte
// counting bytes
while ( pbuf != ebuf)
{
printf("%c\n",*pbuf);
// update byte count
counts[(*pbuf)]++;
++pbuf;
}
// free allocated memory
free(buff);
buff = NULL;
}
// printing stuff
But this approach is slow. I have been looking for related algorithms because I have seen, for example, HxD do it faster.
I think maybe reading some bytes at once could be a solution, but I don't know how.
I need a hand, or advice.
Thanks.
Assuming your file isn't so large it causes the system to start paging because you are reading the whole thing into memory, your algorithm is as good as it gets for general purpose data - O(n).
You'll need to remove the printf (as commented above); but beyond that, if the performance isn't high enough, the only way to improve it will be to look at the generated assembler - possibly the compiler isn't optimizing out all the dereferences (gcc should, though).
If you happen to know something about your dataset, then there are potential improvements - if it is a bitmap-type image that is likely to have blocks of identical bytes, then it may be worth doing a little run-length encoding. There could also be some data sets where it is actually worth sorting the data first (although that degrades the general case to O(n log(n))), so it's unlikely.
The RLE would look something like this (untested and probably sub-optimal, off the top of my head):
unsigned int cur_count;
unsigned char cbuf;
while ( pbuf != ebuf )
{
    cbuf = *pbuf;      // value of the current run
    cur_count = 0;
    while ( pbuf != ebuf && cbuf == *pbuf )
    {
        cur_count++;
        pbuf++;
    }
    counts[cbuf] += cur_count;   // add the whole run in one go
}
You can often trade an increase in program size for an improvement in speed and I think that could work nicely in your case. I would consider replacing your unsigned char* pointers with unsigned short* pointers and effectively processing two bytes at a time. That way, you have half the number of array index increments, half the number of calculations of offsets into your accumulator, half the number of accumulating additions and half the number of tests to see if your loop has finished.
Like I said, this will come at the expense of increased program size, so your accumulator array now needs 65536 elements instead of 256, but that is a small price to pay. I admit there is a tradeoff with legibility too.
At the end, you will have to run an index through all 65536 elements of the new, bigger accumulator, mask it with 0xff to get the first byte and shift right by 8 bits to get the second. Then you will have the two indexes corresponding to your original accumulator, and you can do the two accumulations into your original 256-entry accumulator from there.
P.S. Note that although you can handle nearly all the file 2 bytes at a time, you will have to handle the last byte on its own if your file size is an odd number of bytes.
P.P.S. Note that this problem is readily parallelisable across, say 4, threads, if you want to get your spare 3 CPU cores doing something more useful than twiddling their thumbs ;-)
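A rough sketch of that idea, reusing the buff, fsize and counts names from the question's code (wide_counts is a name I made up; this is an untested illustration, not a drop-in replacement):

// static so the 256 KB table does not live on the stack
static unsigned int wide_counts[65536] = {0};

const unsigned short* wp   = (const unsigned short*)buff;
const unsigned short* wend = wp + fsize / 2;

while (wp != wend)
    wide_counts[*wp++]++;                    // one increment covers two bytes

if (fsize & 1)                               // odd file size: handle the last byte alone
    counts[buff[fsize - 1]]++;

// Fold the 16-bit histogram back into the 256-entry byte histogram.
for (unsigned int v = 0; v < 65536; ++v)
{
    counts[v & 0xff]        += wide_counts[v];   // low byte
    counts[(v >> 8) & 0xff] += wide_counts[v];   // high byte
}

Byte order does not matter for the final result, since both halves of every 16-bit value are folded back into the same 256 counts.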
I'm writing a resource file into which I want to insert a bunch of data from various common files such as .JPG and .BMP (for example), and I want it to be in binary.
I'm going to write code to retrieve this data later on, organized by index, and this is what I have so far:
float randomValue = 23.14f;
ofstream fileWriter;
fileWriter.open("myFile.dat", ios::binary);
fileWriter.write((char*)&randomValue, sizeof(randomValue));
fileWriter.close();
//With this my .dat file, when opened in notepad has "B!¹A" in it
float retrieveValue = 0.0f;
ifstream fileReader;
fileReader.open("myFile.dat", ios::binary);
fileReader.read((char*)&retrieveValue, sizeof(retrieveValue));
fileReader.close();
cout << retrieveValue << endl; //This gives me exactly the 23.14 I wanted, perfect!
While this works nicely, I'd like to understand what exactly is happening there.
I'm converting the address of randomValue to char* and writing the values at that address to the file?
I'm curious also because I need to do this for an array, and I can't do this:
int* myArray = new int[10];
//fill myArray values with random stuff
fileWriter.open("myFile.dat", ios::binary);
fileWriter.write((char*)&myArray, sizeof(myArray));
fileWriter.close();
From what I understand, this would just write the first address' value in the file, not all the array. So, for testing, I'm trying to simply convert a variable to a char* which I would write to a file, and convert back to the variable to see if I'm retrieving the values correctly, so I'm with this:
int* intArray = new int[10];
for(int i = 0; i < 10; i++)
{
cout << &intArray[i]; //the address of each number in my array
cout << intArray[i]; //it's value
cout << reinterpret_cast<char*>(&intArray[i]); //the char* value of each one
}
But for some reason I don't know, my computer "beeps" when I run this code. While looping over the array, I'm also saving these to a char* and trying to convert back to int, but I'm not getting the expected results; I'm getting some really long values.
Something like:
float randomValue = 23.14f;
char* charValue = reinterpret_cast<char*>(&randomValue);
//charValue contains "B!¹A" plus a bunch of other (uninitialized?) characters, so I'm guessing the value is correct
//Now I'm here
I want to convert charValue back to randomValue, how can I do it?
edit: There's valuable information in the answers below, but they don't solve my (original) problem. I was testing these types of conversions because I'm writing code that will pick a bunch of resource files such as BMP, JPG and MP3, and save them in a single .DAT file organized by some criteria I still haven't fully figured out.
Later, I am going to use this resource file to read from and load these contents into a program (game) I'm coding.
I am still thinking about the criteria, but I was wondering if it's possible to do something like this:
//In my ResourceFile.DAT
[4 bytes = objectID][3 bytes = objectType (WAV, MP3, JPG, BMP, etc)][4 bytes = objectLength][objectLength bytes = actual objectData]
//repeating this until end of file
And then in the code that reads the resource file, I want to do something like this (untested):
ifstream fileReader;
fileReader.open("myFile.DAT", ios::binary);
//file check stuff
while(!fileReader.eof())
{
//Here I'll load
int objectID = 0;
fileReader.read((char*)&objectID, 4); //read 4 bytes to fill objectID
char objectType[3];
fileReader.read(objectType, 3); //read the type so I know which parser to use
int objectLength = 0;
fileReader.read((char*)&objectLength, 4); //get the length of the object data
char* objectData = new char[objectLength];
fileReader.read(objectData, objectLength); //fill objectData with the data
//Here I'll use a parser to fill classes depending on the type etc, and move on to the next obj
}
Currently my code is working with the original files (BMP, WAV, etc) and filling them into classes, and I want to know how I can save the data from these files into a binary data file.
For example, my class that manages BMP data has this:
class FileBMP
{
public:
int imageWidth;
int imageHeight;
int* imageData;
}
When I load it, I call:
void FileBMP::Load(int iwidth, int iheight)
{
int imageTotalSize = iwidth * iheight * 4;
imageData = new int[imageTotalSize]; //This will give me 4 times the amount of pixels in the image
int cPixel = 0;
while(cPixel < imageTotalSize)
{
imageData[cPixel] = 0; //R value
imageData[cPixel + 1] = 0; //G value
imageData[cPixel + 2] = 0; //B value
imageData[cPixel + 3] = 0; //A value
cPixel += 4;
}
}
So I have this single dimension array containing values in the format of [RGBA] per pixel, which I am using later on for drawing on screen.
I want to be able to save just this array in the binary data format I described above, and then read it back to fill the array.
I think it's too much to ask for complete code for this, so I'd just like to understand what I need to know to save these values into a binary file and then read them back.
Sorry for the long post!
edit2: I solved my problem by making the first edit... thanks for the valuable info, I also got to know what I wanted to!
By using the & operator, you're getting a pointer to the contents of the variable (think of it as just a memory address).
float a = 123.45f;
float* p = &a; // now p points to a, i.e. has the memory address to a's contents.
char* c = (char*)&a; // c points to the same memory location, but the code says to treat the contents as char instead of float.
When you gave (char*)&randomValue to write(), you simply said "take this memory address, treat its contents as char data, and write sizeof(randomValue) chars from there". You're not writing the address value itself, but the contents of that memory location ("raw binary data").
cout << reinterpret_cast<char*>(&intArray[i]); //the char* value of each one
Here cout expects char* data terminated with a null char (zero). However, you're providing the raw bytes of an int value instead. Your program might crash here, as cout will keep reading chars until it finds a terminating null -- which it might not find anytime soon.
float randomValue = 23.14f;
char* charValue = reinterpret_cast<char*>(&randomValue);
float back = *(float*)charValue;
Edit: to save binary data, you simply need to provide the data and write() it. Do not use << operator overloads with ofstream/cout. For example:
int values[3] = { 5, 6, 7 };
struct AnyData
{
float a;
int b;
} data;
cout.write((char*)&values, sizeof(int) * 3); // the other two values follow the first one, you can write them all at once.
cout.write((char*)&data, sizeof(data)); // you can also save structs that do not have pointers.
In case you're going to write structs, have a look at the #pragma pack compiler directive. Compilers align (pad) members to certain boundaries, which means that the following struct would typically require 8 bytes without the pack directive, but only the expected 5 bytes with it:
#pragma pack (push, 1)
struct CouldBeLongerThanYouThink
{
    char a;
    int b;   // without packing, 3 padding bytes would normally sit between a and b
};
#pragma pack (pop)
Also, do not write pointer values themselves (if there are pointer members in a struct), because the memory addresses will not point to any meaningful data once read back from a file. Always write the data itself, not pointer values.
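To tie this back to the [objectID][objectType][objectLength][objectData] layout from the question, a minimal untested sketch of writing and reading one record could look like the code below; the function names and the use of std::vector are mine, purely for illustration.

#include <cstdint>
#include <fstream>
#include <vector>

// Write one record: 4-byte id, 3-byte type tag, 4-byte length, then the raw data.
void writeRecord(std::ofstream& out, std::int32_t id,
                 const char type[3], const std::vector<char>& data)
{
    std::int32_t length = static_cast<std::int32_t>(data.size());
    out.write(reinterpret_cast<const char*>(&id), 4);
    out.write(type, 3);
    out.write(reinterpret_cast<const char*>(&length), 4);
    out.write(data.data(), length);
}

// Read one record back; returns false on a clean end of file.
bool readRecord(std::ifstream& in, std::int32_t& id,
                char type[3], std::vector<char>& data)
{
    if (!in.read(reinterpret_cast<char*>(&id), 4))
        return false;                        // no more records
    in.read(type, 3);
    std::int32_t length = 0;
    in.read(reinterpret_cast<char*>(&length), 4);
    data.resize(length);
    in.read(data.data(), length);
    return static_cast<bool>(in);
}

Looping on readRecord's return value also avoids the while(!fileReader.eof()) pattern from the question's sketch, which tests for end of file before a read instead of after it.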
What's happening is that you're copying the internal representation of your data to a file, and then copying it back into memory. This works as long as the program doing the reading was compiled with the same version of the compiler, using the same options, as the program that did the writing. Otherwise, it might or might not work, depending on any number of things beyond your control.
It's not clear to me what you're trying to do, but formats like .jpg and .bmp normally specify the format they want the different types to have, and you have to respect that format.
It is unclear what you really want to do, so I cannot recommend a way of solving your real problem. But I would not be surprised if running the program actually caused beeps or other strange behavior.
int* intArray = new int[10];
for(int i = 0; i < 10; i++)
{
cout << reinterpret_cast<char*>(&intArray[i]);
}
The memory returned by new above is uninitialized, but you are trying to print it as if it were a null-terminated string. That uninitialized memory could contain the bell character (which causes beeps when printed to the terminal) or any other values. It might not contain a null terminator at all, in which case the insertion operator into the stream will overrun the buffer until it either finds a null or your program crashes accessing invalid memory.
There are other incorrect assumptions in your code as well. For example, given int *p = new int[10];, the expression sizeof(p) is the size of a pointer on your architecture, not 10 times the size of an integer.
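A tiny illustration of that last point (a sketch, not from the answer above):

int *p = new int[10];
std::cout << sizeof(p) << '\n';          // size of a pointer, e.g. 8 on a 64-bit build
std::cout << 10 * sizeof(int) << '\n';   // size of the array's data, e.g. 40
delete[] p;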
I'm trying to load an image file into a buffer in order to send it through a socket. The problem that I'm having is that the program creates a buffer of a valid size, but it does not copy the whole file into the buffer. My code is as follows:
//imgload.cpp
#include <iostream>
#include <stdlib.h>
#include <stdio.h>
using namespace std;
int main(int argc,char *argv){
FILE *f = NULL;
char filename[80];
char *buffer = NULL;
long file_bytes = 0;
char c = '\0';
int i = 0;
printf("-Enter a file to open:");
gets(filename);
f = fopen(filename,"rb");
if (f == NULL){
printf("\nError opening file.\n");
}else{
fseek(f,0,SEEK_END);
file_bytes = ftell(f);
fseek(f,0,SEEK_SET);
buffer = new char[file_bytes+10];
}
if (buffer != NULL){
printf("-%d + 10 bytes allocated\n",file_bytes);
}else{
printf("-Could not allocate memory\n");
// Call exit?.
}
while (c != EOF){
c = fgetc(f);
buffer[i] = c;
i++;
}
c = '\0';
buffer[i-1] = '\0'; // helps remove random characters in the buffer when copying is finished
i = 0;
printf("buffer size is now: %d\n",strlen(buffer));
//release buffer to os and cleanup....
return 0;
}
Output:
c:\Users\Desktop>imgload
-Enter a file to open:img.gif
-3491 + 10 bytes allocated
buffer size is now: 9
c:\Users\Desktop>imgload
-Enter a file to open:img2.gif
-1261 + 10 bytes allocated
buffer size is now: 7
From the output I can see that it's allocating the correct size for each image, 3491 and 1261 bytes (I double-checked the file sizes through Windows and the sizes being allocated are correct), but the buffer sizes after the supposed copying are 9 and 7 bytes. Why is it not copying the entire data?
You are treating it wrong: an image is binary data, not string data. So there are two errors:
1) You can't check for end of file by comparing a char against the EOF constant. EOF is an int (usually -1), and once you store fgetc()'s return value in a char, a perfectly valid 0xFF byte in the file compares equal to it, so the loop stops early. Use the feof() function to check for end of file, or compare the current position against the maximum possible one (which you already got with ftell()).
2) As the file is binary, it may contain \0 bytes in the middle. So you can't use string functions to work with such data.
Also, I see that you are using C++. Why use the classic C file API? I think that using C++ features such as file streams, containers and iterators would simplify your program.
P.S. I also want to point out that your program will have problems with really big files. Who knows, maybe you will try to work with them; if so, switch the ftell/fseek calls to their 64-bit (long long int) equivalents, and you'll also need to fix the array counter. Another good idea is to read the file in blocks; reading byte by byte is dramatically slower.
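Putting those points together, here is a minimal sketch of the middle of the question's program, reworked to read the whole file with one fread instead of byte by byte (reusing the f, buffer and file_bytes names from the question; untested):

fseek(f, 0, SEEK_END);
file_bytes = ftell(f);
fseek(f, 0, SEEK_SET);

buffer = new char[file_bytes];
size_t bytes_read = fread(buffer, 1, file_bytes, f);   // no per-byte EOF test, no strlen
fclose(f);

printf("read %zu of %ld bytes\n", bytes_read, file_bytes);
// ... send buffer / bytes_read through the socket, then delete [] buffer; ...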
All this is unneeded and actually makes no sense:
c = '\0';
buffer[i-1] = '\0';
i = 0;
printf("buffer size is now: %d\n",strlen(buffer));
Don't use strlen for binary data. strlen stops at the first NUL (\0) byte. A binary file may contain many such bytes, so NUL can't be used.
-3491 + 10 bytes allocated /* There are 3491 bytes in the file. */
buffer size is now: 9 /* The first byte with the value 0. */
In conclusion, drop that part. You already have the size of the file.
You are reading a binary file like a text file. You can't detect the end of file by checking for the EOF value, as that byte value could appear anywhere in a binary file.
I am trying to read data from a binary file and having issues. I have reduced it down to the most simple case here, and it still won't work. I am new to C++ so I may be doing something silly, but if anyone could advise I would be very grateful.
Code:
int main(int argc,char *argv[]) {
ifstream myfile;
vector<bool> encoded2;
cout << encoded2 << "\n"<< "\n" ;
myfile.open(argv[2], ios::in | ios::binary |ios::ate );
myfile.seekg(0,ios::beg);
myfile.read((char*)&encoded2, 1 );
myfile.close();
cout << encoded2 << "\n"<< "\n" ;
}
Output
00000000
000000000000000000000000000011110000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Compression_Program(58221) malloc: * error for object 0x10012d: Non-aligned pointer being freed
* set a breakpoint in malloc_error_break to debug
Thanks in advance.
Do not cast a vector<bool>* to a char*. It does not do anything predictable.
You are reading directly into encoded2 with myfile.read((char*)&encoded2, 1 );. This is wrong. You can read a single byte and then put it into encoded2:
char x;
myfile.read( &x, 1 );
encoded2.push_back( x != 0 );
Two mistakes here:
you assume the address of a vector is the address of the first element
you rely on vector<bool>
Casting a vector into a char * is not really a good thing, because a vector is an object and stores some state along with its elements.
Here you are probably overwriting the state of the vector, which is why its destructor fails.
Maybe you would like to read into the elements of the vector (which, for every element type other than bool, are guaranteed to be stored contiguously in memory). But another trap is that vector<bool> is a bit-packed specialization, so you cannot take the address of its elements as a char *.
With a std::vector<char> you could do encoded2.resize(8) and then myfile.read(reinterpret_cast<char *>(&encoded2[0]), 8); with vector<bool> even that is not possible.
But probably you want to do something else and we need to know what the purpose is here.
You're overwriting a std::vector, which you shouldn't do. A std::vector is actually a pointer to a data array and an integer (probably a size_t) holding its size; if you overwrite these with practically random bits, data corruption will occur.
Since you're only reading a single byte, this will suffice:
char c;
myfile.read(&c, 1);
The C++ language does not provide an efficient I/O method for reading bits as bits. You have to read bits in groups. Also, you have to worry about endianness when reading in the bits.
I suggest the old fashioned method of allocating a buffer, reading into the buffer then operating on the buffer.
Allocating a buffer
const unsigned int BUFFER_SIZE = 1024 * 1024; // Let the compiler calculate it.
//...
unsigned char * const buffer = new unsigned char [BUFFER_SIZE]; // The pointer is constant.
Reading in the data
unsigned int bytes_read = 0;
ifstream data_file("myfile.bin", ios::binary); // Open file for input without translations.
data_file.read(reinterpret_cast<char *>(buffer), BUFFER_SIZE); // Read data into the buffer (ifstream::read takes a char*).
bytes_read = data_file.gcount(); // Get actual count of bytes read.
Reminders:
- delete the buffer when you are finished with it.
- Close the file when you are finished with it.
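If the goal is still to end up with individual bits in a vector<bool>, one way to operate on the buffer is to unpack each byte after the read above. A small sketch, assuming the buffer and bytes_read variables from this answer (most-significant-bit-first order is my assumption):

std::vector<bool> bits;
bits.reserve(bytes_read * 8);
for (unsigned int i = 0; i < bytes_read; ++i)
    for (int bit = 7; bit >= 0; --bit)          // MSB first
        bits.push_back(((buffer[i] >> bit) & 1) != 0);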
Note that this will not compile for a vector<bool>. With a vector of a plain type such as int, resized to COUNT elements, you could read straight into its storage:
myfile.read((char*) &encoded2[0], sizeof(int) * COUNT);
Or you can use push_back(), which also works for vector<bool> (each nonzero value becomes true):
int tmp;
for(int i = 0; i < COUNT; i++) {
    myfile.read((char*) &tmp, sizeof(int));
    encoded2.push_back(tmp != 0);
}