I'm trying to load an image file into a buffer in order to send it through a socket. The problem I'm having is that the program allocates a buffer of the correct size but does not copy the whole file into it. My code is as follows:
//imgload.cpp
#include <iostream>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

using namespace std;

int main(int argc, char *argv[]){
    FILE *f = NULL;
    char filename[80];
    char *buffer = NULL;
    long file_bytes = 0;
    char c = '\0';
    int i = 0;

    printf("-Enter a file to open:");
    gets(filename);

    f = fopen(filename,"rb");
    if (f == NULL){
        printf("\nError opening file.\n");
    }else{
        fseek(f,0,SEEK_END);
        file_bytes = ftell(f);
        fseek(f,0,SEEK_SET);
        buffer = new char[file_bytes+10];
    }

    if (buffer != NULL){
        printf("-%ld + 10 bytes allocated\n",file_bytes);
    }else{
        printf("-Could not allocate memory\n");
        // Call exit?.
    }

    while (c != EOF){
        c = fgetc(f);
        buffer[i] = c;
        i++;
    }
    c = '\0';
    buffer[i-1] = '\0'; // helps remove random characters in buffer when copying is finished..
    i = 0;

    printf("buffer size is now: %d\n",strlen(buffer));

    //release buffer to os and cleanup....
    return 0;
}
> output
c:\Users\Desktop>imgload
-Enter a file to open:img.gif
-3491 + 10 bytes allocated
buffer size is now: 9
c:\Users\Desktop>imgload
-Enter a file to open:img2.gif
-1261 + 10 bytes allocated
buffer size is now: 7
From the output I can see that it's allocating the correct size for each image, 3491 and 1261 bytes (I double-checked the file sizes through Windows and the sizes being allocated are correct), but the buffer sizes after the supposed copy are 9 and 7 bytes. Why is it not copying the entire file?
You are wrong. An image is binary data, not string data. So there are two errors:
1) You can't check for end of file by comparing against the EOF constant. EOF is typically defined as -1, and once fgetc()'s return value is stored in a char, any 0xFF byte in the file compares equal to it, so your loop stops early. Use the feof() function to check for end of file instead, or compare the current position in the file with the maximum possible one (which you already obtained with ftell()).
2) As the file is binary, it may contain '\0' bytes in the middle, so you can't use string functions on such data.
Also, I see that you use C++. Why use classic C syntax for file handling? C++ features such as file streams, containers and iterators would simplify your program.
P.S. Your program will also have problems with really big files. Who knows, maybe you will need to work with them. If so, switch the ftell/fseek calls to their 64-bit (long long int) equivalents and fix the array counter accordingly. Another good idea is to read the file in blocks; reading byte by byte is dramatically slower.
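A minimal sketch of both fixes (the file name img.gif is made up; error handling kept short): sizing the read with the ftell() result and doing it in one fread() call avoids both the EOF-value comparison and the byte-by-byte loop.

#include <stdio.h>

int main()
{
    const char *filename = "img.gif";   // hypothetical input file
    FILE *f = fopen(filename, "rb");
    if (f == NULL)
        return 1;

    // Get the file size once, exactly as the question already does.
    fseek(f, 0, SEEK_END);
    long file_bytes = ftell(f);
    fseek(f, 0, SEEK_SET);

    char *buffer = new char[file_bytes];

    // Read the whole file in one block instead of byte by byte.
    // fread() reports how many bytes it actually delivered, so no
    // comparison against the EOF value is needed.
    size_t got = fread(buffer, 1, file_bytes, f);
    printf("read %lu of %ld bytes\n", (unsigned long)got, file_bytes);

    fclose(f);
    delete[] buffer;
    return 0;
}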
All this is unneeded and actually makes no sense:
c = '\0';
buffer[i-1] = '\0';
i = 0;
printf("buffer size is now: %d\n",strlen(buffer));
Don't use strlen for binary data. strlen stops at the first NUL (\0) byte, and a binary file may contain many such bytes, so NUL can't be used as a terminator.
-3491 + 10 bytes allocated /* There are 3491 bytes in the file. */
buffer size is now: 9 /* The first byte with the value 0. */
In conclusion, drop that part. You already have the size of the file.
You are reading a binary file like a text file. You can't check for the EOF value, as that byte could legitimately appear anywhere in a binary file.
Related
I have 640*480 numbers. I need to write them into a file, and I will need to read them back later. What is the best solution? The numbers are between 0 and 255.
For me the best solution is to write them in binary (8 bits). I wrote the numbers into a txt file and now it looks like 1011111010111110..... so there is no question where each number starts and ends.
How am I supposed to read them back from the file?
Using C++.
It's not a good idea to write bit values as the characters 1 and 0 in a text file: the file becomes 8 times bigger, because every character in a text file takes at least 1 byte and 1 byte = 8 bits. Store bytes instead; each value 0-255 is exactly one byte, so your file will be 640*480 bytes instead of 640*480*8. If you need individual bits, use the binary operators of the programming language you use. Reading whole bytes is much easier, so use a binary file to save your data.
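A minimal sketch of that round trip (the file name image.raw and the dummy fill values are made up for the example):

#include <stdio.h>

int main()
{
    static unsigned char image[640 * 480];       // one byte per value, 0-255
    for (int i = 0; i < 640 * 480; i++)
        image[i] = (unsigned char)(i % 256);     // dummy data for the example

    // Write: one byte per value, so the file is exactly 640*480 = 307200 bytes.
    FILE *out = fopen("image.raw", "wb");
    if (out == NULL)
        return 1;
    fwrite(image, 1, sizeof(image), out);
    fclose(out);

    // Read it back the same way.
    static unsigned char loaded[640 * 480];
    FILE *in = fopen("image.raw", "rb");
    if (in == NULL)
        return 1;
    fread(loaded, 1, sizeof(loaded), in);
    fclose(in);
    return 0;
}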
Presumably you have some sort of data structure representing your image, which somewhere inside holds the actual data:
class pixmap
{
public:
    // stuff...
private:
    std::unique_ptr<std::uint8_t[]> data;
};
So you can add a new constructor which takes a filename and reads bytes from that file:
pixmap(const std::string& filename)
{
    constexpr int SIZE = 640 * 480;

    // Open an input file stream (in binary mode, so no newline
    // translation happens on Windows) and set it to throw exceptions:
    std::ifstream file;
    file.exceptions(std::ios_base::badbit | std::ios_base::failbit);
    file.open(filename.c_str(), std::ios_base::binary);

    // Create a unique ptr to hold the data: this will be cleaned up
    // automatically if file reading throws
    std::unique_ptr<std::uint8_t[]> temp(new std::uint8_t[SIZE]);

    // Read SIZE bytes from the file
    file.read(reinterpret_cast<char*>(temp.get()), SIZE);

    // If we get to here, the read worked, so we move the temp data we've just read
    // into where we'd like it
    data = std::move(temp); // or std::swap(data, temp) if you prefer
}
I realise I've assumed some implementation details here (you might not be using a std::unique_ptr to store the underlying image data, though you probably should be) but hopefully this is enough to get you started.
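Usage could then look something like this (assuming the class above plus <iostream>; the file name is made up). Because the stream was set to throw, failures surface as std::ios_base::failure:

int main()
{
    try {
        pixmap p("image.raw");   // hypothetical 640*480-byte file
        // ... use p ...
    } catch (const std::ios_base::failure&) {
        std::cerr << "could not open or read the file\n";
    }
    return 0;
}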
You can write a number between 0-255 to the file as a char value.
See the code below: in this example I am printing the integer 70 as a char,
so it prints 'F' on the console.
Similarly, you can read it back as a char and then convert that char to an integer.
#include <stdio.h>

int main()
{
    int i = 70;
    char dig = (char)i;
    printf("%c", dig);
    return 0;
}
This way you can restrict the file size.
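A small sketch of the reading side (the file name values.bin is made up); note that fgetc() returns an int, which sidesteps the char-vs-EOF pitfall from the first question above:

#include <stdio.h>

int main()
{
    FILE *f = fopen("values.bin", "rb"); // hypothetical file written with %c
    if (f == NULL)
        return 1;

    int c; // int, not char, so the EOF return value is distinguishable from data
    while ((c = fgetc(f)) != EOF)
        printf("%d\n", c);               // an 'F' in the file prints back as 70

    fclose(f);
    return 0;
}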
I'm trying to write to disk an array containing 11.26 million uint16_t values. The total memory size should be ~22 MB, yet the size of my file is 52 MB. I'm using fprintf to write the array out. I thought maybe the values were being promoted, so I tried to be explicit, but it seems to make no difference: the size of my file is stubbornly unchanged.
What am I doing wrong? Code follows.
#define __STDC_FORMAT_MACROS
...
uint32_t dbsize_ = 11262336;
uint16_t* db_ = new uint16_t[dbsize_];
...
char fname[256] = "foo";
FILE* f = fopen(fname, "wb");
if(f == NULL)
{
    return;
}
fprintf(f, "%i\t", dbsize_);
for(uint32_t i = 0; i < dbsize_; i++)
{
    fprintf(f, "%" PRIu16 "", db_[i]);
}
fclose(f);
You're writing ASCII to your file, not binary.
Try writing your array like this instead of using fprintf in a loop.
fwrite(db_, sizeof(db_[0]), dbsize_, f);
fprintf always formats numbers and other types to text, whether you've opened the file in binary mode or not. Binary mode just keeps the runtime from doing things like converting \n to \r\n.
fprintf will convert your number to a series of ASCII characters and write them to the file. Depending on its value, a 32-bit int will be from 1 to 10 characters long when expressed as a string. You need to use fwrite to write raw binary values to a file.
The source of the confusion is likely that the "b" in FILE* f = fopen(fname, "wb"); does not do what you think it does.
Most significantly, it doesn't change any of the print or scan functions to use binary values instead of ASCII. As others have said, use fwrite instead.
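A hedged sketch of the full round trip (the file name "foo" and the dbsize_/db_ names come from the question; everything else is illustrative). The file comes out at 4 + 2 * dbsize_ bytes, roughly the expected 22 MB; note the raw layout is not portable across machines with different byte order.

#include <stdio.h>
#include <stdint.h>

int main()
{
    uint32_t dbsize_ = 11262336;
    uint16_t* db_ = new uint16_t[dbsize_]();   // zero-initialised for the example

    // Write the element count followed by the raw values.
    FILE* f = fopen("foo", "wb");
    if (f != NULL) {
        fwrite(&dbsize_, sizeof(dbsize_), 1, f);
        fwrite(db_, sizeof(db_[0]), dbsize_, f);
        fclose(f);
    }

    // Reading it back is the mirror image (assumes count <= dbsize_ here).
    uint32_t count = 0;
    f = fopen("foo", "rb");
    if (f != NULL) {
        fread(&count, sizeof(count), 1, f);
        fread(db_, sizeof(db_[0]), count, f);
        fclose(f);
    }

    delete[] db_;
    return 0;
}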
I have the code:
unsigned char *myArray = new unsigned char[40000];
char pixelInfo[3];
int c = 0;

while(!reader.eof()) //reader is a ifstream open to a BMP file
{
    reader.read(pixelInfo, 3);
    myArray[c] = (unsigned char)pixelInfo[0];
    myArray[c + 1] = (unsigned char)pixelInfo[1];
    myArray[c + 2] = (unsigned char)pixelInfo[2];
    c += 3;
}
reader.close();

delete[] myArray; //I get HEAP CORRUPTION here
After some tests, I found it to be caused by the cast in the while loop; if I use a signed char myArray I don't get the error, but I must use unsigned char for the rest of my code.
Casting pixelInfo to unsigned char also gives the same error.
Is there any solution to this?
This is what you should do:
reader.read((char*)myArray, myArrayLength); /* note, that isn't (sizeof myArray) */
if (!reader) { /* report error */ }
If there's processing going on inside the loop, then
int c = 0;
while (c + 2 < myArraySize) //reader is a ifstream open to a BMP file
{
    reader.read(pixelInfo, 3);
    myArray[c] = (unsigned char)pixelInfo[0];
    myArray[c + 1] = (unsigned char)pixelInfo[1];
    myArray[c + 2] = (unsigned char)pixelInfo[2];
    c += 3;
}
Trying to read after you've hit the end is not a problem -- you'll get junk in the rest of the array, but you can deal with that at the end.
Assuming your array is big enough to hold the whole file invites buffer corruption. Buffer overrun attacks involving image files with carefully crafted incorrect metadata are quite well-known.
in Mozilla
in Sun Java
in Internet Explorer
in Windows Media Player
again in Mozilla
in MSN Messenger
in Windows XP
Do not rely on the entire file content fitting in the calculated buffer size.
reader.eof() will only tell you if the previous read hit the end of the file, which causes your final iteration to write past the end of the array. What you want instead is to check if the current read hits the end of file. Change your while loop to:
while(reader.read(pixelInfo, 3)) //reader is a ifstream open to a BMP file
{
// ...
}
Note that you are reading 3 bytes at a time. If the total number of bytes is not a multiple of 3, then on the last read only part of the pixelInfo array will be filled with valid data, which may cause an error in your program. You could try the following (untested) code.
while(!reader.eof()) //reader is a ifstream open to a BMP file
{
    reader.read(pixelInfo, 3);
    // gcount() reports how many bytes the last read actually extracted
    for (int i = 0; i < reader.gcount(); i++) {
        myArray[c+i] = pixelInfo[i];
    }
    c += reader.gcount(); // advance by what was actually read
}
Your code does follow the documentation on cplusplus.com: the eof bit is only set after an incomplete read, so this loop will terminate after your last read. However, as mentioned before, the likely cause of your issue is assigning junk data to the heap, since pixelInfo[x] is not necessarily set when fewer than 3 bytes were read.
What is an efficient, proper way of reading in a data file with mixed content? For example, I have a data file that contains a mixture of data loaded from other files: 32-bit integers, characters and strings. Currently, I am using an fstream object, but it gets stopped once it hits an int32 or the end of a string. If I add random data onto the end of the string in the data file, it seems to follow through with the rest of the file. This leads me to believe that the null termination added onto strings is messing it up.
int main()
{
    fstream fin("C://mark.dat", ios::in|ios::binary|ios::ate);
    char *mymemory = 0;
    int size = 0;

    if (fin.is_open())
    {
        size = static_cast<int>(fin.tellg());
        mymemory = new char[static_cast<int>(size+1)];
        memset(mymemory, 0, static_cast<int>(size + 1));
        fin.seekg(0, ios::beg);
        fin.read(mymemory, size);
        fin.close();
        printf(mymemory);

        std::string hithere;
        hithere = cin.get();
    }
    return 0;
}
Why might this code stop after reading in an integer or a string? How might one get around this? Is this the wrong approach when dealing with these types of files? Should I be using fstream at all?
Have you ever considered that the file reading is working perfectly and it is printf(mymemory) that is stopping at the first null?
Have a look with the debugger and see if I am right.
Also, if you want to print someone else's buffer, use puts(mymemory) or printf("%s", mymemory). Don't accept someone else's input for the format string, it could crash your program.
Try
for (int i = 0; i < size; ++i)
{
    // %02X means: pad with 0s, minimum field width of two,
    // hex value with capital A-F (0A, 1B, etc.)
    printf("%02X ", (unsigned char)mymemory[i]); // unsigned, so bytes >= 0x80 don't sign-extend
    if ((i + 1) % 32 == 0)
        printf("\n"); // new line every 32 bytes
}
as a way to dump your data file back out as hex.
Create a flat text file in C++, around 50-100 MB, where the line 'Added first line' is inserted 4 million times.
Using old-style file I/O (a sketch follows below):
fopen the file for write.
fseek to the desired file size - 1.
fwrite a single byte.
fclose the file.
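A minimal sketch of those steps (the file name and target size are made up for the example):

#include <stdio.h>

int main()
{
    const long desired_size = 100L * 1024 * 1024; // ~100 MB, for example

    FILE *f = fopen("bigfile.txt", "wb");         // hypothetical file name
    if (f == NULL)
        return 1;

    fseek(f, desired_size - 1, SEEK_SET); // jump to where the last byte goes
    fputc('\0', f);                       // writing one byte extends the file
    fclose(f);                            // the file is now desired_size bytes
    return 0;
}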
The fastest way to create a file of a certain size is to simply create a zero-length file using creat() or open() and then change the size using chsize() (ftruncate() on POSIX). This simply allocates blocks on the disk for the file; on most modern file systems the new bytes read back as zeros. It's very fast since no buffer writing needs to take place.
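A sketch of that approach using the POSIX calls (chsize() is the Windows CRT spelling; the file name and size here are made up):

#include <fcntl.h>
#include <unistd.h>

int main()
{
    // Create (or truncate) an empty file, then grow it in a single call.
    int fd = open("bigfile.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    ftruncate(fd, 100L * 1024 * 1024); // ~100 MB; no data is actually written
    close(fd);
    return 0;
}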
Not sure I understand the question. Do you want to ensure that every character in the file is a printable ASCII character? If so, what about this? Fills the file with "abcdefghabc...."
#include <stdio.h>

int main ()
{
    const int FILE_SIZE = 50000; // number of 1 KB blocks, ~50 MB total
    const int BUFFER_SIZE = 1024;
    char buffer [BUFFER_SIZE + 1];
    int i;

    for(i = 0; i < BUFFER_SIZE; i++)
        buffer[i] = (char)(i%8 + 'a');
    buffer[BUFFER_SIZE] = '\0';

    FILE *pFile = fopen ("somefile.txt", "w");
    for (i = 0; i < FILE_SIZE; i++)
        fputs(buffer, pFile); // never pass a data buffer as the printf format string
    fclose(pFile);
    return 0;
}
You haven't mentioned the OS but I'll assume creat/open/close/write are available.
For truly efficient writing and assuming, say, a 4k page and disk block size and a repeated string:
open the file.
allocate 4k * number of chars in your repeated string, ideally aligned to a page boundary.
print the repeated string into that memory 4k times, filling the blocks precisely.
Use write() to write out the blocks to disk as many times as necessary. You may wish to write a partial piece for the last block to get the size to come out right.
close the file.
This bypasses the buffering of fopen() and friends, which is good and bad: their buffering means that they're nice and fast, but they are still not going to be as efficient as this, which has no overhead of working with the buffer.
This can easily be written in C++ or C, but does assume that you're going to use POSIX calls rather than iostream or stdio for efficiency's sake, so it's outside the core library specification.
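A sketch of those steps under POSIX (the file name and the 4096-copies choice are illustrative; "Added first line\n" is 17 bytes, so 4096 copies fill exactly 17 pages of 4 KiB):

#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>

int main()
{
    const char *line = "Added first line\n";
    const size_t line_len = strlen(line);      // 17 bytes
    const size_t block_size = 4096 * line_len; // 4096 copies = whole 4k pages

    // Fill one large buffer with repeated copies of the string.
    char *block = (char *)malloc(block_size);
    if (block == NULL)
        return 1;
    for (size_t i = 0; i < block_size; i += line_len)
        memcpy(block + i, line, line_len);

    int fd = open("bigfile.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    // 4 million lines at 4096 lines per write is 976 full writes;
    // a real version would also write the remaining 4000000 % 4096 lines.
    for (int i = 0; i < 4000000 / 4096; i++)
        write(fd, block, block_size);

    close(fd);
    free(block);
    return 0;
}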
I faced the same problem, creating a ~500 MB file on Windows very fast.
The larger the buffer you pass to fwrite(), the faster you'll be.
int i;
FILE *fp;

fp = fopen(fname, "wb"); // fname and TOT_BLOCKS are assumed defined elsewhere
if (fp != NULL) {
    // create one big block of data
    uint8_t b[278528]; // some big chunk size
    for( i = 0; i < (int)sizeof(b); i++ ) // custom initialization if != 0x00
    {
        b[i] = 0xFF;
    }
    // write all blocks to the file
    for( i = 0; i < TOT_BLOCKS; i++ )
        fwrite(&b, sizeof(b), 1, fp);
    fclose (fp);
}
Now, at least on my Win7 machine with MinGW, this creates the file almost instantly.
Compared to fwrite() 1 byte at a time, which completes in 10 seconds, passing a 4k buffer completes in 2 seconds.
Fastest way to create large file in c++?
Ok. I assume fastest way means the one that takes the smallest run time.
Create a flat text file in C++, around 50-100 MB, where the line 'Added first line' is inserted 4 million times.
preallocate the file using old-style file I/O:
fopen the file for write.
fseek to the desired file size - 1.
fwrite a single byte.
fclose the file.
create a string containing "Added first line\n" a thousand times.
find its length.
preallocate the file using old-style file I/O:
fopen the file for write.
fseek to the string length * 4000.
fwrite a single byte.
fclose the file.
open the file for read/write
loop 4000 times,
writing the string to the file.
close the file.
That's my best guess.
I'm sure there are a lot of ways to do it.