I am currently writing a file-page-manager program that writes, appends, and reads pages in binary files. For the writing function, I have to delete the whole content of the specified page and write the new content, so I need to delete data from a file within a specific range, e.g. from position 30 to position 4096.
If there is no more data after position 4096, you can use truncate(2) to shrink the file to 30 bytes.
If there is more data after byte 4096, you can first overwrite the data starting at position 30 with the data that follows byte 4096, and then truncate the file to [original_filesize - (4096 - 30)] bytes.
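A minimal sketch of that shift-and-truncate approach (assuming C++17 for std::filesystem::resize_file; erase_range and the example offsets are illustrative, not part of the original code):

#include <cstdint>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

// Delete bytes [from, to) by shifting the tail left, then truncating.
void erase_range(const std::string& path, std::uint64_t from, std::uint64_t to)
{
    std::fstream f(path, std::ios::in | std::ios::out | std::ios::binary);
    f.seekg(0, std::ios::end);
    const std::uint64_t file_size = f.tellg();
    // Read everything that follows the deleted range
    std::vector<char> tail(file_size - to);
    f.seekg(to);
    f.read(tail.data(), tail.size());
    // Overwrite the deleted range with the tail
    f.seekp(from);
    f.write(tail.data(), tail.size());
    f.close();
    // Shrink the file by the size of the deleted range
    std::filesystem::resize_file(path, file_size - (to - from));
}

// e.g. erase_range("pages.bin", 30, 4096);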
The only way to delete data in a file is to mark it as deleted. Use some value to indicate that the section(s) are deleted. Otherwise, copy the sections you want to save to a new file.
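A minimal sketch of the marking approach, assuming fixed-size pages and a hypothetical DELETED flag byte at the start of each page:

#include <cstddef>
#include <fstream>

const std::size_t PAGE_SIZE = 4096; // assumed page size
const char DELETED = 0x01;          // hypothetical tombstone value

// Mark a page as deleted by setting a flag byte in its header.
void mark_deleted(std::fstream& f, std::size_t page_no)
{
    f.seekp(page_no * PAGE_SIZE);
    f.write(&DELETED, 1);
}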
This is easy with std::string.
Follow these steps:
Read the whole file into a std::string:
#include <fstream>
#include <string>

std::ifstream input_file_stream( "file", std::ios_base::binary );
const std::size_t size_of_file = input_file_stream.seekg( 0, std::ios_base::end ).tellg();
input_file_stream.seekg( 0, std::ios_base::beg ); // rewind
std::string whole_file( size_of_file, ' ' ); // allocate space for the whole file
input_file_stream.read( &whole_file[ 0 ], size_of_file );
then erase what you want:
// delete with offset
whole_file.erase( 10, // size_type index
200 ); // size_type count
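For the range in the original question (positions 30 to 4096), that call would be:

whole_file.erase( 30, 4096 - 30 ); // removes bytes [30, 4096)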
and finally write to a new file:
// write it to the new file:
std::ofstream output_file_stream( "new_file", std::ios_base::binary );
output_file_stream.write( &whole_file[ 0 ], whole_file.size() );
output_file_stream.close();
input_file_stream.close();
Let's just say I have an .ini file that looks like:
[material#3]
Diffuse=sometexture.jpg
Normal=someNormalMap.jpg
Specular=NULL
And I want to add a specular map value to the 'Specular' label. If I had this string buffer in a byte array, I would do something like:
#include <cstring>

void assignSpecularMap(char* buffer, const char* name)
{
    // Seek to the position after 'Specular='
    char* value = strstr(buffer, "Specular=") + strlen("Specular=");
    int oldValueStrLen = strlen("NULL");
    int newValueStrLen = strlen(name);
    int shiftAmount = newValueStrLen - oldValueStrLen;
    // Shift the rest of the buffer from after 'NULL' by shiftAmount
    // (memmove, not memcpy, because the regions overlap)
    char* tail = value + oldValueStrLen;
    memmove(tail + shiftAmount, tail, strlen(tail) + 1);
    // Write the new value after 'Specular='
    memcpy(value, name, newValueStrLen);
}
The shifting of the buffer would be done with a memmove (memcpy is not safe when source and destination overlap). This is my naive understanding of how to write new values into a text file. If each entry or section isn't padded to a uniform byte size, which it never is for human-readable and editable files, then any change in the entries of these files has to be accompanied by a shift of the rest of the buffer, correct?
I'm wondering how to do this with a file stream in C++. Is it really the case that the entire stream has to be loaded into a char buffer and edited the way I showed?
Suppose I write a program in C/C++ and create an array of a certain size. I want to keep that array permanently, even after I switch off the computer, and access it again later. Is there a way to do that? If so, please let me know, and also how to access the data after saving it.
Save the data to a file and load it on program start.
Say you create a buffer of MAX bytes for a string:
char * str = (char *) malloc( MAX );
At some point, you fill it with some data:
strcpy( str, "Useful data in the form of a string" );
Finally, at the program's end, you save it to a file:
FILE * f = fopen( "data.bin", "wb" );
fwrite( str, 1, MAX, f );
fclose( f );
At the beginning of the next execution, you'd like to load it:
char * str = (char *) malloc( MAX );
FILE * f = fopen( "data.bin", "rb" );
fread( str, 1, MAX, f );
fclose( f );
This solution has a few shortcomings: for example, your data will only be usable on the computer where you saved it. If you want portability, you should use a text format such as XML: http://www.jclark.com/xml/expat.html
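As a minimal sketch of the plain-text alternative (data.txt and save_text are illustrative names; error checking omitted):

#include <stdio.h>

// Save the string as plain text so any machine or editor can read it.
void save_text(const char* str)
{
    FILE* f = fopen("data.txt", "w");
    fprintf(f, "%s\n", str);
    fclose(f);
}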
Hope this helps.
You could use a memory mapped file and use offsets in the memory mapped file in place of pointers. You would have to implement your own dynamic block allocation management in the memory mapped file.
Using offsets would be less efficient than pointers. But you would load and save the data structure in a snap.
It is possible to avoid offsets and use real pointers instead. To do this, you save the mapping's base address in the memory-mapped file when you close it. When you load the file again, you adjust all pointers in the data structure by adding the difference between the new mapping's base address and the saved one.
If the data structure is small, you could do this in one pass when the file is mapped into memory. If the data structure is big, you could do it lazily and only fix the pointers of a struct when you access it for the first time.
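A minimal sketch of the offset idea, assuming POSIX mmap and a hypothetical Node type; a real implementation would add its own block allocator and error checking:

#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct Node {
    std::uint64_t next_offset; // offset of the next node in the file; 0 = "null"
    int value;
};

// Resolve an offset inside the current mapping to a real pointer.
inline Node* at(void* base, std::uint64_t offset)
{
    return offset ? reinterpret_cast<Node*>(static_cast<char*>(base) + offset)
                  : nullptr;
}

int main()
{
    int fd = open("data.bin", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, 4096); // reserve space; a real allocator would manage blocks
    void* base = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    // Build a two-node list using offsets relative to 'base'
    Node* first = at(base, 64); // arbitrary offsets for the sketch
    first->value = 1;
    first->next_offset = 128;
    Node* second = at(base, first->next_offset);
    second->value = 2;
    second->next_offset = 0; // end of list

    munmap(base, 4096); // the list survives on disk, position-independent
    close(fd);
}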
I have 640*480 numbers. I need to write them into a file. I will need to read them later. What is the best solution? Numbers are between 0 - 255.
For me, the best solution is to write them in binary (8 bits each). I wrote the numbers into a txt file and now it looks like 1011111010111110....., so there is no question where each number starts and ends.
How am I supposed to read them from the file?
Using C++.
It's not a good idea to write bit values as the characters 1 and 0 to a text file; the file will be 8 times bigger, since 1 byte = 8 bits and every character in a text file takes at least 1 byte. You should store bytes: a value in the range 0-255 is exactly one byte, so your file will be 640*480 bytes instead of 640*480*8. If you need the individual bits, use the bitwise operators of your programming language. Reading whole bytes is also much easier. Use a binary file for saving your data.
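A minimal sketch of that, assuming the values are already in a buffer (image.raw is an illustrative name):

#include <cstddef>
#include <cstdint>
#include <fstream>
#include <vector>

int main()
{
    const std::size_t SIZE = 640 * 480;
    std::vector<std::uint8_t> pixels(SIZE, 0); // your 0-255 values go here

    // Write: one byte per value, so the file is exactly 640*480 bytes
    std::ofstream out("image.raw", std::ios::binary);
    out.write(reinterpret_cast<const char*>(pixels.data()), pixels.size());
    out.close();

    // Read the bytes back later
    std::vector<std::uint8_t> loaded(SIZE);
    std::ifstream in("image.raw", std::ios::binary);
    in.read(reinterpret_cast<char*>(loaded.data()), loaded.size());
}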
Presumably you have some sort of data structure representing your image, which somewhere inside holds the actual data:
#include <cstdint>
#include <memory>

class pixmap
{
public:
    // stuff...
private:
    std::unique_ptr<std::uint8_t[]> data;
};
So you can add a new constructor which takes a filename and reads bytes from that file:
pixmap(const std::string& filename)
{
    constexpr int SIZE = 640 * 480;
    // Open an input file stream in binary mode and set it to throw exceptions:
    std::ifstream file;
    file.exceptions(std::ios_base::badbit | std::ios_base::failbit);
    file.open(filename.c_str(), std::ios_base::binary);
    // Create a unique_ptr to hold the data: this will be cleaned up
    // automatically if file reading throws
    std::unique_ptr<std::uint8_t[]> temp(new std::uint8_t[SIZE]);
    // Read SIZE bytes from the file
    file.read(reinterpret_cast<char*>(temp.get()), SIZE);
    // If we get to here, the read worked, so we move the temp data we've just read
    // into where we'd like it
    data = std::move(temp); // or std::swap(data, temp) if you prefer
}
I realise I've assumed some implementation details here (you might not be using a std::unique_ptr to store the underlying image data, though you probably should be) but hopefully this is enough to get you started.
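For the reverse direction, a matching save function might look like this, under the same assumptions:

void save(const std::string& filename) const
{
    constexpr int SIZE = 640 * 480;
    std::ofstream file;
    file.exceptions(std::ios_base::badbit | std::ios_base::failbit);
    file.open(filename.c_str(), std::ios_base::binary);
    // Write SIZE raw bytes from the image data
    file.write(reinterpret_cast<const char*>(data.get()), SIZE);
}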
You can write each number between 0 and 255 as a char value in the file.
See the code below: in this example I am printing the integer 70 as a char, so it prints as 'F' on the console.
Similarly, you can read it back as a char and then convert that char to an integer.
#include <stdio.h>

int main()
{
    int i = 70;
    char dig = (char)i;
    printf("%c", dig);
    return 0;
}
This way you keep the file size small.
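For the read-back direction, a small sketch, assuming the byte was written to a file named data.bin rather than to the console:

#include <stdio.h>

int main()
{
    FILE* f = fopen("data.bin", "rb");
    int c = fgetc(f); // reads the byte back as a value 0-255 (or EOF)
    fclose(f);
    printf("%d\n", c); // prints 70 if the byte 'F' from above was stored
    return 0;
}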
I'm writing a simple console application in Visual Studio C++. I want to read a binary file with .cer extension to a byte array.
ifstream inFile;
size_t size = 0;
char* oData = 0;

inFile.open(path, ios::in | ios::binary);
if (inFile.is_open())
{
    size = inFile.tellg(); // get the length of the file
    oData = new char[size + 1]; // for the '\0'
    inFile.read(oData, size);
    oData[size] = '\0'; // set '\0'
    inFile.close();
    buff.CryptoContext = (byte*)oData;
    delete[] oData;
}
But when I launch it, all the characters of oData are the same char, a different one on each run. For example:
oData = "##################################################...".
Then I tried another way:
std::ifstream in(path, std::ios::in | std::ios::binary);
if (in)
{
    std::string contents;
    in.seekg(0, std::ios::end);
    contents.resize(in.tellg());
    in.seekg(0, std::ios::beg);
    in.read(&contents[0], contents.size());
    in.close();
}
Now the contents have very strange values: some of the values are correct, and some are negative or strange (maybe it is related to signed char vs. unsigned char?).
Does anyone have any idea?
Thanks in advance!
Looking at the first version:
What makes you think that tellg gets the size of the stream? It does not; it returns the current read position, which is 0 right after opening.
You then go on to give a pointer to your data to buff.CryptoContext and promptly delete the data it points to! This is very dangerous practice; you need to copy the data, use a smart pointer, or otherwise ensure the data has the correct lifespan. If you're running in debug mode, the deletion is likely stomping your data with a marker to show it has been deleted, which is why you are getting the stream of identical characters.
I suspect your suggestion about signed and unsigned may be correct for the second version, but I can't say without seeing your file and data.
You are setting CryptoContext to point to your data through a byte pointer, and right after that you delete that data!
buff.CryptoContext = (byte*)oData;
delete[] oData;
After these lines, CryptoContext points to released, invalid data. Keep the oData array in memory longer, and delete it only after you are done with decoding or whatever you are doing with it.
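Putting both answers together, a corrected sketch of the first version might be (seek to the end before calling tellg, and delete only after the data is no longer needed):

std::ifstream inFile(path, std::ios::in | std::ios::binary);
if (inFile.is_open())
{
    inFile.seekg(0, std::ios::end); // go to the end first...
    size_t size = inFile.tellg();   // ...so tellg() reports the file size
    inFile.seekg(0, std::ios::beg);
    char* oData = new char[size];
    inFile.read(oData, size);
    inFile.close();
    buff.CryptoContext = (byte*)oData;
    // ... use buff.CryptoContext here ...
    delete[] oData; // delete only once the data is no longer needed
}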
I'm trying to copy a file, but whatever I try, the copy seems to be a few bytes short.
_file is an ifstream set to binary mode.
void FileProcessor::send()
{
    // If no file is opened, return
    if (!_file.is_open()) return;
    // Reset position to beginning
    _file.seekg(0, ios::beg);
    // Result buffer
    char* buffer;
    char* partBytes = new char[_bufferSize];
    //Packet *p;
    // Read the file and send it over the network
    while (_file.read(partBytes, _bufferSize))
    {
        //buffer = Packet::create(Packet::FILE,std::string(partBytes));
        //p = Packet::create(buffer);
        //cout<< p->getLength() << "\n";
        //writeToFile(p->getData().c_str(),p->getLength());
        writeToFile(partBytes, _bufferSize);
        //delete[] buffer;
    }
    //cout<< *p << "\n";
    delete[] partBytes;
}
_writeFile is the file to be written to.
void FileProcessor::writeToFile(const char* buffer, unsigned int size)
{
    if (_writeFile.is_open())
    {
        _writeFile.write(buffer, size);
        _writeFile.flush();
    }
}
In this case I'm trying to copy a zip file.
But opening both the original and the copy in Notepad, I noticed that while they look identical, they differ at the end: the copy is missing a few bytes.
Any suggestions?
You are assuming that the file's size is a multiple of _bufferSize. You have to check what's left in the buffer after the while:
while (_file.read(partBytes, _bufferSize)) {
    writeToFile(partBytes, _bufferSize);
}
if (_file.gcount())
    writeToFile(partBytes, _file.gcount());
Your while loop will terminate when it fails to read _bufferSize bytes because it hits an EOF.
The final call to read() might have read some data (just not a full buffer) but your code ignores it.
After your loop you need to check _file.gcount() and if it is not zero, write those remaining bytes out.
Are you copying from one type of media to another? Perhaps different sector sizes are causing the apparent weirdness.
What if _bufferSize doesn't divide evenly into the size of the file...that might cause extra bytes to be written.
You don't want to always call writeToFile(partBytes, _bufferSize), since the final read may return fewer than _bufferSize bytes. Also, as pointed out in the comments on this answer, the ifstream no longer evaluates as true once EOF is reached, so the last chunk isn't copied at all (this is your posted problem). Instead, use gcount() to get the number of bytes actually read:
do
{
    _file.read(partBytes, _bufferSize);
    writeToFile(partBytes, (unsigned int)_file.gcount());
} while (_file);
For comparisons of zip files, you might want to consider using a non-text editor to do the comparison; HxD is a great (free) hex editor with a file compare option.