C++ std::ofstream - Move the put pointer

I am writing some data to a file. Occasionally, I want to write a block of data from memory, and then move the put pointer along either 1, 2 or 3 bytes to maintain a 4 byte data boundary format.
I could make a new block of data containing zeros and write this, but this seems unnecessary and clumsy. How can I move the put pointer along 1, 2 or 3 bytes?
I am not sure how to do this, because if I call seekp() surely I will move the pointer beyond the current file size? Whereas I assume ofstream::write() deals with this correctly, i.e. it extends the file as it writes data?

I am assuming you are doing something like the following, except that instead of writing two bytes of data you want to write 4 bytes with some padding.
#include <fstream>
using namespace std;

struct data
{
    char first;
    char second;
};

int main()
{
    ofstream outFile;
    data data1;
    data data2;
    data1.first = 'a';
    data1.second = 'b';
    data2.first = 'c';
    data2.second = 'd';
    outFile.open("somefile.dat", ios::binary);
    outFile.write(reinterpret_cast<char*>(&data1), sizeof(data));
    outFile.write(reinterpret_cast<char*>(&data2), sizeof(data));
    outFile.close();
    return 0;
}
One option is to simply make the struct 4 bytes. The disadvantage is that it increases the memory footprint of every instance.
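For illustration, a minimal sketch of that approach, with two made-up explicit padding bytes so that sizeof(data) is exactly 4:
struct data
{
    char first;
    char second;
    char pad[2]; // explicit padding so sizeof(data) == 4; always written as zeros
};

data data1 = { 'a', 'b', { 0, 0 } };
outFile.write(reinterpret_cast<char*>(&data1), sizeof(data)); // one write now emits a full 4-byte record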
Using seekp is probably not a good option; I tried it and it only sort of worked.
outFile.write(reinterpret_cast<char*>(&data1), sizeof(data));
outFile.seekp(2, ios_base::cur);
outFile.write(reinterpret_cast<char*>(&data2), sizeof(data));
outFile.seekp(2, ios_base::cur);
This did succeed in adding padding after data1 but not after data2. Moving the put pointer past the end of the file just isn't a good idea, because it doesn't by itself change the file size. I tried writing 0 bytes after the seekp but this didn't work either.
Honestly I would implement a helper function to provide this functionality. Seems much cleaner this way. Here is a simple example:
#include <fstream>
#include <stdexcept>
using namespace std;

struct data
{
    char first;
    char second;
};

void WriteWithPadding(ofstream* outFile, data d, int width);

int main()
{
    ofstream* outFile = new ofstream();
    data data1;
    data data2;
    data1.first = 'a';
    data1.second = 'b';
    data2.first = 'c';
    data2.second = 'd';
    outFile->open("somefile.dat", ios::binary);
    WriteWithPadding(outFile, data1, 4);
    WriteWithPadding(outFile, data2, 4);
    outFile->close();
    delete outFile;
    return 0;
}

void WriteWithPadding(ofstream* outFile, data d, int width)
{
    if (sizeof(d) > static_cast<size_t>(width))
        throw runtime_error("record is wider than the requested width");
    width = width - sizeof(d); // width is now the amount of padding required
    outFile->write(reinterpret_cast<char*>(&d), sizeof(data));
    // Add padding
    for (int i = 0; i < width; i++)
    {
        outFile->put(0);
    }
}

Just to be pedantic, I assume you have opened your file with ios::binary, because you'll have issues if you haven't.
When writing a file, the file is only as large as the number of bytes you have written to your file. So if you write three bytes to the file, you will have a three-byte file.
To maintain a four-byte resolution, you must make sure to write four bytes at a time -- if you write a three-byte object, write an additional byte (zero?) to bring it up to four bytes.
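As a rough sketch of that (assuming the stream is open in ios::binary mode; the helper name is made up):
void padToFourBytes(std::ofstream& out)
{
    const std::streamoff pos = out.tellp();
    const std::streamoff remainder = pos % 4;
    if (remainder != 0)
    {
        static const char zeros[3] = { 0, 0, 0 };
        out.write(zeros, 4 - remainder); // write zero bytes up to the next 4-byte boundary
    }
}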
Hope this helps.

Related

C++ storing 0 and 1 more efficiently, like in a binary file?

I want to store multiple arrays whose entries all consist of either 0 or 1.
The file would be quite large if I do it the way I currently do it.
I made a minimalist version of what I currently do:
#include <iostream>
#include <fstream>
using namespace std;

int main(){
    ofstream File;
    File.open("test.csv");
    int array[4] = {1, 0, 0, 1};
    for(int i = 0; i < 4; ++i){
        File << array[i] << endl;
    }
    File.close();
    return 0;
}
So basically, is there a way of storing this in a binary file or something, since my data is 0 or 1 in the first place anyway?
If yes, how do I do this? Can I also still have line breaks and maybe even commas in that file? If either of the latter does not work, that's also fine. More importantly, how do I store this as a binary file which has only 0s and 1s, so my file is smaller?
Thank you very much!
The obvious solution is to take 64 characters, say A-Z, a-z, 0-9, and + and /, and have each character code for six entries in your table. There is, in fact, a standard for this called Base64. In Base64, A encodes 0,0,0,0,0,0 while / encodes 1,1,1,1,1,1. Each combination of six zeroes or ones has a corresponding character.
This still leaves commas, spaces, and newlines free for your use as separators.
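A rough sketch of the packing side of this idea (not a complete Base64 encoder; the function name is made up and it ignores the official handling of a final group of fewer than six bits):
#include <string>
#include <vector>

std::string packSixBitsPerChar(const std::vector<int>& bits)
{
    static const char alphabet[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    for (std::size_t i = 0; i < bits.size(); i += 6)
    {
        int value = 0; // six 0/1 entries packed into a value 0..63
        for (std::size_t j = 0; j < 6 && i + j < bits.size(); ++j)
            value = (value << 1) | (bits[i + j] ? 1 : 0);
        out.push_back(alphabet[value]);
    }
    return out;
}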
If you want to store the data as compactly as possible, I'd recommend storing it as binary data, where each bit in the binary file represents one boolean value. This will allow you to store 8 boolean values for each byte of disk space you use up.
If you want to store arrays whose lengths are not multiples of 8, it gets a little bit more complicated since you can't store a partial byte, but you can solve that problem by storing an extra byte of meta-data at the end of the file that specifies how many bits of the final data-byte are valid and how many are just padding.
Something like this:
#include <iostream>
#include <fstream>
#include <cstdint>
#include <vector>
using namespace std;
// Given an array of ints that are either 1 or 0, returns a packed-array
// of uint8_t's containing those bits as compactly as possible.
vector<uint8_t> packBits(const int * array, size_t arraySize)
{
const size_t vectorSize = ((arraySize+7)/8)+1; // round up, then +1 for the metadata byte
vector<uint8_t> packedBits;
packedBits.resize(vectorSize, 0);
// Store 8 boolean-bits into each byte of (packedBits)
for (size_t i=0; i<arraySize; i++)
{
if (array[i] != 0) packedBits[i/8] |= (1<<(i%8));
}
// The last byte in the array is special; it holds the number of
// valid bits that we stored to the byte just before it.
// That way if the number of bits we saved isn't an even multiple of 8,
// we can use this value later on to calculate exactly how many bits we should restore
packedBits[vectorSize-1] = arraySize%8;
return packedBits;
}
// Given a packed-bits vector (i.e. as previously returned by packBits()),
// returns the vector-of-integers that was passed to the packBits() call.
vector<int> unpackBits(const vector<uint8_t> & packedBits)
{
vector<int> ret;
if (packedBits.size() < 2) return ret;
const size_t validBitsInLastByte = packedBits[packedBits.size()-1]%8;
const size_t numValidBits = 8*(packedBits.size()-((validBitsInLastByte>0)?2:1)) + validBitsInLastByte;
ret.resize(numValidBits);
for (size_t i=0; i<numValidBits; i++)
{
ret[i] = (packedBits[i/8] & (1<<(i%8))) ? 1 : 0;
}
return ret;
}
// Returns the size of the specified file in bytes, or -1 on failure
static ssize_t getFileSize(ifstream & inFile)
{
if (inFile.is_open() == false) return -1;
const streampos origPos = inFile.tellg(); // record current seek-position
inFile.seekg(0, ios::end); // seek to the end of the file
const ssize_t fileSize = inFile.tellg(); // record current seek-position
inFile.seekg(origPos); // so we won't change the file's read-position as a side effect
return fileSize;
}
int main(){
// Example of packing an array-of-ints into packed-bits form and saving it
// to a binary file
{
const int array[]={0,0,1,1,1,1,1,0,1,0};
// Pack the int-array into packed-bits format
const vector<uint8_t> packedBits = packBits(array, sizeof(array)/sizeof(array[0]));
// Write the packed-bits to a binary file
ofstream outFile;
outFile.open("test.bin", ios::binary);
outFile.write(reinterpret_cast<const char *>(&packedBits[0]), packedBits.size());
outFile.close();
}
// Now we'll read the binary file back in, unpack the bits to a vector<int>,
// and print out the contents of the vector.
{
// open the file for reading
ifstream inFile;
inFile.open("test.bin", ios::binary);
const ssize_t fileSizeBytes = getFileSize(inFile);
if (fileSizeBytes < 0)
{
cerr << "Couldn't read test.bin, aborting" << endl;
return 10;
}
// Read in the packed-binary data
vector<uint8_t> packedBits;
packedBits.resize(fileSizeBytes);
inFile.read(reinterpret_cast<char *>(&packedBits[0]), fileSizeBytes);
// Expand the packed-binary data back out to one-int-per-boolean
vector<int> unpackedInts = unpackBits(packedBits);
// Print out the int-array's contents
cout << "Loaded-from-disk unpackedInts vector is " << unpackedInts.size() << " items long:" << endl;
for (size_t i=0; i<unpackedInts.size(); i++) cout << unpackedInts[i] << " ";
cout << endl;
}
return 0;
}
(You could probably make the file even more compact than that by running zip or gzip on the file after you write it out :) )
You can indeed write and read binary data. However, having line breaks and commas would be difficult. Imagine you save your data as boolean data, so only ones and zeros; then having a comma would require a special character, but you only have ones and zeros! The next best thing would be to make an object of two booleans, one holding the usual data you need (C++ would then read the data in pairs of bits) and the other indicating whether you have a comma or not, but I doubt this is what you need. If you want to do something like a CSV, it would be easy to just fix the size of each column (an int would be 4 bytes, a string no more than 32 chars, for example) and then just read and write accordingly.
Suppose you have your data as an array of objects, say Pet. To initially save the array, you would use:
FILE *apFile;
apFile = fopen(FILENAME, "wb+"); // binary mode
fwrite(ARRAY_OF_PETS, sizeof(Pet), SIZE_OF_ARRAY, apFile);
fclose(apFile);
To access the idx-th Pet, you would use:
Pet m;
ifstream input_file (FILENAME, ios::in|ios::binary|ios::ate);
input_file.seekg (sizeof(Pet) * idx, ios::beg);
input_file.read((char*) &m,sizeof(Pet));
input_file.close();
You can also add data at the end, change data in the middle, and so on.
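For instance, a rough sketch of changing a record in the middle, reusing the names from the snippets above (and assuming Pet is trivially copyable):
Pet updated = m; // modify the record as needed
fstream file(FILENAME, ios::in | ios::out | ios::binary);
file.seekp(sizeof(Pet) * idx, ios::beg);
file.write(reinterpret_cast<const char*>(&updated), sizeof(Pet));
file.close();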

C++ reading large files part by part

I've been having a problem that I have not been able to solve as of yet. The problem is related to reading files; I've looked at threads, even on this website, and they do not seem to solve it. That problem is reading files that are larger than a computer's system memory. Simply put, when I asked this question a while ago I was referred to using the following code.
string data("");
getline(cin, data);

std::ifstream is (data);//, std::ifstream::binary);
if (is)
{
    // get length of file:
    is.seekg (0, is.end);
    int length = is.tellg();
    is.seekg (0, is.beg);

    // allocate memory:
    char * buffer = new char [length];

    // read data as a block:
    is.read (buffer,length);
    is.close();

    // print content:
    std::cout.write (buffer,length);
    delete[] buffer;
}
system("pause");
This code works well apart from the fact that it eats memory like a kid in a candy store.
So after a lot of crude and unrefined programming, I was able to figure out a way to sort of fix the problem. However, I more or less traded one problem for another in my quest.
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
#include <iomanip>
#include <windows.h>
#include <cstdlib>
#include <thread>
using namespace std;
/*======================================================*/
string *fileName = new string("tldr");
char data[36];
int filePos(0); // The pos of the file
int tmSize(0);  // The total size of the file
int split(32);
char buff;
int DNum(0);
/*======================================================*/

int getFileSize(std::string filename) // path to file
{
    FILE *p_file = NULL;
    p_file = fopen(filename.c_str(), "rb");
    fseek(p_file, 0, SEEK_END);
    int size = ftell(p_file);
    fclose(p_file);
    return size;
}

void fs()
{
    tmSize = getFileSize(*fileName);

    int AX(0);
    ifstream fileIn;
    fileIn.open(*fileName, ios::in | ios::binary);

    int n1, n2, n3;
    n1 = tmSize / 32;

    // Does the processing
    while(filePos != tmSize)
    {
        fileIn.seekg(filePos, ios_base::beg);
        buff = fileIn.get();

        // To take into account small files
        if(tmSize < 32)
        {
            int Count(0);
            char MT[40];
            if(Count != tmSize)
            {
                MT[Count] = buff;
                cout << MT[Count];// << endl;
                Count++;
            }
        }
        // Anything larger than 32
        else
        {
            if(AX != split)
            {
                data[AX] = buff;
                AX++;
                if(AX == split)
                {
                    AX = 0;
                }
            }
        }
        filePos++;
    }

    int tz(0);
    filePos = filePos - 12;
    while(tz != 2)
    {
        fileIn.seekg(filePos, ios_base::beg);
        buff = fileIn.get();
        data[tz] = buff;
        tz++;
        filePos++;
    }
    fileIn.close();
}

void main ()
{
    fs();
    cout << tmSize << endl;
    system("pause");
}
What I tried to do with this code is to work around the memory issue. Rather than allocating an amount of memory for a large file that simply does not exist on my system, I tried to use the memory I do have, which is about 8 GB, but I only wanted to use maybe a few kilobytes of it if at all possible.
To give you a layout of what I am talking about I am going to write a line of text.
"Hello my name is cake please give me cake"
Basically what I did was read said piece of text letter by letter. Then I put those letters into a box that could store 32 of them; from there I could do something like XOR them and then write them out to another file.
The idea in a way works but it is horribly slow and leaves off parts of files.
So basically, how can I make something like this work without being slow or cutting off parts of files? I would love to see how XOR works with very large files.
So if anyone has a better idea than what I have, then I would be very grateful for the help.
To read and process the file piece-by-piece, you can use the following snippet:
// Buffer size 1 Megabyte (or any number you like)
size_t buffer_size = 1<<20;
char *buffer = new char[buffer_size];

std::ifstream fin("input.dat");
while (fin)
{
    // Try to read next chunk of data
    fin.read(buffer, buffer_size);
    // Get the number of bytes actually read
    size_t count = fin.gcount();
    // If nothing has been read, break
    if (!count)
        break;
    // Do whatever you need with first count bytes in the buffer
    // ...
}
delete[] buffer;
The buffer size of 32 bytes that you are using is definitely too small. You make too many calls to library functions (and the library, in turn, makes calls to the OS, although probably not on every call, and those are typically slow since they cause context switches). There is also no need for the tell/seek calls.
If you don't need all of the file content simultaneously, reduce the working set: since XOR can be applied sequentially, you can simply process the file through a fixed-size buffer, e.g. 4 kilobytes, as sketched below.
Now, you have the option to use is.read() in a loop and process a small block of data on each iteration, or to use mmap() to map the file content into memory as a pointer on which you can perform both read and write operations.
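A rough sketch of the read-in-a-loop option (the file names and the single-byte key are made up for illustration; a real scheme would use a proper key):
#include <fstream>
#include <vector>

int main()
{
    const std::size_t chunkSize = 4 * 1024; // 4 KB working set
    const unsigned char key = 0x5A;         // hypothetical XOR key

    std::ifstream in("input.dat", std::ios::binary);
    std::ofstream out("output.dat", std::ios::binary);
    std::vector<char> buffer(chunkSize);

    while (in)
    {
        in.read(&buffer[0], buffer.size());
        const std::streamsize count = in.gcount(); // bytes actually read
        if (count == 0)
            break;
        for (std::streamsize i = 0; i < count; ++i)
            buffer[i] = static_cast<char>(buffer[i] ^ key);
        out.write(&buffer[0], count);
    }
    return 0;
}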

Reading data from binary file

I am trying to read data from a binary file and having issues. I have reduced it down to the most simple case here, and it still won't work. I am new to C++ so I may be doing something silly, but if anyone could advise me I would be very grateful.
Code:
int main(int argc, char *argv[]) {
    ifstream myfile;
    vector<bool> encoded2;
    cout << encoded2 << "\n" << "\n";
    myfile.open(argv[2], ios::in | ios::binary | ios::ate);
    myfile.seekg(0, ios::beg);
    myfile.read((char*)&encoded2, 1);
    myfile.close();
    cout << encoded2 << "\n" << "\n";
}
Output
00000000
000000000000000000000000000011110000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Compression_Program(58221) malloc: * error for object 0x10012d: Non-aligned pointer being freed
* set a breakpoint in malloc_error_break to debug
Thanks in advance.
Do not cast a vector<bool>* to a char*. It does not do anything predictable.
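A rough sketch of a safer approach is to read raw bytes into a plain char buffer and expand them into the vector<bool> yourself (the most-significant-bit-first order used here is an assumption):
std::vector<bool> encoded2;
std::ifstream myfile(argv[2], std::ios::in | std::ios::binary);
unsigned char byte = 0;
while (myfile.read(reinterpret_cast<char*>(&byte), 1))
{
    for (int bit = 7; bit >= 0; --bit)
        encoded2.push_back(((byte >> bit) & 1) != 0);
}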
You are reading directly into encoded2 with myfile.read((char*)&encoded2, 1);. This is wrong. You can read a single value and then put it into encoded2:
bool x;
myfile.read(reinterpret_cast<char*>(&x), 1);
encoded2.push_back(x);
Two mistakes here:
you assume the address of a vector is the address of the first element
you rely on vector<bool>
Casting a vector into a char * is not really a good thing, because a vector is an object and stores some state along with its elements.
Here you are probably overwriting the internal state of the vector, which is why its destructor then fails.
Maybe you would like to cast the elements of the vector (which are guaranteed to be stored contiguously in memory). But another trap is that vector<bool> may be implementation-optimized.
Therefore you would need something like encoded2.resize(8) followed by myfile.read(reinterpret_cast<char *>(&encoded2[0]), 8); except that this still is not valid for vector<bool>, whose elements cannot be addressed directly, so it only works for a vector of a real byte type such as vector<char>.
But probably you want to do something else and we need to know what the purpose is here.
You're overwriting a std::vector, which you shouldn't do. A std::vector is actually a pointer to a data array and an integer (probably a size_t) holding its size; if you overwrite these with practically random bits, data corruption will occur.
Since you're only reading a single byte, this will suffice:
char c;
myfile.read(&c, 1);
The C++ language does not provide an efficient I/O method for reading bits as bits. You have to read bits in groups. Also, you have to worry about endianness when reading in the bits.
I suggest the old fashioned method of allocating a buffer, reading into the buffer then operating on the buffer.
Allocating a buffer
const unsigned int BUFFER_SIZE = 1024 * 1024; // Let the compiler calculate it.
//...
unsigned char * const buffer = new unsigned char [BUFFER_SIZE]; // The pointer is constant.
Reading in the data
unsigned int bytes_read = 0;
ifstream data_file("myfile.bin", ios::binary); // Open file for input without translations.
data_file.read(reinterpret_cast<char *>(buffer), BUFFER_SIZE); // Read data into the buffer.
bytes_read = data_file.gcount(); // Get actual count of bytes read.
Reminders:
Delete the buffer when you are finished with it.
Close the file when you are finished with it.
myfile.read((char*) &encoded2[0], sizeof(int) * COUNT);
This assumes encoded2 is a vector<int> rather than a vector<bool>. Alternatively, you can use push_back():
int tmp;
for(int i = 0; i < COUNT; i++) {
    myfile.read((char*) &tmp, sizeof(int));
    encoded2.push_back(tmp);
}

How to write only regularly spaced items from a char buffer to disk in C++

How can I write only every third item in a char buffer to file quickly in C++?
I get a three-channel image from my camera, but each channel contains the same info (the image is grayscale). I'd like to write only one channel to disk to save space and make the writes faster, since this is part of a real-time, data collection system.
C++'s ofstream::write command seems to only write contiguous blocks of binary data, so my current code writes all three channels and runs too slowly:
char * data = getDataFromCamera();
int dataSize = imageWidth * imageHeight * imageChannels;
std::ofstream output;
output.open( fileName, std::ios::out | std::ios::binary );
output.write( data, dataSize );
I'd love to be able to replace the last line with a call like:
int skipSize = imageChannels;
output.write( data, dataSize, skipSize );
where skipSize would cause write to put only every third byte into the output file. However, I haven't been able to find any function that does this.
I'd love to hear any ideas for getting a single channel written to disk quickly.
Thanks.
You'll probably have to copy every third element into a buffer, then write that buffer out to disk.
You can use a codecvt facet on a locale to filter out part of the output.
Once created, you can imbue any stream with the appropriate locale, and only every third character written to the stream will actually reach the file.
#include <locale>
#include <fstream>
#include <iostream>
#include <vector>

class Filter: public std::codecvt<char,char,mbstate_t>
{
    public:
        typedef std::codecvt<char,char,mbstate_t> MyType;
        typedef MyType::state_type state_type;
        typedef MyType::result result;

        // This indicates that we are converting the output,
        // thus forcing a call to do_out()
        virtual bool do_always_noconv() const throw() {return false;}

        // Reads from -> from_end
        // Writes to -> to_limit
        virtual result do_out(state_type &state,
                              const char *from, const char *from_end, const char* &from_next,
                              char *to, char *to_limit, char* &to_next) const
        {
            // Notice the increment of from
            for(;(from < from_end) && (to < to_limit); from += 3, to += 1)
            {
                (*to) = (*from);
            }
            from_next = from;
            to_next = to;

            // Report "partial" if we ran out of output space before
            // consuming all of the input.
            return((from < from_end) ? partial : ok);
        }
};
Once you have the facet, all you need to know is how to use it:
int main(int argc, char* argv[])
{
    // construct a custom filter facet and add it to a locale.
    const std::locale filterLocale(std::cout.getloc(), new Filter());

    // Create a stream and imbue it with the locale
    std::ofstream saveFile;
    saveFile.imbue(filterLocale);

    // Now the stream is imbued we can open it.
    // NB If you open the file stream first,
    // any attempt to imbue it with a locale will silently fail.
    saveFile.open("Test");
    saveFile << "123123123123123123123123123123123123123123123123123123";

    std::vector<char> data(1000);
    saveFile.write( &data[0], data.size() /* The filter implements the skipSize */ );

    // With a tiny amount of extra work
    // you can make the filter take a filter-size
    // parameter.
    return(0);
}
Let's say your buffer is 24-bit RGB, and you're using a 32-bit processor (so that operations on 32-bit entities are the most efficient).
For the most speed, let's work with a 12-byte chunk at a time. In twelve bytes, we'll have 4 pixels, like so:
AAABBBCCCDDD
Which is 3 32-bit values:
AAAB
BBCC
CDDD
We want to turn this into ABCD (a single 32-bit value).
We can create ABCD by applying a mask to each input and ORing.
ABCD = A000 | 0BC0 | 000D
In C++, with a little-endian processor, I think it would be:
unsigned int turn12grayBytesInto4ColorBytes( unsigned int buf[3] )
{
return (buf[0]&0x000000FF) // mask seems reversed because of little-endianness
| (buf[1]&0x00FFFF00)
| (buf[2]&0xFF000000);
}
It's probably fastest to do this conversion into another buffer and THEN dump that buffer to disk, instead of going directly to disk.
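As a rough sketch of that, reusing the function above and the buffer names from the question (numPixels is assumed to be a multiple of 4, the char buffer is assumed to be suitably aligned for 32-bit access, and <vector> is assumed to be included):
const std::size_t numPixels = imageWidth * imageHeight;
std::vector<unsigned int> packed(numPixels / 4);
unsigned int* src = reinterpret_cast<unsigned int*>(data); // 3 words in, 1 word out per 4 pixels

for (std::size_t i = 0; i < packed.size(); ++i)
    packed[i] = turn12grayBytesInto4ColorBytes(src + i * 3);

output.write(reinterpret_cast<const char*>(&packed[0]),
             packed.size() * sizeof(unsigned int));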
There is no such functionality in the standard library, AFAIK. Jerry Coffin's solution will work best. I wrote a simple snippet which should do the trick:
const char * data = getDataFromCamera();
const int channelNum = 0;
const int channelSize = imageWidth * imageHeight;
const int dataSize = channelSize * imageChannels;

char * singleChannelData = new char[channelSize];
for(int i = 0; i < channelSize; ++i)
    singleChannelData[i] = data[i * imageChannels + channelNum];

try {
    std::ofstream output;
    output.open( fileName, std::ios::out | std::ios::binary );
    output.write( singleChannelData, channelSize );
}
catch(const std::ios_base::failure& output_error) {
    delete [] singleChannelData;
    throw;
}
delete [] singleChannelData;
EDIT: I added try..catch. Of course you could just as well use a std::vector for nicer code, but it might be a tiny bit slower.
First, I'd mention that to maximize writing speed, you should write buffers that are multiples of the sector size (eg. 64KB or 256KB)
To answer your question, you're going to have to copy every 3rd element from your source data into another buffer, and then write that to the stream.
If I recall correctly Intel Performance Primitives has functions for copying buffers, skipping a certain number of elements. Using IPP will probably have faster results than your own copy routine.
I'm tempted to say that you should read your data into a struct and then overload the insertion operator.
ostream& operator<< (ostream& out, struct data * s) {
    out.write(&s->first, sizeof(s->first));
    return out;
}

Howto read chunk of memory as char in c++

Hello, I have a chunk of memory (allocated with malloc()) that contains bits (bit literals). I'd like to read it as an array of char, or, better, I'd like to print out the ASCII value of 8 consecutive bits of the memory.
I have allocated the memory as char *, but I've not been able to extract characters in any better way than evaluating each bit, adding its value to a char and shifting the char left, in a loop. I was looking for a faster solution.
Thank you
What I've written so far is this:
for allocation:
char * bits = (char*) malloc(1);
for writing to mem:
ifstream cleartext;
cleartext.open(sometext);
while(cleartext.good())
{
c = cleartext.get();
for(int j = 0; j < 8; j++)
{ //set(index) and reset(index) set or reset the bit at bits[i]
(c & 0x80) ? (set(index)):(reset(index));//(*ptr++ = '1'):(*ptr++='0');
c = c << 1;
}..
}..
and so far I've not been able to get the characters back; I only get the bits printed out using:
printf("%s\n", bits);
An example of what I'm trying to do is:
input.txt contains the string "AAAB"
My program would have to write "AAAB" to memory as "01000001010000010100000101000010"
(these are the ASCII values of AAAB, 65 65 65 66 in decimal, written out as bits).
Then I would like that it have a function to rewrite the content of the memory to a file.
So if memory contains again "01000001010000010100000101000010" it would write to the output file "AAAB".
int numBytes = 512;
char *pChar = (char *)malloc(numBytes);
for( int i = 0; i < numBytes; i++ ){
    pChar[i] = '8';
}
Since this is C++, you can also use "new":
int numBytes = 512;
char *pChar = new char[numBytes];
for( int i = 0; i < numBytes; i++ ){
    pChar[i] = '8';
}
If you want to visit every bit in the memory chunk, it looks like you need std::bitset.
char* pChunk = (char*)malloc( n );
// read in pChunk data

// iterate over all the bits.
for( int i = 0; i != n; ++i ){
    std::bitset<8>& bits = *reinterpret_cast< std::bitset<8>* >( pChunk + i );
    for( int iBit = 0; iBit != 8; ++iBit ) {
        std::cout << bits[iBit];
    }
}
I'd like to printout the ASCII value of 8 consecutively bits of the memory.
The possible value for any bit is either 0 or 1. You probably want at least a byte.
char * bits = (char*) malloc(1);
Allocates 1 byte on the heap. A much more efficient and hassle-free thing would have been to create an object on the stack i.e.:
char bits; // a single character, has CHAR_BIT bits
ifstream cleartext;
cleartext.open(sometext);
The above doesn't write anything to memory; it tries to open a file in input mode.
It has ASCII characters and the usual EOF or \n and things like that; the input would only be a text file, so I think it should only contain ASCII characters. Correct me if I'm wrong.
If your file only has ASCII data you don't have to worry. All you need to do is read in the file contents and write it out. The compiler manages how the data will be stored (i.e. which encoding to use for your characters and how to represent them in binary, the endianness of the system etc). The easiest way to read/write files will be:
// include these on as-needed basis
#include <algorithm>
#include <iostream>
#include <iterator>
#include <fstream>
using namespace std;
// ...
/* read from standard input and write to standard output */
copy((istream_iterator<char>(cin)), (istream_iterator<char>()),
(ostream_iterator<char>(cout)));
/*-------------------------------------------------------------*/
/* read from standard input and write to text file */
copy(istream_iterator<char>(cin), istream_iterator<char>(),
ostream_iterator<char>(ofstream("output.txt"), "\n") );
/*-------------------------------------------------------------*/
/* read from text file and write to text file */
copy(istream_iterator<char>(ifstream("input.txt")), istream_iterator<char>(),
ostream_iterator<char>(ofstream("output.txt"), "\n") );
/*-------------------------------------------------------------*/
The last remaining question is: Do you want to do something with the binary representation? If not, forget about it. Else, update your question one more time.
E.g.: processing the character array to hash or encrypt it:
/* a hash calculator */
struct hash_sha1 {
    unsigned char operator()(unsigned char x) {
        unsigned char rc = x; // placeholder: real hashing code goes here
        return rc;
    }
};

/* store house of characters, could've been a vector as well */
basic_string<unsigned char> line;

/* read from text file and write to a string of unsigned chars */
copy(istream_iterator<unsigned char>(ifstream("input.txt")),
     istream_iterator<unsigned char>(),
     back_inserter(line) );

/* Calculate a SHA-1 hash of the input */
basic_string<unsigned char> hashmsg;
transform(line.begin(), line.end(), back_inserter(hashmsg), hash_sha1());
Something like this?
char *buffer = (char*)malloc(42);
// ... put something into the buffer ...
printf("%c\n", buffer[0]);
But, since you're using C++, I wonder why you bother with malloc and such...
char* ptr = pAddressOfMemoryToRead;
while(ptr < pAddressOfMemoryToRead + blockLength)
{
    char tmp = *ptr;
    // temp now has the char from this spot in memory
    ptr++;
}
Is this what you are trying to achieve:
char* p = (char*)malloc(10 * sizeof(char));
char* p1 = p;
memcpy(p,"abcdefghij", 10);
for(int i = 0; i < 10; ++i)
{
    char c = *p1;
    cout << c << " ";
    ++p1;
}
cout<<"\n";
free(p);
Can you please explain in more detail, perhaps including code? What you're saying makes no sense unless I'm completely misreading your question. Are you doing something like this?
char * chunk = (char *)malloc(256);
If so, you can access any character's worth of data by treating chunk as an array: chunk[5] gives you the 5th element, etc. Of course, these will be characters, which may be what you want, but I can't quite tell from your question... for instance, if chunk[5] is 65, when you print it like cout << chunk[5];, you'll get a letter 'A'.
However, you may be asking how to print out the actual number 65, in which case you want to do cout << int(chunk[5]);. Casting to int will make it print as an integer value instead of as a character. If you clarify your question, either I or someone else can help you further.
Are you asking how to copy the memory bytes of an arbitrary struct into a char* array? If so this should do the trick
SomeType t = GetSomeType();
char* ptr = (char*)malloc(sizeof(SomeType));
if ( !ptr ) {
    // Handle no memory. Probably should just crash
}
memcpy(ptr, &t, sizeof(SomeType));
I'm not sure I entirely grok what you're trying to do, but a couple of suggestions:
1) use std::vector instead of malloc/free and new/delete. It's safer and doesn't have much overhead.
2) when processing, try doing chunks rather than bytes. Even though streams are buffered, it's usually more efficient grabbing a chunk at a time.
3) there's a lot of different ways to output bits, but again you don't want a stream output for each character. You might want to try something like the following:
void outputbits(char *dest, char source)
{
    dest[8] = 0;
    for(int i=0; i<8; ++i)
        dest[i] = source & (1<<(7-i)) ? '1':'0';
}
Pass it a char[9] output buffer and a char input, and you get a printable bitstring back. Decent compilers produce OK output code for this... how much speed do you need?
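For example, a quick usage sketch:
char buf[9];
outputbits(buf, 'A');
std::cout << buf << "\n"; // prints 01000001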