I need to correctly handle binary file streams in order to copy compressed files with a .zip extension.
I'm putting together a simple test project: I would like to copy ./src/Directory.zip into the ./dest directory, which is initially empty.
The code compiles correctly, but at the end of the program the destination directory is still empty and I don't know why.
This is the code:
#include <iostream>
#include <fstream>
#include <vector>
#include <cstring> // std::strerror
#include <cerrno>  // errno
typedef unsigned char BYTE;
std::vector<BYTE> readFile(const char* filename)
{
// open the file:
std::streampos fileSize;
std::ifstream file(filename, std::ios::binary);
// get its size:
file.seekg(0, std::ios::end);
fileSize = file.tellg();
file.seekg(0, std::ios::beg);
// read the data:
std::vector<BYTE> fileData(fileSize);
file.read(reinterpret_cast<char*>(fileData.data()), fileSize);
return fileData;
}
int main(){
std::vector<BYTE> fileData = readFile("./src/Directory.zip");
std::ofstream destFile("./dest/Directory.zip", std::ios::out | std::ios::binary);
if ( !destFile )
std::cout << std::strerror(errno) << '\n';
destFile.write( reinterpret_cast<const char*>(fileData.data()), fileData.size() );
destFile.close();
return 0;
}
I know this could simply be done with std::filesystem::copy_file or other high-level functions, but these binary files will eventually be sent through a socket from a server to a client, so first I would like to get this working with local directories.
I've just followed the advice in the answer and I get this error:
No such file or directory
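An errno of "No such file or directory" here usually means ./dest itself does not exist: std::ofstream will create the file, but never the directories above it. A minimal C++17 sketch of creating the directory before opening the destination (the function name is illustrative, not from the original code):

```cpp
#include <filesystem>
#include <fstream>

// std::ofstream creates the file itself, but not its parent directories.
// Create them first, then open the destination in binary mode.
bool open_destination(const std::filesystem::path& dest)
{
    std::filesystem::create_directories(dest.parent_path());
    std::ofstream out(dest, std::ios::binary);
    return static_cast<bool>(out);
}
```

With the directory in place, the original write() call should succeed.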
I'm trying to read in an .exe and write it back out. My code works with .txt files, but for some reason it breaks executables. What am I doing wrong?
I'm not sure whether I am reading it wrong or writing it wrong.
#include <string>
#include <vector>
#include <iostream>
#include <fstream>
#include <filesystem>
#include <unordered_set>
#include <Windows.h>
unsigned char *ReadFileAsBytes(std::string filepath, DWORD &buffer_len)
{
std::ifstream ifs(filepath, std::ifstream::binary | std::ifstream::ate);
if (!ifs.is_open())
{
return(nullptr);
}
// Go To End
ifs.seekg(0, ifs.end);
// Get Position (Size)
buffer_len = static_cast<DWORD>(ifs.tellg());
// Go To Beginning
ifs.seekg(0, ifs.beg);
// Allocate New Char Buffer The Size Of File
PBYTE buffer = new BYTE[buffer_len];
ifs.read(reinterpret_cast<char*>(buffer), buffer_len);
ifs.close();
return buffer;
}
void WriteToFile(std::string argLocation, unsigned char *argContents, int argSize)
{
std::ofstream myfile;
myfile.open(argLocation);
myfile.write((const char *)argContents, argSize);
myfile.close();
}
int main()
{
// Config
static std::string szLocation = "C:\\Users\\Admin\\Desktop\\putty.exe";
static std::string szOutLoc = "C:\\Users\\Admin\\Desktop\\putty2.exe";
DWORD dwLen;
unsigned char *szBytesIn = ReadFileAsBytes(szLocation, dwLen);
std::cout << "Read In " << dwLen << " Bytes" << std::endl;
// Write To File
WriteToFile(szOutLoc, szBytesIn, dwLen);
system("pause");
}
You open the input file in binary mode, but in this code
std::ofstream myfile;
myfile.open(argLocation);
you open the output file without binary mode, so on Windows newline translation corrupts the binary data. There is also no reason to call open separately:
std::ofstream myfile( argLocation, std::ios::out | std::ios::binary | std::ios::trunc);
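Putting that together, a hedged sketch of a portable write helper that opens in binary mode and reports failure (the signature is illustrative; the original used Windows-specific types):

```cpp
#include <cstddef>
#include <fstream>
#include <string>

// Returns true on success. Binary mode matters: without it, bytes that
// look like line endings (or 0x1A on Windows) get translated or truncated.
bool WriteToFile(const std::string& path, const unsigned char* data, std::size_t size)
{
    std::ofstream out(path, std::ios::out | std::ios::binary | std::ios::trunc);
    if (!out)
        return false;
    out.write(reinterpret_cast<const char*>(data), static_cast<std::streamsize>(size));
    return static_cast<bool>(out);
}
```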
I've got a C++ program that receives a data buffer from time to time and should append it to an existing compressed file.
I tried to make a POC by reading 1 KB chunks from a file, passing them to a compressing stream, and decompressing the result when the data is over.
I use Poco::DeflatingOutputStream to compress each chunk to the file, and Poco::InflatingOutputStream to check that after decompressing I get the original file back.
However, the decompressed output is almost identical to the original file, except that between every two consecutive chunks I get a few garbage characters such as: à¿_ÿ
Here's an example of a line that is split between two chunks. The original line looks like this:
elevated=0 path=/System/Library/CoreServices/Dock.app/Contents/MacOS/Dock exist
while the decompressed line is:
elevated=0 path=/System/Libr à¿_ÿary/CoreServices/Dock.app/Contents/MacOS/Dock exist
May 19 19:12:51 PANMMUZNG8WNREM kernel[0]: pid=904 uid=1873876126 sbit=0
Any idea what I'm doing wrong? Here's my POC code:
int zip_unzip() {
std::ostringstream stream1;
Poco::DeflatingOutputStream gzipper(stream1, Poco::DeflatingStreamBuf::STREAM_ZLIB);
std::ifstream bigFile("/tmp/in.log");
constexpr size_t bufferSize = 1024;
char buffer[bufferSize];
while (bigFile) {
bigFile.read(buffer, bufferSize);
gzipper << buffer;
}
gzipper.close();
std::string zipped_string = stream1.str();
//////////////////
std::ofstream stream2("/tmp/out.log", std::ios::binary);
Poco::InflatingOutputStream gunzipper(stream2, Poco::InflatingStreamBuf::STREAM_ZLIB);
gunzipper << zipped_string;
gunzipper.close();
return 0;
}
OK, I just realized I used the '<<' operator on each read from the huge file without care: there was no null terminator '\0' at the end of each window I read from the file, so operator<< kept reading past the chunk boundary.
Here's the fixed version:
#include <stdio.h>
#include <fstream>
#include <Poco/DeflatingStream.h>
#include <Poco/Exception.h>
#include <iostream>
int BetterZip()
{
try {
// Create gzip file.
std::ofstream output_file("/tmp/out.gz", std::ios::binary);
Poco::DeflatingOutputStream output_stream(output_file, Poco::DeflatingStreamBuf::STREAM_GZIP);
// INPUT
std::ifstream big_file("/tmp/hugeFile");
constexpr size_t ReadBufferSize = 1024;
char buffer[ReadBufferSize];
while (big_file) {
big_file.read(buffer, ReadBufferSize);
output_stream.write(buffer, big_file.gcount());
}
output_stream.close();
} catch (const Poco::Exception& ex) {
std::cout << "Error (code " << ex.code() << "): " << ex.displayText() << "\n";
return EINVAL;
}
return 0;
}
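The same read()/gcount() pattern matters with plain iostreams, independent of Poco: inserting a char buffer with << treats it as a C string and stops at (or runs past) the first '\0'. A standard-library-only sketch of the safe chunked copy (function name is illustrative):

```cpp
#include <fstream>

// Copy src to dst in fixed-size chunks. gcount() reports how many bytes
// the last read() actually delivered, which handles the final partial chunk.
bool copy_in_chunks(const char* src, const char* dst)
{
    std::ifstream in(src, std::ios::binary);
    std::ofstream out(dst, std::ios::binary);
    if (!in || !out)
        return false;
    char buffer[1024];
    while (in.read(buffer, sizeof buffer) || in.gcount() > 0) {
        out.write(buffer, in.gcount());
    }
    return static_cast<bool>(out);
}
```

The loop condition is the key detail: the last read() fails (it hit EOF before filling the buffer), but gcount() is still nonzero, so the tail bytes get written.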
I'm writing a Web API, using CGI under Linux. All is great, using gcc. I am returning a JPEG image to the host: std::cout << "Content-Type: image/jpeg\n\n", and now must send the binary JPEG data. Loading the image into a char* buffer and doing std::cout << buffer; does not work; I get back an empty image. I suspect output stops at the first 0x00 byte.
I'm receiving from the web server a 200 OK, but with an incomplete image.
I was going to redirect to the file in an open folder on the device, but this must be a secure transfer, not available to anyone who knows the URL.
I'm stumped!
The code snippet looks like this:
std::string imagePath;
syslog(LOG_DEBUG, "Processing GetImage, Image: '%s'", imagePath.c_str());
std::cout << "Content-Type: image/jpeg\n\n";
int length;
char * buffer;
ifstream is;
is.open(imagePath.c_str(), ios::in | ios::binary);
if (is.is_open())
{
// get length of file:
is.seekg(0, ios::end);
length = (int)is.tellg();
is.seekg(0, ios::beg);
// allocate memory:
buffer = new char[length]; // gobble up all the precious memory, I'll optimize it into a smaller buffer later
// OH and VECTOR Victor!
syslog(LOG_DEBUG, "Reading a file: %s, of length %d", imagePath.c_str(), length);
// read data as a block:
is.read(buffer, length);
if (is)
{
syslog(LOG_DEBUG, "All data read successfully");
}
else
{
syslog(LOG_DEBUG, "Error reading jpg image");
delete[] buffer;
return false;
}
is.close();
// Issue is this next line commented out - it doesn't output the full buffer
// std::cout << buffer;
// Potential solution by Captain Obvlious - I'll test in the morning
std::cout.write(buffer, length);
delete[] buffer; // release the temporary buffer
}
else
{
syslog(LOG_DEBUG, "Error opening file: %s", imagePath.c_str());
return false;
}
return true;
As it's already been pointed out to you, you need to use write() instead of IO formatting operations.
But you don't even need to do that. You don't need to manually copy one file to another, one buffer at a time, when iostreams will be happy to do it for you.
std::ifstream is;
is.open(imagePath.c_str(), std::ios::in | std::ios::binary);
if (is.is_open())
{
std::cout << is.rdbuf();
}
That's pretty much it.
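The same rdbuf() trick copies any binary file, not just streaming one to stdout. A minimal sketch (function name is illustrative):

```cpp
#include <fstream>

// Copies src to dst byte-for-byte: inserting a streambuf with <<
// transfers the whole stream without a manual buffer or size query.
bool copy_file_streambuf(const char* src, const char* dst)
{
    std::ifstream in(src, std::ios::binary);
    std::ofstream out(dst, std::ios::binary);
    if (!in || !out)
        return false;
    out << in.rdbuf();
    return static_cast<bool>(out);
}
```

One caveat: inserting a streambuf that yields no characters sets failbit on the output stream, so a zero-byte source reads as failure here even though nothing actually went wrong.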
This boiled down to a much simpler block. I hard-coded imagePath for this example. Put this in your Linux web server's cgi_bin folder, place a jpg at ../www_images/image0001.jpg, call the web server from your client via http:///cgi_bin/test, and you get the image back.
#include <stdio.h>
#include <iostream>
#include <fstream>
#include <syslog.h> // syslog, LOG_DEBUG
int test()
{
std::ifstream fileStream;
std::string imagePath = "../www_images/image0001.jpg"; // pass this variable in
// output an image header - CGI
syslog(LOG_DEBUG, "Processing GetImage, Image: '%s'", imagePath.c_str());
std::cout << "Content-Type: image/jpeg\n\n";
// output binary image
fileStream.open(imagePath.c_str(), std::ios::in | std::ios::binary);
if (fileStream.is_open())
{
std::cout << fileStream.rdbuf();
}
else
{
return 1; // error - not handled in this code
}
return 0;
}
ps: no religious wars on brackets please. ;)
This creates the file but it doesn't write anything.
std::ofstream outstream;
FILE * outfile;
outfile = fopen("/usr7/cs/test_file.txt", "w");
__gnu_cxx::stdio_filebuf<char> filebuf(outfile, std::ios::out);
outstream.std::ios::rdbuf(&filebuf);
outstream << "some data";
outstream.close();
fclose(outfile);
I know there are other easy ways to achieve this output, but I need to use this non-standard filebuf to lock a file while editing, so that other processes can't open it.
I don't know why this is not working.
std::ostream already has a constructor doing the right thing:
#include <ext/stdio_filebuf.h>
#include <iostream>
#include <cstdio> // fopen, fclose
int main() {
auto file = fopen("test.txt", "w");
__gnu_cxx::stdio_filebuf<char> sourcebuf(file, std::ios::out);
std::ostream out(&sourcebuf);
out << "Writing to fd " << sourcebuf.fd() << std::endl;
fclose(file); // stdio_filebuf does not close the FILE* itself
}
Remember that stdio_filebuf won't close the FILE* when it is destroyed, so do that yourself with fclose if you need the file closed.
I'm using Linux and C++. I have a binary file with a size of 210732 bytes, but the size reported with seekg/tellg is 210728.
I get the following information from ls -la, i.e., 210732 bytes:
-rw-rw-r-- 1 pjs pjs 210732 Feb 17 10:25 output.osr
And with the following code snippet, I get 210728:
std::ifstream handle;
handle.open("output.osr", std::ios::binary | std::ios::in);
handle.seekg(0, std::ios::end);
std::cout << "file size:" << static_cast<unsigned int>(handle.tellg()) << std::endl;
So my code is off by 4 bytes. I have confirmed that the size of the file is correct with a hex editor. So why am I not getting the correct size?
My answer: I think the problem was caused by having multiple open fstreams to the same file. At least closing them seems to have sorted it out for me. Thanks to everyone who helped.
Why are you opening the file and checking the size? The easiest way is to do it something like this:
#include <sys/types.h>
#include <sys/stat.h>
off_t getFilesize(const char *path){
struct stat fStat;
if (stat(path, &fStat) != 0) {
perror("file stat failed");
return -1; // report failure instead of falling off the end
}
return fStat.st_size;
}
Edit: Thanks PSJ for pointing out a minor typo glitch... :)
At least for me with G++ 4.1 and 4.4 on 64-bit CentOS 5, the code below works as expected, i.e. the length the program prints out is the same as that returned by the stat() call.
#include <iostream>
#include <fstream>
using namespace std;
int main () {
int length;
ifstream is;
is.open ("test.txt", ios::binary | std::ios::in);
// get length of file:
is.seekg (0, ios::end);
length = is.tellg();
is.seekg (0, ios::beg);
cout << "Length: " << length << "\nThe following should be zero: "
<< is.tellg() << "\n";
return 0;
}
On a flavour of Unix, why do that at all, when we have the stat utility:
long findSize( const char *filename )
{
struct stat statbuf;
if ( stat( filename, &statbuf ) == 0 )
{
return statbuf.st_size;
}
else
{
return 0;
}
}
If not, with streams:
long findSize( const char *filename )
{
long l,m;
ifstream file (filename, ios::in|ios::binary );
l = file.tellg();
file.seekg ( 0, ios::end );
m = file.tellg();
file.close();
return ( m - l );
}
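To see whether the two approaches agree on a given system, a small sketch that sizes the same file both ways (POSIX-only, matching the answers above; function names are illustrative):

```cpp
#include <fstream>
#include <sys/stat.h>

// Size a file via seek-to-end + tellg on a binary ifstream.
long size_via_stream(const char* path)
{
    std::ifstream in(path, std::ios::binary);
    in.seekg(0, std::ios::end);
    return static_cast<long>(in.tellg());
}

// Size the same file via POSIX stat; -1 on failure.
long size_via_stat(const char* path)
{
    struct stat sb;
    return (stat(path, &sb) == 0) ? static_cast<long>(sb.st_size) : -1;
}
```

On a healthy system both functions should return the same number; a persistent mismatch points at something else, such as another open stream still holding unflushed data.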
Is it possible that ls -la is actually reporting the number of bytes the file takes up on the disk, instead of its actual size? That would explain why it is slightly higher.