Execute a binary file from memory

Hello guys, I really need help with my C/C++ programming. I have to load a binary file, maybe a simple "hello world" program, and execute it directly from a buffer. So I loaded the binary file into a buffer and tried to set the program pointer to the buffer, but it doesn't work correctly. Could you please help me with useful suggestions?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
int main(int argc, const char * argv[]) {
FILE *fileptr;
char *buffer;
long filelen;
fileptr = fopen("helloworld", "rb"); //Open File in binary mode
fseek(fileptr, 0, SEEK_END); //Jump to the end of file
filelen = ftell(fileptr); //Get the current byte offset in File
rewind(fileptr); // Jump back to beginning of the file
buffer = (char *)malloc((filelen+1)*sizeof(char));
fread(buffer, filelen, 1, fileptr);
fclose(fileptr);
int *ret;
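// fragile stack hack: assumes the return address lives two words above 'ret' on the stack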
ret = (int *)&ret + 2;
(*ret) = (int)*buffer;
}

The program instructions are placed in a read-only region of the process memory called the text (code) segment, and that layout is decided when the executable is generated by the compiler and linker. The OS simply won't expect program instructions on the stack or the heap; modern systems even mark those regions non-executable. Otherwise viruses would have been very easy to make. If you really want to run machine code from a buffer, you have to ask the OS for executable memory explicitly, e.g. with mmap() and PROT_EXEC.
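To illustrate, here is a minimal sketch (Linux/POSIX on x86-64 assumed) that copies a few bytes of raw machine code, not a complete ELF executable like the "helloworld" file above, into an executable mapping and calls it. Running a whole executable from a buffer would additionally require an ELF loader:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Hypothetical payload: raw x86-64 machine code for "mov eax, 42; ret".
       This is NOT a complete ELF file -- a full executable needs a loader. */
    unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* Ask the OS for a page that is explicitly writable AND executable. */
    void *mem = mmap(NULL, sizeof(code), PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memcpy(mem, code, sizeof(code));

    int (*fn)(void) = (int (*)(void))mem; /* object-to-function pointer cast: fine on POSIX */
    printf("returned %d\n", fn());        /* prints 42 on x86-64 */

    munmap(mem, sizeof(code));
    return 0;
}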

Related

how can I write pixel color data to a bmp image file with stb_image?

I've already opened the bmp file (one-channel grayscale) and stored each pixel color on its own line as hex.
After doing some processing on the data (not the point of this question), I need to export a bmp image from my data.
How can I load the text file (the data) and use stb_image_write?
pixel to image:
#include <cstdio>
#include <cstdlib>
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
using namespace std;
int main() {
FILE* datafile ;
datafile = fopen("pixeldata.x" , "w");
unsigned char* pixeldata ;//???
char Image2[14] = "image_out.bmp";
stbi_write_bmp(Image2, 512, 512, 1, pixeldata);
image to pixel:
#include <cstdio>
#include <cstdlib>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
using namespace std;
const size_t total_pixel = 512*512;
int main() {
FILE* datafile ;
datafile = fopen("pixeldata.x" , "w");
char Image[10] = "image.bmp";
int width;
int height;
int channels;
unsigned char *pixeldata = stbi_load(Image, &width, &height, &channels, 1);
if(pixeldata != NULL){
for(int i=0; i<total_pixel; i++)
{
fprintf(datafile,"%x%s", pixeldata[i],"\n");
}
}
}
There are a lot of weaknesses in the question – too much to sort out in comments...
This question is tagged C++. Why the error-prone fprintf()? Why not std::fstream? It has similar capabilities (if not even more) but adds type-safety (which the printf() family cannot provide).
The counterpart of fprintf() is fscanf(). The format specifiers are similar, but the storage type has to be encoded in the specifiers even more carefully than with fprintf().
If the first code sample is the attempt to read pixels back from pixeldata.x... why datafile = fopen("pixeldata.x" , "w");? To open a file with fopen() for reading, the mode should be "r".
char Image2[14] = "image_out.bmp"; is correct (if I counted correctly) but maintenance-unfriendly. Let the compiler do the counting for you:
char Image2[] = "image_out.bmp";
To provide storage for the pixel data with (in OP's case) a fixed size of 512 × 512 bytes, the simplest option would be:
unsigned char pixeldata[512 * 512];
Storing an array of that size (512 × 512 = 262144 bytes = 256 KByte) in a local variable might be seen as a potential issue by some people. The alternative would be to use a std::vector<unsigned char> pixeldata; instead. (std::vector allocates its storage dynamically in heap memory, whereas local variables usually live on a kind of stack memory, which in turn is usually of limited size.)
Concerning the std::vector<unsigned char> pixeldata;, I see two options:
definition with pre-allocation:
std::vector<unsigned char> pixeldata(512 * 512);
so that it can be used just like the array above.
definition without pre-allocation:
std::vector<unsigned char> pixeldata;
That would allow adding every pixel, as it is read, to the end with std::vector::push_back().
It may be worth reserving the final size beforehand, as it's known from the beginning:
std::vector<unsigned char> pixeldata;
pixeldata.reserve(512 * 512); // size reserved but not yet used
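A minimal sketch of that variant (assuming the same one-hex-value-per-line file layout; note that fscanf's "%x" needs an unsigned int as intermediate storage, not an unsigned char):

std::vector<unsigned char> pixeldata;
pixeldata.reserve(512 * 512);
unsigned int value; // "%x" requires unsigned int
while (fscanf(datafile, "%x", &value) == 1)
    pixeldata.push_back(static_cast<unsigned char>(value));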
So, this is how it could look finally:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
int main()
{
const int w = 512, h = 512;
// read data
FILE *datafile = fopen("pixeldata.x" , "r");
if (!datafile) { // success of file open should be tested ALWAYS
std::cerr << "Cannot open 'pixeldata.x'!\n";
return -1; // ERROR! (bail out)
}
typedef unsigned char uchar; // for convenience
std::vector<uchar> pixeldata(w * h);
char Image2[] = "image_out.bmp";
for (int i = 0, n = w * h; i < n; ++i) {
if (fscanf(datafile, "%hhx", &pixeldata[i]) < 1) {
std::cerr << "Failed to read value " << i << " of 'pixeldata.x'!\n";
return -1; // ERROR! (bail out)
}
}
fclose(datafile);
// write BMP image
stbi_write_bmp(Image2, w, h, 1, pixeldata.data());
// Actually, success of this should be tested as well.
// done
return 0;
}
Some additional notes:
Please, take this code with a grain of salt. I haven't compiled or tested it. (I leave this as task to OP but will react on "bug reports".)
I silently removed using namespace std;: SO: Why is “using namespace std” considered bad practice?
I added checks for the success of the file operations. File operations are something that is always prone to failing for a lot of reasons. For file writing, even the fclose() should be checked: written data might be cached until the file is closed, and writing the cached data out might still fail (because just this write might overflow the available volume space).
OP used magic numbers (image width and height), which is considered bad practice. It makes code maintenance-unfriendly and might be harder for other readers to understand: SO: What is a magic number, and why is it bad?

Error when using byte array C++

I'm trying to read a standard 24-bit BMP file into a byte array so that I can send that byte array to libpng to be saved as a png. My code, which compiles:
#include <string>
#include <stdio.h>
#include <iostream>
#include <fstream>
#include <Windows.h>
#include "png.h"
using namespace std;
namespace BMP2PNG {
long getFileSize(FILE *file)
{
long lCurPos, lEndPos;
lCurPos = ftell(file);
fseek(file, 0, 2);
lEndPos = ftell(file);
fseek(file, lCurPos, 0);
return lEndPos;
}
private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
std::string filenamePNG = "D:\\TEST.png";
FILE *fp = fopen(filenamePNG.c_str(), "wb");
png_structp png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING,NULL,NULL,NULL);
png_info *info_ptr = png_create_info_struct(png_ptr);
png_init_io(png_ptr, fp);
png_set_IHDR(png_ptr,info_ptr,1920,1080,16,PNG_COLOR_TYPE_RGB,PNG_INTERLACE_NONE,PNG_COMPRESSION_TYPE_BASE,PNG_FILTER_TYPE_BASE);
png_write_info(png_ptr,info_ptr);
png_set_swap(png_ptr);
const char *inputImage = "G:\\R-000.bmp";
BYTE *fileBuf;
BYTE *noHeaderBuf;
FILE *inFile = NULL;
inFile = fopen(inputImage, "rb");
long fileSize = getFileSize(inFile);
fileBuf = new BYTE[fileSize];
noHeaderBuf = new BYTE[fileSize - 54];
fread(fileBuf,fileSize,1,inFile);
for(int i = 54; i < fileSize; i++) //gets rid of 54-byte bmp header
{
noHeaderBuf[i-54] = fileBuf[i];
}
fclose(inFile);
png_write_rows(png_ptr, (png_bytep*)&noHeaderBuf, 1);
png_write_end(png_ptr, NULL);
fclose(fp);
}
};
Unfortunately, when I click the button that runs the code, I get an error "Attempted to read or write protected memory...". I'm very new to C++, but I thought I was reading in the file correctly. Why does this happen and how do I fix it?
Also, my end goal is to read a BMP one pixel row at a time so I don't use much memory. If the BMP is 1920x1080, I just need to read 1920 x 3 bytes for each row. How would I go about reading a file into a byte array n bytes at a time?
Your getFileSize() does return the file size, but only thanks to magic numbers: fseek(file, 0, 2) relies on SEEK_END being 2, and fseek(file, lCurPos, 0) on SEEK_SET being 0. Use the named constants so the intent (seek to the end, then restore the position) is obvious to readers.
Then, in the caller, you don't have any error checking, and the code assumes the file size is always greater than 54 (the allocations for the read buffers, for example).
Also keep in mind that the file size field in the BMP header might not always be correct, you should also take into account the actual file size.
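As for the follow-up question about reading the file n bytes at a time: a minimal sketch of a row-by-row read (assuming an uncompressed 24-bit BMP whose pixel data starts right after the 54-byte header, as in the original code; BMP rows are padded to a multiple of 4 bytes and stored bottom-up):

#include <cstdio>
#include <vector>

int main()
{
    const int width = 1920, height = 1080;
    const size_t rowBytes  = width * 3;              // 24-bit BGR pixels
    const size_t rowStride = (rowBytes + 3) / 4 * 4; // rows padded to 4 bytes
    FILE *in = std::fopen("G:\\R-000.bmp", "rb");
    if (!in)
        return 1;
    std::fseek(in, 54, SEEK_SET);                    // skip the 54-byte header
    std::vector<unsigned char> row(rowStride);
    for (int y = 0; y < height; ++y) {               // rows come bottom-up
        if (std::fread(row.data(), 1, rowStride, in) != rowStride)
            break;                                   // short read: stop
        // ... hand the first rowBytes bytes (BGR) of this row to libpng ...
    }
    std::fclose(in);
    return 0;
}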
You are reading the file size of your *.bmp file, but the "real" data can be larger: BMP can have compression (RLE). When you then write the decompressed data into that array, you can overflow it, because you previously obtained the size of the compressed BMP file.
In function
png_set_IHDR(png_ptr,info_ptr,1920,1080,16,PNG_COLOR_TYPE_RGB,PNG_INTERLACE_NONE,PNG_COMPRESSION_TYPE_BASE,PNG_FILTER_TYPE_BASE);
Why do you have the bit depth set to 16? Shouldn't it be 8, since each RGB channel from the BMP is 8 bits?
Also, for PNG handling I am using this library: http://lodev.org/lodepng/. It works fine.
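For reference, a minimal sketch of writing an image with lodepng's documented C++ wrapper (you compile lodepng.cpp into your project; RGBA at 8 bits per channel is the default format):

#include "lodepng.h"
#include <cstdio>
#include <vector>

int main()
{
    const unsigned w = 512, h = 512;
    std::vector<unsigned char> image(w * h * 4, 255); // opaque white RGBA
    unsigned error = lodepng::encode("out.png", image, w, h);
    if (error)
        std::printf("encoder error %u: %s\n", error, lodepng_error_text(error));
    return error ? 1 : 0;
}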

Read file to memory, loop through data, then write file [duplicate]

This question already has answers here:
How to read line by line after i read a text into a buffer?
(4 answers)
Closed 10 years ago.
I'm trying to ask a question similar to this post:
C: read binary file to memory, alter buffer, write buffer to file
but the answers didn't help me (I'm new to C++, so I couldn't understand all of it).
How do I have a loop access the data in memory, and go through line by line so that I can write it to a file in a different format?
This is what I have:
#include <fstream>
#include <iostream>
#include <string>
#include <sstream>
#include <vector>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
using namespace std;
int main()
{
char* buffer;
char linearray[250];
int lineposition;
double filesize;
string linedata;
string a;
//obtain the file
FILE *inputfile;
inputfile = fopen("S050508-v3.txt", "r");
//find the filesize
fseek(inputfile, 0, SEEK_END);
filesize = ftell(inputfile);
rewind(inputfile);
//load the file into memory
buffer = (char*) malloc (sizeof(char)*filesize); //allocate mem
fread (buffer,filesize,1,inputfile); //read the file to the memory
fclose(inputfile);
//Check to see if file is correct in Memory
cout.write(buffer,filesize);
free(buffer);
}
I appreciate any help!
Edit (More info on the data):
My data is different files that vary between 5 and 10gb. There are about 300 million lines of data. Each line looks like
M359
T359 3520 359
M400
A3592 zng 392
Where the first element is a character, and the remaining items could be numbers or characters. I'm trying to read this into memory since it will be a lot faster to loop through it line by line than to read a line, process it, and then write it out. I am compiling on 64-bit Linux. Let me know if I need to clarify further. Again, thank you.
Edit 2
I am using a switch statement to process each line, where the first character of each line determines how to format the rest of the line. For example, 'M' means millisecond, and I put the next three numbers into a structure. Each line has a different first character that requires different handling.
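A minimal sketch of that kind of dispatch (the record layouts here are hypothetical, since the real ones aren't shown in the question):

#include <cstdio>
#include <string>

// Hypothetical record types -- the real layout isn't given in the question.
struct MsRecord { int ms; };
struct TRecord  { int a, b, c; };

void processLine(const std::string &line)
{
    if (line.empty())
        return;
    switch (line[0]) {
    case 'M': {                        // e.g. "M359": a millisecond value
        MsRecord r = {};
        std::sscanf(line.c_str() + 1, "%d", &r.ms);
        // ... store r ...
        break;
    }
    case 'T': {                        // e.g. "T359 3520 359": three numbers
        TRecord r = {};
        std::sscanf(line.c_str() + 1, "%d %d %d", &r.a, &r.b, &r.c);
        break;
    }
    default:                           // 'A' and other leading characters
        break;
    }
}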
So pardon the potentially blatantly obvious, but if you want to process this line by line, then...
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main(int argc, char *argv[])
{
// read lines one at a time
ifstream inf("S050508-v3.txt");
string line;
while (getline(inf, line))
{
// ... process line ...
}
inf.close();
return 0;
}
And just fill in the body of the while loop? Maybe I'm not seeing the real problem (a forest for the trees kinda thing).
EDIT
The OP is in line with using a custom streambuf, which may not necessarily be the most portable thing in the world, but he's more interested in avoiding flipping back and forth between input and output files. With enough RAM, this should do the trick.
#include <iostream>
#include <fstream>
#include <iterator>
#include <memory>
using namespace std;
struct membuf : public std::streambuf
{
membuf(size_t len)
: streambuf()
, len(len)
, src(new char[ len ] )
{
setg(src.get(), src.get(), src.get() + len);
}
// direct buffer access for file load.
char * get() { return src.get(); };
size_t size() const { return len; };
private:
std::unique_ptr<char[]> src; // array form, so the destructor uses delete[]
size_t len;
};
int main(int argc, char *argv[])
{
// open file in binary, retrieve length-by-end-seek
ifstream inf(argv[1], ios::in|ios::binary);
inf.seekg(0,inf.end);
size_t len = inf.tellg();
inf.seekg(0, inf.beg);
// allocate a steam buffer with an internal block
// large enough to hold the entire file.
membuf mb(len+1);
// use our membuf buffer for our file read-op.
inf.read(mb.get(), len);
mb.get()[len] = 0;
// use iss for your nefarious purposes
std::istream iss(&mb);
std::string s;
while (iss >> s)
cout << s << endl;
return EXIT_SUCCESS;
}
You should look into fgets and sscanf, with which you can pull out matched pieces of data so they are easier to manipulate, assuming that is what you want to do. Something like this could look like:
FILE *input = fopen("file.txt", "r");
FILE *output = fopen("out.txt","w");
const int bufferSize = 64;
char buffer[bufferSize];
while(fgets(buffer, bufferSize, input) != NULL){ // fgets returns NULL (not EOF) at end of input
char data[16];
sscanf(buffer,"regex",data);
//manipulate data
fprintf(output,"%s",data);
}
fclose(output);
fclose(input);
That would be more of the C way to do it; C++ handles things a little more elegantly by using an istream:
http://www.cplusplus.com/reference/istream/istream/
If I had to do this, I'd probably use code something like this:
std::ifstream in("S050508-v3.txt");
std::ostringstream buffer; // must be an output string stream to accept operator<<
buffer << in.rdbuf();
std::string data = buffer.str();
if (check_for_good_data(data))
std::cout << data;
This assumes you really need the entire contents of the input file in memory at once to determine whether it should be copied to output or not. If (for example) you can look at the data one byte at a time, and determine whether that byte should be copied without looking at the others, you could do something more like:
std::ifstream in(...);
std::copy_if(std::istreambuf_iterator<char>(in),
std::istreambuf_iterator<char>(),
std::ostream_iterator<char>(std::cout, ""),
is_good_char);
...where is_good_char is a function that returns a bool saying whether that char should be included in the output or not.
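For instance, a hypothetical predicate that keeps only printable characters and line breaks:

#include <cctype>

bool is_good_char(char c)
{
    // keep printable characters and newlines, drop everything else
    return std::isprint(static_cast<unsigned char>(c)) || c == '\n';
}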
Edit: the size of the files you're dealing with mostly rules out the first possibility given above. You're also correct that reading and writing large chunks of data will almost certainly improve speed over working on one line at a time.
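A minimal sketch of such chunked processing (the 1 MiB buffer size is an arbitrary choice; note that a line can straddle a chunk boundary and needs carry-over handling):

#include <fstream>
#include <vector>

int main()
{
    std::vector<char> chunk(1 << 20);                    // 1 MiB buffer
    std::ifstream in("S050508-v3.txt", std::ios::binary);
    std::ofstream out("out.txt", std::ios::binary);
    while (in) {
        in.read(chunk.data(), chunk.size());
        std::streamsize got = in.gcount();               // bytes actually read
        if (got <= 0)
            break;
        // ... scan/transform the got bytes; remember that a line may
        //     continue into the next chunk ...
        out.write(chunk.data(), got);
    }
    return 0;
}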

using ifstream::open on "/proc/meminfo", fileLen is -1

Below is the code:
#include <iostream>
#include <stdio.h>
#include <unistd.h>
#include <fstream>
#include <memory.h>
int main()
{
std::ifstream file;
file.open("/proc/meminfo");
if(file.fail())
return 0;
file.seekg(0, std::ios::end);
int fileLen = file.tellg();
file.seekg(0, std::ios::beg);
char buffer[fileLen + 1];
memset(buffer, 0, fileLen + 1);
file.read(buffer, fileLen + 1);
if(file.fail())
return 0;
unsigned long long total = 0;
unsigned long long free = 0;
sscanf(buffer, "%*s %llu%*s%llu", &total, &free);
file.close();
return 1;
}
In the code, fileLen is -1, but I don't know the reason. If ifstream opens a different file, like 1.txt, the program works correctly.
Lastly, thanks for your help.
The contents of /proc are not real files, and hence don't have actual sizes. Do not attempt to get their sizes, but instead simply read and parse them normally.
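For example, a minimal sketch that parses the first two lines without seeking (assuming the usual /proc/meminfo layout, where MemTotal and MemFree come first, each as "Name: <value> kB"):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream file("/proc/meminfo");
    if (!file)
        return 1;
    std::string label;
    unsigned long long total = 0, free_kb = 0;
    file >> label >> total;            // "MemTotal:" <value>
    file.ignore(1024, '\n');           // skip the trailing " kB"
    file >> label >> free_kb;          // "MemFree:" <value>
    std::cout << "total " << total << " kB, free " << free_kb << " kB\n";
    return 0;
}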
Because this is not an ordinary file:
The proc filesystem is a pseudo-filesystem rooted at /proc that contains user-accessible objects that pertain to the runtime state of the kernel and, by extension, the executing processes that run on top of it. "Pseudo" is used because the proc filesystem exists only as a reflection of the in-memory kernel data structures it displays. This is why most files and directories within /proc are 0 bytes in size.
I think the reason could be that /proc/meminfo is not actually a file. /proc does not contain real files; its entries are just snapshots of the current state of the system.
http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html

using lzo library in c++ application

I got the lzo library to use in our application. The version provided is 1.07.
They have given me a .lib along with some header files and some .c source files.
I have set up the test environment as per the specs. I am able to see the lzo routine functions in my application.
Here is my test application
#include "stdafx.h"
#include "lzoconf.h"
#include "lzo1z.h"
#include <stdlib.h>
int _tmain(int argc, _TCHAR* argv[])
{
FILE * pFile;
long lSize;
unsigned char *i_buff;
unsigned char *o_buff;
int i_len,e = 0;
unsigned int o_len;
size_t result;
//data.txt have a single compressed packet
pFile = fopen("data.txt","rb");
if (pFile==NULL)
return -1;
// obtain file size:
fseek (pFile , 0 , SEEK_END);
lSize = ftell (pFile);
rewind (pFile);
// allocate memory to contain the whole file:
i_buff = (unsigned char*) malloc (sizeof(char)*lSize);
if (i_buff == NULL)
return -1;
// copy the file into the buffer:
result = fread (i_buff,1,lSize,pFile);
if (result != lSize)
return -1;
i_len = lSize;
o_len = 512;
// allocate memory for output buffer
o_buff = (unsigned char*) malloc(sizeof(char)*o_len);
if (o_buff == NULL)
return -1;
lzo_memset(o_buff,0,o_len);
lzo1z_decompress(i_buff,i_len,o_buff,&o_len,NULL);
return 0;
}
It gives an access violation on the last line:
lzo1z_decompress(i_buff,i_len,o_buff,&o_len,NULL);
In the provided library, the signature for the above function is
lzo1z_decompress ( const lzo_byte *src, lzo_uint src_len,
lzo_byte *dst, lzo_uint *dst_len,
lzo_voidp wrkmem /* NOT USED */ );
What is wrong?
Are you sure 512 bytes is big enough for the decompressed data? You shouldn't be using an arbitrary value; rather, you should have stowed away the original size somewhere as a header when your file was compressed:
LZO Decompression Buffer Size
You should probably make your data types match the interface spec (e.g. o_len should be an lzo_uint; you're passing its address, so the actual underlying type matters).
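A minimal sketch of the call with matching types and a checked return code, replacing the last lines of the test program above (lzo_uint and LZO_E_OK come from lzoconf.h):

lzo_uint o_len = 512;   // lzo_uint, not unsigned int: we pass its address
int rc = lzo1z_decompress(i_buff, (lzo_uint)i_len, o_buff, &o_len, NULL);
if (rc != LZO_E_OK)
{
    fprintf(stderr, "decompression failed: %d\n", rc);
    return -1;
}
// on success, o_len holds the actual number of decompressed bytes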
Beyond that, it's open source, so why don't you build lzo with debug info and step into it to see where the problem is?
http://www.oberhumer.com/opensource/lzo/
Thanks everyone for the suggestions and comments.
The problem was with the data; I have successfully decompressed it.