Problem and Code
I am working with code to take a screenshot on a Raspberry Pi. Using some magic from the VC handler, I can take a screenshot and store it in memory with calloc. I can use this to store the data in a file as a ppm image with the requisite header using:
void * image;
image = calloc(1, width * 3 * height);
// code to store data into *image
FILE *fp = fopen("myfile.ppm", "wb");
fprintf(fp, "P6\n%d %d\n255\n", width, height);
fwrite(image, width*3*height, 1, fp);
fclose(fp);
This successfully stores the data. I can access it and view it normally.
However, if I instead try to inspect the data which are being put into the file for debugging purposes by printing:
int cnt = 0;
std::string imstr = (char *)image;
for (int i=0; i<(width*3*height); i++) {
std::cout << (int)imstr[i] << " " << cnt << std::endl;
cnt += 1;
}
I segfault early. The numbers which are returned in the print make sense for the context (e.g. color values < 255).
Example Numbers
In the case of a 1280 x 768 x 3 image, my cnt stops at 64231. The value it stops at doesn't seem to have any relation to sizeof(char) or sizeof(int).
I think I'm missing something obvious here, but I can't see it. Any suggestions?
Very probably you have at least one null character in (char *)image, so the std::string length is shorter than width*3*height: with that initialization, only the characters up to the first null character are used.
Use a std::array rather than a std::string initialized like that.
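A minimal sketch of the truncation behavior (the helper names and the example bytes are made up for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Raw pixel data routinely contains 0x00 bytes. Assigning the buffer to a
// std::string through a char* stops copying at the first null byte, while
// the (pointer, size) constructor keeps every byte.
inline std::size_t truncated_size(const char* buf) {
    std::string s = buf;       // copies up to the first '\0' only
    return s.size();
}

inline std::size_t full_size(const char* buf, std::size_t n) {
    std::string s(buf, n);     // copies exactly n bytes, nulls included
    return s.size();
}
```

For a 4-byte buffer {0x0a, 0x14, 0x00, 0x1e}, truncated_size sees only 2 characters while full_size(buf, 4) sees all 4, so indexing the truncated string all the way up to width*3*height runs out of bounds.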
The way you are converting the image data to a std::string is wrong. If the image's raw data contains any 0x00 bytes then the std::string will be truncated, causing your loop to access out of bounds of the std::string. And if the image's raw data does not contain any 0x00 bytes then the std::string constructor will try to read past the bounds of the image's allocated memory.
You need to take the image's size into account when constructing the std::string, eg:
size_t cnt = 0;
std::string imstr(static_cast<char*>(image), width*3*height);
for (size_t i = 0; i < imstr.size(); ++i) {
std::cout << static_cast<int>(imstr[i]) << " " << cnt << std::endl;
++cnt;
}
Otherwise, simply don't convert the image to std::string at all. You can iterate the image's raw data directly instead, eg:
size_t cnt = 0, imsize = width*3*height;
char *imdata = static_cast<char*>(image);
for (size_t i = 0; i < imsize; ++i) {
std::cout << static_cast<int>(imdata[i]) << " " << cnt << std::endl;
++cnt;
}
Related
I want to apply a simple derive/gradient filter, [-1, 0, 1], to an image from a .ppm file.
The raw binary data from the .ppm file is read into a one-dimensional array:
uint8_t* raw_image_data;
size_t n_rows, n_cols, depth;
// Open the file as an input binary file
std::ifstream file;
file.open("test_image.ppm", std::ios::in | std::ios::binary);
if (!file.is_open()) { /* error */ }
std::string temp_line;
// Check that it's a valid P6 file
if (!(std::getline(file, temp_line) && temp_line == "P6")) {}
// Then skip all the comments (lines that begin with a #)
while (std::getline(file, temp_line) && temp_line.at(0) == '#');
// Try read in the info about the number of rows and columns
try {
n_rows = std::stoi(temp_line.substr(0, temp_line.find(' ')));
n_cols = std::stoi(temp_line.substr(temp_line.find(' ')+1,temp_line.size()));
std::getline(file, temp_line);
depth = std::stoi(temp_line);
} catch (const std::invalid_argument & e) { /* stoi has failed */}
// Allocate memory and read in all image data from ppm
raw_image_data = new uint8_t[n_rows*n_cols*3];
file.read((char*)raw_image_data, n_rows*n_cols*3);
file.close();
I then read a grayscale image from the data into a two-dimensional array, called image_grayscale:
uint8_t** image_grayscale;
image_grayscale = new uint8_t*[n_rows];
for (size_t i = 0; i < n_rows; ++i) {
image_grayscale[i] = new uint8_t[n_cols];
}
// Convert linear array of raw image data to 2d grayscale image
size_t counter = 0;
for (size_t r = 0; r < n_rows; ++r) {
for (size_t c = 0; c < n_cols; ++c) {
image_grayscale[r][c] = 0.21*raw_image_data[counter]
+ 0.72*raw_image_data[counter+1]
+ 0.07*raw_image_data[counter+2];
counter += 3;
}
}
I want to write the resulting filtered image to another two-dimensional array, gradient_magnitude:
uint32_t** gradient_magnitude;
// Allocate memory
gradient_magnitude = new uint32_t*[n_rows];
for (size_t i = 0; i < n_rows; ++i) {
gradient_magnitude[i] = new uint32_t[n_cols];
}
// Filtering operation
int32_t grad_h, grad_v;
for (int r = 1; r < n_rows-1; ++r) {
for (int c = 1; c < n_cols-1; ++c) {
grad_h = image_grayscale[r][c+1] - image_grayscale[r][c-1];
grad_v = image_grayscale[r+1][c] - image_grayscale[r-1][c];
gradient_magnitude[r][c] = std::sqrt(pow(grad_h, 2) + pow(grad_v, 2));
}
}
Finally, I write the filtered image to a .ppm output.
std::ofstream out;
out.open("output.ppm", std::ios::out | std::ios::binary);
// ppm header
out << "P6\n" << n_rows << " " << n_cols << "\n" << "255\n";
// Write data to file
for (int r = 0; r < n_rows; ++r) {
for (int c = 0; c < n_cols; ++c) {
for (int i = 0; i < 3; ++i) {
out.write((char*) &gradient_magnitude[r][c],1);
}
}
}
out.close();
The output image, however, is a mess.
When I simply set grad_v = 0; in the loop (i.e. solely calculate the horizontal gradient), the output is seemingly correct:
When I instead set grad_h = 0; (i.e. solely calculate the vertical gradient), the output is strange:
It seems like part of the image has been circularly shifted, but I cannot understand why. Moreover, I have tried with many images and the same issue occurs.
Can anyone see any issues? Thanks so much!
Ok, first clue is that the image looks circularly shifted. This hints that strides are wrong. The core of your problem is simple:
n_rows = std::stoi(temp_line.substr(0, temp_line.find(' ')));
n_cols = std::stoi(temp_line.substr(temp_line.find(' ')+1,temp_line.size()));
but in the documentation you can read:
Each PPM image consists of the following:
A "magic number" for identifying the file type. A ppm image's magic number is the two
characters "P6".
Whitespace (blanks, TABs, CRs, LFs).
A width, formatted as ASCII characters in decimal.
Whitespace.
A height, again in ASCII decimal.
[...]
Width is columns, height is rows. So that's the classical error that you get when implementing image processing stuff: swapping rows and columns.
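As a sketch, header parsing with stream extraction keeps the documented order (width first, then height); the struct and function names here are mine, and comment lines are not handled:

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Parse a P6 header: magic, then WIDTH (columns), then HEIGHT (rows),
// then the maximum color value. operator>> skips the whitespace
// (blanks, TABs, CRs, LFs) the format uses as separators.
struct PpmHeader { std::string magic; int width = 0; int height = 0; int maxval = 0; };

inline bool parse_ppm_header(std::istream& in, PpmHeader& h) {
    return static_cast<bool>(in >> h.magic >> h.width >> h.height >> h.maxval)
           && h.magic == "P6";
}
```

With this, the row stride is h.width * 3 bytes and there are h.height rows; mixing the two up is exactly the circular-shift symptom.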
From a didactic point of view, why did you make this mistake? My guess: poor debugging tools. After making a working example from your question (effort I would have been saved if you had provided an MCVE), I ran to the end of image loading and used Image Watch to see the content of your image with #mem(raw_image_data, UINT8, 3, n_cols, n_rows, n_cols*3). Result:
Ok, let's try to swap them: #mem(raw_image_data, UINT8, 3, n_rows, n_cols, n_rows*3). Result:
Much better. Unfortunately I don't know how to specify RGB instead of BGR in the Image Watch #mem pseudo command, hence the wrong colors.
Then, coming back to your code: please compile with all warnings on. I'd also use more of the standard stream features for parsing your input and fewer std::stoi() and find() calls. Avoid manual memory management by using std::vector, and make a (possibly templated) class for images. Even if you stick with your pointer-to-pointer layout, don't do a separate new for each row: make a single new for the row-0 pointer and have the other pointers point into it:
uint8_t** image_grayscale = new uint8_t*[n_rows];
image_grayscale[0] = new uint8_t[n_rows*n_cols];
for (size_t i = 1; i < n_rows; ++i) {
image_grayscale[i] = image_grayscale[i - 1] + n_cols;
}
Same effect, but easier to deallocate and to manage as a single piece of memory. For example, saving as a PGM becomes:
{
std::ofstream out("output.pgm", std::ios::binary);
out << "P5\n" << n_rows << " " << n_cols << "\n" << "255\n";
out.write(reinterpret_cast<char*>(image_grayscale[0]), n_rows*n_cols);
}
Fill your borders! Using the single-allocation style shown above, you can do it as:
uint32_t** gradient_magnitude = new uint32_t*[n_rows];
gradient_magnitude[0] = new uint32_t[n_rows*n_cols];
for (size_t i = 1; i < n_rows; ++i) {
gradient_magnitude[i] = gradient_magnitude[i - 1] + n_cols;
}
std::fill_n(gradient_magnitude[0], n_rows*n_cols, 0);
Finally, the gradient magnitude is an integer value between 0 and 360 (you used a uint32_t), but you save only its least significant byte! Of course that's wrong. You need to map from [0,360] to [0,255]. How? You can saturate (if greater than 255, set to 255) or apply a linear scaling (*255/360). You could do other things as well, but that's not the point here.
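The two mappings can be sketched like this (function names are mine):

```cpp
#include <cassert>
#include <cstdint>

// Map a gradient magnitude in [0, 360] to a byte in [0, 255].
inline uint8_t saturate_to_byte(uint32_t v) {
    return v > 255 ? 255 : static_cast<uint8_t>(v);  // clamp at 255
}

inline uint8_t scale_to_byte(uint32_t v) {
    return static_cast<uint8_t>(v * 255 / 360);      // linear rescale
}
```

Saturation preserves contrast in the low range but flattens strong edges to 255; scaling keeps the ordering of all values at the cost of darkening the whole image.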
Here you can see the result on a zoomed version of the three cases: saturate, scale, only LSB (wrong):
With the wrong version you see dark pixels where the value should be higher than 255.
In my C++ program I want to pass an array to a function and print the members of that array to the console.
Now I ran into two problems:
int main()
{
unsigned char numbers[8] = { 1,2,3,4,5,6,7,8 };
for (auto i = 0; i < sizeof(numbers); i++)
{
std::cout << numbers[i] << "\n"; // First problem: here I get a weird ASCII character
}
logger(numbers);
}
Passing numbers to logger, defined as void logger(unsigned char data[]), causes the type to decay to unsigned char *, so there is no way to iterate over the array, as the size is unknown.
My goal is also to pass arrays of any size, but assuming for now that the size is always 8, I changed the call and signature to
logger(&numbers)
void logger(unsigned char(*data)[8])
{
for (auto i = 0; i < sizeof(*data); i++)
{
std::cout << *(data[i]) << "\n";
}
}
Iterating over data runs into the first problem again, and the output is ``
So the questions are:
Why do I get a weird ASCII character from cout?
How should we deal with passing an array to another function and iterating over it? I searched a lot but found no solution.
The problem lies in the contents of your array:
unsigned char numbers[8] = { 1,2,3,4,5,6,7,8 };
Those get interpreted as character codes (because of the array element type), not as the digits they might look like. Most probably the character mapping used is ASCII, and characters 1 through 8 aren't printable.
To obtain the character representing 1, you'd need to write a character literal '1'. If you intended to store and treat them as numbers, you could either change the type of the array to int[8], or cast them when printing:
std::cout << static_cast<int>(numbers[i]) << "\n";
As a side note, if you intended to use characters, you should change the type to char.
To solve passing the arrays of arbitrary size, either use a template and pass a reference to std::array, or simply use a vector.
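A sketch of the std::array route (the std::ostream parameter is my addition, so the function isn't hard-wired to std::cout):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <ostream>
#include <sstream>

// The array size N is deduced at the call site, so one template handles
// every fixed size without passing a length separately.
template <std::size_t N>
void logger(const std::array<unsigned char, N>& data, std::ostream& out) {
    for (std::size_t i = 0; i < N; ++i)
        out << static_cast<int>(data[i]) << "\n";  // cast: print numbers, not glyphs
}
```

Usage: with std::array<unsigned char, 8> numbers{1,2,3,4,5,6,7,8};, calling logger(numbers, std::cout); prints 1 through 8, one per line.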
You cannot pass an array to a function in C++. There are several ways around this
1) Use vectors instead of arrays
2) Pass a reference to the array (this only works with a fixed size array)
3) Pass a pointer to the first element of the array (this requires that you pass the size as a separate parameter).
Here's how you do all three
1) use vectors
#include <vector>
std::vector<unsigned char> numbers{1,2,3,4,5,6,7,8};
logger(numbers);
void logger(const std::vector<unsigned char>& data)
{
for (std::size_t i = 0; i < data.size(); i++)
{
std::cout << (unsigned)data[i] << "\n";
}
}
2) use a reference
unsigned char numbers[8] = { 1,2,3,4,5,6,7,8 };
logger(numbers);
void logger(unsigned char (&data)[8])
{
for (auto i = 0; i < 8; i++)
{
std::cout << (unsigned)data[i] << "\n";
}
}
3) use a pointer
unsigned char numbers[8] = { 1,2,3,4,5,6,7,8 };
logger(numbers, 8);
void logger(unsigned char *data, size_t size)
{
for (size_t i = 0; i < size; i++)
{
std::cout << (unsigned)data[i] << "\n";
}
}
vectors are the best solution. C++ has proper data structures as standard, use them.
As has already been explained, your printing problems are due to the special rules for printing characters; just cast to unsigned before printing.
No code has been tested (or even compiled).
For your first problem use:
int arr_size = sizeof(numbers)/sizeof(numbers[0]);
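That sizeof ratio can also be folded into a size-deducing template, so the count is computed in one place for any fixed-size array (a sketch; the function name is mine):

```cpp
#include <cassert>
#include <cstddef>

// N is deduced from the argument's declared type, so this is the
// compile-time equivalent of sizeof(arr) / sizeof(arr[0]).
template <typename T, std::size_t N>
constexpr std::size_t element_count(const T (&)[N]) {
    return N;
}
```

Unlike the sizeof ratio, this refuses to compile if you hand it a pointer, which catches the array-to-pointer decay problem from the question at compile time.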
I have a problem that I'm not sure how to solve. I have a C++ function that opens a .wav file, reads the samples into an array of doubles that has as many indexes as the number of samples in the .wav file, and return a pointer to that array. This works perfectly.
What I'm wanting to do is read more than one .wav file and store them in a two-dimensional array. Although I know how many arrays there will be, the size of each array will be different, because .wav files have different numbers of samples. I don't know how to properly store this data for multiple files.
Here is the call to wav2sig, the function that opens the .wav and returns a pointer:
double* wav2sig(std::string filepath, int & num_samples)
And here is the code that I'm working off, roughly.
std::string paths[3] = {"man1.wav",
"man2.wav",
"man3.wav"};
double **data = new double[3][]; //this would work in java, but not here
int num_samples[3];
for(int i = 0; i < 3; i++) {
data[i] = wav2sig(paths[i], num_samples[i]);
for(int j = 50; j < 100; j++)
std::cout << data[i][j] << " ";
std::cout << std::endl;
}
I know that the returned pointer has all correct data. I just don't know how to store several of them correctly. Thanks for any help!
I strongly suggest the use of std::vector for the arrays instead of pointers that point to dynamically allocated memory. They are just as easy to use and they take away the headache of managing memory.
Change the return value of wav2sig
std::vector<double> wav2sig(std::string filepath);
I am guessing num_samples was used to return the number of elements in the returned array. When you use std::vector<double> as the return type, the size of the returned value will capture that. Hence, there is no need for the additional output argument.
Use std::vector<std::vector<double>> for the 2D array.
std::vector<std::vector<double>> data;
Update the loops accordingly
// Separate reading of the data from files from outputting the data
// Read the data
for(int i = 0; i < 3; i++) {
data.push_back(wav2sig(paths[i]));
}
// Output the data
for(int i = 0; i < 3; i++) {
for(size_t j = 0; j < data[i].size(); j++)
std::cout << data[i][j] << " ";
std::cout << std::endl;
}
I'm working on a project for my CS202 class. I have a supplied binary file of unknown size called data.dat and need to read integers (which I don't know in advance) from the file and store them in a properly sized vector. I have to use fstream() for the filestream and I have to use the reinterpret_cast<char *>() for the conversion. My code looks like this:
fstream filestream2;
//reading binary data from supplied data.dat file
filestream2.open("data.dat", ios::in | ios::binary);
vector<int> v;
filestream2.seekg(0, filestream2.end);
long length = filestream2.tellg();
v.resize(length);
filestream2.read(reinterpret_cast<char *>(&v[0]), length);
for(int num = 0; num < length; num++)
{
cout << v[num] << " ";
}
In theory, the vector should hold all of the integers from the file and print them to stdout, but my output is simply about 50,000 0s, followed by program exited with exit code 0.
I'm relatively new to C++ syntax and libraries and I just cannot figure out what I'm doing wrong for the life of me.
Thanks in advance.
When you use
filestream2.seekg(0, filestream2.end);
long length = filestream2.tellg();
you get the number of characters (bytes) in the file, not the number of items in the vector. Consequently, you will need to use length/sizeof(int) wherever you want the size of the vector.
v.resize(length);
is incorrect. It needs to be
v.resize(length/sizeof(int));
and
for(int num = 0; num < length; num++)
{
cout << v[num] << " ";
}
is incorrect. It needs to be
for(int num = 0; num < length/sizeof(int); num++)
{
cout << v[num] << " ";
}
You said that you "don't know in advance" what kind of data (and how much of it) is stored in the file. The main problem is to identify the size of the data and its data type. What you can do is create a custom file format.
For example:
The 1st byte of the file indicates the type of data (e.g. I for integer, F for float, U for unsigned int, C for char, S for char* (string), and so on).
The next 4 bytes hold the size of the data (required only for char*, so it is optional).
After that, the actual data starts.
So the data in the file will look like
Cabcdefghijk
Here the 1st byte is C, so the data is char, and you need to create a vector of char type.
Next, the data size:
fstream.seekg(0, fstream.end);
long length = fstream.tellg(); // length : 12
length -= 1; // 1st byte is the type indicator // length : 11
// length -= 4; // Optional: if you had written the size of the data
length = length / sizeof(char); // sizeof(int) or sizeof(float), as recorded in the file
// so in our case length will be 11
Now you have the data type and the size of the data, so create or resize the vector accordingly.
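A sketch of reading such a tagged file (the tag letters and file layout are this answer's own convention, not any standard format):

```cpp
#include <cassert>
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// First byte is a type tag ('C' = char payload here); the rest is data.
inline bool read_tagged_chars(const std::string& path, std::vector<char>& out) {
    std::ifstream in(path, std::ios::binary);
    char tag = 0;
    if (!in.get(tag) || tag != 'C') return false;   // unknown or missing tag
    in.seekg(0, in.end);
    const long length = static_cast<long>(in.tellg()) - 1;  // minus the tag byte
    in.seekg(1, in.beg);                            // payload starts after the tag
    out.resize(static_cast<std::size_t>(length));
    return static_cast<bool>(in.read(out.data(), length));
}
```

For the example file containing Cabcdefghijk, this yields an 11-element vector holding abcdefghijk.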
I am trying to read 1244 bytes at a time from a file. Essentially, the idea is to segment the 100KB worth of data into packets. So the approach I am taking is, assigning all the data to an array and then creating an array of pointers which will contain starting positions to each of my packets. The pointer array contains values [0, 1244, 2488, and so on].
It works perfectly fine, except my first assignment is gibberish. k[0] and o[0] both come up with garbage, while the remaining 79 values seem to be fine. Can anyone assist?
I realize the first argument to the fread command should be a pointer, but this worked also. Also, I need the pointers to the starting of each of my packets because I am doing other function calls (omitted from code) that format the packet properly with the appropriate headers.
It's been a while since I coded in C/C++, so any optimizations you could provide would be much appreciated.
int main(int argc, const char * argv[])
{
FILE *data;
int size; int i;
int paySize = 1244;
//int hdrSize = 256;
data = fopen("text2.dat","r");
//get data size
fseek(data, 0, SEEK_END);
size = ftell(data);
rewind (data);
char k[size]; //initializing memory location for all the data to be read in.
fread(k, 1, size, data); //reading in data
int temp = ceil(size/paySize);
char * o[temp]; //array of pointers to beginning of each packet.
int q = 0;
for (i = 0; i < size; i = i+paySize)
{
o[q] = &k[i];
q++;
}
cout << o[0] << endl; //this outputs gibberish!
cout << o[0] << endl;
doesn't print what you expect: since o[0] is a char*, operator<< treats it as a C string and prints the bytes it points to up to the first null byte, which looks like gibberish for binary data. To print the numeric value of the byte at that address, use:
cout << (int)*o[0] << endl;
Here:
char k[size];
char * o[temp];
o[q] = &k[i];
you assign to o[] pointers to characters; dereferencing such a pointer results in a single char.
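A sketch of the pointer-table construction; note also that the slot count should be an integer ceiling, (size + pay - 1) / pay, because ceil(size/paySize) on two ints divides and truncates first, leaving the table one slot short whenever size isn't a multiple of paySize:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Build pointers to the start of each packet, mirroring o[q] = &k[i].
inline std::vector<const char*> packet_starts(const char* data, std::size_t size,
                                              std::size_t pay) {
    std::vector<const char*> starts;
    starts.reserve((size + pay - 1) / pay);   // integer ceiling of size/pay
    for (std::size_t i = 0; i < size; i += pay)
        starts.push_back(data + i);
    return starts;
}
```

To inspect a packet's first byte numerically, cast through unsigned char so bytes >= 0x80 don't come out negative: static_cast<int>(static_cast<unsigned char>(*starts[0])).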