Grayscale conversion in C++ with OpenCV (some noise appears) - c++

I have a problem converting an image to grayscale with OpenCV, using my own (manual) conversion function.
And this is my code.
main.cpp
#include <opencv2/core/core_c.h>        // IplImage, cvCreateImage, cvGetSize, cvReleaseImage
#include <opencv2/highgui/highgui_c.h>  // cvLoadImage, cvShowImage, cvWaitKey

unsigned int height, width;

void h_grayscale(unsigned char* h_in, unsigned char* h_out);

int main(int argc, char** argv)
{
    IplImage* image_input  = cvLoadImage("duck.jpg", CV_LOAD_IMAGE_UNCHANGED);
    IplImage* image_output = cvCreateImage(cvGetSize(image_input), IPL_DEPTH_8U, 1);

    unsigned char *h_out = (unsigned char*)image_output->imageData;
    unsigned char *h_in  = (unsigned char*)image_input->imageData;

    width  = image_input->width;
    height = image_input->height;

    h_grayscale(h_in, h_out);

    cvShowImage("Original", image_input);
    cvShowImage("CPU", image_output);

    cvReleaseImage(&image_input);
    cvReleaseImage(&image_output);

    cvWaitKey(0);
}
And this is my grayscale function:
void h_grayscale(unsigned char* h_in, unsigned char* h_out)
{
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int index = (i * j) * 3;
            double temp = 0.3 * h_in[index] + 0.6 * h_in[index + 1] + 0.1 * h_in[index + 2];
            h_out[i * j] = (unsigned char)temp;
        }
    }
}
But the results are not as they should be; some noise appears in the output.
I still have not found the part of the code that causes the error. :(
Thanks in advance.

You are calculating the input and output indices incorrectly.
The first point to remember when working with OpenCV images is that they are aligned, i.e. each row may be padded at the end with extra bytes. So when calculating the linear index of a pixel in color and grayscale images, widthStep should be used instead of width.
The generic formula to calculate index of a pixel is:
i * widthStep/sizeof(type) + (channels * j)
Where i is the row number, and j is the column number.
Translating the above formula for the current case, the indices will be calculated as follows:
Input:
int index = i * colorWidthStep + (3 * j);
Output:
h_out[i * grayWidthStep + j] = (unsigned char)temp;
You may create 2 additional global variables colorWidthStep and grayWidthStep along with width and height. Initialize the variables as follows:
width = image_input->width;
height = image_input->height;
colorWidthStep = image_input->widthStep;
grayWidthStep = image_output->widthStep;
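Putting it together, a minimal sketch of the corrected function could look like this (the four globals are assumed to be initialized from the IplImage headers as shown above, before h_grayscale is called):

unsigned int width, height, colorWidthStep, grayWidthStep;   // set from the IplImage headers before the call

void h_grayscale(unsigned char* h_in, unsigned char* h_out)
{
    for (unsigned int i = 0; i < height; i++) {        // i = row
        for (unsigned int j = 0; j < width; j++) {     // j = column
            int index = i * colorWidthStep + (3 * j);  // first byte of pixel (i, j) in the padded input row
            double temp = 0.3 * h_in[index] + 0.6 * h_in[index + 1] + 0.1 * h_in[index + 2];
            h_out[i * grayWidthStep + j] = (unsigned char)temp;   // pixel (i, j) in the padded output row
        }
    }
}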

Related

Setting pixel color of 8-bit grayscale image using pointer

I have this code:
QImage grayImage = image.convertToFormat(QImage::Format_Grayscale8);
int size = grayImage.width() * grayImage.height();
QRgb *data = new QRgb[size];
memmove(data, grayImage.constBits(), size * sizeof(QRgb));
QRgb *ptr = data;
QRgb *end = ptr + size;
for (; ptr < end; ++ptr) {
int gray = qGray(*ptr);
}
delete[] data;
It is based on this: https://stackoverflow.com/a/40740985/8257882
How can I set the color of a pixel using that pointer?
In addition, using qGray() and loading a "bigger" image seem to crash this.
This works:
int width = image.width();
int height = image.height();
for (int y = 0; y < height; ++y) {
for (int x = 0; x < width; ++x) {
image.setPixel(x, y, qRgba(0, 0, 0, 255));
}
}
But it is slow when compared to explicitly manipulating the image data.
Edit
Ok, I have this code now:
for (int y = 0; y < height; ++y) {
uchar *line = grayImage.scanLine(y);
for (int x = 0; x < width; ++x) {
int gray = qGray(line[x]);
*(line + x) = uchar(gray);
qInfo() << gray;
}
}
And it seems to work. However, when I use an image that has only black and white colors and print the gray value, black color gives me 0 and white gives 39. How can I get the gray value in a range of 0-255?
First of all you are copying too much data in this line:
memmove(data, grayImage.constBits(), size * sizeof(QRgb));
The size of QRgb is 4 bytes, but according to the documentation, the size of a Format_Grayscale8 pixel is only 8 bits, or 1 byte. If you remove sizeof(QRgb) you should be copying the correct number of bytes, assuming all the lines in the bitmap are consecutive (which, according to the documentation, they are not -- they are aligned to at minimum 32 bits, so you would have to account for that in size). The array data should not be of type QRgb[size] but uchar[size]. You can then modify data as you like. Finally, you will probably have to create a new QImage with one of the constructors that accept image bits as uchar, and assign the new image to the old image:
auto newImage = QImage( data, image.width(), image.height(), QImage::Format_Grayscale8, ...);
grayImage = std::move( newImage );
But instead of copying image data, you could probably just modify grayImage directly by accessing its data through bits(), or even better, through scanLine(), maybe something like this:
int line, column;
auto pLine = grayImage.scanLine(line);
*(pLine + column) = uchar(grayValue);
EDIT:
According to the scanLine documentation, the image data is at least 32-bit aligned. So if your 8-bit grayscale image is 3 pixels wide, a new scan line will start every 4 bytes. If you have a 3x3 image, the total size of the memory required to hold the image pixels will be 12 bytes. The following code shows the required memory size:
#include <iostream>
#include <QImage>

int main() {
    auto image = QImage(3, 3, QImage::Format_Grayscale8);
    std::cout << image.bytesPerLine() * image.height() << "\n";
    return 0;
}
The fill method (setting all gray values to 0xC0) could be implemented like this:
auto image = QImage(3, 3, QImage::Format_Grayscale8);
uchar gray = 0xc0;
for ( int i = 0; i < image.height(); ++i ) {
auto pLine = image.scanLine( i );
for ( int j = 0; j < image.width(); ++j )
*pLine++ = gray;
}
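Coming back to the original goal of setting gray values through a pointer: a minimal sketch that combines qGray() with scanLine(), assuming image is the original 32-bit RGB QImage from the question, could look like this:

// Build an 8-bit grayscale copy of `image` by writing each pixel's gray value through scanLine().
QImage grayImage(image.width(), image.height(), QImage::Format_Grayscale8);
for (int y = 0; y < image.height(); ++y) {
    uchar *pLine = grayImage.scanLine(y);             // start of this (32-bit aligned) row
    for (int x = 0; x < image.width(); ++x)
        pLine[x] = uchar(qGray(image.pixel(x, y)));   // qGray() maps an RGB triple to 0..255
}

QImage::pixel() is convenient but not the fastest way to read the source; for large images the source rows can be walked with constScanLine() in the same manner.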

Pixel data unpacking to smaller sections

I'm trying to write a function that unpacks an image into separate quads. But for some reason the results are distorted (kinda stretched 45 degrees), so I must be reading the pixel array incorrectly, though I can't see the problem with my function...
The function takes 2 unsigned char arrays, "source" and "target", and two unsigned int values, the "width" and "height" of the source image. Width is divisible by 4, and height is divisible by 3 (both give the same value, because the texture is 600 * 450), so each face is 150*150 px. So the w/h values are correct. Then it also takes in 2 ints, "xIt" and "yIt", which determine the offset - which 150*150 block should be read.
Here's the function:
const unsigned int trgImgWidth = width / 4;
const unsigned int trgImgHeight = height / 3;
unsigned int trgBufferOffset = 0;
// Compute pixel offset to start reading from
unsigned int Yoffset = yIt * trgImgHeight * width * 3;
unsigned int Xoffset = xIt * trgImgWidth * 3;
for (unsigned int y = 0; y < trgImgHeight; y++)
{
unsigned int o = Yoffset + Xoffset; // Offset of current line of pixels
for (unsigned int x = 0; x < trgImgWidth * 3; x++) // for each pixel component (rgb) in the line
{
target[trgBufferOffset] = source[o + x];
trgBufferOffset++;
}
Yoffset += width * 3;
}
Anyone see where I might be going wrong here?

How can I use openimageIO to store RGB values in arrays? (using C++, OpenGL)

I am using openimageIO to read and display an image from a JPG file, and I now need to store the RGB values in arrays so that I can manipulate and re-display them later.
I want to do something like this:
for (int i=0; i<picturesize;i++)
{
Rarray[i]=pixelredvalue;
Garray[i]=pixelgreenvalue;
Barray[i]=pixelbluevalue;
}
This is an openimageIO source that I found online: https://people.cs.clemson.edu/~dhouse/courses/404/papers/openimageio.pdf
"Section 3.2: Advanced Image Output" (pg 35) is the closest to what I'm doing, but I don't understand how I can use the channels to write pixel data to arrays. I also don't fully understand the difference between "writing" and "storing in an array". This is the piece of code in the reference that I am talking about:
int channels = 4;
ImageSpec spec (width, length, channels, TypeDesc::UINT8);
spec.channelnames.clear ();
spec.channelnames.push_back ("R");
spec.channelnames.push_back ("G");
spec.channelnames.push_back ("B");
spec.channelnames.push_back ("A");
I managed to read the image and display it using the code in the reference, but now I need to store all the pixel values in my array.
Here is another useful piece of code from the link, but again, I can't understand how to retrieve the individual RGB values and place them into an array:
#include <OpenImageIO/imageio.h>
OIIO_NAMESPACE_USING
...
const char *filename = "foo.jpg";
const int xres = 640, yres = 480;
const int channels = 3; // RGB
unsigned char pixels[xres*yres*channels];
ImageOutput *out = ImageOutput::create (filename);
if (! out)
return;
ImageSpec spec (xres, yres, channels, TypeDesc::UINT8);
out->open (filename, spec);
out->write_image (TypeDesc::UINT8, pixels);
out->close ();
ImageOutput::destroy (out);
But this is about writing to a file, and still does not solve my problem. This is on page 35.
Let's assume that your code which reads the image looks like this (snippet from the OpenImageIO 1.7 Programmer Documentation, Chapter 4.1 Image Input Made Simple, page 55):
ImageInput *in = ImageInput::open (filename);
const ImageSpec &spec = in->spec();
int xres = spec.width;
int yres = spec.height;
int channels = spec.nchannels;
std::vector<unsigned char> pixels (xres*yres*channels);
in->read_image (TypeDesc::UINT8, &pixels[0]);
in->close();
ImageInput::destroy (in);
Now all the bytes of the image are contained in std::vector<unsigned char> pixels.
If you want to access the RGB values of the pixel at position x, y, then you can do it like this:
int pixel_addr = (y * xres + x) * channels;
unsigned char red   = pixels[pixel_addr];
unsigned char green = pixels[pixel_addr + 1];
unsigned char blue  = pixels[pixel_addr + 2];
Since all the pixels are stored in pixels, there is no reason to store them in separate arrays for the 3 color channels.
But if you want to store the red, green and blue values in separated arrays, then you can do it like this:
std::vector<unsigned char> Rarray(xres*yres);
std::vector<unsigned char> Garray(xres*yres);
std::vector<unsigned char> Barray(xres*yres);
for (int i = 0; i < xres*yres; i++)
{
    Rarray[i] = pixels[i*channels];
    Garray[i] = pixels[i*channels + 1];
    Barray[i] = pixels[i*channels + 2];
}
Of course this assumes the pixels are tightly packed in pixels (a scanline alignment of 1).
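If you later want to put the manipulated channels back together and write them to a file, you can interleave the three arrays into one buffer and reuse the ImageOutput calls from the snippet in the question. A rough sketch, assuming the image really has three channels (the output name result.jpg is just a placeholder):

// Re-interleave the separated channels into one RGB buffer and write it out
// with the same OpenImageIO 1.x ImageOutput API as in the question.
std::vector<unsigned char> out_pixels(xres*yres*channels);
for (int i = 0; i < xres*yres; i++)
{
    out_pixels[i*channels]     = Rarray[i];
    out_pixels[i*channels + 1] = Garray[i];
    out_pixels[i*channels + 2] = Barray[i];
}

ImageOutput *out = ImageOutput::create("result.jpg");
if (out) {
    ImageSpec outspec(xres, yres, channels, TypeDesc::UINT8);
    out->open("result.jpg", outspec);
    out->write_image(TypeDesc::UINT8, &out_pixels[0]);
    out->close();
    ImageOutput::destroy(out);
}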

C++AMP Computing gradient using texture on a 16 bit image

I am working with depth images retrieved from a Kinect, which are 16 bits. I ran into some difficulties making my own filters, related to the indexing or the size of the images.
I am working with textures because they allow working with images of any bit depth.
So I am trying to compute a simple gradient to understand what is wrong, or why it doesn't work as I expected.
You can see that there is something wrong when I use the y direction.
For x: (result image omitted)
For y: (result image omitted)
That's my code:
typedef concurrency::graphics::texture<unsigned int, 2> TextureData;
typedef concurrency::graphics::texture_view<unsigned int, 2> Texture;
cv::Mat image = cv::imread("Depth247.tiff", CV_LOAD_IMAGE_ANYDEPTH);
//just a copy from another image
cv::Mat image2(image.clone() );
concurrency::extent<2> imageSize(640, 480);
int bits = 16;
const unsigned int nBytes = imageSize.size() * 2; // 614400
{
uchar* data = image.data;
// Result data
TextureData texDataD(imageSize, bits);
Texture texR(texDataD);
parallel_for_each(
imageSize,
[=](concurrency::index<2> idx) restrict(amp)
{
int x = idx[0];
int y = idx[1];
// 65535 is the maximum value a pixel with 16 bits can take (2^16 - 1)
int valX = (x / (float)imageSize[0]) * 65535;
int valY = (y / (float)imageSize[1]) * 65535;
texR.set(idx, valX);
});
//concurrency::graphics::copy(texR, image2.data, imageSize.size() *(bits / 8u));
concurrency::graphics::copy_async(texR, image2.data, imageSize.size() *(bits) );
cv::imshow("result", image2);
cv::waitKey(50);
}
Any help would be much appreciated.
Your indexes are swapped in two places.
int x = idx[0];
int y = idx[1];
Remember that C++AMP uses row-major indices for arrays. Thus idx[0] refers to row, y axis. This is why the picture you have for "For x" looks like what I would expect for texR.set(idx, valY).
Similarly the extent of image is also using swapped values.
int valX = (x / (float)imageSize[0]) * 65535;
int valY = (y / (float)imageSize[1]) * 65535;
Here imageSize[0], which C++AMP treats as the extent of dimension 0 (the row/y dimension), has actually been initialized with the number of columns (640), not the number of rows.
I'm not familiar with OpenCV, but I'm assuming that it also uses a row-major format for cv::Mat. It might also invert the y axis, with (0, 0) at the top-left rather than the bottom-left. The Kinect data may do similar things, but again, it's row-major.
There may be other places in your code that have the same issue but I think if you double check how you are using index and extent you should be able to fix this.
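To make that concrete, a minimal sketch of the kernel with the two dimensions used consistently (extent declared row-major, i.e. (480, 640) for a 640x480 image, and idx[0]/idx[1] read as row/column) could look like this:

concurrency::extent<2> imageSize(480, 640);   // 480 rows, 640 columns
parallel_for_each(
    imageSize,
    [=](concurrency::index<2> idx) restrict(amp)
    {
        int y = idx[0];                                 // row
        int x = idx[1];                                 // column
        // 65535 = 2^16 - 1, the maximum 16-bit pixel value
        int valX = (x / (float)imageSize[1]) * 65535;   // normalise by the number of columns
        int valY = (y / (float)imageSize[0]) * 65535;   // normalise by the number of rows
        texR.set(idx, valX);                            // horizontal gradient; use valY for a vertical one
    });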

OpenCV Foreground Detection slow

I am trying to implement the codebook foreground detection algorithm outlined here in the book Learning OpenCV.
The algorithm only describes a codebook-based approach for each pixel of the picture. So I took the simplest approach that came to mind - to have an array of codebooks, one for each pixel, much like the matrix structure underlying IplImage. The length of the array is equal to the number of pixels in the image.
I wrote the following two loops to learn the background and segment the foreground. It uses my limited understanding of the matrix structure inside the src image, and uses pointer arithmetic to traverse the pixels.
void foreground(IplImage* src, IplImage* dst, codeBook* c, int* minMod, int* maxMod){
int height = src->height;
int width = src->width;
uchar* srcCurrent = (uchar*) src->imageData;
uchar* srcRowHead = srcCurrent;
int srcChannels = src->nChannels;
int srcRowWidth = src->widthStep;
uchar* dstCurrent = (uchar*) dst->imageData;
uchar* dstRowHead = dstCurrent;
// dst has 1 channel
int dstRowWidth = dst->widthStep;
for(int row = 0; row < height; row++){
for(int column = 0; column < width; column++){
(*dstCurrent) = find_foreground(srcCurrent, (*c), srcChannels, minMod, maxMod);
dstCurrent++;
c++;
srcCurrent += srcChannels;
}
srcCurrent = srcRowHead + srcRowWidth;
srcRowHead = srcCurrent;
dstCurrent = dstRowHead + dstRowWidth;
dstRowHead = dstCurrent;
}
}
void background(IplImage* src, codeBook* c, unsigned* learnBounds){
int height = src->height;
int width = src->width;
uchar* srcCurrent = (uchar*) src->imageData;
uchar* srcRowHead = srcCurrent;
int srcChannels = src->nChannels;
int srcRowWidth = src->widthStep;
for(int row = 0; row < height; row++){
for(int column = 0; column < width; column++){
update_codebook(srcCurrent, c[row*column], learnBounds, srcChannels);
srcCurrent += srcChannels;
}
srcCurrent = srcRowHead + srcRowWidth;
srcRowHead = srcCurrent;
}
}
The program works, but is very sluggish. Is there something obvious that is slowing it down? Or is it an inherent problem with this simple implementation? Is there anything I can do to speed it up? Each codebook is kept in no particular order, so it takes linear time to process each pixel. So if I double the number of background samples, the per-pixel work roughly doubles, and that is then magnified by the number of pixels. But as the implementation stands, I don't see any clear, logical way to sort the code element entries.
I am aware that there is an example implementation of the same algorithm in the opencv samples. However, that structure seems to be much more complex. I am looking more to understand the reasoning behind this method, I am aware that I can just modify the sample for real life applications.
Thanks
Operating on every pixel in an image is going to be slow, regardless of how you implement it.