Image with only the red component from RGB565 - C++

Hello, here's a part of the code which I am using:
b = (byte2 & 0xF8) << 8; // 0xF8 = 11111000, top 5 bits
g = (byte2 & 0xFC) << 3; // 0xFC = 11111100, top 6 bits
r = (byte2 & 0xF8) >> 3; // 0xF8 = 11111000, top 5 bits
grisColor = r | g | b;
It takes a picture with an OV7670 camera in RGB565. What do I have to modify in order to take a picture with only the red component?
Thanks a lot in advance

Just comment out the end of the last line:
grisColor = r; // | g | b;
and/or set g and b to zero.

Once you have the 5 bits of red, the next question is: do you want the output as grayscale (e.g. a single octet), or RGB565 with just the red filled in (and green and blue zero), or RGB565 grayscale, or RGB24, ...?
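For instance, keeping only red in an RGB565 word, or expanding the 5 red bits to an 8-bit gray value, could look like this (a minimal sketch independent of any camera code; redOnly565 and redToGray8 are illustrative names, not part of any library):

```cpp
#include <cstdint>

// Build an RGB565 pixel that keeps only the red channel.
uint16_t redOnly565(uint16_t rgb565) {
    return rgb565 & 0xF800;  // top 5 bits are red; zero out green and blue
}

// Expand the 5 red bits to an 8-bit gray value. Replicating the high
// bits into the low bits maps 31 to 255 rather than 248.
uint8_t redToGray8(uint16_t rgb565) {
    uint8_t r5 = (rgb565 >> 11) & 0x1F;
    return static_cast<uint8_t>((r5 << 3) | (r5 >> 2));
}
```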

I noticed that your goal is to get an 8-bit grayscale image from the OV7670. You would get better results by setting the OV7670 to output YUV422 and "converting" that data to grayscale (just use the Y component and ignore the U and V components).
I have a github repository that contains some useful functions for configuring the ov7670.
Check out the register list named yuv422_ov7670 https://github.com/ComputerNerd/ov7670-no-ram-arduino-uno/blob/master/ov7670_regs.h
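Once the camera outputs YUV422, extracting the grayscale image is just taking every other byte. A minimal sketch, assuming the common YUYV byte order (the actual byte order depends on your register settings; some configurations emit UYVY, where Y sits at odd offsets instead):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Extract the grayscale (Y) plane from a YUYV (YUV 4:2:2) byte stream:
// Y0 U Y1 V Y2 U Y3 V ...  Every other byte is a luma sample.
std::vector<uint8_t> yuyvToGray(const uint8_t* data, size_t numPixels) {
    std::vector<uint8_t> gray(numPixels);
    for (size_t i = 0; i < numPixels; ++i) {
        gray[i] = data[2 * i];  // take Y, skip the chroma byte
    }
    return gray;
}
```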

Related

How to get the grayscale value of pixels from a grayscale image in Xcode

I was wondering how to determine the equivalent RGB values for a grayscale image. The original image is grayscale, and everything I have found online covers converting an RGB image's pixel values to grayscale pixel values. I can already read in the image. Ideally, this would be for Xcode.
I was wondering if there is a class which would do this for me. If so, and you could point me to it, that would be great. I will read up on it.
Any help is greatly appreciated.
NOTE: I am a beginner in C++ and do not have time to learn everything formally; I have to learn all of my programming on the fly.
You need more information to transform from simple grayscale to RGB. When you do the reverse operation (RGB to grayscale), the color information is "lost", as the three channels are collapsed into a single value (depending on the algorithm, each channel gets a different or equal weight in the final computation).
Digital cameras usually store more information per pixel: typically 12 bits per channel in 35mm and 14 bits per channel in medium format (those bit depths are averages; some products offer less or even more).
Thanks to those additional bits per channel, the camera can compute the "real" color, or what it thinks is the real color, based on some parameters.
TL;DR: You can't without more data from your source, in this case the image.
You can convert a gray value to RGB by setting each component of the RGB value to the gray value:
ColorRGB myColorRGB = ColorRGBMake(myGrayValue, myGrayValue, myGrayValue);
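ColorRGB and ColorRGBMake above are placeholders for whatever color type your framework provides; in plain C++ the same idea is just replicating the gray value into all three channels:

```cpp
#include <cstdint>

// A minimal RGB triple; stand-in for your framework's color type.
struct RGB {
    uint8_t r, g, b;
};

// Writing the same value into R, G and B yields a neutral (gray) color.
RGB grayToRGB(uint8_t gray) {
    return RGB{gray, gray, gray};
}
```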

Conversion from YBR_FULL to RGB in c++?

Could you please explain the relationship between YBR_FULL and RGB so that I'm able to convert the YBR_FULL image to RGB in C++?
I'm getting the pixel data from a DICOM image as bytes in a buffer using the DCMTK library. For some selected pixels I set the pixel values to 0. For RGB that works fine: when the images are visualized, the pixels set to 0 are shown as black. But in the case of YBR_FULL, those pixels are shown as green. I don't quite understand what the problem is. Could you please elaborate on what mistake I'm making?
This has been answered in:
Create BufferedImage from YBR_FULL Dicom Image
Here's the link to the mathematical formula:
http://www.fourcc.org/fccyvrgb.php
If you are setting the YBR values to (0, 0, 0), your luminance (Y) is at the correct value, but the zero point of the chroma channels (B and R) is exactly in the middle of the range, so you should try the value 128 (if B and R are one byte each). Then you have YBR = (0, 128, 128).
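The full-range YCbCr-to-RGB conversion (the JPEG/JFIF convention, which is what DICOM's YBR_FULL uses) can be sketched as follows; it also shows why (0, 0, 0) renders as green:

```cpp
#include <algorithm>
#include <cstdint>

// Full-range YCbCr -> RGB (JPEG/JFIF convention, matching YBR_FULL):
// Y, Cb, Cr each span 0..255, with chroma centered at 128.
void ybrFullToRgb(uint8_t y, uint8_t cb, uint8_t cr,
                  uint8_t& r, uint8_t& g, uint8_t& b) {
    auto clamp8 = [](double v) {
        return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v + 0.5)));
    };
    r = clamp8(y + 1.402    * (cr - 128));
    g = clamp8(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128));
    b = clamp8(y + 1.772    * (cb - 128));
}
```

With (Y, Cb, Cr) = (0, 0, 0), the negative chroma offsets push R and B below zero (clamped to 0) while G comes out well above zero, which is exactly the green you observed; (0, 128, 128) gives true black.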

1bpp Monochromatic BMP

I ran a demo BMP file format helper program, "DDDemo.exe", to help me visualize the format of a 32x1 pixel BMP file (monochromatic). I'm okay with the two header sections but don't seem to understand the color table and pixel bits portions. I made two 32x1 pixel BMP files to help me compare (please see attached).
Can someone assist me in understanding how the "pixel bits" relate to the color map?
UPDATE: After some trial and error I was finally able to write a 32x1 pixel monochromatic BMP. Although it has different pixel bits from the attached images, this tool helped with the header and color mapping concept. Thank you for everyone's input.
An unset bit in the PIXEL BITS refers to the first color table entry (0,0,0), black, and a set bit refers to the second color table entry (ff,ff,ff), white.
"The 1-bit per pixel (1bpp) format supports 2 distinct colors, (for example: black and white, or yellow and pink). The pixel values are stored in each bit, with the first (left-most) pixel in the most-significant bit of the first byte. Each bit is an index into a table of 2 colors. This Color Table is in 32bpp 8.8.8.0.8 RGBAX format. An unset bit will refer to the first color table entry, and a set bit will refer to the last (second) color table entry." - BMP file format
The color table for these images is simply indicating that there are two colors in the image:
Color 0 is (00, 00, 00) -- pure black
Color 1 is (FF, FF, FF) -- pure white
The image compression method shown (BI_RGB -- uncompressed) doesn't make sense with the given pixel data and images, though.
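Decoding the pixel bits into color-table indices can be sketched like this (an illustrative helper, not part of any particular BMP library; it handles one row and ignores the 4-byte row padding a real BMP parser must also deal with):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Unpack one row of 1bpp BMP pixel bits into color-table indices.
// The left-most pixel sits in the most-significant bit of the first
// byte: bit 7 of byte 0 is pixel 0, bit 6 is pixel 1, and so on.
std::vector<int> unpack1bppRow(const uint8_t* row, size_t widthPixels) {
    std::vector<int> indices(widthPixels);
    for (size_t x = 0; x < widthPixels; ++x) {
        indices[x] = (row[x / 8] >> (7 - (x % 8))) & 1;  // 0 or 1
    }
    return indices;
}
```

Each resulting index selects a color-table entry: 0 is (00,00,00) black, 1 is (FF,FF,FF) white in the images above.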

OpenCV - HSV range of values for tracking red color

Could you please tell me what the ranges of the Hue, Saturation and Value indices are for intense red?
I'm trying to use these values for color tracking and I couldn't find a specific answer via Google.
You can map any color to OpenCV HSV. Note that OpenCV uses a 180° hue cylinder (H runs from 0 to 179) while ideally it is 360°; MS Paint, on the other hand, uses a 240° cylinder.
So to get an OpenCV HSV value, simply open MS Paint, open the color mixer, and read off the HSV values; to map the hue into OpenCV's range, multiply it by 180/240.
The saturation and value channels in OpenCV range from 0 to 255.
You are the only one who can answer this question, since we don't know your criteria for "intense red". Collect as many samples as you can, some of which you consider intense red and some which are close but just miss the cut. Convert them all to HSL. Study the pattern.
You might put together a small app that has sliders for the H, S, and L parameters and displays a block of color corresponding to the settings. That will tell you your limits very quickly.
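As a starting point for "red" in OpenCV's HSV convention, a simple membership test might look like this. The thresholds are illustrative and need tuning against your own samples; note that red wraps around hue 0, so both ends of the hue axis must be checked (with OpenCV itself you would express the same thing as two cv::inRange calls combined with cv::bitwise_or):

```cpp
#include <cstdint>

// Test whether an OpenCV-convention HSV triple (H in 0..179, S and V
// in 0..255) counts as "intense red". Red straddles hue 0, so accept
// hues near both ends of the range. Thresholds are starting points.
bool isIntenseRed(uint8_t h, uint8_t s, uint8_t v) {
    bool redHue = (h <= 10) || (h >= 170);  // wrap-around near hue 0
    return redHue && s >= 120 && v >= 70;   // saturated and bright enough
}
```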

How to resize an image in C++?

I have an image which is represented by an Array2D:
template<class T = uint8_t>
Array2D<T> mPixData[4]; ///< 3 component channels + alpha channel.
The comment is in the library. I have no clues about the explanation.
Would someone:
explain what the 3 component channels + alpha channel are about
show how I could resize this image based on the mPixData
Without knowing what library this is, here is a stab in the dark:
The type definition implies that it is creating a 2D array of unsigned chars (allowing you to store values up to 255):
template<class T = uint8_t> Array2D<T>
Then, mPixData itself is an array, which implies that at each co-ordinate you have four values (bytes) to contend with: 3 for the colours (let's say RGB, but it could be something else) and 1 for alpha.
The "image" is basically this three-dimensional array. Presumably when loading stuff into it, it resizes to the input. What you need to do is find some form of resizing algorithm (not an image processing expert myself, but I'm sure Google will reveal something), which will then allow you to take this data and do what you need...
1) 3 component channels - the Red, Green and Blue channels. The alpha channel describes the image's transparency.
2) There are many algorithms you can use to resize the image. The simplest would be to discard extra pixels; another simple option is interpolation.
The 3 component channels represent the Red Green Blue (aka RGB) channels. The 4th channel, ALPHA, is the transparency channel.
A pixel is defined by mPixData[4]
mPixData[0] -> R
mPixData[1] -> G
mPixData[2] -> B
mPixData[3] -> A
Therefore, an image can be represented as a vector or array of mPixData[4]. As you already stated, in this case it is Array2D<T> mPixData[4];
Resizing/rescaling/resampling an image is not a trivial process. There is a lot of material available on the web about it, and I think you should consider using a library to do it. Check out CxImage (Windows/Linux).
There is some code here, but I haven't tested it. Check the resample() function.
Hi, the 3 channels are RGB + alpha: the red, green and blue channels plus the alpha channel. There are several methods for downscaling. You could, for example, keep only every 4th pixel, but the result would look quite bad; take a look at different interpolation methods, e.g. http://en.wikipedia.org/wiki/Bilinear_interpolation.
Or if you want to use a library, try http://www.imagemagick.org/Magick++/
or, as mentioned by karlphillip:
http://www.xdp.it/cximage.htm
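As an illustration of the simplest approach mentioned above, a nearest-neighbor resize of a single 8-bit channel can be sketched like this (run it once per channel - R, G, B, A - to resize the whole image; bilinear interpolation would give smoother results):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Nearest-neighbor resize of one 8-bit channel stored row-major.
// Each destination pixel copies the nearest source pixel.
std::vector<uint8_t> resizeNearest(const std::vector<uint8_t>& src,
                                   size_t srcW, size_t srcH,
                                   size_t dstW, size_t dstH) {
    std::vector<uint8_t> dst(dstW * dstH);
    for (size_t y = 0; y < dstH; ++y) {
        size_t sy = y * srcH / dstH;           // nearest source row
        for (size_t x = 0; x < dstW; ++x) {
            size_t sx = x * srcW / dstW;       // nearest source column
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
    return dst;
}
```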