GDI+ configuration in C++ for a binary image? - c++

Is there any setting on the GDI+ Graphics or Bitmap class that restricts the colors to black and white only (0,0,0 / 255,255,255) for a binary image? I have already tried the color-palette options (GetPalette / SetPalette) of the Bitmap class, but they don't work at all.
I have also modified the image data itself, but that doesn't work either.
for (int bufidx = 0; bufidx < m_BufferSize; bufidx++)
{
    if (m_pImage[BINARY_VID][bufidx] > m_Threshold)
    {
        m_pImage[BINARY_VID][bufidx] = 255;
    }
    else
    {
        m_pImage[BINARY_VID][bufidx] = 0;
    }
} // Algorithm for thresholding the image
This is the code that thresholds the data itself. m_BufferSize is the size of the image (width * height), and m_pImage[BINARY_VID] is the raw 8-bit data (values 0-255) coming from the camera module.
m_pBitmap[vidType]->LockBits(&rc, ImageLockModeWrite, PixelFormat8bppIndexed, &bitmapdata);
memcpy(bitmapdata.Scan0, m_pImage[vidType], m_BufferSize); // assumes bitmapdata.Stride == width
m_pBitmap[vidType]->UnlockBits(&bitmapdata);
and this is the part where I copy the data into the Bitmap object.
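One caveat with a single memcpy here: for 8bpp bitmaps, BitmapData::Stride is often wider than the image width, so copying width*height bytes in one go only works when the stride equals the width. A stride-aware row copy can be sketched in plain C++ (CopyWithStride is a hypothetical helper; plain parameters stand in for the GDI+ fields):

```cpp
#include <cstring>
#include <cstdint>

// Copy a tightly packed 8-bit image into a destination whose rows are
// padded out to 'stride' bytes (as GDI+ BitmapData::Scan0 may be).
void CopyWithStride(const uint8_t* src, uint8_t* dst,
                    int width, int height, int stride)
{
    for (int y = 0; y < height; ++y)
    {
        // one row at a time: source rows are 'width' bytes apart,
        // destination rows are 'stride' bytes apart
        std::memcpy(dst + y * stride, src + y * width, width);
    }
}
```

In the real code, width/height would come from the Bitmap and stride from bitmapdata.Stride.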
int paletteSize = m_pBitmap[vidType]->GetPaletteSize(); // note: GetPaletteSize() returns a byte count, not an entry count
ColorPalette* pPalette = (ColorPalette*)new BYTE[paletteSize];
m_pBitmap[vidType]->GetPalette(pPalette, paletteSize);
// gets the palette info of the bitmap image to set its color table
switch (vidType)
{
case NORMAL_VID:
case ROI_VID:
    // Normal video / ROI video: identity grayscale palette
    for (unsigned int i = 0; i < pPalette->Count; i++)
    {
        pPalette->Entries[i] = Color::MakeARGB(255, i, i, i);
    }
    m_pBitmap[vidType]->SetPalette(pPalette);
    break;
case BINARY_VID:
    // Binary video: entries above the threshold map to white, the rest to black
    for (unsigned int i = 0; i < pPalette->Count; i++)
    {
        if (i > m_Threshold)
        {
            pPalette->Entries[i] = Color::MakeARGB(255, 255, 255, 255);
        }
        else
        {
            pPalette->Entries[i] = Color::MakeARGB(255, 0, 0, 0);
        }
    }
    m_pBitmap[vidType]->SetPalette(pPalette);
    break;
default:
    AfxMessageBox(TEXT("vidType error: on converting palette!"));
    delete[] (BYTE*)pPalette;
    return;
}
delete[] (BYTE*)pPalette;
MemoryLeakCheck();
and this is the part that I use for converting the colors.
And this is the resulting image. I just don't know why I see gray noise in the binary image when the data contains only 0 or 255.

There is no gray in this image; that is a false impression.
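One quick way to verify that claim is to histogram the raw buffer before it ever reaches GDI+: if only the values 0 and 255 occur, any visible gray is introduced later, e.g. by display-side scaling or interpolation. A minimal check sketch (plain C++; CountDistinctValues is a hypothetical helper):

```cpp
#include <cstdint>
#include <cstddef>
#include <array>

// Count how many distinct byte values occur in an 8-bit image buffer.
// For a properly thresholded binary image this should return 2 (or 1).
int CountDistinctValues(const uint8_t* buf, size_t size)
{
    std::array<bool, 256> seen{};   // one "value present" flag per byte value
    int distinct = 0;
    for (size_t i = 0; i < size; ++i)
    {
        if (!seen[buf[i]])
        {
            seen[buf[i]] = true;
            ++distinct;
        }
    }
    return distinct;
}
```

Calling this on m_pImage[BINARY_VID] right after the thresholding loop would settle whether the gray exists in the data or only on screen.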

Related

Opencv: Get all objects from segmented colorful image

How can I get all objects from an image? I am separating image objects by color.
There are almost 20 colors in the following image. I want to extract all colors and their positions into a vector (Vec3b and Rect).
I'm using the EGBIS algorithm for segmentation.
Segmented image
Mat src, dst;
String imageName("/home/pathToImage.jpg");
src = imread(imageName, 1);
if (src.rows < 1)
    return -1;
std::vector<Vec3b> colors;
for (int i = 0; i < src.rows; i = i + 5)
{
    for (int j = 0; j < src.cols; j = j + 5)
    {
        // Point takes (x, y) = (column, row); note OpenCV stores pixels as BGR
        Vec3b color = src.at<Vec3b>(Point(j, i));
        if (colors.empty())
        {
            colors.push_back(color);
        }
        else
        {
            bool add = true;
            for (size_t k = 0; k < colors.size(); k++)
            {
                int bmin = colors[k].val[0] - 5, bmax = colors[k].val[0] + 5,
                    gmin = colors[k].val[1] - 5, gmax = colors[k].val[1] + 5,
                    rmin = colors[k].val[2] - 5, rmax = colors[k].val[2] + 5;
                if (color.val[0] >= bmin && color.val[0] <= bmax &&
                    color.val[1] >= gmin && color.val[1] <= gmax &&
                    color.val[2] >= rmin && color.val[2] <= rmax)
                {
                    add = false;
                    break;
                }
            }
            if (add)
                colors.push_back(color);
        }
    }
}
int size = colors.size();
for (size_t i = 0; i < colors.size(); i++)
{
    Mat inrangeImage;
    //cv::inRange(src, Scalar(lowBlue, lowGreen, lowRed), Scalar(highBlue, highGreen, highRed), redColorOnly);
    cv::inRange(src,
                cv::Scalar(colors[i].val[0] - 1, colors[i].val[1] - 1, colors[i].val[2] - 1),
                cv::Scalar(colors[i].val[0] + 1, colors[i].val[1] + 1, colors[i].val[2] + 1),
                inrangeImage);
    imwrite("/home/kavtech/Segmentation/1/opencv-wrapper-egbis/images/inrangeImage.jpg", inrangeImage);
}
/// Display
namedWindow("Image", WINDOW_AUTOSIZE);
imshow("Image", src);
waitKey(0);
I want to get each color's position so that I can differentiate object positions. Please help!
That's just a trivial data-formatting problem. You want to turn a truecolour image with only 20 or so colours into a colour-indexed image.
So simply step through the image, look up the colour in your growing dictionary, and assign an integer 0-20 to each pixel.
Now you can turn the image into binary images simply by saying one colour is set and the rest are clear, and use standard algorithms for fitting rectangles.
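The steps above can be sketched in plain C++ (no OpenCV; pixels are represented as packed 0xRRGGBB values, and IndexColours is a hypothetical helper):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>
#include <map>

// Turn a truecolour image (one packed 0xRRGGBB value per pixel) into a
// colour-indexed image: each distinct colour gets the next integer index.
std::vector<int> IndexColours(const std::vector<uint32_t>& pixels,
                              std::map<uint32_t, int>& palette)
{
    std::vector<int> indexed(pixels.size());
    for (size_t i = 0; i < pixels.size(); ++i)
    {
        auto it = palette.find(pixels[i]);
        if (it == palette.end())  // unseen colour: grow the dictionary
            it = palette.emplace(pixels[i], (int)palette.size()).first;
        indexed[i] = it->second;
    }
    return indexed;
}
```

From the indexed image, the binary mask for colour k is just the set of pixels where indexed[i] == k.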

Get RGB values from AVPicture and change to grey-scale in FFMPEG

The main goal of my code is to change the RGB values of an AVPicture in FFmpeg.
I have been able to get the image data data[0] by following this article: http://blog.tomaka17.com/2012/03/libavcodeclibavformat-tutorial/
I would like to know how I can access the 3 bytes per pixel of pic.data[0], which is in RGB format. I have been trying to access pic.data[i][j] via for-loops in a 2D-matrix fashion, but that breaks down (pic.data holds only a few plane pointers, not one pointer per row).
Any guidance in this regard will be helpful.
Code is here :
AVPicture pic;
avpicture_alloc(&pic, PIX_FMT_RGB24, mpAVFrameInput->width, mpAVFrameInput->height);
auto ctxt = sws_getContext(mpAVFrameInput->width, mpAVFrameInput->height,
                           static_cast<PixelFormat>(mpAVFrameInput->format),
                           mpAVFrameInput->width, mpAVFrameInput->height,
                           PIX_FMT_RGB24, SWS_BILINEAR, nullptr, nullptr, nullptr);
if (ctxt == nullptr)
    throw std::runtime_error("Error while calling sws_getContext");
sws_scale(ctxt, mpAVFrameInput->data, mpAVFrameInput->linesize, 0,
          mpAVFrameInput->height, pic.data, pic.linesize);
for (int i = 0; i < (mpAVFrameInput->height - 1); i++) {
    for (int j = 0; j < (mpAVFrameInput->width - 1); j++) {
        printf("\n value: %d", pic.data[0][j]);
    }
}
Pseudo code of what I have in mind:
For each pixel in image {
    Red   = pic.data[i][j].pixel.RED;
    Green = pic.data[i][j].pixel.GREEN;
    Blue  = pic.data[i][j].pixel.BLUE;
    GRAY  = (Red + Green + Blue) / 3;
    Red   = GRAY;
    Green = GRAY;
    Blue  = GRAY;
    Save Frame;
}
I am quite new to FFmpeg, so any guidance and help will be highly appreciated.
Many thanks.
First extract the data row-by-row for each frame, iterating the loop over the frame's height.
Here's a sample:
int FrameHeight = AVFrameInput->height;
int FrameWidth  = AVFrameInput->width;
for (int Counter = 0; Counter < FrameHeight; Counter++)
{
    int RowSize = FrameWidth * sizeof(uint8_t) * 3;   // 3 bytes per RGB24 pixel
    uint8_t* RowData = (uint8_t*)malloc(RowSize);
    memset(RowData, 0, RowSize);
    memcpy(RowData, AVFrameInput->data[0] + Counter * AVFrameInput->linesize[0], RowSize);
    // iterate over the row's pixel bytes (not linesize[0], which may include padding
    // beyond RowSize and would overrun the RowData buffer)
    for (int k = 0; k < RowSize; ++k)
    {
        RowData[k] = RowData[k] / 3;                  // darken every channel
    }
    memcpy(AVFrameInput->data[0] + Counter * AVFrameInput->linesize[0], RowData, RowSize);
    free(RowData);                                    // avoid leaking one row per iteration
}
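The grey-scale conversion from the question's pseudo code (replace each pixel's channels with their average) can be sketched directly on a packed RGB24 buffer; Rgb24ToGray is a hypothetical helper operating on a plain byte array rather than the AVFrame:

```cpp
#include <cstdint>
#include <cstddef>

// In-place grey-scale conversion of a packed RGB24 buffer:
// each pixel's three channels are replaced by their average.
void Rgb24ToGray(uint8_t* data, size_t numPixels)
{
    for (size_t i = 0; i < numPixels; ++i)
    {
        uint8_t* px = data + i * 3;   // R,G,B triplet for pixel i
        uint8_t gray = (uint8_t)((px[0] + px[1] + px[2]) / 3);
        px[0] = px[1] = px[2] = gray;
    }
}
```

In the real code this would be applied row by row, passing data[0] + row * linesize[0] and the frame width.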

c++ and opencv get and set pixel color to Mat

I'm trying to set a new color value for some pixels in a cv::Mat image. My code is below:
Mat image = img;
for (int y = 0; y < img.rows; y++)
{
    for (int x = 0; x < img.cols; x++)
    {
        Vec3b color = image.at<Vec3b>(Point(x, y));
        if (color[0] > 150 && color[1] > 150 && color[2] > 150)
        {
            color[0] = 0;
            color[1] = 0;
            color[2] = 0;
            cout << "Pixel >150 :" << x << "," << y << endl;
        }
        else
        {
            color.val[0] = 255;
            color.val[1] = 255;
            color.val[2] = 255;
        }
    }
}
imwrite("../images/imgopti" + to_string(i) + ".tiff", image);
It seems to find the right pixels (according to the cout output), but in the output image (written with imwrite) the pixels concerned aren't modified. I have already tried using color.val[0], and I still can't figure out why the pixel colors in the output image don't change.
Thanks.
You did everything except copy the new pixel value back into the image.
This line takes a copy of the pixel into a local variable:
Vec3b color = image.at<Vec3b>(Point(x,y));
So, after changing color as you require, just set it back like this:
image.at<Vec3b>(Point(x,y)) = color;
So, in full, something like this:
Mat image = img;
for(int y=0;y<img.rows;y++)
{
for(int x=0;x<img.cols;x++)
{
// get pixel
Vec3b & color = image.at<Vec3b>(y,x);
// ... do something to the color ....
color[0] = 13;
color[1] = 13;
color[2] = 13;
// set pixel
//image.at<Vec3b>(Point(x,y)) = color;
//if you copy value
}
}
just use a reference:
Vec3b & color = image.at<Vec3b>(y,x);
color[2] = 13;
I would not use .at for performance reasons.
Define a struct:
//#pragma pack(push, 2) // not useful (see comments below)
struct BGR {
    uchar blue;
    uchar green;
    uchar red;
};
And then use it like this on your cv::Mat image:
BGR& bgr = image.ptr<BGR>(y)[x];
image.ptr(y) gives you a pointer to scanline y. Iterate through the pixels with loops over x and y.
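A self-contained sketch of the same idea, operating on a raw BGR row rather than a cv::Mat (in the real code, image.ptr<BGR>(y) would supply the row pointer; Binarize and the threshold value are assumptions for illustration):

```cpp
#include <cstdint>
#include <cstddef>

struct BGR { uint8_t blue, green, red; };   // matches OpenCV's channel order

// Set every pixel brighter than 'thresh' (in all three channels) to black
// and the rest to white - the same logic as the question's loop.
void Binarize(BGR* row, size_t width, uint8_t thresh)
{
    for (size_t x = 0; x < width; ++x)
    {
        BGR& px = row[x];                   // reference, so writes stick
        bool bright = px.blue > thresh && px.green > thresh && px.red > thresh;
        px.blue = px.green = px.red = bright ? 0 : 255;
    }
}
```

Working through a struct reference like this avoids the per-access bounds logic of Mat::at while keeping the code readable.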

DevIL PNG format for display in OpenGL

I'm doing some pixel work in OpenGL, and all was going well until I had to load a PNG. I found a thing called 'DevIL', and followed an example I found. It does display something, but it's just kind of a random rainbow. I'm doing it right as far as I can tell; I output the data to a text file to check it. I tried some other libraries, but building them is a little beyond my capabilities. Here's my setup:
((In global scope))
unsigned char pixels[160000*3];
ILubyte* bytes;
((Main))
ilInit();
ilLoadImage("test.png");
size = ilGetInteger(IL_IMAGE_SIZE_OF_DATA);
bytes = ilGetData();
And here's my drawing routine:
//Color the screen all pretty
for (int i = 0; i < 160000*3; )
{
    pixels[i] = a; i++;
    pixels[i] = b; i++;
    pixels[i] = c; i++;
}
//Break the png
for (int i = 0; i < size; )
{
    if (bytes[i+3] != 255)
    {
        i += 4;
        continue;
    }
    pixels[i] = bytes[i]; i++;
    pixels[i] = bytes[i]; i++;
    pixels[i] = bytes[i]; i++;
    i++;
}
glDrawPixels(400, 300, GL_RGB, GL_UNSIGNED_BYTE, pixels);
I know the alignment is wrong; I'll fix that later. The problem is that the colors are totally wrong.
P.S. I know you're supposed to use textured quads
PNG is a compressed format, and ilGetData hands back the data in whatever format the image was loaded as, which may not match the tightly packed layout you expect. To get the decoded image in a known format, use ilCopyPixels.
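Separately, note that the question's copy loop shares a single index i between the 4-byte-per-pixel RGBA source and the 3-byte-per-pixel RGB destination, so the two buffers drift out of step after the first pixel. A repacking sketch with independent cursors (plain C++; CopyOpaqueRgbaToRgb is a hypothetical helper standing in for the DevIL buffers):

```cpp
#include <cstdint>
#include <cstddef>

// Copy fully opaque pixels from a packed RGBA buffer into a packed RGB
// buffer, keeping separate offsets for the two different pixel sizes.
void CopyOpaqueRgbaToRgb(const uint8_t* rgba, uint8_t* rgb, size_t numPixels)
{
    for (size_t p = 0; p < numPixels; ++p)
    {
        const uint8_t* src = rgba + p * 4;  // 4 bytes per source pixel
        uint8_t* dst = rgb + p * 3;         // 3 bytes per destination pixel
        if (src[3] != 255)                  // skip non-opaque pixels
            continue;
        dst[0] = src[0];
        dst[1] = src[1];
        dst[2] = src[2];
    }
}
```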

Trouble fitting depth image to RGB image using Kinect 1.0 SDK

I'm trying to overlay the Kinect depth-camera pixels onto the RGB camera image. I am using the C++ Kinect 1.0 SDK with an Xbox Kinect, OpenCV, and the new NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution method.
I have watched the image render itself in slow motion, and it looks as if pixels are being drawn multiple times in one frame. It first draws itself from the top and left borders, then it gets to a point (you can see a 45-degree angle in there) where it starts drawing weirdly.
I have been trying to base my code on the C# code written by Adam Smith on the MSDN forums, but no dice. I have stripped out the overlay stuff and just want to draw the normalized depth pixels where they "should" be in the RGB image.
The image on the left is what I get when trying to fit the depth image to RGB space, and the image on the right is the "raw" depth image as I'd like to see it. I was hoping this method would create an image similar to the one on the right, with slight distortions.
This is the code and object definitions that I have at the moment:
// From initialization
INuiSensor* m_pNuiInstance;
NUI_IMAGE_RESOLUTION m_nuiResolution = NUI_IMAGE_RESOLUTION_640x480;
HANDLE m_pDepthStreamHandle;
IplImage* m_pIplDepthFrame;
IplImage* m_pIplFittedDepthFrame;
m_pIplDepthFrame = cvCreateImage(cvSize(640, 480), 8, 1);
m_pIplFittedDepthFrame = cvCreateImage(cvSize(640, 480), 8, 1);

// Method
IplImage* Kinect::GetRGBFittedDepthFrame() {
    static long* pMappedBits = NULL;
    if (!pMappedBits) {
        pMappedBits = new long[640*480*2];
    }
    NUI_IMAGE_FRAME pNuiFrame;
    NUI_LOCKED_RECT lockedRect;
    HRESULT hr = m_pNuiInstance->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &pNuiFrame);
    if (FAILED(hr)) {
        // return the older frame
        return m_pIplFittedDepthFrame;
    }
    bool hasPlayerData = HasSkeletalEngine(m_pNuiInstance);
    INuiFrameTexture* pTexture = pNuiFrame.pFrameTexture;
    pTexture->LockRect(0, &lockedRect, NULL, 0);
    if (lockedRect.Pitch != 0) {
        cvZero(m_pIplFittedDepthFrame);
        hr = m_pNuiInstance->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
            m_nuiResolution,
            NUI_IMAGE_RESOLUTION_640x480,
            640 * 480, /* size is previous */ (unsigned short*)lockedRect.pBits,
            (640 * 480) * 2, /* size is previous */ pMappedBits);
        if (FAILED(hr)) {
            return m_pIplFittedDepthFrame;
        }
        for (int i = 0; i < lockedRect.size; i++) {
            unsigned char* pBuf = (unsigned char*)lockedRect.pBits + i;
            unsigned short* pBufS = (unsigned short*)pBuf;
            unsigned short depth = hasPlayerData ? ((*pBufS) & 0xfff8) >> 3 : ((*pBufS) & 0xffff);
            unsigned char intensity = depth > 0 ? 255 - (unsigned char)(256 * depth / 0x0fff) : 0;
            long x = pMappedBits[i],     // tried with *(pMappedBits + (i * 2)),
                 y = pMappedBits[i + 1]; // tried with *(pMappedBits + (i * 2) + 1);
            if (x >= 0 && x < m_pIplFittedDepthFrame->width && y >= 0 && y < m_pIplFittedDepthFrame->height) {
                m_pIplFittedDepthFrame->imageData[x + y * m_pIplFittedDepthFrame->widthStep] = intensity;
            }
        }
    }
    pTexture->UnlockRect(0);
    m_pNuiInstance->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &pNuiFrame);
    return m_pIplFittedDepthFrame;
}
Thanks
I have found that the problem was that the loop,

for (int i = 0; i < lockedRect.size; i++) {
    // code
}

was iterating on a per-byte basis, not on a per-short (2-byte) basis. Since lockedRect.size is a number of bytes, the fix was simply changing the increment to i += 2; better still is sizeof(short), like so:

for (int i = 0; i < lockedRect.size; i += sizeof(short)) {
    // code
}