Make polygonal hole in QImage alpha channel - c++

I'm trying to make a polygonal hole in a QImage alpha channel.
My current implementation uses the deprecated alphaChannel() method and is slow, because it calls containsPoint() for every image pixel instead of drawing the polygon:
QImage makeImageWithHole(const QImage & image, const std::vector<QPoint> & hole_points)
{
    QImage newImage = image.convertToFormat(QImage::Format_ARGB32);
    QImage alpha = newImage.alphaChannel();
    QPolygon hole(QVector<QPoint>::fromStdVector(hole_points));

    for (int x = 0; x < image.width(); x++)
    {
        for (int y = 0; y < image.height(); y++)
        {
            if (hole.containsPoint(QPoint(x, y), Qt::OddEvenFill))
            {
                alpha.setPixel(x, y, 0);
            }
        }
    }

    newImage.setAlphaChannel(alpha);
    return newImage;
}
I also tried to implement it using a painter and a suitable composition mode, but the result has white artifacts on the polygon borders.
QImage makeImageWithHole(const QImage & image, const std::vector<QPoint> & hole)
{
    QImage newImage = image.convertToFormat(QImage::Format_ARGB32);

    QPainter p(&newImage);
    p.setCompositionMode(QPainter::CompositionMode_SourceOut);
    p.setPen(QColor(255, 255, 255, 255));
    p.setBrush(QBrush(QColor(255, 255, 255, 255)));
    p.drawPolygon(hole.data(), hole.size());
    p.end();

    return newImage;
}
What is the proper way to do this?

I think you should enable antialiasing, like this:
QPainter p(&newImage);
p.setRenderHints(QPainter::Antialiasing);
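Putting it together, here is a minimal sketch of the painter-based version with antialiasing enabled. Note that it clears the pixels inside the polygon with CompositionMode_Clear rather than the SourceOut mode used in the question; treat it as one possible variant, not the only correct approach.

QImage makeImageWithHole(const QImage &image, const std::vector<QPoint> &hole)
{
    QImage newImage = image.convertToFormat(QImage::Format_ARGB32);

    QPainter p(&newImage);
    p.setRenderHints(QPainter::Antialiasing);
    p.setCompositionMode(QPainter::CompositionMode_Clear); // zero out RGBA inside the polygon
    p.setPen(Qt::NoPen);                                   // no outline, only the fill
    p.setBrush(Qt::black);                                  // brush color is ignored by Clear
    p.drawPolygon(hole.data(), static_cast<int>(hole.size()));
    p.end();

    return newImage;
}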

Related

Changing the color of pixels using (MFC's) CImage::SetPixel()

I have a 32-bit PNG file with an alpha (transparency) channel. I want to change the color of some pixels on a per-pixel basis using MFC. Performance isn't an issue (although faster is better).
I wrote code that calls CImage::GetPixel(), tweaks the returned COLORREF, and writes the new color back with SetPixel(), but the entire image came out transparent. So I wrote the following block, which simply gets and sets the original color; the resulting image is still entirely transparent. I also tried simply using SetPixel(x, y, RGB(255, 0, 0)) to set all pixels to red. Any advice to resolve this?
CImage image;
if(image.Load(sFilename) == S_OK)
{
    TRACE(L"IsTransparencySupported %d", image.IsTransparencySupported()); // Returns 1.
    TRACE(L"IsDIBSection %d", image.IsDIBSection()); // Returns 1.
    TRACE(L"Size %dx%d", image.GetWidth(), image.GetHeight()); // Displays 141x165.
    TRACE(L"BPP %d", image.GetBPP()); // Returns 32.
    TRACE(L"Pitch %d", image.GetPitch()); // Returns -564.

    COLORREF color;
    for(int x = 0; x < image.GetWidth(); x++)
    {
        for(int y = 0; y < image.GetHeight(); y++)
        {
            color = image.GetPixel(x, y);
            image.SetPixel(x, y, color);
        }
    }

    if(image.Save(sFilenameNew, Gdiplus::ImageFormatPNG) != S_OK)
        TRACE(L"Error saving %s.", sFilenameNew);
}
else
    TRACE(L"Error loading png %s.", sFilename);
Thanks!
CImage::GetPixel()/SetPixel() go through GDI, which has no notion of alpha, so on a 32-bpp DIB section the alpha byte gets written as 0 and the saved PNG ends up fully transparent. Write to the pixel memory directly with GetPixelAddress() instead:
CImage image;
// ... load the image as above ...
for (int i = 0; i < image.GetHeight(); i++)
{
    for (int j = 0; j < image.GetWidth(); j++)
    {
        int index = i * image.GetWidth() + j;
        // 32-bpp pixels are stored in memory as B, G, R, A.
        unsigned char* pucColor = reinterpret_cast<unsigned char*>(image.GetPixelAddress(j, i));
        pucColor[0] = bValues[index]; // blue
        pucColor[1] = gValues[index]; // green
        pucColor[2] = rValues[index]; // red
        // pucColor[3] (alpha) is left untouched.
    }
}

How to convert Grayscale/binary Mat to QImage?

I have started learning Qt and am trying to make a simple video player that loads a video and plays it. It worked perfectly fine. Now I have added thresholding functionality to it. The threshold value is obtained from a spinBox.
The code is written so that the thresholding operation is done with the value in the spinBox, except at value 0 (where the normal video is displayed).
So this is my function for the same:
void Player::run()
{
    while(!stop)
    {
        if(!capture.read(frame))
            stop = true;

        // convert RGB to gray
        if(frame.channels() == 3)
        {
            if(thresh == 0)
            {
                cvtColor(frame, RGBframe, CV_BGR2RGB);
                img = QImage((const unsigned char*)(RGBframe.data),
                             RGBframe.cols, RGBframe.rows, QImage::Format_RGB888);
            }
            else
            {
                Mat temp;
                cvtColor(frame, temp, CV_BGR2GRAY);
                threshold(temp, binary, thresh, 255, 0);
                img = QImage((const unsigned char*)(binary.data),
                             binary.cols, binary.rows, QImage::Format_Indexed8);
                bool save = img.save("/home/user/binary.png");
                cout << "threshold value = " << thresh << endl;
                //imshow("Binary", binary);
            }
        }
        else
        {
            if(thresh == 0) // original Image
            {
                img = QImage((const unsigned char*)(frame.data),
                             frame.cols, frame.rows, QImage::Format_Indexed8);
            }
            else // convert to Binary Image
            {
                threshold(frame, binary, thresh, 255, 0);
                img = QImage((const unsigned char*)(binary.data),
                             binary.cols, binary.rows, QImage::Format_Indexed8);
            }
        }
        emit processedImage(img);
        this->msleep(delay);
    }
}
For a spinBox value of 0 it runs fine, but when the spinBox value is incremented I get only a black screen. I tried imshow() on the binary Mat and it shows the correct binary image, but when I save the QImage img it is just random black and white pixels (though of the same size as the original frame).
It seems that you're missing the color table for your indexed image. You need to add a color table (before the while loop):
QVector<QRgb> sColorTable(256);
for (int i = 0; i < 256; ++i){ sColorTable[i] = qRgb(i, i, i); }
and after you create the QImage from the binary Mat you need to add
img.setColorTable(sColorTable);
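With those two additions, the else branch from the question becomes something like this (a sketch; note it also passes the row stride, since OpenCV rows can be padded):
threshold(temp, binary, thresh, 255, 0);
img = QImage((const unsigned char*)(binary.data),
             binary.cols, binary.rows,
             static_cast<int>(binary.step),   // row stride, in case rows are padded
             QImage::Format_Indexed8);
img.setColorTable(sColorTable);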
Or, as pointed out by @KubaOber, from Qt 5.5 you can also use the format QImage::Format_Grayscale8:
// From Qt 5.5
QImage image(inMat.data, inMat.cols, inMat.rows,
static_cast<int>(inMat.step),
QImage::Format_Grayscale8);
In general, you can wrap all the Mat to QImage conversion in a function. Below is a bug-corrected and updated version of cvMatToQImage, originally found here.
You can then remove all the QImage conversions from your code and use this function instead.
QImage cvMatToQImage(const cv::Mat &inMat)
{
    switch (inMat.type())
    {
        // 8-bit, 4 channel
        case CV_8UC4:
        {
            QImage image(inMat.data,
                         inMat.cols, inMat.rows,
                         static_cast<int>(inMat.step),
                         QImage::Format_ARGB32);
            return image;
        }
        // 8-bit, 3 channel
        case CV_8UC3:
        {
            QImage image(inMat.data,
                         inMat.cols, inMat.rows,
                         static_cast<int>(inMat.step),
                         QImage::Format_RGB888);
            return image.rgbSwapped();
        }
        // 8-bit, 1 channel
        case CV_8UC1:
        {
#if QT_VERSION >= 0x050500
            // From Qt 5.5
            QImage image(inMat.data, inMat.cols, inMat.rows,
                         static_cast<int>(inMat.step),
                         QImage::Format_Grayscale8);
#else
            static QVector<QRgb> sColorTable;
            // only create our color table the first time
            if (sColorTable.isEmpty())
            {
                sColorTable.resize(256);
                for (int i = 0; i < 256; ++i)
                {
                    sColorTable[i] = qRgb(i, i, i);
                }
            }
            QImage image(inMat.data,
                         inMat.cols, inMat.rows,
                         static_cast<int>(inMat.step),
                         QImage::Format_Indexed8);
            image.setColorTable(sColorTable);
#endif
            return image; // without this return the case falls through to the warning below
        }
        default:
            qWarning() << "cvMatToQImage() - cv::Mat image type not handled in switch:" << inMat.type();
            break;
    }
    return QImage();
}
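As a rough usage sketch (assuming the same frame, binary, thresh, and img members as in the question), the conversion branches in run() then collapse to something like:
if (thresh == 0)
{
    img = cvMatToQImage(frame).copy();   // copy() detaches from the Mat's buffer
}
else
{
    Mat gray = frame;
    if (frame.channels() == 3)
        cvtColor(frame, gray, CV_BGR2GRAY);
    threshold(gray, binary, thresh, 255, 0);
    img = cvMatToQImage(binary).copy();
}
emit processedImage(img);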

Qt OpenCV update region in QImage

I'm starting to integrate OpenCV into a Qt application, and I have the following program structure:
QGraphicsView
|
|->QGraphicsPixmapItem (where the captured Image will be)
|
|
|->QGraphicsRectItem (a rectangle that define the roi)
I have the following function to process an image:
void Inspection::Process()
{
    IplImage* m_CapureImage = Capture()->GetImage(); //cvLoadImage("e:\\Desert.jpg");
    IplImage* m_ProcessingImage = cvCreateImage(cvGetSize(m_CapureImage), IPL_DEPTH_8U, 1);
    cvCvtColor(m_CapureImage, m_ProcessingImage, CV_BGR2GRAY);

    // Process all ROI's in inspection
    for (int var = 0; var < ROIs()->rowCount(QModelIndex()); ++var) {
        ROI* roi = ROIs()->data(ROIs()->index(var, 0), Qt::UserRole).value<ROI*>();
        if(roi != 0)
            roi->Process(m_ProcessingImage);
    }

    QImage qImg = IplImage2QImage(m_ProcessingImage);
    m_BackgroundItem->setPixmap(QPixmap::fromImage(qImg));
}
///
QImage IplImage2QImage(const IplImage *iplImage)
{
    int height = iplImage->height;
    int width = iplImage->width;

    if (iplImage->depth == IPL_DEPTH_8U && iplImage->nChannels == 3)
    {
        const uchar *qImageBuffer = (const uchar*)iplImage->imageData;
        QImage img(qImageBuffer, width, height, QImage::Format_RGB888);
        return img.rgbSwapped();
    } else if (iplImage->depth == IPL_DEPTH_8U && iplImage->nChannels == 1) {
        const uchar *qImageBuffer = (const uchar*)iplImage->imageData;
        QImage img(qImageBuffer, width, height, QImage::Format_Indexed8);

        QVector<QRgb> colorTable;
        for (int i = 0; i < 256; i++){
            colorTable.push_back(qRgb(i, i, i));
        }
        img.setColorTable(colorTable);
        return img;
    } else {
        qWarning() << "Image cannot be converted.";
        return QImage();
    }
}
So, my question is:
When I change the position of the ROI and modify a region of the iplImage, what I do now is call again:
QImage qImg = IplImage2QImage(m_ProcessingImage);
m_BackgroundItem->setPixmap(QPixmap::fromImage(qImg));
so the whole iplImage is reloaded. Is there a way to only update the specific ROI of the iplImage in the pixmap?
Thanks
EDIT 1:
I changed the image display implementation: now the QGraphicsPixmapItem only displays the original captured image, and I create a custom QGraphicsRectItem and override its paint method to draw the processed ROI.
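A minimal sketch of what such an item could look like (the class name RoiItem, the stored QImage member, and the red outline are hypothetical illustrations, not taken from the original code):

class RoiItem : public QGraphicsRectItem
{
public:
    explicit RoiItem(const QRectF &roiRect, QGraphicsItem *parent = 0)
        : QGraphicsRectItem(roiRect, parent) {}

    // Call this after processing the ROI; only this item is repainted,
    // not the whole background pixmap.
    void setProcessedImage(const QImage &img)
    {
        m_processed = img;
        update();
    }

protected:
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *)
    {
        if (!m_processed.isNull())
            painter->drawImage(rect(), m_processed); // draw the processed region scaled to the ROI rect
        painter->setPen(Qt::red);                    // keep the ROI outline visible
        painter->setBrush(Qt::NoBrush);
        painter->drawRect(rect());
    }

private:
    QImage m_processed;
};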

How to fill a rounded rectangle with different color segments by area in Qt?

I am new to Qt, and I tried looking for examples online and in the documentation but couldn't find anything. I want something like this:
I tried it using a QLinearGradient, but it isn't quite what I want. I want solid colors.
Here's what I've tried:
void drawBackground ( QPainter * painter, const QStyleOptionViewItem & option, const QModelIndex & index ) const {
    QLinearGradient linearGrad(QPointF(option.rect.x(), 0), QPointF(option.rect.x() + option.rect.width(), 0));
    int total = index.data(StatisticsModel::TotalCount).toInt();
    linearGrad.setColorAt(0.0, QColor(255, 255, 255, 0));

    int sum = 0;
    for (int i = 7; i >= 1; i--) {
        int count = index.data(StatisticsModel::Grade0 + i).toInt();
        if (count) {
            sum += count;
            linearGrad.setColorAt(1.0 - ((double)(total - sum)) / total, Prefs::gradeColor(i));
        }
    }

    QRect rect(option.rect);
    rect.adjust(1, 1, -1, -1);

    QPainterPath path;
    path.addRoundedRect(rect, 2.0, 2.0);
    painter->setBrush(QBrush(linearGrad));
    painter->drawPath(path);
}
Any help would be appreciated.
Well, I guess the best way to color a rounded rectangle like this would be to create a QPainterPath for it, then construct normal rectangles of the specified colors, intersect each one with the initial rounded-rectangle QPainterPath using QPainterPath::intersected, and draw the results with the corresponding solid color brush using drawPath.
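A rough sketch of that idea (the helper name, the segment widths, and the color list are made up for illustration; the clipping via intersected() is the point):

void drawSegmentedRoundedRect(QPainter *painter, const QRect &rect,
                              const QList<QColor> &colors,
                              const QList<int> &widths) // one width per color, summing to rect.width()
{
    QPainterPath rounded;
    rounded.addRoundedRect(rect, 2.0, 2.0);

    painter->setPen(Qt::NoPen);
    int x = rect.x();
    for (int i = 0; i < colors.size(); ++i) {
        QPainterPath segment;
        segment.addRect(x, rect.y(), widths.at(i), rect.height());
        painter->setBrush(colors.at(i));
        painter->drawPath(rounded.intersected(segment)); // clip the plain rectangle to the rounded shape
        x += widths.at(i);
    }
}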

SDL Image scale

I'm using the SDL library, but it doesn't support scaling/resizing a surface, so I downloaded the SDL_image 1.2 and SDL_gfx libraries. My function/code works, but the image appears in bad/low quality.
Say I have an image that is 100x100: if I scale it down to 95x95 or up to 110x110, the quality appears very low, but if I leave it at 100x100 (the same size), it appears in good quality. Images should still appear in good quality when scaled down, but they don't.
My code is:
int drawImage(SDL_Surface* display, const char * filename, int x, int y, int xx, int yy, const double newwidth, const double newheight, int transparent = NULL)
{
    SDL_Surface *image;
    SDL_Surface *temp;

    temp = IMG_Load(filename);
    if (temp == NULL) {
        printf("Unable to load image: %s\n", SDL_GetError());
        return 1;
    }
    image = SDL_DisplayFormat(temp);
    SDL_FreeSurface(temp);

    // Zoom function uses doubles for rates of scaling, rather than
    // exact size values. This is how we get around that:
    double zoomx = newwidth / (float)image->w;
    double zoomy = newheight / (float)image->h;

    // This function assumes no smoothing, so that any colorkeys wont bleed.
    SDL_Surface* sized = zoomSurface( image, zoomx, zoomy, SMOOTHING_OFF );

    // If the original had an alpha color key, give it to the new one.
    if( image->flags & SDL_SRCCOLORKEY )
    {
        // Acquire the original Key
        Uint32 colorkey = image->format->colorkey;
        // Set to the new image
        SDL_SetColorKey( sized, SDL_SRCCOLORKEY, colorkey );
    }

    // The original picture is no longer needed.
    SDL_FreeSurface( image );
    // Set it instead to the new image.
    image = sized;

    SDL_Rect src, dest;
    src.x = xx; src.y = yy; src.w = image->w; src.h = image->h; // size
    dest.x = x; dest.y = y; dest.w = image->w; dest.h = image->h;

    if(transparent == true )
    {
        // Set the color as transparent
        SDL_SetColorKey(image, SDL_SRCCOLORKEY | SDL_RLEACCEL, SDL_MapRGB(image->format, 0x0, 0x0, 0x0));
    }

    SDL_BlitSurface(image, &src, display, &dest);
    return true;
}
drawImage(display, "Image.png", 50, 100, NULL, NULL, 100, 100,true);
An image that is scaled without smoothing is going to have artifacts. You might have better luck if you start with an SVG and render it at the scale that you want. Here's an SVG -> SDL surface library.
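If you stay with SDL_gfx and the image does not rely on a color key (for example, when it carries a real alpha channel instead), one thing worth trying is the zoom filter that zoomSurface already offers; this is only a sketch of that variation, not a guaranteed fix:

// Sketch: interpolated scaling with SDL_gfx (SDL_rotozoom.h). Smoothing blends
// neighboring pixels, so only use it when the surface has no color key to bleed.
SDL_Surface* scaleSmooth(SDL_Surface* image, double newwidth, double newheight)
{
    double zoomx = newwidth / (double)image->w;
    double zoomy = newheight / (double)image->h;
    return zoomSurface(image, zoomx, zoomy, SMOOTHING_ON); // caller frees the result
}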