Currently I am using OpenCV to process images from an AVCaptureSession. The app takes these images and draws circles (via cv::circle) on the detected blobs. The tracking is working, but when I draw the circle it comes out as a gray, distorted circle when it should be green. Is it that OpenCV's drawing functions don't work properly in iOS apps, or is there something I can do to fix it?
Any help would be appreciated.
Here is a screen shot: (Ignore that giant green circle on the bottom)
The cv::Circle is around the outside of the black circle.
Here is where I converted the CMSampleBuffer into a cv::Mat:
CVImageBufferRef pixelBuff = CMSampleBufferGetImageBuffer(sampleBuffer);
cv::Mat cvMat;
CVPixelBufferLockBaseAddress(pixelBuff, 0);
int bufferWidth = CVPixelBufferGetWidth(pixelBuff);
int bufferHeight = CVPixelBufferGetHeight(pixelBuff);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuff);
cvMat = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel);
cv::Mat grayMat;
cv::cvtColor(cvMat, grayMat, CV_BGR2GRAY);
CVPixelBufferUnlockBaseAddress(pixelBuff, 0);
This is the cv::circle call:
if (keypoints.size() > 0) {
    cv::Point p(keypoints[0].pt.x, keypoints[0].pt.y);
    printf("x: %f, y: %f\n", keypoints[0].pt.x, keypoints[0].pt.y);
    cv::circle(cvMat, p, keypoints[0].size / 2, cv::Scalar(0, 255, 0), 2, 8, 0);
}
Keypoints is the vector of blobs that have been detected.
Related
I'm new to image processing and development. I am using OpenCV, and I need to extract a circular region from a given image: I already have the circle's x, y coordinates and its radius. To do that I used the following code, but the saved patch is rectangular, so it contains unwanted black pixels outside the circle. How do I save only the circle?
My code:
double save_key_points(Mat3b img, double x, double y, double radius, string filename, string foldername)
{
    // print image height and width first and check.
    Vec3f circ(x, y, radius);
    // Draw the mask: white circle on black background
    Mat1b mask(img.size(), uchar(0));
    circle(mask, Point(circ[0], circ[1]), circ[2], Scalar(255), CV_FILLED);
    // Compute the bounding box
    Rect bbox(circ[0] - circ[2], circ[1] - circ[2], 2 * circ[2], 2 * circ[2]);
    // Create a black image
    Mat3b res(img.size(), Vec3b(0, 0, 0));
    // Copy only the image under the white circle to the black image
    img.copyTo(res, mask);
    // Crop according to the roi
    res = res(bbox);
    // remove the black, but this doesn't work
    Mat tmp, alpha;
    threshold(res, alpha, 100, 255, THRESH_BINARY);
    // Save the image
    string path = "C:\\Users\\bb\\Desktop\\test_results\\test_case8\\" + foldername + filename + ".png";
    imwrite(path, res);
    Mat keypointimg = imread(path, CV_LOAD_IMAGE_GRAYSCALE);
    // print the coordinates of one patch
    cordinate_print(keypointimg, radius);
}
(Here I want the circle saved without the black background.)
If I understand what you are asking correctly, you want to remove the black from the image, and you can use a mask for that. The mask can highlight anything of a certain colour, or in your case that shade of black. Check out the link below for an implementation and see if it is what you are looking for. It is in Python but can be easily adapted.
Image Filtering
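If you adapt it to C++, one common approach (this is my own sketch, not taken from the linked answer) is to skip the thresholding entirely and instead save a 4-channel PNG, using the circle mask as the alpha channel so everything outside the circle becomes transparent instead of black. Roughly:

#include <opencv2/opencv.hpp>

// Sketch: save only the circular region, with a transparent background.
// (x, y, radius) describe the circle, as in the question; the function name is made up.
void save_circle_with_alpha(const cv::Mat3b& img, double x, double y, double radius, const std::string& path)
{
    // White circle on a black background, same as the original mask
    cv::Mat1b mask(img.size(), uchar(0));
    cv::circle(mask, cv::Point(cvRound(x), cvRound(y)), cvRound(radius), cv::Scalar(255), -1); // -1 = filled

    // Crop both the image and the mask to the circle's bounding box (no bounds checking, like the original)
    cv::Rect bbox(cvRound(x - radius), cvRound(y - radius), cvRound(2 * radius), cvRound(2 * radius));
    cv::Mat3b patch = img(bbox);
    cv::Mat1b alpha = mask(bbox);

    // Merge B, G, R and the mask into a BGRA image; outside the circle the alpha is 0 (transparent)
    std::vector<cv::Mat> channels;
    cv::split(patch, channels);
    channels.push_back(cv::Mat(alpha));
    cv::Mat bgra;
    cv::merge(channels, bgra);

    // PNG keeps the alpha channel, so the saved file has no black background
    cv::imwrite(path, bgra);
}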
I need an explanation of the following loop for face detection in OpenCV:
VideoCapture capture("DSC_0772.avi"); //-1, 0, 1 device id
Mat cap_img,gray_img;
vector<Rect> faces, eyes;
while(1)
{
capture >> cap_img;
waitKey(10);
cvtColor(cap_img, gray_img, CV_BGR2GRAY);
cv::equalizeHist(gray_img,gray_img);
face_cascade.detectMultiScale(gray_img, faces, 1.1, 5, CV_HAAR_SCALE_IMAGE | CV_HAAR_DO_CANNY_PRUNING, cvSize(0,0), cvSize(300,300));
for(int i=0; i < faces.size();i++)
{
Point pt1(faces[i].x+faces[i].width, faces[i].y+faces[i].height);
Point pt2(faces[i].x,faces[i].y);
rectangle(cap_img, pt1, pt2, cvScalar(0,255,0), 2, 8, 0);
}
I don't understand faces[i].x and the other parameters in the for loop, or how they are chosen for the face detection.
Thanks for the help.
faces is a std::vector of Rect, so the for loop goes through each Rect in the vector and creates two points. A Rect stores not only the x and y of its top-left corner but also the width and height of the rectangle. So faces[i].x + faces[i].width is the x coordinate of the top-left corner plus the width, and faces[i].y + faces[i].height is the y coordinate plus the height; together they give the opposite (bottom-right) corner of the rectangle. Those two points, plus the image, are then fed into the rectangle() function.
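Just to illustrate (my own addition, reusing cap_img and faces from the code in the question), cv::Rect can also give you the corners directly, and rectangle() even has an overload that takes the Rect itself:

for (size_t i = 0; i < faces.size(); i++)
{
    // tl() and br() return the top-left and bottom-right corners of the Rect
    rectangle(cap_img, faces[i].tl(), faces[i].br(), Scalar(0, 255, 0), 2, 8, 0);
    // or simply pass the Rect: rectangle(cap_img, faces[i], Scalar(0, 255, 0), 2, 8, 0);
}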
I would like to know how to draw semi-transparent shapes in OpenCV, similar to those in the image below (from http://tellthattomycamera.wordpress.com/)
I don't need those fancy circles, but I would like to be able to draw a rectangle, e.g., on a 3-channel color image and specify the transparency of the rectangle, something like
rectangle (img, Point (100,100), Point (300,300), Scalar (0,125,125,0.4), CV_FILLED);
where 0,125,125 is the color of the rectangle and 0.4 specifies the transparency.
However OpenCV doesn't have this functionality built into its drawing functions. How can I draw shapes in OpenCV so that the original image being drawn on is partially visible through the shape?
The image below illustrates transparency using OpenCV. You need to do an alpha blend between the image and the rectangle. Below is the code for one way to do this.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main( int argc, char** argv )
{
    cv::Mat image = cv::imread("IMG_2083s.png");
    cv::Mat roi = image(cv::Rect(100, 100, 300, 300));
    cv::Mat color(roi.size(), CV_8UC3, cv::Scalar(0, 125, 125));
    double alpha = 0.3;
    cv::addWeighted(color, alpha, roi, 1.0 - alpha, 0.0, roi);
    cv::imshow("image", image);
    cv::waitKey(0);
}
In OpenCV 3 this code worked for me:
cv::Mat source = cv::imread("IMG_2083s.png");
cv::Mat overlay;
double alpha = 0.3;
// copy the source image to an overlay
source.copyTo(overlay);
// draw a filled, yellow rectangle on the overlay copy
cv::rectangle(overlay, cv::Rect(100, 100, 300, 300), cv::Scalar(0, 125, 125), -1);
// blend the overlay with the source image
cv::addWeighted(overlay, alpha, source, 1 - alpha, 0, source);
Source/Inspired by: http://bistr-o-mathik.org/2012/06/13/simple-transparency-in-opencv/
Adding to Alexander Taubenkorb's answer, you can draw random (semi-transparent) shapes by replacing the cv::rectangle line with the shape you want to draw.
For example, if you want to draw a series of semi-transparent circles, you can do it as follows:
cv::Mat source = cv::imread("IMG_2083s.png"); // loading the source image
cv::Mat overlay; // declaring overlay matrix, we'll copy source image to this matrix
double alpha = 0.3; // defining opacity value, 0 means fully transparent, 1 means fully opaque
source.copyTo(overlay); // copying the source image to overlay matrix, we'll be drawing shapes on overlay matrix and we'll blend it with original image
// change this section to draw the shapes you want to draw
vector<Point>::const_iterator points_it; // declaring points iterator
for( points_it = circles.begin(); points_it != circles.end(); ++points_it ) // circles is a vector of points, containing center of each circle
circle(overlay, *points_it, 1, (0, 255, 255), -1); // drawing circles on overlay image
cv::addWeighted(overlay, alpha, source, 1 - alpha, 0, source); // blending the overlay (with alpha opacity) with the source image (with 1-alpha opacity)
For C++, I personally like the readability of overloaded operators for scalar multiplication and matrix addition:
... same initial lines as other answers above ...
// blend the overlay with the source image
source = source * (1.0 - alpha) + overlay * alpha;
I am currently working on face detection and, after that, eyes, mouth, nose, and other facial features. For the above detection I have used Haar cascades (frontal face, eyes, right ear, left ear, and mouth). Everything works perfectly if the face is frontal and straight, but I am not getting good results if the face is seen from the side or is rotated. For the side view I have used lbpcascade_profile.xml (it works only for the right side of the face), but for a rotated face I am not able to detect the face at all. Can anyone help me in this context? I am adding my code here for better understanding.
P.S.: Thanks in advance, and pardon me for the childish question (it might be because I am very new to programming).
void detectAndDisplay( Mat frame )
{
    // create a vector array to store the faces found
    std::vector<Rect> faces;
    Mat frame_gray;
    bool mirror_image = false;

    // convert the frame image into a gray image
    cvtColor( frame, frame_gray, CV_BGR2GRAY );
    // equalize the gray image
    equalizeHist( frame_gray, frame_gray );

    // find the frontal faces and store them in the vector array
    face_cascade1.detectMultiScale( frame_gray,
                                    faces,
                                    1.1, 2,
                                    0 | CV_HAAR_SCALE_IMAGE | CV_HAAR_FIND_BIGGEST_OBJECT,
                                    Size(40, 40),
                                    Size(200, 200) );

    // find a right-side profile face and store it in the face vector
    if( !faces.size() )
    {
        profileface_cascade.detectMultiScale( frame_gray,
                                              faces,
                                              1.2, 3,
                                              0 | CV_HAAR_SCALE_IMAGE | CV_HAAR_FIND_BIGGEST_OBJECT,
                                              Size(40, 40),
                                              Size(200, 200) );
    }

    // check whether a left-side profile face exists by flipping the frame and running the profile cascade again
    if( !faces.size() )
    {
        cv::flip( frame_gray, frame_gray, 1 );
        profileface_cascade.detectMultiScale( frame_gray,
                                              faces,
                                              1.2, 3,
                                              0 | CV_HAAR_SCALE_IMAGE | CV_HAAR_FIND_BIGGEST_OBJECT,
                                              Size(40, 40),
                                              Size(200, 200) );
        mirror_image = true;
    }

    // if the frame was flipped, flip it back before drawing
    if( mirror_image and faces.size() )
    {
        // flip the frame
        cv::flip( frame_gray, frame_gray, 1 );
    }

    // check whether any face is present in the frame
    if( faces.size() )
    {
        // draw a rectangle for the detected face
        rectangle( frame, faces[0], cvScalar(0, 255, 0, 0), 1, 8, 0 );
    }
    else
        image_not_found++;

    imshow( "Face Detection", frame );
}
Flandmark will be your friend, then! I've been using it quite often recently, and it has turned out to be a successful tool for head pose estimation, and in particular for detecting a "rotated" face. It works reasonably well over a range of angles: tilt (rotation around the axis parallel to the image's width) from -30 to +30 degrees, and pan (rotation around the axis parallel to the image's height) from -45 to +45 degrees. It is also a robust solution.
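For reference, basic flandmark usage looks roughly like the sketch below. I am writing this from memory of flandmark's C interface, so treat the function names, argument order, and model file name as assumptions to be checked against flandmark_detector.h and the library's example code:

#include "flandmark_detector.h"

// Rough sketch, to be verified against the flandmark headers.
// face is the cv::Rect returned by a Haar/LBP detector; img_gray is a grayscale IplImage*.
FLANDMARK_Model *model = flandmark_init("flandmark_model.dat");               // model file ships with flandmark
double *landmarks = (double *)malloc(2 * model->data.options.M * sizeof(double)); // (x, y) per landmark

int bbox[4] = { face.x, face.y, face.x + face.width, face.y + face.height };
flandmark_detect(img_gray, bbox, model, landmarks);

// landmarks now holds the facial point coordinates, which can be used
// to estimate the head pose (for example with cv::solvePnP).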
I was wondering how it is possible to create effects like a glowing ball or a glowing line in my video frames in OpenCV. Any tips on where I can start, or what I can use to create simple animations in my output?
Thanks in advance!
These effects are simple to accomplish with primitive OpenCV pixel operations. Let's say you have your ball identified as a white region in a separate mask image mask. Blur this mask with GaussianBlur and then combine the result with your source image img. For a glow effect, you probably want something like Photoshop's Screen blending mode, which will only brighten the image:
Result Color = 255 - [((255 - Top Color)*(255 - Bottom Color))/255]
The real key to the "glow" effect is using the pixels in the underlying layer as the screen layer. This translates to OpenCV:
cv::Mat mask, img; // assumed to have the same size and type
...
mask = mask.mul(img, 1.0 / 255.0); // fill the mask region (0/255) with pixels from the original image
cv::GaussianBlur(mask, mask, cv::Size(0,0), 4); // blur the mask with a sigma of 4 pixels
mask = mask * 0.50; // a 50% opacity glow
img = 255 - ((255 - mask).mul(255 - img) / 255); // screen blend; mul is a per-element multiply
I did not test this code, so I might have something wrong here. Color Dodge is also a useful blending mode for glows.
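For instance, Color Dodge (result = base / (1 - blend)) could be sketched with cv::divide, which applies a scale factor and saturates the result for you. This is my own addition, reusing the img/mask naming from above:

// Color Dodge blend: result = base * 255 / (255 - blend), saturated to [0, 255]
// (img is the underlying image, mask the blurred glow layer, same size and type as above)
cv::Mat dodge;
cv::divide(img, cv::Scalar::all(255) - mask, dodge, 255.0); // per-element divide with scale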
More here: How does photoshop blend two images together?
I wrote a version of the effect that can run both on the CPU and on hardware-accelerated devices (e.g. a GPU). If src is a cv::UMat and you have OpenCL support, it will run using OpenCL; otherwise, if src is a cv::Mat, it will run good old CPU code.
template<typename Tmat>
void glow_effect(Tmat& src, int ksize = 100) {
    static Tmat resize;
    static Tmat blur;
    static Tmat src16;

    cv::bitwise_not(src, src);
    // Resize for some extra performance
    cv::resize(src, resize, cv::Size(), 0.5, 0.5);
    // Cheap blur
    cv::boxFilter(resize, resize, -1, cv::Size(ksize, ksize), cv::Point(-1, -1), true, cv::BORDER_REPLICATE);
    // Back to original size
    cv::resize(resize, blur, cv::Size(VIDEO_WIDTH, VIDEO_HEIGHT));
    // Multiply the src image with a blurred version of itself
    cv::multiply(src, blur, src16, 1, CV_16U);
    // Normalize and convert back to CV_8U
    cv::divide(src16, cv::Scalar::all(255.0), src, 1, CV_8U);
    cv::bitwise_not(src, src);
}
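A usage sketch (my own addition; it assumes the input frame already has the VIDEO_WIDTH x VIDEO_HEIGHT size used inside the function): the same call either runs on the CPU or goes through OpenCV's Transparent API, depending only on the matrix type you pass in.

cv::Mat frame = cv::imread("frame.png"); // assumed to be VIDEO_WIDTH x VIDEO_HEIGHT

glow_effect(frame);          // plain CPU path

cv::UMat frame_gpu;
frame.copyTo(frame_gpu);     // upload; with OpenCL available, processing stays on the device
glow_effect(frame_gpu);      // OpenCL path via the Transparent API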