I'm trying to convert a BGR Mat to an HSV Mat for some detection, but the HSV image keeps coming out blocky. Here is my code in C++:
int main() {
    const int device = 1;
    VideoCapture capture(device);
    Mat input;
    int key;
    if (!capture.isOpened()) {
        printf("No video recording device under device number %i found. Aborting program...\n", device);
        return -1;
    }
    namedWindow("Isolation Test", CV_WINDOW_AUTOSIZE);
    while (1) {
        capture >> input;
        cvtColor(input, input, CV_BGR2HSV);
        imshow("Isolation Test", input);
        key = static_cast<int>(waitKey(10));
        if (key == 27)
            break;
    }
    destroyWindow("Isolation Test");
    return 0;
}
Here is a snapshot of what the output looks like. The input does not look blocky when I comment out the cvtColor call. What is the problem, and what should I do to fix it?
I suggested an explanation in the comments, but decided to actually verify my assumption and explain a little about the HSV color space.
There is no problem in the code, nor in OpenCV's cvtColor. The "blocky" artifacts exist in the RGB image too, but are not noticeable there. All of the JPEG-family compression algorithms produce these artifacts. The reason we usually don't see them is that the algorithms exploit weaknesses in our visual system and compress more aggressively the details we are not very sensitive to.
I converted the image back to RGB using OpenCV's cvtColor and the artifacts magically disappeared (images are below).
The HSV color space in particular has several characteristics that exaggerate these artifacts. The most important of these is probably the fact that wherever the V channel (Value/Luminance) is very low, the H & S channels are very unstable and quite meaningless. In the extreme: [128,255,0] == [0,0,0].
So very small and unnoticeable compression artifacts in the dark areas of the image become very prominent with the false colors of the HSV color space.
If you want to use the HSV color space as a feature space for color comparison, keep in mind that if V is very low, H & S are quite meaningless. The same is true for very low S values, which make the H value meaningless ([0,0,100] == [128,0,100]).
BTW, also keep in mind that the H channel is cyclic: the maximum H value (179 in OpenCV's default 8-bit representation) wraps around to be only a single hue step away from H == 0.
False colors "blocky" HSV image posted in the question
Image converted back to RGB using cvtColor
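To double-check the degenerate cases mentioned above, here is a tiny standalone sketch (not from the original post) that converts the two "equal" HSV pixels back to BGR and prints them:
Mat hsv(1, 2, CV_8UC3), bgr;
hsv.at<Vec3b>(0, 0) = Vec3b(128, 255, 0); // H = 128, S = 255, V = 0
hsv.at<Vec3b>(0, 1) = Vec3b(0, 0, 0);     // H = 0,   S = 0,   V = 0
cvtColor(hsv, bgr, CV_HSV2BGR);
std::cout << bgr << std::endl;            // both pixels come out as [0, 0, 0]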
I think this happens because the imshow function always interprets the image as a simple RGB or BGR image. So you need to convert back from HSV to BGR using cvtColor(input, input, CV_HSV2BGR) before showing the image.
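Applied to the loop in the question, that would look roughly like this (a minimal sketch; capture comes from the question's code, and the HSV data is kept in a separate Mat so the original BGR frame can still be displayed):
Mat input, hsv;
while (1) {
    capture >> input;
    if (input.empty())
        break;
    cvtColor(input, hsv, CV_BGR2HSV);  // do the HSV-based detection on `hsv`
    // ... detection work on `hsv` goes here ...
    imshow("Isolation Test", input);   // display the original BGR frame
    if (waitKey(10) == 27)
        break;
}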
Related
I was looking for a way to increase the saturation of some of my images in code and found the strategy of converting a Mat to HSV, splitting it into channels, and then increasing the S channel by a factor. However, I ran into some issues where the split channels were still in BGR (I think), because the output was just a greener-tinted version of the original.
//Save original image to material
Mat orgImg = imread("sunset.jpg");
//Resize the image to be smaller
resize(orgImg, orgImg, Size(500, 500));
//Display the original image for comparison
imshow("Original Image", orgImg);
Mat g = Mat::zeros(Size(orgImg.cols, orgImg.rows), CV_8UC1);
Mat convertedHSV;
orgImg.convertTo(convertedHSV, COLOR_BGR2HSV);
Mat saturatedImg;
Mat HSVChannels[3];
split(convertedHSV, HSVChannels);
imshow("H", HSVChannels[0]);
imshow("S", HSVChannels[1]);
imshow("V", HSVChannels[2]);
HSVChannels[1] *= saturation;
merge(HSVChannels, 3, saturatedImg);
//Saturate the original image and save it to a new material.
//Display the new, saturated image.
imshow("Saturated", saturatedImg);
waitKey(0);
return 0;
This is my code, and nothing I do actually edits the saturation; all the outputs are just green-tinted photos.
Note: saturation is a public double that is usually set to around 1.5, or whatever value you want.
Do not use cv::Mat::convertTo() here. It changes the bit depth (and representation, int vs. float) of the image, not the color space, which is what you are trying to change.
Using it like that does not throw a warning or error, though, because both the type indicators (CV_8U, ...) and the color-space indicators (COLOR_BGR2HSV, ...) resolve to integers: one is a #define, the other an old-style enum.
Following the example here, it can be done with cv::cvtColor(). Don't forget to convert back before showing the image; imshow() and imwrite() both expect a BGR format.
// Convert image from BGR -> HSV:
// orgImg.convertTo(convertedHSV, COLOR_BGR2HSV); // <- this is wrong, do not use
cvtColor(orgImg, convertedHSV, COLOR_BGR2HSV);    // <- this does the trick instead
// then do the split, multiplication, merge
// [...]
// Convert image back HSV -> BGR:
cvtColor(saturatedImg, saturatedImg, COLOR_HSV2BGR);
//Display the new, saturated image.
imshow("Saturated", saturatedImg);
Note that OpenCV does not care about the color representation when working with a 3-channel Mat: it could be RGB, HSV, or anything else. Only for displaying (or saving to an image format) does the given color space matter.
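Putting the pieces together, a minimal sketch of the whole saturation adjustment using the variable names from the question (saturation being the factor mentioned there):
Mat convertedHSV, saturatedImg;
cvtColor(orgImg, convertedHSV, COLOR_BGR2HSV);       // BGR -> HSV

Mat HSVChannels[3];
split(convertedHSV, HSVChannels);
HSVChannels[1] *= saturation;                        // scale S; 8-bit values saturate at 255
merge(HSVChannels, 3, saturatedImg);

cvtColor(saturatedImg, saturatedImg, COLOR_HSV2BGR); // HSV -> BGR before displaying
imshow("Saturated", saturatedImg);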
I'm trying to get the people detector provided by the OpenCV library running. So far I get decent performance on my iPhone 6, but the detection is super bad and almost never correct, and I'm not really sure why, since you can find example videos using the same default HOG descriptor with much better detection.
Here is the code:
- (void)processImage:(Mat&)image {
    cv::Mat cvImg, result;
    cvtColor(image, cvImg, COLOR_BGR2HSV);
    cv::vector<cv::Rect> found, found_filtered;
    hog.detectMultiScale(cvImg, found, 0, cv::Size(4,4), cv::Size(8,8), 1.5, 0);
    size_t i;
    for (i = 0; i < found.size(); i++) {
        cv::Rect r = found[i];
        rectangle(image, r.tl(), r.br(), Scalar(0,255,0), 2);
    }
}
The video input comes from the iPhone camera itself and "processImage:" is called for every frame. For the HOGDescriptor I use the default people detector:
_hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
I appreciate any help. :)
I'm new to OpenCV, so take this with a grain of salt:
The line cvtColor(image, cvImg, COLOR_BGR2HSV); converts the image from the BGR color space to the HSV color space. Essentially, it changes each pixel from being represented by how much blue, green, and red it contains to being represented by hue (the color), saturation (how much color), and value (how bright). The HOG descriptor operates on a BGR image, not an HSV image. You need to pass it a CV_8UC3 image: an image with 3 channels per pixel (C3), e.g. BGR, and an 8-bit unsigned number for each channel (8U). So check what you are passing into processImage(): it should already be of that type; if not, find out what type it is and convert it to CV_8UC3, using cvtColor() for the color-space part.
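As a rough sketch of what that could look like (assuming hog is the cv::HOGDescriptor member that was set up with getDefaultPeopleDetector(), and image is the CV_8UC3 BGR frame handed to processImage:; the detectMultiScale parameters here are common defaults, not tuned values):
// Run the detector directly on the BGR frame; do not convert to HSV first.
std::vector<cv::Rect> found;
hog.detectMultiScale(image, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);
for (size_t i = 0; i < found.size(); i++)
    cv::rectangle(image, found[i].tl(), found[i].br(), cv::Scalar(0, 255, 0), 2);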
I'm trying to have a webcam take a picture of someone's face in BGR, convert the picture into HSV, and analyze these HSV values that will later be used in a skin detection algorithm. Unfortunately, the picture seems to be analyzed in BGR, even after I try to convert it using cvtColor().
I use the code below to test whether or not I'm using the right color space. Note the part where I try to set saturation and value to 0:
Mat faceROI = findFace(first); //basic Mat, region of interest for face (code not included)
Mat temp;
faceROI.convertTo(temp, CV_8UC3); //making sure this has right no. of channels and such
CvScalar s;
IplImage face_ipl = temp; //new header
IplImage* aNew = cvCreateImage(cvGetSize(&face_ipl), face_ipl.depth, 3);
cvCvtColor(&face_ipl, aNew, CV_BGR2HSV);
for (int x = 0; x < faceROI.cols; x++) {
    for (int y = 0; y < faceROI.rows; y++) {
        s = cvGet2D(aNew, x, y);
        //vvvvvvvvvvv
        s.val[1] = 0; //should be saturation
        s.val[2] = 0; //should be value
        //^^^^^^^^^^^
        cvSet2D(aNew, x, y, s);
    }
}
Mat again(aNew); //<--- is this where something is set back to BGR?
imshow("white", again);
cvReleaseImage(&aNew);
This produces a completely blue picture of my face, so it seems like I'm editing the G and R channels of a BGR image instead of the S and V channels of an HSV image. (I'd post the image, but this is my first post so I don't have enough reputation yet.)
Does anybody know why this is happening? Any and all thoughts are appreciated.
You're mixing the C++ Mat style with the old C IplImage* style, which makes it confusing to see what exactly is going on. Here is the code to turn inputImage into HSV:
Mat fullImageHSV;
cvtColor(inputImage, fullImageHSV, CV_BGR2HSV);
Be aware that in OpenCV's HSV representation H ranges from 0 to 180 while S and V range from 0 to 255; other programs tend to use different ranges. Also note that OpenCV cannot show HSV images normally: imshow interprets the data as RGB/BGR, which distorts the colors.
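Applied to the test in the question, a Mat-only sketch (assuming faceROI is a CV_8UC3 BGR image) could look like this:
cv::Mat hsv;
cv::cvtColor(faceROI, hsv, CV_BGR2HSV);

for (int y = 0; y < hsv.rows; y++) {
    for (int x = 0; x < hsv.cols; x++) {
        cv::Vec3b &px = hsv.at<cv::Vec3b>(y, x); // px[0] = H, px[1] = S, px[2] = V
        px[1] = 0;                               // saturation
        px[2] = 0;                               // value; with V == 0 the result should be black
    }
}

cv::Mat back;
cv::cvtColor(hsv, back, CV_HSV2BGR);             // convert back before displaying
cv::imshow("white", back);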
I wanted to know if there is any OpenCV function in C++ to adjust the brightness and contrast of a video / frame. You can convert from the BGR color space to the HSV color space and discard the last component, V (value/luminance), to make the algorithm less sensitive to the lighting conditions in the video, but how can I do it?
I was thinking of using something like cvAddS(frame, cvScalar(-50, -50, -50), frame) to decrease the brightness. cvAddS and cvScalar work well in C, but how can I do that in C++? I tried AddS and Scalar in my program, but they don't work with C++.
int main() {
    VideoCapture video(1);
    if (!video.isOpened()) {
        cerr << "No video input" << endl;
        return -1;
    }
    namedWindow("Video", CV_WINDOW_AUTOSIZE);
    for (;;) {
        Mat frame;
        video >> frame;
        if (!frame.data)
            break;
        Mat frame2;
        //I USE AddS AND Scalar TO DECREASE THE BRIGHTNESS
        AddS(frame, Scalar(-50, -50, -50), frame2);
        //BUT DON'T WORK WITH C++
        imshow("Video", frame2);
        int c = waitKey(20);
        if (c >= 0)
            break;
    }
}
Use a matrix expression:
cv::Mat frame2 = frame + cv::Scalar(-50, -50, -50);
You might also want to adjust the contrast with histogram equalization. Convert your RGB image to HSV and apply cv::equalizeHist() to the V channel.
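A minimal sketch of that equalization (assuming frame is the 8-bit BGR image from the question):
cv::Mat hsv, equalized;
cv::cvtColor(frame, hsv, CV_BGR2HSV);

std::vector<cv::Mat> channels;
cv::split(hsv, channels);
cv::equalizeHist(channels[2], channels[2]);  // equalize only the V channel
cv::merge(channels, hsv);

cv::cvtColor(hsv, equalized, CV_HSV2BGR);    // back to BGR for display/processing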
Brightness and contrast are usually corrected using a linear transformation of the pixel values. Brightness corresponds to the additive shift and contrast corresponds to a multiplicative factor.
In general, given a pixel value v, the new value after the correction would be v' = a*v + b.
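In the C++ API that linear correction maps directly onto cv::Mat::convertTo; a small sketch (the values of a and b here are just illustrative, and frame is the BGR frame from the question):
double a = 1.2;  // contrast (gain)
double b = -50;  // brightness (bias)
cv::Mat corrected;
frame.convertTo(corrected, -1, a, b);  // corrected = a*frame + b, saturated to [0, 255]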
I'm using a combination of OpenKinect and OpenCV libraries to apply Haar-like feature recognition to both RGB and depth images.
I can get the live feed and successfully detect objects using the RGB feed; however, the depth feed is giving me massive problems.
After the initial frame, the depth frame does not seem to update at all.
The depth callback function that provides the raw data is as follows:
//depth callback function
void depth_cb(freenect_device *dev, void *v_depth, uint32_t timestamp)
{
    if (got_depth == 0) {
        pthread_mutex_lock(&buf_mutex);
        //copy to OpenCV buffer
        memcpy(depthMat.data, v_depth, (640*480*2));
        // depthMat.convertTo(depthFrame, CV_8UC1, 256.0/2048.0);
        got_depth++;
        pthread_cond_signal(&frame_cond);
        pthread_mutex_unlock(&buf_mutex);
    }
}
The Mats used are initialised like so:
cv::Mat depthMat(cv::Size(640,480),CV_16UC1);
cv::Mat depthFrame(cv::Size(640,480),CV_8UC1);
And in the main function I try to use them like so:
depthMat.convertTo(depthFrame, CV_8UC1, 255.0/2048.0);
imshow("rgb", rgbMat);
imshow("depth-pre-conversion", depthMat);
imshow("depth", depthFrame);
IplImage depthImage = depthFrame;
IplImage rgbImage = rgbMat;
detect_and_draw(&depthImage);
'Depth-pre-conversion' is an almost black frame; you can just about make out the depth image in it. It doesn't update.
'Depth' is the lighter version after the conversion to 8 bits; it also doesn't move.
'rgb' is the live RGB feed, which works with no problem (although it is BGR rather than RGB, but I'll get around to fixing that at some point; it's less important right now).
I'd appreciate any advice and help you can offer.