I want to find the non-white area of an image from a camera using OpenCV. I can already find circles using images from my web cam. I want to make a grid or something so I can determine what percent of the image is not white. Any ideas?
If you want to find the percentage of pixels in your image which are not white, why not just count all the pixels which are not white and divide by the total number of pixels in the image?
Code in C (using the old IplImage API):
#include <stdio.h>
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
int main()
{
// Acquire the image (here it is read from a file)
IplImage* img = cvLoadImage("image.bmp",1);
int i,j,k;
// Variables to store image properties
int height,width,step,channels;
uchar *data;
// Variables to store the number of white pixels and a flag
int WhiteCount,bWhite;
// Acquire image info
height = img->height;
width = img->width;
step = img->widthStep;
channels = img->nChannels;
data = (uchar *)img->imageData;
// Begin
WhiteCount = 0;
for(i=0;i<height;i++)
{
for(j=0;j<width;j++)
{ // Go through each channel of the pixel (OpenCV stores them as B, G, R) to see if it equals 255
bWhite = 0;
for(k=0;k<channels;k++)
{ // A pixel counts as white only if every channel equals 255
if (data[i*step+j*channels+k]==255) bWhite = 1;
else
{
bWhite = 0;
break;
}
}
if(bWhite == 1) WhiteCount++;
}
}
printf("Non-white percentage: %f%%\n", 100.0 - 100.0*WhiteCount/(height*width));
cvReleaseImage(&img);
return 0;
}
If your image is already binary (just black and white), you can simply use cv::countNonZero and subtract the result from the total pixel count.
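For a color image, here is a minimal C++ sketch of the same count using cv::inRange to build a white mask (the file name is a placeholder, and you may want lower bounds below 255 to catch near-white pixels):
#include <opencv2/opencv.hpp>
#include <cstdio>
int main()
{
    // Load the image (path is a placeholder)
    cv::Mat img = cv::imread("image.bmp");
    if (img.empty()) return -1;
    // Mask pixels where all three channels equal 255 (inRange bounds are inclusive)
    cv::Mat whiteMask;
    cv::inRange(img, cv::Scalar(255, 255, 255), cv::Scalar(255, 255, 255), whiteMask);
    int total = img.rows * img.cols;
    int whiteCount = cv::countNonZero(whiteMask);
    std::printf("Non-white percentage: %f%%\n", 100.0 * (total - whiteCount) / total);
    return 0;
}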
I've been able to find/create some code that lets me open the depth and color streams from an OpenNI-enabled camera (an Orbbec Astra S, to be specific). But unlike the standard OpenNI Viewer, my stream displays the closest points as darkest and farther points as lighter colors.
How would I change this around so that the points closest to the camera are shown as lighter (whites) and points farther away are shown as dark?
#include "stdafx.h"
#include "OpenNI.h"
#include <iostream>
#include <iomanip>
#include <fstream>
#include <string>
#include <array>
// OpenCV Header
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
using namespace std;
using namespace cv;
using namespace openni;
//Recorder
int main(int argc, char** argv)
{
Device device;
VideoStream DepthStream,ColorStream;
VideoFrameRef DepthFrameRead,ColorFrameRead;
const char* deviceURI = openni::ANY_DEVICE;
if (argc > 1)
{
deviceURI = argv[1];
}
Status result = STATUS_OK;
result = OpenNI::initialize();
result = device.open(deviceURI);
result = DepthStream.create(device, openni::SENSOR_DEPTH);
result = DepthStream.start();
result = ColorStream.create(device, openni::SENSOR_COLOR);
result = ColorStream.start();
device.setImageRegistrationMode(ImageRegistrationMode::IMAGE_REGISTRATION_DEPTH_TO_COLOR);
int framenum = 0;
Mat frame;
while (true)
{
if (DepthStream.readFrame(&DepthFrameRead) == STATUS_OK)
{
cv::Mat cDepthImg(DepthFrameRead.getHeight(), DepthFrameRead.getWidth(),
CV_16UC1, (void*)DepthFrameRead.getData());
cv::Mat c8BitDepth;
cDepthImg.convertTo(c8BitDepth, CV_8U, 255.0 / (8000));
cv::imshow("Orbbec", c8BitDepth);
}
if (ColorStream.readFrame(&ColorFrameRead) == STATUS_OK)
{
const openni::RGB888Pixel* imageBuffer = (const openni::RGB888Pixel*)ColorFrameRead.getData();
frame.create(ColorFrameRead.getHeight(), ColorFrameRead.getWidth(), CV_8UC3);
memcpy(frame.data, imageBuffer, 3 * ColorFrameRead.getHeight()*ColorFrameRead.getWidth() * sizeof(uint8_t));
cv::cvtColor(frame, frame, CV_BGR2RGB); //this will put colors right
cv::imshow("frame", frame);
framenum++;
}
if (cv::waitKey(30) >= 0)
{
break;
}
}
DepthStream.destroy();
ColorStream.destroy();
device.close();
OpenNI::shutdown();
return 0;
}
-------------------EDIT-------------------
These images are originally read as 16-bit images, which look like this (note how dark it is):
But after converting to an 8-bit image, they look as follows:
The image you attached shows that the sensor captures data by directly encoding each object's distance (in mm) in the depth value. This is quite normal for such depth cameras. What we want for display instead is higher values for objects closer to the sensor (the exact opposite of the depth encoding, but useful for displaying).
One can devise a simple depth adjustment function if the operating range of the sensor is known. For the Astra S, the operating range is 0.35 m to 2.5 m, so what we want is a function that maps 0.35 m -> 2.5 m and 2.5 m -> 0.35 m.
This is pretty straightforward; the only caveat is that you have to handle invalid depth pixels (depth == 0) yourself. Here is the code for doing this:
#include "include\opencv\cv.h"
#include "include\opencv\highgui.h"
cv::Mat adjustDepth(const cv::Mat& inImage)
{
// from https://orbbec3d.com/product-astra/
// Astra S has a depth in the range 0.35m to 2.5m
int maxDepth = 2500;
int minDepth = 350; // in mm
cv::Mat retImage = inImage.clone(); // deep copy, so the input image is not modified
for(int j = 0; j < retImage.rows; j++)
for(int i = 0; i < retImage.cols; i++)
{
if(retImage.at<ushort>(j, i))
retImage.at<ushort>(j, i) = maxDepth - (retImage.at<ushort>(j, i) - minDepth);
}
return retImage;
}
int main ()
{
cv::Mat inImage;
inImage = cv::imread("testImage.png", cv::IMREAD_UNCHANGED);
cv::Mat adjustedDepth = adjustDepth(inImage);
cv::Mat dispImage;
adjustedDepth.convertTo(dispImage, CV_8UC1, 255.0f/2500.0f);
cv::imshow(" ", dispImage);
//cv::imwrite("testImageAdjusted.png", adjustedDepth);
//cv::imwrite("savedImage.png", dispImage);
cv::waitKey(0);
return 0;
}
Here is the output renormalized depth image:
If one wants to further explore what happens in such a readjustment function, one can have a look at the histograms of the image both before and after applying the adjustment.
Histogram for input depth image (D):
Histogram for negative input depth image (-D):
Histogram for (maxVal-(D-minVal)):
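For reference, here is a minimal sketch of how such a histogram can be computed from the 16-bit depth image with cv::calcHist (the bin count is an arbitrary choice, and the range matches the 0-2500 mm span used above):
#include <opencv2/opencv.hpp>
// Compute a histogram of a CV_16UC1 depth image over 0..2500 mm
cv::Mat depthHistogram(const cv::Mat& depth)
{
    int histSize = 250;              // 10 mm per bin (assumption)
    float range[] = { 0, 2500 };     // depth span in mm
    const float* histRange = range;
    const int channels[] = { 0 };
    cv::Mat hist;
    cv::calcHist(&depth, 1, channels, cv::Mat(), hist, 1, &histSize, &histRange);
    return hist;                     // CV_32F column vector of bin counts
}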
Hope this answers your question.
I am making a function using C++ and OpenCV that will detect the color of a pixel in an image, determine what color range it is in, and replace it with a generic color. For example, green could range from dark green to light green; the program would determine that it's still green and replace it with a simple green, making the output image look very simple. Everything is set up, but I'm having trouble defining the characteristics of each range, and I'm curious whether anyone knows of a formula that, given BGR values, could determine the overall color of a pixel. If not, I'll have to do a lot of experimentation and make one myself, but if something already exists that would save time. I've done plenty of research and haven't found anything so far.
If you want to make your image simpler (i.e. with less colors), but good looking, you have a few options:
A simple approach would be to divide the image (integer division) by a factor N, and then multiply it by N again.
Or you can reduce the image to K colors using a clustering algorithm such as kmeans, shown here, or the median-cut algorithm.
Original image:
Reduced colors (quantized, N = 64):
Reduced colors (clustered, K = 8):
Code Quantization:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main()
{
Mat3b img = imread("path_to_image");
imshow("Original", img);
uchar N = 64;
img /= N;
img *= N;
imshow("Reduced", img);
waitKey();
return 0;
}
Code kmeans:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main()
{
Mat3b img = imread("path_to_image");
imshow("Original", img);
// Cluster
int K = 8;
int n = img.rows * img.cols;
Mat data = img.reshape(1, n);
data.convertTo(data, CV_32F);
vector<int> labels;
Mat1f colors;
kmeans(data, K, labels, cv::TermCriteria(), 1, cv::KMEANS_PP_CENTERS, colors);
for (int i = 0; i < n; ++i)
{
data.at<float>(i, 0) = colors(labels[i], 0);
data.at<float>(i, 1) = colors(labels[i], 1);
data.at<float>(i, 2) = colors(labels[i], 2);
}
Mat reduced = data.reshape(3, img.rows);
reduced.convertTo(reduced, CV_8U);
imshow("Reduced", reduced);
waitKey();
return 0;
}
Yes, what you probably mean by "overall color of a pixel" is either the "hue" or the "saturation" of the color.
So you want a formula that transforms RGB to HSV (Hue, Saturation, Value); you would then only be interested in the Hue or Saturation values.
See: Algorithm to convert RGB to HSV and HSV to RGB in range 0-255 for both
EDIT: You might need to max out the saturation, then convert back to RGB and inspect which values are highest (for instance (255,0,0), (255,0,255), etc.).
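As a sketch of the idea in OpenCV (note that for 8-bit images OpenCV stores hue as degrees/2, i.e. 0-179; the bucket boundaries below are illustrative guesses, not canonical values):
#include <opencv2/opencv.hpp>
#include <string>
// Map a BGR pixel to a coarse color name via its hue
std::string coarseColor(const cv::Vec3b& bgr)
{
    cv::Mat3b pixel(1, 1, bgr);
    cv::Mat3b hsv;
    cv::cvtColor(pixel, hsv, cv::COLOR_BGR2HSV);
    int h = hsv(0, 0)[0];   // 0..179 (degrees / 2)
    int s = hsv(0, 0)[1];
    int v = hsv(0, 0)[2];
    if (v < 40) return "black";                      // too dark to have a meaningful hue
    if (s < 40) return v > 200 ? "white" : "gray";   // too unsaturated
    if (h < 10 || h >= 170) return "red";            // hue wraps around at 180
    if (h < 25) return "orange";
    if (h < 35) return "yellow";
    if (h < 85) return "green";
    if (h < 130) return "blue";
    return "purple";
}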
If you want to access the RGB values of all pixels, the code below shows how:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main()
{
Mat image = imread("image_path");
for(int row = 0; row < image.rows; row++)
{
for(int col = 0; col < image.cols; col++)
{
Vec3b rgb = image.at<Vec3b>(row, col);
}
}
return 0;
}
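If you need to scan every pixel at speed, row pointers are typically faster than repeated at<>() calls; a minimal sketch of the same traversal over the image above (remember that OpenCV stores channels in B, G, R order):
for (int row = 0; row < image.rows; row++)
{
    const cv::Vec3b* p = image.ptr<cv::Vec3b>(row);  // pointer to the start of this row
    for (int col = 0; col < image.cols; col++)
    {
        cv::Vec3b bgr = p[col];  // bgr[0] = blue, bgr[1] = green, bgr[2] = red
    }
}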
I want to increase the contrast of the picture below with OpenCV in C++.
I have used histogram processing techniques, e.g. histogram equalization (HE) and histogram specification, but I can't reach a good result such as the images below:
What ideas on how to solve this task would you suggest? Or on what resource on the internet can I find help?
I found a useful tutorial on OpenCV for changing image contrast:
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace cv;
double alpha; /**< Simple contrast control */
int beta; /**< Simple brightness control */
int main( int argc, char** argv )
{
/// Read image given by user
Mat image = imread( argv[1] );
Mat new_image = Mat::zeros( image.size(), image.type() );
/// Initialize values
std::cout<<" Basic Linear Transforms "<<std::endl;
std::cout<<"-------------------------"<<std::endl;
std::cout<<"* Enter the alpha value [1.0-3.0]: ";std::cin>>alpha;
std::cout<<"* Enter the beta value [0-100]: "; std::cin>>beta;
/// Do the operation new_image(i,j) = alpha*image(i,j) + beta
for( int y = 0; y < image.rows; y++ )
{ for( int x = 0; x < image.cols; x++ )
{ for( int c = 0; c < 3; c++ )
{
new_image.at<Vec3b>(y,x)[c] =
saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );
}
}
}
/// Create Windows
namedWindow("Original Image", 1);
namedWindow("New Image", 1);
/// Show stuff
imshow("Original Image", image);
imshow("New Image", new_image);
/// Wait until user press some key
waitKey();
return 0;
}
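As an aside, the explicit triple loop above is only for illustration; assuming the same alpha and beta, cv::Mat::convertTo applies the identical saturated transform in one call:
// Equivalent one-liner: new_image = saturate(alpha*image + beta),
// with rtype = -1 keeping the source type
image.convertTo(new_image, -1, alpha, beta);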
See: Changing the contrast and brightness of an image!
I'm no expert but you could try to reduce the number of colours by merging grays into darker grays, and light grays into whites.
E.g.:
Find the least common colour in <0.0, 0.5) range, merge it towards black.
Find the least common colour in <0.5, 1.0> range, merge it towards white.
This would reduce the number of colours and might help create a gap between brighter and darker colours.
This might be late, but you can try the createCLAHE() function in OpenCV. It works fine for me.
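A minimal sketch (the clip limit and tile size are assumptions you would tune; for a colour image, apply CLAHE to the L channel of Lab rather than to BGR directly):
#include <opencv2/opencv.hpp>
int main()
{
    // Load as grayscale; the path is a placeholder
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return -1;
    // Contrast Limited Adaptive Histogram Equalization
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    cv::Mat enhanced;
    clahe->apply(gray, enhanced);
    cv::imshow("CLAHE", enhanced);
    cv::waitKey(0);
    return 0;
}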
I've written a piece of code to take my camera feed, split it into a grid (like a chess board) and evaluate each square for colour.
The code I currently have looks like this:
using namespace std;
using namespace cv;
//Standard Dilate and erode functions to improve white/black areas in Binary Image
// Reference &thresh is used so the changes affect threshImg, which is used later in tracking.
void morphOps(Mat &thresh){
//Increases size of black to remove unwanted white specks outside of object
Mat erodeElement = getStructuringElement( MORPH_RECT,Size(3,3));
//Increases white-area size to remove holes in object
Mat dilateElement = getStructuringElement( MORPH_RECT,Size(8,8));
erode(thresh,thresh,erodeElement);
erode(thresh,thresh,erodeElement);
dilate(thresh,thresh,dilateElement);
dilate(thresh,thresh,dilateElement);
}
//Tracking for the Filtered Object
void trackFilteredObject(int noteNum, string colourtype, Mat &thresh ,Mat HSVImage, Mat &cam){
vector<Brick> Bricks;
Mat temp;
thresh.copyTo(temp);
threshold(temp, thresh, 120, 255, 3); //3 = Threshold to Zero
int whitePixs = countNonZero(thresh);
int cols = thresh.cols;
int rows = thresh.rows;
int imgSize = (rows*cols)*0.75; // threshold: 75% of the square's pixels
if(whitePixs > imgSize){
Brick Brick;
Brick.setColour(colourtype);
Brick.setnoteNum(noteNum);
Bricks.push_back(Brick);
}
}
int main(int argc, char* argv[])
{
/// Create a window
namedWindow("window", CV_WINDOW_AUTOSIZE );
while(1){
//initialtes camera, sets capture resolution
VideoCapture capture;
capture.open(1);
capture.set(CV_CAP_PROP_FPS, 30);
capture.set(CV_CAP_PROP_FRAME_WIDTH,640);
capture.set(CV_CAP_PROP_FRAME_HEIGHT,480);
Mat cam;
// Saves camera image to Matrix "cam"
capture.read(cam);
//Sets Widths and Heights based on camera resolution (cam.cols/cam.rows retrieves this)
int Width = cam.cols;
int gridWidth = Width/16;
int Height = cam.rows;
int gridHeight = Height/16;
//Splits image into 256 squares going left to right through rows and descending vertically. (16 squares per row for 4/4 pattern)
Mat BigImage;
Mat HSVImage;
// Converts cam to HSV pallete
cvtColor(cam, HSVImage, COLOR_BGR2HSV);
Size smallSize(gridWidth,gridHeight);
std::vector<Mat> smallImages;
for (int y = 0; y < HSVImage.rows; y += smallSize.height)
{
for (int x = 0; x < HSVImage.cols; x += smallSize.width)
{
cv::Rect rect = cv::Rect(x,y, smallSize.width, smallSize.height);
//Saves the matrix to vector
smallImages.push_back(cv::Mat(HSVImage, rect));
}
}
for (int i = 0; i < smallImages.size(); i++){
Mat HSV;
smallImages.at(i).copyTo(HSV);
int noteNum = i;
Mat threshImg;
inRange(HSV,Scalar(0,0,0),Scalar(255,255,255),threshImg);
morphOps(threshImg); //erodes image
string colour = "Red";
trackFilteredObject(noteNum,colour,threshImg,HSV,cam);
inRange(HSV,Scalar(0,0,0),Scalar(255,255,255),threshImg);
morphOps(threshImg); // threshold = mat after erosion/dilation
colour = "yellow";
trackFilteredObject(noteNum,colour,threshImg,HSV,cam);
inRange(HSV,Scalar(0,0,0),Scalar(255,255,255),threshImg);
morphOps(threshImg);
colour = "Black";
trackFilteredObject(noteNum,colour,threshImg,HSV,cam);
inRange(HSV,Scalar(0,0,0),Scalar(255,255,255),threshImg);
morphOps(threshImg); // threshold = mat after erosion/dilation
colour = "White";
trackFilteredObject(noteNum,colour,threshImg,HSV,cam);
inRange(HSV,Scalar(0,0,0),Scalar(255,255,255),threshImg);
morphOps(threshImg); // threshold = mat after erosion/dilation
colour = "Green";
trackFilteredObject(noteNum,colour,threshImg,HSV,cam);
}
imshow("window", cam);
}
return 0;
}
At the moment the code takes quite a long time to execute a full loop (about 1.5 seconds), but I ideally need it to run as close to real time as possible for a music application.
Could anyone suggest why it takes so long to execute? Is there a better way to evaluate the colour of each square?
My class is as follows:
//Brick.h
#include <string>
using namespace std;
class Brick{
public:
Brick(void);
~Brick(void);
string getColour();
void setColour(string whatColour);
int getnoteNum();
void setnoteNum(int whatnoteNum);
private:
int noteNum;
string colour;
};
///
//Brick.cpp
#include <stdio.h>
#include "Brick.h"
Brick::Brick(void){
}
Brick::~Brick(void){
}
// get/set Colour
////////////////////////////////
string Brick::getColour(){
return Brick::colour;
}
void Brick::setColour(string whatColour){
Brick::colour = whatColour;
}
// get/set Note Number
////////////////////////////////
int Brick::getnoteNum(){
return Brick::noteNum;
}
void Brick::setnoteNum(int whatnoteNum){
Brick::noteNum = whatnoteNum;
}
I will be so grateful to anyone who replies!
Thank you.
Try hard not to use erode and dilate. These operations are extremely time-intensive; I'm quite confident they are the bottleneck in your program.
There are some measures you can take:
Downscaling (or downsampling) the image. Ideally, you want a downscaled pixel to be on the same order of magnitude as a grid square's size (see the sketch after this list).
Remove dilate and erode.
Off-topic bugfix: fix the inRange() parameters you use. Consult the HSV color space diagram and normalize the bounds to OpenCV's 8-bit ranges (hue is stored as degrees/2, i.e. 0-179, while S and V span 0-255). For example, extracting "green" pixels with hue roughly between 80° and 160° would correspond to inRange(HSV, Scalar(40, 77, 77), Scalar(80, 255, 255), threshImg);
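To illustrate the downscaling idea: since each grid square's colour is essentially its mean, cv::resize with INTER_AREA can compute all the per-square means in one call (a sketch, assuming the 16x16 grid from the question):
#include <opencv2/opencv.hpp>
// Collapse the frame to one pixel per grid square: INTER_AREA averages
// each cell, replacing the per-square split/erode/dilate/count pipeline
cv::Mat3b gridMeans(const cv::Mat3b& cam)
{
    cv::Mat3b grid;
    cv::resize(cam, grid, cv::Size(16, 16), 0, 0, cv::INTER_AREA);
    return grid;  // grid(r, c) is the mean BGR colour of square (r, c)
}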