I am making a function using C++ and OpenCV that will detect the color of a pixel in an image, determine what color range it is in, and replace it with a generic color. For example, green could range from dark green to light green; the program would determine that it's still green and replace it with a simple green, making the output image look very simple. Everything is set up, but I'm having trouble defining the characteristics of each range, and I was curious whether anyone knows of a formula that, given BGR values, could determine the overall color of a pixel. If not, I'll have to do a lot of experimentation and make it myself, but if something already exists, that would save time. I've done plenty of research and haven't found anything so far.
If you want to make your image simpler (i.e. with fewer colors), but still good looking, you have a few options:
A simple approach is to divide the image (integer division) by a factor N, and then multiply it by the same factor N.
Or you can reduce your image to K colors using a clustering algorithm such as kmeans (shown here), or the median-cut algorithm.
Original image:
Reduced colors (quantized, N = 64):
Reduced colors (clustered, K = 8):
Quantization code:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");
    imshow("Original", img);

    // Quantize: integer division discards the low bits, multiplying restores the scale
    uchar N = 64;
    img /= N;
    img *= N;

    imshow("Reduced", img);
    waitKey();
    return 0;
}
kmeans code:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");
    imshow("Original", img);

    // Cluster
    int K = 8;
    int n = img.rows * img.cols;

    // Reshape to an n x 3 float matrix: one row per pixel, one column per channel
    Mat data = img.reshape(1, n);
    data.convertTo(data, CV_32F);

    vector<int> labels;
    Mat1f colors;
    kmeans(data, K, labels, cv::TermCriteria(), 1, cv::KMEANS_PP_CENTERS, colors);

    // Replace each pixel with the center of its cluster
    for (int i = 0; i < n; ++i)
    {
        data.at<float>(i, 0) = colors(labels[i], 0);
        data.at<float>(i, 1) = colors(labels[i], 1);
        data.at<float>(i, 2) = colors(labels[i], 2);
    }

    Mat reduced = data.reshape(3, img.rows);
    reduced.convertTo(reduced, CV_8U);

    imshow("Reduced", reduced);
    waitKey();
    return 0;
}
Yes, what you probably mean by "overall color of a pixel" is either the "Hue" or the "Saturation" of the color.
So you want a formula that transforms RGB to HSV (Hue, Saturation, Value); you would then only be interested in the Hue or Saturation values.
See: Algorithm to convert RGB to HSV and HSV to RGB in range 0-255 for both
EDIT: You might need to max out the saturation, convert back to RGB, and then inspect which value is highest (for instance (255,0,0), or (255,0,255), etc.).
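To make that concrete, here is a minimal sketch of the idea (assuming OpenCV's 8-bit HSV convention, where H is in [0,180) and S, V are in [0,255]; the image path is just a placeholder): it converts to HSV, maxes out saturation and value, and converts back so every pixel shows its "pure" color.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");
    if (img.empty()) return 1;

    // BGR -> HSV (8-bit: H in [0,180), S and V in [0,255])
    Mat3b hsv;
    cvtColor(img, hsv, COLOR_BGR2HSV);

    // Keep only the hue: push saturation and value to their maximum
    for (int r = 0; r < hsv.rows; ++r)
    {
        for (int c = 0; c < hsv.cols; ++c)
        {
            hsv(r, c)[1] = 255;
            hsv(r, c)[2] = 255;
        }
    }

    // Back to BGR: each pixel now shows the "pure" version of its color
    Mat3b simplified;
    cvtColor(hsv, simplified, COLOR_HSV2BGR);

    imshow("Simplified", simplified);
    waitKey();
    return 0;
}
Keep in mind that for nearly gray pixels the hue is essentially noise, so in practice low-saturation (and very dark) pixels would probably need separate handling, e.g. mapping them to white, gray, or black.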
If you want to access the RGB values of all pixels, then below is the code:
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;

int main()
{
    Mat image = imread("image_path");

    for (int row = 0; row < image.rows; row++)
    {
        for (int col = 0; col < image.cols; col++)
        {
            // Note: OpenCV stores the channels in B, G, R order
            Vec3b rgb = image.at<Vec3b>(row, col);
        }
    }
    return 0;
}
I want to apply an unsharp mask like Adobe Photoshop. I know this answer, but it's not as sharp as Photoshop.
Photoshop has 3 parameters in the Smart Sharpen dialog: Amount, Radius, Reduce Noise; I want to implement all of them.
This is the code I wrote, based on various sources on SO.
The result is good in some stages ("blurred", "unsharpMask", "highContrast"), but in the last stage ("retval") it is not good.
Where am I wrong? What should I improve?
Is it possible to improve the following algorithm in terms of performance?
#include "opencv2/opencv.hpp"
#include "fstream"
#include "iostream"
#include <chrono>
using namespace std;
using namespace cv;
// from https://docs.opencv.org/3.4/d3/dc1/tutorial_basic_linear_transform.html
void increaseContrast(Mat img, Mat* dst, int amountPercent)
{
*dst = img.clone();
double alpha = amountPercent / 100.0;
*dst *= alpha;
}
// from https://stackoverflow.com/a/596243/7206675
float luminanceAsPercent(Vec3b color)
{
return (0.2126 * color[2]) + (0.7152 * color[1]) + (0.0722 * color[0]);
}
// from https://stackoverflow.com/a/2938365/7206675
Mat usm(Mat original, int radius, int amountPercent, int threshold)
{
// copy original for our return value
Mat retval = original.clone();
// create the blurred copy
Mat blurred;
cv::GaussianBlur(original, blurred, cv::Size(0, 0), radius);
cv::imshow("blurred", blurred);
waitKey();
// subtract blurred from original, pixel-by-pixel to make unsharp mask
Mat unsharpMask;
cv::subtract(original, blurred, unsharpMask);
cv::imshow("unsharpMask", unsharpMask);
waitKey();
Mat highContrast;
increaseContrast(original, &highContrast, amountPercent);
cv::imshow("highContrast", highContrast);
waitKey();
// assuming row-major ordering
for (int row = 0; row < original.rows; row++)
{
for (int col = 0; col < original.cols; col++)
{
Vec3b origColor = original.at<Vec3b>(row, col);
Vec3b contrastColor = highContrast.at<Vec3b>(row, col);
Vec3b difference = contrastColor - origColor;
float percent = luminanceAsPercent(unsharpMask.at<Vec3b>(row, col));
Vec3b delta = difference * percent;
if (*(uchar*)&delta > threshold) {
retval.at<Vec3b>(row, col) += delta;
//retval.at<Vec3b>(row, col) = contrastColor;
}
}
}
return retval;
}
int main(int argc, char* argv[])
{
if (argc < 2) exit(1);
Mat mat = imread(argv[1]);
mat = usm(mat, 4, 110, 66);
imshow("usm", mat);
waitKey();
//imwrite("USM.png", mat);
}
Original Image:
Blurred stage - Seemingly good:
UnsharpMask stage - Seemingly good:
HighContrast stage - Seemingly good:
Result stage of my code - Looks bad!
Result From Photoshop - Excellent!
First of all, judging by the artefacts that Photoshop leaves on the borders of the petals, I'd say that it applies the mask using a weighted sum of the original image and the mask, as in the answer you tried first.
I modified your code to implement this scheme and tried to tweak the parameters to get as close as possible to the Photoshop result, but I couldn't do it without creating a lot of noise. I wouldn't try to guess exactly what Photoshop is doing (I would definitely like to know), but I discovered that its result can be reproduced fairly well by applying some filter to the mask to reduce the noise. The algorithm scheme would be:
blurred = blur(image, Radius)
mask = image - blurred
mask = some_filter(mask)
sharpened = (mask < Threshold) ? image : image + Amount * mask
I implemented this and tried using basic filters (median blur, mean filter, etc) on the mask and this is the kind of result I can get:
which is a bit noisier than the Photoshop image but, in my opinion, close enough to what you wanted.
On another note, it will of course depend on what you use your filter for, but I think the settings you used in Photoshop are too strong (you have big overshoots near the petal borders). The following is enough to get a nice-looking image to the naked eye, with limited overshoot:
Finally, here is the code I used to generate the two images above:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;

Mat usm(Mat original, float radius, float amount, float threshold)
{
    // work using floating point images to avoid overflows
    cv::Mat input;
    original.convertTo(input, CV_32FC3);

    // copy original for our return value
    Mat retbuf = input.clone();

    // create the blurred copy
    Mat blurred;
    cv::GaussianBlur(input, blurred, cv::Size(0, 0), radius);

    // subtract blurred from original, pixel-by-pixel to make unsharp mask
    Mat unsharpMask;
    cv::subtract(input, blurred, unsharpMask);

    // --- filter on the mask ---
    //cv::medianBlur(unsharpMask, unsharpMask, 3);
    cv::blur(unsharpMask, unsharpMask, {3,3});
    // --- end filter ---

    // apply mask to image
    for (int row = 0; row < original.rows; row++)
    {
        for (int col = 0; col < original.cols; col++)
        {
            Vec3f origColor = input.at<Vec3f>(row, col);
            Vec3f difference = unsharpMask.at<Vec3f>(row, col);

            if (cv::norm(difference) >= threshold) {
                retbuf.at<Vec3f>(row, col) = origColor + amount * difference;
            }
        }
    }

    // convert back to unsigned char
    cv::Mat ret;
    retbuf.convertTo(ret, CV_8UC3);
    return ret;
}

int main(int argc, char* argv[])
{
    if (argc < 3) exit(1);
    Mat original = imread(argv[1]);
    Mat expected = imread(argv[2]);

    // closer to Photoshop
    Mat current = usm(original, 0.8, 12., 1.);
    // better settings (in my opinion)
    //Mat current = usm(original, 2., 1., 3.);
    cv::imwrite("current.png", current);

    // comparison plot
    cv::Rect crop(127, 505, 163, 120);
    cv::Mat crops[3];
    cv::resize(original(crop), crops[0], {0,0}, 4, 4, cv::INTER_NEAREST);
    cv::resize(expected(crop), crops[1], {0,0}, 4, 4, cv::INTER_NEAREST);
    cv::resize( current(crop), crops[2], {0,0}, 4, 4, cv::INTER_NEAREST);

    char const* texts[] = {"original", "photoshop", "current"};
    cv::Mat plot = cv::Mat::zeros(120 * 4, 163 * 4 * 3, CV_8UC3);
    for (int i = 0; i < 3; ++i) {
        cv::Rect region(163 * 4 * i, 0, 163 * 4, 120 * 4);
        crops[i].copyTo(plot(region));
        cv::putText(plot, texts[i], region.tl() + cv::Point{5,40},
                    cv::FONT_HERSHEY_SIMPLEX, 1.5, CV_RGB(255, 0, 0), 2.0);
    }
    cv::imwrite("plot.png", plot);
}
Here's my attempt at 'smart' unsharp masking. The result isn't very good, but I'm posting it anyway. The Wikipedia article on unsharp masking has details about smart sharpening.
Several things I did differently:
Convert BGR to the Lab color space and apply the enhancement to the brightness channel
Use an edge map so the enhancement is applied only to edge regions
Original:
Enhanced: sigma=2 amount=3 low=0.3 high=0.8 w=2
Edge map: low=0.3 high=0.8 w=2
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <cstring>
cv::Mat not_so_smart_sharpen(
const cv::Mat& bgr,
double sigma,
double amount,
double canny_low_threshold_weight,
double canny_high_threshold_weight,
int edge_weight)
{
cv::Mat enhanced_bgr, lab, enhanced_lab, channel[3], blurred, difference, bw, kernel, edges;
// convert to Lab
cv::cvtColor(bgr, lab, cv::ColorConversionCodes::COLOR_BGR2Lab);
// perform the enhancement on the brightness component
cv::split(lab, channel);
cv::Mat& brightness = channel[0];
// smoothing for unsharp masking
cv::GaussianBlur(brightness, blurred, cv::Size(0, 0), sigma);
difference = brightness - blurred;
// calculate an edge map. I'll use Otsu threshold as the basis
double thresh = cv::threshold(brightness, bw, 0, 255, cv::ThresholdTypes::THRESH_BINARY | cv::ThresholdTypes::THRESH_OTSU);
cv::Canny(brightness, edges, thresh * canny_low_threshold_weight, thresh * canny_high_threshold_weight);
// control edge thickness. use edge_weight=0 to use Canny edges unaltered
cv::dilate(edges, edges, kernel, cv::Point(-1, -1), edge_weight);
// unsharp masking on the edges
cv::add(brightness, difference * amount, brightness, edges);
// use the enhanced brightness channel
cv::merge(channel, 3, enhanced_lab);
// convert to BGR
cv::cvtColor(enhanced_lab, enhanced_bgr, cv::ColorConversionCodes::COLOR_Lab2BGR);
// cv::imshow("edges", edges);
// cv::imshow("difference", difference * amount);
// cv::imshow("original", bgr);
// cv::imshow("enhanced", enhanced_bgr);
// cv::waitKey(0);
return enhanced_bgr;
}
int main(int argc, char *argv[])
{
double sigma = std::stod(argv[1]);
double amount = std::stod(argv[2]);
double low = std::stod(argv[3]);
double high = std::stod(argv[4]);
int w = std::stoi(argv[5]);
cv::Mat bgr = cv::imread("flower.jpg");
cv::Mat enhanced = not_so_smart_sharpen(bgr, sigma, amount, low, high, w);
cv::imshow("original", bgr);
cv::imshow("enhanced", enhanced);
cv::waitKey(0);
return 0;
}
I have a cv::Mat of an RGB image as
cv::Mat cv_img
I want to set some positions of cv_img to zero; for example, the bottom half of the image should be filled with zero values. How can I do this in C++ and OpenCV? Thanks all.
I found the setTo function, and a mask may be a candidate solution, but I find it difficult to define the binary mask.
cv_img.setTo(Scalar(0,0,0), mask);
You can achieve this by setting the pixels to the desired value. Just define the intervals of the ROI (region of interest).
Here is a simple code example to guide you:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("/ur/img/dir/img.jpg");

    for (int i = img.rows / 2; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            img.at<Vec3b>(Point(j, i))[0] = 0;
            img.at<Vec3b>(Point(j, i))[1] = 0;
            img.at<Vec3b>(Point(j, i))[2] = 0;
        }
    }

    imshow("Result", img);
    waitKey(0);
    return 0;
}
You can try this:
int w = cv_img.cols;
int h = cv_img.rows;
cv::Rect rectZero(0, h/2, w, h/2);
cv_img(rectZero) = cv::Scalar(0,0,0);
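If you would rather use the setTo call with a mask, as mentioned in the question, a minimal sketch could look like this (the image path is just a placeholder): build a single-channel mask that is zero everywhere and non-zero over the region you want to clear.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat cv_img = imread("/path/to/img.jpg");
    if (cv_img.empty()) return 1;

    // Single-channel mask, same size as the image, all zeros initially
    Mat mask = Mat::zeros(cv_img.size(), CV_8UC1);

    // Mark the bottom half as the region to modify (non-zero = selected)
    Rect bottomHalf(0, cv_img.rows / 2, cv_img.cols, cv_img.rows - cv_img.rows / 2);
    mask(bottomHalf) = Scalar(255);

    // Set the selected pixels to black
    cv_img.setTo(Scalar(0, 0, 0), mask);

    imshow("Result", cv_img);
    waitKey(0);
    return 0;
}
For a plain rectangle the direct cv_img(rectZero) assignment above is simpler; the mask version only pays off when the region has an arbitrary shape.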
I'm looking for an efficient way of assigning values to an element of a 3-channel matrix. In particular, I need to assign HSV values to elements of a 2D cv::Mat which is initialized as follows:
cv::Mat clusterImage(height,width,CV_8UC3,cv::Scalar(0,0,0));
For this matrix, how do I set the pixel in row i and column j to an HSV value (H=59, S=255, V=255), as efficiently as possible?
My current method (complete code) is below. My fear is that splitting a matrix into channels, editing those channels and then merging them back together is not very efficient - especially since I need to do it in a loop, preferably at 30Hz and above. Does a more efficient method exist?
#include <vector>
#include <stdlib.h>
#include <iostream>
#include <opencv/cv.h>
#include <opencv/highgui.h>
using namespace std;

int main() {
    int height = 480;
    int width = 640;
    cv::Mat clusterImage(height, width, CV_8UC3, cv::Scalar(0,0,0));
    vector<cv::Mat> channels(3);

    // split the channels
    split(clusterImage, channels);

    // modify the channels
    vector<int> i;
    vector<int> j;
    int numberOfDots = 1000;
    for (int k = 0; k < numberOfDots; k++) {
        i.push_back(rand() % height + 1);
        j.push_back(rand() % width + 1);
    }
    for (int k = 0; k < numberOfDots; k++) {
        channels[0].at<unsigned char>(i[k], j[k]) = 59;
        channels[1].at<unsigned char>(i[k], j[k]) = 255;
        channels[2].at<unsigned char>(i[k], j[k]) = 255;
    }

    // merge channels
    merge(channels, clusterImage);

    // convert to RGB and draw
    cv::cvtColor(clusterImage, clusterImage, CV_HSV2BGR);
    imshow("test_window", clusterImage);
    cv::waitKey(0);
    return 0;
}
This code would be my choice:
int height = 480;
int width = 640;
cv::Mat clusterImage(height, width, CV_8UC3, cv::Scalar(0,0,0));

int numberOfDots = 1000;
int i, j;
for (int k = 0; k < numberOfDots; k++)
{
    i = rand() % height;
    j = rand() % width;
    clusterImage.at<cv::Vec3b>(i, j)[0] = 59;
    clusterImage.at<cv::Vec3b>(i, j)[1] = 255;
    clusterImage.at<cv::Vec3b>(i, j)[2] = 255;
}

// convert to RGB and draw
cv::cvtColor(clusterImage, clusterImage, CV_HSV2BGR);
imshow("test_window", clusterImage);
cv::waitKey(0);
Yes, you can make this a lot more efficient.
You can assign to a cv::Mat more or less directly. Assuming your underlying system is RGB, simply set up a cv::Mat of the right width and height, with three or four channels (a dummy alpha channel often makes things a bit faster). Then look up the RGB values for HSV (59, 255, 255) - there are plenty of formulae - and set them directly. I think you can use the at member function, but that's based on a casual glance at the cv::Mat interface.
Finally, you can get rid of the vectors i and j of dot x, y coordinates, assuming you don't need them later on. Just loop over numberOfDots and generate two temporary random numbers.
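A rough sketch of that suggestion could look like the following (here the BGR triple for HSV (59, 255, 255) is computed once with cvtColor instead of being hard-coded, since it is easy to get wrong by hand):
#include <opencv2/opencv.hpp>
#include <cstdlib>
using namespace cv;

int main()
{
    int height = 480;
    int width = 640;
    Mat3b clusterImage(height, width, Vec3b(0, 0, 0));

    // Convert the single HSV value (59, 255, 255) to BGR once, up front
    Mat3b hsvPixel(1, 1, Vec3b(59, 255, 255));
    Mat3b bgrPixel;
    cvtColor(hsvPixel, bgrPixel, COLOR_HSV2BGR);
    Vec3b dotColor = bgrPixel(0, 0);

    // Draw the dots directly in BGR: no split/merge and no full-image color conversion
    int numberOfDots = 1000;
    for (int k = 0; k < numberOfDots; ++k)
    {
        int i = rand() % height;
        int j = rand() % width;
        clusterImage(i, j) = dotColor;
    }

    imshow("test_window", clusterImage);
    waitKey(0);
    return 0;
}
Since the image stays in BGR the whole time, the final full-frame cvtColor call from the original code is not needed either.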
I'm trying to find the maximum and minimum of the RGB values of an image.
The flow I was planning to follow is:
load the image.
after loading the image, create a 15x15 cell around the cell to be tested
find the max of RGB of the test cell and store it in an array.
then print the image with the value of max RGB; according to me, the image should be a DARK image. The max of RGB corresponds to the dark portion of the image.
The problem here is that I'm new to image processing and OpenCV.
I don't know how to implement the things I mentioned above. I have attached a picture related to my doubt.
Here is my code; I have just read the image and got some details about it:
#include "iostream"
#include "string.h"
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "opencv2/opencv.hpp"
float lambda=0.0001; //lambda
double _w=0.95; //w
int height=0; //image Height
int width=0; //image Width
int size=0; //total number of pixels
int blockdim = 32;
char img_name[100]="1.png";
Mat read_image()
{
Mat img = imread(img_name);
height = img.rows;
width = img.cols;
size = img.rows*img.cols;
Mat real_img(img.rows,img.cols,CV_32FC3);
img.convertTo(real_img,CV_32FC3);
return real_img;
}
//Main Function
int main(int argc, char * argv[])
{
Mat img = read_image();
/*****************************************************************/
// Till here i have done my code. i.e. Read my image and get all details about the image
// Now i'm not getting the logic to find the Min/Max of RGB values in an image for
// 15x15 cell
return 0;
}
Finally, I want to implement this on the GPU. I have learnt a few things about GPUs and CUDA and played with a GPU before; now I want to do some image-processing work on the GPU (CUDA).
I want to compute the extent of haze of an image for each block. This is done by finding the dark channel value, which is used to reflect the extent of haze. This concept comes from Kaiming He's paper on Single Image Haze Removal Using Dark Channel Prior.
The dark channel value for each block is defined as follows:
dark(x, y) = min over (x', y') in omega(x, y) of ( min over c in {r, g, b} of I^c(x', y') )
where I^c(x', y') denotes the intensity at a pixel location (x', y') in color channel c (one of the Red, Green, or Blue channels), and omega(x, y) denotes the neighborhood of the pixel location (x, y).
Since I'm new to image processing and OpenCV, I'm not sure how to translate this equation into code.
I already implemented this some time ago, and below is the code snippet. It could probably be further optimized, and you should add CUDA support yourself, but it could be a good starting point.
The main steps are:
Load a BGR image
Compute a single channel matrix with the minimum of B,G,R (minValue3b).
Compute the minimum in a patchSize x patchSize neighborhood (minFilter).
NOTES
You need to find the minimum value, not the maximum.
To avoid border issues while searching for the minimum in the neighborhood, you can simply add a border around the image that is big enough and filled with the maximum allowed value (i.e. 255). You can use copyMakeBorder for this.
Input:
DCP:
Code:
#include <opencv2/opencv.hpp>
using namespace cv;

void minFilter(const Mat1b& src, Mat1b& dst, int radius)
{
    Mat1b padded;
    copyMakeBorder(src, padded, radius, radius, radius, radius, BORDER_CONSTANT, Scalar(255));

    int rr = src.rows;
    int cc = src.cols;
    dst = Mat1b(rr, cc, uchar(0));

    for (int c = 0; c < cc; ++c)
    {
        for (int r = 0; r < rr; ++r)
        {
            uchar lowest = 255;
            for (int i = -radius; i <= radius; ++i)
            {
                for (int j = -radius; j <= radius; ++j)
                {
                    uchar val = padded(radius + r + i, radius + c + j);
                    if (val < lowest) lowest = val;
                }
            }
            dst(r, c) = lowest;
        }
    }
}

void minValue3b(const Mat3b& src, Mat1b& dst)
{
    int rr = src.rows;
    int cc = src.cols;
    dst = Mat1b(rr, cc, uchar(0));

    for (int c = 0; c < cc; ++c)
    {
        for (int r = 0; r < rr; ++r)
        {
            const Vec3b& v = src(r, c);
            uchar lowest = v[0];
            if (v[1] < lowest) lowest = v[1];
            if (v[2] < lowest) lowest = v[2];
            dst(r, c) = lowest;
        }
    }
}

void DarkChannel(const Mat3b& img, Mat1b& dark, int patchSize)
{
    int radius = patchSize / 2;
    Mat1b low;
    minValue3b(img, low);
    minFilter(low, dark, radius);
}

int main()
{
    // Load the image
    Mat3b img = imread("path_to_image");

    // Compute DCP
    Mat1b dark;
    DarkChannel(img, dark, 15);

    // Show results
    imshow("Img", img);
    imshow("Dark", dark);
    waitKey();
    return 0;
}