Color to grayscale conversion - C++

I'm using a C++ OpenCV program for first-principles algorithm development for HDL (Verilog) image object detection. I've finally managed to get the HDL version up to the point of Canny detection. In order to validate the two, both need to have identical output. I have found there are subtle differences that I think are caused by the OpenCV imread colour-to-grayscale conversion biasing green. The smoothed image is overall brighter in the OpenCV C++ method. From looking at the rgb2gray method, it appears OpenCV uses a bias, i.e. (R*X + G*Y + B*Z)/3, while in HDL I have been using (R+G+B)/3, as I require it to complete the Gaussian, Sobel and Canny filters. Human visualisation is secondary, and multiplication by a non-integer is undesirable.
Is there a standard linear grayscale conversion, or a means to override the existing method?
...
int main()
{
int thold = 15;
clock_t start;
double duration;
const int sobelX[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} }; // Were originally floats in Python
const int sobelY[3][3] = { {-1, -2, -1}, {0, 0, 0}, {1, 2, 1} }; // Were originally floats in Python
const int kernel[5][5] = { {1,6,12,6,1},
{6,42,79,42,6},
{12,79,148,79,12},
{6,42,79,42,6},
{1,6,12,6,1} };// 1/732
// Above: normalised kernel for smoothing, see the original Python script for the method
start = std::clock();
int height, width, intPixel, tSx, tSy, tS, dirE, dirEE, maxDir, curPoint, contDirection, cannyImgPix, nd, tl, tm, tr, mr, br, bm, bl, ml = 0;
int contNum = 128;
int contPixCount = 0;
int curContNum = 0;
int contPlace = 0;
int oldContPlace = 0;
int g = 0;
bool maxPoint;
struct pixel {
int number;
int h;
int w;
};
std::vector<pixel> contourList;
//double floatPixel = 0.0;
int kernalCumulator = 0;
const int mp = 3;
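// Note: for a 5x5 kernel centred on (h, w) the offset would normally be 2; with mp = 3 the
// window sampled in the smoothing loop spans rows h-3..h+1 and columns w-3..w+1, i.e. it is
// shifted one pixel up and to the left.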
// Scalar color(0, 0, 255);
// duration = ((clock()) - start) / (double)CLOCKS_PER_SEC;
// start = clock();
// cout << "Start image in" << duration << '\n';
// Mat dst;
Mat rawImg = imread("C:\\Users\\&&&\\Documents\\pycode\\paddedGS.png",0);
printf("%d",rawImg.type());
// Mat rawImg = imread("C:\\Users\\&&&\\Documents\\openCV_Master\\openCVexample\\openCVexample\\brace200.jpg ", 0);
height = rawImg.rows;
width = rawImg.cols;
cout << "Height of image " << height << '\n';
cout << "Width of image " << width << '\n';
Mat filteredImg = Mat::zeros(height, width, CV_8U);
printf("%d", filteredImg.type());
Mat sobelImg = Mat::zeros(height, width, CV_8U);
Mat directionImg = Mat::zeros(height, width, CV_8U);
Mat cannyImg = Mat::zeros(height, width, CV_8U);
Mat contourImg = Mat::zeros(height, width, CV_16U);
// rawImg.convertTo(rawImg, CV_8UC1);
duration = ((clock()) - start) / (double)CLOCKS_PER_SEC;
start = clock();
cout << "Start image in" << duration << '\n';
// Loop to threshold already grayscaled image
/*
for (int h = 0; h < (height); h++)
{
for (int w = 0; w < (width); w++)
{
g = (int)rawImg.at<uchar>(h, w,0);
cout << g << "g";
g+= (int)rawImg.at<uchar>(h, w, 1);
cout << g << "g";
g+= (int)rawImg.at<uchar>(h, w, 2);
cout << g << "g";
g = g/3;
rawGImg.at<uchar>(h,w) = g;
}
}
*/
// imshow("thresholded Image", rawImg);
// waitKey();
// Loop to smooth using a Gaussian 5 x 5 kernel
// imshow("raw Image", rawImg);
for (int h = 3; h < (height - 3); h++)
{
for (int w = 3; w < (width - 3); w++)
{
if (rawImg.at<uchar>(h, w) >=6 )//Thresholding included
{
for (int xk = 0; xk < 5; xk++)
{
for (int yk = 0; yk < 5; yk++)
{
intPixel = rawImg.at<uchar>((h + (xk - mp)), (w + (yk - mp)));
kernalCumulator += intPixel*(kernel[xk][yk]); // Multiplier required as rounding is making the number go above 255, better solution?
}
}
}
else
kernalCumulator = 0;
kernalCumulator = kernalCumulator / 732;
if (kernalCumulator < 0 || kernalCumulator > 255)
{
// cout << "kernal Value: " << kernalCumulator;
// cout << " intPixel:" << intPixel << '\n';
}
filteredImg.at<uchar>(h, w) = (uchar)kernalCumulator;
kernalCumulator = 0;
}
}

Our vision does not perceive brightness linearly, so for typical applications it makes sense to use a transformation that tries to mimic human perception.
For your application, you have two options: either use a similar transformation in HDL (which might not be easy or desirable), or make a custom RGB-to-grayscale conversion for OpenCV which uses the same transformation you use in HDL.
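For reference, OpenCV's cvtColor with COLOR_BGR2GRAY uses the Rec. 601 luma weights, Y = 0.299*R + 0.587*G + 0.114*B (imread's grayscale flag applies a similar codec-side conversion), which is why green dominates compared to a plain (R+G+B)/3 average.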
A short snippet (more like pseudocode, you'll have to figure out the details) for this would be something like:
cv::Mat linearRgbToGray(const cv::Mat &color) {
cv::Mat gray(color.size(), CV_8UC1);
for (int i = 0; i < color.rows; i++)
for (int j = 0; j < color.cols; j++)
gray.at(i, j) = (color.at(i, j)[0] + color.at(i, j)[1] + color.at(i, j)[2]) / 3;
}

As per Paul92's advice above:
cv::Mat linearRgbToGray(const cv::Mat &color) {
cv::Mat gray(color.size(), CV_8UC1);
for (int i = 0; i < color.rows; i++)
for (int j = 0; j < color.cols; j++)
gray.at<uchar>(i, j) = ((color.at<cv::Vec3b>(i, j)[0] + color.at<cv::Vec3b>(i, j)[1] + color.at<cv::Vec3b>(i, j)[2]) / 3);
return gray;
}
The above code worked and overcame the out-of-bounds errors I experienced earlier. Thank you, Rob.
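For completeness, a minimal usage sketch (assuming the image is loaded in colour rather than with the 0/grayscale flag; the path here is only illustrative):
cv::Mat colorImg = cv::imread("paddedGS.png", cv::IMREAD_COLOR); // illustrative path
cv::Mat rawImg = linearRgbToGray(colorImg); // plain (R+G+B)/3, matching the HDL pipeline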

Related

Issues eliminating multiple bounding boxes in YOLOv5 using C++ for a custom model

I tried to perform object detection using the YOLOv5 model with C++. I have a custom-trained YOLOv5 model which works perfectly fine in Python, but my whole setup is in C++, so I have to switch. I converted the yolov5s model into ONNX format and tried to run it as in https://github.com/doleron/yolov4-opencv-cpp-python. Unfortunately, I'm getting multiple bounding boxes in the top-left corner, as in the image.
I don't know how to eliminate this kind of error. When I use the built-in pre-trained yolov5s model, the C++ code detects objects and works perfectly; similarly, when I use the custom-trained model in Python, it works perfectly.
Here is my C++ code for object detection:
#include <fstream>
#include <opencv2/opencv.hpp>
std::vector<std::string> load_class_list()
{
std::vector<std::string> class_list;
std::ifstream ifs("config_files/classes.txt");
std::string line;
while (getline(ifs, line))
{
class_list.push_back(line);
}
return class_list;
}
void load_net(cv::dnn::Net &net, bool is_cuda)
{
auto result = cv::dnn::readNet("config_files/yolov5s_custom.onnx");
if (is_cuda)
{
std::cout << "Attempty to use CUDA\n";
result.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
result.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA_FP16);
}
else
{
std::cout << "Running on CPU\n";
result.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
result.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
}
net = result;
}
const std::vector<cv::Scalar> colors = {cv::Scalar(255, 255, 0), cv::Scalar(0, 255, 0), cv::Scalar(0, 255, 255), cv::Scalar(255, 0, 0)};
const float INPUT_WIDTH = 640.0;
const float INPUT_HEIGHT = 640.0;
const float SCORE_THRESHOLD = 0.2;
const float NMS_THRESHOLD = 0.4;
const float CONFIDENCE_THRESHOLD = 0.4;
struct Detection
{
int class_id;
float confidence;
cv::Rect box;
};
cv::Mat format_yolov5(const cv::Mat &source) {
int col = source.cols;
int row = source.rows;
int _max = MAX(col, row);
cv::Mat result = cv::Mat::zeros(_max, _max, CV_8UC3);
source.copyTo(result(cv::Rect(0, 0, col, row)));
return result;
}
void detect(cv::Mat &image, cv::dnn::Net &net, std::vector<Detection> &output, const std::vector<std::string> &className) {
cv::Mat blob;
auto input_image = format_yolov5(image);
cv::dnn::blobFromImage(input_image, blob, 1./255., cv::Size(INPUT_WIDTH, INPUT_HEIGHT), cv::Scalar(), true, false);
net.setInput(blob);
std::vector<cv::Mat> outputs;
net.forward(outputs, net.getUnconnectedOutLayersNames());
float x_factor = input_image.cols / INPUT_WIDTH;
float y_factor = input_image.rows / INPUT_HEIGHT;
float *data = (float *)outputs[0].data;
const int dimensions = 85;
const int rows = 25200;
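// Note: 85 = 5 box/objectness values + 80 COCO classes, and 25200 assumes a 640x640 input;
// a custom model with a different class count needs 5 + className.size() here and in the
// "data += 85" stride further down, otherwise each row is parsed misaligned.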
std::vector<int> class_ids;
std::vector<float> confidences;
std::vector<cv::Rect> boxes;
for (int i = 0; i < rows; ++i) {
float confidence = data[4];
if (confidence >= CONFIDENCE_THRESHOLD) {
float * classes_scores = data + 5;
cv::Mat scores(1, className.size(), CV_32FC1, classes_scores);
cv::Point class_id;
double max_class_score;
minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
if (max_class_score > SCORE_THRESHOLD) {
confidences.push_back(confidence);
class_ids.push_back(class_id.x);
float x = data[0];
float y = data[1];
float w = data[2];
float h = data[3];
int left = int((x - 0.5 * w) * x_factor);
int top = int((y - 0.5 * h) * y_factor);
int width = int(w * x_factor);
int height = int(h * y_factor);
boxes.push_back(cv::Rect(left, top, width, height));
}
}
data += 85;
}
std::vector<int> nms_result;
cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, nms_result);
for (int i = 0; i < nms_result.size(); i++) {
int idx = nms_result[i];
Detection result;
result.class_id = class_ids[idx];
result.confidence = confidences[idx];
result.box = boxes[idx];
output.push_back(result);
}
}
int main(int argc, char **argv)
{
std::vector<std::string> class_list = load_class_list();
cv::Mat frame;
cv::VideoCapture capture("sample.mp4");
if (!capture.isOpened())
{
std::cerr << "Error opening video file\n";
return -1;
}
bool is_cuda = argc > 1 && strcmp(argv[1], "cuda") == 0;
cv::dnn::Net net;
load_net(net, is_cuda);
auto start = std::chrono::high_resolution_clock::now();
int frame_count = 0;
float fps = -1;
int total_frames = 0;
while (true)
{
capture.read(frame);
if (frame.empty())
{
std::cout << "End of stream\n";
break;
}
std::vector<Detection> output;
detect(frame, net, output, class_list);
frame_count++;
total_frames++;
int detections = output.size();
for (int i = 0; i < detections; ++i)
{
auto detection = output[i];
auto box = detection.box;
auto classId = detection.class_id;
const auto color = colors[classId % colors.size()];
cv::rectangle(frame, box, color, 3);
cv::rectangle(frame, cv::Point(box.x, box.y - 20), cv::Point(box.x + box.width, box.y), color, cv::FILLED);
cv::putText(frame, class_list[classId].c_str(), cv::Point(box.x, box.y - 5), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
}
if (frame_count >= 30)
{
auto end = std::chrono::high_resolution_clock::now();
fps = frame_count * 1000.0 / std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
frame_count = 0;
start = std::chrono::high_resolution_clock::now();
}
if (fps > 0)
{
std::ostringstream fps_label;
fps_label << std::fixed << std::setprecision(2);
fps_label << "FPS: " << fps;
std::string fps_label_str = fps_label.str();
cv::putText(frame, fps_label_str.c_str(), cv::Point(10, 25), cv::FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(0, 0, 255), 2);
}
cv::imshow("output", frame);
if (cv::waitKey(1) != -1)
{
capture.release();
std::cout << "finished by user\n";
break;
}
}
std::cout << "Total frames: " << total_frames << "\n";
return 0;
}
Kindly guide me on how to eliminate these multiple boxes on the output video stream.

Histogram matching of two images with C++ and OpenCV 2.4.x

Since I found nothing about histogram matching in C++ and OpenCV 2.4, I am asking here again.
All the solutions I found are for newer versions.
My code:
void histogramMatching(Mat & reference, Mat & input, Mat & result) {
const float HISMATCH = 0.001;
double min, max;
vector<Mat> reference_channels;
split(reference, reference_channels);
vector<Mat> input_channels;
split(input, input_channels);
int histSize = 256;
float range[] = { 0,256 };
const float* histrange = { range };
bool uniform = true;
for (int i = 0; i < 3; i++) {
Mat reference_histogram, input_histogram;
Mat reference_histogram_accum, input_histogram_accum;
calcHist(&input_channels[i], 1, 0, Mat(), input_histogram, 1, &histSize, &histrange, &uniform);
try {
calcHist(&reference_channels[i], 1, 0, Mat(), reference_histogram, 1, &histSize, &histrange, &uniform);
}
catch (int n) {
cout << "The first element is " << n << endl;
}
minMaxLoc(reference_histogram, &min, &max);
normalize(reference_histogram, reference_histogram, min / max, NORM_MINMAX);
minMaxLoc(input_histogram, &min, &max);
normalize(input_histogram, input_histogram, min / max, NORM_MINMAX);
reference_histogram.copyTo(reference_histogram_accum);
input_histogram.copyTo(input_histogram_accum);
float* src_cdf_data = input_histogram_accum.ptr<float>();
float* dst_cdf_data = reference_histogram_accum.ptr<float>();
for (int j = 1; j < 256; j++) {
src_cdf_data[j] += src_cdf_data[j - 1];
dst_cdf_data[j] += dst_cdf_data[j - 1];
}
minMaxLoc(reference_histogram_accum, &min, &max);
normalize(reference_histogram_accum, reference_histogram_accum, min / max, 1.0, NORM_MINMAX);
minMaxLoc(input_histogram_accum, &min, &max);
normalize(input_histogram_accum, input_histogram_accum, min / max, 1.0, NORM_MINMAX);
//BEGIN Matching
Mat lut(1, 256, CV_8UC1);
uchar* M = lut.ptr<uchar>();
uchar last = 0;
for (int j = 0; j < input_histogram_accum.rows; j++) {
float F1 = dst_cdf_data[j];
int i = 0;
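// Note: k is declared as uchar, so the condition k < reference_histogram_accum.rows (256) is
// always true; if the break below is never taken, k wraps from 255 back to 0 and the inner
// loop never terminates.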
for (uchar k = last; k < reference_histogram_accum.rows; k++) {
i++;
float F2 = src_cdf_data[k];
if (abs(F2 - F1) < HISMATCH || F2 > F1) {
M[j] = k;
last = k;
break;
}
}
}
LUT(input_channels[i], lut, input_channels[i]);
}
merge(input_channels, result);
}
public:int execute() {
Mat input = imread("input.jpg", IMREAD_COLOR);
if (input.empty()) {
cout << "Image is empty" << endl;
return -1;
}
Mat reference = imread("fuchs.jpg", IMREAD_COLOR);
if (reference.empty()) {
cout << "Reference Image is empty" << endl;
return -1;
}
Mat result = input.clone();
namedWindow("Reference", WINDOW_AUTOSIZE);
namedWindow("Input", WINDOW_AUTOSIZE);
namedWindow("Result", WINDOW_AUTOSIZE);
imshow("Reference", reference);
imshow("Input", input);
histogramMatching(reference, input, result);
imshow("Result", result);
waitKey(0);
return 0;
}
Everything works up to the loop that begins with "for (int j = 0; j < input_histogram_accum.rows; j++) {": it gets no response, and after waiting for more than 6 hours I think it doesn't work. My input image is 500 KB and my fuchs.jpg is 180 KB.
Has anybody a solution for histogram matching with C++ and OpenCV 2.4.x?

Compute coordinates from source images after stitching

I use a panorama stitching algorithm from OpenCV in order to stitch 2 or 3 images into one new result image.
I have the coordinates of points in each source image. I need to calculate the new coordinates of these points in the result image.
I describe the algorithm below. My code is similar to the "stitching_detailed" sample from OpenCV (branch 3.4). A result_mask of type Mat is produced; maybe it is the solution, but I don't know how to use it. I found a related question here, but not on stitching.
Any idea?
Here is the algorithm (for detailed code: stitching_detailed.cpp):
Find features for each image:
Ptr<FeaturesFinder> finder = makePtr<SurfFeaturesFinder>()
vector<ImageFeatures> features(num_images);
for (int i = 0; i < num_images; ++i)
{
(*finder)(images[i], features[i]);
}
Make pairwise_matches:
vector<MatchesInfo> pairwise_matches;
Ptr<FeaturesMatcher> matcher = makePtr<BestOf2NearestMatcher>(false, match_conf);
(*matcher)(features, pairwise_matches);
Reorder the images:
vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
// here some code to reorder 'images'
Estimate a homography and initial camera parameters:
vector<CameraParams> cameras;
Ptr<Estimator> estimator = makePtr<HomographyBasedEstimator>();
(*estimator)(features, pairwise_matches, cameras);
Convert to CV_32F:
for (size_t i = 0; i < cameras.size(); ++i)
{
Mat R;
cameras[i].R.convertTo(R, CV_32F);
cameras[i].R = R;
}
Execute a BundleAdjuster:
Ptr<detail::BundleAdjusterBase> adjuster = makePtr<detail::BundleAdjusterRay>();
adjuster->setConfThresh(conf_thresh);
adjuster->setRefinementMask(refine_mask);
(*adjuster)(features, pairwise_matches, cameras);
Compute a value for warped_image_scale:
for (int i = 0; i < cameras.size(); ++i)
focals.push_back(cameras[i].focal);
float warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;
Do wave correction:
vector<Mat> rmats;
for (size_t i = 0; i < cameras.size(); ++i)
rmats.push_back(cameras[i].R.clone());
waveCorrect(rmats, wave_correct);
for (size_t i = 0; i < cameras.size(); ++i)
cameras[i].R = rmats[i];
Create a warper:
Ptr<WarperCreator> warper_creator = makePtr<cv::SphericalWarper>();
Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));
Create a blender and feed it:
Ptr<Blender> blender;
for (size_t i = 0; i < cameras.size(); ++i)
{
full_img = input_imgs[img_idx];
if (!is_compose_scale_set)
{
is_compose_scale_set = true;
compose_scale = /* … */
}
if (abs(compose_scale - 1) > 1e-1)
resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
else
img = full_img;
// Warp the current image
warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);
// Warp the current image mask
mask.create(img_size, CV_8U);
mask.setTo(Scalar::all(255));
warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);
// Compensate exposure
compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);
dilate(masks_warped[img_idx], dilated_mask, Mat());
resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
mask_warped = seam_mask & mask_warped;
if (!blender)
{
blender = Blender::createDefault(blend_type, try_gpu);
Size dst_sz = resultRoi(corners, sizes).size();
float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
MultiBandBlender *mb = dynamic_cast<MultiBandBlender *>(blender.get());
mb->setNumBands(static_cast<int>(ceil(log(blend_width) / log(2.)) - 1.));
blender->prepare(corners, sizes);
}
// Blend the current image
blender->feed(img_warped_s, mask_warped, corners[i]);
}
Then, use the blender:
Mat result, result_mask;
blender->blend(result, result_mask);
// The result image is in 'result'
When I was a schoolboy, I found opencv/samples/cpp/stitching_detailed.cpp in the OpenCV samples folder. At that time my programming skills were very poor, and I couldn't understand it even though I racked my brains. This question attracted my attention and aroused my memory. After a whole night of hard work and debugging, I finally got it.
Basic steps:
Given the three images: blue.png, green.png, and red.png
We can get the stitching result (result.png) using stitching_detailed.cpp.
blender->blend(result, result_mask);
imwrite("result.png", result);
imwrite("result_mask.png", result_mask);
I take the centre of each of the three images, calculate the corresponding (warped) coordinates on the stitched image, and draw them as solid circles, as follows:
Warping images (auxiliary)...
Compensating exposure...
Blending ...
Warp each center point, and draw solid circle.
[408, 204] => [532, 224]
[408, 204] => [359, 301]
[408, 204] => [727, 320]
Check `result.png`, `result_mask.png` and `result2.png`!
Done!
This is the function calcWarpedPoint I wrote to calculate the warped point on the stitched image:
cv::Point2f calcWarpedPoint(
const cv::Point2f& pt,
InputArray K, // Camera K parameter
InputArray R, // Camera R parameter
Ptr<RotationWarper> warper, // The Rotation Warper
const std::vector<cv::Point> &corners,
const std::vector<cv::Size> &sizes)
{
// Calculate the warped point using the camera parameters.
cv::Point2f dst = warper->warpPoint(pt, K, R);
// Calculate the stitched image ROI using corners and sizes
// (the corners and sizes have already been calculated).
cv::Point2f tl = cv::detail::resultRoi(corners, sizes).tl();
// Finally adjust the warped point to the stitched image.
return cv::Point2f(dst.x - tl.x, dst.y - tl.y);
}
This is an example code snippet:
std::cout << "\nWarp each center point, and draw solid circle.\n";
std::vector<cv::Scalar> colors = { {255,0,0}, {0, 255, 0}, {0, 0, 255} };
for (int idx = 0; idx < img_names.size(); ++idx) {
img = cv::imread(img_names[idx]);
Mat K;
cameras[idx].K().convertTo(K, CV_32F);
Mat R = cameras[idx].R;
cv::Point2f cpt = cv::Point2f(img.cols / 2, img.rows / 2);
cv::Point pt = calcWarpedPoint(cpt, K, R, warper, corners, sizes);
cv::circle(result, pt, 5, colors[idx], -1, cv::LINE_AA);
std::cout << cpt << " => " << pt << std::endl;
}
std::cout << "\nCheck `result.png`, `result_mask.png` and `result2.png`!\n";
imwrite("result2.png", result);
The full code:
/*
* Author : Kinght-金(https://stackoverflow.com/users/3547485/)
* Created : 2019/03/01 23:00 (CST)
* Finished : 2019/03/01 07:50 (CST)
*
* Modified on opencv401/samples/cpp/stitching_detailed.cpp
* From https://github.com/opencv/opencv/blob/4.0.1/samples/cpp/stitching_detailed.cpp
*
*
* Description: A simple opencv(4.0.1) image stitching code for Stack Overflow answers.
* For https://stackoverflow.com/questions/54904718/compute-coordinates-from-source-images-after-stitching/54953792#comment96681412_54953792
*
*/
#include <iostream>
#include <fstream>
#include <string>
#include "opencv2/opencv_modules.hpp"
#include <opencv2/core/utility.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/stitching/detail/autocalib.hpp"
#include "opencv2/stitching/detail/blenders.hpp"
#include "opencv2/stitching/detail/camera.hpp"
#include "opencv2/stitching/detail/exposure_compensate.hpp"
#include "opencv2/stitching/detail/matchers.hpp"
#include "opencv2/stitching/detail/motion_estimators.hpp"
#include "opencv2/stitching/detail/seam_finders.hpp"
#include "opencv2/stitching/detail/warpers.hpp"
#include "opencv2/stitching/warpers.hpp"
using namespace std;
using namespace cv;
using namespace cv::detail;
//! img_names are the input image (full) paths
// You can download them using the links below.
//! Blue: https://i.stack.imgur.com/Yz3U1.png
//! Green: https://i.stack.imgur.com/AbUTH.png
//! Red: https://i.stack.imgur.com/9wcGc.png
vector<String> img_names = {"D:/stitching/blue.png", "D:/stitching/green.png", "D:/stitching/red.png"};
//! The function to calculate the warped point on the stitching image.
cv::Point2f calcWarpedPoint(
const cv::Point2f& pt,
InputArray K, // Camera K parameter
InputArray R, // Camera R parameter
Ptr<RotationWarper> warper, // The Rotation Warper
const std::vector<cv::Point> &corners,
const std::vector<cv::Size> &sizes)
{
// Calculate the warped point
cv::Point2f dst = warper->warpPoint(pt, K, R);
// Calculate the stitched image ROI using corners and sizes
// (the corners and sizes have already been calculated).
cv::Point2f tl = cv::detail::resultRoi(corners, sizes).tl();
// Finally adjust the warped point
return cv::Point2f(dst.x - tl.x, dst.y - tl.y);
}
int main(int argc, char* argv[])
{
double work_megapix = 0.6;
double seam_megapix = 0.1;
double compose_megapix = -1;
float conf_thresh = 1.f;
float match_conf = 0.3f;
float blend_strength = 5;
// Check if have enough images
int num_images = static_cast<int>(img_names.size());
if (num_images < 2)
{
std::cout << "Need more images\n";
return -1;
}
double work_scale = 1, seam_scale = 1, compose_scale = 1;
bool is_work_scale_set = false, is_seam_scale_set = false, is_compose_scale_set = false;
// (1) Create the feature finder
Ptr<Feature2D> finder = ORB::create();
// (2) Read the images, resize them appropriately, and compute their feature descriptors
Mat full_img, img;
vector<ImageFeatures> features(num_images);
vector<Mat> images(num_images);
vector<Size> full_img_sizes(num_images);
double seam_work_aspect = 1;
for (int i = 0; i < num_images; ++i)
{
full_img = imread(img_names[i]);
full_img_sizes[i] = full_img.size();
if (full_img.empty())
{
cout << "Can't open image " << img_names[i] << std::endl;
return -1;
}
if (!is_work_scale_set)
{
work_scale = min(1.0, sqrt(work_megapix * 1e6 / full_img.size().area()));
is_work_scale_set = true;
}
resize(full_img, img, Size(), work_scale, work_scale, INTER_LINEAR_EXACT);
if (!is_seam_scale_set)
{
seam_scale = min(1.0, sqrt(seam_megapix * 1e6 / full_img.size().area()));
seam_work_aspect = seam_scale / work_scale;
is_seam_scale_set = true;
}
computeImageFeatures(finder, img, features[i]);
features[i].img_idx = i;
std::cout << "Features in image #" << i + 1 << ": " << features[i].keypoints.size() << std::endl;
resize(full_img, img, Size(), seam_scale, seam_scale, INTER_LINEAR_EXACT);
images[i] = img.clone();
}
full_img.release();
img.release();
// (3) Create the feature matcher and compute the pairwise match information
vector<MatchesInfo> pairwise_matches;
Ptr<FeaturesMatcher> matcher = makePtr<BestOf2NearestMatcher>(false, match_conf);
(*matcher)(features, pairwise_matches);
matcher->collectGarbage();
//! (4) Remove outliers and keep only the largest confident component
// Leave only images we are sure are from the same panorama
vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
vector<Mat> img_subset;
vector<String> img_names_subset;
vector<Size> full_img_sizes_subset;
for (size_t i = 0; i < indices.size(); ++i)
{
img_names_subset.push_back(img_names[indices[i]]);
img_subset.push_back(images[indices[i]]);
full_img_sizes_subset.push_back(full_img_sizes[indices[i]]);
}
images = img_subset;
img_names = img_names_subset;
full_img_sizes = full_img_sizes_subset;
// Check if we still have enough images
num_images = static_cast<int>(img_names.size());
if (num_images < 2)
{
std::cout << "Need more images\n";
return -1;
}
//! (5) Estimate the homography
Ptr<Estimator> estimator = makePtr<HomographyBasedEstimator>();
vector<CameraParams> cameras;
if (!(*estimator)(features, pairwise_matches, cameras))
{
cout << "Homography estimation failed.\n";
return -1;
}
for (size_t i = 0; i < cameras.size(); ++i)
{
Mat R;
cameras[i].R.convertTo(R, CV_32F);
cameras[i].R = R;
std::cout << "\nInitial camera intrinsics #" << indices[i] + 1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R << std::endl;
}
// (6) Create the bundle adjuster
Ptr<detail::BundleAdjusterBase> adjuster = makePtr<detail::BundleAdjusterRay>();
adjuster->setConfThresh(conf_thresh);
Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U);
refine_mask(0, 0) = 1;
refine_mask(0, 1) = 1;
refine_mask(0, 2) = 1;
refine_mask(1, 1) = 1;
refine_mask(1, 2) = 1;
adjuster->setRefinementMask(refine_mask);
if (!(*adjuster)(features, pairwise_matches, cameras))
{
cout << "Camera parameters adjusting failed.\n";
return -1;
}
// Find median focal length
vector<double> focals;
for (size_t i = 0; i < cameras.size(); ++i)
{
focals.push_back(cameras[i].focal);
}
sort(focals.begin(), focals.end());
float warped_image_scale;
if (focals.size() % 2 == 1)
warped_image_scale = static_cast<float>(focals[focals.size() / 2]);
else
warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;
std::cout << "\nWarping images (auxiliary)... \n";
vector<Point> corners(num_images);
vector<UMat> masks_warped(num_images);
vector<UMat> images_warped(num_images);
vector<Size> sizes(num_images);
vector<UMat> masks(num_images);
// Prepare image masks
for (int i = 0; i < num_images; ++i)
{
masks[i].create(images[i].size(), CV_8U);
masks[i].setTo(Scalar::all(255));
}
// Warp images and their masks
Ptr<WarperCreator> warper_creator = makePtr<cv::CylindricalWarper>();
if (!warper_creator)
{
cout << "Can't create the warper \n";
return 1;
}
//! Create RotationWarper
Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));
//! Calculate warped corners/sizes/mask
for (int i = 0; i < num_images; ++i)
{
Mat_<float> K;
cameras[i].K().convertTo(K, CV_32F);
float swa = (float)seam_work_aspect;
K(0, 0) *= swa; K(0, 2) *= swa;
K(1, 1) *= swa; K(1, 2) *= swa;
corners[i] = warper->warp(images[i], K, cameras[i].R, INTER_LINEAR, BORDER_REFLECT, images_warped[i]);
sizes[i] = images_warped[i].size();
warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i]);
}
vector<UMat> images_warped_f(num_images);
for (int i = 0; i < num_images; ++i)
images_warped[i].convertTo(images_warped_f[i], CV_32F);
std::cout << "Compensating exposure... \n";
//! Compute and compensate exposure to reduce brightness differences
Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(ExposureCompensator::GAIN_BLOCKS);
if (dynamic_cast<BlocksCompensator*>(compensator.get()))
{
BlocksCompensator* bcompensator = dynamic_cast<BlocksCompensator*>(compensator.get());
bcompensator->setNrFeeds(1);
bcompensator->setNrGainsFilteringIterations(2);
bcompensator->setBlockSize(32, 32);
}
compensator->feed(corners, images_warped, masks_warped);
Ptr<SeamFinder> seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR);
seam_finder->find(images_warped_f, corners, masks_warped);
// Release unused memory
images.clear();
images_warped.clear();
images_warped_f.clear();
masks.clear();
Mat img_warped, img_warped_s;
Mat dilated_mask, seam_mask, mask, mask_warped;
Ptr<Blender> blender;
double compose_work_aspect = 1;
for (int img_idx = 0; img_idx < num_images; ++img_idx)
{
// Read image and resize it if necessary
full_img = imread(img_names[img_idx]);
if (!is_compose_scale_set)
{
is_compose_scale_set = true;
compose_work_aspect = compose_scale / work_scale;
// Update warped image scale
warped_image_scale *= static_cast<float>(compose_work_aspect);
warper = warper_creator->create(warped_image_scale);
// Update corners and sizes
for (int i = 0; i < num_images; ++i)
{
cameras[i].focal *= compose_work_aspect;
cameras[i].ppx *= compose_work_aspect;
cameras[i].ppy *= compose_work_aspect;
Size sz = full_img_sizes[i];
if (std::abs(compose_scale - 1) > 1e-1)
{
sz.width = cvRound(full_img_sizes[i].width * compose_scale);
sz.height = cvRound(full_img_sizes[i].height * compose_scale);
}
Mat K;
cameras[i].K().convertTo(K, CV_32F);
Rect roi = warper->warpRoi(sz, K, cameras[i].R);
corners[i] = roi.tl();
sizes[i] = roi.size();
}
}
if (abs(compose_scale - 1) > 1e-1)
resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
else
img = full_img;
full_img.release();
Size img_size = img.size();
Mat K, R;
cameras[img_idx].K().convertTo(K, CV_32F);
R = cameras[img_idx].R;
// Warp the current image : img => img_warped
warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);
// Warp the current image mask
mask.create(img_size, CV_8U);
mask.setTo(Scalar::all(255));
warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);
compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);
img_warped.convertTo(img_warped_s, CV_16S);
img_warped.release();
img.release();
mask.release();
dilate(masks_warped[img_idx], dilated_mask, Mat());
resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
mask_warped = seam_mask & mask_warped;
if (!blender)
{
blender = Blender::createDefault(Blender::MULTI_BAND, false);
Size dst_sz = resultRoi(corners, sizes).size();
float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
if (blend_width < 1.f){
blender = Blender::createDefault(Blender::NO, false);
}
else
{
MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get());
mb->setNumBands(static_cast<int>(ceil(log(blend_width) / log(2.)) - 1.));
}
blender->prepare(corners, sizes);
}
blender->feed(img_warped_s, mask_warped, corners[img_idx]);
}
/* ===========================================================================*/
// Blend image
std::cout << "\nBlending ...\n";
Mat result, result_mask;
blender->blend(result, result_mask);
imwrite("result.png", result);
imwrite("result_mask.png", result_mask);
std::cout << "\nWarp each center point, and draw solid circle.\n";
std::vector<cv::Scalar> colors = { {255,0,0}, {0, 255, 0}, {0, 0, 255} };
for (int idx = 0; idx < img_names.size(); ++idx) {
img = cv::imread(img_names[idx]);
Mat K;
cameras[idx].K().convertTo(K, CV_32F);
Mat R = cameras[idx].R;
cv::Point2f cpt = cv::Point2f(img.cols / 2, img.rows / 2);
cv::Point pt = calcWarpedPoint(cpt, K, R, warper, corners, sizes);
cv::circle(result, pt, 5, colors[idx], -1, cv::LINE_AA);
std::cout << cpt << " => " << pt << std::endl;
}
std::cout << "\nCheck `result.png`, `result_mask.png` and `result2.png`!\n";
imwrite("result2.png", result);
std::cout << "\nDone!\n";
/* ===========================================================================*/
return 0;
}
Some links that may be useful:
stitching_detailed.cpp : https://github.com/opencv/opencv/blob/4.0.1/samples/cpp/stitching_detailed.cpp
warper->warp(), warpPoint(), warpRoi(): https://github.com/opencv/opencv/blob/master/modules/stitching/src/warpers.cpp#L153
resultRoi() https://github.com/opencv/opencv/blob/master/modules/stitching/src/util.cpp#L116
Other links that may be interesting:
Converting opencv remap code from c++ to python
Split text lines in scanned document
How do I use the relationships between Flann matches to determine a sensible homography?

Count percentage of white pixels inside of RotatedRect OpenCV

How can I calculate the percentage of white pixels inside a cv::RotatedRect? I mean, how do I access a single pixel inside my cv::RotatedRect? If I could do that, I'd know what to do later. Thanks.
I've tried the solution from this thread, but I got exceptions: https://stackoverflow.com/a/28780359
std::vector<cv::RotatedRect> minRect(count.size());
for (int i = 0; i < count.size(); i++)
{
minRect[i] = cv::minAreaRect(cv::Mat(count[i]));
}
for (size_t i = 0; i < count.size(); i++){
if (cv::contourArea(count[i]) > 200) {
cv::Point2f rect_points[4];
minRect[i].points(rect_points);
// Now I'd like to calculate percentage of white pixels inside of RotatedRect, and if value returned by func would be smaller than 30%,continue;
for (int j = 0; j < 4; j++) {
cv::line(mask, rect_points[j], rect_points[(j + 1) % 4], cv::Scalar(0, 255, 0), 1, 8);
}
}
}
You can:
Work on the sub-image defined by cv::boundingRect
create the mask where all points inside the rotated rect are white with cv::fillConvexPoly
logical AND with the original image
count the number of white pixels with cv::countNonZero
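A compact sketch of those four steps wrapped into a helper function (the names are illustrative; the timed comparison below is the code that was actually measured):
double whiteRatioInRotatedRect(const cv::Mat1b &binary, const cv::RotatedRect &rr)
{
    // 1. Work on the sub-image defined by the bounding rect (clipped to the image)
    cv::Rect roi = rr.boundingRect() & cv::Rect(0, 0, binary.cols, binary.rows);
    // 2. Mask where all points inside the rotated rect are white, in ROI coordinates
    cv::Point2f pts[4];
    rr.points(pts);
    std::vector<cv::Point> poly(4);
    for (int i = 0; i < 4; ++i)
        poly[i] = cv::Point(pts[i]) - roi.tl();
    cv::Mat1b mask(roi.size(), uchar(0));
    cv::fillConvexPoly(mask, poly, cv::Scalar(255));
    // 3. Logical AND with the original image, 4. count the white pixels
    int white = cv::countNonZero(binary(roi) & mask);
    return white / double(rr.size.width * rr.size.height);
}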
The method proposed by John Henkel works, but in my (very quick) tests it is somewhere between 10 and 40 times slower.
Below is the code with both methods. You'll find small differences in the result, because the white pixels on the border of the rotated rect are handled differently.
#include <opencv2/opencv.hpp>
#include <chrono>
int main()
{
// Create binary image with random pixels b/W
cv::Mat1b img(5000, 5000);
cv::randu(img, cv::Scalar(0), cv::Scalar(256));
img = img > 127;
// Define a rotated rect
cv::Point2f center(2000, 2000);
cv::Size2f sz(1000, 500);
float angle = 30.f;
cv::RotatedRect rr(center, sz, angle);
// Get points
std::vector<cv::Point2f> points(4);
rr.points(points.data());
// Work on ROI
cv::Rect roi = rr.boundingRect();
// Area
float area = rr.size.width * rr.size.height;
//// DEBUG, Show rect
//cv::Mat3b out;
//cv::cvtColor(img, out, cv::COLOR_GRAY2BGR);
//for (int i = 0; i < 4; ++i) {
// cv::line(out, points[i], points[(i + 1) % 4], cv::Scalar(0, 0, 255));
//}
{
// --------------------
// Method #Miki
// --------------------
auto tic = std::chrono::high_resolution_clock::now();
cv::Mat1b sub_img = img(roi);
// Create rotated rect mask
cv::Mat1b mask(roi.size(), uchar(0));
std::vector<cv::Point> points_in_sub_image(4);
for (int i = 0; i < 4; ++i) {
points_in_sub_image[i] = cv::Point(points[i]) - roi.tl();
}
cv::fillConvexPoly(mask, points_in_sub_image, cv::Scalar(255));
// AND sub image with mask
cv::Mat1b inside_roi = sub_img & mask;
//// DEBUG, Draw green points
//for (int r = 0; r < sub_img.rows; ++r) {
// for (int c = 0; c < sub_img.cols; ++c) {
// if (inside_roi(r, c) > 0)
// {
// out(r + roi.y, c + roi.x) = cv::Vec3b(0, 255, 0);
// }
// }
//}
// Get actual count
int cnz = cv::countNonZero(inside_roi);
auto toc = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(toc - tic);
float percent_white_pixels = cnz / area;
std::cout << "percent_white_pixels: " << percent_white_pixels << " in " << elapsed.count() << " us" << std::endl;
}
{
// --------------------
// Method #John Henkel
// --------------------
auto tic = std::chrono::high_resolution_clock::now();
int cnz = 0;
for (int y = roi.y; y < roi.y + roi.height; ++y) {
for (int x = roi.x; x < roi.x + roi.width; ++x) {
if (
(img(y, x) > 0) &&
(cv::pointPolygonTest(points, cv::Point2f(x, y), false) >= 0.0)
)
{
// DEBUG, Draw blue points
//out(y, x) = cv::Vec3b(255, 0, 0);
++cnz;
}
}
}
auto toc = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(toc - tic);
float percent_white_pixels = cnz / area;
std::cout << "percent_white_pixels: " << percent_white_pixels << " in " << elapsed.count() << " us" << std::endl;
}
getchar();
return 0;
}
The best way I can think of to get the individual pixels would be to first obtain the bounding box of your rotated rectangle and then iterate through each of the pixels inside the box to see if they are in the rotated rectangle with pointPolygonTest. I'm not sure if there's a more efficient way to do it, but this should give you the results you're looking for.

Embed watermark using DCT on an image with OpenCV

I want to embed a watermark into an image using the DCT with C++ and OpenCV.
I split the image into 8x8 blocks and apply the DCT to each block.
Now I don't know what to do next. Can anyone give me a hint or help me?
Here is my work so far.
int main() {
Mat originalImage;
originalImage = imread("image.jpg");
if( !originalImage.data )
{
std::cout<< "Error loading original image!"<<std::endl;
return -1;
}
cout << "Working on image from image.jpg" << endl;
/// Create Windows
namedWindow("Original", 1);
imshow( "Original", originalImage );
int x = 0; int y = 0;
moveWindow("Original", x, y);
imshow("Original", originalImage);
x += 100; y += 100;
int width = originalImage.size().width;
int height = originalImage.size().height;
cout << "Original image Width x Height is " << width << "x" << height << endl;
// Leave original alone, work on a copy
Mat dctImage = originalImage.clone();
// Step through the copied image with rectangles size 8x8
// For each block, split into planes, do dct, and merge back
// into the block. (This will affect the image from
// which the block is selected each time.)
for (int i = 0; i < height; i += 8)
{
for (int j = 0; j < width; j+= 8)
{
Mat block = dctImage(Rect(j, i, 8, 8)); // Rect takes (x, y, width, height)
vector<Mat> planes;
split(block, planes);
vector<Mat> outplanes(planes.size());
for (size_t k = 0; k < planes.size(); k++)
{
planes[k].convertTo(planes[k], CV_32FC1);
dct(planes[k], outplanes[k]);
outplanes[k].convertTo(outplanes[k], CV_8UC1);
}
merge(outplanes, block);
}
}
namedWindow("dctBlockImage");
moveWindow("dctBlockImage", x, y);
imshow("dctBlockImage", dctImage);
x += 100; y += 100;
waitKey();
destroyAllWindows();
return 0;
}
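One common next step (sketched here purely as an illustration of a simple additive DCT-domain scheme; the payload, the strength alpha and the coefficient position (4, 3) are all assumptions, not values from this thread) is to nudge a mid-frequency coefficient of each block according to one watermark bit and then apply the inverse DCT. A minimal single-channel sketch:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
    Mat gray = imread("image.jpg", IMREAD_GRAYSCALE); // one channel keeps the sketch simple
    if (gray.empty())
        return -1;
    Mat plane;
    gray.convertTo(plane, CV_32FC1);
    vector<int> watermarkBits = { 1, 0, 1, 1, 0, 1 }; // example payload (assumption)
    const float alpha = 10.0f;                        // embedding strength (assumption)
    size_t bitIdx = 0;
    for (int y = 0; y + 8 <= plane.rows; y += 8)
    {
        for (int x = 0; x + 8 <= plane.cols; x += 8)
        {
            Mat block;
            plane(Rect(x, y, 8, 8)).copyTo(block); // continuous copy for dct()
            Mat coeffs;
            dct(block, coeffs);
            int bit = watermarkBits[bitIdx++ % watermarkBits.size()];
            coeffs.at<float>(4, 3) += alpha * (bit ? 1.0f : -1.0f); // mid-frequency coefficient
            Mat rebuilt;
            idct(coeffs, rebuilt);
            rebuilt.copyTo(plane(Rect(x, y, 8, 8))); // write the modified block back
        }
    }
    Mat watermarked;
    plane.convertTo(watermarked, CV_8UC1); // saturates back to [0, 255]
    imwrite("watermarked.png", watermarked);
    return 0;
}
Detection would then take the DCT of the same blocks in the received image and compare the chosen coefficient against the original (or a threshold) to recover the bits.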