Creating a vignette filter in OpenCV? - c++

How can we make a vignette filter in OpenCV? Do we need to implement an algorithm for it, or can we only play with the BGR values? How can we make this type of filter? I saw its implementation here but I didn't understand it clearly. Any guidance on the complete algorithm and its implementation is highly appreciated.
After Abid Rahman K's answer I tried this in C++:
int main()
{
    Mat v;
    Mat img = imread("D:\\2.jpg");
    img.convertTo(v, CV_32F);
    Mat a, b, c, d, e;
    c.create(img.rows, img.cols, CV_32F);
    d.create(img.rows, img.cols, CV_32F);
    e.create(img.rows, img.cols, CV_32F);
    a = getGaussianKernel(img.cols, 300, CV_32F);
    b = getGaussianKernel(img.rows, 300, CV_32F);
    c = b * a.t();
    double minVal;
    double maxVal;
    cv::minMaxLoc(c, &minVal, &maxVal);
    d = c / maxVal;
    e = v * d; // This line is causing the error
    imshow("venyiet", e);
    cvWaitKey();
}
d displays correctly, but the line e = v*d causes a runtime error:
OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type ==
CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) in unknown function, file ..
\..\..\src\opencv\modules\core\src\matmul.cpp, line 711
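For reference: the assertion fires because operator* on cv::Mat performs matrix multiplication (GEMM), which only supports 1- and 2-channel float or double matrices, while v here is a 3-channel CV_32FC3 image and d is single-channel. Since an element-wise product is what is wanted anyway, a minimal sketch of a fix (reusing the variable names from the snippet above) is to replicate the mask into 3 channels and use mul():
Mat d3;
Mat channels[] = { d, d, d };
merge(channels, 3, d3);    // make the mask 3-channel to match v
e = v.mul(d3);             // element-wise product instead of matrix multiplication
e.convertTo(e, CV_8UC3);   // back to 8-bit so imshow displays it correctly
imshow("venyiet", e);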

First of all, Abid Rahman K describes the easiest way to go about this filter. You should seriously study his answer with time and attention. Wikipedia's take on vignetting is also quite clarifying for those who have never heard about this filter.
Browny's implementation of this filter is considerably more complex. However, I ported his code to the C++ API and simplified it so you can follow the instructions yourself.
#include <math.h>
#include <vector>
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
// Helper function to calculate the distance between 2 points.
double dist(cv::Point a, cv::Point b)
{
    return sqrt(pow((double) (a.x - b.x), 2) + pow((double) (a.y - b.y), 2));
}
// Helper function that computes the longest distance from a corner to the given center point.
double getMaxDisFromCorners(const cv::Size& imgSize, const cv::Point& center)
{
    // given a rect and a point,
    // get which corner of the rect is farthest from the point
    std::vector<cv::Point> corners(4);
    corners[0] = cv::Point(0, 0);
    corners[1] = cv::Point(imgSize.width, 0);
    corners[2] = cv::Point(0, imgSize.height);
    corners[3] = cv::Point(imgSize.width, imgSize.height);
    double maxDis = 0;
    for (int i = 0; i < 4; ++i)
    {
        double dis = dist(corners[i], center);
        if (maxDis < dis)
            maxDis = dis;
    }
    return maxDis;
}
// Helper function that creates a gradient image.
// firstPt, radius and power are variables that control the artistic effect of the filter.
void generateGradient(cv::Mat& mask)
{
    cv::Point firstPt = cv::Point(mask.size().width/2, mask.size().height/2);
    double radius = 1.0;
    double power = 0.8;
    double maxImageRad = radius * getMaxDisFromCorners(mask.size(), firstPt);
    mask.setTo(cv::Scalar(1));
    for (int i = 0; i < mask.rows; i++)
    {
        for (int j = 0; j < mask.cols; j++)
        {
            double temp = dist(firstPt, cv::Point(j, i)) / maxImageRad;
            temp = temp * power;
            double temp_s = pow(cos(temp), 4);
            mask.at<double>(i, j) = temp_s;
        }
    }
}
// This is where the fun starts!
int main()
{
    cv::Mat img = cv::imread("stack-exchange-chefs.jpg");
    if (img.empty())
    {
        std::cout << "!!! Failed imread\n";
        return -1;
    }
    /*
    cv::namedWindow("Original", cv::WINDOW_NORMAL);
    cv::resizeWindow("Original", img.size().width/2, img.size().height/2);
    cv::imshow("Original", img);
    */
What img looks like:
    cv::Mat maskImg(img.size(), CV_64F);
    generateGradient(maskImg);
    /*
    cv::Mat gradient;
    cv::normalize(maskImg, gradient, 0, 255, CV_MINMAX);
    cv::imwrite("gradient.png", gradient);
    */
What maskImg looks like:
    cv::Mat labImg(img.size(), CV_8UC3);
    cv::cvtColor(img, labImg, CV_BGR2Lab);
    for (int row = 0; row < labImg.size().height; row++)
    {
        for (int col = 0; col < labImg.size().width; col++)
        {
            cv::Vec3b value = labImg.at<cv::Vec3b>(row, col);
            value.val[0] *= maskImg.at<double>(row, col); // darken the L (lightness) channel
            labImg.at<cv::Vec3b>(row, col) = value;
        }
    }
    cv::Mat output;
    cv::cvtColor(labImg, output, CV_Lab2BGR);
    //cv::imwrite("vignette.png", output);
    cv::namedWindow("Vignette", cv::WINDOW_NORMAL);
    cv::resizeWindow("Vignette", output.size().width/2, output.size().height/2);
    cv::imshow("Vignette", output);
    cv::waitKey();
    return 0;
}
What output looks like:
As stated in the code above, by changing the values of firstPt, radius and power you can achieve stronger/weaker artistic effects.
Good luck!

You can do a simple implementation using Gaussian Kernels available in OpenCV.
Load the image and get its number of rows and columns.
Create two Gaussian kernels of size rows and columns, say A and B. Their variance depends upon your needs.
C = B * transpose(A), i.e. multiply the column vector by the row vector such that the resulting array is the same size as the image.
D = C/C.max()
E = img*D
See the implementation below (for a grayscale image):
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('temp.jpg',0)
rows,cols = img.shape
a = cv2.getGaussianKernel(cols,300)
b = cv2.getGaussianKernel(rows,300)
c = b*a.T
d = c/c.max()
e = img*d
cv2.imwrite('vig2.png',e)
Below is my result:
Similarly, for a color image:
NOTE: Of course, it is centered. You will need to make additional modifications to move the focus to other places.

This one is similar to Abid's answer, but the code is for a color image:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('turtle.jpg',1)
rows,cols = img.shape[:2]
zeros = np.copy(img)
zeros[:,:,:] = 0
a = cv2.getGaussianKernel(cols,900)
b = cv2.getGaussianKernel(rows,900)
c = b*a.T
d = c/c.max()
zeros[:,:,0] = img[:,:,0]*d
zeros[:,:,1] = img[:,:,1]*d
zeros[:,:,2] = img[:,:,2]*d
cv2.imwrite('vig2.png',zeros)
Original Image (Taken from Pexels under CC0 Licence)
After applying the vignette with a sigma of 900 (i.e. `cv2.getGaussianKernel(cols,900)`):
After applying the vignette with a sigma of 300 (i.e. `cv2.getGaussianKernel(cols,300)`):
Additionally, you can focus the vignette effect on the coordinates of your choice by simply shifting the mean of the Gaussian to your focus point, as follows.
import cv2
import numpy as np
img = cv2.imread('turtle.jpg',1)
fx,fy = 1465,180 # Add your focus coordinates here
#fx,fy = 145,1000 # coordinates used for the second example below
sigma = 300 # Standard deviation of the Gaussian
rows,cols = img.shape[:2]
zeros = np.copy(img)
zeros[:,:,:] = 0
# Build oversized kernels and slice them so that the peak lands on (fx,fy)
a = cv2.getGaussianKernel(2*cols,sigma)[cols-fx:2*cols-fx]
b = cv2.getGaussianKernel(2*rows,sigma)[rows-fy:2*rows-fy]
c = b*a.T
d = c/c.max()
zeros[:,:,0] = img[:,:,0]*d
zeros[:,:,1] = img[:,:,1]*d
zeros[:,:,2] = img[:,:,2]*d
#zeros = add_alpha(zeros) # add_alpha is a helper not defined in this snippet
cv2.imwrite('vig4.png',zeros)
The size of the turtle image is 1980x1200 (WxH). The following is an example focusing on the coordinate 1465,180 (i.e. fx,fy = 1465,180). (Note that I have reduced the variance to exemplify the change in focus.)
The following is an example focusing on the coordinate 145,1000 (i.e. fx,fy = 145,1000):

Here is my C++ implementation of a vignette filter on a color image using OpenCV. It is faster than the accepted answer.
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
// Fast parabolic approximation of cos(x), accurate enough for this effect.
double fastCos(double x){
    x += 1.57079632; // cos(x) = sin(x + pi/2)
    if (x > 3.14159265)
        x -= 6.28318531;
    if (x < 0)
        return 1.27323954 * x + 0.405284735 * x * x;
    else
        return 1.27323954 * x - 0.405284735 * x * x;
}
// Euclidean distance between (ax, ay) and (bx, by).
double dist(double ax, double ay, double bx, double by){
    return sqrt((ax - bx)*(ax - bx) + (ay - by)*(ay - by));
}
int main(int argc, char** argv){
    Mat src = cv::imread("filename_of_your_image.jpg");
    Mat dst = Mat::zeros(src.size(), src.type());
    double radius; // value greater than 0;
                   // the greater the value, the less visible the vignette;
                   // for a medium vignette use a value in the range 0.5-1.5
    cin >> radius;
    double cx = (double)src.cols/2, cy = (double)src.rows/2;
    double maxDis = radius * dist(0, 0, cx, cy);
    double temp;
    for (int y = 0; y < src.rows; y++) {
        for (int x = 0; x < src.cols; x++) {
            temp = fastCos(dist(cx, cy, x, y) / maxDis);
            temp *= temp;
            dst.at<Vec3b>(y, x)[0] =
                saturate_cast<uchar>((src.at<Vec3b>(y, x)[0]) * temp);
            dst.at<Vec3b>(y, x)[1] =
                saturate_cast<uchar>((src.at<Vec3b>(y, x)[1]) * temp);
            dst.at<Vec3b>(y, x)[2] =
                saturate_cast<uchar>((src.at<Vec3b>(y, x)[2]) * temp);
        }
    }
    imshow("Vignetted Image", dst);
    waitKey(0);
}

Here is a C++ implementation of vignetting for a grayscale image:
#include "opencv2/opencv.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat test = imread("test.jpg", IMREAD_GRAYSCALE);
    Mat kernel_X = getGaussianKernel(test.cols, 100);
    Mat kernel_Y = getGaussianKernel(test.rows, 100);
    Mat kernel_X_transpose;
    transpose(kernel_X, kernel_X_transpose);
    Mat kernel = kernel_Y * kernel_X_transpose;
    Mat mask_v, proc_img;
    normalize(kernel, mask_v, 0, 1, NORM_MINMAX);
    test.convertTo(proc_img, CV_64F);
    multiply(mask_v, proc_img, proc_img);
    convertScaleAbs(proc_img, proc_img);
    imshow("Vignette", proc_img);
    waitKey(0);
    return 0;
}

Related

Slow motion in C++

I want to do slow motion. I've seen an implementation here: https://github.com/vaibhav06891/SlowMotion
I modified the code to generate only one frame.
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/tracking.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <fstream>
#include <string>
using namespace cv;
using namespace std;
#define CLAMP(x,min,max) ( ((x) < (min)) ? (min) : ( ((x) > (max)) ? (max) : (x) ) )
int main(int argc, char** argv)
{
    Mat frame, prevframe;
    prevframe = imread("img1.png");
    frame = imread("img2.png");
    Mat prevgray, gray;
    Mat fflow, bflow;
    Mat flowf(frame.rows, frame.cols, CV_8UC3); // the forward co-ordinates for interpolation
    flowf.setTo(Scalar(255,255,255));
    Mat flowb(frame.rows, frame.cols, CV_8UC3); // the backward co-ordinates for interpolation
    flowb.setTo(Scalar(255,255,255));
    Mat final(frame.rows, frame.cols, CV_8UC3);
    int fx, fy, bx, by;
    cvtColor(prevframe, prevgray, COLOR_BGR2GRAY); // convert to gray space for optical flow calculation
    cvtColor(frame, gray, COLOR_BGR2GRAY);
    calcOpticalFlowFarneback(prevgray, gray, fflow, 0.5, 3, 15, 3, 3, 1.2, 0); // forward optical flow
    calcOpticalFlowFarneback(gray, prevgray, bflow, 0.5, 3, 15, 3, 3, 1.2, 0); // backward optical flow
    for (int y = 0; y < frame.rows; y++)
    {
        for (int x = 0; x < frame.cols; x++)
        {
            const Point2f fxy = fflow.at<Point2f>(y,x);
            fy = CLAMP(y+fxy.y*0.5, 0, frame.rows-1); // clamp to the last valid index
            fx = CLAMP(x+fxy.x*0.5, 0, frame.cols-1);
            flowf.at<Vec3b>(fy,fx) = prevframe.at<Vec3b>(y,x);
            const Point2f bxy = bflow.at<Point2f>(y,x);
            by = CLAMP(y+bxy.y*(1-0.5), 0, frame.rows-1);
            bx = CLAMP(x+bxy.x*(1-0.5), 0, frame.cols-1);
            flowb.at<Vec3b>(by,bx) = frame.at<Vec3b>(y,x);
        }
    }
    final = flowf*(1-0.5) + flowb*0.5; // combination of the forward and backward matrices
    cv::medianBlur(final, final, 3);
    imwrite("output.png", final);
    return 0;
}
But the result is not as expected.
For the images:
The result is :
Does anyone know what is the problem?
The optical flow algorithm won't work for your test images.
The first problem is that your test images have very little variation in neighbouring pixel values. The completely black lines and the single-colour square give the optical flow algorithm no clues about where the image areas moved, because the algorithm cannot process the whole image at once and calculates optical flow within a small 15x15 pixel window (as you set it in calcOpticalFlowFarneback).
The second problem is that your test images differ too much. The distance between the positions of the brown square is too big; again, Farneback is not able to detect it.
Try the code with some real-life video frames, or edit your tests to be less monotonous (add some texture to the square, the background and the rectangle lines) and bring the squares closer to each other in the images (try a 2-10 px distance). You can also play with the calcOpticalFlowFarneback arguments (read here) to suit your conditions.
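For instance, a call with a larger window and more pyramid levels (the values below are only a starting point to experiment with, not tuned for any particular footage) might look like this:
// pyr_scale=0.5, levels=5, winsize=31, iterations=5, poly_n=7, poly_sigma=1.5
calcOpticalFlowFarneback(prevgray, gray, fflow, 0.5, 5, 31, 5, 7, 1.5, 0);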
You can use this code to save the optical flow you get to an image for debugging:
Mat debugImage = Mat::zeros(fflow.size(), CV_8UC3);
float hsvHue, magnitude;
for (int x = 0; x < fflow.cols; x++)
{
    for (int y = 0; y < fflow.rows; y++)
    {
        auto& item = fflow.at<Vec2f>(y, x);
        magnitude = sqrtf(item[0] * item[0] + item[1] * item[1]);
        hsvHue = atan2f(item[1], item[0]) / static_cast<float>(CV_PI) * 180.f;
        // div 2 to fit 0..255 range
        hsvHue = (hsvHue >= 0. ? hsvHue : (360.f + hsvHue)) / 2.f;
        debugImage.at<Vec3b>(y, x)[0] = static_cast<uchar>(hsvHue);
        debugImage.at<Vec3b>(y, x)[1] = 255;
        debugImage.at<Vec3b>(y, x)[2] = static_cast<uchar>(255.f * magnitude);
    }
}
cvtColor(debugImage, debugImage, CV_HSV2BGR);
imwrite("OpticalFlow.png", debugImage);
Here pixel flow direction will be represented with color (hue), and pixel move distance will be represented with brightness.
Try to use these images I created:
Also note that
for (int y = 0; y < frame.rows; y++)
{
    for (int x = 0; x < frame.cols; x++)
    {
        const Point2f fxy = fflow.at<Point2f>(y, x);
        fy = CLAMP(y + fxy.y*0.5, 0, frame.rows-1);
        fx = CLAMP(x + fxy.x*0.5, 0, frame.cols-1);
        flowf.at<Vec3b>(fy, fx) = prevframe.at<Vec3b>(y, x);
        ...
this code won't color those flowf pixels onto which no source pixel is mapped, and the optical flow algorithm can produce such situations. I would change it to:
for (int y = 0; y < frame.rows; y++)
{
    for (int x = 0; x < frame.cols; x++)
    {
        const Point2f fxy = fflow.at<Point2f>(y, x);
        fy = CLAMP(y - fxy.y*0.5, 0, frame.rows-1);
        fx = CLAMP(x - fxy.x*0.5, 0, frame.cols-1);
        flowf.at<Vec3b>(y, x) = prevframe.at<Vec3b>(fy, fx);
        const Point2f bxy = bflow.at<Point2f>(y, x);
        by = CLAMP(y - bxy.y*(1 - 0.5), 0, frame.rows-1);
        bx = CLAMP(x - bxy.x*(1 - 0.5), 0, frame.cols-1);
        flowb.at<Vec3b>(y, x) = frame.at<Vec3b>(by, bx);
    }
}
With this changed code and my tests I get this output:

Affine transform in C++

I am currently working on a school project on image processing in Visual Studio 2013, using OpenCV 3.1. My goal (for now) is to transform an image, using an affine transform, so that the trapezoidal board is transformed into a rectangle.
To do that I have subtracted certain channels and thresholded the image, so that now I have a binary image with white blocks in the corners of the board.
Now I need to pick 4 white points that are closest to each corner and (using affine transform) set them as corners of the transformed image.
And since this is my first time using Open CV, I am stuck.
Here's my code:
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <stdlib.h>
#include <stdio.h>
#include <vector>

int main(){
    double dist;
    cv::Mat image;
    image = cv::imread("C:\\Users\\...\\ideal.png");
    cv::Mat imagebin;
    imagebin = cv::imread("C:\\Users\\...\\ideal.png");
    cv::Mat imageerode;
    //cv::imshow("Test", image);
    cv::Mat src = cv::imread("C:\\Users\\...\\ideal.png");
    std::vector<cv::Mat> img_rgb;
    cv::split(src, img_rgb);
    //cv::imshow("ideal.png", img_rgb[2] - img_rgb[1]);
    cv::threshold(img_rgb[2] - 0.5*img_rgb[1], imagebin, 20, 255, CV_THRESH_BINARY);
    cv::erode(imagebin, imageerode, cv::Mat(), cv::Point(1, 1), 2, 1, 1);
    cv::erode(imageerode, imageerode, cv::Mat(), cv::Point(1, 1), 2, 1, 1);
    // cv::Point2f array[4];
    // std::vector<cv::Point2f> array;
    for (int i = 0; i < imageerode.cols; i++)
    {
        for (int j = 0; j < imageerode.rows; j++)
        {
            if (imageerode.at<uchar>(i,j) > 0)
            {
                dist = std::min(dist, i + j);
            }
        }
    }
    //cv::imshow("Test binary", imagebin);
    cv::namedWindow("Test", CV_WINDOW_NORMAL);
    cv::imshow("Test", imageerode);
    cv::waitKey(0);
    std::cout << "Hello world!";
    return 0;
}
As you can see, I don't know how to loop over each white pixel using image.at and save the distance to each corner.
I would appreciate some help.
Also: I don't just want this done for me. I really want to learn how to do it, but I'm currently stuck.
Thank you
EDIT:
I think I'm done with finding the coordinates of the 4 points. But I can't really get the idea of the warpAffine syntax.
Code:
for (int i = 0; i < imageerode.cols; i++)
{
    for (int j = 0; j < imageerode.rows; j++)
    {
        if (imageerode.at<uchar>(i, j) > 0)
        {
            if (i + j < distances[0])
            {
                distances[0] = i + j;
                coordinates[0] = i;
                coordinates[1] = j;
            }
            if (i + imageerode.cols-j < distances[1])
            {
                distances[1] = i + imageerode.cols-j;
                coordinates[2] = i;
                coordinates[3] = j;
            }
            if (imageerode.rows-i + j < distances[2])
            {
                distances[2] = imageerode.rows-i + j;
                coordinates[4] = i;
                coordinates[5] = j;
            }
            if (imageerode.rows-i + imageerode.cols-j < distances[3])
            {
                distances[3] = imageerode.rows-i + imageerode.cols-j;
                coordinates[6] = i;
                coordinates[7] = j;
            }
        }
    }
}
Here I set all of the distances values beforehand to imageerode.cols + imageerode.rows, since that is the maximum value they can take.
Also note that I'm using taxicab geometry (|x1-x2| + |y1-y2| instead of the Euclidean distance); I was told it's faster and the results are pretty much the same.
If anyone could help me with warpAffine it would be great; I don't understand where to put the coordinates I have found.
Thank you
I am not sure what your "trapezoidal board" looks like, but if it has a perspective distortion, like when you capture a rectangle with a camera, then an affine transform is not enough. Use a perspective transform instead. I think Features2D + Homography to find a known object is very close to what you want to do.
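As a rough sketch of the perspective route (the corner coordinates and output size below are hypothetical placeholders; in practice you would plug in the four white corner points found in your edit):
// Hypothetical corners (top-left, top-right, bottom-left, bottom-right) and output size.
cv::Point2f tl(83, 41), tr(520, 38), bl(95, 410), br(540, 405);
int dstW = 450, dstH = 370;
std::vector<cv::Point2f> srcPts = { tl, tr, bl, br };
std::vector<cv::Point2f> dstPts = {
    cv::Point2f(0, 0), cv::Point2f(dstW - 1, 0),
    cv::Point2f(0, dstH - 1), cv::Point2f(dstW - 1, dstH - 1) };
// Four point pairs fully determine the 3x3 perspective matrix.
cv::Mat H = cv::getPerspectiveTransform(srcPts, dstPts);
cv::Mat warped;
cv::warpPerspective(image, warped, H, cv::Size(dstW, dstH));
Note that the order of the points in srcPts and dstPts must correspond.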

OpenCV : algorithm for simple image rotation and reduction

I have tried image rotation and reduction (JPEG) with getRotationMatrix2D(center, angle, scale); and warpAffine(image1, image3, rotation, image3.size()); and I got the result I wanted (image below):
for (int r = 0; r < image1.rows; r++) {
    for (int c = r + 1; c < image1.cols; c++) {
        Point center = Point(image1.cols / 2, image1.rows / 2);
        Point center1 = Point(image1.cols / 2, image1.rows / 2);
        double angle = 90.0;
        double scale = 1;
        double angle1 = 90.0;
        double scale1 = 0.5;
        rotation = getRotationMatrix2D(center, angle, scale);
        rotation1 = getRotationMatrix2D(center1, angle1, scale1);
but I want to learn some simple rotation and reduction algorithms (simple for a beginner like me), without using any library, to get the same result.
After searching for various solutions, I ended up with this, from https://gamedev.stackexchange.com/questions/67613/how-can-i-rotate-a-bitmap-without-d3d-or-opengl
Can anyone break down, bit by bit, the simple linear algebra in my pseudocode and explain it to me?
EDIT:
Reduction code
void reduction(Mat imgC)
{
    for (int r = 0; r < imgC.rows; r++)
    {
        for (int c = 0; c < imgC.cols; c++)
        {
            // multiply before dividing: (125 / 256) in integer math truncates to 0
            int new_x = c * 125 / 256;
            int new_y = r * 125 / 256;
            imgC.at<uchar>(r, c) = imgC.at<uchar>(new_y, new_x);
        }
    }
}
In this example I have loaded two images: one grayscale and one color. Both are the same image, so you can easily see how to handle rotation with the mathematical equation. Please see the example below, which is very easy to understand; in a similar manner you can add scaling and reduction. Each point is converted according to the equation, and the color value is set at the new location. Here is the code:
#include <iostream>
#include <string>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace std;
using namespace cv;
#define PIPI 3.14159265

int main()
{
    Mat img = imread("C:/Users/dell2/Desktop/DSC00587.JPG",0);  // loading grayscale image
    Mat imgC = imread("C:/Users/dell2/Desktop/DSC00587.JPG",1); // loading color image
    Mat rotC(imgC.cols, imgC.rows, imgC.type());
    rotC = Scalar(0,0,0);
    Mat rotG(img.cols, img.rows, img.type());
    rotG = Scalar(0,0,0);
    float angle = 90.0 * PIPI / 180.0;
    for (int r = 0; r < imgC.rows; r++)
    {
        for (int c = 0; c < imgC.cols; c++)
        {
            float new_px = c * cos(angle) - r * sin(angle);
            float new_py = c * sin(angle) + r * cos(angle);
            Point pt((int)-new_px, (int)new_py);
            // color image
            rotC.at<Vec3b>(pt) = imgC.at<Vec3b>(r,c); // assign color value at new location from original image
            // grayscale image
            rotG.at<uchar>(pt) = img.at<uchar>(r,c);  // assign gray value at new location from original image
        }
    }
    imshow("color", rotC);
    imshow("gray", rotG);
    waitKey(0);
    return 0;
}
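One caveat with the approach above: it maps source pixels forward into the destination, which leaves unfilled holes as soon as you add reduction (several source pixels land on the same destination pixel, and some destination pixels receive nothing). The usual alternative is inverse mapping: loop over destination pixels and sample the source. A sketch for a grayscale image (my own illustration of the idea, not code from the answer above):
// Rotate about the image centre by angleDeg and scale by 'scale', using inverse
// mapping with nearest-neighbour sampling so every destination pixel gets a value.
Mat rotateScale(const Mat& src, float angleDeg, float scale)
{
    Mat dst(src.rows, src.cols, src.type(), Scalar(0));
    float a = angleDeg * 3.14159265f / 180.0f;
    float cx = src.cols / 2.0f, cy = src.rows / 2.0f;
    for (int r = 0; r < dst.rows; r++) {
        for (int c = 0; c < dst.cols; c++) {
            // destination -> source: apply the inverse rotation, then undo the scale
            float x = ( cos(a) * (c - cx) + sin(a) * (r - cy)) / scale + cx;
            float y = (-sin(a) * (c - cx) + cos(a) * (r - cy)) / scale + cy;
            int sx = (int)(x + 0.5f), sy = (int)(y + 0.5f);
            if (sx >= 0 && sx < src.cols && sy >= 0 && sy < src.rows)
                dst.at<uchar>(r, c) = src.at<uchar>(sy, sx);
        }
    }
    return dst;
}
Calling rotateScale(img, 90.0f, 0.5f) reproduces the rotation-plus-reduction from the question in a single pass, with no holes.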

Video Stabilization

I'm researching the field of video stabilization. I implemented an application using OpenCV.
My progress is as follows:
SURF point extraction
Matching
estimateRigidTransform
warpAffine
But the resulting video is not stable. Can anyone help me with this problem or provide some source code links to improve it?
Sample video: Hippo video
Here is my code [EDIT]
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/opencv.hpp>
const double smooth_level = 0.7;
using namespace cv;
using namespace std;
struct TransformParam
{
TransformParam() {}
TransformParam(double _dx, double _dy, double _da) {
dx = _dx;
dy = _dy;
da = _da;
}
double dx; // translation x
double dy; // translation y
double da; // angle
};
int main( int argc, char** argv )
{
VideoCapture cap ("test12.avi");
Mat cur, cur_grey;
Mat prev, prev_grey;
cap >> prev;
cvtColor(prev, prev_grey, COLOR_BGR2GRAY);
// Step 1 - Get previous to current frame transformation (dx, dy, da) for all frames
vector <TransformParam> prev_to_cur_transform; // previous to current
int k=1;
int max_frames = cap.get(CV_CAP_PROP_FRAME_COUNT);
VideoWriter writeVideo ("stable.avi",0,30,cvSize(prev.cols,prev.rows),true);
Mat last_T;
double avg_dx = 0, avg_dy = 0, avg_da = 0;
Mat smooth_T(2,3,CV_64F);
while(true) {
cap >> cur;
if(cur.data == NULL) {
break;
}
cvtColor(cur, cur_grey, COLOR_BGR2GRAY);
// vector from prev to cur
vector <Point2f> prev_corner, cur_corner;
vector <Point2f> prev_corner2, cur_corner2;
vector <uchar> status;
vector <float> err;
goodFeaturesToTrack(prev_grey, prev_corner, 200, 0.01, 30);
calcOpticalFlowPyrLK(prev_grey, cur_grey, prev_corner, cur_corner, status, err);
// weed out bad matches
for(size_t i=0; i < status.size(); i++) {
if(status[i]) {
prev_corner2.push_back(prev_corner[i]);
cur_corner2.push_back(cur_corner[i]);
}
}
// translation + rotation only
Mat T = estimateRigidTransform(prev_corner2, cur_corner2, false);
// in rare cases no transform is found. We'll just use the last known good transform.
if(T.data == NULL) {
last_T.copyTo(T);
}
T.copyTo(last_T);
// decompose T
double dx = T.at<double>(0,2);
double dy = T.at<double>(1,2);
double da = atan2(T.at<double>(1,0), T.at<double>(0,0));
prev_to_cur_transform.push_back(TransformParam(dx, dy, da));
avg_dx = (avg_dx * smooth_level) + (dx * (1- smooth_level));
avg_dy = (avg_dy * smooth_level) + (dy * (1- smooth_level));
avg_da = (avg_da * smooth_level) + (da * (1- smooth_level));
smooth_T.at<double>(0,0) = cos(avg_da);
smooth_T.at<double>(0,1) = -sin(avg_da);
smooth_T.at<double>(1,0) = sin(avg_da);
smooth_T.at<double>(1,1) = cos(avg_da);
smooth_T.at<double>(0,2) = avg_dx;
smooth_T.at<double>(1,2) = avg_dy;
Mat stable;
warpAffine(prev,stable,smooth_T,prev.size());
Mat canvas = Mat::zeros(cur.rows, cur.cols*2+10, cur.type());
prev.copyTo(canvas(Range::all(), Range(0, prev.cols)));
stable.copyTo(canvas(Range::all(), Range(prev.cols+10, prev.cols*2+10)));
imshow("before and after", canvas);
waitKey(20);
writeVideo.write(stable);
cur.copyTo(prev);
cur_grey.copyTo(prev_grey);
k++;
}
}
First, you can just blur your image; it helps a bit. Second, you can easily smooth your transform matrix with the simplest implementation of exponential smoothing, A(t+1) = a*A(t) + (1-a)*A(t+1), and play with the a-value in the [0;1] range. Third, you can turn off some types of transformations, like rotation or shift.
Here is code example:
// ("new" is a reserved word in C++, so the frames are named newFrame/oldFrame here)
t = estimateRigidTransform(newFrame, oldFrame, 0); // 0 means not all transformations (5 of 6)
if (!t.empty()) {
    // t(Range(0,2), Range(0,2)) = Mat::eye(2, 2, CV_64FC1); // turning off rotation
    // t.at<double>(0,2) = 0; t.at<double>(1,2) = 0; // turning off shift dx and dy
    tAvrg = tAvrg*a + t*(1-a); // a - smooth level in [0;1] range, play with it
    warpAffine(newFrame, stable, tAvrg, Size(newFrame.cols, newFrame.rows));
}

Accessing certain pixel RGB value in openCV

I have searched the internet and Stack Overflow thoroughly, but I haven't found an answer to my question:
How can I get/set (both) the RGB value of a certain (given by x,y coordinates) pixel in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already a "Pixel access in OpenCV 2.2" thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from a close friend (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the RGB (it might be ordered as BGR) vector of type cv::Vec3b:
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the stride is equal to the width of the image.
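If the Mat is not continuous (for example when it is a ROI of a larger image), the stride differs from width*channels, and a safer variant of the same idea goes through the row pointer (a small sketch):
// ptr() accounts for the row stride (frame.step), so this also works for
// non-continuous Mats such as ROIs.
const uchar* row = frame.ptr<uchar>(y);
uchar b = row[frame.channels()*x + 0];
uchar g = row[frame.channels()*x + 1];
uchar r = row[frame.channels()*x + 2];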
A piece of code is easier for people who have this problem. Here is my code; you can use it directly. Please note that OpenCV stores pixels as BGR.
// src_, vWidth_, vHeight_ and windowName_ are assumed to come from the surrounding class (not shown)
cv::Mat vImage_(vHeight_, vWidth_, CV_32FC3); // the Mat must be allocated before writing with at<>
if(src_)
{
    cv::Vec3f vec_;
    for(int i = 0; i < vHeight_; i++)
        for(int j = 0; j < vWidth_; j++)
        {
            vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0); // please note that OpenCV stores pixels as BGR
            vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;
            ++src_;
        }
}
if(!vImage_.data) // check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow(windowName_, CV_WINDOW_AUTOSIZE);
    cv::imshow(windowName_, vImage_); // show the image
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
uchar* value = img2.data; // pointer to the first byte of the pixel data; the image is one flat array of channel values
int r = 2;
// walks the flat array and sets every pixel to (255, 0, 0) in BGR order, i.e. pure blue
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
#include <boost/math/constants/constants.hpp> // for boost::math::constants::pi
const double pi = boost::math::constants::pi<double>();

cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse){
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width/2;
    float minor_axis = ellipse.size.height/2;
    cv::Point pixel;
    float a,b,c,d;
    for(int x = 0; x < image.cols; x++)
    {
        for(int y = 0; y < image.rows; y++)
        {
            // rotate (x,y) into the ellipse's own coordinate frame
            auto u = cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
            distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if(distance <= 1) // point lies inside the ellipse: mark its green channel
            {
                image.at<cv::Vec3b>(y,x)[1] = 255;
            }
        }
    }
    return image;
}