Ways to better thread the code to decrease computation time [closed] - c++

I have written OpenCV code that reads a video, looks for red pixels in each frame, and exports the frame as a PNG file if the number of red pixels exceeds a certain amount. The code works well, but I am looking for ways to further reduce computation time, because the videos are 4-5 hours long. I was reading posts on using parallel_pipeline and was wondering whether it would substantially speed up the process. Based on what I read, it seems that I would have to assign a thread to each major task (reading video frames, color detection/thresholding with inRange, and image saving). So my questions are:
1) Would this speed up the process compared to the default multithreading that OpenCV does?
2) Given what the code needs to do, are there more appropriate approaches to multithreading than parallel_pipeline?
I am fairly new to this topic, so any help is much appreciated!
/**
 * #CheckMotionParallel
 * #Motion detection using color detection and image thresholding
 */
//opencv
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>
//C
#include <stdio.h>
//C++
#include <iostream>
#include <sstream>
//TBB
#include "tbb/blocked_range.h"
#include "tbb/parallel_for.h"
#include "tbb/parallel_reduce.h"
#include "tbb/task_scheduler_init.h"
#include "tbb/mutex.h"
#include "tbb/tbb_thread.h"
#include "tbb/blocked_range2d.h"

using namespace cv;
using namespace std;
using namespace tbb;

void help();

void help()
{
    cout
        << "--------------------------------------------------------------------------" << endl
        << "Note for program CheckMotion" << endl
        << "CheckMotion does the following" << endl
        << "1) It searches each frame in a video and looks for a specified range of colors in the frame" << endl
        << "2) Pixels falling within the range will be converted to white while everything else is turned to black" << endl
        << "3) For each frame, the program gives: frame number/time stamp, total pixel count, and white pixel count" << endl
        << "4) For frames whose white pixel count exceeds a threshold, it will export those frames as individual png files" << endl
        << "--------------------------------------------------------------------------" << endl
        << endl;
}

int64 startTime;
int NumThreads = task_scheduler_init::default_num_threads();

int main(int argc, char**)
{
    //Print out program note
    help();

    ///Part I: Read in the video
    VideoCapture cap("/Users/chi/Desktop/Video analyses/testvideo4.mp4");
    //Loop through the frames if the video could be opened
    if (cap.isOpened()) {
        startTime = getTickCount();
        Mat frame;
        for (;;)
        {
            double tfreq = getTickFrequency();
            double secs = ((double) getTickCount() - startTime) / tfreq;
            cap >> frame;
            //Optionally show each frame in a video window
            // namedWindow("Frame");
            // imshow("Frame",frame);
            waitKey(10);
            //Create a string for the frame number that gets updated on each cycle of the loop
            stringstream ss;
            ss << cap.get(CAP_PROP_POS_FRAMES);
            string FrameNumberString = ss.str();
            stringstream maskedfilename;
            stringstream rawfilename;
            //Create filenames for later use in result output and image saving, using the frame number as ref
            maskedfilename << "/Users/chi/Desktop/test/masked" << FrameNumberString.c_str() << ".png";
            rawfilename << "/Users/chi/Desktop/test/raw" << FrameNumberString.c_str() << ".png";

            ///Part II: Image thresholding and image saving
            //Object representing the new image after thresholding
            Mat masked;
            //inRange converts pixels that fall within the specified range to white and everything else to black
            //The range is specified by a lower [Scalar(10,0,90)] and an upper [Scalar(50,50,170)] threshold
            //A color is defined by its BGR values
            //The thresholded image is stored in "masked"
            inRange(frame, Scalar(10,0,90), Scalar(50,50,170), masked);
            //Integer variables for the total pixel count and white pixel count of each frame
            int totalpixel;
            int whitepixel;
            //Total pixel count equals the number of rows times the number of columns of the frame
            totalpixel = masked.rows * masked.cols;
            //Use countNonZero to count the number of white pixels
            whitepixel = countNonZero(masked);
            //Output frame number, total pixel count, and white pixel count for each frame
            //Exit the loop when reaching the last frame (i.e. when the pixel count drops to 0)
            if (totalpixel == 0) {
                cout << "End of the video" << endl;
                cout << "Number of threads: " << NumThreads << endl;
                cap.release();
                break;
            }
            else {
                cout
                    << "Frame:" << ss.str() << endl
                    << "Number of total pixels:" << totalpixel << endl
                    << "Pixels of target colors:" << whitepixel << endl
                    << "Run time = " << fixed << secs << " seconds" << endl
                    << endl;
                //Save frames whose white pixel count exceeds a user-determined value (50 in the present case)
                //Save both the original and the processed images
                if (whitepixel > 50) {
                    imwrite(rawfilename.str(), frame);
                    imwrite(maskedfilename.str(), masked);
                }
            }
        }
    }
}

Just remove this line :)
waitKey(10);
It waits at least 10 ms for a keypress on every single frame, and since nothing is being displayed you do not need it; over a 4-5 hour video those waits add up to most of your run time. Then replace endl with '\n', because endl also flushes the output stream on every use.
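
If, after those two fixes, you still want to overlap decoding, thresholding, and PNG encoding, a parallel_pipeline version of the loop could look roughly like the sketch below. This is an untested sketch against the classic TBB API (oneTBB spells the modes tbb::filter_mode::... instead of tbb::filter::...); the threshold values and paths are taken from the question.

#include "tbb/pipeline.h"
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <sstream>

struct Item {
    cv::Mat frame;   // decoded frame
    cv::Mat masked;  // inRange() result
    int index = 0;   // frame number, for building filenames
};

void runPipeline(cv::VideoCapture& cap)
{
    int frameIndex = 0;
    tbb::parallel_pipeline(
        8, // max frames in flight; tune to core count and memory
        // Stage 1 (serial): VideoCapture is not thread-safe, so decode serially
        tbb::make_filter<void, Item>(
            tbb::filter::serial_in_order,
            [&](tbb::flow_control& fc) -> Item {
                Item item;
                cap >> item.frame;
                if (item.frame.empty()) { fc.stop(); return item; }
                item.frame = item.frame.clone(); // some backends reuse the decode buffer
                item.index = ++frameIndex;
                return item;
            }) &
        // Stage 2 (parallel): thresholding is independent per frame
        tbb::make_filter<Item, Item>(
            tbb::filter::parallel,
            [](Item item) -> Item {
                cv::inRange(item.frame, cv::Scalar(10, 0, 90),
                            cv::Scalar(50, 50, 170), item.masked);
                return item;
            }) &
        // Stage 3 (parallel): PNG encoding; assumed safe here because each
        // token writes to distinct files
        tbb::make_filter<Item, void>(
            tbb::filter::parallel,
            [](const Item& item) {
                if (cv::countNonZero(item.masked) > 50) {
                    std::ostringstream raw, masked;
                    raw << "/Users/chi/Desktop/test/raw" << item.index << ".png";
                    masked << "/Users/chi/Desktop/test/masked" << item.index << ".png";
                    cv::imwrite(raw.str(), item.frame);
                    cv::imwrite(masked.str(), item.masked);
                }
            }));
}

Measure before and after: if PNG encoding dominates, stage 3 is where the parallelism pays off; if decoding dominates, a pipeline cannot help much, because stage 1 is inherently serial.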

Related

Rendering mesh normals as normal maps with PCL [closed]

I am trying to generate normal maps given a mesh, camera pose, and camera intrinsics.
My plan is to calculate the vertex normal for each point in the cloud then project them onto an image plane with the corresponding camera pose and intrinsics. More specifically, I would first calculate the vertex normals then convert the point coordinates from world coordinates into camera coordinates with camera pose. Finally, using the camera intrinsics, the point cloud can be projected onto an image where each pixel represents the surface normal of the corresponding 3D vertex.
Below is my code:
#include <iostream>
#include <thread>
#include <cstdlib>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/features/from_meshes.h>
#include <pcl/visualization/pcl_visualizer.h>

using namespace std;
using namespace pcl;

void readPLY(PolygonMesh::Ptr mesh, string fname, bool printResult=false)
{
    PLYReader reader;
    int success = reader.read(fname, *mesh); // load the file
    if (success == -1) {
        cout << "Couldn't read file " << fname << endl;
        exit(-1);
    }
    if (printResult) {
        cout << "Loaded "
             << mesh->cloud.width * mesh->cloud.height
             << " data points from "
             << fname
             << " with the following fields: "
             << endl;
        // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
        PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
        fromPCLPointCloud2(mesh->cloud, *cloud);
        // print the first 10 vertices
        cout << "Vertices:" << endl;
        for (size_t i=0; i<10; ++i)
            cout << "    " << cloud->points[i].x
                 << " " << cloud->points[i].y
                 << " " << cloud->points[i].z << endl;
        // print the first 10 polygons
        cout << "Polygons:" << endl;
        for (size_t i=0; i<10; ++i){
            cout << mesh->polygons[i] << endl;
        }
    }
}

void computeNormal(PolygonMesh::Ptr mesh,
                   PointCloud<Normal>::Ptr normal,
                   bool printResult=false)
{
    // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
    PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
    fromPCLPointCloud2(mesh->cloud, *cloud);
    // compute surface normals
    pcl::features::computeApproximateNormals(*cloud, mesh->polygons, *normal);
    // print results
    if (printResult){
        cout << "Normal cloud contains "
             << normal->width * normal->height
             << " points" << endl;
        // print the first 10 vertex normals
        cout << "Vertex normals:" << endl;
        for (size_t i=0; i<10; ++i)
            cout << "    " << normal->points[i] << endl;
    }
}

int main (int argc, char** argv)
{
    // ./main [path/to/ply] (--debug)
    string fname = argv[1];
    // check if the debug flag is set
    bool debug = false;
    for (int i=0; i<argc; ++i){
        string arg = argv[i];
        if (arg == "--debug")
            debug = true;
    }
    // read file
    PolygonMesh::Ptr mesh (new PolygonMesh);
    readPLY(mesh, fname, debug);
    // calculate normals
    PointCloud<Normal>::Ptr normal (new PointCloud<Normal>);
    computeNormal(mesh, normal, debug);
}
Currently, I have already obtained a surface normal for each vertex with pcl::features::computeApproximateNormals. Is there a way to use PCL to project the normals onto an image plane, with the xyz-elements of each normal mapped to the RGB channels, and save the image to a file?
Welcome to Stack Overflow. What the documentation says is:
Given a geometric surface, it’s usually trivial to infer the direction of the normal at a certain point on the surface as the vector perpendicular to the surface in that point.
From what I gather, you already have the surfaces, for which you can easily calculate surface normals. Normal estimation is used because 3D point cloud data is basically a bunch of sample points from the real world; there is no surface information in that kind of data. What you do instead is estimate a surface around each point using planar fitting (2D regression), and then obtain the surface normal from it. You cannot compare these two methods; they serve essentially different purposes.
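For contrast, the estimation route in PCL looks like this (a sketch using the standard NormalEstimation API; the k = 20 neighborhood size is an arbitrary choice):

#include <pcl/point_types.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

// Fit a local plane to each point's k nearest neighbors and take its normal.
pcl::PointCloud<pcl::Normal>::Ptr estimateNormals(
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(pcl::search::KdTree<pcl::PointXYZ>::Ptr(
        new pcl::search::KdTree<pcl::PointXYZ>));
    ne.setKSearch(20); // the plane is fitted to the 20 nearest neighbors
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);
    return normals;
}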
For question two: Yes. Refer to this SO answer.
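For reference, a minimal hand-rolled sketch of that projection follows. It assumes a pinhole camera with intrinsics fx, fy, cx, cy and a world-to-camera matrix, and maps each normal component from [-1, 1] to [0, 255]; renderNormalMap and its parameters are illustrative, not an existing PCL API.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <Eigen/Dense>

// Hypothetical helper: rasterize per-vertex normals into a color-coded image.
cv::Mat renderNormalMap(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                        const pcl::PointCloud<pcl::Normal>& normals,
                        const Eigen::Matrix4f& worldToCam,
                        float fx, float fy, float cx, float cy,
                        int width, int height)
{
    cv::Mat img(height, width, CV_8UC3, cv::Scalar::all(0));
    for (size_t i = 0; i < cloud.points.size(); ++i) {
        const pcl::PointXYZ& pt = cloud.points[i];
        // World coordinates -> camera coordinates
        Eigen::Vector4f p = worldToCam * Eigen::Vector4f(pt.x, pt.y, pt.z, 1.0f);
        if (p.z() <= 0.0f) continue; // behind the camera
        // Pinhole projection onto the image plane
        int u = static_cast<int>(fx * p.x() / p.z() + cx);
        int v = static_cast<int>(fy * p.y() / p.z() + cy);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        const pcl::Normal& n = normals.points[i];
        // Map each normal component from [-1,1] to [0,255]; OpenCV stores BGR
        auto toByte = [](float c) { return static_cast<unsigned char>((c * 0.5f + 0.5f) * 255.0f); };
        img.at<cv::Vec3b>(v, u) = cv::Vec3b(toByte(n.normal_z), toByte(n.normal_y), toByte(n.normal_x));
    }
    return img; // save with cv::imwrite("normal_map.png", img)
}

A real version would also need a depth buffer so that nearer vertices overwrite farther ones, plus some splatting or proper mesh rasterization to fill the gaps between projected points.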

Thin Plate Spline shape transformation run-time error [exited with code -1073741819]

I have been trying to warp an image using OpenCV 3.1.0's shape transformation class, specifically the Thin Plate Spline algorithm.
(I actually tried a block of code from Shape Transformers and Interfaces OpenCV 3.0.)
But the problem is that I keep getting a runtime error, with the console saying
D:\Project\TPS_Transformation\x64\Debug\TPS_Transformation.exe (process 13776) exited with code -1073741819
I figured out the code that caused the error is
tps->estimateTransformation(source, target, matches);
which is the part that executes the transformation algorithm for the first time.
I searched for the runtime error and found that it could be a DLL problem, but I have no problem running OpenCV in general. I get the error when I run the shape transformation algorithm, specifically the estimateTransformation function.
#include <iostream>
#include <opencv2\opencv.hpp>
#include <opencv2\imgproc.hpp>
#include "opencv2\shape\shape_transformer.hpp"

using namespace std;
using namespace cv;

int main()
{
    Mat img1 = imread("D:\\Project\\library\\opencv_3.1.0\\sources\\samples\\data\\graf1.png");

    std::vector<cv::Point2f> sourcePoints, targetPoints;
    sourcePoints.push_back(cv::Point2f(0, 0));
    sourcePoints.push_back(cv::Point2f(399, 0));
    sourcePoints.push_back(cv::Point2f(0, 399));
    sourcePoints.push_back(cv::Point2f(399, 399));

    targetPoints.push_back(cv::Point2f(100, 0));
    targetPoints.push_back(cv::Point2f(399, 0));
    targetPoints.push_back(cv::Point2f(0, 399));
    targetPoints.push_back(cv::Point2f(399, 399));

    Mat source(sourcePoints, CV_32FC1);
    Mat target(targetPoints, CV_32FC1);
    Mat respic, resmat;

    std::vector<cv::DMatch> matches;
    for (unsigned int i = 0; i < sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));

    Ptr<ThinPlateSplineShapeTransformer> tps = createThinPlateSplineShapeTransformer(0);
    tps->estimateTransformation(source, target, matches);

    std::vector<cv::Point2f> transPoints;
    tps->applyTransformation(source, target);

    cout << "sourcePoints = " << endl << " " << sourcePoints << endl << endl;
    cout << "targetPoints = " << endl << " " << targetPoints << endl << endl;
    //cout << "transPos = " << endl << " " << transPoints << endl << endl;
    cout << img1.size() << endl;

    imshow("img1", img1); // Just to see if I have a good picture
    tps->warpImage(img1, respic);
    imshow("Tranformed", respic); // Always completely grey?
    waitKey(0);
    return 0;
}
I just want to be able to run the algorithm so that I can check if it is the algorithm that I want.
Please help.
Thank you.
opencv-version 3.1.0
IDE: Visual Studio 2015
OS : Windows 10
Try adding
transpose(source, source);
transpose(target, target);
before estimateTransformation(). The transformer apparently expects the point matrix laid out as a single row (1xN, 2-channel) rather than the Nx1 column that the Mat(vector<Point2f>) constructor produces.
See https://answers.opencv.org/question/69384/shape-transformers-and-interfaces/.
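In context, the fix would look something like this sketch (untested, based on the linked answer):

#include <opencv2/opencv.hpp>
#include "opencv2/shape/shape_transformer.hpp"
#include <vector>

int main()
{
    std::vector<cv::Point2f> sourcePoints = {{0, 0}, {399, 0}, {0, 399}, {399, 399}};
    std::vector<cv::Point2f> targetPoints = {{100, 0}, {399, 0}, {0, 399}, {399, 399}};

    std::vector<cv::DMatch> matches;
    for (int i = 0; i < (int)sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));

    cv::Mat source(sourcePoints, CV_32FC1); // N x 1, 2-channel float
    cv::Mat target(targetPoints, CV_32FC1);
    cv::transpose(source, source);          // -> 1 x N, as the transformer expects
    cv::transpose(target, target);

    cv::Ptr<cv::ThinPlateSplineShapeTransformer> tps =
        cv::createThinPlateSplineShapeTransformer(0);
    tps->estimateTransformation(source, target, matches); // should no longer crash
    return 0;
}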

Storing output data (numbers) into XML/json file format - to develop graph

I am working in the computer vision domain, coding completely in C++ with the OpenCV API, and I print my results to the command prompt. I want to save these results (basically integers and floating-point numbers) in an XML file and then develop a graph (bar charts or line graphs), basically a web dashboard (GUI). At the moment I use ofstream and save the data to a csv/xml file, but it just prints the values exactly as they appear in the command prompt.
Can someone kindly help me with a technique to store the values in an XML tree structure, so that I can create a web dashboard (bar graphs) with that XML data?
I have also come across msxml6, tinyxml, and libxml++, but have not had any fruitful results with them.
Thanks in advance - please provide a link to the other question if this is a duplicate.
The code sample:
#include<opencv2/core/core.hpp>
#include<opencv2/highgui/highgui.hpp> // for cv::VideoCapture and the CV_CAP_PROP_* constants
#include<iostream>
#include<fstream>

int main ()
{
    cv::VideoCapture capVideo;
    capVideo.open("video.mp4");
    cv::Mat imgFrame1;
    cv::Mat imgFrame2;

    double fps = capVideo.get(CV_CAP_PROP_FPS);
    std::cout << "FPS = " << fps << std::endl;
    double fc = capVideo.get(CV_CAP_PROP_FRAME_COUNT);
    std::cout << "Total Framecount = " << fc << std::endl;

    std::ofstream outfile;
    outfile.open("theBigDataSheet.xml");

    capVideo.read(imgFrame1);
    int frameCount = 1;
    while (true)
    {
        int divisor = fps*15;
        if (frameCount%divisor == 0 || frameCount==fc-1)
        {
            if (frameCount<fc-1)
            {
                outfile << frameCount/fps << std::endl;
                outfile << frameCount << std::endl;
            }
            else {
                outfile << frameCount/fps << std::endl;
                outfile << frameCount << std::endl;
            }
        }
        if ((capVideo.get(CV_CAP_PROP_POS_FRAMES) + 1) <
            capVideo.get(CV_CAP_PROP_FRAME_COUNT))
        {
            capVideo.read(imgFrame2);
            frameCount++;
        }
        else {
            std::cout << "end of video\n";
            break;
        }
        cv::waitKey(33);
    }
    outfile.close();
    return(0);
}
See the code: every 15 seconds of video it writes the frame count and the elapsed seconds, and at the last frame it writes the final counts. I need to plot this as a graph (which will be a straight line).
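
Since you mention tinyxml: with tinyxml2, writing a real element tree instead of bare lines could look roughly like this (a sketch; the <frames>/<sample> element and attribute names are made up to suit a charting dashboard):

#include "tinyxml2.h"

int main()
{
    tinyxml2::XMLDocument doc;
    doc.InsertFirstChild(doc.NewDeclaration()); // <?xml version="1.0" ...?>
    tinyxml2::XMLElement* root = doc.NewElement("frames");
    doc.InsertEndChild(root);

    // In the real program these values would come from the video loop
    double fps = 30.0;
    for (int frameCount = 450; frameCount <= 1800; frameCount += 450) {
        tinyxml2::XMLElement* sample = doc.NewElement("sample");
        sample->SetAttribute("seconds", frameCount / fps);
        sample->SetAttribute("frame", frameCount);
        root->InsertEndChild(sample);
    }
    doc.SaveFile("theBigDataSheet.xml");
    return 0;
}

A JavaScript charting library on the dashboard side can then walk the <sample> elements and plot seconds against frame; the same loop would work equally well emitting JSON if that turns out to be easier to consume.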

Can LibTIFF be used with C++ to get the float values of images?

I read here that LibTIFF can display floating point TIFFs. However, I would like to load an image, then get the float values as an array.
Is this possible to do using LibTIFF?
Example TIFF
EDIT: I am using RHEL 6.
If you want to do it with pure libTIFF, your code might look something like this. Note that I have not done much error checking, so as not to confuse the reader: you should check that the image is actually of type float, check the results of the memory allocations, and probably use the C++ methods of memory allocation rather than malloc() as I do here. But the concept is hopefully clear, and the code generates the same answers as my CImg version...
#include "tiffio.h"
#include <cstdio>
#include <iostream>
using namespace std;
int main()
{
TIFF* tiff = TIFFOpen("image.tif","r");
if (!tiff) {
cerr << "Failed to open image" << endl;
exit(1);
}
uint32 width, height;
tsize_t scanlength;
// Read dimensions of image
if (TIFFGetField(tiff,TIFFTAG_IMAGEWIDTH,&width) != 1) {
cerr << "Failed to read width" << endl;
exit(1);
}
if (TIFFGetField(tiff,TIFFTAG_IMAGELENGTH, &height) != 1) {
cerr << "Failed to read height" << endl;
exit(1);
}
scanlength = TIFFScanlineSize(tiff);
// Make space for image in memory
float** image= (float**)malloc(sizeof (float*)*height);
cout << "Dimensions: " << width << "x" << height << endl;
cout << "Line buffer length (bytes): " << scanlength << endl;
// Read image data allocating space for each line as we get it
for (uint32 y = 0; y < height; y++) {
image[y]=(float*)malloc(scanlength);
TIFFReadScanline(tiff,image[y],y);
cout << "Line(" << y << "): " << image[y][0] << "," << image[y][1] << "," << image[y][2] << endl;
}
TIFFClose(tiff);
}
Sample Output
Dimensions: 512x256
Line buffer length (bytes): 6144
Line(0): 3.91318e-06,0.232721,128
Line(1): 0.24209,1.06866,128
Line(2): 0.185419,2.45852,128
Line(3): 0.141297,3.06488,128
Line(4): 0.346642,4.35358,128
...
...
By the way...
I converted your image to a regular JPEG using ImageMagick in the Terminal at the command line as follows:
convert map.tif[0] -auto-level result.jpg
Yes, but you will have a much easier time with this if you use the OpenCV library.
If you have OpenCV compiled and installed, doing what you are asking is as easy as using the imread() function. This reads the file into an object called cv::Mat (aka a matrix) with the same dimensions and values as the TIFF.
From there you can do just about anything you want with it.
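For example (a sketch; the key detail is IMREAD_UNCHANGED, since the default imread() flag converts everything to 8-bit BGR):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    // IMREAD_UNCHANGED keeps the 32-bit float samples intact
    cv::Mat img = cv::imread("image.tif", cv::IMREAD_UNCHANGED);
    if (img.empty() || img.depth() != CV_32F) {
        std::cerr << "Not a float image" << std::endl;
        return 1;
    }
    // For a 3-channel float TIFF, each pixel is a cv::Vec3f
    cv::Vec3f p = img.at<cv::Vec3f>(0, 0);
    std::cout << p[0] << "," << p[1] << "," << p[2] << std::endl;
    // img.ptr<float>(y) gives a raw float* per row if you want a plain array
    return 0;
}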
You can do it with LibTIFF, and I may well add an answer based on that later, but for ease of installation and use, I would look at CImg which is a C++ header-only library that is very powerful and ideal for your purposes. As it is header-only, it is simple to include (just one file) and needs no special linking or building.
Here is how you might read a TIFF of RGB floats:
#define cimg_display 0
#define cimg_use_tiff
#include "CImg.h"
#include <iostream>

using namespace cimg_library;
using namespace std;

int main(){
    // Read in an image
    CImg<float> img("image.tif");
    // Get its width and height and tell user
    int w = img.width();
    int h = img.height();
    cout << "Dimensions: " << w << "x" << h << endl;
    // Get pointer to buffer/array of floats
    float* buffer = img.data();
    cout << buffer[0] << "," << buffer[1] << "," << buffer[2] << endl;
}
That prints the first three red pixels because they are arranged in planes - i.e. all the red pixels first, then all the green pixels, then all the blue pixels.
You would compile that with:
g++-6 -std=c++11 read.cpp -I/usr/local/include -L/usr/local/lib -ltiff -o read
If you prefer, you can access the pixels a slightly different way like this:
#define cimg_display 0
#define cimg_use_tiff
#include "CImg.h"
#include <iostream>

using namespace cimg_library;
using namespace std;

int main(){
    // Read in an image
    CImg<float> img("image.tif");
    // Get its width and height and tell user
    int w = img.width();
    int h = img.height();
    cout << "Dimensions: " << w << "x" << h << endl;
    // Dump the pixels
    for(int y=0; y<h; y++)
        for(int x=0; x<w; x++)
            cout << x << "," << y << ": "
                 << img(x,y,0,0) << "/"
                 << img(x,y,0,1) << "/"
                 << img(x,y,0,2) << endl;
}
Sample Output
Dimensions: 512x256
0,0: 3.91318e-06/0.232721/128
1,0: 1.06577/0.342173/128
2,0: 2.3778/0.405881/128
3,0: 3.22933/0.137184/128
4,0: 4.26638/0.152943/128
5,0: 5.10948/0.00773837/128
6,0: 6.02352/0.058757/128
7,0: 7.33943/0.02835/128
8,0: 8.33965/0.478541/128
9,0: 9.46735/0.335981/128
10,0: 10.1918/0.340277/128
...
...
For your information, I made the test image file with CImg as well, like this: basically each red pixel is set to its x-coordinate plus a small random float less than 0.5, each green pixel is set to its y-coordinate plus a small random float less than 0.5, and each blue pixel is set to a mid-tone.
#define cimg_display 0
#define cimg_use_tiff
#define cimg_use_png
#include "CImg.h"
#include <cstdlib>

using namespace cimg_library;

int main(){
    const int w = 512;
    const int h = 256;
    const int channels = 3;
    float* buffer = new float[w*h*channels];
    float* fp = buffer;
    for(int y=0; y<h; y++){
        for(int x=0; x<w; x++){
            *fp++ = x + float(rand())/(2.0*RAND_MAX); // red
        }
    }
    for(int y=0; y<h; y++){
        for(int x=0; x<w; x++){
            *fp++ = y + float(rand())/(2.0*RAND_MAX); // green
        }
    }
    for(int y=0; y<h; y++){
        for(int x=0; x<w; x++){
            *fp++ = 128; // blue
        }
    }
    CImg<float> img(buffer, w, h, 1, channels);
    img.save_tiff("result.tif");
    delete [] buffer; // CImg copied the data, so the buffer can be freed
}
Yet another easily installed, lightweight option would be to use vips. You can convert your 32-bit TIF to a raw file of 32-bit floats and read them straight into your C++ program. At the command line, do the conversion with
vips rawsave yourImage.tif raw.bin
and then read the uncompressed, unformatted floats in from the file raw.bin. If we dump raw.bin, interpreting the data as floats, you can see the same values as in my other answers:
od -f raw.bin
0000000 3.913185e-06 2.327210e-01 1.280000e+02 1.065769e+00
0000020 3.421732e-01 1.280000e+02 2.377803e+00 4.058807e-01
0000040 1.280000e+02 3.229325e+00 1.371841e-01 1.280000e+02
Of course, you can have your program do the conversion by linking to libvips, or simply use system() to run the command-line version and then read its output file.
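Reading the raw file back in C++ is then just a binary read into a float buffer (a sketch; it assumes you already know the image dimensions and that the file's endianness matches the machine's):

#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    const int w = 512, h = 256, channels = 3; // known from the TIFF header
    std::vector<float> data((size_t)w * h * channels);
    std::ifstream in("raw.bin", std::ios::binary);
    if (!in.read(reinterpret_cast<char*>(data.data()),
                 data.size() * sizeof(float))) {
        std::cerr << "raw.bin is missing or too short" << std::endl;
        return 1;
    }
    std::cout << data[0] << "," << data[1] << "," << data[2] << std::endl;
    return 0;
}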

QImage read pixel data with precision

Sorry for the basic question; I am just starting to use QImage for reading pixel data from an image file.
To understand the member functions, I tried to load an image file and output the functions' return values:
QString fileName = "pic1.bmp";
QImage myImage;
myImage.load( fileName );
std::cout << "width = " << myImage.width() << std::endl;
std::cout << "height = " << myImage.height() << std::endl;
std::cout << "dotspermeterX = " << myImage.dotsPerMeterX() << std::endl;
std::cout << "dotspermeterY = " << myImage.dotsPerMeterY() << std::endl;
QRectF myRect = myImage.rect();
std::cout << "rect = " << myRect.bottomLeft().x() << "," << myRect.bottomLeft().y()
<< " " << myRect.topRight().x() << "," << myRect.topRight().y() << std::endl;
The output I got was:
width = 858
height = 608
dotspermeterX = 4724
dotspermeterY = 4724
rect = 0,608 858,0
My questions are:
1. What is the difference between dots and pixels?
2. Does QImage work only with int pixels? Can't I read sub-pixel data for better precision?
To clarify my question, Following is a zoomed bitmap image of a diagonal line and I want to read all the small pixels/dots in this line. Is this possible?
As for the "dots per meter", you probably heard of "dots per inch" (or DPI). It's the same. If, for example, you have a 20 inch monitor with the horizontal resolution of X pixels, you will have Y "dots per inch" (or pixels per inch). If you then switch to a 40 inch monitor but with the same horizontal resolution X, then you have half the number of DPI, as the screen is now double as wide. So DPI (or PPI) can be seens as a measurement of the size of the pixels.
And no, I seriously doubt that QImage have any support for sub-pixel data.
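For completeness, per-pixel access in QImage is integer-based; a minimal sketch of reading the channel values looks like this:

#include <QImage>
#include <QColor>
#include <iostream>

int main()
{
    QImage img("pic1.bmp");
    if (img.isNull())
        return 1;
    // QImage exposes pixels as integer channel values (0-255 per channel
    // for common formats); there is no sub-pixel sampling API.
    for (int y = 0; y < img.height(); ++y) {
        for (int x = 0; x < img.width(); ++x) {
            QRgb px = img.pixel(x, y);
            if (qRed(px) || qGreen(px) || qBlue(px)) // e.g. the non-black pixels of the line
                std::cout << x << "," << y << ": " << qRed(px) << "/"
                          << qGreen(px) << "/" << qBlue(px) << std::endl;
        }
    }
    return 0;
}

If you need positions at better than pixel precision, the usual approach is to compute them yourself from the integer data, for example by fitting a line through the pixel centers of the diagonal stroke.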