I am using TensorFlow for an image classification problem in C++.
I created the graph and tried using the example code here.
When I give an image (.jpeg) as the input to the main function (the string image in the main function), it works fine. But I have the pixel values of an image (luminance values only) in a 2D vector std::vector<std::vector<int>> vec2d. How can I give this vector as input for making a prediction?
I created a tensor as follows, but I cannot figure out how to fit it into the existing code.
tensorflow::Tensor input(tensorflow::DT_FLOAT,
                         tensorflow::TensorShape({32, 32}));
auto input_map = input.tensor<float, 2>();
for (int b = 0; b < 32; b++) {
    for (int c = 0; c < 32; c++) {
        input_map(b, c) = vec2d[b][c];
    }
}
Or is there a built-in way to pass pixel values in TensorFlow?
I do not want to write the pixel values out as an image and re-read it. I already tried that, and it works, but the file read/write operations take time, and time is critical in my system.
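For reference, here is a minimal sketch of how a tensor like this could be fed to the session. The node names "input" and "output" are placeholders (use the names from your own graph), session is assumed to be an already-created tensorflow::Session with the graph loaded, and note that many image graphs expect a 4-D shape such as {1, 32, 32, 1} rather than {32, 32}:
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status status = session->Run({{"input", input}},  // feed the tensor built above
                                          {"output"},         // node(s) to fetch
                                          {},                 // no target nodes
                                          &outputs);
if (!status.ok()) {
    std::cerr << status.ToString() << std::endl;
}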
Related
I am loading the .pb file in TensorFlow C++. Now I need to fill the input tensor with my data and get the output tensors.
To fill the data, I am using the code below:
tensorflow::Tensor points_tensor{tensorflow::DataType::DT_FLOAT, tensorflow::TensorShape({number_of_points, 4})};
auto pointsMapped = points_tensor.tensor<float, 2>();
for (int i = 0; i < number_of_points; i++) {
    // to the shifting here only
    pointsMapped(i, 0) = point_cloud.points[i].x;
    pointsMapped(i, 1) = point_cloud.points[i].y;
    pointsMapped(i, 2) = point_cloud.points[i].z;
    pointsMapped(i, 3) = point_cloud.points[i].intensity;
}
point_cloud is a vector of point objects.
But I do not think this is the best way to do it in C++, because I need to fill the tensor element by element like this.
Can someone help me with this?
Actually, the above-mentioned method is the right way to fill the tensor in TensorFlow.
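Once the tensor is filled, getting the output tensors back can look roughly like the sketch below. Here session is assumed to hold the loaded .pb graph, and "input" / "output" are placeholders for your graph's actual node names:
std::vector<tensorflow::Tensor> outputs;
TF_CHECK_OK(session->Run({{"input", points_tensor}}, {"output"}, {}, &outputs));
// read the result; the shape and dtype depend on your graph
auto result = outputs[0].flat<float>();
for (int i = 0; i < result.size(); ++i) {
    std::cout << result(i) << " ";
}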
I'm trying to implement video stabilization using the OpenCV videostab module. I need to do it on a stream, so I'm trying to get the motion between two frames. After studying the documentation, I decided to do it this way:
estimator = new cv::videostab::MotionEstimatorRansacL2(cv::videostab::MM_TRANSLATION);
keypointEstimator = new cv::videostab::KeypointBasedMotionEstimator(estimator);
bool res;
auto motion = keypointEstimator->estimate(this->firstFrame, thisFrame, &res);
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
Where firstFrame and thisFrame are fully initialized frames. The problem is that the estimate method always returns a matrix in which only the last value (matrix[8]) changes from frame to frame. Am I using the videostab objects correctly, and how can I apply this matrix to a frame to get the result?
I am new to OpenCV but here is how I have solved this issue.
The problem lies in the line:
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
For me, the motion matrix is of type 64-bit double (check yours from here), and copying it into a std::vector<float> matrix of 32-bit floats messes up the values.
To solve this issue, try replacing the above line with:
std::vector<float> matrix;
cv::Mat motion32f;
motion.convertTo(motion32f, CV_32F);   // read 32-bit floats regardless of the matrix's original depth
for (auto row = 0; row < motion32f.rows; row++) {
    for (auto col = 0; col < motion32f.cols; col++) {
        matrix.push_back(motion32f.at<float>(row, col));
    }
}
I have tested this by running the estimator on a duplicate set of points, and it gives the expected results, with most entries close to 0.0 and matrix[0], matrix[4] and matrix[8] being 1.0 (using the author's code with this setting gave the same erroneous values as the author's picture shows).
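As for the second part of the question (applying the matrix to a frame), that is not covered above, but once the 3x3 motion matrix is sane it can be applied with cv::warpPerspective. A minimal sketch; whether you warp with the matrix or with its inverse depends on which direction of compensation you need:
cv::Mat stabilized;
cv::warpPerspective(thisFrame, stabilized, motion, thisFrame.size());
// or, to compensate the estimated motion instead of reproducing it:
// cv::warpPerspective(thisFrame, stabilized, motion.inv(), thisFrame.size());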
I have several tasks to do on each pixel in OpenCV. I am using a construct like this:
for (int row = 0; row < inputImage.rows; ++row)
{
    uchar* p = inputImage.ptr(row);
    for (int col = 0; col < inputImage.cols * 3; col += 3)
    {
        int blue  = *(p + col);      // points to each pixel B,G,R value in turn, assuming a CV_8UC3 colour image
        int green = *(p + col + 1);
        int red   = *(p + col + 2);
        // process pixel
    }
}
This is working, but I am wondering if there is any faster way to do this. This solution doesn't use SIMD or any of OpenCV's parallel processing.
What is the best way to run a method over all pixels of an image in OpenCV?
If the Mat is continuous, i.e. the matrix elements are stored continuously without gaps at the end of each row (which can be checked with Mat::isContinuous()), you can treat them as one long row. Thus you can do something like this:
const uchar *ptr = inputImage.ptr<uchar>(0);
for (size_t i = 0; i < inputImage.rows * inputImage.cols; ++i) {
    int blue  = ptr[3*i];
    int green = ptr[3*i + 1];
    int red   = ptr[3*i + 2];
    // process pixel
}
As stated in the documentation, this approach, while being very simple, can boost the performance of a simple element operation by 10-20 percent, especially if the image is rather small and the operation is quite simple.
PS: If you need it even faster, you will need to make full use of the GPU to process the pixels in parallel.
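As a middle ground before moving to the GPU (this is beyond the original answer), OpenCV 3+ can run a per-pixel functor on multiple CPU threads with cv::Mat::forEach. A minimal sketch for a CV_8UC3 image:
inputImage.forEach<cv::Vec3b>([](cv::Vec3b &pixel, const int* /*position*/) {
    int blue  = pixel[0];
    int green = pixel[1];
    int red   = pixel[2];
    // process pixel; write back through pixel[] to modify the image in place
});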
I have a programming issue regarding the extraction of a subimage (submatrix) from a bigger image (matrix). I have two points (the upper and lower bounds of the subimage I want to extract) and I want to extract the subimage from the bigger one based on these points. But I can't find how to do this in C/C++.
I know it's very easy to do with MATLAB. Suppose these two points are (x_max, y_max) and (x_min, y_min). To extract the subimage I just need to write the following:
(MATLAB CODE) -> small_image = big_image(x_min:x_max, y_min:y_max);
But in C I can't use an interval of indexes with : as I do in MATLAB. Has anybody here faced this problem before?
If you are doing image processing in C/C++, you should probably use OpenCV.
The cv::Mat class can do this using a Region Of Interest (ROI).
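A minimal sketch of the ROI approach, assuming big_image is a cv::Mat and the corner coordinates lie inside it:
cv::Rect roi(x_min, y_min, x_max - x_min, y_max - y_min);  // (x, y, width, height); add 1 to the sizes if the bounds are inclusive
cv::Mat small_image = big_image(roi);                      // a view into big_image, no pixel data is copied
cv::Mat own_copy = small_image.clone();                    // clone() only if an independent copy is needed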
In straight C++, you'd use a loop.
// allocate the destination; here a vector of rows, but a plain 2-D array works too
std::vector<std::vector<int>> small_im(x_max - x_min, std::vector<int>(y_max - y_min));
for (int i = 0; i < (x_max - x_min); i++)
{
    for (int j = 0; j < (y_max - y_min); j++)
    {
        small_im[i][j] = big_im[x_min + i][y_min + j];
    }
}
I am trying to write a bag-of-features image recognition system. One step in the algorithm is to take a large number of small image patches (say 7x7 or 11x11 pixels) and try to cluster them into groups that look similar. I get my patches from an image, turn them into gray-scale floating-point image patches, and then try to get cvKMeans2 to cluster them for me. I think I am having problems formatting the input data such that KMeans2 returns coherent results. I have used KMeans for 2D and 3D clustering before, but 49D clustering seems to be a different beast.
I keep getting garbage values for the returned clusters vector, so obviously this is a garbage in / garbage out type problem. Additionally the algorithm runs way faster than I think it should for such a huge data set.
In the code below the straight memcpy is only my latest attempt at getting the input data into the correct format; I spent a while using the built-in OpenCV functions, but this is difficult when your base type is CV_32FC(49).
Can OpenCV 1.1's KMeans algorithm support this sort of high dimensional analysis?
Does someone know the correct method of copying from images to the K-Means input matrix?
Can someone point me to a free, Non-GPL KMeans algorithm I can use instead?
This isn't the best code as I am just trying to get things to work right now:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks)
{
    // the size of one image patch, CELL_SIZE = 7
    int chunk_size = CELL_SIZE*CELL_SIZE*sizeof(float);

    // create the input data, CV_32FC(49) is 7x7 float object (I think)
    CvMat* data = cvCreateMat(chunks.size(), 1, CV_32FC(49));

    // Create a temporary vector to hold our data
    // we'll copy into the matrix for KMeans
    int rdsize = chunks.size()*CELL_SIZE*CELL_SIZE;
    float* rawdata = new float[rdsize];

    // Go through each image chunk and copy the
    // pixel values into the raw data array.
    vector<IplImage*>::iterator iter;
    int k = 0;
    for (iter = chunks.begin(); iter != chunks.end(); ++iter)
    {
        for (int i = 0; i < CELL_SIZE; i++)
        {
            for (int j = 0; j < CELL_SIZE; j++)
            {
                CvScalar val;
                val = cvGet2D(*iter, i, j);
                rawdata[k] = (float)val.val[0];
                k++;
            }
        }
    }

    // Copy the data into the CvMat for KMeans
    // I have tried various methods, but this is just the latest.
    memcpy(data->data.ptr, rawdata, rdsize*sizeof(float));

    // Create the output array
    CvMat* results = cvCreateMat(chunks.size(), 1, CV_32SC1);

    // Do KMeans
    int r = cvKMeans2(data, 128, results, cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 1000, 0.1));

    // Copy the grouping information to our output vector
    vector<int> retVal;
    for (int y = 0; y < chunks.size(); y++)
    {
        CvScalar cvs = cvGet1D(results, y);
        int g = (int)cvs.val[0];
        retVal.push_back(g);
    }

    return retVal;
}
Thanks in advance!
Though I'm not familiar with "bag of features", have you considered using feature points like corner detectors and SIFT?
You might like to check out http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/ for another open source clustering package.
Using memcpy like this seems suspect, because when you do:
int rdsize = chunks.size()*CELL_SIZE*CELL_SIZE;
If CELL_SIZE and chunks.size() are very large, rdsize becomes very large too. If it exceeds the largest value an int can hold, you will have an integer overflow problem.
Are you wanting to change "chunks" in this function?
I'm guessing that you don't as this is a K-means problem.
So try passing by reference to const here. (And generally speaking this is what you will want to be doing)
so instead of:
std::vector<int> DoKMeans(std::vector<IplImage *>& chunks)
it would be:
std::vector<int> DoKMeans(const std::vector<IplImage *>& chunks)
Also, in this case it is better to use static_cast than the old C-style casts (for example static_cast<float>(variable) as opposed to (float)variable).
Also you may want to delete "rawdata":
float * rawdata = new float[rdsize];
can be deleted with:
delete[] rawdata;
otherwise you may be leaking memory here.
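A side note beyond the original answer: a std::vector<float> manages that buffer for you, so there is nothing to delete manually:
std::vector<float> rawdata(rdsize);                        // freed automatically when it goes out of scope
// ... fill rawdata[k] exactly as before ...
memcpy(data->data.ptr, rawdata.data(), rdsize * sizeof(float));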