Efficient alternative to cvSet2D? - c++

I'm profiling the code I have developed, and I see a bottleneck where I use cvSet2D.
Is there a more efficient alternative to cvSet2D?
How else can I write those pixels?

I recommend using the C++ functions rather than the old C-style functions.
The most efficient way to write pixels is the following:
cv::Mat frame;
// load your img to your Mat
uchar* p = frame.ptr(row); // returns a row pointer
p[column] = x; // accesses value in the given column
One thing to note is that a row may contain more values than the image has pixel columns: on a 3-channel image, each pixel occupies 3 consecutive values, so there are 3 times as many accessible columns as actual pixel columns.
For more information on different ways to iterate over pixels, check this tutorial.

You need to get a pointer to the data field of the structure.
(C API) The IplImage structure has a char* field called imageData. Access (your_type*)image->imageData for the first element, and then use it like a regular C 1D array, but be careful to use the widthStep field to jump from one line to the next (because lines of data may be padded for memory-access alignment).
(C++ API) Use T* cv::Mat::ptr<T>(int i) to get a pointer to the first element of the line you want to access, then use it as a regular C++ 1D array.
This should be the fastest access pattern (see the book OpenCV2 Cookbook for a comparison of the different access patterns).

Related

C++: Creating an OpenCV Mat object with specified row addresses

I am using OpenCV 3.2 with C++ and I cannot find a way to do the following task:
Suppose I have 10 pointers to double arrays, say row_0,...,row_9, where each array contains 20 elements. I want to create a cv::Mat object having 10 rows and 20 columns such that its 0th row starts at address row_0, 1st row starts at address row_1 and so on. In other words I already have each row stored contiguously in the memory (however the entire block of 10 rows may not be contiguous) and I want to 'gather' them into a Mat object. How can I accomplish this?
Of course I can declare a 10*20 array, copy the rows into it successively, and then call the cv::Mat(int rows, int cols, int type, void *data) constructor, but this requires unnecessary copying of the data. The matrix I need is actually much bigger than 10x20. Moreover, I need to do this many times in my application, so copying would make my program slow.

C++ working with PPM images

I am trying to write a function that reads PPM images and returns their contents.
PPM images have the following text format:
P3
numOfColumns numOfRows
maxColor
numOfRows-by-numOfColumns of RGB colors
Since the text format mixes variable types, is there any way to store all of this in an array? As I recall, C++ does not support arrays with different types. If not, I am thinking of defining a class to store the PPM contents.
C++ does not support arrays with different types.
Correct.
You could:
Define a class as you say, like this: C++ Push Multiple Types onto Vector or this: Creating a vector that holds two different data types or classes or even this: Vector that can have 3 different data types C++.
Have a generic C-like array (or better yet, an std::vector) with void*.
C++ isn't Javascript.
The number of columns / number of rows must be integers. In PPM the maximum colour value is also an integer (at most 65535), as are the RGB samples; the floating-point variant is the separate PFM format.
So you read the image dimensions first. Then you create a buffer to hold the image. Usually 32 bit rgba is what you want, so either allocate width * height * 4 with malloc() or use an std::vector and resize.
Then you loop through the data, reading the values and putting them into the array.
Then you create an "Image" object, with integer members of width and height, and a pixel buffer of 32 bit rgbas (or whatever is your preferred pixel format).
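The steps above might look like this for a plain (P3) file, assuming maxval <= 255 and no '#' comment lines; the Image struct and function name are illustrative:

```cpp
#include <cstdint>
#include <istream>
#include <sstream>  // for feeding the reader from a string
#include <string>
#include <vector>

// Minimal plain-PPM (P3) reader: header first (magic, dimensions, maxval),
// then one RGB triplet per pixel, stored here as 32-bit RGBA.
struct Image {
    int width = 0, height = 0;
    std::vector<std::uint8_t> rgba; // 4 bytes per pixel, alpha = 255
};

bool readP3(std::istream& in, Image& img)
{
    std::string magic;
    int maxval = 0;
    if (!(in >> magic) || magic != "P3") return false;
    if (!(in >> img.width >> img.height >> maxval)) return false;
    img.rgba.resize(static_cast<size_t>(img.width) * img.height * 4);
    for (size_t i = 0; i < img.rgba.size(); i += 4) {
        int r, g, b;
        if (!(in >> r >> g >> b)) return false;
        img.rgba[i]     = static_cast<std::uint8_t>(r);
        img.rgba[i + 1] = static_cast<std::uint8_t>(g);
        img.rgba[i + 2] = static_cast<std::uint8_t>(b);
        img.rgba[i + 3] = 255;
    }
    return true;
}
```

Taking a std::istream& rather than a filename keeps the function easy to test (feed it a std::istringstream) and usable with std::ifstream for real files.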

Fftw3 library and plans reuse

I'm about to use the fftw3 library for a very specific task.
I have a heavy-load packet stream with variable frame sizes, which is processed like this:
while(thereIsStillData){
    copyDataToInputArray();
    createFFTWPlan();
    performExecution();
    destroyPlan();
}
Since creating plans is rather expensive, I want to modify my code to something like this:
while(thereIsStillData){
    if(inputArraySizeDiffers()) destroyOldAndCreateNewPlan();
    copyDataToInputArray(); // e.g. `memcpy` or `std::copy`
    performExecution();
}
Can I do this? I mean, does a plan contain some data-dependent information, such that a plan created for one array of size N will give incorrect results when executed for another array of the same size N?
The fftw_execute() function does not modify the plan presented to it, and can be called multiple times with the same plan. Note, however, that the plan contains pointers to the input and output arrays, so if copyDataToInputArray() involves creating a different input (or output) array then you cannot afterwards use the old plan in fftw_execute() to transform the new data.
FFTW does, however, have a set of "New-array Execute Functions" that could help here, supposing that the new arrays satisfy some additional similarity criteria with respect to the old (see linked docs for details).
The docs do recommend:
If you are tempted to use the new-array execute interface because you want to transform a known bunch of arrays of the same size, you should probably go use the advanced interface instead
but that's talking about transforming multiple arrays that are all in memory simultaneously, and arranged in a regular manner.
Note, too, that if your variable frame size is not too variable -- that is, if it is always one of a relatively small number of choices -- then you could consider keeping a separate plan in memory for each frame size instead of recomputing a plan every time one frame's size differs from the previous one's.

Efficient (time and space complexity) data structure for dense and sparse matrix

I have to read a file that stores a matrix of cars (1=BlueCar, 2=RedCar, 0=Empty).
I need to write an algorithm to move the cars of the matrix in that way:
blue ones move downward;
red ones move rightward;
there is a turn in which all the blue ones move and a turn in which all the red ones move.
Before the file is read I don't know the matrix size, nor whether it's dense or sparse, so I have to implement two data structures (one for dense and one for sparse) and two algorithms.
I need to achieve the best possible time and space complexity.
Since the matrix size is unknown, I plan to store the data on the heap.
If the matrix is dense, I plan to use something like:
short int** M = new short int*[m];
short int* M_data = new short int[m*n];
for(int i = 0; i < m; ++i)
{
    M[i] = M_data + i * n;
}
With this structure I can allocate a contiguous space of memory and it is also simple to be accessed with M[i][j].
Now the problem is which structure to choose for the sparse case. I also have to consider how the algorithm can move the cars in the simplest way: for example, when I evaluate a car, I need to find out easily whether the next position (downward or rightward) holds another car or is empty.
Initially I thought to define BlueCar and RedCar objects that inherit from a general Car object. In these objects I can save the matrix coordinates and then put them in:
std::vector<BlueCar> sparseBlu;
std::vector<RedCar> sparseRed;
Otherwise I can do something like:
vector< tuple< row, column, value >> sparseMatrix
But the problem of finding what's in the next position still remains.
Probably this is not the best way to do it, so how can I implement the sparse case in an efficient way (ideally with a single structure for the sparse case)?
Why not simply create a memory mapping directly over the file? (assuming your data 0,1,2 is stored in contiguous bytes (or bits) in the file, and the position of those bytes also represents the coordinates of the cars)
This way you don't need to allocate extra memory and read in all the data, and the data can simply and efficiently be accessed with M[i][j].
Going over the rows would be L1-cache friendly.
In case of very sparse data, you could scan through the data once and keep a list of the empty regions/blocks in memory (only need to store startpos and size), which you could then skip (and adjust where needed) in further runs.
With memory mapping, only frequently accessed pages are kept in memory. This means that once you have scanned for the empty regions, memory will only be allocated for the frequently accessed non-empty regions (all this will be done automagically by the kernel - no need to keep track of it yourself).
Another benefit is that you are accessing the OS disk cache directly. Thus no need to keep copying and moving data between kernel space and user space.
To further optimize space- and memory usage, the cars could be stored in 2 bits in the file.
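A minimal POSIX sketch of such a mapping (assuming one byte per cell, stored row-major in the file; the function name is illustrative):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map a file of rows*cols bytes so it can be addressed like a matrix
// without reading it into a separately allocated buffer. Writes go back
// to the file through the shared mapping.
unsigned char* mapMatrixFile(const char* path, size_t rows, size_t cols)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) return nullptr;
    void* m = mmap(nullptr, rows * cols, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd); // the mapping stays valid after the fd is closed
    return m == MAP_FAILED ? nullptr : static_cast<unsigned char*>(m);
}
```

Cell (i, j) is then M[i * cols + j]; release the mapping with munmap(M, rows * cols).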
Update:
I'll have to move cars with OpenMP and MPI... Will the memory mapping work also with concurrent threads?
You could certainly use multithreading, but I'm not sure OpenMP would be the best solution here, because if you work on different parts of the data at the same time, you may need to check some overlapping regions (i.e. a car could move from one block to another).
Or you could let the threads work on the middle parts of the blocks, and then start other threads to do the boundaries (with red cars that would be one byte, with blue cars a full row).
You would also need a locking mechanism for adjusting the list of sparse regions. I think the best way would be to launch separate threads (depending on the size of the data, of course).
In a somewhat similar task, I simply made use of Compressed Row Storage.
The Compressed Row and Column (in the next section) Storage formats
are the most general: they make absolutely no assumptions about the
sparsity structure of the matrix, and they don't store any unnecessary
elements. On the other hand, they are not very efficient, needing an
indirect addressing step for every single scalar operation in a
matrix-vector product or preconditioner solve.
You will need to be a bit more specific about time and space complexity requirements. CSR requires an extra indexing step for simple operations, but that is a minor amount of overhead if you're just doing simple matrix operations.
There's already an existing C++ implementation available online as well.
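For reference, a minimal CRS sketch for the car grid (field names are illustrative): only the nonzero cells are stored, and checking what is in a neighbouring position is a binary search within one row:

```cpp
#include <algorithm>
#include <vector>

// Compressed Row Storage for the car grid. values[k] and colIndex[k]
// describe the k-th nonzero cell; rowStart[i] is the index of the first
// nonzero of row i (rowStart has numRows + 1 entries, columns sorted
// within each row).
struct CsrGrid {
    int numRows = 0, numCols = 0;
    std::vector<short> values;  // 1 = blue car, 2 = red car
    std::vector<int> colIndex;  // column of each stored value
    std::vector<int> rowStart;  // size numRows + 1

    // Returns the cell content, 0 if empty. Checking whether the next
    // position (down or right) is free costs O(log nnz_in_row).
    short at(int i, int j) const
    {
        auto first = colIndex.begin() + rowStart[i];
        auto last  = colIndex.begin() + rowStart[i + 1];
        auto it = std::lower_bound(first, last, j);
        if (it == last || *it != j) return 0;
        return values[it - colIndex.begin()];
    }
};
```

This is the indirect-addressing overhead the quote mentions: every lookup goes through colIndex/rowStart instead of a direct M[i][j] access.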

Storing images of different sizes in a data structure, OpenCV

I am wondering if there is any way to hold (or store) images of different sizes in a single data structure using OpenCV (C++). For example, in MATLAB I can do it by using "cell".
Specifically, I am generating results that are images of different sizes, and it would be great if I could store them in a single data structure so that I can use them later on.
Please note, this has to be done with C++ and OpenCV.
I am thinking of trying std::vector. Thanks a lot.
Yes, you can try this:
std::vector<cv::Mat> ImageDataBase;
for(int i = 0; i < length_of_imageDataBase; i++)
{
    cv::Mat img = cv::imread("Address of the images");
    ImageDataBase.push_back(img);
}
I think the problem lies in the way you think about objects in C++ generally. Matlab requires objects to be of the same size in one vector/array/matrix/however it should be called, because it is optimised to operate on matrices, and those operations depend heavily on the dimensions of a matrix.
In C++ the main entity is an object. The closest thing to a Matlab vector is an array, like cv::Mat potatoes[30]. Yet even this only demands to be filled with objects of the same class, regardless of the size of each cv::Mat's contents.
So, to wrap it all up, you have a couple of choices:
an array, like cv::Mat crazySocks[42] - you need to be careful here, because you need to know how many socks there will be, and you might get a segmentation fault if you go out of array bounds
a vector, as suggested by Vinoj John Hosan, like std::vector<cv::Mat> jaguars - this is a fine idea, because STL containers can do some nice tricks with their contents, and you may easily modify the size of the vector.
a list, like std::list<cv::Mat> toFind - this is better than a vector if you plan to modify the size of your container often.
any of the previously mentioned, but with pointers, like cv::Mat *crazyPointers[33] - when you have some big objects to move, it's better to move only the information about where they are than the objects themselves. cv::Mat does some tricks internally with its data, so that shouldn't be the case here.