arrayfire flip throws exception - c++

I'm trying to flip a matrix of size [249 1 50 20]; this is the code:
array flipped_delta = flip(delta, 0);
I get the following exception:
Unhandled exception at 0x00000001801FCA92 (libafcu.dll) in r.exe: 0xC0000094: Integer division by zero.
If I instead try to flip with flip(delta, 2), I get:
c:\var\lib\hudson\workspace\build-win64-master\jacket\src\cuda\../common/flip.cpp:47: CUDA runtime error: invalid configuration argument (9)
What am I doing wrong?
thanks.

I don't know ArrayFire, but a quick peek at the documentation suggests that dimension 0 runs along the vertical axis, and you have only one row there, so there is nothing to flip. This could therefore be a bug in handling that case, where I'd expect a no-op instead.
Try with dimension 1 (horizontal):
array flipped_delta = flip(delta, 1);
Disclaimer: this may or may not actually be how dimension indexes work in ArrayFire.
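If it helps to narrow things down, here is a minimal diagnostic sketch (my own assumption of how to probe this, using af::randu to fabricate an array of the same shape) that prints the extents and only flips dimensions that actually have more than one element:

#include <arrayfire.h>
#include <iostream>

int main() {
    // Same shape as in the question: [249 1 50 20]
    af::array delta = af::randu(249, 1, 50, 20);

    // Print the extents so it is clear which index holds which size
    std::cout << delta.dims(0) << " " << delta.dims(1) << " "
              << delta.dims(2) << " " << delta.dims(3) << std::endl;

    // Only flip along dimensions that have more than one element
    for (unsigned d = 0; d < 4; ++d) {
        if (delta.dims(d) > 1) {
            af::array flipped = af::flip(delta, d);
            std::cout << "flipped along dimension " << d << std::endl;
        }
    }
    return 0;
}

If one of these calls still throws, that narrows the failure to a specific dimension rather than to the shape as a whole.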

Related

Problem about value assignment in Arrayfire

I'm using ArrayFire and Flashlight to evaluate a network.
auto tmp = output(af::seq(2, 10), af::span, af::span, af::span);
auto softmax_tmp = fl::softmax(tmp, 0);
output(af::seq(2, 10), af::span, af::span, af::span) = softmax_tmp;
output is a tensor with shape (12,100,1,1). I want to pull out rows 2 to 10 of the tensor, apply a softmax to each of the extracted 100 nine-dimensional vectors, and then put them back; that is what the code above does.
The problem is that the third line doesn't work: softmax_tmp holds the right values, but the assignment silently fails. It compiles, yet output keeps the old values from the first line.
Can anyone help? Thanks a lot.
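One way to narrow this down is to check whether indexed assignment itself writes back outside of Flashlight. A minimal sketch using plain ArrayFire only; the softmax below is a hand-rolled stand-in for fl::softmax, not the Flashlight call, and the 2-D shape is just for brevity:

#include <arrayfire.h>

int main() {
    af::array output = af::randu(12, 100);                   // stand-in for the (12,100,1,1) tensor
    af::array tmp = output(af::seq(2, 10), af::span);         // 9 x 100 block
    af::array e = af::exp(tmp);
    af::array softmax_tmp = e / af::tile(af::sum(e, 0), 9);   // softmax along dimension 0
    output(af::seq(2, 10), af::span) = softmax_tmp;           // write the block back
    af_print(output(af::seq(2, 10), af::seq(0, 2)));          // spot-check a few columns
    return 0;
}

If this writes back correctly, the problem is more likely in how the Flashlight result is converted back to an ArrayFire array than in the indexed assignment itself.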

tensorflow: transpose expects a vector of size 1. But input(1) is a vector of size 2

I want to use a trained RNN language model to do inference. So:
I loaded the trained model graph in C++ using
tensorflow::MetaGraphDef graph_def;
TF_CHECK_OK(ReadBinaryProto(Env::Default(), path_to_graph, &graph_def));
TF_CHECK_OK(session->Create(graph_def.graph_def()));
loaded the model parameters with:
Tensor checkpointPathTensor(tensorflow::DT_STRING, tensorflow::TensorShape());
checkpointPathTensor.scalar<std::string>()() = path_to_ckpt;
TF_CHECK_OK(session_->Run({{graph_def.saver_def().filename_tensor_name(), checkpointPathTensor} },{},{graph_def.saver_def().restore_op_name()},nullptr));
Up to this point everything goes fine. Then I want to compute the value of the node "output/output_batch_major":
TF_CHECK_OK(session->Run(inputs,{"output/output_batch_major"},{"post_control_dependencies"}, &outputs));
I got the error:
2018-07-13 14:13:36.793495: F tf_lm_model_loader.cc:190] Non-OK-status: session->Run(inputs,{"output/output_batch_major"},{"post_control_dependencies"}, &outputs) status: Invalid argument: transpose expects a vector of size 1. But input(1) is a vector of size 2
[[Node: extern_data/placeholders/delayed/sequence_mask_time_major/transpose = Transpose[T=DT_BOOL, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](extern_data/placeholders/delayed/SequenceMask/Less, extern_data/placeholders/delayed/sequence_mask_time_major/transpose/perm)]]
Aborted (core dumped)
I checked the graph using TensorBoard; extern_data/placeholders/delayed/sequence_mask_time_major/transpose/perm is a tensor of size 2. Is this tensor the input(1) in the error? How can I fix the problem? Any ideas? Thanks in advance!
I had a similar issue with the input tensor to my predictor. I expanded the dimension by one and the issue was resolved. I suggest running the predictor in Python first; this helps to identify the size of the input tensor that you are passing to the predictor. Then replicate the exact same size in C++. Also, based on your code snippet, I am not sure how you define the inputs to the Run method. I defined it as follows in my code:
std::vector<std::pair<std::string, tensorflow::Tensor>> input = {
    {"input_1", input_tensor}
};
where "input_1" is the name of my input layer.
I hope this helps.
I got this error when I passed the wrong input shape into the TensorFlow model. The model required a 3-dimensional array and I passed a 1-dimensional one instead, so check your input data first.
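To make the shape explicit, here is a hedged sketch of building an input tensor with an explicit batch dimension and feeding it to Session::Run, reusing the session and output names from the question. The placeholder name "extern_data/placeholders/data", seq_len, and word_ids are assumptions for illustration, not taken from the original graph:

// Hypothetical example: the placeholder name, seq_len and word_ids are
// assumptions; substitute whatever your graph actually expects.
int seq_len = 10;
std::vector<int> word_ids(seq_len, 0);

tensorflow::Tensor input_tensor(tensorflow::DT_INT32,
                                tensorflow::TensorShape({1, seq_len}));   // batch x time
auto flat = input_tensor.flat<tensorflow::int32>();
for (int i = 0; i < seq_len; ++i) flat(i) = word_ids[i];

std::vector<std::pair<std::string, tensorflow::Tensor>> inputs = {
    {"extern_data/placeholders/data", input_tensor}
};
std::vector<tensorflow::Tensor> outputs;
TF_CHECK_OK(session->Run(inputs, {"output/output_batch_major"},
                         {"post_control_dependencies"}, &outputs));

Comparing the shape printed by a working Python run against the TensorShape used here is usually the quickest way to spot the mismatch.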

Malloc Error: OpenCV/C++ while push_back Vector

I'm trying to create a descriptor using FAST for the keypoint detection and SIFT for building the descriptor, all with OpenCV. While I use OpenCV's FAST directly, I only use parts of the SIFT code, because I only need the descriptor. Now I have a really nasty malloc error and I don't know how to solve it. I posted my code on GitHub because it is big and I don't really know where the error comes from. I just know that it is triggered at the end of the do-while loop:
        features2d.push_back(features);
        features.clear();
        candidates2d.push_back(candidates);
        candidates.clear();
    }
} while (candidates.size() > 100);
As you can see in the code on GitHub, I already tried to release the application's memory. Xcode's analysis says that my application uses 9 MB of memory. I tried to debug the error, but it was very complicated and I haven't found any clue where it comes from.
EDIT
I wondered whether this error could occur because I access the image pixel values passed to calcOrientationHist(...) with img.at<sift_wt>(...), where typedef float sift_wt (lines 56 and 57 in my code), because the patch I pass normally reports type 0, which means it is CV_8UC1. But then, I copied this part from sift.cpp (lines 330 and 331), and normally the SIFT descriptor should also work on a grayscale image, shouldn't it?
EDIT2
After changing the type in the img.at<sift_wt>(...) call, nothing changed. So I googled for solutions and ended up at the Guard Malloc feature of Xcode. Enabling it showed me a new error, which is probably the reason I get the malloc error, at line 77 of my code. The error it gives me at this line is EXC_BAD_ACCESS (Code=1, address=....). These are the lines:
for (k = 0; k < len; k++) {
    int bin = cvRound((n/360.f) + Ori[k]);
    if (bin >= n)
        bin -= n;
    if (bin < 0)
        bin += n;
    temphist[bin] += W[k]*Mag[k];
}
The values of the variables mentioned are the following:
bin = 52, len = 169, n = 36, k = 0; W, Mag, Ori and temphist are not shown.
Here is the Guard Malloc output (sorry, but I don't really understand what exactly it is telling me):
GuardMalloc[Test-1935]: Allocations will be placed on 16 byte boundaries.
GuardMalloc[Test-1935]: - Some buffer overruns may not be noticed.
GuardMalloc[Test-1935]: - Applications using vector instructions (e.g., SSE) should work.
GuardMalloc[Test-1935]: version 108
Test(1935,0x102524000) malloc: protecting edges
Test(1935,0x102524000) malloc: enabling scribbling to detect mods to free blocks
The answer is simpler than I thought...
The problem was that the calculation of bin in the for loop produced the wrong value: instead of adding Ori[k], it should multiply by Ori[k].
That mistake resulted in a bin value of 52, but the length of the array that temphist points to is 38.
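Concretely, the change is one character in the line from the loop above (the corrected form matches the line in OpenCV's sift.cpp that the question copied from):

// wrong: adds the angle instead of scaling it, so bin can far exceed n
int bin = cvRound((n/360.f) + Ori[k]);
// fixed: scale the orientation (in degrees) into one of the n histogram bins
int bin = cvRound((n/360.f) * Ori[k]);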
For anyone with similar errors, I really recommend using Guard Malloc or Valgrind to debug malloc errors.

How to convert homogeneous points to inhomogeneous points using OpenCV (in C++)

I want to convert points. At the moment my code looks like this:
std::vector<cv::Point3d> homCamPoints(4);
// some assignments to homCamPoints
std::vector<cv::Point2d> inhomCamPoints(4);
convertPointsFromHomogeneous(homCamPoints, inhomCamPoints);
But I always get an exception about an invalid memory access. So I assume that my input type is wrong, although the OpenCV documentation says:
src – Input vector of N-dimensional points.
dst – Output vector of N-1-dimensional points.
That sounds like my input types are fine. However, on the internet I only found examples using cv::Mat types, and due to time constraints I would like to avoid restructuring at this stage.
I ran my code in debug mode. When calling the function, the parameters really seem to be fine. The error then occurs right after entering the function, but I can't pin it down exactly, as I can't step into the function's code. Does anybody have an idea why this is not working? Thanks.
I tried this:
std::vector<cv::Point3d> homCamPoints(4, cv::Point3d(0,0,0));
homCamPoints[0] = cv::Point3d(0,0,0);
homCamPoints[1] = cv::Point3d(1,1,1);
homCamPoints[2] = cv::Point3d(-1,-1,-1);
homCamPoints[3] = cv::Point3d(2,2,2);
std::vector<cv::Point2d> inhomCamPoints(4);
cv::convertPointsFromHomogeneous(homCamPoints, inhomCamPoints);
and it works without an exception. Maybe your problem is somewhere else in your code.
inhomCamPoints are:
[0, 0], [1, 1], [1, 1], [1, 1]
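For reference, a hedged sketch of the same conversion done by hand, which can serve as a cross-check against the OpenCV call; the w == 0 guard is a choice made here for the degenerate case (such as the first test point above), not necessarily what cv::convertPointsFromHomogeneous does internally:

// Manual (x, y, w) -> (x/w, y/w) conversion as a cross-check.
std::vector<cv::Point2d> manualPoints;
manualPoints.reserve(homCamPoints.size());
for (const cv::Point3d& p : homCamPoints) {
    double w = (p.z != 0.0) ? p.z : 1.0;   // guard against w == 0
    manualPoints.emplace_back(p.x / w, p.y / w);
}

If the manual version produces sensible values while the OpenCV call still crashes, the problem is likely in how the vectors are passed to the function rather than in the data itself.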

c++ armadillo - calculating null space

this is my first post here...
Is there any way to calculate a vector in the null space of another vector? I don't need the basis, just one vector in the null space will do.
I already tried using the solve() method -
colvec x(3);
x = solve(A,B);
where A is a 3x3 matrix of type mat -
2 2 2
3 3 3
4 4 4
and B is the zero vector of type colvec -
0
0
0
But the program terminates throwing the following error -
error: solve(): solution not found
terminate called after throwing an instance of 'std::runtime_error'
what():
I have used the solve() method before and got perfect results, but it doesn't seem to work in this simple case. Is this because the equation has multiple solutions? If so, is there any workaround to this, any other way that I could get a vector in the null space?
Any help would be appreciated.
Edit:
I tried svd(mat U, vec s, mat V, mat X, method = "standard") and got the null space of X from the columns of V. I was just wondering whether there is any way to improve the precision of the answer.
Thanks!
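For reference, a hedged sketch of the SVD route described in the edit, using Armadillo's four-argument svd overload; a right-singular vector whose singular value is (numerically) zero is a null-space vector, and the singular values come back sorted in descending order:

#include <armadillo>
#include <iostream>

int main() {
    arma::mat A = { {2, 2, 2},
                    {3, 3, 3},
                    {4, 4, 4} };
    arma::mat U, V;
    arma::vec s;
    arma::svd(U, s, V, A);              // A = U * diagmat(s) * V.t()

    // Columns of V matching (near-)zero singular values span the null space.
    arma::vec x = V.col(V.n_cols - 1);  // smallest singular value is last
    x.print("one null-space vector:");
    std::cout << "residual ||A*x|| = " << arma::norm(A * x) << std::endl;
    return 0;
}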
In recent versions of the Armadillo library you can find an orthonormal basis of the null space of a matrix using the null() function. See the documentation at http://arma.sourceforge.net/docs.html#null. The functionality was added in version 5.400 (August 2015).
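A minimal sketch of that approach (assuming Armadillo >= 5.400); any column of the returned basis is a vector in the null space:

#include <armadillo>

int main() {
    arma::mat A = { {2, 2, 2},
                    {3, 3, 3},
                    {4, 4, 4} };
    arma::mat N = arma::null(A);     // orthonormal basis of the null space of A
    N.print("null space basis:");
    if (N.n_cols > 0) {
        arma::vec x = N.col(0);      // one vector in the null space
        x.print("one null-space vector:");
    }
    return 0;
}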