Assertion sv_count != 0 failed - Function train_auto, SVM type EPS_SVR - C++

The question is related to the OpenCV library, version 2.4.13.2.
I am using n-dimensional feature vectors extracted from images for training and performing regression. The output values range between 0 and 255.
The function CvSVM::train works without error, but requires setting the parameters manually. I would therefore prefer to use CvSVM::train_auto to perform cross-validation and determine the best parameters for the situation.
However, I am facing this error:
OpenCV Error: Assertion failed (sv_count != 0) in CvSVM::do_train.
On changing the type to NU_SVR, it works well. The problem is only with type EPS_SVR.
I would appreciate any help I could receive to fix this.
EDIT: I was able to pinpoint the problem to line 1786 in the file
opencv-master\sources\modules\ml\src\svm.cpp
FOR_IN_GRID(p, p_grid)
When I comment it out, the code runs without errors, but I do not understand why.

I was facing the same bug. I found out that it was caused by svm.setP(x) and svm.setTermCriteria((cv2.TERM_CRITERIA_EPS, y)) when x and y are greater than 0.1 (10^-1).
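The answer above uses the Python cv2.ml API from a later OpenCV version; for the 2.4 C++ API in the question, the equivalent seems to be keeping both the p parameter and the termination epsilon well below 0.1 before calling train_auto. A minimal sketch (parameter values are illustrative, not tuned; trainData and responses are assumed to be CV_32FC1 Mats you already have):
#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

// trainData: one n-dimensional CV_32FC1 feature vector per row,
// responses: one CV_32FC1 target value (0..255) per row.
void trainEpsSvr(const cv::Mat& trainData, const cv::Mat& responses, CvSVM& svm)
{
    CvSVMParams params;
    params.svm_type    = CvSVM::EPS_SVR;
    params.kernel_type = CvSVM::RBF;
    params.C           = 1.0;
    params.gamma       = 0.1;
    params.p           = 0.01;  // epsilon-tube width, kept below 0.1
    // termination epsilon also kept below 0.1
    params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 1000, 1e-3);

    // 10-fold cross-validation over the default parameter grids
    svm.train_auto(trainData, responses, cv::Mat(), cv::Mat(), params, 10);
}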

Related

ArrayFire convolution issue with Cuda backend

I've been having an issue with a certain function call:
dphaseWeighted = af::convolve(dphaseWeighted, m_slowTimeFilter);
which seems to produce nothing but NaNs.
The background is that we have recently switched from the ArrayFire OpenCL backend to the CUDA backend, and the problem that we are seeing happens in this call:
dphaseWeighted = af::convolve(dphaseWeighted, m_slowTimeFilter);
This seems to work well when using OpenCL.
Unfortunately, I can't give you the whole function because of IP, only a couple of snippets.
This convolve lies deep within a phase-extraction piece of code and is actually the second part of that code which uses the af::convolve function.
The first call seems to behave as expected, with sensible floating-point data coming out.
But when it comes to the second call, all I'm seeing is NaNs coming out (I view that with af_print and by dumping the data to a file).
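For reference, a check along these lines (a sketch reusing the names above; af::isNaN and af::anyTrue are standard ArrayFire calls) shows whether the NaNs are already present in the inputs, since a NaN in either operand propagates through the convolution:
#include <arrayfire.h>
#include <cstdio>

// Sketch: dphaseWeighted and m_slowTimeFilter are the af::array objects
// from the snippets above, already populated on the CUDA backend.
void checkConvolveForNaNs(const af::array& dphaseWeighted,
                          const af::array& m_slowTimeFilter)
{
    bool inputHasNaN  = af::anyTrue<bool>(af::isNaN(dphaseWeighted));
    bool filterHasNaN = af::anyTrue<bool>(af::isNaN(m_slowTimeFilter));
    std::printf("NaNs in input: %d, NaNs in filter: %d\n",
                (int)inputHasNaN, (int)filterHasNaN);

    af::array result = af::convolve(dphaseWeighted, m_slowTimeFilter);
    std::printf("NaNs in result: %d\n", (int)af::anyTrue<bool>(af::isNaN(result)));
}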
In the CMakeLists.txt I include
include_directories(${ArrayFire_INCLUDE_DIRS})
and
target_link_libraries(DASPhaseInternalLib ${ArrayFire_CUDA_LIBRARIES})
and it builds as expected.
Has anyone experienced anything like this before?

Can an ONNX network be incompatible with onnxruntime?

I am having trouble running inference on an ONNX model, either by making (tiny) adjustments to this Windows ML tutorial, or by implementing my own ONNX Runtime code following their MNIST tutorial. As I understand it, Windows ML makes use of ONNX Runtime, so both efforts probably end up in the same place, and probably generate the same underlying exception for the same reason.
The exceptions thrown are either unintelligible (a second exception thrown during exception handling by the looks...) or not identified at all. It makes me wonder if the network itself is faulty or incompatible in some sense. The network was produced by taking a saved Tensorflow/Keras model and running this conversion:
python -m tf2onnx.convert --saved-model MyNet --output MyNet.onnx --inputs-as-nchw mobilenetv2_1_00_224_input:0
The result is a network that is rendered by Netron with the following input & output stages:
Is there anything about this network that is obviously incompatible with ONNX Runtime? Any suggestions on how to push past either/both of these exceptions?
It turns out that, in my attempt to adapt the Windows ML example, I had the output shape wrong: in that example, the output shape is 1 x 1000 x 1 x 1. I had copied/pasted this and just modified the 1000 to suit. Clearly the network above needs a 1 x 10 shape.
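For anyone hitting the same thing, a sketch along these lines (the file name is the converted model from above; GetOutputTypeInfo and GetShape are part of the ONNX Runtime C++ API) queries the shape the model actually reports for each output, instead of hard-coding it in the binding code:
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <vector>

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "shape-check");
    // Wide-string path is the Windows (ORTCHAR_T) form.
    Ort::Session session(env, L"MyNet.onnx", Ort::SessionOptions{});

    // Print the shape reported by the model for every output, so the
    // output tensor can be allocated with the right shape (e.g. 1 x 10).
    for (size_t i = 0; i < session.GetOutputCount(); ++i) {
        Ort::TypeInfo info = session.GetOutputTypeInfo(i);
        auto tensorInfo = info.GetTensorTypeAndShapeInfo();
        std::vector<int64_t> dims = tensorInfo.GetShape();
        std::cout << "output " << i << " shape:";
        for (int64_t d : dims) std::cout << " " << d;  // -1 marks a dynamic dimension
        std::cout << "\n";
    }
    return 0;
}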

How to debug Halide internal error with CodeGen_LLVM

I am having trouble finding the source of an error message reported by a JIT-compiled Halide pipeline.
The log message is:
Internal Error at Halide-release_2019_08_27/halide/src/CodeGen_LLVM.cpp:2815 triggered by user code at :
Condition failed: append_string:
The code in CodeGen_LLVM.cpp at the indicated line is:
llvm::Function *append_string = module->getFunction("halide_string_to_string");
internal_assert(append_string);
I'm using halide release build from 2019_08_27 on Ubuntu 18.04.
The pipeline ran without any errors until somebody wanted to use Halide::print() for debugging.
I've checked a small test pipeline, and print seems to work there.
My problem now is finding our bug in a very complex pipeline. Could somebody explain the source of this error and what I need to check in my code to solve it?
Thanks in advance.
That means the function "halide_string_to_string" was not found in the runtime, which would be very odd for CPU targets. Hrm, I wonder if you're trying to use print inside a Func scheduled on a GPU or DSP? I could easily imagine that being broken.
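To illustrate that answer, a minimal JIT pipeline like the one below (a sketch, not the original code) uses print in a Func realized on the host CPU, where the runtime does provide halide_string_to_string; moving the print into a Func scheduled on a GPU or DSP is the situation suspected of tripping the internal assertion:
#include "Halide.h"
using namespace Halide;

int main()
{
    Var x("x");
    Func f("print_test");

    // print() returns its first argument unchanged and logs all of its
    // arguments every time the expression is evaluated.
    f(x) = print(x * 2, "<- x*2 at x =", x);

    // JIT-compile and run on the host CPU only; the CPU runtime contains
    // halide_string_to_string, so this should not trigger the assertion.
    Buffer<int> out = f.realize(8);
    return 0;
}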

Error message: "Assertion 't = find_next_time_event( m )' failed at pulse/mainloop.c" (C++)

OK, so I'm using Code::Blocks for programming in C++. I have a seemingly "random" error message (it doesn't happen every time, and I'm not able to predict when it happens) which makes the program crash. It says:
Assertion 't = find_next_time_event( m )' failed at pulse/mainloop.c:721,
function calc_next_timeout() . Aborting .
Aborted (core dumped) .
Process returned 134 (0x86)
A core dump happens when I've deleted some pointer I should not have, am I right?
The thing I don't understand is the message before "Aborted (core dumped)". Can it guide me to the kind of error I made?
Or is it a problem with Code::Blocks? (I doubt it, but that would be great :p)
*I'm not posting code here because I just want information about what could theoretically create this kind of message. Then I'll search, and if I have trouble finding the error(s), I'll post some code here ;)*
It is telling you that an assertion failed on line 721 of the file pulse/mainloop.c that is part of your source code.
An assertion is typically placed to check invariants or preconditions/postconditions. Taking a precondition as an example, this is saying "this expression has to be true in order for the code below to work correctly".
By inspecting the condition (at line 721 of mainloop.c) and understanding why it was not true in your case, you should be able to find the error in your code that led to the failed assertion.
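For illustration only (a generic sketch, not the PulseAudio code itself), this is the mechanism at work:
#include <cassert>

int divide(int numerator, int denominator)
{
    // Precondition: the caller must never pass a zero denominator.
    // If the expression is false, the runtime prints an "Assertion ... failed"
    // message, calls abort(), and the shell reports "Aborted (core dumped)".
    assert(denominator != 0);
    return numerator / denominator;
}

int main()
{
    return divide(10, 0);  // deliberately violates the precondition
}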
This isn't really a solution, but this issue actually has to do with PulseAudio. I assume the OP is using Linux, possibly Ubuntu, which is where this error occurs. I sometimes get the same thing when using a program written in Python. There is a noted bug about this issue in the PulseAudio Launchpad bug tracker.

Timed Indexed Color sets in CPN Tools that results in Unhandled Exception Error

I am using CPN Tools to model a distributed system. CPN Tools uses CPN ML, an extension of SML. The project homepage is cpntools.org.
I started with a simple model, and when I try to make a particular indexed color set timed, I get an "Internal error". There is another indexed color set within my Petri net model that is timed and works correctly. I am not sure how to troubleshoot this, since I don't understand the error message. Could you help me interpret the error message or give me some hints on what I could be doing wrong?
The model is:
http://imgur.com/JUjPRHK
The declarations of the model are:
http://imgur.com/DvvpyvH
The error message is:
Internal error: Compile error when generating code. Caught error.../compiler/TopLevel/interact/evalloop.sml:296.17-296.20../compiler/TopLevel/interact/evalloop.sml:44.55../compiler/TopLevel/interact/evalloop.sml:66.19-66.27
structure CPN`TransitionID1413873858 = struct ... end (* see simulator debug info for full code *)
simglue.sml:884.12-884.43
"
Thank you~
I know this is an old question, but I ran into the same problem and wasted too much time on it, so maybe this will help someone else in the future.
I didn't understand the exact reason for this, but it seems the problem appears when you manipulate time values on an arc that ends at a transition (I was updating an integer value to the current time, using IntInf.toInt(time())). If I move that code to the outgoing arc of the transition (that is, the one that ends at a place), there is no error.