OpenNI InitFromXmlFile parameter error

nRetVal = context.InitFromXmlFile(SAMPLE_XML_PATH, &errors);
Above is the function call that produces the errors. SAMPLE_XML_PATH is the XML file's path; I tried both relative and absolute paths, but the error still appears. The return value (nRetVal) of this function is supposed to be 0 here, but it is 65537. This function reads Kinect data from an Xbox 360 Kinect sensor.
Below is the error message.
Failed: The parameter is incorrect.
[80070057] (m_pDmo -> AllocateStreamingResources())
m_pReader->Start():Error!
But the NiViewer samples all run fine.
Has any engineer bumped into this problem before? I have been struggling with it for a whole day.
Note: this program ran fine yesterday, but today the error appears with a nearly identical program. (The difference between the two can be ignored; I have already tested that.)
I will appreciate any answers.
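For reference, OpenNI 1.x can turn a non-zero XnStatus such as 65537 into a readable message via xnGetStatusString(); a minimal sketch, reusing the question's SAMPLE_XML_PATH-style call:

#include <XnCppWrapper.h>
#include <cstdio>

// Sketch: report why InitFromXmlFile() failed instead of only checking for 0.
void initOrReport(xn::Context& context, const char* xmlPath)
{
    xn::EnumerationErrors errors;
    XnStatus nRetVal = context.InitFromXmlFile(xmlPath, &errors);
    if (nRetVal != XN_STATUS_OK)
    {
        // xnGetStatusString() maps status codes (e.g. 65537) to text.
        printf("InitFromXmlFile failed: %s\n", xnGetStatusString(nRetVal));
    }
}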

When I initialized the kinectInterface handler, I allocated it with new through a raw pointer and worked on that memory directly.
The bad part is that the handler was never closed when the main process exited.
The solution is to put the pointer into a smart pointer such as "Ptr", which releases the handler when it goes out of scope.
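A minimal sketch of that fix, using std::unique_ptr and a hypothetical KinectInterface class standing in for the actual handler type:

#include <memory>

// Hypothetical handler; the real class and its cleanup may differ.
class KinectInterface {
public:
    ~KinectInterface() { /* close the device handle here */ }
    void run() { /* read frames, etc. */ }
};

int main()
{
    // The smart pointer destroys the handler (closing the device) even if
    // main() returns early or an exception unwinds the stack.
    std::unique_ptr<KinectInterface> kinect(new KinectInterface());
    kinect->run();
    return 0;
}   // handler released here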

Related

Exception thrown: read access violation. std::shared_ptr<>::operator-><,0>(...)->**** was 0xFFFFFFFFFFFFFFE7

Good afternoon to all! I am writing a game engine using OpenGL + Win32/GLFW, so the project is fairly large, but I have a problem that has led me to a dead end and I can't understand what the cause is. I have a shared_ptr<Context> in the 'windows' class (it is platform-based) that is responsible for the context (GL, D3D). Everything works fine when I launch the application and everything is drawn normally, but as soon as I move the cursor into the window, a crash occurs on any function call through the context; in my case it is:
context->swapBuffers();
Here is the crash:
std::shared_ptr<Context>::operator-><Context,0>(...)->**** was 0xFFFFFFFFFFFFFFE7.
Then I looked at the call stack and saw that the context pointer itself is non-null, so the function call should be valid.
I searched for the problem for a long time and found that the errors stop occurring on calls through context-> when I remove the Win32 message-processing code. I removed everything unnecessary from the loop and left only the basic functions, because I thought some other function inside was causing the problem, but no:
MSG msg;
// Drain all pending messages without blocking.
while (PeekMessageW(&msg, NULL, 0, 0, PM_REMOVE) > 0) {
    TranslateMessage(&msg);
    DispatchMessageW(&msg);
}
That is, when I delete TranslateMessage() and DispatchMessageW(), the error goes away. At that point I was thoroughly confused: I don't understand what is happening. I even wondered whether the operating system itself somehow affects that pointer, forbidding reads from it or something like that.
Then I paid attention to __vtptr in the call stack and noticed that it is nullptr, and moreover it has the type void**. And the strangest thing is that the error shows ->**** was 0xffffffffffffffc7, with as many as four consecutive dereferences. What is that?
I understand I have posted almost no code, but the project is big and I don't think it makes sense to dump all of it, so I tried to explain the problem by roughly showing what happens in my code. I will be grateful to anyone who can help :)
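For context, DispatchMessageW is what invokes the window procedure, so removing it also removes whatever that procedure does. A common hazard in exactly this setup (an assumption here, since the window procedure is not shown in the question) is reading the object pointer back from GWLP_USERDATA before it has been stored, then calling members through the garbage pointer, which corrupts nearby memory such as the shared_ptr. A minimal sketch of the conventional safe pattern, with hypothetical Window/Context types:

#include <windows.h>
#include <memory>

struct Context { void swapBuffers() { /* ... */ } };

class Window {
public:
    std::shared_ptr<Context> context;

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        Window* self = nullptr;
        if (msg == WM_NCCREATE) {
            // Stash the object pointer passed via CreateWindowExW's lpParam.
            CREATESTRUCTW* cs = reinterpret_cast<CREATESTRUCTW*>(lp);
            self = static_cast<Window*>(cs->lpCreateParams);
            SetWindowLongPtrW(hwnd, GWLP_USERDATA,
                              reinterpret_cast<LONG_PTR>(self));
        } else {
            self = reinterpret_cast<Window*>(
                GetWindowLongPtrW(hwnd, GWLP_USERDATA));
        }
        // Until GWLP_USERDATA is set, self is null; touching members here
        // (e.g. on WM_MOUSEMOVE) would scribble over unrelated memory.
        if (self && self->context) {
            // handle messages through self...
        }
        return DefWindowProcW(hwnd, msg, wp, lp);
    }
};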

QCamera::start gives mysterious "failed to start" log message

It's just plain luck that my program is so simple, so I eventually found out what causes the mysterious log message. My program's log looks like this:
Debugging starts
failed to start
Debugging has finished
Which happens after:
camera = new QCamera(QCameraInfo::defaultCamera());
// see http://omg-it.works/how-to-grab-video-frames-directly-from-qcamera/
camera->setViewfinder(frameGrabber = new CameraFrameGrabber());
camera->start();
The start() method causes this message in the console. The meaning of the message is obvious, but it's not very helpful. What steps should I take to troubleshoot it?
Reasons for this might differ, but in my case it was simply because I provided an invalid QCameraInfo. The culprit is that QCameraInfo::defaultCamera() might return an invalid value if Qt fails to detect any cameras on your system, which unfortunately happens even when cameras are present.
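A minimal sketch of that check, using QCameraInfo::isNull() and falling back to QCameraInfo::availableCameras() (the helper function and its name are mine, not part of the original program):

#include <QCamera>
#include <QCameraInfo>
#include <QDebug>

// Hypothetical helper: returns a QCamera built from a valid QCameraInfo,
// or nullptr if no usable camera was detected.
QCamera* createDefaultCamera(QObject* parent)
{
    QCameraInfo info = QCameraInfo::defaultCamera();
    if (info.isNull()) {
        // No default camera reported; fall back to enumerating all devices.
        const QList<QCameraInfo> cameras = QCameraInfo::availableCameras();
        if (cameras.isEmpty()) {
            qWarning() << "No cameras detected";
            return nullptr;
        }
        info = cameras.first();
    }
    return new QCamera(info, parent);
}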

Running the executable of hdl_simple_viewer.cpp from Point Cloud Library

The Point Cloud Library comes with an executable pcl_hdl_viewer_simple that I can run (./pcl_hdl_viewer_simple) without any extra arguments to get live data from a Velodyne LIDAR HDL32.
The source code for this program is supposed to be hdl_viewer_simple.cpp. A simplified version of the code is given on this page, which cannot be compiled as-is and requires a tiny bit of tweaking.
My problem is that the executables I build myself, for either version, do not run: I always get the smart-pointer error "Assertion px!=0". I am not sure whether I am executing the program incorrectly or something else is wrong. The executable is supposed to be run like this:
./hdl_viewer_simple -calibrationFile hdl32calib.xml -pcapFile file.pcap
when playing back previously recorded PCAP files, or just ./hdl_viewer_simple to get live data from the real sensor. However, I always get the assertion failure.
Has anyone been able to run the executables? I do not want to use the ROS drivers.
"Assertion px!=0" is occurring because your pointer is not initialized.
Now that being said, you could initialize it inside your routines, in case the pointer is NULL, especially for data input.
in here, you can try updating the line 83 like this :
CloudConstPtr cloud(new Cloud); //initializing your pointer
and hopefully, it will work.
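Expanded slightly, with the aliases such viewers typically use (the exact type names in hdl_viewer_simple.cpp may differ), the failure mode looks like this:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <iostream>

typedef pcl::PointCloud<pcl::PointXYZI> Cloud;
typedef Cloud::ConstPtr CloudConstPtr;

int main()
{
    CloudConstPtr cloud(new Cloud);  // initialized: safe to dereference
    CloudConstPtr empty;             // default-constructed: internal px == 0

    // empty->size() here would trip "Assertion px != 0",
    // so always test the pointer before dereferencing it:
    if (empty)
        std::cout << empty->size() << std::endl;
    std::cout << cloud->size() << std::endl;  // fine: prints 0
    return 0;
}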
Cheers,

Scilab compilation "cannot allocate this quantity of memory"

I am facing issues with memory allocation in Scilab after compiling.
I am compiling on Red Hat on ppc64 (POWER8). Stack limits are already set to unlimited (ulimit -s unlimited). The ./configure script (with several options I am not showing here) runs successfully, but make all fails and stops. When it stops, it is stuck at the Scilab command prompt with this message:
./bin/scilab-cli -ns -noatomsautoload -f modules/functions/scripts/buildmacros/buildmacros.sce
stacksize(5000000);
!--error 10001
stacksize: Cannot allocate memory.
%s: Cannot allocate this quantity of memory.
at line 27 of exec file called by :
exec('modules/functions/scripts/buildmacros/buildmacros.sce',-1)
-->
I have investigated a bit, and that error message seems indeed to be raised at line 00027 of buildmacros.sce, where the function stacksize(5000000) is called.
This function is defined in:
scilab-5.5.1/modules/core/sci_gateway/c/sci_stacksize.c
I found a version of the file at this page: http://doxygen.scilab.org/master_wg/d5/dfb/sci__stacksize_8c_source.html.
The condition that evaluates to FALSE and triggers the message seems to be at line 00295.
Inside that file, you can see that the error is displayed whenever the stacksize given as input is LARGER than what is returned by the function get_max_memory_for_scilab_stack() defined in:
scilab-5.5.1/modules/core/src/c/stackinfo.c
Again I found a version online at the following page:
http://doxygen.scilab.org/master_wg/dd/dfb/stackinfo_8h.html#afbd65a57df45bed9445a7393a4558395
The function is defined starting at line 109.
It seems to invoke a variable called MAXLONG, which, however, is NEVER explicitly declared! As you can see, it is defined several times (lines 00019, 00035, 00043, 00050), but all those lines are commented! [correction: the lines are NOT commented; that was my misunderstanding of # as a comment sign, which it is not]
So my guess is: MAXLONG is not declared, so the function does not return a value (or returns 0), and the error message is therefore triggered because the stacksize given as input is larger than 0, NULL, or N/A.
My questions are then:
Why are all lines commented where MAXLONG is defined?
Where does MAXLONG originate from? Is it something passed from the kernel?
How can I solve the problem?
Thanks!
PS - I tried to comment out the stacksize line in buildmacros, and it compiled and installed without issues. However, when I started scilab-cli, it displayed the same message again.
Edit after further investigation:
After further investigation, I found out that what I thought were comments are in fact preprocessor directives... but I have kept my mistakes above so that the answer to my question remains understandable.
Here are my new points.
In Scilab I noticed that giving an out-of-bounds stacksize input invokes the same function get_max_memory_for_scilab_stack() to get the upper bound. The lower bound is apparently defined by default.
-->stacksize(1)
!--error 1504
stacksize: Out of bounds value. Not in [180000,268435454].
Also, the stacksize currently in use seems fine:
-->stacksize()
ans =
7999994. 332.
However, when I try to give an input value in between, it fails:
-->stacksize(1)
!--error 1504
stacksize: Out of bounds value. Not in [180000,268435454].
It seems to invoke a variable called MAXLONG
It's not a variable, but a pre-processor macro.
Why are all lines commented where MAXLONG is defined?
You should ask that of the person who commented the lines out. They are not commented in the scilab-5.5.1 sources that are online.
Where does MAXLONG originate from? Is it something passed from the kernel?
It's defined in the file scilab-5.5.1/modules/core/src/c/stackinfo.c. It's defined to the same value as LONG_MAX, which is defined by the standard C library (<limits.h> header). If the macro is not supplied by the standard library, it's defined to some other, platform-specific value.
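The pattern is roughly the following sketch (illustrative only, not the exact Scilab source; the fallback value is hypothetical):

#include <limits.h>

/* MAXLONG is a preprocessor macro, chosen per platform at compile time. */
#ifdef LONG_MAX
#define MAXLONG LONG_MAX        /* normal case: <limits.h> provides it */
#else
#define MAXLONG 0x7FFFFFFFL     /* hypothetical platform-specific fallback */
#endif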
How can I solve the problem?
If your problem originates from a missing definition of MAXLONG, then you must define it. One way is to uncomment the lines that define it. Or re-download the original sources, since yours don't appear to match the official ones.

Strange semantic error

I have reinstalled Emacs 24.2.50 on a new Linux host and started a new .emacs config based on magnars' Emacs configuration. Since I had used CEDET with some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting and for any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point on a non-C-primitive type (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline I get an error message like
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
where DEPFETResolutionAnalysis.cc is the file & buffer I'm currently editing, and IndexMap and fXBetween are types defined in files included by the file I'm editing (or by some file it includes).
I have not tested any further features of CEDET/semantic, as this problem is pretty annoying. My CEDET config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible cedet initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when global-semantic-idle-local-symbol-highlight-mode is enabled, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs, and it produces ~700kb [sic!] of output. It looks like it performs operations on a data container which, by the looks of it, holds information on all the symbols defined in the parsed files. As I have parsed a rather large package (~20MB of source files), this table is rather large. Can semantic handle a database that large, or is that impossible and the reason for my problem?
EDIT3: Deleting the contents of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question about the site-wide init files can be found here).
You need to change your init file so that it loads CEDET only once, not in a hook that is called for every .h/.hpp/.c/.cpp file. You can use this config as the base, and read more in the following article.
The problem you are seeing is caused by Semantic trying to analyze header files: when it tries to open them, its initialization routines are called again, and again...
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great starting point for a nice config; I have used that very config as the basis for my own CEDET setup.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex's article. Additionally, one must delete the contents of the directory ~/.semanticdb (which contains the semantic database and was, I guess, corrupted).
I'd like to thank Alex Ott for his help and for sticking with me throughout my journey to the solution :)