ERROR::ASSIMP:: Expected different index count in <p> element - c++

I developed a project on Windows with Visual Studio 17 and it worked fine. Now I'm compiling it with CMake on a Linux virtual machine (in VirtualBox) and everything seems to be OK, but when I run my program Assimp doesn't work.
When I create an importer to load an animation it prints an error:
ERROR::ASSIMP:: Expected different index count in <p> element.
What drives me crazy is that it doesn't crash the program: it keeps going after printing this, gets the scene as if nothing happened, and the assert passes, but when I access the animations I get a segmentation fault.
Here is the code:
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <cassert>
Assimp::Importer importer;
const aiScene* scene = importer.ReadFile(animationPath, aiProcess_Triangulate);
assert(scene && scene->mRootNode);
auto animation = scene->mAnimations[0]; // segmentation fault happens here
Any ideas how to fix this?
To reproduce it:
VirtualBox with Ubuntu 22.04 LTS.
the repo can be cloned from here
You need Conan installed.
Once Conan is installed, just clone the repo and run the script called installer.sh.
It just installs some dependencies (xorg-dev, build-essential, etc.) with apt-get install and some libraries with Conan, configures the makefile with CMake, and builds it. Then go to the build dir and run the program called Reskinner.

This is a known bug in the COLLADA implementation: the number of indices does not match the primitive type. I am not sure whether this is caused by a wrong expectation in our COLLADA parser or by an invalid model.
You can find the issue-report here: Problem with wrong indices
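Until that is fixed, you can work around the crash by validating the scene before touching the animation array, instead of relying on the assert alone. A minimal sketch, assuming the animationPath variable from the question (the helper name loadFirstAnimation is just for illustration):

#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <iostream>
#include <string>

// Returns the first animation in the file, or nullptr on failure.
const aiAnimation* loadFirstAnimation(Assimp::Importer& importer, const std::string& animationPath)
{
    const aiScene* scene = importer.ReadFile(animationPath, aiProcess_Triangulate);
    // Reject incomplete scenes, not just null ones.
    if (!scene || (scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE) || !scene->mRootNode)
    {
        std::cerr << "ERROR::ASSIMP:: " << importer.GetErrorString() << std::endl;
        return nullptr;
    }
    // This is the check the snippet in the question is missing: when the
    // COLLADA parser drops the bad <p> element, mNumAnimations can be 0,
    // and indexing scene->mAnimations[0] is exactly the segfault observed.
    if (scene->mNumAnimations == 0)
    {
        std::cerr << "ERROR::ASSIMP:: no animations in " << animationPath << std::endl;
        return nullptr;
    }
    return scene->mAnimations[0];
}

The importer owns the scene (and therefore the returned animation), so it is passed in rather than created locally and must outlive any use of the pointer.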

Launching xgdb throwing "Error while loading shared libraries: libncursesw.so.5: cannot open shared object file" [duplicate]

I recently went to try to debug a program with GDB and got the following error:
gdb: error while loading shared libraries: libncursesw.so.6: cannot open shared object file: No such file or directory
So I went investigating and tried the obvious things, i.e. sudo apt-get install libncursesw5 (and the dev variants), and apt reports that I already have the latest version. Next I tried reinstalling GDB; the problem persists. The output of ldd on GDB confirms that it still doesn't know where this mythical libncursesw.so.6 file is, so I went digging around in the /usr/lib/x86_64-linux-gnu folder and ran ls libncu*, which returned six results: libncurses.a, libncurses++.a, libncurses.so, libncurses++w.a, libncursesw.a, and libncursesw.so, but no libncursesw.so.6. I then naively attempted to just make a copy of libncursesw.so named libncursesw.so.6, to which gdb reports that this file is "too short".
Googling, I can't seem to find a good explanation of how to get this file in place. Every other answer I see just suggests running sudo apt-get install libncursesw5 (or something similar), but I've already tried pretty much every variant of that I can think of. I was going to remove it and then reinstall it, but when I went to do that it gave me a scary warning that I could be doing something potentially harmful to my system, so I aborted that idea.
Some context that also might(?) help:
I'm running a pretty recent install of Linux Mint 19.3 Cinnamon, and this was my first time trying to run GDB on my new computer. I basically set this new computer up as a fresh install, just porting over my home directory and a couple of the more useful hidden dotfiles from my old laptop. I figure this shouldn't be the reason GDB is failing / these files don't exist on the new machine, but I'm mentioning it just in case.
obvious things, i.e. sudo apt-get install libncursesw5
You want libncursesw6, not libncursesw5.
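On Debian/Ubuntu-based distributions whose repositories carry ncurses 6 (an assumption about your Mint release; apt-cache search libncursesw will tell you), the package that ships libncursesw.so.6 is libncursesw6:

sudo apt-get install libncursesw6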

How to build a Docker image for a CUDA-based C++ application running on an Nvidia Jetson?

To be more specific, my source code compiles and links successfully when I build it from inside the container.
However, when I try to build the image from a Dockerfile, it fails.
i.e.:
this works (these lines are run in a terminal "inside" the container):
cd AppFolder; make; //success
this does not (these are lines from the Dockerfile):
RUN git clone <url> && cd APPFolder && make
Now I get:
/usr/bin/ld: warning: libcuda.so.1 needed by...
How can I build the application from the dockerfile?
During the image build, each RUN step executes in a container just like any other; the only difference is the layers that come after it. Perhaps you are running the RUN directive too early, i.e. before the CUDA library was available?
Try putting this command as low as you can in the Dockerfile.
Well, adding "-Wl,--allow-shlib-undefined" to the compiler/linker (g++) flags solved this issue. I think it tells the linker to leave references that will only be resolved at runtime (i.e., when running the Docker image, where libcuda.so.1 actually exists) unresolved at link time.
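For reference, a minimal sketch of a link line using that flag; the file and library names here are hypothetical, not taken from the question:

g++ -o app main.o -lcudart -Wl,--allow-shlib-undefined

-Wl, just forwards the option to ld, so in a Makefile the equivalent change is appending -Wl,--allow-shlib-undefined to LDFLAGS.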

Segmentation fault with OpenALPR

I'm trying to install OpenALPR on Laravel Homestead (Ubuntu 18.04). First I tried The Easiest Way. When I try to run it, I get:
Error opening data file /usr/share/openalpr/runtime_data/ocr/lus.traineddata
Please make sure the TESSDATA_PREFIX environment variable is set to your "tessdata" directory.
Failed loading language 'lus'
Tesseract couldn't load any languages!
Segmentation fault (core dumped)
I cd'd into the /usr/share/openalpr/runtime_data/ocr directory and did not find the lus.traineddata file. But there was a tessdata folder there, and inside it was a lus.traineddata. I copied it from there one directory up and tried the recognition again, but this time:
--(!) Runtime directory '/usr/share/openalpr/runtime_data' is invalid. Missing OCR data for the country: 'us'!
Error loading OpenALPR
Then I tried The Easy Way. Everything compiled normally, but:
Segmentation fault (core dumped)
This happens because OpenALPR's Tesseract OCR expects its trained data in a specific path and the files are not there.
According to the error message it can be set using the TESSDATA_PREFIX environment variable, but in my experience that didn't work.
I came across another solution; it's not the best way, but it may work.
I created a symbolic link from /usr/share/openalpr/runtime_data/ocr/tessdata/lus.traineddata to /usr/share/openalpr/runtime_data/ocr/lus.traineddata in order to make the trained data files available to tesseract/openalpr right where they expect them.
sudo ln -s /usr/share/openalpr/runtime_data/ocr/tessdata/lus.traineddata /usr/share/openalpr/runtime_data/ocr/lus.traineddata
Repeat the command, replacing lus with the desired language/region file (leu, lfr, ...).
Hope it helps
This is because the language trained data is in [runtime_data path]/ocr/tessdata/ in Tesseract 4.0, unlike Tesseract 3.0, which stored it in [runtime_data path]/ocr/.
This problem is fixed in this commit.
But it seems that the version of openalpr in the apt repositories is behind this commit.
So the temporary solution is moving the language data to [runtime_data path]/ocr, as in danielillu's solution.
Since the 'us' country config only requires the lus.traineddata file, you only need to move that one.
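For example, to copy the file instead of symlinking it (assuming the default install prefix used in the question):

sudo cp /usr/share/openalpr/runtime_data/ocr/tessdata/lus.traineddata /usr/share/openalpr/runtime_data/ocr/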

Where is "nvinfer.h" from tensorrt located?

I have been trying to compile a basic TensorRT project on a desktop host. For now the source is literally just the following:
#include <nvinfer.h>
class Logger : public nvinfer1::ILogger
{
} glogger;
Upon running make, though, I receive the following message:
fatal error: nvinfer.h: No such file or directory
 #include <nvinfer.h>
The error is correct, too - I used locate to try to find it, but there's nothing on my machine that matches. I followed the install instructions for desktop installation of TensorRT 2.1 as described here: https://developer.nvidia.com/nvidia-tensorrt-download
So my question is, does anyone know where nvinfer.h is supposed to be? In other words, am I missing a needed package that contains it, or did I miss something else that's essential?
Small addendum: one thing I noticed is that libgie1 is not installed, and it was not included as a Debian package with the provided TensorRT download like the other packages such as gie-dev were.
Before using locate, if you recently added new files it is good practice to run sudo updatedb; if the file is on the PC you should see it afterwards.
Anyway, googling a bit, it looks like the header you're looking for is NvInfer.h; caps matter.
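Putting that together, the top of the file would look something like the sketch below. This assumes the TensorRT-2.x-era ILogger interface; the exact log() signature varies between TensorRT versions (newer ones add noexcept, for instance), so check your own NvInfer.h:

#include <NvInfer.h>  // note the capitalization

class Logger : public nvinfer1::ILogger
{
    // ILogger::log is pure virtual, so it must be overridden before
    // Logger can be instantiated as a global object.
    void log(Severity severity, const char* msg) override
    {
        // e.g. forward severity and msg to stderr
    }
} glogger;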

ROS installation getting stuck during build

I am trying to install ROS Kinetic on a Raspberry Pi 3 according to this official page. The installation requires building 51 packages. On the "roscpp" package build, the whole Pi 3 hangs (even when using the -j2 option to reduce the number of threads). I tried 2-3 times, always with the same result; I even left it in the same hung state for 1-2 days assuming it would come out, but the build never completes. Is this the correct way to do it, or is there another way to cross-compile and put the packages on the Pi 3? Am I the only one facing this issue? (Tried on 2 different Pi 3s.)
I had the same problem with my Raspberry Pi 2 Model B but solved it by changing the -j4 option to -j2.
You can also add extra swap space by editing /etc/dphys-swapfile and changing the line:
CONF_SWAPSIZE=100
To something like this:
CONF_SWAPSIZE=1000
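After editing the file, the new size only takes effect once the swap file is recreated; on Raspbian that should be possible with the stock init script (an assumption about your image), or simply by rebooting the Pi before rerunning the build:

sudo /etc/init.d/dphys-swapfile restart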