I am trying to connect Valgrind with CLion 2020.2.1 for testing purposes, but CLion (and perhaps I) cannot locate the Valgrind executable.
The path I am currently using is:
\\wsl$\Ubuntu\usr\bin\valgrind
But when I try to run anything with Valgrind using this path I get a message stating that it cannot run the program because of this error:
CreateProcess error=193, %1 is not a valid Win32 application
This seemed to make sense after further research: what I am selecting (\usr\bin\valgrind) appears to be a .cpp source file rather than an executable, as the error would indicate, but now I don't know what the issue is. If that isn't the file CLion is looking for, then where is the one it wants?
I have also tried the path '/usr/bin/valgrind/', which resulted in "Valgrind executable not found." when I tried to compile.
If anyone has any advice it would be greatly appreciated!!
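For what it's worth, a quick sanity check (assuming the distribution is named Ubuntu, as in the path above) is to confirm from a PowerShell prompt on the Windows side that what sits at /usr/bin/valgrind really is a Linux ELF binary, which is exactly the kind of file a Win32 CreateProcess call cannot start:

# run inside the WSL Ubuntu distribution from Windows
wsl.exe -d Ubuntu -- which valgrind          # expected: /usr/bin/valgrind
wsl.exe -d Ubuntu -- file /usr/bin/valgrind  # typically reports an ELF executable (install "file" first if missing)
wsl.exe -d Ubuntu -- valgrind --version      # confirms the tool itself runs under WSL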
Related
I just started coding in C++ and nothing is working. I don't know if I didn't install GCC properly or what, but I have already set the path to the bin folder and I don't know why it refuses to run.
The code:
#include <iostream>

int main()
{
    std::cout << "hello";
}
The problem is that when I try to use the Code Runner extension it does not work, so I just press F5, and then I get an error message that first says "could not find the task file 'c/c++: g++.exe build active file'" and offers three options:
1: debug anyway
2: configure task
3: cancel
When I choose "debug anyway" I get the error shown here.
Since you're on Windows, consider installing Visual Studio or CLion. They're more beginner-friendly; VS Code can get tricky for running C++ on Windows. Either way, it looks like you're trying to run your project without building it first. There should be a build option right next to the run option. Try building it, then running it. The build is what compiles your code and creates the project's .exe file, which is what the error message is saying isn't there.
The IDEs mentioned above always auto-build on run, so there's that.
If you're familiar with using the command line, these commands will do what you want, assuming the .cpp file is in your current directory.
g++ <FILENAME>
./a.out
There are wonderful flags you can add to the first command (the compiling command) like -Wall, -Wextra, -o, and many more. But, since it seems like you're just starting out, there's no need to worry about those!
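For example, a typical invocation with a couple of those flags might look like this (the file name main.cpp and the output name hello are just placeholders):

g++ -Wall -Wextra -o hello main.cpp   # compile with extra warnings enabled, name the output "hello"
./hello                               # run the resulting executable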
If you're not familiar with the command line, I would encourage you to become familiar! It is an extremely powerful tool for a programmer. Here is a YouTube video that talks about it.
I get the following error:
Exception occurred executing command line.
Cannot run program "C:/OMNEST-5.5.1/samples/enera/lteAdvanced/enera.exe" (in directory "C:\OMNEST-5.5.1\samples\enera\lte"): CreateProcess error=2, The System cannot find the file.
I have already built the project many times. I tried one of the simpler ready-made examples that comes with OMNeT++ just to check whether things work in general, and it does. But if I copy that example into my project it no longer works, so there must be something wrong with my project file. It looks correct to me, though: I have just one connection and kept it really, really simple, but it still doesn't work. I have installed OMNEST and INET correctly.
The most likely cause is that the EXE file cannot find the OMNeT++ dynamic libraries it tries to load. And the most likely reason for that is that you are trying to execute the executable from a CMD prompt instead of from the shell provided by the mingwenv.cmd script.
Everything you do in OMNeT++ (including starting the simulations) must be run from the mingwenv shell.
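As a rough sketch of that workflow (the directory comes from the error message above; the exact run command may differ per project, so treat it as an assumption):

# 1) from the OMNEST installation root, start the MinGW shell it ships with
mingwenv.cmd
# 2) inside that shell, change to the directory containing your omnetpp.ini and start the executable,
#    so the OMNeT++ DLLs that mingwenv puts on the PATH can be found
cd samples/enera/lte
./enera.exe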
I built TensorFlow with VS2015 and I was able to run some examples,
such as tf_tutorials_example_trainer and label_image.
Then I tried to run the samples here. I was able to compile and start example.cc, but when it reaches the line
Scope root = Scope::NewRootScope();
I get this error:
Op type not registered 'NoOp' in binary running on DESKTOP-S5QHRCE.
Make sure the Op and Kernel are registered in the binary running in this process
What am I missing?
I found this. Thanks to Joe, who explains how to use the /WHOLEARCHIVE linker option to fix the issue. To avoid an out-of-memory error during linking when the Optimize for debugging (/DEBUG) option is used, run msbuild /p:Configuration=Release yourproject.vcxproj in a command prompt.
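A hedged sketch of what that looks like from a command prompt (the project and library names are placeholders, and the "Additional Options" location is just one common place to put a linker switch, not something stated above):

rem build the Release configuration so the link step does not run out of memory
rem the way a /DEBUG link can
msbuild /p:Configuration=Release yourproject.vcxproj
rem /WHOLEARCHIVE is a linker switch; one place to add it is the linker's
rem "Additional Options" field in the Visual Studio project properties, e.g.:
rem     /WHOLEARCHIVE:your_tensorflow_lib.lib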
I am getting an error:
"Error occurred during initialization of VM
Unable to load native library: Can't find dependent libraries"
The error arises when I try to execute my exe file.
I created the exe file with PyInstaller from a Django application. The application uses the PyLucene library, which I think may be the cause of the error.
How to fix the error?
Since I can't be certain, given that you've provided very few details, here is a shot in the dark to help solve your problem:
First, try removing the jvm.dll file that gets packaged by the pyinstaller -D yourmodule.py command (for now, work with the one-directory mode rather than the -F one-file option). The reason why is here.
With that jvm.dll file gone, you should start seeing the actual error code, and with it the Java class or dependency that isn't being loaded.
If it's a Java class that isn't being loaded properly, then you know instantly that it must not be correctly represented in the CLASSPATH environment variable, and you should do everything in your power to make sure it is:
e.g.: os.environ['CLASSPATH'] += os.pathsep + 'the/path/to/the/jar'
Otherwise, consider bulking up your question with more details, especially if you can get a more meaningful error output.
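Put together, the rebuild-and-remove-jvm.dll steps above might look something like this from a Windows command prompt (module and path names are placeholders, not taken from the question):

rem rebuild in one-directory mode so the bundled files are easy to inspect
pyinstaller -D yourmodule.py
rem temporarily remove the bundled jvm.dll so the real Java error becomes visible
del dist\yourmodule\jvm.dll
rem run the bundled exe again and read the underlying error message
dist\yourmodule\yourmodule.exe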
I had the same error trying to run a .exe built with PyInstaller through Wine.
My problem went away after adding C:\Program Files\Java\[your jdk version here]\jre\bin\server to the PATH environment variable in Wine; I suppose it might be the same on Windows.
The error also reappeared if I tried to build with C:\Program Files\Java\[your jdk version here]\jre\bin\server in my PATH, so I had to build without it and then append it before running the exe (I have no explanation as to why this happens).
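A hedged example of appending that directory to PATH just before launching on Windows (keeping the placeholder for the JDK version, and using yourmodule.exe as a stand-in for the real executable):

rem append the JVM's server directory to PATH for this console session only, then run the exe
set PATH=%PATH%;C:\Program Files\Java\[your jdk version here]\jre\bin\server
yourmodule.exe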
I already asked this question in the NVIDIA forum but never got an answer (link).
Every time I try to step into a kernel I get a similar error message to this:
__device_stub__Z10bitreversePj (__par0=0x110000) at
/tmp/tmpxft_00005d4b_00000000-1_bitreverse.cudafe1.stub.c:10
10 /tmp/tmpxft_00005d4b_00000000-1_bitreverse.cudafe1.stub.c: No such file or directory.
in /tmp/tmpxft_00005d4b_00000000-1_bitreverse.cudafe1.stub.c
I tried to follow the instructions of the cuda-gdb walkthrough, but the error stays.
Does anybody have a tip on what could cause this behaviour?
The "device stub" for bitreverse(unsigned int*) (whatever that is) was compiled with debug info, and it was located in /tmp/tmpxft_00005d4b_00000000-1_bitreverse.cudafe1.stub.c (which was likely machine-generated).
The "No such file" error is telling you that that file is not (or no longer) present on your system, but this is not an error; GDB just can't show you the source.
This should not prevent you from stepping further, or from setting breakpoints in other functions and continuing.
I was able to solve this problem by using the -keep flag on the nvcc compiler. This tells the compiler to keep all intermediate files created during compilation, including the stub.c files created by cudafe that the debugger needs in order to step through kernel functions. Otherwise the intermediate files are deleted by default at the end of compilation and the debugger cannot find them. You can also specify a directory for the intermediate files, and you will need to point your debugger (cuda-gdb, Nsight, etc.) to that location.
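A sketch of what such a compile line can look like (the -g/-G debug flags, file names, and the intermediates directory are assumptions about a typical cuda-gdb setup, not taken from the question):

# keep host (-g) and device (-G) debug info, and keep the intermediate files cudafe generates
nvcc -g -G -keep -keep-dir ./nvcc_intermediates -o bitreverse bitreverse.cu
# then debug, with the intermediates directory available to the debugger
cuda-gdb ./bitreverse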
I think I had such a problem once, but I can't really remember what caused it. Do you use textures in your kernel? In that case you can't debug it.