Ok, n00b question. I have a cpp file. I can build and run it in the terminal. I can build and run it using clang++ in VSCode.
Then I add gtest to it. I can compile in the terminal with g++ -std=c++0x $FILENAME -lgtest -lgtest_main -pthread and then run, and the tests work.
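For illustration, a minimal gtest file that works with that compile command looks like the sketch below (the suite and test names are placeholders; -lgtest_main supplies main()):

#include <gtest/gtest.h>

// Placeholder test: EXPECT_EQ succeeds, so the binary reports one passing test.
TEST(SampleSuite, BasicAssertion) {
  EXPECT_EQ(2 + 2, 4);
}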
I install the C++ TestMate extension in VSCode. Everything I see on the internet implies it should just work. But my test explorer is empty and I don't see any test indicators in the code window.
I've obviously missed something extremely basic. Please help!
Executables should be placed inside the out or build folder of your workspace, or you can modify the testMate.cpp.test.executables setting.
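For example, in .vscode/settings.json (a sketch; the glob is only an illustration and should match wherever your compiled test binaries actually land):

{
  "testMate.cpp.test.executables": "build/**/*{test,Test,TEST}*"
}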
I'd say, never assume something will "just work".
You'll still have to read the manual and figure out the names of the config properties. I won't provide exact examples, because even though I've only used this extension for a short time, its name, and therefore the full property path, has already changed, so any example might become obsolete quite fast.
The general idea is: this extension monitors some files/folders; when they change, it assumes those are executables created using either gtest or catch2. The extension tries to run them with the standard flags for those frameworks to obtain a list of test suites and test cases. If it succeeds, it parses the output and creates a nice list in the side panel. The markers in the code depend on exactly the same parsed output, so if you have one, you have the other as well.
From the above, you need 3 things to make this work:
1. Provide the correct path (or a glob pattern) for finding all test executables (while ignoring all non-test executables) in the extension config. There are different ways to do this depending on the complexity of your setup; they are all in the documentation, though.
2. Do not modify the output of the test executable. For example, if you happen to print something to stdout/stderr before the gtest implementation parses and processes its standard flags, the extension will fail to parse the output of ./your_test_binary --gtest_list_tests (see the example after this list).
3. If your test executable needs additional setup to run correctly (env vars, cwd), make sure that you use the "advanced" configuration for the extension and configure those properties accordingly.
To troubleshoot points 2 and 3 you can turn on debug logging for the extension (again, in VSCode's settings JSON). This creates an additional "Output" tab/category where you can see which files were considered, which were run, what their output was, and why a particular file was ignored.
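For point 2, you can run the same listing command yourself and check that the output is clean (the binary and test names below are placeholders):

./your_test_binary --gtest_list_tests
# a clean listing looks roughly like:
# MyTestSuite.
#   HandlesEmptyInput
#   HandlesValidInput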
This messed with me for a while; I did as Mate059 answered above and it didn't work.
Later on I found out that the reason it didn't work was that I was using a Linux terminal inside Windows (enabled from the Features section), and I had previously installed the g++ compiler from that Linux terminal, so the compiler was turning my code into a .out file; for some reason TestMate could not read .out files.
Once I compiled the C++ source file using the PowerShell terminal, it created a .exe file; I then changed the path in settings.json as Mate059 said, and the tests showed up.
TL;DR
Mate059 gave a great answer: go into settings.json inside your .vscode folder and set "testMate.cpp.test.executables": "filename.exe".
For me it also worked using the wildcard * instead of filename.exe, but I don't suggest doing that, as it might pick up the .exe built from your main cpp file and whatnot.
Related
I'm currently studying computer engineering and taking an embedded systems class. My issue is that we use a custom library and then compile it in an old version of CodeWarrior.
How do I go about creating an include path for my LSP with nvim?
I was wondering how I would go about creating an include path for my LSP with nvim when I am not compiling the code locally but compiling it later with an old IDE.
Any wisdom would be appreciated.
Note: in class we are required to use an external editor; the older version of CodeWarrior is very bad, so it is used for compiling for our microcontroller but is unusable for writing code.
Things I have done:
I have attempted using compile_commands.json by copying my VSCode config for the path location.
I have tried using a .clangd file with -I ...
I have tried other methods but had no success so far.
Overall, I was hoping to find a solution; I have pored over the getting started page and Stack Overflow for several hours trying different methods, to no avail.
The easiest approach is probably to use a .clangd file. Based on the path in your comment, the .clangd file should look like this:
CompileFlags:
  Add: -I/home/bjc1269/Documents/github/libraries/lib/hc12c/include
A few things that I'm seeing in the .clangd file in your comment that don't work are:
Variable substitutions like ${workspaceFolder}. This is a VSCode feature that works in some VSCode settings like "clangd.arguments", but is not supported in a .clangd file, which is editor-agnostic (for example, it works with editors that don't have a concept of a "workspace").
Referring to your home directory as ~. Expanding ~ to /home/<username> is a feature of your shell. Command-line arguments specified in .clangd are passed directly to the compiler without being processed by the shell, so ~ will not work.
Globs like **. To be honest, I'm not even sure what the intended semantics for this could be in the context of specifying include directories.
Square brackets inside the argument to -I. Square brackets may appear in a .clangd file as YAML syntax for specifying multiple values in a list, for example you might have:
CompileFlags:
  Add: [-I/path/to/directory1, -I/path/to/directory2]
But if you write -I=[/path/to/directory], the brackets just get passed on verbatim to the compiler, which does not understand this syntax.
First of all: Welcome to Stack Overflow! :D
I'd recommend using Bear for this. You simply invoke it with your build command, and it generates a compile_commands.json that the clangd LSP reads automatically to pick up the includes.
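If your build is driven by make (an assumption; substitute your actual build command), the invocation is just:

bear -- make

(older Bear 2.x releases use bear make without the --). This writes a compile_commands.json in the current directory, which clangd picks up automatically.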
I have a C++ project in Ubuntu 12.04. To run the project the makefile requires the following files:
1. All the .cpp files
2. All the .h files
3. Three shared libraries.
The project is fully functional and performs according to the specifications. All the required .cpp and .h files are available. The problem is that there is no main() function in any of the source files; the program entry point resides in one of the three shared libraries. My job is to figure out the program execution pipeline, and without any main file I am not able to do that. I can't run the project in any IDE (e.g. Eclipse) because there is no main function available.
Question: Can you please tell me how to find the program entry point?
P.S: I will be glad to provide any kind of information or material you may need to solve my problem.
Edit: The CMakeLists.txt file is available here.
Edit 2: The build.sh file is available here.
To find the entry point, look into each shared object with:
nm $library | egrep "T main$"
The library with main() will output something like
090d8ab0 T main
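If there are several shared objects to check, a small loop saves time (a sketch; the ./libs/*.so glob is a placeholder for wherever your three libraries live, and stripped shared objects may need nm -D to list dynamic symbols):

for lib in ./libs/*.so; do
  echo "== $lib"
  nm -D "$lib" | egrep "T main$"
done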
A very useful way to visualize the execution tree is to run:
valgrind --tool=callgrind ./my_executable -arg -arg ....
(you can abort execution early with Ctrl+C)
This will output a callgrind.out.<pid> file. To visualize it, run kcachegrind callgrind.out.<pid>.
You will need valgrind:
sudo apt-get install valgrind
and kcachegrind
sudo apt-get install kcachegrind
Build it with the debug option -g and step into the program with a debugger like gdb (or cgdb or ddd). You'll need the appropriate debug libraries, though.
Short of that, play with the code a bit. Try putting printf or cout statements that print internal variables in any functions that look important, and see what the program status is and how frequently they get called. If main is hidden in a library, there's probably another function somewhere that behaves like main for the purposes of the API provided by whatever library has the real main.
What's the API documentation for your libraries? (is this a school project?). It sounds odd to have a hidden main and not say anything about it.
In case you use a build system (CMake, SCons, ...), it is quite possible that the build system is also generating some files, and one of them might contain the main() function. We use this methodology when we generate the main function in order to instantiate classes for libraries that were specifically selected in CMake-gui.
And again, it is possible that the build system deletes the generated files due to some obscure policy the original developers thought of but didn't tell you. So search through your build system files, see what is actually happening there.
Edit
So, after seeing your CMakeLists.txt:
Check ${DIR_EXT}/covis/src/ci.cpp, where DIR_EXT is set by SET( DIR_EXT "../ext/" CACHE PATH "Folder holding external libraries" ).
See what's in there and let us know :)
Edit2
After seeing build.sh (execute steps in order):
1.
change
`cmake -D COMPILE_BINARY=ON ..`
to
`cmake -D COMPILE_BINARY=ON -DCMAKE_BUILD_TYPE=Debug ..`
and add the same -DCMAKE_BUILD_TYPE=Debug to the other cmake command too.
This will build your library and executable in debug mode.
2.
Now, in one of the C++ source files you have access to and are sure will be called (the earlier the function will be called, the better), add:
asm("int $0x03");
This will create a breakpoint in your application.
(If you do not want to use this, see below).
3.
Build your application.
4.
Run it via a debugger in terminal:
gdb ./myapplication <ENTER>
(this will give you a gdb prompt)
(if you did not add the asm breakpoint from above, type in the gdb prompt: break filename.cpp:linenumber or break methodname to add a gdb breakpoint).
run <ENTER>
Now your application should stop in your function when it is executed.
You are still in the gdb prompt, so type:
bt <ENTER>
This will print out the backtrace of your application. Somewhere you should see a main function, together with filename and linenumber.
However, that setnames.sh looks interesting; check that it doesn't do anything funny :)
I just installed YouCompleteMe for Vim through Vundle. It works, but it only shows words contained in the current file. I want to use it to develop C++ programs; how can I configure it to show autocompletion from the C++ header files in /usr/include, for example? Thanks a lot.
You need to navigate to ~/.vim/bundle/YouCompleteMe and run the installation script with --clang-completer, so do ./install.sh --clang-completer. After it finishes you should have support for C-like languages.
You may also need to place let g:ycm_global_ycm_extra_conf = '~/.vim/bundle/YouCompleteMe/cpp/ycm/.ycm_extra_conf.py' in your ~/.vimrc.
I have installed with Pathogen. I tried the above instructions with ./install.sh --clang-completer. After this, it did not work, and I indeed had to add the path. But it was different from the one in another reply here, namely
let g:ycm_global_ycm_extra_conf = '.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/.ycm_extra_conf.py'
so there is an extra "third_party/ycmd" in the path.
While the suggestions here might work in the beginning, I am not sure it's the proper way to go. According to the YCM developer, whenever you start a project, you need a new .ycm_extra_conf.py file.
From https://valloric.github.io/YouCompleteMe/#ubuntu-linux-x64-super-quick-installation
YCM looks for a .ycm_extra_conf.py file in the directory of the opened file or in any directory above it in the hierarchy (recursively); when the file is found, it is loaded (only once!) as a Python module. YCM calls a FlagsForFile method in that module which should provide it with the information necessary to compile the current file. You can also provide a path to a global .ycm_extra_conf.py file, which will be used as a fallback. To prevent the execution of malicious code from a file you didn't write YCM will ask you once per .ycm_extra_conf.py if it is safe to load. This can be disabled and you can white-/blacklist files. See the Options section for more details.
While you might only need to modify the compile flags from the vanilla .ycm_extra_conf.py, I feel it is advisable to create a new file for every project you start.
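For a simple project, the per-project .ycm_extra_conf.py can stay very small. The sketch below uses the FlagsForFile hook mentioned in the quote above; the flags and include paths are placeholders to replace with your own:

# Minimal illustrative .ycm_extra_conf.py using the FlagsForFile API.
# Adjust the language standard and -I paths to match your project.
def FlagsForFile(filename, **kwargs):
    return {
        'flags': ['-x', 'c++', '-std=c++11', '-Wall', '-I', '.', '-I', '/usr/include'],
        'do_cache': True
    }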
Everything that the folks here have said is correct. I just want to add that as of 2017, the "install.sh" script is deprecated. Now, you have to use the install.py script instead by typing
./install.py --clang-completer
Also, in your .vimrc file, instead of ".vim/bundle/blahblahblah", you'll need to add a "~/" in front of the address by adding:
let g:ycm_global_ycm_extra_conf = "~/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/.ycm_extra_conf.py"
to your .vimrc file, to give it an absolute path from the Home directory so that Vim can find the ".ycm_extra_conf.py" file. Otherwise, you might experience some funny behavior.
I just wanted to add that if you don't want to manually define a config file, there is this neat little repository that will auto-generate it: https://github.com/rdnetto/YCM-Generator
I compiled my program with the Intel C++ compiler for Windows (from Intel Composer 2011), and got an error message that libmmdd.lib cannot be found. I googled this problem, and some people said that I had to reinstall my compiler, which I did; however, that didn't resolve the problem, so I started looking in the Intel compiler directory and found that this file (and other required libraries as well) is located at
%CompilerDirectory%\compiler\lib\ia32
It doesn't make sense to write the whole absolute path of the libraries in the makefile, so I started searching, and I could only find that %mklroot% points to the math kernel directory. Even with a -L%mklroot%/../compiler/lib/ia32 approach for linking I couldn't link to the libraries correctly, so eventually I made a lame move to solve the problem: I copied every file the linker asked for into the source directory, which solved the problem temporarily.
Since this way of solving the problem isn't the best one, I wonder if there's a way to link to those libraries without having to copy the files. It's strange, because the compiler should find its own libraries on its own, but... I don't know...!
Any ideas? Is there something like %compilerroot% that points to the compiler directory and that I could put in my makefile (or actually my qmake project file, since I'm using Qt)?
Thanks for any efforts :-)
Instead of using %mklroot% try $$(mklroot) or $(mklroot).
You can find the explanation here:
Variables can be used to store the contents of environment variables.
These can be evaluated at the time that qmake is run, or included in
the generated Makefile for evaluation when the project is built.
To obtain the contents of an environment value when qmake is run, use
the $$(...) operator:
DESTDIR = $$(PWD)
message(The project will be installed in $$DESTDIR)
In the above assignment, the value of the PWD environment variable is
read when the project file is processed.
To obtain the contents of an environment value at the time when the
generated Makefile is processed, use the $(...) operator:
DESTDIR = $$(PWD)
message(The project will be installed in $$DESTDIR)
DESTDIR = $(PWD)
message(The project will be installed in the value of PWD)
message(when the Makefile is processed.)
In the above assignment, the value of PWD is read immediately when the
project file is processed, but $(PWD) is assigned to DESTDIR in the
generated Makefile. This makes the build process more flexible as long
as the environment variable is set correctly when the Makefile is
processed.
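Applied to your case, that means reading the environment variable with $$(...) in the .pro file, for example (a sketch; the relative lib path is taken from your question and MKLROOT must be set in the environment when qmake runs):

# evaluated when qmake runs; adds the Intel compiler lib dir to the linker search path
LIBS += -L$$(MKLROOT)/../compiler/lib/ia32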
EDIT:
It is strange that neither $$(mklroot) nor $(mklroot) gave you the result you would expect. I did a simple test to verify what I wrote above:
Opened a Command Prompt
Created a new environment variable 'mklroot' with a test value: set mklroot=C:\intel_libs
Verified the result of the previous step: echo %mklroot%. I got C:\intel_libs
Placed your 3 qmake functions at the end of my .pro file:
warning($(%MKLROOT%))
warning($(MKLROOT))
warning($$(MKLROOT))
Ran qmake: qmake. The result:
Project WARNING:
Project WARNING: c:\intel_libs
Project WARNING: c:\intel_libs
As you can see the 2nd and the 3rd warning() displayed the string I set to the environment variable.
I have been trying to set up an EDE project for C++ (Emacs 24 + built-in CEDET) and I'm starting to get desperate because I can't seem to get the Makefiles generated the way I want. I'm relatively new to Emacs.
I'll try to describe what I'm doing:
I have a toy project set like so:
main.cpp
other/
  Utils.cpp
  Utils.h
  CGrabBuffer.cpp
  CGrabBuffer.h
main.cpp includes both .h's inside the "other/" directory. These are the steps I follow to set up an EDE project with this simple directory setup:
Open main.cpp in emacs and do M-x ede-new ; type: Make ; name: main-proj.
Open one of the files in the "other" directory and do M-x ede-new ; type: Make ; name: aux-proj.
Now it's time to create the targets (which I believe are three in this case):
On the main.cpp buffer: M-x ede-new-target ; name: main ; type: program. When prompted, I add the main.cpp to this target.
I repeat the same for the other two targets (Utils which has Utils.cpp and Utils.h and CGrabBuffer which has CGrabBuffer.cpp and CGrabBuffer.h). Here I find the first problem. What type do these two targets have to be? I only want them to generate .o files.
Once this is done, I type M-x ede-customize-current-target to all three targets and I add some include paths, some libraries, etc.
After this, if I call M-x ede-compile-project it doesn't compile because:
It tries to compile main.cpp first; I have no idea how to specify (using EDE) that both Utils.o and CGrabBuffer.o are needed before attempting to build main.cpp.
If I manually change the order (editing the Makefile), it's not able to link main.cpp because it can't find Utils.o and CGrabBuffer.o.
As you can see, I am in the middle of a great mess. Maybe I'm not even understanding what "target" means in EDE. I have also read about the existence of ede-cpp-root-project which has to be specified inside the .emacs file. I haven't tried it because what I think it does is just help with the semantics. It doesn't generate Makefiles, does it? Can I have (or do I need) an EDE project built with Project.el's and the same thing using ede-cpp-root-project for the semantics? Or is it redundant?
Sorry If I misunderstood a lot of things but I'm very confused and being new to emacs makes things worse. Thanks for your patience!
EDIT: with some tinkering and the responses I received I have been able to figure out a lot of stuff, so thanks a lot. What I still don't understand is the use of the ede-cpp-root-project which has to be specified inside the .emacs file. Is it just for c++ semantics? Is it redundant to have the project with Project.el's AND also the elisp lines in .emacs?
EDE is designed to handle many different kinds of projects, usually of a type where the build system was written outside of Emacs in some other tool.
The EDE project type that creates Makefiles for you can do quite a few things, but you need to have some basic understanding of build systems for it to be helpful, and you really do need to customize the projects to get anything of any complexity working.
I've recently added a section to the EDE manual to help with basic project setups that autogenerate Automake files. You can check out the tutorial here:
http://www.randomsample.de/cedetdocs/ede/ede/Quick-Start.html
The same steps will apply for projects that just use Make instead, but Make based projects often have trouble with shared libraries due to the extra complexity.
Mike's answer is quite good, but I think it is ok to just add .h files to the same target as your .cpp sources. It will keep track of them separately.
Another useful trick is to use the whole project compile keystroke (C-c . C) which uses a capital C whenever you change something big. That will regenerate the Makefiles, rerun any needed Automake features, and start at the top.
EDIT: You only need one EDE project for a given project area. The ede-cpp-root project is useful when no other automatic project type works. That's when you create it in your .emacs file so that the other tools that need a project definition, like Semantic's smart completion and tag lookup, will work.
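If you do add an ede-cpp-root project for those semantic features, the .emacs entry is short. A sketch matching the layout in the question (the project name and anchor path are assumptions; :file just needs to point at any file sitting at the project root):

;; Illustrative only: anchor the project at its root and tell Semantic
;; where the headers in other/ live (paths starting with "/" are
;; relative to the project root).
(ede-cpp-root-project "main-proj"
                      :file "~/path/to/project/main.cpp"
                      :include-path '("/other"))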
Well, I think I actually have it figured out this time, but it's ugly. Utils.cpp and CGrabBuffer.cpp should not get their own individual targets, because there doesn't seem to be an appropriate target type. Instead, you'll need to create an archive or library, which will automatically compile Utils.cpp and CGrabBuffer.cpp for you. Below, I'll assume you want static, but it's easy to change.
[For anyone to whom archives or libraries are not familiar, they basically just gather up .o files into a separate unit. It doesn't actually make the compilation harder. Read more here.]
1) Follow the first two and a half steps above (including making the main target, but not the other targets).
2) Switch to Utils.cpp and do M-x ede-new-target ; name: aux ; type: archive. When prompted, add Utils.cpp to this target.
3) Switch to CGrabBuffer.cpp and do C-c . a ; Target: aux .
4) Regenerate the Makefile with M-x ede-proj-regenerate. At this point, if you run make in the other subdirectory, you should get the archive libaux.a.
5) Switch back to main.cpp and do M-x ede-customize-current-target. This brings up an interactive emacs customization buffer, which allows you to edit details of the ede configuration. Under the Ldflags section, click [INS]. This pops out a new line that says Link Flag: and has some different-colored box for you to type in (mine is grey). Type -Lother -laux, so that other/libaux.a is included when compiling main. Then, at the top of the buffer, press [Accept], which should save that change and switch back to main.cpp.
6) Regenerate the Makefile with M-x ede-proj-regenerate.
Now, unfortunately, the Makefile makes the main target first, then descends into the other directory and makes that. Unfortunately, this means that a make from the top-level directory will not work on a clean tree. I don't know why this is, because it seems like that would never be what you want in any project that is ever made with EDE. I can't find any way to change that, except for this hack:
7) Do M-x customize-project; under Inference-Rules click [INS]. Then enter Target: all ; Dependencies: aux main ; Rules: [INS] ; String #: . (This last one is just to prevent an error on an empty rule with a tab; presumably an EDE bug.) Click [Accept], and regenerate the Makefiles.
So now, in your top directory, you can just run make, and main should be a working executable.
I'm quickly becoming convinced that EDE is not yet ready to be used by people other than its authors. Despite its size and the amount of effort they've clearly put into it, it is too buggy, too counterintuitive, and just not smart enough. That's a shame. Emacs needs something like this.