I want to run cppcheck only on a specific type of file, not on all files.
For example, I want to run cppcheck recursively on all files ending with "Operation.cpp" (basically *Operation.cpp). I can't find an option in cppcheck that does this. Could anyone help?
Also, can I grep to check whether a particular function is present in each of those CPP files, and raise an error if it is not?
On Linux you can use
find . -name '*Operation.cpp' | xargs cppcheck
This is an update; the previous example used ls, but as more experienced people have pointed out, find is the preferable solution for automation purposes. See: Why you shouldn't parse the output of ls(1)
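As a rough sketch covering both parts of the question (the file pattern and the required-function check), something like the following could work, assuming GNU findutils and grep; the name required_function is a hypothetical placeholder for whatever you need to check for:
#!/bin/sh
# run cppcheck only on files whose names end in Operation.cpp
find . -name '*Operation.cpp' -print0 | xargs -0 cppcheck

# fail if any of those files lacks the required function name;
# grep -L prints the names of files that contain NO match
missing=$(find . -name '*Operation.cpp' -exec grep -L 'required_function' {} +)
if [ -n "$missing" ]; then
    echo "error: required_function not found in:" >&2
    echo "$missing" >&2
    exit 1
fi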
All the cscope tutorials I found online talk about how to use cscope's interactive mode to search for symbols from within editors such as vim and emacs. But I think it should be possible to issue a command in the terminal to do something like
cscope -d -some_options <my symbol>
And I should be able to see a list of results on stdout, instead of having to enter the ncurses UI and do everything there. I think this is possible because the "only" frontend, cbrowser, can do things like that in its Tcl/Tk UI. But the code unfortunately is quite beyond me.
However, I found no documentation about this capability.
Am I dreaming or is there an undocumented way of doing this?
Thanks!
UPDATE
Some progress: if I make a small project of a few files with a sub-directory structure, rici's answer works out of the box. With a bigger project (thousands of files with a complex folder structure), even with cscope.out and cscope.files present at the root of the project folder (also my current working directory), I got nothing from the same command and the same symbol. I suspect there is a scalability issue with the command. I also tried the command
cat cscope.files | xargs cscope -d -L1 <symbol> -i
to no avail.
UPDATE
Extremely bizarre! I tried some other symbols. It turned out that the particular symbol I was searching for cannot be found using the command line, but all the other symbols I tried worked. And cbrowser has no problem finding that "failed" symbol. Anyway, I was just unlucky. I'll ask a separate question about this command-line anomaly.
I marked rici's answer as correct.
You are probably looking for this:
cscope -L1<symbol>
You could use -d as well, although if you're modifying the files, it's good for cscope to update its database.
-L means "execute a single line-oriented command", and the following digit (1 in this case), which could also have been written as a separate option, selects the specific command, which the manpage confusingly calls a "field". The "fields" are the ones shown at the interactive cscope prompt; I added the digit for convenience. "this" refers to the text which follows the digit; remember that it's a pattern, so you don't necessarily have to type the full symbol. (A sample invocation is sketched after the list below.)
0 Find this C symbol:
1 Find this function definition:
2 Find functions called by this function:
3 Find functions calling this function:
4 Find this text string:
5 Change this text string:
6 Find this egrep pattern:
7 Find this file:
8 Find files #including this file:
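For instance, a minimal sketch (the symbol name and path are hypothetical), searching for a function definition straight from the shell; each result comes back roughly as a space-separated "file scope line text" record:
cscope -d -L1 my_function
# possible output (file, function, line number, source text):
# src/util.c my_function 128 int my_function(const char *name)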
You can call cscope with the -R option for recursive searching. For example:
cscope -d -f/path/to/cscope.out -R -L1 some_symbol
(searches for the definition of some_symbol)
cscope -d -f/path/to/cscope.out -R -L3 some_symbol
(shows all locations where some_symbol is called)
You can omit the -f option if cscope.out is located in the current working directory.
Note that the above calls yield zero results for an indexed symbol if -R is omitted. Very old cscope versions don't support -R; version 15.8a, for example, does support it.
The list of possible values for -L is:
0: Find this C symbol
1: Find this definition
2: Find functions called by this function
3: Find functions calling this function
4: Find this text string
6: Find this egrep pattern
7: Find this file
8: Find files #including this file
9: Find places where this symbol is assigned a value
The -R option can also be used when creating the cscope.out file, e.g.:
cscope -bR
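If you would rather control exactly which files get indexed (for example, only C/C++ sources), a common pattern, sketched here with generic file suffixes, is to build cscope.files with find first and then build the database from it:
# collect the sources to index
find . -name '*.c' -o -name '*.h' -o -name '*.cpp' -o -name '*.hpp' > cscope.files
# build the cross-reference database (-b: build only, -q: also build a faster inverted index)
cscope -b -q
# then query it in line-oriented mode as above
cscope -d -L1 some_symbol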
Here's a problem I've had recently that just HAS to be a common pain to others here.
I'm working with someone else's legacy C code and need to find where a function or macro was defined. The code #includes a bunch of different standard system libraries in addition to those from the specific project.
Is there a tool or technique to quickly find where a specific function, macro, (or other global for that matter) was defined?
I tried:
grep -R 'function' /usr/lib
and other similar *nix/bash-fu with only limited success and lots of annoying chaff to cull. One of you sage coders out there must have a good solution to this seemingly common scenario.
I was very surprised not to find another question on this particular pain here or in my searches of the interwebs. (I'm sure there will be angry comments if I missed one... ;-))
Thanks in advance for any tips!
Use etags/ctags from the Exuberant Ctags project in conjunction with an editor (emacs, vim) that understands them, or use GNU GLOBAL.
Oh, and if you happen to use automake, it generates a TAGS target, so there's no need for complicated manual calls to {c,e}tags.
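A quick sketch of generating the tag databases by hand, run from the root of the source tree (the tools must be installed, and exact flags can vary between ctags implementations):
# vim-style tags file, recursing through the tree
ctags -R .
# emacs-style TAGS file (Exuberant Ctags)
ctags -e -R .
# GNU GLOBAL: builds GTAGS/GRTAGS/GPATH for use with global(1) or editor plugins
gtags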
Use ctags/cscope + vim/emacs.
You can search the web for the details of how to use them.
If you use ctags + vim, you can:
1. Go to the /usr/include directory and execute ctags -f tags1 -R . to generate the tags.
2. Generate tags for your own code in your code directory: ctags -f tags2 -R .
3. Run :set tags+=/usr/include/tags1,tags2 in vim ('tags' is the option that lists tag files).
4. With the cursor on a function or macro, press CTRL-].
Here is what you can do, assuming you use gcc (if not, modify accordingly). This preprocesses the file, pulls the names of every header it includes out of the preprocessor's line markers, and greps each of those headers for the name you are after:
gcc -E myfile.c | grep '^#' | cut -f 3 -d ' ' | sort | uniq | xargs -n 1 grep -l "MYMACROORFUNCTIONNAME"
You can use Eclipse CDT. For example, here is a description of how to set up a CDT project to navigate the Linux kernel source: HowTo use the CDT to navigate Linux kernel source.
vim + ctags is the way to go. You can jump to the definitions of functions, global variables, macros, etc.
FYI, browsing programs with tags
Also, if you want to quickly switch between .c and .h files, please refer to this blog
You can use cscope, or emacs/vim + xcscope.el, to do that easily. I think it's better than ctags and etags.
Provided the correct headers that directly or indirectly define what you are looking for are included, most IDEs have jump-to-definition functionality that works.
The tags-based approaches are of course nice because they don't depend on correctly included headers.
I have a C shell script that calls two C programs, one after the other, with some file handling before, in between, and afterwards.
Now, as such I have three different files - one C shell script and 2 .c files.
I need to give this script to other users. The problem is that I have to distribute three files - which the users must keep in the same folder and then execute the script.
Is there some better way to do this?
[I know I can make one C code file out of those two... but I will still be left with a shell script and a C code. Actually, the two C codes do entirely different things... so I want them to be separate]
Sounds like you're worried that your users aren't savvy enough to figure out how to resolve issues like "command not found" errors and the like. If you absolutely MUST hide the "complexity" of a collection of files, you could have your script create the other files. In most other circumstances I would suggest that this approach is only going to increase your support workload, since semi-experienced users are less likely to know how to troubleshoot the process.
If you choose to rely on the presence of a compiler on the system you are running on, you can store the C code as a collection of cat $STRING >> file.c commands to create your two C files, which you then compile and use.
If you would rather use pre-compiled programs instead, the same basic process can be used, except you use xxd both to generate the strings in your script and to reverse the conversion and give you working binaries. Note: remember to chmod the binary so that it is executable.
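A minimal sketch of that xxd approach (myprog is a hypothetical pre-built binary; the hex dump would be pasted into the distributed script):
# done once by you, to produce the text to embed in the script:
xxd -p myprog > myprog.hex

# inside the distributed script: recreate the binary from the embedded hex dump
xxd -r -p > myprog <<'EOF'
...contents of myprog.hex pasted here...
EOF
chmod +x myprog
./myprog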
Use the shar command to create a self-extracting archive.
Or, better yet, use unzipsfx with the AUTORUN option.
This provides users with ONE file, and only ONE command to execute (as opposed to one for untarring and one for execution).
NOTE: The unzip command to run should use "-n" option, that way only the first run would extract the files and the subsequent would skip the extraction.
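A rough sketch of the shar route, assuming GNU sharutils is installed (the file names are hypothetical):
# bundle the script and the two C sources into a single self-extracting archive
shar myscript.csh prog1.c prog2.c > bundle.sh
# the user only needs to run:
sh bundle.sh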
Use a zip or tar file? And you do realize that .c files aren't executable; you need to compile and link them first?
You can include the C code inside the shell script as a here document (quoting the EOF delimiter so the shell doesn't expand $ or backquotes inside the C code):
#!/bin/bash
cat > code.c << 'EOF'
line #1
line #2
...
EOF
# compile (gcc assumed; adjust to taste)
gcc -o code code.c
# execute
./code
If you want to get fancy, you can test for the existence of the executable and skip compiling it if it already exists.
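That check might look something like this (a sketch; the file names follow the example above):
# skip the compile step if the executable is already there
if [ ! -x ./code ]; then
    gcc -o code code.c
fi
./code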
If you are doing much shell programming, the rest of the Advanced Bash-Scripting Guide is worth looking at as well.
I'm working on a large C++ library that has grown by a significant amount recently. Due to its size, it is not obvious what has caused this increase.
Do you have any suggestions for tools (MSVC or GCC) that could help determine where the growth has come from?
edit
Things I've tried: running dumpbin on the final DLL and on the obj files, and creating a map file and ripping through it.
edit again
So objdump along with a Python script seems to have done what I wanted.
If gcc, use objdump. If Visual Studio, use dumpbin.
I'd suggest doing a diff of the output of the tool for the old (small) library, vs. the new (large) library.
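Something along these lines (library names and paths are hypothetical) would show which sections grew between the two builds:
# dump the section headers (name, size, address) of each build
objdump -h old/libfoo.so > old_sections.txt
objdump -h new/libfoo.so > new_sections.txt
# large size differences point at the sections (and hence the kind of data) that grew
diff -u old_sections.txt new_sections.txt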
keysersoze's answer (compare the output of objdump or dumpbin) is correct. Another approach is to tell the linker to produce a map file, and compare the map files for the old and new versions of the DLL.
MSVC: link.exe /MAP
GCC and binutils: ld -M (or gcc -Wl,-M)
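For example, a sketch with hypothetical output names; writing the map to a file makes it easy to diff the old and new versions:
# GCC/binutils: write the link map to a file rather than stdout
gcc -shared -o libfoo_new.so *.o -Wl,-Map=libfoo_new.map
# MSVC: /MAP (optionally /MAP:filename.map) produces a map file for the output
# link.exe /DLL /MAP:libfoo_new.map *.obj
# then compare the two maps
diff -u libfoo_old.map libfoo_new.map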
On Linux it should be quite easy to see whether new files have been added, with a recursive diff. They would certainly contribute to the increase in library size. You can then use the size command-line tool on Linux to get the size of each of the new object files and sum them up. Then compare that sum to your library increase and check how much it differs.
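A quick sketch of that summing step (the object file path is hypothetical; size prints text/data/bss plus a decimal total per file):
# sum the "dec" (total) column for the new object files, skipping the header row
size new_objs/*.o | awk 'NR > 1 { total += $4 } END { print total, "bytes" }'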
G'day,
If you have any previous versions of the object file lying around, can you run the size command to see which segment has grown?
Couple of questions:
Are you on a *nix platform or a Windows platform?
Which compiler are you using?
Was the compiler recently changed?
Was the -g flag added recently? (obvious question 1)
Was the object previously stripped? (obvious question 2)
Was the object dynamically linked previously? (obvious question 3)
Edit: If the code is under SCM, can you check out the version of the source that gave you the smaller object and then compare:
the size of the source trees, by doing a du -sk on the old source tree and then on the new source tree, without having built anything.
the number of files, by doing something like find ./tree_top \( -name '*.h' -o -name '*.cpp' \) | wc -l
the location of an increased number of files, by doing find ./tree_top \( -name '*.h' -o -name '*.cpp' \) -print | sort > treelist and then doing the same for the new, larger tree. A simple sdiff of the two lists will show any large number of new files.
the size of the code base; even a simple count of trailing semicolons will give you a good basic mechanism for comparison between the two.
the Makefiles or build env. for the project to see if different options or settings have crept in to the build itself.
HTH
BTW Please post your findings here as I'm sure many people are interested in what you find out.
cheers,
I have an inherited project that uses a build script (not make) to build and link the project with various libraries.
When it performs a build, I would like to parse the build output to determine which static libraries are actually being linked into the final executable and where they are coming from.
The script is compiling and linking with GNU tools.
You might try using the nm tool. Given the right options, it will look at a binary (archive or linked image) and tell you what objects were linked into it.
Actually, here's a one-liner I use at work:
#!/bin/sh
nm -Ag "$@" | sed 's/^.*\/\(.*\.a\):/\1/' | sort -k 3 | grep -v ' U '
to find the culprits for undefined symbols. Just chop off the last grep expression and it should pretty much give you what you want.
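For example (archive path and symbol name are hypothetical), to see which archive member actually defines a given symbol:
# -A prefixes every line with the archive/member name; ' T ' marks a defined text (code) symbol
nm -A /path/to/libfoo.a | grep ' T my_symbol'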
Static libraries make life more difficult in this regard. In the case of dynamic libraries you could just have used ldd on the resulting executable and been done with it. The best bet would be some kind of configuration file. Alternatively, you could look for -l and -L arguments to gcc/ld; those are used to specify libraries and library search paths. You could write a script to extract them from the output, though I suspect you will end up doing it manually, because by the time you know what the script should look for, you probably already know the answer.
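A rough sketch of such an extraction (build.log is a hypothetical capture of the build output):
# pull every -l<name> and -L<path> argument out of the captured build output
grep -oE '(^| )-(l|L)[^ ]+' build.log | tr -d ' ' | sort -u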
It is probably possible to do something useful using e.g. Perl, but you would have to provide more details. On the other hand, it could be easier to simply analyze the script...