Normally, to analyse big C projects, I prefer grep/GNU command-line tools, lint, and simple Python scripts. By "analysing" a C project I mean collecting code statistics and understanding the project's structure, its data structures, and its flow of execution: what function calls what, entry points in different modules, static members, threads, etc. But this approach doesn't work nearly as well with object-oriented code.

Whenever I have a big C++ (or Objective-C) project containing a large number of source files and several directories, I would like to see its class diagram, data fields, methods, messages, instances, etc.

I am looking for the most Unix-like solution. Can you help me?
Doxygen is the closest I could find when I was searching last time. It is not the Unix way, but it is available free for Linux/Windows/Mac. It generated decent graphs for me. Hope it helps.

http://www.doxygen.nl/

http://en.wikipedia.org/wiki/Doxygen
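For example, a minimal Doxyfile sketch that switches on the graph output (the option names are real Doxygen settings; the input paths are placeholders for your project, and the graphs require Graphviz "dot"):

    # Doxyfile sketch: class and call graphs for an undocumented code base
    INPUT                = src include   # placeholder paths
    RECURSIVE            = YES
    EXTRACT_ALL          = YES           # document everything, even uncommented code
    HAVE_DOT             = YES           # use Graphviz for the diagrams
    CLASS_GRAPH          = YES           # inheritance diagrams
    COLLABORATION_GRAPH  = YES           # usage relations between classes
    CALL_GRAPH           = YES           # what function calls what
    CALLER_GRAPH         = YES           # who calls a given function

Running doxygen on that produces browsable HTML with per-class diagrams, which covers a good part of the "understand the structure" use case.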
With message passing and dynamic dispatch going on, you are pretty much screwed. It doesn't even depend on the language; message passing is used in the C++ world as well. There is no tool that can analyze the code and tell you what the application flow will look like. In those cases, the whole data/execution flow may depend on configuration files, on how you hook producers and consumers together, and so on, and it can change significantly. If you are lucky, there will be some high-level documentation, maybe with pictures and a description of the overall ideas. Otherwise, the only option is to run the application under a debugger for any given configuration and see what is going on, step by step. Isn't that the true Unix way?
Your request is for a variety of views, some text-based, some structure-based.

You might consider Understand for C++, which does a mixture of these. I don't know if it handles Objective-C.

Our Source Code Search Engine (SCSE) is rather more limited, but provides a much faster way to "grep" than grep itself does. For large code bases this matters. It handles multiple languages and dialects. We don't have an Objective-C dialect, but I think our C or C++ front ends would actually work pretty well for this, since Objective-C uses pretty much the same lexical syntax.
I'm reading other people's source code from open source projects such as Pidgin, FileZilla, and various others, so that I can get an idea of how software is really written.

I noticed that when writing a GUI, they like to split the whole interface into classes.

In lots of projects I see more or less every single bit broken down into separate files, perhaps a total of 70 (35 .cpp and 35 .h).

For example: one list view may be an entire class, a menu bar may be a class, or a tab view a whole module.

And this isn't just the UI part; the network modules are also broken down heavily, with almost every function in its own .cpp file.

My question: is this really just preference, or does it have a particular benefit?

I, for example, would have written the whole UI as a single module.

What is the actual reason?
Some languages encourage one file per type, and people who know those languages also program in C++ and bring that habit with them.

For reuse, you want the things you put into a header to be simple, orthogonal, and conceptually clean; this tends to mean avoiding files that contain everything one particular project needs.

Before git/mercurial it could be a real hassle to have multiple people edit the same file. Separating things into lots of files helps a lot when multiple people are editing, both for the edits themselves and for your version control software.

It also speeds up compilation. The smaller the file you are editing, the less recompilation is needed, so unless the linking stage is slow, small files are a very good thing; a typical split is sketched below.

Many people have been hurt by cramming things into a single file or a small number of files. Few people have been seriously hurt by chopping them up into 50+ files. People tend toward practices that don't overtly teach them hard lessons.
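A minimal sketch of that split (a hypothetical Counter class, invented for illustration): the header carries only the interface other files need to see, so edits to the implementation don't force clients to recompile.

    // counter.h -- interface only; including files depend just on this
    #ifndef COUNTER_H
    #define COUNTER_H

    class Counter {
    public:
        void increment();
        int value() const;
    private:
        int count_ = 0;
    };

    #endif

    // counter.cpp -- implementation; can change without touching counter.h
    #include "counter.h"

    void Counter::increment() { ++count_; }
    int Counter::value() const { return count_; }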
You might want to split the project into separate files to improve readability and, sometimes, to make debugging easier. The FileZilla project could have been written in just two files, something like main.cpp and main.h, but then you would have tens of thousands of lines of code in the same file, which is very bad programming practice even though it is legal.
One benefit comes from testing. Distributing system testing throughout the design hierarchy (rather than testing only a single physical component) can be much more effective and cheaper than testing at only the highest-level interface.
Recently I've read that you can write modules in C/C++ and call them from Python. I know that C/C++ is fast and strongly typed, but what advantages do I get if I write a module and then call it from Python? In what case/scenario/context would it be good to do this?

Thanks in advance.
Performance. That's why NumPy is so fast ("The NumPy array: a structure for efficient numerical computation").

If you need to access a system library that doesn't have a wrapper in Python (for example, Shapely wraps libgeos to do geometrical computations), or if you're writing such a wrapper around a system library yourself.

If you have a performance bottleneck in a function that needs to be made a lot faster (and can benefit from using C). Like Charlie said, profiling is essential to find out whether you want to do this or not (a minimal extension sketch follows below).
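To make the extension-module route concrete, here is a minimal sketch of a CPython extension written in C++; the module name fastmath and the function sum_range are made up for illustration, while the Python C API calls are the standard ones:

    // fastmath.cpp -- hypothetical minimal CPython extension in C++
    #include <Python.h>

    // Sum the integers 0..n-1 in a tight compiled loop.
    static PyObject* sum_range(PyObject* /*self*/, PyObject* args) {
        long n = 0;
        if (!PyArg_ParseTuple(args, "l", &n))  // parse one Python int
            return nullptr;
        long long total = 0;
        for (long i = 0; i < n; ++i)
            total += i;
        return PyLong_FromLongLong(total);    // back to a Python int
    }

    static PyMethodDef FastMathMethods[] = {
        {"sum_range", sum_range, METH_VARARGS, "Sum the integers 0..n-1."},
        {nullptr, nullptr, 0, nullptr}        // sentinel
    };

    static struct PyModuleDef fastmathmodule = {
        PyModuleDef_HEAD_INIT, "fastmath", nullptr, -1, FastMathMethods
    };

    PyMODINIT_FUNC PyInit_fastmath(void) {
        return PyModule_Create(&fastmathmodule);
    }

Compiled against the Python headers (e.g. via a setuptools Extension) into fastmath.so, Python code can then simply import fastmath and call fastmath.sum_range(1000000).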
Profile your application. If it really is spending its time in a couple of places that you can recode in C, consider doing so. Don't do this unless profiling tells you that you really need to.
Another reason is that there might be a C/C++ library with functionality not available in Python. You might write a Python extension in C/C++ so that you can access and use that C/C++ library.
The primary advantage I see is speed. That's the price paid for the generality and flexibility of a dynamic language like Python. The execution model of the language doesn't match the execution model of the processor, by a wide margin, so there must be a translation layer someplace at runtime.
There are significant sections of work in many applications that can be encapsulated as strongly-typed functions. FFTs, convolutions, matrix operations and other types of array processing are good examples where a tightly-coded compiled loop can outperform a pure Python solution more than enough to pay for the runtime data copying and "environment switching" overhead.
There is also the case for "C as a portable assembler" to gain access to hardware functions for accelerating these functions. The Python library may have a high-level interface that depends on driver code that's not available widely enough to be part of the Python execution model. Multimedia hardware for video and audio processing, and array processing instructions on the CPU or GPU are examples.
The costs for a custom hybrid solution are in development and maintenance. There is added complexity in design, coding, building, debugging and deployment of the application. You also need expertise on-staff for two different programming languages.
I am looking for an IDL-like (or whatever) translator which turns a DOM- or JSON-like document definition into classes which:

- are accessible from both C++ and Python, within the same application
- expose document properties as ints, floats, strings, binary blobs, and compounds: array, string dict (both nestable); basically the JSON type feature set
- allow changes to be tracked, to refresh the views of an editing UI
- provide a change history to enable undo/redo operations (a sketch follows below)
- can be serialized to and from JSON (possibly also some kind of binary format)
- allow large data chunks to be kept on disk, with parts loaded only on demand
- provide non-blocking, thread-safe read/write access to exchange data with realtime threads
- allow multiple editors in different processes (or even on different machines) to view and modify the document
The thing that comes closest so far is the Blender 2.5 DNA/RNA system, but it's not available as a separate library and is badly documented.

Most of all, I'm trying to make sure that such a library does not already exist, so I know my time is not wasted when I start to design and write such a thing. It's supposed to provide a solid foundation for writing editing UI components.
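To make the undo/redo requirement concrete, this is roughly the shape I have in mind; a minimal sketch with all names invented, where each edit carries an apply and a revert closure:

    #include <functional>
    #include <stack>

    struct Edit {
        std::function<void()> apply;
        std::function<void()> revert;
    };

    class History {
    public:
        void perform(Edit e) {
            e.apply();
            undo_.push(std::move(e));
            redo_ = {};               // a new edit invalidates the redo stack
        }
        void undo() {
            if (undo_.empty()) return;
            Edit e = std::move(undo_.top());
            undo_.pop();
            e.revert();
            redo_.push(std::move(e));
        }
        void redo() {
            if (redo_.empty()) return;
            Edit e = std::move(redo_.top());
            redo_.pop();
            e.apply();
            undo_.push(std::move(e));
        }
    private:
        std::stack<Edit> undo_, redo_;
    };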
ICE is the closest product I can think of. I don't know if you can do serialization to disk with ICE, but I can't think of a reason why you couldn't. The problem is that it costs $$$. I haven't personally negotiated a license with them, but ICE is the biggest player I know of in this domain.

Then you have Pyro for Python, which is Distributed Objects only.

Distributed Objects in Objective-C (not available for iPhone/iPad development, which sucks IMHO).

There are some C++ distributed-object libraries, but they're mostly dead and unusable (CORBA comes to mind).

I can tell you that there would be a lot of demand for this type of technology. I've been delving into serialization and remote-object work myself, since off-the-shelf solutions can be very expensive.
As for open-source frameworks to help you develop in-house, I recommend boost::asio's strands for async thread-safe read/write (see the sketch below) and boost::serialization for serialization. I'm not terribly well-read in JSON tech, but this looks like an interesting read.

I wish something freely available already existed for this networking/serialization glue that so many projects could benefit from.
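A minimal sketch of the strand idea, using the modern spelling (boost::asio::make_strand, Boost 1.66+; older Boost spells this io_service::strand): handlers posted to the same strand never run concurrently, so the shared state needs no lock.

    #include <boost/asio.hpp>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        boost::asio::io_context io;
        auto strand = boost::asio::make_strand(io);

        int counter = 0;  // shared state, only ever touched on the strand

        for (int i = 0; i < 100; ++i)
            boost::asio::post(strand, [&counter] { ++counter; });  // serialized

        // Run the io_context on several threads; the strand still guarantees
        // the posted handlers execute one at a time.
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.emplace_back([&io] { io.run(); });
        for (auto& t : pool) t.join();

        std::cout << counter << "\n";  // prints 100
    }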
SWIG doesn't meet all your requirements, but it does make interfacing C++ <-> Python a lot easier.
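For instance, a minimal sketch of a SWIG interface file (example.h and add are hypothetical names); running "swig -c++ -python example.i" generates the Python wrapper code:

    // example.i -- hypothetical minimal SWIG interface file
    %module example
    %{
    #include "example.h"      // declaration of add()
    %}
    int add(int a, int b);    // callable from Python as example.add(1, 2)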
I am learning C++ as a first language. I feel like I am about to hit a ceiling in my learning (I am not learning through a class) if I don't start looking at actual code soon. Here are my two main questions:

1) Where can I find source code?

2) What is a good litmus test of code's quality? (I've obviously never developed in a work environment.)

I hope this is relevant to SO, but I can see the need to close this. Thanks for the help.
Related:
Examples of "modern C++" in action?
I would recommend Boost. Using Boost will simplify your program design, and reading Boost source code can show you how to use C++ to solve some challenging problems in a concise way.

This add-on library is itself written in C++, in a peer-reviewed fashion, and has a high standard of quality.
I think your two best bets for finding C++ code are the popular open source repositories:

CodePlex: http://codeplex.com

Google Code: http://code.google.com

SourceForge: http://sourceforge.net/

These all have high-quality C++ projects you can look at. I don't think there's a great metric for judging quality on a large scale; I would start with the more popular projects, which are more likely to have quality code.
The List:
SourceForge: http://sourceforge.net/
Boost: http://www.boost.org/
CodePlex: http://www.codeplex.com/
Google Code: http://code.google.com/
Google Code University: http://code.google.com/edu/
koders.com: http://www.koders.com/
The net is chock-full of open-source C++ code. Why not pick a few such projects and, even better, start helping out with them? There's no better way of learning than by doing!
I would recommend getting a good book, which will be packed full of source code examples!
C++ in a Nutshell
You can also search open source code at www.koders.com
I think you've got some good answers already. I would like to add this suggestion for picking a project from one of the open source repositories: pick a widely used but preferably smaller project that has been around for a while and targets a domain you are specifically interested in. That way you will get a better idea of production-ready code and learn something about that domain.
I found the source code and documentation of POCO to be quite readable. Unlike some other open source projects that focus on one specific problem, say GUI or logging, this library focuses on developing a complete application, and thus covers quite a broad area (file system, text processing, networking, logging, etc.).

It also uses modern C++ idioms, so by reading the implementation you can pick up modern C++ skills as well.
I would recommend OpenSG.

It is an interesting codebase: it uses concurrency modeling and networking, includes links to scientific papers, is well documented, uses real C++ rather than "C with objects", employs almost all the sub-paradigms without overusing them, is easily accessible AND, who would have guessed, I am a fan of it ;)

OpenSG - Home
C++ is a great language, but it's heavy going as a first language. Try Python.
1) Where can I find source code?

Reading code is harder than writing it. This is especially true of large, complex languages like C++. Without already knowing the intricacies of the language, you don't stand much chance of extracting knowledge from the complex code others write in production. You're going to have to learn the smallest parts first, on your own, by writing code. As you learn C++, you will also learn programming.
2) What is a good litmus test of code's quality?

There isn't one, and it isn't going to be an easy thing to learn, either; it comes from experience. But really, the way you tell good code from bad is that, after you've had some time to familiarize yourself with the layout of a project, you can understand what any given piece does once you look at it. Readable code has quality, whereas confusing code falls short.
Looking at other people's code is a hard way to learn the basics. Find a tutorial on the net and get your feet wet that way. I'm sure there are many fine printed books on the subject as well.

As you go, when you get stuck, confused, or lost, post questions here.
Code Project is the best place for source code.
I'm working on a fairly big open source RTS game engine (Spring). I recently added a bunch of new C++ functions callable from Lua, and am wondering how best to document them, and at the same time how to stimulate people to write and update documentation for the many existing Lua call-outs.

So I figured it might be nice if I could write the documentation initially as Doxygen comments near the C++ functions; this is easy, because the function body obviously defines exactly what the function does. However, I would like the documentation to be improved by game developers using the engine, who generally have little understanding of git (the VCS we use) or C++.
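To illustrate, here is a sketch of what I have in mind; the function name and its Lua binding are made-up examples, while the comment block is ordinary Doxygen syntax next to the C++ call-out:

    /**
     * @brief Returns the current health of a unit.
     *
     * Exposed to Lua as Spring.GetUnitHealth(unitID).
     *
     * @param unitID engine-wide unit identifier
     * @return current health of the unit, or nil if it does not exist
     */
    static int GetUnitHealth(lua_State* L) {
        // ... implementation of the call-out ...
    }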
Hence, it would be ideal if there were a way to automatically generate API docs from the C++ files, but also to have a wiki-like web interface allowing a much wider audience to update the comments, add examples, etc.

So I'm wondering: does there exist a web tool which integrates Doxygen-style formatting, wiki-like editing for those comments (preferably without allowing edits to any other parts of the source file), and git (to commit the comments changed through the web interface to a special branch)?

We developers could then merge this branch every now and then to fold the improvements into the master branch, and at the same time any improvements to the documentation made by developers would end up in the web tool with just a merge of the master branch into this special branch.

I haven't found anything yet, and I doubt something this specific exists, so any suggestions are welcome!
This is a very cool idea indeed; a couple of years ago I also had a very strong need for something like this. Unfortunately, at least back then, I wasn't able to find anything. A quick search on SourceForge and Freshmeat today also doesn't bring up anything related.

But I agree that such a wiki front end for user-contributed documentation would be very useful. I know for a fact that something like this was recently being discussed within the Lua community as well (see this).

So, maybe we can determine the requirements in order to come up with a basic working draft/prototype?

Hopefully, this would get us going to initiate such a project with a minimum set of features, and then simply release it into the wild as an open source project (e.g. on SourceForge), so that other users can contribute to it.

Ideally, one could use unified patches to apply changes contributed in this fashion. Also, it would probably make sense to restrict modifications to adding and editing comments, instead of allowing arbitrary modifications of the text; this could probably be implemented using a simple regex.

Maybe one could implement something like this by modifying an existing (established) wiki engine such as MediaWiki, or preferably something that already uses git as a storage backend. Then one would mainly need to cater for those Doxygen-style comments and provide a simple interface on top of it.

Thinking about it some more, Doxygen itself already provides support for generating HTML documentation, so from that perspective it might actually be interesting to see how Doxygen could be extended so that it integrates well with such a scripted backend that allows easy customization of embedded source code documentation.

This would probably mainly boil down to shipping a standalone script with Doxygen (e.g. in Python, PHP, or Perl) and then optionally embedding forms in the automatically created HTML documentation, so that documentation fixes and augmentations can be sent to the corresponding script via a browser, which in turn would write the modifications back to a corresponding branch.

In the long term, it would be cool if such a script supported different types of backends (CVS, SVN, or git), or at least were implemented generically enough to be easily extensible.

So, if we can come up with a good design, it might even be possible that such a modification would be accepted as a contribution to Doxygen itself, which would give the whole thing much more exposure and momentum.

Even if the idea doesn't directly materialize into a real project, it would be interesting to see how many other users like the idea, so that it could be mentioned in the Doxygen issue tracker (https://github.com/doxygen/doxygen/issues/new).
EDIT: You may also want to check out this article titled "Documentation, Git and MediaWiki".