Using sphinx-apidoc to generate documentation from C++ code

There have been a couple of threads on this topic in the past claiming that Sphinx doesn't support this at all. I had my doubts, but either it has been updated since, or the documentation was quite well hidden, because here is a link on the website stating otherwise:
http://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#cpp-domain
Anyway, I'm new to Sphinx but am trying to use it to (eventually) automate documentation generation from some C++ source code. So far I haven't gotten anywhere with the sphinx-apidoc -o ... command: an almost blank document is created. I'm probably not using the right directives, since I don't know how - the supporting documentation hasn't been able to help me.
Can anyone provide some assistance with the basic steps needed to get this working? If it is not possible to auto-generate documentation from C++, what are the C++ domains for, and how do I use them?

On auto-generating C++ documentation:
After reading up on how to use Sphinx at all, you should have a look at breathe:
Breathe provides a bridge between the Sphinx and Doxygen documentation
systems.
It is an easy way to include Doxygen information in a set of
documentation generated by Sphinx. The aim is to produce autodoc-like
support for people who enjoy using Sphinx but work with languages
other than Python. The system relies on Doxygen's XML output.
So additionally, you'll need to follow the Doxygen commenting style and even set up a Doxygen project. But I tried that, and it works really well once the initial setup is in place. Here is an excerpt of our CMakeLists.txt, which might give you an idea of how Sphinx and Doxygen work together:
macro(add_sphinx_target TARGET_NAME BUILDER COMMENT_STR)
    add_custom_target(${TARGET_NAME}
        COMMAND sphinx-build -b ${BUILDER} . sphinx/build/${BUILDER}
        WORKING_DIRECTORY docs
        DEPENDS doxygen
        COMMENT ${COMMENT_STR}
    )
endmacro(add_sphinx_target)

add_custom_target(doxygen
    COMMAND doxygen docs/doxygen.conf
    COMMENT "Build doxygen xml files used by sphinx/breathe."
)

add_sphinx_target(docs-html
    html
    "Build html documentation"
)
So after the initial setup, it essentially boils down to:
build the Doxygen documentation with doxygen path/to/config
cd into the directory where the Sphinx configuration is
build the Sphinx documentation with sphinx-build . path/to/output
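To make the workflow concrete, here is a made-up sketch of the two halves breathe connects (the function name, file names, and project name are illustrative, not from the original post). In the C++ source, a Doxygen-style comment:

/**
 * @brief Compute the answer to a question.
 * @param question The input to consider.
 * @return The computed answer.
 */
int compute_answer(const std::string& question);

And in one of your Sphinx .rst files, a breathe directive that pulls the documentation in (:project: refers to an entry in the breathe_projects mapping in Sphinx's conf.py):

.. doxygenfunction:: compute_answer
   :project: myproject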
On the C++ domain:
Sphinx is a "little bit" more than a system to auto-generate documentation. I would suggest you have a look at the examples (and consider that the Sphinx website itself is written in Sphinx reST code); in particular, click the Show Source link on many Sphinx-generated pages.
So if you cannot generate documentation automatically for a project, you have to write it yourself. Basically, Sphinx is a reST-to-whatever (LaTeX, HTML, …) compiler. So you can write arbitrary text, but the advantage is that it has a lot of commands for documenting source code in different languages. Each language gets its own domain (a prefix, or namespace) so that the constructs of different languages don't collide. For example, I can document a Python function using:
.. py:function:: Timer.repeat([repeat=3[, number=1000000]])

   Does something nasty with timers in repetition
I can do the same using the cpp domain:
.. cpp:function:: bool namespaced::theclass::method(int arg1, std::string arg2)

   Describes a method with parameters and types.
So if you want to document your C++ project with Sphinx but without doxygen+breathe, you'll have to write the reStructuredText files yourself. This also means that you keep the documentation separate from your source code, which can be undesirable.
I hope that clears things up a bit. For further reading, I strongly suggest a good read of the Sphinx tutorial and documentation until you understand what it actually does.

Related

Reading Django documentation with restview

I am using Fedora 18 on VirtualBox on my Windows XP desktop to learn Django. After going through the .txt documentation files, I discovered these files were written in reStructuredText. I've spent the last day or so trying to figure out how to convert the files into something readable (HTML, LaTeX, PDF, etc.). The first thing I did was install docutils (from source - download page) and use rst2html.py to convert the files to HTML.
When I used this tool, I got the errors Unknown interpreted text role "doc", Unknown interpreted text role "ref", Unknown interpreted text role "term", and more when opening the docs/intro/index.txt, docs/intro/install.txt and docs/intro/tutorial01.txt files. I was able to find very little on Google describing the exact problem I was having, so I tried a different option.
Naively thinking the errors were native to docutils, I decided to search for another tool, found this page, and installed restview. Well, I didn't realize restview uses docutils, so I ended up back at square one.
How do I get rid of these and other errors? Did I install docutils and restview correctly?
Please tell me if I need to add more info.
You need to use Sphinx. This tool is used by the Django project, and it defines additional reStructuredText constructs to complement those defined by docutils, such as:
http://sphinx-doc.org/markup/inline.html#role-doc
http://sphinx-doc.org/markup/inline.html#role-ref
http://sphinx-doc.org/markup/inline.html#role-term
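For the Django docs specifically, the short path is a sketch like this (assuming a Django source checkout, whose docs/ directory ships with a Sphinx Makefile):

pip install Sphinx
cd path/to/django/docs
make html
# the rendered pages end up in _build/html/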

Using ocamldoc with packs

I have an ocamlbuild project which includes some files in a subdirectory with an .mlpack file listing them.
e.g. I have a file support/logging.ml which defines the module Support.Logging. The _tags file says "support": for-pack(Support).
This all builds and runs fine. But how can I generate docs for this using ocamldoc?
The most recent post I found was ocamldoc generation and packed files from 2011, which suggests using ocp-pack to generate one large .ml file and passing that to ocamldoc. However, that doesn't take the build order into account, so the generated module doesn't work due to forward references.
What's the best way to handle this?
The problem is described in the following bug report. Handling -pack inside ocamldoc requires an implementation effort that the maintainer is not motivated to make, and so far nobody has stepped up to contribute a patch for this feature.
In the meantime, you can easily copy your foo.mlpack file to a foo.odocl and generate the documentation of the separate submodules. That's only an imperfect workaround, as the doc will talk about X rather than Foo.X, but it's a least-effort solution.
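In practice the workaround is just something like this (a sketch, assuming the project builds with ocamlbuild and the pack is called support):

cp support.mlpack support.odocl
ocamlbuild support.docdir/index.html

ocamlbuild picks up the .odocl file and runs ocamldoc over each listed submodule.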
Here's the solution I'm now using in my Makefile. It does work, and cross-references into the Support module work:
doc:
	ocp-pack -o support.ml.tmp support/logging.ml support/common.ml support/utils.ml support/basedir.ml support/qdom.ml support/system.ml
	echo '(** General support code; not 0install-specific *)' > support.ml
	cat support.ml.tmp >> support.ml
	rm support.ml.tmp
	$(OCAMLBUILD) 0install.docdir/index.html
	rm support.ml
It's hacky because:
You have to list the support.ml files in build order, by hand.
The Makefile adds the doc comment for Support (otherwise, it takes the description of the first sub-module, which you don't want).

Emacs as an IDE for large C++ projects

I am an Emacs newbie. I am trying to find out how to use Emacs with large C++ projects, particularly to index code, auto-complete function names, and behave in an Eclipse-like way. I had been using Vim for some time, where I used ctags to index the code in my project, and Vim would offer to auto-complete my code using a drop-down menu of options. I am trying to achieve the same with Emacs now. But during my search, results pointed to CEDET, auto-complete, and other 3rd-party plugins.
I tried to use ctags with ctags -e -R . and etags, but with no success.
Am I missing a default way in Emacs to achieve the same behavior? What is the best and easiest way to achieve what I want?
I use CEDET with auto-complete successfully. Basically, auto-complete is the drop-down box provider, and it takes its sources from various things, most interestingly from CEDET (but also from etags and GNU Global, which I recommend too).
A good starting point for CEDET is http://alexott.net/en/writings/emacs-devenv/EmacsCedet.html
Alex Ott's emacs config is there: https://github.com/alexott/emacs-configs -- it's a useful resource.
Note that you'll need to grab CEDET from bzr and install/configure auto-complete correctly. I strongly recommend el-get for installing auto-complete (and some other stuff too). You'll need to set up generic projects for EDE to get autocompletion working for random C/C++ files that are not part of a structured EDE project.
You'll have to spend some time configuring emacs, but it pays off. The tool is amazingly productive once set up correctly.
Indexing
You might want to use GNU Global instead of ctags: it supports C++ and is, in my opinion, more efficient with large projects (especially since you can update the index instead of rebuilding it from scratch). And it is still a lot simpler to use than CEDET/Semantic (which is also a fantastic tool if you spend the time to set it up).
Example use:
$ cd sources
$ gtags -v # create the index
$ cd subdirectory
$ [hack hack hack]
$ global -u # update the index (can be called from anywhere in the project)
In Emacs, activate gtags-mode in the source code buffers to get access to the gtags commands:
gtags-find-tag (M-.): find the definition of the specified tag in your source files (gtags lets you choose between all possible definitions if there are several, or jumps directly if there is only one)
gtags-pop-stack (M-*): return to the previous location
gtags-find-rtag: find all uses of the specified tag in the source files
Below is my configuration for gtags, which automatically activates gtags-mode if an index is found:
;; gtags-mode
(eval-after-load "gtags"
  '(progn
     (define-key gtags-mode-map (kbd "M-,") 'gtags-find-rtag)))

(defun ff/turn-on-gtags ()
  "Turn `gtags-mode' on if a global tags file has been generated.

This function asynchronously runs 'global -u' to update global
tags.  When the command successfully returns, `gtags-mode' is
turned on."
  (interactive)
  (let ((process (start-process "global -u"
                                "*global output*"
                                "global" "-u"))
        (buffer  (current-buffer)))
    (set-process-sentinel
     process
     `(lambda (process event)
        (when (and (eq (process-status process) 'exit)
                   (eq (process-exit-status process) 0))
          (with-current-buffer ,buffer
            (message "Activating gtags-mode")
            (gtags-mode 1)))))))

(add-hook 'c-mode-common-hook 'ff/turn-on-gtags)
Automatic completion
I don't know of any better tool than auto-complete. Even though it is not included with Emacs, it is very easy to install using the packaging system (for example, from the marmalade or melpa repositories).
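For reference, a minimal package.el setup sketch (the archive URL is melpa's; ac-config-default is auto-complete's standard default configuration):

(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)
;; after M-x package-install RET auto-complete RET:
(require 'auto-complete-config)
(ac-config-default)  ; enable auto-complete-mode with the default sources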
It depends on what you are looking for in an IDE. I have been using Emacs for a fairly large C++ project. Of course, you need to configure Emacs to work the way you want to a greater extent than with any other IDE.
But yes, CEDET is a start, even though it is not perfect.
However, there is a very good auto-complete mode for Emacs: http://cx4a.org/software/auto-complete/ It is not IntelliSense, but it should integrate with CEDET in some way to give you fairly good auto-completion.
Another important feature that I often use is the function ff-find-other-file, to jump easily between header and implementation files.
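For example, it can be bound to a key with a one-liner like this (the key choice is arbitrary):

(global-set-key (kbd "C-c o") 'ff-find-other-file)  ; jump between .h and .cpp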
Then, of course, you need to roll your own build. CEDET has some support for projects, but I have not tested it. However, Emacs integrates well with command-line build tools such as make. Errors are printed in a buffer, and you can jump to the corresponding line easily within Emacs.
GDB also integrates well with Emacs (M-x gdb); just remember the gdb-many-windows command.
I recommend watching Atila Neves' lightning talk at CppCon 2015, titled Emacs as a C++ IDE.
For further details, see my answer to this related question.
See https://emacs.stackexchange.com/questions/26518/sequence-of-packages-to-be-installed-to-make-emacs-an-ide-for-c-c
I use GNU Global and two popular Emacs plugins:
company for code completion
emacs-helm-gtags for code navigation
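A minimal sketch of enabling the two (assuming both packages are already installed via package.el):

(add-hook 'after-init-hook 'global-company-mode)  ; completion in all buffers
(add-hook 'c-mode-common-hook 'helm-gtags-mode)   ; gtags navigation in C/C++ buffers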

Expand macro inside doxygen comment for printing out software version

I have a C++ code base, documented with doxygen and built with GNU make.
Version information is centralized in the makefile, where I have something like:
VERSION=1.2.3.4
In my makefile, the CFLAGS add the following define:
CFLAGS += -DAPP_VERSION=$(VERSION)
This enables me to get the version in code, like this:
#include <iostream>
using namespace std;

#define STR_EXPAND(tok) #tok
#define STR(tok) STR_EXPAND(tok)

int main()
{
    cout << "software version is " << STR(APP_VERSION) << endl;
}
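For illustration, compiling that by hand with the define on the command line (the file name is made up):

g++ -DAPP_VERSION=1.2.3.4 -o app main.cpp
./app   # prints: software version is 1.2.3.4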
Now, what I would like is to have this in the doxygen-produced html files:
Current version of software is 1.2.3.4
I managed to export the makefile variable into the doxygen configuration file with:
(edit: doxygen is called from makefile, through a 'make-doc' target)
PREDEFINED = APP_VERSION=$(VERSION)
But then, if I try something like this in the doxygen \mainpage command, it fails, because (of course) macro names don't get expanded in comments...
/**
\mainpage this is the doc
Current version is $(APP_VERSION) -- or -- ... is APP_VERSION
*/
Questions
Do you know of a way to "expand" that macro in the doxygen comments? This could be done by some sed processing in the makefile on the file holding the comment, but maybe this can be solved directly with doxygen?
How do other projects handle versioning (besides the automatic versioning tools that VCSs provide, I mean), in such a way that the version id is uniquely defined in one file, so it can be fetched both by the software build system and by the documentation build system?
Related: How to display a defined value
Macros in comments are not generally expanded (see, for example, this answer). This is not unique to doxygen, and I can't think of a way to do this using the PREDEFINED configuration option.
As you state in the question, you can use sed; see the third bullet point in this answer. For example, using the following
INPUT_FILTER = "sed -e 's/VERSION/1.0/'"
will replace all instances of VERSION with 1.0 in all your source files (you can specify which files to process with FILTER_PATTERNS, rather than processing all source files). You might not want VERSION to be expanded everywhere, so perhaps it is best to use a token like $(VERSION) and sed that token instead. Also, you will need a way of getting your version number from your makefile into your doxygen configuration file. This can be done with another sed.
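For example, that second sed could look like this in the makefile (Doxyfile.in and the @VERSION@ placeholder are made-up names for a template you would maintain yourself):

doc:
	sed -e 's/@VERSION@/$(VERSION)/' Doxyfile.in > Doxyfile
	doxygen Doxyfile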
To address your last bullet point, doxygen has the FILE_VERSION_FILTER configuration option for determining the version number of each file. Using this will print some version information (whatever is printed to standard output by the command specified in FILE_VERSION_FILTER) at the top of each file page. The documentation gives examples of getting the version number with a number of different version control systems. Also, here is a page describing how to use git and doxygen to extract version information.
The only drawback of this configuration option is that I don't know how to specify where the file version information should appear in the final documentation. I presume you can use a layout file to change the layout of pages, but I have never done this and don't know how easy it would be to use it to include version information on the main page.
You need to use the "export" functionality of make, i.e. a very simple makefile with
project_name = FooBar
export project_name

all:
	doxygen Doxyfile
will allow you to use the following comment in C++:
/*! \mainpage Project $(project_name) Lorem ipsum dolor
I can see this becoming a PITA with a large set of exports, but it's a fairly simple way to do it. Alternatively, you could run doxygen from a separate bash script with all the exports in it, to avoid polluting your Makefile too much.
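For instance, a hypothetical wrapper script along these lines:

#!/bin/bash
# keep the exports out of the Makefile
export project_name=FooBar
export project_version=1.2.3
exec doxygen Doxyfile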
The commands manual suggests that $(VARIABLE) expands environment variables, so maybe you can put your version in an environment variable?

Using closure library with jsTestDriver

I'm learning about the Google Closure Tools by writing a simple JavaScript game. I'm having trouble figuring out how to set up JsTestDriver so that it works well with the Closure Library.
Specifically, I'd like to use the goog.require mechanism to include any additional JavaScript files, rather than having to manually add them all to the config file.
Following meyertee's suggestion, I made a simple script to automatically write the dependencies to a config file:
#!/bin/bash
cp tests/jsTestDriver.conf.proto tests/jsTestDriver.conf
libs/closure-library/closure/bin/build/closurebuilder.py \
    --root="./libs/closure-library" --root="./js" --namespace="lds" \
    | sed "s#^# - \.\./#" >> tests/jsTestDriver.conf
The tests/jsTestDriver.conf.proto file is a simple template:
test:
 - "*.js"
load:
 - ../libs/knockout-2.1.0.js
# Crucial, the load key needs to be last, and this comment must be followed by a newline.
It is a very fragile script, but hopefully someone (other than me) will find it useful.
You can do it semi-automatically by letting the Closure Compiler generate a manifest file, which will output all files in the correct order of dependency. You can then transform that file to relative paths and paste them into the JsTestDriver config file. That's how I do it.
You could even write a script that does this transformation automatically.
This is the relevant compiler argument:
--output_manifest manifest.MF
There are some details on the Closure Compiler's Google Code Wiki
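For example (the jar name and paths are illustrative), the transformation can be done in the same spirit as the script above:

java -jar compiler.jar --js $(find js -name '*.js') \
    --js_output_file compiled.js --output_manifest manifest.MF
sed 's#^# - ../#' manifest.MF >> tests/jsTestDriver.conf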
Edit:
There are also some Python scripts to help you calculate dependencies. You can use calcdeps.py or closurebuilder.py to generate a manifest file, which even includes files that haven't been required by your code.
Since JsTestDriver does not follow the Closure Library convention of declaring dependencies with goog.provide() and goog.require(), your best option may be meyertee's solution.
However, the Closure Library includes its own testing framework. See:
Test Driven Development with the Closure Framework
Asserts API