I am currently working on a very big ocean model, and since I am developing a new parametrization, I want to explain it in depth in the code's documentation. To do this, I am using Doxygen. The documentation I want to provide is necessarily long, as there are a lot of details to explain, and I also need a lot of formulas. I am talking about a couple of hundred lines of LaTeX-formatted text (including formulas) for each module (I am working in Fortran 90).
Since it seems bad practice to insert all these explanations directly into the .f90 source file, I am looking for a valid alternative. To be clear, I want something like the \input{} command in LaTeX: a way to refer to an external file that contains all my explanation in some suitably formatted text.
!----------------------------------------------------------------------
! A VERY BIG GENERAL CIRCULATION MODEL
!----------------------------------------------------------------------
!
! MODULE: tludyn
!
! DESCRIPTION:
!> #brief A novel parametrization of Navier stokes
!>
!> #details Here goes a long description. This parametrization needs a
!> lot of explaining, long formulas, long text... Let's say that every
!> subroutine needs a couple of hundred lines of LaTeX-formatted code.
!
!> \f[
!> \left( \nabla \cdot A \right)_{i}= \frac{1}{e_{1}e_{2}}
!> \left[
!> \frac{\partial}{\partial i}\left( e_{2}a_{i1} \right) +
!> \frac{\partial}{\partial j}\left( e_{1}a_{i2} \right)
!> \right] +
!> \frac{1}{e_{3}}\left[
!> \frac{\partial}{\partial k}\left( a_{i3} \right)
!> \right]
!> \f]
!> #include tlu_dyn.md
!>
!----------------------------------------------------------------------
Ideally, I would love to put everything in one file, say tlu_dyn.md, and include sections of this file in different parts of my .f90 source code. If that is not possible, I would settle for including external files whole. The included file should appear in the HTML page.
How can I achieve this?
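One candidate shape for this, sketched here under the assumption that Doxygen's \includedoc command (which, per the Doxygen manual, pulls in a file's contents and parses them as documentation, locating the file via EXAMPLE_PATH like \include does) also works from Fortran comment blocks:

```fortran
!----------------------------------------------------------------------
!> @brief A novel parametrization of Navier-Stokes
!>
!> The long write-up lives in tlu_dyn.md, which Doxygen looks up
!> relative to EXAMPLE_PATH in the Doxyfile.
!> \includedoc tlu_dyn.md
!----------------------------------------------------------------------
```

This is only a sketch of one direction worth testing, not a confirmed solution; \includedoc pulls in the whole file, so including only sections of one file would still need something extra (e.g. \snippet with named markers).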
Edit 1
I am using version 1.9.2 and these are the differences with the reference config file:
PROJECT_NAME = DoxyNEMO
CREATE_SUBDIRS = YES
TAB_SIZE = 8
OPTIMIZE_FOR_FORTRAN = YES
EXTENSION_MAPPING = f=FortranFixed \
f90=FortranFree \
txt=md
EXTRACT_PRIVATE = YES
INPUT = release-4.0/src/tlu_dyn.f90 \
        release-4.0/src/tlu_dyn.md
(the .f90 file is the source; the .md file is where I want to put the long explanations)
FILE_PATTERNS = all the usual plus this: *.dox
EXAMPLE_PATH = release-4.0/src
LATEX_HEADER = doxy_header.tex
LATEX_FOOTER = doxy_footer.tex
In the file tlu_dyn.md there are the same lines from \f[ to \f].
The result looks like: (screenshot omitted).
Edit 2
The file tlu_dyn.md, which contains my pedantic description of the code, is the result of two days of trials in which I tried several extensions and several commands. In the first example I posted, the file contained exactly what is shown in the figure, as I was trying to see whether the Doxygen syntax was okay. In a second trial, I tried putting in two simple lines, such as
# Is this working? (Header)
This is plain text.
to see whether a text file with Markdown syntax worked better.
I would like to write this file in a simple way, so Markdown is an option; pure LaTeX would be even better (even though there is no evidence that pure LaTeX is accepted); HTML or Doxygen syntax are last resorts.
I looked for an AppleScript to extract the DOI from a PDF file, but could not find one. There is enough information available on the actual format of a DOI (i.e., the regular expression), but how could I use this to get the identifier out of the PDF file?
(It would be no problem if some external program were used, such as Hazel.)
If you're OK with using an app, I'd recommend Skim. It has good AppleScript support. I'd probably structure it like this (especially if the document might be large):
set DOIFound to false
tell application "Skim"
    set pp to pages of document 1
    repeat with p in pp
        set t to text of p
        -- look for the DOI in t and set DOIFound to true
        if DOIFound then exit repeat -- if it's not found, fall back to the URL?
    end repeat
end tell
I'm assuming a DOI would always appear on a single page (not spread across two). It looks like they are invariably (?) on the first page of an article, which would make this quick, of course, even with a large document.
[edit]
Another way would be to get the Xpdf OSX binaries from http://www.foolabs.com/xpdf/download.html and use pdftotext in the command line (just tested this; it works well) and parse the text using AppleScript. If you want to stay in AppleScript, you can do something like:
do shell script "path/to/pdftotext 'path/to/pdf/file.pdf'"
which would output a file in the same directory with a txt file extension -- you parse that for DOI.
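To sketch that pdftotext-plus-parsing route end to end, here is a minimal Python version. It assumes the `pdftotext` binary is on your PATH, and the regex is only a common DOI heuristic (`10.<registrant>/<suffix>`), not the full DOI grammar:

```python
#!/usr/bin/env python3
"""Extract a DOI from a PDF by running pdftotext and scanning its output."""
import re
import subprocess
import sys

# Heuristic DOI pattern: "10.<4-9 digit registrant>/<suffix>"
DOI_RE = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b')

def find_doi(text):
    """Return the first DOI-looking string in text, or None."""
    m = DOI_RE.search(text)
    return m.group(0) if m else None

def doi_from_pdf(path):
    # '-' tells pdftotext to write the extracted text to stdout
    # instead of creating a .txt file next to the PDF
    result = subprocess.run(['pdftotext', path, '-'],
                            capture_output=True, text=True, check=True)
    return find_doi(result.stdout)

if __name__ == '__main__' and len(sys.argv) > 1:
    print(doi_from_pdf(sys.argv[1]))
```

From there, renaming the file after the extracted DOI (as suggested below) is a one-liner with os.rename.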
Have you tried pdfgrep? It works really well on the command line:
pdfgrep -n --max-count 1 --include "*.pdf" "DOI"
I have no idea how to build an AppleScript for it, though, but I would be interested in one too, so that if I drop a PDF into that folder it automatically extracts the DOI and renames the file with the DOI in the filename.
When I include the line:
au BufWinEnter * silent! loadview "load folds
which restores my saved folds, it breaks the syntax highlighting... but only in files specifically named "prototypes.h"... which is what I always name my .h files, so it must be some saved data breaking it.
Where are folds saved?
It is stated in the help. I used the following sequence: :help :loadview says "... the view stored with ":mkview" is loaded." Then :h :mkview (really <C-]> on :mkview in the text) says "... a name is generated and 'viewdir' prepended ...", and :h 'viewdir' (again via <C-]>) gives the defaults: "$VIM/vimfiles/view" for Win32 and "~/.vim/view" for Unix. I was unable to find a description of how the name is generated, but a simple test showed that for /usr/share/vim/vim73/doc/starting.txt it generated =+usr=+share=+vim=+vim73=+doc=+starting.txt=.
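From that single observed example, the encoding appears to simply replace each path separator; a throwaway sketch of that guess (inferred from one data point, not Vim's actual algorithm, which may handle other special characters too):

```python
def view_file_name(path):
    # Guess inferred from one observed example: each '/' becomes '=+',
    # and a trailing '=' is appended. Vim's real algorithm may differ.
    return path.replace('/', '=+') + '='

print(view_file_name('/usr/share/vim/vim73/doc/starting.txt'))
```

This at least makes it easy to locate (and delete) the stale view file for a given source file, e.g. the one saved for prototypes.h.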
If you want to use vim efficiently you should know how to use its help system.
Standard headers for C++ are typically installed in /usr/include/c++/4.x (in Linux). Since most of the headers do not have any extension (.h, .hpp, etc.), vim cannot recognize the format for these files as C++.
I have seen this other question on SO, but the solutions posted there do not solve my problem. One solution involves using a modeline, but the standard C++ headers do not include a vim-friendly signature. Instead, they include in the first line something like:
// <algorithm> -*- C++ -*-
I guess I could search for that pattern (-*- C++ -*-) in order to detect the file type. The other solution posted in the previously mentioned SO question actually goes in that direction. The answer suggests using:
au BufRead * if search('MagicPattern', 'nw') | setlocal ft=cpp | endif
so I have tried to do:
au BufRead * if search('-*- C++ -*-', 'nw') | setlocal ft=cpp | endif
but it does not work (i.e., the file type is not detected).
Is it possible to detect the file type using that approach? Is there any plugin or other way to solve this?
n.m's answer does the trick, but this is better:
au BufRead * if search('\M-*- C++ -*-', 'n', 1) | setlocal ft=cpp | endif
The extra argument to search is the stopline, and ensures that this rule will only be applied to files with the pattern in line 1.
This is important because, without the stopline, any file that contains the pattern anywhere, including your vimrc, will satisfy the match and potentially be highlighted using the wrong syntax rules.
Also, with a stopline the w flag is unnecessary.
Look at :help search for more information.
* is normally special in Vim searches. To disable it, use \M in the beginning of the search string, i.e.
au BufRead * if search('\M-*- C++ -*-', 'nw') | setlocal ft=cpp | endif
This actually works for me.
Need a way to navigate/browse XSLT files easily with Vim.
Similar to the way ctags can be used with C/C++ source code.
A good solution would allow installing scripts etc. only for myself, or running them directly from inside Vim. I hate having to step out of Vim just to search for a function definition or do a "text" search.
Searched for a few hours and came up with some good leads like 'ack'.
Apparently this can be integrated with vim using ack.vim. Yet to test this out properly.
Essentially I should be able to work through a maze of .xsl files which include other .xsl files and use functions and templates. Any suggestions?
I found kipelovets' and GuruM's answers useful and worked out this solution, which combines their advice:
.ctags file:
--langdef=xslt
--langmap=xslt:.xsl
--regex-xslt=/<xsl:template[^>]+name=\"([-a-zA-Z0-9_:]+)\"( +mode="([^"]+)")?/\1 \2/n,namedtemplate/i
--regex-xslt=/<xsl:template[^>]+match=\"([^"]+)\"( +mode="([^"]+)")?/\1 \2/m,matchedtemplate/i
--regex-xslt=/<xsl:apply-templates +select="([^"]{2,})"( +mode="([^"]+)")?/\1 \2/a,applytemplate/i
--regex-xslt=/<xsl:call-template +select="([^"]+)"( +mode="([^"]+)")?/\1 \2/c,calltemplate/i
--regex-xslt=/<xsl:variable[^>]+name=\"([-a-zA-Z0-9_]+)\"/\1/v,variable/i
--regex-xslt=/<xsl:function[^>]+name=\"([-a-zA-Z0-9_:]+)\"/\1/f,function/i
--regex-xslt=/<xsl:param[^>]+name=\"([-a-zA-Z0-9_:]+)\"/\1/p,parameter/i
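As a quick sanity check of these rules outside ctags, the name-capturing group of the first pattern can be exercised with an ordinary regex engine (the xsl:template line below is made up for illustration):

```python
import re

# Same capture idea as the --regex-xslt rule for named templates above:
# group 1 is the value of the name= attribute
named_template = re.compile(r'<xsl:template[^>]+name="([-a-zA-Z0-9_:]+)"')

line = '<xsl:template name="render-header" mode="html">'  # hypothetical input
match = named_template.search(line)
print(match.group(1))  # the tag name ctags would record
```

Testing each pattern this way against a few real lines from your stylesheets is much faster than regenerating the tags file after every tweak.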
vim tagbar settings (which some people may find useful):
let g:tagbar_type_xslt = {
\ 'ctagstype' : 'xslt',
\ 'kinds' : [
\ 'n:templates (named)',
\ 'm:templates (matched)',
\ 'a:applied templates',
\ 'c:called templates',
\ 'f:functions',
\ 'p:parameters',
\ 'v:variables'
\ ]
\ }
It's better, but some issues I am still having include:
grouping templates by their modes
whitespace, such as newlines between the attributes of a template/variable, etc.
scoping of variables, parameters, applied templates and called templates: if you have multiple variables with the same name but in different templates, the scope (which template they are in) is not captured
commented-out templates, functions, variables and parameters
A better approach to try in future: the regexes for parsing this information out of the XSLT get complex when done properly, so the best solution may be to write a custom XSLT tag generator using XSLT itself. This is what jsctags does for JavaScript. The format of the tags file looks straightforward... so I will try this next. See Tag File Format.
Thanks for your question and answer, @GuruM. Your version of ~/.ctags didn't work for me, though; I used the following:
--langdef=xslt
--langmap=xslt:.xsl
--regex-xslt=/<xsl:template[^>]+name=\"([-a-zA-Z0-9_:]+)\"/\1/f,function/i
It works perfectly with the AutoTag plugin and requires only the following line for the TagList plugin:
let s:tlist_def_xslt_settings = 'xslt;f:function'
Thanks @Izap. Sorry, apparently I'm not allowed to upvote my own question.
I couldn't select your answer even though it's closest to what I wanted (example code would have helped).
Creating a tags file can be done very easily:
1) Test out the regular expressions for your custom language: egrep 'pattern' *
2) Copy-paste a custom language definition with the above regular expressions into ~/.ctags:
--langdef=EXSLT
--langmap=EXSLT:.xsl
--regex-EXSLT=/<xsl:variable[ \t]+name=\"([-a-zA-Z0-9_]+)\"/\1/v,variable/i
--regex-EXSLT=/<func:function[ \t]+name=\"([-a-zA-Z0-9_:]+)\"/\1/f,function/i
--regex-EXSLT=/<xsl:template[ \t]+match=\"([/-a-zA-Z0-9_:]+)\"/\1/t,template/i
Note: The ( and ) are used to create a tag entry in the tags file, where \1 is the matched keyword. Here I'm adding XSLT with EXSLT support.
You can extend the language mapping by appending regular-expression patterns below.
3) Run cd srcdir && ctags -R *
4) Check the tags file to see if ctags has found the patterns
5) Open vim and test out the tags file using Ctrl-] to jump from function usage to definition etc.
Note: "Extending" ctags requires C coding and is not discussed here.
I'm trying to set up automated Doxygen runs on our massive 78,000 file C++ codebase. It's doing okay with extracting basic type and hierarchy information, but I'd like to make it smarter about picking up the documentation comments that are already in place.
Most of the comments that have accumulated over the years do follow a general pattern, though not the pattern Doxygen expects. Mostly they look like this:
// class description
class foo
{
// returns ascii art of a fruit
const char* apples( void );
// does something to some other thing
customtype_t baz( foo &other );
enum
{
kBADGER, // an omnivorous mustelid
kMUSHROOM, // tasty on pizza
kSNAKE, // oh no!
};
};
Which are double-slashed, rather than the /// or //! style comments that Doxygen expects.
There are far too many files to go through searching and replacing all such comments, and many of my programmers are violently allergic to seeing triple-slashes in their code, so I'd like to find some way to make Doxygen read ordinary comments as JavaDoc comments, when they're in the right place. Is there a way to make Doxygen read // as ///?
I couldn't find any such configuration parameter, so I figure I'll need to transform the input somehow. In general the rule I'd use is:
if there is a line containing just a comment, immediately preceding a function/class/type/variable declaration, assume it is a /// comment.
if there is a declaration followed on the same line by a // comment, treat it as a ///< comment.
But I don't know how to go about teaching Doxygen this rule. The two ways I can think of are:
Write a program as an INPUT_FILTER, which parses the input C++ and transforms //s into ///s as above. But this kind of transform is too complicated to do as a regular expression, and I really don't want to have to write a full blown C++ parser just to feed input to another C++ parser! Also, spinning up an INPUT_FILTER program for each file slows down Doxygen unacceptably: it already takes over 30 minutes to run across our source, and adding an INPUT_FILTER makes it take over six hours.
Modify the Doxygen source code to include the above comment rules. That seems like a scary amount of work in unfamiliar code.
Any other ideas?
The answer is simple: you can't.
Doxygen's special comment style must be used to mark a comment as documentation.
Doxygen does NOT only take comments that precede a declaration; you can also use them elsewhere in the code.
If you want to use the Doxygen features, you will have to update the comments by hand, or write a script/tool that looks for declarations and preceding comments and changes them.
You have to decide: choose one of the three solutions (your two, plus the script added as an answer) or don't use Doxygen.
You can use a script to convert the comments to Doxygen style; here is a simple Python script, just try it:
#!/usr/bin/env python3
import re
import sys

def main(input_file, output_file):
    # a line containing only a '//' comment -> '///'
    pattern1 = re.compile(r'^\s*//\s.*')
    # a declaration followed by a trailing '//' comment -> '///<'
    pattern2 = re.compile(r'^\s*\w.*\s//\s.*')
    with open(input_file) as fin, open(output_file, 'w') as fout:
        for line in fin:
            if pattern1.match(line):
                line = line.replace('//', '///', 1)
            elif pattern2.match(line):
                line = line.replace('//', '///<', 1)
            fout.write(line)

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print('usage: %s input output' % sys.argv[0])
        sys.exit(1)
    main(sys.argv[1], sys.argv[2])