I use Doxygen to document Fortran code. I have one module and want to document it. The module is:
!> Definition of object that defines a spatial domain where the
!! calculations will take place.
!! \author Antonio
!! \date 2018-09-13
module places
  implicit none
  ! Maximum length for character type.
  integer, parameter, private :: MAXLEN = 20 ! Maximum length for the name of places
  !> \brief Object that defines a spatial domain where the calculations will take place.
  !> \author Antonio
  !> \date 2018-09-13
  type typePlace
    character(len=MAXLEN) :: Name
    integer :: Id
    real :: Lons
    real :: Lats
  end type typePlace
end module places
When I run Doxygen, the generated documentation has one problem:
The main menu has three items, "Main Page", "Data Types List" and "Files", but there is no "Modules" item.
Does anybody know where I made a mistake?
The differences between my Doxyfile and the default one are:
PROJECT_NAME = "Calculate values"
OUTPUT_DIRECTORY = /calculations/doxygen
ALWAYS_DETAILED_SEC = YES
INLINE_INHERITED_MEMB = YES
JAVADOC_AUTOBRIEF = YES
OPTIMIZE_FOR_FORTRAN = YES
EXTENSION_MAPPING = F90=Fortran \
F95=Fortran \
F03=Fortran \
F08=Fortran
EXTRACT_ALL = YES
EXTRACT_PRIVATE = YES
EXTRACT_PACKAGE = YES
EXTRACT_STATIC = YES
EXTRACT_LOCAL_METHODS = YES
INPUT = /calculations
FILE_PATTERNS ...
*.qsf \
*.as \
*.js \
*.f77 \
*.f95 \
*.f03 \
*.f08
SOURCE_BROWSER = YES
INLINE_SOURCES = YES
REFERENCED_BY_RELATION = YES
REFERENCES_RELATION = YES
ALPHABETICAL_INDEX = NO
HTML_TIMESTAMP = YES
MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest
HAVE_DOT = YES
CALL_GRAPH = YES
CALLER_GRAPH = YES
Doxygen version 1.8.5 is from 2013; a lot has changed since then. The current doxygen version is 1.8.14.
When using version 1.8.14, the "Modules" item is present (with OPTIMIZE_FOR_FORTRAN = YES; otherwise the module is shown under "Namespaces").
The solution is to upgrade to a more recent version of doxygen.
I was trying to add a citation in a footnote (or in any text in the table), but it's not working: the citation text appears as it is. I thought I needed to change the table format to markdown instead of latex, or to use bookdown::pdf_document2, but neither solved the problem. Another attempt was to create the citation text outside kable in a separate code chunk and then paste it inside the footnote, which also didn't work.
This is my code:
---
title: "scientific report"
output:
  pdf_document:
    fig_caption: true
    keep_tex: true
    number_sections: yes
    latex_engine: xelatex
csl: elsevier-with-titles.csl
bibliography: citations.bib
link-citations: true
linkcolor: blue
---
# This is an example
the number of the table below is [\ref{do}]
P.S. I wrote the superscript (a) manually in the xlsx file.
```{r echo=FALSE }
library(knitr)
library(kableExtra)
library("readxl")
dfdf <- read_excel("dyss_count.xlsx")
df <- as.data.frame(dfdf)
options(knitr.kable.NA = '')
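# longtable = T together with the repeat_header option lets the table span pages and repeats the header on each page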
kable(df, "latex", longtable = T, booktabs = T,escape = F ,caption = 'dosage \\label{do}',align = "c") %>%
kable_styling(latex_options = c('repeat_header'), font_size = 7) %>%
footnote(general ="A general footnote",
alphabet = 'the source is #Burg_2019',
general_title = "General: ", number_title = "Type I: ",
alphabet_title = "Type II: ",
footnote_as_chunk = T, title_format = c("italic", "underline")
)
The result is that the citation text appears as plain text in the footnote instead of being rendered as a citation.
I would be very thankful for any useful information.
Well, after many attempts it worked with conventional cross-referencing. In case someone else is having the same issue, I just defined a text reference outside the chunk:
(ref:caption) The source is [#Burg_2019]
and then used it inside the footnote:
footnote(general = "A general footnote", alphabet = "(ref:caption)")
You could possibly try adding a caption using CSS via kable_styling(extra_css = ..) so you can modify its styling properties? Just a thought.
I am having an issue training my Naive Bayes classifier. I have a feature set and targets that I want to use, but I keep getting errors. I've had a look at other people who have similar problems, but I can't seem to figure out the issue. I'm sure there's a simple solution, but I've yet to find it.
Here's an example of the structure of the data that I'm trying to use to train the classifier.
In [1] >> train[0]
Out[1] ({
u'profici': [False],
u'saver': [False],
u'four': [True],
u'protest': [False],
u'asian': [True],
u'upsid': [False],
.
.
.
u'captain': [False],
u'payoff': [False],
u'whose': [False]
},
0)
Where train[0] is the first tuple in the list and contains:
- a dictionary of features and boolean values to indicate the presence or absence of words in document[0]
- the target label for the binary classification of document[0]
Obviously, the rest of the train list has the features and labels for the other documents that I want to classify.
When running the following code:
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.naive_bayes import MultinomialNB
MNB_clf = SklearnClassifier(MultinomialNB())
MNB_clf.train(train)
I get the error message:
TypeError: float() argument must be a string or a number
Edit:
The features are created here, from a dataframe post_sent that contains the posts in column 1 and the sentiment classification in column 2.
import itertools

from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.stem import PorterStemmer
from nltk.tokenize import RegexpTokenizer

stopwords = set(stopwords.words('english'))
tokenized = []
filtered_posts = []
punc_tokenizer = RegexpTokenizer(r'\w+')

# tokenizing and removing stopwords
for post in post_sent.post:
    tokenized = [word.lower() for word in punc_tokenizer.tokenize(post)]
    filtered = [w for w in tokenized if w not in stopwords]
    filtered_posts.append(filtered)

# stemming
tokened_stemmed = []
for post in filtered_posts:
    stemmed = []
    for w in post:
        stemmed.append(PorterStemmer().stem(w))
    tokened_stemmed.append(stemmed)

# frequency dist
all_words = list(itertools.chain.from_iterable(tokened_stemmed))
frequency = FreqDist(all_words)

# Feature selection
word_features = list(frequency.keys())[:3000]

# IMPORTANT PART
#######################
# ------ featuresets creation ---------
def find_features(list_of_posts):
    features = {}
    wrds = set(list_of_posts)
    for w in word_features:
        features[w] = [w in wrds]
    return features

# zipping inputs with targets
words_and_sent = zip(tokened_stemmed, post_sent.sentiment)

# IMPORTANT PART
##########################
# feature sets created here
featuresets = [(find_features(words), sentiment)
               for words, sentiment in words_and_sent]
Thanks to help from both Vivek & Lenz, who explained the problem to me, I was able to reorganise my training set, and it now works. Thanks guys!
The problem was very well explained in Vivek's post. This is the code that reorganised the train data into the correct format:
features_targ = []
for feature in range(0, len(featuresets)):
    dict_test = featuresets[feature]
    values = list(itertools.chain.from_iterable(dict_test[0].values()))
    keys = list(dict_test[0].keys())
    target = dict_test[1]
    dict_ion = {}
    for key in range(0, len(keys)):
        dict_ion[keys[key]] = values[key]
    features_targ.append((dict_ion, target))
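For reference, a more compact way to do the same reorganisation is a dict comprehension that unwraps the one-element lists (a sketch, assuming featuresets is built as in the question, with values like [True] and [False]):
# unwrap each one-element list so every feature maps to a plain boolean
features_targ = [({word: present[0] for word, present in feats.items()}, target)
                 for feats, target in featuresets]
With plain booleans as values, MNB_clf.train(features_targ) should then run without the TypeError.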
You are setting up the train data wrong. As @lenz said in a comment, remove the brackets around the feature dict values and use single values only.
As given in the official documentation:
labeled_featuresets – A list of (featureset, label) where each featureset is a dict mapping strings to either numbers, booleans or strings.
But you are setting the mapping (the value of each key in the dict) to a list.
Your correct train should look like:
[({u'profici':False,
u'saver':False,
u'four':True,
u'protest':False,
u'asian':True,
u'upsid':False,
.
.
}, 0),
..
..
({u'profici':True,
u'saver':False,
u'four':False,
u'protest':False,
u'asian':True,
u'upsid':False,
.
.
}, 1)]
You can take a look at more examples here:
- http://www.nltk.org/howto/classify.html
Unix.time() returns seconds.
How can I get the time in ms or ns?
Thanks
Try Unix.gettimeofday. It returns the current time as a float of seconds with sub-second (microsecond) resolution, so multiplying by 1000. gives milliseconds:
# Unix.time ();;
- : float = 1447865942.
# Unix.gettimeofday();;
- : float = 1447865947.56802297
Core has very robust time-related functions, and since 112.06.00 there is a Time_ns module:
utop # Time_ns.now();;
- : Time_ns.t = 2015-11-18 14:49:08.580109-05:00
I am having some issues with doxygen. I am trying to include an inline formula:
blah blah \f$ x \in [0,1] \f$ blah blah
but the HTML output looks like:
blah blah \( x \in [0,1] \) blah blah
Does anyone know why? If it helps:
EXTRA_PACKAGES = mathtools amsmath
USE_MATHJAX = YES
Make sure you have LaTeX installed and verify that you have these settings in your Doxyfile:
GENERATE_LATEX = YES
LATEX_OUTPUT = latex
# latex command name to be called from the terminal
LATEX_CMD_NAME = latex
I have multiple .f90 files that contain the same variables:
integer,parameter :: ilatmin = 0.5
integer,parameter :: ilatmax = 180
integer,parameter :: ilonmin = 0.5
integer,parameter :: ilonmax = 360
I would like to write a shell script or a makefile where I specify the value of each variable, and that loops through each .f90 file and updates those values.
I thought about using search and replace, like in this example: Linux find and replace, but I was wondering whether it's a good idea to use sed in a bash script.
Why don't you put those parameters into a module that you use inside the *.f90 files? It could look like:
module myParameters
  implicit none
  integer, parameter :: ilatmin = 0.5
  integer, parameter :: ilatmax = 180
  integer, parameter :: ilonmin = 0.5
  integer, parameter :: ilonmax = 360
end module
In your code you can then use that module, e.g.:
subroutine foo
  use myParameters, only: ilatmin, ilatmax, ilonmin, ilonmax
  ! ...
end subroutine
Now you just need to change the values in the module myParameters...
To answer your actual question: using sed, this could look like:
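# edit every .f90 file in place, replacing the value of each parameter (XYZ is a placeholder for the value you want)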
for i in *.f90; do
  sed -i -e 's/ilatmin = .*$/ilatmin = XYZ/g' \
         -e 's/ilatmax = .*$/ilatmax = XYZ/g' \
         -e 's/ilonmin = .*$/ilonmin = XYZ/g' \
         -e 's/ilonmax = .*$/ilonmax = XYZ/g' "$i"
done