Mendeley issue with two-author citations and bibliography (& vs. ir) - customization

Mendeley appears to have an issue with in-text citations for two authors: instead of (Author & Author Year) it produces (Author ir Author Year). It seems to have a problem printing "&" and prints "ir" instead. The same happens in the bibliography; see the example below. It seems to be fine printing "&" in the journal name, just not between two authors. Perhaps this needs custom coding within Mendeley? Does anyone know how to tackle this?
Example of in-text:
Insert text for example (Belitz ir Lang 2008)
Example of Bibliography:
Belitz, C., ir S. Lang. 2008. Simultaneous selection of variables and smoothing parameters in structured additive regression models. Computational S. & Data Analysis 53:61-81.
Thank you.

Presumably you're using a CSL style that uses a text term (like "and") between authors, and which is localized to Lithuanian (the Lithuanian CSL locale file specifies "ir" as translation for "and": https://github.com/citation-style-language/locales/blob/cbb45961b815594f35c36da7e78154feb5647823/locales-lt-LT.xml#L25).
If you wish to have the ampersand ("&") separating authors, you need a style that uses <name and="symbol"/> instead of <name and="text"/> (see https://docs.citationstyles.org/en/stable/specification.html#name).
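If you'd rather patch the style you already use than hunt for a different one, the attribute swap is easy to script. A throwaway Python sketch, assuming the style is saved locally (the file names are placeholders):

import re

# Flip every <name ... and="text"> to and="symbol" in a local CSL style file.
with open("my-style.csl", encoding="utf-8") as f:
    style = f.read()

patched = re.sub(r'(<name\b[^>]*\band=)"text"', r'\1"symbol"', style)

with open("my-style-ampersand.csl", "w", encoding="utf-8") as f:
    f.write(patched)

If you install the patched copy alongside the original, remember to also give it a new title and id so the two styles can be told apart.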

Related

Is it possible to format in-text citations with author, title, and date in markdown/Rmarkdown with CSL

I am trying to write a syllabus in Rmarkdown. I would like to be able to use my bibtex file and write citations in text and then knit them to html or pdf. So for instance, I would like to be able to write:
For week 1 please read:
+ [#author2005] Chapter 2
And have as output
Arthur A. Author, Book Title, University Press, 2005, Chapter 2
or something roughly similar: a long-form citation appearing in-text.
At the moment, I can only find .csl files that will either render this as:
(Author 2005) Chapter 2
or some other variant of author-year in-text citation, or else .csl files that render it as
.1
1 Arthur A. Author, Book Title, University Press, 2005, Chapter 2
Is it possible to create a .csl style that produces verbose in-text citations? I have been wrestling with the CSL visual editor without success. If not, is there another way to tackle citations in markdown/Rmarkdown that allows more control over the formatting of in-text citations?
That's absolutely possible, yes. The quickest way to do so would be to use an existing CSL style that produces note citations in the desired format and then convert it to an in-text style, which involves changing only one thing:
Option 1
In the style code, change class="note" to class="in-text" in the 2nd line of the style, i.e. the one that starts with <style
Option 2
Make the analogous change in the visual editor under "Global Formatting Options"
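Since the Option 1 edit is purely mechanical, it can also be scripted. A minimal Python sketch (the file names are placeholders):

# Convert a note style to an in-text style by changing the class attribute
# on the <style> element, as described in Option 1 above.
with open("note-style.csl", encoding="utf-8") as f:
    style = f.read()

with open("in-text-style.csl", "w", encoding="utf-8") as f:
    f.write(style.replace('class="note"', 'class="in-text"', 1))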

Document classification: Preprocessing and multiple labels

I have a question about word representation algorithms:
Which of the algorithms word2vec, doc2vec and TF-IDF is most suitable for handling text classification tasks?
The corpus used in my supervised classification task is composed of a list of sentences, both short and long. As discussed in this thread, the choice between doc2vec and word2vec is a matter of document length. As for TF-IDF vs. word embeddings, it's more a matter of text representation.
My other question is: what if, for the same corpus, I had more than one label to link to each sentence? If I create multiple entries/labels for the same sentence, it affects the decision of the final classification algorithm. How can I tell the model that every label counts equally for every sentence of the document?
Thank you in advance,
You should try multiple methods of turning your sentences into 'feature vectors'. There are no hard-and-fast rules; what works best for your project will depend a lot on your specific data, problem-domains, & classification goals.
(Don't extrapolate guidelines from other answers – such as the one you've linked that's about document-similarity rather than classification – as best practices for your project.)
To get underway, you may want to focus first on some simple 'binary classification' aspect of your data. For example, pick a single label and train on all the texts, merely trying to predict if that one label applies or not.
When you have that working, so you have an understanding of each step – corpus prep, text processing, feature-vectorization, classification-training, classification-evaluation – then you can try extending/adapting those steps to either single-label classification (where each text should have exactly one unique label) or multi-label classification (where each text might have any number of combined labels).
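As one concrete way to run that progression, here is a minimal sketch using scikit-learn with TF-IDF features (the sentences, labels and model choice are all made-up placeholders, not a recommendation):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

sentences = ["cheap flights to paris", "stock market rallies", "airline stocks climb"]
labels = [["travel"], ["finance"], ["travel", "finance"]]  # multiple labels allowed

X = TfidfVectorizer().fit_transform(sentences)

# Binary warm-up: does the single label "travel" apply or not?
y_binary = [1 if "travel" in ls else 0 for ls in labels]
LogisticRegression().fit(X, y_binary)

# Multi-label: one yes/no classifier per label, so every label is weighted
# the same way for every sentence.
Y = MultiLabelBinarizer().fit_transform(labels)
clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
print(clf.predict(X))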

Text Analysis Tools

I am currently building a data table in Base SAS and using the INDEX function to flag certain company names embedded in a paragraph of text in a column. If a company name exists, I flag it with a 1. But when I looked into the paragraphs in more detail, this simple approach doesn't work. Take the example below:
"John Smith advised Coca-Cola on its merger with Pepsi". I'm searching on both Coca-Cola and Pepsi, but in this example I only want to flag Coca-Cola, as John Smith "advised" them; I don't want both Coca-Cola and Pepsi flagged with a "1". I understand that I can write code that takes the words after certain anchor words such as "advised" or "represented", which does work. But what happens if one record simply lists all the companies that they have advised without using anchor words to identify them? Are there any tools out there that can do this automatically with AI?
Thanks
Chris
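To make the anchor-word approach concrete, here is a rough Python sketch (the company list and anchor verbs are assumptions, and it will not handle records that merely list companies without anchor words; that case likely needs NLP techniques such as dependency parsing or relation extraction):

import re

companies = ["Coca-Cola", "Pepsi"]    # hypothetical known-company list
anchors = ["advised", "represented"]  # anchor verbs from the question

def flag_advised(text):
    """Flag a company with 1 only if it appears shortly after an anchor verb."""
    flags = {company: 0 for company in companies}
    for anchor in anchors:
        for company in companies:
            # anchor verb followed by the company name within a few words
            pattern = rf"\b{anchor}\b\W+(?:\w+\W+){{0,3}}{re.escape(company)}"
            if re.search(pattern, text, re.IGNORECASE):
                flags[company] = 1
    return flags

print(flag_advised("John Smith advised Coca-Cola on its merger with Pepsi"))
# {'Coca-Cola': 1, 'Pepsi': 0}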

regex to find sentences not terminated by a period

In a book ms., I have figure captions that take one of the following forms:
cap=['"]Figure caption['"] (with matching ' or ")
\caption{Figure caption} (a LaTeX caption)
where the style calls for all captions to be terminated by a ., i.e.,
\caption{Figure caption.}
Unfortunately, I wasn't consistent when writing, so only some captions obey the style, and I have ~300 figures in a file tree of ~100 files, so I'd like to find a Perl solution for finding the problem cases and making corrections rather than editing manually.
Can someone help?
Let me try to make this more precise with some test cases. For the \caption{} problem, here are a few example lines from my files. The first three are properly terminated with a .. The rest need a . appended before the caption-closing }. Note there can be several sentences in a caption, and other LaTeX material on the same line.
\caption{CA plot and mosaic display for the TV viewing data. The days of the week in the mosaic plot were permuted according to their order in the CA solution.}
\caption{Stacking approach for a three-way table. Two of the table variables are combined interactively to form the rows of a two-way table.}\label{fig:stacking}
\caption{Overview of fitting and graphing for model-based methods in \R.}
\caption{Each way of stacking a three-way table corresponds to a loglinear model}\label{tab:stacking}
\caption{CA biplot of the suicide data, showing calibrated axes for the suicide methods}
\caption{Arthritis treatment data, for the relationship of the binary response ``Better'' to Age}
\caption{Space shuttle data, with fitted logistic regression model}
\caption{Observed (points) and fitted (lines) log odds of admissions in the logit models for \data{UCB}}
(\\caption{[^}]*[^\.]}|cap=(['"]).+[^\.]\2)
https://regex101.com/r/kW9yZ3/2
This one works for the cases you provided for the \caption{} format, and for some examples I made up for the cap="..." format.
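If you want to apply the fix across the whole tree, here is one way to do it as a script: a Python sketch rather than Perl. It handles only the \caption{} form without nested braces, and the directory name and file glob are placeholders:

import pathlib
import re

# A \caption{...} whose last character before the closing brace is not "."
pattern = re.compile(r"(\\caption\{[^}]*[^.}])\}")

for path in pathlib.Path("book").rglob("*.tex"):
    text = path.read_text(encoding="utf-8")
    fixed = pattern.sub(r"\1.}", text)
    if fixed != text:
        path.write_text(fixed, encoding="utf-8")
        print("fixed:", path)

The cap='...'/cap="..." form could be handled the same way with the second alternative of the regex above.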

Improving classification results with Weka J48 and Naive Bayes Multinomial classifiers

I have been using Weka's J48 and Naive Bayes Multinomial (NBM) classifiers upon frequencies of keywords in RSS feeds to classify the feeds into target categories.
For example, one of my .arff files contains the following data extracts:
@attribute Keyword_1_nasa_Frequency numeric
@attribute Keyword_2_fish_Frequency numeric
@attribute Keyword_3_kill_Frequency numeric
@attribute Keyword_4_show_Frequency numeric
…
@attribute RSSFeedCategoryDescription {BFE,FCL,F,M,NCA,SNT,S}
@data
0,0,0,34,0,0,0,0,0,40,0,0,0,0,0,0,0,0,0,0,24,0,0,0,0,13,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
0,0,0,12,0,0,0,0,0,20,0,0,0,0,0,0,0,0,0,0,25,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
0,0,0,10,0,0,0,0,0,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
…
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,FCL
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,F
…
20,0,64,19,0,162,0,0,36,72,179,24,24,47,24,40,0,48,0,0,0,97,24,0,48,205,143,62,78,0,0,216,0,36,24,24,0,0,24,0,0,0,0,140,24,0,0,0,0,72,176,0,0,144,48,0,38,0,284,221,72,0,72,0,SNT
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SNT
0,0,0,0,0,0,11,0,0,0,0,0,0,0,19,0,0,0,0,0,0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,17,0,0,0,0,0,0,0,0,0,0,0,0,0,20,0,S
And so on: there are 570 rows in total, where each one contains the frequencies of the keywords in a feed for a day. In this case, there are 57 feeds over 10 days, giving a total of 570 records to be classified. Each keyword is prefixed with a surrogate number and postfixed with 'Frequency'.
I am using 10-fold cross-validation for both the J48 and NBM classifiers on a 'black box' basis. The other parameters used are also defaults, i.e. a confidence factor of 0.25 and a minimum of 2 objects per leaf for J48.
So far, my classification rates for instances with varying numbers of days, date ranges and actual keyword frequencies have been consistent in the 50-60% range for both J48 and NBM, but I would like to improve this if possible. I have reduced the decision tree's confidence factor, sometimes to as low as 0.1, but the improvements are very marginal.
Can anyone suggest any other way of improving my results?
To give more information, the basic process here involves a diverse collection of RSS feeds where each one belongs to a single category.
For a given date range, e.g. 01 - 10 Sep 2011, the text of each feed's item elements is combined. The text is then validated to remove words with numbers, accents and so on, as well as stop words (a list of 500 stop words from MySQL is used). The remaining text is then indexed in Lucene to work out the 64 most popular words.
Each of these 64 words is then searched for in the description elements of the feeds for each day within the given date range. As part of this, the description text is also validated in the same way as the title text and again indexed by Lucene. So a popular keyword from the title such as 'declines' is stemmed to 'declin': then if any similar words are found in the description elements which also stem to 'declin', such as 'declined', the frequency for 'declin' is taken from Lucene's indexing of the word from the description elements.
The frequencies shown in the .arff file match on this basis, i.e. on the first line above, 'nasa', 'fish', 'kill' are not found in the description items of a particular feed in the BFE category for that day, but 'show' is found 34 times. Each line represents occurrences in the description items of a feed for a day for all 64 keywords.
So I think that the low frequencies are not due to stemming. Rather, I see it as the inevitable result of some keywords being popular in feeds of one category but not appearing in other feeds at all. Hence the sparseness shown in the results. Overly generic keywords may also be pertinent here.
The other possibilities are differences in the numbers of feeds per category (there are more feeds in categories like NCA than in S), or that the keyword selection process itself is at fault.
You don't mention anything about stemming. In my opinion, you could get better results if you performed word stemming and the WEKA evaluation were based on the keyword stems.
For example, suppose your WEKA model is built given the keyword 'surfing' and a new RSS feed contains the word 'surf'. There should be a match between these two words.
There are many freely available stemmers for several languages.
For the English language, some available options for stemming are:
Porter's stemmer
Stemming based on the WordNet dictionary
If you would like to perform stemming using the WordNet dictionary, there are libraries and frameworks that integrate with WordNet.
Below you can find some of them:
MIT Java WordNet interface (JWI)
RiTa
Java WordNet Library (JWNL)
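To illustrate the kind of match stemming enables, a quick sketch using NLTK's PorterStemmer (a freely available Python implementation of Porter's stemmer; the Java libraries above offer equivalents):

from nltk.stem import PorterStemmer  # pip install nltk

stemmer = PorterStemmer()

# "surfing" in the model and "surf" in a new feed reduce to the same stem,
# so the two can now be matched.
print(stemmer.stem("surfing"))   # surf
print(stemmer.stem("surf"))      # surf
print(stemmer.stem("declines"))  # declin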
EDITED after more information was provided
I believe that the key point in this case is the selection of the "most popular 64 words". The selected words or phrases should be keywords or keyphrases, so the challenge here is keyword/keyphrase extraction.
There are several books, papers and algorithms written about keyword/keyphrase extraction. The University of Waikato has implemented in Java a well-known algorithm called the Keyphrase Extraction Algorithm (KEA). KEA extracts keyphrases from text documents and can be used either for free indexing or for indexing with a controlled vocabulary. The implementation is distributed under the GNU General Public License.
Another issue that should be taken into consideration is part-of-speech (POS) tagging. Nouns contain more information than the other POS tags, so you may get better results if you check the POS tags and the selected 64 words are mostly nouns.
In addition, according to Anette Hulth's paper Improved Automatic Keyword Extraction Given More Linguistic Knowledge, her experiments showed that keywords/keyphrases mostly have, or are contained in, one of the following five patterns:
ADJECTIVE NOUN (singular or mass)
NOUN NOUN (both sing. or mass)
ADJECTIVE NOUN (plural)
NOUN (sing. or mass) NOUN (pl.)
NOUN (sing. or mass)
In conclusion, a simple action that in my opinion could improve your results is to find the POS tag for each word and select mostly nouns when evaluating the new RSS feeds. You can use WordNet to find the POS tag for each word and, as I mentioned above, there are many libraries on the web that integrate with the WordNet dictionary. Of course, stemming is also essential for the classification process and should be kept.
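As a small sketch of that noun filter, using NLTK's POS tagger rather than WordNet (the sentence is made up; the tagger models need to be downloaded once via nltk.download):

import nltk

sentence = "NASA launched a new satellite to monitor declining fish populations"
tokens = nltk.word_tokenize(sentence)

# Keep only nouns (tags NN, NNS, NNP, NNPS) as keyword candidates.
nouns = [word for word, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]
print(nouns)  # e.g. ['NASA', 'satellite', 'fish', 'populations']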
I hope this helps.
Try turning off stemming altogether. The Stanford Intro to IR authors provide a rough justification of why stemming hurts, and at the very least does not help, in text classification contexts.
I have tested stemming myself on a custom multinomial naive Bayes text classification tool (I get accuracies of 85%). I tried the 3 Lucene stemmers available from org.apache.lucene.analysis.en version 4.4.0, which are EnglishMinimalStemFilter, KStemFilter and PorterStemFilter, plus no stemming, and I did the tests on small and larger training document corpora. Stemming significantly degraded classification accuracy when the training corpus was small, and left accuracy unchanged for the larger corpus, which is consistent with the Intro to IR statements.
Some more things to try:
Why only 64 words? I would increase that number by a lot; preferably you would have no limit at all.
Try tf-idf (term frequency, inverse document frequency). What you're using now is just tf. If you multiply it by idf you can mitigate problems arising from common and uninformative words like "show". This is especially important given that you're using so few top words. See the sketch after this list.
Increase the size of the training corpus.
Try shingling to bi-grams, tri-grams, etc., and combinations of different n-grams (you're now using just unigrams).
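A minimal sketch of the tf-idf and n-gram points together, using scikit-learn just to show the idea (Weka's StringToWordVector filter has corresponding IDF-transform and n-gram tokenizer options):

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["nasa launches satellite", "fish stocks decline", "satellite shows fish decline"]

# tf-idf over unigrams + bigrams, with no cap on the vocabulary size
vec = TfidfVectorizer(ngram_range=(1, 2), max_features=None)
X = vec.fit_transform(docs)
print(len(vec.get_feature_names_out()), "features")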
There's a bunch of other knobs you could turn, but I would start with these. You should be able to do a lot better than 60%. 80% to 90% or better is common.