I am writing my thesis in R Markdown and want to change the line spacing to 18 pt, but I can't find a solution for it. I have only found settings for onehalf or 1.5 spacing, but nothing for an exact, arbitrary point value.
If your output format is PDF, you can add
header-includes:
- \usepackage{leading}
- \leading{18pt}
to your YAML headers. Or just add the LaTeX code to a preamble file you are including anyway.
linestretch is the YAML directive for that, with all the power and caveats that \setstretch brings. Note that it takes a stretch factor (e.g. linestretch: 1.5) rather than a physical size, so if you are looking for exactly 18 pt, the leading approach above is the more direct route.
I want to search for all lines that match this regex
^([0-9IVX]\.)*.*\R
and report them together with the page number they appear on. The output would be something like:
1. Heading/page number
1.1 Subheading/page number
1.1.1. Subsubheading/page number
Is this possible to do with a PDF? I suppose it would require Ghostscript, but searching the How to Use Ghostscript page for "regex" turns up nothing.
I can't think why you would expect Ghostscript to do the searching for you.
I'm not sure whether you are hoping to get data tagged as 'heading', 'page number', etc. from the PDF file, or whether you are going to work that out yourself based on the text you find.
If it's the former, then the first problem is that, in general, PDF files don't have the kind of structure information you are looking for. There is nothing in most PDF files which says 'this is a heading', 'this is a page number' etc.
There is such a thing as 'tagged PDF', which adds non-printing elements to a PDF file that do carry that kind of data around with them. It is an entirely optional feature, the vast majority of PDF files don't contain it, and Ghostscript completely ignores it.
Since most PDF files don't have that information, you can't rely on it, unless you are in the happy position of knowing where your PDF files are being generated and that they contain this kind of information. In which case there are numerous tools around which will extract it for you, or enable you to write code to do so.
The problem with just searching for the text is that, firstly, the text need not be written as a contiguous stream. So if you are looking for '1.1', it might be written as:
(1.1) Tj
(1) Tj
(.) Tj
(1) Tj
[(1) -0.1 (.) 0.1 (1)] TJ
or any combination of those. The individual character codes need not even appear in order or in the same content stream.
Secondly, the character code in a PDF content stream need not be (and often is not) a Unicode code point, or ASCII, or any other standard coding scheme; it can be totally arbitrary.
Some PDF files carry a ToUnicode CMap around which maps the character codes to Unicode code points, but not all do. Some fonts may use a standard (that's PDF standard) Encoding, in which case it's possible to infer the Unicode code points. Some Encodings may contain glyph names, from which it's again possible to infer Unicode code points.
In the end though, some PDF files are simply impossible to extract text from without using OCR.
Your best bet is probably to write code to extract the text, and Ghostscript will do that. It even goes through the hierarchy of fallbacks listed above to try to find a Unicode code point. If all else fails, it just uses the character code and hopes that's good enough.
If you use Ghostscript's txtwrite device, it will produce either a faked-up text page (the default), which attempts, as far as possible, to mimic the text layout of the original PDF file, including merging bits of text that aren't contiguous in the PDF file but sit next to each other on the page, or an 'XML-like' output which tells you which Unicode code points (or character codes) were encountered and where they sit on the original page. If you don't like txtwrite's attempt to figure out which text goes with what, you can use the latter to write your own.
I suspect the text page is probably good enough for your purposes. You can have the txtwrite device produce one file per page, so you can get the page number from the filename. Then you can write your own regex expression(s) to search the files and find your matches.
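For illustration only, here is a rough Python sketch of that pipeline. It assumes Ghostscript is available on the PATH as gs, the input file name is a placeholder, and the heading pattern is loosely adapted from the question's regex (Python's re module does not support \R, so each line is matched on its own):

import re
import subprocess
from pathlib import Path

pdf = "thesis.pdf"  # placeholder input file
outdir = Path("pages")
outdir.mkdir(exist_ok=True)

# One text file per page; Ghostscript expands %d to the page number.
subprocess.run(
    ["gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=txtwrite",
     f"-sOutputFile={outdir}/page-%d.txt", pdf],
    check=True,
)

# Heading-like lines: one or more "1." / "IV." style prefixes at the start.
heading = re.compile(r"^([0-9IVX]+\.)+")

for page_file in sorted(outdir.glob("page-*.txt"),
                        key=lambda p: int(p.stem.split("-")[1])):
    page_number = int(page_file.stem.split("-")[1])
    for line in page_file.read_text(encoding="utf-8", errors="replace").splitlines():
        if heading.match(line.strip()):
            print(f"{line.strip()}/{page_number}")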
I have a large text file that I'm going to be working with programmatically, but I have run into problems with a special character strewn throughout the file. The file is far too large to scan manually for specific characters. Most of the other unwanted special characters I've been able to get rid of using regex patterns, but there is a box character, similar to "□". When I tried to copy the character from the actual text file and paste it here, I got "�", so the example of the box is from the Windows Character Map, which gives the code 'U+25A1'; I'm not sure how to interpret that or whether it's something I could use in a regex search.
Would anyone know how I could search for the box symbol similar to "□" in a UTF-8 encoded file?
EDIT:
Here is an example from the text file:
"� Prune palms when flower spathes show, or delay pruning until after the palm has finished flowering, to prevent infestation of palm flower caterpillars. Leave the top five rows."
The only problem is that, as mentioned in the original post, the square gets converted into a diamond question mark.
It's unclear where and how you are searching, although you could use the hex equivalent:
\x{25A1}
Example:
https://regex101.com/r/b84oBs/1
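If the search ends up happening in a script rather than in an editor, the same code point can be used directly. A minimal Python sketch (the file name is a placeholder):

import re

pattern = re.compile("\u25A1")  # WHITE SQUARE, the same code point as \x{25A1}

with open("large_file.txt", encoding="utf-8") as f:
    for line_number, line in enumerate(f, start=1):
        if pattern.search(line):
            print(f"line {line_number}: {line.rstrip()}")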
The black diamond with a question mark is not a character, per se. It is what a browser spits out at you when you give it unrecognizable bytes.
Find out where that data is coming from.
Determine its encoding. (Usually UTF-8, but might be something else.)
Be sure the browser is configured to display that encoding. Adding <meta charset=UTF-8> in the header of the page is likely to suffice.
I found a workaround using Notepad++ and this website. It's still not clear what encoding the square originally comes from, but when I paste it into the query field on the website above, or into the Notepad++ Conversion Table (Plugins > Converter > Conversion Table), it gives the hex character code for the "Replacement Character", which is the diamond with the question mark.
Using that code in a regex, \x{FFFD}, in a Notepad++ search found all the squares, although it treats them as the Replacement Character.
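As a cross-check outside Notepad++, a short Python sketch (file name is a placeholder) can report whether the file really contains U+25A1 or U+FFFD, i.e. whether the squares are genuine characters or decoding casualties:

from collections import Counter

# Note: errors="replace" turns undecodable bytes into U+FFFD as well, so a
# non-zero U+FFFD count can mean literal replacement characters or bad bytes.
with open("large_file.txt", encoding="utf-8", errors="replace") as f:
    counts = Counter(f.read())

print("U+25A1 WHITE SQUARE:         ", counts["\u25A1"])
print("U+FFFD REPLACEMENT CHARACTER:", counts["\uFFFD"])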
I've been using Doxygen to document my project, but I've run into some problems.
My documentation is written in a language in which apostrophes are often used. Although my language config parameter is properly set, when Doxygen generates the HTML output it can't handle the apostrophes, so the character code is shown instead of the correct character.
So, in the HTML documentation:
This should be the text: Vector d'Individus
But instead, it shows this: Vector d&#39;Individus
That's strange, but searching the code in the HTML file, I found that what happens is that, instead of using an ampersand to write the &#39; code, it uses the &amp; code, so the entity is escaped twice. It's easier to see in the code:
<div class="ttdoc">Vector d&amp;#39;Individus ... </div>
One other thing to note is that this only happens with the text inside tooltips...
But not in other places (same code, same class)...
What can I do to solve this?
Thanks!
Apostrophes in code comments must be encoded with the correct glyph for doxygen to parse them correctly. This seems particularly true for the SOURCE_TOOLTIPS popups. The correct glyph is \u2019, standing for RIGHT SINGLE QUOTATION MARK. If the keyboard you are using does not provide this glyph, you can write a temporary symbol (e.g. ') and batch-replace it afterwards with a Unicode-capable auxiliary tool, for example: perl -pC -e "s/'/\x{2019}/g" < infile > outfile. Hope it helps.
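For what it's worth, the same batch replacement could also be sketched in Python if perl is not at hand (file names are placeholders):

# Replace ASCII apostrophes with U+2019 RIGHT SINGLE QUOTATION MARK,
# mirroring the perl one-liner above.
with open("infile", encoding="utf-8") as src, \
     open("outfile", "w", encoding="utf-8") as dst:
    dst.write(src.read().replace("'", "\u2019"))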
Regarding the answer from ramkinobit: this is not necessary; for the right single quote, doxygen can use e.g. &rsquo; (see the doxygen documentation chapter "HTML commands").
Regarding the apostrophe the OP asks about, one can use (the doxygen extension) &apos; (see also the doxygen documentation chapter "HTML commands").
There was a double 'HTML escape' in doxygen resulting in the observed behavior for the single quote, i.e. displaying &#39;.
I've just pushed a proposed patch to github (pull request 784, https://github.com/doxygen/doxygen/pull/784).
EDIT 07/07/2018: the (alternative) patch has been integrated into the main branch on GitHub.
Is there a Markdown equivalent to LaTeX's description environment, which gives an author the ability to generate lists where the first word of each item is boldfaced?
The LaTeX source would be:
\begin{description}
\item[First] The first item
\item[Second] The second item
\item[Third] The third etc \ldots
\end{description}
I'd like to replicate this in Markdown so that if I run the Markdown through a tool such as pandoc I will get similar results.
Markdown isn't as comprehensive as LaTeX, or even HTML:
Markdown’s syntax is intended for one purpose: to be used as a format for writing for the web.
Markdown is not a replacement for HTML, or even close to it. Its syntax is very small, corresponding only to a very small subset of HTML tags.
It supports bulleted and numbered lists, but not definition lists.
However, you mention using "a tool such as pandoc" to process your input file. Pandoc supports inline LaTeX using the raw_tex extension:
Extension: raw_tex
In addition to raw HTML, pandoc allows raw LaTeX, TeX, and ConTeXt to be included in a document. Inline TeX commands will be preserved and passed unchanged to the LaTeX and ConTeXt writers. Thus, for example, you can use LaTeX to include BibTeX citations:
This result was proved in \cite{jones.1967}.
Note that in LaTeX environments, like
\begin{tabular}{|l|l|}\hline
Age & Frequency \\ \hline
18--25 & 15 \\
26--35 & 33 \\
36--45 & 22 \\ \hline
\end{tabular}
the material between the begin and end tags will be interpreted as raw LaTeX, not as markdown.
Inline LaTeX is ignored in output formats other than Markdown, LaTeX, and ConTeXt.
So simply including your LaTeX description environment and running Pandoc with the raw_tex extension should give you the result you want.
You may have to specify the source format manually, e.g. by using something like -f markdown_strict+raw_tex in your command.
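For instance, the conversion could be driven from Python like this (a sketch only; the file names are placeholders and the -f value is the one suggested above):

import subprocess

# Convert a Markdown file containing a raw LaTeX description environment.
subprocess.run(
    ["pandoc", "-f", "markdown_strict+raw_tex", "-o", "notes.pdf", "notes.md"],
    check=True,
)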
If you are using the latest version of Pandoc (ver 2.0+), you can use the definition list syntax, as shown below. This is much easier.
First
: The first item
Second
: The second item
Third
: The third etc
I'm looking for the best way to search for all occurrences of a string and replace them with something derived from the line number.
For example, line 857 must become:
z-index: 96;
where 96 = (line number + 7) / 9.
And line 848 must become:
z-index: 95;
where 95 = (line number + 7) / 9.
sample screenshot of my document
If you are willing to switch halfway through to some kind of spreadsheet application, read on.
Using TextWrangler (4.5.11) with grep enabled for searches:
Start by adding line numbers using TextWrangler's Text -> Add/Remove Line Numbers.
Search for ^[[:digit:]]+ (?!z-index)(.*)$ and replace with \t\1.
Search for ^([[:digit:]]+) and replace with =(\1+7)/9\t.
Mark everything and copy.
Sorry - no clue how to calculate inside TextWrangler. Thus:
Paste into some spreadsheet application. (Tested with Google Sheets and Excel; the formula might need adjusting for other software.)
Wait for the formulas to be calculated.
Mark everything and copy.
Go back to TextWrangler. Paste replacing original selection.
Verify that the results of the calculations persist (i.e. values were pasted, not formulas).
Search for ^([[:digit:]]+)\t(\s+z-index: )0; and replace with \2\1.
Verify the file's content (and provide more detail if the problem still isn't fully addressed; a larger chunk of sample data would facilitate proper testing…).
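If switching applications halfway through is more trouble than it's worth, the same idea can be sketched directly in Python (assuming, as in the steps above, that the lines to change currently read "z-index: 0;"; the file name is a placeholder):

import re

# (line_number + 7) / 9 gives 96 for line 857 and 95 for line 848,
# matching the examples in the question.
with open("styles.css", encoding="utf-8") as f:
    lines = f.readlines()

with open("styles.css", "w", encoding="utf-8") as f:
    for number, line in enumerate(lines, start=1):
        f.write(re.sub(r"z-index:\s*0;",
                       f"z-index: {(number + 7) // 9};",
                       line))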