I am writing my thesis right now and I have the following problem: I have several lists. A list of figures, a list of algorithms, a list of listings, etc. Most of them are very short, but each of them takes up a whole page. So I have several lists that only contain a few entries each, and the rest of the page is blank.
How can I put two lists on one page?
I found an answer here:
http://www.cse.iitd.ernet.in/~anup/homepage/UNIX/latex.html#lotandlof
So I wrote something like this:
\begin{minipage}[b]{1\linewidth}
\listofalgorithms
\end{minipage}
\begin{minipage}[b]{1\linewidth}
\listoffigures
\end{minipage}
Now, if there is enough space, the two lists are placed on the same page.
Just write the lists one after the other. If enough space is available, they will be placed on the same page: lists don't insert page breaks by themselves.
You say this is for a thesis? Most thesis styles require each list to be on a separate page. I would first double-check your thesis style guide. Furthermore, many universities provide thesis class files (some_thesis_name.cls) that automatically follow the style guide. Check it out; you may save yourself a lot of time and worry.
I was messing around organizing my music files when I asked myself why neither Windows nor Linux offers a way to organize a folder by custom tags in a database-like manner rather than hierarchically.
The problem I wanted to solve is the following:
I have music files:
A, titled "tempest", by Beethoven, classical music, in a piano-only version.
B, titled "whatever", by Mozart, classical music, orchestral.
D, titled "one winged angel", by Uematsu, classical style, game OST, orchestral.
C, titled "one winged angel", same as before, but violin only, a cover by Taylor Davis.
And whatever "main" piece of information I use for grouping makes listing the files by any other category impossible.
Hence I wished to save the files in a hidden folder under a simple increasing-number format, and have a program in which I can add files, add categories, search by tags, and end up with a list of the files I want. E.g. today I want to listen to all piano-only pieces, independently of their composer's time period.
I started building a structure of vectors containing vectors (i.e. a matrix), but indexing rows and columns by strings started getting complicated when I wanted to remove a column.
And searching files by tag would require each tag to be an object that knows all the files using it, which starts to look more like a 3D matrix.
I thought it would be better to treat this as a database, and started with SQLite, but ran into the problem of being unable to remove columns (I know I can create a copy, etc., but I wanted to avoid messy workarounds).
Also, an SQL-like database wouldn't allow me to have an area dedicated to a list of arbitrary tags for each file without a fixed category.
Is there any existing library that, rather than working like an SQL database, offers something similar to a search/insert-optimized matrix of strings? I don't think I'm the first one to think about this; someone must have done something similar.
This is very similar to what I want to achieve (strictly speaking about functionality), but rather than having only a bunch of arbitrary tags, I'd like to have some categories AND a set of arbitrary tags.
The problem with arbitrary tags alone is that you can't use the same word when it refers to different things. For example, if the title of a piece is A and there is a film named A with a piece titled B, filtering on A in the mess of tags would return both, while with categories I could filter for pieces titled A. But the unstructured set of additional tags is useful too, for information you don't want to fill in for most of the files and that would take up pointless space in a standard database.
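A structure like the one described (fixed category fields plus a set of free tags, searchable on both) can be sketched in plain Python with inverted indexes, without any SQL. This is only an illustration under my own assumptions; the class name, field names, and example entries are made up:

```python
# Minimal sketch: files carry fixed fields (title, composer, ...) plus
# free-form tags; inverted indexes make both kinds of lookup cheap.
from collections import defaultdict

class MusicIndex:
    def __init__(self):
        self.files = {}                 # file_id -> {"fields": {...}, "tags": set}
        self.by_tag = defaultdict(set)  # tag -> set of file_ids
        self.by_field = defaultdict(lambda: defaultdict(set))  # field -> value -> ids
        self._next_id = 0               # the simple increasing number mentioned above

    def add(self, fields, tags=()):
        fid = self._next_id
        self._next_id += 1
        self.files[fid] = {"fields": dict(fields), "tags": set(tags)}
        for tag in tags:
            self.by_tag[tag].add(fid)
        for field, value in fields.items():
            self.by_field[field][value].add(fid)
        return fid

    def search(self, fields=None, tags=()):
        """Return ids matching all given field=value pairs and all tags."""
        candidates = set(self.files)
        for field, value in (fields or {}).items():
            candidates &= self.by_field[field][value]
        for tag in tags:
            candidates &= self.by_tag[tag]
        return candidates

idx = MusicIndex()
a = idx.add({"title": "tempest", "composer": "Beethoven"}, tags={"piano-only"})
b = idx.add({"title": "whatever", "composer": "Mozart"}, tags={"orchestral"})
c = idx.add({"title": "one winged angel", "composer": "Uematsu"},
            tags={"orchestral", "game-ost"})

print(idx.search(tags={"piano-only"}))                    # {0}
print(idx.search(fields={"title": "one winged angel"}))   # {2}
```

Adding or dropping a "column" here just means adding or removing a key from the per-file field dict, which avoids the SQLite ALTER TABLE limitation mentioned above.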
I have been unsuccessful at getting Rmarkdown to fold sections denoted by a #section_header. I know code folding is great, but my goal is to be able to compare results across a number of different analytic pipelines without having to scroll all over the place to find what I am looking for. If I could fold each iteration of my analyses neatly into a collapsible section, I would be able to easily compare any two pipelines while ignoring the rest. I've seen a bit posted about how to implement this in PDF output, but does anyone know how to implement something like this in HTML output?
Thanks in advance.
-N
You want to fold whole sections, not just code -- otherwise an HTML notebook would probably do it.
So take a look at this Gist, where I cobbled something together using a little bit of jQuery, probably based on this answer:
https://gist.github.com/flynn-d/b756e512f5be7f553aad007f0ac37220
How do you make a list double-spaced while continuing the numbering? For example:
This
Shouldn't
Be
Hard
If I add a line break, the numbering restarts from 1 again. Is there a way to make a whole numbered list double-spaced, or at least to manually add some line breaks without resetting the count?
What is the point of wiki markup? Why not just use HTML? HTML is at least as simple, if not simpler.
This works for me with Trac-1.0.1:
1. This [[BR]][[BR]]
1. Shouldn't [[BR]][[BR]]
1. Be [[BR]][[BR]]
1. Hard
I think it's because wiki markup is platform-, browser- and version-independent.
Complementing falkb's valuable answer to the first two of your three questions, here are some words on wiki markup:
I strongly disagree with your notion ('why not just use HTML?'), even more so after being forcibly switched from Trac to an MS SharePoint business application. My personal experience is:
Wiki markup is
simpler (and rather figurative/intuitive),
faster to type,
easier to review,
faster to copy-and-paste and adapt,
compared to equivalent HTML markup.
The most often used features, like headings and list markup, are a good example: very few extra characters in wiki markup, and just one additional character to add or remove to turn a 2nd-level heading into a 3rd-level one. After I got a grip on Trac wiki markup syntax, I started making plain-text notes in the same style outside of Trac's wiki editor too, for clear text structure with minimal effort.
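For instance, here is the same heading and list side by side in Trac wiki markup and in HTML (a schematic comparison; the heading text is an arbitrary example):

```
== Section ==              <h2>Section</h2>
=== Subsection ===         <h3>Subsection</h3>
 * first item              <ul><li>first item</li>
 * second item             <li>second item</li></ul>
```

Promoting or demoting a heading is one `=` per side in wiki markup, versus editing two tag names in HTML.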
Let's assume I am visiting a university web page. There are many teacher profiles there. Though these pages are not syntactically related, they are semantically related. How can I measure this type of relationship? On which parameters should I focus to find the relation?
This SO post answers how to compute semantic similarity between phrases.
In your case you just need to represent the different pages as documents and follow the same approach.
In your case you can also exploit more information, such as the links between pages or publications (in the case of researchers). I hope the link helps a bit...
Here is a simple but very good algorithm:
Surely the page for each teacher, and the pages it links to, contain text that semantically characterizes that professor.
Suppose you create a set of words composed of the concatenation of the text on the professor's page and on the linked pages (you can keep concatenating text by following links up to an arbitrary depth).
Now, you can cluster professors on the basis of the extracted information using a vector space model:
each professor is represented by a vector whose components are the words contained in the extracted pages, with values given by their term frequency.
Cosine similarity will do the rest of the job.
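The term-frequency vectors and cosine similarity described above can be sketched in a few lines of Python. The professor texts here are made-up stand-ins for the concatenated page contents:

```python
# Each "professor" is a bag of words with term frequencies; similarity
# is the cosine of the angle between two such frequency vectors.
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector: word -> count."""
    return Counter(text.lower().split())

def cosine_similarity(v1, v2):
    dot = sum(v1[w] * v2[w] for w in v1.keys() & v2.keys())
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

# Hypothetical extracted page texts for three professors:
prof_a = tf_vector("machine learning neural networks deep learning")
prof_b = tf_vector("neural networks optimization learning theory")
prof_c = tf_vector("medieval history archives manuscripts")

print(cosine_similarity(prof_a, prof_b))  # high: shared vocabulary
print(cosine_similarity(prof_a, prof_c))  # 0.0: no words in common
```

Clustering then amounts to grouping professors whose pairwise cosine similarity exceeds some threshold, or feeding these vectors to any standard clustering algorithm.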
It has been my nightmare for the last 4 weeks:
I can't come up with a solution for a "related posts" app in Django/Python that takes the user's input and returns a related post closely matching the original input. I've tried using LIKE statements, but they don't seem sensitive enough.
I also need typos to be taken into consideration.
Is there a library that could save me from all this pain and suffering?
Well, I suppose there are a few different ways to normalize the user input to produce desirable results (although I'm not sure to what extent libraries exist for them). One of the easiest ways to get related posts is to compare the tags present on the post (assuming your posts have tags). If you want to go another route, I would take the following steps: remove stop words from the subject, run some kind of stemmer on the remainder, and finally treat the remaining words as "tags" to compare with other posts. For the sake of efficiency, it would probably be a good idea to run these steps in a batch process over all of your current posts and store the resulting "tags". As for typos, I'm sure a multitude of spelling-corrector libraries exist (I found this one after a few seconds with Google).
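The steps above (stop-word removal, crude stemming, fuzzy tag overlap) can be sketched with only the standard library; `difflib` gives some tolerance for typos. The stop-word list, suffix list, and example posts are all made-up illustrations, not a real library's API:

```python
# Rough pipeline sketch: normalize titles into "tags", then score
# relatedness by fuzzy tag overlap (difflib absorbs small typos).
import difflib

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "for",
              "and", "to", "how", "with", "at"}

def stem(word):
    """Very crude stemmer: trim a few common English suffixes."""
    for suffix in ("ing", "ers", "er", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def extract_tags(title):
    words = [w.strip(".,?!").lower() for w in title.split()]
    return {stem(w) for w in words if w and w not in STOP_WORDS}

def similarity(tags_a, tags_b):
    """Fraction of tags_a with a fuzzy match in tags_b (cutoff=0.8)."""
    hits = sum(
        1 for t in tags_a
        if difflib.get_close_matches(t, list(tags_b), n=1, cutoff=0.8)
    )
    return hits / max(len(tags_a), 1)

posts = {
    1: "How to deploy Django on Apache",
    2: "Deploying Djagno with Apache and mod_wsgi",  # note the typo
    3: "Baking sourdough bread at home",
}

query = extract_tags("django apache deployment")
scores = {pid: similarity(query, extract_tags(t)) for pid, t in posts.items()}
print(scores)  # the two Django posts outscore the bread post
```

In a real Django app you would precompute and store the tag set per post (e.g. at save time) and only run the fuzzy comparison against that stored data, as suggested above.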