How do you make a list double-spaced? - wiki

How do you make a list double-spaced and continue the numbering? For example:
1. This
2. Shouldn't
3. Be
4. Hard
If I add a line break, the numbering restarts from 1 again. Is there a way to make a whole numbered list double-spaced, or at least to manually add some line breaks without resetting the count?
What is the point of wiki markup? Why not just use HTML? It's at least as simple, if not simpler.

This works for me with Trac-1.0.1:
1. This [[BR]][[BR]]
1. Shouldn't [[BR]][[BR]]
1. Be [[BR]][[BR]]
1. Hard
I think it's because wiki markup is platform-, browser- and version-independent.

Complementing falkb's valuable answer on the first two of your three questions, here are some words on wiki markup:
I strongly disagree with your notion ('why not just use HTML?'), even more so after being forcefully switched from Trac to an MS SharePoint business application. My personal experience is:
Wiki markup is
simpler (and rather figurative/intuitive)
faster to type
easier to review
faster to copy-and-paste and adapt
compared to equivalent HTML markup.
The most often used features, like headings and list markup, are a good example: very few extra characters in wiki markup, and just one additional character to add or remove to turn a 2nd-level heading into a 3rd-level one. After I got a grip on Trac wiki markup syntax, I started making plaintext notes outside of Trac's wiki editor in the same style too, for clear text structure with minimal effort.
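For example, demoting a 2nd-level heading to a 3rd-level one in Trac wiki markup is essentially a one-character edit (the trailing '=' are optional in Trac):
== My heading ==
=== My heading ===
versus editing <h2>My heading</h2> into <h3>My heading</h3> in HTML, where the tag must change in two places.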

How do I fix the margins on PDFs created by Sphinx easily?

I have a Django project where I used Sphinx to create my documentation. I went through sphinx-apidoc and ran 'make latexpdf'. The resulting documentation has quite a few lines that flow out of the margin. On top of the margin issues, lines in the index start overflowing onto each other.
(screenshots in the original post: overflowing lines in the index; margin issues)
Is there an easy way to fix these issues (or an easier way to create PDF documentation)?
ELI5 if possible (I'm not well-versed in LaTeX)
The overflowing lines in the index should improve after adding this to conf.py:
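# in conf.py: typeset the index \footnotesize and ragged-right, so long entries wrap instead of overflowing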
latex_elements = {
'printindex': '\\footnotesize\\raggedright\\printindex',
}
Or, you can switch to the Japanese language, which does something like that (even better) out of the box via its special document class ;-)
TeX does not always know by itself how to insert line breaks: after all, it is good at hyphenating natural language, not code. But as pointed out in the comments, since 1.4.2 Sphinx coerces LaTeX into handling long code lines better.
Since 1.5.3, users can customize page margins; see http://www.sphinx-doc.org/en/stable/latex.html#the-sphinx-latex-style-package-options for documentation of hmargin and vmargin, which can be configured via 'sphinxsetup' (a sketch follows).
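For example, a minimal conf.py sketch (the 1in values are placeholders, not recommendations):
# in conf.py, with Sphinx >= 1.5.3: set page margins via the 'sphinxsetup' key
latex_elements = {
    'sphinxsetup': 'hmargin={1in,1in}, vmargin={1in,1in}',
}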

LaTeX, having two lists of something on one page

I am writing my thesis right now and I have the following problem: I have a couple of lists. List of figures, list of algorithms, list of listings... etc. Most of them are very short, but each of them takes a whole page. So I have a couple of lists which each list only a few things, and the rest of the page is blank.
How can I put two lists on one page?
I have found the answer here:
http://www.cse.iitd.ernet.in/~anup/homepage/UNIX/latex.html#lotandlof
So I wrote something like this:
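% each minipage is an unbreakable box, so two short lists can share one page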
\begin{minipage}[b]{\linewidth}
\listofalgorithms
\end{minipage}
\begin{minipage}[b]{\linewidth}
\listoffigures
\end{minipage}
Now, if there is enough space, the two lists can be put on one page.
Just write the lists one after the other. If enough space is available, they will be put on the same page: lists don’t include page breaks.
You say this is for a thesis? Most thesis styles require each list to be on a separate page. I would first double-check your thesis style guide. Further, many universities provide thesis class files (some_thesis_name.cls) which automatically follow the style guide. Check it out; you may save a lot of time and worry.

How should I migrate a site from ZWiki to MediaWiki?

I have a fairly extensive wiki on ZWiki on Zope (in turn on Plone). Most pages are in reStructuredText format, but there are several in straight HTML as well.
What is the best approach to migrate those pages over to a MediaWiki wiki with pages converted to MediaWiki and HTML formats? Of course I'd like to automagically convert all links (internal and external).
1. Extract your wiki content to files, using the zwikiexport.py script. The command will be something like:
ZOPE/bin/zopectl run ZOPE/Products/ZWiki/bin/zwikiexport.py /zodb/path/to/wiki/folder
2. Convert the reStructuredText markup to MediaWiki markup. pandoc should work well; for each wiki page, run something like (see the batch sketch after this list):
pandoc -r rst -w mediawiki PAGE.rst >PAGE.mw
3. Convert the wiki links, which pandoc doesn't know about. Depending on your content, this may be the hardest part to do accurately. Write a Perl script, or modify the zwikiexport script, using ZWiki's knowledge of where the links are (see methods in ZWikiPage.py).
4. Import the MediaWiki-format pages into MediaWiki, however that's done.
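A rough batch sketch for steps 2 and 3, assuming the export produced a tree of .rst files and that internal links are bare CamelCase wiki names - both assumptions about your content, so adjust accordingly:
# batch_convert.py - hypothetical helper, not part of ZWiki or pandoc
import pathlib
import re
import subprocess

EXPORT_DIR = pathlib.Path("wiki_export")  # wherever zwikiexport.py wrote the pages

for rst in EXPORT_DIR.rglob("*.rst"):
    mw = rst.with_suffix(".mw")
    # step 2: let pandoc translate reStructuredText to MediaWiki markup
    subprocess.run(
        ["pandoc", "-r", "rst", "-w", "mediawiki", str(rst), "-o", str(mw)],
        check=True,
    )
    # step 3 (crude): wrap bare CamelCase wiki names in [[...]] links;
    # this pattern will produce false positives, so review the output
    text = mw.read_text()
    text = re.sub(r"\b([A-Z][a-z0-9]+(?:[A-Z][a-z0-9]+)+)\b", r"[[\1]]", text)
    mw.write_text(text)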
Refinements:
The exported file tree will reflect your ZWiki page hierarchy - if you use this heavily, you'll want to think about how to represent it in MediaWiki.
As Mark says, you'll lose the page history, unless you work extra hard to find a way to replicate it. The same goes for all page metadata you may have been using (you can inspect most of this metadata in the page's Properties tab in the Zope management interface). In particular, the page creation time, last edit time, and the usernames of the page's creator and last editor are quite important for understanding your content. So I would try to script some way of preserving those, or, if all else fails, do it by hand.
If you have uploaded files to the wiki, I think the export script might save those too; otherwise use the ZMI to export/save them. When you import them to MediaWiki, you may need to choose a page to attach them to. You could use grep or ZWiki's search to find the pages that reference a particular file.
Be prepared to iterate, testing the results pretty thoroughly and refining the process, before you declare victory. After that, the content will diverge and you won't want to re-do this.
Manual fixups: at some point, it may be cheaper to stop fiddling with scripts and do the remaining cleanup by hand, by yourself or with an army of helpers.
Good luck! - Simon
http://zwiki.org
I've no experience of ZWiki and don't know how large your wiki is. But as general advice: you can use find/replace in Notepad or Notepad++, or you can write a macro in Excel.
This is per-page copying, which is only really suitable if your wiki is no larger than, say, 1000 pages.
I suspect you'll still have to check each page manually, though, and update your scripts accordingly.
Good luck with it - I suspect you'll be pleased with the final result because MediaWiki is pretty awesome.
Update: one disadvantage of moving to a new wiki is that you will lose the page history (i.e. who wrote what, when).

Is there a way to build an easy related posts app in django

It seems to be my nightmare for the last 4 weeks:
I can't come up with a solution for a "related posts" app in Django/Python that takes the user's input and returns a related post that closely matches the original input. I've tried using LIKE statements, but it seems that they are not sensitive enough.
I also need typos to be taken into consideration.
Is there a library that could save me from all my pain and suffering?
Well, I suppose there are a few different ways to normalize the user input to produce desirable results (although I'm not sure to what extent libraries exist for them). One of the easiest ways to get related posts would be to compare the tags present on that post (granted your posts have tags). If you wanted to go another route, I would take the following steps: remove stop words from the subject, use some kind of stemmer on the remainder, and finally treat the remaining words as "tags" to compare with other posts. For the sake of efficiency, it would probably be a good idea to run these steps in a batch process on all of your current posts and store the resulting "tags". As for typos, I'm sure a multitude of spelling-corrector libraries exist (I found this one after a few seconds with Google).
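A minimal sketch of those steps, assuming NLTK is installed with its 'stopwords' corpus downloaded (the function names and the Jaccard scoring are my own illustration, not a library API):
# related_sketch.py - illustrative only
from nltk.corpus import stopwords   # pip install nltk; nltk.download('stopwords')
from nltk.stem import PorterStemmer

STOP = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def extract_tags(text):
    """Lowercase, strip punctuation, drop stop words, stem the rest."""
    words = (w.strip(".,!?;:()\"'") for w in text.lower().split())
    return {STEMMER.stem(w) for w in words if w and w not in STOP}

def related_posts(query, posts, limit=5):
    """Rank posts by Jaccard overlap between their tag sets and the query's."""
    q = extract_tags(query)
    scored = []
    for post in posts:
        t = extract_tags(post)
        union = q | t
        score = len(q & t) / len(union) if union else 0.0
        scored.append((score, post))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:limit] if score > 0]
In a Django app you would run extract_tags() over every post in that batch process and store the result in a field, rather than re-stemming on each request; a spelling corrector could normalize the query words against your tag vocabulary before the overlap is computed.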

HTML renderer with limited resources (good memory management)

I'm creating a Linux program in C++ for a portable device, in order to render HTML files.
The problem is that the device has limited RAM, which makes it impossible to open big files (with existing software).
One solution is to dynamically load/unload parts of the file, but I'm not sure how to implement that.
The ability to scroll is a must, with a smooth experience if possible.
I would like to hear from you: what is the best approach for such a situation?
You can suggest an algorithm, an open-source project to take a look at, or a library that supports what I'm trying to do (WebKit?).
EDIT:
I'm writing an ebook reader, so I just need pure HTML rendering; no JavaScript, no CSS, ...
To be able to browse a tree document (like HTML) without fully loading it, you'll have to make a few assumptions - like the document being an actual tree. So, don't bother checking close tags; close tags are designed for human consumption anyway, computers would be happy with <> too.
The first step is to assume that the first part of your rendered output is determined by the first part of your document. That sounds like a tautology, but with "modern" HTML and certainly JS this is technically no longer true. Still, if any line of HTML can affect any pixel, you simply cannot partially load a page.
So, if there's a simple relation between position in the HTML file and pages on screen, the next step is to define the parse state at the end of each page. This will include a single file offset, probably (but not necessarily) at the end of a paragraph. Also part of this state is the stack of open tags.
To make paging easier, it's smart to keep this "page boundary" state for each page you've encountered so far. This makes paging back easy.
Now, when rendering a new page, the previous page boundary state will give you the initial rendering state. You simply read HTML and render it element by element until you overflow a single page. You then backtrack a bit and determine the new page boundary state.
Smooth scrolling is basically a matter of rendering two adjacent pages and showing x% of the first and 100-x% of the second. Once you've implemented this bit, it may become smart to finish a paragraph when rendering each page. This will give you slightly different page lengths, but you don't have to deal with broken paragraphs, and that in turn makes your page boundary state a bit smaller.
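A toy sketch of that page-boundary state (in Python for brevity, though the target is C++; "elements" here are just words, standing in for real parsed HTML):
# paging_sketch.py - illustrative only
from dataclasses import dataclass, field

@dataclass
class PageBoundary:
    offset: int                                    # position where this page starts
    open_tags: list = field(default_factory=list)  # stack of tags still open here

WORDS_PER_PAGE = 50  # stand-in for "render elements until the page overflows"

def render_page(words, start):
    """'Render' one page; return its content plus the boundary for the next page."""
    end = min(start.offset + WORDS_PER_PAGE, len(words))
    # a real renderer would also push/pop open_tags as tags open and close
    return words[start.offset:end], PageBoundary(end, list(start.open_tags))

# keep every boundary encountered so far: paging backwards is then just a lookup
words = ("lorem ipsum dolor " * 200).split()
boundaries = [PageBoundary(0)]
page, nxt = render_page(words, boundaries[-1])
boundaries.append(nxt)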
Dillo is the lightest weight Linux web browser that I'm aware of.
Edit: If it (or its rendering component) won't meet your needs, then you might find Wikipedia's list of layout engines and comparison of layout engines helpful.
Edit 2: I suspect that dynamically loading and unloading parts of an HTML file would be tricky; for example, how would you know that a randomly chosen chunk of the file isn't in the middle of a tag? You'd probably have to use something like SAX to parse the file into an intermediate representation, saving discrete chunks of the intermediate representation to persistent storage so that they won't take up too much RAM. Or you could parse the file with SAX to show whatever fits in RAM at once, then re-parse it whenever the user scrolls too far. (Stylesheets and JavaScript would ruin this approach; some plain HTML might too.) If it were me, I'd try to find a simple markup language or some kind of rich-text viewer rather than going to all of that trouble.
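If you did go the re-parse-on-scroll route, a minimal sketch with Python's incremental html.parser shows the shape of it (a C++ reader would use a streaming parser such as expat the same way; the file name and window size are made up):
# stream_sketch.py - feed HTML in chunks, keep only a bounded window of text
from html.parser import HTMLParser

class WindowParser(HTMLParser):
    """Collects document text but caps how much stays in memory."""
    def __init__(self, max_chars=4096):
        super().__init__()
        self.window = []
        self.size = 0
        self.max_chars = max_chars

    def handle_data(self, data):
        self.window.append(data)
        self.size += len(data)
        while self.size > self.max_chars and self.window:
            self.size -= len(self.window.pop(0))  # evict the oldest text first

parser = WindowParser()
with open("book.html", encoding="utf-8") as f:    # hypothetical input file
    for chunk in iter(lambda: f.read(8192), ""):  # read 8 KB at a time
        parser.feed(chunk)                        # parser state survives across chunks
parser.close()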