Publishing toolchain - XSLT

I have a book project which I'd like to start sooner rather than later. It would follow an agile-like publishing workflow, i.e. publish early and often. It is meant to be self-published by me, and I'm not really looking to paper-publish it, even though you never know.
If I weren't a geek, I'd probably have already started writing in Word or some other WYSIWYG tool and just exported to PDF. However, we know that's not the best solution, and Emacs rules my text-editing life, so the format I write in should be as simple as possible and text-based.
I've thought about the following options:
Just use org-mode and export to PDF (org-mode has this feature natively);
Use markdown-mode and export to PDF (markdown -> LaTeX -> PDF should not be hard to set up);
Use something similar to what the folks at the Pragmatic Programmers do: XML + XSLT + LaTeX. More complex, but much more control over the style.
EDIT: Someone just told me that he uses a combo of Textile + Adobe InDesign and the XTags plugin. Not sure how they are glued together though; gotta do some research.
Any other ideas / references?
I want to start writing as soon as possible. In fact, I already have a draft in an org-formatted file. However, I do want to have and use the full power of LaTeX later on to format it the way I want and make it look fabulous :)
Thanks in advance,
Marcelo.

I have done a TON of research on this lately, since I'm planning on starting my own small press soon.
It really depends on what you want your final output to be (PDF, HTML, other?), and what the book is about.
Org mode is great, as I'm sure you know, because it expands as you do. I often write my outlines in org mode, then just fill in the body text when I'm really ready to start writing.
IF it's prose, and you just need some simple divisions (chapters and sections and not much else), org mode -> LaTeX should do you just fine. Then you also have the possibility of org mode -> HTML.
IF you need math in it, you can just write the math right in the org mode file.
If it's really, really technical information, DocBook might be nice (Emacs + nxml), then DocBook 4.5 -> Jade -> JadeTeX -> PDF.
I'd stay away from DocBook 5, because it uses FOP to generate PDFs, and the typesetting is really inferior to LaTeX.
BOTTOM LINE: If you want a PDF, use org -> LaTeX, the path of least resistance ;) Whatever you do, concentrate on the content of the book first, and worry about what it looks like afterwards.
And why not paper publish? Have you looked at lulu.com? I recently formatted a book with latex, uploaded the pdf to lulu, and had them print it. The quality is pretty good, and definitely worth a look. I have a ton of bookmarks at home about publishing in general, if you're interested.

Typography is hard.
TeX/LaTeX are tools that can get you the best possible results, but they require knowledge of typography to be used correctly, especially with a big document like a book. And I haven't seen any other cheap (i.e. not-for-professional-use) software that does these things correctly automatically. (I haven't seen any professional software, so it is possible they don't do it either.)
However, assuming you'll write your book in some machine-readable format, getting it into TeX/LaTeX should not be very hard: I once had a set of documents in a custom XML format, and proper use of XSLT, TeXML and LaTeX gave me something I could tweak manually (and this tweaking was necessary!) to get the best possible result.
My advice: prepare the content in something that is easy to parse and easy to write in. I'd dismiss XML; Markdown seems to be a good choice. This will also allow you to show your work quickly. Then, if you decide to make the result better, write a simple script to translate it to TeX (it is not that hard to get basic functionality); a minimal sketch of such a script follows below. Fix things by hand from there. This might actually be a good exercise for learning TeX.
Don't try to get everything right from the beginning. First get the content, then play with formatting.
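To make the "simple script" idea concrete, here is a minimal sketch in Python (any scripting language would do) that translates a tiny Markdown subset into LaTeX. It only handles ATX headings, bold and italics; the heading-to-sectioning mapping and the md2tex.py file name are assumptions you would adapt to your own book.

#!/usr/bin/env python3
# Minimal, illustrative Markdown-subset -> LaTeX translator.
# Handles only '#' headings, **bold** and *italic*; everything else passes through.
import re
import sys

PREAMBLE = "\\documentclass{book}\n\\begin{document}\n"
POSTAMBLE = "\n\\end{document}\n"

def md_to_tex(text):
    out = []
    for line in text.splitlines():
        heading = re.match(r'^(#+)\s+(.*)$', line)
        if heading:
            # '# Title' -> \chapter{Title}, '## Title' -> \section{Title}, deeper -> \subsection
            level = len(heading.group(1))
            command = {1: 'chapter', 2: 'section'}.get(level, 'subsection')
            line = '\\%s{%s}' % (command, heading.group(2))
        else:
            line = re.sub(r'\*\*(.+?)\*\*', r'\\textbf{\1}', line)  # **bold**
            line = re.sub(r'\*(.+?)\*', r'\\emph{\1}', line)        # *italic*
        out.append(line)
    return PREAMBLE + '\n'.join(out) + POSTAMBLE

if __name__ == '__main__':
    sys.stdout.write(md_to_tex(sys.stdin.read()))

Run it as python md2tex.py < draft.md > draft.tex and compile the result with pdflatex. Real Markdown (links, lists, code blocks) needs more rules, or just reach for pandoc.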

If you really want to publish online only, I would suggest you use org mode and just stay in HTML. Then you can use CSS to style it however you like.
That being said, if you really want to output technical material to PDF, I would strongly suggest using DocBook (www.docbook.org). It's made for exactly that, and it works great with Emacs.

You have already answered yourself, not to mention that you have already started writing in org-mode. Org-mode really is extremely powerful and will let you publish to PDF and HTML eventually with no effort.
In the case of PDF you can take advantage of LaTeX and of the way org-mode handles exports: you can include any LaTeX code in your org file. Also, IMHO it's way better to write the book/article in org-mode, since some things become even easier than in plain .tex files; take tables, for example.
Regarding publishing, it's the same story: with one single function you can trigger exporting to HTML/PDF and uploading to your server. And notice that you are still working with just a plain text file, which is human-readable and very clean.
Org-mode really follows the Emacs philosophy: just start using it, and it will grow with you.

If you are writing a book, it would certainly be worth the overhead of learning TeX.
Even something like,
\documentclass[a4paper,10pt]{book}
\title{SERPA'S BOOK}
\author{SERPA}
\date{\today}
\begin{document}
\maketitle
\tableofcontents
\include{chapterA}
\include{chapterB}
\include{chapterC}
\end{document}
Then, in the same directory have files chapterA.tex, chapterB.tex, chapterC.tex that look like
\chapter{My chapter title}
Lorem ipsum dolor sit amet, consectetur adipiscing elit....
That alone will produce an extremely nice-looking document. You can edit each chapter separately and then just compile the main TeX file. I think if you try to learn intermediate tools that abstract away from TeX, you'll only make it more difficult later to do what you actually want, because you will be fighting both TeX and an abstraction of TeX at the same time.
Best of luck on such an undertaking.
Also, no matter what you do, make sure to use some kind of version control system, such as SVN, to manage your files. It will be worth it.

I would write it in LaTeX and have an online repository that does nightly compiles to PDF of the 'publish-ready' branch, available to readers.

I would not start with LaTeX these days. TeX input is unstructured, and the only thing you can get out of TeX input is PDF. If you need HTML or anything else, you are screwed.
Use something structured, such as XML (DocBook is a good suggestion), or define your own XML subset as you need it. Use XSLT to transform it into something usable (HTML, etc.). That way you are set for the future.
Depending on your typographical needs, you can then use TeX as a backend processor, or XSLT, or whatever.
Also, have a look at ConTeXt; it can read XML directly and has great typography!
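To illustrate the custom-XML-plus-XSLT route above, here is a small sketch in Python using the third-party lxml package. The book/chapter/para element names and the stylesheet are made up for the example, not part of any standard; a second stylesheet (or TeXML) could target LaTeX from the same source.

# Sketch: transform a hypothetical custom XML subset into HTML with XSLT (needs lxml).
from lxml import etree

BOOK_XML = b"""<book>
  <chapter title="Getting Started">
    <para>Publish early and often.</para>
  </chapter>
</book>"""

BOOK_TO_HTML = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/book">
    <html><body><xsl:apply-templates/></body></html>
  </xsl:template>
  <xsl:template match="chapter">
    <h1><xsl:value-of select="@title"/></h1>
    <xsl:apply-templates/>
  </xsl:template>
  <xsl:template match="para">
    <p><xsl:apply-templates/></p>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(BOOK_TO_HTML))
print(str(transform(etree.fromstring(BOOK_XML))))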

Related

Libraries for .odt formatting

Are there any C/C++ libraries available that can be used for creating, loading and saving files in .odt format?
Alternatively, where can I find tutorial-like information on implementing the .odt (ODF) specification?
This site, OASIS OpenDocument Essentials, seems to cover the problem, including how-to examples and shortcuts. It's quite nicely done and easy to follow.
Flexibility can be perceived as complexity. If you don't need the flexibility, create a template ODT and just fill in the content as needed. As mentioned, there exist XML parsers to actually handle the I/O. ODT isn't a plain-text file, so some complexity/difficulty is expected.
– Ioan
From the link:
The Virtues of Cheating
As you begin to work with OpenDocument files, you may want to write a program that constructs a document with some feature that isn’t explained in this book; this is, after all, an “essentials” book. Just start OpenOffice.org or KOffice, create a document that has the feature you want, unpack the file, and look for the XML that implements it. To get a better understanding of how things work, change the XML, repack the document, and reload it. Once you know how a feature works, don’t hesitate to copy and paste the XML from the OpenDocument file into your program. In other words, cheat. It worked for me when I was writing this book, and it can work for you too!
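In that spirit, here is a small Python sketch of the template approach suggested in the comment above: save a template .odt from OpenOffice.org containing a placeholder, then rewrite content.xml and repack the zip container. The file names and the {{BODY}} placeholder are assumptions for illustration; a real tool should edit the XML with a proper parser rather than string replacement.

# Fill a template ODT by rewriting content.xml inside the zip container.
import zipfile

def fill_odt_template(template_path, output_path, replacements):
    with zipfile.ZipFile(template_path) as src, \
         zipfile.ZipFile(output_path, "w") as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "content.xml":
                text = data.decode("utf-8")
                for placeholder, value in replacements.items():
                    text = text.replace(placeholder, value)
                data = text.encode("utf-8")
            # The 'mimetype' entry must stay uncompressed (and first in the archive).
            method = zipfile.ZIP_STORED if item.filename == "mimetype" else zipfile.ZIP_DEFLATED
            dst.writestr(item.filename, data, compress_type=method)

fill_odt_template("template.odt", "report.odt", {"{{BODY}}": "Hello, ODF!"})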

Create and convert to PDFs... no toolkit

Not sure where else to ask this, so I figured I'd give good old Stack Overflow a shot.
Let's say, by chance, I would like to write a library or set of libraries that will create PDFs and convert files to PDF, and I couldn't care less how long it takes me to complete (3 months or 10 years, whatever). I have absolutely no interest in paying for a toolkit; the point of this would be to learn how to manipulate and create files like PDFs. There's nothing business-critical about the project, I just want to learn how to do it. Where do I start? I would imagine something like this would be written in C++, but I'm not sure; maybe higher-level languages would work as well. I'm not looking for someone to tell me exactly how to do it, but to send me in the right direction, or at least point out the concepts I would need to grasp before proceeding with such a project.
Any advice and help in directing me here is greatly appreciated :)
Well, you will need a very good understanding of the PDF file format. Adobe publishes the standard and you can start at their site. You can start with the base 1.7 standard and then read the cumulative supplements from there. It is a daunting task, but it can be done, and you can use pretty much any language you want, because in the end you are just generating bytes that get saved to a file.
If you want to convert from, let's say, Word documents, it gets a little trickier. Microsoft has published its file formats, which you would have to learn, and then you would have to learn how to translate them into the corresponding PDF constructs. Also note that the .doc and .docx formats are completely separate file formats and would require separate engines to convert them.
With unlimited time, it is definitely doable; you just need to ask yourself if it is worth it.
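To give a feel for what "just generating bytes" means, here is a rough Python sketch that hand-assembles a minimal one-page PDF with a correct cross-reference table. The minimal_pdf name and hello.pdf output are just for illustration; real documents need font handling, text encodings, compression and much more, so treat this as a look at the file structure, not a library.

# Hand-assemble a minimal one-page PDF: header, five objects, xref table, trailer.
def minimal_pdf(text="Hello, PDF!"):
    content = ("BT /F1 24 Tf 72 720 Td (%s) Tj ET" % text).encode("latin-1")
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
        b"/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>",
        b"<< /Length %d >>\nstream\n" % len(content) + content + b"\nendstream",
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for number, body in enumerate(objects, start=1):
        offsets.append(len(out))                       # byte offset of this object
        out += b"%d 0 obj\n" % number + body + b"\nendobj\n"
    xref_start = len(out)
    out += b"xref\n0 %d\n" % (len(objects) + 1)
    out += b"0000000000 65535 f \n"                    # the mandatory free object 0
    for offset in offsets:
        out += b"%010d 00000 n \n" % offset            # 20-byte xref entries
    out += b"trailer\n<< /Size %d /Root 1 0 R >>\n" % (len(objects) + 1)
    out += b"startxref\n%d\n%%%%EOF\n" % xref_start
    return bytes(out)

with open("hello.pdf", "wb") as handle:
    handle.write(minimal_pdf())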

How should I migrate a site from ZWiki to MediaWiki?

I have a fairly extensive wiki in ZWiki on Zope (in turn on Plone). Most pages are in reStructuredText format, but there are several in straight HTML as well.
What is the best approach for migrating those pages over to a MediaWiki wiki, with pages converted to MediaWiki and HTML formats? Of course I'd like to automagically convert all links (internal and external).
extract your wiki content to files, using the zwikiexport.py script. The command will be something like:
ZOPE/bin/zopectl run ZOPE/Products/ZWiki/bin/zwikiexport.py /zodb/path/to/wiki/folder
convert the restructured text markup to mediawiki markup. pandoc should work well - for each wiki page, run something like:
pandoc -r rst -w mediawiki PAGE.rst >PAGE.mw
convert the wiki links, which pandoc doesn't know about. Depending on your content, this may be the hardest part to do accurately. Write a Perl script, or modify the zwikiexport script, using ZWiki's knowledge of where the links are (see methods in ZWikiPage.py); a rough batch-conversion sketch follows the refinements below.
import the mediawiki-format pages into mediawiki, however that's done
refinements:
the exported file tree will reflect your zwiki page hierarchy - if you use this heavily, you'll want to think about how to represent it in mediawiki
as Mark says, you'll lose the page history, unless you work extra hard to find a way to replicate it. The same goes for any page metadata you may have been using (you can inspect most of this metadata in the page's Properties tab in the Zope management interface). In particular, the page creation time, last edit time, and the usernames of the page's creator and last editor are quite important for understanding your content, so I would try to script some way of preserving those or, if all else fails, do it by hand.
if you have uploaded files to the wiki, I think the export script might save those too, otherwise use the ZMI to export/save them. When you import them to mediawiki, you may need to choose a page to attach them to. You could use grep or Zwiki's search to find the pages that reference a particular file.
be prepared to iterate, testing the results pretty thoroughly and refining the process, before you declare victory. After that, the content will diverge and you won't want to re-do this.
manual fixups: at some point, it may be cheaper to stop fiddling with scripts and do the remaining cleanup by hand, by yourself or with an army of helpers.
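To tie steps 2 and 3 together, here is the rough batch-conversion sketch mentioned above, in Python for brevity. It assumes the export produced one .rst file per page in a wiki-export directory and that pandoc is on your PATH; the CamelCase link regex is deliberately naive and only a starting point for your own link handling.

import pathlib
import re
import subprocess

EXPORT_DIR = pathlib.Path("wiki-export")                      # wherever zwikiexport.py wrote the pages
WIKINAME = re.compile(r'\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b')   # e.g. FrontPage

for rst_file in EXPORT_DIR.glob("*.rst"):
    mw_file = rst_file.with_suffix(".mw")
    # Step 2: reStructuredText -> MediaWiki markup via pandoc.
    subprocess.run(["pandoc", "-r", "rst", "-w", "mediawiki",
                    str(rst_file), "-o", str(mw_file)], check=True)
    # Step 3: very rough CamelCase wiki-link conversion.
    text = mw_file.read_text(encoding="utf-8")
    mw_file.write_text(WIKINAME.sub(r'[[\1]]', text), encoding="utf-8")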
Good luck! - Simon
http://zwiki.org
I've no experience with ZWiki and don't know how large your wiki is. But as general advice: you can use find/replace in Notepad or Notepad++, or you can write a macro in Excel.
This is per-page copying, which is only really suitable if your wiki is no larger than, say, 1000 pages.
I suspect you'll still have to check each page manually, though, and update your scripts accordingly.
Good luck with it - I suspect you'll be pleased with the final result because MediaWiki is pretty awesome.
Update: one disadvantage of moving to a new wiki is that you will lose the page history (i.e. who wrote what, when).

How do I write a Perl script to filter out digital pictures that have been doctored?

Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me: I could actually let Perl scan the pictures stored on my hard disk to check whether they contain the string Adobe. It seems that by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;

{
    local $/ = "\n\n";    # read in paragraph-sized chunks
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<', "$file";
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
            }
        }
    }
}
Excitingly, the code seems to really work, and it does the job of filtering out the pictures that have been photoshopped. But the problem is that many pictures are edited by other utilities. I think I'm kind of stuck there. Do we have some simple but universal method to tell whether a digital picture has been edited or not, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions, like
if (/Adobe|ACDSee|some other picture editor/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. This gives you access to whatever non-image information is embedded in the image. However, as other people have said, it's possible to strip this information out, of course.
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case, in my investigations, but perhaps you might have better results.
No. There is no functional distinction between a perfectly edited image and one which was that way from the start; it's all just a bag of pixels in the end, after all, and any other metadata you can remove or forge all you want.
The name of the graphics program used to edit the image is not part of the image data itself but of something called metadata, which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, and some may give you the option of not storing it) nor reliable: if you forged an image, you might have forged the metadata as well.
So the answer to your question is: no, there's no way to universally tell whether the picture was edited or not, although some image-editing software may write its signature into the image file, and it may be left there through the carelessness of the person who edited it.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write of a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but on various Unix distros you can use file to identify many different types of files, and on Mac OS X, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and just write whatever I want into that file willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know if the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image-recognition algorithm that analyzes every pixel in your image and does some very complicated work to determine whether the image was doctored. Such a solution would probably involve AI that examines millions of photos, both doctored and not, and learns from them. However, this is more of a theoretical solution and isn't very practical; you would probably only see it in movies. It would be extremely complex to develop and would probably take years, and even if you did get something like this to work, it probably still wouldn't be correct 100% of the time. I'm guessing AI technology still isn't at that level and could take a while to get there.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (without relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images; the first was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
http://www.impulseadventure.com/photo/jpeg-snoop.html is a tool that does the job reasonably well.
If there has been any cloning, there is a variation in pixel density or concentration which sometimes shows up upon manual inspection; a Photoshop-cloned area will have an even pixel density (that is, a variation of pixels with respect to a scanned image).

library for doing diffs

I've been tasked with creating a tool that can diff and merge the configuration files for my company's product. The configurations are stored as either XML or URL-encoded strings. I'm looking for a library, preferably open source with a license compatible with commercial software, that can do these diffs. Our app is written in C++, so C++ libraries would be best, but I'm willing to look at libraries that are C#-specific since I can write a wrapper that exposes it to C++ via COM. Three-way diffs would be ideal, but two-way is acceptable. If it has an understanding of XML, that would also be a plus (since XML nodes can be reordered without changing the document, etc). Any library suggestions? Should I even consider writing my own diff tools in the hopes of giving it semantic knowledge of our formats?
Thanks to this similar question, I've already discovered this Google library, which seems really great, but I'm still looking for other options. It also seems to be able to output the diffs in HTML format (using the <ins> and <del> tags that I didn't know existed before I discovered it), which could be really handy, but it seems to produce a unified diff only. I'm going to need to display the results in a web browser, and probably have to build an interface for doing the merges in the browser as well. I don't expect a library to help with those tasks, but it must produce output in a format amenable to building this on top of it. I'm currently envisioning something along the lines of TortoiseMerge (side-by-side diffs, not unified), except browser-based. Any tips/tricks/design ideas on how to present this would be appreciated too.
Subversion comes with libsvn_diff and libsvn_delta, licensed under the Apache Software License.
Here is a C++ library that can diff what the author calls semistructured data. It deals nicely with HTML and XML. Since your data is XML, it would make a lot of sense to use this instead of a plain text diff. This is especially the case when the files are machine-generated.
I am currently trying to use this library to build a tool that diffs Visual Studio project files. These are basically XML files, and using a plain diff tool like WinMerge is too painful because Visual Studio pretty much mucks up the whole file with crazy reordering. The idea is to do some kind of structured diff to address the problem.
For diffing the XML, I would propose that you normalize it first: sort all the elements in alphabetical order, then generate a stream of tokens/XML that represents the original document but is independent of the original formatting. After running the diff, parse the result to get a tree containing what was added or removed.
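As a rough sketch of that normalize-then-diff idea (in Python for brevity, though the same approach ports to C++): sort sibling elements, serialize one node per line, and run an ordinary text diff over the result. The attribute and whitespace handling here is minimal and assumes element order genuinely does not matter in your configuration files.

import difflib
import xml.etree.ElementTree as ET

def normalized_lines(xml_text):
    # Serialize the tree one element per line, with children sorted so that
    # pure reordering does not show up as a difference.
    def walk(element, depth=0):
        attrs = " ".join("%s=%r" % pair for pair in sorted(element.attrib.items()))
        yield "%s<%s %s> %s" % ("  " * depth, element.tag, attrs, (element.text or "").strip())
        for child in sorted(element, key=lambda e: (e.tag, (e.text or "").strip())):
            yield from walk(child, depth + 1)
    return list(walk(ET.fromstring(xml_text)))

def xml_diff(old_xml, new_xml):
    return "\n".join(difflib.unified_diff(
        normalized_lines(old_xml), normalized_lines(new_xml),
        fromfile="old", tofile="new", lineterm=""))

print(xml_diff("<cfg><b x='1'/><a>v</a></cfg>",
               "<cfg><a>v</a><b x='2'/></cfg>"))

Parsing the diff output back into a tree of added/removed nodes, as you describe, would then operate on these normalized lines.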