Which html sanitization library to use? - xss

I am working on a feature where users can enter valid HTML markup and CSS and I render their web page. The problem is that I am using the OWASP AntiSamy Java library and it strips out most of the modern HTML5 tags and CSS3 attributes. I looked at the policy file and it was quite outdated. I have been looking at other sanitization libraries like HTML Sanitizer and Google Caja, but I don't feel they do anything extra. You still have to update your policy files as you find cases where valid tags and styles are stripped.
I have been searching for a proper solution. Any recommendations on which library to use? Is there any advantage to switching to HTML Sanitizer or Google Caja? I'm not sure if anyone has updated the AntiSamy policy files and open-sourced them so that they support the newer tags and style attributes.
Varun

I've had good experience with jsoup
All you need is a short snippet of code:
String safe = Jsoup.clean(unsafe, Whitelist.basic());
You can add tags and attributes to the Whitelist object fairly easily, though I found it doesn't support namespaced tags.
The jsoup jar itself is small (200+ KB) and, unlike the OWASP Java HTML Sanitizer, it doesn't depend on the Guava library, which is 1.6 MB.
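For example, here is a minimal sketch of customising the whitelist (this uses the older Whitelist API from the snippet above; newer jsoup releases rename it to Safelist, and the specific tags and attributes below are just illustrations):

import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

// Start from a stock whitelist and allow a few extra HTML5 tags and attributes
Whitelist allowed = Whitelist.relaxed()
        .addTags("section", "article", "figure", "figcaption")
        .addAttributes("a", "rel")
        .addProtocols("img", "src", "http", "https");

String safe = Jsoup.clean(unsafe, allowed);

You still end up maintaining a list of allowed tags, but it lives in code rather than a policy file, which makes it easy to grow as users report stripped markup.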


Robots.txt and Coldfusion

I am trying to disallow certain parts of a site instead of the whole thing.
I am relatively new to this, so if someone could walk me through it I would appreciate it.
I know that you can Disallow: /page1.cfm from crawlers but what if I want to disallow just a part of that page such as a link or a contact form that exists on that page? Is this functionality even possible?
Based on some forums I was reading recently, the "nofollow" function is not really working anymore since crawlers are becoming smarter. (I don't know how credible that forum was, so if anyone has a better source please share.)
Any suggestions?
Don't use nofollow, you'll lose link juice on your page.
Robots.txt is just a hint for crawlers; half the time, if they have already found the page, they will still visit and index it despite the Disallow rules.
Use .htaccess rules to ban or block access to these pages.
Or obfuscate your link with some complex JavaScript (base64_encode() + str_rot13() encoding should be enough to lose the crawler).
You can use the "nofollow" attribute in the robots meta tag to hide the information on the page. Google wrote that they do not pass on links labeled "nofollow". More information about this and examples can be found here: Robots.txt tutorial and Google support. Hope this helps.
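For reference, the two forms being discussed look like this (the contact-form link is a made-up example, and note that both are only requests, not guarantees, that a crawler will comply):

<!-- page-level: ask crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- link-level: ask crawlers not to pass ranking signals through one specific link -->
<a href="contact.cfm" rel="nofollow">Contact form</a>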

Interrogating InDesign file beyond XMP Metadata

So, I've got an app that needs to deal with files created by Adobe InDesign (.INDD), and while the XMP Metadata is useful, there are additional things that I want to know about the files that do not appear to be in the metadata.
Specifically, I would want to know the number of actual pages (not just number of page previews created), and what the dimensions of those pages are.
Has anyone run across any toolkit, sdk, etc. that can get me this information?
This will be for a non-open source commercial app, so licenses are a potential roadblock. Also, this app will not be a plug-in for any Adobe product, so the InDesign Plugin SDK is not an option either.
C++ is the preferred language.
.indd is a proprietary format owned by Adobe. You are not allowed to interact with this format outside of InDesign. If the documents are saved in the .idml format, it's quite possible and not very difficult, but if all you have to work with is a bunch of .indd files that someone else created, you're going to have to use a plugin or scripts together with InDesign.
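To illustrate the .idml route, here is a rough sketch (shown in Java for brevity, though the same unzip-and-parse approach works in C++). It assumes the standard IDML package layout, where each Spreads/Spread_*.xml part contains <Page> elements whose GeometricBounds attribute is "top left bottom right" in points; "layout.idml" is a placeholder file name:

import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// An .idml file is an ordinary ZIP package; page geometry lives in the Spreads/*.xml parts.
ZipFile idml = new ZipFile("layout.idml");
int pageCount = 0;
Enumeration<? extends ZipEntry> entries = idml.entries();
while (entries.hasMoreElements()) {
    ZipEntry entry = entries.nextElement();
    if (entry.isDirectory() || !entry.getName().startsWith("Spreads/")) continue;
    Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse(idml.getInputStream(entry));
    NodeList pages = doc.getElementsByTagName("Page");
    for (int i = 0; i < pages.getLength(); i++) {
        pageCount++;
        // GeometricBounds is "top left bottom right" in points
        String[] b = ((Element) pages.item(i)).getAttribute("GeometricBounds").split(" ");
        double height = Double.parseDouble(b[2]) - Double.parseDouble(b[0]);
        double width = Double.parseDouble(b[3]) - Double.parseDouble(b[1]);
        System.out.println("Page " + pageCount + ": " + width + " x " + height + " pt");
    }
}
idml.close();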

"Smart" way of parsing and using website data?

How does one intelligently parse data returned by search results on a page?
For example, let's say that I would like to create a web service that searches for online books by parsing the search results of many book providers' websites. I could get the raw HTML data of the page and run some regexes over it to make the data work for my web service, but if any of the websites change the formatting of their pages, my code breaks!
RSS is indeed a marvelous option, but many sites don't have an XML/JSON based search.
Are there any kits out there that help extract information from pages automatically? A crazy idea would be to have a fuzzy AI module recognize patterns on a search results page and parse the results accordingly...
I've done some of this recently, and here are my experiences.
There are three basic approaches:
Regular Expressions.
Most flexible, easiest to use with loosely-structured info and changing formats.
Harder to do structural/tag analysis, but easier to do text matching.
Built in validation of data formatting.
Harder to maintain than others, because you have to write a regular expression for each pattern you want to use to extract/transform the document
Generally slower than 2 and 3.
Works well for lists of similarly-formatted items
A good regex development/testing tool and some sample pages will help. I've got good things to say about RegexBuddy here. Try their demo.
I've had the most success with this. The flexibility lets you work with nasty, brutish, in-the-wild HTML code.
Convert HTML to XHTML and use XML extraction tools. Clean up the HTML, convert it to legal XHTML, and use XPath/XQuery/X-whatever to query it as XML data.
Tools: TagSoup, HTMLTidy, etc.
Quality of the HTML-to-XHTML conversion is VERY important, and highly variable.
Best solution if data you want is structured by the HTML layout and tags (data in HTML tables, lists, DIV/SPAN groups, etc)
Most suitable for getting link structures, nested tables, images, lists, and so forth
Should be faster than option 1, but slower than option 3.
Works well if content formatting changes/is variable, but document structure/layout does not.
If the data isn't structured by HTML tags, you're in trouble.
Can be used with option 1.
Parser generator (ANTLR, etc) -- create a grammar for parsing & analyzing the page.
I have not tried this because it was not suitable for my (messy) pages
Most suitable if the HTML is highly structured, very consistent, regular, and never changes.
Use this if there are easy-to-describe patterns in the document, but they don't involve HTML tags and involve recursion or complex behaviors
Does not require XHTML input
FASTEST throughput, generally
Big learning curve, but easier to maintain
I've tinkered with Web-Harvest for option 2, but I find its syntax kind of weird: a mix of XML and some pseudo-Java scripting language. If you like Java and like XML-style data extraction (XPath, XQuery), that might be the ticket for you.
Edit: if you use regular expressions, make sure you use a library with lazy quantifiers and capturing groups! PHP's older regex libraries lack these, and they're indispensable for matching data between open/close tags in HTML.
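To make that edit concrete, here is a minimal Java sketch (the "result" class name and the html variable are stand-ins for whatever page you are scraping). The lazy quantifier stops the match at the first closing tag instead of swallowing the rest of the page, and the capturing group pulls out just the text in between:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Lazy ".*?" stops at the first </a>; group(1) captures the link text between the tags.
Pattern link = Pattern.compile("<a[^>]*class=\"result\"[^>]*>(.*?)</a>",
        Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
Matcher m = link.matcher(html);
while (m.find()) {
    System.out.println(m.group(1));
}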
Without a fixed HTML structure to parse, I would hate to maintain regular expressions for finding data. You might have more luck parsing the HTML through a proper parser that builds the tree. Then select elements ... that would be more maintainable.
Obviously the best way is some XML output from the engine with a fixed markup that you can parse and validate. I would think that an HTML parsing library with some 'in the dark' probing of the produced tree would be simpler to maintain than regular expressions.
This way, you just have to check on <a href="blah" class="cache_link">... turning into <a href="blah" class="cache_result">... or whatever.
Bottom line, grepping specific elements with regexp would be grim. A better approach is to build a DOM like model of the page and look for 'anchors' to character data in the tags.
Or send an email to the site stating a case for a XML API ... you might get hired!
You don't say what language you're using. In Java land you can use TagSoup and XPath to help minimise the pain. There's an example from this blog (of course the XPath can get a lot more complicated as your needs dictate):
import java.net.URL;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.input.SAXBuilder;
import org.jaxen.jdom.JDOMXPath;

URL url = new URL("http://example.com");
SAXBuilder builder = new SAXBuilder("org.ccil.cowan.tagsoup.Parser"); // build a JDOM tree from a SAX stream provided by tagsoup
Document doc = builder.build(url);
JDOMXPath titlePath = new JDOMXPath("/h:html/h:head/h:title"); // Jaxen XPath over the JDOM tree
titlePath.addNamespace("h","http://www.w3.org/1999/xhtml");
String title = ((Element)titlePath.selectSingleNode(doc)).getText();
System.out.println("Title is "+title);
I'd recommend externalising the XPath expressions so you have some measure of protection if the site changes.
Here's an example XPath I'm definitely not using to screenscrape this site. No way, not me:
"//h:div[contains(#class,'question-summary')]/h:div[#class='summary']//h:h3"
You haven't mentioned which technology stack you're using. If you're parsing HTML, I'd use a parsing library:
Beautiful Soup (Python)
HTML Agility Pack (.NET)
There are also webservices that do exactly what you're saying - commercial and free. They scrape sites and offer webservice interfaces.
And a generic webservice that offers some screen scraping is Yahoo Pipes. See this previous Stack Overflow question on that.
It isn't foolproof, but you may want to look at a parser such as Beautiful Soup. It won't magically find the same info if the layout changes, but it's a lot easier than writing complex regular expressions. Note this is a Python module.
Unfortunately 'scraping' is the most common solution: as you said, attempting to parse HTML from websites. You could detect structural changes to the page and flag an alert for you to fix, so a change at their end doesn't result in bum data. Until the semantic web is a reality, that's pretty much the only way to guarantee a large dataset.
Alternatively, you can stick to small datasets provided by APIs. Yahoo are working very hard to provide searchable data through APIs (see YDN), and I think the Amazon API opens up a lot of book data, etc.
Hope that helps a little bit!
EDIT: And if you're using PHP I'd recommend SimpleHTMLDOM
Have you looked into using an HTML manipulation library? Ruby has some pretty nice ones, e.g. Hpricot.
With a good library you could specify the parts of the page you want using CSS selectors or xpath. These would be a good deal more robust than using regexps.
Example from hpricot wiki:
doc = Hpricot(open("qwantz.html"))
(doc/'div img[@src^="http://www.qwantz.com/comics/"]')
#=> Elements[...]
I am sure you could find a library that does similar things in .NET or Python, etc.
Try googling for screen scraping + the language you prefer.
I know several options for python, you may find the equivalent for your preferred language:
Beautiful Soup
mechanize: similar to Perl's WWW::Mechanize. Gives you a browser-like object to interact with web pages
lxml: Python binding to libxml2/libxslt
scrapemark: uses templates to scrape pieces of pages
pyquery: allows you to make jQuery-like queries on XML/XHTML documents
scrapy: a high-level scraping and web crawling framework for writing spiders to crawl and parse web pages
Depending on the website to scrape you may need to use one or more of the approaches above.
If you can use something like Tag Soup, that'd be a place to start. Then you could treat the page like an XML API, kinda.
It has a Java and C++ implementation, might work!
Parsley at http://www.parselets.com looks pretty slick.
It lets you define 'parslets' using JSON, where you define what to look for on the page, and it then parses that data out for you.
As others have said, you can use an HTML parser that builds a DOM representation and query it with XPath/XQuery. I found a very interesting article here: Java theory and practice: Screen-scraping with XQuery - http://www.ibm.com/developerworks/xml/library/j-jtp03225.html
There is a very interesting online service for parsing websites, https://loadsiteinmysql.site. This service splits the site into tags and loads them into MySQL tables. This allows you to parse sites using MySQL syntax.

Can you easily configure MediaWiki to accept full HTML/CSS or even JS content?

I'd like to create a technical wiki site and it requires the full use of HTML/CSS and maybe Javascript when editing a page. Is this something I can easily configure in MediaWiki? If not, is there any other wiki software that you'd recommend?
Thanks!
You can enable raw HTML support by setting $wgRawHtml = true; in your LocalSettings.php:
http://www.mediawiki.org/wiki/Manual:$wgRawHtml
However, as noted above this is rather insecure for a public site. (If locked down to registered usage only by known folks it's ok -- but you need to trust your users.)
There are some links on that manual page to extensions organized around letting you put specific known bits of HTML/JS in your output code as well, which may or may not fit your needs better.
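If you do go the $wgRawHtml route, a minimal LocalSettings.php sketch might look like the following (the permission scheme below is just one way of locking editing down to accounts you trust, per the caveat above):

# Allow raw <html>...</html> blocks in wiki pages
$wgRawHtml = true;

# Once raw HTML is on, only trusted accounts should be able to edit (XSS risk)
$wgGroupPermissions['*']['edit'] = false;
$wgGroupPermissions['user']['edit'] = false;
$wgGroupPermissions['sysop']['edit'] = true;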
Well, while MediaWiki itself does not support this, there are some extensions which allow at least HTML in a page. See for example this extension list. SecureHTML might do what you are looking for.
That said, I'd like to point out that allowing raw HTML rather defeats the purpose of a wiki:
it can and will mess up formatting and create weird problems (clashes between generated and user-provided HTML)
it makes it hard/impossible to convert the wiki to other formats (such as to print it)
it makes searching harder
it makes any kind of security impossible (think XSS)
This is doubly true for allowing Javascript.
So I'd like to ask why you need this. If you need special formatting that MediaWiki does not offer, consider using (or writing) an extension for this.
If you really need arbitrary HTML, a Wiki might not be the best tool for you. You should consider a CMS, or just put HTML files into Subversion.
So what are you trying to do?
Use nowiki tags. Docs can be found here: https://www.mediawiki.org/wiki/Help:Formatting

Workflow to Turn Wiki content into a system manual

We're in the middle of deploying a new software system to lots of users in lots of places (200+ users over 8 countries). In the past we've written a manual for the users, then updated it every so often. This works OK, in that all the users have the same manual and it covers the main things, but it has its problems: it doesn't get updated that often, we sometimes miss updates, and some users end up with old copies.
We've been talking about using a wiki during the testing and deployment phases to build a knowledge base about the system. Ideally we'd then like some way to convert that into some form of electronic document that we can then 'prettify' and send out as the official manual, as well as letting users use and update the wiki.
Has anyone else done anything similar? Any suggestions for wiki systems, workflows, document formats, etc.?
Most wikis support export via PDF e.g.:
MediaWiki PDF Export
DokuWiki PDF Export
TWiki PDF Export
You can write something that generates LaTeX from the wiki and renders a manual to PDF. With packages like hyperref you can retain cross-references as hyperlinks.
Additionally, you can integrate content from multiple sources such as a data dictionary into the LaTeX document, which can be mixed and matched with the wiki content. You could also set the architecture up so it can support cross-referencing that goes either way.
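As a sketch of the LaTeX side (the chapter titles and labels are invented, and the wiki-to-LaTeX conversion itself is the piece you would script or generate):

\documentclass{report}
\usepackage{hyperref} % cross-references become clickable links in the PDF

\begin{document}

\chapter{Getting Started}\label{chap:getting-started}
Body generated from the corresponding wiki page.

\chapter{Reference}
See Chapter~\ref{chap:getting-started} for installation steps.

\end{document}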
FrameMaker could also support this using generated MIF files, and you could also use Lout in a similar way, or convert your wiki content to DocBook, which would allow you to use any of the many rendering options available for that format.
As an aside, the following Stackoverflow postings discuss various systems for maintaining documentation.
Application (Not a Markup Language) for Producing a User Manual
Can LaTeX be used for producing any documentation that accompanies software?
What tools are used to write documentation?
What tools does your team use for writing user manuals?
How best to write documentation (ideally in latex) targeting both the web (html) and paper (pdf)?
Best tool(s) for working with DocBook XML documents?
What is the recommended toolchain for formatting XML DocBook?
Is a successor for TeX/LaTeX in sight?
Madcap Flare is a help-and-manual authoring tool that uses HTML for the source of each topic. You could pretty easily do a mass import of the Wiki pages. Would then require some cleaning but after that you have a nice single-source system that can output CHM, web-browsable help, PDF, DOC/DOCX, etc.
How are you storing the help source at the moment? Is it MS Word files, MS help, LaTeX?
If you put your help source files under version control then you will get all the benefits of a wiki without having to migrate to a new system - people can make edits to the help files easily - those changes can be tracked, reverted etc. and you get the prettified manuals as before.
I followed Node's links and came across some mediawiki pages that I thought were noteworthy.
Extension:OpenDocument Export
Extension:PDF Writer
Category:Data extraction extensions
I gave a previous answer which may be useful for the "wiki to PDF" part -- look at using the open source PediaPress code or functionality. You can get ODFs from it too, although their PDFs are already quite pretty (but you might want to rebrand it and restyle it for your company I suppose).