Read Table, block and tuple

I have inserted into the orders table from TPCH using MonetDB.
Now I'd like to read it programmatically.
How can I read a table, block and tuple in MonetDB?
Is there any way to read a specific block from a table?

I'm not sure what a "block" is in this context, but MonetDB can be accessed programmatically from a variety of programming languages including C/C++, Python, R and Java.
For more details, see the Language Bindings page on the MonetDB developers website.
Which programming language are you using?
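If Java is an option, for example, you can read the orders table tuple by tuple through MonetDB's JDBC driver (monetdb-jdbc.jar). A minimal sketch; the port, database name and the default monetdb/monetdb credentials are assumptions for a stock install, so adjust them for yours:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
public class ReadOrders {
    public static void main(String[] args) throws Exception {
        // Register the MonetDB JDBC driver and connect to the TPCH database.
        Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
        try (Connection con = DriverManager.getConnection(
                 "jdbc:monetdb://localhost:50000/tpch", "monetdb", "monetdb");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT o_orderkey, o_custkey, o_totalprice FROM orders LIMIT 10")) {
            while (rs.next()) {  // each call to next() moves to the next tuple
                System.out.println(rs.getLong(1) + "\t" + rs.getDouble(3));
            }
        }
    }
}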

Related

Change the language of a Sitecore item (tree)?

We have a small website which was developed with the default language ('en'), without paying attention to the language or versioning capabilities of Sitecore (ouch!). We simply forgot to set the correct default language at the start of the project.
Now we have an entire content tree of 'en' items, when they should be 'nl-NL' (it's a Dutch site). And I am wondering if there is an easy way of changing the language for all items in that tree (that does not involve hacking).
I found this Q&A, but it just talks about setting the default language. I'd like to do that, yes, but I would also like to set the correct language for the existing item(versions).
Thoughts?
From what I remember, we had a similar problem before: not with a website filled in the wrong language, but with empty content that had to be filled with the default English content after creating a new language. What we did was export the language. In your case you could export the English language, create a Dutch language, and replace all entries in the English XML file that comes out with nl-NL values.
After you've done that you can import the file as the Dutch language and all items will be filled.
To me this sounds like the easiest and quickest approach, since you only have to search and replace some XML tags.
Good luck!
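If you go the export route, the replace step itself is easy to script. A rough sketch in Java; the language="en" attribute it rewrites is only an assumption about the export format, so open a sample export first and check how the language is actually recorded:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
public class SwitchLanguage {
    public static void main(String[] args) throws Exception {
        // Read the exported language file, swap the language code, write a new file.
        // The attribute name below is an assumption; verify it against a real export.
        String xml = new String(Files.readAllBytes(Paths.get("en.xml")), StandardCharsets.UTF_8);
        xml = xml.replace("language=\"en\"", "language=\"nl-NL\"");
        Files.write(Paths.get("nl-NL.xml"), xml.getBytes(StandardCharsets.UTF_8));
    }
}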
You could write a .NET program that would go through your whole content tree and update language parameter of each item accordingly. Sitecore APIs give you access to almost everything you see in the backend (including content manipulation) so it shouldn't be much of a problem to automate this task.
As an alternative, you could copy your whole content from one language to another and then remove the language you don't want. Here's how to do it.
I'll caveat that my experience with it is limited, but from what I've seen, the Sitecore Rocks plugin for Visual Studio might allow you to script this.
http://visualstudiogallery.msdn.microsoft.com/44a26c88-83a7-46f6-903c-5c59bcd3d35b

ColdFusion CRUD

For quite a long time now, I've been trying to write and have been in search of "a really good" CRUD application. Don't get me wrong - I didn't say "The ultimate" CRUD application. Just one that could be rated 1st class.
What I'm saying is: Please don't respond to this plea with an answer like "Well, every situation is different..."
Q: Is there a blog post or something in the Adobe documentation that shows CRUD on a one-to-many relationship (Header/Detail), that uses web standards css (instead of tables), that uses best practices (CF9 has changed so many things now: scripted components, ORM), that uses the latest UI techniques (jQuery or some of the built-in AJAX features of CF9), that has a nice front-end (a nice looking header and background along with some pretty buttons)?
I know that's a lot to ask, but such is my quest.
A good example of a one-to-many relationship is the city/state xml files built into the Spry examples. There are 23,000 cities in the sample xml files, so I think that's better than just using random data.
I'm not really sure what you're asking, but I just want to respond to a couple of points in your question (this is more a comment than an answer, but since SO is stupidly limited in this, I'll put it here instead.)
that uses web standards css (instead of tables),
There is no "css instead of tables" - they are two distinct and compatible things!
CSS describes the visual aspects of a document, whilst tables mark up tabular data.
If you're displaying tabular data, then tables are exactly what you should be using, and you can use CSS to make them look more exciting than the plain styles that tables come in.
Since you're asking for a CRUD app, odds are you're going to want to display tabular data, so you should be using tables.
(The common mistake people make is not understanding the nature of the web, and using tables to apply grid layouts to documents, when they should be using structured semantic markup instead.)
that uses best practices (CF9 has changed so many things now:
scripted components, ORM)
Scripted components are not a best practice!
They are an alternative syntax (for people who prefer having non-descriptive braces everywhere); they do not offer anything you can't already do.
I would strongly suggest you check out CFWheels. Read the documentation; it's built for doing exactly this kind of CRUD application, has an amazing set of features, and will save you a lot of time. As for the interface, there are many jQuery plugins out there that can handle this. I suggest looking at AjaxRain and finding a plugin you like.

"Smart" way of parsing and using website data?

How does one intelligently parse data returned by search results on a page?
For example, let's say that I would like to create a web service that searches for online books by parsing the search results of many book providers' websites. I could get the raw HTML of each page and use some regexes to make the data work for my web service, but if any of the websites change the formatting of their pages, my code breaks!
RSS is indeed a marvelous option, but many sites don't have an XML/JSON based search.
Are there any kits out there that help extract information from pages automatically? A crazy idea would be to have a fuzzy AI module recognize patterns on a search results page and parse the results accordingly...
I've done some of this recently, and here are my experiences.
There are three basic approaches:
Regular Expressions.
Most flexible, easiest to use with loosely-structured info and changing formats.
Harder to do structural/tag analysis, but easier to do text matching.
Built in validation of data formatting.
Harder to maintain than others, because you have to write a regular expression for each pattern you want to use to extract/transform the document
Generally slower than 2 and 3.
Works well for lists of similarly-formatted items
A good regex development/testing tool and some sample pages will help. I've got good things to say about RegexBuddy here. Try their demo.
I've had the most success with this. The flexibility lets you work with nasty, brutish, in-the-wild HTML code.
Convert HTML to XHTML and use XML extraction tools. Clean up HTML, convert it to legal XHTML, and use XPath/XQuery/X-whatever to query it as XML data.
Tools: TagSoup, HTMLTidy, etc
Quality of HTML-to-XHTML conversion is VERY important, and highly variable.
Best solution if data you want is structured by the HTML layout and tags (data in HTML tables, lists, DIV/SPAN groups, etc)
Most suitable for getting link structures, nested tables, images, lists, and so forth
Should be faster than option 1, but slower than option 3.
Works well if content formatting changes/is variable, but document structure/layout does not.
If the data isn't structured by HTML tags, you're in trouble.
Can be used with option 1.
Parser generator (ANTLR, etc) -- create a grammar for parsing & analyzing the page.
I have not tried this because it was not suitable for my (messy) pages
Most suitable if the HTML is highly structured, very constant, regular, and never changes.
Use this if there are easy-to-describe patterns in the document, but they don't involve HTML tags and involve recursion or complex behaviors
Does not require XHTML input
FASTEST throughput, generally
Big learning curve, but easier to maintain
I've tinkered with Web-Harvest for option 2, but I find its syntax to be kind of weird: a mix of XML and some pseudo-Java scripting language. If you like Java, and like XML-style data extraction (XPath, XQuery), that might be the ticket for you.
Edit: if you use regular expressions, make sure you use a library with lazy quantifiers and capturing groups! PHP's older regex libraries lack these, and they're indispensable for matching data between open/close tags in HTML.
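To make that concrete, here's a small Java sketch (the HTML fragment is made up) showing a lazy quantifier inside a capturing group pulling the text out of repeated open/close tags:
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class TitleScrape {
    public static void main(String[] args) {
        String html = "<td class=\"title\">First book</td><td class=\"title\">Second book</td>";
        // (.*?) is a lazy (non-greedy) capturing group: it stops at the first </td>
        // instead of swallowing everything up to the last one.
        Pattern p = Pattern.compile("<td class=\"title\">(.*?)</td>");
        Matcher m = p.matcher(html);
        while (m.find()) {
            System.out.println(m.group(1));  // the captured text between the tags
        }
    }
}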
Without a fixed HTML structure to parse, I would hate to maintain regular expressions for finding data. You might have more luck parsing the HTML through a proper parser that builds the tree. Then select elements ... that would be more maintainable.
Obviously the best way is some XML output from the engine with a fixed markup that you can parse and validate. I would think that an HTML parsing library with some 'in the dark' probing of the produced tree would be simpler to maintain than regular expressions.
This way, you just have to check on <a href="blah" class="cache_link">... turning into <a href="blah" class="cache_result">... or whatever.
Bottom line, grepping specific elements with regexp would be grim. A better approach is to build a DOM-like model of the page and look for 'anchors' to character data in the tags.
Or send an email to the site stating a case for an XML API ... you might get hired!
You don't say what language you're using. In Java land you can use TagSoup and XPath to help minimise the pain. There's an example from this blog (of course the XPath can get a lot more complicated as your needs dictate):
import java.net.URL;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.input.SAXBuilder;
import org.jaxen.jdom.JDOMXPath;
URL url = new URL("http://example.com");
SAXBuilder builder = new SAXBuilder("org.ccil.cowan.tagsoup.Parser"); // build a JDOM tree from a SAX stream provided by tagsoup
Document doc = builder.build(url);
JDOMXPath titlePath = new JDOMXPath("/h:html/h:head/h:title");
titlePath.addNamespace("h","http://www.w3.org/1999/xhtml");
String title = ((Element)titlePath.selectSingleNode(doc)).getText();
System.out.println("Title is "+title);
I'd recommend externalising the XPath expressions so you have some measure of protection if the site changes.
Here's an example XPath I'm definitely not using to screenscrape this site. No way, not me:
"//h:div[contains(#class,'question-summary')]/h:div[#class='summary']//h:h3"
You haven't mentioned which technology stack you're using. If you're parsing HTML, I'd use a parsing library:
Beautiful Soup (Python)
HTML Agility Pack (.NET)
There are also webservices that do exactly what you're saying - commercial and free. They scrape sites and offer webservice interfaces.
A generic web service that offers some screen scraping is Yahoo Pipes; there's a previous Stack Overflow question on that.
It isn't foolproof, but you may want to look at a parser such as Beautiful Soup. It won't magically find the same info if the layout changes, but it's a lot easier than writing complex regular expressions. Note that this is a Python module.
Unfortunately 'scraping' (attempting to parse HTML from websites, as you said) is the most common solution. You could detect structural changes to the page and flag an alert for you to fix, so a change at their end doesn't result in bum data. Until the semantic web is a reality, that's pretty much the only way to guarantee a large dataset.
Alternatively you can stick to small datasets provided by APIs. Yahoo are working very hard to provide searchable data through APIs (see YDN), I think the Amazon API opens up a lot of book data, etc etc.
Hope that helps a little bit!
EDIT: And if you're using PHP I'd recommend SimpleHTMLDOM
Have you looked into using an HTML manipulation library? Ruby has some pretty nice ones, e.g. Hpricot.
With a good library you could specify the parts of the page you want using CSS selectors or XPath. These would be a good deal more robust than regexps.
Example from hpricot wiki:
doc = Hpricot(open("qwantz.html"))
(doc/'div img[@src^="http://www.qwantz.com/comics/"]')
#=> Elements[...]
I am sure you could find a library that does similar things in .NET or Python, etc.
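If you end up in Java for this, jsoup (my suggestion; it isn't mentioned elsewhere in this thread) gives you the same CSS-selector style of extraction; a minimal sketch:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
public class ComicLinks {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page, then select with a CSS attribute-prefix selector.
        Document doc = Jsoup.connect("http://www.qwantz.com").get();
        for (Element img : doc.select("img[src^=http://www.qwantz.com/comics/]")) {
            System.out.println(img.attr("src"));
        }
    }
}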
Try googling for screen scraping + the language you prefer.
I know several options for python, you may find the equivalent for your preferred language:
Beautiful Soup
mechanize: similar to Perl's WWW::Mechanize. Gives you a browser-like object to interact with web pages
lxml: Python binding to libxml2 and libxslt
scrapemark: uses templates to scrape pieces of pages
pyquery: allows you to make jQuery-like queries on XML/XHTML documents
scrapy: a high-level scraping and web-crawling framework for writing spiders to crawl and parse web pages
Depending on the website to scrape you may need to use one or more of the approaches above.
If you can use something like Tag Soup, that'd be a place to start. Then you could treat the page like an XML API, kinda.
It has a Java and C++ implementation, might work!
Parsley at http://www.parselets.com looks pretty slick.
It lets you define 'parslets' using JSON, in which you specify what to look for on the page, and it then parses that data out for you.
As others have said, you can use an HTML parser that builds a DOM representation and query it with XPath/XQuery. I found a very interesting article here: Java theory and practice: Screen-scraping with XQuery - http://www.ibm.com/developerworks/xml/library/j-jtp03225.html
There is a very interesting online service for parsing websites: https://loadsiteinmysql.site. This service splits the site into tags and loads them into MySQL tables, which allows you to parse sites using MySQL syntax.

How to replace text in a PowerPoint (.ppt) document?

What solutions are there? I only know of solutions for replacing bookmarks in Word (.doc) files with Apache POI.
Are there also ways to change images, layouts and text styles in .doc and .ppt documents?
I'm thinking about replacing areas in Word and PowerPoint documents for bulk processing.
Platform: MS-Office 2003
What are your platform limitations?
Obviously Apache POI will get you at least part of the way there.
Microsoft's own COM APIs are fairly powerful and are documented here. I would recommend using them if a) you are not running in a server (many users, multithreaded) environment; b) you can have a proper version of PowerPoint installed on the production machine; and c) you can code against a COM object model.
It's a bit pricey, but Aspose.Slides is a very powerful library for manipulating PowerPoint files
If you include using other Office suites as an option, here's a list of possible solutions:
Apache POI-HSLF
PowerPoint 2007 APIs
OpenOffice.org UNO
Using POI you can't edit the .pptx file format, but you don't depend on the apps installed on the system. The other two options, on the contrary, make use of other apps, but they are definitely better for dealing with presentations. OpenOffice has better compatibility with older formats, by the way. Also, if you use UNO you'll have a great choice of languages; UNO bindings exist for Java, C++, Python and other languages.
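For the plain .ppt route with POI-HSLF, text replacement boils down to walking the text runs. A rough sketch; the $NAME$ placeholder is made up, and the class and package names below are from the older HSLF usermodel API (newer POI releases renamed them to HSLFSlideShow, HSLFSlide and friends), so check the docs for your POI version:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.poi.hslf.model.*;      // Slide and TextRun live here in older POI releases
import org.apache.poi.hslf.usermodel.*;  // SlideShow lives here
public class ReplacePptText {
    public static void main(String[] args) throws Exception {
        SlideShow ppt = new SlideShow(new FileInputStream("template.ppt"));
        for (Slide slide : ppt.getSlides()) {
            for (TextRun run : slide.getTextRuns()) {
                String text = run.getText();
                if (text.contains("$NAME$")) {  // made-up placeholder token
                    run.setText(text.replace("$NAME$", "John Doe"));
                }
            }
        }
        FileOutputStream out = new FileOutputStream("result.ppt");
        ppt.write(out);
        out.close();
    }
}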
My experience is not directly with PowerPoint, but I've actually rolled my own WordML (XML) generator. It (a) removed all dependencies on Word, (b) was very fast, and (c) let me build up documents from scratch.
But it was a lot of work to create. And I was only creating a write only implementation.
I'm not as familiar with PowerPoint, so this is conjecture, but you may be able to roll your own by reading XML (PowerPoint 2003??) and/or cracking the Office Open XML file (zipped XML), then using XPath to manipulate the data, and then saving everything back to disk.
This won't work on older OLE Compound Document based PowerPoint files though.
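For what it's worth, here's a minimal sketch of that 'cracked zip' idea against a 2007-format .pptx; the slide path and the placeholder token are assumptions for illustration, and a robust version would edit the XML via a DOM/XPath rather than a string replace:
import java.nio.charset.StandardCharsets;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class PatchPptx {
    public static void main(String[] args) throws Exception {
        // A .pptx is a zip of XML parts; open it as a file system and rewrite one slide.
        try (FileSystem zip = FileSystems.newFileSystem(Paths.get("deck.pptx"), (ClassLoader) null)) {
            Path slide = zip.getPath("ppt/slides/slide1.xml");
            String xml = new String(Files.readAllBytes(slide), StandardCharsets.UTF_8);
            Files.write(slide, xml.replace("PLACEHOLDER", "Replaced text").getBytes(StandardCharsets.UTF_8));
        }
    }
}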
I've done something like that before: programmatically accessed and manipulated PowerPoint presentations. Back when I did it, it was all in C++ using COM, but similar principles apply to C#/VB .NET apps, since they do COM interop very easily.
What you're looking for is called the Office Document Model. Basically, Office applications expose their documents programmatically, as trees of objects that define their contents. These objects are accessible via an API, and you can manipulate them, add new ones, and do whatever other processing you want. It's exceedingly powerful; you can use it to manipulate pretty much all aspects of a document. But you'll need an installation of Office and Visual Studio to be able to use it.
Some links:
Intro: http://msdn.microsoft.com/en-us/library/d58327k6.aspx
Hope this helps!
Apparently new users can only include one link per posting. How lame! :)
Here's the other link I meant to include:
Example of manipulating PowerPoint documents programmatically: http://msdn.microsoft.com/en-us/library/cc668192.aspx

Enterprise-grade template printing system

I'm looking for an enterprise-grade template printing system. I'm interested in every software I can get my hands on to evaluate. Commercial or not.
What I need is a separate system ready to receive tags in order to print (digitally or on paper) a template (like a contract, invoice, etc.). Templates should be managed by the same software. It should operate via web services or via an enterprise bus (preferably JMS or MQSeries connectors).
Can I ask for some names and possibly some URLs? Anything will be helpful even if it does not fit the requirements exactly.
Thanks.
This is an old question, but for the Googlers out there, we use a couple of products to render documents in XSL-FO (a W3C standard paper specification that we generate using XSL) to PDF, PostScript, etc. We use it to show documents online as well as to bulk print a few hundred thousand of them monthly.
RenderX (.NET, Java, whatever) provides a very powerful solution for our bulk printing needs
IBEX PDF Creator (.NET only) for online rendering to PDF
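If you also want an open-source option in the same XSL-FO pipeline, Apache FOP (not one of the products above, so treat this as an alternative) works the same way: transform your data XML with an XSLT template into XSL-FO and hand the result to the renderer. A minimal sketch with placeholder file names:
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;
public class RenderContract {
    public static void main(String[] args) throws Exception {
        // FOP 2.x style factory; older releases use FopFactory.newInstance() with no arguments.
        FopFactory fopFactory = FopFactory.newInstance(new File(".").toURI());
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("contract.pdf"))) {
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
            // Apply the XSLT template to the data XML, producing XSL-FO that FOP renders to PDF.
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("contract-template.xsl")));
            Source src = new StreamSource(new File("contract-data.xml"));
            Result res = new SAXResult(fop.getDefaultHandler());
            transformer.transform(src, res);
        }
    }
}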
Calligo is a commercial package from InSystems. Can't reach the web site right now; could be a bad sign.
Then there are these open source possibilities.