I have a database-driven website built with Django running on a Linux server. It manages many separate groups, each with hundreds of users. It needs to be able to print customized documents (i.e. access credentials) on demand for one, some or all users. Each group has its own logo, and each credential is customized with the user's name, photo and some number of additional graphic stamps. All the custom information is based on stored data for the user.
I'm trying to determine the best method for formatting the credentials and printing. Here are the options I've come up with so far:
Straight HTML formatting, using table tags to break the credential into cells that contain the custom text or graphics. This seems straightforward, except that it doesn't lend itself to printing a couple hundred credentials at once.
Starting with a document template in the form of a PDF file, using available PDF command-line toolkits to stamp in the custom information, and concatenating the multiple PDFs into a single file for printing. This also seems reasonable, except that the cost of a server license for these toolkits is prohibitively expensive on Linux (>$500).
A standalone program running on the client that retrieves user data via a web service and does all the formatting and printing locally.
Are there other options? Any advice? Thanks for your help.
I once did something similar using SVG. This allows for great flexibility, as you can design your "credential" in Inkscape using placeholder names and logos, and then, once it's completed, open the output SVG in a text editor and replace the placeholders with context variables.
One tip: put all Django template code (if any) in XML comments, e.g. <!--{% load xyz_tags %}-->; otherwise a lot of things get screwed up if you open the file in Inkscape.
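Since SVG is just XML, the final replacement step can also be done by rendering the SVG file itself as a Django template. A minimal sketch of that render step (the file name credential_template.svg and the context keys are hypothetical):

    from django.template import engines

    def render_credential(user):
        # Load the Inkscape-designed SVG and render it as a Django template.
        # The file name and the context keys below are placeholders for
        # whatever your design and your models actually use.
        django_engine = engines["django"]
        with open("credential_template.svg") as f:
            template = django_engine.from_string(f.read())
        return template.render({
            "full_name": user.get_full_name(),  # standard Django User method
            "logo_url": user.group.logo.url,    # hypothetical model fields
        })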
My solution was to use the open-source ReportLab library to build up the PDF pages from scratch.
I could not find an inexpensive way to stamp the custom components into an existing PDF. ReportLab can do this, but only through their commercial product.
It works great, though.
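For reference, a rough sketch of that approach with ReportLab's canvas API (the user attributes like photo_path are hypothetical stand-ins for your stored data); each credential gets its own page, and all pages land in a single PDF for printing:

    from reportlab.lib.pagesizes import letter
    from reportlab.lib.units import inch
    from reportlab.pdfgen import canvas

    def build_credentials(users, outfile="credentials.pdf"):
        # One credential per page; every page is appended to a single PDF.
        c = canvas.Canvas(outfile, pagesize=letter)
        for user in users:
            # group_logo_path, photo_path and full_name are hypothetical
            # attributes standing in for the stored user data.
            c.drawImage(user.group_logo_path, 1 * inch, 9 * inch,
                        width=2 * inch, height=1 * inch)
            c.drawImage(user.photo_path, 1 * inch, 6.5 * inch,
                        width=1.5 * inch, height=2 * inch)
            c.setFont("Helvetica-Bold", 18)
            c.drawString(1 * inch, 6 * inch, user.full_name)
            c.showPage()  # finish this credential's page
        c.save()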
I'm trying to find a digital signage program that can display a rotating PowerPoint on half the screen and a live view of Outlook calendars on the other half. We want a certain group of employees to be able to see what they're doing for the day, and to see changes as they happen.
Here's an example of how Outlook Calendar would be displayed
I was looking into PiSignage, as well as Galaxy Signage. However, none of them seems to be able to display the calendar properly, or they don't store credentials.
I was looking for something relatively simple to use for the users who will be updating the content of the rotating PowerPoint.
Having that live view of Outlook is the main thing we want, though.
There is no "relatively simply" solution, as you need a combination of features and some web-app developing.
PowerPoint:
I do not know of any digital signage player that plays PowerPoint files directly. In most cases, you have to convert the PPT to videos or images.
Outlook Calendar with credentials:
This is possible via digital signage widgets.
Widgets are websites/web apps which run locally on the player. This way, you can handle credentials and use any web API/service you want via ordinary HTML and JavaScript. In your case it is not complex, but you will need some JS development.
Multiple Zones:
You need software which can display these widgets and websites in multiple zones.
The player hardware from IAdea, which is based on the W3C SMIL language, features multiple zones and widgets. As an alternative, there is an open-source SMIL player developed by me.
You can use both player solutions with any SMIL-compatible SaaS. IAdea includes Windows software for creating playlists. You can also create SMIL indexes manually, like HTML or XML.
I know that this question could mostly be answered generally for any web app, but because I am specifically using Shiny, I figured your answers may be considerably more useful.
I have made a relatively complex app. The data is not complex, but the user interface is.
I am storing the data in S3 using the aws.s3 package, and I have built my app using golem. Because most Shiny apps are used to analyse or enter data, they usually deal with a couple of datasets, and a relational database is very useful and fast for that type of app.
However, my app is quite UI/UX-extensive. Users can have their own or shared whiteboard space(s) where they drag items around. The coordinates of the items are stored in .rds files in my S3 bucket, for each user. Users can customise many aspects of the app just for themselves: font size, colours of the various experimental groups (it's a research app), and experimental visits that store .pdf, .html and .rds files.
The stored .rds files can contain variables, lists, data.frames, reactiveValues, renderUI() objects, etc., so they are widely different.
As such, I have dozens of .rds files stored in the bucket, and every time the app loads, each of these files needs to be read one by one to recreate the environment appropriate for each user. The number of files/folders in the directories is queried to know how many divs need to be generated for the user to click inside their files, etc.
The range of objects stored is too wide for me to use a relational database, but my app takes at least 40 seconds to load. It is also generally slow when submitting data, mostly because the data entered often modifies many UI elements that need to be pushed to S3 again. Because I have no background in proper web development, I have no idea of the best way to store user-related UX/UI elements and retrieve them seamlessly.
Could anyone please recommend appropriate resources for me to learn more about this?
Am I doing it completely wrong? I honestly do not know how else to store and retrieve all these R objects.
Thank you in advance for your help with the above.
I'll provide more background information first. The question is asked again in the last bullet of "My Thoughts & Questions" section.
Background
I am working on a legacy system which is essentially a batch processing system: a job is passed along a series of worker programs, each of which works on part of the whole job until it is finished. Job status/management information is generated by each worker program along the way and written to local text files with names like "status_info.txt" and "mgmt_info.txt".
This processing system does NOT use any database, just plain text files on the local file system. Changing the code to use a database is also rather expensive, which I want to avoid.
I am trying to add a GUI to this system for two primary purposes. First, to let users view (a read operation) the job status and management information, so they can get the big picture of what's going on and whether there is an error in any step. Second, to allow users to redo one or more steps by changing (a write operation) the job status and management information.
My Current Solution
I am thinking of using Django to develop the GUI because:
Django is fast for development, and it is web-based, so almost no installation is required;
I want to enable remote monitoring of the system, so a web-based, in-browser GUI makes more sense;
I have used some Django before, so I have some experience with it.
However, I see that Django mostly works with a real database: SQLite, MySQL, PostgreSQL, etc. The user-defined models are automatically mapped to tables in these databases by Django, whereas the legacy system only produces text files.
My Thoughts & Questions
Fortunately, I noticed that the text files are all in one of two formats:
Multiple lines of strings;
Multiple lines of key-value pairs.
Both formats look easy to map to a database table design. For example, a "multiple lines of strings" file can be treated as a DB table with a single text column, while a "multiple lines of key-value pairs" file maps to a two-column table.
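To make that mapping concrete, here is a minimal sketch of parsing a key-value file into two-column rows (the "=" delimiter is an assumption about what the worker programs write):

    def read_kv_file(path):
        # Parse a "multiple lines of key-value pairs" file into (key, value)
        # rows; adjust the "=" delimiter to the actual file format.
        rows = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue  # skip blank lines
                key, _, value = line.partition("=")
                rows.append((key.strip(), value.strip()))
        return rows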
Therefore, my question is: can I build my models upon local text files instead of a real database, and somehow override the code in Django that acts as the interface between the core framework and the external database, so that these text files play the role of a "database" for Django and read/write operations go to these files? I've searched the internet and Stack Overflow but wasn't lucky enough. I would appreciate any helpful links.
What Not to Do
If you are going to reproduce an RDBMS using files, you are in for a lot, and I mean a lot, of grief and hard work. Even the simplest RDBMS, like SQLite, has thousands of man-hours of work invested in it. If you were to bring your files into Django or any other framework, you would need to write a custom backend for them.
What to Do
Create Django models backed by an RDBMS and import the files into it. Alternatively, since this data appears to be mostly key-value pairs, you might be able to use MongoDB or Redis.
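For the key-value files, the model could be as small as this sketch (the field names are hypothetical):

    from django.db import models

    class JobInfo(models.Model):
        # One row per key-value line; source_file records which text file
        # the pair came from, so a job's state can be reassembled.
        source_file = models.CharField(max_length=255)
        key = models.CharField(max_length=100)
        value = models.TextField()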
You can use inotify to monitor the file system and detect when a new file has been created by the batch processing system. When that happens, you can invoke a Django CLI script to process that file and import its data into the database, as sketched below.
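A minimal sketch of that watcher, assuming the watchdog library (which wraps inotify on Linux) and a hypothetical import_job_file management command:

    import time

    import django
    from django.core.management import call_command
    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    django.setup()  # assumes DJANGO_SETTINGS_MODULE is set in the environment

    class JobFileHandler(FileSystemEventHandler):
        def on_created(self, event):
            if event.is_directory:
                return
            # "import_job_file" is a hypothetical management command that
            # parses one status/mgmt text file and writes rows to the DB.
            call_command("import_job_file", event.src_path)

    observer = Observer()
    observer.schedule(JobFileHandler(), "/path/to/job/output", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()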
The rest of it is a straightforward Django app.
I need to write an API which provides access to data served as HTML documents from a web server. My users need to be able to perform queries over the data.
Say a web site has a page which lists items and their owners, and an additional set of profile pages for the owners which provide each owner's reputation. An example query I may need to answer is "Give me the IDs and owners of all items submitted in 2013 whose owners have a reputation of at least 10".
Given a query to answer, I need to be able to screen-scrape only the parts of the web site needed to answer the query at hand, and ideally cache the obtained information for future use with new queries.
I have no problem writing the screen scraping part, but I am struggling with designing the storage/query/cache part. Is there something about Clojure/Datomic that makes it an especially suitable technology choice for this kind of processing of data? I have been pointed in this direction before.
It seems a nice challenge, but I'm not sure about a few things: a) would you like to expose a Datalog query box to your users, and so make them learn Datalog-like syntax? b) what exact kind of results do you wish to cache: raw DB responses, HTML-formatted text, JSON?
Anyway, I suggest you install and play a little with the Datomic Console to get a grasp of it if you haven't before, as it seems to me the closest thing to what you want to achieve at the moment: https://www.youtube.com/watch?v=jyuBnl0XQ6s http://blog.datomic.com/2013/10/datomic-console.html
For the API, I suggest http://clojure-liberator.github.io/liberator/, as it provides sane defaults for implementing REST services and lets you focus on your app's behaviour.
I'm aware of the WoW add-on programming community, but what I can find no documentation on is any API for accessing WoW's databases from the web. I see third-party sites like WoWHeroes.com and Wowhead use game data (item and character databases), so I know it's possible. But I can't figure out where to begin. Is there a web service I can use, or are they doing some sort of under-the-hood work that requires running the WoW client in their server environment?
Sites like Wowhead and WoWHeroes use client-run addons from players which collect data. The data is then posted to their website. There is no way to access WoW's database directly. Your best bet is to hit the Armory and extract the XML returned from your searches. The Armory is just an XSL transform applied to the XML data returned.
Blizzard has recently (8/15/2011) published draft documentation for their RESTful APIs at the following location:
http://blizzard.github.com/api-wow-docs/
The APIs cover information about characters, items, auctions, guilds, PVP, etc.
Requests to the API are currently throttled to 3,000 per day for anonymous usage, but there is a process for registering applications that have a legitimate need for more access.
Update (January 2019): The new Blizzard Battle.net Developer Portal is here:
https://develop.battle.net/
Request throttling limits and authentication rules have changed.
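As a rough sketch of the current flow (the endpoint URLs, the namespace parameter and the example item ID are my reading of the portal docs; verify them against the current documentation), you register a client on the portal, exchange its credentials for a token, and call the API with it:

    import requests

    CLIENT_ID = "your-client-id"          # issued by the developer portal
    CLIENT_SECRET = "your-client-secret"

    # Exchange the client credentials for a short-lived access token.
    token = requests.post(
        "https://oauth.battle.net/token",
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),
    ).json()["access_token"]

    # Fetch an item (19019 is Thunderfury) from the game-data API.
    resp = requests.get(
        "https://us.api.blizzard.com/data/wow/item/19019",
        params={"namespace": "static-us", "locale": "en_US"},
        headers={"Authorization": f"Bearer {token}"},
    )
    print(resp.json()["name"])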
Characters can be mined from the Armory; the pages are XML.
Items are mined from the local game installation files; that's how Wowhead does it, at least.
It's actually really easy to get item data from the wow armory!
For example:
http://www.wowarmory.com/item-info.xml?i=33135
View the source of the page (not via Google Chrome, which displays transformed XML via XSLT) and you'll see the XML data!
You can use the search listing pages to retrieve, for example, all blue gems, then use an XML parser to extract the data, as in the sketch below.
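For illustration, a minimal sketch of that scrape (the endpoint has long since been retired, and the element names are assumptions about the old armory's XML layout; the armory served raw XML only to browser-like User-Agent strings):

    import xml.etree.ElementTree as ET

    import requests

    # Browser-like User-Agent, since other clients got the XSLT-transformed
    # HTML instead of the raw XML.
    headers = {"User-Agent": "Mozilla/5.0 Gecko/20070515 Firefox/2.0.0.4"}
    resp = requests.get("http://www.wowarmory.com/item-info.xml?i=33135",
                        headers=headers)

    # "itemTooltip" and "name" are assumptions about the old XML layout.
    root = ET.fromstring(resp.content)
    tooltip = root.find(".//itemTooltip")
    if tooltip is not None:
        print(tooltip.findtext("name"))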
They are parsing the Armory information from www.wowarmory.com. There is no official Blizzard API for accessing it, but there is an open-source PHP solution available (http://phparmory.sourceforge.net/).
Maybe a little late to the party, but for future reference, check out the WoW API documentation at http://blizzard.github.com/api-wow-docs/
Scraping HTML and XML is now pretty much obsolete and is also discouraged by Blizzard.
The documentation:
http://blizzard.github.com/api-wow-docs/
enjoy
Sites like those actually get the data from the Armory. If you pull up any item, guild, character, etc. and do 'View Source' on the page, you will see the XML data coming back. Here is a quick C# example of how to get the data.
These third-party sites collect data from players. I think the collection is based on WoW addons, or each player submits information manually.
The other option is scraping the WoW site and parsing the information from the HTML.
This is probably the wrong site for your question, but you're thinking of the WoW Armory XML data. There is no official WoW API; people just make HTTP requests and get the XML back to do their number crunching. Try googling around; there are libraries out there in different languages already written for you. I know there are implementations in PHP and Ruby; I was working on one in .NET a while back until I got distracted. Here's an article which kind of sums this all up:
http://www.wow.com/2008/02/11/mashing-up-wow-data-when-we-can-get-it-in-outside-applications/
Wowhead and other sites generally rely on data gathered by users with a WoW add-on.
Wowhead also has a way for other sites to reference that data in hover pop-ups, so their content gets reused on a number of sites.
Powered by Wowhead
For actual in-game data collection:
cosmos.exe is what Thottbot, for example, uses. It probably uses some form of Windows hack (DLL injection or something) or sniffs packets to determine what items have dropped, etc. (intercepting traffic from the WoW server to your client and decoding it). It saves this data on the user's computer and then uploads it to a webserver for storage. I don't know if any development libraries were created for this sort of thing.