.NET get data from printers which have www interface - c++

I would like to get data about ink, free pages (paper), etc. from the printers on our company network. Each of these printers (mostly Minolta) has a web interface, so I can get this data by creating a browser process in my program, directing it to the address "http://192.168.X.YY/data.htm", downloading the page source, and extracting the data from it. Is this possible without that browser process? If I know the data is available at each printer's IP under /data.htm, can I use that to download the data in a different way: sockets, FTP, etc.?
In general: if some data is available on a website (with no database access, obviously), how do you retrieve that data?

From the looks of it, your printer provides REST-based services. You can use libcurl to make REST-based API calls. (This holds for most websites as well!)
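For what it's worth, a minimal libcurl sketch of that idea might look like the following. The address is the one from the question, and parsing the ink/paper values out of the returned HTML is left as a placeholder:

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append the received bytes to a std::string.
static size_t append_to_string(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl)
        return 1;

    std::string body;
    // Address taken from the question; substitute a real printer IP.
    curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.X.YY/data.htm");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, append_to_string);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK)
        std::cout << body << '\n';   // extract the ink/paper values from the HTML here
    else
        std::cerr << "request failed: " << curl_easy_strerror(res) << '\n';

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```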

It doesn't sound like it implements a REST interface from the description you provided.
It sounds like your idea is to scrape data from an HTML page. That's fine too, although it's somewhat fragile (e.g. could break with a printer firmware upgrade).
Anyway, you tagged the question with .NET, so if you want to use the .NET approach, you might want to look at creating a WebClient and parsing the data returned from its DownloadString method.


Tracking offline data with xAPI

I would like to download a course and work on it offline. How can I track my results?
I would like to record all my progress (slides viewed, quiz results, time spent on each piece of content, ...), for example by saving it to a file or a database, and then generate statements to send to an LRS when I'm online.
Could someone explain how I can do that?
With Tin Can, statements (commonly including information about the student (actor), what they did, objectives, status, etc.) are posted to an endpoint. Depending on how the content is written, it may or may not fail over to some alternative. If it's a native application, I suspect you'll have limited ability to intercept these statements. If it's an HTML course, you may be able to locate where the content attempts to post these statements and redirect them to local storage or some other SQL/NoSQL option. Ultimately, it will depend on what content you're attempting to run and what kind of control you have over it. Based on what I know, the content itself would have to detect that it's offline and store the statements until it is back online. Similar to this post - How tin-can-api works offline?
SCORM ultimately doesn't work like Tin Can. The LMS exposes a JavaScript API, and the HTML-based content locates it in the DOM using JavaScript. The content then makes get and set calls against it. The LMS is the one responsible for committing this information to a server, or persisting the data in some other fashion. That doesn't stop content developers from creating new and alternative ways to persist data if the LMS is not present. This type of content is probably easier to intercept, since you can act as the LMS in this situation and expose that API for the content to use. In an offline situation you'd just have to manage the student attempts and then, once online, sync them with your server.

Can I make an unbuffered query in ColdFusion?

I'm in the process of porting a Java desktop application to a ColdFusion web app. This desktop app made queries with very large result sets (thousands of text records) that, while being all right on the database side, could take a lot of memory on the client side if they were buffered. For this reason, the app explicitly tells the database driver to not buffer results too much.
Now that I'm working on the ColdFusion port, I'm being hit by the buffering problem. The ColdFusion page times out during the <cfquery> call, and I'm fairly sure this is because it tries to buffer everything.
Can I make an unbuffered query in ColdFusion?
If pagination is not an option (e.g., you're writing out a report), then you'll have to get low-level with the Java, using setFetchSize(). See this answer. Note that the code in that answer uses the DataSourceService which, with the latest security patches from Adobe, is no longer available on CF8. You'll have to figure out how to get a connection via the admin API, or create a connection outside of ColdFusion. Or you could transition your datasource to use JNDI, and then you can look up the resource yourself without using CF APIs.
I'm almost certain that ColdFusion does not provide such a mechanism. As a language, it was meant to abstract the developer away from things like that.
I'd suggest that you look into a few options:
Re-work your query to use pagination, and run it in a loop.
Use the timeout attribute on the <cfquery> to prevent timeouts from happening
Use the CreateObject() syntax to instantiate a JDBC database connection.
With the last option, what you'd actually do is access the underlying Java classes to run the query and fetch the results. Take a look at this article for a quick look at the CreateObject() function.
You can also look at the Adobe Livedocs for the function, but they don't really seem helpful.
I haven't tried to use CreateObject() to do querying with the Java database access classes, but I imagine that you can probably get it to work.

Free service that allows storing game data online?

I have created a small game in Java and I would like to add the ability for a player to publish his highscores online.
I'm willing to write the server software myself (it's easy these days with Ruby Mongrel, or even C++). I just need some sort of hosting. One solution that immediately comes to mind is Amazon EC2, but that's kind of expensive for my needs. Since the requirements are very minimal (I don't even need a website, just a web service) I think there may be a cheaper solution out there.
Does anyone know of a free or cheap provider for this kind of thing?
Update
For those interested, this is the solution I came up with:
- a Slicehost VPS
- a purchased domain name
- a C++ HTTP server built on the Poco HTTPServer, using an SQLite database via Poco Data (a minimal sketch of this setup follows the list)
- the server implements a REST API supporting:
  - a high score table:
    - /hs - content type deduced from the Accept header
    - /hs.xml - forces XML
    - /hs.txt - forces plain text
    - /hs/add - HTML form, does a POST using XMLHttpRequest
  - /hof - Hall of Fame, content type deduced from the Accept header
    - /hof.txt - forces plain text
    - /hof.xml - forces XML
- the game: my own Tetris clone written in Clojure
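In case it helps anyone, a stripped-down sketch of the Poco side looks roughly like this. The handler and factory names, the port, and the hard-coded scores are made up for illustration; the real server reads from SQLite via Poco Data and does the content-type negotiation described above:

```cpp
#include <Poco/Net/HTTPServer.h>
#include <Poco/Net/HTTPServerParams.h>
#include <Poco/Net/HTTPRequestHandler.h>
#include <Poco/Net/HTTPRequestHandlerFactory.h>
#include <Poco/Net/HTTPServerRequest.h>
#include <Poco/Net/HTTPServerResponse.h>
#include <Poco/Net/ServerSocket.h>
#include <iostream>

using namespace Poco::Net;

// Serves the high score table as plain text (the /hs.txt variant above).
class HighScoreHandler : public HTTPRequestHandler
{
public:
    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response) override
    {
        response.setContentType("text/plain");
        std::ostream& out = response.send();
        // Placeholder data; the real server would query SQLite via Poco Data here.
        out << "1. AAA 12500\n2. BBB 9800\n";
    }
};

// Maps request URIs to handlers.
class HandlerFactory : public HTTPRequestHandlerFactory
{
public:
    HTTPRequestHandler* createRequestHandler(const HTTPServerRequest& request) override
    {
        if (request.getURI() == "/hs.txt")
            return new HighScoreHandler;
        return nullptr; // Poco replies with an error for URIs we don't handle
    }
};

int main()
{
    ServerSocket socket(8080);
    HTTPServer server(new HandlerFactory, socket, new HTTPServerParams);
    server.start();
    std::cout << "High score server listening on port 8080; press Enter to stop.\n";
    std::cin.get();
    server.stop();
    return 0;
}
```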
Something like Slicehost or any other small-scale VPS provider could probably work. You might even be able to write it as a small app and publish it on Google App Engine, which is free up to a certain point.
Google App Engine comes to mind: http://code.google.com/appengine/

Accessing World of Warcraft data from the web

I'm aware of the WoW add-on programming community, but I can find no documentation on any API for accessing WoW's databases from the web. I see third-party sites like WoWHeroes.com and Wowhead use game data (item and character databases), so I know it's possible. But I can't figure out where to begin. Is there a web service I can use, or are they doing some sort of under-the-hood work that requires running the WoW client in their server environment?
Sites like Wowhead and WoWHeroes use client-run add-ons from players which collect data. The data is then posted to their website. There is no way to access WoW's database. Your best bet is to hit the armory and extract the XML returned from your searches. The armory is just an XSLT transform applied to the returned XML data.
Blizzard has recently (8/15/2011) published draft documentation for their RESTful APIs at the following location:
http://blizzard.github.com/api-wow-docs/
The APIs cover information about characters, items, auctions, guilds, PVP, etc.
Requests to the API are currently throttled to 3,000 per day for anonymous usage, but there is a process for registering applications that have a legitimate need for more access.
Update (January 2019): The new Blizzard Battle.net Developer Portal is here:
https://develop.battle.net/
Request throttling limits and authentication rules have changed.
Characters can be mined from the armory; the pages are XML.
Items are mined from the local installation's game files; that's how Wowhead does it, at least.
It's actually really easy to get item data from the wow armory!
For example:
http://www.wowarmory.com/item-info.xml?i=33135
View the source of the page (not via Google Chrome, which displays transformed XML via XSLT) and you'll see the XML data!
You can use the search listing pages to retrieve all blue gems, for example, then use an XML parser to retrieve the data.
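As a rough sketch of that parsing step (sticking with the C++/Poco tooling mentioned elsewhere on this page), you could walk the returned XML with Poco's DOM parser. Note that the item element and its "name"/"quality" attributes here are hypothetical; check the structure of the XML the armory actually returns:

```cpp
#include <Poco/DOM/DOMParser.h>
#include <Poco/DOM/Document.h>
#include <Poco/DOM/Element.h>
#include <Poco/DOM/NodeList.h>
#include <Poco/AutoPtr.h>
#include <iostream>
#include <string>

int main()
{
    // In practice this string would be the body of the HTTP response
    // from the armory (fetched with libcurl, WebClient, or similar).
    std::string xml = "<items><item name=\"Example Gem\" quality=\"3\"/></items>";

    Poco::XML::DOMParser parser;
    Poco::AutoPtr<Poco::XML::Document> doc = parser.parseString(xml);

    // Walk every <item> element and print the attributes we care about.
    Poco::AutoPtr<Poco::XML::NodeList> items = doc->getElementsByTagName("item");
    for (unsigned long i = 0; i < items->length(); ++i)
    {
        auto* el = static_cast<Poco::XML::Element*>(items->item(i));
        std::cout << el->getAttribute("name")
                  << " (quality " << el->getAttribute("quality") << ")\n";
    }
    return 0;
}
```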
They are parsing the Armory information from www.wowarmory.com. There is no official Blizzard API for accessing it, but there is an open source PHP solution available (http://phparmory.sourceforge.net/)
Maybe a little late to the party, but for future reference check out the WoW API Documentation at http://blizzard.github.com/api-wow-docs/
Scraping HTML and XML is now pretty much obsolete and also discouraged by Blizzard.
The documentation:
http://blizzard.github.com/api-wow-docs/
enjoy
Sites like those actually get the data from the Armory. If you pull up any item, guild, character, etc. and do 'View Source' on the page you will see the XML data coming back. Here is a quick C# example of how to get the data.
These third-party sites collect data from players. I think the collection is based on add-ons for WoW, or on each player submitting information manually.
The other option is scraping the WoW site and parsing the information from the web pages (HTML).
This is probably the wrong site for your question, but you're thinking of the wowarmory XML stuff. There is no official WoW API. People just make HTTP requests and get the XML back to do their number crunching. Try googling around; there are some libraries out there in different languages that are already written for you. I know there are implementations in PHP/Ruby. I was working on one in .NET a while back until I got distracted. Here's an article which kind of sums this all up:
http://www.wow.com/2008/02/11/mashing-up-wow-data-when-we-can-get-it-in-outside-applications/
Wowhead and other sites generally rely on data gathered by users with a WoW add-on.
Wowhead also has a way for other sites to reference that data in hover pop-ups, so their content gets reused on a number of sites.
Powered by Wowhead
For actual ingame data collection:
cosmos.exe is what Thottbot, for example, uses. It probably uses some form of Windows hack (DLL injection or something) or sniffs packets to determine what items have dropped, etc. (it intercepts traffic from the WoW server to your client and decodes it). It saves this data on the user's computer and then uploads it to a web server for storage. I don't know if any development libraries were created for this sort of thing.

Use cookies without sending them back to the server

I need a way to stash some data that is global to the browser. If I open a new window with a URL from my app, e.g. via a bookmark, I need to access some data that was created in another window and never sent to the server.
As far as I can tell, the only thing that is global to the browser, and not just to a window (like window.name), is a cookie. The problem I'm running into is that if I set a cookie, it is then sent with every request to the server, but I don't ever want this data on the wire. Is there any way to set a cookie and just use it purely as a bucket for storing some data and never send that data to the server?
The HTML 5 storage API looks like exactly what you want here, but unfortunately it's only supported by a handful of browsers right now.
Is there any way to set a cookie and just use it purely as a bucket for storing some data and never send that data to the server?
No.
You'll need to look into a plugin that provides a dedicated offline storage facility, or use the HTML5 storage API and tell everyone to upgrade their browsers.
If you decide to go the plugin route, as far as I am aware you have 3 options:
Google Gears
Flash - it has an offline storage facility - you could write a small flash app to store things using this facility, then interop with it from javascript.
Silverlight also has offline storage - as with flash you could write a small app to do the storage, then interop with it from javascript.
I'd probably look into using flash first, as everyone already has it.
Development would likely be a lot easier if you were to use silverlight. It's not as widely installed, but it is spreading pretty rapidly. Last I heard* something like 30% of browsers had it installed which is pretty impressive.
Google gears would unfortunately be a distant third. People are going to be installing flash and silverlight for other reasons, but nobody has gears.
*This is an entirely unsubstantiated quote, but does seem to fit with what I've seen on various people's computers, etc.
Can you mandate that your users install Google Gears? It's a JavaScript API that lets you store information locally; it also lets you persist data between sessions, which may be useful for your app.
Why not just read a field in the parent window using window.opener? Or, if you have three windows running - a parent and two children, which I think you might be implying - read/write a hidden field in the parent from the children.
It sounds like your app is running 100% locally; if that is the case, the browser isn't the way to go anyway. Cookies can easily be deleted. If your app isn't local, the web server should be the one supplying the information. Cookies are never the correct way to store sensitive information, or information that should persist for longer periods of time.