I'm working on the tile_map plugin of Mapviz. The tile plugin loads images (tiles) from the servers using a web request. As I have already downloaded all the images to my hard drive, I'm trying to remove the web request from the code so it won't use network access. The plugin uses QNetworkAccessManager. What would be the recommended way/methods to replace the NetworkAccessManager?
Greets
QNetworkAccessManager is the recommended way (and the only way I know of that is supported by Qt) of accessing resources over HTTP. The API is made the way it is for performance reasons: it hides implementation details, conserves power, and allows for the kinds of optimizations that are available in HTTP without you having to do anything special.
If you have all the files locally, I would simply wrap the code that uses QNAM so that it looks for and prefers the local copy (possibly while keeping a copy in memory too, for performance). So it would cache like this:
memory-copy > disk-copy > network-copy
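For illustration, here is a minimal sketch of that wrapper idea in Qt. The class name TileFetcher, the helper tilePathForUrl(), and the cache size are all hypothetical, not part of Mapviz; the point is only the lookup order:

    #include <QCache>
    #include <QFile>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QObject>
    #include <QUrl>

    class TileFetcher : public QObject {
        Q_OBJECT
    public:
        explicit TileFetcher(QObject* parent = nullptr)
            : QObject(parent), memoryCache_(50 * 1024 * 1024) {}  // ~50 MB of tiles, arbitrary

        // Resolve a tile in cache order: memory > disk > network.
        void fetch(const QUrl& url) {
            const QString key = url.toString();

            // 1. Memory copy
            if (QByteArray* cached = memoryCache_.object(key)) {
                emit tileReady(url, *cached);
                return;
            }

            // 2. Disk copy
            QFile file(tilePathForUrl(url));
            if (file.open(QIODevice::ReadOnly)) {
                const QByteArray data = file.readAll();
                memoryCache_.insert(key, new QByteArray(data), data.size());
                emit tileReady(url, data);
                return;
            }

            // 3. Network copy -- only reached when no local file exists
            QNetworkReply* reply = network_.get(QNetworkRequest(url));
            connect(reply, &QNetworkReply::finished, this, [this, url, reply] {
                const QByteArray data = reply->readAll();
                memoryCache_.insert(url.toString(), new QByteArray(data), data.size());
                emit tileReady(url, data);
                reply->deleteLater();
            });
        }

    signals:
        void tileReady(const QUrl& url, const QByteArray& data);

    private:
        // Hypothetical: maps a tile URL to wherever the tiles were saved on disk.
        QString tilePathForUrl(const QUrl& url) const {
            return QStringLiteral("/path/to/tiles/") + url.host() + url.path();
        }

        QCache<QString, QByteArray> memoryCache_;
        QNetworkAccessManager network_;
    };

With something like this, the QNetworkAccessManager branch is only reached when a tile is missing from disk. Qt also ships QNetworkDiskCache (attached via QNetworkAccessManager::setCache()), and setting QNetworkRequest::CacheLoadControlAttribute to QNetworkRequest::AlwaysCache on a request makes QNAM serve it from cache only, never from the network.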
TIP: I found this PDF to be really good for explaining how to use QNAM in the best way.
I tried to find the best way to store data in a local persistent store, but I did not find many resources about this.
I found only:
MotionModel
But what is the best gem/way to make an offline app? I mean, I sync with the remote once, and after that my application uses local storage (Core Data, SQLite...) to read data?
Thank you
I use MotionModel (heck, I wrote MotionModel) but I'm biased. It's supposed to be for use-cases where you don't want to set up the Core Data stack. That said, InfiniteRed has done a great job with RMQ so it's likely they did a great job with CDQ, which wraps Core Data.
I suggest you make a play app with each and decide for yourself.
I prefer the way HipByte suggests in the Locations sample.
Check the LocationsStore class, and how they use Core Data in a very simple way.
You could also use CouchbaseLite and leverage its sync capabilities to make the data available offline. I created a CouchbaseLite RubyMotion Example, which is a port of the TodoLite-iOS version of the app. I'm currently working on making the integration nicer and more Ruby-like, but it works as is.
I've been assigned a project which requires me to add some HTML page serving. This embedded system (running Linux CentOS 6.3) has some extra juice available, but also already has numerous responsibilities.
I considered Apache but tossed it due to bloat; I looked into Nginx but am now shying away from that too. It just seems that I'm getting way more 'functionality', and as a result more CPU usage, than I need.
Can someone enlighten me as to why I wouldn't just implement the HTTP protocol myself using async sockets?
My specific needs are:
Receive and decode GETs and POSTs.
Send CSS, JS and JPG files as requested.
Output header, cookie, head and body data based upon the decode of the GETs/POSTs.
Given that I don't need the myriad things these webservers offer, am I being naive in assuming this course of doing it myself? What would you suggest or warn against?
Basically, you use a web server because then you get the functionality you want in a form that's already been tested, is more reliable than your first code is likely to be, and is supported by a large community of others. If Apache and nginx are too heavyweight for you (although nginx is pretty much characterized by how lightweight it is for heavy loads) and especially if the load you expect is very light, then look around for other options.
Wikipedia has a whole page of comparisons of lightweight web servers.
An easy trap to fall into: thinking "I don't need all the functionality in Product X, I'll just write my own with just the functionality that I need" only to end up reimplementing Product X entirely, one newly-discovered requirement at a time.
I sort of doubt that an embedded system that can run CentOS okay is so resource-starved that it can't run Nginx comfortably (or even Apache, which people run on the Raspberry Pi just fine with appropriate configuration tweaks), given reasonable assumptions about how many pages you are actually serving. I ran it on a Pentium 266 with something like 256MB of RAM serving a few simple PHP apps that served roughly a page every two seconds, with no issues. As I recall, it's fairly modular, so you can just choose not to load the functionality you don't think you need. And, later, when your requirements change and you find out you do need it, you can just plug it back in :)
If you are really and truly concerned about resource consumption, look into web servers designed for embedded applications. I hear Cherokee is quite nice. Mongoose looks promising as well.
You can go further; I began with this: http://www.w3.org/Protocols/HTTP/HTTP2.html
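For a sense of scale, here is a toy sketch of roughly the bare minimum such a server needs, using blocking POSIX sockets; it is an illustration, not the async design you describe. Everything it leaves out - actually decoding GETs and POSTs, cookies, content types, partial reads, timeouts, malformed input, concurrency - is the gap the other answers are warning about:

    // Toy blocking responder (sketch only): answers every request with the
    // same page. A real server must parse the request line and headers,
    // handle POST bodies, cookies, MIME types, timeouts, and bad input.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    int main() {
        int server = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;   // listen on all interfaces
        addr.sin_port = htons(8080);
        bind(server, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(server, 16);

        for (;;) {
            int client = accept(server, nullptr, nullptr);
            if (client < 0) continue;

            char request[4096];
            ssize_t n = recv(client, request, sizeof(request) - 1, 0);
            if (n > 0) {
                request[n] = '\0';  // request line + headers land here, unparsed
                const std::string body = "<html><body>Hello</body></html>";
                const std::string response =
                    "HTTP/1.1 200 OK\r\n"
                    "Content-Type: text/html\r\n"
                    "Content-Length: " + std::to_string(body.size()) + "\r\n"
                    "Connection: close\r\n\r\n" + body;
                send(client, response.data(), response.size(), 0);
            }
            close(client);
        }
    }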
I've been looking into centralising my computer game saves to make them easier to back up and restore, as well as putting them up on the cloud via Dropbox, but they're in so many places that it makes it quite difficult. I noticed that Windows 7 and Vista now support symbolic links, so I've been playing around with that, but I was wondering the following:
Is it possible (code example or a point in the right direction) for an application (vb.net or C++) to spoof a file or folder?
E.g. Application A (a game like Diablo III or Civilization V) attempts to read or write file A (the game save); Application B (the save repository) detects this read/write request and pipes the request through itself, performing the request on file B (the actual game save in another location). Application A is in no way altered and treats the file normally.
Note: I realise there are many simple ways of performing essentially the same task, such as monitoring the use of Application A, or periodically checking file A and copying it if it has been altered since the last check, etc., but all these methods have drawbacks, and I'm less interested in making it work than in whether it is possible.
It is entirely possible to do this through a file system filter driver. For information about these, take a look here:
http://msdn.microsoft.com/en-us/windows/hardware/gg462968
Filter drivers can hook into CreateFile operations and redirect the create to a different place if you want, but they are much harder to write as compared to normal applications. They run in kernel mode and must obey the limitations of drivers.
You can "fake" special folders, like control panel does, but I don't think you can create anything accessible/writeable (in an easy way). I might be wrong though. I had the same idea once too (as a compatibility step for some company stuff), but couldn't find anything supporting an easy way to do it. It seems like it might be easier to be done on Unix systems (but that's obviously no option here). Also, I wouldn't expect any nice or easy solutions for .net.
The only approach I can think of right now would be hijacking the relevant API calls (e.g. FileOpen) to reroute/manipulate them (similar to what rootkits do), but I wouldn't say that's a good idea, considering it might be detected as malware or a cheat by things like PunkBuster or antivirus solutions.
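To make that concrete, here is a rough sketch of the hooking idea using Microsoft Detours (one common user-mode hooking library, named here only as an example). It would be built as a DLL and injected into Application A; the redirect rule is a hypothetical placeholder:

    // Sketch of hooking CreateFileW with Microsoft Detours. Built as a DLL
    // and injected into the target process.
    #include <windows.h>
    #include <detours.h>
    #include <cwchar>

    static HANDLE(WINAPI* TrueCreateFileW)(LPCWSTR, DWORD, DWORD,
        LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

    // Hypothetical rule: anything under a "GameSaves" folder is served
    // from the repository's copy instead.
    static LPCWSTR RedirectSavePath(LPCWSTR original) {
        if (original && wcsstr(original, L"\\GameSaves\\"))
            return L"D:\\SaveRepo\\redirected.sav";
        return original;
    }

    static HANDLE WINAPI HookedCreateFileW(LPCWSTR path, DWORD access,
        DWORD share, LPSECURITY_ATTRIBUTES sa, DWORD disposition,
        DWORD flags, HANDLE tmpl) {
        // Swap the path before the real CreateFileW ever sees it.
        return TrueCreateFileW(RedirectSavePath(path), access, share, sa,
                               disposition, flags, tmpl);
    }

    BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID) {
        if (reason == DLL_PROCESS_ATTACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
            DetourTransactionCommit();
        } else if (reason == DLL_PROCESS_DETACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourDetach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
            DetourTransactionCommit();
        }
        return TRUE;
    }

The caveat above stands: anti-cheat and antivirus software tends to treat exactly this pattern as hostile.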
Yes or no depending on (using your terms) the level of abstraction that Application A is using.
If Application A is performing a CreateFile to start access and passing a fixed filesystem path, then Application B would need to emulate a file system, and do so in the kernel.
On the other hand, if Application A were to use HTTP with RESTful URLs, then the HTTP server could answer all requests from files or by dynamically creating the content.
So the question can only be answered specifically by knowing the details of Application A.
How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?
It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use something like ADO.NET. This is more of an async/services infrastructure. So if your data was exposed via services, then of course you could connect that way. There are some other light-weight methods you could use for local storage as well using things like the Windows.Storage namespace (which is similar to Isolated Storage in .NET).
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web. Why don't you configure your WCF service to return data as JSON, and use the DataContractJsonSerializer to deserialize the results?
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on codeplex is a wrapper to communicate with the classic SQLite engine...which uses APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRTFile Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and other REST clients - but this only addresses the online scenario. The "structured" part of Windows.Storage is a very limited model, essentially limited to name/value pairs, insufficient for all but the most basic scenarios. Yes there is local file storage, which is great of course. But forcing every app developer out there to build her own DBMS on top of local file storage will simply not cut it, especially with all of System.Data having been removed from the profile. If local file storage was sufficient for most device apps, then things like SQLCE would have no purpose today already. And SQLCE clearly has a purpose, and has played a very important role for occasionally connected device apps for a very long time.
There is also a tremendous need for synchronization with a server-side database such as SQL Azure, mostly to be able to roam data between devices. Yes there is the roaming storage model in WinRT, but it shares the same limitations of local storage mentioned above, and on top of this is very limited in capacity (currently 30KB if memory serves). It is simply insufficient for all but the simplest roaming data needs. Again, forcing every app developer to design and implement her own synchronization solution is very bad. You can do much better to enable developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for the moment they want to force you to abstract to middleware for database connectivity - I don't agree, but I can quasi understand a rationale for that. I can even see pathways for developing like that. But no System.Data.... at all? Do you even understand what you've done to us?
What System.Data can do, outside of just having providers for Sql, OleDb and other custom providers like Oracle, is provide a rich abstraction of XML datasets that allow you to very quickly build a data oriented Service Oriented Architecture.
For instance, I can easily create a web service using SOAP or WCF that returns DataSets or DataTables, and then consume those objects easily and directly. Being able to do this allows very rapid construction of n-tier architectures, even without direct data connections available.
Without System.Data, and the power of DataViews, DataTables, etc. this gets a lot harder. Sure you can custom create structs, put data in there, and serve up structs, and use Linq to do whatever sorting, filtering, etc. you want to do.... but it ends up being twice the work, and makes code reuse a lot harder. And it means using our existing service oriented architecture is impossible (without a big overhaul.)
The withdrawal of System.Data is as big a thing for developers to deal with as the loss of the Printer object in VB6 to vb.net 1.0 was. What is harder to understand in this case is why it is necessary - re-enabling it in the Metro profile can't possibly be a technical difficulty of the product, can it?
It is valuable enough that I would seriously consider including Mono's System.Data classes as part of any app I create (which would obviously have to be open source.)
I think that this is another of those "it depends" questions...
The first and most obvious issue is that whether the premise "Obviously...support...disconnected" is actually true depends very much on the context in which the application is running - if the app is an internal corporate app, then quite possibly not, and in that case no db == doesn't work.
Secondly, you could look (hmm, rash... one assumes you could look, which may be a bad assumption) at database synchronisation between a local SQL database and the remote db, and so on and so forth.
Taking a step back... yes, you're absolutely right: look at it as being the same as phone or Silverlight development (although I don't know if there is RIA support yet) - but the thing is, at this point it's very hard to be prescriptive, because given a general-purpose platform one can write applications to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read @Jim G's answer, it seems that I should probably withdraw mine?
I'm trying to put a workable plan together for a charity that could really make good use of a forum and a wiki, but a crucial part of its operations happen in parts of the world where dial-up connection dominates and probably will continue to do so for the foreseeable future.
This site was recommended as one that behaves well even on a dial-up connection, so I thought I'd ask for some help here!
The site I want to hook this on to is using Drupal. Anyone out there with experiences like this who could maybe help?
Behaving well on dial-up involves sitting down and optimizing your HTML, CSS, and images to be as small as possible, and then ensuring that your server is sending sane HTTP headers for caching. Make sure your CSS stylesheet is external, and shared across all pages. If dial-up is a major issue, you'll want to stick to a single stylesheet if possible. Avoid JavaScript, because those computers usually don't have the processing power for it either. If you must use JavaScript, jQuery is extremely small and very fast and highly recommended, but I suspect that for most content-oriented websites, it won't be necessary.
To be honest, if you produce valid XHTML/HTML5, valid CSS, and you follow all of the usual best practices for standards-based web design (no table layouts, semantic markup, etc), dial-up really won't be an issue. It'll just work.
To tweak the maximum performance out of your site, you might want to install YSlow and run it on your site when you are done with the initial development - it will analyse your pages and highlight all the areas you can improve. It's really a great tool for optimising site download speeds.
You should be able to accomplish this, but to be honest you are going to lose a lot in the way of user experience by creating a dial-up-friendly site. It basically means you have to do the following to optimize for the experience:
Keep JS to a minimum
Make sure the JS is minified.
Reduce large image requirements w/ CSS and some optimal planning of layout
Make sure caching is enabled in the headers so that files only get downloaded again when necessary (see the example headers below).
If you do all this, you should have a site that is acceptable on dialup.
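To make the caching point concrete, it means serving static assets (CSS, JS, images) with response headers along these lines (values are illustrative):

    HTTP/1.1 200 OK
    Content-Type: text/css
    Cache-Control: public, max-age=604800
    Last-Modified: Mon, 05 Mar 2012 10:00:00 GMT
    ETag: "styles-v42"

On later visits the browser sends If-Modified-Since / If-None-Match, and the server can reply 304 Not Modified with no body, which costs almost nothing over dial-up.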
There are already some hints on how to keep page sizes and load times down.
To complement this, you could use software that simulates limited bandwidth. This helps you test the speed of your site on dial-up.
There are several available (just google "simulate dialup").
Sloppy, for example, seems quite usable.
You could also do what Google does for Gmail, i.e. provide 2 versions of your view, one for slow connections that uses plain old HTML, and one for faster connections. You could make the default one the slow one, but provide a link to enable the faster one.
Gmail also has a built-in mechanism that detects when you load the page whether it's going fast or not and will automatically revert to the plain HTML view if it's too slow, which is another fancier alternative.
Your main goal should be minimum page size (keep only HTML in pages; all styling information should be externalized in CSS files for caching, and likewise JavaScript in JS files) and minimum round trips to the server (full requests and postbacks). Contrary to popular belief, a JS-heavy site can work like a charm if you do a lot of the heavy lifting client-side and keep the server round trips lean, with the minimum amount of data needed (think jQuery and AJAX here, with small partial renderings).
P.S. If you're using .NET, throw ViewState away.