We are considering developing a Flash front-end for a web application written in Django. The Flash front-end will send a simple "id" to the server and in response receive a couple of objects. The application will be open only to authenticated users.
To the extent of my current knowledge (which is basic where Flash is concerned), we can either use AMF or take an XML or JSON approach. AMF seems to have the upper hand, as there are examples out on the internet showing it can cooperate easily with Django's authentication mechanism (most examples feature pyAMF). On the other hand, implementing an XML/JSON-based solution may be easier and hassle-free.
Guidance will be much appreciated.
We've used pyAMF + Django on many projects here, and it's a breeze to set up and get running. If you need speed, AMF3 is probably your best bet. It's the smallest/fastest way to transfer data, and serialization is taken care of for you.
On the flip side, setting up JSON with Django isn't much work either, and it would give you the ability to hook other, non-AMF systems into it without any extra work. You just sacrifice a little speed for that benefit.
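For what it's worth, the JSON route can be as small as a single view. A minimal sketch, assuming a hypothetical Gig model and reusing Django's session authentication via login_required (the app, model, and field names here are placeholders, not from the question):

    # Minimal sketch of a JSON endpoint for the Flash client.
    # "Gig" and its fields are hypothetical stand-ins for your own model.
    import json

    from django.contrib.auth.decorators import login_required
    from django.http import HttpResponse

    from myapp.models import Gig  # hypothetical app/model


    @login_required  # the app is only open to authenticated users
    def gig_detail(request, gig_id):
        gig = Gig.objects.get(pk=gig_id)
        payload = {"id": gig.id, "name": gig.name, "date": str(gig.date)}
        return HttpResponse(json.dumps(payload), content_type="application/json")

The Flash side then just sends the id and decodes the JSON response.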
If you think you'll ever need other systems working with the backend, or if you think you might switch to an HTML-only version or offer some kind of non-Flash version of your app, I'd go with JSON; otherwise, I'd use AMF.
First of all, you should design your app in such a way that this doesn't matter. The transport layer should be completely encapsulated, leaving the encoding format transparent to the rest of the app.
Personally, I prefer JSON to AMF because it's human-readable (which makes debugging easier) and there are implementations for every platform/language (so you can reuse the server part with JavaScript, for example). And I prefer JSON to XML because it's more compact, less ambiguous, and closer to common object models. It can also transport numerical and boolean data in a type-safe manner.
JSON will probably have the least complications, and there's a great Google Code project that has JSON encoders and decoders here: http://code.google.com/p/as3corelib/
I am pulling a good amount of data from a Java application using a web service. The data is a bit complex in its structure, with a lot of hierarchy using array collections. I am experiencing a huge performance issue of around 15 seconds (on JBoss and WebSphere) to get the data loaded. Most of the time is spent converting the service data into the Flex object structure. The issue gets worse when moving to the WebLogic application server. I am using the Axis2 framework.
Is there any way to optimize this? What alternative technologies could I use instead of web services?
I'm afraid you may not like my answer, because it will involve a lot of refactoring. I can't think of any easy fixes.
What alternative technologies could I use instead of web services?
You'll get the best performance by using AMF remoting instead of web services. Here's an article that explains what it is and contains a benchmark showing that this could easily cut your response time in half: http://www.themidnightcoders.com/products/weborb-for-net/developer-den/technical-articles/amf-vs-webservices.html. And that benchmark is using .NET on the server side. It'll work even better with a Java server.
Is there any way to optimize this?
You should consider refactoring the objects you pass to the client into "Data Transfer Objects" (DTOs). These are simple value objects that contain only the data necessary for the client to display. That means less time spent transferring data from the server to the client and less time spent converting objects to ActionScript classes.
How can you limit the work involved?
You could add a layer on the server side that calls your existing web services, converts the complex data into simple DTOs, and delivers them to the client through AMF services. That way you can leave your existing code intact and still get a significant performance boost.
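The DTO idea itself is language-agnostic; here's a rough sketch of the flattening step (shown in Python purely for brevity, with entirely made-up field names):

    # Illustration only: flatten a rich domain object into a flat DTO
    # containing just the fields the client actually displays.

    class EventDTO:
        """Flat value object sent over the wire instead of the full graph."""
        def __init__(self, id, title, venue_name, start_time):
            self.id = id
            self.title = title
            self.venue_name = venue_name
            self.start_time = start_time


    def to_dto(event):
        # "event" is the rich object returned by the existing service;
        # copy out only what the UI shows and drop the nested collections.
        return EventDTO(
            id=event.id,
            title=event.title,
            venue_name=event.venue.name,      # collapse nested object to a string
            start_time=event.schedule.start,  # collapse nested object to a value
        )

In the Java/Flex case the equivalent would be a plain bean with only these fields, mapped to a matching ActionScript value object.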
I am writing a C++ API which is to be used as a web service. The functions in the API take in images/path_to_images as input parameters, process them, and give a different set of images/paths_to_images as outputs. I was thinking of implementing a REST interface to enable developers to use this API for their projects (independent of whatever language they'd like to work in). But, I understand REST is good only when you have a collection of data that you want to query or manipulate, which is not exactly the case here.
[The collection I have is of different functions that manipulate the supplied data.]
So, is it better for me to implement an RPC interface for this, or can this be done using REST itself?
Like lcfseth, I would also go for REST. REST is indeed resource-based and, in your case, you might consider that there's no resource to deal with. However, that's not exactly true: the image converter in your system is the resource. You POST images to it and it returns new images. So I'd simply create a URL such as:
POST http://example.com/image-converter
You POST images to it and it returns an array with the paths to the new images.
Potentially, you could also have:
GET http://example.com/image-converter
which could tell you about the status of the image conversion (assuming it is a time consuming process).
The advantage of doing it like that is that you are re-using HTTP verbs that developers are familiar with, and the interface is almost self-documenting (though of course you still need to document the format accepted and returned by the POST call). With RPC, you would have to define new verbs and document them.
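To make that concrete, here's a rough client-side sketch (Python with the requests library; the URL, form field name, and response shape are assumptions for illustration, not part of any existing API):

    # Rough sketch of calling the hypothetical image-converter resource.
    import requests

    with open("input.png", "rb") as f:
        response = requests.post(
            "http://example.com/image-converter",
            files={"image": f},  # upload the source image as multipart/form-data
        )

    response.raise_for_status()
    result = response.json()  # e.g. {"images": ["/results/1.png", ...]}
    print(result["images"])

Any language with an HTTP client can make the same call, which is part of the appeal of exposing the converter as a plain HTTP resource.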
REST uses the common HTTP operations GET, POST, DELETE, HEAD, and PUT. As you can imagine, this is very data-oriented. However, there is no restriction on the data type and no restriction on the size of the data (none I'm aware of, anyway).
So it's possible to use it in almost every context (including sending binary data). One of the advantages of REST is that web browsers understand it, so your users won't need a dedicated application to send requests.
RPC presents more possibilities and can also be used. You can define custom operations for example.
Not sure you need that much power given what you intend to do.
Personally I would go with REST.
Here's a link you might wanna read:
http://www.sitepen.com/blog/2008/03/25/rest-and-rpc-relationship/
Compared to RPC, REST (with a JSON-style interface) is lightweight and easy for API users to work with. RPC (SOAP/XML) seems complex and heavy.
I guess that what you want is an HTTP+JSON-based API, not the REST API as described by REST's author:
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?
It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use
something like ADO.NET. This is more of an async/services
infrastructure. So if your data was exposed via services, then of
course you could connect that way. There are some other light-weight
methods you could use for local storage as well using things like the
Windows.Storage namespace (which is similar to Isolated Storage in
.NET).
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web.
Why don't you configure your WCF service to return data as JSON, and
use the DataContractJsonSerializer to deserialize the results?
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on codeplex is a
wrapper to communicate with the classic SQLite engine...which uses
APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRTFile Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and
other REST clients - but this only addresses the online scenario. The
"structured" part of Windows.Storage is a very limited model,
essentially limited to name/value pairs, insufficient for all but the
most basic scenarios. Yes there is local file storage, which is great
of course. But forcing every app developer out there to build her own
DBMS on top of local file storage will simply not cut it, especially
with all of System.Data having been removed from the profile. If local
file storage was sufficient for most device apps, then things like
SQLCE would have no purpose today already. And SQLCE clearly has a
purpose, and has played a very important role for occasionally
connected device apps for a very long time. There is also a tremendous
need for synchronization with a server-side database such as SQL
Azure, mostly to be able to roam data between devices. Yes there is
the roaming storage model in WinRT, but it shares the same limitations
of local storage mentioned above, and on top of this is very limited
in capacity (currently 30KB if memory serves). It is simply
insufficient for all but the simplest roaming data needs. Again,
forcing every app developer to design and implement her own
synchronization solution is very bad. You can do much better to enable
developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for
the moment they want to force you to abstract to middleware for
database connectivity - I don't agree, but I can quasi understand a
rationale for that. I can even see pathways for developing like that.
But no System.Data.... at all? Do you even understand what you've done
to us?
What System.Data can do, outside of just having providers for Sql,
OleDb and other custom providers like Oracle, is provide a rich
abstraction of XML datasets that allow you to very quickly build a
data oriented Service Oriented Architecture.
For instance, I can easily create a web service using SOAP or WCF that
returns DataSets or DataTables, and then consume those objects easily
and directly. Being able to do this allows very rapid construction of
n-tier architectures, even without direct data connections available.
Without System.Data, and the power of DataViews, DataTables, etc. this
gets a lot harder. Sure you can custom create structs, put data in
there, and serve up structs, and use Linq to do whatever sorting,
filtering, etc. you want to do.... but it ends up being twice the
work, and makes code reuse a lot harder. And it means using our
existing service oriented architecture is impossible (without a big
overhaul.)
The withdrawal of System.Data is as big a thing for developers to deal
with as the loss of the Printer object in VB6 to vb.net 1.0 was. What
is harder to understand in this case is why it is necessary -
re-enabling it in the Metro profile can't possibly be a technical
difficulty of the product, can it?
It is valuable enough that I would seriously consider including Mono's
System.Data classes as part of any app I create (which would obviously
have to be open source.)
I think that this is another of those "it depends" questions...
The first and most obvious issue is that whether your first point ("Obviously...support...disconnected") is actually true depends very much on the context in which the application is running - if the app is an internal corporate app then quite possibly not, in which case no db == not working.
Secondly, you could look (hmm, rash... one assumes you could look; this could be a bad assumption) at database synchronisation between a local SQL database and the remote db, and so on and so forth.
Taking a step back... yes, you're absolutely right: look at it as being the same as phone or Silverlight (although I don't know if there is RIA support yet). But the thing is, at this point it's very hard to be prescriptive, because given a general-purpose platform one can write applications to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read @Jim G's answer, it seems that I should probably withdraw mine?
I have been developing a Mac Desktop app with an iOS device counterpart.
Basically I want to upload event information (music gigs etc.) from the Desktop to an online database and be able to read (only) the information whilst mobile.
I've got both apps working, using Core Data (with an SQLite database - I was going to use XML, but iOS doesn't seem to let me do that), but I'm at a loss when it comes to the web services part.
I've been googling and checking docs involving SQLite, XML, JSON, NSXMLParser (do I need RESTful services?) and umpteen other things, and I'm just getting nowhere fast.
Could someone explain to me the principle involved? Do I actually need Core Data? Do I have to convert the sqlite data to XML and back again to read it via an iOS mobile device?
I feel I'm making this out to be way more complicated than it should be - or is it?
Hoping someone can put me straight. Hope I've given enough information.
Here's what I do, and I have done many web-service iOS apps: I make a web page that returns JSON, call it, and then use SBJsonParser, which parses the JSON into native objects, like a dictionary or an array of dictionaries; then I display the data. It really is very simple.
Then, at a specific time like viewDidLoad, I fetch the JSON file. Remember, the JSON document can be a web service or just a text file, whatever you need. JSON doesn't need extra code, is extremely lightweight, and parses without any interference into native objects. Less work for you.
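On the server side, the "web service" can be something very small that just emits JSON for the app to fetch. Purely as an illustration (Python standard library only; the gigs table and its columns are made up), something like this would give the app a JSON document to request:

    # Minimal sketch of a read-only JSON endpoint the iOS app could fetch.
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer


    class EventsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            conn = sqlite3.connect("events.db")
            rows = conn.execute("SELECT id, title, date FROM gigs").fetchall()
            conn.close()
            body = json.dumps(
                [{"id": r[0], "title": r[1], "date": r[2]} for r in rows]
            ).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)


    if __name__ == "__main__":
        HTTPServer(("", 8000), EventsHandler).serve_forever()

In practice you'd put this behind whatever hosting you already have; the point is only that the service layer can be a thin JSON shim over the data.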
I know that some big players have embraced it and are actually exposing some of their services in an APP-compliant way already. However, I haven't found many other (smaller) players in this field. Do you know any web application/service that uses APP as its public API protocol? What is your own take on AtomPub? Do you have any practical experience using it? What are its limitations and drawbacks? Do you prefer AtomPub as your REST style, or do you have some other favourite? And why?
I know, these are many questions, not just one. The thing I'm interested in here is simple, though - how did the APP standard hit the market, and in particular, how is its adoption going among web developers?
The company that I work for is developing a lot of RESTful services.
However, none of them expose public APIs (in the sense that all services are consumed internally by our own clients). The reason we went for the REST architectural style was that we wanted our services to be easily consumable and, more importantly, to scale well.
From my own practical experience, I have come to the conclusion that HTTP + the Atom syndication format is a good idea, provided you want to keep things flexible (in terms of different content models, attaching and extending metadata associated with payloads, uniform parsing, etc.). Atom ensures that everybody interprets the payload in a uniform manner without any scope for ambiguity.
However, if one does not have any such complex requirements, or does not foresee them, then the Atom format could be a bit of an overhead. (For instance, elements like author and title make more sense in the blogging/RSS world and may not make sense in your particular problem domain.)
Also, if the goal is just to serialize data structures at one end and reconstruct them at the other end, then most web frameworks (like WCF) have custom formats that are more appealing.
So in my opinion, AtomPub is good if you need flexibility in terms of data representation and if the playing field is huge, with different kinds of clients.
However, if you have good knowledge of the potential clients and of server/client usage patterns, then custom formats might be a good idea.
If the client is browser-based, then formats like JSON are very appealing.
Hope this answers your question.
My own research so far:
WordPress has supported AtomPub as its API protocol since version 2.3
GData is probably the biggest shot in the AtomPub field so far
Habari - a promising new blogging system that promotes APP as one of its main features
BlogSvc.net - an AtomPub server, blog engine for the .NET platform, written in C#
Jangle - an open source project designed to facilitate API access to Library Systems
There's also mod_atom - an Apache module that stores entries in the filesystem.
Last time I checked (2007 or so), AtomPub was fairly complex to implement. While you can whip together something that emits valid Atom feeds during the lunch break, implementing AtomPub was a fairly big undertaking.
That might have changed due to better libraries and tools, but it still might be too complex to be implemented by smaller sites just because it's cool.
And the lack of killer AtomPub client applications puts little or no pressure on server operators to offer an AtomPub interface.