Flex web service huge performance issue

I am pulling a large amount of data from a Java application via a web service. The data is fairly complex in structure, with a lot of hierarchical nesting using ArrayCollections. I am experiencing a huge performance hit of around 15 seconds (on JBoss and WebSphere) to get the data loaded. Most of the time is consumed converting the service data into the Flex object structure. The issue gets even worse when moving to the WebLogic application server. I am using the Axis2 framework.
Is there any way to optimize this? What alternative technologies can I use instead of web services?

I'm afraid you may not like my answer, because it will involve a lot of refactoring. I can't think of any easy fixes.
What alternative technologies can I use instead of web services?
You'll get the best performance by using AMF remoting instead of web services. Here's an article that explains what it is and contains a benchmark showing that this could easily cut your response time in half: http://www.themidnightcoders.com/products/weborb-for-net/developer-den/technical-articles/amf-vs-webservices.html. And that benchmark uses .NET on the server side; it will work even better with a Java server.
Is there any way to optimize this?
You should consider refactoring the objects you pass to the client into "Data Transfer Objects" (DTOs). These are simple value objects that contain only the data the client needs to display, which means less time spent transferring data from the server to the client and less time spent converting objects to ActionScript classes.
How can you limit the work involved?
You could add a layer on the server side that calls your existing web services, converts the complex data into simple DTOs, and delivers them to the client through AMF services. That way you can leave your existing code intact and still get a significant performance boost.
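To make that refactoring concrete, here is a minimal sketch (all class and method names are made up, and the remoting setup is assumed to be something BlazeDS-like) of what such a DTO plus AMF facade layer could look like on the Java side:

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    // Stand-in for the rich domain object your existing web service returns today.
    class Customer {
        String id;
        String name;
        double balance;
        List<Object> auditTrail = new ArrayList<>();   // heavy data the client never displays

        Customer(String id, String name, double balance) {
            this.id = id;
            this.name = name;
            this.balance = balance;
        }
    }

    // Flat DTO: only the fields the Flex client actually renders.
    class CustomerDTO implements Serializable {
        public String id;
        public String name;
        public double balance;
    }

    // Facade the Flex client would call through an AMF remoting destination;
    // it converts the rich object into the lightweight DTO.
    public class CustomerFacade {
        public CustomerDTO findCustomer(String id) {
            Customer c = loadFromExistingService(id);    // unchanged legacy call
            CustomerDTO dto = new CustomerDTO();
            dto.id = c.id;
            dto.name = c.name;
            dto.balance = c.balance;
            return dto;                                  // small and fast to serialize as AMF
        }

        private Customer loadFromExistingService(String id) {
            return new Customer(id, "placeholder", 0.0); // stub for the existing service call
        }
    }

The Flex client then calls CustomerFacade through a RemoteObject instead of the web service, and only the fields it actually displays cross the wire.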

Related

How should a Windows 8 Metro Application connect to a central database?

How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?
It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use
something like ADO.NET. This is more of an async/services
infrastructure. So if your data was exposed via services, then of
course you could connect that way. There are some other light-weight
methods you could use for local storage as well using things like the
Windows.Storage namespace (which is similar to Isolated Storage in
.NET).
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web.
Why don't you configure your WCF service to return data as JSON, and
use the DataContractJsonSerializer to deserialize the results?
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on codeplex is a
wrapper to communicate with the classic SQLite engine...which uses
APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRT File Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and
other REST clients - but this only addresses the online scenario. The
"structured" part of Windows.Storage is a very limited model,
essentially limited to name/value pairs, insufficient for all but the
most basic scenarios. Yes there is local file storage, which is great
of course. But forcing every app developer out there to build her own
DBMS on top of local file storage will simply not cut it, especially
with all of System.Data having been removed from the profile. If local
file storage was sufficient for most device apps, then things like
SQLCE would have no purpose today already. And SQLCE clearly has a
purpose, and has played a very important role for occasionally
connected device apps for a very long time. There is also a tremendous
need for synchronization with a server-side database such as SQL
Azure, mostly to be able to roam data between devices. Yes there is
the roaming storage model in WinRT, but it shares the same limitations
of local storage mentioned above, and on top of this is very limited
in capacity (currently 30KB if memory serves). It is simply
insufficient for all but the simplest roaming data needs. Again,
forcing every app developer to design and implement her own
synchronization solution is very bad. You can do much better to enable
developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for
the moment they want to force you to abstract to middleware for
database connectivity - I don't agree, but I can quasi understand a
rationale for that. I can even see pathways for developing like that.
But no System.Data.... at all? Do you even understand what you've done
to us?
What System.Data can do, outside of just having providers for Sql,
OleDb and other custom providers like Oracle, is provide a rich
abstraction of XML datasets that allow you to very quickly build a
data oriented Service Oriented Architecture.
For instance, I can easily create a web service using SOAP or WCF that
returns DataSets or DataTables, and then consume those objects easily
and directly. Being able to do this allows very rapid construction of
n-tier architectures, even without direct data connections available.
Without System.Data, and the power of DataViews, DataTables, etc. this
gets a lot harder. Sure you can custom create structs, put data in
there, and serve up structs, and use Linq to do whatever sorting,
filtering, etc. you want to do.... but it ends up being twice the
work, and makes code reuse a lot harder. And it means using our
existing service oriented architecture is impossible (without a big
overhaul.)
The withdrawal of System.Data is as big a thing for developers to deal
with as the loss of the Printer object in VB6 to vb.net 1.0 was. What
is harder to understand in this case is why it is necessary -
re-enabling it in the Metro profile can't possibly be a technical
difficulty of the product, can it?
It is valuable enough that I would seriously consider including Mono's
System.Data classes as part of any app I create (which would obviously
have to be open source.)
I think that this is another of those "it depends" questions...
The first and most obvious issue is that whether the opening assumption ("obviously, this needs to support the disconnected scenario") is actually true depends very much on the context in which the application is running. If the app is an internal corporate app, then quite possibly not - and in that case, no database means the app simply doesn't work.
Secondly, you could look (hmm, rash - one assumes you could look, and that could be a bad assumption) at database synchronisation between a local SQL database and the remote database, and so on and so forth.
Taking a step back... yes, you're absolutely right: look at it as being the same as phone or Silverlight development (although I don't know if there is RIA support yet). But the thing is, at this point it's very hard to be prescriptive, because given a general-purpose platform, one can write applications to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read @Jim G's answer, it seems that I should probably withdraw mine?

Windows Phone 7 - Best Practices for Speeding up Data Fetch

I have a Windows Phone 7 app that (currently) calls an OData service to get data and throws the data into a listbox. It is horribly slow right now. The first thing I can think of is that OData returns way more data than I actually need.
What are some suggestions/best practices for speeding up the fetching of data in a Windows Phone 7 app? Is there anything I could be doing in the app to speed up the retrieval of data and get it in front of the user faster?
Sounds like you've already got some clues about what to chase.
Some basic things I'd try are:
Make your HTTP requests as small as possible - if possible, only fetch the entities and fields you absolutely need (see the sketch after this list).
Consider using multiple HTTP requests to fetch the data incrementally instead of fetching everything in one go (this can, of course, actually make the app slower, but generally makes the app feel faster)
For large text transfers, make sure that the content is being zipped for transfer (this should happen at the HTTP level)
Be careful that the XAML rendering the data isn't too bloated - a large XAML structure repeated in a list can cause slowness.
When optimising, never assume you know where the speed problem is - always measure first!
Be careful when inserting images into a list - the MS MarketPlace app often seems to stutter on my phone - and I think this is caused by the image fetch and render process.
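On the first and third points: OData defines query options such as $select and $top to trim the payload, and gzip compression is negotiated with a plain Accept-Encoding header. The WP7 client classes differ, but the HTTP-level idea is the same; here is a rough, platform-neutral sketch (written in Java only to keep one example language in this document - the service URL and field names are made up):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPInputStream;

    public class SmallODataRequest {
        public static void main(String[] args) throws Exception {
            // Hypothetical service URL; $select and $top keep the response small.
            URL url = new URL("http://example.com/service.svc/Products"
                    + "?$select=Id,Name,Price&$top=20");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept-Encoding", "gzip");   // ask for a compressed transfer

            InputStream in = conn.getInputStream();
            if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                in = new GZIPInputStream(in);                     // transparently decompress
            }

            // Read the body; a real app would hand it to an OData/JSON parser.
            byte[] buffer = new byte[4096];
            int total = 0, n;
            while ((n = in.read(buffer)) != -1) {
                total += n;
            }
            in.close();
            System.out.println("Received " + total + " decompressed bytes");
        }
    }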
In addition to Stuart's great list, also consider the format of the data that's sent.
Check out this blog post by Rob Tiffany. It discusses performance based on data formats. It was written specifically with WCF in mind but the points still apply.
As an extension to Stuart's list:
In fact there are 3 areas - communication, parsing, UI. Measure them separately:
Do just the communication, with the processing switched off.
Measure parsing of a fixed OData-formatted string.
Whether you believe it or not, it can also be the UI.
For example, bad usage of ProgressBar can result in a dramatic decrease in processing speed. (In general you should not use any UI animations, as explained here.)
Also, make sure that the UI processing does not block the data communication.

Porting REST API connection code to Android NDK in c++

I have an Android application in the market which connects to and sends POST and GET requests to a REST API, then stores the results in a DB; those results are then queried and displayed in an appropriate manner in the application.
I'm interested in speeding up the application and have noticed quite a lot of lag between receiving the data back from the API and the data being ready to use. I'd like to investigate whether and how I can write similar code in C++ using the NDK to connect to the REST API, process the results, and store them in a DB or raise an error. I have no previous C++ experience and need to know, firstly, whether I can access the same DB from the C++ code as from the Java, and secondly, whether there are any other caveats I should be aware of.
Also, I guess I should ask - is it worth doing this? Will I notice any difference?
Any links to similar code, or an overview of where I should look to get started in C++, would be greatly appreciated.
I'm doing the EXACT same thing, and trust me: if you have no previous C++ experience, this might be a bit too costly for little benefit.
In my case, after some profiling, I reordered things and got an initial jump in performance just by dropping DOM and using SAX. Everything else only makes things marginally better, like processing the response while packets are still being transmitted (i.e. not waiting for the full response before starting to process) and multiplexing requests on the same thread instead of starting a new thread for each.
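For reference, the DOM-to-SAX switch in the original Java layer is roughly this kind of change (the element name is made up): the handler receives events while the stream is still being read, so parsing can overlap with the network transfer instead of waiting for a full document tree.

    import java.io.InputStream;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class ItemStreamParser {
        // Processes <item .../> elements as they arrive, without building a DOM tree.
        public static int countItems(InputStream responseBody) throws Exception {
            final int[] count = {0};
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(responseBody, new DefaultHandler() {
                @Override
                public void startElement(String uri, String localName,
                                         String qName, Attributes attrs) {
                    if ("item".equals(qName)) {    // hypothetical element name
                        count[0]++;                // e.g. insert a row into the DB here
                    }
                }
            });
            return count[0];
        }
    }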
What you should be searching Google for is POSIX sockets, HTTP and REST code, if you wish to do it all by hand. A better option might be to use cURL or something similar for the socket/HTTP part. I did it all myself, but only because I have already done this a few times.

flash - django communication -- amf, xml, or json?

We are considering developing a Flash front-end for a web application written using Django. The Flash front-end will send a simple "id" to the server and in response receive a couple of objects. The application will be open only to authenticated users.
To the extent of my current knowledge (which is basic for Flash), we can either use AMF or take an XML or JSON approach. AMF seems to have the upper hand, as there are examples on the internet showing that it can cooperate easily with Django's authentication mechanism (most examples feature pyAMF). On the other hand, implementing an XML/JSON-based solution may be easier and hassle-free.
Guidance will be much appreciated.
We've used pyAMF + Django on many projects here, and it's a breeze to set up and get running. If you need speed, AMF3 is probably your best bet. It's the smallest/fastest way to transfer data, and serialization is taken care of for you.
On the flip side, setting up JSON with Django isn't much work either, and it would give you the ability to hook other, non-AMF systems into it without any extra work. You just sacrifice a little speed for that benefit.
If you think you'll ever need other systems working with the backend, or if you think you might switch to an HTML-only version or offer some kind of non-Flash version of your app, I'd go with JSON; otherwise, I'd use AMF.
First of all, you should design your app in such a way that this doesn't matter. The transport layer should be completely encapsulated, leaving the encoding format transparent to the rest of the app.
Personally, I prefer JSON to AMF because it's human-readable (which makes debugging easier) and there are implementations for every platform/language (so you can reuse the server part with JavaScript, for example). And I prefer JSON to XML because it's more compact and semantically less ambiguous, as well as closer to common object models. Also, it can transport numerical and boolean data in a typesafe manner.
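To illustrate the encapsulation point, here is a bare-bones sketch (in Java only to keep a single example language in this document; the same shape applies to the ActionScript client, and every name here is hypothetical). The rest of the app depends on one narrow interface, and the JSON-vs-AMF-vs-XML decision lives entirely behind it:

    // Hypothetical narrow interface the rest of the app depends on.
    interface ObjectGateway {
        Object fetchById(String id) throws Exception;
    }

    // One implementation per wire format; swapping formats never touches the callers.
    class JsonGateway implements ObjectGateway {
        public Object fetchById(String id) throws Exception {
            // ... issue the HTTP request, decode the JSON body, build a domain object ...
            return null;   // placeholder
        }
    }

    class AmfGateway implements ObjectGateway {
        public Object fetchById(String id) throws Exception {
            // ... call the AMF remoting endpoint, return the decoded domain object ...
            return null;   // placeholder
        }
    }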
JSON will probably have the least complications, and there's a great Google Code project with JSON encoders and decoders here: http://code.google.com/p/as3corelib/

Atom Publishing Protocol in real life

I know that some big players have embraced it and are actually exposing some of their services in APP compliant way, already. However, I haven't found many other (smaller) players in this field. Do you know any web application/service that uses APP as its public API protocol? What is your own take on AtomPub? Do you have any practical experiences using it? What are its limitations and drawbacks? Do you prefer AtomPub as your REST style or do you have some other favourite one? And why?
I know, these are many questions, not just one. The thing I'm interested in here is simple, though: how did the APP standard hit the market, and in particular, how is its adoption going among web developers?
The company that I work for is developing a lot of RESTful services.
However, none of them expose public APIs (in the sense that all services are consumed internally by our own clients). The reason we went for the REST architectural style was that we wanted our services to be easily consumable and, more importantly, to scale well.
From my own practical experience, I have come to the conclusion that HTTP + the Atom syndication format is a good idea, provided you want to keep things flexible (in terms of different content models, attaching and extending metadata associated with payloads, uniform parsing, etc.). Atom ensures that everybody interprets the payload in a uniform manner without any scope for ambiguity.
However, if one does not have any such complex requirements, or does not foresee them, then the Atom format could be a bit of an overhead. (For instance, elements like author and title make more sense in the blogging/RSS world and may not make sense in your particular problem domain.)
Also, if the goal is just to serialize data structures at one end and reconstruct them at the other, then most web frameworks (like WCF) have custom formats which are more appealing.
So in my opinion AtomPub is good if you need flexibility in terms of data representation and if the playing field is huge, with different kinds of clients.
However, if you have good knowledge of the potential clients and server/client usage patterns, then custom formats might be a good idea.
If the client is browser based then formats like JSON are very appealing.
Hope this answers your question.
My own research so far:
WordPress has supported AtomPub as its API protocol since version 2.3
GData is probably the biggest shot in the AtomPub field so far
Habari - a new, promising blogging system that promotes APP as one of its main features
BlogSvc.net - an AtomPub server and blog engine for the .NET platform, written in C#
Jangle - an open source project designed to facilitate API access to library systems
There's also mod_atom - an Apache module that stores entries in the filesystem.
Last time I checked (2007 or so), AtomPub was fairly complex to implement. While you can whip together something that emits valid Atom feeds during a lunch break, implementing AtomPub was a fairly big undertaking.
That might have changed thanks to better libraries and tools, but it still might be too complex for smaller sites to implement just because it's cool.
And the lack of killer AtomPub client applications puts little or no pressure on server operators to offer an AtomPub interface.