Why would I want to use anything other than WCF? - web-services

After completing several small projects with WCF, I'm quite happy with what it can do.
However, having spent a brief amount of time looking into the alternatives, I'm struggling to see exactly what benefits/drawbacks I would experience from using Java-based web service stacks such as Axis2 or Metro.
Open source is perhaps one of them, as is breaking away from Windows Server/IIS, but I can't see much more.

In comparing these two approaches specifically, I would evaluate your overall productivity with each. Assuming you have the option of pursuing either one, I've found the logistical overhead of working with Metro and Axis2 to be higher than with WCF.
Given that both of these are essentially access points, the compatibility of whatever system complexity lies behind the scenes is a key decision point. Even though we live in a world of near-unlimited interop possibilities, I tend to prefer stacks where consistency can yield productivity and performance gains.
As for open source, while there is greater volume on the Java side than with .NET, I've also found that more of those projects are built to supply functionality that's missing in the Java web service platform (Restlet, for example).
Getting out of Windows/IIS is certainly an option with Java/Metro/Axis2, whereas with WCF you're stuck with IIS as your front-end server. I've personally found both to be (too) configuration-heavy, so neither has been an advantage for me in that respect. However, alternative hosts for the Java combination are certainly a possibility, so that may hold more value in certain situations.
All in all, both platforms (in the aggregate) will have scenarios where they're more advantageous than the other. Where those scenarios apply in your environment is what I find most relevant.

Related

Using ColdFusion frameworks

Can anyone expound on the disadvantages, if there are any, of using a ColdFusion development framework? I'm developing an application traditionally, and I'm tempted to use a framework, having seen how simply some things can be done with one.
I'm new to ColdFusion and frameworks in general. I want to understand the implications of using a framework, including advantages and disadvantages.
Disadvantages:
learning curve (pick a lean framework to reduce this)
a front controller makes for ugly URLs, often needing URL rewriting at the web server layer
risk of the framework being discontinued (no support, hard to maintain, may break on new CF versions)
framework bugs (pick a popular framework with good and fast support)
sometimes harder to debug, since actions are generally no longer a plain .cfm. Tip: make use of cfdump and cfabort to see the dump in the controller layer
some frameworks take longer to reinit. Since most frameworks cache the configurations and controller layer for performance, you'll need to reinit all the time during the development phase. CF9 eases this problem because it is much faster.
lastly, you'll sometimes be using the framework's API, an abstraction over CFML, and miss out on the native ColdFusion way of solving the same problem.
Performance is generally a non-issue. Don't worry.
Henry's already given a good answer, but I would just like to pick up on this part of your question:
But does it not come with a performance tax?
The performance overhead of a framework is negligible.
In fact, you may even get better performance from frameworks such as ColdBox, which have built-in caching.
Remember, most frameworks are mature codebases used by lots of people - most likely, your newly written untested code is going to be the culprit, not the framework.
However, as a general rule (not specific to frameworks) performance is not a problem unless you've got measurable results that say it is.
In other words, don't just think "I'm going to do X instead of Y because I think it'll be faster" - go with the simplest option that meets users' needs, and only change it if you can prove that it has a performance problem and that your proposed solution is better.
It depends on the nature of the project you're working on. I think it's always advisable to use a framework, for better code organization, scalability, conventions, and so on. If you are starting an enterprise-level application, then ColdBox is the best framework as far as my experience goes. It has a bigger learning curve, but it's worth learning. If it's a simple start-up project, then FW/1 is good. You can find a list here:
http://www.riaxe.com/blog/top-coldfusion-frameworks/

Node.js or Erlang

I really like these tools for the level of concurrency they can handle.
Erlang/OTP looks like the much more stable solution, but it requires much more learning and a lot of diving into the functional language paradigm. And it looks like Erlang/OTP does much better when it comes to multi-core CPUs (correct me if I am wrong).
But which should I choose? Which one is better in the short and long term perspective?
My goal is to learn a tool which makes scaling my Web projects under high load easier than traditional languages.
I would give Erlang a try. Even though it will be a steeper learning curve, you will get more out of it since you will be learning a functional programming language. Also, since Erlang is specifically designed to create reliable, highly concurrent systems, you will learn plenty about creating highly scalable services at the same time.
I can't speak for Erlang, but a few things that haven't been mentioned about node:
Node uses Google's V8 engine to compile JavaScript into machine code, so Node is actually pretty fast - and that's on top of the speed benefits offered by event-driven programming and non-blocking I/O (see the sketch at the end of this answer).
Node has a pretty active community. Hop onto their IRC channel on freenode and you'll see what I mean.
I've noticed the above comments push Erlang on the basis that it will be useful to learn a functional programming language. While I agree it's important to expand your skill set and get one of those under your belt, you shouldn't base a project on the fact that you want to learn a new programming style.
On the other hand, JavaScript is already in a paradigm you feel comfortable writing in! Plus, it's JavaScript, so when you write client-side code it will look and feel consistent.
Node's community has already pumped out tons of modules! There are modules for Redis, MongoDB, CouchDB, and what have you. Another good module to look into is Express (think Sinatra for Node).
Check out the video on Yahoo's blog by Ryan Dahl, the guy who actually wrote Node. I think that will help give you a better idea of where Node is at, and where it's going.
Keep in mind that Node is still in late development stages, and so has been undergoing quite a few changes, some of which have broken earlier code. However, supposedly it's at a point where you can expect the API not to change too much more. So if you're looking for something fun, I'd say Node is a great choice.
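To make the non-blocking I/O point above concrete, here is a minimal sketch (TypeScript targeting Node; the file name and port are placeholder examples, not from the original answer):

```typescript
// Minimal sketch of Node's event-driven, non-blocking I/O: the file read is
// handed to the event loop, so the server keeps accepting new requests while
// the disk operation is in flight.
import * as http from "http";
import * as fs from "fs";

const server = http.createServer((req, res) => {
  // readFile does not block; the callback fires once the data is ready.
  fs.readFile("greeting.txt", "utf8", (err, data) => {
    if (err) {
      res.writeHead(500);
      res.end("could not read file");
      return;
    }
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(data);
  });
});

server.listen(8080, () => console.log("listening on :8080"));
```

Nothing in the handler ever waits on the disk, which is where the single-threaded event loop gets its throughput from.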
I'm a long-time Erlang programmer, and this question prompted me to take a look at node.js. It looks pretty damn good.
It does appear that you need to spawn multiple processes to take advantage of multiple cores (see the sketch at the end of this answer). I can't see anything about setting processor affinity, though. You could use taskset on Linux, but it should probably be parameterized and set in the program.
I also noticed that the platform support might be a little weaker. Specifically, it looks like you would need to run under Cygwin for Windows support.
Looks good though.
Edit
Node.js now has native support for Windows.
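(A later note in the same spirit as the edit above: Node's cluster module now covers the multi-process case natively. A minimal sketch, assuming a reasonably recent Node runtime; the port is an arbitrary example.)

```typescript
// Fork one worker per core; the workers share the listening socket, so
// incoming connections are spread across all CPUs.
import * as cluster from "cluster";
import * as os from "os";
import * as http from "http";

if (cluster.isMaster) { // named isPrimary in newer Node releases
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  http
    .createServer((req, res) => {
      res.end(`handled by worker pid ${process.pid}\n`);
    })
    .listen(8080);
}
```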
I'm looking at the same two alternatives you are, gotts, for multiple projects.
So far, the best razor I've come up with to decide between them for a given project is whether I need to use Javascript. One existing system I'm looking to migrate is already written in Javascript, so its next version is likely to be done in node.js. Other projects will be done in some Erlang web framework because there is no existing code base to migrate.
Another consideration is that Erlang scales well beyond just multiple cores, it can scale to a whole datacenter. I don't see a built-in mechanism in node.js that lets me send another JS process a message without caring which machine it is on, but that's built right into Erlang at the lowest levels. If your problem isn't big enough to need multiple machines or if it doesn't require multiple cooperating processes, this advantage isn't likely to matter, so you should ignore it.
Erlang is indeed a deep pool to dive into. I would suggest writing a standalone functional program first before you start building web apps. An even easier first step, since you seem comfortable with Javascript, is to try programming JS in a more functional style. If you use jQuery or Prototype, you've already started down this path. Try bouncing between pure functional programming in Erlang or one of its kin (Haskell, F#, Scala...) and functional JS.
Once you're comfortable with functional programming, seek out one of the many Erlang web frameworks; you probably shouldn't be writing your app directly to something low-level like inets at this late stage. Look at something like Nitrogen, for instance.
While I'd personally go for Erlang, I'll admit that I'm a little biased against JavaScript. My advice is that you evaluate few points:
Are you reusing existing code in either of those languages (both in terms of source code, and programmer experience!)
Do you need/want on-the-fly updates without stopping the application (This is where Erlang wins by default - its runtime was designed for that case, and OTP contains all the tools necessary)
How big is the expected traffic, in terms of separate, concurrent operations, not bandwidth?
How "parallel" are the operations you do for each request?
Erlang has really fine-tuned concurrency and a network-transparent, parallel distributed system. Depending on what exactly the project is, the availability of a mature implementation of such a system might outweigh any issues around learning a new language. There are also two other languages that run on the Erlang VM which you can use: the Ruby/Python-like Reia and Lisp Flavored Erlang.
Yet another option is to use both, especially with Erlang acting as a kind of "hub". I'm unsure whether Node.js has a foreign function interface system, but if it has, Erlang offers a C library for external processes to interface with the system just like any other Erlang process.
It looks like Erlang performs better for deployment in a relatively low-end server (512MB 4-core 2.4GHz AMD VM). This is from SyncPad's experience of comparing Erlang vs Node.js implementations of their virtual whiteboard server application.
There is one more language on the same VM as Erlang: Elixir.
It's a very interesting alternative to Erlang; check it out.
It also has a fast-growing web framework built on it: the Phoenix Framework.
WhatsApp could never have achieved its level of scalability and reliability without Erlang: https://www.youtube.com/watch?v=c12cYAUTXXs
I would prefer Erlang over Node.
If you want concurrency, Node can be substituted with Erlang or Golang because of their lightweight processes.
Erlang is not easy to learn, so it requires a lot of effort, but its community is active, so you can get help from it. That is the only reason people prefer Node.

SOAP vs. REST: Pragmatic case studies?

I'm not satisfied with the answers given by the SOAP vs REST questions notably here:
Performance of SOAP vs. XML-RPC or REST
because those are just general philosophical answers, not pragmatic answers backed by case studies.
Can nobody give precise cases where SOAP would be more suitable than REST, especially from a performance point of view?
Update: I think REST is winning the war.
Performance is not the deciding factor.
First I should say, asking a SOAP-vs-REST question is a little cockeyed, because SOAP is an XML envelope format and REST is an architecture. So I will make a little assumption and suppose that you are really considering SOAP-vs-POX, or SOAP-vs-JSON, or SOAP vs. some other data-formatting approach.
The deciding factor should be this:
Do you now need, or will you need in the future, the SOAP envelope?
The SOAP envelope allows things like framework-provided encryption, digital signatures, routing, and authorization checks, among other things. You can, of course, do those things with REST (or, more accurately, with plain old XML, or JSON, etc.), but you have to do more of the work yourself to make that happen.
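To make the envelope/payload distinction concrete, here is a minimal hedged sketch (TypeScript, using the standard fetch API; the endpoint URLs, the GetQuote operation, and the payload are invented for illustration, not taken from any real service):

```typescript
// SOAP: the payload travels inside an Envelope whose Header is where
// framework-level concerns (WS-Security, addressing/routing, etc.) live.
const soapRequest = `<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- security tokens, signatures, routing headers would go here -->
  </soap:Header>
  <soap:Body>
    <GetQuote><symbol>ACME</symbol></GetQuote>
  </soap:Body>
</soap:Envelope>`;

async function main(): Promise<void> {
  // SOAP-style call: everything is a POST of the envelope.
  await fetch("https://example.com/quoteService", {
    method: "POST",
    headers: { "Content-Type": "application/soap+xml" },
    body: soapRequest,
  });

  // REST/JSON-style call: the resource is just the URL. Anything the
  // envelope provided (auth, signing, routing) you wire up yourself via
  // HTTP headers or application code.
  const res = await fetch("https://example.com/quotes/ACME");
  console.log(await res.json());
}

main();
```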
If Performance - whatever you construe it to mean - really is your #1 criterion, then you should probably abandon SOAP and POX and move to protobufs or something else optimized for performance. These can be faster to serialize and faster to transmit.
If you think this answer is "too philosophical" and you really want hard figures, well, then I suppose you'll need to conduct some tests. The actual perf will vary greatly with the toolkits you choose, the shape of the messages, and the extra data services (like encryption and so on) that you use. But in the end, perf won't be, or shouldn't be, decisive either way.
If your SOAP toolkit is 20% easier to use, debug, and maintain than your POX toolkit, then you should use SOAP, regardless of the performance. People (coders, architects, testers) are much more expensive than CPUs and networks these days. You can always buy another two CPUs, or a bigger network, if necessary, and if your design is correct. But you can't buy back 20% of your development time, at any cost, if your framework is hard to use or if it drives away your people. Unless you are running a geo-scale network, you will do better to optimize for the people instead of for the network.
You can find an article comparing REST and SOAP here:
http://www.jopera.org/files/www2008-restws-pautasso-zimmermann-leymann.pdf
The authors' conclusions seem to be:
Use RESTful services for tactical, ad hoc integration over the Web
Prefer WS-* Web services in professional enterprise application integration scenarios with a longer lifespan and advanced QoS requirements
Personally I do not like terminology like "professional enterprise", because it is loose and informal. However, in my opinion the authors make some good points in the article. Maybe to conclude, and to add some thoughts of my own:
If you want to make an API public, do it in a RESTful way. Why? It is simple for a client application to use, which will make your service more popular. For example, Amazon exposes both REST and SOAP APIs, but 85% of their users have chosen the REST version (Amazon API - SOAP vs. REST).
Use SOAP and the WS-* stack if you will be creating (or have some control over the process of creating) both the consumers and the producers of your services, and you really do need the advanced features of WS-*. This will probably require more resources, too, because SOAP applications tend to be "heavier" (more features, but more sophistication as well).
Considering performance, REST could also be faster (messages are definitely shorter, and you do not need to parse XML).
Hope it will help.
As for your example of a Flash client - it is really hard to tell without knowing the details; however, if one does not need all the security and transactional features of WS-*, I think building a REST application would be simpler and faster.
Answering the comment:
"I should use SOAP because I'm in a so-called 'professional enterprise'"
And assuming of course that your choice isn't really dictated by big software vendors.
SOAP is suited to bigger enterprises because it encourages a more formal approach. It offers specifications, which are huge, so your developers may need time to learn them and maybe even some professional training - which means spending company resources. It also offers tools - and not all of them are open source, so this can also mean additional resources. But if your team learns this way of integrating services, it will probably be efficient, and the resulting code will be high quality.
REST, on the contrary, is more a philosophy of developing applications. So: no huge specifications, no specialized tools, no resource spending. This may work nicely if you have a small team of good programmers - they will not need so many guidelines if they know the basic principles. Unfortunately, it is also easier to do things wrong.
Another thing to consider is application size - the richer the API and the more services you want to integrate, the harder it will be to do it RESTfully. Also, building a small SOAP application probably wouldn't be a good idea - the overhead and entrance cost are just too high.
You need to evaluate the pros and cons for your project. It is impossible to give a recommendation without knowing all the details, I think.
And finally - this has nothing to do with reasonable arguments, but more with politics. I think that management-level people seem to prefer the WS-* stack and SOAP (it has the support of "big enterprises", so it is easier for them to justify their choice). On the other hand, people from an academic background [1] prefer REST - because there is still a lot of research that can be conducted in the area.
[1] I'm somewhere in between, so I can observe both behaviors ;-)

Does three-tier architecture ever work?

We have been building three-tier architectures for over a decade now. Dividing the presentation, logic, and data tiers is supposed to allow us to exchange each layer individually, should the need ever arise, be it through changed requirements or new technologies.
I have never seen it working in practice...
Mostly because of (at least) one of the following reasons:
The three-tier concept was only visible in the source code (e.g. package naming in Java), which was then deployed as one tied-together package.
The code representing each layer was nicely bundled in its own deployable format but then thrown into the same process (e.g. an "enterprise container").
Each layer was run in its own process, sometimes even on different machines, but because of the static nature of their connections, replacing one of them meant breaking all of them.
Thus, what you usually end up with is a monolithic, tightly coupled system that does not deliver what its architecture promised.
I therefore think "three-tier architecture" is a total misnomer. The true benefit it brings is that the code is logically sound. But that's at "write time", not at "run time". A better name would be something like "layered by responsibility". In any case, the "architecture" word is misleading.
What are your thoughts on this? How could a working three-tier architecture be achieved? By that I mean one which holds its promises: allowing you to swap out a layer without affecting the others. The system should survive that and be in a well-defined state afterwards.
Thanks!
The true purpose of layered architectures (both logical and physical tiers) isn't to make it easy to replace a layer (which is quite rare), but to make it easy to make changes within a layer without affecting the others (and as Ben notes, to facilitate scalability, consistency, and security) - which works all the time all around us.
One example of a 3-tier architecture is a typical database-driven web application (sketched in code after the list):
End-user's web browser
Server-side web application logic
Database engine
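As a toy sketch of that split (TypeScript; all names and data are invented, the database engine is stubbed with an in-memory map so the example stays self-contained, and the browser tier is simply whatever requests the URL):

```typescript
import * as http from "http";

// Data tier (stubbed database engine): the only code that knows how
// records are stored.
const users = new Map<string, string>([["1", "Alice"], ["2", "Bob"]]);
function findUserName(id: string): string | undefined {
  return users.get(id);
}

// Logic tier: business rules, with no knowledge of HTTP or storage details.
function greetUser(id: string): string {
  const name = findUserName(id);
  if (!name) throw new Error("no such user");
  return `Hello, ${name}!`;
}

// Presentation tier: translates HTTP requests from the browser into logic
// calls and renders the result.
http
  .createServer((req, res) => {
    const id = (req.url ?? "/").slice(1); // e.g. GET /1
    try {
      res.end(greetUser(id));
    } catch {
      res.writeHead(404);
      res.end("not found");
    }
  })
  .listen(8080);
```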
In every system there is the nice, elegant architecture dreamed up at the beginning, and then the hairy mess once it's finally in production, full of hundreds of bug fixes, special-case handlers, and the other typically nasty changes made to address specific issues not foreseen during design.
I don't think the problems you've described are specific to three-tier architecture at all.
If you haven't seen it working, you may just have had bad luck. I've worked on projects that serve several UIs (presentation) from one web service (logic). In addition, we swapped data providers via configuration (data) so we could use a low-cost database while developing and Oracle in higher environments (see the sketch at the end of this answer).
Sure, there's always some duplication - maybe you add validation in the UI for responsiveness and then validate again in the logic layer - but overall, a clean separation is possible and nice to work with.
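The data-provider swap mentioned above can be as simple as coding the logic layer against an interface and letting configuration pick the implementation. A minimal sketch, with invented store names and an assumed NODE_ENV-style environment variable (a real project would use its framework's configuration or dependency-injection mechanism):

```typescript
// The logic layer depends only on this interface.
interface UserStore {
  findName(id: string): string | undefined;
}

// Low-cost provider for development (an in-memory stand-in here).
class InMemoryStore implements UserStore {
  private data = new Map([["1", "Alice"]]);
  findName(id: string): string | undefined {
    return this.data.get(id);
  }
}

// Stand-in for the "higher environment" provider (e.g. Oracle).
class OracleStore implements UserStore {
  findName(id: string): string | undefined {
    throw new Error("real driver call would go here");
  }
}

// Configuration, not code, decides which provider is used.
function makeStore(env: string): UserStore {
  return env === "production" ? new OracleStore() : new InMemoryStore();
}

const store = makeStore(process.env.NODE_ENV ?? "development");
console.log(store.findName("1")); // "Alice" in development
```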
Once you accept that n-tier's major benefits - namely scalability, logical consistency, and security - could not easily be achieved through other means, the question of whether or not any of the tiers can be replaced outright without breaking the others becomes more like asking whether there's any icing on the cake.
Any operating system will have a similar kind of architecture, or else it won't work. The presentation layer is independent of the hardware layer, which is abstracted into drivers that implement a certain interface. The data is handled using logic that changes depending on the type of data being read (think NTFS vs. FAT32 vs. EXT3 vs. CD-ROM). Linux can run on just about any hardware you can throw at it and it will still look and behave the same because the abstractions between the layers insulate each other from changes within a single layer.
One of the biggest practical benefits of the 3-tier approach is that it makes it easy to split up work. You can easily have a DBA and a business analyst or two building a data layer, a traditional programmer building the server-side app code, and a graphic designer/web designer building the UI. The three teams still need to communicate, of course, but this allows for much smoother development in most cases. In this regard, I see the 3-tier approach working reliably every day, and that is enough for me, even if I cannot count on "interchangeable parts", so to speak.

Practical SOA for a newbie

I am a total newbie to the world of SOA. As such, I am looking at some "SOA frameworks/technologies", and trying to understand how to utilize them to build a highly scalable (Facebook class) website.
There are several "pains" I am trying to solve here:
Composability (+ managing dependencies, Pub/Sub)
Language-independence of services
Scalability & Performance
High availability
I looked into some technologies that answer a subset of the above criteria:
Thrift - Facebook's cross-platform RPC platform
WCF - supports SOAP, JSON, and REST, so it can be considered language-interoperable. Generates WSDL files that can be used to generate Java proxies.
Microsoft DSS - just included it in my survey, but it doesn't seem relevant, as it is highly state-driven and .NET-specific.
Web Services
Now, I understand how I get some aspects of composability and language-independence out of the above. But I haven't found much concrete information (as opposed to buzz) about how to use the above or other tools for scalability and high availability. So, finally, I get to my question:
How does one leverage SOA technologies to solve the pains I defined above? Where can I find technical guides for that? I am looking for more than just system diagrams, but rather actual libraries, code samples, APIS...
I think the question has more to do with the concepts involved than the tools. Answers to the items:
Understand and internalize bounded contexts. Keeping unrelated pieces separate is important for getting real reuse across different services. The related technologies won't help you here; it's you who must separate the app into appropriate contexts, which you can then reuse in different services.
Clear endpoints for communication based on known protocols allow you to implement different pieces with different technologies.
Having the operations flow like independent actions based on different protocols gives you a lot of places where you can add tiers. If a particular sub-process of the overall process uses lots of resources and the server can't take it anymore, just move it to a separate server. If load keeps growing and that server can't take it anymore, add an additional server and load balance. You also get more opportunities to use caching and connection pooling.
If you have a critical sub-process that needs to be available all the time, add a server so you can fail over. If you have an overall process that needs to be "available" all the time, use queues for the pieces that can be processed later (see the sketch at the end of this answer).
Do you really need to support that kind of load? Set appropriate performance/load/interoperability targets that relate to the specific scenario. If you really do need to support that kind of load, I recommend you get someone on board who has dealt with it.
If it is a load that might eventually materialize, identify the bounded contexts and design the interactions between them with an SOA mindset. Keeping the code clean is all you have to do for the rest; use TDD, loose coupling, focused integration tests, etc. in your code base. With good code, if you later need to separate pieces of the system, it will be a lot easier.
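As a minimal illustration of the earlier point about queuing the pieces that can be processed later (an in-process TypeScript sketch only; the job shape and worker interval are invented, and a real deployment would use a durable broker such as RabbitMQ or SQS):

```typescript
type Job = { orderId: string };
const queue: Job[] = [];

// The critical path only enqueues, so it stays fast and "available" even
// when the downstream processor is slow or temporarily down.
function acceptOrder(orderId: string): void {
  queue.push({ orderId });
  console.log(`order ${orderId} accepted`);
}

// A worker drains the queue on its own schedule.
setInterval(() => {
  const job = queue.shift();
  if (job) {
    console.log(`processing order ${job.orderId}`);
  }
}, 1000);

acceptOrder("A-1");
acceptOrder("A-2");
```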
There are interesting and relevant things said about services and architecture by Amazon's CTO, Werner Vogels:
An interview about the Amazon technology platform
A post on his blog - "Eventually Consistent - Revisited"
A 50 min presentation about Availability & Consistency
IDesign.net has a bunch of great downloads for WCF.
Worth a look at: http://www.manning.com/davis/ ?
Check out the Mule project which bundles the CXF services stack and also the Mule REST pack which provides RESTful alternatives. I think you'll see it addresses all of your pains and there are lots of examples in the documentation as well as the distribution.
May I recommend the book: Enterprise Service Oriented Architectures published by Springer Verlag.
All of the advice here is well and good, but don't worry about it until you really need to.
Focus on building a usable, functional application that people really like. When you start running into problems, then start handling bottlenecks.
You will never be able to foresee every way an application will fail, so how can you tell if [[insert tech here]] is your answer?