I have read some posts and answers about the differences between a Web Server and an Application Server... Mainly, the fact that an application server can serve protocols other than HTTP and provides EJBs. The problem is, I have never understood what EJBs actually are. The more I read on the subject, the less I seem to understand...
To sum up, I can't understand what EJBs bring that cannot be done another way, with simple Java classes?
And, since I can't understand this... I can't understand when an application server is necessary. When is Apache Tomcat (for instance) not enough for my needs? What would force me to use an application server?
Well, you can actually use Tomcat for Java EE development, but you will need to carry a lot of libraries (.jar files) to do it. Also, depending on which implementations of the specs you pick (e.g. Red Hat's JPA vs Oracle's JPA), you could end up mixing libraries and getting a hard-to-maintain system.
Application Server is the term for a server that ships a highly integrated (and tested) full Java EE stack. For example, TomEE, WildFly and GlassFish implement the spec and are enough to write and run applications that use JPA, EJB, JSF (and the zillion other Java EE specs). Also, with a minimum of effort you can migrate between app servers (well, that is easier said than done).
A Web Server (à la Tomcat) is more servlet-oriented: you only have a minimum of specs running, and because of that you will need to add the interfaces and the implementations on your own (yes, that zillion of .jars; believe me, you don't want that for a project). I would recommend Tomcat for websites that don't depend on the Java EE infrastructure, unless you like to work with Spring (a few jars to add, but it is easier and well integrated).
Keep things simple: use an application server if you need a Java EE app; on the other hand, if you are a Spring devotee, use Tomcat + Spring. But, for the love of god, don't mix both at the same time in the same app.
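To make the "what do EJBs actually bring" part concrete, here is a minimal sketch of a stateless session bean (the class and the Order entity are invented names). On an application server the container supplies instance pooling, dependency injection and a JTA transaction around each business method, which is exactly the plumbing you would otherwise write yourself around a plain Java class:

    // Minimal sketch, assuming a hypothetical Order JPA entity exists.
    // The application server wraps each call in a JTA transaction and
    // injects a container-managed EntityManager - no manual begin/commit.
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless                      // instances are pooled and calls are thread-safe
    public class OrderService {

        @PersistenceContext         // injected by the container
        private EntityManager em;

        // A transaction starts before this method and commits after it returns
        // (or rolls back automatically on a system exception).
        public void placeOrder(Order order) {
            em.persist(order);
        }
    }

A servlet or another bean would then obtain it with a plain @EJB-annotated field instead of constructing and wiring it by hand.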
If you use Tomcat, you can use the Spring Framework. According to the Spring reference documentation, there is only one thing that cannot be done with Spring but can be done with EJB: two-phase distributed transactions between different application servers through remote calls.
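As a rough illustration of that last point (bean, interface and lookup names are invented, and actual cross-server transaction propagation depends on what the servers support over their remote protocol), this is the shape of an EJB whose remote call joins the same distributed transaction:

    // Hedged sketch: InventoryRemote is assumed to be the remote interface of an
    // EJB deployed on a second application server. The containers propagate the
    // JTA transaction context across the remote call, so both sides commit or
    // roll back together via two-phase commit.
    import javax.ejb.EJB;
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;

    @Stateless
    public class CheckoutBean {

        @EJB(lookup = "java:global/inventory/InventoryBean!com.example.InventoryRemote")
        private InventoryRemote inventory;   // lookup string is an assumption

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void checkout(long orderId) {
            // ... local database work on this server ...
            inventory.reserveStock(orderId); // remote call enlisted in the same transaction
        }
    }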
My friend recently asked me the following question: given that Django already has runserver, why wasn't it extended to be a production-ready, customer-facing HTTP server? What people do instead is set up a uwsgi server that speaks WSGI and exposes something that Nginx forwards traffic to by reverse proxying...
Based on what I know, many other languages use this pattern: there is a "simple" HTTP server meant for development, as well as an interface for *GI (ASGI/WSGI/FCGI/CGI) that a web server is supposed to reverse-proxy to. What is the main reason those built-in servers don't grow production-ready and instead assume the presence of another web server?
Here are some of my theories, but I'm not sure if I'm missing something more significant:
History: dynamic websites date back to Perl/PHP; both worked as a "dumb" CGI backend, basically a filter that turned an HTTP request (stdin) into a response (stdout). This architecture worked for some time and became a common pattern,
Performance: web applications are often written in languages that don't JIT and having a web server written in such a language would introduce extra overhead while milliseconds matter. Also, this lets us speed up static file serving,
Security: Django's runserver is clearly described as potentially insecure, according to this quote:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making web frameworks, not web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
The last point seems to suggest that writing a production-ready HTTP server is too complex to fit within Django's goals. What kind of edge cases would need to be supported to get there?
Are any of these points actually valid, or am I missing the elephant in the room here?
Because they don't want to get into the web server business, and I think that's a wise decision.
Creating, developing and most importantly maintaining a web server is not a trivial thing. They couldn't simply write it once and then it's done (in fact, that's pretty much what they did and it's runserver).
Rather than re-invent the wheel, they've chosen to leave it to those who do it best. They're not likely to match the stability and functionality of a proper web server by doing it as a side-project to support running Django applications. They're better spending their time making Django better.
It's also consistent with the UNIX philosophy, but that's not necessary to get into here.
I need advice on converting a desktop C++ application to a web app. (This is my first web app.) The desktop app currently has a C# GUI, but the functionality I need to use all resides in an unmanaged, non-threadsafe C++ DLL.
The web-app uses Rails, and will run on a Linux server. Mostly, it needs to pass a list of strings to the DLL and get a winnowed-down list in return. The DLL will need to run on a Windows server. It has a significant load time, so I want it to run persistently. And I'll need multiple instances to handle simultaneous requests in a timely manner. I need it to scale reasonably well. (In case it's relevant: The Windows server will be on Amazon Web Services.)
So I have to determine: (1) How to interact between Ruby and C++, and (2) how to manage concurrent requests.
Ruby to C++
I could use Ruby Extensions (perhaps with Swig or Rb++ to make it easier) to call the library from Ruby. But is that an option when they're running on separate servers?
Regardless, with the relative simplicity of the interactions, I should probably just go with HTTP requests. Right?
From what I've read, it sounds like FastCGI is the way to go. I'll just have to wrap my DLL in a process with a FastCGI interface. Is there any other option I should consider?
Multiple Processes
First I should clarify: The C++ DLL is not threadsafe. So I need the server to spawn a configurable number of processes, and route requests to an idle process (or hold it in a queue till one becomes idle).
If I've understood correctly, FastCGI in general supports this, and IIS in particular does too. (Apparently IIS doesn't support multithreaded FastCGI applications, but that's fine for me.)
So will this just be a matter of configuring the FastCGI Process Manager?
Look up ISAPI for IIS, then go Rails -> 'net -> Windows Server -> IIS -> ISAPI -> your ISAPI DLL plugin -> your HTTP webservice -> this DLL.
But that's a heck of a lot of hoops to jump through. Don't you have the source to the DLL?
If not, I would write a Windows test script which calls this DLL. Then I would write tests that cover every single one of its behaviors. Then I would start a new project (maybe in Ruby, maybe in GNU C++), and I would make the new project pass each one of those tests. I like to call this "extract algorithm refactor"; the result should be fresh code that does the same thing, more portably.
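For illustration only, here is the shape of such a characterization test. The answer above suggests Ruby or C++; this sketch uses Java with a hypothetical JNA mapping, and the DLL name and exported function are pure assumptions:

    // Hypothetical JNA binding for the legacy DLL; the library name "winnow" and
    // the exported winnowList() function are invented for the example.
    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class WinnowDllCharacterizationTest {

        interface WinnowDll extends Library {
            WinnowDll INSTANCE = Native.load("winnow", WinnowDll.class);
            String winnowList(String newlineSeparatedInput);
        }

        public static void main(String[] args) {
            // Record what the current DLL produces for a known input ("golden output"),
            // then require the rewritten code to reproduce exactly the same result.
            String actual = WinnowDll.INSTANCE.winnowList("alpha\nbeta\ngamma");
            System.out.println("golden output for this input: " + actual);
        }
    }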
I'm developing a web service in Java EE using Apache Tomcat and so far I have written some basic server-side methods and a test client. I can successfully invoke methods and get results, but every time I invoke a method the server constructor gets called again, and I also can't modify the instance variables of the server using the setter methods. Is there a particular way to make my server stateful without using JAX-WS or the EJB @Stateful annotation?
There is a little bit of a misconception here. A stateful EJB maintains a session between one client and the server, so the EJB state still wouldn't be shared between different clients.
You can expose only stateless and singleton EJBs as a JAX-WS web service.
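If you really do need state shared across all clients, the closest fit is a singleton exposed as a web service. Here is a minimal, hedged sketch (class and method names are invented, loosely modelled on the auction scenario); the database approach recommended next is still the more robust option:

    // Sketch only: a container-managed singleton exposed via JAX-WS.
    // All clients hit the same instance; the container serializes access by default.
    import javax.ejb.Lock;
    import javax.ejb.LockType;
    import javax.ejb.Singleton;
    import javax.jws.WebService;
    import java.util.ArrayList;
    import java.util.List;

    @Singleton
    @WebService
    public class AuctionService {

        private final List<String> bids = new ArrayList<>();

        @Lock(LockType.WRITE)           // exclusive access while mutating shared state
        public void placeBid(String bid) {
            bids.add(bid);
        }

        @Lock(LockType.READ)            // concurrent reads are fine
        public List<String> listBids() {
            return new ArrayList<>(bids);
        }
    }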
The best option is to use a database for storing all bids and, when the auction is finished, pick the winning one.
If you want to use a file it is fine, as long as you like to play with issues like:
synchronizing access to that file from many clients
handling transactional reads and writes
resolving file corruption problems
a bunch of other problems that might happen if you are sufficiently unlucky
Sounds like a lot of work, which can be done by any sane database engine.
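For completeness, a rough sketch of the database route (entity and field names are invented): a simple JPA entity persisted through the application server's data source, which lets the database take care of the synchronization, transaction and corruption issues listed above:

    // Hypothetical JPA entity for the auction bids; with a container-managed
    // EntityManager, concurrent writes, atomic commits and durability are handled
    // by the application server and the database, not by hand-rolled file code.
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import java.math.BigDecimal;

    @Entity
    public class Bid {

        @Id
        @GeneratedValue
        private Long id;

        private String bidder;
        private BigDecimal amount;

        // getters and setters omitted for brevity
    }

    // Picking the winner when the auction closes is then a single JPQL query, e.g.
    // SELECT b FROM Bid b ORDER BY b.amount DESC (take the first result).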
What C++ software stack do developers use to create custom, fast, responsive and not very resource-hungry web services?
I'd recommend you take a look at CppCMS:
http://cppcms.com
It fits exactly the situation you described:
performance-oriented (preferably web service) software stack
for C++ web development.
It should have a low memory footprint
work on UNIX (FreeBSD) and Linux systems
perform well under high server load and be able to handle many requests with great efficiency
[as I plan to use it in a virtual environment] where resources will be to some extent limited.
So far I have only come across Staff WSF, Boost, Poco libraries. The latter two could be used to implement a custom web server...
The problem is that the web server is only about 2% of web development; there is so much other stuff to handle:
web templates
sessions
cache
forms
security, security, security - which is far from trivial
And much more; that is why you need web frameworks.
You could write an Apache module and put all your processing code in there.
Or there's CppCMS, or TreeFrog; or, for writing web services (not web sites), use gSOAP or Apache Axis.
But ultimately, there's no "easy to use framework", because C++ developers like to build apps from smaller components. There's no Ruby-style framework, but there is all manner of libraries for handling XML or whatever, and Apache offers the HTTP protocol bits in the module spec, so you can build up your app quite happily using whatever pieces make sense to you. Now whether there's a market for bundling this up to make something easier to use is another matter.
Personally, the best web app system I wrote (for a company) used a very thin web layer in the web server (IIS and ASP, but this applies to any web server; use PHP for example) that did nothing except act as a gateway to pass the data from the requests through to a C++ service. The C++ service could then be written completely as a normal C++ command-line server with well-defined entry points, using as thin an RPC system as possible (shared memory, but you may want to check out ZeroMQ), which not only increased security but allowed us to easily scale by shifting the services to app servers and running the web servers on different hardware. It was also really easy to test.
I'm working on a project of which a large part is server side software. I started programming in C++ using the sockets library. But, one of my partners suggested that we use a standard server like IIS, Apache or nginx.
Which one is better to do, in the long run? When I program it in C++, I have direct access to the raw requests where as in the case of using standard servers I need to use a scripting language to handle the requests. In any case, which one is the better option and why?
Also, when it comes to security for things like DDoS attacks etc., do the standard servers already have protection? If I wanted to implement it in my socket server, what would be the best way?
"Server side software" could mean lots of different things, for example this could be a trivial app which "echoes" everything back on a specific port, to a telnet/ftp server to a webserver running lots of "services".
So where in this gamut of possibilities does your particular application lie? Without further information, it's difficult to make any suggestions, but let's see..
1. Web services, i.e. your "server side" requirement is to handle individual requests and respond having done some set of business logic. Typically communication is via SOAP/XML, and this is ideal if you have web-based clients (though nothing prevents you from accessing these services via standalone clients). Typically you host these on web servers as you mentioned, and often they are easiest written in Java (I've yet to come across one that needed to be written in C++!)
2. Simple web site - slightly different to the above: responds to HTTP GET/POST requests and serves up static or dynamic content (I'm guessing this is not what you're after!)
3. Standalone server which responds to something specific; here you'd have to implement your own "messaging"/protocols etc., and the server will carry out a specific function on each incoming request and potentially send responses back. The key thing here is that the server does something specific and is not a generic container (at which point option 1 makes more sense!)
So where does your application lie? If it's 1 or 2, use Java or some scripting language (such as Perl/ASP/JSP etc.). If it's 3, you can certainly use C++, and if you do, use a suitable abstraction such as boost::asio and Google Protocol Buffers, and save yourself a lot of headache...
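For the option 1 route, a minimal hedged sketch of such a Java web service (class name, URL and port are invented; on Java 8 the JAX-WS classes ship with the JDK, while newer JDKs need an external JAX-WS runtime) looks roughly like this:

    // Minimal self-published JAX-WS endpoint, for illustration only.
    // In production you would deploy this to a web/app server instead of main().
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class EchoService {

        public String echo(String message) {
            return "echo: " + message;
        }

        public static void main(String[] args) {
            // Exposes the SOAP endpoint; the generated WSDL is served at .../echo?wsdl
            Endpoint.publish("http://localhost:8080/echo", new EchoService());
        }
    }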
With regards to security, of course bugs and security holes are found all the time; however, the good thing with some of these open-source projects is that the community will tackle and fix them. Let's just say, you'll be safer using them than your own custom hand-rolled implementation; the likelihood that you'll be able to address all the issues that they have encountered in the years they've been around is very small (no disrespect to your abilities!)
EDIT: now that there's a little more info, here is one possible approach (this is what I've done in the past, and I've used Java most of the way..)
The client-facing server should be something reliable, especially if it's over the internet; here I would use a proven product, something like Apache or IIS (depends on which technologies you have available). IMHO, I would go for JBoss AS - a really powerful and easily customisable piece of kit that integrates really nicely with lots of different things (all Java of course!). You could then have a simple bit of Java which delegates to your actual server processes that do the work..
For the server processes you can use C++ if that's what you are comfortable with.
There is one key bit which I left out, and that is how 1 & 2 talk to each other. This is where you should look at an open-source messaging product (at an even higher level than asio or protocol buffers), and here I would look at something like ZeroMQ or Red Hat Messaging (both are MQ-style messaging products). The great advantage of this type of "messaging bus" is that there is no tight coupling between your servers; with your own hand-rolled implementation you'll be writing lots of boilerplate to get the interaction to work just right, whereas with something like MQ you'll have multi-platform communication without having to get into the details... You will save yourself a lot of time and bother if you elect to use something like that.. (btw. there are other messaging products out there, and some are easier to use, such as Tibco RV or EMS etc.; however, they are commercial products and the licenses will cost a lot of money!)
With a messaging solution your servers become trivial, as they simply handle incoming messages and send messages back out again, and you can focus on the business logic...
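As a concrete illustration of that "trivial server" shape, here is a rough sketch of the Java gateway side using ZeroMQ's request/reply pattern via the JeroMQ binding (the endpoint address and the plain-string payload are assumptions); the worker process on the other end simply binds a matching REP socket:

    // Hedged sketch: the Java gateway side of the messaging bus described above.
    // Uses the JeroMQ binding of ZeroMQ; endpoint and payload format are invented.
    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class GatewayClient {

        public static String forward(String requestPayload) {
            try (ZContext context = new ZContext()) {
                ZMQ.Socket socket = context.createSocket(SocketType.REQ);
                socket.connect("tcp://backend-host:5555"); // the worker binds a REP socket here
                socket.send(requestPayload);
                return socket.recvStr();                   // blocks until the worker replies
            }
        }

        public static void main(String[] args) {
            System.out.println(forward("list-request"));
        }
    }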
my two pennies... :)
If you opt for the first solution in Nim's list (web services), I would suggest you have a look at WSO2's Web Services Framework for C++, Apache Axis C++ and the Axis2/C web services framework (if you are not restricted to C++). Web services might be the best solution for your requirement, as you can build them quickly and use them either as processing or proxy modules on the server side of your system.