How do I develop supporting services for a web service that monitor the service's performance and help diagnose it by tracking users, memory, the database, etc.?
Please provide some pointers for developing such services.
Thanks,
Velmurugan R
You should take a look at JMX / MBeans. MBeans can be used to read attributes from a running system in a standardized way, or to invoke operations. With JConsole (distributed with the Oracle JDK) you can connect to a running JVM and inspect all registered MBeans.
http://docs.oracle.com/javase/tutorial/jmx/mbeans/standard.html
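As a minimal sketch of a standard MBean your service could register (the SystemStatus name and the attributes are made-up examples, not part of any existing API):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // SystemStatusMBean.java: the management interface; the *MBean suffix is
    // required by the standard-MBean naming convention.
    public interface SystemStatusMBean {
        int getActiveUsers();          // exposed as the read-only attribute "ActiveUsers"
        long getFreeMemoryBytes();     // exposed as the read-only attribute "FreeMemoryBytes"
        void resetCounters();          // an operation that can be invoked from JConsole
    }

    // SystemStatus.java: implementation that the web service registers at startup.
    public class SystemStatus implements SystemStatusMBean {
        private volatile int activeUsers;

        public int getActiveUsers()      { return activeUsers; }
        public long getFreeMemoryBytes() { return Runtime.getRuntime().freeMemory(); }
        public void resetCounters()      { activeUsers = 0; }

        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new SystemStatus(),
                    new ObjectName("com.example.myservice:type=SystemStatus"));
            Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so JConsole can connect
        }
    }

Once registered, the attributes and the operation show up under that ObjectName in JConsole's MBeans tab.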
I am trying to understand Serverless architecture, which makes two distinct claims:
you as an app developer think about your function only and not about the server responsibilities. Well, the server still has got to be somewhere. By "servers" I understand here both:
on the infrastructure side: a physical server/VM/container,
as well as on the software side: say, Tomcat.
Now, I have worked on Cloud Foundry and studied its Elastic Runtime (Diego) architecture, as well as the buildpack and Open Service Broker API facilities of Cloud Foundry. Effectively, Cloud Foundry already works on a "similar" model where the application developer focuses on his code, and the deployment model, with the help of a buildpack, prepares a droplet with the needed Java runtime and Tomcat runtime and then uses it to create a Garden container that serves user requests. So the developer does not have to worry about where the Tomcat server or the VM/container will come from. So, aren't we already meeting that mandate in Cloud Foundry?
your code comes into existence for the duration of execution and then dies. This, I agree, is different from the apps/microservices that we write in Cloud Foundry, which are long-running server processes instead. Now, if I were to develop a Java webapp/microservice with 3 REST endpoints (myapp/resource1, myapp/resource2, myapp/resource3), possibly on a Tomcat web server, I need:
a physical machine or a VM or a container,
the Java runtime
the Tomcat container to be able to run my war file.
Going by what Serverless suggests, I infer I am supposed to concentrate only on one very specific function, say handling requests to myapp/resource1. Now, in such a scenario:
What is my corresponding Java class supposed to look like (a rough guess is sketched after these questions)?
Where do I get access to the J2EE objects like HttpServletRequest or HttpServletResponse, or the other HTTP, Servlet, JAX-RS, or Spring MVC objects that are normally created by the Tomcat runtime?
Is my Java class executed within a container that is created for the duration of execution and then destroyed after execution? If yes, who manages the creation/destruction of such a container?
Would Tomcat even be required? Is there an altogether different, generic way of handling requests to these three REST endpoints? Is it somewhat like httpd servers using Python/Java CGI scripts to handle HTTP requests?
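For the first question, my guess is something along the lines of the handler below. It is modeled on how AWS Lambda's Java API looks (RequestHandler from aws-lambda-java-core), purely as an assumption about what a function platform might expect, not as a confirmed answer:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import java.util.Map;

    // Hypothetical function handling only "myapp/resource1". There is no
    // HttpServletRequest/HttpServletResponse here: the platform hands the function
    // a deserialized event object instead of servlet-container objects.
    public class Resource1Handler implements RequestHandler<Map<String, Object>, String> {

        @Override
        public String handleRequest(Map<String, Object> request, Context context) {
            Object id = request.get("id"); // whatever the routed event happens to carry
            return "{\"resource1\":\"" + id + "\"}";
        }
    }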
So, there is a set of applications that positions itself as a distributed cluster OS, called DCOS.
It has MPI and Spark running on top of it.
I am a developer and I have a set of distributed services running, connected via sockets or the ZeroMQ communication system.
How can I port my existing services to DCOS?
Meaning, use its communication facilities instead of sockets/ZeroMQ.
Is there any API or documentation on how to develop for it, not just run on it?
There are a number of ways to get your application to run on DCOS (and/or Mesos).
First, for legacy applications you can use the Marathon framework, which you can view as a kind of init system for DCOS/Mesos.
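As a small sketch, a legacy service can be handed to Marathon by POSTing an app definition to its /v2/apps REST endpoint; the host, port, and the app definition below are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Submit a placeholder app definition to Marathon's REST API (/v2/apps).
    public class MarathonDeploy {
        public static void main(String[] args) throws Exception {
            String appJson = """
                    {
                      "id": "/my-legacy-service",
                      "cmd": "./run-service.sh",
                      "cpus": 0.5,
                      "mem": 256,
                      "instances": 2
                    }""";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://marathon.example.com:8080/v2/apps"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(appJson))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

Marathon then keeps the requested number of instances running and restarts them if they die.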
If you need more elaborate applications and want to really program against the APIs, you would write a Mesos framework: see the framework development guide for more details.
For deeper integration of your framework into DCOS, for example using the package repository / command-line install option, check out or contact Mesosphere for more details.
Hope this helps!
Joerg
I am considering implementing a recommendation engine for a small website.
The website will use the LAMP stack, and for various reasons the recommendation engine must be written in C++. It consists of an On-line Component and an Off-line Component, both of which need to connect to MySQL. The difference is that the On-line Component will need a connection pool, whereas a few persistent connections, or even connecting on demand, would be sufficient for the Off-line Component, since it does not require real-time performance under concurrent requests the way the On-line Component does.
The On-line Component is to be wrapped as a web service via Apache Axis2. The PHP front-end app on the Apache HTTP server retrieves recommendation data from this web service module.
There are two DB connection options for On-line Component I can think of:
1. Use an ODBC connection pool; I think unixODBC might be a candidate.
2. Use the connection pool APIs that come as part of the Apache HTTP Server; mod_dbd would be a choice. http://httpd.apache.org/docs/2.2/mod/mod_dbd.html
As for the Off-line Component, a simple DB connection option is a direct connection using ODBC.
Due to lack of web app design experience, I have the following questions:
Option 1 for the On-line Component is a tightly coupled design that does not take advantage of the pooling APIs in the Apache HTTP server. But if I choose Option 2 (a 3-tiered architecture), with the engine as a standalone component apart from the Apache HTTP server, how do I use its connection pool APIs? A Java application can be deployed as a WAR file and hosted in a servlet container such as Tomcat (see Mahout in Action, section 5.5); is there any similar approach for my C++ recommendation engine?
I am not sure if I made a proper prototype.
Any suggestions will be appreciated:)
Thanks,
Mike
I want to create a web application with the following architecture:
There is some functionality, which is encapsulated in the "Business logic" module (1). It uses MongoDB as a data store (5) and an external (command line) application (4).
The functionality of the application is brought to the end users via two channels:
The web application itself (2) and
public API (3), which allows third-party applications and mobile devices to access the business logic functionality.
The web application is written in Java and based on the Vaadin platform. Currently it runs in the Jetty web server.
One important requirement: The web application should be scalable, i.e. it must be possible to increase the number of users/transactions it can service by adding new hardware.
I have following questions regarding the technical implementation of this architecture:
What technology can be used to implement the business logic part? What are the sensible options for creating a SCALABLE app server?
What web server can I choose for the web interface part (2) to make it scalable? What are the options?
Calculations done in the external system (4) are potentially CPU-intensive. Therefore I want to do them in an asynchronous way (see the sketch after this list), i.e.
a) the user sends a request for this operation (via web interface or public API, 2 and 3 in the above image), that
b) request is put into a queue, then
c) the CPU-intensive calculations are done and
d) at some point in time the answer is sent to the user.
What technological options are there to implement this queueing (apart from JMS)?
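Here is the sketch of the a) to d) flow mentioned above, using only java.util.concurrent as a stand-in; in a real deployment the in-memory queue would be replaced by something distributed (a message broker, or e.g. a Hazelcast queue), and the class and method names are made up:

    import java.util.concurrent.*;

    // Sketch of the a)-d) flow with an in-memory queue and a worker pool.
    public class AsyncCalculationService {
        private final BlockingQueue<CalculationRequest> queue = new LinkedBlockingQueue<>();
        private final ConcurrentMap<String, String> results = new ConcurrentHashMap<>();
        private final ExecutorService workers = Executors.newFixedThreadPool(4);

        record CalculationRequest(String id, String payload) {}

        // a) + b): the web interface or public API calls this and returns immediately.
        public String submit(String payload) {
            String id = java.util.UUID.randomUUID().toString();
            queue.add(new CalculationRequest(id, payload));
            return id; // the client later polls (or is notified) using this id
        }

        // c): workers pull requests and run the CPU-intensive external calculation.
        public void start() {
            for (int i = 0; i < 4; i++) {
                workers.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            CalculationRequest req = queue.take();
                            String answer = runExternalCalculation(req.payload());
                            results.put(req.id(), answer); // d) answer becomes available
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
        }

        // d): the client asks for the result by id (null means "not ready yet").
        public String result(String id) {
            return results.get(id);
        }

        private String runExternalCalculation(String payload) {
            // placeholder for invoking the external command line application (4)
            return "result-for-" + payload;
        }
    }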
Thanks in advance
Dmitri
For scaling the interactions, have you looked at Drools Grid, Akka or JPPF?
For making the web application scalable, have you looked at Terracotta or GlassFish clustering capabilities (Vaadin is a GlassFish partner, if I remember well)?
Since nobody answered my question, I'll do it myself.
From other resources I learned that the following technologies can be used to implement this architecture:
1) Spring for Business logic (1)
2) GridGain or Apache Hadoop for scaling the interactions with the external system (4)
3) Hazelcast for making the web application scalable (2, server-side sessions); a minimal example follows below.
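Just to illustrate the Hazelcast part: each node starts an instance and the nodes then share any named data structure, so session data survives routing a user to another node. The map name and contents below are placeholders:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    // Nodes running this code form a cluster and share the "sessions" map.
    public class SharedSessionStore {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            Map<String, String> sessions = hz.getMap("sessions");
            sessions.put("session-42", "user=dmitri;cart=3 items"); // placeholder data
            System.out.println("Cluster members: " + hz.getCluster().getMembers());
        }
    }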
We have a C++ (SOAP-based) web service, deployed using Systinet C++ Server, that has a single port for all the incoming connections from the Java front-end.
However, recently in the production environment, when it was tested with around 150 connections, the service went down. How can I achieve load balancing for a C++ SOAP-based web service?
The service is accessed as SOAP/HTTP?
Then you create several instances of your service and put some kind of router between your clients and the web service to distribute the requests across the instances. Often people use dedicated hardware routers for that purpose.
Note that this is often not truly load "balancing", in that the router can be pretty dumb, for example just using a simple round-robin algorithm. Such simple approaches can be pretty effective.
I hope that your services are stateless, as that simplifies things. If individual clients must maintain affinity to a particular instance, things get a little trickier.
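To illustrate the round-robin idea, the selection logic in the router amounts to cycling through the instance addresses. The sketch below is in Java only because your front-end is Java; the addresses are placeholders and the same logic applies whatever the router is written in:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Simple round-robin selection a front-end router could use.
    public class RoundRobinRouter {
        private final List<String> instances;
        private final AtomicInteger next = new AtomicInteger();

        public RoundRobinRouter(List<String> instances) {
            this.instances = instances;
        }

        // Each request is forwarded to the next instance in the cycle.
        public String pickInstance() {
            int index = Math.floorMod(next.getAndIncrement(), instances.size());
            return instances.get(index);
        }

        public static void main(String[] args) {
            RoundRobinRouter router = new RoundRobinRouter(
                    List.of("http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080"));
            for (int i = 0; i < 6; i++) {
                System.out.println("request " + i + " -> " + router.pickInstance());
            }
        }
    }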