how to use connection pooling in cxf jaxrs webclient - web-services

I am building a REST service which internally calls other services and we use org.apache.cxf.jaxrs.client.WebClient to do this.
I want to use HTTP connection pooling to improve the performance but the documentation isn't very clear about how to do this or if this is even possible. Has anyone here done this?
The only other option I can think of is to re-use clients, but I'd rather not get into the whole set of thread-safety and synchronization issues that comes with that approach.

By default, CXF uses a transport based on the JDK's HttpURLConnection to perform HTTP requests.
Connection pooling is handled by the JDK itself: persistent (keep-alive) connections reuse the underlying socket for multiple HTTP requests.
Set these system properties (default values shown):
http.keepalive=true
http.maxConnections=5
Increase http.maxConnections to raise the maximum number of idle connections that are kept alive simultaneously per destination.
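For example, a minimal sketch of setting these properties programmatically before the first request goes out (the endpoint URL, resource path, and the limit of 20 are placeholders; the same values can also be passed as -D options on the JVM command line):

import org.apache.cxf.jaxrs.client.WebClient;

public class PoolingConfig {
    public static void main(String[] args) {
        // Must be set before the first HTTP request so the JDK honours them.
        System.setProperty("http.keepalive", "true");     // default is already true
        System.setProperty("http.maxConnections", "20");  // default is 5 idle connections per destination

        // Placeholder endpoint, used only for illustration.
        WebClient client = WebClient.create("http://example.com/api");
        String response = client.path("items").accept("application/json").get(String.class);
        System.out.println(response);
    }
}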
This post explains in more detail how it works:
Java HttpURLConnection and pooling
When you need many requests executed simultaneously, CXF can also use the asynchronous Apache HttpAsyncClient. See details here:
http://cxf.apache.org/docs/asynchronous-client-http-transport.html
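A minimal sketch of enabling that conduit for a single WebClient, assuming the cxf-rt-transports-http-hc module is on the classpath (the endpoint URL and path are placeholders; see the linked page for the full set of options):

import org.apache.cxf.jaxrs.client.WebClient;

public class AsyncConduitExample {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://example.com/api"); // placeholder URL
        // Ask CXF to use the HttpAsyncClient-based conduit for this client.
        WebClient.getConfig(client)
                 .getRequestContext()
                 .put("use.async.http.conduit", Boolean.TRUE);
        String response = client.path("items").get(String.class);
        System.out.println(response);
    }
}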

Related

Luminus -- multiple requests within the same db connection

In my Luminus app I have this:
(defn page1 [id]
  (layout/render "page1.html"
                 {:article (db/get-single-article {:id (Integer/parseInt id)})}))
I want to perform multiple different requests to the db within the same db connection. How can I do that?
From your question it's not clear whether you want to reuse the same DB connection to handle multiple HTTP requests, or have a single HTTP request call multiple functions using the JDBC API (so that all those JDBC calls use the same DB connection).
If it is the latter case, you can use with-db-connection to wrap all your functions calling the JDBC API. You can also use with-db-transaction if all the SQL operations should be part of one DB transaction.
For the former case, I am not sure why you would need to reuse the same connection across multiple HTTP requests; it is not a common idiom, since HTTP is stateless by definition, and it causes multiple issues.
You could store the connection in your Ring HTTP session so you can fetch it whenever you get a request associated with that session and use it for the JDBC logic.
However, such a solution has the following drawbacks:
you have to make sure that the connection gets released to the pool (or closed, if you don't use pooling) when it is no longer needed. How would you detect that? What if the client fails and never finishes the workflow after which you would clean up the DB connection?
how many concurrent 'sessions' do you need to handle? If many (like hundreds), keeping a dedicated connection for each session won't scale (DB connections are expensive resources on both the client and the server side)

How to create a full-duplex communication between API and various clients?

In my website, I'd like to create a public API that would allow clients (unknown people) to interact with my services. A classic REST API would work well in that case.
However, I need to be able to send events to the clients too. These events are not related to client HTTP requests. I saw "webhooks" are a way to deal with this. If I understood well, with webhooks, my service would send HTTP POST requests to a URL specified by the client, with event data inside this request.
I think WebSockets could also be a solution for this full-duplex communication need.
What I want to know, is which method would be the simplest for clients to implement to talk to my services? Simplicity is the key point here.
The hard thing is that my clients can use various technologies (full websites with HTTP servers, iOS/Android apps without server, etc.)
What are the implications for clients if I use a REST API + webhooks? Websockets? etc.?
How to make a choice?
Hope it's clear (but not sure). Thanks :)
I would consider webhooks the simpler solution. And yes, you understood it well: with webhooks, a developer using your API registers a URL where your backend will POST event data. It's a common pattern used in APIs.
A great benefit of a webhooks design is that a client/server connection does not need to stay open. After all, if events occur infrequently (e.g. only a few times per hour, or per day) or keeping a persistent connection open is a challenge, establishing a connection only when it's needed is rather efficient.
The challenge of using webhooks for you, the API provider, is designing an evented backend system that deals with change-of-state detection and a reliable webhook calling mechanism (e.g. dealing with webhook receiver URLs that are unresponsive or throw errors).
The challenge of using webhooks on the developer end is that they need to stand up a reliable web server that listens for the event POST data from your server.
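As a rough illustration of the pattern (the receiver URL, event payload, and timeouts below are made up), delivering an event to a registered webhook is just an HTTP POST from your backend to the client's URL:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class WebhookDelivery {
    // Posts a JSON event to the URL the client registered; returns true on a 2xx response.
    static boolean deliver(String webhookUrl, String eventJson) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(webhookUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(eventJson.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode();
        return status >= 200 && status < 300; // caller decides whether to retry on failure
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical registered URL and event payload.
        deliver("https://client.example.com/hooks/orders", "{\"event\":\"order.created\",\"id\":42}");
    }
}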
Realtime APIs (e.g. based on WebSockets or Bayeux/CometD) are really swell because the live connection means that new connections do not have to be established, which is particularly useful with very chatty sessions. Additionally, there are a lot of projects and companies out there that have taken care of the heavy lifting on the server and client with fully-baked libraries. One of those is Fanout.io, which makes pushing messages between the client and server possible with just a few lines of code, using XMPP, Bayeux, and WebSockets when possible.
(I am not affiliated with Fanout, but I have used it)
So, to sum it up, webhooks are simple mostly because you are already familiar with the architecture needed to implement them, and the pattern is a well traveled one. If you are leaning toward a persistent connection approach, I would look at tools/platforms like Fanout because it takes care of the heavy lifting (i.e. subscribe/publish, concurrent connection scale, client/server libraries).

send data from server with java ee 6 to client

Problem
We have a client-server application, server side is Glassfish 3.1.2. This app has many users, as well as many modules (e.g. View Transactions, View Banks etc). There are some long running processes invoked by client which run on server. Currently we have not found a nice solution to show the user what is going on on the server side. We want the users to get updated messages from server with given frequency. What would you suggest to use?
What we have done/tried
We (independently) used an approach with a Singleton bean and a Map of client IDs similar to this, and it works, of course. But then on the server side every method doSomething(Object... vars) must be converted to doSomething(Object... vars, String clientID), or whatever type the ID is. The client polls the server for data, say, once per second. I would like to avoid adding facades between server and client.
I was thinking about JAX-WS or JAX-RS, but I'm not familiar with these technologies deeply and not sure about what they can do.
Sockets
I should note that on the server side we have only Stateless beans (there is a reason for that), which is why I did not mention the use of a Stateful bean (which is a very good candidate, I think).
Regards, Oleg
WebSocket could be a suitable choice: it allows the server to send unsolicited data to clients with no strong coupling; you just have to store a client id to map client connections to running tasks and be able to push updates to the right connection.
The client id/socket connection mapping can be maintained in a singleton bean using an in-memory structure, e.g. a hash map, or in a permanent datastore for scalability purposes or in case you need a more robust solution.
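A rough sketch of such a registry, using the standard javax.websocket API (JSR 356, available in Java EE 7 containers; on Glassfish 3.1.2 you would have to fall back to its proprietary Grizzly WebSocket API, but the id-to-connection mapping idea is the same; the endpoint path and message format are made up for illustration):

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/progress/{clientId}")
public class ProgressEndpoint {

    // In-memory client id -> connection map shared by all endpoint instances.
    private static final Map<String, Session> SESSIONS = new ConcurrentHashMap<String, Session>();

    @OnOpen
    public void onOpen(@PathParam("clientId") String clientId, Session session) {
        SESSIONS.put(clientId, session);
    }

    @OnClose
    public void onClose(@PathParam("clientId") String clientId) {
        SESSIONS.remove(clientId);
    }

    // Called by the long-running task (e.g. from a stateless bean) to push an update.
    public static void pushUpdate(String clientId, String message) {
        Session session = SESSIONS.get(clientId);
        if (session != null && session.isOpen()) {
            try {
                session.getBasicRemote().sendText(message);
            } catch (IOException e) {
                SESSIONS.remove(clientId);
            }
        }
    }
}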
Some useful links to better understand WebSocket technology are this and this.

pion::net HTTPServer persistent connections

I'm working on a low-latency high-throughput, minimalistic HTTP server (almost real-time message switch).
I'm very fond of pion::net, and I've seen numerous references that it supports persistent connections (thus potentially saving the whole TCP ordeal):
http://boost.2283326.n4.nabble.com/Boost-HTTP-td2637928.html
Could anyone point me in the right direction on how to use pion::net that way?
Persistence is a property of TCPConnection (see the setLifecycle method). So if you choose to go the WebServer / Webservice route, set the Lifecycle property accordingly in WebService::operator().
Also, since you're talking HTTP you should set the connection persistence according to the info the client sends you (namely the HTTP version and the value of the Connection header).

Forcing asmx web service to handle requests one at a time

I am debugging an ASMX web service that receives "bursts" of requests, i.e. it is likely that the web service will receive 100 asynchronous requests within about 1 or 2 seconds. Each request seems to take about a second to process (this is expected and I'm OK with this performance). What is important, however, is that each request is dealt with sequentially and no parallel processing takes place. I do not want any concurrent request processing due to the external components called by the web service. Is there any way I can force the web service to handle each request sequentially?
I have seen the maxconnection attribute in the machine.config, but this seems to only work for outbound connections, whereas I wish to throttle the incoming connections.
Please note that refactoring into WCF is not an option at this point in time.
We are using IIS6 on Win2003.
What I've done in the past is to simply put a lock statement around any access to the external resource I was using. In my case, it was a piece of unmanaged code that claimed to be thread-safe, but which in fact would trash the C runtime library heap if accessed from more than one thread at a time.
Perhaps you should be queuing the requests up internally and processing them one by one?
It may cause the clients to poll for results (if they even need them), but you'd get the sequential pipeline you wanted...
In IIS7 you can set a limit on the number of connections allowed to a web site. Can you use IIS7?