Pairing cometd session with HTTP session - jetty

We have a web application using Jetty 8.1, dojo, and cometd that interacts between the browser and web container using (1) a JSON/HTTP REST API for synchronous operations and (2) a cometd API to receive numerous events from the server.
What we are not entirely clear on is how to elegantly manage the authentication sessions of these two different APIs, especially since CometD for us will use WebSocket instead of regular HTTP whenever possible. The application uses form-based authentication with a standard Jetty LDAP module, so from an HTTP perspective the container provides the browser with a standard jsessionid, which looks like this:
Cookie: jsessionid=758E2FAD7C199D722DA8B5E243E0E27D
Based on Simone Bordet's post here, it seems the recommended solution is to pass this token during the CometD handshake, which is what we are doing.
The problem we have is that there are two fundamentally different sessions: the HTTP session and the Bayeux CometD session. For reasons such as potential memory leaks and security issues, we want them to terminate in unison, or to be "paired." If a user's HTTP session is terminated, we want the corresponding Bayeux session to terminate as well, and vice versa. Is there a recommended way of doing this?

The HTTP session and the CometD sessions have different lifecycles: for example, if there is a temporary connection failure, the CometD session will fail, and the server will ask the client to re-handshake, thus creating a different CometD session (representing the same user, but with a different CometD clientId). In the same case, the HttpSession will remain the same.
Having this in mind, you need to maintain - at the application level - a mapping between a username, the corresponding HttpSession, and the corresponding ServerSession.
Let's call this mapping HttpCometDMapper.
Every time a new user logs in, you register the user's name (or another unique identifier of the user), the HttpSession, and the current ServerSession.
You will probably need a two-step process, where you first link the username and the HttpSession, and then the same username with the ServerSession.
If a CometD re-handshake is performed, you update the mapper with the new ServerSession.
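A minimal sketch of such a mapper, assuming the username is the key and that both your login code and your CometD handshake code can reach it (all names here are illustrative, not part of CometD):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.servlet.http.HttpSession;
import org.cometd.bayeux.server.ServerSession;

// Illustrative only: pairs a user name with its HttpSession and its current ServerSession.
public class HttpCometDMapper
{
    private final ConcurrentMap<String, HttpSession> httpSessions = new ConcurrentHashMap<String, HttpSession>();
    private final ConcurrentMap<String, ServerSession> serverSessions = new ConcurrentHashMap<String, ServerSession>();

    // Step 1: call from your login code, after form authentication succeeds.
    public void registerHttpSession(String userName, HttpSession httpSession)
    {
        httpSessions.put(userName, httpSession);
    }

    // Step 2: call from your CometD handshake handling; also on re-handshake,
    // so the stale ServerSession is replaced by the new one.
    public void registerServerSession(String userName, ServerSession serverSession)
    {
        serverSessions.put(userName, serverSession);
    }

    public HttpSession getHttpSession(String userName)
    {
        return httpSessions.get(userName);
    }

    public ServerSession getServerSession(String userName)
    {
        return serverSessions.get(userName);
    }

    public void remove(String userName)
    {
        httpSessions.remove(userName);
        serverSessions.remove(userName);
    }
}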
You can link the two sessions by registering an HttpSessionListener (with the web application, not on a single HttpSession) so that when an HttpSession is destroyed, you retrieve the current CometD ServerSession from the mapper and call ServerSession.disconnect() on it.
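For example, a listener along these lines (sketch only; it assumes your login code stored the user name in the HttpSession under a "userName" attribute, and that the mapper can be injected or looked up when the listener is registered):
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;
import org.cometd.bayeux.server.ServerSession;

public class PairingHttpSessionListener implements HttpSessionListener
{
    private final HttpCometDMapper httpCometDMapper;

    public PairingHttpSessionListener(HttpCometDMapper mapper)
    {
        this.httpCometDMapper = mapper;
    }

    public void sessionCreated(HttpSessionEvent event)
    {
    }

    public void sessionDestroyed(HttpSessionEvent event)
    {
        // "userName" is whatever attribute your login code stores.
        String userName = (String)event.getSession().getAttribute("userName");
        if (userName == null)
            return;
        ServerSession serverSession = httpCometDMapper.getServerSession(userName);
        if (serverSession != null)
            serverSession.disconnect();
    }
}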
The reverse direction is a bit trickier, because CometD does not have a concept of inactivity timeout like HttpSession has. It must be implemented in the application with your own logic.
One part of doing it is to register a RemoveListener on the ServerSession, like this:
serverSession.addListener(new ServerSession.RemoveListener()
{
    public void removed(ServerSession session, boolean timeout)
    {
        if (!timeout)
        {
            // Explicitly disconnected, invalidate the HttpSession
            httpCometDMapper.invalidate(session);
        }
    }
});
This listener watches for explicit disconnects from the client (and the server - beware of reentrancy).
Slightly more difficult is to implement the same mechanism for non-explicit disconnects. In this case the timeout parameter will be true, but the removal could have been caused by a temporary network failure (as opposed to the client disappearing for good), and the same user may already have re-handshaken with a new ServerSession.
I think in this case an application timeout could solve the issue: when you see a ServerSession removed because of a timeout, you note that user and start an application timeout. If the same user re-handshakes, cancel the application timeout; otherwise the user is really gone, the application timeout expires, and you invalidate the HttpSession too.
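One way to sketch that application timeout is with a ScheduledExecutorService (again, all names are made up, and the grace period is application-specific):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import javax.servlet.http.HttpSession;

// Sketch only: tracks users whose ServerSession was removed by timeout and
// invalidates their HttpSession if they do not re-handshake within the grace period.
public class ReHandshakeTimeouts
{
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<String, ScheduledFuture<?>>();
    private final HttpCometDMapper httpCometDMapper;
    private final long gracePeriodSeconds;

    public ReHandshakeTimeouts(HttpCometDMapper mapper, long gracePeriodSeconds)
    {
        this.httpCometDMapper = mapper;
        this.gracePeriodSeconds = gracePeriodSeconds;
    }

    // Call this when a ServerSession is removed with timeout == true.
    public void onServerSessionTimeout(final String userName)
    {
        ScheduledFuture<?> task = scheduler.schedule(new Runnable()
        {
            public void run()
            {
                pending.remove(userName);
                // The user never came back: tear down the HTTP side too.
                HttpSession httpSession = httpCometDMapper.getHttpSession(userName);
                if (httpSession != null)
                {
                    try
                    {
                        httpSession.invalidate();
                    }
                    catch (IllegalStateException alreadyInvalidated)
                    {
                        // The HttpSession was already invalidated; nothing to do.
                    }
                }
                httpCometDMapper.remove(userName);
            }
        }, gracePeriodSeconds, TimeUnit.SECONDS);
        pending.put(userName, task);
    }

    // Call this when the same user re-handshakes with a new ServerSession.
    public void onReHandshake(String userName)
    {
        ScheduledFuture<?> task = pending.remove(userName);
        if (task != null)
            task.cancel(false);
    }
}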
The above are just ideas and suggestions; the actual implementation depends heavily on application details (and that's why it is not provided by CometD out of the box).
The key points are the mapper, the HttpSessionListener and the RemoveListener, and knowing the lifecycles of those components.
Once you manage that, you can write the right code that does the right thing for your application.
Finally, note that CometD has a transport-agnostic way of interacting with the HttpSession via the BayeuxContext instance, which you can obtain from BayeuxServer.getContext().
I suggest that you look at that also, to see if it can simplify things, especially for retrieving tokens stored in the HttpSession.
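For instance, a SecurityPolicy could read data stored in the HttpSession at login time while handling the handshake; the exact BayeuxContext methods available depend on your CometD version, so treat this as a sketch (the mapper and the "userName" attribute are the hypothetical ones from above):
import org.cometd.bayeux.server.BayeuxContext;
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ServerMessage;
import org.cometd.bayeux.server.ServerSession;
import org.cometd.server.DefaultSecurityPolicy;

// Sketch only: pairs the handshaking ServerSession with the user stored in the HttpSession,
// without caring whether the transport is long-polling or WebSocket.
public class PairingSecurityPolicy extends DefaultSecurityPolicy
{
    private final HttpCometDMapper httpCometDMapper;

    public PairingSecurityPolicy(HttpCometDMapper mapper)
    {
        this.httpCometDMapper = mapper;
    }

    @Override
    public boolean canHandshake(BayeuxServer server, ServerSession session, ServerMessage message)
    {
        BayeuxContext context = server.getContext();
        // Attribute name is an example; use whatever your login code stores.
        String userName = (String)context.getHttpSessionAttribute("userName");
        if (userName == null)
            return false;
        httpCometDMapper.registerServerSession(userName, session);
        return true;
    }
}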

Would there be any problem if we create a new BayeuxClient after a temporary connection failure?
You can try the code below.
// Called from wherever you start the streaming client:
try {
    log.info("Running streaming client example....");
    makeConnect();
} catch (Exception e) {
    handleException("Error while setup the salesforce connection.", e);
}
private void makeConnect() {
    try {
        client = makeClient();

        client.getChannel(Channel.META_HANDSHAKE).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_HANDSHAKE]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        String error = (String) message.get("error");
                        if (error != null) {
                            log.error("Error during HANDSHAKE: " + error);
                        }
                        Exception exception = (Exception) message.get("exception");
                        if (exception != null) {
                            handleException("Exception during HANDSHAKE: ", exception);
                        }
                    }
                }
            });

        client.getChannel(Channel.META_CONNECT).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_CONNECT]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        client.disconnect();
                        makeConnect();
                        String error = (String) message.get("error");
                        if (error != null) {
                            //log.error("Error during CONNECT: " + error);
                        }
                    }
                }
            });

        client.getChannel(Channel.META_SUBSCRIBE).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_SUBSCRIBE]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        String error = (String) message.get("error");
                        if (error != null) {
                            makeConnect();
                            log.error("Error during SUBSCRIBE: " + error);
                        }
                    }
                }
            });

        client.handshake();
        log.info("Waiting for handshake");
        boolean handshaken = client.waitFor(waitTime, BayeuxClient.State.CONNECTED);
        if (!handshaken) {
            log.error("Failed to handshake: " + client);
        }

        log.info("Subscribing for channel: " + channel);
        client.getChannel(channel).subscribe(new MessageListener() {
            public void onMessage(ClientSessionChannel channel, Message message) {
                injectSalesforceMessage(message);
            }
        });
        log.info("Waiting for streamed data from your organization ...");
    } catch (Exception e) {
        handleException("Error while setup the salesforce connection.", e);
    }
}

Related

Setting JMSMessageID on stubbed jms endpoints in camel unit tests

I have a route that I am testing. I use stub://jms:queue:whatever to send/receive messages and extend CamelTestSupport for my test classes. I am having an issue with one of the routes: it has a bean that uses an idempotent repository to store messages by "message id", for which it reads and stores the JMSMessageID property from the exchange.
The problem I run into is that I can't figure out a way to set this property on messages sent to stubbed endpoints. Every time the method that requires this property is called, the id comes back null and I have to handle the null pointer. I can do this, but the cleanest approach would be to just set the header on the test message. I tried includeSentJMSMessageId=true on the endpoint, and I tried using sendBodyAndHeader on the producer, passing "JMSMessageID" and "ID: whatever" as arguments, but neither appears to work. I read that the driver/connection factory is supposed to set the header, but I'm not too familiar with how/where that happens. And since I am using stubbed endpoints, I'm not creating any brokers/connection factories in my unit tests.
So don't stub out the JMS component; replace it with a processor and then add the preferred JMSMessageID in the processor.
Something like this code:
@Test
void testIdempotency() throws Exception {
    mockOut.expectedMinimumMessageCount(1);

    // Specify the route to test.
    AdviceWithRouteBuilder.adviceWith(context, "your-route-name", enrichRoute -> {
        // Replace the from with an endpoint we can call directly.
        enrichRoute.replaceFromWith("direct:start");
        // Replace the JMS endpoint with a processor so it can act as the JMS endpoint.
        enrichRoute.weaveByToUri("jms:queue:whatever").replace().process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                // Set that ID to the one I want to test.
                exchange.getIn().setHeader("JMSMessageID", "some-value-to-test");
            }
        });
        // Add an endpoint at the end to check if we received a message.
        enrichRoute.weaveAddLast().to(mockOut);
    });
    context.start();

    // Send some message.
    Map<String, Object> sampleMsg = getSampleMessageAsHashMap("REQUEST.json");
    // Get the response.
    Map<String, Object> response = (Map<String, Object>) template.requestBody("direct:start", sampleMsg);

    // You will need to check if the response is what you expected.
    // Check the headers etc.
    mockOut.assertIsSatisfied();
}
The JMSMessageID can only be set by the provider. It cannot be set by a client, despite the fact that javax.jms.Message has setJMSMessageID(). As the JavaDoc states:
This method is for use by JMS providers only to set this field when a message is sent. This message cannot be used by clients to configure the message ID. This method is public to allow a JMS provider to set this field when sending a message whose implementation is not its own.

Using third party http client on Armeria

I'm discovering the Armeria framework and I want to consume a REST service.
Using the Armeria WebClient:
WebClient webClient = WebClient.of("http://localhost:9090");
RequestHeaders getJson = RequestHeaders.of(HttpMethod.GET, "/some-service",
        HttpHeaderNames.CONTENT_TYPE, "application/json", "SomeHeader", "armeriaTest");
return webClient.execute(getJson).aggregate().thenApply(resp -> {
    if (HttpStatus.OK.equals(resp.status())) {
        return parseBody(resp.contentUtf8());
    } else if (HttpStatus.BAD_REQUEST.equals(resp.status())) {
        throw new IllegalStateException("not exists");
    }
    throw new RuntimeException("Error");
});
This code returns a CompletionStage that will be resolved asynchronously; if I call join() or get() right here it causes a "java.lang.IllegalStateException: Blocking event loop, don't do this."
My question is: what if I want to use a third-party HTTP client library (like Apache HttpClient) instead of the Armeria WebClient?
Should the client call be wrapped in a Future too?
How should I manage the client requests to fit in the framework approach and avoid the "Blocking event loop" issue?
Thanks to all!
Yes. You should never perform any blocking operations when your code is running in an event loop thread. You can perform a blocking operation by submitting it to another thread pool dedicated to handling blocking operations.
If you are using Armeria on the server side, you can get one via ServiceRequestContext.blockingTaskExecutor():
Server server = Server
        .builder()
        .service("/", (ctx, req) -> {
            CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> {
                // Perform some blocking operations that return a string.
                // someBlockingOperation() is a placeholder for your own blocking call.
                return someBlockingOperation();
            }, ctx.blockingTaskExecutor());
            CompletableFuture<HttpResponse> f2 = f1.thenApply(result -> {
                // Transform the result into an HttpResponse.
                return HttpResponse.of("Result: %s", result);
            });
            return HttpResponse.from(f2);
        })
        .build();
If you are not using Armeria on the server side, you can use another Executor provided by your platform, or you can even create a new ThreadPoolExecutor dedicated to handling blocking operations.
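As a concrete sketch of the first case, a blocking third-party client (Apache HttpClient 4.x here, purely as an example, wired into a hypothetical /proxy service) could be wrapped like this; any other blocking client would be handled the same way:
import java.util.concurrent.CompletableFuture;

import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import com.linecorp.armeria.common.HttpResponse;
import com.linecorp.armeria.server.Server;

public class BlockingClientOnArmeria
{
    public static void main(String[] args)
    {
        CloseableHttpClient apacheClient = HttpClients.createDefault();

        Server server = Server
                .builder()
                .service("/proxy", (ctx, req) -> {
                    // Run the blocking Apache HttpClient call off the event loop.
                    CompletableFuture<String> body = CompletableFuture.supplyAsync(() -> {
                        try {
                            return apacheClient.execute(
                                    new HttpGet("http://localhost:9090/some-service"),
                                    response -> EntityUtils.toString(response.getEntity()));
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    }, ctx.blockingTaskExecutor());
                    // Turn the eventual string into an HttpResponse without blocking.
                    return HttpResponse.from(body.thenApply(s -> HttpResponse.of("Result: %s", s)));
                })
                .build();

        server.start().join();
    }
}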

Preventing a WCF client from issuing too many requests

I am writing an application where the Client issues commands to a web service (CQRS)
The client is written in C#
The client uses a WCF Proxy to send the messages
The client uses the async pattern to call the web service
The client can issue multiple requests at once.
My problem is that sometimes the client simply issues too many requests and the service starts returning that it is too busy.
Here is an example. I am registering orders, and they can range from a handful up to a few thousand.
var taskList = Orders.Select(order => _cmdSvc.ExecuteAsync(order))
.ToList();
await Task.WhenAll(taskList);
Basically, I call ExecuteAsync for every order and get a Task back. Then I just await for them all to complete.
I don't really want to fix this server-side because no matter how much I tune it, the client could still kill it by sending for example 10,000 requests.
So my question is: can I configure the WCF client in any way so that it simply accepts all the requests but keeps at most, say, 20 in flight, automatically dispatching the next one as each completes? Or is the Task I get back tied to the actual HTTP request, and therefore unable to return until the request has actually been dispatched?
If this is the case and WCF Client simply cannot do this form me, I have the idea of decorating the WCF Client with a class that queues commands, returns a Task (using TaskCompletionSource) and then makes sure that there are no more than say 20 requests active at a time. I know this will work but I would like to ask if anyone knows of a library or a class that does something like this?
This is kind of like Throttling but I don't want to do exactly that because I don't want to limit how many requests I can send in a given period of time but rather how many active requests can exist at any given time.
Based on @PanagiotisKanavos's suggestion, here is how I solved this.
RequestLimitCommandService acts as a decorator for the actual service, which is passed in to the constructor as innerSvc. When someone calls ExecuteAsync, a completion source is created and, along with the command, posted to the ActionBlock; the caller then gets back a Task from the completion source.
The ActionBlock will then call the processing method, which sends the command to the web service. Depending on what happens, this method uses the completion source either to notify the original sender that the command was processed successfully or to attach the exception that occurred.
public class RequestLimitCommandService : IAsyncCommandService
{
    private class ExecutionToken
    {
        public TaskCompletionSource<bool> Source { get; }
        public ICommand Command { get; }

        public ExecutionToken(TaskCompletionSource<bool> source, ICommand command)
        {
            Source = source;
            Command = command;
        }
    }

    private IAsyncCommandService _innerSvc;
    private ActionBlock<ExecutionToken> _block;

    public RequestLimitCommandService(IAsyncCommandService innerSvc, int maxDegreeOfParallelism)
    {
        _innerSvc = innerSvc;
        var options = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
        _block = new ActionBlock<ExecutionToken>(Execute, options);
    }

    public Task ExecuteAsync(ICommand command)
    {
        var source = new TaskCompletionSource<bool>();
        var token = new ExecutionToken(source, command);
        _block.Post(token);
        return source.Task;
    }

    private async Task Execute(ExecutionToken token)
    {
        try
        {
            await _innerSvc.ExecuteAsync(token.Command);
            token.Source.SetResult(true);
        }
        catch (Exception ex)
        {
            token.Source.SetException(ex);
        }
    }
}

Windows::Web::Http::HttpClient - Renegotiate SSL handshake

As part of my Windows Phone app, I use Windows::Web::Http::HttpClient to post requests to the server. I tried:
void sendRequest(HttpRequestMessage^ httpReqMsg)
{
    HttpBaseProtocolFilter^ httpFilter = ref new HttpBaseProtocolFilter();
    httpFilter->CacheControl->WriteBehavior = HttpCacheWriteBehavior::NoCache;
    HttpClient^ httpClient = ref new HttpClient(httpFilter);
    try
    {
        // Post the request
        auto httpProgress = httpClient->SendRequestAsync(httpReqMsg);
        // Handle the http progress and its response messages
        // ...
    }
    catch (Exception^ ex)
    {
        // ...
    }
} // httpFilter, httpClient are auto released
When httpFilter and httpClient fall out of scope, I expect the underlying sockets and memory resources to be released. During the call to HttpClient::SendRequestAsync, I see the SSL negotiation happening the first time. Any further calls to the sendRequest function aren't triggering a full handshake.
I'm not allowed to load any DLLs to explicitly clear the SSL cache (SslEmptyCache). Isn't my assumption correct that a full handshake should happen on every call to the sendRequest function? If not, how do I achieve a full SSL handshake? Thanks.

Cancelling async webservice call in windows phone

I'm developing a Windows Phone app that consumes a .NET web service (also developed by me). When I call a web service method I do it asynchronously and don't block the UI. For example, here's a code sample for asking the server for a list of flight arrivals.
service.MobileWSSoapClient Proxy { get; set; }
Proxy = new service.MobileWSSoapClient();
Proxy.GetArrivalsCompleted += proxy_GetArrivalsCompleted;
Proxy.GetArrivalsAsync(searchFilter);
This way I give the user the freedom to call the same method again, or another one (e.g. refreshing the arrivals list or searching for a particular arrival). If the user triggers a new call to the service, the app should "cancel" the first call and only show the result of the last call. I think it is technically impossible to cancel a web service call that has already reached the server; we have to wait for the server response and then ignore it. Knowing that, it would be helpful to somehow mark that call as obsolete. It would be enough to receive an error as the response of that obsolete call. Here is pseudo code of what I imagine/need.
void proxy_GetArrivalsCompleted(object sender, service.GetArrivalsCompletedEventArgs e)
{
    if (e.Error == null)
    {
        // DO WORK
    }
    else
    {
        if (e.Error == Server Exception || e.Error == Connection Exception)
        {
            MessageBox.Show("error");
        }
        else if (e.Error == obsolete call)
        {
            // DO NOTHING
        }
    }
}
Thanks in advance.
You can use a BackgroundWorker for your scenario. So, when the user calls the web service again, you can cancel your BackgroundWorker process, which will end the service call.
How to use BackgroundWorker here.