I'm discovering the Armeria framework and I want to consume a REST service.
Using the Armeria WebClient:
WebClient webClient = WebClient.of("http://localhost:9090");
RequestHeaders getJson = RequestHeaders.of(HttpMethod.GET, "/some-service",
        HttpHeaderNames.CONTENT_TYPE, "application/json", "SomeHeader", "armeriaTest");
return webClient.execute(getJson).aggregate().thenApply(resp -> {
    if (HttpStatus.OK.equals(resp.status())) {
        return parseBody(resp.contentUtf8());
    } else if (HttpStatus.BAD_REQUEST.equals(resp.status())) {
        throw new IllegalStateException("not exists");
    }
    throw new RuntimeException("Error");
});
This code returns a CompletionStage that is resolved asynchronously; if I call join() or get() right here, it causes "java.lang.IllegalStateException: Blocking event loop, don't do this."
My question is: what if I want to use a third-party HTTP client library (like Apache HttpClient) instead of the WebClient?
Should the client call be wrapped in a Future too?
How should I manage the client requests to fit the framework's approach and avoid the "Blocking event loop" issue?
Thanks to all!
Yes. You should never perform any blocking operation while your code is running in an event loop thread. You can perform a blocking operation by submitting it to another thread pool dedicated to handling blocking operations.
If you are using Armeria on the server side, you can get one via ServiceRequestContext.blockingTaskExecutor():
Server server = Server
        .builder()
        .service("/", (ctx, req) -> {
            CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> {
                // Perform some blocking operations that return a string.
                return "...";
            }, ctx.blockingTaskExecutor());
            CompletableFuture<HttpResponse> f2 = f1.thenApply(result -> {
                // Transform the result into an HttpResponse.
                return HttpResponse.of("Result: %s", result);
            });
            return HttpResponse.from(f2);
        })
        .build();
If you are not using Armeria on the server side, you can use another Executor provided by your platform, or you can even create a new ThreadPoolExecutor dedicated to handling blocking operations.
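For the Apache HttpClient case asked about above, here is a minimal sketch using only the JDK (no Armeria required); `blockingHttpCall` is a hypothetical stand-in for the blocking client call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingOffload {
    // Dedicated pool for blocking work; daemon threads so it never blocks JVM exit.
    private static final ExecutorService BLOCKING_POOL =
            Executors.newFixedThreadPool(8, r -> {
                Thread t = new Thread(r, "blocking-io");
                t.setDaemon(true);
                return t;
            });

    // Wrap the blocking call in a CompletableFuture so the event loop
    // thread only ever sees a non-blocking CompletionStage.
    public static CompletableFuture<String> fetchAsync() {
        return CompletableFuture.supplyAsync(BlockingOffload::blockingHttpCall, BLOCKING_POOL);
    }

    // Hypothetical stand-in for e.g. Apache HttpClient's blocking execute().
    private static String blockingHttpCall() {
        try {
            Thread.sleep(100); // simulate network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response-body";
    }
}
```

The event loop thread can then compose on the returned future with thenApply, just as in the WebClient example above, without ever blocking.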
How does the Boost.Beast async HTTP client work in C++11 when multiple simultaneous requests are made in a single-threaded asynchronous system?
USE CASE:
I want to send multiple simultaneous asynchronous requests, and I create a new HTTP client for each request. When the response to any request is received, I call a callback function which deletes the client 1 sec after the response is received, to avoid memory leaks. But the code appears to hang after some random number of simultaneous HTTP requests, even though I create a new client object for each request. Does Boost.Beast use some shared resource? The pause looks like the system is stuck in an infinite deadlock. PS: I also tried commenting out the delete block, but the system behaves the same.
Below are the specifications for the boost and compiler version:
boost: stable 1.68.0
BOOST_BEAST_VERSION 181
clang -v
clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
void sendHttpRequest(){
    HttpClient *client = new HttpClient();
    // Delete the client 1 second after the response/error arrives.
    auto deleteClient = [this, client]{
        int timeout = 1;
        auto *clientDeleteTimer = new boost::asio::deadline_timer(*this->context);
        clientDeleteTimer->expires_from_now(boost::posix_time::seconds(timeout));
        clientDeleteTimer->async_wait([client, clientDeleteTimer](const boost::system::error_code &ec){
            if(ec == boost::asio::error::operation_aborted){
                std::cout << " Operation aborted\n" << std::flush;
                return; // note: both the client and the timer leak on this path
            }
            else{
                delete client;
            }
            delete clientDeleteTimer;
        });
    };
    auto callback = [deleteClient]{
        std::cout << "Response received successfully\n" << std::flush;
        deleteClient();
    };
    auto errback = [deleteClient]{
        std::cout << "Response not received\n" << std::flush;
        deleteClient();
    };
    client->sendPostRequest(request, callback, errback);
}
The function above is a wrapper that is called for each request; internally it creates a new async HTTP client and deletes that client object 1 sec after a response/error is received (i.e. once the request has been processed).
See https://github.com/boostorg/beast/issues/1458 — it addresses the same issue, but I guess it is still unresolved.
I am writing an application where the Client issues commands to a web service (CQRS)
The client is written in C#
The client uses a WCF Proxy to send the messages
The client uses the async pattern to call the web service
The client can issue multiple requests at once.
My problem is that sometimes the client simply issues too many requests and the service starts returning that it is too busy.
Here is an example. I am registering orders, and they can range from a handful up to a few thousand.
var taskList = Orders.Select(order => _cmdSvc.ExecuteAsync(order))
.ToList();
await Task.WhenAll(taskList);
Basically, I call ExecuteAsync for every order and get a Task back. Then I just await for them all to complete.
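For comparison, the same fan-out-then-await-all shape can be sketched in plain Java, where CompletableFuture.allOf plays the role of Task.WhenAll (the `send` consumer is an illustrative stand-in for the per-order service call):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;
import java.util.stream.Collectors;

public class OrderFanOut {
    // Fan out one async task per order, then await them all.
    // CompletableFuture.allOf is Java's analogue of C#'s Task.WhenAll.
    public static CompletableFuture<Void> executeAll(List<String> orders, Consumer<String> send) {
        List<CompletableFuture<Void>> tasks = orders.stream()
                .map(order -> CompletableFuture.runAsync(() -> send.accept(order)))
                .collect(Collectors.toList());
        return CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0]));
    }
}
```

Note that, exactly as with the C# version, this launches every call at once, which is what motivates the concurrency limit discussed next.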
I don't really want to fix this server-side because no matter how much I tune it, the client could still kill it by sending for example 10,000 requests.
So my question is: can I configure the WCF client in any way so that it simply accepts all the requests but only sends a maximum of, say, 20 at a time, automatically dispatching the next one when one completes? Or is the Task I get back tied to the actual HTTP request, so that it cannot return until the request has actually been dispatched?
If this is the case and the WCF client simply cannot do this for me, my idea is to decorate the WCF client with a class that queues commands, returns a Task (using TaskCompletionSource), and then makes sure that no more than, say, 20 requests are active at a time. I know this will work, but I would like to ask if anyone knows of a library or class that already does something like this?
This is kind of like throttling, but not exactly: I don't want to limit how many requests I can send in a given period of time, but rather how many active requests can exist at any given time.
Based on @PanagiotisKanavos' suggestion, here is how I solved this.
RequestLimitCommandService acts as a decorator for the actual service, which is passed to the constructor as innerSvc. When someone calls ExecuteAsync, a completion source is created and posted, along with the command, to the ActionBlock; the caller then gets back a Task from the completion source.
The ActionBlock then calls the processing method. This method sends the command to the web service and uses the completion source either to notify the original sender that the command was processed successfully or to attach the exception that occurred.
public class RequestLimitCommandService : IAsyncCommandService
{
    private class ExecutionToken
    {
        public TaskCompletionSource<bool> Source { get; }
        public ICommand Command { get; }

        public ExecutionToken(TaskCompletionSource<bool> source, ICommand command)
        {
            Source = source;
            Command = command;
        }
    }

    private readonly IAsyncCommandService _innerSvc;
    private readonly ActionBlock<ExecutionToken> _block;

    public RequestLimitCommandService(IAsyncCommandService innerSvc, int maxDegreeOfParallelism)
    {
        _innerSvc = innerSvc;
        var options = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
        _block = new ActionBlock<ExecutionToken>(Execute, options);
    }

    Task IAsyncCommandService.ExecuteAsync(ICommand command)
    {
        var source = new TaskCompletionSource<bool>();
        var token = new ExecutionToken(source, command);
        _block.Post(token);
        return source.Task;
    }

    private async Task Execute(ExecutionToken token)
    {
        try
        {
            await _innerSvc.ExecuteAsync(token.Command);
            token.Source.SetResult(true);
        }
        catch (Exception ex)
        {
            token.Source.SetException(ex);
        }
    }
}
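For readers outside .NET, the same "at most N requests in flight" decorator idea can be sketched in plain Java, where a fixed-size thread pool plays the role of ActionBlock's MaxDegreeOfParallelism and a CompletableFuture plays the role of the TaskCompletionSource (all names here are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RequestLimiter {
    private final ExecutorService pool;

    public RequestLimiter(int maxInFlight) {
        // A fixed-size pool of daemon threads gives the same
        // "at most N active at once, queue the rest" behavior.
        this.pool = Executors.newFixedThreadPool(maxInFlight, r -> {
            Thread t = new Thread(r, "limited-request");
            t.setDaemon(true);
            return t;
        });
    }

    // Queue the task and hand the caller a future immediately,
    // mirroring the TaskCompletionSource pattern above.
    public <T> CompletableFuture<T> submit(Callable<T> task) {
        CompletableFuture<T> future = new CompletableFuture<>();
        pool.execute(() -> {
            try {
                future.complete(task.call());
            } catch (Exception e) {
                future.completeExceptionally(e);
            }
        });
        return future;
    }
}
```

Callers can submit thousands of tasks and await all the returned futures; only maxInFlight of them run concurrently.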
As part of my Windows Phone app, I use Windows::Web::Http::HttpClient to post requests to the server. I tried -
void sendRequest(HttpRequestMessage^ httpReqMsg)
{
HttpBaseProtocolFilter^ httpFilter = ref new HttpBaseProtocolFilter();
httpFilter->CacheControl->WriteBehavior = HttpCacheWriteBehavior::NoCache;
HttpClient^ httpClient = ref new HttpClient(httpFilter);
try
{
// Post the request
auto httpProgress = httpClient->SendRequestAsync(httpReqMsg);
// Handle the http progress and it's response messages
// ...
}
catch(Exception^ ex)
{
// ...
}
} // httpFilter, httpClient are auto released
When httpFilter and httpClient fall out of scope, I expect the underlying sockets and memory resources to be released. During the call to HttpClient::SendRequestAsync, I see SSL negotiation happening the first time. Further calls to the sendRequest function don't trigger a full handshake.
I am not allowed to load any DLLs to explicitly clear the SSL cache (SslEmptyCache). Isn't my assumption correct that a full handshake should happen on every call to the sendRequest function? If not, how do I achieve a full SSL handshake? Thanks.
I'm developing a Windows Phone app that consumes a .NET Web Service (also developed by me). When I call a Web Service method, I do it asynchronously so I don't block the UI. For example, here's a code sample that asks the server for a list of flight arrivals.
service.MobileWSSoapClient Proxy { get; set; }
Proxy = new service.MobileWSSoapClient();
Proxy.GetArrivalsCompleted += proxy_GetArrivalsCompleted;
Proxy.GetArrivalsAsync(searchFilter);
This way the user is free to call the same method again, or another one (e.g. refreshing the arrivals list or searching for a particular arrival). If the user triggers a new call to the service, the app should "cancel" the first call and only show the result of the last call. I think it is technically impossible to cancel a web service call that has already reached the server; we have to wait for the server response and then ignore it. Knowing that, it would be helpful to somehow mark that call as obsolete. It would be enough to receive an error as the response to that obsolete call. Here is pseudo code of what I imagine/need:
void proxy_GetArrivalsCompleted(object sender, service.GetArrivalsCompletedEventArgs e){
    if (e.Error == null){
        // DO WORK
    }
    else
    {
        if(e.Error == Server Exception || e.Error == Connection Exception){
            MessageBox.Show("error");
        }
        else if (e.Error == obsolete call){
            // DO NOTHING
        }
    }
}
Thanks in advance.
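One language-neutral way to mark calls as obsolete is a monotonically increasing sequence number: tag each request when it starts, and when a response arrives, process it only if its tag is still the newest. A minimal sketch in Java (names illustrative, not tied to any web service stack):

```java
import java.util.concurrent.atomic.AtomicLong;

// "Latest call wins": tag each outgoing request with a sequence number
// and treat a completion as obsolete if a newer request started since.
public class LatestCallTracker {
    private final AtomicLong latest = new AtomicLong();

    // Call just before issuing a request; keep the returned token.
    public long startCall() {
        return latest.incrementAndGet();
    }

    // Call when that request's response arrives.
    public boolean isObsolete(long token) {
        return token != latest.get();
    }
}
```

In the completion handler above, the "DO NOTHING" branch then corresponds to isObsolete(token) being true for that call's token.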
You can use a BackgroundWorker for your scenario. When the user calls the web service again, you can cancel your BackgroundWorker process, which will end the service call.
See how to use BackgroundWorker here.
We have a web application using Jetty 8.1, dojo, and cometd that interacts between the browser and web container using (1) a JSON/HTTP REST API for synchronous operations and (2) a cometd API to receive numerous events from the server.
What we are not entirely clear on is how to elegantly manage the authentication sessions of these two different APIs, especially since CometD in our case will use WebSocket instead of regular HTTP whenever possible. The application uses form-based authentication with a standard Jetty LDAP module. So from an HTTP perspective the container provides the browser with a standard jsessionid, which looks like this:
Cookie: jsessionid=758E2FAD7C199D722DA8B5E243E0E27D
Based on Simone Bordet's post here, it seems the recommended solution is to pass this token during the CometD handshake, which is what we are doing.
The problem we have is that there are two fundamentally different sessions: the HTTP session and the Bayeux CometD session. For reasons such as potential memory leaks and security issues, we want them to terminate in unison, or to be "paired." If a user's HTTP session is terminated, we want the corresponding Bayeux session to terminate as well, and vice versa. Is there a recommended way of doing this?
The HTTP session and the CometD session have different lifecycles: for example, if there is a temporary connection failure, the CometD session will fail, and the server will ask the client to re-handshake, thus creating a different CometD session (representing the same user, but with a different CometD clientId). In the same case, the HttpSession will remain the same.
Having this in mind, you need to maintain - at the application level - a mapping between a username, the correspondent HttpSession, and the correspondent ServerSession.
Let's call this mapping HttpCometDMapper.
Every time a new user logs in, you register its name (or another unique identifier of the user), the HttpSession, and the current ServerSession.
Probably you will need a two step process, where you first link the username and the HttpSession, and then the same username with the ServerSession.
If a CometD re-handshake is performed, you update the mapper with the new ServerSession.
You can link the two sessions by registering an HttpSessionListener to the HttpSession so that when it's destroyed, you retrieve the current CometD ServerSession from the mapper and call ServerSession.disconnect() on it.
The reverse direction is a bit trickier because CometD does not have a concept of inactivity timeout like HttpSession does. It must be implemented in the application with your own logic.
One part of doing it is to register a RemoveListener on the ServerSession, like this:
serverSession.addListener(new ServerSession.RemoveListener()
{
    public void removed(ServerSession session, boolean timeout)
    {
        if (!timeout)
        {
            // Explicitly disconnected, invalidate the HttpSession.
            httpCometDMapper.invalidate(session);
        }
    }
});
This listener watches for explicit disconnects from the client (and the server - beware of reentrancy).
Slightly more difficult is to implement the same mechanism for non-explicit disconnects. In this case, the timeout parameter will be true, but could have happened because of a temporary network failure (as opposed to the client disappearing for good), and the same user may have already re-handshaken with a new ServerSession.
I think in this case an application timeout could solve the issue: when you see a ServerSession removed because of a timeout, you note that user and start an application timeout. If the same user re-handshakes, cancel the application timeout; otherwise the user is really gone, the application timeout expires, and you invalidate the HttpSession too.
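The application-timeout idea above can be sketched with a plain ScheduledExecutorService; everything here is illustrative application code, not CometD API, and the invalidation callback stands in for HttpSession.invalidate():

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PendingInvalidation {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "session-grace");
                t.setDaemon(true);
                return t;
            });
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();

    // A ServerSession timed out: start a grace period before killing the HttpSession.
    public void onTimeoutRemoval(String userId, Runnable invalidateHttpSession, long graceMillis) {
        pending.put(userId, scheduler.schedule(() -> {
            pending.remove(userId);
            invalidateHttpSession.run(); // user really gone
        }, graceMillis, TimeUnit.MILLISECONDS));
    }

    // The same user re-handshook in time: keep the HttpSession alive.
    public void onRehandshake(String userId) {
        ScheduledFuture<?> f = pending.remove(userId);
        if (f != null) {
            f.cancel(false);
        }
    }
}
```

onTimeoutRemoval would be driven from the RemoveListener's timeout branch, and onRehandshake from a handshake listener for the same user.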
The above are just ideas and suggestions; the actual implementation depends heavily on application details (and that's why it is not provided by CometD out of the box).
The key points are the mapper, the HttpSessionListener and the RemoveListener, and knowing the lifecycles of those components.
Once you manage that, you can write the right code that does the right thing for your application.
Finally, note that CometD has a transport-agnostic way of interacting with the HttpSession via the BayeuxContext instance, that you can obtain from BayeuxServer.getContext().
I suggest that you look at that also, to see if it can simplify things, especially for retrieving tokens stored in the HttpSession.
Is there any problem with creating a new BayeuxClient after a temporary connection failure?
You can try the code below.
try {
    log.info("Running streaming client example....");
    makeConnect();
} catch (Exception e) {
    handleException("Error while setup the salesforce connection.", e);
}
private void makeConnect() {
    try {
        client = makeClient();
        client.getChannel(Channel.META_HANDSHAKE).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_HANDSHAKE]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        String error = (String) message.get("error");
                        if (error != null) {
                            log.error("Error during HANDSHAKE: " + error);
                        }
                        Exception exception = (Exception) message.get("exception");
                        if (exception != null) {
                            handleException("Exception during HANDSHAKE: ", exception);
                        }
                    }
                }
            });
        client.getChannel(Channel.META_CONNECT).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_CONNECT]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        // Reconnect with a fresh client after a failed connect.
                        client.disconnect();
                        makeConnect();
                        String error = (String) message.get("error");
                        if (error != null) {
                            //log.error("Error during CONNECT: " + error);
                        }
                    }
                }
            });
        client.getChannel(Channel.META_SUBSCRIBE).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_SUBSCRIBE]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        String error = (String) message.get("error");
                        if (error != null) {
                            makeConnect();
                            log.error("Error during SUBSCRIBE: " + error);
                        }
                    }
                }
            });
        client.handshake();
        log.info("Waiting for handshake");
        boolean handshaken = client.waitFor(waitTime, BayeuxClient.State.CONNECTED);
        if (!handshaken) {
            log.error("Failed to handshake: " + client);
        }
        log.info("Subscribing for channel: " + channel);
        client.getChannel(channel).subscribe(new MessageListener() {
            public void onMessage(ClientSessionChannel channel, Message message) {
                injectSalesforceMessage(message);
            }
        });
        log.info("Waiting for streamed data from your organization ...");
    } catch (Exception e) {
        handleException("Error while setup the salesforce connection.", e);
    }
}