As part of my Windows Phone app, I use Windows::Web::Http::HttpClient to post requests to the server. I tried:
void sendRequest(HttpRequestMessage^ httpReqMsg)
{
    HttpBaseProtocolFilter^ httpFilter = ref new HttpBaseProtocolFilter();
    httpFilter->CacheControl->WriteBehavior = HttpCacheWriteBehavior::NoCache;
    HttpClient^ httpClient = ref new HttpClient(httpFilter);
    try
    {
        // Post the request
        auto httpProgress = httpClient->SendRequestAsync(httpReqMsg);
        // Handle the HTTP progress and its response messages
        // ...
    }
    catch (Exception^ ex)
    {
        // ...
    }
} // httpFilter, httpClient are auto released
When httpFilter and httpClient fall out of scope, I expect the underlying sockets and memory resources to be released. During the call to HttpClient::SendRequestAsync, I see the SSL negotiation happening the first time. Further calls to the sendRequest function don't trigger a full handshake.
I am not allowed to load any DLLs to explicitly clear the SSL cache (SslEmptyCache). Is my assumption correct that a full handshake should happen on every call to the sendRequest function? If not, how can I achieve a full SSL handshake? Thanks.
I'm writing code that uses router identities to dynamically manage peers.
To do that, I build the messages with the target identity in the first frame, and then I send them through a ROUTER socket to the appropriate peer. If a peer with that identity doesn't exist, I create a new one.
In code it looks something like this:
Main.cpp
zmq::socket_t sendSocket(*_pZmqContext.get(), ZMQ_ROUTER);
// Force send() to throw an exception when the peer doesn't exist, so I can create a new one.
sendSocket.setsockopt(ZMQ_ROUTER_MANDATORY, 1);
sendSocket.bind("inproc://processors");
...
try
{
    ...
    isSuccess = message2send.send(sendSocket);
    ...
}
catch (zmq::error_t& ex)
{
    if (ex.num() == EHOSTUNREACH)
    {
        // A new peer is created (see OtherRouter.cpp)
        ...
    }
}
...
OtherRouter.cpp
// This is how reader sockets are created...
zmq::socket_t reader(*_pZmqContext, ZMQ_ROUTER);
reader.setsockopt(ZMQ_ROUTING_ID, (std::byte*)&newIdentityValueForSocket[0], sizeof(newIdentityValueForSocket));
reader.connect("inproc://processors");
assert(reader.connected());
...
This works fine, but I need something extra.
Some peers might be destroyed due to inactivity and recreated later, when activity resumes.
When this happens, the code doesn't work as expected. Even though the peer is created successfully, I keep getting the EHOSTUNREACH exception. It's as if the sockets can't communicate again.
So it seems the sender socket knows that the old peer has been disconnected, but it can't reach the new one.
Any suggestions on how to solve this?
Thanks!
I'm discovering the Armeria framework and I want to consume a REST service.
Using the Armeria WebClient:
WebClient webClient = WebClient.of("http://localhost:9090");
RequestHeaders getJson = RequestHeaders.of(HttpMethod.GET, "/some-service",
        HttpHeaderNames.CONTENT_TYPE, "application/json", "SomeHeader", "armeriaTest");
return webClient.execute(getJson).aggregate().thenApply(resp -> {
    if (HttpStatus.OK.equals(resp.status())) {
        return parseBody(resp.contentUtf8());
    } else if (HttpStatus.BAD_REQUEST.equals(resp.status())) {
        throw new IllegalStateException("not exists");
    }
    throw new RuntimeException("Error");
});
This code returns a CompletionStage that is resolved asynchronously; calling join() or get() right here causes a "java.lang.IllegalStateException: Blocking event loop, don't do this."
My question is: what if I want to use a third-party HTTP client library (like Apache HttpClient) instead of the WebClient?
Should the client call be wrapped in a Future too?
How should I manage the client requests to fit the framework's approach and avoid the "Blocking event loop" issue?
Thanks to all!
Yes. You should never perform any blocking operations while your code is running in an event loop thread. You can perform a blocking operation by submitting it to another thread pool dedicated to handling blocking operations.
If you are using Armeria on the server side, you can get one via ServiceRequestContext.blockingTaskExecutor():
Server server = Server
        .builder()
        .service("/", (ctx, req) -> {
            CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> {
                // Perform some blocking operations that return a string.
            }, ctx.blockingTaskExecutor());
            CompletableFuture<HttpResponse> f2 = f1.thenApply(result -> {
                // Transform the result into an HttpResponse.
                return HttpResponse.of("Result: %s", result);
            });
            return HttpResponse.from(f2);
        })
        .build();
If you are not using Armeria on the server side, you can use another Executor provided by your platform, or even create a new ThreadPoolExecutor dedicated to handling blocking operations.
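For example, here is a minimal sketch (not tied to Armeria or any particular server framework) of running a blocking Apache HttpClient 4.x call on a dedicated thread pool; the pool size and the helper names are just illustrative assumptions:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class BlockingClientExample {
    // Thread pool reserved for blocking I/O; never run this work on the event loop thread.
    private static final ExecutorService BLOCKING_EXECUTOR = Executors.newFixedThreadPool(8);

    static CompletableFuture<String> fetch(String url) {
        return CompletableFuture.supplyAsync(() -> {
            try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
                // Blocking call; it only blocks a thread of the dedicated pool.
                return EntityUtils.toString(httpClient.execute(new HttpGet(url)).getEntity());
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, BLOCKING_EXECUTOR);
    }
}
Once the blocking work runs on its own pool, you can compose the returned CompletableFuture with the rest of your asynchronous pipeline just like the WebClient result.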
How does the Boost.Beast async HTTP client work in C++11 when multiple simultaneous requests are made in a single-threaded asynchronous system?
USE CASE:
I want to send multiple simultaneous asynchronous requests, and I create a new HTTP client for each request. When the response to any request is received, I call a callback function that deletes the client 1 second after the response arrives, to avoid memory leaks. But the code appears to hang after some random number of simultaneous HTTP requests, even though I create a new client object for each request. Does Boost.Beast use some shared resource? The pause looks like the system is in an infinite deadlock. PS: I also tried commenting out the delete block, but the system behaves the same.
Below are the specifications for the boost and compiler version:
boost: stable 1.68.0
BOOST_BEAST_VERSION 181
clang -v
clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
Target: x86_64-pc-linux-gnu
Thread model: posix
void sendHttpRequest()
{
    HttpClient* client = new HttpClient();

    deleteClient = [this, client] {
        int timeout = 1;
        auto* clientDeleteTimer = new boost::asio::deadline_timer(*this->context);
        clientDeleteTimer->expires_from_now(boost::posix_time::seconds(timeout));
        clientDeleteTimer->async_wait([client, this, clientDeleteTimer](const boost::system::error_code& ec) {
            if (ec == boost::asio::error::operation_aborted) {
                std::cout << " Operation aborted\n" << std::flush;
            } else {
                delete client;
            }
            delete clientDeleteTimer;
        });
    };

    callback = [this] {
        std::cout << "Response received successfully\n" << std::flush;
        deleteClient();
    };

    errback = [this] {
        std::cout << "Response not received\n" << std::flush;
        deleteClient();
    };

    client->sendPostRequest(request, callback, errback);
}
The function above is a wrapper that is called for each request; internally it creates a new async HTTP client and deletes that client object 1 second after a response or error is received (basically, once the request has been processed).
See https://github.com/boostorg/beast/issues/1458, which describes the same problem. I believe it is still unresolved.
There are a C++ Qt client and server. The following code works fine and the connection between the client and the server is established:
QWebSocket webSocket; // defined somewhere
...
QUrl url;
url.setScheme("ws"); // SSL encryption disabled
url.setHost(serverName); // "127.0.0.1" (can be "www.abc.com" too)
url.setPort(portNumber); // 2000
webSocket.open(url); // connects with the server properly
PRINT(url.toString()); // output: "ws://127.0.0.1:2000"
When sending binary data, however, the function returns 0 instead of the number of bytes sent:
// though the message.size() is 80 bytes; the method returns 0
webSocket.sendBinaryMessage(QByteArray(message.data(), message.size()));
Note that the QWebSocketServer works as expected.
We also have a JavaScript client that connects and sends the binary message properly. The only addition in that client is the line below:
webSocketJS.binaryType = "arraybuffer"; // <--- JavaScript code
But no such option seems to exist in QWebSocket, or I may have missed it.
Question: How do I correctly send the binary data over the WebSocket connection?
For those interested, the server [pseudo] code is like below:
auto pWebSocket = WebServer.nextPendingConnection();
QObject::connect(pWebSocket, &QWebSocket::binaryMessageReceived,
[&] (const QByteArray& message) { DataRead(message, rManager); }); // This slot is not called as of now
It seems there is no mention of how the QWebSocket::connected() signal is handled.
Due to network delays and the initial handshake, the WebSocket server may take some time to establish the connection. Ideally, the binary/text message should be sent only after connected() has been emitted.
Before making the connection with webSocket.open(url), you should handle this signal:
... // same code
QObject::connect(&webSocket, &QWebSocket::connected,
    [&] ()
    {
        webSocket.sendBinaryMessage(QByteArray(message.data(), message.size()));
        // ... set some internal state suggesting the established connection
    });
webSocket.open(url);
The above is just pseudocode to show that the first sendBinaryMessage() should happen after the connected() signal. In real-world code you may want to set some state that tells the client the connection is established.
Similarly, as mentioned in the comments, you should also check for errors and disconnections.
We have a web application using Jetty 8.1, Dojo, and CometD that interacts between the browser and the web container using (1) a JSON/HTTP REST API for synchronous operations and (2) a CometD API to receive numerous events from the server.
What we are not entirely clear on is how to elegantly manage the authentication sessions of these two different APIs, especially since CometD in our case uses WebSocket instead of regular HTTP whenever possible. The application uses form-based authentication with a standard Jetty LDAP module. So from an HTTP perspective the container provides the browser with a standard jsessionid, which looks like this:
Cookie: jsessionid=758E2FAD7C199D722DA8B5E243E0E27D
Based on Simone Bordet's post here, it seems the recommended solution is to pass this token during the CometD handshake, which is what we are doing.
The problem we have is that there are two fundamentally different sessions: the HTTP session and the Bayeux CometD session. For reasons such as potential memory leaks and security issues, we want them to terminate in unison, i.e. to be "paired." If a user's HTTP session is terminated, we want the corresponding Bayeux session to terminate as well, and vice versa. Is there a recommended way of doing this?
The HTTP session and the CometD sessions have different lifecycles: for example, if there is a temporary connection failure, the CometD session will fail, and the server will ask the client to re-handshake, thus creating a different CometD session (representing the same user, but with a different CometD clientId). In the same case, the HttpSession will remain the same.
With this in mind, you need to maintain, at the application level, a mapping between a username, the corresponding HttpSession, and the corresponding ServerSession.
Let's call this mapping HttpCometDMapper.
Every time a new user logs in, you register their username (or another unique identifier of the user), the HttpSession, and the current ServerSession.
You will probably need a two-step process, where you first link the username with the HttpSession, and then the same username with the ServerSession.
If a CometD re-handshake is performed, you update the mapper with the new ServerSession.
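As a rough illustration only (none of these class or method names come from CometD; they are assumptions), such a mapper could look like this:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.servlet.http.HttpSession;

import org.cometd.bayeux.server.ServerSession;

// Hypothetical application-level registry pairing a user's HttpSession with
// the current CometD ServerSession. Thread-safe via ConcurrentHashMap.
public class HttpCometDMapper {
    private final ConcurrentMap<String, HttpSession> httpSessions = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, ServerSession> cometdSessions = new ConcurrentHashMap<>();

    // Step 1: called when the user logs in over HTTP.
    public void registerHttpSession(String username, HttpSession httpSession) {
        httpSessions.put(username, httpSession);
    }

    // Step 2: called on every (re-)handshake, replacing any stale ServerSession.
    public void registerServerSession(String username, ServerSession serverSession) {
        cometdSessions.put(username, serverSession);
    }

    public ServerSession serverSession(String username) {
        return cometdSessions.get(username);
    }

    public HttpSession httpSession(String username) {
        return httpSessions.get(username);
    }

    public void remove(String username) {
        httpSessions.remove(username);
        cometdSessions.remove(username);
    }
}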
You can link the two sessions by registering an HttpSessionListener, so that when the HttpSession is destroyed you retrieve the current CometD ServerSession from the mapper and call ServerSession.disconnect() on it.
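Continuing the sketch above (the listener name and the "username" session attribute are assumptions, not CometD API), that listener could look like:
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

import org.cometd.bayeux.server.ServerSession;

// Sketch: when the container destroys the HttpSession, disconnect the paired CometD
// session. How the username is stored in the session is up to the application.
public class PairedSessionListener implements HttpSessionListener {
    private final HttpCometDMapper mapper;

    public PairedSessionListener(HttpCometDMapper mapper) {
        this.mapper = mapper;
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent event) {
        String username = (String) event.getSession().getAttribute("username");
        if (username != null) {
            ServerSession serverSession = mapper.serverSession(username);
            if (serverSession != null) {
                serverSession.disconnect();
            }
            mapper.remove(username);
        }
    }

    @Override
    public void sessionCreated(HttpSessionEvent event) {
        // Nothing to do here; pairing happens at login time.
    }
}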
The other direction is a bit trickier, because CometD does not have a concept of inactivity timeout like HttpSession has; it must be implemented in the application with your own logic.
One part of it is to register a RemoveListener on the ServerSession, like this:
serverSession.addListener(new ServerSession.RemoveListener()
{
    public void removed(ServerSession session, boolean timeout)
    {
        if (!timeout)
        {
            // Explicitly disconnected, invalidate the HttpSession
            httpCometDMapper.invalidate(session);
        }
    }
});
This listener watches for explicit disconnects from the client (and the server - beware of reentrancy).
Slightly more difficult is implementing the same mechanism for non-explicit disconnects. In this case the timeout parameter will be true, but the removal could have happened because of a temporary network failure (as opposed to the client disappearing for good), and the same user may already have re-handshaken with a new ServerSession.
I think in this case an application timeout could solve the issue: when you see a ServerSession removed because of a timeout, you note that user and start an application timeout. If the same user re-handshakes, you cancel the application timeout; otherwise the user is really gone, the application timeout expires, and you invalidate the HttpSession too. A sketch follows below.
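Again as a sketch built on the hypothetical mapper above (all names are illustrative), the application timeout could be driven by a ScheduledExecutorService:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import javax.servlet.http.HttpSession;

// Sketch: when a ServerSession is removed due to timeout, schedule an expiration task;
// cancel it if the same user re-handshakes within the grace period.
public class ReconnectTracker {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final ConcurrentMap<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();
    private final HttpCometDMapper mapper;
    private final long graceSeconds;

    public ReconnectTracker(HttpCometDMapper mapper, long graceSeconds) {
        this.mapper = mapper;
        this.graceSeconds = graceSeconds;
    }

    // Called from the RemoveListener when timeout == true.
    public void onServerSessionTimeout(String username) {
        ScheduledFuture<?> task = scheduler.schedule(() -> {
            pending.remove(username);
            HttpSession httpSession = mapper.httpSession(username);
            if (httpSession != null) {
                httpSession.invalidate();
            }
            mapper.remove(username);
        }, graceSeconds, TimeUnit.SECONDS);
        pending.put(username, task);
    }

    // Called when the same user re-handshakes with a new ServerSession.
    public void onReHandshake(String username) {
        ScheduledFuture<?> task = pending.remove(username);
        if (task != null) {
            task.cancel(false);
        }
    }
}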
The above are just ideas and suggestions; the actual implementation depends heavily on application details (which is why it is not provided by CometD out of the box).
The key points are the mapper, the HttpSessionListener and the RemoveListener, and knowing the lifecycles of those components.
Once you manage that, you can write the right code that does the right thing for your application.
Finally, note that CometD has a transport-agnostic way of interacting with the HttpSession via the BayeuxContext instance, which you can obtain from BayeuxServer.getContext().
I suggest you look at that as well, to see whether it can simplify things, especially for retrieving tokens stored in the HttpSession.
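For example, a sketch of reading a token from the HttpSession during the handshake might look like the following; check the exact BayeuxContext methods available in your CometD version, and note that the "authToken" attribute name is an assumption:
import org.cometd.bayeux.server.BayeuxContext;
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ServerMessage;
import org.cometd.bayeux.server.ServerSession;
import org.cometd.server.DefaultSecurityPolicy;

// Sketch: during handshake, pull a token stored in the HttpSession via BayeuxContext.
public class TokenSecurityPolicy extends DefaultSecurityPolicy {
    @Override
    public boolean canHandshake(BayeuxServer server, ServerSession session, ServerMessage message) {
        BayeuxContext context = server.getContext();
        Object token = context == null ? null : context.getHttpSessionAttribute("authToken");
        return token != null;
    }
}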
Is there any problem with creating a new BayeuxClient after a temporary connection failure?
You can try the code below.
try {
    log.info("Running streaming client example....");
    makeConnect();
} catch (Exception e) {
    handleException("Error while setting up the Salesforce connection.", e);
}

private void makeConnect() {
    try {
        client = makeClient();
        client.getChannel(Channel.META_HANDSHAKE).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_HANDSHAKE]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        String error = (String) message.get("error");
                        if (error != null) {
                            log.error("Error during HANDSHAKE: " + error);
                        }
                        Exception exception = (Exception) message.get("exception");
                        if (exception != null) {
                            handleException("Exception during HANDSHAKE: ", exception);
                        }
                    }
                }
            });
        client.getChannel(Channel.META_CONNECT).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_CONNECT]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        client.disconnect();
                        makeConnect();
                        String error = (String) message.get("error");
                        if (error != null) {
                            //log.error("Error during CONNECT: " + error);
                        }
                    }
                }
            });
        client.getChannel(Channel.META_SUBSCRIBE).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    log.info("[CHANNEL:META_SUBSCRIBE]: " + message);
                    boolean success = message.isSuccessful();
                    if (!success) {
                        String error = (String) message.get("error");
                        if (error != null) {
                            makeConnect();
                            log.error("Error during SUBSCRIBE: " + error);
                        }
                    }
                }
            });
        client.handshake();
        log.info("Waiting for handshake");
        boolean handshaken = client.waitFor(waitTime, BayeuxClient.State.CONNECTED);
        if (!handshaken) {
            log.error("Failed to handshake: " + client);
        }
        log.info("Subscribing for channel: " + channel);
        client.getChannel(channel).subscribe(new ClientSessionChannel.MessageListener() {
            public void onMessage(ClientSessionChannel channel, Message message) {
                injectSalesforceMessage(message);
            }
        });
        log.info("Waiting for streamed data from your organization ...");
    } catch (Exception e) {
        handleException("Error while setting up the Salesforce connection.", e);
    }
}