Long-held TCP sessions in an ASMX client

I have an ASP.NET application which talks to a third-party SOAP web service. My application uses an ASMX client proxy (i.e. System.Web.Services.Protocols.SoapHttpClientProtocol). The third-party service uses WCF, although I don't expect that makes much difference.
I should note that we're using .NET 3.5 SP1.
We haven't customised the proxy or done anything unusual - we're just making standard web service requests and getting back the results. We have encapsulated the proxy reference within a using block so it will get disposed after the response is received.
We've been told that our application is behaving strangely in its use of TCP sessions. Instead of opening a new TCP session for each request from a new proxy instance (which is what I would have expected it to do), it's apparently keeping several connections alive and re-using them. This is causing some issues at the third-party end, as they expect us to be using multiple sessions.
Is this a known behaviour for the SoapHttpClientProtocol client proxy? If so, is there any way we can override it so that each request results in a new TCP session?
Thanks,
John

See "Ways to Customize your ASMX Client Proxy". You'll see how to set the KeepAlive property of the HttpWebRequest used to make your requests.

Related

Choosing the scenario of using Web Sockets in standard HTTP REST API

I would be happy to get advice from more experienced developers about adding WebSockets to my HTTP-based project.
Here is the situation. I have developed a REST API based service. Everything works well enough, but in some special cases my server needs a long time to serve a client request: anywhere from one minute to several hours (or even days). I currently use a not-so-good algorithm to address this:
Client sends HTTP request
Server replies about registering request
Client then keeps sending HTTP requests to get the necessary data (if a response does not contain the needed information, the client sends another request, and so on)
That is all in a nutshell.
This seems like a bad design, so I am trying to integrate WebSockets to add a duplex channel to this architecture. I hope that my API will then be able to push information about updated data as soon as possible, without the client having to make many requests.
But I am a bit unsure about which of two ways to use the WebSocket (WS).
Variant A.
The server only tells the client via WS that the data is ready, and the client then gets the data through the standard request-response HTTP method of the REST API.
Variant B.
The server sends all data to the client via WS without HTTP at all.
Which variant is more suitable? Or is there some other option?
I do not want to drop HTTP entirely; I just want to use WS for a particular kind of endpoint.
Variant A is more suitable and easier to implement. You can send a message to the client once the data is ready, and the client can then request the data over HTTP. The WebSocket acts like a simple chat/notification channel and will serve your purpose; a sketch of the pattern is shown below.
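For illustration only, here is roughly what Variant A looks like from the client side, sketched in C# with ClientWebSocket and HttpClient; the endpoint URLs and the "job id" message format are assumptions, not part of your API:

```csharp
using System;
using System.Net.Http;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class VariantAClient
{
    static async Task Main()
    {
        // Hypothetical endpoints: adjust to your own API.
        var notificationUri = new Uri("wss://example.com/notifications");
        var apiBase = "https://example.com/api/jobs/";

        using (var ws = new ClientWebSocket())
        using (var http = new HttpClient())
        {
            await ws.ConnectAsync(notificationUri, CancellationToken.None);

            // Wait for the server to "poke" the client with the id of the finished job.
            var buffer = new byte[4096];
            var received = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            var jobId = Encoding.UTF8.GetString(buffer, 0, received.Count);

            // Fetch the actual data through the normal request-response REST API.
            var data = await http.GetStringAsync(apiBase + jobId);
            Console.WriteLine(data);
        }
    }
}
```

The WebSocket stays a thin notification channel, so the existing REST endpoints, caching and authentication remain untouched.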

Load Balancer sticky sessions and very old webservices

With a hardware LoadBalancer, you can configure sticky sessions which will make sure the same session will always go to the same server.
But will this work with webservices also (rather than webservers)?
i.e. I have WebServices hosted behind a Load Balancer.
Will Webservice calls coming from different native clients (not browser clients) always go to the same webservice server?
These are very old-style web services - they use RPC/encoded SOAP - and the native client program uses Axis 1.4 for the client stubs.
Having web service calls from different native clients (not browser clients) always go to the same web service server should be possible.
To maintain session stickiness, most load balancers inject a server identifier into a cookie when responding to the client (note that cookies are not a browser feature but an HTTP feature defined by this specification), and this has to be supported by the HTTP client that Axis 1.4 uses underneath.
I suggest you analyze how your load balancer works; based on that, you may need to change your clients. If your load balancer uses the cookie-based approach, you may find this answer useful.
Hope this helps.
If you can keep your application stateless, do so - it is good for both performance and scalability.
Benefits of statelessness:
Scalability. You can have as many servers as you want without having to share user sessions. Each of them can process any request (e.g. load balancing via round robin).
Saves server resources. You do not need to allocate session memory on the server side (again, scalability).
No need to recover after a server restart.
Session stickiness can be tricky to get right. For example, if your web servers are running on multi-core machines, and you have several processes handling web traffic, you'll need a way to be sticky to both a specific machine and a single process on that machine. So make sure your system degrades well in cases where stickiness doesn't work correctly.
You can find a good discussion here: Sticky and NON-Sticky sessions
Sticky session pros and cons:
Pros and Cons of Sticky Session / Session Affinity load blancing strategy?
Now, to your questions:
Will Webservice calls coming from different native clients (not
browser clients) always go to the same webservice server?
Yes, with sticky sessions.
These are very old style Webservices - uses RPC/Encoding - the native
client program uses Axis 1.4 for the client stubs.
Session stickiness is configured on the load balancer/server side, and it can handle any old or new type of application.
will this work with webservices also (rather than webservers)?
No change is needed in the web service itself; stickiness is configuration you make at the server level.
It will work as long as your native client correctly manages the session, i.e. sends the correct HTTP header (the session cookie) with each request.
Generally, sticky sessions are managed by the load balancer, which modifies the session cookie to add the server identity.
HA-proxy example
There should be dedicated documentation for your load balancer.
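To make "correctly manages the session" concrete: the client has to store the cookie the load balancer sets and send it back on every subsequent call (with Axis 1.4 this is typically done by enabling session maintenance on the generated stub). The same idea in a stand-alone HTTP client looks roughly like the sketch below, written in C# only for illustration; the host name is hypothetical and the cookie mechanics are the same in any client stack:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class StickyClient
{
    static async Task Main()
    {
        // A shared cookie container makes the client return the load balancer's
        // stickiness cookie (e.g. a server identifier) on every later request.
        var cookies = new CookieContainer();
        var handler = new HttpClientHandler { CookieContainer = cookies, UseCookies = true };

        using (var http = new HttpClient(handler))
        {
            // Hypothetical endpoint behind the load balancer.
            var first = await http.GetStringAsync("https://lb.example.com/service/endpoint");

            // The cookie jar is reused, so this call carries the same cookie
            // and the load balancer routes it to the same backend server.
            var second = await http.GetStringAsync("https://lb.example.com/service/endpoint");

            Console.WriteLine(first.Length + " / " + second.Length);
        }
    }
}
```

If the client discards the cookie between calls, the load balancer has nothing to route on and stickiness silently stops working.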

How to make Qt Websocket and QNetworkRequest (HTTP) to use the same connection?

Is it possible with Qt to upgrade an HTTP connection that handles normal HTTP requests to a WebSocket on the same connection?
I'm thinking about something like this with Poco libraries, but all done in Qt similar to QtWebApp.
The simple answer is no, and that is mostly because of the specifics of the server side. Qt, being primarily a client-side development framework, simply follows the protocol exposed by the server (HTTP or WebSocket), and AFAIK it cannot perform the kind of transformation you want, going from HTTP to WebSocket, since they are two different protocols. Theoretically it could be done, as both protocols can use port 80, but that would imply new, custom server and client implementations.
We use both WebSocket and REST in our app. The WebSocket is for the server to trigger the client to do something: the client gets a "poke" from the server and then starts a normal JSON-over-HTTP exchange with the server.
Somewhat related link: https://softwareengineering.stackexchange.com/questions/276253/mixing-rest-and-websocket-in-the-same-api

How to handle SOAP service within persistent Perl web application with cookies?

Because of the demise of SOAP::WSDL, which generated real Perl modules for me, I have to look for something else to handle a SOAP service. The generated modules no longer work from Perl v5.18 onwards.
I have the following situation with my web application.
I have a PSGI compatible, Dancer2 driven, persistent web application.
The web application handles multiple concurrent customers.
The web application is between the customer and an external SOAP service.
The SOAP service uses customer sessions via cookies, which the web application has to manage internally per customer.
The web application holds a copy of the SOAP service's WSDL file.
I'm looking for a module that creates an interface out of the WSDL file and handles parameter/schema validation and communication with the SOAP service. I would like to call a method (SOAP call) with parameters (SOAP call parameters) and receive a clean data or object structure for the response.
The problem is that the web application needs to handle multiple concurrent customer cookie sessions. So I need a module which offers the possibility to override the cookie jar for that particular request and extract the cookies after the request without causing interference with other concurrent requests.
I found XML::Compile, which I can initialize as a singleton at web application start-up. But with this solution I ran into problems with concurrent customer requests interfering with each other, because the requests are not separated. Initializing XML::Compile for every request is not a solution either, because it would parse the WSDL and regenerate the handlers for every single request a customer sends to the web application.
Is there any solution/module that fits my needs, or am I missing something with XML::Compile that would make this possible?
Are you using Catalyst?
I've been happy using Catalyst::Controller::SOAP and its companion Catalyst::Model::SOAP to build SOAP/WSDL servers and consumers, and have been able to integrate Perl applications even with Microsoft's document/literal-wrapped style.
Even if you are not using Catalyst, you can probably learn from its code. It uses XML::Compile::WSDL11.

Secure data transfer over http with custom server

I am pretty new to the security side of applications. I have a C++ Windows service (server) that listens on a particular port for HTTP requests. The HTTP requests can be made via AJAX or a C# client. Due to a scope change, we now have to secure the communication between the clients and this custom server written in C++.
Therefore I am looking for options to secure this communication. Can someone help me with the possible approaches I can take to achieve this?
Thanks
Dpak
Given that you have an existing HTTP server (non-IIS) and you want to implement HTTPS (which is easy to screw up and hard to get right), you have a couple of options:
Rewrite your server as a COM object, and then put together an IIS webservice that calls your COM object to implement the webservice. With this done, you can then configure IIS to provide your webservice via HTTP and HTTPS.
Install a proxy server (Internet Security and Acceleration Server or Apache with mod_proxy) on the same host as your existing server and setup the proxy server to listen via HTTPS and then reverse proxy the requests to your service.
The second option requires little to no changes to your application; the first option is the better long-term architectural move.
Use HTTPS.
A good toolkit for securing your communication channel is OpenSSL.
That said, even with a toolkit, there are plenty of ways to make mistakes when implementing your security layer that can leave your data open to attack. You should consider using an existing HTTPS server and having it forward the requests to your server over the loopback interface.
It's reasonably easy to do this using either OpenSSL or Microsoft's SChannel SSPI interface.
How complex it is for you depends on how you've structured your server. If it's a traditional-style BSD sockets 'select' type server, then it should be fairly straightforward to take the examples from either OpenSSL or SChannel and get something working pretty quickly.
If you're using a more complex server design (async sockets, IOCP, etc.) then it's a bit more work, as the examples don't tend to show these things. I wrote an article for Windows Developer Magazine back in 2002, available here, which shows how to use OpenSSL with async sockets; that code can be adapted to work with overlapped I/O and IOCP-based servers if you need to.
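As a rough illustration of what the TLS layering looks like, here is a sketch in C# using SslStream (which sits on SChannel on Windows); the certificate file name and password are placeholders, and a C++ implementation with OpenSSL or SChannel follows the same pattern of wrapping the accepted socket in a TLS stream before the existing HTTP handling runs:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;

class TlsWrappedServer
{
    static void Main()
    {
        // Placeholder certificate; in practice use one issued for the host name
        // your clients connect to.
        var certificate = new X509Certificate2("server.pfx", "pfx-password");

        var listener = new TcpListener(IPAddress.Any, 8443);
        listener.Start();

        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var tls = new SslStream(client.GetStream()))
            {
                // The TLS handshake happens here.
                tls.AuthenticateAsServer(certificate);

                // From this point on, the application-level HTTP handling is unchanged,
                // except that it reads from and writes to the encrypted stream.
                using (var reader = new StreamReader(tls))
                using (var writer = new StreamWriter(tls) { AutoFlush = true })
                {
                    var requestLine = reader.ReadLine();
                    Console.WriteLine("Got: " + requestLine);
                    writer.Write("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n");
                }
            }
        }
    }
}
```

This is only meant to show where the TLS layer slots in; a production server still needs proper certificate management, protocol/cipher configuration and error handling, which is why the reverse-proxy suggestion above is often the safer route.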