I'm trying to confirm a few things about the WSO2 API manager.
Can it support pass-through of HTTP, including back-end HTTP server responses of type 206 Partial Content (range requests) or chunked responses? I believe the Pass-Through transport can do this, but I'm not 100% sure.
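To illustrate the kind of response I'm asking about, here is a minimal sketch of a ranged request sent through the gateway; the gateway.example.com endpoint is just a placeholder, and the code uses Java 11's HttpClient:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RangeRequestCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint exposed through the gateway; replace with your API context.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://gateway.example.com/media/1.0/video.mp4"))
                // Ask the back end for the first 1024 bytes only.
                .header("Range", "bytes=0-1023")
                .GET()
                .build();

        HttpResponse<byte[]> response =
                client.send(request, HttpResponse.BodyHandlers.ofByteArray());

        // A pass-through gateway should relay the back end's 206 status,
        // Content-Range header, and partial body unchanged.
        System.out.println("Status: " + response.statusCode());   // expect 206
        System.out.println("Content-Range: "
                + response.headers().firstValue("Content-Range").orElse("-"));
        System.out.println("Bytes received: " + response.body().length);
    }
}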
Can session state be maintained in the above scenario?
Normally, TCP-level session details are handled by the implementation by default. However, we would like to know what exactly you mean by maintaining session state here. Could you please elaborate on your exact requirement, and is there a specific scenario you are trying out that involves maintaining session state?
I was asked this question in a technical interview for an integration intern role.
The interviewer was digging deep into my understanding of SOAP web services.
Question: Consider that you are exposing a web service through SOAP to a client.
The URL through which you are providing the service is up and running when you check it.
But the client has a problem: he is not able to access your web service.
How will you go on troubleshooting this issue?
My response:
I would first check whether the URL the client is using to access the service is correct.
I would check the WSDL file (port, bindings) and verify whether, upon sending a SOAP request to the URL locally through SoapUI, I receive the SOAP response.
If I get an error, I would troubleshoot based on the kind of error I get: page not found, null exception, etc.
I felt he was still expecting some other point. He hinted by asking in what registry you would check all the web services that have been hosted (I guess this was more of a production support question :P).
I said I might look into a UDDI registry, but I was not sure about this.
Please let me know your inputs on what could possibly be a right approach.
Apache jUDDI PMC here. Yes, UDDI could be used to verify that the client is pointed at the right location, assuming the client knows where the UDDI server is, the service is registered on it, the client knows what to query for, and a UDDI query is part of that client's normal workflow. That's a lot of assumptions, but it is certainly feasible.
Most of the time, the endpoint is in a config file somewhere or some idiot hard-coded it.
That said, this is my go-to list for checking SOAP service connectivity (from the client's perspective); a rough code sketch of the first few checks follows the list:
DNS resolution of the hostname in the URL
Ping the remote host
HTTP GET to the URL of the SOAP service + ?wsdl (this usually works). This is also a good time to verify SSL connectivity.
You can also parse the WSDL doc, assuming one is returned, to identify the endpoint URL.
Finally, if all that works, execute the service. HTTP 200 is generally a positive sign.
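Here is that sketch, a best-effort illustration only, assuming a placeholder endpoint URL and Java 11's HttpClient:

import java.net.InetAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SoapConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical service endpoint; substitute whatever URL the client is actually using.
        String endpoint = "https://services.example.com/orders/OrderService";
        String host = URI.create(endpoint).getHost();

        // 1. DNS resolution of the hostname in the URL.
        InetAddress address = InetAddress.getByName(host);
        System.out.println("Resolved " + host + " -> " + address.getHostAddress());

        // 2. Reachability probe (best effort: uses ICMP when privileged,
        //    otherwise falls back to a TCP echo attempt).
        System.out.println("Reachable: " + address.isReachable(3000));

        // 3. HTTP GET of endpoint + "?wsdl" -- this also exercises the TLS handshake.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest wsdlRequest = HttpRequest.newBuilder()
                .uri(URI.create(endpoint + "?wsdl"))
                .GET()
                .build();
        HttpResponse<String> wsdlResponse =
                client.send(wsdlRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println("GET ?wsdl -> HTTP " + wsdlResponse.statusCode());

        // 4. Crude sanity check that a WSDL came back; a real check would parse it
        //    and read the soap:address location to confirm the endpoint URL.
        if (wsdlResponse.body().contains("definitions")) {
            System.out.println("Looks like a WSDL document was returned.");
        }
    }
}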
Edit:
Another approach is to implement a very simple API (WSDL method) on every SOAP service that simply returns a true/false answering the question "Am I open for business?". This method would provide a standardized way of identifying whether a service is available by testing its external dependencies (databases and whatnot).
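Something along these lines (a sketch only: the service name and the dependency probe are made up, and the JAX-WS annotations are just one way to expose such a method):

import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical "am I open for business?" operation added to a SOAP service.
// It should return true only when the service's external dependencies
// (database connections and whatnot) are actually reachable.
@WebService
public class OrderService {

    @WebMethod
    public boolean isOpenForBusiness() {
        try {
            // Placeholder dependency probe; a real service would ping its
            // database, message broker, downstream services, etc.
            return checkDatabase();
        } catch (Exception e) {
            return false;
        }
    }

    private boolean checkDatabase() {
        // e.g. DriverManager.getConnection(...).isValid(2)
        return true;
    }
}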
Suppose that I have Apache/lighttpd or whatever to receive HTTP requests. Now I want the web servers to act as a proxy for my web services running on internal servers written in Java/Clojure/Erlang.
What I want is to separate the layer that handles client connections from the server that handles application logic. These two should be separate and language independent. Is JSON or XML the format for communicating? If so, how do I perform it from the web servers?
Note: Maybe I missed the point of your question in this response. Please do let me know if that is the case.
I don't think you should consider this "forwarding" of the original request.
If the web tier that receives the original request makes a call to one or more underlying services (through HTTP or otherwise), that call is part of the "processing" of the original request.
So, there is nothing different here than what you are already familiar with.
i.e. you make an HTTP request in the place where you would otherwise make a SOAP/XML request, a DB call, or post a message.
When you say or think in terms of "forwarding", it is misleading.
Also, the data exchanged between your controller and services is solely based on your convenience.
It could be XML, JSON, or regular POST parameters sent over the HTTP transport.
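For example, here is a rough sketch of the web tier calling an internal service with a JSON payload; the internal-service.local address and the payload are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BackendCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // JSON payload built from whatever the web tier parsed out of the
        // original client request.
        String json = "{\"userId\": 42, \"action\": \"getProfile\"}";

        // Hypothetical internal service address; in practice this comes from config.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://internal-service.local:8080/api/profile"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // The web tier waits for the internal service's response, then uses it
        // to render the reply to the original client.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}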
I am just getting started with SOAP web services and stumbled across WS-Addressing.
I have read the Wikipedia page, but I'm having a hard time understanding just what the point of WS-Addressing is.
According to Wikipedia and various sources on the web, WS-Addressing allows you to put "addressing information" or "routing information" into the header of a SOAP request.
Why is this useful? If I send a request via HTTP (or even via SMTP or UDP), then the address I send to is the address of the server which will process my request, and the server can simply reply by the same channel. So why is addressing/routing information needed?
I'd be particularly interested in some real-world (more or less) example where WS-Addressing is helpful.
I've found WS-Addressing particularly useful in situations where the SOAP response can't be served immediately. Either the resources to form the response are not available right away or the result itself takes a long time to be generated.
That can happen when your business process involves "a human touch", for example (processes like the ones WS-HumanTask is targeting). You can stick web services in front of your business, but sometimes business takes time. It might be a subscription that must be manually verified, something to be approved, whatever, but it takes days to do. Are you going to keep the connection open all that time? Are you going to do nothing else than wait for the response? No! That is inefficient.
What you need is a notification process. The client makes a request but does not wait for the response. It instead instructs the server where to send the response by use of a "reply to" address. Once the response is available, the server connects to that address and sends the response.
And voila... asynchronous interactions between web services, decoupling the lifetime of the communication process from the lifetime of the HTTP connection. Very useful...
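To make that concrete, here is a minimal sketch of the relevant headers built by hand with SAAJ; the MessageID and callback address are made up, and in practice a stack such as JAX-WS with WS-Addressing enabled would add these headers for you:

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

public class WsAddressingHeaders {
    private static final String WSA_NS = "http://www.w3.org/2005/08/addressing";

    public static void main(String[] args) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        SOAPHeader header = message.getSOAPHeader();

        // Correlates the eventual (possibly much later) response with this request.
        SOAPHeaderElement messageId =
                header.addHeaderElement(new QName(WSA_NS, "MessageID", "wsa"));
        messageId.addTextNode("urn:uuid:6b29fc40-ca47-1067-b31d-00dd010662da");

        // Tells the server where to deliver the response once it is ready.
        // The address does not have to be an HTTP endpoint; it could be a
        // mailto: URI, a queue, or any other channel the server supports.
        SOAPHeaderElement replyTo =
                header.addHeaderElement(new QName(WSA_NS, "ReplyTo", "wsa"));
        replyTo.addChildElement("Address", "wsa")
               .addTextNode("http://client.example.com/callbacks/subscription");

        message.saveChanges();
        message.writeTo(System.out);  // prints the envelope with the wsa headers
    }
}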
But wait... HTTP connection? Why should I care about that? What if I want the response sent back on another type of protocol? (which SOAP kindly provides as it is not tied to any protocol).
With the normal request/response flow, the response comes on the same channel as the request 'cause it's a connection, you know... So for example you have an HTTP connection... that means HTTP in and HTTP out.
But with WS-Addressing you are not tied to that. You can demand the response on another type of channel. The request comes in over HTTP, for example, but you can instruct the server to send the response back over SMTP.
In this way WS-Addressing defines standard ways to route a message over multiple transports. As the wiki page says:
instead of relying on network-level transport to convey routing information, a message utilizing WS-Addressing may contain its own dispatch metadata in a standardized SOAP header.
and as for your observation:
and the server can simply reply by the same channel
... what works for some, might not work for others, and for others we have WS-Addressing :D.
Why do we say that web services are stateless?
They don't persist any state between requests from the client, i.e. the service doesn't know, nor care, that a subsequent request came from a client that has or hasn't made a previous request. Basically, it's a 'give me this piece of info and forget about me' model, which puts the onus on the client to maintain any state.
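As a small illustration of the client carrying its own context on every call (the URL and token are hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatelessClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Everything the service needs -- who is asking and what they want --
        // travels with each request; the service keeps nothing between calls.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/accounts/42/balance"))
                .header("Authorization", "Bearer token-kept-by-the-client")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());

        // A second, identical request is handled the same way: the service
        // neither knows nor cares that this client called a moment ago.
    }
}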
Because web services are based on HTTP, which is a stateless protocol.
Quoting Wikipedia:
A stateless server is a server that treats each request as an independent transaction that is unrelated to any previous request.
i.e. each request is independent from the previous one: even if we use some "tricks", like cookies for instance, to preserve some state between requests, this is not something defined by the protocol.
Because HTTP is stateless. After a client request is fulfilled by the server, no information is stored for use in future transactions.
The concept of a web service is to model an RPC (Remote Procedure Call), a.k.a. a function. Thus you should not need to use a session. In addition, the idea of being stateless comes from the need to scale web servers out into a server farm and thus enable higher capacity.
However, the choice of using state is dependent upon the technology and the developer. There is nothing to prevent you from creating an ASP.NET Web Service and setting "EnableSession=True" in the method definition.
This can be useful in some basic authentication scenarios, e.g. home-grown forms authentication, or to provide automatic correlation for short-lived "workflows". (However, I strongly urge you to consider more modern techniques, which will provide a higher level of security and performance.)
Requests are independent from one another.
I asked the question before but didn't phrase it quite right. I'm using RESTful principles to build a secure web-app that uses both transport authentication/encryption and message level security.
The message level security is essentially client-independent (still encrypted though), and hence this allows the individual messages to be cached, or stored on an intermediary server without significant risk of exposing private data.
Transport level security is needed to authenticate both end-points using TLS client-authentication. The situation is analogous to having a central mainframe where messages originate, and caches at each branch where the clients are located. I want the client->cache and cache->mainframe connections to be secured using TLS and individual X.509 certificates. Hence, the client will know it is talking to a proxy, and the mainframe will know it is talking to the proxy and not directly to the client.
Is there some way of doing this using HTTP standards, and not through some hack?
Essentially, I want the client to try and access the mainframe URI, to know it has to go through the proxy, and use TLS with the proxy (with the proxy having its own certificate), and then for the proxy to proceed to connect to the mainframe (with each having their own certificate) on behalf of the client. The proxy can cache the data the mainframe returns, and use that instead of having to connect to the mainframe each time.
Does anybody know proxy/caching software or a method that will allow this?
Would this get more responses on serverfault.com as it's essentially a server software/config question rather than a programming problem per se?
Basically, it sounds like you want a standard SSL reverse proxy with caching. You could do this without writing any code with Apache + mod_cache, configured as a reverse proxy.
The kicker is the message security. It'd only work if your requests are 100% cacheable based only on the path/query string, and if they are "unique by client" (e.g., a client ID in the query string or something). Something tells me that one or both of these are not true. This would be pretty trivial to build in ASP.NET, or by extending mod_cache (basically just standard response caching, bucketed by the client cert thumbprint).
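For what it's worth, the "bucketed by the client cert thumbprint" part boils down to something like this sketch; it is not tied to any particular proxy framework, and the key scheme is an assumption:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of response caching bucketed by the client cert thumbprint:
// the cache key combines the thumbprint of the client's X.509 certificate
// with the request path + query string, so one client is never served
// another client's cached response.
public class CertBucketedCache {

    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    private static String key(String clientCertThumbprint, String pathAndQuery) {
        return clientCertThumbprint + "|" + pathAndQuery;
    }

    // Returns the cached body for this client + request, or null on a miss.
    public byte[] lookup(String clientCertThumbprint, String pathAndQuery) {
        return cache.get(key(clientCertThumbprint, pathAndQuery));
    }

    // Stores the mainframe's response so later identical requests from the
    // same client can be answered without contacting the mainframe again.
    public void store(String clientCertThumbprint, String pathAndQuery, byte[] responseBody) {
        cache.put(key(clientCertThumbprint, pathAndQuery), responseBody);
    }
}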