Genexus - Mark unrecognized procedure parameters as ignorable in webservices - web-services

I have procedures that are exposed as web services (REST).
I need them to be able to parse the request body while ignoring unrecognized fields (fields that are not specified in the "rules"). Right now, when a procedure tries to parse something that is not defined among its parameters, it throws an error.
Example:
Some procedure has the following definition:
parm(in:&parm1, in:&parm2, out:&someResponse);
Then we change to:
parm(in:&parm1, in:&parm2, in:&parm3, out:&someResponse);
The web service has been updated on some distributions, but others are still on the old version with two in parameters.
The service that consumes these web services on the different app distributions is sending the body that matches the second (latest) definition:
{
"parm1" : "somevalue",
"parm2" : "somevalue",
"parm3" : "somevalue"
}
Unfortunately we don't have control over the third party that is consuming our web services, so in that case it would be a lot easier if unused parameters could be ignored...
USING GX 16 U11 - Java Generator

Unfortunately there is no way in GeneXus 16 to "catch" the request and do something prior to the object logic. In GeneXus 17 there is the new API object, where you can transform the parameters.
But not everything is lost. Since you're generating in Java, there is an "external" way to do it with servlet Filters. I used them to log the client requests for debugging purposes.
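For illustration, here is a minimal sketch of such a Filter in plain servlet Java; the class name, the logging target and the web.xml mapping are my own assumptions, not GeneXus specifics:

import java.io.BufferedReader;
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class RequestLoggingFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = httpReq.getReader()) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        // Log method, URI and raw body for debugging. Note: the request body can
        // only be read once, so if the generated servlet also needs it you must
        // pass on an HttpServletRequestWrapper that re-supplies (or rewrites) it.
        System.out.println(httpReq.getMethod() + " " + httpReq.getRequestURI() + " -> " + body);
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
    }
}

The Filter is registered in the application's web.xml with a filter-mapping covering the URL pattern of the generated REST servlets; the same wrapping technique could be used to drop the unknown fields before the GeneXus object parses the body.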
If you don't want to mess with the code, there are also API gateways you could put in front of your API services to redirect the requests to the right service. Bear in mind that I'm not a specialist in this topic; maybe a post on Server Fault would help.

Related

Camel route POSTs to service that takes 20+ minutes to respond

I have an Apache Camel (version 2.15.3) route that is configured as follows (using a mix of XML and Java DSL):
Read a file from one of several folders on an FTP site.
Set a header to indicate which folder it was read from.
Do some processing and auditing.
Synchronously POST to an external REST service (jax-rs 1.1, Glassfish, Java EE 6).
The REST service takes a long time to do its job, 20+ minutes.
Receive the reply.
Do some more processing and auditing.
Write the response to one of several folders on an FTP site.
Use the header set at the start to know which folder to write to.
This is all configured in a single path of chained routes.
The problem is that the connection to the external REST service will timeout while the service is still processing. The infrastructure is a bit complex (edge servers, load balancers, Glassfish), and regardless I don't think increasing the timeout is the right solution.
How can I implement this route such that I avoid timeouts while still meeting all my requirements to (1) write the response to the appropriate FTP folder, (2) audit the transaction, and (3) meet other transaction/context-specific requirements?
I'm relatively new to Camel and REST, so maybe this is easy, but I don't know what Camel and REST tools and techniques to use.
(Questions and suggestions for improvement are welcome.)
Isn't it possible to break the two main steps apart and have two asynchronous operations?
I would do as follows.
Step 1.
Read a file from one of several folders on an FTP site.
Set a header to indicate which folder it was read from.
Save the header and file name and other relevant information in a cache. There is a Camel component called camel-cache that is relatively easy to set up, and you can store key-value pairs or other objects in it.
Do some processing and auditing. Asynchronously POST to an external REST service (jax-rs 1.1, Glassfish, Java EE 6). Note that we are posting asynchronously here.
Step 2.
Receive the reply.
Look up the reply identifier (i.e. the filename or some other identifier) in the cache to match the reply, then fetch the header.
Do some more processing and auditing.
Write the response to one of several folders on an FTP site.
This way you don't need to wait, and processing can take 20 minutes or longer. You just set your cache values not to expire for, say, 24 hours.
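To make the idea concrete, here is a rough Java DSL sketch of the two decoupled routes; all endpoint URIs and header names are placeholders, and a plain map stands in for camel-cache:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.camel.builder.RouteBuilder;

public class AsyncScanRoutes extends RouteBuilder {

    // stands in for camel-cache: file name -> folder the file was read from
    private final Map<String, String> correlation = new ConcurrentHashMap<>();

    @Override
    public void configure() {
        // Step 1: read the file, remember where it came from, hand the POST off and stop
        from("ftp://user@ftphost/inbox?recursive=true")
            .process(e -> correlation.put(
                    e.getIn().getHeader("CamelFileName", String.class),
                    e.getIn().getHeader("CamelFileParent", String.class)))
            .to("seda:postToRest?waitForTaskToComplete=Never");   // do not block this route

        from("seda:postToRest")
            .to("http://resthost/longRunningService");            // fire the long-running POST

        // Step 2: a separate route receives the reply (an assumed callback endpoint),
        // looks the original folder up again and writes the response out
        from("jetty:http://0.0.0.0:8081/replyCallback")
            .process(e -> {
                String fileName = e.getIn().getHeader("fileName", String.class);
                e.getIn().setHeader("sourceFolder", correlation.remove(fileName));
            })
            .recipientList(simple("ftp://user@ftphost/outbox/${header.sourceFolder}"));
    }
}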
This is a typical asynchronous use case. Can the REST service give you a token id or some other unique id immediately after you hit it?
That way you could have a batch job or another Camel route pick up this id from a database/cache and hit the REST service again after 20 minutes.
This is the ideal solution I can think of, if the REST service can provide this.
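If the service can return such a token, a rough sketch of the submit-and-poll routes could look like this; all URIs, header names and the in-memory token store are invented for illustration:

import java.util.ArrayList;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class TokenPollingRoutes extends RouteBuilder {

    private final Map<String, String> pending = new ConcurrentHashMap<>(); // token -> source folder

    @Override
    public void configure() {
        // Submit the job; the service is assumed to answer immediately with a token
        from("direct:submitScan")
            .to("http://resthost/longRunningService")
            .process(e -> pending.put(e.getIn().getBody(String.class),
                                      e.getIn().getHeader("sourceFolder", String.class)));

        // Every 20 minutes, ask for the result of each outstanding token
        from("timer:pollResults?period=1200000")
            .process(e -> e.getIn().setBody(new ArrayList<>(pending.keySet())))
            .split(body())
                .setHeader("token", body())
                .setHeader(Exchange.HTTP_URI, simple("http://resthost/result/${header.token}"))
                .to("http://resthost/result")   // URI is overridden per message by the header above
                // ... write the response to FTP and remove the token once the result is ready
            .end();
    }
}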
You are right, waiting for 20 minutes on a synchronous call is a crazy idea. Also, what is the estimated size of the file/payload you are planning to post to the REST service?

how much has web-service been used in mule (several times)

I have a web service (with a WSDL) in Mule that is used by other people.
I want to get some information about the users that call my web service, for example the IP and timestamp of each API invocation.
Also, I want to know how many times the web service has been called in Mule.
I don't think there is such statistical information. However, you could add a logger processor to the flow (assuming it's a flow), logging something like "Web Service XXX was called." The logged message would also contain the timestamp, thanks to the logger formatter.
As to the IP that called the service, Mule places the calling address in the message Inbound property remoteAddress. So, you could just add this line to the flow:
<logger message="Incoming message. Caller Address: #[message.inboundProperties['remoteAddress']]"/>
This would log each access and its calling address, which could be used for statistical purposes by an analysis tool.
This sounds like a good use case for either:
A custom interceptor, that takes care of storing usage statistics,
A wire-tap that sends to a flow in charge of storing usage statistics.

Apache camel to aggregate multiple REST service responses

I'm new to Camel and wondering how I can implement the use case below using Camel.
We have a REST web service, and let's say it has two service operations: callA and callB.
Now we have an ESB layer in front that intercepts the client requests before they hit the actual web service URLs.
I'm trying to do something like this:
Expose a URL in the ESB that the client will actually call. In the ESB we are using Camel's Jetty component, which just proxies this service call. So let's say this URL is /my-service/scan/.
Now, on receiving this request at the ESB, I want to call these two REST endpoints (callA and callB) -> get their responses (resA and resB) -> aggregate them into a single response object resScan -> return it to the client.
All I have right now is:
<route id="MyServiceScanRoute">
<from uri="jetty:http://{host}.{port}./my-service/scan/?matchOnUriPrefix=true&bridgeEndpoint=true"/>
<!-- Set service specific headers, monitoring etc. -->
<!-- Call performScan -->
<to uri="direct:performScan"/>
</route>
<route id="SubRoute_performScan">
<from uri="direct:performScan"/>
<!-- HOW DO I??
Make callA, callB service calls.
Get their responses resA, resB.
Aggregate these responses to resScan
-->
</route>
I think that you unnecessarily complicate the solution a little bit. :) In my humble opinion the best way to call two independent remote web services and concatenate the results is to:
call services in parallel using multicast
aggregate the results using the GroupedExchangeAggregationStrategy
The routing for the solution above may look like:
from("direct:serviceFacade")
.multicast(new GroupedExchangeAggregationStrategy()).parallelProcessing()
.enrich("http://google.com?q=Foo").enrich("http://google.com?q=Bar")
.end();
The exchange returned by direct:serviceFacade will contain the property Exchange.GROUPED_EXCHANGE set to a list of the results of the calls to your services (Google Search in my example).
And this is how you could wire direct:serviceFacade to a Jetty endpoint:
from("jetty:http://0.0.0.0:8080/myapp/myComplexService").enrich("direct:serviceFacade").setBody(property(Exchange.GROUPED_EXCHANGE));
Now all HTTP requests to the service URL you expose on the ESB via the Jetty component will generate responses concatenated from the two calls to the sub-services.
Further considerations regarding the dynamic part of messages and endpoints
In many cases using a static URL in the endpoints is insufficient to achieve what you need. You may also need to prepare the payload before passing it to each web service.
Generally speaking, the type of routing used to achieve dynamic endpoints or payload parameters is highly dependent on the component you use to consume web services (HTTP, CXFRS, Restlet, RSS, etc.). Each component varies in the degree to which, and the way in which, it can be configured dynamically.
If your endpoints/payloads need to be built dynamically, you could also consider the following options:
Preprocess the copy of the exchange passed to each endpoint using the onPrepareRef option of the Multicast. You can use it to refer to a custom processor that will modify the payload before it is passed to the Multicast's endpoints. This can be a good way to combine onPrepareRef with the Exchange.HTTP_URI header of the HTTP component.
Use the Recipient List (which also offers parallelProcessing, as the Multicast does) to dynamically create the REST endpoint URLs (see the sketch after this list).
Use the Splitter pattern (with parallelProcessing enabled) to split the request into smaller messages dedicated to each service. Once again this option works well with the Exchange.HTTP_URI header of the HTTP component, but only if both sub-services can be defined using the same endpoint type.
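As a rough illustration of the Recipient List option, where the target URLs are built per request from a header; the hosts, the scanId header and the grouped aggregation strategy are placeholders, not taken from the question:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy;

public class ScanRecipientListRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:performScan")
            // the comma-separated endpoint list is evaluated for every exchange
            .recipientList(simple("http://hostA/callA?id=${header.scanId},http://hostB/callB?id=${header.scanId}"))
                .parallelProcessing()
                .aggregationStrategy(new GroupedExchangeAggregationStrategy())
            .end();
    }
}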
As you can see, Camel is pretty flexible and lets you achieve your goal in many ways. Consider the context of your problem and choose the solution that fits you best.
If you show me more concrete examples of the REST URLs you want to call on each request to the aggregation service, I could advise which solution I would choose and how to implement it. It is particularly important to know which part of the request is dynamic. I also need to know which service consumer you want to use (it will depend on the type of data you will receive from the services).
This looks like a good example of where the Content Enricher pattern should be used (described in the Camel documentation):
<from uri="direct:performScan"/>
<enrich uri="ServiceA_Uri_Here" strategyRef="aggregateRequestAndA"/>
<enrich uri="ServiceA_Uri_Here" strategyRef="aggregateAandB"/>
</route>
The aggregation strategies have to be written in Java (or perhaps some scripting language, Scala/Groovy? - but I have not tried that).
The aggregation strategy just needs to be a bean that implements org.apache.camel.processor.aggregate.AggregationStrategy which in turn requires you to implement one method:
Exchange aggregate(Exchange oldExchange, Exchange newExchange);
So, now it's up to you to merge the request with the response from the enrich service call. You have to do it twice since you have both callA and callB. There are two predefined aggregation strategies that you might or might not find useful: UseLatestAggregationStrategy and UseOriginalAggregationStrategy. The names are quite self-explanatory.
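A minimal sketch of such a strategy, assuming the responses are plain strings; the class name is mine, and "aggregateAandB" in the route above would refer to a bean of this type:

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ConcatenatingAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            // first result, nothing to merge with yet
            return newExchange;
        }
        String accumulated = oldExchange.getIn().getBody(String.class);
        String latest = newExchange.getIn().getBody(String.class);
        // build resScan however your clients expect it; plain concatenation here
        oldExchange.getIn().setBody(accumulated + latest);
        return oldExchange;
    }
}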
Good luck

Timestamp of server from a web service call

Is there a way that I can retrieve the timestamp of a web service call? I'm trying to get the time of the server hosting the web service.
The easiest thing to do is to just log them in the server implementation of your service contract; you can use PostSharp to write some attributes to take care of this aspect.
For instance, you can write a Trace attribute which simply logs a debug message when a method is invoked. Here's one I wrote a while back which tracks how long a method takes and logs a warning message if it takes longer than a set threshold:
http://theburningmonk.com/2010/03/aop-method-execution-time-watcher-with-postsharp/
I came across a 'trace' attribute example before; if you want, I can look for it for you.

API Design: How should distinct classes of errors be handled from an asynchronous XMLHTTP call?

I have a legacy VB6 application that needs to make asynchronous calls to a web service. The web service provides a search method that allows end-users to query a central database and view the results from within the application. I'm using MSXML2.XMLHTTP to make the requests, and I have written a SearchWebService class that encapsulates the web service call and the code to handle the response asynchronously.
Currently, the SearchWebService raises one of two events to the caller: SearchCompleted and SearchFailed. If the call completes successfully, a SearchCompleted event is raised containing the search results as an event parameter. A SearchFailed event is raised when any type of failure is detected, which can be anything from an improperly-formatted URL (possible because the URL is user-configurable), to low-level network errors such as "Host not found", to HTTP errors such as internal server errors. It returns an error message string to the end-user (extracted from the web service response body if present, from the HTTP status code text if the response has no body, or translated from the network error code if a network error occurs).
Because of various security requirements, the calling application does not access the web service directly, but instead accesses it through a proxy web server running at the customer site, which in turn accesses the actual web service via a VPN. However, the SearchWebService doesn't know that the calling application is accessing the web service through a proxy: it's just given a URL and told to make the request. The existence of the proxy is an application-level requirement.
The problem is that, from an end-user perspective, it's important that the calling application be able to distinguish low-level network errors from HTTP errors returned by the web service, and to distinguish proxy errors from remote web server errors. For example, the application needs to know if a request failed because the proxy server is down, or because the remote web service that the proxy is accessing is down. An application-specific message needs to be presented to the end-user in each case, such as "Search web service proxy server appears to be down. The proxy server may need to be restarted" versus "The proxy is currently running but the remote web server appears to be unavailable. Please contact (name of person in charge of the remote web server)." I could handle this directly in the SearchWebService class, but it seems wrong to generate these application-specific error messages from such a generic class (and the class might be used in environments that don't require a proxy, where the error messages would no longer make sense).
This distinction is important for troubleshooting: a proxy server problem can usually be resolved by the customer, but a remote web server error has to be handled by a third party.
I was thinking one way to handle this would be to have the SearchWebService class detect different types of errors and raise different events in each case. For example, instead of a single SearchFailed event, I could have a NetworkError event for low-level network errors (which would indicate a problem accessing the proxy server), a ConfigurationError event for invalid properties on the SearchWebService class (such as passing an improperly-formatted URL), and a ServiceError for errors that occur on the remote web server (implying that the proxy is working properly but the remote server returned an error).
Now that I think about it, there is also an additional error scenario: it could be possible that the proxy server is running properly, but the remote web server is down, or the proxy server has been misconfigured.
Is the approach of using multiple error events to classify different classes of error a reasonable solution to this problem? For the last scenario (the proxy is running but the remote server cannot be reached), I'm guessing I may have to set up the proxy to return a specific HTTP error code so that client can detect this situation (i.e. something more specific than a 500 response).
Originally I kept the single SearchFailed event and simply added an additional errorCode parameter to the event, but that got messy quickly, especially in cases where there wasn't a logical error code to use (such as when VB6 raises a "real" error, e.g. if the XMLHTTP class isn't registered).
I think that some ideas I've used with Java exceptions may apply here.
Having a large number of different Exceptions gets pretty messy, yet we need to give enough detail to the user so we don't want to lose information.
Hence I have a small number of specific Exceptions, which I guess would correspond to your Events:
InvalidRequestEvent: Used when the user specifies bad information
TransientErrorEvent: used when there are infrastructure issues and a retry might work.
I tend to work in environments where we have clusters of servers, so if a user request hits a dying server then if he resubmits he'll probably get a good one; hence from his perspective a simple retry often works. However, sometimes the error is with a service such as the network or database, in which case the user needs diagnostic information to report to the helpdesk. Hence we need to decide on the extra information to put into the exception. This is (if I understand you correctly) your question.
In the case of InvalidRequestException we would be giving some information about the problems with the input. It could be along the lines of "Mismatched parentheses" or "Unknown column CUTSOMER in table ORDER". In the case of TransientErrorException it could be "Proxy server is down".
Now, depending upon your exact requirements, you may not actually choose to put that text in the Exception, but rather an error number which the presentation layer converts to a locale-specific string (English, French ...).
So either Exception might contain something like this (sorry for the Java syntax, but I hope the idea is clear):
class BaseException extends Exception {
    String errorText;        // the error text itself
    // OR, if you want to allow for internationalization:
    int errorCode;           // my application-specific code, corresponds to text held by the UI
    String[] params;         // specific parameters to be substituted in the error text
                             // (CUTSOMER and ORDER in my example above)
    int systemErrorCode;     // if you have an underlying error code it goes here
    String systemErrorText;  // any further diagnostics you might need to give to
                             // the user so that they can report the problem to the
                             // help desk
    // OR instead of the text (this is something I've seen done):
    int systemErrorTag;      // a unique id for this particular error occurrence.
                             // The server systems will label their messages in the
                             // server logs with it. Users just tell the help desk this
                             // number; they don't need to read detailed server error text.
}