Timestamp of server from a web service call

Is there a way that I can retrieve the timestamp of a web service call? I'm trying to get the time of the server hosting the web service.

The easiest thing to do is to log them in the server implementation of your service contract; you can use PostSharp to write attributes that take care of this aspect.
For instance, you can write a Trace attribute which simply logs a debug message when a method is invoked. Here's one I wrote a while back which tracks how long a method takes and logs a warning message if it takes longer than a set threshold:
http://theburningmonk.com/2010/03/aop-method-execution-time-watcher-with-postsharp/
I came across a 'trace' attribute example before; if you want, I can look for it for ya.
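If all you need is the hosting server's clock at call time, the service itself can return it. A minimal sketch (JAX-WS; the service and method names here are made up for illustration):

import java.util.Date;
import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical service: logs a timestamp for every call and
// returns the hosting server's current time to the client.
@WebService
public class TimeService {

    @WebMethod
    public Date getServerTime() {
        Date now = new Date();
        System.out.println("getServerTime invoked at " + now);
        return now;
    }
}

The client then gets the server-side time in the response instead of having to infer it from its own clock.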

Related

Genexus - Mark unrecognized procedure parameters as ignorable in webservices

I have procedures that are exposed as web services (REST).
I need them to be able to parse the request body while ignoring unrecognized fields (fields that are not specified in "rules"). Right now, when a procedure tries to parse something that is not defined within its parameters, it throws an error.
Example:
Some procedure has the following definition:
parm(in:&parm1, in:&parm2, out:&someResponse);
Then we change it to:
parm(in:&parm1, in:&parm2, in:&parm3, out:&someResponse);
The web service is updated on some distributions, but others are still on the old version with two in parameters.
The service that consumes these web services across the different app distributions sends the body matching the second (latest) definition:
{
"parm1" : "somevalue",
"parm2" : "somevalue",
"parm3" : "somevalue"
}
Unfortunately we don't have control over the third party that is consuming our web services, so it would be a lot easier if unrecognized parameters could simply be ignored...
USING GX 16 U11 - Java Generator
Unfortunately there is no way in GeneXus 16 to "catch" the request and do something before the object's logic runs. In GeneXus 17 we have the new API object; there you can transform the parameters.
But not everything is lost. Given that you're generating Java, there is an "external way" to do it with servlet Filters. I used them to log client requests for debugging purposes.
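As a rough illustration of the Filter approach (plain servlet API, not GeneXus-specific; the class name and log output are made up):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter: logs each request before the generated
// GeneXus servlet handles it. Register it in web.xml (or with
// @WebFilter) so it runs in front of the REST endpoints.
public class RequestLoggingFilter implements Filter {

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        System.out.println(http.getMethod() + " " + http.getRequestURI());
        chain.doFilter(req, res); // hand the request on to the service
    }

    public void destroy() { }
}

Note that if you want to inspect or rewrite the body itself, you have to wrap the request in an HttpServletRequestWrapper that buffers the input stream, since the stream can only be read once.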
If you don't want to mess with the code, there are also API gateways you could put in front of your API services to redirect requests to the right service. Bear in mind that I'm not a specialist in this topic; maybe a post on Server Fault would help.

Communicate internally between Google Cloud Functions?

We've created a Google Cloud Function that is essentially an internal API. Is there any way that other internal Google Cloud Functions can talk to the API function without exposing an HTTP endpoint for that function?
We've looked at Pub/Sub, but as far as we can see you can send a request (per se) but you can't receive a response.
Ideally, we don't want to expose an HTTP endpoint due to the extra security ramifications, and we are trying to follow a microservice approach so every function is its own entity.
I sympathize with your microservices approach and trying to keep your services independent. You can accomplish this without opening all your functions to HTTP. Chris Richardson describes a similar case on his excellent website microservices.io:
You have applied the Database per Service pattern. Each service has its own database. Some business transactions, however, span multiple services, so you need a mechanism to ensure data consistency across services. For example, let's imagine that you are building an e-commerce store where customers have a credit limit. The application must ensure that a new order will not exceed the customer's credit limit. Since Orders and Customers are in different databases the application cannot simply use a local ACID transaction.
He then goes on:
An e-commerce application that uses this approach would create an order using a choreography-based saga that consists of the following steps:
1. The Order Service creates an Order in a pending state and publishes an OrderCreated event.
2. The Customer Service receives the event and attempts to reserve credit for that Order. It publishes either a CreditReserved event or a CreditLimitExceeded event.
3. The Order Service receives the event and changes the state of the order to either approved or cancelled.
Basically, instead of a direct function call that returns a value synchronously, the first microservice sends an asynchronous "request event" to the second microservice, which issues a "response event" that the first service picks up. You would use Cloud Pub/Sub to send and receive the messages.
You can read more about this under the Saga pattern on his website.
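As a sketch of the "request event" side (Java Pub/Sub client; the topic name, attribute, and helper are made up for illustration — the same idea applies in whatever runtime your functions use):

import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class OrderEvents {
    // Publishes a hypothetical OrderCreated event. The Customer Service
    // would subscribe to "order-events", do the credit check, and publish
    // its own response event to a topic the Order Service subscribes to.
    public static void publishOrderCreated(String projectId, String orderJson)
            throws Exception {
        Publisher publisher = Publisher.newBuilder(
                TopicName.of(projectId, "order-events")).build();
        PubsubMessage msg = PubsubMessage.newBuilder()
                .setData(ByteString.copyFromUtf8(orderJson))
                .putAttributes("eventType", "OrderCreated")
                .build();
        publisher.publish(msg).get(); // wait until Pub/Sub accepts the message
        publisher.shutdown();
    }
}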
The most straightforward thing to do is wrap your API up into a regular function or object, and deploy that extra code along with each function that needs to use it. You may even wish to fully modularize the code, as you would expect from an npm module.

What is the best way to get response time in OSB

I am doing it like this:
Inside the OSB pipeline's message flow, at the beginning of the request, I assign the current time to a variable. Then in the response, I subtract that variable from the current time to calculate the response time, and use a reporting action to report this number.
I know OSB has a built-in monitoring tool that can display the response time for the proxy service, pipeline and business service. As you can see, my solution only includes the time from the beginning of the pipeline plus the business service, not the time the request and response messages spend passing through the proxy service. Besides that, calculating it this way feels like a non-standard approach.
OSB provides a JMX API that can retrieve this built-in monitoring data, but using it would make our project more complicated.
If we want to use the OSB reporting action to report the response time, is there a best way to do it?
Just switch WebLogic to use the extended log format, and tell it to add time-taken to the list of tokens it logs on each response.
http://middlewaretechnologies.blogspot.com.au/2012/03/configure-extended-logging-in-http.html
or if you want to read the official docs:
http://docs.oracle.com/cd/E14571_01/web.1111/e13701/web_server.htm#CNFGD207
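For illustration, with time-taken enabled each access-log line follows the W3C extended log format, roughly like this (the exact field list depends on your configuration; time-taken is in seconds):

#Fields: date time cs-method cs-uri sc-status time-taken
2012-03-01 10:15:02 POST /myProxyService 200 0.042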

Resume suspended orchestrations automatically

We have a BizTalk application which sends XML files to external applications by calling a web service.
BizTalk calls the web service's method, passing the XML file and the destination application's URL as parameters.
If the external applications are not able to receive the XML, or if no response comes back from the web service to BizTalk, the message gets suspended in BizTalk.
Presently we go into the BizTalk Administration Console manually and resume each suspended message.
Our clients want this process to be fully automated: they want a dashboard showing a list of message details and a button which, when clicked, resumes all the suspended messages.
If you are doing this within an orchestration and catching the connection error, just add a delay shape configured to 5 hours. Or set a retry interval to 300 minutes and multiple retries on the send port if that makes sense. You can do this using the rule engine as well.
Why not implement an asynchronous pattern?
You set it up so that the orchestration sends the file out via a Send shape while initializing a correlation set.
You then add a Listen shape with two branches:
- the Receive shape (following the initialized correlation set)
- a Delay shape set to 5 hours
When you receive the message, your orchestration can handle it gracefully.
When you don't, the delay shape will kick in and you handle accordingly.
The benefit of this solution compared to 40Alpha's is that your orchestration will only 'wake up' from a dehydrated state when the timeout kicks in OR when the response is received. In 40Alpha's example, the orchestration would wake up many times, consuming extra resources.
You may want to look at a product like BizTalk360. It has that sort of monitoring and command capability built into it. I'm not sure it works with BizTalk 2006 R2 though, but you should be thinking about moving off that platform anyway, as it is going out of Microsoft support.

Auditing Jetty Client requests and responses

I have a requirement to count Jetty transactions and measure the time it took to process each request and get back the response, using JMX for our monitoring system.
I am using Jetty 8.1.7 and I can't seem to find a proper way to do this. I basically need to identify when a request is sent (due to Jetty's async approach this is triggered from thread A) and when the response is complete (as onResponseComplete is invoked on another thread).
I usually use a ThreadLocal for this kind of state in other areas where I need similar functionality, but obviously that won't work here.
Any ideas how to overcome this?
To use Jetty's async client you basically subclass ContentExchange and override its callback methods. You can add an extra field to it containing a timestamp of when the request was sent, and use it later in your onResponseComplete() method to measure the processing time. If you need to know the time when your request was actually sent to the server, rather than when it was created, you can override the onRequestCommitted() and onRequestComplete() methods.
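A minimal sketch of that idea against the Jetty 8 client API (the JMX reporting itself is left as a comment, since it depends on your monitoring setup):

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.client.ContentExchange;

// Measures the round trip of one exchange: from the moment the request
// is committed to the wire until the response is fully received.
public class TimedExchange extends ContentExchange {
    private volatile long sentAtNanos;

    @Override
    protected void onRequestCommitted() throws IOException {
        sentAtNanos = System.nanoTime(); // request is on its way to the server
        super.onRequestCommitted();
    }

    @Override
    protected void onResponseComplete() throws IOException {
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - sentAtNanos);
        System.out.println("exchange took " + elapsedMs + " ms");
        // update your JMX counter/MBean with elapsedMs here
        super.onResponseComplete();
    }
}

Since the timestamp lives on the exchange object itself rather than in a ThreadLocal, it doesn't matter that the two callbacks fire on different threads.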