We are working on setting up an API Management portal for one of our Web APIs. We are using Event Hubs for logging events, and we are transferring the event messages to Azure Blob Storage using Azure Functions.
We would like to know how we can find the time taken by the API Management layer to provide the response for a message (we are capturing the time taken at the back-end API layer, but not at the API Management layer).
Regards,
John
The simplest solution is to enable Azure Monitor diagnostic logs for the API Management service. You will get raw logs for each request, including:
durationMs - the interval between receiving the request line and headers from a client and writing the last chunk of the response body to the client. All writes and reads include network latency.
BackendTime - time spent waiting on the backend response
ClientTime - time spent interacting with the client for the request and response
CacheTime - time spent fetching from the cache
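For example, a single simplified log record could look like the following (the values are made up purely for illustration; all timings are in milliseconds):
{ "durationMs": 245, "BackendTime": 180, "ClientTime": 40, "CacheTime": 0 }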
You can also refer to this video.
This is not the canonical way of doing it, but it still gives you an idea of how much time each request is taking: we can use a context variable to record the start time in the inbound policy section and then calculate the elapsed time in the outbound section.
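A minimal sketch of that policy trick (the variable name and response header here are just illustrative choices):
<inbound>
    <base />
    <!-- Record when the gateway received the request -->
    <set-variable name="requestStartTime" value="@(DateTime.UtcNow)" />
</inbound>
<outbound>
    <base />
    <!-- Compute the elapsed time and surface it, e.g. in a response header -->
    <set-header name="X-Elapsed-Ms" exists-action="override">
        <value>@{
            var start = (DateTime)context.Variables["requestStartTime"];
            return (DateTime.UtcNow - start).TotalMilliseconds.ToString();
        }</value>
    </set-header>
</outbound>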
When I make calls from a Cloud Run instance to other cloud APIs, for some reason there are huge delays in the responses.
Everything runs within one project.
Even from my local machine the calls are much faster (a couple of seconds), but deployed in the cloud some requests take a couple of minutes to complete. As far as I can see, this affects all the APIs (Firestore, Translate, and TTS as well). This is definitely not related to cold starts.
Code example (Node.js) and logs are below:
console.log('Received the request for stats');
// Fetch every document from the 'users' collection
const usersCollection = this.firestore.collection('users');
const snapshot = await usersCollection.get();
console.log('Fetched all users from Firestore');
Well, after some further investigation I figured out what the problem was.
The thing is that all the operations I perform happen not before the response is sent, but after it (this is how the chatbot is architected).
So the flow looks like this:
a request to do something arrives, and a 200 response is returned to acknowledge that the request was accepted
all the business logic and work runs
the chatbot sends a message with the results
According to the docs, the CPU is only allocated during request processing by default, so the only thing I had to change was to enable CPU allocation for background activities: https://cloud.google.com/run/docs/tips/general#background-activity
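If you deploy with the gcloud CLI, that setting corresponds to disabling CPU throttling (the service name below is just a placeholder):
gcloud run services update my-chatbot-service --no-cpu-throttling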
I'm building an Alexa skill that sends a request to my web server;
the web server then does some processing and uploads a file to Amazon S3.
While the web server is processing, I have the skill poll Amazon S3 every 10 seconds until it gets the file. The response is based on the file's content.
Unfortunately, the web server process takes more than a minute. That means the skill must wait more than a minute before it can get the file and respond.
For now, I use a progressive response with async/await in my code,
and the skill does keep waiting for the file on S3.
But I found that a second request is sent to Lambda automatically after 50 seconds. That means for the same skill invocation I have two Lambda functions running at the same time.
The execution result is this: after the first progressive response, 50 seconds later I hear another progressive response, which belongs to the second request.
And then nothing happens until the end.
I know it is bad to make the skill wait this long, but I still want to figure out a workable approach if the skill really does need to wait this long.
There are some points I want to figure out:
Is there any way to prevent the skill from sending the second request to Lambda?
Is there another way I can accomplish the goal?
Thanks
Eventually, I found that the second invocation of Lambda does not come from Alexa; it comes from AWS Lambda itself. Refer to the following article:
https://cloudonaut.io/your-lambda-function-might-execute-twice-deal-with-it/
So you have to handle this situation in your Lambda code. One thing you can rely on is that both invocations carry the same request id, so you can tell whether this is the first execution by checking your storage for the request id that you stored during the first execution.
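A rough sketch of that check in Node.js, assuming a DynamoDB table keyed on the request id (ProcessedRequests is a hypothetical table name):
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event, context) => {
    try {
        // Atomically record this request id; the write fails if it already exists
        await dynamo.put({
            TableName: 'ProcessedRequests',
            Item: { requestId: context.awsRequestId },
            ConditionExpression: 'attribute_not_exists(requestId)'
        }).promise();
    } catch (err) {
        if (err.code === 'ConditionalCheckFailedException') {
            // Same request id was stored before: this is the duplicate, so skip the work
            return;
        }
        throw err;
    }
    // First execution: run the real skill logic here
};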
Besides, I also found that once the Alexa skill waits for more than one minute, it crashes and returns the error by speaking (tested on an Amazon Echo). And there is nothing different in the AWS Lambda log compared to a normal execution, meaning the log looks fine even though the actual execution result is not.
Hope this helps someone else who is struggling with this problem.
I am working on a mobile app that will broadcast a push message to hundreds of thousands of devices at a time. When each user opens their app from the push message, the app will hit our API for data. The API resource will be identical for each user of this push.
Now let's assume that all 500,000 users open their app at the same time. API Gateway will get 500,000 identical calls.
Because all 500,000 nearly concurrent requests are asking for the same data, I want to cache it. But keep in mind that it takes about 2 seconds to compute the requested value.
What I want to happen
I want API Gateway to see that the data is not in the cache, let the first call through to my backend service while the other requests are held in a queue, populate the cache from that first call, and then respond to the other 499,999 requests using the cached data.
What is (seems to be) happening
API Gateway, seeing that there is no cached value, is sending every one of the 500,000 requests to the backend service! So I will be recomputing the value with some complex DB query far more times than my resources will allow. This happens because the last call reaches API Gateway before the first call has populated the cache.
Is there any way I can get this behavior?
I know that based on my example that perhaps I could prime the cache by invoking the API call myself just before broadcasting the bulk push job, but the actual use-case is slightly more complicated than my simplified example. But rest assured, solving this simplified use-case will solve what I am trying to do.
If you anticipate that kind of burst concurrency, priming the cache yourself is certainly the best option. Have you also considered adding throttling to the stage/method to protect your backend from a large surge in traffic? Clients could be instructed to retry on throttles and they would eventually get a response.
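A minimal sketch of priming the cache before the broadcast (the endpoint URL is just a placeholder):
const https = require('https');

// One warm-up call populates the API Gateway cache before the push goes out
function primeCache() {
    return new Promise((resolve, reject) => {
        https.get('https://api.example.com/prod/stats', (res) => {
            res.resume(); // drain the body; we only care that the cache is now warm
            res.on('end', () => resolve(res.statusCode));
        }).on('error', reject);
    });
}

Awaiting primeCache() just before kicking off the bulk push job ensures the first request has already populated the cache.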
I'll bring your feedback and proposed solution to the team and put it on our backlog.
I am doing it like this:
Inside the OSB pipeline's message flow, at the beginning of the request, I assign the current time to a variable. Then in the response, I subtract that variable from the response's current time to calculate the response time. Finally, I have a Report action to report this number.
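Concretely, a sketch of that flow with two Assign actions (the variable names are just examples):
In the request pipeline, assign fn:current-dateTime() to a variable, e.g. $startTime.
In the response pipeline, assign the elapsed time in milliseconds to a second variable:
(fn:current-dateTime() - $startTime) div xs:dayTimeDuration('PT0.001S')
Then pass that second variable to the Report action.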
I know OSB has a built-in monitoring tool that can display the response time for the proxy service, pipeline, and business service. As you can see, my solution only includes the time from the beginning of the pipeline plus the business service, but not the time the request and response messages spend going through the proxy service. Besides that, calculating it this way feels like a non-standard approach.
OSB provides a JMX API that can retrieve this built-in monitoring data, but that would make our project more complicated.
If we want to use the OSB Report action to report the response time, is there a best way to do it?
Just switch WebLogic to use the extended log format and tell it to add time-taken to the list of tokens it logs on each response.
http://middlewaretechnologies.blogspot.com.au/2012/03/configure-extended-logging-in-http.html
or if you want to read the official docs:
http://docs.oracle.com/cd/E14571_01/web.1111/e13701/web_server.htm#CNFGD207
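With that enabled, the #Fields directive in the access log would include the time-taken token, for example (the exact field list is up to you):
#Fields: date time time-taken cs-method cs-uri sc-status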
We need to fetch mutual friend data for each one of our new users. (We're currently doing that through the REST API.) In load testing for an upcoming traffic surge, we ran into API throttling, which breaks our production site. Oops!
In the Insights -> Diagnostics pane, it looks like they issue throttling warnings before they actually throttle. Is there some way we can monitor those limits in code so that we back off gracefully?
You will want to watch for these two errors coming back, then put your next call on a wait timer:
API_EC_TOO_MANY_CALLS Application request limit reached
API_EC_USER_TOO_MANY_CALLS User request limit reached
See: http://www.fb-developers.info/tech/fb_dev/faq/general/gen_10.html for more information.
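A rough sketch of that wait-timer logic in Node.js (how the error name is surfaced depends on your client library, so the apiCall wrapper and the error_code field here are assumptions):
// Retry an API call with a growing wait timer on rate-limit errors
const RATE_LIMIT_ERRORS = ['API_EC_TOO_MANY_CALLS', 'API_EC_USER_TOO_MANY_CALLS'];

async function callWithBackoff(apiCall, maxRetries = 5) {
    let delayMs = 1000;
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        const result = await apiCall();
        if (!RATE_LIMIT_ERRORS.includes(result.error_code)) {
            return result; // not throttled, so we are done
        }
        // Back off before the next attempt, doubling the wait each time
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        delayMs *= 2;
    }
    throw new Error('Request limit still reached after retries');
}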