Receiving all results from a Web API with pagination - wso2

I need to connect to a server using a Web API and receive all entries. The server, however, provides at most 100 data entries per request (pagination) plus a hint on how to get the next batch. What is the proper way to realise that with WSO2 EI?
Using the regular mediators doesn't seem to work for me here. I tried using the Script mediator and performing the requests in Ruby (or, to be more precise, the JRuby package WSO2 is using), but I'd be required to use a Ruby Gem for processing the JSON (which doesn't seem to work for me).
Is it possible for WSO2 EI to use Ruby Gems as well?
Or can anyone think of another solution to my problem (which does not necessarily involve writing a custom mediator in Java)?
Example API response (limited to 2 entries at a time)
{
  "result": {
    "data": [
      {
        "id": 1,
        "title": "Test"
      },
      {
        "id": 2,
        "title": "Test 2"
      }
    ],
    "cursor": {
      "limit": "2",
      "after": "2",
      "before": null
    }
  }
}
cursor.after is the ID of the last data entry in this query. Calling the HTTP URL with the parameter after=2 will select the next 2 entries. If there are no further entries, cursor.after is null.

I would try a sequence that calls the API and stores the result, and, if cursor.after is not null, calls itself. In the second iteration it would call the API using the cursor value, append the result to the previous result, and so on until cursor.after is null (the sketch below illustrates this loop).
Another option would be nested Clone mediators, where you keep creating a new clone every time cursor.after is not null, and then use an Aggregate mediator to collect all the responses.
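To make the looping logic concrete, here is a minimal sketch of the cursor-following loop outside of WSO2, written in Python; the endpoint URL and the use of the requests library are assumptions for illustration, not part of the original setup:

import requests  # assumption: plain HTTP client just to illustrate the loop

BASE_URL = "https://example.org/api/items"  # hypothetical paginated endpoint

def fetch_all():
    entries = []
    after = None
    while True:
        params = {"limit": 100}
        if after is not None:
            params["after"] = after          # continue from the last batch
        payload = requests.get(BASE_URL, params=params).json()
        result = payload["result"]
        entries.extend(result["data"])       # accumulate this batch
        after = result["cursor"]["after"]    # hint for the next batch
        if after is None:                    # no further entries
            return entries

The same accumulate-and-repeat idea is what the sequence described above would implement with mediators.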

Related

Which casing of property names is considered the "most correct" in a Google Cloud Pub/Sub Push Message?

If you use a "Push" subscription to a Google Cloud Pub/Sub, you'll be registering an HTTPS endpoint that receives messages from Google's managed service. This is great if you wish to avoid dependencies on Google Cloud's SDKs and instead trigger your asynchronous services via a traditional web request. However, the intended casing of the properties of the payload is not clear, and since I'm using Push subscriptions I don't have a SDK to defer to for deserialization.
If you look at this documentation, you see references to message_id using snake_case (Update 9/18/18: As stated in Kamal's answer, the documentation was updated since this was incorrect), e.g.:
{
  "message": {
    "attributes": {
      "key": "value"
    },
    "data": "SGVsbG8gQ2xvdWQgUHViL1N1YiEgSGVyZSBpcyBteSBtZXNzYWdlIQ==",
    "message_id": "136969346945",
    "publish_time": "2014-10-02T15:01:23.045123456Z"
  },
  "subscription": "projects/myproject/subscriptions/mysubscription"
}
If you look at this documentation, you see references to messageId using camelCase, e.g.:
{
  "message": {
    "attributes": {
      "key": "value"
    },
    "data": "SGVsbG8gQ2xvdWQgUHViL1N1YiEgSGVyZSBpcyBteSBtZXNzYWdlIQ==",
    "messageId": "136969346945",
    "publishTime": "2014-10-02T15:01:23.045123456Z"
  },
  "subscription": "projects/myproject/subscriptions/mysubscription"
}
If you subscribe to the topics and log the output, you actually get both formats, e.g.:
{
  "message": {
    "attributes": {
      "key": "value"
    },
    "data": "SGVsbG8gQ2xvdWQgUHViL1N1YiEgSGVyZSBpcyBteSBtZXNzYWdlIQ==",
    "messageId": "136969346945",
    "message_id": "136969346945",
    "publishTime": "2014-10-02T15:01:23.045123456Z",
    "publish_time": "2014-10-02T15:01:23.045123456Z"
  },
  "subscription": "projects/myproject/subscriptions/mysubscription"
}
An ideal response would answer both of these questions:
Why are there two formats?
Is one more correct or authoritative?
The officially correct names for the variables should be camelCase (messageId), based on the Google JSON style guide. In the early phases of Cloud Pub/Sub, snake_case was used for message_id and publish_time, but this was changed later to conform to the style standards. The snake_case fields were kept alongside the camelCase ones to ensure that push endpoints depending on the original format did not break. The first documentation link you point to apparently was not updated at the time and will be fixed shortly.
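If you are parsing push payloads yourself, a defensive approach is to prefer the camelCase fields and fall back to the snake_case ones. A minimal sketch in Python (the helper function and its caller are hypothetical, not part of any Pub/Sub SDK):

import base64

def parse_push_payload(payload: dict) -> dict:
    """Extract the Pub/Sub message fields, tolerating both casings."""
    msg = payload["message"]
    return {
        # prefer camelCase (the documented style), fall back to snake_case
        "message_id": msg.get("messageId", msg.get("message_id")),
        "publish_time": msg.get("publishTime", msg.get("publish_time")),
        "attributes": msg.get("attributes", {}),
        # data is base64-encoded in push payloads
        "data": base64.b64decode(msg.get("data", "")).decode("utf-8"),
        "subscription": payload["subscription"],
    }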

GCP Stackdriver for OnPrem

Based on Stackdriver, I want to send notifications to my Centreon monitoring (behind Nagios) for workflow reasons. Do you have any idea how to do so?
Thank you
Stackdriver alerting allows webhook notifications, so you can run a server to forward the notifications anywhere you need to (including Centreon), and point the Stackdriver alerting notification channel to that server.
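For illustration, a minimal sketch of such a forwarding server using Flask; the framework choice, port, and internal forwarding URL are assumptions, not something Stackdriver or Centreon prescribes:

from flask import Flask, request   # assumption: Flask as the webhook receiver
import requests

app = Flask(__name__)

# Hypothetical internal endpoint that knows how to feed alerts into Centreon
FORWARD_URL = "http://centreon-bridge.internal:9000/alerts"

@app.route("/stackdriver-webhook", methods=["POST"])
def stackdriver_webhook():
    alert = request.get_json(force=True)    # Stackdriver posts the alert as JSON
    requests.post(FORWARD_URL, json=alert)  # forward it wherever Centreon can consume it
    return "", 204                          # acknowledge receipt to Stackdriver

if __name__ == "__main__":
    app.run(port=8080)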
There are two ways to send external information in the Centreon queue without a traditional passive agent mode.
First, you can use the Centreon DSM (Dynamic Services Management) addon.
It is interesting because you don't have to register a dedicated and already known service in your configuration to match the notification.
With Centreon DSM, Centreon can receive events such as SNMP traps resulting from the detection of a problem and assign the event dynamically to a slot defined in Centreon, like a tray event.
A resource has a set number of “slots” to which alerts are assigned (stored). As long as an event has not been taken into account by human action, it remains visible in the Centreon web frontend. When the event is acknowledged, the slot becomes available for new events.
The event must be transmitted to the server via an SNMP Trap.
All the configuration is made through Centreon web interface after the module installation.
Complete explanations, screenshots, and tips are described on the online documentation: https://documentation.centreon.com/docs/centreon-dsm/en/latest/user.html
Secondly, Centreon developers added a Centreon REST API you can use to submit information to the monitoring engine.
This feature is easier to use than the SNMP trap approach.
In that case, you have to create the host/service objects before using the API.
To send a status, use the following URL with the POST method:
api.domain.tld/centreon/api/index.php?action=submit&object=centreon_submit_results
Headers (key: value):
Content-Type: application/json
centreon-auth-token: the value of the authToken you got in the authentication response
Example of a service submit body: the body is JSON with the parameters described above, formatted as below:
{
  "results": [
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "Memory",
      "status": "2",
      "output": "The service is in CRITICAL state",
      "perfdata": "perf=20"
    },
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "fake-service",
      "status": "1",
      "output": "The service is in WARNING state",
      "perfdata": "perf=10"
    }
  ]
}
Example of a response body: the response body is JSON with an HTTP return code and a message for each submitted result:
{
  "results": [
    {
      "code": 202,
      "message": "The status send to the engine"
    },
    {
      "code": 404,
      "message": "The service is not present."
    }
  ]
}
More information is available in the online documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/api_rest/index.html
The Centreon REST API also allows you to get real-time statuses for hosts and services, and to manage object configuration.
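As an example, here is a minimal sketch of such a submit call with Python's requests library, assuming you have already obtained an authToken from the authentication endpoint and created the host/service objects:

import requests

CENTREON_URL = "https://api.domain.tld/centreon/api/index.php"  # as in the URL above
AUTH_TOKEN = "..."  # value taken from the authentication response

body = {
    "results": [
        {
            "updatetime": "1528884076",
            "host": "Centreon-Central",
            "service": "Memory",
            "status": "2",
            "output": "The service is in CRITICAL state",
            "perfdata": "perf=20",
        }
    ]
}

resp = requests.post(
    CENTREON_URL,
    params={"action": "submit", "object": "centreon_submit_results"},
    headers={
        "Content-Type": "application/json",
        "centreon-auth-token": AUTH_TOKEN,
    },
    json=body,
)
print(resp.json())  # one {code, message} entry per submitted result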

How to access sessionAttributes values from amazon lex response in Amazon Connect?

I have set the value of a session attribute in my Lambda function response, which I am receiving in Amazon Lex after invoking the Lambda from Lex. But when I try to access this value in Amazon Connect using -
$.Lex.SessionAttributes.dateFlag
I am not able to access it.
I have already tried setting the Type to External and to Lex Attributes.
I am putting a condition in Amazon Connect based on the value received from the above.
In the logs I found that the condition where I compare this value evaluates to false.
Can anyone suggest how to get the custom value/sessionAttribute values from Lex/Lambda in Amazon Connect?
Below is my response JSON from Lex. I am trying to access the dateFlag.
{
  "dialogState": "Fulfilled",
  "intentName": "suitabletime",
  "message": "Thanks for the confirmation",
  "messageFormat": "PlainText",
  "responseCard": null,
  "sessionAttributes": {
    "dateFlag": "1",
    "previousIntent": "suitabletime"
  },
  "slotToElicit": null,
  "slots": {
    "date": "2018-09-14",
    "time": "13:00"
  }
}
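For context, the Lambda behind this response sets sessionAttributes roughly as in the following sketch; the handler is a hypothetical illustration of the Lex (V1) response format, not the asker's actual code:

def lambda_handler(event, context):
    # carry over whatever the bot already stored and add our own flags
    session_attributes = event.get("sessionAttributes") or {}
    session_attributes["dateFlag"] = "1"
    session_attributes["previousIntent"] = event["currentIntent"]["name"]

    return {
        "sessionAttributes": session_attributes,
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Thanks for the confirmation",
            },
        },
    }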
Finally I found the solution, and it is simpler than what I was writing. We can directly access the session attribute in Amazon Connect by setting the attribute Type to Lex Attributes and the Attribute field to the attribute key/name.
Below is the screenshot for the same.

AWS API Gateway : Execution failed due to configuration error: No match for output mapping and no default output mapping configured

In AWS API Gateway, I have a GET method that invokes a lambda function.
When I test the method in the API Gateway dashboard, the lambda function executes successfully but API Gateway is not mapping the context.success() call to a 200 result despite having default mapping set to yes.
Instead I get this error:
Execution failed due to configuration error: No match for output mapping and no default output mapping configured
This is my Integration Response setup:
And this is my method response setup:
Basically, I would expect API Gateway to recognize the successful Lambda execution and map it to a 200 response by default, but that doesn't happen.
Does anyone know why this isn't working?
I had the same issue while deploying an API using the Serverless Framework. You can simply follow the steps below, which resolved my issue:
1. Navigate to AWS API Gateway.
2. Find your API and click on the method (POST, GET, ANY, etc.).
3. Click on Method Response.
4. Add a method response with status 200.
5. Save it and test.
I had a similar issue and got it resolved by adding a 200 method response.
This is a 'check-the-basics' type of answer. In my scenario, CORS and the bug mentioned above were not at issue. However, the error message given in the title is exactly what I saw and what led me to this thread.
Instead, (as an API Gateway newb) I had simply failed to redeploy the API. Once I did that, everything worked.
As a bonus, for Terraform 0.12 users, the magic you need and should use is a triggers parameter for your aws_api_gateway_deployment resource. This will automatically redeploy for you when other related APIGW resources change. See the TF documentation for details.
There was an issue when saving the default integration response mapping, which has since been resolved. The bug caused requests to API methods that were saved incorrectly to return a 500 error; the CloudWatch logs should contain:
Execution failed due to configuration error:
No match for output mapping and no default output mapping configured.
Since 'Enable CORS' saves the default integration response, this issue also appeared in your scenario.
For more information, please refer to the AWS forums entry: https://forums.aws.amazon.com/thread.jspa?threadID=221197&tstart=0
Best,
Jurgen
What worked for me:
1. In the API Gateway console, created the OPTIONS method manually.
2. In the Method Response section under the created OPTIONS method, added 200 OK.
3. Selected the OPTIONS method and enabled CORS from the menu.
I found the problem:
Amazon had added a new button in the API Gateway resource configuration titled 'Enable CORS'. I had earlier clicked this; however, once enabled, there doesn't seem to be a way to disable it.
Enabling CORS using this button (instead of doing it manually, which is what I ended up doing) seems to cause an internal server error even on a successful lambda execution.
SOLUTION: I deleted the resource and created it again without clicking on 'Enable CORS' this time, and everything worked fine.
This seems to be a BUG with that feature, but perhaps I just don't understand it well enough. Comment if you have any further information.
Thanks.
Check the box which says "Use Lambda Proxy integration".
This works fine for me. For reference, my lambda function looks like this...
import json

def lambda_handler(event, context):
    # Get params from the query string (populated by the Lambda proxy integration)
    param1 = event['queryStringParameters']['param1']
    param2 = event['queryStringParameters']['param2']

    # Do some stuff with params to get body
    body = some_fn(param1, param2)

    # Return a proxy-integration response object
    response_object = {}
    response_object['statusCode'] = 200
    response_object['headers'] = {}
    response_object['headers']['Content-Type'] = 'application/json'
    response_object['body'] = json.dumps(body)
    return response_object
I'll just drop this here because I faced the same issue today; in my case, the cause was that we were appending a / at the end of the endpoint. So, for example, if this is the definition:
{
  "openapi": "3.0.1",
  "info": {
    "title": "some api",
    "version": "2021-04-23T23:59:37Z"
  },
  "servers": [
    {
      "url": "https://api-gw.domain.com"
    }
  ],
  "paths": {
    "/api/{version}/intelligence/topic": {
      "get": {
        "parameters": [
          {
            "name": "username",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "version",
            "in": "path",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "x-api-key",
            "in": "header",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "X-AWS-Cognito-Group-ID",
            "in": "header",
            "schema": {
              "type": "string"
            }
          }
        ],
        ...
Remove any / at the end of the endpoint: /api/{version}/intelligence/topic. Also do the same for the uri in the apigateway-integration section of the Swagger + API Gateway extensions JSON.
Make sure your ARN for the role is indeed a role (and not, e.g., the policy).

How should I continuously observe a Web service-provided database query?

I'm developing a Web application that is supposed to query a Web service (ASP.NET Web API) for certain data, and visualize the results. The queried data may change while the client application is running, as items may be added to or removed from the corresponding database collection. Either the client itself or other clients may modify the collection (via the Web service). The database server, RavenDB, has the ability to notify its client (the Web service) of changes.
What I'm wondering is how should clients be kept up-to-date as data changes in the Web service? Specifically, if a change takes place in the Web service's database so that a client's view of the data becomes outdated, the client should receive fresh query results. Would it be a good idea to maintain a persistent connection to clients, e.g. via SignalR, and simply notify them each time changes are made to the database so that each client can re-query for data? Should these change notifications be throttled in case they become too frequent?
Example Scenario
The database contains the following items (JSON notation):
[{"Id": "2", "User": "usera"}, {"Id": "1", "User": "usera"},
{"Id": "3", "User": "userb"}, {"Id": "4", "User": "usera"}]
Client A requests items where User == "usera", paginated to max 2 items and sorted on Id; the service returns the following set:
[{"Id": "1", "User": "usera"}, {"Id": "2", "User": "usera"}]
Then client B tells the service to delete the following item: {"Id": "2", "User": "usera"}, so that the database becomes:
[{"Id": "1", "User": "usera"}, {"Id": "3", "User": "userb"},
{"Id": "4", "User": "usera"}]
The question now is, how does the Web service notify client A that it should re-query for new data? That is, client A should refresh its view to contain the following:
[{"Id": "1", "User": "usera"}, {"Id": "4", "User": "usera"}]
What you said sounds right. You can have Web API and SignalR hosted side-by-side. You can use Web API to retrieve the data and SignalR to notify clients whenever the data changes. You can either notify clients that the data has changed so that they can re-query or you could actually send the changes to clients so that they can avoid re-querying the API.
You could also go with a different model where the client polls the server every, say, 15 or 30 seconds and updates the visualized results. This has the advantage of not requiring a persistent connection and being easier to implement. But changes will take longer to propagate to clients, and you may end up consuming more bandwidth if the result set is large or if changes are infrequent (since the polling happens regardless of whether there are actually any changes).
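As a simple illustration of the polling variant, a client could do something like the following; Python is used only for brevity here, and the query URL and interval are assumptions:

import time
import requests

QUERY_URL = "https://example.org/api/items?user=usera"  # hypothetical query endpoint
POLL_INTERVAL_SECONDS = 30

last_seen = None
while True:
    results = requests.get(QUERY_URL).json()
    if results != last_seen:     # only refresh the view when the data actually changed
        print("refreshing view with", len(results), "items")
        last_seen = results
    time.sleep(POLL_INTERVAL_SECONDS)

With the SignalR approach, the loop disappears and the re-query (or the application of pushed changes) happens inside the change-notification handler instead.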