How should I continuously observe a Web service-provided database query?

I'm developing a Web application that is supposed to query a Web service (ASP.NET Web API) for certain data, and visualize the results. The queried data may change while the client application is running, as items may be added to or removed from the corresponding database collection. Either the client itself or other clients may modify the collection (via the Web service). The database server, RavenDB, has the ability to notify its client (the Web service) of changes.
What I'm wondering is how should clients be kept up-to-date as data changes in the Web service? Specifically, if a change takes place in the Web service's database so that a client's view of the data becomes outdated, the client should receive fresh query results. Would it be a good idea to maintain a persistent connection to clients, e.g. via SignalR, and simply notify them each time changes are made to the database so that each client can re-query for data? Should these change notifications be throttled in case they become too frequent?
Example Scenario
The database contains the following items (JSON notation):
[{"Id": "2", "User": "usera"}, {"Id": "1", "User": "usera"},
{"Id": "3", "User": "userb"}, {"Id": "4", "User": "usera"}]
Client A requests items where User == "usera", paginated to max 2 items and sorted on Id; the service returns the following set:
[{"Id": "1", "User": "usera"}, {"Id": "2", "User": "usera"}]
Then client B tells the service to delete the following item: {"Id": "2", "User": "usera"}, so that the database becomes:
[{"Id": "1", "User": "usera"}, {"Id": "3", "User": "userb"},
{"Id": "4", "User": "usera"}]
The question now is, how does the Web service notify client A that it should re-query for new data? That is, client A should refresh its view to contain the following:
[{"Id": "1", "User": "usera"}, {"Id": "4", "User": "usera"}]

What you said sounds right. You can host Web API and SignalR side by side: Web API serves the data, and SignalR notifies clients whenever the data changes. You can either tell clients that the data has changed so that they re-query, or you can send the actual changes to clients so that they can avoid re-querying the API.
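On the client side, a minimal sketch of the notify-then-re-query approach might look like the following; the hub URL, the "collectionChanged" method name, and the query endpoint are all hypothetical names for illustration, not something defined by SignalR itself.

import * as signalR from "@microsoft/signalr";

async function watchQuery(renderItems: (items: unknown[]) => void): Promise<void> {
  // Subscribe to change notifications and re-run the Web API query on each one.
  const connection = new signalR.HubConnectionBuilder()
    .withUrl("/hubs/changes")                 // hypothetical hub endpoint
    .withAutomaticReconnect()
    .build();

  connection.on("collectionChanged", async () => {
    // Re-query the paginated, filtered view so the client never renders stale data.
    const response = await fetch("/api/items?user=usera&page=1&pageSize=2");
    renderItems(await response.json());
  });

  await connection.start();
}

As for throttling, the server can coalesce the database change notifications (for example, broadcast at most once every few seconds) if they become too frequent.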
You could also go with a different model where the client polls the server every 15 or 30 seconds, say, and updates the visualized results. This has the advantage of not requiring a persistent connection and being easier to implement. But changes will take longer to propagate to clients, and you may end up consuming more bandwidth if the result set is large or if changes are infrequent (since the polling happens regardless of whether there are actually any changes).
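A minimal polling sketch, assuming the same hypothetical query endpoint as above:

// Sketch: re-fetch the query on a fixed interval instead of holding a connection open.
const POLL_INTERVAL_MS = 30_000;

function startPolling(renderItems: (items: unknown[]) => void): void {
  setInterval(async () => {
    const response = await fetch("/api/items?user=usera&page=1&pageSize=2");
    if (response.ok) {
      renderItems(await response.json());
    }
  }, POLL_INTERVAL_MS);
}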

Related

How can I use TTL to prevent a message backlog when using Firebase Cloud Messaging with Django-Push-Notifications?

I am working with Firebase Cloud Messaging in Django using django-push-notifications to deliver push notifications to our users via desktop notifications.
After a browser is fully closed (such as when the computer is turned off), our users receive a backlog of all previously sent notifications the next time they boot up.
While there are situations where a user would want to receive the entire backlog of messages, this is not one of them.
It seems the answer is to set TTL=0, as per this section of the FCM documentation, but my attempts are not resulting in the desired behavior.
Please help me better understand TTL in this context. If TTL is the right way to go, what is the proper way to format TTL in send_message() using django-push-notifications so that messages will not accumulate if not immediately delivered?
Here is what I have attempted:
devices.send_message(
    body,
    TTL=0,
    time_to_live=0,
    link='blah',
    extra={'title': 'blah blah', 'icon': '/foo/bar.png'}
)
The format that you send seems different from the one in the documentation you linked. From the documentation:
{
  "message": {
    "token": "bk3RNwTe3H0:CI2k_HHwgIpoDKCIZvvDMExUdFQ3P1...",
    "data": {
      "Nick": "Mario",
      "body": "great match!",
      "Room": "PortugalVSDenmark"
    },
    "apns": {
      "headers": {
        "apns-expiration": "1604750400"
      }
    },
    "android": {
      "ttl": "4500s"
    },
    "webpush": {
      "headers": {
        "TTL": "4500"
      }
    }
  }
}
So the key here is that the time-to-live for a webpush message is set under webpush/headers/TTL, while you're passing it as a top-level argument.
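For illustration only, here is roughly how that message could be built with the firebase-admin SDK (Node shown here); the token, title, icon, and link values are placeholders mirroring the question, and how django-push-notifications maps its keyword arguments onto this structure is exactly the part to verify against its documentation.

import * as admin from "firebase-admin";

admin.initializeApp(); // reads credentials from GOOGLE_APPLICATION_CREDENTIALS

async function sendExpiringWebPush(): Promise<void> {
  await admin.messaging().send({
    token: "device-registration-token",           // placeholder
    notification: { title: "blah blah", body: "..." },
    webpush: {
      headers: { TTL: "0" },                      // drop the message if it cannot be delivered immediately
      notification: { icon: "/foo/bar.png" },
      fcmOptions: { link: "https://example.com/blah" },
    },
  });
}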

GCP Stackdriver for on-prem

I want to send notifications from Stackdriver to my Centreon monitoring system (built on Nagios) for workflow reasons. Do you have any idea how to do so?
Thank you
Stackdriver alerting allows webhook notifications, so you can run a server to forward the notifications anywhere you need to (including Centreon), and point the Stackdriver alerting notification channel to that server.
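As a rough sketch, such a forwarder could be a small HTTP service; the port, path, and the incident fields read here are assumptions about the Stackdriver webhook payload, and the forwarding step could reuse the Centreon submit API shown further below.

import express from "express";

const app = express();
app.use(express.json());

// Receive Stackdriver alerting webhooks and hand them on to Centreon (or anything else).
app.post("/stackdriver-webhook", (req, res) => {
  const incident = req.body.incident ?? {};
  // Map the alert onto a Centreon submit-results call here; for now just log it.
  console.log(`[${incident.state}] ${incident.policy_name}: ${incident.summary}`);
  res.sendStatus(200);
});

app.listen(8080);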
There are two ways to send external information into the Centreon queue without a traditional passive agent setup.
First, you can use the Centreon DSM (Dynamic Services Management) addon.
It is interesting because you don't have to register a dedicated, already-known service in your configuration to match each notification.
With Centreon DSM, Centreon can receive events such as SNMP traps resulting from the detection of a problem and assign each event dynamically to a slot defined in Centreon, much like a tray of events.
A resource has a set number of “slots” to which alerts are assigned (stored). Until an event has been acknowledged by a human, it remains visible in the Centreon web frontend. When the event is acknowledged, the slot becomes available for new events.
The event must be transmitted to the server via an SNMP Trap.
All the configuration is done through the Centreon web interface after the module installation.
Complete explanations, screenshots, and tips are described on the online documentation: https://documentation.centreon.com/docs/centreon-dsm/en/latest/user.html
Secondly, the Centreon developers added a Centreon REST API that you can use to submit information to the monitoring engine.
This approach is easier to use than the SNMP trap method.
In that case, you have to create the host and service objects before using the API.
To send a status, use the following URL with the POST method:
api.domain.tld/centreon/api/index.php?action=submit&object=centreon_submit_results
Headers:
Content-Type: application/json
centreon-auth-token: the value of authToken you got in the authentication response
Example of a service result submission: the body is JSON, with the parameters above formatted as below:
{
  "results": [
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "Memory",
      "status": "2",
      "output": "The service is in CRITICAL state",
      "perfdata": "perf=20"
    },
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "fake-service",
      "status": "1",
      "output": "The service is in WARNING state",
      "perfdata": "perf=10"
    }
  ]
}
Example response body: the response is JSON containing an HTTP return code and a message for each submitted result:
{
  "results": [
    {
      "code": 202,
      "message": "The status send to the engine"
    },
    {
      "code": 404,
      "message": "The service is not present."
    }
  ]
}
More information is available in the online documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/api_rest/index.html
The Centreon REST API also allows you to retrieve real-time status for hosts and services, and to manage object configuration.
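Put together, a minimal sketch of the two calls might look like this; the base URL, credentials, and host/service names are placeholders, and the authenticate action is assumed to be the one described in the same API documentation.

const BASE = "https://api.domain.tld/centreon/api/index.php";

async function submitStatus(): Promise<void> {
  // 1. Authenticate to obtain the token used in the centreon-auth-token header.
  const auth = await fetch(`${BASE}?action=authenticate`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ username: "api-user", password: "api-password" }),
  });
  const { authToken } = await auth.json();

  // 2. Submit a result for an already-configured host/service pair.
  const response = await fetch(`${BASE}?action=submit&object=centreon_submit_results`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "centreon-auth-token": authToken,
    },
    body: JSON.stringify({
      results: [
        {
          updatetime: `${Math.floor(Date.now() / 1000)}`,
          host: "Centreon-Central",
          service: "Memory",
          status: "2",
          output: "The service is in CRITICAL state",
          perfdata: "perf=20",
        },
      ],
    }),
  });
  console.log(await response.json()); // e.g. { "results": [{ "code": 202, ... }] }
}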

Receiving all results from a Web API with pagination

I need to connect to a server using a Web API and receive all entries. The server however only provides 100 data entries at most (pagination) and a hint how to get the next batch. What is the proper way to realise that with WSO2 EI?
Using the regular mediators doesn't seem to work for me here. I tried using the Script mediator and performing the requests in Ruby (or, to be more precise, the JRuby package WSO2 is using), but I'd be required to use a Ruby gem for processing the JSON (which doesn't seem to be working for me).
Is it possible for WSO2 EI to use Ruby Gems as well?
Or can anyone think of another solution to my problem (which does not necessarily involve writing a custom mediator in Java)?
Example API response (limited to 2 entries at a time)
{
  "result": {
    "data": [
      {
        "id": 1,
        "title": "Test"
      },
      {
        "id": 2,
        "title": "Test 2"
      }
    ],
    "cursor": {
      "limit": "2",
      "after": "2",
      "before": null
    }
  }
}
cursor.after is the ID of the last entry in the current batch. Calling the HTTP URL with the parameter after=2 selects the next 2 entries. If there are no further entries, cursor.after is null.
I would try a sequence that calls the API and stores the result and, if cursor.after is not null, calls itself. In the second iteration it would call the API using the cursor value, add the result to the previous results, and so on until cursor.after is null.
Another option would be nested Clone mediators, where you keep creating a new clone every time cursor.after is not null, and then use an Aggregate mediator to collect all the responses.
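For reference, the looping logic that the first suggestion reproduces inside a sequence looks roughly like this outside of WSO2; the endpoint URL is a placeholder, and the field names follow the example response above.

interface Entry { id: number; title: string; }
interface ApiResponse {
  result: {
    data: Entry[];
    cursor: { limit: string; after: string | null; before: string | null };
  };
}

async function fetchAllEntries(): Promise<Entry[]> {
  const all: Entry[] = [];
  let after: string | null = null;

  do {
    const url = after === null
      ? "https://example.com/api/entries?limit=2"
      : `https://example.com/api/entries?limit=2&after=${after}`;
    const page: ApiResponse = await (await fetch(url)).json();

    all.push(...page.result.data);     // accumulate this batch
    after = page.result.cursor.after;  // null once the last batch has been reached
  } while (after !== null);

  return all;
}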

How long is a RingOut ID supposed to live for?

Firstly, RingOut is working correctly: it dials two numbers and connects them together successfully.
When I send a POST request to the RingOut REST API endpoint, I get a ringout ID back. I then use this ringout ID and issue a GET request every few seconds to track when both parties have answered their calls. (I am aware of webhooks, but webhooks don't give me the callee's status.)
{
  "uri": "https://platform.devtest.ringcentral.com/restapi/v1.0/account/XXXX/extension/XXXXXX/ringout/XXx",
  "id": xxx,
  "status": {
    "callStatus": "Success",
    "callerStatus": "Success",
    "calleeStatus": "Success"
  }
}
I use this same polling technique to work out when either party has disconnected from the call.
{
  "uri": "https://platform.devtest.ringcentral.com/restapi/v1.0/account/xxxx/extension/xxxx/ringout/xxxx",
  "id": xxx,
  "status": {
    "callStatus": "CannotReach",
    "callerStatus": "Finished",
    "calleeStatus": "Finished"
  }
}
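The polling loop itself is nothing more than a repeated GET on the URI returned by the ringout POST; a rough sketch, where the access token and the interval are placeholders:

// Sketch: poll the ringout status resource until the request starts failing.
async function pollRingout(ringoutUri: string, accessToken: string): Promise<void> {
  const timer = setInterval(async () => {
    const response = await fetch(ringoutUri, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    if (!response.ok) {                 // e.g. CMN-102 once the ringout ID has expired
      clearInterval(timer);
      return;
    }
    const { status } = await response.json();
    console.log(status.callerStatus, status.calleeStatus);
  }, 5_000);
}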
I noticed that the ringout ID only lives for about 30 seconds; after this time, when I send a GET request, I get the following response even though the phone call is still taking place.
{
  "errorCode": "CMN-102",
  "message": "Resource for parameter [ringoutId] is not found",
  "errors": [
    {
      "errorCode": "CMN-102",
      "message": "Resource for parameter [ringoutId] is not found",
      "parameterName": "ringoutId"
    }
  ],
  "parameterName": "ringoutId"
}
Is this the expected behaviour for a RingOut call? Does the ID disappear after about 30 seconds, even though the call is still active?
This question was also asked in the RingCentral Developer Community, and answered by the Principal Architect for the Platform: https://devcommunity.ringcentral.com/ringcentraldev/topics/how-long-does-a-ringout-id-live-for
Adding a copy of Anton's answer here to save people a click...
Ringout ID lives until both call legs are established (or canceled).
You cannot use this ID to check the status of a call which is already
connected to both parties or to cancel already connected call.
In order to monitor status of an established call you should use our
presence notifications.
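For monitoring an established call, the resource behind those presence notifications is the extension presence endpoint; a minimal sketch of reading it directly, where the detailedTelephonyState parameter and the token handling are assumptions based on the RingCentral API reference:

// Sketch: read telephony status from extension presence instead of the expired ringout ID.
async function getTelephonyStatus(accessToken: string): Promise<string> {
  const response = await fetch(
    "https://platform.devtest.ringcentral.com/restapi/v1.0/account/~/extension/~/presence?detailedTelephonyState=true",
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const presence = await response.json();
  return presence.telephonyStatus;      // e.g. "CallConnected", "NoCall"
}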

Amazon ELB - sticky sessions losing custom cookie

I have a Node.js app on Elastic Beanstalk running on multiple EC2 instances behind a load balancer (ELB).
Because of my app's requirements, I had to activate session stickiness.
I activated the "AppCookieStickinessPolicy" using my custom cookie "sails.sid" as the reference.
The problem is that my app needs this cookie to work properly, but the moment I activate session stickiness (via Duration-Based Session Stickiness or, in my case, Application-Controlled Session Stickiness), the headers going to my server are modified and I lose my custom cookie, which is replaced by the AWSELB (Amazon ELB) cookie.
How can I configure the load balancer to not replace my cookie?
If I understand correctly, the AppCookieStickinessPolicy should keep my custom cookie, but that is not what happens.
Am I doing something wrong?
Thanks in advance
Description of my load balancer:
{
  "LoadBalancerDescriptions": [
    {
      "AvailabilityZones": [
        "us-east-1b"
      ],
      ....
      "Policies": {
        "AppCookieStickinessPolicies": [
          {
            "PolicyName": "AWSConsole-AppCookieStickinessPolicy-awseb-e-y-AWSEBLoa-175QRBIZFH0I8-1452531192664",
            "CookieName": "sails.sid"
          }
        ],
        "LBCookieStickinessPolicies": [
          {
            "PolicyName": "awseb-elb-stickinesspolicy",
            "CookieExpirationPeriod": 0
          }
        ],
        "OtherPolicies": []
      },
      "ListenerDescriptions": [
        {
          "Listener": {
            "InstancePort": 80,
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "Protocol": "HTTP"
          },
          "PolicyNames": [
            "AWSConsole-AppCookieStickinessPolicy-awseb-e-y-AWSEBLoa-175QRBIZFH0I8-1452531192664"
          ]
        }
      ]
      ....
    }
  ]
}
The sticky-session cookie set by the ELB is used to identify which node in the cluster to route requests to.
If you set a cookie in your application that you need to rely on and then have the ELB use that same cookie for stickiness, the ELB is going to overwrite the value you're setting.
Try simply allowing the ELB to manage the session cookie.
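If you go that route, a rough sketch of switching the listener to an ELB-managed, duration-based stickiness policy looks like this (aws-sdk v2 against a classic ELB; the load balancer name, policy name, and expiration period are placeholders):

import { ELB } from "aws-sdk";

async function useDurationBasedStickiness(): Promise<void> {
  const elb = new ELB({ region: "us-east-1" });

  // Create a policy whose cookie the ELB itself manages, leaving sails.sid untouched.
  await elb.createLBCookieStickinessPolicy({
    LoadBalancerName: "my-load-balancer",       // placeholder
    PolicyName: "duration-stickiness",
    CookieExpirationPeriod: 3600,               // seconds; omit or use 0 for browser-session lifetime
  }).promise();

  // Attach the policy to the HTTP listener.
  await elb.setLoadBalancerPoliciesOfListener({
    LoadBalancerName: "my-load-balancer",       // placeholder
    LoadBalancerPort: 80,
    PolicyNames: ["duration-stickiness"],
  }).promise();
}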
I spent a lot of time trying out the ELB stickiness features and routing requests from the same client to the same machine in a back-end server cluster.
The problem is, it didn't always work 100% of the time, so I had to write a backup procedure using sessions stored in MySQL. But then I realised I didn't need the ELB stickiness functionality at all; I could just use the MySQL session system.
It is more complex to write a database session system, and there is of course an overhead in that every HTTP call will involve a database query. However, if that query uses a primary index, it's not so bad.
The big advantage is that any request can go to any server, and if one of your servers dies, the next one can handle the job just as well. For truly resilient applications, a database session system is inevitable.
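A minimal sketch of that pattern for a generic Express app, using the express-session and express-mysql-session packages (a Sails app would wire the equivalent store through its own session config); the connection details and the secret are placeholders.

import express from "express";
import session from "express-session";
import MySQLStoreFactory from "express-mysql-session";

const MySQLStore = MySQLStoreFactory(session as any);

const app = express();

app.use(
  session({
    secret: "replace-with-a-real-secret",       // placeholder
    store: new MySQLStore({
      host: "db.example.com",                   // placeholder connection details
      user: "app",
      password: "app-password",
      database: "sessions",
    }),
    resave: false,
    saveUninitialized: false,
  })
);

// Every instance behind the load balancer reads the same session rows,
// so any request can be served by any server.
app.listen(3000);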