How can I immediately detect my phone going back online with IBM MobileFirst?

I am facing a problem with the heartbeat interval in my IBM MobileFirst application.
The first time the app runs, everything is fine. If I go offline, it recognizes that I am offline. The problem is that after I go offline and then come back online, the app tries to send the heartbeat and keeps retrying for about 20-30 seconds. Even though my phone is back online, I am still 'offline' in the application because the heartbeat has not yet been sent successfully. Only 20-30 seconds later do I get word that the connection was successful, and then the app recognizes that I am online. Is there a way to avoid this 'delay'?
I want the app to know that I am offline/online as soon as possible. Is there a way to achieve this?
This is my initOptions where I am using the timeout:
var wlInitOptions = {
    timeout: 5000,
    .
    .
    .
And this is my app.js, where I am using WL.Client.setHeartBeatInterval:
// Default: send a heartbeat every 5 seconds
WL.Client.setHeartBeatInterval(5);

// While connected, a slower heartbeat is enough
document.addEventListener(WL.Events.WORKLIGHT_IS_CONNECTED, function (event) {
    WL.Logger.error('We are online, lower the heartbeat');
    WL.Client.setHeartBeatInterval(5);
}, false);

// While disconnected, ping more often so the reconnect is noticed sooner
document.addEventListener(WL.Events.WORKLIGHT_IS_DISCONNECTED, function (event) {
    WL.Logger.error('We are no longer online, raise heartbeat');
    WL.Client.setHeartBeatInterval(1);
}, false);

The heartbeat only indirectly recognises that you are online, via a successful heartbeat request - it doesn't check the online/offline status using the phone's hardware. Therefore there will always be a delay, which on average will be half of the heartbeat interval (with your 5-second interval, roughly 2.5 seconds on average, plus however long the heartbeat request itself takes). If you wish to increase the speed at which it recognises you're online, you need to reduce the heartbeat interval. Of course, that will increase network traffic.
There are Cordova plugins, such as this one, which instead look at the phone's hardware and detect whether it is online or offline, providing events you can listen to. They don't (as far as I know) attempt to initiate a network connection to a remote host, so that will tell you only whether the phone thinks it has a network connection, not whether it is stable/robust/fast. As far as I know, MFP doesn't have that functionality built in. To be clear, the plugin is not supported by IBM and I haven't tested it.
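For illustration only, here is a minimal sketch assuming the cordova-plugin-network-information plugin (one such plugin - untested, and not necessarily the one linked above). It reacts to the device's own connectivity events instead of waiting for a heartbeat round trip:
// Sketch (untested): device-level connectivity events from
// cordova-plugin-network-information; they fire on document after 'deviceready'.
document.addEventListener('deviceready', function () {
    document.addEventListener('online', function () {
        // Device reports a connection again; resume the normal heartbeat cadence.
        WL.Logger.info('Device reports network is back');
        WL.Client.setHeartBeatInterval(5);
    }, false);
    document.addEventListener('offline', function () {
        // Device reports no connection; ping more often to spot the reconnect.
        WL.Logger.info('Device reports network is gone');
        WL.Client.setHeartBeatInterval(1);
    }, false);
}, false);
Bear in mind this only tells you the device thinks it is online; the heartbeat remains the authoritative check that the server is actually reachable.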

Related

Answering machine with Amazon Connect

We are trying to receive customer calls through Amazon Connect and leave messages in Amazon Kinesis.
When we call Amazon Connect from our cell phones, the voice plays the expected message and the Beep tone sounds as expected. But then the call ends and we cannot leave a message. We tried removing Wait and Stop media streaming but the problem persisted. What are we doing wrong?
Set Voice: OK
Play prompt(Message): OK
Play prompt(Beep): OK
Start media streaming: NG
If you have a simple, easy to understand sample for this application, let me know!
Looks like the problem is your Wait block. Wait isn't supported for voice calls, so it errors immediately.
Replace the Wait block with a Get Customer Input block. Use Text to speech for the prompt, set the prompt value manually to <speak></speak>, and set Interpret as to SSML. Set it to detect DTMF, and set the timeout to however long the message is allowed to be; from your flow above, that is 10 seconds.
This should get the customers voice sent to the Kinesis stream and you can process the stream from there.
There is a really thorough implementation guide for voice mail here. I've used it and then altered it to suit my exact needs in the past.

Is there an AWS / PagerDuty service that will alert me if it's NOT notified?

We've got a little Java scheduler running on AWS ECS. It's doing what cron used to do on our old monolith: it fires up (Fargate) tasks in Docker containers. We've got a task that runs every hour, and it's quite important to us. I want to know if it crashes or fails to run for any reason (e.g. the Java scheduler fails, or someone turns the task off).
I'm looking for a service that will alert me if it's not notified. I want to call the notification system every time the script runs successfully. Then if the alert system doesn't get the "OK" notification as expected, it shoots off an alert.
I figure this kind of service must exist, and I don't want to re-invent the wheel trying to build it myself. I guess my question is: what's it called? And where can I go to get that kind of thing? (We're using AWS, obviously, and we've got a PagerDuty account.)
We use this approach for these types of problems. First, the task has to write a timestamp to a file in S3 or EFS. This file is the external evidence that the task ran to completion. Then you need an HTTP-based service that will read that file and check whether the timestamp is valid, i.e. has been updated in the last hour. This could be a simple PHP or Node.js script. This process is exposed to the public web, e.g. https://example.com/heartbeat.php. The script returns an HTTP response code of 200 if the timestamp file is present and valid, or a 500 if not. Then we use StatusCake to monitor the URL and notify us via its PagerDuty integration if there is an incident. We usually include a message in the response so a human can see the nature of the error.
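As a rough illustration, here is a minimal Node.js sketch of such a check, assuming (hypothetically) that the task writes an ISO timestamp to an EFS-mounted file at /mnt/efs/last-run.txt and that anything older than an hour counts as stale:
// Minimal heartbeat endpoint: 200 if the timestamp file is fresh, 500 otherwise.
const http = require('http');
const fs = require('fs');

const TIMESTAMP_FILE = '/mnt/efs/last-run.txt'; // hypothetical path
const MAX_AGE_MS = 60 * 60 * 1000;              // the task runs hourly

http.createServer(function (req, res) {
    let status = 500;
    let message = 'ERROR: timestamp file missing or unreadable';
    try {
        const ts = new Date(fs.readFileSync(TIMESTAMP_FILE, 'utf8').trim());
        const ageMs = Date.now() - ts.getTime();
        if (!Number.isNaN(ts.getTime()) && ageMs <= MAX_AGE_MS) {
            status = 200;
            message = 'OK: last run ' + Math.round(ageMs / 1000) + 's ago';
        } else {
            message = 'STALE: last run ' + Math.round(ageMs / 1000) + 's ago';
        }
    } catch (err) {
        // Fall through with the 500 defaults: a missing file is also a failure.
    }
    res.writeHead(status, { 'Content-Type': 'text/plain' });
    res.end(message);
}).listen(8080);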
This may seem tedious, but it is foolproof. Any failure anywhere along the line will be notified immediately. StatusCake has a great free service level. This approach can be used to monitor any critical task in the same way. We've learned the hard way that critical cron-type tasks and processes can fail for any number of reasons, and you want to know before it becomes customer-critical. 24x7x365 monitoring of these types of tasks is necessary, and it helps us sleep better at night.
Note: we always have a daily system test event that triggers a PagerDuty notification at 9am each day. For the truly paranoid, this assures that PagerDuty itself has not failed in some way, e.g. misconfiguration. Our support team knows that if they don't get a test alert each day, there is a problem in the notification system itself. The tech on duty has to acknowledge the incident as per SOP. If they do not acknowledge, it escalates to the next tier, and we know we have to have a talk about response times. It keeps people on their toes. This is the final piece to ensure you have a robust monitoring infrastructure.
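Incidentally, firing that scheduled test event can be a single call to PagerDuty's Events API v2. A Node.js sketch; the routing key env var and source name are placeholders:
// Trigger a test incident via PagerDuty Events API v2 (run from a 9am scheduler).
const https = require('https');

const body = JSON.stringify({
    routing_key: process.env.PD_ROUTING_KEY, // hypothetical env var
    event_action: 'trigger',
    payload: {
        summary: 'Daily 9am notification-system test - please acknowledge',
        source: 'monitoring-selftest', // made-up source name
        severity: 'info'
    }
});

const req = https.request({
    hostname: 'events.pagerduty.com',
    path: '/v2/enqueue',
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(body) }
}, function (res) {
    console.log('PagerDuty responded ' + res.statusCode);
});
req.on('error', function (err) { console.error('Test event failed: ' + err.message); });
req.write(body);
req.end();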
OpsGenie has a heartbeat service, which is basically a watchdog timer. You can configure it to alert you if you don't ping it within x number of minutes.
Unfortunately, I would not recommend them. I have been using them for 4 years, and they have changed their account system twice and silently left my paid account orphaned. I have to find a new vendor as soon as I have some free time.
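That said, if you do go the heartbeat-service route, the ping is just an authenticated HTTP call at the end of each successful run. A rough, unverified sketch against OpsGenie's v2 REST API (the heartbeat name is made up; check the endpoint and method against their current docs before relying on it):
// Ping an OpsGenie heartbeat at the end of each successful run.
const https = require('https');

https.get({
    hostname: 'api.opsgenie.com',
    path: '/v2/heartbeats/hourly-task/ping', // 'hourly-task' is a made-up name
    headers: { 'Authorization': 'GenieKey ' + process.env.OPSGENIE_API_KEY }
}, function (res) {
    console.log('Heartbeat ping returned ' + res.statusCode);
}).on('error', function (err) {
    console.error('Heartbeat ping failed: ' + err.message);
});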

Occasional high latency in qpid application

I'm hoping someone can help me with an issue I'm seeing with a Qpid C++ application I'm using. Essentially, we have one application publishing a status to a last_value_queue at about a 10Hz rate and a couple other applications continuously processing this status. The receivers also use the status as a kind of heartbeat and will complain if the status message isn't updated for a certain amount of time (500ms, to be exact.)
This works fine for about a day, after which we start seeing issues. Every couple hours, a single fetch call by a receiver will block for over 500ms (sometimes for up to 900ms.) This behavior will continue until we restart the broker.
I'm no expert, but I don't think I'm doing anything particularly dumb. I've been able to repeat this behavior with a pair of small applications that connect to the broker. Every 100ms the sender sends a std::chrono::time_point object set to the current time. The receiver fetches the message and calculates the delay to the millisecond. The delay is always 0ms or 1ms, except for the single spikes every hour or so after the initial day of everything being happy. The connection is created like so:
qpid::messaging::Connection c("host1:5672","{ reconnect: true}");
and the sender and receiver are both created with the string
"testQueue; { mode: browse, create: always, node: { type: queue, x-declare:{ arguments:{'qpid.last_value_queue_key':'key','qpid.replicate':'none'}}}}"
High-availability replication is enabled on the broker, but I have it explicitly disabled for everything for the purpose of my testing. I see no difference in behavior whether the broker and apps run on the same host or on different hosts on the LAN. Using qpid-stat, I can see that the broker replication queue is still transmitting quite a bit of data, but its message count is always 0, so I don't think it's sending more than it can handle. Can anyone think of anything I might be missing that could cause this behavior? We're using Qpid 0.26 and the C++ broker.

Automate Suspended orchestrations to be resumed automatically

We have a BizTalk application which sends XML files to external applications by using a web-service.
BizTalk calls the web service's method, passing the XML file and the destination application URL as parameters.
If the external applications are not able to receive the XML, or if no response from the web service gets back to BizTalk, the message gets suspended in BizTalk.
Presently, in this situation, we manually go to the BizTalk admin console and resume each suspended message.
Our clients want this process to be fully automated: they want a dashboard which shows a list of message details and a button, and on its click all the suspended messages have to be resumed.
If you are doing this within an orchestration and catching the connection error, just add a delay shape configured to 5 hours. Or set a retry interval to 300 minutes and multiple retries on the send port if that makes sense. You can do this using the rule engine as well.
Why not implement an asynchronous pattern?
You make it so that the orchestration sends the file out via a send shape while initializing a correlation set.
You then put a listen shape with two branches:
- the receive (following the initialized correlation set)
- a delay shape set to 5 hours
When you receive the message, your orchestration can handle it gracefully.
When you don't, the delay shape will kick in and you handle it accordingly.
The benefit of this solution compared to 40Alpha's is that your orchestration will only 'wake up' from a dehydrated state when the timeout kicks in OR when the response is received. In 40Alpha's example, the orchestration would wake up a lot of times, consuming extra resources.
You may want to look at a product like BizTalk360. It has that sort of monitoring and command capability built into it. I'm not sure it works with BizTalk 2006 R2, though, but you should be thinking about moving off that platform anyway, as it is going out of Microsoft support.

Server connection delay not recorded on some JMeter clients

While testing a web service, we set a connection delay of 5 seconds on the server. Thus you would expect JMeter to report response times >5000ms. On some clients this works fine, as expected, but on others it doesn't.
On some clients JMeter just gives a response time of (e.g.) 315ms, whilst other machines give 5315ms (which includes the 5-second delay). On the problem machines I also tested through SoapUI, which gives the same response time, and through Firefox, which shows a response time of >5000ms.
Theoretically there shouldn't be a difference between the machines, but obviously there is. I just can't find what it is.
Please use a Transaction Controller.
All your HTTP(S) requests should be part of the same Transaction Controller.
In order to include the delay time, check/select the Transaction Controller property mentioned below:
"Include duration of timer and pre-post processors in generated sample"
Hope this will help.