Use ColdFusion to read events over a TCP/IP stream

Our new phone system is making use of the Asterisk Manager API, which allows us to read events and issue commands over a TCP/IP stream. My question is: is there any way at all to use ColdFusion to read (and in turn process) the stream of events? As of now I'm able to view the phone events (incoming calls, transfers, hang-ups, etc.) via telnet, and I'm wondering if it's possible to use a ColdFusion event gateway to process these events as they come over.
Once the initial connection is made (via telnet), I have to submit the following key:values in order to authenticate the connection before the stream begins.
Action: login<CRLF>
Username: usr<CRLF>
Secret: abc123<CRLF>
<CRLF>
Just wanted to specify that, as I'm not sure if it's a deal-breaker for possibly using a web service in this manner. Also note we are using ColdFusion 10 Enterprise.

I realize this is an old thread, but I am posting this in case it helps the next guy ....
AFAIK, it cannot be done with a standard CF Event Gateway. However, one possibility is using Asterisk-Java. It is a Java library that allows communication with an Asterisk server. More specifically, its Manager interface:
... is capable of sending [actions] and receiving [responses] and
[events]. It does not add any further functionality but rather
provides a Java view to Asterisk's Manager API (freeing you from
TCP/IP connection and parsing stuff).
So it can be used to issue commands to the server, and receive events, just like telnet.
Starter example:
Download the Asterisk-Java jar and load it via this.javaSettings in your Application.cfc
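For reference, a rough sketch of what that might look like (the ./lib path is just an assumption; point loadPaths at wherever you saved the jar):
// Application.cfc - load the Asterisk-Java jar via this.javaSettings (CF10+)
component {
    this.name = "AsteriskDemo";
    this.javaSettings = { loadPaths = [ expandPath("./lib/") ], reloadOnChange = false };
}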
Create a ManagerConnection with the settings for your Asterisk server
factory = createObject( "java", "org.asteriskjava.manager.ManagerConnectionFactory" );
connection = factory.init( "hostNameOrIP"
                         , portNum
                         , "userName"
                         , "theSecret" ).createManagerConnection();
Create a CFC to act as a listener. It will receive and handle events from Asterisk:
component {
    public void function onManagerEvent( any managerEvent ) {
        // For demo purposes, just output a summary of the event
        WriteLog( text=arguments.managerEvent.toString(), file="AsteriskLog" );
    }
}
Using a bit of dynamic proxy magic, register the CFC with the connection:
proxyListener = createDynamicProxy( "path.YourCFCListener"
                                  , ["org.asteriskjava.manager.ManagerEventListener"] );
connection.addEventListener( proxyListener );
Log in to the server to begin receiving events, setting the appropriate event level: "off", "on", or a CSV list of the available event categories - "system", "call" and/or "log".
connection.login("on");
Run a simple "Ping" test to verify everything is working. Then sleep for a few seconds to allow some events to flow. Then close the connection.
action = createObject( "java", "org.asteriskjava.manager.action.PingAction" ).init();
response = connection.sendAction( action );
writeDump( response.getResponse() );
// let a few seconds of events flow, then disconnect to stop the stream
sleep( 4000 );
connection.logoff();
Check the demo log file. It should contain one or more events.
"Information","http-bio-8500-exec-4","10/14/16","15:17:19","XXXXX","org.asteriskjava.manager.event.ConnectEvent[dateReceived=Fri Oct 14 15:17:19 CDT 2016,....]"
NB: In a real application, the connection would probably be opened once, in OnApplicationStart, and stored in a persistent scope. Events would continue to stream as long as the connection remained open. The connection should only be closed when the application ends, to halt event streaming.
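For example, a rough Application.cfc sketch of that approach (it reuses the same placeholder host, port, credentials, and listener path from the snippets above):
public boolean function onApplicationStart() {
    // open the Manager connection once and keep it in the application scope
    var factory = createObject( "java", "org.asteriskjava.manager.ManagerConnectionFactory" )
                      .init( "hostNameOrIP", portNum, "userName", "theSecret" );
    application.connection = factory.createManagerConnection();
    application.connection.addEventListener(
        createDynamicProxy( "path.YourCFCListener", ["org.asteriskjava.manager.ManagerEventListener"] )
    );
    application.connection.login("on");
    return true;
}
public void function onApplicationEnd( struct applicationScope ) {
    // closing the connection here halts the event stream
    arguments.applicationScope.connection.logoff();
}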

Yes-- you'd want to use a Socket Gateway. Ben Nadel has a great writeup about how to do this: Using Socket Gateways To Communicate Between ColdFusion And Node.js
Although he uses Node.js in his example, you should be able to use his guide to set up the Socket Gateway, then handle the data passed to it as you see fit.
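If you go that route, the listener CFC you register with the gateway might look something like this sketch (the exact CFEvent keys to read are covered in the ColdFusion docs and in Ben's article):
component {
    public void function onIncomingMessage( required struct CFEvent ) {
        // raw TCP data from the socket arrives in the event's data structure
        WriteLog( text=arguments.CFEvent.data.message, file="SocketGateway" );
    }
}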

What you want is a server-side TCP client. I suggest easySocket, a simple UDF that allows you to send TCP messages from ColdFusion using Java sockets.
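If you would rather not pull in a UDF, the same idea can be roughed out directly against java.net.Socket - a sketch only, not easySocket's actual API (5038 is the usual Asterisk Manager port; adjust for your setup):
sock   = createObject( "java", "java.net.Socket" ).init( "hostNameOrIP", 5038 );
writer = createObject( "java", "java.io.PrintWriter" ).init( sock.getOutputStream(), true );
reader = createObject( "java", "java.io.BufferedReader" ).init(
             createObject( "java", "java.io.InputStreamReader" ).init( sock.getInputStream() ) );
// send the login action, terminated by a blank line
writer.print( "Action: login" & chr(13) & chr(10) );
writer.print( "Username: usr" & chr(13) & chr(10) );
writer.print( "Secret: abc123" & chr(13) & chr(10) );
writer.print( chr(13) & chr(10) );
writer.flush();
// read the first few lines of the response/event stream (blocking; demo only)
for ( i = 1; i <= 5; i++ ) {
    writeOutput( reader.readLine() & "<br>" );
}
sock.close();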

Related

AWS API Gateway - Is there a way to append metadata to the connection session, so it propagates to disconnect when that is triggered?

So I need to build a WebSocket API for my org. The requirements from the business are pretty typical websocket pattern stuff except for one detail:
This websocket api will be used by different teams in our org, and each team needs its own separate active-connections DynamoDB table.
Now in a typical websocket api, there would be a single connections table that the connect and disconnect lambda functions write to and delete from. Also, the hooks in the websocket api ensure that the connectionId needed to identify a connection/session is always in the event.requestContext. Easy peasy for a single connections table.
However, in my approach of having a separate active-connections db/table for each team, it gets more complicated. Yes, it's true that the connect lambda is very easy to code so that it expects a "TeamDatabaseID" from somewhere in the initial connection request - Headers, queryStringParameters, etc.
The problem is in the subsequent disconnect that could be triggered from either client or server. The disconnect hook will run the disconnect function, and pass in the default requestContext with the connectionId, but with no TeamDatabaseID - which the disconnect lambda needs to have access to in order to know which database to delete from.
Is there a way to do this? Is there some notion of a context object that I can set values in from the initial connection, so that when the disconnect happens, the teamDatabaseID is propagated in some way to the subsequent disconnect lambda? I tried writing to the requestContext - and that seems to only be alive for the execution of the given lambda.
Instead of having a single Amazon API Gateway Web Socket API for multiple teams, could you instead have one Web Socket API per team?

What notification is provided for a lost connection in a C++ gRPC async server

I have an async gRPC server for Windows written in C++. I’d like to detect the loss of connection to a client – whether a network connection is lost, or the client crashes, etc. I see references to the keepalive channel arguments, and I’ve tried various combinations of those settings, such as:
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
builder.AddChannelArgument(GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS, 9000);
builder.AddChannelArgument(GRPC_ARG_HTTP2_BDP_PROBE, 1);
I've done some testing with a streaming RPC method. If I kill the client process and then try to send data to the client, the lost connection is detected. I don't actually even have to send data. I can set an Alarm object to trigger immediately and that causes the call handler to be cancelled. However, if I don't try to send data (or set an alarm) after killing the client process then there's no notification or callback that I've been able to find/enable. I must not have a complete understanding. So:
How does the detection of a lost connection manifest itself for the server? Is there a callback method, or notification of some type? My server doesn’t receive any errors; the completion queue’s ‘Next()’ method never returns, etc.
Does this detection work for both unary (call/response) and streaming methods?
Does the server detection of a lost connection work whether or not the client has implemented lost connection / keepalive logic?
Is there some method besides the keepalive channel arguments that is preferred?
Thanks - any help is appreciated.
You can use ServerContext::AsyncNotifyWhenDone() to get a notification when the request has been cancelled.
https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a0f1289f31257e6dbef57bc901bd7b5f2

Is it possible to connect to the Google IOTCore MQTT Bridge via Javascript?

I've been trying to use the JavaScript version of the Eclipse Paho MQTT client to access the Google IoT Core MQTT Bridge, as suggested here:
https://cloud.google.com/iot/docs/how-tos/mqtt-bridge
However, whatever I do, any attempt to connect with known good credentials (working with other clients) results in this connection error:
errorCode: 7, errorMessage: "AMQJS0007E Socket error:undefined."
Not much to go on there, so I'm wondering if anyone has ever been successful connecting to the MQTT Bridge via Javascript with Eclipse Paho, the client implementation suggested by Google in their documentation.
I've gone through their troubleshooting steps, and things seem to be on the up and up, so no help there either.
https://cloud.google.com/iot/docs/troubleshooting
I have noticed that in their docs they have sample code for Java/Python, etc, but not Javascript, so I'm wondering if it's simply not supported and their documentation just fails to mention as such.
I've simplified my code to just use the 'Hello World' example in the Paho documentation, and as far as I can tell I've done things correctly (including using my device path as the ClientID, the JWT token as the password, specifying an 'unused' userName field and explicitly requiring MQTT v3.1.1).
In the meantime I'm falling back to polling via their HTTP bridge, but that has obvious latency and network traffic shortcomings.
// Create a client instance
client = new Paho.MQTT.Client("mqtt.googleapis.com", Number(8883), "projects/[my-project-id]/locations/us-central1/registries/[my registry name]/devices/[my device id]");

// set callback handlers
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;

// connect the client
client.connect({
    mqttVersion: 4,     // maps to MQTT V3.1.1, required by IOTCore
    onSuccess: onConnect,
    onFailure: onFailure,
    userName: 'unused', // suggested by Google for this field
    password: '[My Confirmed Working JWT Token]' // working JWT token
});

function onFailure(resp) {
    console.log(resp);
}

// called when the client connects
function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    console.log("onConnect");
    client.subscribe("World");
    message = new Paho.MQTT.Message("Hello");
    message.destinationName = "World";
    client.send(message);
}

// called when the client loses its connection
function onConnectionLost(responseObject) {
    if (responseObject.errorCode !== 0) {
        console.log("onConnectionLost:" + responseObject.errorMessage);
    }
}

// called when a message arrives
function onMessageArrived(message) {
    console.log("onMessageArrived:" + message.payloadString);
}
I'm a Googler (but I don't work in Cloud IoT).
Your code looks good to me and it should work. I will try it for myself this evening or tomorrow and report back to you.
I've spent the past day working on a Golang version of the samples published on Google's documentation. Like you, I was disappointed to not see all Google's regular languages covered by samples.
Are you running the code from a browser or is it running on Node.JS?
Do you have a package.json (if Node) that you would share too please?
Update
Here's a Node.js (JavaScript, but non-browser) sample that connects to Cloud IoT, subscribes to /devices/${DEVICE}/config and publishes to /devices/${DEVICE}/events.
https://gist.github.com/DazWilkin/65ad8890d5f58eae9612632d594af2de
Place all the files in the same directory
Replace values in index.js of the location of Google's CA and your key
Replace the [[YOUR-X]] values in config.json
Use "npm install" to pull the packages
Use node index.js
You should be able to pull messages from the Pub/Sub subscription and you should be able to send config messages to the device.
Short answer is no. Google Cloud IoT Core doesn't support WebSockets.
All the JavaScript MQTT libraries use WebSockets because JavaScript in the browser is restricted to making HTTP requests and WebSocket connections only.

Automate Suspended orchestrations to be resumed automatically

We have a BizTalk application which sends XML files to external applications by using a web-service.
BizTalk calls the web-services method by passing XML file and destination application URL as parameters.
If the external applications are not able to receive the XML, or if there is no response received from the web-service back to BizTalk the message gets suspended in BizTalk.
Presently for this situation we manually go to BizTalk admin and resume each suspended message.
Our clients want this process to be fully automated; they want a dashboard which shows a list of message details and a button, and on its click all the suspended messages have to be resumed.
If you are doing this within an orchestration and catching the connection error, just add a delay shape configured to 5 hours. Or set a retry interval to 300 minutes and multiple retries on the send port if that makes sense. You can do this using the rule engine as well.
Why not implement an asynchronous pattern?
You make it so that the orchestration sends the file out via a send shape while initializing a certain correlation set.
You then put in a listen shape with two branches:
- the receive (following the initialized correlation set)
- a delay shape set to 5 hours.
When you receive the message, your orchestration can handle it gracefully.
When you don't, the delay shape will kick in and you handle accordingly.
The benefit of this solution compared to 40Alpha's is that your orchestration will only 'wake up' from a dehydrated state when the timeout kicks in OR when the response is received. In 40Alpha's example, the orchestration would wake up many times, consuming extra resources.
You may want to look at a product like BizTalk 360. It has that sort of monitoring and command capability built into it. I'm not sure it works with BizTalk 2006 R2 though, but you should be thinking about moving off that platform anyway, as it is going out of Microsoft support.

Periodic tasks in Django/Celery - How to notify the user on screen?

I have now successfully set up django-celery to check my existing tasks and remind the user by email when a task is due:
@periodic_task(run_every=datetime.timedelta(minutes=1))
def check_for_tasks():
    tasks = mdls.Task.objects.all()
    now = datetime.datetime.utcnow().replace(tzinfo=utc, second=0, microsecond=0)
    for task in tasks:
        if task.reminder_date_time == now:
            sendmail(...)
So far so good, however what if I wanted to also display a popup to the user as a reminder?
Twitter bootstrap allows creating popups and displaying them from javascript:
$(this).modal('show');
The problem is, though: how can a Celery worker daemon run this JavaScript in the user's browser? Maybe I am going about this the wrong way entirely and it is not possible at all. Therefore the question remains: can a cron job on Celery ever be used to achieve a UI notification in the browser?
Well, you can't use the Django messages framework, because the task has no way to access the user's request, and you can't pass request objects to the workers either, because they're unpicklable.
But you could definitely use something like django-notifications. You could create notifications in your task and attach them to the user in question. Then, you could retrieve those messages from your view and handle them in your templates to your liking. The user would see the notification on their next request (or you could use AJAX polling for real-time-ish notifications or HTML5 websockets for real real-time [see django-websocket]).
Yes it is possible but it is not easy. Ways to do/emulate server to client communication:
polling
The most trivial approach would be polling the server from javascript. Your celery task could create rows in your database that can be fetched by a url like /updates which checks for new updates, marks the rows as read and returns them.
long polling
Often referred to as comet. The client makes a request to the server, which is held open until the server decides to return something. See django-comet for example.
websocket
To enable true server to client communication you need an open connection from the client to the server. django-socketio and django-websocket are examples of reusable apps that make this possible.
My advice judging by your question's context: either do some basic polling or stick with the emails.