I'm currently attempting to port an existing Spring Cloud Stream app over to Spring Cloud Function. When running the application, we use bindings to map from an input queue (either Kafka or IBM MQ). Messages arriving on the queue are run through a custom MessageConverter and then handed to our custom routing function.
In Spring Cloud Stream, the bindings allowed direct access to the test channels (by calling input(), for example), and one could simply send a message on that channel to start the process rolling.
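To illustrate, the old-style test looked roughly like this - a minimal sketch assuming a Processor binding and spring-cloud-stream-test-support on the classpath (the class name and payload are illustrative):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.cloud.stream.test.binder.MessageCollector;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class RoutingTest {

    @Autowired
    private Processor processor;        // direct access to input()/output()

    @Autowired
    private MessageCollector collector; // captures whatever the app emits

    @Test
    public void routesConvertedMessage() {
        // Sending here goes through the binding, so the custom MessageConverter applies.
        processor.input().send(MessageBuilder.withPayload("{\"type\":\"ORDER\"}").build());

        Message<?> out = collector.forChannel(processor.output()).poll();
        // assertions on out ...
    }
}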
What I'm trying to find is a similar mechanism for sending messages to the queue that the Spring Cloud Function is bound to, but I can't figure out how to do it.
I've seen how one can access the routingFunction via HTTP, but unfortunately this bypasses my MessageConverter, so I still can't do a true end-to-end test.
Has anyone got any ideas or pointers on how I might get this to work with Spring Cloud Function?
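One avenue that may be worth exploring is spring-cloud-stream's test binder, which exposes an InputDestination for sending into a function binding. A rough, unverified sketch - MyStreamApp and routingFunction are placeholders for the real application class and function definition:

import org.springframework.boot.WebApplicationType;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.stream.binder.test.InputDestination;
import org.springframework.cloud.stream.binder.test.OutputDestination;
import org.springframework.cloud.stream.binder.test.TestChannelBinderConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class FunctionRoutingSketch {

    public static void main(String[] args) {
        try (ConfigurableApplicationContext context = new SpringApplicationBuilder(
                TestChannelBinderConfiguration.getCompleteConfiguration(MyStreamApp.class))
                .web(WebApplicationType.NONE)
                .run("--spring.cloud.function.definition=routingFunction")) {

            InputDestination input = context.getBean(InputDestination.class);
            OutputDestination output = context.getBean(OutputDestination.class);

            // Goes through the binder rather than HTTP, so registered
            // MessageConverters should apply.
            input.send(MessageBuilder.withPayload("{\"type\":\"ORDER\"}".getBytes()).build());

            Message<byte[]> result = output.receive(1000);
            // assertions on result ...
        }
    }
}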
Related
I need to poll for a close-to-real-time reading from a serial device (an ESP32) from a web application. I am currently doing this using Particle Photons and the Particle Cloud API, and am wondering if there is a way to achieve something similar using Google Cloud IoT.
From reading the documentation, it seems a common way to do this is via Pub/Sub, then publishing to BigQuery via Dataflow or to Firebase via Cloud Functions. However, to keep costs down, I am hoping to trigger a data exchange only when the device receives an external request.
It looks like there is a way to send commands to the IoT device - am I on the right track with this? I can't find documentation for this part, but my assumption is that after receiving a command the device would publish to a Pub/Sub topic, which could trigger a Cloud Function to update Firebase?
Lastly, it also looks like there is a way to do a GET request for the device's DeviceState, but state can only be updated once per second (which might also work, though the docs seem to discourage using state for this purpose).
If there is another low-latency, low-cost way to allow a client to poll for a real-time value from the IoT device that I've missed, please let me know. Thank you!
Espressif has integrated Google's Cloud IoT Device SDK, which creates an authenticated bidirectional MQTT pipe between the device and IoT Core. As you've already discovered, you can send anything from the cloud to the device (it's called a "command", but it's just an MQTT payload, so you can put almost anything you want in it) and vice versa (it's called "telemetry", but again it's just an MQTT payload). Once incoming messages from devices reach the cloud, Pub/Sub can route them wherever you want. I don't know if I'd call it real-time, but latencies on a good WiFi network tend to be under a second.
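To make the command direction concrete, here is a rough Java sketch of pushing a payload to a device through the IoT Core commands API - the project, region, registry, and device IDs are placeholders, and error handling is omitted:

import java.util.Base64;

import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.cloudiot.v1.CloudIot;
import com.google.api.services.cloudiot.v1.model.SendCommandToDeviceRequest;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;

public class CommandSender {

    public static void main(String[] args) throws Exception {
        // Fully qualified device name; all four segments are placeholders.
        String name = String.format(
                "projects/%s/locations/%s/registries/%s/devices/%s",
                "my-project", "us-central1", "my-registry", "my-esp32");

        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
                .createScoped("https://www.googleapis.com/auth/cloud-platform");

        CloudIot service = new CloudIot.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                JacksonFactory.getDefaultInstance(),
                new HttpCredentialsAdapter(credentials))
                .setApplicationName("command-sender")
                .build();

        // The payload is opaque to IoT Core; the device receives it on its
        // /devices/<id>/commands/# MQTT subscription.
        SendCommandToDeviceRequest request = new SendCommandToDeviceRequest()
                .setBinaryData(Base64.getEncoder().encodeToString("read-now".getBytes()));

        service.projects().locations().registries().devices()
                .sendCommandToDevice(name, request).execute();
    }
}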
So I've built this API. It consists of a Lambda function (accessible via API Gateway) which talks to a Neptune graph database instance over WebSockets.
Everything is wired up and working, but I recently started noticing intermittent 500s coming from the API. After some investigation I found that the Neptune Gremlin server was dropping/refusing connections whenever multiple requests came in close together.
I found this page, which suggests that the ephemeral nature of serverless doesn't play nicely with WebSockets, so the WebSocket connection should be closed manually after each request. But after implementing that I found no difference – still 500s.
The page also suggests that when using Gremlin on Neptune you should probably send HTTP requests to Neptune rather than using WebSockets:
"Alternatively, if you are using Gremlin, consider submitting requests to the Gremlin HTTP REST endpoint rather than the WebSockets endpoint, thereby avoiding the need to create and manage the lifetime of a connection pool."
The downside of this approach is that we would then have to use string-based queries (which means rewriting a large portion of the project). Another downside is that the Gremlin HTTP endpoint returns fairly unstructured data.
So what I'm wondering is whether anyone has got Lambda reliably talking to Neptune over WebSockets? If so, how?
Edit:
Since I'm using the AWS Chalice framework, I don't think I have direct access to the handler function. Below is what my Lambda looks like.
And here is the connect() code that it calls:
import os

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

def connect():
    # Neptune's WebSocket endpoint (e.g. wss://<cluster>:8182/gremlin) from the environment
    conn_string = os.environ.get('GRAPH_DB')
    # Module-level traversal source, shared for the rest of the invocation
    global g
    g = Graph().traversal().withRemote(DriverRemoteConnection(conn_string, 'g'))
So when the app starts (when a Lambda instance is spun up), that connect() function is called and the app gets a connection to Neptune. From there, the app passes around that global g variable so the same connection instance is used for the whole invocation. I then tried calling close() on the DriverRemoteConnection object before returning the results of a request (and that's where I found I was still getting 500s).
Yes, it is possible to use WebSockets within a Lambda function to communicate with Neptune. There are different nuances to doing this depending on the programming language that you're using. Ultimately, it comes down to instantiating the client connection and closing the connection within the handler() of the Lambda function.
If using Java [1], you can create the Cluster object outside of the handler so that it can be reused across Lambda invocations. But the Client that is configured from that Cluster object must be instantiated and closed during each invocation.
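For illustration, a minimal Java sketch of that pattern (the endpoint environment variable and the handler shape are placeholders, and IAM auth is not shown):

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;

public class NeptuneHandler {

    // Built once per container, so warm invocations reuse it.
    private static final Cluster cluster = Cluster.build()
            .addContactPoint(System.getenv("NEPTUNE_ENDPOINT")) // hypothetical env var
            .port(8182)
            .enableSsl(true)
            .create();

    public String handleRequest(Object input) {
        // The Client (and its WebSocket connection) lives only for this invocation.
        Client client = cluster.connect();
        try {
            return client.submit("g.V().limit(1).count()").one().getString();
        } finally {
            client.close();
        }
    }
}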
Do you have a snippet of code that you're using that you could share for review?
[1] https://docs.aws.amazon.com/neptune/latest/userguide/best-practices-gremlin-java-close-connections.html
I'm a newbie to AWS and am trying to work with SQS for the first time. I have an Oracle Service Bus (OSB) in a non-cloud environment and would like to configure OSB to consume messages from Amazon SQS. The documentation suggests using the REST API and polling repeatedly for messages. I've also read about the client library for JMS, which would let OSB treat SQS as a JMS provider. What is the best approach to achieve this? I appreciate your input.
The easiest (though not necessarily the purest) way would be to create a Java EE app that imports the SQS libraries, pulls messages from AWS, and puts them on a local queue for OSB to process. The example code snippets are in Java, so it should be relatively straightforward.
The purest way would be to set SQS up as a remote JMS provider. However, how to do that is not so clear - you may end up writing most of the code that went into option #1 above, but as a JMS client library instead of an MDB.
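For what it's worth, the Amazon SQS Java Messaging Library already wraps SQS in the standard JMS interfaces, which covers much of the client side of either option. A minimal consumer sketch - the queue name is a placeholder, and the received message would still need to be handed to OSB (e.g. re-enqueued on a local JMS queue):

import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnection;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class SqsBridge {

    public static void main(String[] args) throws Exception {
        SQSConnectionFactory factory = new SQSConnectionFactory(
                new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());

        SQSConnection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("osb-inbound"); // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();

        Message message = consumer.receive(5000); // wait up to 5s for a message
        // ... hand the message to OSB here ...

        connection.close();
    }
}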
I've got a Grails app (version 2.2.4) with a controller method that "logs" all requests to an external web service (JSON over HTTP; one-way message, no response needed). I want to decouple the controller method from calling the web service directly/synchronously and provide a simple "queue" which can store the calls if the web service is unavailable and then send them through once the service is back up again.
This sounds like a good fit for some sort of JMS solution, but I've not got any experience with JMS (so the learning curve could be an issue). Should I be using one of the available messaging plugins, or is that overkill for my simple requirements? I don't want a separate messaging app; it has to be embedded in my webapp, and I'd prefer something small and simple over something more complicated and robust (so advice on which plugin to use would be welcome).
The alternative is to implement an async service myself and queue the "messages" in the database (reading them via a Quartz job), or in memory with something like java.util.concurrent.ConcurrentLinkedQueue?
EDIT: Another approach could be to use log4j with a custom appender wrapped in an AsyncAppender.
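A rough log4j 1.x sketch of that idea - the custom appender below is hypothetical and the POST call is stubbed out; note that AsyncAppender's buffer is in-memory only, so queued calls would not survive a restart:

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

public class WebServiceAppenderSketch {

    // Hypothetical appender that forwards each event to the logging web service.
    static class WebServiceAppender extends AppenderSkeleton {
        @Override
        protected void append(LoggingEvent event) {
            // POST event.getRenderedMessage() to the web service here.
        }

        @Override
        public void close() { }

        @Override
        public boolean requiresLayout() { return false; }
    }

    public static void main(String[] args) {
        AsyncAppender async = new AsyncAppender(); // buffers events on a background thread
        async.addAppender(new WebServiceAppender());

        Logger logger = Logger.getLogger("audit");
        logger.addAppender(async);
        logger.info("{\"event\":\"request\"}");    // returns immediately
    }
}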
"The alternative is to implement an async service myself and queue the 'messages' in the database (reading them via a Quartz job)"
I went ahead and tried this approach. It was very straightforward and ended up being only a "screen" length of code. I tested it with a failing web service endpoint as well as an app restart (crash), and it handled both. I used a single service class both to persist the messages (as a Grails domain class) and to flush the queue: triggered by the Quartz scheduler, it reads the DB and fires off the web service calls, removing the DB entity when the web service returns a 200 status code.
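Not the actual Grails code, but a plain-Java sketch of the same flush pattern - the table, columns, JDBC URL, and target URL are all placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class FlushQueueJob implements Job {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    @Override
    public void execute(JobExecutionContext ctx) throws JobExecutionException {
        try (Connection db = DriverManager.getConnection("jdbc:h2:./queue"); // placeholder URL
             Statement st = db.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, payload FROM queued_message ORDER BY id")) {
            while (rs.next()) {
                HttpRequest req = HttpRequest.newBuilder()
                        .uri(URI.create("https://logging.example.com/events")) // placeholder
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(rs.getString("payload")))
                        .build();
                HttpResponse<Void> resp = HTTP.send(req, HttpResponse.BodyHandlers.discarding());
                if (resp.statusCode() != 200) {
                    break; // service still down; leave the rest queued for the next run
                }
                // Only drop the row once the web service has accepted it.
                try (PreparedStatement del =
                             db.prepareStatement("DELETE FROM queued_message WHERE id = ?")) {
                    del.setLong(1, rs.getLong("id"));
                    del.executeUpdate();
                }
            }
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}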
I need to create a PRIVATE message queue on a remote machine, and I have resigned myself to the fact that I can't do this with the .NET Framework in a straightforward manner. I can create a public message queue on a remote machine, but not a PRIVATE one. I can create a message queue (public or private) locally.
I am wondering if anyone knows how to access MSMQ through WMI.
Edit: I don't see any way to do it using the MSMQ WMI provider. I may have to get tricky and use PsExec to log onto the remote server and execute some code.
Yes, queue creation is simple in .NET; however, you cannot create a private queue on a remote machine this way.
I have been thinking about adding queue creation to the MSMQ WMI provider for some time... If you need it for a real product/customer, you can contact me and I will consider giving this feature priority.
All the best,
Yoel Arnon
A blog post about MSMQ and WMI is here: http://msmq.spaces.live.com/blog/cns!393534E869CE55B7!210.entry
It says there is a provider here: http://www.msmq.biz/Blog/MSMQWmiSetup.msi
It also says there is a reference here: http://www.msmq.biz/Blog/MSMQ%20WMI%20Provider%20Objects.doc
Hope this helps.
WMI can't do this out of the box. The previous answer points to a somewhat obscure WMI provider, but it doesn't even seem to support queue creation.
This is very simple in .NET, however! I wouldn't go so far as PsExec.
MessageQueue.Create
I wanted to create remote private queues too, but since .NET doesn't support it, we decided to just use remote public queues instead. If we set Send and Receive permissions on the queues as desired, this should be fine.
One idea for a workaround would be to write your own Windows service or web service that runs on the same machine where the queue needs to reside. You could call this service remotely through a socket or over HTTP, and your locally-running code could create the local private queue.
If you use a direct format name to reference the queue (e.g. FormatName:DIRECT=OS:machineName\private$\TestQueue), you can Send and Receive from a remote private queue.
' Creates a private queue on the local machine (run this on the machine where the queue must live)
set qinfo = CreateObject("MSMQ.MSMQQueueInfo")
qinfo.PathName = ".\Private$\TestQueue"  ' "." means the local machine
qinfo.Label = ".\Private$\TestQueue"
qinfo.Journal = "1"                      ' 1 = copy retrieved messages to the queue's journal
qinfo.Create
Copy the code into a text editor, save the file with a .vbs extension, and run it.