I am developing a touch screen device that gets data from field devices using the BACnet protocol.
I'm currently developing some pages that show device alarms. As per the BACnet documentation, there is a service named GetAlarmSummary. When I invoke this service (as a client), the field device answers with a list that includes, for each alarm, the following information:
object identifier
alarm state
list of acked transitions
Now my question is: how can I acknowledge an alarm that I read through the GetAlarmSummary service, considering that the AcknowledgeAlarm service requires the following information to be provided:
Event Object Identifier
Event State
Acknowledged Time Stamp
Acknowledgment Source
Time Of Acknowledgment
Thanks in advance
In short, you can't directly; you have to read the required information in a second step. Use the GetEventInformation service to obtain information about active event states. It returns all the information required to acknowledge alarms.
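For illustration, the two-step flow could look roughly like this in Java. This is a minimal sketch: BacnetClient, EventSummary, and the method names are hypothetical placeholders for whatever your BACnet stack exposes, not a real library API.

```java
import java.time.Instant;
import java.util.List;

// Hypothetical abstraction over your BACnet stack (not a real library API).
interface BacnetClient {
    // Wraps the GetEventInformation service: for each object with an active
    // event state it returns the event state, acked transitions and the
    // event time stamps.
    List<EventSummary> getEventInformation(int deviceInstance);

    // Wraps the AcknowledgeAlarm service.
    void acknowledgeAlarm(int deviceInstance,
                          String eventObjectId,
                          String eventStateAcknowledged,
                          Instant eventTimeStamp,        // time stamp reported by the device
                          String acknowledgmentSource,
                          Instant timeOfAcknowledgment);
}

record EventSummary(String objectId, String eventState, Instant eventTimeStamp) {}

class AlarmAcknowledger {
    // Read everything needed via GetEventInformation, then acknowledge each alarm.
    void acknowledgeAll(BacnetClient client, int deviceInstance) {
        for (EventSummary e : client.getEventInformation(deviceInstance)) {
            client.acknowledgeAlarm(deviceInstance,
                    e.objectId(),
                    e.eventState(),
                    e.eventTimeStamp(),
                    "touch-panel-1",   // acknowledgment source shown on the field device
                    Instant.now());    // time of acknowledgment
        }
    }
}
```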
Note that execution of GetAlarmSummary and GetEnrollmentSummary is deprecated; see the upcoming Addendum 135-2012av. Regardless, initiation will still be required if a device doesn't support AE-INFO-B.
We have a project that receives data from sensors and then sends this data to GCP. For this we have used GCP's Pub/Sub model. The issue is that when we pull the messages, they are not in order, so we are not able to verify whether the data we sent to GCP has actually arrived.
GCP also mentions that they don't guarantee the order of messages: https://cloud.google.com/pubsub/docs/ordering
Is there a better way to verify these messages, other than the solutions recommended by GCP?
Ordering is not guaranteed in general in Pub/Sub, it is true. However, when using ordering keys as described in the ordering documentation to which you link, ordering is guaranteed. You would need to set an ordering key on published messages and enable message ordering on your subscription. Right now, the documentation only shows how to do this in Java, though other language examples will be coming soon.
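For example, with the Java client library it looks roughly like this. This is a minimal sketch: project and topic names are placeholders, and the subscription must also be created with message ordering enabled (enable_message_ordering).

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class OrderedPublisher {
  public static void main(String[] args) throws Exception {
    Publisher publisher = Publisher.newBuilder(TopicName.of("my-project", "my-topic"))
        .setEnableMessageOrdering(true)   // required before setting ordering keys
        .build();
    try {
      for (int i = 0; i < 3; i++) {
        PubsubMessage message = PubsubMessage.newBuilder()
            .setData(ByteString.copyFromUtf8("reading-" + i))
            // messages with the same ordering key are delivered in publish order
            .setOrderingKey("sensor-42")
            .build();
        publisher.publish(message);
      }
    } finally {
      publisher.shutdown();
    }
  }
}
```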
Without using ordering, you could potentially monitor the backlog to see when num_undelivered_messages is 0. However, this has some drawbacks:
You would have to continuously query the metric to see its value.
The delay in computing the metric is O(minutes) and so it may be stale, resulting in either not tracking messages that were very recently published (resulting in it showing a value less than the actual size of the backlog) or not recording the fact that some messages were delivered and acked (resulting in it showing a value greater than the actual size of the backlog).
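If you do go the backlog-monitoring route described above, a sketch of polling the metric with the Java Cloud Monitoring client could look like this (project and subscription names are placeholders):

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.util.Timestamps;

public class BacklogCheck {
  public static void main(String[] args) throws Exception {
    try (MetricServiceClient client = MetricServiceClient.create()) {
      long now = System.currentTimeMillis();
      ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
          .setName(ProjectName.of("my-project").toString())
          .setFilter("metric.type=\"pubsub.googleapis.com/subscription/num_undelivered_messages\""
              + " AND resource.labels.subscription_id=\"my-subscription\"")
          .setInterval(TimeInterval.newBuilder()
              .setStartTime(Timestamps.fromMillis(now - 10 * 60 * 1000))  // look back 10 minutes
              .setEndTime(Timestamps.fromMillis(now)))
          .build();
      for (TimeSeries series : client.listTimeSeries(request).iterateAll()) {
        // points are returned newest first, so point 0 is the latest sample
        long backlog = series.getPoints(0).getValue().getInt64Value();
        System.out.println("num_undelivered_messages = " + backlog);
      }
    }
  }
}
```

Remember that, per the caveat above, the value can lag reality by minutes, so treat a reading of 0 as a hint rather than proof that everything published has been delivered.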
In general, it is preferred with Pub/Sub that your subscribers are always running and ready to receive data when it is published. Cloud Pub/Sub guarantees that messages successfully published will be received by subscribers, assuming subscribers are able to receive the messages within the message retention duration, which defaults to seven days.
Is it possible to ensure that a message was successfully delivered to an Event Hub when sending it with the log-to-eventhub policy in API Management?
Edit: In our solution we cannot allow any request to proceed if a message was not delivered to the Event Hub. As far as I can tell the log-to-eventhub policy doesn't check for this.
Welcome to Stack Overflow!
Note: Once the data has been passed to an Event Hub, it is persisted and will wait for Event Hub consumers to process it. The Event Hub does not care how it is processed; it just cares about making sure the message will be successfully delivered.
For more details, refer to “Why send to an Azure Event Hub?”.
Hope this helps.
Event Hubs is built on top of Service Bus. According to the Service Bus documentation,
Using any of the supported Service Bus API clients, send operations into Service Bus are always explicitly settled, meaning that the API operation waits for an acceptance result from Service Bus to arrive, and then completes the send operation.
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a "tracking-id" inside of it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
When using the AMQP protocol, which is the exclusive protocol for the .NET Standard client and the Java client and which is an option for the .NET Framework client, message transfers and settlements are pipelined and completely asynchronous, and it is recommended that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operations to different entities overlap. The completions might also occur out of the original send order.
I think this means the SDK is getting a receipt for each message.
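To illustrate that explicit settlement at the SDK level (which is separate from whatever the log-to-eventhub policy itself does), a synchronous send with the azure-messaging-eventhubs Java client either returns after the service accepts the batch or throws. The connection string and hub name below are placeholders.

```java
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventDataBatch;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

public class SendWithAck {
  public static void main(String[] args) {
    EventHubProducerClient producer = new EventHubClientBuilder()
        .connectionString("<connection-string>", "<event-hub-name>")
        .buildProducerClient();
    try {
      EventDataBatch batch = producer.createBatch();
      if (!batch.tryAdd(new EventData("hello"))) {
        throw new IllegalStateException("Event too large for an empty batch");
      }
      // send() blocks until the service accepts the batch; on rejection it throws,
      // so reaching the next line means the Event Hub has persisted the events.
      producer.send(batch);
      System.out.println("Batch accepted by the Event Hub");
    } finally {
      producer.close();
    }
  }
}
```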
This theory is further supported by the RetryPolicy class used in the ClientEntity.RetryPolicy property of the EventHubSender class.
In the API Management section on logging-to-eventhub, there is also a section on retry intervals. Below that are sections on modifying the return response or taking action on certain status codes.
Once the status codes of a failed logging attempt are known, you can modify the policies to take action on failed logging attempts.
I am looking into building a simple solution where producer services push events to a message queue and a streaming service then makes those available through a gRPC streaming API.
Cloud Pub/Sub seems well suited for the job; however, scaling the streaming service means that each copy of that service would need to create its own subscription and delete it before scaling down, and that seems unnecessarily complicated and not what the platform was intended for.
On the other hand, Kafka seems to work well for something like this, but I'd like to avoid having to manage the underlying platform itself and instead leverage the cloud infrastructure.
I should also mention that the reason for having a streaming API is to allow for streaming towards a frontend (which may not have access to the underlying infrastructure).
Is there a better way to go about doing something like this with the GCP platform without going the route of deploying and managing my own infrastructure?
If you essentially want ephemeral subscriptions, then there are a few things you can set on the Subscription object when you create a subscription:
Set the expiration_policy to a smaller duration. When a subscriber is not receiving messages for that time period, the subscription will be deleted. The tradeoff is that if your subscriber is down due to a transient issue that lasts longer than this period, then the subscription will be deleted. By default, the expiration is 31 days. You can set this as low as 1 day. For pull subscribers, the subscribers simply need to stop issuing requests to Cloud Pub/Sub for the timer on their expiration to start. For push subscriptions, the timer starts based on when no messages are successfully delivered to the endpoint. Therefore, if no messages are published or if the endpoint is returning an error for all pushed messages, the timer is in effect.
Reduce the value of message_retention_duration. This is the time period for which messages are kept in the event a subscriber is not receiving messages and acking them. By default, this is 7 days. You can set it as low as 10 minutes. The tradeoff is that if your subscriber disconnects or gets behind in processing messages by more than this duration, messages older than that will be deleted and the subscriber will not see them.
Subscribers that cleanly shut down could probably just call DeleteSubscription themselves so that the subscription goes away immediately, but for ones that shut down unexpectedly, setting these two properties will minimize the time for which the subscription continues to exist and the number of messages (that will never get delivered) that will be retained.
Keep in mind that Cloud Pub/Sub quotas limit one to 10,000 subscriptions per topic and per project. Therefore, if a lot of subscriptions are created and either active or not cleaned up (manually, or automatically after expiration_policy's ttl has passed), then new subscriptions may not be able to be created.
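As a sketch, creating a subscription with both of those settings via the Java admin client could look like this (project, topic, and subscription names are placeholders):

```java
import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.protobuf.Duration;
import com.google.pubsub.v1.ExpirationPolicy;
import com.google.pubsub.v1.Subscription;
import com.google.pubsub.v1.SubscriptionName;
import com.google.pubsub.v1.TopicName;

public class EphemeralSubscription {
  public static void main(String[] args) throws Exception {
    try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
      Subscription request = Subscription.newBuilder()
          .setName(SubscriptionName.of("my-project", "worker-1").toString())
          .setTopic(TopicName.of("my-project", "my-topic").toString())
          // delete the subscription after 1 day without subscriber activity
          .setExpirationPolicy(ExpirationPolicy.newBuilder()
              .setTtl(Duration.newBuilder().setSeconds(24 * 60 * 60)))
          // keep unacked messages for at most 10 minutes
          .setMessageRetentionDuration(Duration.newBuilder().setSeconds(600))
          .build();
      admin.createSubscription(request);
    }
  }
}
```

A subscriber that shuts down cleanly can still call deleteSubscription on the same admin client so the subscription disappears immediately, as noted above.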
I think your original idea was better than ephemeral subscriptions, tbh. I mean, it works, but it feels totally unnatural. It depends on what your requirements are. For example, do clients only need to receive messages while they're connected, or do they all need to get all messages?
Only While Connected
Your original idea was better imo. What I probably would have done is to create a gRPC stream service that clients could connect to. The implementation is essentially an observer pattern. The consumer will receive a message and then iterate through the subscribers to do a "Send" to all of them. From there, any time a client connects to the service, it just registers itself with that observer collection and unregisters when it disconnects. Horizontal scaling is passive since clients are sticky to whatever instance they've connected to.
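A minimal sketch of that observer collection with grpc-java server streaming; the Event type parameter stands in for whatever message your proto defines, and the gRPC service wiring is omitted.

```java
import io.grpc.stub.StreamObserver;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Broadcast hub: the queue consumer calls broadcast(), gRPC handlers register/unregister clients.
public class EventBroadcaster<Event> {
  private final Set<StreamObserver<Event>> subscribers = ConcurrentHashMap.newKeySet();

  // Called from the gRPC handler when a client opens the server stream.
  public void register(StreamObserver<Event> client) {
    subscribers.add(client);
  }

  // Called when the client disconnects (e.g. via ServerCallStreamObserver's onCancel handler).
  public void unregister(StreamObserver<Event> client) {
    subscribers.remove(client);
  }

  // Called by the Pub/Sub (or other queue) consumer for every message it receives.
  public void broadcast(Event event) {
    for (StreamObserver<Event> client : subscribers) {
      try {
        client.onNext(event);
      } catch (RuntimeException e) {
        // client went away mid-send; drop it from the observer list
        unregister(client);
      }
    }
  }
}
```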
Everyone always gets the message, eventually
The concept is similar to the above, but the client doesn't implicitly un-register from the observer on disconnect. Instead, it registers and un-registers explicitly (through a method/command designed to do so). Modify the 'on disconnected' logic to tell the observer list that the client has gone offline. The consumer's broadcast logic is then slightly different: it iterates through the list and says "if online, then send, else queue", and sends the message to an ephemeral queue (that belongs to the client). Your 'on connect' logic will then send all messages that are in the queue to the client before informing the consumer that it's back online. Basically an inbox. Setting up ephemeral, self-deleting queues is really easy in most products like RabbitMQ. I think you'll have to do a bit of managing of whether or not it's OK to delete a queue, though. For example, never delete the queue unless the client explicitly unsubscribes or has been inactive for too long. Fail to do that, and the whole inbox idea falls apart.
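For the per-client inbox, the queue declaration could look roughly like this with the RabbitMQ Java client; the queue name, exchange name, and idle expiry are assumptions, and the exchange the consumer broadcasts on is assumed to already exist.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.Map;

public class ClientInbox {
  public static void main(String[] args) throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();

    // Per-client inbox: not exclusive and not auto-delete, so it survives
    // reconnects, but let the broker drop it after a long idle period
    // instead of deleting it eagerly on disconnect.
    Map<String, Object> queueArgs = Map.of("x-expires", 24 * 60 * 60 * 1000); // 1 day idle TTL
    channel.queueDeclare("inbox.client-42", false, false, false, queueArgs);

    // Bind the inbox to the exchange the consumer broadcasts on
    // (assumed here to be a fanout exchange named "events").
    channel.queueBind("inbox.client-42", "events", "");

    channel.close();
    connection.close();
  }
}
```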
The selected answer above is most similar to what I'm describing here, in that the subscription is the queue. If I did this, then I'd probably implement it as an internal bus instead of an observer (since an observer would be unnecessary) - you create a consumer on demand for a connecting client that literally just forwards the message. The message consumer subscribes and unsubscribes based on whether or not the client is connected. As Kamal noted, you'll run into problems if your scale exceeds the maximum number of subscriptions allowed by Pub/Sub. If you find yourself in that position, you can remove that constraint by implementing the pattern above. It's basically the same pattern, but you shift the responsibility over to your own infrastructure, where the only constraint is your own resources.
gRPC makes this mechanism pretty easy. Alternatively, for web, if you're on a Microsoft stack, then SignalR makes this pretty easy too. Clients connect to the hub, and you can publish to all connected clients. The consumer pattern here remains mostly the same, but you don't have to implement the observer pattern by hand.
(note: arrows in diagram are in the direction of dependency, not data flow)
I am looking into ways to order a list of messages from Google Cloud Pub/Sub. The documentation says:
Have a way to determine from all messages it has currently received whether or not there are messages it has not yet received that it needs to process first.
...is possible by using Cloud Monitoring to keep track of the pubsub.googleapis.com/subscription/oldest_unacked_message_age metric. A subscriber would temporarily put all messages in some persistent storage and ack the messages. It would periodically check the oldest unacked message age and check against the publish timestamps of the messages in storage. All messages published before the oldest unacked message are guaranteed to have been received, so those messages can be removed from persistent storage and processed in order.
I tested it locally and this approach seems to be working fine.
I have one gripe with it, however, and it is not something I can easily test myself.
This solution relies on the server-side publish_time attribute assigned by Google. How does Google avoid the issue of skewed clocks?
If my producer publishes messages A and then immediately B, how can I be sure that A.publish_time < B.publish_time is true? Especially considering that the same documentation page mentions internal load-balancers in the architecture of the solution. Is Google Pub/Sub using atomic clocks to synchronize time on the very first machines which see messages and enrich those messages with the current time?
There is an implicit assumption in the recommended solution that the clocks on all the servers are synchronized. But the documentation never explains if that is true or how it is achieved so I feel a bit uneasy about the solution. Does it work under very high load?
Notice I am only interested in relative order of confirmed messages published after each other. If two messages are published simultaneously, I don't care about the order of them between each other. It can be A, B or B, A. I only want to make sure that if B is published after A is published, then I can sort them in that order on retrieval.
Is the aforementioned solution only "best-effort" or are there actual guarantees about this behavior?
There are two sides to ordered message delivery: establishing an order of messages on the publish side and having an established order of processing messages on the subscribe side. The document to which you refer is mostly concerned with the latter, particularly when it comes to using oldest_unacked_message_age. When using this method, one can know that if message A has a publish timestamp that is less than the publish timestamp for message B, then a subscriber will always process message A before processing message B. Essentially, once the order is established (via publish timestamps), it will be consistent. This works if it is okay for the Cloud Pub/Sub service itself to establish the ordering of messages.
Publish timestamps are not synchronized across servers, so if it is necessary for the order to be established by the publishers, it will be necessary for the publishers to provide a timestamp (or sequence number) as an attribute that is used for ordering in the subscriber (and synchronized across publishers). The subscriber would sort messages by this user-provided timestamp instead of by the publish timestamp. The oldest_unacked_message_age will no longer be exact because it is tied to the publish timestamp. One could be more conservative and only consider as ordered those messages older than oldest_unacked_message_age minus some delta, to account for this discrepancy.
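A sketch of attaching such a publisher-assigned order as a message attribute with the Java client; the attribute name "seq" is arbitrary, and with multiple publishers you would need a shared (or per-key) sequence rather than this single in-process counter.

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;
import java.util.concurrent.atomic.AtomicLong;

public class SequencedPublisher {
  private static final AtomicLong sequence = new AtomicLong();

  public static void main(String[] args) throws Exception {
    Publisher publisher = Publisher.newBuilder(TopicName.of("my-project", "my-topic")).build();
    try {
      PubsubMessage message = PubsubMessage.newBuilder()
          .setData(ByteString.copyFromUtf8("payload"))
          // publisher-assigned order; the subscriber sorts on this attribute
          // instead of on the service-assigned publish_time
          .putAttributes("seq", Long.toString(sequence.incrementAndGet()))
          .build();
      publisher.publish(message);
    } finally {
      publisher.shutdown();
    }
  }
}
```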
Google Cloud Pub/Sub does not guarantee that events are received by consumers in the order in which they were produced. The reason is that Google Cloud Pub/Sub itself runs on a cluster of nodes, so it is possible for an event B to reach the consumer before event A. To ensure ordering, you have to make changes on both the producer and the consumer to identify the order of events. Here is the relevant section from the docs.
I am working with WSO2 CEP 3.0. I went through the WSO2 CEP docs, but there is no explanation of how I would actually use it. My current understanding is:
1. Input Event Adapter: gets the data from a client.
2. Event Builder: specifies the format of incoming data.
3. Event Formatter: specifies the format of outgoing data.
4. Output Event Adapter: the output handler.
But how do I use these in practice, e.g. with an example program or event writer? Most importantly, how would I publish the results to the external world, for example to an HTTP or HTTPS endpoint, or to JMS?
I am unable to understand how and where to start.
Please advise; I already know the ESB, DSS, IS, and BPS.
For a typical CEP use case, you configure the input event adaptor to connect to an event source, such as a JMS endpoint, a Thrift endpoint (the WSO2Event adaptor in CEP 3.0.0), etc. The event builder specifies how the incoming message will be mapped.
Next, the execution plans hold the actual queries (the processing part) of the execution flow. This is where the CEP engine actually processes the events.
The event formatter formats the result into an output format as needed. The output event adaptor connects to the actual endpoint to which the processed result will be published; it can publish to a JMS endpoint, email, database, etc.
To get started, you need to create an input event adaptor, then an event builder that uses that input event adaptor, then the execution plan, then the output event adaptor, and finally an event formatter that formats the result from the execution plan and sends it to the relevant output adaptor. You can find the flow in [1].
You can find example configurations in the samples/artifacts directory of the CEP distribution. You can find an overview of the samples in [2] and how to run them in [3]. Each sample has an associated producer and consumer that simulate real-world event producers/consumers. The samples are the best place to learn more about CEP configurations.
[1] http://docs.wso2.org/display/CEP300/CEP+Configuration+Overview
[2] http://docs.wso2.org/display/CEP300/Overview+of+Samples
[3] http://docs.wso2.org/display/CEP300/Setting+up+CEP+Samples