Access the Contract State within the Notary

I want to trigger a transaction on an external network from the notary service flow just before the input state is consumed. The example is a custodian service that triggers a notification to the depository: the custodian is on Corda and the depository is on Hedera Hashgraph.
But the notary flow does not have access to read the attributes of the contract state. Is there a way to send/broadcast custom attributes to the notary?
Thank you in advance.

If you set up your notary as a validating one, it will have access to all of the transaction's components (read here).
To set up your notary as validating, add the configuration below to its node.conf (taken from this sample):
notary {
    validating = true
}

Related

Trigger AWS Lambda function from successfully deployed Service Catalog Provisioned Product

In this setup, AWS accounts are created by launching a Service Catalog product for each account. After successful provisioning, the status of the provisioned product changes to "Available". I want to capture this status change and trigger a Lambda function each time a provisioned product's status changes to "Available".
I have investigated Amazon EventBridge; however, there is no event type for "Provisioned Product Status Change".
Could someone explain how to specify the event type for Service Catalog provisioned-product status changes?
When you want to know what types of events EventBridge captures for your operations, you can set the rule to match All Events and inspect the results as the process runs (I normally use SNS to catch the results).
However, I'm afraid EventBridge will not capture the state change you want to check (it is not shown in the console); EventBridge only captures the normal API calls for Service Catalog.
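To see which Service Catalog events actually arrive, a catch-all rule matching everything from the `aws.servicecatalog` source is a common debugging step. A minimal sketch, assuming placeholder rule and topic names (the commented `boto3` calls are illustrative, not run here):

```python
import json

def service_catalog_catch_all_pattern() -> str:
    """Return an EventBridge event pattern matching all Service Catalog events."""
    pattern = {
        "source": ["aws.servicecatalog"],  # match any event from Service Catalog
    }
    return json.dumps(pattern)

# With boto3 (not executed here), you would attach the pattern roughly like:
#   events = boto3.client("events")
#   events.put_rule(Name="sc-catch-all",
#                   EventPattern=service_catalog_catch_all_pattern())
#   events.put_targets(Rule="sc-catch-all",
#                      Targets=[{"Id": "sns", "Arn": "arn:aws:sns:...:debug-topic"}])

print(service_catalog_catch_all_pattern())
```

Pointing the rule at an SNS target, as suggested above, lets you inspect every event payload and confirm whether a status-change event exists at all.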

Is there a way to implement a health check service for Eventhub - queue in Azure for resource monitoring

Requirement -
From webMethods we send event-driven messages to Azure Event Hubs queues over HTTP. We are looking for a health-check service on the availability of the queue, rather than on the landing zone, to handle transient errors.
What are we trying to achieve -
We are basically trying to implement a transient-error handler on top of resource monitoring in webMethods, to avoid unnecessary automated production alerts for high-volume interfaces, and to have an automated suspend-and-retry mechanism for feeds rather than doing it manually.
Please let me know if there is a way to implement this solution.
There are four transient exceptions in Azure Event Hubs messaging that can be raised by the .NET Framework API:
Microsoft.ServiceBus.Messaging.MessagingException
Microsoft.ServiceBus.Messaging.ServerBusyException
Microsoft.Azure.EventHubs.ServerBusyException
Microsoft.ServiceBus.Messaging.MessagingCommunicationException
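The exception types above are .NET classes, but the usual way to handle any of them is the same: retry with exponential backoff. A language-neutral sketch in Python, with `TransientError` standing in for the SDK's transient exception types:

```python
import time

# Hypothetical stand-in for the transient exception types listed above;
# in a real client you would catch the SDK's own transient exceptions.
class TransientError(Exception):
    pass

def send_with_retry(send, retries=4, base_delay=0.01):
    """Call send(), retrying transient failures with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return send()
        except TransientError:
            if attempt == retries:
                raise  # retries exhausted: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# Usage: a sender that fails transiently twice, then succeeds.
attempts = {"n": 0}
def flaky_send():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("server busy")
    return "sent"

print(send_with_retry(flaky_send))  # → sent
```

Non-transient exceptions are deliberately not caught, so permanent failures still surface immediately to the caller.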
In addition, there are three main approaches to monitoring the health of Azure Event Hubs:
Metric-Based Monitoring
Azure Event Hubs Logging (Activity Log and Diagnostic Log)
Status Monitoring
Note: Monitoring Azure resource status does not just mean monitoring the physical status of a resource; it also includes monitoring availability, performance, reliability, and consumption.
To learn more about these approaches and how to implement them, please refer to this third-party tutorial.

Azure VMSS Customscript extension execution monitoring with Azure Monitor

In some instances, executing the custom script extension on a Linux/Windows VMSS fails, perhaps due to a timeout, invalid file URIs, or an invalid storage access token. Is there a way, using Azure Monitor, to capture this failure event so that I can trigger operational activities such as sending emails to the ops team?
The VM Extension Provisioning Error failure events will pop up in the Activity Log, so you can send the activity logs from the VM(s) to a Log Analytics workspace to enable the features of Azure Monitor Logs.
Activity log data in a Log Analytics workspace is stored in a table called AzureActivity that you can retrieve with a log query in Log Analytics. The structure of this table varies depending on the category of the log entry. For a description of the table properties, see the Azure Monitor data reference.
For example, you can filter:
AzureActivity
| where * contains "VMExtensionProvisioningError"
Please make sure to add additional filters as required.
You can then set up your log alert based on this query.

Does AWS have options similar to resilience4j to incorporate the circuit breaker pattern when a particular service is down

I would like to know whether resilience4j is a good option for implementing the circuit-breaker pattern in an AWS project. My scenario: a microservice hits DynamoDB (single region), and when transactions fail at least 5 times, we need to redirect the call and publish the transaction to some other service via resilience4j, so that no transactions are missed.
Please share your thoughts on how AWS handles resiliency with DynamoDB when it is single-region (global tables not enabled).
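The redirect-after-5-failures behavior described here is the core of the circuit-breaker pattern. A minimal Python sketch of that idea (names like `failing_dynamodb_write` are illustrative, not an AWS or resilience4j API, and half-open recovery is omitted):

```python
# Minimal circuit-breaker sketch: after 5 consecutive failures against the
# primary (e.g. a DynamoDB write), route calls to a fallback publisher.
class CircuitBreaker:
    def __init__(self, failure_threshold=5):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.failure_threshold

    def call(self, primary, fallback):
        if self.open:
            return fallback()      # circuit open: skip the failing primary
        try:
            result = primary()
            self.failures = 0      # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.open:
                return fallback()  # threshold just reached: redirect now
            raise

# Usage: a primary that always fails, and a fallback that queues the payload.
breaker = CircuitBreaker()
queued = []
def failing_dynamodb_write():
    raise RuntimeError("transaction failed")
def publish_to_fallback():
    queued.append("payload")
    return "queued"

for _ in range(6):
    try:
        breaker.call(failing_dynamodb_write, publish_to_fallback)
    except RuntimeError:
        pass

print(len(queued))  # → 2
```

resilience4j adds the missing pieces (half-open state, sliding windows, metrics), which is why a library is usually preferable to hand-rolling this in production.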

Google Cloud IoT Core and Pubsub Pricing?

I am using Google Cloud IoT Core and Pub/Sub for my IoT devices. I am publishing data via Pub/Sub to a database, but I think it is quite expensive to store every data point. I have some data, such as whether the device is on or off, and a configuration file with parameters that I need to process my IoT payload. Are the config and state topics in IoT Core expensive or not? How long is the data stored in the config topic, and is it feasible to publish to the config topic whenever a parameter changes in the config file? And what if I publish the state of a device (whether it is online or not) to the state topic every 3 seconds or more?
You are mixing different things. There is Cloud IoT Core, where you have a device registry with metadata, configuration, and state. You also have a Pub/Sub topic to which you can publish messages about the IoT payload, and those messages can contain configuration data (I assume that is what you mean by "it publish that data into config topic").
Ultimately it's simple:
All the management operations on Cloud IoT Core are free (device registration, configuration, metadata, ...). There is no limitation and no duration limit; the only limits that exist are the quotas for rate and configuration size.
The inbound and outbound traffic from and to the IoT devices is billed as described here.
If you use Pub/Sub to push your messages, plus Cloud Functions (or Cloud Run, or another compute option) and a database (Cloud SQL or Datastore/Firestore), all these services are billed as usual; there is no relation to the Cloud IoT service and its billing. The constraints of each service apply as in regular usage. For example, a Pub/Sub message lives for up to 7 days (by default) in a subscription if it is not acknowledged.
EDIT
OK, got it; it took me some time to understand what you wanted to achieve.
The state is designed to expose the internal representation of the device, but the current limitation doesn't allow you to update it automatically when you receive a message.
You have 2 solutions:
Either update your device so that it sends an update message only when its state changes (it's for this kind of use case that the feature is designed!)
Or let the device publish messages every 3 seconds, but to the event Pub/Sub topic. Receive the events in a function that gets the state list, takes the first entry (the most recent), and compares its value with the Pub/Sub message; if they differ, update the state. This workflow also works with an external database such as Datastore or Firestore.
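The compare-then-update workflow of the second option can be sketched with the Cloud IoT / Datastore calls replaced by in-memory stand-ins (`state_store` mimics the most-recent-first state list, and `set_device_state` is a hypothetical update hook, not a real API call):

```python
# Sketch of option 2: the device publishes every 3 seconds, and the handler
# compares the incoming message with the most recent stored state, writing
# an update only when the value actually changed.
state_store = []  # most recent state first, mimicking the Cloud IoT state list

def set_device_state(new_state):
    state_store.insert(0, new_state)  # stand-in for the real state-update call

def handle_event(message):
    """Pub/Sub event handler: update the stored state only when it changed."""
    latest = state_store[0] if state_store else None
    if message != latest:
        set_device_state(message)
        return "updated"
    return "unchanged"  # skip the write: nothing changed since the last event

# Device reports "online" every few seconds; only transitions cause writes.
results = [handle_event(s) for s in ["online", "online", "online", "offline"]]
print(results)         # → ['updated', 'unchanged', 'unchanged', 'updated']
print(state_store[0])  # → offline
```

This keeps the write volume proportional to actual state transitions rather than to the 3-second reporting interval, which is what makes the approach cheap.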