When would actor_type be "app" or "service_broker"? - cloud-foundry

The CF events API lists an "actor_type" field for events, which can be one of:
service_broker
system
user
v3-process
What is an example of an audit event having each of the above actor types? And where is the documentation for someone trying to consume this API, at a higher level than a summary of the REST endpoints but in more detail than this list?

actor_type represents what initiated the event. Similarly, actee_type is the resource that is being acted upon.
user: Most events (e.g. starting/stopping/deleting an app) will have actor_type "user", since the event was triggered by a user action.
service_broker: Some audit events are triggered by service brokers. Examples are registering a service offering or a service plan.
system: System is used when there is not a clear actor. There is currently a bug filed to investigate usage of this actor_type: https://www.pivotaltracker.com/story/show/132099009
v3-process: This was recently changed to be "process" in cf v245. This actor_type (along with the actor_type "app") is only used when the process/app crashes. There is also a bug around this actor_type: https://www.pivotaltracker.com/story/show/132098945
I could not find any documentation for audit events other than the API docs. How are you trying to consume the events API?
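If it helps, here is a rough sketch (Python with the requests library) of pulling events from the v2 API and grouping them by actor_type, so you can see which values actually show up in your own foundation. The endpoint URL and token are placeholders; the field names follow the documented /v2/events response schema.

```python
# Sketch only: CF_API and TOKEN are placeholders for your foundation's API
# endpoint and an OAuth token (e.g. the output of `cf oauth-token`).
import collections
import requests

CF_API = "https://api.example.com"
TOKEN = "bearer eyJ..."

resp = requests.get(
    f"{CF_API}/v2/events",
    headers={"Authorization": TOKEN},
    params={"results-per-page": 100},
)
resp.raise_for_status()

# Group event types by the actor_type that triggered them.
by_actor_type = collections.defaultdict(set)
for event in resp.json()["resources"]:
    entity = event["entity"]
    by_actor_type[entity["actor_type"]].add(entity["type"])

for actor_type, event_types in sorted(by_actor_type.items()):
    print(actor_type, sorted(event_types))
```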

Related

Communicate internally between Google Cloud Functions?

We've created a Google Cloud Function that is essentially an internal API. Is there any way that other internal Google Cloud Functions can talk to the API function without exposing an HTTP endpoint for that function?
We've looked at Pub/Sub, but as far as we can see you can send a request (so to speak) but you can't receive a response.
Ideally, we don't want to expose an HTTP endpoint due to the extra security ramifications, and we are trying to follow a microservice approach, so every function is its own entity.
I sympathize with your microservices approach and with trying to keep your services independent. You can accomplish this without opening all your functions to HTTP. Chris Richardson describes a similar case on his excellent website microservices.io:
You have applied the Database per Service pattern. Each service has its own database. Some business transactions, however, span multiple services so you need a mechanism to ensure data consistency across services. For example, let's imagine that you are building an e-commerce store where customers have a credit limit. The application must ensure that a new order will not exceed the customer’s credit limit. Since Orders and Customers are in different databases the application cannot simply use a local ACID transaction.
He then goes on:
An e-commerce application that uses this approach would create an order using a choreography-based saga that consists of the following steps:
The Order Service creates an Order in a pending state and publishes an OrderCreated event.
The Customer Service receives the event and attempts to reserve credit for that Order. It publishes either a Credit Reserved event or a CreditLimitExceeded event.
The Order Service receives the event and changes the state of the order to either approved or cancelled.
Basically, instead of a direct function call that returns a value synchronously, the first microservice sends an asynchronous "request event" to the second microservice which issues a "response event" that the first service picks up. You would use Cloud PubSub to send and receive the messages.
You can read more about this under the Saga pattern on his website.
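As a rough sketch of that flow on Cloud Pub/Sub (in Python; the topic names, project id, and payload shape here are made up for the example): the first function publishes a request event, and the second function is deployed with a Pub/Sub trigger on the request topic and publishes its reply to a response topic that the first service in turn subscribes to.

```python
# Sketch only: "order-requests" / "order-responses" and the JSON payload are
# hypothetical names for this example, not anything Cloud Functions requires.
import base64
import json
from google.cloud import pubsub_v1

PROJECT = "my-project"  # assumption: your GCP project id
publisher = pubsub_v1.PublisherClient()

def request_credit_check(order_id, amount):
    """First service: publish a request event instead of making an HTTP call."""
    topic = publisher.topic_path(PROJECT, "order-requests")
    payload = json.dumps({"order_id": order_id, "amount": amount}).encode("utf-8")
    publisher.publish(topic, payload).result()  # wait until it is actually sent

def handle_order_request(event, context):
    """Second service: Pub/Sub-triggered function that emits a response event."""
    data = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    approved = data["amount"] <= 1000  # stand-in for the real credit check
    topic = publisher.topic_path(PROJECT, "order-responses")
    response = json.dumps({"order_id": data["order_id"], "approved": approved})
    publisher.publish(topic, response.encode("utf-8")).result()
```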
The most straightforward thing to do is wrap your API up into a regular function or object, and deploy that extra code along with each function that needs to use it. You may even wish to fully modularize the code, as you would expect from an npm module.
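A minimal sketch of that shared-code approach (module and function names are made up): the "API" is just an ordinary module that you deploy alongside each function that imports it.

```python
# internal_api.py -- hypothetical shared module bundled with each function
def lookup_customer(customer_id):
    """Plain in-process call; no HTTP endpoint is exposed."""
    # ...real lookup logic would live here...
    return {"id": customer_id, "status": "active"}


# main.py of any function that needs it
import internal_api

def my_cloud_function(request):
    customer = internal_api.lookup_customer("42")
    return customer["status"]
```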

Best way to retrieve active calls without making request each second?

We need to create a monitor that will show any incoming calls in our extranet in real time.
We were able to show active calls by using /account/~/extension/~/active-calls, however, to achieve what we need we would need to make a request each second which I guess will be blocked by rate limits.
Is there a better solution for it?
Thanks
The Subscription (Push Notification) API resource lets client applications create a single subscription (to one or more extensions) and continually receive push notifications in real time for each subscribed extension. When using this approach to receive events on your RingCentral account, no polling is involved.
You can create a subscription using either of the following transportType values for receiving push notifications:
PubNub
WebHook
Notifications which the client wants to receive can be specified by the event filters which are set in the subscription request. The event filter is exposed as a URL, pointing to the required RingCentral API resource. Currently the following event types are available for notifications: extensions, messages and presence. They are described in detail below:
Notifications Event Types
You can take a look at the Subscription API below:
Subscription API
If you are interested in Subscribing to Push notifications via WebHook then we have an Easy-to-follow Quickstart guide here:
RingCentral Webhooks Quickstart Guide
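For reference, here is a hedged sketch of creating a WebHook subscription with a raw REST call (Python with requests). The event filter and delivery address are placeholders; check the Subscription API reference above for the exact filters you need for call events.

```python
# Sketch only: assumes you already have an OAuth access token for the
# RingCentral API; server URL, filter, and webhook address are placeholders.
import requests

RC_SERVER = "https://platform.ringcentral.com"
ACCESS_TOKEN = "..."

subscription = {
    # Example filter: presence/telephony events for the authenticated extension.
    "eventFilters": ["/restapi/v1.0/account/~/extension/~/presence"],
    "deliveryMode": {
        "transportType": "WebHook",
        "address": "https://example.com/ringcentral/webhook",  # your endpoint
    },
}

resp = requests.post(
    f"{RC_SERVER}/restapi/v1.0/subscription",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=subscription,
)
resp.raise_for_status()
print(resp.json()["id"])  # keep the subscription id so you can renew/delete it
```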

Keyspace event in AWS Redis

I have enabled "notify-keyspace-events" on my Redis node, and I am getting the events published on key changes through my subscription.
But I want to understand what Redis does with the events it would publish if there are no subscribers to a key.
Any information or links, which could help me understand will be appreciated.
It is a fire-and-forget model. If there are no subscribers, Redis simply drops those events. Events are also lost if a subscriber is disconnected or unable to consume them at the moment they are published.
Documentation from Redis:
https://redis.io/topics/notifications
Snippet from the documentation:
Because Redis Pub/Sub is fire and forget currently there is no way to use this feature if your application demands reliable notification of events, that is, if your Pub/Sub client disconnects, and reconnects later, all the events delivered during the time the client was disconnected are lost.
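To see the fire-and-forget behaviour for yourself, here is a small sketch with redis-py: it enables keyspace notifications, pattern-subscribes to key events, and prints whatever arrives; any event published while no client is subscribed is simply discarded. (Host, port, and the "KEA" flag set are assumptions; on ElastiCache the notify-keyspace-events setting lives in the parameter group rather than CONFIG SET.)

```python
# Minimal sketch using redis-py against a local Redis instance.
import redis

r = redis.Redis(host="localhost", port=6379)

# K = keyspace channel, E = keyevent channel, A = all event classes.
# On AWS ElastiCache, set this via the parameter group instead of CONFIG SET.
r.config_set("notify-keyspace-events", "KEA")

p = r.pubsub()
p.psubscribe("__keyevent@0__:*")  # all key events on database 0
p.get_message(timeout=1)          # consume the subscribe confirmation

r.set("foo", "bar")  # triggers a "set" event on key "foo"

for message in p.listen():
    if message["type"] == "pmessage":
        print(message["channel"], message["data"])  # e.g. b'__keyevent@0__:set' b'foo'
        break
```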

How do I notify the client application when a chaincode is invoked?

When a chaincode is invoked, is there a way to call an external REST API so that the client application can be notified of the new transaction?
Apart from REST, is there any other option?
It's better to use events:
https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md#35-events
Validating peers and chaincodes can emit events on the network that applications may listen for and take actions on. There is a set of pre-defined events, and chaincodes can generate custom events. Events are consumed by 1 or more event adapters. Adapters may further deliver events using other vehicles such as Web hooks or Kafka.
Your application can subscribe to the event stream from Fabric and listen for messages generated by your chaincode.
An example for how to work with Events can be found here:
https://github.com/hyperledger/fabric/tree/master/examples/events/block-listener
To add to Sergey's answer, there are 3 types of events.
BLOCK EVENTs, which are created when the ledger changes.
REJECTION EVENTs, which are created when any error occurs (either in user chaincode or in system chaincode).
CHAINCODE EVENTs, which are hooks that let user chaincode create custom events. (A quirk worth noting: only one CHAINCODE EVENT per invoke is allowed under the current design.)
You can have an event listener/client running at your end, listening on the gRPC port (you can get the port from the core.yaml file), or you can refer to the example Sergey has mentioned.
In your case, I am guessing that you are looking for a successful transaction. In that case, you should listen for BLOCK events and REJECTION events. The transaction UUID you received when your invoke was triggered can be used to scan the events and trigger an action when it matches.
Also note that if a transaction results in a REJECTION EVENT, then it will not have a BLOCK EVENT.
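As a rough illustration of that matching logic (Python; the event stream and the event shape below are hypothetical stand-ins for whatever your gRPC event client delivers, not the actual Fabric message format):

```python
# Hypothetical sketch: `events` stands in for the stream your event listener
# yields; the dict fields are illustrative only.
def wait_for_transaction(tx_uuid, events):
    """Scan incoming events until the transaction commits or is rejected."""
    for event in events:
        if event["kind"] == "BLOCK" and tx_uuid in event["transaction_uuids"]:
            # The transaction made it into a block, so it was committed.
            return "committed"
        if event["kind"] == "REJECTION" and event["tx_uuid"] == tx_uuid:
            # A rejected transaction will never appear in a block event.
            return "rejected: " + event.get("error", "unknown error")
```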
Hope this helps.

WSO2 Identity Server - event handler - what events are handled?

The WSO2 Identity Server since version 5.1 has an option to engage a workflow on certain events using a custom event/workflow handler. Nice! What events is it possible to handle? Well, from the example I see that any admin web service call could be intercepted.
Besides that, I found the org.wso2.carbon.identity.event bundle, which provides an option to handle events. For what events is this feature intended?
Thank you all for any insight.
We developed the identity-event component with the initial intention of handling events related to identity management, such as account lock, account disable, password reset, failed login attempts, etc. We developed AbstractEventHandler, which defines different ways of handling events, such as sending notifications. Account locking on an incorrect login attempt is one way of handling an event, and a successful login after failed attempts is also an event, where the handler resets the user's failed-login-attempt claim. Events are mapped to handlers in the repository/conf/identity/event-mgt.properties file, so each event can be registered to zero or more handlers that fire when the event occurs.
Even though the initial intention of this event framework was to handle identity-management events, we later improved it into a more generic framework that can handle any event described in the model I mentioned above. As far as I know it currently only covers identity-management-related events, but anyone developing custom features can make use of it.
It's true that the workflow handler is also a way of handling events and does, to some extent, the same task this framework does. After reading your question, I also get the feeling that it follows the same model, but we haven't thought of combining the two, so they will work as independent features.