I have some legacy code hosted in an Azure WebJob (.exe) that is generating a lot of ETW events for logging to a custom Event Provider.
How can I get those ETW events to Application Insights in an easy way? I would like them to show up in the same AI instance as my Website that is hosting the WebJob.
Here's a simple example of an EventSource tracking module:
https://github.com/AlexBulankou/ai-samples/blob/master/ETWTrackingModule.cs
The module wraps an ETW listener that subscribes to the configured event sources. You can specify which event sources you would like to subscribe to and whether you would like your ETW events to be tracked as custom events and/or as traces. In your ApplicationInsights.config, register this module as follows:
<Add Type="Microsoft.ApplicationInsights.Samples.ETWTrackingModule, YourAssemblyName">
<TrackEvent>True</TrackEvent>
<TrackTrace>True</TrackTrace>
<EventSources>
<Add Name="System.Collections.Concurrent.ConcurrentCollectionsEventSource" EventLevel="LogAlways"/>
<Add Name="System.Diagnostics.Eventing.FrameworkEventSource" EventLevel="LogAlways"/>
</EventSources>
</Add>
We have implemented an event-driven application consisting of around 7 Spring Boot microservices. In the happy path, these microservices listen to an AWS SQS queue (say app_queue) which is subscribed to an AWS SNS topic (say app_topic).
For exception scenarios we have implemented something like the below:
Categorised the exceptions into 500 (internal server error) and 400 (bad request) categories.
In both scenarios, we drop a message onto an SNS topic named status_topic, and an SQS queue named status_queue is subscribed to that topic. We did this with the idea that once end-to-end development of the happy path is done, we will handle the messages in status_queue in such a way that the production support team has a way to remediate both the 500 and 400 errors.
So basically: app_topic --> app_queue --> microservice error --> status_topic --> status_queue
I need some expert advice on whether the below approach has any issues:
A k8s cron job (a Spring Boot microservice) would come up every night, pick up the error messages from the above status_queue, and handle only the 500 internal server errors for retry.
The cron job would push the messages back to the normal SNS app_topic, which would retrigger them so the normal business flow continues.
Is the above-mentioned approach acceptable?
I know that DLQs are more suitable for such scenarios, but can I drop a message to a DLQ directly from my Java application code?
Regardless of the approach we take, is there a way to automatically replay messages from a specific queue (normal or DLQ) so we don't have to write a separate microservice to replay messages?
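To make the replay idea concrete, the nightly step I have in mind would be roughly the sketch below, using the AWS SDK for Java v2. The queue URL, topic ARN, and the "category" field in the message body are placeholders/assumptions, not fixed parts of our design:

import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class StatusQueueReplayer {

    // Placeholders; note a DLQ is an ordinary SQS queue, so a plain
    // sendMessage to its URL from application code works the same way.
    private static final String STATUS_QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/status_queue";
    private static final String APP_TOPIC_ARN =
            "arn:aws:sns:us-east-1:123456789012:app_topic";

    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create(); SnsClient sns = SnsClient.create()) {
            for (Message message : sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(STATUS_QUEUE_URL)
                    .maxNumberOfMessages(10)
                    .build()).messages()) {
                // Assumption: the error category is carried in the message body.
                if (message.body().contains("\"category\":\"500\"")) {
                    // Republish to the normal topic so the business flow re-runs.
                    sns.publish(PublishRequest.builder()
                            .topicArn(APP_TOPIC_ARN)
                            .message(message.body())
                            .build());
                    // Delete from status_queue only after a successful republish.
                    sqs.deleteMessage(DeleteMessageRequest.builder()
                            .queueUrl(STATUS_QUEUE_URL)
                            .receiptHandle(message.receiptHandle())
                            .build());
                }
            }
        }
    }
}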
I have a solution:
SNS --> SQS --> Lambda --> ES (Elasticsearch)
I want to test this with a heavy load, like 5K or 10K requests to SNS per second.
The test records can be very small (1 KB) and any type of JSON record.
Is there any way to test this load? I didn't find anything native to AWS for this kind of test.
You could try JMeter. JMeter has support for testing JMS interfaces for messaging systems, and you can use the AWS SDK for Java to get a JMS interface for SNS.
Agreed, you can use JMeter to execute load testing over SNS. Create a Java Request sampler class using the AWS SDK library to publish messages to an SNS topic, build a jar, and install it under lib/ext.
https://github.com/JoseLuisSR/awsmeter
In this repository you can find Java Request sampler classes created to publish messages to a Standard or FIFO topic; depending on the kind of topic, you may need to use other message properties, like a deduplication ID or group ID for FIFO topics.
There you can also find details on subscribing an SQS queue to an SNS topic.
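For a rough idea of what such a sampler involves, here is a minimal sketch (the class name, parameter names, and default values below are illustrative, not taken from the awsmeter project):

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;

public class SnsPublishSampler extends AbstractJavaSamplerClient {

    private SnsClient sns;

    @Override
    public Arguments getDefaultParameters() {
        // These parameters show up in the Java Request sampler UI.
        Arguments args = new Arguments();
        args.addArgument("topicArn", "arn:aws:sns:us-east-1:123456789012:app_topic");
        args.addArgument("region", "us-east-1");
        args.addArgument("message", "{\"test\":\"payload\"}"); // ~1 KB JSON in a real run
        return args;
    }

    @Override
    public void setupTest(JavaSamplerContext context) {
        // One client per JMeter thread; credentials come from the default provider chain.
        sns = SnsClient.builder()
                .region(Region.of(context.getParameter("region")))
                .build();
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            sns.publish(PublishRequest.builder()
                    .topicArn(context.getParameter("topicArn"))
                    .message(context.getParameter("message"))
                    .build());
            result.sampleEnd();
            result.setSuccessful(true);
        } catch (Exception e) {
            result.sampleEnd();
            result.setSuccessful(false);
            result.setResponseMessage(e.getMessage());
        }
        return result;
    }

    @Override
    public void teardownTest(JavaSamplerContext context) {
        sns.close();
    }
}

Driving this sampler from a thread group with enough threads and a constant throughput timer lets you approach the 5K-10K requests/second target, subject to your SNS publish quotas.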
We need to create a monitor that will show any incoming calls in our extranet in real time.
We were able to show active calls by using /account/~/extension/~/active-calls; however, to achieve what we need we would have to make a request every second, which I guess would be blocked by rate limits.
Is there a better solution for this?
Thanks
The Subscription (Push Notification) API resource enables a client application to create a single subscription (to one or more extensions) and continually receive push notifications in real time for each subscribed extension. When using this approach to receive events on your RingCentral account, no polling is involved.
You can create a subscription using either of the below-mentioned transportType values to receive push notifications:
PubNub
WebHook
The notifications the client wants to receive are specified by event filters set in the subscription request. Each event filter is expressed as a URL pointing to the required RingCentral API resource. Currently, the following event types are available for notifications: extensions, messages, and presence. They are described in detail below:
Notifications Event Types
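To make this concrete, creating a WebHook subscription for presence events boils down to an authenticated POST to the subscription endpoint, with the event filters and delivery mode in the body. A rough sketch using Java's built-in HttpClient (the access token value and webhook address are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateSubscription {
    public static void main(String[] args) throws Exception {
        String accessToken = "YOUR_ACCESS_TOKEN"; // placeholder: obtained via the OAuth flow
        // Subscribe to presence events for the current extension, delivered via WebHook.
        String body = "{"
                + "\"eventFilters\":[\"/restapi/v1.0/account/~/extension/~/presence\"],"
                + "\"deliveryMode\":{"
                + "\"transportType\":\"WebHook\","
                + "\"address\":\"https://example.com/ringcentral-webhook\""
                + "}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://platform.ringcentral.com/restapi/v1.0/subscription"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // carries the subscription id and expiry
    }
}

Note that for WebHook delivery the address must be publicly reachable and echo back RingCentral's validation token, otherwise the subscription will not be created.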
You can take a look at the Subscription API below:
Subscription API
If you are interested in Subscribing to Push notifications via WebHook then we have an Easy-to-follow Quickstart guide here:
RingCentral Webhooks Quickstart Guide
I've created a Mule application with the following SQS configuration, which lives in my config.xml file.
<sqs:config name="amazonSQSConfiguration"
            accessKey="${aws.sqs.accessKey}"
            secretKey="${aws.sqs.secretKey}"
            url="${aws.sqs.baseUrl}"
            region="${aws.account.region}"
            protocol="${aws.sqs.protocol}"
            doc:name="Amazon SQS: Configuration">
    <reconnect count="5" frequency="1000"/>
</sqs:config>
This is problematic for me because when this SQS configuration loads, it tries to connect to the Amazon SQS queue and fails, since the queue is not accessible from my machine.
For MUnit testing purposes, I'm looking for a way to stop it from trying to connect on load.
Is there a way I can mock this sqs:config?
Please note this is different from mocking the connector in my flow; in this case I need to mock the config itself.
I'm also happy to hear any other suggestions.
Thanks
The CF events API lists an "actor_type" field for events, which can be one of:
service_broker
system
user
v3-process
What is an example of an audit event's actor type being each of the above? Where is the documentation for someone trying to consume this API, at a higher level than a summary of the REST endpoints and in more detail than this?
actor_type represents what initiated the event. Similarly, actee_type is the type of resource being acted upon.
user: Most events (e.g. starting/stopping/deleting an app) will have actor_type "user", since the event was triggered by a user action.
service_broker: Some audit events are triggered by service brokers. Examples are registering a service offering or a service plan.
system: System is used when there is not a clear actor. There is currently a bug filed to investigate usage of this actor_type: https://www.pivotaltracker.com/story/show/132099009
v3-process: This was recently changed to be "process" in cf v245. This actor_type (along with the actor_type "app") is only used when the process/app crashes. There is also a bug around this actor_type: https://www.pivotaltracker.com/story/show/132098945
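For reference, the actor fields appear in each event's entity as returned by GET /v2/events; a representative example (all values invented) looks roughly like this:

{
  "entity": {
    "type": "audit.app.update",
    "actor": "user-guid",
    "actor_type": "user",
    "actor_name": "admin@example.com",
    "actee": "app-guid",
    "actee_type": "app",
    "actee_name": "my-app",
    "timestamp": "2016-11-01T12:00:00Z"
  }
}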
I could not find any documentation for audit events other than the API docs. How are you trying to consume the events API?