How to implement logging in FTE Ant script - websphere-mq-fte

I am trying to implement logging in FTE. I need to log information about all the monitors: how many monitor jobs executed successfully and how many failed.

This may be of help. Are you implementing your own logging?
Monitor logs are published on the SYSTEM.FTE/Log topic. You can subscribe to receive publications on this topic and process them however you want. More details here.
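As a rough illustration, here is a minimal sketch of subscribing to that topic with the pymqi Python client. The queue manager name, channel, connection string, and subscription name are placeholders you would need to adapt:

```python
# Minimal sketch: subscribe to the FTE log topic with pymqi.
# Queue manager, channel, host(port), and SubName are placeholders.
import pymqi

queue_manager = "QM1"                       # placeholder queue manager
conn_info = "mqhost(1414)"                  # placeholder host(port)
topic_string = "SYSTEM.FTE/Log/#"           # log publications from all agents

qmgr = pymqi.connect(queue_manager, "SYSTEM.DEF.SVRCONN", conn_info)

sub_desc = pymqi.SD()
sub_desc["Options"] = (pymqi.CMQC.MQSO_CREATE
                       | pymqi.CMQC.MQSO_RESUME
                       | pymqi.CMQC.MQSO_DURABLE
                       | pymqi.CMQC.MQSO_MANAGED)
sub_desc.set_vs("SubName", "FTE.LOG.MONITOR")   # placeholder subscription name
sub_desc.set_vs("ObjectString", topic_string)

sub = pymqi.Subscription(qmgr)
sub.sub(sub_desc=sub_desc)

get_opts = pymqi.GMO(Options=pymqi.CMQC.MQGMO_NO_SYNCPOINT
                     | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING
                     | pymqi.CMQC.MQGMO_WAIT)
get_opts["WaitInterval"] = 15 * 1000            # wait up to 15 seconds

# Each publication is an XML log message; parse it here to count
# successful vs. failed monitor actions.
message = sub.get(None, pymqi.MD(), get_opts)
print(message)

sub.close(sub_close_options=pymqi.CMQC.MQCO_KEEP_SUB, close_sub_queue=True)
qmgr.disconnect()
```

A durable subscription (as above) keeps publications while your consumer is down, which matters if you are tallying success/failure counts over time.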

Related

AWS Glue jobs status Dashboard

In our project, a total of 10 Glue jobs run daily. I would like to build a dashboard showing the job status for the last 7 days, i.e. whether each run succeeded or failed. I tried to achieve this in CloudWatch with metrics, but was not able to. Please give me an idea of how to build this dashboard.
Probably a little late for the original questioner, but maybe helpful for others.
We had a similar task in our project. We have many jobs and need to monitor success and failure. In our experience, the built-in metrics aren't really reliable, nor do they really answer the question of whether a job was successful or not.
But we found a good way for us by generating custom metrics in a generic way for all jobs. This also works for existing jobs afterwards without having to change the code.
I wrote an article about it: https://medium.com/@ettefette/metrics-for-aws-glue-jobs-as-you-know-them-from-lambda-functions-e5e1873c615c
We have set CloudWatch alerts based on these metrics, and we use them in our Grafana dashboard to monitor the Glue jobs.
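The general idea (not necessarily the exact approach from the article) can be sketched as an EventBridge rule on the "Glue Job State Change" event feeding a small Lambda that publishes a custom CloudWatch metric. The `Custom/Glue` namespace and metric name below are made-up examples:

```python
# Sketch: publish a custom success/failure metric for every Glue job.
# Triggered by an EventBridge rule matching the "Glue Job State Change"
# event type; namespace and metric name are illustrative choices.
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    detail = event["detail"]
    job_name = detail["jobName"]
    state = detail["state"]          # SUCCEEDED, FAILED, TIMEOUT, STOPPED
    cloudwatch.put_metric_data(
        Namespace="Custom/Glue",
        MetricData=[{
            "MetricName": "JobRunResult",
            "Dimensions": [
                {"Name": "JobName", "Value": job_name},
                {"Name": "State", "Value": state},
            ],
            "Value": 1.0,
            "Unit": "Count",
        }],
    )
```

A CloudWatch dashboard widget (or Grafana panel) summing `JobRunResult` per `JobName`/`State` over 7 days then gives exactly the succeeded/failed view the question asks for, and it works for existing jobs without touching their code.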

How to handle bad events in a batch job on EMR

I am running an EMR job which processes logs containing around 15-20M log events. Sometimes a few log events contain badly formatted data that breaks my pipeline. I am looking for options to drop those log events into a file or a queue so I can verify them, report them to the corresponding service, and reprocess them later, probably not in the same pipeline, since correcting the logs would take some time.
What are the best options, widely used by companies running batch jobs?
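For what it's worth, the approach described in the question (divert bad records to a separate location for later inspection and reprocessing) is commonly called a dead-letter channel. A minimal PySpark sketch, assuming JSON log lines and hypothetical S3 paths:

```python
# Sketch of the dead-letter pattern: keep parseable events in the main
# pipeline, divert malformed ones to a separate S3 prefix. Bucket names
# and the parse logic are placeholders.
import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("log-pipeline").getOrCreate()

def parse_event(line):
    try:
        return ("ok", json.dumps(json.loads(line)))
    except ValueError:
        return ("bad", line)          # badly formatted event

parsed = (spark.sparkContext
          .textFile("s3://my-bucket/raw-logs/")
          .map(parse_event)
          .cache())

parsed.filter(lambda r: r[0] == "ok").map(lambda r: r[1]) \
      .saveAsTextFile("s3://my-bucket/clean-logs/")
parsed.filter(lambda r: r[0] == "bad").map(lambda r: r[1]) \
      .saveAsTextFile("s3://my-bucket/bad-logs/")   # dead-letter location
```

The bad-logs prefix can then feed alerting or a separate correction pipeline without blocking the main analysis run.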

Can we get "agent status report" and "agent performance report" through historical metrics using APIs in Amazon Connect?

I hope everyone is doing great.
I am working with the Amazon Connect reporting APIs. The reason for this question is that I want to get the agent performance and agent status reports through historical metrics using APIs. I am trying to find an API that will give me agent status from midnight, which is only possible through historical metrics.
I don't want to use the Streams APIs. If anyone has a solution, kindly respond; it would be very helpful. Thanks.
The current reporting API only provides one function for retrieving historical data, GetMetricData, and that function can only return queue or channel statistics. Agent-specific data is only available via the agent event stream and console-based reports at this time. So, unfortunately, there is no way to do what you're describing with an API in Amazon Connect right now.
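To make the limitation concrete, here is a sketch of what GetMetricData can do with boto3: queue-level historical metrics, grouped by queue, nothing agent-specific. The instance ID and queue ARN are placeholders:

```python
# Sketch: pull queue-level historical metrics from Amazon Connect.
# InstanceId and the queue ARN are placeholders; times must align to
# 5-minute boundaries.
from datetime import datetime, timedelta
import boto3

connect = boto3.client("connect")
end = datetime.utcnow().replace(minute=0, second=0, microsecond=0)
start = end - timedelta(hours=8)

resp = connect.get_metric_data(
    InstanceId="11111111-2222-3333-4444-555555555555",  # placeholder
    StartTime=start,
    EndTime=end,
    Filters={"Queues": ["arn:aws:connect:us-east-1:123456789012:instance/abc/queue/def"],
             "Channels": ["VOICE"]},
    Groupings=["QUEUE"],
    HistoricalMetrics=[
        {"Name": "CONTACTS_HANDLED", "Statistic": "SUM", "Unit": "COUNT"},
    ],
)
for result in resp["MetricResults"]:
    print(result["Dimensions"], result["Collections"])
```

Note the Groupings and Filters only accept queues and channels, which is exactly why agent-level reporting has to fall back to the agent event stream.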

Azure Scheduler Implementation

I have written a WebJob which performs multiple tasks that run on different schedules, like once a day, once every hour, and so on, and I achieved this by using a Timer delegate. Now I am thinking of changing that approach and creating a Scheduler job for each scenario. I was able to find some information regarding schedules by googling, but was never able to join the pieces into a complete flow.
I learned that we can create a job collection, and each collection can have 'n' jobs based on the pricing tier we are using. After creating a job, how can we bind the program logic that the job must execute to the corresponding job?
Also, how can I link jobs to a job collection?
Thanks
A typical workflow is that you would write a message to an Azure storage queue, and then have an Azure Cloud Service that reads from that queue and does the processing.
To tie specific jobs to specific program logic, you can either embed information about the type in the message and have something generic that picks messages up and turns them into specific operations/classes, or you could have behavior-specific queues, where each job writes to its own queue and a different Cloud Service reads from each queue.
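A rough sketch of the first variant (type information embedded in the message), using the azure-storage-queue Python package; the connection string, queue name, and handler mapping are all made up for illustration:

```python
# Sketch: one queue, each message carries a "type" field that a generic
# dispatcher maps to job-specific logic. All names are hypothetical.
import json
from azure.storage.queue import QueueClient

conn_str = "<storage-connection-string>"   # placeholder
queue = QueueClient.from_connection_string(conn_str, queue_name="scheduled-jobs")

# Producer side: the scheduler enqueues a message describing the job.
queue.send_message(json.dumps({"type": "daily-report", "payload": {"day": "2016-01-01"}}))

# Consumer side: a generic worker dispatches on the embedded type.
HANDLERS = {
    "daily-report": lambda p: print("running daily report", p),
    "hourly-sync": lambda p: print("running hourly sync", p),
}

for msg in queue.receive_messages():
    body = json.loads(msg.content)
    HANDLERS[body["type"]](body["payload"])
    queue.delete_message(msg)              # remove once processed
```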
I think this will solve my problem, either using API calls or queue processing.
Solution
If I understand your question, you have a WebJob that has multiple methods, each of which needs to be called on a different schedule. Instead of going through the hassle of setting up a Scheduler and having yet another resource that you have to manage, mark each method you need called with a TimerTriggerAttribute.
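TimerTriggerAttribute belongs to the C# WebJobs SDK. To keep the examples in this thread in Python, here is the analogous schedule-per-function idea with the Azure Functions Python v2 programming model, which is a related but distinct mechanism; the NCRONTAB expressions are assumptions:

```python
# Sketch: one function per schedule, analogous to WebJobs' TimerTrigger.
# Azure Functions Python v2 programming model; schedules are NCRONTAB
# expressions chosen purely for illustration.
import azure.functions as func

app = func.FunctionApp()

@app.schedule(schedule="0 0 0 * * *", arg_name="timer", run_on_startup=False)
def daily_task(timer: func.TimerRequest) -> None:
    # Runs once a day at midnight.
    ...

@app.schedule(schedule="0 0 * * * *", arg_name="timer", run_on_startup=False)
def hourly_task(timer: func.TimerRequest) -> None:
    # Runs once every hour, on the hour.
    ...
```

Either way, the schedule lives next to the method it triggers, so there is no separate Scheduler resource to bind jobs to.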

Replay events with Google Pub/Sub

I'm looking into Google Cloud; it is very appealing, especially for data-intensive applications. I'm looking into Pub/Sub + Dataflow, and I'm trying to figure out the best way to replay events that were sent via Pub/Sub in case the processing logic changes.
As far as I can tell, Pub/Sub retention has an upper bound of 7 days and is per subscription; the topic itself does not retain data. In my mind, it should be possible to disable log compaction, as in Kafka, so I could replay data from the very beginning.
Now, since Dataflow promises that you can run the same jobs in batch and streaming mode, how effective would it be to simulate this desired behavior by dumping all events into Google Storage and replaying from there?
I'm also open to any other ideas.
Thank you
As you said, Cloud Pub/Sub does not currently support replays, so you need to save events somewhere to replay them later, and Cloud Storage sounds like a good place to do that.
Cloud Pub/Sub now has the ability to replay previously acknowledged messages. Please see the quickstart and related blog post for information on how to use the feature.
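A minimal sketch of that replay (seek) feature with the google-cloud-pubsub Python client, assuming a subscription created with retain_acked_messages enabled; the project and subscription IDs are placeholders:

```python
# Sketch: seek a subscription back in time to replay acked messages.
# Requires retain_acked_messages=True on the subscription; IDs are
# placeholders.
from datetime import datetime, timedelta, timezone

from google.cloud import pubsub_v1
from google.protobuf import timestamp_pb2

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

# Replay everything received in the last hour.
target = datetime.now(timezone.utc) - timedelta(hours=1)
ts = timestamp_pb2.Timestamp()
ts.FromDatetime(target)

subscriber.seek(request={"subscription": subscription_path, "time": ts})
```

Note that replay is still bounded by the subscription's retention window, so dumping events to Cloud Storage remains the option for replaying from the very beginning.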