SES metrics not updated in cloudwatch - amazon-web-services

We have configured CloudWatch tracking for emails sent via AWS SES programmatically through the AWS SDK (as described in the documentation). Whenever I send an email for the first time with a new configuration set, I can see the metrics (Open, Click) update to 1. When I send the same email to another recipient, the metric should increase to 2, but there is no change in the CloudWatch metric; it always shows 1.
I have configured message tags and configuration set as well.
I checked after a few hours, but the metrics were still not updated. I am not sure whether there is an issue with the CloudWatch-SES configuration or I am missing some setting in the graph.

Check the graphed metrics in CloudWatch. You may have the Statistic set to Average, which is the default selection. Try changing it to Sum.
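If you prefer to verify this outside the console, here is a minimal boto3 sketch. The configuration set name is a placeholder, and the `ses:configuration-set` dimension is the default one SES event publishing adds to CloudWatch; adjust both to match your setup.

```python
import datetime

def total(datapoints, stat="Sum"):
    """Add up the chosen statistic across the returned datapoints."""
    return sum(dp[stat] for dp in datapoints)

def ses_open_count(config_set, hours=24):
    """Fetch the SES Open metric as a Sum rather than the default Average.

    `config_set` is a placeholder configuration set name.
    """
    import boto3  # lazy import so `total` above runs without AWS access
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/SES",
        MetricName="Open",
        Dimensions=[{"Name": "ses:configuration-set", "Value": config_set}],
        StartTime=now - datetime.timedelta(hours=hours),
        EndTime=now,
        Period=3600,
        Statistics=["Sum"],  # Sum counts every event; Average stays near 1
    )
    return total(resp["Datapoints"])
```

With two opens in the window, Sum returns 2 where Average would return 1, which matches the behavior described in the question.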


MediaLive (AWS) how to view channel alerts from php SDK

Question
I have set up a Laravel project that connects to AWS MediaLive for streaming.
Everything is working fine, and I am able to stream, but I couldn't find a way to see if a channel that was running had anyone connected to it.
What I need
I want to be able to see if a running channel has anyone connected to it via the PHP SDK.
Why
I want to show a stream on the user's side only if there is someone connected to it.
I want to stop a channel that has had no one connected to it for too long (an hour, say).
Other
I tried looking at the docs, but the closest thing I could find was the DescribeChannel command.
This, however, does not return any information about the alerts. I also compared the output of DescribeChannel with someone connected and with no one connected, but there was no difference.
On the AWS site I can see the alerts on the channel page, but I cannot find how to view them from my Laravel application.
Update
I tried running these from the SDK:
$cloudWatch->describeAlarms();
$cloudWatchLogs->getLogEvents(['logGroupName' => 'ElementalMediaLive', 'logStreamName' => 'channel-log-stream-name']);
But their output did not seem to change after a channel started running with no one connected to it.
I went on the console's CloudWatch and it was the same.
Do I need to first set up Egress Points for alerts to show here?
I looked into SNS topics and Lambda functions, but it seems they are for sending messages and notifications? Can I also use them to stop/delete a channel that has been disconnected for over an hour? Are there any docs that could help me?
I'm using AWS MediaStore, but I'm guessing I can do the same as with AWS MediaPackage? How can the threshold tell me if, and for how long, no one has been connected to a MediaLive channel?
Overall
After looking here and there in the docs I am assuming I have to:
1. Set up a metric alarm that detects when a channel has had no input for over an hour
2. Send the alarm message to CloudWatch Logs
3. Retrieve the alarm message from the SDK and/or the SNS Topic
4. Stop/delete the channel that sent the alarm message
Did I understand this correctly?
Thanks for your post.
Channel alerts will go to your AWS CloudWatch logs. You can poll these alarms from the SDK or CLI using a command of the form 'aws cloudwatch describe-alarms'. Related log events may be retrieved with a command of the form 'aws logs get-log-events'.
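A rough boto3 equivalent of those two CLI calls might look like the sketch below. The log stream name is a placeholder; real stream names depend on your channel.

```python
def in_alarm(alarms):
    """Keep only the alarms currently in the ALARM state."""
    return [a for a in alarms if a.get("StateValue") == "ALARM"]

def poll_channel_alerts(log_stream_name):
    """Poll alarm states and recent MediaLive log events.

    `log_stream_name` is a placeholder; requires AWS credentials.
    """
    import boto3  # lazy import so `in_alarm` above runs without AWS
    cloudwatch = boto3.client("cloudwatch")
    logs = boto3.client("logs")
    alarms = in_alarm(cloudwatch.describe_alarms()["MetricAlarms"])
    events = logs.get_log_events(
        logGroupName="ElementalMediaLive",
        logStreamName=log_stream_name,
        limit=50,
    )["events"]
    return alarms, events
```

The PHP SDK calls in the question map one-to-one onto these client methods.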
You can also configure a CloudWatch rule to propagate selected service alerts to an SNS Topic which can be polled by various clients including a Lambda function, which can then take various actions on your behalf. This approach works well to aggregate the alerts from multiple channels or services.
Measuring the connected sessions is possible for MediaPackage endpoints, using the 2xx Egress Request Count metric. You can set a metric alarm on this metric such that when its value drops below a given threshold, an alarm message will be sent to the CloudWatch logs mentioned above.
With regard to your list:
set up a metric alarm that detects when a channel had no input for over an hour
----->CORRECT.
Send the alarm message to the CloudWatchLogs
----->The alarm message goes directly to an SNS Topic, and will be echoed to your CloudWatch logs. See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
A Lambda function will need to be created to process new entries arriving in the SNS topic (queue) mentioned above and take the desired action. This Lambda function can make API or CLI calls to stop/delete the channel that sent the alarm message. You can also have email alerts or other actions triggered from the SNS topic (queue); refer to https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html
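As a sketch of that Lambda function: it assumes the alarm was created with a hypothetical `ChannelId` dimension (a naming choice of this sketch, not something MediaLive sets for you), and that the CloudWatch SNS payload is parsed as JSON, where the dimension keys are lowercased.

```python
import json

def parse_channel_id(alarm_message):
    """Pull a channel id out of the alarm's metric dimensions.

    Assumes a `ChannelId` dimension was set when creating the alarm;
    CloudWatch's SNS notification lowercases the `name`/`value` keys.
    """
    for dim in alarm_message.get("Trigger", {}).get("Dimensions", []):
        if dim.get("name") == "ChannelId":
            return dim.get("value")
    return None

def lambda_handler(event, context):
    """SNS-triggered entry point: stop the channel named in the alarm."""
    import boto3  # lazy import; the parser above runs without AWS
    medialive = boto3.client("medialive")
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        channel_id = parse_channel_id(message)
        if channel_id and message.get("NewStateValue") == "ALARM":
            medialive.stop_channel(ChannelId=channel_id)
```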
Alternatively, you could do everything in one Lambda function that queries the same MediaPackage metric (EgressRequestCount), evaluates the response, and makes a yes/no decision on shutting down a specified channel. This Lambda function could be scheduled to run every 5 minutes to achieve the desired result. This approach is simpler to implement, but is limited in scope to the metrics and actions coded into the Lambda function. The Channel Alert -> SNS -> Lambda approach would allow you to take multiple actions based on any one alert hitting the SNS topic (queue).
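A sketch of that single scheduled Lambda follows. The MediaPackage and MediaLive channel ids are placeholders, and the `Channel`/`StatusCodeRange` dimensions reflect how MediaPackage publishes EgressRequestCount; verify the dimension names against your own metrics before relying on this.

```python
import datetime

def egress_sum(datapoints):
    """Total requests across the returned datapoints."""
    return sum(dp["Sum"] for dp in datapoints)

def stop_if_idle(mp_channel_id, ml_channel_id, threshold=1):
    """Stop a MediaLive channel when its MediaPackage channel served
    fewer than `threshold` 2xx egress requests in the last hour.

    Both channel ids are placeholders for your own resources.
    """
    import boto3  # lazy import; `egress_sum` above runs without AWS
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/MediaPackage",
        MetricName="EgressRequestCount",
        Dimensions=[
            {"Name": "Channel", "Value": mp_channel_id},
            {"Name": "StatusCodeRange", "Value": "2xx"},
        ],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Sum"],
    )
    if egress_sum(resp["Datapoints"]) < threshold:
        boto3.client("medialive").stop_channel(ChannelId=ml_channel_id)
```

Scheduling this on a 5-minute EventBridge rule gives the recurring check described above.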

How to limit the scope in Google Cloud Platform Error Reporting

We host quite a few things on our GCP project, and it's kinda nice to be alerted on new errors, but I want to send email notifications to PagerDuty only from my production Kubernetes cluster.
Is there a way to do this, or should I filter this somehow in PagerDuty (unsure if that's possible; I'm still new to it)?
Here is the procedure for sending notifications from Kubernetes to PagerDuty:
A metric needs to be created based on the requirement, and that metric is then referenced when creating an alert. Further on, in the notifications page, you can select PagerDuty and proceed with creating the alert.
Step 1:
Creating a log-based metric:
1. In the console, go to the log-based metrics page, choose Create metric, and create a new custom metric.
2. Set the metric type to Counter and, in the details, add a log metric name such as user/delete.
3. In the metric, give the query that fetches the logs of the errors you expect to be alerted on, then create the metric.
Step 2:
Creating an alert policy:
1. In the console, go to the Alerting page, choose Create policy, and create a new alerting policy.
2. Go to Add condition; the resource type is the resource that should trigger the alert (in our case a Kubernetes pod) and the metric is the one created in Step 1.
3. In the filter, add the project ID, and set a suitable period. Next, add these details in the configuration and proceed through the next steps, leaving the other fields at their defaults.
Step 3:
1. Next you will be directed to select notification channels. Go to Manage notification channels, select PagerDuty services, add a new one, enter the display name and the service key, check the connectivity, save, and proceed.
2. Add the alert name and save the alert.
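The key to limiting scope to the production cluster is the filter query in Step 1. If you'd rather script that step, here is a hedged sketch using the google-cloud-logging client; the metric name, cluster name, and filter shape are assumptions to adapt to your project.

```python
def production_error_filter(cluster_name):
    """Logs filter limiting error entries to one Kubernetes cluster.

    `cluster_name` is a placeholder for your production cluster.
    """
    return (
        'resource.type="k8s_container" '
        f'resource.labels.cluster_name="{cluster_name}" '
        "severity>=ERROR"
    )

def create_metric(cluster_name):
    """Programmatic version of Step 1: a counter log-based metric."""
    from google.cloud import logging  # requires google-cloud-logging
    client = logging.Client()
    metric = client.metric(
        "production-errors",  # metric name is an assumption
        filter_=production_error_filter(cluster_name),
        description="Errors from the production cluster only",
    )
    metric.create()
```

Because only logs matching the filter increment the metric, alerts (and thus PagerDuty notifications) fire only for the production cluster.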

Google Cloud Functions alert based in Logs

I went through the docs, but I couldn't find a way of sending an alert based on my Cloud Function logs. Is this possible?
What I want to do is trigger an alert when I get a specific log entry.
Like, every time I have this:
it triggers an alert in my GCP. Is that possible?
Create a log-based metric from the advanced filter in Logs and create an alert from the log-based metric. To create an alert from the log-based metric, click the three-dot menu to the right of the metric name; there you will see the Create alert option.
As an alternative to log-based metrics, log-based alerts are now also supported. These let you receive a notification whenever a specific message appears in the logs.
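Either way, the advanced filter does the matching. As a sketch, a filter of this shape (the function name and message text are placeholders) is what you'd paste into the log-based metric or log-based alert:

```python
def function_log_filter(function_name, message_text):
    """Advanced filter matching one message in one Cloud Function's logs.

    Both arguments are placeholders for your function and log text.
    """
    return (
        'resource.type="cloud_function" '
        f'resource.labels.function_name="{function_name}" '
        f'textPayload:"{message_text}"'
    )
```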

How to enable GET request metric for S3 bucket?

I want to add a CloudWatch alarm for GET requests in an S3 bucket. In the console, I went to Management -> Metrics for this bucket and checked the box for Request metrics (10) (paid feature). This also automatically checked the box for Data transfer metrics (6) (paid feature). I thought this would enable the GET request metric. Instead, only 5 request metrics and 3 data transfer metrics have appeared, and the GET request metric is not among them. How do I fix this? I thought 16 new metrics should have appeared, but only 8 have.
You can alternatively use S3 access logs to get the number of GET requests for your objects: Using Amazon S3 access logs to identify requests.
Also note that some CloudWatch metrics only appear when there are actual data points. Operation-specific metrics (such as GetRequests) are reported only if there are requests of that type for your bucket or your filter.
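The same request-metrics toggle can also be applied from code with boto3's put_bucket_metrics_configuration; the bucket name and configuration id below are placeholders.

```python
def metrics_configuration(config_id="EntireBucket"):
    """A request-metrics configuration covering the whole bucket."""
    return {"Id": config_id}

def enable_request_metrics(bucket_name, config_id="EntireBucket"):
    """Enable the paid CloudWatch request metrics on a bucket.

    `bucket_name` is a placeholder. GetRequests still only appears
    in CloudWatch once the bucket actually receives GET traffic.
    """
    import boto3  # lazy import; the helper above runs without AWS
    boto3.client("s3").put_bucket_metrics_configuration(
        Bucket=bucket_name,
        Id=config_id,
        MetricsConfiguration=metrics_configuration(config_id),
    )
```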

Cloudwatch Metric showing wrong value

I have an application publishing a custom cloudwatch metric using boto's put_metric_data. The metric shows the number of tasks waiting in a redis queue.
The 1-minute max shows '3', 1-minute min shows '0' and 1-minute average shows '1.5'.
It seems that the application is correctly setting the value to zero, but some other process is overwriting it with 3 at the same time, and I can't find it to stop it.
Is it possible to see logs for PutMetricData to diagnose where this value might be coming from?
Normally, Amazon CloudTrail would be the ideal way to discover information about API calls being made to your AWS account. Unfortunately, PutMetricData is not captured in Amazon CloudTrail.
From Logging Amazon CloudWatch API Calls in AWS CloudTrail:
The CloudWatch GetMetricStatistics, ListMetrics, and PutMetricData API actions are not supported.
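One diagnostic you can still run without CloudTrail: pull per-minute statistics including SampleCount. A SampleCount above 1 in a minute confirms that more than one value was published in that minute (your 0 plus a stray 3 would average to the 1.5 you're seeing). A sketch, with the namespace and metric name as placeholders:

```python
import datetime

def multiple_publishers(datapoint):
    """True when more than one value was published in the period."""
    return datapoint.get("SampleCount", 0) > 1

def inspect_metric(namespace, metric_name):
    """List the minutes in the last hour with more than one data point.

    `namespace`/`metric_name` are placeholders for your custom metric.
    """
    import boto3  # lazy import; the helper above runs without AWS
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace=namespace,
        MetricName=metric_name,
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=60,
        Statistics=["SampleCount", "Minimum", "Maximum", "Average"],
    )
    return [dp for dp in resp["Datapoints"] if multiple_publishers(dp)]
```

If every minute shows SampleCount of 2, something is publishing alongside your application; from there you can hunt for the second publisher in your own hosts' cron jobs or agents.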