How to display a CloudWatch dashboard on a wall screen

I've built a CloudWatch dashboard, and I'd like to display it on a wall-mounted screen. The problem I'm facing is access: I'm using an IAM user with limited privileges to connect to the dashboard, and the user gets disconnected after 12 hours.
However, I'd like to show the dashboard indefinitely, and I don't want to have to log in manually every day.
Is there a better way to publish an AWS CloudWatch dashboard? Is there a way for sessions to last longer?

You could use the GetMetricWidgetImage API or the get-metric-widget-image CLI command to achieve that.
An example get-metric-widget-image command line:
aws cloudwatch get-metric-widget-image --metric-widget '
{
  "metrics": [
    [ { "expression": "AVG(METRICS())", "label": "Average Access Speed", "id": "e1", "region": "eu-central-1" } ],
    [ "...", "www.xyz.ee", { "label": "[avg: ${AVG}] www.xyz.ee", "id": "m2", "visible": false } ],
    [ "...", "www.abc.com", { "label": "[avg: ${AVG}] www.abc.com", "id": "m3", "visible": false } ]
  ],
  "view": "timeSeries",
  "stacked": false,
  "region": "eu-central-1",
  "title": "Response Time",
  "period": 300,
  "stat": "Average"
}
' | jq -r .MetricWidgetImage | base64 -d | display
The easiest way to get the metric-widget source is to go to the CloudWatch dashboard, select the widget for editing, and open the "Source" tab. There you can copy the source code and use it in the above command line.
There is also a thread about how to use the command: How to use aws cloudwatch get-metric-widget-image?
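Because the CLI authenticates with IAM credentials rather than a console session, a simple refresh loop sidesteps the 12-hour console logout. A minimal sketch, assuming the widget source is saved in widget.json and an image viewer that reloads dashboard.png when it changes (both file names are placeholders):
# regenerate the widget image every minute
while true; do
  aws cloudwatch get-metric-widget-image \
    --metric-widget file://widget.json \
    --query MetricWidgetImage --output text | base64 -d > dashboard.png
  sleep 60
done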

Related

EventBridge Rule for findings from SecurityHub

I am trying to create an EventBridge rule for an event pattern as below.
My JSON structure:
{
  "Findings": [
    {
      "SchemaVersion": "2018-10-08",
      "Id": "arn:aws:securityhub:us-west-2:220307202362:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.9/finding/eeecfc8d-cb70-4686-8615-52d488f87959",
      "ProductArn": "arn:aws:securityhub:us-west-2::product/aws/securityhub",
      "ProductName": "Security Hub",
      "CompanyName": "AWS",
      "Region": "us-west-2",
      "GeneratorId": "aws-foundational-security-best-practices/v/1.0.0/EC2.9",
      "AwsAccountId": "220311111111",
      "Types": [
        "Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices"
      ],
      "FirstObservedAt": "2021-09-27T20:01:59.019Z",
      "LastObservedAt": "2021-10-12T16:35:29.556Z",
      "CreatedAt": "2021-09-27T20:01:59.019Z",
      "UpdatedAt": "2021-10-12T16:35:29.556Z",
      "Severity": {
        "Product": 0,
        "Label": "INFORMATIONAL",
        "Normalized": 0,
        "Original": "INFORMATIONAL"
      },
      "Title": "EC2.9 EC2 instances should not have a public IPv4 address"
    }
  ]
}
My JSON structure does not look like the event pattern shown in the console (screenshots omitted), so I thought of modifying the event pattern to match my JSON posted above. As soon as I edit the event pattern, the option on the left-hand side changes to "Custom pattern".
When I try to test my JSON above, it gives me an error.
What am I missing here? How can I configure the rule so that it matches my Security Hub findings JSON above and sends it to my target (Kinesis Data Firehose)?
In the test event, you need to write the full event, including fields like version, id, and so on.
This tutorial shows a simple example (for EC2, though).
For Security Hub findings, the test event will look like the one shown in this doc.
Update:
Here is what I tried using your JSON (screenshot omitted). Note that the event pattern itself is only "source". For the test event fields other than Findings, I took the values from "Use sample event provided by AWS" in the custom event dropdown.
The full test event JSON is:
{
  "version": "0",
  "id": "8e5622f9-d81c-4d81-612a-9319e7ee2506",
  "detail-type": "Security Hub Findings - Imported",
  "source": "aws.securityhub",
  "account": "123456789012",
  "time": "2019-04-11T21:52:17Z",
  "region": "us-west-2",
  "resources": ["arn:aws:securityhub:us-west-2::product/aws/macie/arn:aws:macie:us-west-2:123456789012:integtest/trigger/6294d71b927c41cbab915159a8f326a3/alert/f2893b211841"],
  "detail": {
    "Findings": [{
      "SchemaVersion": "2018-10-08",
      "Id": "arn:aws:securityhub:us-west-2:111122223333:subscription/aws-foundational-security-best-practices/v/1.0.0/EC2.9/finding/eeecfc8d-cb70-4686-8615-52d488f87959",
      "ProductArn": "arn:aws:securityhub:us-west-2::product/aws/securityhub",
      "ProductName": "Security Hub",
      "CompanyName": "AWS",
      "Region": "us-west-2",
      "GeneratorId": "aws-foundational-security-best-practices/v/1.0.0/EC2.9",
      "AwsAccountId": "220311111111",
      "Types": [
        "Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices"
      ],
      "FirstObservedAt": "2021-09-27T20:01:59.019Z",
      "LastObservedAt": "2021-10-12T16:35:29.556Z",
      "CreatedAt": "2021-09-27T20:01:59.019Z",
      "UpdatedAt": "2021-10-12T16:35:29.556Z",
      "Severity": {
        "Product": 0,
        "Label": "INFORMATIONAL",
        "Normalized": 0,
        "Original": "INFORMATIONAL"
      },
      "Title": "EC2.9 EC2 instances should not have a public IPv4 address"
    }]
  }
}
You can now use Test Event successfully.
It's confusing that the event pattern and the test event are so different; attributes like source are handled by EventBridge automatically.
For matching a specific attribute, "Event type -> Security Hub Findings - Imported" might be useful.
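For completeness, a minimal sketch of the event pattern itself (as opposed to the test event), assuming you want to match all imported Security Hub findings:
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"]
}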

How to filter an s3 data event by object key suffix on AWS EventBridge

I've created a rule on AWS EventBridge that triggers a SageMaker pipeline execution. To do so, I have the following event pattern:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject", "CopyObject", "CompleteMultipartUpload"],
    "requestParameters": {
      "bucketName": ["my-bucket-name"],
      "key": [{
        "prefix": "folder/inside/my/bucket/"
      }]
    }
  }
}
I have enabled CloudTrail to log my S3 data events, and the rule is triggering my SageMaker pipeline execution correctly.
The problem here is:
A pipeline execution is being triggered for every put/copy of any object under my prefix. I would like to trigger the pipeline execution only when one specific object is uploaded to the bucket, but I don't know its entire name.
For instance, a possible object name is the following, where the date part is built dynamically:
my-bucket-name/folder/inside/my/bucket/2021-07-28/_SUCCESS
I would like to write an event pattern with something like this:
"prefix": "folder/inside/my/bucket/{current_date}/_SUCCESS"
or
"key": [{
"prefix": "folder/inside/my/bucket/"
}, {
"suffix": "_SUCCESS"
}]
I think that event patterns on AWS do not support suffix filtering, and the documentation isn't clear about the behavior.
I have configured an S3 event notification using a suffix filter and sent the filtered notifications to an SQS queue (roughly as sketched below), but now I don't know what to do with this queue in order to invoke my EventBridge rule and trigger a SageMaker pipeline execution.
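For reference, a minimal sketch of what that suffix-filtered notification configuration looks like (the queue ARN is a placeholder):
aws s3api put-bucket-notification-configuration \
  --bucket my-bucket-name \
  --notification-configuration '{
    "QueueConfigurations": [{
      "QueueArn": "arn:aws:sqs:eu-west-1:123456789012:my-queue",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {"Key": {"FilterRules": [
        {"Name": "prefix", "Value": "folder/inside/my/bucket/"},
        {"Name": "suffix", "Value": "_SUCCESS"}
      ]}}
    }]
  }'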
I was looking at a similar functionality.
Unfortunately, based on the docs from AWS, it looks like it only supports the following patterns:
Comparison | Example | Rule syntax
Null | UserID is null | "UserID": [ null ]
Empty | LastName is empty | "LastName": [""]
Equals | Name is "Alice" | "Name": [ "Alice" ]
And | Location is "New York" and Day is "Monday" | "Location": [ "New York" ], "Day": ["Monday"]
Or | PaymentType is "Credit" or "Debit" | "PaymentType": [ "Credit", "Debit" ]
Not | Weather is anything but "Raining" | "Weather": [ { "anything-but": [ "Raining" ] } ]
Numeric (equals) | Price is 100 | "Price": [ { "numeric": [ "=", 100 ] } ]
Numeric (range) | Price is more than 10, and less than or equal to 20 | "Price": [ { "numeric": [ ">", 10, "<=", 20 ] } ]
Exists | ProductName exists | "ProductName": [ { "exists": true } ]
Does not exist | ProductName does not exist | "ProductName": [ { "exists": false } ]
Begins with | Region is in the US | "Region": [ { "prefix": "us-" } ]

How to get the job progression percentage for a particular job being processed by MediaConvert?

I have found out that a status update event is sent to CloudWatch Events every minute while the job is progressing, and the interval can be changed to 10 seconds as well. How can I show the percentage at the client side from the CloudWatch events arriving every 10 seconds?
You will want to capture the STATUS_UPDATE event from CloudWatch and feed it into a service like Lambda that can update a database, or whatever data source you are using to display job stats.
Example of the event pattern:
{
  "source": [
    "aws.mediaconvert"
  ],
  "detail-type": [
    "MediaConvert Job State Change"
  ],
  "detail": {
    "status": [
      "STATUS_UPDATE"
    ]
  }
}
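A sketch of wiring this pattern to a Lambda target with the CLI (the rule name and function ARN are placeholders, the pattern above is assumed to be saved as pattern.json, and the function still needs a resource policy allowing events.amazonaws.com to invoke it):
aws events put-rule \
  --name mediaconvert-status-updates \
  --event-pattern file://pattern.json
aws events put-targets \
  --rule mediaconvert-status-updates \
  --targets 'Id=1,Arn=arn:aws:lambda:us-west-2:111122223333:function:update-job-progress'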
Example of what the Status Update event payload looks like:
{
  "version": "0",
  "id": "ABC",
  "detail-type": "MediaConvert Job State Change",
  "source": "aws.mediaconvert",
  "account": "111122223333",
  "time": "2021-02-18T17:52:32Z",
  "region": "us-west-2",
  "resources": [
    "arn:aws:mediaconvert:us-west-2:111122223333:jobs/1613670689802-emcngz"
  ],
  "detail": {
    "timestamp": 1613670752653,
    "accountId": "111122223333",
    "queue": "arn:aws:mediaconvert:us-west-2:111122223333:queues/Default",
    "jobId": "1613670689802-emcngz",
    "status": "STATUS_UPDATE",
    "userMetadata": {},
    "framesDecoded": 2024,
    "jobProgress": {
      "phaseProgress": {
        "PROBING": {
          "status": "COMPLETE",
          "percentComplete": 100
        },
        "TRANSCODING": {
          "status": "PROGRESSING",
          "percentComplete": 2
        },
        "UPLOADING": {
          "status": "PENDING",
          "percentComplete": 0
        }
      },
      "jobPercentComplete": 7,
      "currentPhase": "TRANSCODING",
      "retryCount": 0
    }
  }
}
MediaConvert provides granular percentages per phase of the job (probing input, transcoding, and uploading outputs) as well as an overall percentage. The one displayed on the MediaConvert console UI is jobPercentComplete, and that is probably the one you want to capture.
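For example, extracting the overall percentage from an event payload saved as event.json (a sketch using jq, as in the first answer above; any JSON parser works):
jq -r '.detail.jobProgress.jobPercentComplete' event.json
Against the sample payload above this prints 7.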
Documentation:
CloudWatch Events supported by MediaConvert:
https://docs.aws.amazon.com/mediaconvert/latest/ug/mediaconvert_cwe_events.html
How to set up CloudWatch Event rules:
https://docs.aws.amazon.com/mediaconvert/latest/ug/setting-up-cloudwatch-event-rules.html

AWS CLI command substitution

I want to list a volume's snapshots, but in the output I would also like to see the Name of that volume (I mean the tag).
So far I was using:
aws ec2 describe-snapshots
And in the reply I got something like:
"Snapshots": [
  {
    "Description": "some description",
    "Encrypted": false,
    "OwnerId": "someownerid",
    "Progress": "100%",
    "SnapshotId": "snap-example",
    "StartTime": "start time",
    "State": "completed",
    "VolumeId": "volume id",
    "VolumeSize": 32
  }
]
But what I would like to have in that output is also a volume name:
"Snapshots": [
  {
    "Description": "some description",
    "Encrypted": false,
    "OwnerId": "someownerid",
    "Progress": "100%",
    "SnapshotId": "snap-example",
    "StartTime": "start time",
    "State": "completed",
    "VolumeId": "volume id",
    "VolumeSize": 32,
    "VolumeName": "Volume Name"   # additional key:val
  }
]
The aws ec2 describe-snapshots command does return tags on snapshots if they are present.
Something similar to this:
{
  "Description": "This snapshot is created by the AWS Backup service.",
  "Tags": [
    {
      "Value": "On",
      "Key": "Backup"
    },
    {
      "Value": "Jenkins_Machine",
      "Key": "Name"
    },
    {
      "Value": "*********",
      "Key": "aws:backup:source-resource"
    }
  ],
  "Encrypted": false,
  "VolumeId": "vol-*****",
  "State": "completed",
  "VolumeSize": 250,
  "StartTime": "2019-08-01T11:29:31.654Z",
  "Progress": "100%",
  "OwnerId": "******",
  "SnapshotId": "snap-******"
}
To see the name (assuming your snapshots have a Name tag), do this:
aws ec2 describe-snapshots --snapshot-id snap-**** --query 'Snapshots[*].{Description:Description,Name:Tags[?Key==`Name`].Value|[0],State:State}'
This should give you output like this:
[
  {
    "State": "completed",
    "Description": "This snapshot is created by the AWS Backup service.",
    "Name": "Jenkins_Machine"
  }
]
The fields are curtailed, but you can add the fields you need at the end of the query, like this: ...State:State,VolumeId:VolumeId}, where VolumeId is newly added.
If you remove the --snapshot-id parameter, the above command should return all snapshots; however, for snapshots that don't have a Name tag it is going to print null.
Edit:
As #krishna_mee2004 pointed out, the OP is probably looking for snapshots of a particular volume. If that is the case, you can still do it using the command below; the --filters option can be used to filter based on volume ID.
aws ec2 describe-snapshots --filters Name=volume-id,Values=vol-***** --query 'Snapshots[*].{Description:Description,Name:Tags[?Key==`Name`].Value|[0],State:State,VolumeId:VolumeId}'
If you are referring to the snapshot's Name tag, you can write a simple Python or Ruby script using the AWS SDK. For example, Ruby code to list each snapshot ID and the value of its Name tag looks like this:
require 'aws-sdk'
# provide region and credentials in parameters
ec2 = Aws::EC2::Client.new
# paginate if you have a big list
resp = ec2.describe_snapshots
# iterate snapshots
resp.snapshots.each do |snapshot|
  # iterate tags and print if the snapshot has a Name tag
  snapshot.tags.each do |tag|
    # print whatever is required/available in the response structure
    puts "#{snapshot.snapshot_id} has the name tag with value #{tag.value}" if tag.key.casecmp? 'Name'
  end
end
Refer to the respective language's API documentation to learn more about SDK usage and the API calls. Make sure to set up the SDK before using it; the setup varies by language. For example, the steps for setting up the Ruby SDK are outlined here. You may also want to check out the API reference for describe_snapshots used in the above code.

AWS SSM send-command: modify timeout in CLI

I'm using AWS SSM to run a long script on an EC2 instance.
I would like to configure the execution timeout (execution time, not launch time), and I can't find how to do this in the official documentation (the information there is contradictory or doesn't work).
I'm using only the CLI interface.
This value is a document parameter that can be passed with the --parameters option using the executionTimeout key (see the sketch after the document description below). You can use aws ssm describe-document to find this and other document-specific parameters.
aws ssm describe-document --name "AWS-RunShellScript"
{
  "Document": {
    "Hash": "99749de5e62f71e5ebe9a55c2321e2c394796afe7208cff048696541e6f6771e",
    "HashType": "Sha256",
    "Name": "AWS-RunShellScript",
    "Owner": "Amazon",
    "CreatedDate": "2017-08-21T22:25:02.029000+02:00",
    "Status": "Active",
    "DocumentVersion": "1",
    "Description": "Run a shell script or specify the commands to run.",
    "Parameters": [
      {
        "Name": "commands",
        "Type": "StringList",
        "Description": "(Required) Specify a shell script or a command to run."
      },
      {
        "Name": "workingDirectory",
        "Type": "String",
        "Description": "(Optional) The path to the working directory on your instance.",
        "DefaultValue": ""
      },
      {
        "Name": "executionTimeout",
        "Type": "String",
        "Description": "(Optional) The time in seconds for a command to complete before it is considered to have failed. Default is 3600 (1 hour). Maximum is 172800 (48 hours).",
        "DefaultValue": "3600"
      }
    ],
    "PlatformTypes": [
      "Linux",
      "MacOS"
    ],
    "DocumentType": "Command",
    "SchemaVersion": "1.2",
    "LatestVersion": "1",
    "DefaultVersion": "1",
    "DocumentFormat": "JSON",
    "Tags": []
  }
}
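A minimal sketch of passing the timeout on send-command (the instance ID and script path are placeholders):
# executionTimeout is in seconds: default 3600, maximum 172800
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters '{"commands":["/opt/scripts/long-job.sh"],"executionTimeout":["7200"]}'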