I ran the following command in my CLI and it returned values; however, they are unreadable. How would I format this into a table with a command?
do
echo "Check if SSE is enabled for bucket -> ${i}"
aws s3api get-bucket-encryption --bucket ${i} | jq -r .ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm
done
Would I need to change the command above?
You can specify an --output parameter when using the AWS CLI, or configure a default format using the aws configure command.
From Setting the AWS CLI output format - AWS Command Line Interface:
The AWS CLI supports the following output formats:
json – The output is formatted as a JSON string.
yaml – The output is formatted as a YAML string.
yaml-stream – The output is streamed and formatted as a YAML string. Streaming allows for faster handling of large data types.
text – The output is formatted as multiple lines of tab-separated string values. This can be useful to pass the output to a text processor, like grep, sed, or awk.
table – The output is formatted as a table using the characters +|- to form the cell borders. It typically presents the information in a "human-friendly" format that is much easier to read than the others, but not as programmatically useful.
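As a rough sketch of how this applies to the loop in the question (the sample JSON below stands in for a real get-bucket-encryption response, and the bucket names are invented), you can print the per-bucket results as an aligned two-column table:

```shell
# Sketch: print bucket -> SSE algorithm as an aligned two-column table.
# The sample response below stands in for a real call to
#   aws s3api get-bucket-encryption --bucket "$i"
sample='{"ServerSideEncryptionConfiguration":{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}}'
printf '%-20s %s\n' "BUCKET" "SSE_ALGORITHM"
for i in my-bucket-1 my-bucket-2; do
  alg=$(echo "$sample" | jq -r '.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm')
  printf '%-20s %s\n' "$i" "$alg"
done
```

With live calls you can also skip jq entirely and let the CLI draw the table itself, e.g. aws s3api get-bucket-encryption --bucket "$i" --query 'ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm' --output table.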
I am sending comma-separated data to my kinesis stream, and I want my kinesis analytics app to recognize that there are two columns (both bigints). But when I populate my stream with some records and click "Discover Schema", it always gives me a schema of one column! Here's a screenshot:
I have tried many different delimiters to indicate columns, including comma, space, and comma-space, but none of these causes AWS to detect my schema properly. At one point I gave up and edited the schema manually, which caused this error:
While I know that I have the option to keep the schema as a single column and use string and date-time manipulation to structure my data, I prefer not to do it this way... Any suggestions?
While I wasn't able to get the schema discovery tool to work, I realized that I am able to manually edit my schema and it works fine. I was getting that error because I had just populated the stream initially, and I was not continuously sending data.
Schema discovery required me to be actively sending data to my input Kinesis stream while discovery ran. For my proof-of-concept application I did this with the AWS CLI:
# emittokinesis.sh
JSON='{
  "messageId": "31c14ee7-9bde-484d-af05-03509c2c33aa",
  "myTest": "myValue"
}'
echo "$JSON"
# The unquoted ${JSON} collapses the newlines, so a single line is encoded.
JSONBASE64=$(echo ${JSON} | base64)
echo "aws kinesis put-record --stream-name logstash-input-test --partition-key 1 --data \"${JSONBASE64}\""
aws kinesis put-record --stream-name logstash-input-test --partition-key 1 --data "${JSONBASE64}"
I clicked the "Run Schema Discovery" button in the AWS UI and then quickly ran my shell script in a CMD window.
Once the initial schema was discovered, I could edit it manually; it mostly matched what I expected based on my input JSON.
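Since discovery only succeeds while data is flowing, a minimal sketch (the stream name is reused from the script above; the two-column payload is an assumption matching the question) that keeps emitting records during discovery might look like this:

```shell
# Sketch: emit a few records while "Run Schema Discovery" is running.
# In real use, loop until discovery finishes instead of 5 times.
for n in 1 2 3 4 5; do
  DATA=$(echo '1,2' | base64)   # two comma-separated bigint columns
  aws kinesis put-record --stream-name logstash-input-test \
      --partition-key "$n" --data "${DATA}" || true
  sleep 1
done
```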
I'm trying to upload log files to AWS CloudWatch. The application is outputting log4j2 style JSON into a file:
https://logging.apache.org/log4j/2.x/manual/layouts.html#JSONLayout
AWS provides two CloudWatch log agents for this task: an 'older' agent and a 'unified' agent:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_GettingStarted.html
I have tried using both agents but run into the following problems, mainly related to parsing timestamps and the fact that the agent does a regex match on the entire log line rather than parsing it as JSON.
An example log message (additional fields omitted):
{"message": "Processing data for previous day: 2019-06-17T02:01:00", "timestamp": "2019-06-18T17:16:19.338000+0100"}
The older agent threw an exception because it was attempting to use my configured timestamp format to parse the timestamp in the message, not the one in the timestamp field.
The unified agent is unable to parse timestamps with sub-second precision which results in problems when attempting to combine log streams from multiple sources.
So, is there a better tool/strategy to upload JSON-formatted log4j2 files to CloudWatch?
I have a Lambda that queries the EC2 API and prints output to CloudWatch Logs, which I want to use for metrics. However, I'm having trouble parsing the output (generated from a dictionary). Here is a typical #message:
defaultdict(None, {ec2.Instance(id='i-instance'): {'InstanceID': 'i-instance', 'Type': 't2.micro', 'ImageID': 'ami-0e5493310d2c6de5b', 'State': 'running'
I tried |parse 'InstanceID': *' as InstanceId and similar, but this errors, and I haven't found applicable examples in the documentation (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html).
Assistance appreciated.
If you can modify the Lambda, probably the simplest solution is to print the dictionary as one-line JSON (instead of the Python dictionary's string format) - something like print(json.dumps(myvalue)) should do the trick. CloudWatch will then understand the fields automatically.
If you can't modify the Lambda's output, adding more quotes to the Logs Insights query might help: parse #message "'InstanceID': '*'" as InstanceID.
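A minimal sketch of the Lambda-side fix suggested above (the handler shape and the field values are assumptions based on the question's sample message):

```python
import json

def lambda_handler(event, context):
    # Print one-line JSON instead of the defaultdict's repr, so that
    # CloudWatch Logs Insights discovers the fields automatically.
    record = {
        "InstanceID": "i-instance",
        "Type": "t2.micro",
        "ImageID": "ami-0e5493310d2c6de5b",
        "State": "running",
    }
    print(json.dumps(record))
    return record
```

In Logs Insights you can then reference the fields directly, e.g. fields @timestamp, InstanceID, State | filter State = "running".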
If you are using Logs Insights, the following will give you the instance ID:
fields #timestamp, #message, responseElements.instancesSet.items.0.instanceId as instanceId
I got a $1,200 invoice from Amazon for Cloudwatch services last month (specifically for 2 TB of log data ingestion in "AmazonCloudWatch PutLogEvents"), when I was expecting a few tens of dollars. I've logged into the Cloudwatch section of the AWS Console, and can see that one of my log groups used about 2TB of data, but there are thousands of different log streams in that log group, how can I tell which one used that amount of data?
On the CloudWatch console, use the IncomingBytes metric on the Metrics page to find the amount of data (in uncompressed bytes) ingested by each log group over a particular time period. Follow these steps:
Go to CloudWatch metrics page and click on AWS namespace 'Logs' --> 'Log Group Metrics'.
Select the IncomingBytes metrics of the required log groups and click on 'Graphed metrics' tab to see the graph.
Change the start time and end time such that their difference is 30 days, and change the period to 30 days. This way, we get only one data point. Also change the graph to Number and the statistic to Sum.
You will then see the amount of data ingested by each log group and get an idea of which log group is ingesting how much.
You can also achieve the same result using the AWS CLI. For example, if you just want to know the total amount of data ingested by log groups over, say, 30 days, you can use the get-metric-statistics command.
Sample command:
aws cloudwatch get-metric-statistics --metric-name IncomingBytes --start-time 2018-05-01T00:00:00Z --end-time 2018-05-30T23:59:59Z --period 2592000 --namespace AWS/Logs --statistics Sum --region us-east-1
Sample output:
{
"Datapoints": [
{
"Timestamp": "2018-05-01T00:00:00Z",
"Sum": 1686361672.0,
"Unit": "Bytes"
}
],
"Label": "IncomingBytes"
}
To find the same for a particular log group, you can change the command to include dimensions, like:
aws cloudwatch get-metric-statistics --metric-name IncomingBytes --start-time 2018-05-01T00:00:00Z --end-time 2018-05-30T23:59:59Z --period 2592000 --namespace AWS/Logs --statistics Sum --region us-east-1 --dimensions Name=LogGroupName,Value=test1
You can run this command on each log group in turn, check which log group is responsible for most of the data-ingestion bill, and take corrective measures.
NOTE: Change the parameters specific to your environment and requirement.
The solution provided by the OP gives the amount of log data stored, which is different from the amount of log data ingested.
What is the difference?
Data ingested per month is not the same as data storage bytes. After data is ingested, CloudWatch archives it, which adds 26 bytes of metadata per log event and compresses the data using gzip level 6 compression. So storage bytes refers to the storage space used by CloudWatch to store the logs after they're ingested.
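As a rough illustration of that gap (the 26-byte per-event overhead is from the description above; the event payload and count are invented), the difference between ingested and stored bytes can be sketched like this:

```python
import gzip

# A batch of identical, highly compressible sample log events.
events = [b'{"level":"INFO","msg":"request handled"}'] * 1000

# Ingested bytes: raw payload plus 26 bytes of metadata per log event.
ingested = sum(len(e) + 26 for e in events)

# Stored bytes: CloudWatch archives the data gzip-compressed (level 6),
# so repetitive logs shrink dramatically after ingestion.
stored = len(gzip.compress(b"".join(events), compresslevel=6))

print(ingested, stored)
```

This is why a log group's stored bytes can look far smaller than the IncomingBytes you are billed for.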
Reference : https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html
We had a Lambda logging gigabytes of data due to an accidental check-in. Here's a boto3-based Python script, based on the information in the answers above, that scans all log groups and prints any group with more than 1 GB ingested in the past 7 days. This helped me more than the AWS dashboard, which was slow to update.
#!/usr/bin/env python3
# Outputs all log groups with > 1 GB of IncomingBytes in the past 7 days
import boto3
from datetime import datetime as dt
from datetime import timedelta

logs_client = boto3.client('logs')
cloudwatch_client = boto3.client('cloudwatch')

end_date = dt.today().isoformat(timespec='seconds')
start_date = (dt.today() - timedelta(days=7)).isoformat(timespec='seconds')
print("looking from %s to %s" % (start_date, end_date))

paginator = logs_client.get_paginator('describe_log_groups')
pages = paginator.paginate()
for page in pages:
    for json_data in page['logGroups']:
        log_group_name = json_data.get("logGroupName")
        cw_response = cloudwatch_client.get_metric_statistics(
            Namespace='AWS/Logs',
            MetricName='IncomingBytes',
            Dimensions=[
                {
                    'Name': 'LogGroupName',
                    'Value': log_group_name
                },
            ],
            StartTime=start_date,
            EndTime=end_date,
            Period=3600 * 24 * 7,
            Statistics=['Sum'],
            Unit='Bytes'
        )
        if len(cw_response.get("Datapoints")):
            stats_data = cw_response.get("Datapoints")[0]
            stats_sum = stats_data.get("Sum")
            sum_GB = stats_sum / (1000 * 1000 * 1000)
            if sum_GB > 1.0:
                print("%s = %.2f GB" % (log_group_name, sum_GB))
Although the author of the question and others have answered it well, I will offer a generic solution that can be applied without knowing the exact log group name causing too many logs.
We cannot use the describe-log-streams function for this, because it requires --log-group-name and, as I said earlier, I do not know the value of my log group name.
We can use the describe-log-groups function instead, because it does not require any parameters.
Note that I am assuming you have the required flag (--region) configured in your ~/.aws/config file and that your EC2 instance has the required permission to execute this command.
aws logs describe-log-groups
This command lists all the log groups in your AWS account. Sample output:
{
"logGroups": [
{
"metricFilterCount": 0,
"storedBytes": 62299573,
"arn": "arn:aws:logs:ap-southeast-1:855368385138:log-group:RDSOSMetrics:*",
"retentionInDays": 30,
"creationTime": 1566472016743,
"logGroupName": "/aws/lambda/us-east-1.test"
}
]
}
If you are interested only in log groups with a specific prefix, you can use --log-group-name-prefix, like this:
aws logs describe-log-groups --log-group-name-prefix /aws/lambda
The output JSON of this command is similar to the output above.
If you have too many log groups in your account, analyzing this output becomes difficult, and we need a command-line utility to summarize the result.
We will use the jq command-line utility. The goal is to find which log group has produced the most log data and is hence costing the most money.
From the output JSON, the fields we need for this analysis are "logGroupName" and "storedBytes", so we pick those two fields out in the jq command:
aws logs describe-log-groups --log-group-name-prefix /aws/ \
  | jq -M -r '.logGroups[] | "{\"logGroupName\":\"\(.logGroupName)\",\"storedBytes\":\(.storedBytes)}"'
The backslashes escape the inner quotes because we want the output itself to be JSON, so that we can later use jq's sort_by function. The sample output looks like this:
{"logGroupName":"/aws/lambda/test1","storedBytes":3045647212}
{"logGroupName":"/aws/lambda/projectTest","storedBytes":200165401}
{"logGroupName":"/aws/lambda/projectTest2","storedBytes":200}
Note that the output is not sorted by storedBytes, so we sort it to find the most problematic log group. We will use jq's sort_by function for this:
aws logs describe-log-groups --log-group-name-prefix /aws/ \
  | jq -M -r '.logGroups[] | "{\"logGroupName\":\"\(.logGroupName)\",\"storedBytes\":\(.storedBytes)}"' \
  | jq -s -c 'sort_by(.storedBytes) | .[]'
This would produce the below result for the above sample output
{"logGroupName":"/aws/lambda/projectTest2","storedBytes":200}
{"logGroupName":"/aws/lambda/projectTest","storedBytes":200165401}
{"logGroupName":"/aws/lambda/test1","storedBytes":3045647212}
The entries at the bottom of this list are the log groups with the most log data. You may want to set the 'Expire Events After' property for these log groups to a finite period, say one month.
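Incidentally, the piped jq invocations above can be collapsed into a single filter; a sketch using the same sample data (the echo stands in for the describe-log-groups call):

```shell
# Sort log groups by storedBytes in one jq pass instead of piping
# through jq twice; the echoed JSON mimics describe-log-groups output.
echo '{"logGroups":[
  {"logGroupName":"/aws/lambda/test1","storedBytes":3045647212},
  {"logGroupName":"/aws/lambda/projectTest","storedBytes":200165401},
  {"logGroupName":"/aws/lambda/projectTest2","storedBytes":200}]}' |
jq -c '.logGroups | sort_by(.storedBytes)[] | {logGroupName, storedBytes}'
```

In real use, pipe aws logs describe-log-groups --log-group-name-prefix /aws/ into the same filter.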
If you want the sum of all the log bytes, you can use jq's 'map' and 'add' functions, like below:
aws logs describe-log-groups --log-group-name-prefix /aws/ \
  | jq -M -r '.logGroups[] | "{\"logGroupName\":\"\(.logGroupName)\",\"storedBytes\":\(.storedBytes)}"' \
  | jq -s -c 'sort_by(.storedBytes) | .[]' \
  | jq -s 'map(.storedBytes) | add'
The output of this command for the above sample output would be
3245812813
This answer has become lengthy, but I hope it helps in figuring out the most problematic log group in CloudWatch.
You can also click the gear on the CloudWatch Logs dashboard and choose the stored bytes column.
I also clicked anything that said 'never expire' and changed the logs to expire.
Use the CloudWatch Logs gear and select the "Stored Bytes" column.
*** UPDATE 20210907 - as @davur points out in one of the comments below, AWS deprecated storedBytes for individual log streams, so the method described in this answer no longer fulfils the requirement, although it might be interesting in other ways ***
Okay, I'm answering my own question here, but here we go (with all other answers welcome):
You can use a combination of the AWS CLI tools, the csvkit CSV package, and a spreadsheet to work this out.
Log into the AWS Cloudwatch Console and grab the name of the log group which has generated all the data. In my case it's called "test01-ecs".
Unfortunately, in the CloudWatch Console you can't sort the streams by "Stored Bytes" (which would tell you which ones are biggest). If there are too many streams in the log group to look through in the Console, then you need to dump them somehow. For this you can use the AWS CLI tool:
$ aws logs describe-log-streams --log-group-name test01-ecs
The command above gives JSON output (assuming your AWS CLI tool is set to JSON output - set output = json in ~/.aws/config if not), and it will look something like this:
{
    "logStreams": [
        {
            "creationTime": 1479218045690,
            "arn": "arn:aws:logs:eu-west-1:902720333704:log-group:test01-ecs:log-stream:test-spec/test-spec/0307d251-7764-459e-a68c-da47c3d9ecd9",
            "logStreamName": "test-spec/test-spec/0308d251-7764-4d9f-b68d-da47c3e9ebd8",
            "storedBytes": 7032
        }
    ]
}
Pipe this output to a JSON file - in my case the file was 31 MB in size:
$ aws logs describe-log-streams --log-group-name test01-ecs >> ./cloudwatch-output.json
Use the in2csv tool (part of csvkit) to convert the JSON file to a CSV file that can easily be imported into a spreadsheet, making sure you define the logStreams key to import on:
$ in2csv cloudwatch-output.json --key logStreams >> ./cloudwatch-output.csv
Import the resulting CSV file into a spreadsheet (I use LibreOffice myself as it seems great at dealing with CSV) making sure the storedBytes field is imported as an integer.
Sort the storedBytes column in the spreadsheet to work out which log stream or streams are generating the most data.
In my case this worked - it turned out one of my log streams (with logs from a broken TCP pipe in a redis instance) was 4,000 times the size of all the other streams combined!
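At the time this answer was written, the spreadsheet step could also have been replaced with a jq sort over the same describe-log-streams output; a sketch (the echoed sample mirrors the JSON shown above, and note that storedBytes for streams has since been deprecated):

```shell
# Sort log streams by storedBytes, biggest last. The echoed JSON stands
# in for: aws logs describe-log-streams --log-group-name test01-ecs
echo '{"logStreams":[
  {"logStreamName":"stream-a","storedBytes":7032},
  {"logStreamName":"stream-b","storedBytes":99}]}' |
jq -r '.logStreams | sort_by(.storedBytes)[] | "\(.storedBytes)\t\(.logStreamName)"'
```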
An alternative to using the now-deprecated storedBytes for log streams is to use CloudWatch > Logs Insights and run a query to count events by log stream:
stats count(*) by #logStream
The log stream with the largest number of events is then probably the one causing the high bill.