AWS CLI - Put output into a readable format

So I have run the following command in my CLI and it returned values; however, they are unreadable. How would I format the output into a table with a command?
do
echo "Check if SSE is enabled for bucket -> ${i}"
aws s3api get-bucket-encryption --bucket ${i} | jq -r .ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm
done
Would I need to change the command above?

You can specify an --output parameter when using the AWS CLI, or configure a default format using the aws configure command.
From Setting the AWS CLI output format - AWS Command Line Interface:
The AWS CLI supports the following output formats:
json – The output is formatted as a JSON string.
yaml – The output is formatted as a YAML string.
yaml-stream – The output is streamed and formatted as a YAML string. Streaming allows for faster handling of large data types.
text – The output is formatted as multiple lines of tab-separated string values. This can be useful to pass the output to a text processor, like grep, sed, or awk.
table – The output is formatted as a table using the characters +|- to form the cell borders. It typically presents the information in a "human-friendly" format that is much easier to read than the others, but not as programmatically useful.
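Applied to the command in the question, a minimal sketch would be (bucket name is a placeholder; note that the table format replaces the jq pipeline, since jq needs JSON input):
aws s3api get-bucket-encryption --bucket my-bucket --output table
You can also make table the default for your profile:
aws configure set output table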

AWS S3api put-object: unknown options (checksum-crc32)

So I want to upload a file and have AWS perform a specified CRC32 (let's say the CRC is ABCD1234) check after the upload, but I keep getting this error:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
  aws help
  aws <command> help
  aws <command> <subcommand> help
Unknown options: --checksumcrc32, ABCD1234
The command I use goes as follows (brackets [] for variables):
aws s3api put-object --bucket [BUCKET_NAME] --checksum-crc32 "ABCD1234" --key [NAME_OF_FILE] --body [DESTINATION_PATH] --profile [PROFILE_NAME]
Uploads without the --checksum-crc32 work just fine.
Version: aws-cli/2.4.4
Any guesses why I get this error?
Thanks in advance!
The documentation says that the CRC needs to be Base-64 encoded, not hexadecimal:
--checksum-crc32 (string)
This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
So your ABCD1234 would need to be either q80SNA== or NBLNqw==, depending on whether they expect the 32 bits to be rendered in big-endian or little-endian order, respectively. I didn't see anything in the documentation that says which it is.
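As a quick sanity check, you can convert the hex value to its base64 form with common shell tools (a sketch; assumes xxd and base64 are available):
echo "ABCD1234" | xxd -r -p | base64
# q80SNA==  (big-endian byte order)
echo "3412CDAB" | xxd -r -p | base64
# NBLNqw==  (bytes reversed, i.e. little-endian)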
The CRC32 doesn't match their calculation. Make sure you're encoding it properly.
You don't need to specify the checksum value on the CLI; you can have the client calculate it by removing --checksum-crc32 and replacing it with --checksum-algorithm "crc32".
If your goal is data integrity, consider a cryptographically secure algorithm like SHA256, which can also be automatically calculated by the CLI.
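A minimal sketch of that approach, reusing the question's bracket placeholders (valid --checksum-algorithm values include CRC32, CRC32C, SHA1, and SHA256):
aws s3api put-object --bucket [BUCKET_NAME] --key [NAME_OF_FILE] --body [DESTINATION_PATH] --checksum-algorithm CRC32
# or, for a stronger integrity check:
aws s3api put-object --bucket [BUCKET_NAME] --key [NAME_OF_FILE] --body [DESTINATION_PATH] --checksum-algorithm SHA256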

AWS CLI DynamoDB Called From Powershell Put-Item fails when a value contains a space

So, let's say I'm trying to post this JSON via the command line (not from a file, because I'm not going to write a file for every invocation of this script) to a DynamoDB table:
{\"TeamId\":{\"S\":\"One_Space_123\"},\"TeamName\":{\"S\":\"One_Space\"},\"Environment\":{\"S\":\"cte\"},\"StartDate\":{\"S\":\"null\"},\"EndDate\":{\"S\":\"null\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someones user\"},\"EmailDistributionList\":{\"S\":\"test#test.com\"},\"RemedyGroup\":{\"S\":\"OneSpace\"},\"ScomSubscriptionId\":{\"S\":\"guid-ab22-2345\"},\"ZabbixActionId\":{\"S\":\"11\"},\"SnsTopic\":{\"M\":{\"TopicName\":{\"S\":\"ATopicName\"},\"TopicArn\":{\"S\":\"AtopicArn1234\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someones user\"}}}}
Then the result from the CLI is an error like this:
Unknown options: Space"},"ScomSubscriptionId":{"S":"guid-ab22-2345"},"ZabbixActionId":{"S":"11"},"SnsTopic":{"M":{"TopicName":{"S":"ATopicName"},"TopicArn":{"S":"AtopicArn1234"},"CreatedDate":{"S":"today"},"CreatedBy":{"S":"someones, user"}}}}, user"},"EmailDistributionList":{"S":"test#test.com"},"RemedyGroup":{"S":"One
As you can see, it fails on the TeamName property, which in the above example is "One Space". If I change that value to "OneSpace", it instead starts to fail on the CreatedBy property, which is populated with "someones user". But if I remove all spaces from all properties, I can suddenly pass this JSON to DynamoDB successfully.
In a working example the JSON looks like this:
{\"TeamId\":{\"S\":\"One_Space_123\"},\"TeamName\":{\"S\":\"One_Space\"},\"Environment\":{\"S\":\"cte\"},\"StartDate\":{\"S\":\"null\"},\"EndDate\":{\"S\":\"null\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someonesuser\"},\"EmailDistributionList\":{\"S\":\"test#test.com\"},\"RemedyGroup\":{\"S\":\"OneSpace\"},\"ScomSubscriptionId\":{\"S\":\"guid-ab22-2345\"},\"ZabbixActionId\":{\"S\":\"11\"},\"SnsTopic\":{\"M\":{\"TopicName\":{\"S\":\"ATopicName\"},\"TopicArn\":{\"S\":\"AtopicArn1234\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someonesuser\"}}}}
I can't find any documentation that tells me I can't have spaces. If I read this in from a file, it posts with the spaces, so what gives? If anyone has any advice on this matter, I'd certainly appreciate it.
For what it's worth, in PowerShell the execution currently looks like this (though I've tried various combinations of quoting the $dbTeamTableEntry variable):
$dbEntry = aws.exe dynamodb put-item --region $region --table-name $table --item "$($dbTeamTableEntry)"

Upload multi-lined JSON log to AWS CloudWatch Log

The put-log-events command expects the JSON file to be wrapped in [ and ], e.g.:
# aws logs put-log-events --log-group-name my-logs --log-stream-name 20150601 --log-events file://events
[
  {
    "timestamp": long,
    "message": "string"
  }
  ...
]
However, my JSON file is in a multi-line format, like:
{"timestamp": xxx, "message": "xxx"}
{"timestamp": yyy, "message": "yyy"}
Is it possible to upload without writing my own program?
[1] https://docs.aws.amazon.com/cli/latest/reference/logs/put-log-events.html#examples
An easy way to publish the batch without any coding is to use jq to do the necessary transformation on the file. jq is a command-line utility for JSON processing.
cat events | jq -s '.' > events-formatted.json
aws logs put-log-events --log-group-name my-logs --log-stream-name 20150601 --log-events file://events-formatted.json
With this the data is formatted as a single JSON array and can be ingested into CloudWatch.
If you want to keep those lines as a single event, you can cast the lines to strings, join them with \n, and send them that way (a sketch of this follows below).
Since the lines look like self-sufficient JSON themselves, sending them as an array of events (hence the [...]) might not be that bad, since they will get into the same log group and will be easy to find as a batch.
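A sketch of the single-event approach with jq (assumes the same events file as above and GNU date for a millisecond timestamp):
jq -s --arg ts "$(date +%s%3N)" '[{timestamp: ($ts | tonumber), message: (map(tojson) | join("\n"))}]' events > single-event.json
aws logs put-log-events --log-group-name my-logs --log-stream-name 20150601 --log-events file://single-event.json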
You will need to escape it as suggested, and remove the newlines. Even though a lot of JSON is used as a consumer format these days, it isn't a great raw representation when it comes to logs, because logs can get truncated.
Try parsing truncated JSON, no fun at all!
You also don't want to have the timestamp embedded in your logs, as this will break the filter and search ability you get with CloudWatch.
You can stream a raw format to CloudWatch Logs, and then use streams to parse that raw data, format it, filter it, or whatever you want to do, into a service such as Elasticsearch. I would recommend streaming to the Elasticsearch service on AWS if you want to do more with your logs than what CloudWatch gives you, and you can keep your embedded timestamp format as well if you so wish.

Creating Kinesis Analytics applications using aws cli

I want to create a Kinesis Analytics application using the AWS CLI. I use this command to create the application:
aws kinesisanalytics create-application --application-name smartfactorytest1 --application-code "CREATE OR REPLACE STREAM DESTINATION_SQL_STREAM ( "device_serial" VARCHAR(16), "uploadRate" INTEGER, "downloadRate" INTEGER);
CREATE OR REPLACE PUMP "STREAM_PUMP"
AS INSERT INTO DESTINATION_SQL_STREAM
SELECT STREAM "device_serial", "uploadRate", "downloadRate"
FROM SOURCE_SQL_STREAM_001
-- LIKE compares a string to a string pattern (_ matches all char, % matches substring)
-- SIMILAR TO compares string to a regex, may use ESCAPE
WHERE "uploadRate" >20000" --inputs NamePrefix="SOURCE_SQL_STREAM",KinesisStreamsInput={ResourceARN="sourcearn",RoleARN="rolearn"}
But I get this error
invalid type for parameter Inputs[0].KinesisStreamsInput, value: ResourceARN=string, type: <class 'str'>, valid types: <class 'dict'>
Can anyone tell me what am I doing wrong? Any help would be appreciated.
I believe the issue is either that you need to take the quotes out in the KinesisStreamsInput section, or you need to add quotes and escape them. The documentation is unclear on which is the correct option.
According to the AWS Kinesis Analytics CLI Reference, https://docs.aws.amazon.com/cli/latest/reference/kinesisanalytics/create-application.html, the syntax for --inputs with KinesisStreamsInput should look like the example provided for KinesisStreamsOutput:
Name=string,KinesisStreamsOutput={ResourceARN=string,RoleARN=string},...
This would mean removing the quotes around your sourcearn and rolearn. However, the documentation isn't clear that this refers to the CLI syntax in all cases.
If that doesn't work, according to this AWS CLI usage guide page, https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-quoting-strings.html, it specifies adding quotes and escaping the relevant ones, depending on your OS...
"Linux or macOS
Use single quotation marks (' ') to enclose the JSON data structure, as in the following example. You don't have to do anything special with the embedded double quotation marks embedded in the JSON string.
aws ec2 run-instances --image-id ami-12345678 --block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":20,"DeleteOnTermination":false,"VolumeType":"standard"}}]'
PowerShell
PowerShell requires single quotation marks (' ') to enclose the JSON data structure. Also, because double quotation marks have a special meaning to PowerShell, you must use a backslash (\) to escape each double quotation mark (") within the JSON structure, as in the following example.
PS C:\> aws ec2 run-instances --image-id ami-12345678 --block-device-mappings '[{\"DeviceName\":\"/dev/sdb\",\"Ebs\":{\"VolumeSize\":20,\"DeleteOnTermination\":false,\"VolumeType\":\"standard\"}}]'
Windows Command Prompt
The Windows command prompt requires double quotation marks (" ") to enclose the JSON data structure. Also, to prevent the command processor from misinterpreting the double quotation marks embedded in the JSON, you must also escape (precede with a backslash [ \ ] character) each double quotation mark (") within the JSON data structure itself, as in the following example.
C:\> aws ec2 run-instances --image-id ami-12345678 --block-device-mappings "[{\"DeviceName\":\"/dev/sdb\",\"Ebs\":{\"VolumeSize\":20,\"DeleteOnTermination\":false,\"VolumeType\":\"standard\"}}]"
Only the outermost double quotation marks are not escaped."
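Put together, on Linux or macOS the command from the question might look something like this (a sketch; the SQL body is abbreviated with ... and sourcearn/rolearn are the question's placeholders):
aws kinesisanalytics create-application \
  --application-name smartfactorytest1 \
  --application-code 'CREATE OR REPLACE STREAM DESTINATION_SQL_STREAM ("device_serial" VARCHAR(16), "uploadRate" INTEGER, "downloadRate" INTEGER); ...' \
  --inputs 'NamePrefix=SOURCE_SQL_STREAM,KinesisStreamsInput={ResourceARN=sourcearn,RoleARN=rolearn}'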
This link also references needing to escape quotes on Windows, and is using the kinesisanalytics command: https://github.com/aws/aws-cli/issues/3103
"Rishi74744 commented on Feb 6, 2018
I got it to work as -
aws kinesisanalytics add-application-reference-data-source --endpoint https://kinesisanalytics.us-east-1.amazonaws.com --region us-east-1 --application-name alerts --reference-data-source "{\"TableName\":\"DeviceData\",\"S3ReferenceDataSource\":{\"BucketARN\":\"arn: aws: s3: : : bucket-name\",\"FileKey\":\"device.csv\",\"ReferenceRoleARN\":\"arn: aws: iam: : account-id: role/role-name\"},\"ReferenceSchema\":{\"RecordFormat\":{\"RecordFormatType\":\"CSV\",\"MappingParameters\":{\"CSVMappingParameters\":{\"RecordRowDelimiter\":\"\n\",\"RecordColumnDelimiter\":\", \"}}},\"RecordEncoding\":\"UTF-8\",\"RecordColumns\":[{\"Name\":\"key1\",\"SqlType\":\"VARCHAR(64)\"},{\"Name\":\"key2\",\"SqlType\":\"VARCHAR(64)\"}]}}" --current-application-version-id 2
But this should be mentioned in the documentation."
One note: it may be preferable to use JSON files as inputs and use this syntax instead: --cli-input-json file://input.json. This is referenced in the AWS Kinesis CLI Command Reference (first link, under 1.) and also mentioned in the GitHub link above. It's also the method used by the majority of the AWS Kinesis documentation. For example, JSON files used for different purposes in Kinesis Analytics:
https://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html
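As a rough sketch of that file-based workflow (--generate-cli-skeleton and --cli-input-json are standard AWS CLI options; input.json is a placeholder name):
aws kinesisanalytics create-application --generate-cli-skeleton > input.json
# edit input.json to fill in the application name, application code, and inputs
aws kinesisanalytics create-application --cli-input-json file://input.json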
Please let me know what works, and I will work with my AWS rep to improve the documentation.

Why doesn't my Kinesis Analytics Application Schema Discovery work?

I am sending comma-separated data to my Kinesis stream, and I want my Kinesis Analytics app to recognize that there are two columns (both bigints). But when I populate my stream with some records and click "Discover Schema", it always gives me a schema of one column! Here's a screenshot:
I have tried many different delimiters to indicate columns, including comma, space, and comma-space, but none of these causes AWS to detect my schema properly. At one point I gave up and edited the schema manually, which caused this error:
While I know that I have the option to keep the schema as a single column and use string and date-time manipulation to structure my data, I prefer not to do it this way... Any suggestions?
While I wasn't able to get the schema discovery tool to work, I realized that I am able to manually edit my schema and it works fine. I was getting that error because I had just populated the stream initially, and I was not continuously sending data.
Schema discovery required me to send data to my input Kinesis stream while the discovery was running. To do this for my proof-of-concept application I used the AWS CLI:
# emittokinesis.sh
# Build a small JSON payload to push into the stream
JSON='{
"messageId": "31c14ee7-9bde-484d-af05-03509c2c33aa",
"myTest": "myValue"
}'
echo "$JSON"
# Base64-encode the payload before passing it to --data
JSONBASE64=$(echo ${JSON} | base64)
# Print the full command for reference, then run it
echo 'aws kinesis put-record --stream-name logstash-input-test --partition-key 1 --data "'${JSONBASE64}'"'
aws kinesis put-record --stream-name logstash-input-test --partition-key 1 --data "${JSONBASE64}"
I clicked the "Run Schema Discovery" button in the AWS UI and then quickly ran my shell script in a CMD window.
Once my initial schema was discovered I could manually edit the schema but it mostly matched what I expected based on my input JSON.