How to enable server-side encryption on DynamoDB via CLI?

I want to enable encryption on my production tables in DynamoDB. According to the docs at https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.tutorial.html#encryption.tutorial-cli I just need to pass the --sse-specification flag; however, it's not working via the CLI.
I copied their exact command from the docs, shown below:
aws dynamodb create-table \
--table-name Music \
--attribute-definitions \
AttributeName=Artist,AttributeType=S \
AttributeName=SongTitle,AttributeType=S \
--key-schema \
AttributeName=Artist,KeyType=HASH \
AttributeName=SongTitle,KeyType=RANGE \
--provisioned-throughput \
ReadCapacityUnits=10,WriteCapacityUnits=5 \
--sse-specification Enabled=true
Using their exact example, or any other contrived setup, I keep getting the same error message when run from the CLI:
Unknown options: --sse-specification, Enabled=true
Is it possible to turn this on from the CLI? The only other way I see is to create each table manually from the console and tick the encryption box during creation there.
My AWS CLI version is:
aws-cli/1.14.1 Python/2.7.10 Darwin/17.5.0 botocore/1.8.32

You just need to update your version of the CLI. Version 1.14.1 was released on 11/29/2017, while SSE for DynamoDB wasn't released until 2/8/2018.
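For example, if the CLI was installed with pip (adjust for however you installed it), something like this should get you a recent enough version:
pip install --upgrade awscli  # upgrade the v1 CLI in place
aws --version                 # confirm it now reports something newer than 1.14.1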

Related

Unable to enable Kinesis Data Stream as destination stream for DynamoDB in local

I have a project in which I have to capture DynamoDB table change events using Kinesis Data Streams.
Here is the sequence of operations I am performing locally:
Start the DynamoDB Local container (aws-dynamodb-local) on port 8000.
Start the Kinesis container (aws-kinesis-local) on port 8001.
Create a new DDB table:
aws dynamodb create-table \
--table-name Music \
--attribute-definitions \
AttributeName=Artist,AttributeType=S \
AttributeName=SongTitle,AttributeType=S \
--key-schema \
AttributeName=Artist,KeyType=HASH \
AttributeName=SongTitle,KeyType=RANGE \
--provisioned-throughput \
ReadCapacityUnits=5,WriteCapacityUnits=5 \
--table-class STANDARD --endpoint-url=http://localhost:8000
Create a new stream:
aws kinesis create-stream --stream-name samplestream --shard-count 3 \
--endpoint-url=http://localhost:8001
Enable the Kinesis streams on the table to capture change events:
aws dynamodb enable-kinesis-streaming-destination \
--table-name Music \
--stream-arn arn:aws:kinesis:us-east-1:000000000000:stream/samplestream \
--endpoint-url=http://localhost:8000
An error occurred (UnknownOperationException) when calling the EnableKinesisStreamingDestination operation:
Can anyone help me understand what I am doing wrong here?
How can I resolve the above UnknownOperationException locally?
LocalStack provides an easy way to configure this, but LocalStack's DynamoDB has very poor performance, so I am trying to find an alternative setup.
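For reference, a minimal sketch of how two such containers might be started (amazon/dynamodb-local is the standard DynamoDB Local image; the Kinesis image name below is a placeholder for whichever local Kinesis implementation you run):
docker run -d -p 8000:8000 amazon/dynamodb-local       # local DynamoDB endpoint on port 8000
docker run -d -p 8001:4567 your-local-kinesis-image    # placeholder, e.g. a kinesalite-based image listening on 4567; adjust to your image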

AWS S3 (ap-south-1) returns Bad Request for Hudi DeltaStreamer job

I'm trying to run a DeltaStreamer job to push data to an S3 bucket using the following command:
spark-submit \
--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 \
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.hadoop.fs.s3a.endpoint=s3.ap-south-1.amazonaws.com \
--conf spark.hadoop.fs.s3a.access.key='AA..AA' \
--conf spark.hadoop.fs.s3a.secret.key='WQO..IOEI' \
--class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer $HUDI_UTILITIES_BUNDLE \
--table-type COPY_ON_WRITE \
--source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
--source-ordering-field cloud.account.id \
--target-base-path s3a://test \
--target-table test1_cow \
--props /var/demo/config/kafka-source.properties \
--hoodie-conf hoodie.datasource.write.recordkey.field=cloud.account.id \
--hoodie-conf hoodie.datasource.write.partitionpath.field=cloud.account.id \
--schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider
This returns the following error:
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 9..1, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: G..g=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
...
I think I'm using the correct S3 endpoint. Do I need to create an S3 Access Point?
I'm following the versions mentioned in https://hudi.apache.org/docs/docker_demo.html (https://github.com/apache/hudi/tree/master/docker).
That AWS region is v4-signing only, so you must set the endpoint to the region.
But that version of the hadoop-* JARs and the AWS SDK doesn't handle setting endpoints through the fs.s3a.endpoint option. It is four years old, after all, from before any of the v4-only AWS regions were launched.
Upgrade the Hadoop version to something released in the last 2-3 years. My recommendation is Hadoop 3.3.1 or 3.2.2.
That is:
all of the hadoop-* JARs, not just individual JARs. Trying to upgrade only hadoop-aws.jar will just give you new stack traces.
And a matching AWS SDK bundle JAR; the Maven repository shows the version you need.
Easiest is to go to hadoop.apache.org, download an entire release, and then extract the JARs.
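As a rough sketch, assuming your Spark distribution already ships matching Hadoop 3.3.x JARs (mixing versions will only produce new stack traces, as noted above), the dependency and endpoint settings might look like this, with the rest of the DeltaStreamer arguments unchanged from the question. 1.11.901 is the aws-java-sdk-bundle version hadoop-aws 3.3.1 was built against, but check the Maven repo for your exact release:
spark-submit \
--packages org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901 \
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.hadoop.fs.s3a.endpoint=s3.ap-south-1.amazonaws.com \
--class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer $HUDI_UTILITIES_BUNDLE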

How to create AWS glue job using CLI commands?

How can we create a Glue job using CLI commands? Can I have a sample? Thanks!
Refer to this link, which talks about creating AWS Glue resources using the CLI (the blog is in Japanese). The following is a sample that creates a Glue job with the CLI:
aws glue create-job \
--name ${GLUE_JOB_NAME} \
--role ${ROLE_NAME} \
--command "Name=glueetl,ScriptLocation=s3://${SCRIPT_BUCKET_NAME}/${ETL_SCRIPT_FILE}" \
--connections Connections=${GLUE_CONN_NAME} \
--default-arguments file://${DEFAULT_ARGUMENT_FILE}
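For context, the file passed via --default-arguments is just a JSON map of Glue job parameters; a minimal sketch (the bucket name is a placeholder) could look like:
{
  "--job-language": "python",
  "--TempDir": "s3://your-temp-bucket/glue-temp/",
  "--enable-metrics": ""
}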
Follow the documentation and post the error, if any.
Link to the docs:
https://docs.aws.amazon.com/cli/latest/reference/glue/create-job.html
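Once the job exists, you can kick off a run and check its status from the CLI, for example:
aws glue start-job-run --job-name ${GLUE_JOB_NAME}  # returns a JobRunId
aws glue get-job-run --job-name ${GLUE_JOB_NAME} --run-id <run-id-from-previous-output>  # check the run state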

AWS Cost Explorer get-cost-and-usage: get cost & usage of each single resource without grouping

I am trying to list the cost and usage of each individual resource in my AWS account, such as RDS tables, SQS queues, and Lambda functions, using Cost Explorer.
I have read the general doc:
https://docs.aws.amazon.com/cli/latest/reference/ce/get-cost-and-usage.html
And this AWS CLI command returns a list of cost/usage records grouped by service type:
aws ce get-cost-and-usage \
--time-period Start=2020-01-01,End=2020-02-01 \
--granularity MONTHLY \
--metrics "BlendedCost" "UnblendedCost" "UsageQuantity" \
--group-by Type=DIMENSION,Key=SERVICE Type=LEGAL_ENTITY_NAME,Key=Environment
I have been trying to tweak the command to get a list of cost/usage records for all resources without grouping, but no luck yet. Can anyone help me correct my command?
The command you are looking for is:
aws ce get-cost-and-usage-with-resources
You can run
aws ce get-cost-and-usage-with-resources help
for usage help.
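A sketch of how it might be invoked (note that resource-level data is only retained for roughly the last 14 days and, at least by default, only for some services such as EC2, so the dates and the service filter here are assumptions to adapt):
aws ce get-cost-and-usage-with-resources \
--time-period Start=2020-01-25,End=2020-02-01 \
--granularity DAILY \
--metrics "UnblendedCost" \
--filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon Elastic Compute Cloud - Compute"]}}' \
--group-by Type=DIMENSION,Key=RESOURCE_ID  # one row per resource instead of per service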

UnrecognizedClientException error when I try to enable "time to live" on local DynamoDB

I use local DynamoDB in Docker and I want to set up the time-to-live (TTL) feature for a table.
To create the table I use:
aws dynamodb create-table \
--table-name activity \
--attribute-definitions \
AttributeName=deviceId,AttributeType=S \
AttributeName=time,AttributeType=S \
--key-schema \
AttributeName=deviceId,KeyType=HASH \
AttributeName=time,KeyType=RANGE \
--billing-mode 'PAY_PER_REQUEST' \
--endpoint-url http://dynamo:8000
And it works as needed.
But when I try to enable TTL:
aws dynamodb update-time-to-live \
--table-name activity \
--time-to-live-specification Enabled=true,AttributeName=ttl
I got the error: An error occurred (UnrecognizedClientException) when calling the UpdateTimeToLive operation: The security token included in the request is invalid
I pass dummy credentials to the Docker container via the docker-compose environment:
AWS_ACCESS_KEY_ID: 0
AWS_SECRET_ACCESS_KEY: 0
AWS_DEFAULT_REGION: eu-central-1
Used Docker images:
For DynamoDB - dwmkerr/dynamodb
For internal AWS CLI - garland/aws-cli-docker
What is wrong? How can I enable the feature using local Docker?
Thanks for any answer.
Best.
After an extra few hours of failures, I have an answer. I hope it helps somebody save a bit of time:
Even if you use a local environment, you should use real AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). You can get them here after registering.
If you use the --endpoint-url parameter when creating the DB, then you should use it, with the same value, for update-time-to-live and any other action on that DB.
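For example, applying the second point to the command from the question just means adding the same endpoint:
aws dynamodb update-time-to-live \
--table-name activity \
--time-to-live-specification Enabled=true,AttributeName=ttl \
--endpoint-url http://dynamo:8000  # same endpoint the table was created against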
Cheers!