How to specify filepath via --cli-input-json in s3api

I'm trying to issue an aws s3api put-object command with all arguments specified via the --cli-input-json document, which in this case looks like so:
{
"Body": "this is the part giving me trouble",
"Bucket": "my-bucket",
"Key": "my-key"
}
For the Body property, I can't figure out how to specify a file (on the local system) to put to S3. I've tried both:
"Body": "the_filepath"
"Body": "file://the_filepath"
... but neither works (both result in an Invalid base64 error).
I know I can add the file to the command line call via --body file://the_filepath, but I'm trying to put all command args into the JSON document. I'm also trying to avoid having the controlling script read in the contents of the object.
I'm stumped and I can't seem to find AWS CLI documentation on this use case.
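For reference, the invocation that does work for me keeps the body on the command line:
aws s3api put-object --bucket my-bucket --key my-key --body file://the_filepath
Presumably I could keep Bucket and Key in the JSON document and pass only --body on the command line (as I understand it, explicit command line arguments override values supplied via --cli-input-json), but that still isn't the everything-in-one-document setup I'm after.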

Related

Is it possible to download the contents of a public Lambda layer from AWS given the ARN?

I want to download the layer behind the public ARN below, which is a more compact version of spaCy, from this GitHub repository.
"arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27"
How can I achieve this?
You can get it from an ARN using the get-layer-version-by-arn command in the CLI.
Run the command below to get the source of the Lambda layer you requested.
aws lambda get-layer-version-by-arn \
--arn "arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27"
An example of the response you will receive is below
{
"LayerVersionArn": "arn:aws:lambda:us-west-2:123456789012:layer:AWSLambda-Python37-SciPy1x:2",
"Description": "AWS Lambda SciPy layer for Python 3.7 (scipy-1.1.0, numpy-1.15.4) https://github.com/scipy/scipy/releases/tag/v1.1.0 https://github.com/numpy/numpy/releases/tag/v1.15.4",
"CreatedDate": "2018-11-12T10:09:38.398+0000",
"LayerArn": "arn:aws:lambda:us-west-2:123456789012:layer:AWSLambda-Python37-SciPy1x",
"Content": {
"CodeSize": 41784542,
"CodeSha256": "GGmv8ocUw4cly0T8HL0Vx/f5V4RmSCGNjDIslY4VskM=",
"Location": "https://awslambda-us-west-2-layers.s3.us-west-2.amazonaws.com/snapshots/123456789012/..."
},
"Version": 2,
"CompatibleRuntimes": [
"python3.7"
],
"LicenseInfo": "SciPy: https://github.com/scipy/scipy/blob/master/LICENSE.txt, NumPy: https://github.com/numpy/numpy/blob/master/LICENSE.txt"
}
The response contains a "Content" key with a "Location" subkey, which is the S3 URL from which you can download the layer contents.
You can download the archive from this URL; if you want to reuse it, you will then need to configure it as a Lambda layer again after removing any dependencies you don't need.
Just make sure in the process that you only remove dependencies that are actually unnecessary.
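If all you need is the archive itself, one way to grab it (a sketch, assuming curl is available; the Location value is a pre-signed URL, so it expires after a short time) is to pull the URL out with --query and download it directly:
url=$(aws lambda get-layer-version-by-arn \
    --arn "arn:aws:lambda:us-west-2:113088814899:layer:Klayers-python37-spacy:27" \
    --query Content.Location --output text)
curl -o layer.zip "$url"
unzip layer.zip -d layer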

AWS create-rule AWS CLI giving error "Unknown parameter in Conditions[0]: "PathPatternConfig", must be one of: Field, Values"

I am trying to add a path pattern /images/* to an existing ALB listener rule. Following is the command that I have executed. Please note that the variables $listenerARN and $tgARN have correct values, which I have not shown here for security reasons.
aws elbv2 create-rule --listener-arn "$listenerARN" --priority 5 --conditions "Field=path-pattern,PathPatternConfig={Values="/images/*"}" --actions Type=forward,TargetGroupArn="$tgARN"
When I execute the above command I get the following error:
Unknown parameter in Conditions[0]: "PathPatternConfig", must be one of: Field, Values
I get the same error if I provide the value for --conditions from an external .json file with the following content.
[
{
"Field": "path-pattern",
"PathPatternConfig": {
"Values": ["/images/*"]
}
}
]
I read the documentation several times and I am sure I am following the exact syntax, but I cannot get rid of this error.
It looks like you have to use an alternate syntax for complex JSON here:
--conditions file://conditions.json
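With the JSON from the question saved as conditions.json, the full command would then look like this (same variables as in the question):
aws elbv2 create-rule --listener-arn "$listenerARN" --priority 5 --conditions file://conditions.json --actions Type=forward,TargetGroupArn="$tgARN"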

AWS cli s3api put-bucket-tagging - cannot add tag to bucket unless bucket has 0 tags

As there is no create-tag command for S3, only put-bucket-tagging can be used, and it requires that you include all tags on the resource, not just the new one. There is therefore no way to add a new tag to a bucket that already has tags unless you include all existing tags plus your new tag. This makes bulk operations much more difficult: you need to get all the existing tags first, turn them into JSON, edit the JSON to add your new tag for every bucket, and then feed that to put-bucket-tagging.
Does anyone have a better way to do this or have a script that does this?
Command I'm trying:
aws s3api put-bucket-tagging --bucket cbe-res034-scratch-29 --tagging "TagSet=[{Key=Environment,Value=Research}]"
Error I get:
An error occurred (InvalidTag) when calling the PutBucketTagging operation: System tags cannot be removed by requester
I get the 'cannot be removed' error because put-bucket-tagging is trying to delete the other 10 tags on this bucket (because I didn't include them in the TagSet) and I don't have access to do so.
You can use resourcegroupstaggingapi to accomplish the result you expect; see below.
aws resourcegroupstaggingapi tag-resources --resource-arn-list arn:aws:s3:::cbe-res034-scratch-29 --tags Environment=Research
To handle spaces in the tag name or value, pass the tags as JSON.
aws resourcegroupstaggingapi tag-resources --resource-arn-list arn:aws:s3:::cbe-res034-scratch-29 --tags '{"Environment Name":"Research Area"}'
I would strongly recommend using a JSON file instead of command line flags. I spent a few hours yesterday without any success trying to make keys and values with whitespace work. This was in the context of a Groovy Jenkins pipeline calling a bash shell script block.
Here is the syntax for passing a JSON file.
aws resourcegroupstaggingapi tag-resources --cli-input-json file://tags.json
If you don't know the exact format of the JSON file, just run the following, which will write a skeleton to tags.json in the current directory.
aws resourcegroupstaggingapi tag-resources --generate-cli-skeleton > tags.json
tags.json will then contain the skeleton JSON below. Just update the file and run the first command.
{
"ResourceARNList": [
""
],
"Tags": {
"KeyName": ""
}
}
Fill in your own data, e.g. for an S3 bucket:
{
"ResourceARNList": [
"arn:aws:s3:::my-s3-bucket"
],
"Tags": {
"Application": "My Application"
}
}
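To confirm that the new tag was added without clobbering the existing ones, get-bucket-tagging should show the full set afterwards (assuming you have s3:GetBucketTagging permission on the bucket):
aws s3api get-bucket-tagging --bucket cbe-res034-scratch-29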

How to get the Initialization Vector (IV) from the AWS Encryption CLI?

I'm encrypting a file using the AWS Encryption CLI using a command like so:
aws-encryption-cli --encrypt --input test.mp4 --master-keys key=arn:aws:kms:us-west-2:123456789012:key/example-key-id --output . --metadata-output -
From the output of the command, I can clearly see that it's using an Initialization Vector (IV) of strength 12, which is great, but how do I actually view the IV? In order to pass the encrypted file to another service, like AWS Elastic Transcoder, where it'll do the decryption itself, I need to actually know what the IV was that was used for encrypting the file.
{
"header": {
"algorithm": "AES_256_GCM_IV12_TAG16_HKDF_SHA384_ECDSA_P384",
"content_type": 2,
"encrypted_data_keys": [{
"encrypted_data_key": "...............",
"key_provider": {
"key_info": "............",
"provider_id": "..........."
}
}],
"encryption_context": {
"aws-crypto-public-key": "..............."
},
"frame_length": 4096,
"header_iv_length": 12,
"message_id": "..........",
"type": 128,
"version": "1.0"
},
"input": "/home/test.mp4",
"mode": "encrypt",
"output": "/home/test.mp4.encrypted"
}
Unfortunately, you won't be able to use the AWS Encryption SDK CLI to encrypt data for Amazon Elastic Transcoder's consumption.
One of the primary benefits of the AWS Encryption SDK is the message format[1] which packages all necessary information about the encrypted message into a binary blob and provides a more scalable way of handling large messages. Extracting the data primitives from that blob is not recommended and even if you did, they may or may not be directly compatible with another system, depending on how you used the AWS Encryption SDK and what that other system expects.
In the case of Elastic Transcoder, they expect the raw ciphertext encrypted using the specified AES mode[2]. This is not compatible with the AWS Encryption SDK format.
[1] https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/message-format.html
[2] https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-job.html#create-job-request-inputs-encryption
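For context, the encryption settings Elastic Transcoder expects in a create-job input look roughly like this (a sketch based on [2]; the values are placeholders, and the key itself must be a KMS-encrypted data key):
"Encryption": {
    "Mode": "aes-gcm",
    "Key": "<base64-encoded, KMS-encrypted data key>",
    "KeyMd5": "<base64-encoded MD5 of the key>",
    "InitializationVector": "<base64-encoded IV that you generated and kept yourself>"
}
In other words, the IV is something you supply to Elastic Transcoder directly because you generated it yourself; it is not something you can practically dig back out of the Encryption SDK's packaged message.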

aws ec2 request-spot-instances CLI issues

I am trying to start a couple of spot instances from within a simple script, and the syntax supplied in the AWS documentation and the aws ec2 request-spot-instances help output is listed in either Java or JSON syntax. How does one enter the parameters using the JSON syntax from inside a shell script?
aws --version
aws-cli/1.2.6 Python/2.6.5 Linux/2.6.21.7-2.fc8xen
aws ec2 request-spot-instances help
-- at the start of "launch specification" it lists the JSON syntax:
--launch-specification (structure)
Specifies additional launch instance information.
JSON Syntax:
{
"ImageId": "string",
"KeyName": "string",
}, ....
"EbsOptimized": true|false,
"SecurityGroupIds": ["string", ...],
"SecurityGroups": ["string", ...]
}
I have tried every possible combination of the following: adding and moving brackets and quotes, changing options, etc., all to no avail. What would be the correct formatting of the variable $launch below to make this work? Other command variations -- "ec2-request-spot-instances" -- are not working in my environment, nor does it work if I try to substitute --spot-price with -p.
#!/bin/bash
launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
echo $launch
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type c1.small --launch-specification $launch
This produces the result:
Unknown options: SecurityGroups:launch-wizard-6
Substituting the security group number has the same result.
aws ec2 describe-instances works perfectly, as does aws ec2 start-instances, so the environment and account information are properly set up, but I need to utilize spot pricing.
In fact, nothing is working as listed in this user documentation: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RequestSpotInstances.html
Thank you,
I know this is an old question, but in case somebody runs into it: I had the same issue recently with the CLI, and it was very hard to get all the parameters for request-spot-instances to work correctly.
#!/bin/bash
AWS_DEFAULT_OUTPUT="text"
UserData=$(base64 < userdata-current)
region="us-west-2"
price="0.03"
zone="us-west-2c"
aws ec2 request-spot-instances --region $region --spot-price $price --launch-specification "{ \"KeyName\": \"YourKey\", \"ImageId\": \"ami-3d50120d\" , \"UserData\": \"$UserData\", \"InstanceType\": \"r3.large\" , \"Placement\": {\"AvailabilityZone\": \"$zone\"}, \"IamInstanceProfile\": {\"Arn\": \"arn:aws:iam::YourAccount:YourProfile\"}, \"SecurityGroupIds\": [\"YourSecurityGroupId\"],\"SubnetId\": \"YourSubnectId\" }"
Basically, what I had to do was put my user data in an external file, load it into the UserData variable, and then pass that on the command line. Trying to get everything onto the command line, or using an external file for ec2-request-spot-instances, just kept failing. Note that other commands worked just fine, so this is specific to ec2-request-spot-instances.
I detailed more about what I ended up doing here.
You have to use a list in this case:
"SecurityGroups": ["string", ...]
so
"SecurityGroups":"launch-wizard-6"
becomes
"SecurityGroups":["launch-wizard-6"]
Anyway, I'm dealing with the CLI right now and I found it more useful to use an external JSON file.
Here is an example using Python:
myJson="file:///Users/xxx/Documents/Python/xxxxx/spotInstanceInformation.json"
x= subprocess.check_output(["/usr/local/bin/aws ec2 request-spot-instances --spot-price 0.2 --launch-specification "+myJson],shell=True)
print x
And the output is:
"SpotInstanceRequests": [
{
"Status": {
"UpdateTime": "2013-12-09T02:41:41.000Z",
"Code": "pending-evaluation",
"Message": "Your Spot request has been submitted for review, and is pending evaluation."
etc etc ....
Doc is here: http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html
FYI - I'm using the file:/// prefix because I'm on a Mac. If you are launching your bash script on Linux, you could just use myJson="/path/to/file/"
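For completeness, the external JSON file holds just the launch specification, something like this (a sketch; every value here is a placeholder):
{
    "ImageId": "ami-xxxxxxxx",
    "KeyName": "your-key",
    "InstanceType": "c1.medium",
    "SecurityGroups": ["launch-wizard-6"],
    "Placement": { "AvailabilityZone": "us-west-2c" }
}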
The first problem, here, is quoting and formatting:
$ launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
This isn't going to generate valid JSON: the block you copied from the help file includes a spurious closing brace from a nested object that you didn't include, the final closing brace of the object is missing, and the unescaped double quotes are disappearing.
But we're not even getting to the point where the JSON is actually validated, because with that space after the brace, the CLI assumes that SecurityGroups and launch-wizard-6 are additional command line options following the argument to --launch-specification:
$ echo $launch
{ImageId:ami-a999999,InstanceType:c1.medium} SecurityGroups:launch-wizard-6
That's probably not what you expected... so we'll fix the quoting so that the whole thing is passed as one long argument, once the JSON itself is valid.
From the perspective of just generating a valid JSON structure (not necessarily the right content), the data you are most likely trying to send would actually look like this, based on the docs:
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
Check that as structurally valid JSON, here.
With the braces, commas, and brackets fixed, the CLI stops throwing that error, using this formatting:
$ launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
$ echo $launch
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
That isn't to say the API might not subsequently reject the request due to something else incorrect or missing, but you were never actually getting to the point of sending anything to the API; this was failing local validation in the command line tools.
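Putting it back together, the call would look something like this (note the quotes around "$launch" so the shell passes it as a single argument; one thing the original command would likely still trip over is --type, which takes the Spot request type, one-time or persistent, rather than an instance type):
launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type one-time --launch-specification "$launch"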