AWS CLI syntax error: Error parsing parameter '--image': Expected: '=', received: ''' for input:

I am trying to follow a walkthrough that uses AWS for facial recognition on a live feed (see link below). I have completed the previous 7 steps without issue and am currently on step 8, where I have uploaded a picture to an S3 bucket and have to run the following command in cmd:
aws rekognition index-faces --image '{"S3Object":{"Bucket":"<S3BUCKET>","Name":"<MYFACE_KEY>.jpeg"}}' --collection-id "rekVideoBlog" --detection-attributes "ALL" --external-image-id "<YOURNAME>" --region us-west-2
I have done this with my own information filled in, but I get an error that states:
Error parsing parameter '--image': Expected: '=', received: ''' for input:
'{S3Object:{Bucket:,Name:.jpeg}}'
I have looked up what could be causing the issue; many solutions involve Windows systems having a different style of quoting compared to the walkthrough (i.e. switching single quotes to double quotes and double quotes to escaped quotes). No matter what I try, I cannot seem to figure out the issue. Does anyone know the correct syntax for this on a Windows system, or whether I am doing something else wrong?
Walkthrough: https://aws.amazon.com/blogs/machine-learning/easily-perform-facial-analysis-on-live-feeds-by-creating-a-serverless-video-analytics-environment-with-amazon-rekognition-video-and-amazon-kinesis-video-streams/
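In case it helps, the usual fix on Windows cmd is to wrap the JSON in double quotes and backslash-escape the inner double quotes, roughly like this (same placeholders as in the walkthrough):
aws rekognition index-faces --image "{\"S3Object\":{\"Bucket\":\"<S3BUCKET>\",\"Name\":\"<MYFACE_KEY>.jpeg\"}}" --collection-id "rekVideoBlog" --detection-attributes "ALL" --external-image-id "<YOURNAME>" --region us-west-2
Alternatively, the JSON can be saved to a local file (the name image.json here is just an example) and passed as --image file://image.json, which sidesteps shell quoting entirely.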

Related

AWS Service Quota: How to get service quota for Amazon S3 using boto3

I get the error "An error occurred (NoSuchResourceException) when calling the GetServiceQuota operation:" while trying to run the following boto3 Python code to get the value of the quota for "Buckets":
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
In the above code, QuotaCode "L-89BABEE8" is for "Buckets". I presumed the value of ServiceCode for Amazon S3 would be "s3" so I put it there but I guess that is wrong and throwing error. I tried finding the documentation around ServiceCode for S3 but could not find it. I even tried with "S3" (uppercase 'S' here), "Amazon S3" but that didn't work as well.
What I tried?
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
What I expected?
Output in the format below for S3. The example below is for EC2 and is the output of resp_ec2 = client_quota.get_service_quota(ServiceCode='ec2', QuotaCode='L-6DA43717').
I just played around with this and I'm seeing the same thing you are, empty responses from any service quota list or get command for service s3. However s3 is definitely the correct service code, because you see that come back from the service quota list_services() call. Then I saw there are also list and get commands for AWS default service quotas, and when I tried that it came back with data. I'm not entirely sure, but based on the docs I think any quota that can't be adjusted, and possibly any quota your account hasn't requested an adjustment for, will probably come back with an empty response from get_service_quota() and you'll need to run get_aws_default_service_quota() instead.
So I believe what you need to do is probably run this first:
client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
And if that throws an exception, then run the following:
client_quota.get_aws_default_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
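Putting that together, a small sketch of the try-then-fallback logic might look like this (same service and quota codes as above; it assumes the NoSuchResourceException surfaces as a standard botocore ClientError):
import boto3
from botocore.exceptions import ClientError

client_quota = boto3.client('service-quotas')

def get_s3_bucket_quota():
    # Try the account-specific (applied) quota first.
    try:
        return client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
    except ClientError as err:
        if err.response['Error']['Code'] != 'NoSuchResourceException':
            raise
        # No applied value recorded for this account; fall back to the AWS default quota.
        return client_quota.get_aws_default_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')

print(get_s3_bucket_quota()['Quota']['Value'])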

AWS Glue LAUNCH ERROR | java.net.URISyntaxException: Illegal character in scheme name at index 0: s3://py-function-bucket/aws_custom_functions.zip

I've started to develop an ETL job in AWS Glue, using a notebook to verify the results of each step. When running one cell at a time, the job runs correctly. However, when using the run option for Glue, the following error appears:
LAUNCH ERROR | java.net.URISyntaxException: Illegal character in scheme name at index 0:
s3://py-function-bucket/aws_custom_functions.zip
The magics I'm using to configure the notebook are as follows:
%extra_py_files s3://py-function-bucket/aws_custom_functions.zip
%number_of_workers 2
%%configure
{
    "region": "region",
    "iam_role": "arn:aws:iam::glue_role"
}
That is, I have a bucket with a zip file that contains additional functions to use in the notebook, which need to be imported.
I have tried using the full URL instead of the URI:
%extra_py_files https://py-function-bucket.s3.region.amazonaws.com/aws_custom_functions.zip
This shows the same error as before, but with the full URL.
Finally, I tried running the cell with the magics and then deleting the cell. That only means the input argument for importing the additional functions is not passed, which in turn causes an error that the functions are not defined.
Is anyone else having this issue? A workaround would be declaring the functions in the notebook, but that would mean doing the same for every job I plan to develop.

aws ssm register-task-with-maintenance-window reports error as Invalid JSON?

I am running the command below and it is reporting an Invalid JSON error:
AWS SSM Command:
aws ssm register-task-with-maintenance-window --window-id mw-06344a0189162e0b3 --targets Key=WindowTargetIds,Values=ae5c621a-17d8-454f-977f-46298f1e6eb8 --task-arn AWS-RunPatchBaseline --service-role-arn arn:aws:iam::xxxxxxxxxxxxx:role/AmazonSSMRoleForInstancesQuickSetup --task-type RUN_COMMAND --max-concurrency 2 --max-errors 1 --priority 1 --task-invocation-parameters '{\"Operation\":{\"Values\":[\"Install\"]}}'
Error:
Error parsing parameter '--task-invocation-parameters': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {\"Operation\":{\"Values\":[\"Install\"]}}
I'm using the AWS Documentation.
I used double quotes instead of the single quotes, but still no luck.
Finally, I created an issue on GitHub for the AWS docs.
Kindly assist if you have come across an issue like this before.
The error says it wants "double quotes", but your example has single quotes. Maybe the docs have been updated since your post, but they currently recommend the following format.
Linux/Windows:
--task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
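If you would rather keep the JSON form instead of the shorthand, the equivalent structure should look something like the line below (shown for a Linux shell with single quotes around the value; on Windows cmd you would wrap the value in double quotes and backslash-escape the inner ones). Treat it as a sketch of the same parameters rather than a copy from the docs:
--task-invocation-parameters '{"RunCommand":{"Parameters":{"Operation":["Install"]}}}'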

AWS | Boto3 | RDS | function DownloadDBLogFilePortion | cannot download a log file because it contains binary data

When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page=request_paginated(rds,DBInstanceIdentifier=id_rds,LogFileName=log,NumberOfLines=1000)
rds-> boto3 resource
And this is the definition of my function:
def request_paginated(rds,**kwargs):
return rds.download_db_log_file_portion(**kwargs)
As I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
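For context, the paginated loop described above looks roughly like this simplified sketch (the instance identifier and log file name are placeholders, and rds here is assumed to be a plain boto3 RDS client):
import time
import boto3

rds = boto3.client('rds')

def download_full_log(db_instance_id, log_file_name):
    # Page through the log file with Marker until no more data is pending.
    marker = '0'
    chunks = []
    while True:
        resp = rds.download_db_log_file_portion(
            DBInstanceIdentifier=db_instance_id,
            LogFileName=log_file_name,
            Marker=marker,
            NumberOfLines=1000)
        chunks.append(resp.get('LogFileData', ''))
        if not resp.get('AdditionalDataPending'):
            break
        marker = resp['Marker']
        time.sleep(1)  # simple throttling between pages
    return ''.join(chunks)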
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS Support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team is aware of this issue and is working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java-based CLI.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
InvalidParameterValue in boto means the data passed does not comply. It is probably an invalid name that you specified; possibly something is wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact file name that exists inside the RDS instance.
For the log file, please make sure it EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Do check Marker (string) and NumberOfLines (integer) as well, for a mismatched type or an out-of-range value. Skip them since they are not required, then test them later.

aws ec2 request-spot-instances CLI issues

I am trying to start a couple of spot instances from a simple script, and the syntax supplied in the AWS documentation and the aws ec2 request-spot-instances help output is listed in either Java or JSON syntax. How does one enter the parameters in the JSON syntax from inside a shell script?
aws --version
aws-cli/1.2.6 Python/2.6.5 Linux/2.6.21.7-2.fc8xen
aws ec2 request-spot-instances help
At the start of "launch specification" it lists the JSON syntax:
--launch-specification (structure)
Specifies additional launch instance information.
JSON Syntax:
{
"ImageId": "string",
"KeyName": "string",
}, ....
"EbsOptimized": true|false,
"SecurityGroupIds": ["string", ...],
"SecurityGroups": ["string", ...]
}
I have tried every possible combination of the following (adding and moving brackets and quotes, changing options, etc.), all to no avail. What would be the correct formatting of the variable $launch below to make this work? Other command variations such as "ec2-request-spot-instances" do not work in my environment, nor does it work if I substitute --spot-price with -p.
#!/bin/bash
launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
echo $launch
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type c1.small --launch-specification $launch
This produces the result:
Unknown options: SecurityGroups:launch-wizard-6
Substituting the security group number has the same result.
aws ec2 describe-instances works perfectly, as does aws ec2 start-instance, so the environment and account information are properly set up, but I need to use spot pricing.
In fact, nothing is working as listed in this user documentation: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RequestSpotInstances.html
Thank you,
I know this is an old question, but in case somebody runs into it: I had the same issue recently with the CLI. It was very hard to get all the parameters for request-spot-instances to work correctly.
#!/bin/bash
AWS_DEFAULT_OUTPUT="text"
UserData=$(base64 < userdata-current)
region="us-west-2"
price="0.03"
zone="us-west-2c"
aws ec2 request-spot-instances --region $region --spot-price $price --launch-specification "{ \"KeyName\": \"YourKey\", \"ImageId\": \"ami-3d50120d\" , \"UserData\": \"$UserData\", \"InstanceType\": \"r3.large\" , \"Placement\": {\"AvailabilityZone\": \"$zone\"}, \"IamInstanceProfile\": {\"Arn\": \"arn:aws:iam::YourAccount:YourProfile\"}, \"SecurityGroupIds\": [\"YourSecurityGroupId\"],\"SubnetId\": \"YourSubnectId\" }"
Basically, what I had to do was put my user data in an external file, load it into the UserData variable, and then pass that on the command line. Trying to get everything directly on the command line, or using an external file for ec2-request-spot-instances, just kept failing. Note that other commands worked just fine, so this is specific to ec2-request-spot-instances.
I detailed more about what I ended up doing here.
You have to use a list in this case:
"SecurityGroups": ["string", ...]
so
"SecurityGroups":"launch-wizard-6"
becomes
"SecurityGroups":["launch-wizard-6"]
Anyway, I'm dealing with the CLI right now and I found it more useful to use an external JSON file.
Here is an example using Python:
myJson="file:///Users/xxx/Documents/Python/xxxxx/spotInstanceInformation.json"
x= subprocess.check_output(["/usr/local/bin/aws ec2 request-spot-instances --spot-price 0.2 --launch-specification "+myJson],shell=True)
print x
And the output is:
"SpotInstanceRequests": [
{
"Status": {
"UpdateTime": "2013-12-09T02:41:41.000Z",
"Code": "pending-evaluation",
"Message": "Your Spot request has been submitted for review, and is pending evaluation."
etc etc ....
Doc is here : http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html
FYI - I'm appending file:/// because I'm using a Mac. If you are launching your bash script on Linux, you could just use myJson="/path/to/file/"
The first problem here is quoting and formatting:
$ launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
This isn't going to generate valid JSON, because the block you copied from the help output includes a spurious closing brace from a nested object that you didn't include, the final closing brace is missing, and the unescaped double quotes are disappearing.
But we're not really getting to the point where the JSON is actually being validated, because with that space after the brace, the CLI assumes that SecurityGroups and launch-wizard-6 are additional command line options following the argument to --launch-specification:
$ echo $launch
{ImageId:ami-a999999,InstanceType:c1.medium} SecurityGroups:launch-wizard-6
That's probably not what you expected, so we'll fix the quoting so that it is passed as one long argument once the JSON is valid.
From the perspective of just generating valid json structures (not necessarily content), the data you are most likely trying to send would actually look like this, based on the docs:
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
Check that as structurally valid JSON, here.
With the bracing, commas, and bracketing fixed, the CLI stops throwing that error when you use this formatting:
$ launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
$ echo $launch
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
That isn't to say the API might not subsequently reject the request due to something else incorrect or missing, but you were never actually getting to the point of sending anything to the API; this was failing local validation in the command line tools.
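Putting it together, a complete invocation along the lines of the original script would look roughly like this (same placeholder AMI, price, and security group as in the question; note that --type on this command expects one-time or persistent, so the instance type stays inside the launch specification):
launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type one-time --launch-specification "$launch"
Quoting "$launch" when it is expanded keeps the whole JSON as a single argument, which also protects you if you later add fields that contain spaces.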