Python: Boto3: get_metric_statistics() only accepts keyword arguments - amazon-web-services

Just started using Boto3 with Python so definitely new at this.
I'm trying to use a simple get_metric_statistics script to return information about CPUUtilization for an instance. Here is the script I'm looking to use:
import boto3
import datetime
cw = boto3.client('cloudwatch')
cw.get_metric_statistics(
    300,
    datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    datetime.datetime.utcnow(),
    'CPUUtilization',
    'AWS/EC2',
    'Average',
    {'InstanceId': 'i-11111111111'},
)
but I keep getting the following message:
Traceback (most recent call last):
  File "C:..../CloudWatch_GetMetricStatistics.py", line 13, in <module>
    {'InstanceId':'i-0c996c11414476c7c'},
  File "C:\Program Files\Python27\lib\site-packages\botocore\client.py", line 251, in _api_call
    "%s() only accepts keyword arguments." % py_operation_name)
TypeError: get_metric_statistics() only accepts keyword arguments.
I have:
Looked at the documentation on Boto3 and I believe I have got everything correctly written/included
Set the correct region/output format/security credentials in the .aws folder
Googled similar problems with put_metric_statistics, etc to try and figure it out
I'm still stuck on what I'm missing.
Any guidance would be much appreciated.
Many thanks
Ben

This works:
import boto3
import datetime
cw = boto3.client('cloudwatch')
cw.get_metric_statistics(
    Period=300,
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    MetricName='CPUUtilization',
    Namespace='AWS/EC2',
    Statistics=['Average'],
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-abcd1234'}]
)
To find the right values, I use the AWS Command-Line Interface (CLI):
aws cloudwatch list-metrics --namespace AWS/EC2 --metric-name CPUUtilization --max-items 1
It returns information such as:
{
    "Metrics": [
        {
            "Namespace": "AWS/EC2",
            "Dimensions": [
                {
                    "Name": "InstanceId",
                    "Value": "i-abcd1234"
                }
            ],
            "MetricName": "CPUUtilization"
        }
    ],
    "NextToken": "xxx"
}
You can then use these values to populate your get_metric_statistics() request (such as the Dimensions parameter).
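If you prefer to stay in Python, you can do the same discovery with boto3 itself; here is a minimal sketch (it assumes the region and credentials come from your .aws configuration, as above):
import boto3

cw = boto3.client('cloudwatch')

# List one CPUUtilization metric to see which Dimensions it carries
resp = cw.list_metrics(Namespace='AWS/EC2', MetricName='CPUUtilization')
for metric in resp['Metrics'][:1]:
    print(metric['Namespace'], metric['MetricName'], metric['Dimensions'])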

Refer to the documentation, and your error message:
get_metric_statistics() only accepts keyword arguments
Named arguments must be passed to the function as defined in the docs:
get_metric_statistics(**kwargs)

Have you used region_name when creating the client? Can you share your GitHub repo so we can see better what you are doing?

Related

Is SNOMED Integration For AWS Comprehend Medical Actually Working?

According to a recent update, the AWS Comprehend Medical service should now return SNOMED CT categories along with other medical terms.
I am running this in a Python 3.9 Lambda:
import json

def lambda_handler(event, context):
    clinical_note = "Patient X was diagnosed with insomnia."
    import boto3
    cm_client = boto3.client("comprehendmedical")
    response = cm_client.infer_snomedct(Text=clinical_note)
    print(response)
I get the following response:
{
    "errorMessage": "'ComprehendMedical' object has no attribute 'infer_snomedct'",
    "errorType": "AttributeError",
    "requestId": "560f1d3c-800a-46b6-a674-e0c3c3cc719f",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 7, in lambda_handler\n response = cm_client.infer_snomedct(Text=clinical_note)\n",
        " File \"/var/runtime/botocore/client.py\", line 643, in __getattr__\n raise AttributeError(\n"
    ]
}
So either I am missing something (probably something obvious) or maybe the method is not actually available yet? Anything to set me on the right path would be welcome.
This is most likely due to the default Boto3 version being out of date on my lambda.
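A quick way to confirm that from inside the function is to log the runtime's library versions; infer_snomedct only exists in newer botocore releases, so if the bundled copy predates it the attribute lookup fails. A minimal sketch (bundling a newer boto3 in the deployment package or a Lambda layer is the usual fix):
import boto3
import botocore

def lambda_handler(event, context):
    # The Lambda runtime ships its own boto3/botocore, which may lag behind PyPI
    print("boto3:", boto3.__version__, "botocore:", botocore.__version__)
    cm_client = boto3.client("comprehendmedical")
    # False on runtimes whose bundled botocore predates the SNOMED CT API
    print(hasattr(cm_client, "infer_snomedct"))
    return {"statusCode": 200}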

AWS Transcribe unable to start_transcription_job without LanguageCode in boto3

I have an audio file in S3.
I don't know the language of the audio file. So I need to use IdentifyLanguage for start_transcription_job().
LanguageCode will be blank since I don't know the language of the audio file.
Environment:
Using
Python 3.8 runtime,
boto3 version 1.16.5,
botocore version 1.19.5,
no Lambda Layer.
Here is my code for the Transcribe job:
mediaFileUri = 's3://'+ bucket_name+'/'+prefixKey
transcribe_client = boto3.client('transcribe')
response = transcribe_client.start_transcription_job(
    TranscriptionJobName="abc",
    IdentifyLanguage=True,
    Media={
        'MediaFileUri': mediaFileUri
    },
)
Then I get this error:
{
    "errorMessage": "Parameter validation failed:\nMissing required parameter in input: \"LanguageCode\"\nUnknown parameter in input: \"IdentifyLanguage\", must be one of: TranscriptionJobName, LanguageCode, MediaSampleRateHertz, MediaFormat, Media, OutputBucketName, OutputEncryptionKMSKeyId, Settings, ModelSettings, JobExecutionSettings, ContentRedaction",
    "errorType": "ParamValidationError",
    "stackTrace": [
        " File \"/var/task/app.py\", line 27, in TranscribeSoundToWordHandler\n response = response = transcribe_client.start_transcription_job(\n",
        " File \"/var/runtime/botocore/client.py\", line 316, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
        " File \"/var/runtime/botocore/client.py\", line 607, in _make_api_call\n request_dict = self._convert_to_request_dict(\n",
        " File \"/var/runtime/botocore/client.py\", line 655, in _convert_to_request_dict\n request_dict = self._serializer.serialize_to_request(\n",
        " File \"/var/runtime/botocore/validate.py\", line 297, in serialize_to_request\n raise ParamValidationError(report=report.generate_report())\n"
    ]
}
This error means that I must specify LanguageCode and that IdentifyLanguage is an invalid parameter.
I'm 100% sure the audio file exists in S3, but without LanguageCode it doesn't work, and the IdentifyLanguage parameter is reported as unknown.
I'm using a SAM application to test locally with this command:
sam local invoke MyHandler -e lambda\TheDirectory\event.json
I also ran cdk deploy and tested in the AWS Lambda console with the same event.json, but I still get the same error.
I think this is down to the Lambda execution environment; I didn't use any Lambda Layer.
I looked at these AWS Transcribe docs:
https://docs.aws.amazon.com/transcribe/latest/dg/API_StartTranscriptionJob.html
and these boto3 docs:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.start_transcription_job
Both clearly state that LanguageCode is not required and that IdentifyLanguage is a valid parameter.
So what am I missing? Any ideas? What should I do?
Update:
I kept searching and asked a couple of people online; I think I should build the function container first so that SAM packages boto3 into the container.
So what I do is cdk synth a template file:
cdk synth --no-staging > template.yaml
Then:
sam build --use-container
sam local invoke MyHandler78A95900 -e lambda\TheDirectory\event.json
But I still get the same error; here is the full stack trace as well:
[ERROR] ParamValidationError: Parameter validation failed:
Missing required parameter in input: "LanguageCode"
Unknown parameter in input: "IdentifyLanguage", must be one of: TranscriptionJobName, LanguageCode, MediaSampleRateHertz, MediaFormat, Media, OutputBucketName, OutputEncryptionKMSKeyId, Settings, JobExecutionSettings, ContentRedaction
Traceback (most recent call last):
  File "/var/task/app.py", line 27, in TranscribeSoundToWordHandler
    response = response = transcribe_client.start_transcription_job(
  File "/var/runtime/botocore/client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 607, in _make_api_call
    request_dict = self._convert_to_request_dict(
  File "/var/runtime/botocore/client.py", line 655, in _convert_to_request_dict
    request_dict = self._serializer.serialize_to_request(
  File "/var/runtime/botocore/validate.py", line 297, in serialize_to_request
    raise ParamValidationError(report=report.generate_report())
I really have no clue what I'm doing wrong here. I also reported a GitHub issue, but it seems they can't reproduce it.
Main question/problem:
Unable to start_transcription_job
without LanguageCode
with IdentifyLanguage=True
What could be causing this, and how can I solve it? (I don't know the language of the audio file; I want the language identified without providing LanguageCode.)
Check whether you are using the latest boto3 version.
boto3.__version__
'1.16.5'
I tried it and it works.
import boto3
transcribe = boto3.client('transcribe')
response = transcribe.start_transcription_job(
    TranscriptionJobName='Test-20201-27',
    IdentifyLanguage=True,
    Media={'MediaFileUri': 's3://BucketName/DemoData/Object.mp4'}
)
print(response)
{
    "TranscriptionJob": {
        "TranscriptionJobName": "Test-20201-27",
        "TranscriptionJobStatus": "IN_PROGRESS",
        "Media": {
            "MediaFileUri": "s3://BucketName/DemoData/Object.mp4"
        },
        "StartTime": "datetime.datetime(2020, 10, 27, 15, 41, 2, 599000, tzinfo=tzlocal())",
        "CreationTime": "datetime.datetime(2020, 10, 27, 15, 41, 2, 565000, tzinfo=tzlocal())",
        "IdentifyLanguage": "True"
    },
    "ResponseMetadata": {
        "RequestId": "9e4f94a4-20e4-4ca0-9c6e-e21a8934084b",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
            "content-type": "application/x-amz-json-1.1",
            "date": "Tue, 27 Oct 2020 14:41:02 GMT",
            "x-amzn-requestid": "9e4f94a4-20e4-4ca0-9c6e-e21a8934084b",
            "content-length": "268",
            "connection": "keep-alive"
        },
        "RetryAttempts": 0
    }
}
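Once the job finishes, the detected language is reported on the job itself; here is a rough sketch of polling for it (reusing the job name from above):
import time
import boto3

transcribe = boto3.client('transcribe')

# Poll until the job leaves IN_PROGRESS, then read the identified language
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName='Test-20201-27')['TranscriptionJob']
    if job['TranscriptionJobStatus'] in ('COMPLETED', 'FAILED'):
        break
    time.sleep(10)

print(job['TranscriptionJobStatus'], job.get('LanguageCode'), job.get('IdentifiedLanguageScore'))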
In the end I noticed this was because my packaged Lambda function wasn't being uploaded for some reason. Here is how I solved it after getting help from a couple of people.
First, modify the CDK stack that defines my Lambda function like this:
from aws_cdk import (
    aws_lambda as lambda_,
    core
)
from aws_cdk.aws_lambda_python import PythonFunction

class MyCdkStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # define lambda
        my_lambda = PythonFunction(
            self, 'MyHandler',
            entry='lambda/MyHandler',
            index='app.py',
            runtime=lambda_.Runtime.PYTHON_3_8,
            handler='MyHandler',
            timeout=core.Duration.seconds(10)
        )
This uses the aws-lambda-python module, which handles installing all the required modules into the Docker image.
Next, cdk synth a template file:
cdk synth --no-staging > template.yaml
At this point, it bundles everything inside the entry path defined in PythonFunction and installs all the dependencies listed in requirements.txt inside that entry path.
Next, build the Docker container:
$ sam build --use-container
Make sure the template.yaml file is in the root directory. This builds a Docker container, and the artifact is built inside the .aws-sam/build directory in my root directory.
Last step, invoke the function using sam:
sam local invoke MyHandler78A95900 -e path\to\event.json
Now start_transcription_job, as described in my question above, finally succeeds without any error.
In conclusion:
At the very beginning I only ran pip install boto3, which only installs boto3 on my local system.
Then I ran sam local invoke without first building the container with sam build --use-container.
Lastly, even when I did run sam build, what was defined in requirements.txt wasn't being bundled into .aws-sam/build, so I needed to use the aws-lambda-python module as described above (a version-logging sketch of the handler follows below).
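To make this easier to spot next time, here is a small sketch of the handler (the event keys are hypothetical) that logs which boto3/botocore actually got bundled before calling Transcribe; it assumes requirements.txt in the PythonFunction entry path pins something like boto3>=1.16.5:
import boto3
import botocore

def MyHandler(event, context):
    # If these print the old runtime versions, the requirements.txt copy was not bundled
    print("boto3:", boto3.__version__, "botocore:", botocore.__version__)
    transcribe = boto3.client('transcribe')
    # With a recent botocore, IdentifyLanguage passes parameter validation
    response = transcribe.start_transcription_job(
        TranscriptionJobName=event['JobName'],          # hypothetical event key
        IdentifyLanguage=True,
        Media={'MediaFileUri': event['MediaFileUri']},  # hypothetical event key
    )
    print(response['TranscriptionJob']['TranscriptionJobStatus'])
    return {'statusCode': 200}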

Google Cloud Scheduler: Why cloud function runs successfully but logger still shows error?

I set up a Google Cloud Scheduler job that triggers a Cloud Function over HTTP. I can be sure that the Cloud Function is triggered and runs successfully: it has produced the expected outcome.
However, the scheduler job still shows "failed" and the log entry looks like this:
{
    "insertId": "8ca551232347v49",
    "jsonPayload": {
        "jobName": "projects/john/locations/asia-southeast2/jobs/Get_food",
        "status": "UNKNOWN",
        "url": "https://asia-southeast2-john.cloudfunctions.net/Get_food",
        "#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished",
        "targetType": "HTTP"
    },
    "httpRequest": {},
    "resource": {
        "type": "cloud_scheduler_job",
        "labels": {
            "job_id": "Get_food",
            "location": "asia-southeast2",
            "project_id": "john"
        }
    },
    "timestamp": "2020-10-22T04:08:24.521610728Z",
    "severity": "ERROR",
    "logName": "projects/john/logs/cloudscheduler.googleapis.com%2Fexecutions",
    "receiveTimestamp": "2020-10-22T04:08:24.521610728Z"
}
I have pasted the cloud function code below with edits necessary to remove sensitive information:
import requests
import pymysql
from pymysql.constants import CLIENT
from google.cloud import storage
import os
import time
from DingBot import DING_BOT
from decouple import config
import datetime
BUCKET_NAME = 'john-test-dataset'
FOLDER_IN_BUCKET = 'compressed_data'
LOCAL_PATH = '/tmp/'
TIMEOUT_TIME = 500
def run(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    while True:
        # some code that will break the loop in about 200 seconds
        DING_BOT.send_text(msg)
    return 'ok'
What I can be sure of is that the line right before the end of the function, DING_BOT.send_text(msg), executed successfully: I received the text message.
What could be wrong here?
It's a common problem caused by the partial UI of the Google Cloud Console, so my hypothesis is that you set up your scheduler only with the console.
You need to create the job, or update it, with the command line (gcloud) or the API (gcloud is easier) in order to add the attempt-deadline parameter.
In fact, Cloud Scheduler has its own timeout (60s by default), and if the URL doesn't answer within this timeframe the call is considered a failure.
Increase this parameter to 250s and it should be OK.
Note: you can also set retry policies with the CLI; that could be useful if you need it!
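For example (a hedged sketch; the job name and location are taken from the log entry above), updating the existing HTTP job from the CLI could look like this:
gcloud scheduler jobs update http Get_food \
    --location=asia-southeast2 \
    --attempt-deadline=250s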

Missing field 'SetIdentifier' in Change when deleting/upserting with ChangeResourceRecordSets Boto3

I've struggled for a week trying to delete/upsert a simple Route 53 resource record with Boto3 v1.10.39.
My code:
resp = r53_client.change_resource_record_sets(
    HostedZoneId=<ZONE_ID>,
    ChangeBatch={
        'Comment': 'del_ip',
        'Changes': [
            {
                'Action': 'DELETE',
                'ResourceRecordSet': {
                    'Name': <SUBDOMAIN>,
                    'Type': 'A',
                    'Region': 'us-east-1',
                    'TTL': 300,
                    'ResourceRecords': [{'Value': <OLD_IP>}]
                }
            }
        ]
    }
)
Error msg:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/botocore/client.py", line 272, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/lib/python2.7/dist-packages/botocore/client.py", line 576, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.InvalidInput: An error occurred (InvalidInput) when calling the ChangeResourceRecordSets operation: Invalid request: Missing field 'SetIdentifier' in Change with [Action=DELETE, Name=<SUBDOMAIN>., Type=A, SetIdentifier=null]
Troubleshooting steps:
I've gone through all the available documentation (boto3, AWS CLI, AWS Developer Guide); the 'SetIdentifier' field is only required when it's not a simple RR (weighted, multivalue, failover, etc.):
SetIdentifier (string)
Resource record sets that have a routing policy other than simple: An identifier that differentiates among multiple resource record sets that have the same combination of name and type, such as multiple weighted resource record sets named acme.example.com that have a type of A. In a group of resource record sets that have the same name and type, the value of SetIdentifier must be unique for each resource record set.
For information about routing policies, see Choosing a Routing Policy in the Amazon Route 53 Developer Guide.
Tried the same operation with the AWS CLI; same error as when calling with boto3.
Called list_resource_record_sets and verified there is no set identifier value on the record.
Tried adding 'SetIdentifier': None, 'SetIdentifier': '' and 'SetIdentifier': ; none of them works, which makes sense as the original RR doesn't have this property at all.
Environment:
OS: amzn-ami-hvm-2015.09.1.x86_64-gp2
Python: v2.7.14
BOTO3: v1.10.39
botocore: 1.13.39
aws-cli: v1.16.301
I'm wondering if this is a bug in the AWS API.
I created an issue on the boto3 GitHub repo and got help there: removing the 'Region' attribute fixes the issue, and it works like a charm once I do that:
resp = r53_client.change_resource_record_sets(
    HostedZoneId=<ZONE_ID>,
    ChangeBatch={
        'Comment': 'del_ip',
        'Changes': [
            {
                'Action': 'DELETE',
                'ResourceRecordSet': {
                    'Name': <SUBDOMAIN>,
                    'Type': 'A',
                    'TTL': 300,
                    'ResourceRecords': [{'Value': <OLD_IP>}]
                }
            }
        ]
    }
)
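As a quick sanity check afterwards, the same list_resource_record_sets call used in the troubleshooting steps can confirm the record is gone (a sketch, with the same placeholders as above):
records = r53_client.list_resource_record_sets(
    HostedZoneId=<ZONE_ID>,
    StartRecordName=<SUBDOMAIN>,
    StartRecordType='A',
    MaxItems='1'
)
print(records['ResourceRecordSets'])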

Getting "PipelineActivity must have one and only one member" error when using Boto3 and create_pipeline

I have a Python program that uses boto3 to create an IoT Analytics data path. My program successfully creates the channel and the datastore, but it fails when I try to connect the two through the create_pipeline function. My code is as follows:
dactivity = [{
    "channel": {
        "channelName": channel["channelName"],
        "name": IoTAConfig["channelName"],
        "next": IoTAConfig["datastoreName"]
    },
    "datastore": {
        "datastoreName": ds["datastoreName"],
        "name": IoTAConfig["datastoreName"]
    }
}]

pipeline = iota.create_pipeline(
    pipelineActivities=dactivity,
    pipelineName=IoTAConfig["pipelineName"]
)
The error code is as follows:
Traceback (most recent call last):
  File "createFullGG.py", line 478, in <module>
    createIoTA()
  File "createFullGG.py", line 268, in createIoTA
    pipelineName = IoTAConfig["pipelineName"]
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 320, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 623, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.InvalidRequestException: An error occurred (InvalidRequestException) when calling the UpdatePipeline operation: PipelineActivity must have one and only one member
According to the documentation pipeline activities can contain from 1 to 25 entries as long as they are in an array of 1 object. I have no idea why this continues to fail. Any help is appreciated.
The public documentation looks a little confusing because of the way optional elements are represented; the good news is that this is an easy fix.
A corrected version of what you are trying would be written as:
dactivity = [
    {
        "channel": {
            "channelName": channel["channelName"],
            "name": IoTAConfig["channelName"],
            "next": IoTAConfig["datastoreName"]
        }
    },
    {
        "datastore": {
            "datastoreName": ds["datastoreName"],
            "name": IoTAConfig["datastoreName"]
        }
    }
]

response = client.create_pipeline(
    pipelineActivities=dactivity,
    pipelineName=IoTAConfig["pipelineName"]
)
So it's an array of activities that you are providing, like [ {A1}, {A2} ], if that makes sense.
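For example, if you later add a transform step, the same one-activity-per-element rule applies; this is only a hedged sketch (the lambda activity and all names here are hypothetical) inserting a Lambda activity between the channel and the datastore:
dactivity = [
    {
        "channel": {
            "name": "mychannel_activity",
            "channelName": "mychannel",
            "next": "mylambda_activity"
        }
    },
    {
        "lambda": {
            "name": "mylambda_activity",
            "lambdaName": "my-enrichment-function",
            "batchSize": 10,
            "next": "mydatastore_activity"
        }
    },
    {
        "datastore": {
            "name": "mydatastore_activity",
            "datastoreName": "mydatastore"
        }
    }
]

response = client.create_pipeline(
    pipelineName="mypipeline",
    pipelineActivities=dactivity
)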
Does that help?