I have an audio file in S3.
I don't know the language of the audio file, so I need to use IdentifyLanguage with start_transcription_job(). LanguageCode will be left out since I don't know the language of the audio file.
Environment
Using the Python 3.8 runtime, boto3 version 1.16.5, botocore version 1.19.5, and no Lambda Layer.
Here is my code for the Transcribe job:
mediaFileUri = 's3://' + bucket_name + '/' + prefixKey

transcribe_client = boto3.client('transcribe')
response = transcribe_client.start_transcription_job(
    TranscriptionJobName="abc",
    IdentifyLanguage=True,
    Media={
        'MediaFileUri': mediaFileUri
    },
)
Then I get this error:
{
    "errorMessage": "Parameter validation failed:\nMissing required parameter in input: \"LanguageCode\"\nUnknown parameter in input: \"IdentifyLanguage\", must be one of: TranscriptionJobName, LanguageCode, MediaSampleRateHertz, MediaFormat, Media, OutputBucketName, OutputEncryptionKMSKeyId, Settings, ModelSettings, JobExecutionSettings, ContentRedaction",
    "errorType": "ParamValidationError",
    "stackTrace": [
        " File \"/var/task/app.py\", line 27, in TranscribeSoundToWordHandler\n response = response = transcribe_client.start_transcription_job(\n",
        " File \"/var/runtime/botocore/client.py\", line 316, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
        " File \"/var/runtime/botocore/client.py\", line 607, in _make_api_call\n request_dict = self._convert_to_request_dict(\n",
        " File \"/var/runtime/botocore/client.py\", line 655, in _convert_to_request_dict\n request_dict = self._serializer.serialize_to_request(\n",
        " File \"/var/runtime/botocore/validate.py\", line 297, in serialize_to_request\n raise ParamValidationError(report=report.generate_report())\n"
    ]
}
This error means that I must specify LanguageCode and that IdentifyLanguage is treated as an invalid parameter.
I'm 100% sure the audio file exists in S3, but without LanguageCode the call doesn't work, and IdentifyLanguage is rejected as an unknown parameter.
I'm using a SAM application to test locally with this command:
sam local invoke MyHandler -e lambda\TheDirectory\event.json
I also ran cdk deploy and tested the same event.json in the AWS Lambda console, but I still get the same error.
I think this is related to the Lambda execution environment; I didn't use any Lambda Layer.
I looked at the AWS Transcribe docs:
https://docs.aws.amazon.com/transcribe/latest/dg/API_StartTranscriptionJob.html
and the boto3 docs:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.start_transcription_job
Both clearly state that LanguageCode is not required and that IdentifyLanguage is a valid parameter.
So what am I missing? Any idea on this? What should I do?
Update:
I kept searching and asked a couple of people online; I think I should build the function container first so that SAM packages boto3 into the container.
So what I did is cdk synth a template file:
cdk synth --no-staging > template.yaml
Then:
sam build --use-container
sam local invoke MyHandler78A95900 -e lambda\TheDirectory\event.json
But I still get the same error; here is the stack trace as well:
[ERROR] ParamValidationError: Parameter validation failed:
Missing required parameter in input: "LanguageCode"
Unknown parameter in input: "IdentifyLanguage", must be one of: TranscriptionJobName, LanguageCode, MediaSampleRateHertz, MediaFormat, Media, OutputBucketName, OutputEncryptionKMSKeyId, Settings, JobExecutionSettings, ContentRedaction
Traceback (most recent call last):
File "/var/task/app.py", line 27, in TranscribeSoundToWordHandler
response = response = transcribe_client.start_transcription_job(
File "/var/runtime/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 607, in _make_api_call
request_dict = self._convert_to_request_dict(
File "/var/runtime/botocore/client.py", line 655, in _convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
File "/var/runtime/botocore/validate.py", line 297, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
I really have no clue what I'm doing wrong here. I also reported a GitHub issue here, but it seems like the issue can't be reproduced.
Main Question/Problem:
Unable to start_transcription_job
without LanguageCode
with IdentifyLanguage=True
What could possibly cause this, and how can I solve it? (I don't know the language of the audio file; I want to identify the language of the audio file without giving a LanguageCode.)
Check whether you are using the latest boto3 version.
boto3.__version__
'1.16.5'
I tried it and it works.
import boto3

transcribe = boto3.client('transcribe')
response = transcribe.start_transcription_job(
    TranscriptionJobName='Test-20201-27',
    IdentifyLanguage=True,
    Media={'MediaFileUri': 's3://BucketName/DemoData/Object.mp4'}
)
print(response)
{
    "TranscriptionJob": {
        "TranscriptionJobName": "Test-20201-27",
        "TranscriptionJobStatus": "IN_PROGRESS",
        "Media": {
            "MediaFileUri": "s3://BucketName/DemoData/Object.mp4"
        },
        "StartTime": "datetime.datetime(2020, 10, 27, 15, 41, 2, 599000, tzinfo=tzlocal())",
        "CreationTime": "datetime.datetime(2020, 10, 27, 15, 41, 2, 565000, tzinfo=tzlocal())",
        "IdentifyLanguage": "True"
    },
    "ResponseMetadata": {
        "RequestId": "9e4f94a4-20e4-4ca0-9c6e-e21a8934084b",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
            "content-type": "application/x-amz-json-1.1",
            "date": "Tue, 27 Oct 2020 14:41:02 GMT",
            "x-amzn-requestid": "9e4f94a4-20e4-4ca0-9c6e-e21a8934084b",
            "content-length": "268",
            "connection": "keep-alive"
        },
        "RetryAttempts": 0
    }
}
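To wait for the job and read back the language Transcribe identified, you can poll get_transcription_job; a minimal sketch (the job name and sleep interval are just examples, reusing the transcribe client from above):
import time

while True:
    job = transcribe.get_transcription_job(TranscriptionJobName='Test-20201-27')
    status = job['TranscriptionJob']['TranscriptionJobStatus']
    if status in ('COMPLETED', 'FAILED'):
        break
    time.sleep(10)  # poll until the job finishes

if status == 'COMPLETED':
    # once language identification has run, the job reports the detected LanguageCode
    print(job['TranscriptionJob']['LanguageCode'])
    print(job['TranscriptionJob']['Transcript']['TranscriptFileUri'])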
In the end I noticed this was because my packaged Lambda function wasn't actually being uploaded for some reason. Here is how I solved it after getting help from a couple of people.
First, modify the CDK stack which defines my Lambda function like this:
from aws_cdk import (
    aws_lambda as lambda_,
    core
)
from aws_cdk.aws_lambda_python import PythonFunction


class MyCdkStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # define lambda
        my_lambda = PythonFunction(
            self, 'MyHandler',
            entry='lambda/MyHandler',
            index='app.py',
            runtime=lambda_.Runtime.PYTHON_3_8,
            handler='MyHandler',
            timeout=core.Duration.seconds(10)
        )
This uses the aws-lambda-python module, which handles installing all the required modules into the Docker image.
Next, cdk synth a template file:
cdk synth --no-staging > template.yaml
At this point, it bundles everything inside the entry path defined in PythonFunction and installs all the dependencies listed in the requirements.txt inside that entry path.
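For example, the requirements.txt inside the lambda/MyHandler entry path can simply pin the SDK version to bundle (the exact pins here are just an illustration):
boto3>=1.16.5
botocore>=1.19.5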
Next, build the docker container
$ sam build --use-container
Make sure the template.yaml file is in the root directory. This builds a Docker container, and the artifact is built inside the .aws-sam/build directory in my root directory.
Last step, invoke the function using sam:
sam local invoke MyHandler78A95900 -e path\to\event.json
Now it finally calls start_transcription_job as stated in my question above without any error.
In Conclusion:
At the very beginning I only ran pip install boto3, which only installs boto3 on my local system.
Then I ran sam local invoke without building the container first with sam build --use-container.
Lastly, when I did run sam build, at that point it didn't bundle what was defined inside requirements.txt into .aws-sam/build, which is why I needed the aws-lambda-python module as stated above.
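A quick way to confirm which boto3 the packaged function actually uses is to log the version from inside the handler; a minimal sketch (the handler name just mirrors my setup):
import boto3
import botocore

def MyHandler(event, context):
    # these are the versions bundled into .aws-sam/build, not the ones installed locally
    print('boto3:', boto3.__version__, 'botocore:', botocore.__version__)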
I have a Python Flask app that I am able to run locally. I have been able to run it with Python 3.8 and 3.9. When running locally, the app is able to connect both to a local instance of Cosmos DB using the Gremlin API and to an instance hosted on Microsoft Azure. When I or a coworker deploy the Flask app as an Azure App Service (Python 3.8.6), we get an error when trying to query Cosmos DB. The stack trace and code are below. I am not sure why I am getting
TypeError: __init__() missing 1 required positional argument: 'max_workers'
since ThreadPoolExecutor has default arguments for all of its parameters. I have attempted to specify workers when I initialize the Gremlin client, but it makes no difference. It looks like the client will default a number of workers anyway if no value is passed in. I have also specified workers for gunicorn when running on Azure, but it does not make a difference. I am running on Windows when developing locally, but the App Service runs on Linux when deployed to Azure. The Flask app does start fine and I can hit other Flask endpoints that do not query Cosmos.
gunicorn --bind=0.0.0.0 --workers=4 app:app
Stack trace:
File "/tmp/8d9a55045f9e6ed/myApp/grem.py", line xxxx, in __init__
res = self.call_graph(gremlin_query)
File "/tmp/8d9a55045f9e6ed/myApp/grem.py", line xxxx, in call_graph
callback = self.client.submitAsync(query)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/client.py", line 144, in submitAsync
return conn.write(message)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/connection.py", line 55, in write
self.connect()
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/connection.py", line 45, in connect
self._transport.connect(self._url, self._headers)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/aiohttp/transport.py", line 77, in connect
self._loop.run_until_complete(async_connect())
File "/opt/python/3.8.6/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/gremlin_python/driver/aiohttp/transport.py", line 67, in async_connect
self._websocket = await self._client_session.ws_connect(url, **self._aiohttp_kwargs, headers=headers)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/client.py", line 754, in _ws_connect
resp = await self.request(
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/client.py", line 520, in _request
conn = await self._connector.connect(
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 535, in connect
proto = await self._create_connection(req, traces, timeout)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 892, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 999, in _create_direct_connection
hosts = await asyncio.shield(host_resolved)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/connector.py", line 865, in _resolve_host
addrs = await self._resolver.resolve(host, port, family=self._family)
File "/tmp/8d9a55045f9e6ed/antenv/lib/python3.8/site-packages/aiohttp/resolver.py", line 31, in resolve
infos = await self._loop.getaddrinfo(
File "/opt/python/3.8.6/lib/python3.8/asyncio/base_events.py", line 825, in getaddrinfo
return await self.run_in_executor(
File "/opt/python/3.8.6/lib/python3.8/asyncio/base_events.py", line 780, in run_in_executor
executor = concurrent.futures.ThreadPoolExecutor()
TypeError: __init__() missing 1 required positional argument: 'max_workers'
requirements.txt
Flask
Flask-SQLAlchemy
Flask-WTF
Flask-APScheduler
python-dateutil
azure-storage-file-share
gremlinpython
openpyxl
python-dotenv
python-logstash-async
futures
Gremlin Client Initialization
from gremlin_python.driver import client, serializer  # imports used by this snippet

self.client = client.Client(end_point,
                            'g',
                            username=username,
                            password=password,
                            message_serializer=serializer.GraphSONSerializersV2d0()
                            )
Send query to Gremlin; this line causes the exception:
callback = self.client.submitAsync(query)
I got past this issue by creating a new requirements.txt, executing pip freeze > requirements.txt against my local code, and then deploying my application with the updated file. I am thinking that Azure might have been providing me with a different version of aiohttp that was not compatible with Python 3.8.6, but I am not sure. In any case, pinning all of these dependency versions got me past the issue. Hopefully this helps someone else down the road.
aenum==2.2.6
aiohttp==3.7.4
APScheduler==3.8.0
async-timeout==3.0.1
attrs==21.2.0
azure-core==1.16.0
azure-cosmos==4.2.0
azure-functions==1.7.2
azure-storage-blob==12.8.1
azure-storage-file-share==12.5.0
certifi==2021.5.30
cffi==1.14.6
chardet==3.0.4
click==8.0.1
colorama==0.4.4
cryptography==3.4.7
et-xmlfile==1.1.0
Flask==1.1.2
Flask-APScheduler==1.12.2
gremlinpython==3.5.1
idna==2.10
isodate==0.6.0
itsdangerous==2.0.1
Jinja2==3.0.1
limits==1.5.1
MarkupSafe==2.0.1
msrest==0.6.21
multidict==5.1.0
neo4j==4.3.2
nest-asyncio==1.5.1
oauthlib==3.1.1
openpyxl==3.0.7
pycparser==2.20
pylogbeat==2.0.0
pyparsing==2.4.7
python-dateutil==2.8.2
python-dotenv==0.19.0
python-logstash-async==2.3.0
pytz==2021.1
PyYAML==5.4.1
requests==2.25.1
requests-oauthlib==1.3.0
six==1.16.0
tornado==6.1
typing-extensions==3.10.0.0
tzlocal==2.1
urllib3==1.26.6
Werkzeug==2.0.1
yarl==1.6.3
Please note that client.submitAsync(query) has been deprecated in favour of client.submit_async; see https://github.com/apache/tinkerpop/blob/master/gremlin-python/src/main/python/gremlin_python/driver/client.py
def submitAsync(self, message, bindings=None, request_options=None):
    warnings.warn(
        "gremlin_python.driver.client.Client.submitAsync will be replaced by "
        "gremlin_python.driver.client.Client.submit_async.",
        DeprecationWarning)
    return self.submit_async(message, bindings, request_options)

def submit_async(self, message, bindings=None, request_options=None):
    ...
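For reference, the newer pattern looks roughly like this (a sketch, using the client from the question above):
# submit_async returns a future; its result is a ResultSet that can be drained with all()
future = self.client.submit_async(query)
result_set = future.result()
results = result_set.all().result()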
I'm trying to use the custom entity recognizer I just trained on Amazon Web Services (AWS).
The training worked so far.
However, when I try to recognize my entities with AWS Lambda (code below) using the given endpoint ARN, I get the following error (even though AWS should use the latest version of the botocore/boto3 framework, "EndpointArn" is not available (Docs)):
Response:
{
    "errorMessage": "Parameter validation failed:\nUnknown parameter in input: \"EndpointArn\", must be one of: Text, LanguageCode",
    "errorType": "ParamValidationError",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 21, in lambda_handler\n entities = client.detect_entities(\n",
        " File \"/var/runtime/botocore/client.py\", line 316, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
        " File \"/var/runtime/botocore/client.py\", line 607, in _make_api_call\n request_dict = self._convert_to_request_dict(\n",
        " File \"/var/runtime/botocore/client.py\", line 655, in _convert_to_request_dict\n request_dict = self._serializer.serialize_to_request(\n",
        " File \"/var/runtime/botocore/validate.py\", line 297, in serialize_to_request\n raise ParamValidationError(report=report.generate_report())\n"
    ]
}
I fixed this error with the first 4 lines in my code:
#---The hack I found on stackoverflow----
import sys
from pip._internal import main
main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0,'/tmp/')
#----------------------------------------
import json
import boto3
client = boto3.client('comprehend', region_name='us-east-1')
text = "Thats my nice text with different entities!"
entities = client.detect_entities(
    Text=text,
    LanguageCode="de",  # If you specify an endpoint, Amazon Comprehend uses the language of your custom model, and it ignores any language code that you provide in your request.
    EndpointArn="arn:aws:comprehend:us-east-1:215057830319:entity-recognizer/MyFirstRecognizer"
)
However, I still get one more error I cannot fix:
Response:
{
    "errorMessage": "An error occurred (ValidationException) when calling the DetectEntities operation: 1 validation error detected: Value 'arn:aws:comprehend:us-east-1:XXXXXXXXXXXX:entity-recognizer/MyFirstRecognizer' at 'endpointArn' failed to satisfy constraint: Member must satisfy regular expression pattern: arn:aws(-[^:]+)?:comprehend:[a-zA-Z0-9-]*:[0-9]{12}:entity-recognizer-endpoint/[a-zA-Z0-9](-*[a-zA-Z0-9])*",
    "errorType": "ClientError",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 25, in lambda_handler\n entities = client.detect_entities(\n",
        " File \"/tmp/botocore/client.py\", line 316, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
        " File \"/tmp/botocore/client.py\", line 635, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
    ]
}
This error also occurs if I use the NodeJS framework with the given endpoint. The funny thing I should mention is that every endpoint ARN I found (in tutorials) looks exactly like mine and does not match the regex pattern returned in the error.
I'm not quite sure if I'm doing something wrong here or if it is a bug in the AWS cloud (or SDK). Maybe somebody can reproduce this error and/or find a solution (or even a hack) for this problem.
Cheers
An endpoint ARN is a different AWS resource from a model ARN. A model ARN refers to a custom model, while an endpoint hosts that model. In your code you are passing the model ARN instead of the endpoint ARN, which is what causes the error to be raised.
You can differentiate between the two ARNs on the basis of the prefix.
endpoint arn - arn:aws:comprehend:us-east-1:XXXXXXXXXXXX:entity-recognizer-endpoint/xxxxxxxxxx
model arn - arn:aws:comprehend:us-east-1:XXXXXXXXXXXX:entity-recognizer/MyFirstRecognizer
You can read more about Comprehend custom endpoints and its pricing on the documentation page.
https://docs.aws.amazon.com/comprehend/latest/dg/detecting-cer-real-time.html
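With an endpoint created for the recognizer, the call from the question then looks roughly like this (a sketch; the endpoint name and account id below are placeholders):
import boto3

client = boto3.client('comprehend', region_name='us-east-1')

entities = client.detect_entities(
    Text="Thats my nice text with different entities!",
    LanguageCode="de",  # ignored when an endpoint is supplied
    # note the entity-recognizer-endpoint prefix instead of entity-recognizer
    EndpointArn="arn:aws:comprehend:us-east-1:123456789012:entity-recognizer-endpoint/my-endpoint"
)
print(entities['Entities'])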
We are using CDK to build our infrastructure configuration. In case it helps, I create my template.yml for SAM with cdk synth <stack_name> --no-staging > template.yml. I am using the AWS Toolkit to invoke/debug my Lambda functions in IntelliJ, which works fine. However, if I run sam local start-api in the terminal and send a request to one of my functions, it returns an error with this stack trace:
Traceback (most recent call last):
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 2317, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1840, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1743, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/_compat.py", line 36, in reraise
raise value
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1838, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/flask/app.py", line 1824, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/samcli/local/apigw/local_apigw_service.py", line 203, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream_writer, stderr=self.stderr)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/samcli/commands/local/lib/local_lambda.py", line 84, in invoke
function = self.provider.get(function_name)
File "/usr/local/Cellar/aws-sam-cli/0.53.0/libexec/lib/python3.7/site-packages/samcli/lib/providers/sam_function_provider.py", line 65, in get
raise ValueError("Function name is required")
ValueError: Function name is required
This is the command I run
sam local start-api --env-vars env.json --docker-network test
which gives the output
Mounting None at http://127.0.0.1:3000/v1 [GET, OPTIONS, POST]
Mounting None at http://127.0.0.1:3000/v1/user [GET, OPTIONS, POST]
You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2020-08-22 16:32:46 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2020-08-22 16:33:03 Exception on /v1/user [OPTIONS]
And here is the env.json I am using as environment variables for my functions
{
    "tenantGetV1Function54F63CB9": {
        "db": "alpha",
        "connectionString": "mongodb://mongo"
    },
    "tenantPostV1FunctionA56822D0": {
        "db": "alpha",
        "connectionString": "mongodb://mongo"
    },
    "userGetV1Function7E6E55C2": {
        "db": "alpha",
        "connectionString": "mongodb://mongo"
    },
    "userPostV1FunctionEB035EB0": {
        "db": "alpha",
        "connectionString": "mongodb://mongo"
    }
}
I am also running Docker Desktop on macOS operating system.
EDIT: Here you can find the simplified template.yml with only one endpoint (one function definition), which is for the tenantGetV1Function54F63CB9 function. It maps to the GET /v1 endpoint. I didn't want to include the whole template for 4 functions, which is around a thousand lines of .yml code.
https://gist.github.com/flexelem/d887136484d508e313e0a745c30a2d97
The problem is solved if I create the LambdaIntegration by passing the Function instance instead of its Alias instance in CDK. We were creating the lambdas along with an alias and then passing the alias to their associated Resource instance from API Gateway.
This is the way we were creating them:
Function tenantGetV1Function = Function.Builder.create(this, "tenantGetV1Function")
        .role(roleLambda)
        .runtime(Runtime.JAVA_8)
        .code(lambdaCode)
        .handler("com.yolda.tenant.lambda.GetTenantHandler::handleRequest")
        .memorySize(512)
        .timeout(Duration.minutes(1))
        .environment(environment)
        .description(Instant.now().toString())
        .build();

Alias tenantGetV1Alias = Alias.Builder.create(this, "tenantGetV1Alias")
        .aliasName("live")
        .version(tenantGetV1Function.getCurrentVersion())
        .provisionedConcurrentExecutions(provisionedConcurrency)
        .build();

Resource tenantResource = v1Resource.addResource("{tenantId}");
tenantResource.addMethod("GET", LambdaIntegration.Builder.create(tenantGetV1Alias).build(), options);
And if I replace tenantGetV1Alias with tenantGetV1Function, then the sam build command successfully builds all the functions, which lets sam local start-api spin them up.
Resource tenantResource = v1Resource.addResource("{tenantId}");
tenantResource.addMethod("GET", LambdaIntegration.Builder.create(tenantGetV1Function).build(), options);
Somehow, SAM is not able to get the function name property from the CloudFormation template if we assign aliases.
Just started using Boto3 with Python so definitely new at this.
I'm trying to use a simple get_metric_statistics script to return information about CPUUtilization for an instance. Here is the script I'm looking to use:
import boto3
import datetime

cw = boto3.client('cloudwatch')
cw.get_metric_statistics(
    300,
    datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    datetime.datetime.utcnow(),
    'CPUUtilization',
    'AWS/EC2',
    'Average',
    {'InstanceId': 'i-11111111111'},
)
but I keep getting the following message:
Traceback (most recent call last):
File "C:..../CloudWatch_GetMetricStatistics.py", line 13, in <module>
{'InstanceId':'i-0c996c11414476c7c'},
File "C:\Program Files\Python27\lib\site-packages\botocore\client.py", line 251, in _api_call
"%s() only accepts keyword arguments." % py_operation_name)
TypeError: get_metric_statistics() only accepts keyword arguments.
I have:
Looked at the documentation on Boto3 and I believe I have got everything correctly written/included
Set the correct region/output format/security credentials in the .aws folder
Googled similar problems with put_metric_statistics, etc to try and figure it out
I'm still stuck as to what I'm missing.
Any guidance would be much appreciated.
Many thanks
Ben
This works:
import boto3
import datetime

cw = boto3.client('cloudwatch')
cw.get_metric_statistics(
    Period=300,
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    MetricName='CPUUtilization',
    Namespace='AWS/EC2',
    Statistics=['Average'],
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-abcd1234'}]
)
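If the call above is assigned to a variable, e.g. response = cw.get_metric_statistics(...), the returned datapoints can be printed like this (a sketch):
for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    # with Statistics=['Average'], each datapoint carries Timestamp, Average and Unit
    print(point['Timestamp'], point['Average'], point['Unit'])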
To find the right values, I use the AWS Command-Line Interface (CLI):
aws cloudwatch list-metrics --namespace AWS/EC2 --metric-name CPUUtilization --max-items 1
It returns information such as:
{
    "Metrics": [
        {
            "Namespace": "AWS/EC2",
            "Dimensions": [
                {
                    "Name": "InstanceId",
                    "Value": "i-abcd1234"
                }
            ],
            "MetricName": "CPUUtilization"
        }
    ],
    "NextToken": "xxx"
}
You can then use these values to populate your get_metric_statistics() request (such as the Dimensions parameter).
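The same lookup can also be done with boto3 instead of the CLI (a sketch reusing the cw client from above):
metrics = cw.list_metrics(Namespace='AWS/EC2', MetricName='CPUUtilization')
for metric in metrics['Metrics']:
    # each entry carries the Dimensions needed for get_metric_statistics
    print(metric['Dimensions'])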
Refer to the documentation, and your error message:
get_metric_statistics() only accepts keyword arguments
Named arguments must be passed to the function as defined in the docs:
get_metric_statistics(**kwargs)
Have you used region_name when trying to get the details? Can you share your GitHub repo so we can see better what you are doing?
I'm trying to install the AdWords API for Python following the steps of this reference guide: https://github.com/googleads/googleads-python-lib/wiki/API-access-using-own-credentials-(installed-application-flow)#step-2---setting-up-the-client-library
Everything is okay, but in the last step (6), I have one problem.
I try to run this code:
from googleads import adwords
# Initialize the AdWords client.
adwords_client = adwords.AdWordsClient.LoadFromStorage()
And the error is:
>pythonw -u "teste_adwords_api.py"
Traceback (most recent call last):
  File "teste_adwords_api.py", line 3, in <module>
    adwords_client = adwords.AdWordsClient.LoadFromStorage()
  File "C:\Users\Flávio\Google Drive\BI Caiçara\Python\googleads\adwords.py", line 243, in LoadFromStorage
    cls._OPTIONAL_INIT_VALUES))
  File "C:\Users\Flávio\Google Drive\BI Caiçara\Python\googleads\common.py", line 128, in LoadFromStorage
    'Given yaml file, %s, could not be opened.' % path)
googleads.errors.GoogleAdsValueError: Given yaml file, C:\Users\Flávio\googleads.yaml, could not be opened.
My googleads.yaml is:
adwords:
  client_id: xxxxxxx
  client_secret: xxxxxx
  refresh_token: xxxxxx
where the xxxxxx values are my credential keys.
I can't understand what the problem is in my installation process.
I had faced a similar problem.
By default it searches for the googleads.yaml file in the home directory; you can point it to your own location when creating your AdWords client, e.g.:
adwords_client = AdWordsClient.LoadFromStorage("full_path_to_your_googleads.yaml")
For example:
adwords_client = AdWordsClient.LoadFromStorage("C:\\MacUSer\\Documents\\googleads.yaml")
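If you are not sure which path is actually being used, it can help to check for the file explicitly before loading it; here is a small sketch (it just assumes the default home-directory location):
import os
from googleads import adwords

# build the default path googleads looks at and fail early if the file is missing
yaml_path = os.path.join(os.path.expanduser("~"), "googleads.yaml")
if not os.path.isfile(yaml_path):
    raise FileNotFoundError("googleads.yaml not found at " + yaml_path)

adwords_client = adwords.AdWordsClient.LoadFromStorage(yaml_path)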
Hope this solves it.
Does the yaml file exist in C:\Users\Flávio\ or another path? I was getting the same error before copying the yaml file into my Users\ folder.