AWS personalize service not available using boto3 - amazon-web-services

I am trying to set up a boto3 client to use the AWS Personalize service along the lines of what is done here:
https://docs.aws.amazon.com/personalize/latest/dg/data-prep-importing.html
I have faithfully followed the tutorial up to this point. I have my s3 bucket set up, and I have an appropriately formatted csv.
I configured my access and secret tokens and can successfully perform basic operations on my s3 bucket, so I think that part is working:
import s3fs
fs = s3fs.S3FileSystem()
fs.ls('personalize-service-test')
'personalize-test/data'
When I try to create my service, things start to fail:
import boto3
personalize = boto3.client('personalize')
---------------------------------------------------------------------------
UnknownServiceError Traceback (most recent call last)
<ipython-input-13-c23d30ee6bd1> in <module>
----> 1 personalize = boto3.client('personalize')
/anaconda3/envs/ds_std_3.6/lib/python3.6/site-packages/boto3/__init__.py in client(*args, **kwargs)
89 See :py:meth:`boto3.session.Session.client`.
90 """
---> 91 return _get_default_session().client(*args, **kwargs)
92
93
/anaconda3/envs/ds_std_3.6/lib/python3.6/site-packages/boto3/session.py in client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)
261 aws_access_key_id=aws_access_key_id,
262 aws_secret_access_key=aws_secret_access_key,
--> 263 aws_session_token=aws_session_token, config=config)
264
265 def resource(self, service_name, region_name=None, api_version=None,
/anaconda3/envs/ds_std_3.6/lib/python3.6/site-packages/botocore/session.py in create_client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)
836 is_secure=use_ssl, endpoint_url=endpoint_url, verify=verify,
837 credentials=credentials, scoped_config=self.get_scoped_config(),
--> 838 client_config=config, api_version=api_version)
839 monitor = self._get_internal_component('monitor')
840 if monitor is not None:
/anaconda3/envs/ds_std_3.6/lib/python3.6/site-packages/botocore/client.py in create_client(self, service_name, region_name, is_secure, endpoint_url, verify, credentials, scoped_config, api_version, client_config)
77 'choose-service-name', service_name=service_name)
78 service_name = first_non_none_response(responses, default=service_name)
---> 79 service_model = self._load_service_model(service_name, api_version)
80 cls = self._create_client_class(service_name, service_model)
81 endpoint_bridge = ClientEndpointBridge(
/anaconda3/envs/ds_std_3.6/lib/python3.6/site-packages/botocore/client.py in _load_service_model(self, service_name, api_version)
115 def _load_service_model(self, service_name, api_version=None):
116 json_model = self._loader.load_service_model(service_name, 'service-2',
--> 117 api_version=api_version)
118 service_model = ServiceModel(json_model, service_name=service_name)
119 return service_model
/anaconda3/envs/ds_std_3.6/lib/python3.6/site-packages/botocore/loaders.py in _wrapper(self, *args, **kwargs)
130 if key in self._cache:
131 return self._cache[key]
--> 132 data = func(self, *args, **kwargs)
133 self._cache[key] = data
134 return data
/anaconda3/envs/ds_std_3.6/lib/python3.6/site-packages/botocore/loaders.py in load_service_model(self, service_name, type_name, api_version)
376 raise UnknownServiceError(
377 service_name=service_name,
--> 378 known_service_names=', '.join(sorted(known_services)))
379 if api_version is None:
380 api_version = self.determine_latest_version(
UnknownServiceError: Unknown service: 'personalize'. Valid service names are: acm, acm-pca, alexaforbusiness, amplify, apigateway, apigatewaymanagementapi, apigatewayv2, application-autoscaling, appmesh, appstream, appsync, athena, autoscaling, autoscaling-plans, backup, batch, budgets, ce, chime, cloud9, clouddirectory, cloudformation, cloudfront, cloudhsm, cloudhsmv2, cloudsearch, cloudsearchdomain, cloudtrail, cloudwatch, codebuild, codecommit, codedeploy, codepipeline, codestar, cognito-identity, cognito-idp, cognito-sync, comprehend, comprehendmedical, config, connect, cur, datapipeline, datasync, dax, devicefarm, directconnect, discovery, dlm, dms, docdb, ds, dynamodb, dynamodbstreams, ec2, ecr, ecs, efs, eks, elasticache, elasticbeanstalk, elastictranscoder, elb, elbv2, emr, es, events, firehose, fms, fsx, gamelift, glacier, globalaccelerator, glue, greengrass, guardduty, health, iam, importexport, inspector, iot, iot-data, iot-jobs-data, iot1click-devices, iot1click-projects, iotanalytics, kafka, kinesis, kinesis-video-archived-media, kinesis-video-media, kinesisanalytics, kinesisanalyticsv2, kinesisvideo, kms, lambda, lex-models, lex-runtime, license-manager, lightsail, logs, machinelearning, macie, marketplace-entitlement, marketplacecommerceanalytics, mediaconnect, mediaconvert, medialive, mediapackage, mediastore, mediastore-data, mediatailor, meteringmarketplace, mgh, mobile, mq, mturk, neptune, opsworks, opsworkscm, organizations, pi, pinpoint, pinpoint-email, pinpoint-sms-voice, polly, pricing, quicksight, ram, rds, rds-data, redshift, rekognition, resource-groups, resourcegroupstaggingapi, robomaker, route53, route53domains, route53resolver, s3, s3control, sagemaker, sagemaker-runtime, sdb, secretsmanager, securityhub, serverlessrepo, servicecatalog, servicediscovery, ses, shield, signer, sms, sms-voice, snowball, sns, sqs, ssm, stepfunctions, storagegateway, sts, support, swf, textract, transcribe, transfer, translate, waf, waf-regional, workdocs, worklink, workmail, workspaces, xray
Indeed the service name 'personalize' is missing from the list.
I already tried upgrading boto3 and botocore to their latest version and restarting my kernel.
boto3 1.9.143
botocore 1.12.143
Any idea as to what to try next would be great.

The boto3 documentation does not (currently on 1.9.143) list personalize as a supported service.

Have you signed up for the preview through the landing page? Personalize is still in limited preview, so your account will not be able to access it through boto3 by default, unless otherwise whitelisted.
Edit: Personalize is currently publicly available, so this answer is no longer relevant.

I had neglected to perform these setup steps:
https://docs.aws.amazon.com/personalize/latest/dg/aws-personalize-set-up-aws-cli.html
When I did that and restarted my kernel, boto3 picked up the service definition and now things seem to work.
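For reference, a quick way to verify that the installed boto3/botocore now ships the Personalize service model (a minimal sketch; the region is a placeholder):
import boto3

# Sanity check that the SDK knows about the service
session = boto3.Session()
print('personalize' in session.get_available_services())  # should print True

# If the model is present, this no longer raises UnknownServiceError
personalize = boto3.client('personalize', region_name='us-east-1')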


Creating Connection for RedshiftDataOperator

I went to the Airflow documentation for AWS Redshift; there are two operators that can execute a SQL query: RedshiftSQLOperator and RedshiftDataOperator. I already implemented my job using RedshiftSQLOperator, but I want to use RedshiftDataOperator instead, because I don't want to use a Postgres connection in RedshiftSQLOperator but the AWS API.
RedshiftDataOperator Documentation
I read this documentation; there is an aws_conn_id parameter. But when I try to use the same connection ID, there is an error.
[2023-01-11, 04:55:56 UTC] {base.py:68} INFO - Using connection ID 'redshift_default' for task execution.
[2023-01-11, 04:55:56 UTC] {base_aws.py:206} INFO - Credentials retrieved from login
[2023-01-11, 04:55:56 UTC] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/redshift_data.py", line 146, in execute
self.statement_id = self.execute_query()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/redshift_data.py", line 124, in execute_query
resp = self.hook.conn.execute_statement(**filter_values)
File "/home/airflow/.local/lib/python3.7/site-packages/botocore/client.py", line 415, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/botocore/client.py", line 745, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the ExecuteStatement operation: The security token included in the request is invalid.
From the task definition:
redshift_data_task = RedshiftDataOperator(
    task_id='redshift_data_task',
    database='rds',
    region='ap-southeast-1',
    aws_conn_id='redshift_default',
    sql="""
        call some_procedure();
    """
)
What should I fill in for the Airflow connection? In the documentation there is no example of the values I should provide. Thanks.
Airflow RedshiftDataOperator Connection Required Value
Have you tried using the Amazon Redshift connection? There is both an option for authenticating using your Redshift credentials:
Connection ID: redshift_default
Connection Type: Amazon Redshift
Host: <your-redshift-endpoint> (for example, redshift-cluster-1.123456789.us-west-1.redshift.amazonaws.com)
Schema: <your-redshift-database> (for example, dev, test, prod, etc.)
Login: <your-redshift-username> (for example, awsuser)
Password: <your-redshift-password>
Port: <your-redshift-port> (for example, 5439)
(source)
and an option for using an IAM role (there is an example in the first link).
Disclaimer: I work at Astronomer :)
EDIT: Tested the following with Airflow 2.5.0 and Amazon provider 6.2.0:
Added the IP of my Airflow instance to the VPC security group with "All traffic" access.
Airflow Connection with the connection id aws_default, Connection type "Amazon Web Services", extra: { "aws_access_key_id": "<your-access-key-id>", "aws_secret_access_key": "<your-secret-access-key>", "region_name": "<your-region-name>" }. All other fields blank. I used a root key for my toy AWS account. If you use other credentials, you need to make sure the IAM role has access and the right permissions to the Redshift cluster (there is a list in the link above).
Operator code:
red = RedshiftDataOperator(
    task_id="red",
    database="dev",
    sql="SELECT * FROM dev.public.users LIMIT 5;",
    cluster_identifier="redshift-cluster-1",
    db_user="awsuser",
    aws_conn_id="aws_default"
)
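If you prefer to register that aws_default connection in code instead of the Airflow UI, a rough sketch using Airflow's Connection model could look like this (the key, secret, and region values are placeholders; this is an alternative to the UI setup described above, not something the provider requires):
import json
from airflow import settings
from airflow.models import Connection

# Placeholders: substitute real credentials, or use an IAM role instead
conn = Connection(
    conn_id="aws_default",
    conn_type="aws",  # the "Amazon Web Services" connection type
    extra=json.dumps({
        "aws_access_key_id": "<your-access-key-id>",
        "aws_secret_access_key": "<your-secret-access-key>",
        "region_name": "<your-region-name>",
    }),
)

session = settings.Session()
session.add(conn)
session.commit()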

Amazon sagemaker training job using prebuild docker image

Hi, I am a newbie to AWS SageMaker. I am trying to deploy a custom time series model on SageMaker, so I built a Docker image using the SageMaker terminal. But when I try to create a training job, it shows an error. I have been struggling with this for the past four days; could anyone please help me?
Here is my code:
lstm = sage.estimator.Estimator(image,
                                role, 1, 'ml.m4.xlarge',
                                output_path='s3://' + s3Bucket,
                                sagemaker_session=sess)
lstm.fit(upload_data)
Here is my error. I attached an ECR full-access policy to the SageMaker IAM role, and the account is in the same region.
ClientErrorTraceback (most recent call last)
<ipython-input-48-1d7f3ff70f18> in <module>()
4 sagemaker_session=sess)
5
----> 6 lstm.fit(upload_data)
/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/sagemaker/estimator.pyc in fit(self, inputs, wait, logs, job_name, experiment_config)
472 self._prepare_for_training(job_name=job_name)
473
--> 474 self.latest_training_job = _TrainingJob.start_new(self, inputs, experiment_config)
475 self.jobs.append(self.latest_training_job)
476 if wait:
/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/sagemaker/estimator.pyc in start_new(cls, estimator, inputs, experiment_config)
1036 train_args["enable_sagemaker_metrics"] = estimator.enable_sagemaker_metrics
1037
-> 1038 estimator.sagemaker_session.train(**train_args)
1039
1040 return cls(estimator.sagemaker_session, estimator._current_job_name)
/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/sagemaker/session.pyc in train(self, input_mode, input_config, role, job_name, output_config, resource_config, vpc_config, hyperparameters, stop_condition, tags, metric_definitions, enable_network_isolation, image, algorithm_arn, encrypt_inter_container_traffic, train_use_spot_instances, checkpoint_s3_uri, checkpoint_local_path, experiment_config, debugger_rule_configs, debugger_hook_config, tensorboard_output_config, enable_sagemaker_metrics)
588 LOGGER.info("Creating training-job with name: %s", job_name)
589 LOGGER.debug("train request: %s", json.dumps(train_request, indent=4))
--> 590 self.sagemaker_client.create_training_job(**train_request)
591
592 def process(
/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/botocore/client.pyc in _api_call(self, *args, **kwargs)
314 "%s() only accepts keyword arguments." % py_operation_name)
315 # The "self" in this scope is referring to the BaseClient.
--> 316 return self._make_api_call(operation_name, kwargs)
317
318 _api_call.__name__ = str(py_operation_name)
/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/botocore/client.pyc in _make_api_call(self, operation_name, api_params)
624 error_code = parsed_response.get("Error", {}).get("Code")
625 error_class = self.exceptions.from_code(error_code)
--> 626 raise error_class(parsed_response, operation_name)
627 else:
628 return parsed_response
ClientError: An error occurred (ValidationException) when calling the CreateTrainingJob operation: Cannot find repository: sagemaker-model in registry ID: 534860077983 Please check if your ECR repository exists and role arn:aws:iam::534860077983:role/service-role/AmazonSageMaker-ExecutionRole-20190508T215284 has proper pull permissions for SageMaker: ecr:BatchCheckLayerAvailability, ecr:BatchGetImage, ecr:GetDownloadUrlForLayer
TL;DR: Seems like you're not providing the correct repository for the ECR image to the SageMaker estimator. Maybe the repository doesn't exist?
Also make sure that the repository's permissions are configured to allow the principal sagemaker.amazonaws.com to perform ecr:BatchCheckLayerAvailability, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer.
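As a quick sanity check (a sketch, not from the original answer; the region and repository name below are placeholders taken from the error message), you can confirm the repository exists and build the fully qualified image URI that the estimator expects:
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")  # placeholder region

# Raises RepositoryNotFoundException if the repository is missing
ecr.describe_repositories(repositoryNames=["sagemaker-model"])

# The estimator's image argument should be the fully qualified ECR URI
account_id = boto3.client("sts").get_caller_identity()["Account"]
image = "{}.dkr.ecr.us-east-1.amazonaws.com/sagemaker-model:latest".format(account_id)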

AWS AMI - fail to create json import file

I created a bucket in Amazon S3, uploaded my FreePBX.ova, and created permissions, etc. When I run this command:
aws ec2 import-image --cli-input-json "{\"Description\":\"freepbx\", \"DiskContainers\":[{\"Description\":\"freepbx\",\"UserBucket\":{\"S3Bucket\":\"itbucket\",\"S3Key\":\"FreePBX.ova\"}}]}"
I get:
Error parsing parameter 'cli-input-json': Invalid JSON: Extra data: line 1 column 135 - line 1 column 136 (char 134 - 135)
JSON received: {"Description":"freepbx", "DiskContainers":[{"Description":"freepbx","UserBucket":{"S3Bucket":"itbucket","S3Key":"FreePBX.ova"}}]}?
And I can't continue the process. I tried to Google it with no results.
What is wrong with this command? How can I solve it?
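For comparison, the same import can be issued through boto3, which sidesteps shell quoting of the JSON entirely (a sketch reusing the bucket and key names from the question; credentials and region are assumed to come from your AWS configuration):
import boto3

ec2 = boto3.client("ec2")

response = ec2.import_image(
    Description="freepbx",
    DiskContainers=[
        {
            "Description": "freepbx",
            "UserBucket": {"S3Bucket": "itbucket", "S3Key": "FreePBX.ova"},
        }
    ],
)
print(response["ImportTaskId"])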

Programmatic way to get a list of available AWS products?

There seems to be about a hundred AWS products available. The only way to get an authoritative listing of them is to look on the web.
Is there any API that could give me a list of all currently available AWS products, ideally with some metadata about each one (product title, description, what regions and edge locations it's available in, etc)?
You can use the Python API libraries Boto3 and Botocore. I am providing a code snippet to list the services; you will have to look at the docs to get the other info you want.
>>> import boto3
>>> session = boto3.Session()
>>> session.get_available_services()
['acm', 'apigateway', 'application-autoscaling', 'appstream', 'autoscaling', 'batch', 'budgets', 'clouddirectory', 'cloudformation', 'cloudfront', 'cloudhsm', 'cloudsearch', 'cloudsearchdomain', 'cloudtrail', 'cloudwatch', 'codebuild', 'codecommit', 'codedeploy', 'codepipeline', 'cognito-identity', 'cognito-idp', 'cognito-sync', 'config', 'cur', 'datapipeline', 'devicefarm', 'directconnect', 'discovery', 'dms', 'ds', 'dynamodb', 'dynamodbstreams', 'ec2', 'ecr', 'ecs', 'efs', 'elasticache', 'elasticbeanstalk', 'elastictranscoder', 'elb', 'elbv2', 'emr', 'es', 'events', 'firehose', 'gamelift', 'glacier', 'health', 'iam', 'importexport', 'inspector', 'iot', 'iot-data', 'kinesis', 'kinesisanalytics', 'kms', 'lambda', 'lex-runtime', 'lightsail', 'logs', 'machinelearning', 'marketplacecommerceanalytics', 'meteringmarketplace', 'opsworks', 'opsworkscm', 'pinpoint', 'polly', 'rds', 'redshift', 'rekognition', 'route53', 'route53domains', 's3', 'sdb', 'servicecatalog', 'ses', 'shield', 'sms', 'snowball', 'sns', 'sqs', 'ssm', 'stepfunctions', 'storagegateway', 'sts', 'support', 'swf', 'waf', 'waf-regional', 'workspaces', 'xray']
>>> for item, service in enumerate(session.get_available_services(), 1):
...     print(item, service)
...
1 acm
2 apigateway
3 application-autoscaling
4 appstream
5 autoscaling
6 batch
7 budgets
8 clouddirectory
9 cloudformation
10 cloudfront
11 cloudhsm
12 cloudsearch
13 cloudsearchdomain
14 cloudtrail
15 cloudwatch
16 codebuild
17 codecommit
18 codedeploy
19 codepipeline
20 cognito-identity
21 cognito-idp
22 cognito-sync
23 config
24 cur
25 datapipeline
26 devicefarm
27 directconnect
28 discovery
29 dms
30 ds
31 dynamodb
32 dynamodbstreams
33 ec2
34 ecr
35 ecs
36 efs
37 elasticache
38 elasticbeanstalk
39 elastictranscoder
40 elb
41 elbv2
42 emr
43 es
44 events
45 firehose
46 gamelift
47 glacier
48 health
49 iam
50 importexport
51 inspector
52 iot
53 iot-data
54 kinesis
55 kinesisanalytics
56 kms
57 lambda
58 lex-runtime
59 lightsail
60 logs
61 machinelearning
62 marketplacecommerceanalytics
63 meteringmarketplace
64 opsworks
65 opsworkscm
66 pinpoint
67 polly
68 rds
69 redshift
70 rekognition
71 route53
72 route53domains
73 s3
74 sdb
75 servicecatalog
76 ses
77 shield
78 sms
79 snowball
80 sns
81 sqs
82 ssm
83 stepfunctions
84 storagegateway
85 sts
86 support
87 swf
88 waf
89 waf-regional
90 workspaces
91 xray
One way is to use the AWS command line interface to get the list of available services, and then use each service's corresponding describe or list commands to see what is configured/available.
This can be done using the SSM Parameter Store feature. It returns the full service/product name.
Below is sample AWS Lambda code:
import json
import boto3

def lambda_handler(event, context):
    service_list = []
    ssmClient = boto3.client("ssm", region_name="us-east-1")
    # First page of service paths under the global-infrastructure namespace
    list_service_path = ssmClient.get_parameters_by_path(
        Path="/aws/service/global-infrastructure/services"
    )
    if len(list_service_path["Parameters"]) > 0:
        for pathData in list_service_path["Parameters"]:
            list_service_names = ssmClient.get_parameters_by_path(
                Path=pathData["Name"]
            )
            service_list.append(list_service_names["Parameters"][0]["Value"])
    # Page through the remaining results, if any
    if "NextToken" in list_service_path:
        NextToken = list_service_path["NextToken"]
        while True:
            list_service_path = ssmClient.get_parameters_by_path(
                Path="/aws/service/global-infrastructure/services",
                NextToken=NextToken
            )
            if len(list_service_path["Parameters"]) > 0:
                for pathData in list_service_path["Parameters"]:
                    list_service_names = ssmClient.get_parameters_by_path(
                        Path=pathData["Name"]
                    )
                    service_list.append(list_service_names["Parameters"][0]["Value"])
            if "NextToken" in list_service_path:
                NextToken = list_service_path["NextToken"]
            else:
                break
    print(len(service_list))
    service_list.sort(key=lambda x: (not x.islower(), x))
    return service_list
Sample output:
"AWS Data Exchange",
"AWS Data Pipeline",
"AWS DataSync",
"AWS Database Migration Service",
"AWS DeepComposer",
"AWS DeepLens",
"AWS DeepRacer",
"AWS Device Farm",
"AWS Direct Connect",
"AWS Directory Service",
"AWS Elastic Beanstalk",
"AWS Elemental MediaStore Data Plane",
"AWS Elemental MediaTailor",
"AWS EventBridge Schemas",
"Amazon CloudFront",
"Amazon CloudSearch",
"Amazon CloudWatch",
"Amazon CloudWatch Application Insights",
"Amazon CloudWatch Events",
"Amazon CloudWatch Evidently",
"Amazon CloudWatch Logs",
"Amazon CloudWatch Synthetics",
"Amazon CodeGuru",
Hope this helps.
Interestingly enough, I suspect the most complete source for this information (at a very fine level of detail) is the Price List API.
For example:
To find a list of all available offer files, download the offer index file. Note what it provides:
Offer index file – A JSON file that lists the supported AWS services, with a URL for each offer file where you can download pricing details. The file also includes metadata about the offer index file itself, URLs for service offer files, and URLs for regional offer index files.
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/price-changes.html
In turn, the individual service files detail all of the pricing information for all possible service elements.
One particularly useful example is the case of EC2, the various instance type attributes are provided here among the pricing data -- you'll find things like processor model, clock speed, number of CPUs, etc., detailed.
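For example, a short sketch that pulls the offer index and prints the offer (service) codes it lists. The URL is the public index endpoint described above; treat the exact response structure (the "offers" key) as something to verify against the current format:
import json
import urllib.request

INDEX_URL = "https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json"

with urllib.request.urlopen(INDEX_URL) as resp:
    index = json.load(resp)

# Each entry under "offers" corresponds to a service offer file
for offer_code in sorted(index.get("offers", {})):
    print(offer_code)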
Quick Perl script to get the data by scraping the HTML from the /products page. This will give you a nice JSON data set.
#!/usr/bin/perl
#
# This script is intended to simply piece together a JSON file of available AWS services.
#
use v5.16.1;
use strict;
use warnings;
use JSON;

my ($category, %data, %opts, $marker);
my $count = 1;
my @foo = `curl https://aws.amazon.com/products/`;

foreach my $line (@foo) {
    if ($line =~ /<h6> <a href.*?>(.*?)<i class/) {
        $category = $1;
        next;
    }
    if ($line =~ /^\s*<a href="https:\/\/aws.amazon.com\/.*?\/?(.*?)\/\?nc2.*?>(.*?)<span>(.*?)<\/span/) {
        $data{category}{$category}{services}{$1}{name} = $2;
        $data{category}{$category}{services}{$1}{description} = $3;
    }
}

my $json = encode_json \%data;
say $json;
exit;
Ensure you have the Perl JSON module installed. Usage:
script_name.pl | python -m json.tool > your_json_file.json
Example output:
"Storage": {
"services": {
"ebs": {
"description": "EC2 Block Storage Volumes",
"name": "Amazon Elastic Block Store (EBS)"
},
"efs": {
"description": "Fully Managed File System for EC2",
"name": "Amazon Elastic File System (EFS)"
},
"glacier": {
"description": "Low-cost Archive Storage in the Cloud",
"name": "Amazon Glacier"
},
It will work up until they change that page :)
I'm not sure, but it will list all the products to which the caller has access. You can use the search_products() API in boto3 or searchProducts in the SDK to list the products. Perform an assume-role and call this API. For reference:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/servicecatalog.html#ServiceCatalog.Client.search_products
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ServiceCatalog.html#searchProducts-property
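A minimal boto3 sketch of that suggestion (this lists Service Catalog products visible to the caller rather than AWS services in general; the assume-role step is omitted, the region is a placeholder, and the response keys should be checked against the docs):
import boto3

# Assumes credentials for the target account are already configured
# (for example via an assumed role); the region is a placeholder.
sc = boto3.client("servicecatalog", region_name="us-east-1")

response = sc.search_products()
for summary in response.get("ProductViewSummaries", []):
    print(summary.get("ProductId"), summary.get("Name"))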

What's the proper IAM role for a service account to write custom metrics to Stackdriver in GCP

I've created a service account and furnished it with a private key in JSON format (/adc.json). It can be loaded into the google-cloud Python client via the Client.from_service_account_json function just fine. But when I tried to call the Monitoring API to write a custom metric, I got a 403 error like the one below.
In [1]: from google.cloud import monitoring
In [2]: client = monitoring.Client.from_service_account_json('/adc.json')
In [6]: resource = client.resource('gce_instance', labels={'instance_id': '1234567890123456789', 'zone': 'us-central1-f'})
In [7]: metric = client.metric(type_='custom.googleapis.com/my_metric', labels={'status': 'successful'})
In [9]: from datetime import datetime
In [10]: end_time = datetime.utcnow()
In [11]: client.write_point(metric=metric, resource=resource, value=3.14, end_time=end_time)
---------------------------------------------------------------------------
Forbidden Traceback (most recent call last)
<ipython-input-11-b030f6399aa2> in <module>()
----> 1 client.write_point(metric=metric, resource=resource, value=3.14, end_time=end_time)
/usr/local/lib/python3.5/site-packages/google/cloud/monitoring/client.py in write_point(self, metric, resource, value, end_time, start_time)
599 timeseries = self.time_series(
600 metric, resource, value, end_time, start_time)
--> 601 self.write_time_series([timeseries])
/usr/local/lib/python3.5/site-packages/google/cloud/monitoring/client.py in write_time_series(self, timeseries_list)
544 for timeseries in timeseries_list]
545 self._connection.api_request(method='POST', path=path,
--> 546 data={'timeSeries': timeseries_dict})
547
548 def write_point(self, metric, resource, value,
/usr/local/lib/python3.5/site-packages/google/cloud/_http.py in api_request(self, method, path, query_params, data, content_type, headers, api_base_url, api_version, expect_json, _target_object)
301 if not 200 <= response.status < 300:
302 raise make_exception(response, content,
--> 303 error_info=method + ' ' + url)
304
305 string_or_bytes = (six.binary_type, six.text_type)
Forbidden: 403 User is not authorized to access the project monitoring records. (POST https://monitoring.googleapis.com/v3/projects/MY-PROJECT/timeSeries/)
In GCP's Access Control panel, I didn't see a specific predefined role for the Stackdriver Monitoring API.
I've tried the Project Viewer and Service Account Actor predefined roles; neither worked. I am hesitant to assign the Project Editor role to this service account because it feels like too broad a scope for a dedicated Stackdriver service account credential. So what is the correct role to assign to this service account? Thanks.
You are right that it's too broad, and we are working on finer-grained roles, but, as of today, "Project Editor" is the correct role.
If you are running on a GCE VM and omit the private key, the Stackdriver monitoring agent will by default attempt to use the VM's default service account. This will work as long as the VM has the https://www.googleapis.com/auth/monitoring.write scope (this should be turned on by default for all GCE VMs these days). See this page for a detailed description of what credentials the agent needs.