What is the integration service name for CloudWatch?

I am trying to create an AWS API Gateway with an AWS service integration to CloudWatch using AWS CDK/CloudFormation, but I am getting errors like "AWS service of type cloudwatch not supported". When I use CloudWatch Logs it works, but not for CloudWatch itself.
Code
new AwsIntegrationProps
{
    Region = copilotFoundationalInfrastructure.Region,
    Options = new IntegrationOptions
    {
        PassthroughBehavior = PassthroughBehavior.WHEN_NO_TEMPLATES,
        CredentialsRole = Role.FromRoleArn(this, "CloudWatchAccessRole", "arn:aws:iam::800524210815:role/APIGatewayCloudWatchRole"),
        RequestParameters = new Dictionary<string, string>()
        {
            { "integration.request.header.Content-Encoding", "'amz-1.0'" },
            { "integration.request.header.Content-Type", "'application/json'" },
            { "integration.request.header.X-Amz-Target", "'GraniteServiceVersion20100801.PutMetricData'" },
        },
    },
    IntegrationHttpMethod = "POST",
    Service = "cloudwatch", // "s3" and "logs" work here, but "cloudwatch" does not
    Action = "PutMetricData"
}
What is the correct Service value for CloudWatch so that PutMetricData works?

For CloudWatch Logs you use logs, right?
So for CloudWatch itself, it is monitoring (the CloudWatch API endpoints are of the form monitoring.<region>.amazonaws.com). I got it from code on GitHub but cannot find it anymore.
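If it helps, here is a minimal Python CDK sketch of the same integration with monitoring as the service name. It is assumed to run inside a Stack's constructor (so self is the stack); the role ARN and headers are the ones from the question:

from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_iam as iam

integration = apigw.AwsIntegration(
    service="monitoring",  # CloudWatch's integration service name, not "cloudwatch"
    action="PutMetricData",
    integration_http_method="POST",
    options=apigw.IntegrationOptions(
        passthrough_behavior=apigw.PassthroughBehavior.WHEN_NO_TEMPLATES,
        credentials_role=iam.Role.from_role_arn(
            self, "CloudWatchAccessRole",
            "arn:aws:iam::800524210815:role/APIGatewayCloudWatchRole"),
        request_parameters={
            "integration.request.header.Content-Encoding": "'amz-1.0'",
            "integration.request.header.Content-Type": "'application/json'",
            "integration.request.header.X-Amz-Target": "'GraniteServiceVersion20100801.PutMetricData'",
        },
    ),
)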

There are several ways to configure CloudWatch to monitor your API Gateway. First, you can create a CloudWatch alarm on specific metrics produced by your API Gateway. The second way is to rely on the default metrics that API Gateway publishes automatically.
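For the first approach, a minimal boto3 sketch that alarms on API Gateway's built-in 5XXError metric; the alarm name and ApiName dimension value are illustrative:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the API returns any 5XX responses within a 5-minute window
cloudwatch.put_metric_alarm(
    AlarmName='my-api-5xx-errors',
    Namespace='AWS/ApiGateway',
    MetricName='5XXError',
    Dimensions=[{'Name': 'ApiName', 'Value': 'my-api'}],
    Statistic='Sum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
)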

Related

Export DynamoDb metrics logs to S3 or CloudWatch

I'm trying to use DynamoDB metrics logs in an external observability tool.
To do that, I need to get this log data from S3 or CloudWatch log groups (not from Insights or CloudTrail).
For this reason, if there isn't a way to use CloudWatch, I'll need to export these metric logs from DynamoDB to S3, and from there export them to CloudWatch, or try to get the data directly from S3.
Do you know if this is possible?
You could try using Logstash; it has plugins for CloudWatch and S3:
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-s3.html
AWS publishes DynamoDB metrics (table operation, table, and account level) to CloudWatch, and you can query as many metrics as you need. If you use Python, you can read them with boto3; the CloudWatch client has a get_metric_data method.
Try this with your metrics:
import boto3
from datetime import date, datetime, timedelta

cloudwatch_client = boto3.client('cloudwatch')
yesterday = date.today() - timedelta(days=1)
today = date.today()

response = cloudwatch_client.get_metric_data(
    MetricDataQueries=[
        {
            'Id': 'some_request',
            'MetricStat': {
                'Metric': {
                    'Namespace': 'AWS/DynamoDB',  # DynamoDB metrics live in the AWS/DynamoDB namespace
                    'MetricName': 'metric_name',
                    'Dimensions': []
                },
                'Period': 3600,
                'Stat': 'Sum',
            }
        },
    ],
    StartTime=datetime(yesterday.year, yesterday.month, yesterday.day),
    EndTime=datetime(today.year, today.month, today.day),
)
print(response)

Creating an endpoint in Cloud Run with Terraform and Google Cloud Platform

I'm researching a way to use Terraform with the GCP provider to create a Cloud Run endpoint. For starters I'm creating test data, a simple hello world. I have the Cloud Run service resource configured, and the Cloud Endpoints resource configured with depends_on the Cloud Run service. However, I'm trying to pass the Cloud Run URL as the service name to Cloud Endpoints. The files are constructed with best practices, with a module containing the Cloud Run and Cloud Endpoints resources. However, when the Terraform interpolation passes the output of
service_name = "${google_cloud_run_service.default.status[0].url}"
Terraform throws an Error: Invalid character. I've also tried module.folder.output.url.
I have the openapi_config.yml hardcoded in the TF config.
I'm wondering if it's possible to make this work. I've researched many posts, and some forums are outdated.
#Cloud Run
resource "google_cloud_run_service" "default" {
  name     = var.name
  location = var.location

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello"
      }
    }
    metadata {
      annotations = {
        "autoscaling.knative.dev/maxScale" = "1000"
        "run.googleapis.com/cloudstorage"  = "project_name:us-central1:${google_storage_bucket.storage-run.name}"
        "run.googleapis.com/client-name"   = "terraform"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  autogenerate_revision_name = true
}

output "url" {
  value = google_cloud_run_service.default.status[0].url
}

data "google_iam_policy" "noauth" {
  binding {
    role = "roles/run.invoker"
    members = [
      "allUsers",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "noauth" {
  location    = google_cloud_run_service.default.location
  project     = google_cloud_run_service.default.project
  service     = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.noauth.policy_data
}

#Cloud Storage
resource "google_storage_bucket" "storage-run" {
  name               = var.name
  location           = var.location
  force_destroy      = true
  bucket_policy_only = true
}

data "template_file" "openapi_spec" {
  template = file("${path.module}/openapi_spec.yml")
}

#Cloud Endpoint service
resource "google_endpoints_service" "api-service" {
  service_name   = "api_name.endpoints.project_name.cloud.goog"
  project        = var.project
  openapi_config = data.template_file.openapi_spec.rendered
}
ERROR: googleapi: Error 400: Service name 'CLOUD_RUN_ESP_NAME' provided in the config files doesn't match the service name 'api_name.endpoints.project_name.cloud.goog' provided in the request., badRequest
So I later discovered that the service name must match the Cloud Run service URL host (the URL without the https:// scheme) for the Cloud Endpoints service to provision. The Terraform docs state otherwise, giving the form "$apiname.endpoints.$projectid.cloud.goog" (terraform_cloud_endpoints), while the GCP docs state that the Cloud Run ESP service name must be the URL without https://, e.g. gateway-12345-uc.a.run.app. In Terraform you can derive this with, for example, replace(google_cloud_run_service.default.status[0].url, "https://", "").
Getting Started with Endpoints for Cloud Run

How to get the AWS IoT custom endpoint in CDK?

I want to pass the IoT custom endpoint as an env var to a Lambda declared in CDK.
I'm talking about the IoT custom endpoint shown under Settings in the AWS IoT console.
How do I get it in the context of CDK?
You can reference this AWS sample code:
https://github.com/aws-samples/aws-iot-cqrs-example/blob/master/lib/querycommandcontainers.ts
import * as customResource from '@aws-cdk/custom-resources';

// Look up the account's IoT data endpoint at deploy time via an SDK call
const getIoTEndpoint = new customResource.AwsCustomResource(this, 'IoTEndpoint', {
  onCreate: {
    service: 'Iot',
    action: 'describeEndpoint',
    physicalResourceId: customResource.PhysicalResourceId.fromResponse('endpointAddress'),
    parameters: {
      endpointType: 'iot:Data-ATS',
    },
  },
  policy: customResource.AwsCustomResourcePolicy.fromSdkCalls({ resources: customResource.AwsCustomResourcePolicy.ANY_RESOURCE }),
});

const IOT_ENDPOINT = getIoTEndpoint.getResponseField('endpointAddress');
AFAIK the only way to recover it is by using a custom resource (Lambda-backed), for example IoTThing in: https://aws.amazon.com/blogs/iot/automating-aws-iot-greengrass-setup-with-aws-cloudformation/
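To answer the original question of passing it to a Lambda as an env var, here is a hedged Python CDK (v1) sketch of the same custom-resource approach; the function name, runtime, and asset path are illustrative:

from aws_cdk import aws_lambda as _lambda
from aws_cdk import custom_resources as cr

# Inside a Stack's __init__ (self is the stack):
get_iot_endpoint = cr.AwsCustomResource(
    self, 'IoTEndpoint',
    on_create=cr.AwsSdkCall(
        service='Iot',
        action='describeEndpoint',
        physical_resource_id=cr.PhysicalResourceId.from_response('endpointAddress'),
        parameters={'endpointType': 'iot:Data-ATS'},
    ),
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
        resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE,
    ),
)

# Hand the resolved endpoint to the Lambda as an environment variable
_lambda.Function(
    self, 'Handler',
    runtime=_lambda.Runtime.PYTHON_3_8,
    handler='index.handler',
    code=_lambda.Code.from_asset('lambda'),
    environment={
        'IOT_ENDPOINT': get_iot_endpoint.get_response_field('endpointAddress'),
    },
)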

AWS Textract InvalidParameterException

I have a .NET Core client application using Amazon Textract with S3, SNS, and SQS, as per the AWS document Detecting and Analyzing Text in Multipage Documents (https://docs.aws.amazon.com/textract/latest/dg/async.html).
I created an AWS role with the AmazonTextractServiceRole policy and added the following trust relationship as per the documentation (https://docs.aws.amazon.com/textract/latest/dg/api-async-roles.html):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "textract.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
I subscribed SQS to the topic and gave the Amazon SNS topic permission to send messages to the Amazon SQS queue, as per the AWS documentation.
All resources, including the S3 bucket, SNS topic, and SQS queue, are in the same us-west-2 region.
The following method fails with a generic "InvalidParameterException":
Request has invalid parameters
But if the NotificationChannel section is commented out, the code works fine and returns the correct job ID.
The error message does not give a clear picture of which parameter is invalid. Any help is highly appreciated.
public async Task<string> ScanDocument()
{
    string roleArn = "aws:iam::xxxxxxxxxxxx:instance-profile/MyTextractRole";
    string topicArn = "aws:sns:us-west-2:xxxxxxxxxxxx:AmazonTextract-My-Topic";
    string bucketName = "mybucket";
    string filename = "mytestdoc.pdf";

    var request = new StartDocumentAnalysisRequest();
    var notificationChannel = new NotificationChannel();
    notificationChannel.RoleArn = roleArn;
    notificationChannel.SNSTopicArn = topicArn;

    var s3Object = new S3Object
    {
        Bucket = bucketName,
        Name = filename
    };
    request.DocumentLocation = new DocumentLocation
    {
        S3Object = s3Object
    };
    request.FeatureTypes = new List<string>() { "TABLES", "FORMS" };
    request.NotificationChannel = notificationChannel; /* Commenting out this line makes the code work */

    var response = await this._textractService.StartDocumentAnalysisAsync(request);
    return response.JobId;
}
Debugging Invalid AWS Requests
The AWS SDK validates your request object locally, before dispatching it to the AWS servers. This validation fails with unhelpfully opaque errors, like the OP's.
As the SDK is open source, you can inspect the source to help narrow down the invalid parameter.
Before we look at the code: the SDK (and documentation) are actually generated from special JSON files that describe the API, its requirements, and how to validate them. The actual code is generated from these JSON files.
I'm going to use the Node.js SDK as an example, but I'm sure similar approaches work for the other SDKs, including .NET.
In our case (AWS Textract), the latest API version is 2018-06-27, and sure enough, the JSON source file is on GitHub.
In my case, experimentation narrowed the issue down to the ClientRequestToken. The error was an opaque InvalidParameterException. I searched for it in the SDK's JSON source file, and sure enough, on line 392:
"ClientRequestToken": {
"type": "string",
"max": 64,
"min": 1,
"pattern": "^[a-zA-Z0-9-_]+$"
},
A whole bunch of undocumented requirements!
In my case the token I was using violated the regex (the pattern in the source above). Changing my token code to satisfy the regex solved the problem.
I recommend this approach for these sorts of opaque validation errors.
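Based on those constraints, here is a small hedged Python sketch that pre-validates a token before calling Textract; the helper name is mine, not part of any SDK:

import re

# Constraints from the SDK's API model: 1-64 chars matching ^[a-zA-Z0-9-_]+$
TOKEN_RE = re.compile(r'^[a-zA-Z0-9-_]{1,64}$')

def validate_client_request_token(token):
    """Raise early with a readable message, instead of letting the
    service return an opaque InvalidParameterException."""
    if not TOKEN_RE.match(token):
        raise ValueError('Invalid ClientRequestToken: %r' % token)
    return token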
After a long day analyzing the issue, I was able to resolve it. As per the documentation, the topic only requires the SendMessage action on the SQS queue, but after changing the permission to allow all SQS actions it started working. Still, the AWS error message is really misleading and confusing.
You would need to change the permissions to allow all SQS actions and then use code like the below:
import time
import boto3

textract = boto3.client('textract')

def startJob(s3BucketName, objectName):
    response = textract.start_document_text_detection(
        DocumentLocation={
            'S3Object': {
                'Bucket': s3BucketName,
                'Name': objectName
            }
        })
    return response["JobId"]

def isJobComplete(jobId):
    # For production use cases, use SNS based notification
    # Details at: https://docs.aws.amazon.com/textract/latest/dg/api-async.html
    time.sleep(5)
    response = textract.get_document_text_detection(JobId=jobId)
    status = response["JobStatus"]
    print("Job status: {}".format(status))
    while status == "IN_PROGRESS":
        time.sleep(5)
        response = textract.get_document_text_detection(JobId=jobId)
        status = response["JobStatus"]
        print("Job status: {}".format(status))
    return status

def getJobResults(jobId):
    pages = []
    response = textract.get_document_text_detection(JobId=jobId)
    pages.append(response)
    print("Resultset page received: {}".format(len(pages)))
    nextToken = response.get('NextToken')
    while nextToken:
        response = textract.get_document_text_detection(JobId=jobId, NextToken=nextToken)
        pages.append(response)
        print("Resultset page received: {}".format(len(pages)))
        nextToken = response.get('NextToken')
    return pages
Invoking Textract with Python, I received the same error until I truncated the ClientRequestToken down to 64 characters:
response = client.start_document_text_detection(
    DocumentLocation={
        'S3Object': {
            'Bucket': bucket,
            'Name': fileName
        }
    },
    ClientRequestToken=fileName[:64],
    NotificationChannel={
        "SNSTopicArn": "arn:aws:sns:us-east-1:AccountID:AmazonTextractXYZ",
        "RoleArn": "arn:aws:iam::AccountId:role/TextractRole"
    }
)
print('Processing started : %s' % json.dumps(response))

Idle delete configuration for PySpark Cluster on GCP

I am trying to define a create-cluster function to create a cluster on Cloud Dataproc. While going through the reference material, I came across an idle delete parameter (idleDeleteTtl), which auto-deletes the cluster if it is not in use for the defined amount of time. When I try to include it in the cluster config, it gives me: ValueError: Protocol message ClusterConfig has no "lifecycleConfig" field.
The create cluster function for reference:
def create_cluster(dataproc, project, zone, region, cluster_name, pip_packages):
    """Create the cluster."""
    print('Creating cluster...')
    zone_uri = \
        'https://www.googleapis.com/compute/v1/projects/{}/zones/{}'.format(
            project, zone)
    cluster_data = {
        'project_id': project,
        'cluster_name': cluster_name,
        'config': {
            'initialization_actions': [{
                'executable_file': 'gs://<some_path>/python/pip-install.sh'
            }],
            'gce_cluster_config': {
                'zone_uri': zone_uri,
                'metadata': {
                    'PIP_PACKAGES': pip_packages
                }
            },
            'master_config': {
                'num_instances': 1,
                'machine_type_uri': 'n1-standard-1'
            },
            'worker_config': {
                'num_instances': 2,
                'machine_type_uri': 'n1-standard-1'
            },
            'lifecycleConfig': {  #### PROBLEM AREA ####
                'idleDeleteTtl': '30m'
            }
        }
    }
    cluster = dataproc.create_cluster(project, region, cluster_data)
    cluster.add_done_callback(callback)
    global waiting_callback
    waiting_callback = True
I want similar functionality, if not in the same function itself. I already have a manual delete function defined, but I want to add the ability to auto-delete clusters when they are not in use.
You are calling the v1 API while passing a parameter that only exists in the v1beta2 API: the v1 ClusterConfig has no lifecycle config field, which is exactly what the ValueError is telling you.
Note that the https://www.googleapis.com/compute/v1/... URL in your code is the Compute Engine zone URI, not the Dataproc endpoint, so changing it alone will not help. Instead, create the cluster through the v1beta2 Dataproc API (the google.cloud.dataproc_v1beta2 client in Python), which supports lifecycle_config / idle_delete_ttl.
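A hedged sketch of the fix, assuming the pre-2.0 google-cloud-dataproc library (client and method signatures may differ in newer releases); field spellings follow the v1beta2 proto in snake_case:

from google.cloud import dataproc_v1beta2

def create_cluster_with_idle_delete(project, region, cluster_name):
    # The v1beta2 client is where lifecycle_config / idle_delete_ttl lives
    client = dataproc_v1beta2.ClusterControllerClient(
        client_options={'api_endpoint': '{}-dataproc.googleapis.com:443'.format(region)})
    cluster_data = {
        'project_id': project,
        'cluster_name': cluster_name,
        'config': {
            'master_config': {'num_instances': 1, 'machine_type_uri': 'n1-standard-1'},
            'worker_config': {'num_instances': 2, 'machine_type_uri': 'n1-standard-1'},
            'lifecycle_config': {
                'idle_delete_ttl': {'seconds': 1800},  # 30 minutes, as a Duration
            },
        },
    }
    return client.create_cluster(project, region, cluster_data)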