I'm running a Lambda function using the boto3 SDK to add autoscaling policies to a number of DynamoDB tables and indices; however, it consistently throws this error:
An error occurred (ObjectNotFoundException) when calling the PutScalingPolicy operation: No scalable target registered for service namespace: dynamodb, resource ID: table/tableName, scalable dimension: dynamodb:table:ReadCapacityUnits: ObjectNotFoundException
Relevant code here:
import boto3

def set_scaling_policy(resource_type, capacity_type, resource_id):
    dbClient = boto3.client('application-autoscaling')
    response = dbClient.put_scaling_policy(
        PolicyName='dynamoDBScaling',
        ServiceNamespace='dynamodb',
        ResourceId=resource_id,
        ScalableDimension='dynamodb:{0}:{1}CapacityUnits'.format(resource_type, capacity_type),
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 50.0,
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'DynamoDB{0}CapacityUtilization'.format(capacity_type)
            }
        }
    )
(resource_type is either 'table' or 'index'; capacity_type is either 'Read' or 'Write')
A few solutions I've considered:
fixing permissions - it had some permission issues before; I gave it AmazonDynamoDBFullAccess, which seems to have fixed all that. Also, presumably it would throw a different error if it didn't have access.
formatting of parameters - according to the API here, it all seems correct. I've tried variants like using the full ARN instead of table/tableName, using just tableName, etc.
checking that tableName actually exists - it does, and I can add and remove scaling policies via the AWS console just fine.
put_scaling_policy
http://boto3.readthedocs.io/en/latest/reference/services/application-autoscaling.html#ApplicationAutoScaling.Client.put_scaling_policy
You cannot create a scaling policy until you register the scalable target using RegisterScalableTarget.
register_scalable_target
http://boto3.readthedocs.io/en/latest/reference/services/application-autoscaling.html#ApplicationAutoScaling.Client.register_scalable_target
Registers or updates a scalable target. A scalable target is a resource that Application Auto Scaling can scale out or scale in. After you have registered a scalable target, you can use this operation to update the minimum and maximum values for its scalable dimension.
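In other words, each table or index (and each dimension) must be registered as a scalable target before a policy can be attached to it. A minimal sketch of the missing call - the capacity values are placeholders, and depending on your account setup you may also need to pass a RoleARN:

import boto3

client = boto3.client('application-autoscaling')

# Register the table as a scalable target first; without this,
# put_scaling_policy raises ObjectNotFoundException.
client.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/tableName',
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    MinCapacity=5,
    MaxCapacity=100
)

# Now the call from the question should succeed.
set_scaling_policy('table', 'Read', 'table/tableName')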
Related
I have a requirement to create AWS Lambda functions dynamically, based on input parameters such as name, Docker image, etc.
I have been able to build this using terraform (triggered using gitlab pipelines).
Now the problem is that for every unique name I want a new Lambda function to be created/updated, i.e. if I trigger the pipeline 5 times with 5 names, there should be 5 Lambda functions; instead, the older function is destroyed and a new one is created.
How do I achieve this?
I am using Resource: aws_lambda_function
Terraform code
resource "aws_lambda_function" "executable" {
function_name = var.RUNNER_NAME
image_uri = var.DOCKER_PATH
package_type = "Image"
role = role.arn
architectures = ["x86_64"]
}
I think there is a misunderstanding of how Terraform works.
Terraform maps one resource to one item in state, and the state file is used to manage all created resources.
The reason your function keeps getting destroyed and recreated with the new values is that you have only one resource in your Terraform configuration.
This is the correct and expected behavior from Terraform.
Now, as mentioned by some people above, you could use count or for_each to add new Lambda functions without deleting the previous ones, as long as you can keep track of the previously passed values (always adding the new values to the "list"); see the sketch below.
Or, if there is no need to keep track of the state of the Lambda functions you have created, Terraform may not be the best solution for your needs. The result you are looking for can easily be implemented in Python, or even in shell with AWS CLI commands.
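For illustration, a minimal for_each sketch - the runners variable and its sample values are hypothetical, and the IAM role reference assumes an aws_iam_role named "role"; each map entry gets its own entry in state, so adding a new name never destroys the existing functions:

variable "runners" {
  # Hypothetical map of function name => image URI.
  type = map(string)
  default = {
    "runner-a" = "123456789012.dkr.ecr.us-east-1.amazonaws.com/runner:a"
    "runner-b" = "123456789012.dkr.ecr.us-east-1.amazonaws.com/runner:b"
  }
}

resource "aws_lambda_function" "executable" {
  for_each      = var.runners
  function_name = each.key
  image_uri     = each.value
  package_type  = "Image"
  role          = aws_iam_role.role.arn
  architectures = ["x86_64"]
}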
I have an existing CDK setup in which a CloudFront distribution is configured using the deprecated CloudFrontWebDistribution API. I now need to configure an OriginRequestPolicy, so after some Googling I switched to the Distribution API (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cloudfront-readme.html) and reused the same "id" -
Distribution distribution = Distribution.Builder.create(this, "CFDistribution")
When I synth the stack, I already see in the YAML that the ID - e.g. CloudFrontCFDistribution12345689 - is different from the one before.
When trying to deploy, it fails, since the HTTP origin CNAMEs are already associated with the existing distribution ("Invalid request provided: One or more of the CNAMEs you provided are already associated with a different resource. (Service: CloudFront, Status Code: 409, Request ID: 123457657, Extended Request ID: null)").
Is there a way to either add the OriginRequestPolicy (I just want to transfer an additional header) to the CloudFrontWebDistribution or a way to use the new Distribution API while maintaining the existing distribution instead of creating a new one?
(The same operation takes around 3 clicks in the AWS Console).
You could use the following trick to assign the logical ID yourself instead of relying on the autogenerated one. The other option is to execute it in two steps: first update the stack without the additional CNAME, then do a second update with the additional CNAME.
const cfDistro = new Distribution(this, 'distro', {...});
// defaultChild is typed as IConstruct, so cast it to the L1 CfnDistribution first.
(cfDistro.node.defaultChild as CfnDistribution).overrideLogicalId('CloudfrontDistribution');
This will result in the following stack:
CloudfrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    ...
Small edit to explain why this happens:
Since you're switching to a new construct, you're also getting a new logical ID. To ensure a rollback is possible, CloudFormation first creates all new resources, including replacements for resources that need to be recreated. Only when everything has been created and updated does it clean up by removing the old resources. This is also why a two-step approach works when changing the logical IDs of resources, whereas keeping the same logical ID forces a normal in-place update.
Thanks a lot @stijndepestel - simply assigning the existing logical ID worked on the first try.
Here's the Java variant of the code in the answer
import software.amazon.awscdk.services.cloudfront.CfnDistribution;
...
((CfnDistribution) distribution.getNode().getDefaultChild()).overrideLogicalId("CloudfrontDistribution");
Context
I have created an AWS Logs SubscriptionFilter using CDK. I am now trying to create a metric/alarm for some of the metrics of this resource.
Problem
All the metrics I am interested in (see ForwardedLogEvents, DeliveryErrors, and DeliveryThrottling in the Monitoring AWS Logs with CloudWatch Metrics docs) require these dimensions to be specified:
LogGroupName
DestinationType
FilterName
The first two are easy to specify, since LogGroupName is required while creating the construct and DestinationType in my case is just Lambda. However, I see no way to get FilterName using CDK.
Using CloudWatch, I see that the FilterName looks like MyStackName-MyLogicalID29669D87-GCMA0Q4KKALH. So I can't directly specify it using Fn.ref (since I don't know the logical ID). With plain CloudFormation, I could have simply done Ref: LogicalId.
I also don't see any properties on the SubscriptionFilter object that return this (unlike most other CDK constructs, this one seems pretty bare and returns absolutely no information about the resource).
There are also no metric* methods on the SubscriptionFilter object (unlike other standard constructs such as Lambda functions, S3 buckets, etc.), so I have to manually specify the Metric object. See for example: CDK metric objects docs.
The CDK construct (and the underlying CloudFormation resource, AWS::Logs::SubscriptionFilter) does not let me specify the FilterName - so I can't use a variable to refer to it either, and the name is dynamically generated.
Example code that is very close to what I need:
const metric = new Metric({
  namespace: 'AWS/Logs',
  metricName: 'ForwardedLogEvents',
  dimensions: {
    DestinationType: 'Lambda',
    // I know this value since I specified it while creating the SubscriptionFilter
    LogGroupName: 'MyLogGroupName',
    FilterName: Fn.ref('logical-id-wont-work-since-it-is-dynamic-in-CDK')
  }
})
Question
How can I acquire the FilterName to construct the Metric object?
Or otherwise, is there another way to go about this?
I was able to work around this by using the Stack#getLogicalId method.
Example code
In Kotlin, as an extension function for any Construct:
fun Construct.getLogicalId() = Stack.of(this).getLogicalId(this.node.defaultChild as CfnElement)
... and then use it with any Construct:
val metric = Metric.Builder.create()
    .namespace("AWS/Logs")
    .metricName("ForwardedLogEvents")
    .dimensions(mapOf(
        "DestinationType" to "Lambda",
        "LogGroupName" to myLogGroup.logGroupName,
        "FilterName" to mySubscriptionFilter.getLogicalId()
    ))
    .statistic("sum")
    .build()
I am using AWS Managed Cassandra Service (MCS) with AWS Lambda for my course project. When I perform write operations, I get response errors from MCS stating Consistency level LOCAL_ONE is not supported for this operation. Supported consistency levels are: LOCAL_QUORUM. It was working fine a few days ago, and I did not change anything in my Lambda function or in my MCS keyspace. AWS Lambda and AWS MCS are both hosted in the us-east-2 region. How do I solve this?
Read operations are working fine.
Amazon MCS only supports LOCAL_QUORUM for writes (the error message lists the supported levels), while the Node.js driver defaults to LOCAL_ONE, so add a new parameter
{ consistency: cassandra.types.consistencies.localQuorum }
to the query execution. Below is an example of the same.
Before -> not working:
const addtempuser = 'INSERT INTO tempbotusers (mobilenumber, name, email) VALUES (?,?,?)';
const checkaddtempuser_result = await client.execute(addtempuser, [mobilenumber, 'NoName', 'NoEmail']);
After adding the new parameter -> working:
const addtempuser = 'INSERT INTO tempbotusers (mobilenumber, name, email) VALUES (?,?,?)';
const checkaddtempuser_result = await client.execute(addtempuser, [mobilenumber, 'NoName', 'NoEmail'], { consistency: cassandra.types.consistencies.localQuorum });
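You can also set the default consistency once on the client instead of on every query - a minimal sketch, assuming the standard cassandra-driver client options; the contact point, data center, and keyspace below are placeholders:

const cassandra = require('cassandra-driver');

// Placeholder endpoint, data center, and keyspace; auth options omitted.
const client = new cassandra.Client({
  contactPoints: ['cassandra.us-east-2.amazonaws.com'],
  localDataCenter: 'us-east-2',
  keyspace: 'mykeyspace',
  // Default consistency for every query, so individual execute()
  // calls no longer need to pass it.
  queryOptions: { consistency: cassandra.types.consistencies.localQuorum }
});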
I'm trying to create a CloudWatch Log Group and a corresponding CloudWatch Log Stream on AWS with Terraform. It seems very straightforward; however, my code throws a ResourceNotFoundException: The specified log group does not exist error for the log group on plan.
My terraform:
variable "firehose_stream_name" {
default = "streamName"
}
resource "aws_cloudwatch_log_group" "firehose_log_group" {
name = "/aws/kinesisfirehose/${var.firehose_stream_name}"
}
resource "aws_cloudwatch_log_stream" "firehose_log_stream" {
name = "S3Delivery"
log_group_name = "${aws_cloudwatch_log_group.firehose_log_group.name}"
depends_on = ["aws_cloudwatch_log_group.firehose_log_group"]
}
Now, one would assume that Terraform would calculate the dependency from the reference to the name of firehose_log_group in the log_group_name of the aws_cloudwatch_log_stream. This is not happening: with that reference, and even with the explicit depends_on block, this code throws ResourceNotFoundException: The specified log group does not exist on plan, as if Terraform cannot calculate the dependency between the aws_cloudwatch_log_group and the aws_cloudwatch_log_stream. In any other scenario this kind of dependency would be calculated by Terraform.
What is happening here? Is there some kind of error in the code I'm not seeing? Is there some kind of dependency that Terraform is unable to calculate between these two?
UPDATE: it turns out this works fine on an Ubuntu server with TF version 0.11.3 for Linux, but the error occurs with version 0.11.3 for Windows. The plot thickens.
This code is based on the TF documentation: https://www.terraform.io/docs/providers/aws/r/cloudwatch_log_stream.html
EDIT: explicitly include stream name variable
It looks like aws_cloudwatch_log_group.firehose_log_group.name is not set. Before digging in more, can you cross-check and confirm whether var.firehose_stream_name is defined?
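For example, a quick way to check - a hypothetical output block using the 0.11-era interpolation syntax from the question, which surfaces the resolved value and fails at plan time if the variable is undefined:

output "firehose_stream_name_check" {
  value = "${var.firehose_stream_name}"
}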