Hello, as we all know there's a limit on storing images in ECR, which is why we want to keep only the last 4 or 5 images in ECR.
Any leads would be appreciated.
You can use ECR Lifecycle Policies to delete all but the last 4 images with the following policy:
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Rule 1",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 4
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}
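To apply it, you attach the policy to a repository. A minimal sketch using the AWS CLI, assuming the policy above is saved as policy.json and a hypothetical repository named my-repo:
aws ecr put-lifecycle-policy \
  --repository-name my-repo \
  --lifecycle-policy-text file://policy.json
Note that lifecycle policy evaluation is asynchronous, so expired images are not removed instantly.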
I am trying to add an ILM policy in AWS-based Elasticsearch (7.9) to delete data older than 4 days, but I am getting the following error:
Error log:
[illegal_argument_exception] State name is null
Policy:
{
  "policy": {
    "description": "Delete older than 4 days",
    "default_state": "hot",
    "states": [
      {
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "4d"
            }
          }
        ]
      }
    ]
  }
}
What am I doing wrong?
You must have a "name" field for each entry in "states". So in your case it looks like this:
{
  "policy": {
    "description": "Delete older than 4 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "4d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "delete": {}
          }
        ],
        "transitions": []
      }
    ]
  }
}
As you can see, I've named this state "hot" because it's your default state, and I take it the default state must be described in the policy. Also, for your information, that state's transition by itself does nothing (its actions field is empty), which is why I've written a second state called "delete" that actually deletes the index.
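For completeness, a sketch of creating and attaching the policy through the ISM API exposed by Open Distro (which AWS Elasticsearch 7.9 is based on); the policy ID delete_after_4d and the index pattern my-index* are hypothetical:
PUT _opendistro/_ism/policies/delete_after_4d
(request body: the corrected policy JSON above)

POST _opendistro/_ism/add/my-index*
{
  "policy_id": "delete_after_4d"
}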
I have a CD pipeline built with AWS CDK and CodePipeline. It compiles the code for 5 lambdas, each of which it pushes to a secondary artifact.
The S3 location of each artifact is assigned to a parameter of a CloudFormation template, and those parameters are wired into the Code properties of the lambdas.
This is working fine!
My problem is, I cannot add a sixth secondary artifact to CodeBuild (hard limit). I also cannot combine all of my lambda code into a single artifact as (as far as I can see) CodePipeline is not smart enough to look inside an artifact when assigning Code to a lambda in CloudFormation.
What is the recommendation for deploying multiple lambdas from a CodeBuild/CodePipeline? How have other people solved this issue?
EDIT: added the CloudFormation templates below.
Note: I have only included 2 lambdas as an example.
Lambda application template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "Lambda1": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": {
            "Ref": "lambda1SourceBucketNameParameter3EE73025"
          },
          "S3Key": {
            "Ref": "lambda1SourceObjectKeyParameter326E8288"
          }
        }
      }
    },
    "Lambda2": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": {
            "Ref": "lambda2SourceBucketNameParameterA0D2319B"
          },
          "S3Key": {
            "Ref": "lambda2SourceObjectKeyParameterF3B3F2C2"
          }
        }
      }
    }
  },
  "Parameters": {
    "lambda1SourceBucketNameParameter3EE73025": {
      "Type": "String"
    },
    "lambda1SourceObjectKeyParameter326E8288": {
      "Type": "String"
    },
    "lambda2SourceBucketNameParameterA0D2319B": {
      "Type": "String"
    },
    "lambda2SourceObjectKeyParameterF3B3F2C2": {
      "Type": "String"
    }
  }
}
CodePipeline template:
{
  "Resources": {
    "Pipeline40CE5EDC": {
      "Type": "AWS::CodePipeline::Pipeline",
      "Properties": {
        "Stages": [
          {
            "Actions": [
              {
                "ActionTypeId": {
                  "Provider": "CodeBuild"
                },
                "Name": "CDK_Build",
                "OutputArtifacts": [
                  { "Name": "CdkbuildOutput" }
                ],
                "RunOrder": 1
              },
              {
                "ActionTypeId": {
                  "Provider": "CodeBuild"
                },
                "Name": "Lambda_Build",
                "OutputArtifacts": [
                  { "Name": "CompiledLambdaCode1" },
                  { "Name": "CompiledLambdaCode2" }
                ],
                "RunOrder": 1
              }
            ],
            "Name": "Build"
          },
          {
            "Actions": [
              {
                "ActionTypeId": {
                  "Provider": "CloudFormation"
                },
                "Configuration": {
                  "StackName": "DeployLambdas",
                  "ParameterOverrides": "{\"lambda2SourceBucketNameParameterA0D2319B\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode1\",\"BucketName\"]},\"lambda2SourceObjectKeyParameterF3B3F2C2\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode1\",\"ObjectKey\"]},\"lambda1SourceBucketNameParameter3EE73025\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode2\",\"BucketName\"]},\"lambda1SourceObjectKeyParameter326E8288\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode2\",\"ObjectKey\"]}}",
                  "ActionMode": "CREATE_UPDATE",
                  "TemplatePath": "CdkbuildOutput::CFTemplate.template.json"
                },
                "InputArtifacts": [
                  { "Name": "CompiledLambdaCode1" },
                  { "Name": "CompiledLambdaCode2" },
                  { "Name": "CdkbuildOutput" }
                ],
                "Name": "Deploy",
                "RunOrder": 1
              }
            ],
            "Name": "Deploy"
          }
        ],
        "ArtifactStore": {
          "EncryptionKey": "the key",
          "Location": "the bucket",
          "Type": "S3"
        },
        "Name": "Pipeline"
      }
    }
  }
}
I reviewed the templates.
So, I don't see five inputs to a CodeBuild action, but I do see 2 inputs to a CloudFormation action (Deploy).
I assume your problem was a perceived limit of 5 inputs to the CloudFormation action. Is that assumption correct?
The limit for a CloudFormation action is actually 10 input artifacts. See "Action Type Constraints for Artifacts" at https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#reference-action-artifacts
So if you can use up to 10, will that suffice?
If not, I have other ideas that would take a lot longer to document.
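For illustration, a sketch (not your actual template) of the Deploy action with a third lambda artifact wired in; CompiledLambdaCode3 and its parameters are hypothetical:
"InputArtifacts": [
  { "Name": "CompiledLambdaCode1" },
  { "Name": "CompiledLambdaCode2" },
  { "Name": "CompiledLambdaCode3" },
  { "Name": "CdkbuildOutput" }
]
Each extra artifact also needs its own Fn::GetArtifactAtt entries added to ParameterOverrides, the same way the first two are mapped.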
I'm working on ingesting metrics from Lambda into our centralized logging system. Our first idea is too costly, so I'm trying to figure out if there is a way to lower the cost (instead of ingesting 3 metrics from 200 lambdas every 60s).
I've been messing around with MetricMath and have pretty much figured out what I want to do. I'd run this as a k8s cron-job-like thing and parameterize the start and end times.
How would this be charged? Is it the number of metrics used to perform the math or the number of values that I output?
i.e. m1 and m2 are pulling Errors and Invocations from 200 lambdas. To pull each of these individually would be 400 metrics.
In this method, would it only be 1, 3, or 401?
{
  "MetricDataQueries": [
    {
      "Id": "m1",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Lambda",
          "MetricName": "Errors"
        },
        "Period": 300,
        "Stat": "Sum",
        "Unit": "Count"
      },
      "ReturnData": false
    },
    {
      "Id": "m2",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Lambda",
          "MetricName": "Invocations"
        },
        "Period": 300,
        "Stat": "Sum",
        "Unit": "Count"
      },
      "ReturnData": false
    },
    {
      "Id": "e1",
      "Expression": "m1 / m2",
      "Label": "ErrorRate"
    }
  ],
  "StartTime": "2020-02-25T02:00:00Z",
  "EndTime": "2020-02-26T02:05:00Z"
}
Output:
{
  "Messages": [],
  "MetricDataResults": [
    {
      "Label": "ErrorRate",
      "StatusCode": "Complete",
      "Values": [
        0.0045127626568890146
      ],
      "Id": "e1",
      "Timestamps": [
        "2020-02-26T19:00:00Z"
      ]
    }
  ]
}
Example 2:
Same principle. This pulls the invocations of each function by FunctionName, then sorts them and outputs the most invoked. Any idea how many metrics this would be?
{
  "MetricDataQueries": [
    {
      "Id": "e2",
      "Expression": "SEARCH(' {AWS/Lambda,FunctionName} MetricName=`Invocations` ', 'Sum', 60)",
      "ReturnData": false
    },
    {
      "Id": "e3",
      "Expression": "SORT(e2, SUM, DESC, 1)"
    }
  ],
  "StartTime": "2020-02-26T12:00:00Z",
  "EndTime": "2020-02-26T12:01:00Z"
}
Same question. 1 or 201 metrics?
Output:
{
  "MetricDataResults": [
    {
      "Id": "e3",
      "Timestamps": [
        "2020-02-26T12:00:00Z"
      ],
      "Label": "1 - FunctionName",
      "Values": [
        91.0
      ],
      "StatusCode": "Complete"
    }
  ],
  "Messages": []
}
Billing is on metrics requested: https://aws.amazon.com/cloudwatch/pricing/
In the first example, you're requesting only 2 metrics. These metrics are aggregates of the per-function metrics, but as far as you're concerned that's only 2 metrics, and you will be billed for 2. You're not billed for the metric math, only for the metrics you request.
In the second example, the number of metrics the search returns is the amount you will be billed for, 200 in your case.
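As a point of reference, here is a sketch of how the first query could be run from the CLI (for example from the k8s cron job), assuming the MetricDataQueries array above is saved on its own as queries.json (hypothetical file name):
aws cloudwatch get-metric-data \
  --metric-data-queries file://queries.json \
  --start-time 2020-02-25T02:00:00Z \
  --end-time 2020-02-26T02:05:00Z
Only m1 and m2 carry a MetricStat, so those two are the metrics the request is billed for; e1 is derived from them at no extra charge.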
I have set up a data source with the following:
aws quicksight create-data-source --cli-input-json file://connection.json
cat connection.json:
{
  "AwsAccountId": "44455...",
  "DataSourceId": "abcdefg13asdafsad",
  "Name": "randomname",
  "Type": "S3",
  "DataSourceParameters": {
    "S3Parameters": {
      "ManifestFileLocation": {
        "Bucket": "cmunetcoms20",
        "Key": "asn-manifest.json"
      }
    }
  }
}
asn-manifest.json contains (and is placed in the appropriate bucket):
{
  "fileLocations": [
    {
      "URIs": [
        "https://cmunetcoms20.s3.us-east-2.amazonaws.com/ASN_Scores.csv"
      ]
    },
    {
      "URIPrefixes": [
        "prefix1",
        "prefix2",
        "prefix3"
      ]
    }
  ],
  "globalUploadSettings": {
    "format": "CSV",
    "delimiter": ",",
    "textqualifier": "'",
    "containsHeader": "true"
  }
}
This successfully creates a data source. Then, when I go to create a data set, I use:
aws quicksight create-data-set --cli-input-json file://skeleton
skeleton contains:
{
  "AwsAccountId": "44455...",
  "DataSetId": "generatedDataSetName",
  "Name": "test-asn-demo",
  "PhysicalTableMap": {
    "ASNs": {
      "S3Source": {
        "DataSourceArn": "arn:aws:quicksight:us-east-2:444558491062:datasource/cmunetcoms20162031",
        "InputColumns": [
          {
            "Name": "ASN",
            "Type": "INTEGER"
          },
          {
            "Name": "Score",
            "Type": "DECIMAL"
          },
          {
            "Name": "Total_IPs",
            "Type": "INTEGER"
          },
          {
            "Name": "Badness",
            "Type": "DECIMAL"
          }
        ]
      }
    }
  },
  "ImportMode": "SPICE"
}
This throws the following error:
An error occurred (InvalidParameterValueException) when calling the CreateDataSet operation: Input column ASN in physical table ASNs has invalid type. Allowed types for S3 physical table are [String]
If I change each Type to "String", it throws the following error:
An error occurred (LimitExceededException) when calling the CreateDataSet operation: Insufficient SPICE capacity
There is plenty of SPICE capacity on the account, something like 51 GB, with almost zero utilization. Additionally, I ran the numbers, and the total SPICE this data set should use is approximately 0 GB (71k rows, 4 columns, each column counted as a string to pad my calculation).
Thanks
Got it, fam. The solution for me was a regional configuration problem. My S3 bucket was in us-east-2 and my QuickSight was in us-east-1. Trying to create a data set in a region that is not your account's primary region (even though you have Enterprise) causes a SPICE error, since alternate regions are not given any SPICE balance to start out.
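In other words, a sketch of the workaround, assuming us-east-1 is the account's primary QuickSight region: run the create calls against that region (or keep the bucket and the data set in the same region):
aws quicksight create-data-set --cli-input-json file://skeleton --region us-east-1
The --region flag just needs to point at the region where the QuickSight subscription, and therefore its SPICE capacity, actually lives.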
I need to locate a record in Route53 based on Value. My Route53 has 10,000+ records. Searching by Value for a Hosted Zone with more than 2000 records is not currently supported in the web interface. So, I must resort to using the AWS Route53 CLI's list-resource-record-sets command and the --query parameter. This parameter uses JMESPath to select or filter the result set.
So, let's look at the result set we are working with.
$ aws route53 list-resource-record-sets --hosted-zone-id Z3RB47PQXVL6N2 --max-items 5 --profile myprofile
{
  "NextToken": "eyJTdGFydFJlY29yZE5hbWUiOiBudWxsLCAiU3RhcnRSZWNvcmRJZGVudGlmaWVyIjogbnVsbCwgIlN0YXJ0UmVjb3JkVHlwZSI6IG51bGwsICJib3RvX3RydW5jYXRlX2Ftb3VudCI6IDV9",
  "ResourceRecordSets": [
    {
      "ResourceRecords": [
        { "Value": "ns-1264.awsdns-30.org." },
        { "Value": "ns-698.awsdns-23.net." },
        { "Value": "ns-1798.awsdns-32.co.uk." },
        { "Value": "ns-421.awsdns-52.com." }
      ],
      "Type": "NS",
      "Name": "mydomain.com.",
      "TTL": 300
    },
    {
      "ResourceRecords": [
        { "Value": "ns-1264.awsdns-30.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400" }
      ],
      "Type": "SOA",
      "Name": "mydomain.com.",
      "TTL": 300
    },
    {
      "ResourceRecords": [
        { "Value": "12.23.34.45" }
      ],
      "Type": "A",
      "Name": "abcdefg.mydomain.com.",
      "TTL": 300
    },
    {
      "ResourceRecords": [
        { "Value": "34.45.56.67" }
      ],
      "Type": "A",
      "Name": "zyxwvut.mydomain.com.",
      "TTL": 300
    },
    {
      "ResourceRecords": [
        { "Value": "45.56.67.78" }
      ],
      "Type": "A",
      "Name": "abcdxyz.mydomain.com.",
      "TTL": 300
    }
  ]
}
Ideally I need to find the ResourceRecordSets.Name, but I can definitely work with getting back the entire ResourceRecordSet object for any record that has a ResourceRecords.Value == 45.56.67.78.
My failed attempts
// My first attempt was to use filters on two levels, but this always returns an empty array
ResourceRecordSets[?Type == 'A'].ResourceRecords[?Value == '45.56.67.78'][]
[]
// Second attempt came after doing more research on JMESPath. I could not find any good examples using filters on two levels, so I do not filter on ResourceRecordSets
ResourceRecordSets[*].ResourceRecords[?Value == '45.56.67.78']
[
[],
[],
[
{
"Value": "45.56.67.78"
}
],
[],
[]
]
After beating my head on the desk for a while longer, I decided to consult the experts. Using the above example, how can I use JMESPath and the AWS Route53 CLI to return either of the following for records with a Value == 45.56.67.78?
[
  "Name": "abcdxyz.mydomain.com."
]
OR
{
  "ResourceRecords": [
    {
      "Value": "45.56.67.78"
    }
  ],
  "Type": "A",
  "Name": "abcdxyz.mydomain.com.",
  "TTL": 300
}
This should do:
aws route53 list-resource-record-sets --hosted-zone-id Z3RB47PQXVL6N2 --query "ResourceRecordSets[?ResourceRecords[?Value == '45.56.67.78'] && Type == 'A'].Name"
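Run against the sample data above, that query should return just the names of the matching record sets:
[
  "abcdxyz.mydomain.com."
]
If you want the full objects instead (your second desired output), drop the trailing .Name from the query.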