Is it possible to get the tag history in an ECR (the AWS Docker registry) repository?

I use tags to mark the currently deployed version, and I want to know if it's possible to query which image was tagged with a specific label at a certain time.
While it's possible to keep track of this externally, being able to query the registry directly would be more reliable.

You can list the current tags, but getting the history of tags is not possible.
aws ecr list-images --repository-name test-nginx
{
    "imageIds": [
        {
            "imageTag": "1.0",
            "imageDigest": "sha256:31641ee69cxxx1ca550a754376e9077f6f9134ad41e27"
        },
        {
            "imageTag": "latest",
            "imageDigest": "sha256:31641ee69cda92e0xxx9077f6f9134ad41e27"
        }
    ]
}
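Since ECR won't give you the history itself, and you mentioned tracking it externally: a minimal boto3 sketch (repository name and log file are placeholders) that snapshots the current tag-to-digest mapping, e.g. run on a schedule or at the end of each deploy. Grepping the resulting log later tells you which digest a tag pointed at around a given time.
import json
import time

import boto3

ecr = boto3.client('ecr')

def snapshot_tags(repository_name, log_file):
    # list_images paginates, so collect all tagged images page by page
    paginator = ecr.get_paginator('list_images')
    snapshot = {'time': time.time(), 'repository': repository_name, 'images': []}
    for page in paginator.paginate(repositoryName=repository_name,
                                   filter={'tagStatus': 'TAGGED'}):
        snapshot['images'].extend(page['imageIds'])
    # append one JSON line per snapshot
    with open(log_file, 'a') as f:
        f.write(json.dumps(snapshot) + '\n')

snapshot_tags('test-nginx', 'ecr-tag-history.jsonl')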

Related

Use CDK deploy time token values in a launch template user-data script

I recently started porting part of my infrastructure to AWS CDK. Previously, I did some experiments with CloudFormation templates directly.
I am currently facing the problem that I want to encode some values (namely the product version) in the user-data script of an EC2 launch template, and these values should only be resolved at deployment time. With CloudFormation this was quite simple: I was just building my JSON file from intrinsic functions like Fn::Base64 and Fn::Join. E.g. it looked like this (simplified):
"MyLaunchTemplate": {
"Type": "AWS::EC2::LaunchTemplate",
"Properties": {
"LaunchTemplateData": {
"ImageId": "ami-xxx",
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"#!/bin/bash -xe",
{"Fn::Sub": "echo \"${SomeParameter}\""},
]
}
}
}
}
}
}
This way I am able to set the parameter SomeParameter when launching the CloudFormation stack.
With CDK we can access values from the AWS Parameter Store either at deploy time or at synthesis time. If we use them at deploy time, we only get a token, otherwise we get the actual value.
So far I have managed to read a value at synthesis time and encode the user-data script directly as base64, like this:
import base64
from aws_cdk import aws_ec2 as ec2, aws_ssm as ssm

product_version = ssm.StringParameter.value_from_lookup(
    self, '/Prod/MyProduct/Deploy/Version')
launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': base64.b64encode(
        f'echo {product_version}'.encode('utf-8')).decode('utf-8'),
})
With this code, however, the version is read at synthesis time and hardcoded into the user-data script.
In order to use dynamic values that are only resolved at deploy time (value_for_string_parameter), I would somehow need to tell CDK to write a CloudFormation template similar to what I wrote manually before (using Fn::Base64 only in CloudFormation, not in Python). However, I did not find a way to do this.
If I read a value that is only resolved at deploy time as follows, how can I use it in the UserData field of a launch template?
latest_string_token = ssm.StringParameter.value_for_string_parameter(
    self, "my-plain-parameter-name", 1)
It is possible using the CloudFormation intrinsic functions, which are available in the class aws_cdk.core.Fn in Python.
These can be used when creating an EC2 launch template to combine strings and tokens, e.g. like this:
import aws_cdk.core as cdk

# loads a value to be resolved at deployment time
product_version = ssm.StringParameter.value_for_string_parameter(
    self, '/Prod/MyProduct/Deploy/Version')
launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': cdk.Fn.base64(cdk.Fn.join('\n', [
        '#!/usr/bin/env bash',
        cdk.Fn.join('=', ['MY_PRODUCT_VERSION', product_version]),
        'git checkout $MY_PRODUCT_VERSION',
    ])),
})
This example could result in the following user-data script in the launch template, if the Parameter Store contains version 1.2.3:
#!/usr/bin/env bash
MY_PRODUCT_VERSION=1.2.3
git checkout $MY_PRODUCT_VERSION
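As a side note: depending on your CDK version, the higher-level ec2.LaunchTemplate construct together with ec2.UserData may let you skip calling Fn directly, since CDK renders the Fn::Base64 (and the token joins) for you. A sketch under that assumption, reusing the same SSM parameter; the region/AMI map is a placeholder:
from aws_cdk import aws_ec2 as ec2, aws_ssm as ssm

# token resolved at deploy time, as before
product_version = ssm.StringParameter.value_for_string_parameter(
    self, '/Prod/MyProduct/Deploy/Version')

user_data = ec2.UserData.for_linux()
# tokens can be embedded in plain strings; CDK emits the Fn::Join for us
user_data.add_commands(
    f'MY_PRODUCT_VERSION={product_version}',
    'git checkout $MY_PRODUCT_VERSION',
)

launch_template = ec2.LaunchTemplate(
    self, 'My-LT',
    machine_image=ec2.MachineImage.generic_linux({'eu-west-1': 'ami-xxx'}),  # placeholder
    user_data=user_data,
)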

AWS CloudWatch metrics with ASG name changes

On AWS CloudWatch we have one dashboard per environment.
Each dashboard has N plots.
Some plots use the Auto Scaling Group (ASG) name to find the data to plot.
Example of such a plot (from the widget's Edit > Source tab):
{
    "metrics": [
        [ "production", "mem_used_percent", "AutoScalingGroupName", "awseb-e-rv8y2igice-stack-AWSEBAutoScalingGroup-3T5YOK67T3FD" ]
    ],
    ... other params removed for brevity ...
    "title": "Used Memory (%)"
}
Every time we deploy, the ASG name changes (we deploy using CodeDeploy with Elastic Beanstalk (EB) configuration files from source).
I need to manually find the new name and update the N plots one by one.
The strange thing is that this happens for the production and staging environments, but not for integration.
All 3 should be copies of one another, apart from different settings in the EB configuration files, so I don't know what is going on.
In any case, what (I think) I need is one of:
option 1: prevent the ASG name change upon deploy
option 2: dynamically update the plots with the new name
option 3: plot the same data without using the ASG name (but the alternatives I found are the EC2 instance ID, which also changes, and ImageId and InstanceType, which are shared by more than one EC2 instance, so they won't work either)
My online-search-fu has turned up empty.
More Info:
I'm publishing these metrics with the CloudWatch agent, by adjusting the conf file, as per the docs here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html
Have a look at CloudWatch Search Expression Syntax. It allows you to use tokens for searching, e.g.:
SEARCH(' {AWS/CWAgent, AutoScalingGroupName} MetricName="mem_used_percent" rv8y2igice', 'Average', 300)
which would replace the entry for metrics like so:
"metrics": [
[ { "expression": "SEARCH(' {AWS/CWAgent, AutoScalingGroupName} MetricName=\"mem_used_percent\" rv8y2igice', 'Average', 300)", "label": "Expression1", "id": "e1" } ]
]
Simply run the desired search in the console; results that match the search appear.
To graph all of the metrics that match your search, choose Graph search,
and find the exact search expression that you want under Details on the Graphed metrics tab.
SEARCH('{CWAgent,AutoScalingGroupName,ImageId,InstanceId,InstanceType} mem_used_percent', 'Average', 300)
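If you don't want to touch the dashboard by hand at all, the same search expression can also be written into the dashboard programmatically. A minimal boto3 sketch (region and dashboard name are placeholders); note that put_dashboard replaces the whole dashboard body, so in practice you would fetch it with get_dashboard, patch the one widget, and write it back:
import json

import boto3

cloudwatch = boto3.client('cloudwatch')

# a single widget whose data comes from the SEARCH expression above,
# so it keeps matching new ASG names without manual edits
widget = {
    'type': 'metric',
    'properties': {
        'metrics': [
            [{'expression': "SEARCH(' {AWS/CWAgent, AutoScalingGroupName} "
                            "MetricName=\"mem_used_percent\" rv8y2igice', "
                            "'Average', 300)",
              'label': 'Expression1', 'id': 'e1'}]
        ],
        'region': 'eu-west-1',       # placeholder
        'title': 'Used Memory (%)',
    },
}

cloudwatch.put_dashboard(
    DashboardName='production',      # placeholder
    DashboardBody=json.dumps({'widgets': [widget]}),
)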

AWS CLI s3api put-bucket-tagging - cannot add a tag to a bucket unless the bucket has 0 tags

As there is no create-tag operation for S3, only put-bucket-tagging can be used, and it requires that you include all tags on the resource, not just the new one. Thus there is no way to add a new tag to a bucket that already has tags unless you include all existing tags plus your new tag. This makes bulk operations much harder: you need to fetch all the tags first, turn them into JSON, add your new tag to each bucket's set, and then feed that back to put-bucket-tagging.
Does anyone have a better way to do this or have a script that does this?
Command I'm trying:
aws s3api put-bucket-tagging --bucket cbe-res034-scratch-29 --tagging "TagSet=[{Key=Environment,Value=Research}]"
Error I get:
An error occurred (InvalidTag) when calling the PutBucketTagging operation: System tags cannot be removed by requester
I get the 'cannot be removed' error because put-bucket-tagging is trying to delete the other 10 tags on this bucket (because I didn't include them in the TagSet), and I don't have permission to do so.
You can use the resourcegroupstaggingapi to accomplish the result you expect; see below.
aws resourcegroupstaggingapi tag-resources --resource-arn-list arn:aws:s3:::cbe-res034-scratch-29 --tags Environment=Research
To handle spaces in the tag name or value, pass the tags as JSON:
aws resourcegroupstaggingapi tag-resources --resource-arn-list arn:aws:s3:::cbe-res034-scratch-29 --tags '{"Environment Name":"Research Area"}'
I would strongly recommend using a JSON file instead of command-line flags. I spent a few hours yesterday, without any success, trying to make keys and values with whitespace work. This was in the context of a Jenkins pipeline in Groovy calling a bash shell script block.
Here is the syntax for passing a JSON file:
aws resourcegroupstaggingapi tag-resources --cli-input-json file://tags.json
If you don't know the exact format of the JSON file, just run the following, which writes the skeleton to a tags.json file in the current directory:
aws resourcegroupstaggingapi tag-resources --generate-cli-skeleton > tags.json
tags.json will then contain the skeleton JSON. Just update the file and run the first command.
{
    "ResourceARNList": [
        ""
    ],
    "Tags": {
        "KeyName": ""
    }
}
Fill in your own data, e.g. for an S3 bucket:
{
    "ResourceARNList": [
        "arn:aws:s3:::my-s3-bucket"
    ],
    "Tags": {
        "Application": "My Application"
    }
}
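If you have to stay with s3api (or boto3) after all, the read-merge-write dance the question describes is short in code. A sketch that leaves any existing user tags in place, reusing the question's bucket and tag as example values; note it still won't help with system (aws:-prefixed) tags you aren't allowed to write back, which is exactly why the resourcegroupstaggingapi route above is preferable:
import boto3

s3 = boto3.client('s3')

def add_bucket_tag(bucket, key, value):
    # fetch existing tags; an untagged bucket raises NoSuchTagSet
    try:
        tag_set = s3.get_bucket_tagging(Bucket=bucket)['TagSet']
    except s3.exceptions.ClientError as err:
        if err.response['Error']['Code'] != 'NoSuchTagSet':
            raise
        tag_set = []
    # merge: replace the key if it exists, otherwise append
    tag_set = [t for t in tag_set if t['Key'] != key]
    tag_set.append({'Key': key, 'Value': value})
    s3.put_bucket_tagging(Bucket=bucket, Tagging={'TagSet': tag_set})

add_bucket_tag('cbe-res034-scratch-29', 'Environment', 'Research')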

Create an AWS Resource Group with Terraform

I am currently getting into Terraform and I am trying to structure the different resources that I am deploying by using tags and resource groups.
https://docs.aws.amazon.com/cli/latest/reference/resource-groups/index.html
I can easily add tags with Terraform, and I can create the resource group via the AWS CLI, but I really want to be able to do both with Terraform if possible.
The official Terraform docs currently do not seem to offer an aws_resource_group resource (I was able to find aws_inspector_resource_group and aws_iam_resource_group, which are different kinds of grouping resources), but I was wondering if anyone has achieved it via some kind of workaround.
I would really appreciate any feedback on the matter.
Thanks in advance!
This has been released in AWS provider 1.55.0: https://www.terraform.io/docs/providers/aws/r/resourcegroups_group.html
For anyone looking for a code example, try this:
resource "aws_resourcegroups_group" "code-resource" {
name = "code-resource"
resource_query {
query = <<JSON
{
"ResourceTypeFilters": [
"AWS::EC2::Instance"
],
"TagFilters": [
{
"Key": "Stage",
"Values": ["dev"]
}
]
}
JSON
}
}
Please adapt it to your liking and needs. Also be sure to check out the source documentation:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/resourcegroups_group

Cloudformation - Redeploy environment that uses a recordset (with Jenkins)

TL;DR - What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
We're just starting to use AWS on a new project, and as part of the project I've been tasked with creating a simple demo environment, and updating this environment each night to show the previous day's progress.
I'm using Jenkins and the Cloudformation plugin to do this, and it works great in creating a simple EC2 instance in an existing security group, pointed to by a Route53 CNAME so it can be browsed at subdomain.example.com.
The problem I have is that I can't redeploy the same stack, because the recordset already exists, and CF won't overwrite it.
There are lots of guides on how to deploy an environment, but I'm struggling to find one on how to keep an environment up to date.
So I guess my question is: What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
I agree with the comments on your question, i.e. it's probably better to create a clean server and upload/update to it via continuous integration (Jenkins). Docker is super useful in this scenario, which you mentioned in a later comment.
However, if you are leaning towards "immutable infrastructure" and want everything encapsulated in your CloudFormation template (including creating a record in Route53), you could do something like the following snippet in your AWS::CloudFormation::Init section (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html for more info):
"Resources": {
"MyServer": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets" : { "Install" : [ "UpdateRoute53", "ConfigSet2, .... ] },
"UpdateRoute53" : {
"files" : {
"/usr/local/bin/cli53" : {
"source" : "https://github.com/barnybug/cli53/releases/download/0.6.3/cli53-linux-amd64",
"mode" : "000755", "owner" : "root", "group" : "root"
},
"/tmp/update_route53.sh" : {
"content" : { "Fn::Join" : ["", [
"#!/bin/bash\n\n",
"PRIVATE_IP=`curl http://169.254.169.254/latest/meta-data/local-ipv4/`\n",
"/usr/local/bin/cli53 rrcreate ",
{"Ref": "Route53HostedZone" },
" \"", { "Ref" : "ServerName" },
" 300 A $PRIVATE_IP\" --replace\n"
]]},
"mode" : "000755", "owner" : "root", "group" : "root"
}
},
"commands" : {
"01_UpdateRoute53" : {
"command" : "/tmp/update_route53.sh > /tmp/update-route53.log 2>&1"
}
}
}
}
},
"Properties": { ... }
}
}
....
I've omitted large chunks of the template to focus on the important info. The "UpdateRoute53" section creates 2 files:
1. /usr/local/bin/cli53 - cli53 is a great little wrapper program around AWS Route53 (the AWS CLI's route53 commands are pretty horrible to use, i.e. they require creating large chunks of JSON) - see https://github.com/barnybug/cli53 for more info on cli53
2. /tmp/update_route53.sh - a script that updates Route53 via the cli53 tool we installed in (1). This script determines the PRIVATE_IP via a curl command to the special AWS metadata endpoint (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html for more details). The zone ID of the correct hosted zone is injected via a CloudFormation parameter (i.e. {"Ref": "Route53HostedZone"}). Finally, the name of the record comes from the "ServerName" parameter, though how this is set could vary from template to template.
In the "commands" section we run the script created in the "files" section (2) and write the results to a log file in the /tmp folder.
NOTE (1) - The parameter Route53HostedZone can be declared as follows:
"Route53HostedZone": {
"Description": "Route 53 hosted zone for updating internal DNS",
"Type": "AWS::Route53::HostedZone::Id",
"Default": "VIWIWK4PYAC23B"
}
The cool thing about the "AWS::Route53::HostedZone::Id" parameter type is that it displays a combo box (when running a CloudFormation template via the AWS web console) showing the zone name, with the value being the zone ID.
NOTE (2) - The --replace attribute in the CLI53 script overrides existing records which is probably what you want.
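If you'd rather avoid the cli53 dependency, the same replace semantics exist natively as an UPSERT change in the Route 53 API; a boto3 sketch, where the zone ID is the example default above and the record name and IP are placeholders:
import boto3

route53 = boto3.client('route53')

def upsert_a_record(zone_id, name, ip, ttl=300):
    # UPSERT creates the record if missing, or overwrites it if present,
    # which mirrors cli53's --replace flag
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': name,
                    'Type': 'A',
                    'TTL': ttl,
                    'ResourceRecords': [{'Value': ip}],
                },
            }]
        },
    )

upsert_a_record('VIWIWK4PYAC23B', 'subdomain.example.com', '10.0.0.12')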
NOTE (3) - Another option would be to SSH via Jenkins (e.g. using the "Publish Over SSH Plugin" - https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin), determine the private IP, and use the cli53 script to update Route53 either from the server you've logged into or even from the build server (where Jenkins is running).
Lots of options - hope you get it sorted! :-)