In an AWS lambda, how do I access the image_id or tag of the launched container from within it?

I have an AWS lambda built using SAM. I want to propagate the id (or, if it's easier, the tag) of a lambda's supporting docker image through to the lambda runtime function.
How do I do this?
Note: I do mean image id and NOT container id - what you'd see if you called docker image ls locally. Getting the container id / hostname is the easy bit :D
I have tried to declare a parameter in the template.yaml and have it picked up as an environment variable that way. I would prefer to define the value at most once within the template.yaml, and preferably have it auto-populated, though I am not aware of best practice there. The aim is to avoid human error. I don't want to pass the value on the command line unless I have to.
If it's too hard to get the image id then as a fallback the DockerTag would be fine. Again, I don't want this in multiple places in the template.yaml. Thanks!
Unanswered similar question: Finding the image ID of a container from within the container

The launched image URI is available in the packaged template file after running sam package, so it's possible to extract the tag from there.
For example, if using YAML:
grep -w ImageUri packaged.yaml | cut -d: -f3
This finds the URI in the packaged template (the line looks like ImageUri: 12345.dkr.ecr.us-east-1.amazonaws.com/myrepo:mylambda-123abc-latest) and grabs the tag, which is everything after the 2nd :.
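From there, one option is to feed the extracted tag back in at deploy time and surface it as an environment variable. This is only a sketch, not part of the question's template: the ImageTag parameter and IMAGE_TAG variable are names I made up, and template.yaml would need to declare the parameter and set Environment.Variables.IMAGE_TAG: !Ref ImageTag on the function.
TAG=$(grep -w ImageUri packaged.yaml | cut -d: -f3)
sam deploy --template-file packaged.yaml \
    --stack-name my-stack \
    --parameter-overrides ImageTag=$TAG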
That said, I don't think it's a great solution. I wish there was a way using the SAM CLI.

Related

Optional ways to put large UserData contents in CF template

I have a CloudFormation template for deploying Cisco 8000v instances. In order to bootstrap these I have a very long device-specific user-data file. I can put the whole contents in the UserData block, but then my CF template is not very reusable. Can I refer to the contents via a filename and import them somehow? I can't find any examples of this. What is a more common way to approach this? The UserData string has several instance-specific configurations. Should I base64 encode the string and refer to it as a parameter?
You would store your long script externally to the instance, e.g. in S3. Then your user_data would be very short, limited to downloading the script from S3 and executing it.
Alternatively, you can create a custom AMI which is pre-configured for your use case. This way your user_data script can be reduced or even fully eliminated.
You cannot put too much in user_data, as it is limited to 16 KB; check it here.
The best way to do it: store the script on S3 or another place your EC2 instance can reach, then have user_data download and execute it.
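A minimal sketch of that download-and-run pattern (the bucket and key are placeholders, and the instance profile would need s3:GetObject on the object):
#!/bin/bash
# Fetch the long, device-specific bootstrap script from S3 and run it.
aws s3 cp s3://my-config-bucket/cisco-8000v-bootstrap.sh /tmp/bootstrap.sh
chmod +x /tmp/bootstrap.sh
/tmp/bootstrap.sh
The template stays reusable because only the S3 key (which can itself be a template parameter) varies per instance.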

Update AWS ECS Task Definition with Powershell

Long story short, I need to update my ECS task definition via PowerShell in order to increase the "EphemeralStorage_SizeInGiB", which is only available via the AWS CLI.
I am able to successfully grab the task via the Get-ECSTaskDefinitionDetail cmdlet but I'm stuck on what to do next.
I was able to convert that output to JSON and update the ephemeral storage field in the JSON file, but cannot figure out how to send that back to AWS. All my attempts with the Register-ECSTaskDefinition cmdlet seem to fail, as it wants individual arguments for each parameter instead of a JSON upload.
Any advice would be appreciated.
Thanks,
I don't have one to test with, but most AWS cmdlets return objects which can be piped to each other. Get-ECSTaskDefinitionDetail does too, returning a DescribeTaskDefinitionResponse object with what looks like all the right properties to auto-fill the registration. Try out
Get-ECSTaskDefinitionDetail -TaskDefinition $ARN |
Register-ECSTaskDefinition -EphemeralStorage_SizeInGiB $newSize
Or it might require using this .TaskDefinition property:
$Response = Get-ECSTaskDefinitionDetail -TaskDefinition $ARN
$Response.TaskDefinition | Register-ECSTaskDefinition -EphemeralStorage_SizeInGiB $newSize
and maybe it's that easy?
Note that you must not use -Select in the Get command, or it will return a different object type.
That said, it's pretty awkward that it won't take JSON when two of its parameters do. Might be worth reopening this feature request:
https://github.com/aws/aws-tools-for-powershell/issues/184
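If the pipeline binding doesn't pan out, the describe/patch/register round-trip also works with the plain AWS CLI route the question alludes to. A sketch, with a made-up family name mytask; the read-only fields returned by describe must be stripped before re-registering:
# Pull the current definition, drop read-only fields, bump ephemeral storage.
aws ecs describe-task-definition --task-definition mytask --query taskDefinition \
  | jq 'del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
            .compatibilities, .registeredAt, .registeredBy)
        + {ephemeralStorage: {sizeInGiB: 100}}' > td.json
# Register the patched definition as a new revision.
aws ecs register-task-definition --cli-input-json file://td.json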

Does Deployment Manager have Cloud Functions support (and support for having multiple cloud functions)?

I'm looking at this repo and very confused about what's happening here: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/cloud_functions
In other Deployment Manager examples I see the "type" is set to the type of resource being deployed but in this example I see this:
resources:
- name: function
  type: cloud_function.py  # why not "type: cloudfunctions"?
  properties:
    # All the files that start with this prefix will be packed in the Cloud Function
    codeLocation: function/
    codeBucket: mybucket
    codeBucketObject: function.zip
    location: us-central1
    timeout: 60s
    runtime: nodejs8
    availableMemoryMb: 256
    entryPoint: handler
"type" is pointing to a python script (cloud_function.py) instead of a resource type. The script is over 100 lines long and does a whole bunch of stuff.
This looks like a hack, like its just scripting the GCP APIs? The reason I'd ever want to use something like Deployment Manager is to avoid a mess of deployment scripts but this looks like it's more spaghetti.
Does Deployment Manager not support Cloud Functions and this is a hacky workaround or is this how its supposed to work? The docs for this example are bad so I don't know what's happening
Also, I want to deploy multiple function into a single Deployment Manager stack- will have to edit the cloud_function.py script or can I just define multiple resources and have them all point to the same script?
Edit
I'm also confused about what these two imports are for at the top of the cloud_function.yaml:
imports:
# The function code will be defined for the files in function/
- path: function/index.js
- path: function/package.json
Why is it importing the actual code of the function it's deploying?
Deployment Manager simply interacts with the different kinds of Google APIs. This documentation gives you a list of resource types supported by Deployment Manager. I would recommend running gcloud deployment-manager types list | grep function and you will find that the cloudfunctions.v1beta2.function resource type is also supported by DM.
The template is using a gcp-type (that is in beta). The cloud_function.py is a template. If you use a template, you can reuse it for multiple resources; you can see this example. For a better understanding (easier to read/follow), you can check this example of Cloud Functions through a gcp-type.
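To the multiple-functions part of the question: because cloud_function.py is a template, you can point several resources at it without editing the script. A sketch, reusing the property names from the example above (the resource names and code locations are made up):
resources:
- name: function-one
  type: cloud_function.py
  properties:
    codeLocation: function-one/
    codeBucket: mybucket
    codeBucketObject: function-one.zip
    location: us-central1
    timeout: 60s
    runtime: nodejs8
    availableMemoryMb: 256
    entryPoint: handler
- name: function-two
  type: cloud_function.py
  properties:
    codeLocation: function-two/
    codeBucket: mybucket
    codeBucketObject: function-two.zip
    location: us-central1
    timeout: 60s
    runtime: nodejs8
    availableMemoryMb: 256
    entryPoint: handler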
I want to add to the answer by Aarti S that gcloud deployment-manager types list | grep function didn't work for me, but I found how to list all resource types, including resources that are in alpha:
gcloud beta deployment-manager types list --project gcp-types
Or just gcloud beta deployment-manager types list | grep function helps.

How do I force a CloudFormation stack to update when the parameter is updated?

I am running an AWS CloudFormation stack that takes in some parameters and launches EC2 instances along with other AWS resources. The parameters are fed into the user data of the EC2 instance, and based on that, changes are made dynamically to the web application residing on the EC2 instance.
UserData:
  Fn::Base64:
    Fn::Join:
      - ""
      - - "#!/bin/bash \n"
        - "sh website-conf/website_mysql_config.sh "
        - " -c \""
        - Ref: "CompanyName"
As shown in the example above, CompanyName is one of the many parameters passed to the user-data script. The problem is, when any one or more of the parameters are updated, CloudFormation does not detect that and instead throws an error.
So, in order to update the stack, I have to edit the stack and make changes to the ASG so that CloudFormation 'sees' the changes and executes the stack update.
Is there a way to force CFN to update the stack when the parameters are updated?
CloudFormation will not update the stack unless there is a change in properties of the resources already created in the stack.
For example:
Suppose I have a simple template to create a database, where I need to pass 2 parameters:
db-name
region
Assume that I am using db-name, passing it as the value of DBInstanceIdentifier.
Also assume that I am not using the input parameter region for any purpose in the creation of the stack's resources (or their properties). It is more of a dummy parameter I keep for readability purposes.
I passed (TEST-DB1, us-east-1) as input parameters to the CloudFormation template and successfully created the resources.
Scenario-1:
Now if I update the stack (still using the existing template) and just change the input parameters to (TEST-DB2, us-east-1), i.e. changing just the db-name and not the region, then CloudFormation will detect that this parameter update results in a change in properties of the stack's running resource(s), and will compute and display the modifications as a change set.
Scenario-2:
Suppose I make another update (still using the existing template) and just change the input parameters to (TEST-DB1, us-east-2), i.e. changing just the region and not the db-name. Then CloudFormation will detect that this parameter update results in NO change in properties of the stack's running resource(s), and will show the 'Error creating change set' message.
Bottomline:
Your change in input parameters must result in an update or replacement of some resource of the stack (or its attributes, like security groups, ports, etc.). Then AWS CloudFormation will display the modifications as change sets for your review. Also, the method (update or replacement) AWS CloudFormation uses depends on which property you update for a given resource type.
Your parameter CompanyName is not making any changes to the running resources of the stack, hence the 'Error creating change set'. You need to use it in creating some resource (or resource property) of the stack; then CloudFormation will detect the change sets when you modify it. The same applies to any other input parameters you use.
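For instance, one low-impact way to make such a parameter count as a resource change is to reference it in a harmless property like a tag. This is a sketch, not taken from the asker's template (the logical name and AMI are placeholders):
Resources:
  WebServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678  # placeholder
      Tags:
        - Key: company-name
          Value: !Ref CompanyName
Now editing the CompanyName parameter changes a real resource property, so an update produces a change set instead of the error.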
Use the AWS CLI update-stack command. If you use the AWS CLI, you can inject parameters into your stack so that any change to any of the parameters results in a stack update. I do this myself to inject the Git commit ID into UserData, so simply committing changes to the stack's JSON/YAML in Git allows stack updates. Any change to the parameters file allows stack updates, even just a comment. I reference my Git commit ID in UserData the same way you are referencing Ref: CompanyName, so when I change the Git commit ID the UserData section is updated on stack updates.
Update Stack Command
aws cloudformation update-stack --stack-name MyStack --template-body file:///Users/Documents/Git/project/cloudformation/stack.json --parameters file:///Users/Documents/Git/project/cloudformation/parameters/stack-parameters.dev.json --capabilities CAPABILITY_IAM
Process
With this approach you make your parameter changes in the parameters JSON or YAML file, then check it into version control. Now if you use a build server, you can update your stack by checking out master and just running that one line above. Using AWS CodeBuild makes this easy, so you don't need Jenkins.
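A sketch of the commit-ID trick (GitCommitId is a made-up parameter name; the template would need to declare it and reference it in UserData, just like CompanyName above):
# Stamp the current commit into the stack so every commit is a detectable change.
COMMIT=$(git rev-parse --short HEAD)
aws cloudformation update-stack --stack-name MyStack \
    --template-body file://stack.json \
    --parameters ParameterKey=GitCommitId,ParameterValue=$COMMIT \
    --capabilities CAPABILITY_IAM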
Your problem is already answered by the statement above: CloudFormation will not update the stack unless there is a change in properties of the resources already created in the stack.
As for your question, please check the explanation below.
There is a way to force an update using AWS::CloudFormation::Init: with cfn-init, each instance can update itself when it detects a change made to the AWS::CloudFormation::Init metadata.
There is a concept we must understand first: the difference between UserData and Metadata, at least in the AWS::CloudFormation::Init case.
UserData: will only be run once, when the instance is launched for the first time (this includes updates that require the instance to be replaced). So if you update the stack (not create a new one), even if you change a parameter value, nothing happens if the parameter is only referenced in UserData.
Metadata: can be updated at any time. To make it work, you have to make sure the daemon that detects metadata changes is running (the daemon is called cfn-hup).
Even if you already use Metadata and AWS::CloudFormation::Init, the data is not updated immediately. As far as I know, these are the conditions for a Metadata change to be applied:
Reboot the instance
Run the cfn-init command again with its parameters
Wait up to 15 minutes, because the cfn-hup daemon checks for Metadata changes once every 15 minutes by default.
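For completeness, here is the usual cfn-hup wiring as it appears in a template's Metadata section (a sketch; WebServer is a hypothetical logical resource ID, and interval=5 overrides the 15-minute default):
Metadata:
  AWS::CloudFormation::Init:
    config:
      files:
        /etc/cfn/cfn-hup.conf:
          content: !Sub |
            [main]
            stack=${AWS::StackId}
            region=${AWS::Region}
            interval=5
        /etc/cfn/hooks.d/cfn-auto-reloader.conf:
          content: !Sub |
            [cfn-auto-reloader-hook]
            triggers=post.update
            path=Resources.WebServer.Metadata.AWS::CloudFormation::Init
            action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}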

Need to get name of cloudformation template used to deploy ec2 from the command line using aws cli or api

I used a cloudformation template to create an ec2 instance. Is there any way besides tagging that I can get the name of the cloudformation template via the command line?
Method 1: Tagging
Tagging is going to be the cleanest and easiest way to get that data. You do need to do some advance work and this won't work for existing instances, but it's going to be fast and reliable.
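Note that CloudFormation already applies aws:cloudformation:stack-name (plus stack-id and logical-id) tags to the EC2 instances it creates, so from inside the instance something like this should work (assuming the instance role allows ec2:DescribeTags):
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 describe-tags \
    --filters "Name=resource-id,Values=$INSTANCE_ID" \
              "Name=key,Values=aws:cloudformation:stack-name" \
    --query 'Tags[0].Value' --output text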
Method 2: Cross-referencing
If you have the instance ID, you can ask CloudFormation to search for its sibling stack resources, from which you can infer the stack name, ID, etc.
import boto.cloudformation

c = boto.cloudformation.connect_to_region('us-east-1')
c.describe_stack_resources(physical_resource_id='i-830e2869')[0].stack_name
If the instance is not part of a stack, you'll get a "Stack for i-830e2869 does not exist" 400 error.
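Since the question asks about the AWS CLI, the same lookup works there too (using the example instance ID from above):
aws cloudformation describe-stack-resources \
    --physical-resource-id i-830e2869 \
    --query 'StackResources[0].StackName' --output text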
Method 3: User data
I'll admit - this was pretty creative so kudos for thinking it up.
curl http://169.254.169.254/latest/user-data | grep 'cfn-init -s' | awk '{print $3}'
The reason this works is that instances created by CloudFormation need to run /opt/aws/bin/cfn-init to install packages and /opt/aws/bin/cfn-signal to report their successful creation, and one of the parameters to those commands is the stack name.
It'll fail if someone edits the user data, but despite feeling a bit hacky, it seems pretty reliable. I still wouldn't recommend using it in prod given its brittle reliance on a script parameter.