AWS CDK create Lambda from image

I am new to the AWS world. I am working on a project to build a serverless application, and as part of that I have created four Lambdas, which work fine.
Next I am trying to create a deployment pipeline using CDK; below is what I am trying to do:
create a Docker image that includes all the Lambda code
create four different Lambdas from the same image, just overriding the CMD in the Docker image to point at each Lambda's handler
I have set up CDK locally and am able to create the stack; everything works fine.
Below is my code snippet:
# create the docker image
asset_img = aws_ecr_assets.DockerImageAsset(
    self,
    "test_image",
    directory=os.path.join(os.getcwd(), "../mysrc")
)
# create lambda from docker image
aws_lambda.DockerImageFunction(
    self,
    function_name="gt_from_image",
    code=_lambda.DockerImageCode.from_ecr(
        self,
        repository=asset_img.repository,
        tag="latest"
    )
)
Below is the error I am getting:
TypeError: from_ecr() got multiple values for argument 'repository'
I am not sure how to reference the image that was created when defining the Lambda.
Solved: below is the solution:
asset_img = _asset.DockerImageAsset(
    self, "test_image",
    directory=os.path.join(os.getcwd(), "../gt")
)
_lambda.DockerImageFunction(
    self, id="gt_from_image",
    function_name="gt_from_image_Fn",
    code=_lambda.DockerImageCode.from_ecr(
        repository=asset_img.repository,
        tag=asset_img.source_hash
    )
)

From the documentation for DockerImageCode.from_ecr(), it does not expect a scope argument, so the self argument is what causes the error.
Another issue is that DockerImageAsset will not tag the image as latest, as that is against AWS best practices.
The easy way to achieve what you are doing is to use DockerImageCode.from_image_asset().
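For illustration, here is a minimal Python sketch of that approach inside a stack's __init__, assuming a single Dockerfile under ../mysrc and hypothetical handler names such as app.handler_one:
import os
from aws_cdk import aws_lambda as _lambda

# One image asset, several functions: each DockerImageFunction points CMD at a
# different handler. Handler names below are placeholders, not from the question.
handlers = {
    "FnOne": "app.handler_one",
    "FnTwo": "app.handler_two",
}

for fn_id, handler in handlers.items():
    _lambda.DockerImageFunction(
        self, fn_id,
        code=_lambda.DockerImageCode.from_image_asset(
            directory=os.path.join(os.getcwd(), "../mysrc"),
            cmd=[handler],
        ),
    )
Because every call uses the same directory, CDK hashes and builds the image once and reuses it; only the CMD differs per function.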

Related

In an AWS lambda, how do I access the image_id or tag of the launched container from within it?

I have an AWS lambda built using SAM. I want to propagate the id (or, if it's easier, the tag) of a lambda's supporting docker image through to the lambda runtime function.
How do I do this?
Note: I do mean image id and NOT container id - what you'd see if you called docker image ls locally. Getting the container id / hostname is the easy bit :D
I have tried to declare a parameter in the template.yaml and have it picked up as an environment variable that way. I would prefer to define the value at most once within the template.yaml, and preferably have it auto-populated, though I am not aware of best practice there. The aim is to avoid human error. I don't want to pass the value on the command line unless I have to.
If it's too hard to get the image id then as a fallback the DockerTag would be fine. Again, I don't want this in multiple places in the template.yaml. Thanks!
Unanswered similar question: Finding the image ID of a container from within the container
The launched image URI is available in the packaged template file after running sam package, so it's possible to extract the tag from there.
For example, if using YAML:
grep -w ImageUri packaged.yaml | cut -d: -f3
This finds the URI in the packaged template (which looks like ImageUri: 12345.dkr.ecr.us-east-1.amazonaws.com/myrepo:mylambda-123abc-latest) and grabs the tag, which is the text after the second colon.
That said, I don't think it's a great solution. I wish there was a way using the SAM CLI.
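If the grep/cut pipeline feels fragile, the same extraction can be done with a short Python script. This is just a sketch that assumes the ImageUri: line format shown above; it is not a SAM CLI feature:
# extract_image_tag.py - pull the image tag out of the packaged template.
# Assumes lines of the form:
#   ImageUri: 12345.dkr.ecr.us-east-1.amazonaws.com/myrepo:mylambda-123abc-latest
def image_tag(packaged_template="packaged.yaml"):
    with open(packaged_template) as f:
        for line in f:
            line = line.strip()
            if line.startswith("ImageUri:"):
                uri = line.split(":", 1)[1].strip()
                # the tag is everything after the last ':' in the URI
                return uri.rsplit(":", 1)[1]
    raise ValueError("no ImageUri found in " + packaged_template)

if __name__ == "__main__":
    print(image_tag())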

Updating Lambda using CDK doesn't deploy latest image

Using the AWS C# CDK.
I get a docker image from an ECR repository & then create a lambda function using it.
The problem is that when I run the CDK, it clearly creates CloudFormation that updates the function. Within the AWS console, the latest image is then shown under "Image > Image URI". However the behaviour of my lambda clearly shows that the latest image has NOT been deployed.
If I click "Deploy New Image", leave everything as normal & click Save, my Lambda then shows that it is updating & then the behaviour of my lambda is as expected (latest image).
Unsure where I'm going wrong:
var dockerImageCode = DockerImageCode.FromEcr(ecrRepositoryContainingImage);
var dockerImageFunction = new DockerImageFunction(this,
    Constants.LAMBDA_ID,
    new DockerImageFunctionProps()
    {
        Code = dockerImageCode,
        Description = versionString,
        Vpc = foundationStackVpc,
        SecurityGroups = new ISecurityGroup[]
        {
            securityStackVpcSecurityGroup
        },
        Timeout = Duration.Seconds(30),
        MemorySize = 512
    });
It is almost as if my Lambda gets updated and shows that it is pointing at the correct image within ECR, but in reality that image is not actually deployed.
Edit: A temporary fix is to push new images as image:buildnumber rather than image:latest. Even if the image in ECR is different underneath, and CDK has supposedly updated the Lambda's image reference to the newly uploaded one, it does not consider the change worthy of redeployment when the old and new image tags have the same name, in this case latest. Since the build number is always different, the new tag always differs from the previous one, and that is enough of a change for the Lambda to be redeployed properly.
When using the fromEcr API, you can specify EcrImageCodeProps with an explicit image tag.
See the docs for details.
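The question uses C#, but here is a rough Python sketch of the same idea inside a stack (the property is tag_or_digest in recent CDK releases, tag in older ones; the repository name and build number are placeholders):
import os
from aws_cdk import aws_ecr as ecr, aws_lambda as _lambda

# Pin the image to a per-build tag instead of "latest" so the resolved
# ImageUri changes on every build and CloudFormation sees an update.
repo = ecr.Repository.from_repository_name(self, "Repo", "my-repo")  # name assumed
docker_image_code = _lambda.DockerImageCode.from_ecr(
    repository=repo,
    tag_or_digest=os.environ.get("BUILD_NUMBER", "dev"),  # CI-provided build number
)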
The latest tag did not work for me either. I think an easy way is to use SSM from within CodeBuild:
'aws ssm put-parameter --name FhrEcrImageTagDemo --type String --value ${CODEBUILD_RESOLVED_SOURCE_VERSION} --overwrite'
Then in the CDK Lambda:
code: aws_lambda.Code.fromEcrImage(
  aws_ecr.Repository.fromRepositoryName(
    this,
    'id',
    'ecrRepositoryName',
  ),
  {
    tag: aws_ssm.StringParameter.valueForStringParameter(
      this,
      'parameterName',
    ),
  },
)
Another potential solution is to use the exported variables and override parameters shown in the example class TagParameterContainerImage. That works for ECS, but I am not sure about Lambda and ECR.

Error when calling LambdaInvoke in an AWS CDK step function

I am trying to create a step function via AWS CDK.
The step function should, among other things, call a Lambda function.
I am using the LambdaInvoke task as described in: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_stepfunctions_tasks/LambdaInvoke.html
For the lambda_function parameter I am trying to use an existing Lambda by referencing its ARN using
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_lambda/Function.html#aws_cdk.aws_lambda.Function.from_function_arn
(I tried using from_function_attributes as well.)
My CDK code looks like this:
lambda_name = sfn.Task(
    self, "Lambda Name",
    task=sfn_tasks.LambdaInvoke(
        self, "lambdaInvoke",
        lambda_function=lambda_.Function.from_function_arn(
            self, "import-lambda",
            function_arn="arn:aws:lambda:eu-west-1:accountnumber:function:LambdaName"
        )
    ),
    result_path="$.guid"
)
I am trying to call an existing Lambda. (The Lambda itself is created and updated in a pipeline set up in the same CDK project, but in a different stack.)
I also tried adding :$LATEST to LambdaName; both attempts give me the same error:
jsii.errors.JSIIError: props.task.bind is not a function
I made a support ticket for the issue; they ran into the same error and referred me here.
How do I invoke a Lambda via CDK?
Update:
LambdaInvoke is not supposed to be embedded in a Task; it is a task itself.
lambda_name = sfn_tasks.LambdaInvoke(
    self, "lambdaInvoke",
    lambda_function=lambda_.Function.from_function_arn(
        self, "import-lambda",
        function_arn="arn:aws:lambda:eu-west-1:accountnumber:function:LambdaName"
    ),
    result_path="$.guid"
)
Works

How to deploy our own TensorFlow Object Detection model in Amazon SageMaker?

I have my own trained TF Object Detection model. When I try to deploy the same model in AWS SageMaker, it does not work.
I have tried TensorFlowModel() in SageMaker, but there is an argument called entry_point; how do I create that .py file for prediction?
entry_point is an argument that contains the file name inference.py. Once you create an endpoint and try to predict on an image using the invoke-endpoint API, an instance of the type you specified is created, and the inference.py script is executed.
Link: Documentation for TensorFlow model deployment in Amazon SageMaker
The inference script must contain the methods input_handler and output_handler, or a single handler method that covers both, for pre- and post-processing of your image.
Example for deploying the TensorFlow model
The link above points to a Medium post, which should be helpful for your doubts.
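As a concrete illustration, here is a minimal sketch of such an inference.py for the SageMaker TensorFlow Serving container; the input_handler/output_handler names are that container's contract, while the JSON handling is an assumption for a JSON request:
# inference.py - pre/post-processing hooks loaded by the SageMaker
# TensorFlow Serving container (JSON payload shape is illustrative).
import json

def input_handler(data, context):
    """Pre-process the request before it reaches TF Serving."""
    if context.request_content_type == "application/json":
        payload = data.read().decode("utf-8")
        # TF Serving's REST API expects {"instances": [...]}
        return json.dumps({"instances": json.loads(payload)})
    raise ValueError(
        "Unsupported content type: {}".format(context.request_content_type)
    )

def output_handler(data, context):
    """Post-process the TF Serving response before returning it."""
    if data.status_code != 200:
        raise ValueError(data.content.decode("utf-8"))
    return data.content, context.accept_header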

Perform CloudFormation only if there are changes in a Lambda using AWS CodePipeline

I am using AWS CodePipeline to run CloudFormation. My source code is committed to a GitHub repository. Whenever a commit happens in my GitHub repository, AWS CodePipeline starts executing and performs the CloudFormation deployment. These functionalities work fine.
My project has multiple modules, so if a user modifies only one module, all of the modules' Lambdas are updated. Is there any way to restrict this using AWS CodePipeline?
My Code Pipeline has 3 stages.
Source
Build
Deploy
We had a similar issue and eventually came to the conclusion that this is not exactly possible. Unless you separate your modules into different repositories and make a separate pipeline for each, it is always going to execute everything.
The good thing is that each execution of the pipeline does not redeploy everything from scratch. In the deploy stage you can add a Create Changeset action, which detects what has changed since the previous CloudFormation deployment, redeploys only those parts, and leaves everything else untouched.
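To make the changeset behaviour concrete, here is a hedged boto3 sketch of the create/inspect/execute flow (the stack, template, and changeset names are made up for the example):
import boto3

cfn = boto3.client("cloudformation")

# Create a changeset instead of updating the stack directly; CloudFormation
# computes the diff against the live stack.
cfn.create_change_set(
    StackName="my-modules-stack",
    TemplateURL="https://s3.amazonaws.com/my-bucket/packaged.yaml",
    ChangeSetName="pipeline-run-123",
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="my-modules-stack",
    ChangeSetName="pipeline-run-123",
)

# Inspect what would actually change before executing anything
changes = cfn.describe_change_set(
    StackName="my-modules-stack",
    ChangeSetName="pipeline-run-123",
)["Changes"]
for change in changes:
    rc = change["ResourceChange"]
    print(rc["LogicalResourceId"], rc["Action"])

# Apply only those changes
cfn.execute_change_set(
    StackName="my-modules-stack",
    ChangeSetName="pipeline-run-123",
)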
This is the exact issue we faced recently, and while I see comments mentioning that it isn't possible to achieve with a single repository, I have found a workaround!
Generally, the pipeline is triggered by a CloudWatch event listening to the GitHub/CodeCommit repository. Rather than triggering the pipeline directly, I made the CloudWatch event trigger a Lambda function. In the Lambda, we can write the logic to execute only the pipeline(s) for the modules that have changes. This works really nicely and provides a lot of control over pipeline execution. This way, multiple pipelines can be created from a single repository, solving the problem mentioned in the question.
Lambda logic can be something like:
import boto3

# Map config files to pipelines
project_pipeline_mapping = {
    "CodeQuality_ScoreCard": "test-pipeline-code-quality",
    "ProductQuality_ScoreCard": "test-product-quality-pipeline"
}

files_to_ignore = ["readme.md"]

codecommit_client = boto3.client('codecommit')
codepipeline_client = boto3.client('codepipeline')

def lambda_handler(event, context):
    projects_changed = []

    # Extract commits
    print("\n EVENT::: ", event)
    old_commit_id = event["detail"]["oldCommitId"]
    new_commit_id = event["detail"]["commitId"]

    # Get commit differences
    codecommit_response = codecommit_client.get_differences(
        repositoryName="ScorecardAPI",
        beforeCommitSpecifier=str(old_commit_id),
        afterCommitSpecifier=str(new_commit_id)
    )
    print("\n Code commit response: ", codecommit_response)

    # Search commit differences for files that trigger executions
    for difference in codecommit_response["differences"]:
        file_name = difference["afterBlob"]["path"]
        project_name = file_name.split('/')[0]
        print("\nChanged project: ", project_name)
        # If project corresponds to a pipeline, add it to the pipeline array
        if project_name in project_pipeline_mapping:
            projects_changed.append(project_name)

    projects_changed = list(dict.fromkeys(projects_changed))
    print("pipeline(s) to be executed: ", projects_changed)
    for project in projects_changed:
        codepipeline_response = codepipeline_client.start_pipeline_execution(
            name=project_pipeline_mapping[project]
        )
Check the AWS blog on this topic: Customizing triggers for AWS CodePipeline with AWS Lambda and Amazon CloudWatch Events
Why not model this as a pipeline per module?