Run containers on AWS Lambda

I am trying to use the newly launched feature of running container images on AWS Lambda, but since the whole idea is quite new I am unable to find much support online.
Question: Is there a programmatic way to publish a new ECR image to Lambda with all the configurations, using the AWS SDK (preferably Python)?
Also, can the image be published directly as a version, instead of doing something like this:
import boto3

client = boto3.client('lambda')

def pushLatestAsVersion(functionArn, description="Lambda Environment"):
    # Look up the current code hash and revision so the publish is pinned
    # to exactly the state we just inspected
    functionDetails = client.get_function(FunctionName=functionArn)
    config = functionDetails['Configuration']
    response = client.publish_version(
        FunctionName=functionArn,
        Description=description,
        CodeSha256=config['CodeSha256'],
        RevisionId=config['RevisionId']
    )
    print(response)

pushLatestAsVersion('arn:aws:lambda:ap-southeast-1:*************:function:my-serverless-fn')

I'm not sure about the SDK, but please check SAM: https://aws.amazon.com/blogs/compute/using-container-image-support-for-aws-lambda-with-aws-sam/
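If you do want the pure SDK route, boto3 supports container-image functions directly. Here is a minimal sketch (the role ARN, account ID, and image URIs are placeholders); note that Publish=True also answers the second part of the question, since it publishes a version in the same call without a separate publish_version step:

import boto3

lambda_client = boto3.client('lambda')

# Create a new function backed by a container image in ECR
lambda_client.create_function(
    FunctionName='my-serverless-fn',
    PackageType='Image',
    Code={'ImageUri': '123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-repo:v1'},
    Role='arn:aws:iam::123456789012:role/my-lambda-role',
)

# Point an existing function at a new image and publish a version in one call
response = lambda_client.update_function_code(
    FunctionName='my-serverless-fn',
    ImageUri='123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-repo:v2',
    Publish=True,
)
print('Published version:', response['Version'])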

Related

Consuming APIs using AWS Lambda

I am a newcomer to AWS with very little cloud experience. The project I have is to call an API from NOAA, parse the returned XML document, and save the results to a database. I have an ASP.NET console app that does this pretty easily and successfully. However, I need to do the same thing in the cloud on a serverless architecture. Here are the steps I want it to take:
Lambda calls the NOAA API every day at midnight
The API returns an XML doc with results
Parse the data and save it to a cloud PostgreSQL database
It sounds simple, but I am having one heck of a time figuring out how to do this. I already have a database provisioned in AWS, as that is where data currently goes through my console app. Does anyone have advice or a resource I could look at? Also, I would prefer to keep this in .NET, but realize that I may need to move it to Python.
Thanks in advance everyone!
It's pretty simple, and you can test your code with the simple Python Lambda code below.
Create a new Lambda function with admin access (temporarily set an Admin role; you can switch to a properly scoped role afterwards).
Add the following code:
https://github.com/mmakadiya/public_files/blob/main/lambda_call_get_rest_api.py
import json
import urllib3

def lambda_handler(event, context):
    # Make a GET request to a public REST API and log the raw response
    http = urllib3.PoolManager()
    r = http.request('GET', 'http://api.open-notify.org/astros.json')
    print(r.data)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
The above code calls a REST API to fetch data. It is just a sample program, but it should help you get started.
MAKE SURE you account for the Lambda maximum run time of 15 minutes; a single invocation cannot run longer than that, so plan accordingly.
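For the "call the API every day at midnight" step, a scheduled CloudWatch Events / EventBridge rule can trigger the Lambda. A minimal boto3 sketch (the rule name, statement ID, and function ARN are placeholders):

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:noaa-fetcher'  # placeholder

# Fire every day at midnight UTC
rule = events.put_rule(
    Name='noaa-daily-fetch',
    ScheduleExpression='cron(0 0 * * ? *)',
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='noaa-daily-fetch-permission',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)

# Point the rule at the function
events.put_targets(
    Rule='noaa-daily-fetch',
    Targets=[{'Id': 'noaa-fetcher', 'Arn': FUNCTION_ARN}],
)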

AWS .Net API - The provided token has expired

I am facing a weird scenario. I generate my AWS AccessKeyId, SecretAccessKey and SessionToken by running the assume-role-with-saml command. After copying these values to the .aws\credentials file, I can run "aws s3 ls" and see all the S3 buckets. Similarly, I can run any AWS command to view objects and it works perfectly fine.
However, when I write a .NET Core application to list objects, it doesn't work on my computer. The same .NET application works fine on other colleagues' computers. We all have access to AWS through the same role. There are no users in the IAM console.
Here is the sample code, though I am fairly sure there is nothing wrong with it, because it works fine on other users' computers.
var _ssmClient = new AmazonSimpleSystemsManagementClient();
var r = _ssmClient.GetParameterAsync(new Amazon.SimpleSystemsManagement.Model.GetParameterRequest
{
    Name = "/KEY1/KEY2",
    WithDecryption = true
}).ConfigureAwait(false).GetAwaiter().GetResult();
Any idea why running commands through the CLI works but API calls don't? Don't they both look at the same %USERPROFILE%\.aws\credentials file?
I found it. Posting here since it can be useful for someone having the same issue.
Go to this folder: %USERPROFILE%\AppData\Local\AWSToolkit
Take a backup of all files and folders there, then delete everything from that location.
This solution applies only if you can run commands like "aws s3 ls" successfully, but get the error "The provided token has expired" when calling the same operations from the .NET API libraries.

Is there a way to share a document in Workdocs via the Workdocs API?

As far as I know I can share or download a document in WorkDocs via the Amazon WorkDocs UI. I am trying to build a static website in an S3 bucket that offers a link to a WorkDocs document for download.
Thus I have to share this document after verifying some stuff.
Sadly I found nothing similar in the API documentation (https://docs.aws.amazon.com/workdocs/latest/APIReference/workdocs-api.pdf).
Node.js would be fine using the SDK, but I can also try to use Lambda functions if necessary.
Greetings,
Eric
You can build a Python 3+ based AWS Lambda function for this, optionally orchestrated with AWS Step Functions.
import boto3

work_docs_client = boto3.client('workdocs')  # Change AWS connection parameters as per your setup

def lambda_handler(event, context):
    # Call share_document_work_docs(resource_id, recipient_id) based on your conditions/checks
    return None  # change this as per your use-case

# This will add the permissions to the document to share it
def share_document_work_docs(resource_id, recipient_id):
    principals = [{'Id': recipient_id, 'Type': 'USER', 'Role': 'VIEWER'}]  # change this as per your use-case
    share_doc_response = work_docs_client.add_resource_permissions(
        ResourceId=resource_id,
        Principals=principals,
        NotificationOptions={
            'SendEmail': True,
            'EmailMessage': 'Your message here',
        }  # change NotificationOptions as per your use-case
    )
    return share_doc_response
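If the recipient's WorkDocs user ID isn't known in advance, it can be looked up by name or email first. A minimal sketch (the helper name, organization ID, and query string are placeholders):

import boto3

work_docs_client = boto3.client('workdocs')

# Hypothetical helper: resolve a WorkDocs user ID from a name/email query
def find_user_id(organization_id, query):
    response = work_docs_client.describe_users(
        OrganizationId=organization_id,  # your WorkDocs directory/organization ID
        Query=query,                     # matched against user names and emails
    )
    users = response.get('Users', [])
    return users[0]['Id'] if users else None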

AWS: Deleted Lambda layer still retains layer version history

I am deploying an AWS Lambda layer using the AWS CLI:
aws lambda publish-layer-version --layer-name my_layer --zip-file fileb://my_layer.zip
I delete it using:
VERSION=$(aws lambda list-layer-versions --layer-name my_layer | jq '.LayerVersions[0].Version')
aws lambda delete-layer-version --layer-name my_layer --version-number $VERSION
It deletes successfully, and I ensured no other version of the layer exists:
aws lambda list-layer-versions --layer-name my_layer
>
{
"LayerVersions": []
}
Upon next publishing of the layer, it still retains the history of the previous version. From what I have read, if no layer version exists and no reference to it exists, the version history should be gone, but that is not what I see. Does anybody have a solution to HARD delete the layer along with its version history?
I have the same problem. What I'm trying to achieve is to "reset" the version count to 1 in order to match the code versioning and tags in my repo. Currently, the only way I have found is to publish a new layer with a new name.
I think the AWS Lambda product lacks features that would help with (semantic) versioning.
Currently there's no way to do this. Layer versions are immutable; they cannot be updated or modified, you can only delete them and publish new layer versions. Once a layer version is 'stamped', there is no way (AFAIK) to go back and reclaim that version number.
It might be that after a while (weeks? months?) AWS deletes its record of the layer version, but as it stands, the version number assigned to any deleted layer cannot be assumed by any new layer.
I ran into a similar problem with layer versions. Do you have suggestions for a simple way out, instead of writing code to check the available versions and pick the latest one?
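For what it's worth, picking the latest version programmatically is only a few lines. A minimal boto3 sketch, assuming only the layer name is known:

import boto3

lambda_client = boto3.client('lambda')

def latest_layer_version_arn(layer_name):
    # list_layer_versions returns newest first, but sorting by Version
    # number makes the intent explicit
    versions = lambda_client.list_layer_versions(LayerName=layer_name)['LayerVersions']
    if not versions:
        return None
    return max(versions, key=lambda v: v['Version'])['LayerVersionArn']

print(latest_layer_version_arn('my_layer'))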
I am facing the same issue. As there was no AWS CLI command to delete the layer itself, I had to delete all versions of my Lambda layer using:
aws lambda delete-layer-version --layer-name test_layer --version-number 1
After deleting all versions of the layer, the layer no longer showed up on the AWS Lambda layers page, so I thought it was successfully deleted.
But to my surprise, AWS still keeps data about deleted layers (at least the last version, for sure): when you create a Lambda layer with the name of a previously deleted layer using the AWS GUI or CLI, the version does not start from 1; instead, it continues from the last version of the supposedly deleted layer.
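If anyone needs to clear out many versions, the deletion can be scripted rather than done one number at a time. A minimal boto3 sketch (note that, per the answers above, the version counter still will not reset):

import boto3

lambda_client = boto3.client('lambda')

def delete_all_layer_versions(layer_name):
    # Page through all versions of the layer and delete each one
    paginator = lambda_client.get_paginator('list_layer_versions')
    for page in paginator.paginate(LayerName=layer_name):
        for version in page['LayerVersions']:
            lambda_client.delete_layer_version(
                LayerName=layer_name,
                VersionNumber=version['Version'],
            )

delete_all_layer_versions('test_layer')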

Perform CloudFormation only if there are changes in a Lambda, using AWS CodePipeline

I am using AWS CodePipeline to run CloudFormation. My source code is committed to a GitHub repository. Whenever a commit happens in my GitHub repository, AWS CodePipeline starts its execution and performs the CloudFormation deployment. This all works fine.
My project has multiple modules, so if a user modifies only one module, the Lambdas of every module are still updated. Is there any way to restrict this using AWS CodePipeline?
My Code Pipeline has 3 stages.
Source
Build
Deploy
We had a similar issue and eventually came to the conclusion that this is not exactly possible. Unless you separate your modules into different repos and make a separate pipeline for each of them, it is always going to execute everything.
The good thing is that each execution of the pipeline does not entirely redeploy everything when CloudFormation runs. In the deploy stage you can add a Create Change Set action, which detects what has changed since the previous CloudFormation deployment, redeploys only those parts, and does not touch anything else.
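For reference, the same change-set mechanics are scriptable outside the pipeline too. A minimal boto3 sketch (the stack name, change-set name, and template path are hypothetical):

import boto3

cfn = boto3.client('cloudformation')

# Create a change set against an existing stack; CloudFormation computes
# the diff against the currently deployed template
cfn.create_change_set(
    StackName='my-module-stack',
    ChangeSetName='my-diff-check',
    TemplateBody=open('template.yaml').read(),
    Capabilities=['CAPABILITY_IAM'],
    ChangeSetType='UPDATE',
)

# Wait until the change set is ready, then inspect what would change;
# note that a change set with no changes ends in FAILED status
cfn.get_waiter('change_set_create_complete').wait(
    StackName='my-module-stack', ChangeSetName='my-diff-check'
)
changes = cfn.describe_change_set(
    StackName='my-module-stack', ChangeSetName='my-diff-check'
)['Changes']
print(changes)  # only the resources that actually differ

# Apply it (or call delete_change_set to discard)
cfn.execute_change_set(StackName='my-module-stack', ChangeSetName='my-diff-check')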
This is the exact issue we faced recently, and while I see comments mentioning that it isn't possible to achieve with a single repository, I have found a workaround!
Generally, the pipeline is triggered by a CloudWatch event listening to the GitHub/CodeCommit repository. Rather than triggering the pipeline directly, I made the CloudWatch event trigger a Lambda function. In the Lambda, we can write the logic to execute only the pipeline(s) for the modules that have changes. This works really nicely and provides a lot of control over pipeline execution. This way multiple pipelines can be created from a single repository, solving the problem mentioned in the question.
Lambda logic can be something like:
import boto3

# Map top-level project folders to the pipelines they should trigger
project_pipeline_mapping = {
    "CodeQuality_ScoreCard": "test-pipeline-code-quality",
    "ProductQuality_ScoreCard": "test-product-quality-pipeline"
}

files_to_ignore = ["readme.md"]

codecommit_client = boto3.client('codecommit')
codepipeline_client = boto3.client('codepipeline')

def lambda_handler(event, context):
    projects_changed = []
    # Extract the old and new commit IDs from the CloudWatch event
    print("\n EVENT::: ", event)
    old_commit_id = event["detail"]["oldCommitId"]
    new_commit_id = event["detail"]["commitId"]
    # Get commit differences
    codecommit_response = codecommit_client.get_differences(
        repositoryName="ScorecardAPI",
        beforeCommitSpecifier=str(old_commit_id),
        afterCommitSpecifier=str(new_commit_id)
    )
    print("\n Code commit response: ", codecommit_response)
    # Search commit differences for files that trigger executions
    for difference in codecommit_response["differences"]:
        file_name = difference["afterBlob"]["path"]
        project_name = file_name.split('/')[0]
        print("\nChanged project: ", project_name)
        # If the project corresponds to a pipeline, remember it
        if project_name in project_pipeline_mapping:
            projects_changed.append(project_name)
    # De-duplicate while preserving order
    projects_changed = list(dict.fromkeys(projects_changed))
    print("pipeline(s) to be executed: ", projects_changed)
    for project in projects_changed:
        codepipeline_response = codepipeline_client.start_pipeline_execution(
            name=project_pipeline_mapping[project]
        )
Check the AWS blog post on this topic: Customizing triggers for AWS CodePipeline with AWS Lambda and Amazon CloudWatch Events
Why not model this as a pipeline per module?