AWS: Deleting a Lambda layer still retains layer version history

I am deploying an AWS Lambda layer using the AWS CLI:
aws lambda publish-layer-version --layer-name my_layer --zip-file fileb://my_layer.zip
I delete it using:
VERSION=$(aws lambda list-layer-versions --layer-name my_layer | jq '.LayerVersions[0].Version')
aws lambda delete-layer-version --layer-name my_layer --version-number $VERSION
It deletes successfully, and I have ensured that no other version of the layer exists.
aws lambda list-layer-versions --layer-name my_layer
>
{
"LayerVersions": []
}
Upon the next publish of the layer, it still retains the history of the previous version. From what I read, if no layer version exists and no reference exists, the version history should be gone, but I don't see that. Does anybody have a solution to hard-delete the layer together with its version history?

I have the same problem. What I'm trying to achieve is to "reset" the version count to 1 in order to match code versioning and tags on my repo. Currently, the only way I found is to publish a new layer with a new name.
I think the AWS Lambda product lacks features that help with (semantic) versioning.

Currently there's no way to do this. Layer versions are immutable; they cannot be updated or modified, you can only delete them and publish new layer versions. Once a layer version is 'stamped', there is no way (AFAIK) to reclaim that version number.
It might be that after a while (weeks? months?) AWS deletes its memory of the layer version, but as it stands, the version number assigned to any deleted layer cannot be assumed by any new layer.

I ran into a similar problem with layer versions. Do you have suggestions for a simple way out, instead of writing code to check the available versions and pick the latest one?
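If you do end up scripting it, a minimal boto3 sketch (assuming default credentials and region, and ignoring pagination for brevity) that picks the highest published version of a layer could look like this:

import boto3

lambda_client = boto3.client('lambda')

def latest_layer_version(layer_name):
    # Return the highest published version number of the layer, or None if none exist.
    response = lambda_client.list_layer_versions(LayerName=layer_name)
    versions = [v['Version'] for v in response.get('LayerVersions', [])]
    return max(versions) if versions else None

print(latest_layer_version('my_layer'))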

I am facing the same issue. As there was no aws-cli command to delete the layer itself, I had to delete all versions of my Lambda layer using:
aws lambda delete-layer-version --layer-name test_layer --version-number 1
After deleting all versions of the layer, the layer no longer showed up on the AWS Lambda layers page, so I thought it was successfully deleted.
But to my surprise, AWS still keeps data about deleted layers (at least the last version, for sure). When you create a Lambda layer with the name of a previously deleted layer using the AWS console or CLI, the version does not start from 1; instead it continues from the last version of the supposedly deleted layer.
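For completeness, the per-version deletion can be scripted; here is a minimal boto3 sketch (it removes every remaining version but, as described above, it still does not reset the version counter):

import boto3

lambda_client = boto3.client('lambda')

def delete_all_layer_versions(layer_name):
    # Delete every published version of the layer; the layer name itself cannot be deleted.
    while True:
        versions = lambda_client.list_layer_versions(LayerName=layer_name)['LayerVersions']
        if not versions:
            break
        for version in versions:
            lambda_client.delete_layer_version(
                LayerName=layer_name,
                VersionNumber=version['Version'],
            )

delete_all_layer_versions('test_layer')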

Related

Trying to come up with a way to track any ec2 instance type changes across our account

I have been trying to come up with a way to track any and all instance type changes that happen in our company's account (e.g. t2.micro to t2.nano).
I settled on creating a custom Config rule that would alert us with a noncompliant warning if the instance type changed, but I think this might be overcomplicating it, and I suspect I should be using CloudWatch alarms or EventBridge instead.
I have used the following setup (from the CLI):
rdk create ec2_check_instance_type --runtime python3.7 --resource-types AWS::EC2::Instance --input-parameters '{"modify-instance-type":"*"}'
modify-instance-type seemed to be the only thing I could find that related to what I wanted the Lambda function to track, and I used the wildcard to signify any change.
I then added the following to the lambda function:
if configuration_item['resourceType'] != 'AWS::EC2::Instance':
    return 'NOT_APPLICABLE'
if configuration_item['configuration']['instanceType'] == valid_rule_parameters['ModifyInstanceAttribute']:
    return 'NON_COMPLIANT'
Is there a different input parameter that I should be using for this instead of "modify-instance-type"? So far this has returned nothing, and I don't think it is evaluating properly.
Or does anyone know of a service that might be a better way to track configuration changes like this within AWS that I'm just not thinking of?
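Not an answer, but since EventBridge is mentioned above, one possible direction is a rule that matches ModifyInstanceAttribute API calls recorded by CloudTrail. A rough boto3 sketch (the rule name and the SNS topic ARN are placeholders, and CloudTrail must already be delivering management events):

import json
import boto3

events = boto3.client('events')

# Match EC2 ModifyInstanceAttribute API calls recorded by CloudTrail.
pattern = {
    'source': ['aws.ec2'],
    'detail-type': ['AWS API Call via CloudTrail'],
    'detail': {'eventName': ['ModifyInstanceAttribute']},
}

events.put_rule(
    Name='ec2-instance-type-change',  # placeholder rule name
    EventPattern=json.dumps(pattern),
    State='ENABLED',
)

# Forward matching events somewhere useful, e.g. an SNS topic (placeholder ARN).
events.put_targets(
    Rule='ec2-instance-type-change',
    Targets=[{'Id': 'notify', 'Arn': 'arn:aws:sns:us-east-1:123456789012:instance-type-changes'}],
)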

Run containers on AWS Lambda

I am trying to use the newly launched feature of running containers on AWS Lambda. But since this whole idea is quite new, I am unable to find much support online.
Question: Is there a programmatic way to publish a new ECR image into lambda with all the configurations, using AWS SDK (preferably python)?
Also, can a new version be published directly, instead of doing the following?
import boto3

client = boto3.client('lambda')

def pushLatestAsVersion(functionArn, description="Lambda Environment"):
    functionDetails = client.get_function(FunctionName=functionArn)
    config = functionDetails['Configuration']
    response = client.publish_version(
        FunctionName=functionArn,
        Description=description,
        CodeSha256=functionDetails['Configuration']['CodeSha256'],
        RevisionId=functionDetails['Configuration']['RevisionId']
    )
    print(response)

pushLatestAsVersion('arn:aws:lambda:ap-southeast-1:*************:function:my-serverless-fn')
I'm not sure about SDK, but please check SAM - https://aws.amazon.com/blogs/compute/using-container-image-support-for-aws-lambda-with-aws-sam/
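Regarding the SDK part of the question: boto3 does accept a container image for both creating and updating a function, so a rough sketch (the image URI and execution role ARN are placeholders) might look like the code below. Passing Publish=True publishes a new version as part of the update, which may avoid the separate publish_version call shown in the question.

import boto3

client = boto3.client('lambda')

image_uri = '123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-image:latest'  # placeholder

# Create a function from an ECR image (PackageType must be 'Image').
client.create_function(
    FunctionName='my-serverless-fn',
    PackageType='Image',
    Code={'ImageUri': image_uri},
    Role='arn:aws:iam::123456789012:role/my-lambda-role',  # placeholder execution role
)

# Point an existing function at a new image and publish a version in one call.
client.update_function_code(
    FunctionName='my-serverless-fn',
    ImageUri=image_uri,
    Publish=True,
)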

Get latest job revision while submitting AWS batch job without specifying the exact revision number

I am using the AWS Batch Java client com.amazonaws.services.batch (AWS SDK for Java 1.11.483) to submit jobs programmatically.
However, our scientists keep updating the job definition.
Every time there is a new job definition, I have to update the environment variable with the revision number to pass it to the client.
AWS documentation states that
This value can be either a name:revision or the Amazon Resource Name (ARN) for the job definition.
Is there any way I can default it to the latest revision and every time I submit a BatchJob, the latest revision will get picked without even knowing the last revision?
This value can be either a name:revision or the Amazon Resource Name (ARN) for the job definition.
Seems like AWS didn't document this properly: revision is optional, so you can simply use name instead of name:revision and it will pick up the ACTIVE revision of your job definition. It's also optional for job definition ARNs.
This also applies to boto3 and to the AWS Step Functions integration with AWS Batch, and probably to all other interfaces where a job definition name or ARN is required.
From the AWS Batch SubmitJob API reference:
jobDefinition
The job definition used by this job. This value can be one of name, name:revision, or the Amazon Resource Name (ARN) for the job definition. If name is specified without a revision then the latest active revision is used.
Perhaps the documentation has been updated by now.
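In boto3 terms, that means you can pass just the job definition name and the latest ACTIVE revision is resolved for you; a minimal sketch (job name, queue, and definition name are placeholders):

import boto3

batch = boto3.client('batch')

# No ':revision' suffix: AWS Batch resolves the latest ACTIVE revision of the definition.
response = batch.submit_job(
    jobName='my-job',                  # placeholder
    jobQueue='my-job-queue',           # placeholder
    jobDefinition='my-job-definition'  # name only, no revision
)
print(response['jobId'])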
I could not find a Java SDK function for this, but I ended up using a bash script that fetches the latest* revision number from AWS:
$ aws batch describe-job-definitions --job-definition-name ${full_name} \
--query='jobDefinitions[?status==`ACTIVE`].revision' --output=json \
--region=${region} | jq '.[0]'
(*) The .[0] picks the first object from the list of active revisions; I used this because, by default, AWS Batch returns the latest revision at the top. You can use .[-1] instead if you want the last one.
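If you would rather not rely on the ordering of the output, here is a boto3 sketch that selects the highest ACTIVE revision explicitly (the job definition name is a placeholder):

import boto3

batch = boto3.client('batch')

def latest_active_revision(name):
    # Return the highest ACTIVE revision number for the given job definition name.
    paginator = batch.get_paginator('describe_job_definitions')
    revisions = []
    for page in paginator.paginate(jobDefinitionName=name, status='ACTIVE'):
        revisions.extend(d['revision'] for d in page['jobDefinitions'])
    return max(revisions) if revisions else None

print(latest_active_revision('my-job-definition'))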

Downgrade to previous version of AWS Lambda

Working with AWS Lambda functions, I use the versioning feature provided by Lambda. Each time I deploy a new version of my artifact to AWS, I create a new version of the function and publish it (using the popup from the screenshot).
But how can I publish a previous version of my function (for example, when I need to roll back my last publication)?
You should provide each new version with an alias.
From the AWS Documentation
In contrast, instead of specifying the function ARN, suppose that you specify an alias ARN in the notification configuration (for example, PROD alias ARN). As you promote new versions of your Lambda function into production, you only need to update the PROD alias to point to the latest stable version. You don't need to update the notification configuration in Amazon S3.
The same applies when you need to roll back to a previous version of your Lambda function. In this scenario, you just update the PROD alias to point to a different function version. There is no need to update event source mappings.
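With an alias in place, the rollback itself is then a single call; a minimal boto3 sketch (function name, alias name, and target version are placeholders):

import boto3

client = boto3.client('lambda')

# Repoint the PROD alias at a previous, known-good version.
client.update_alias(
    FunctionName='my_lambda_function',  # placeholder
    Name='PROD',
    FunctionVersion='5',                # the version number to roll back to (placeholder)
)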
One solution I've found that works if you're in a pinch: go to a previous (working) version of the Lambda, download the deployment package, and redeploy the downloaded zip package using the AWS CLI. I'm sure there is a more elegant solution, but if you're in a pinch and need something right now, this works.
$ aws lambda update-function-code \
--function-name my_lambda_function \
--zip-file fileb://function.zip
In order to roll back to a specific version, you need to point the alias that is assigned to the current version to the version you want to roll back to.
For example: my latest version is 20 and has an alias 'Active'. For me to roll back or remove version 20, I need to remove the alias or reassign it to another version. So if I point my alias to version 17, Lambda will take version 17 as the default or prod version.
You can update the alias here:
https://myRegion.console.aws.amazon.com/lambda/home?region=myRegion#/functions/functionName/aliases/Active?tab=graph
(Replace myRegion and functionName with the relevant values.)
On that page, go to the 'Aliases' section and click the 'Version' dropdown (by default it displays the version the alias is currently assigned to). Select the version you want the alias to point to and click Save.
That's all!
There is no such built-in feature for Lambda functions.

AWS | Boto3 | RDS | DownloadDBLogFilePortion | cannot download a log file because it contains binary data

When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the DownloadDBLogFilePortion operation: This file contains binary data and should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is how I call it:
log_page=request_paginated(rds,DBInstanceIdentifier=id_rds,LogFileName=log,NumberOfLines=1000)
rds-> boto3 resource
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
Like I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
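For reference, a Marker-based pagination loop over download_db_log_file_portion generally looks something like the sketch below (simplified; the sleep/throttling handling mentioned above is omitted):

import boto3

rds = boto3.client('rds')

def download_full_log(db_instance_id, log_file_name):
    # Fetch a whole log file by following the Marker until AdditionalDataPending is False.
    marker = '0'
    content = []
    while True:
        page = rds.download_db_log_file_portion(
            DBInstanceIdentifier=db_instance_id,
            LogFileName=log_file_name,
            Marker=marker,
            NumberOfLines=1000,
        )
        content.append(page.get('LogFileData') or '')
        if not page.get('AdditionalDataPending'):
            break
        marker = page['Marker']
    return ''.join(content)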
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the proposed solution provided by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team are aware of this issue and are working on a way to resolve it, however they do not have an ETA for when this will be released.
So the solution is: use the Java-based CLI.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue: An invalid or out-of-range value was supplied for the input parameter.
An invalid parameter in boto means the data you passed does not comply with what the function expects. Probably an invalid name was specified: possibly something is wrong with your variable id_rds, or maybe with your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact file name that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance. You can use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Also check Marker (string) and NumberOfLines (integer); a mismatched type or an out-of-range value can cause this as well. Since they are not required, you can skip them first and test again later.
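The existence check above can also be done from boto3 before calling download_db_log_file_portion; a small sketch (the instance identifier is a placeholder):

import boto3

rds = boto3.client('rds')

# List the log files that actually exist on the instance.
response = rds.describe_db_log_files(DBInstanceIdentifier='my-rds-name')
for log_file in response['DescribeDBLogFiles']:
    print(log_file['LogFileName'], log_file['Size'])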