So I've created an AWS Lambda function using the AWS CLI. I did this by running the command:
aws lambda create-function
with the argument
--zip-file fileb://file-path/zipFile.zip
Then I wanted to make a change to the source code, so I made a new zip file, but the Lambda still executes the old zip file's source code. So I tried running the same command again, but got the following error:
Function already exist: FunctionName
So either I just have to abandon that function and create a new one with the new zip file, or there's some way for me to update the existing function to use the new zip file's code.
Is there a way for me to do this update, and if so, how?
create-function, as the name implies, creates functions. It does not update them with new code. For that you need update-function-code. This also takes a --zip-file argument.
While this updates the code, you may also need to publish a new version of the function for the changes to take effect. This can be done by adding the --publish argument to update-function-code, or as a separate step with the publish-version command.
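For example, reusing the zip path from the question (the function name is whatever you originally passed to create-function), the two-in-one form looks like:

aws lambda update-function-code \
    --function-name FunctionName \
    --zip-file fileb://file-path/zipFile.zip \
    --publish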
Related
The last words I can read in the IntelliJ AWS console after hitting "Update Lambda function" are "Waiting for function to stabilize". There is never a confirmation message that my Lambda function has been updated in the cloud. Nor can I find my current function ID in the AWS console (in this case it is cc3988579bd7cb089e84d6bb09066c24).
Successfully packaged artifacts and wrote output template to file /Users/xxx/...
Execute the following command to deploy the packaged template
sam deploy --template-file /Users/xxx/...
Waiting for function to stabilize
I couldn't find any keyword related to "update" from within the function itself. Then I found it in the function list. (Inspired by Oleksii Donoha, thank you.)
I'm using CDK to set up a CI/CD pipeline. I currently have a source action that pulls code from Git into the pipeline. There are then two builds: one that pulls out the code for a Lambda and builds an artifact for it, and a second that runs cdk synth to construct the Lambda framework (including a nested bucket and DynamoDB table).
Then it heads to a deploy stage, but fails because it can't find the parameters for the location of the Lambda code.
I've been using this example: https://docs.aws.amazon.com/cdk/latest/guide/codepipeline_example.html
The only differences from this example are that I'm using Python for all of it and, due to known future needs, the Lambdas are in a directory parallel to the stack code:
|-Lambdas
|--Lambda1
|---Lambda1Code
|--Lambda2
|---Lambda2Code
|-CDKStacks
|--LambdaCreationStack
|--PipelineCreationStack
|--app.py
Everything runs until the deploy stage, where it fails with the error "The following CloudFormation Parameters are missing a value:" and then lists the BucketName and ObjectKey.
I assigned those as overrides as per the above link:
admin_permissions=True,
parameter_overrides=dict(
    lambda_code.assign(
        bucket_name=lambda_location.bucket_name,
        object_key=lambda_location.object_key,
        object_version=lambda_location.object_version
    )
),
as part of the pipeline's CloudFormationCreateUpdateStackAction, and I passed the code from the Lambda stack to the pipeline stack just like in the example. But every time the Lambda stack attempts to deploy, the parameters for the location of the code 'do not exist'.
I've tried overriding the parameters, but since they are created dynamically inside the pipeline I'm hesitant to push further down that path (and my attempts didn't work anyway). I've tried a bunch of different stack/nested-stack/single-stack configurations but haven't had a success yet.
Thoughts?
This basically boils down to the fact that a CodeUri in the CloudFormation template will automatically be resolved against the artifact's S3 bucket if your CodeUri starts with ./
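For illustration, a minimal SAM-style resource fragment using such a relative CodeUri (the resource name, handler, and runtime below are assumptions, not from the question):

Resources:
  Lambda1Function:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler        # assumed handler module/function
      Runtime: python3.9            # assumed runtime
      CodeUri: ./Lambdas/Lambda1    # relative path, resolved to the artifact's S3 location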
So you have 2 options.
In your pipeline, output your artifact as normal; just pass the whole repo from the CodeBuild into the deploy action. The deploy action can pick up the artifact naturally and will automatically resolve the S3 URL for it.
If you're using Python, however, you MUST be aware that starting from a Lambda directory deeper in the tree means Python imports treat that directory as the root directory. In other words, if you were in Lambdas/Lambda1 and wanted to import a file that exists in the Lambda1 directory, then for it to work on AWS Lambda the import has to be just the file name, ignoring the rest of the path.
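For instance, with a hypothetical helpers.py sitting next to the handler in Lambdas/Lambda1:

# locally, with the repo root on sys.path, you might be tempted to write:
from Lambdas.Lambda1.helpers import do_thing  # hypothetical module; works in the repo checkout

# on AWS Lambda, Lambdas/Lambda1 itself is the root, so it has to be:
from helpers import do_thing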
This means that coding can be difficult, and running unit tests can be difficult as well. You'll want to add all the individual Lambda folders (and their paths from the root) to the PYTHONPATH environment variable of your CodeBuild instance so the unit tests know where to resolve those imports (and add a .env file to your IDE as well to handle this locally).
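A rough sketch of that, assuming the directory layout from the question (adjust the paths to match yours):

# e.g. in the install phase of the unit-test CodeBuild project
export PYTHONPATH="$PYTHONPATH:Lambdas/Lambda1:Lambdas/Lambda2"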
You use CDK and you cdk synth the stack you want to deploy. This creates a cdk.out folder with a bunch of asset zips in it plus the stack template (a JSON file). You adjust your artifact output in the CodeBuild to output the cdk.out folder, and the asset zips are automatically (thanks to CDK) substituted into the CodeUri locations in the likewise automatically synthesized template. Once you know the template's name, it's easy to point the deploy action at that template name, and it will find the asset zips individually for each Lambda.
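A minimal buildspec sketch for that second build (the template file name is an assumption; check your own cdk.out for the actual <StackName>.template.json):

version: 0.2
phases:
  build:
    commands:
      - cdk synth
artifacts:
  base-directory: cdk.out
  files:
    - "**/*"

The deploy action then just points at LambdaCreationStack.template.json (or whatever your stack's template is called) inside that artifact.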
I linked the wrong file from an AWS S3 to my AWS Lambda Function. I want to delete the file.
This post is similar, but I do not know how to reupload the zip file. I want to either replace it, or preferably, remove it so that I can edit the code again using the inline editor.
If you are not using versioning, you will need to re-upload the correct version of the code.
There is no removing the code that's there, as it is now your Lambda function's code; any upload replaces it.
For the future, take a look at using versioning for your functions so that you can ensure any accidental uploads will not disrupt applications that are using your Lambda function.
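As a sketch of that workflow (the function name and version numbers are placeholders): publish an immutable version after each known-good upload and point callers at an alias instead of $LATEST, so a bad upload to $LATEST can't break them:

aws lambda publish-version --function-name MyFunction --description "known-good build"
aws lambda create-alias --function-name MyFunction --name prod --function-version 1
# callers invoke MyFunction:prod; move the alias only when you're ready:
aws lambda update-alias --function-name MyFunction --name prod --function-version 2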
I would like to conditionally execute a certain stage in AWS CodePipeline depending on whether I've put a certain file in a repo location. So, if I put "some_file.txt" in a certain location in the repo, I want CodePipeline to check for the existence of this file and, if it's there, continue on to deploy the code to production; otherwise, stop at that stage.
With this I would like to avoid a manual approval action and control the release process by committing a file. Is this possible, and what would be the best practice?
I think you could create a Lambda action for that:
Invoke an AWS Lambda function in a pipeline in CodePipeline
The Lambda function can access the input artifact and check whether your file of interest is there or not.
Depending on the outcome of the check, the function will call either put_job_success_result or put_job_failure_result to continue or stop the pipeline.
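A minimal sketch of such a function, assuming the marker file sits at the repo root inside the source artifact (the name some_file.txt is from the question; everything else below is a placeholder). The function's role needs s3:GetObject on the pipeline's artifact bucket plus the codepipeline:PutJobSuccessResult / codepipeline:PutJobFailureResult permissions:

import io
import zipfile

import boto3

codepipeline = boto3.client("codepipeline")
s3 = boto3.client("s3")

MARKER_FILE = "some_file.txt"  # assumed path of the marker inside the artifact zip

def handler(event, context):
    job = event["CodePipeline.job"]
    job_id = job["id"]
    try:
        # The input artifact is a zip of the repo in the pipeline's artifact store
        location = job["data"]["inputArtifacts"][0]["location"]["s3Location"]
        obj = s3.get_object(Bucket=location["bucketName"], Key=location["objectKey"])
        with zipfile.ZipFile(io.BytesIO(obj["Body"].read())) as artifact:
            marker_present = MARKER_FILE in artifact.namelist()
        if marker_present:
            codepipeline.put_job_success_result(jobId=job_id)
        else:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={
                    "type": "JobFailed",
                    "message": MARKER_FILE + " not found in the source artifact",
                },
            )
    except Exception as exc:  # report any unexpected error back to the pipeline
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )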
Alternatively, you can use the buildspec file to check whether the needed file is present. If not, you can execute a stop-pipeline-execution command: https://docs.aws.amazon.com/cli/latest/reference/codepipeline/stop-pipeline-execution.html
The required arguments can be fetched from environment variables, and one more thing to note is that your stage needs adequate permission(s) to be able to execute the command.
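A rough sketch of what that could look like in a buildspec command, assuming you've exposed the pipeline name and execution ID to the build as environment variables (e.g. via the action configuration, using the pipeline variable #{codepipeline.PipelineExecutionId}):

# PIPELINE_NAME and EXECUTION_ID are assumed env vars set on the action
if [ ! -f some_file.txt ]; then
  aws codepipeline stop-pipeline-execution \
    --pipeline-name "$PIPELINE_NAME" \
    --pipeline-execution-id "$EXECUTION_ID" \
    --abandon \
    --reason "some_file.txt not present; halting release"
fi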
Firstly, I am a newbie to AWS. I was able to edit my Lambda code inline, but I recently uploaded a zip file (30 MB) to an S3 bucket and added this zip to my Lambda from S3, and now my Lambda inline editor doesn't open anymore, showing the following error:
"The deployment package of your Lambda function "LF2" is too large to
enable inline code editing. However, you can still invoke your function."
I tried deleting the zip file from the S3 bucket, hoping that the URL of the zip would no longer be reachable, the Lambda would lose the zip file, and I could edit the function again. But my Lambda still reports the 30 MB zip file's size. I am unable to delete this zip and can't figure out a way to get rid of it and edit my Lambda code again.
Note: my Lambda code was written inline and is different from the zip file (which only contains Elasticsearch setup files that I uploaded for use in my code, since importing Elasticsearch wasn't working). I know there would have been a better way to do this without uploading the zip.
Yes, you can download the Lambda function. Go to the AWS console for the Lambda function, make sure you are in the Configuration view, then click Actions | Export function. This will allow you to download a ZIP file containing the Lambda function.
Note that once you upload a Lambda function via S3, it's copied by the Lambda service. There's no connection at that point back to the S3 object that you uploaded. One reason for this is that your Lambda function would break if you, accidentally or otherwise, deleted the file from S3.
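If you prefer the CLI, the same package can be retrieved with get-function, which returns a presigned URL to the deployment package (function name taken from the error message above):

aws lambda get-function --function-name LF2 --query 'Code.Location' --output text
# download the zip from the presigned URL it prints, e.g.:
# curl -o LF2.zip "<presigned-url>"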
I had this problem yesterday and somehow managed to find my code, but not the full code that had vanished from AWS Lambda. I wrote that code again, tested it, compressed it on my own system, and then tried to upload it with the same name to the same Lambda function.
While uploading it, lambda gave me the option to choose between the remote file I uploaded and the local file it had saved previously. I opted for the local file and boom! I got my code back as it was last saved.
So, I suggest you just try to upload a random blank compressed zip file containing one file with the same name as the Lambda function. It should give you the option to choose between both files; choose the "local" file. It will take you to the inline editor where your code was.
I just ran into the same soup... it seems the upload replaced the previous index.js with the export handler.