Running into "Cannot find module" with AWS Lambda... sometimes

I've been working on some Lambdas and all of a sudden I started running into this error. The strange thing is that everything was working fine until a certain point in time, at which point all GET requests to that endpoint returned an internal server error. I looked in CloudWatch and found the "Runtime.ImportModuleError" error type: "Cannot find module 'yadda yadda' \n require stack....."
I have tried several things. I tried using 7-Zip instead of Compress-Archive (that just changed which module couldn't be found). I tried removing aws-sdk from package.json, and this is the part that really makes me feel insane: after removing it, re-zipping, and re-uploading, it worked. I thought it was solved, so I resumed working on the Lambda, but when I zipped and uploaded again it was back to the same error. At that point I took a break, looked some things up, and came back. The same Lambda that previously wasn't working now worked perfectly fine with no changes from me. I tried zipping and uploading again, and it was back to the "cannot find module" error. I'm at my wits' end here, what the heck could be happening?

From the error you get in CloudWatch it's clear that a module your Lambda function needs is not present in the node_modules directory. Removing it from package.json doesn't affect the Lambda function, because Lambda doesn't use package.json to install dependencies; that has to be done manually before packaging.
For dependencies, Lambda relies entirely on the modules you include in the zip file you upload (or reference from an S3 bucket).
There are a few things you can check:
Re-align your package.json with the required dependencies and run npm install before zipping (a minimal packaging sketch follows this list).
It's also possible that your Lambda function uses Lambda layers for its dependencies and those layers weren't included in the configuration.
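For example, a minimal re-packaging sketch, assuming a Node.js function whose handler lives in index.js and a Linux/macOS shell (the function name and file names here are placeholders, not taken from the question):

# Install the runtime dependencies declared in package.json into node_modules
npm install --production
# Zip the folder's contents so index.js and node_modules/ sit at the root of the archive
zip -r function.zip index.js package.json node_modules
# Upload the new package
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip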

AWS CodeDeploy - deployments using incorrect revision files

I've been banging my head against the wall trying to get an unbelievably simple CodeDeploy run going. The behavior I'm seeing suggests either a configuration issue or an issue local to the running agent. Basically, deployments are not using the files explicitly supplied to them - they're stuck implicitly using a prior revision.
Having created an application and deployment group (and ensured all prerequisites are in place, such as the agent being installed and roles correctly assigned), I'm creating a deployment by zipping up my code folder (zipped at the root, not including the code's containing folder). There were issues to fix in a few of the hook steps, and I was able to fix a couple of them by changing the code, re-zipping, and re-uploading before things got particularly weird. There was a syntax error in my ApplicationStart hook script (when I finally got that far), so I fixed it and re-uploaded as before. However, the same syntax error occurred. I tried re-uploading, deleting all my S3 files and re-uploading, and downloading the listed revision files to check their contents (my changes were reflected), but the same syntax error kept occurring. I even deleted the script and the hook step completely from the yml file and it still happened, so clearly the deployment system is "stuck" in some sense. I went as far as feeding it a completely empty text file, telling it it was a tar file, and it's still running my old revision. It's as though the agent's local files are stale and it's failing to clear its local contents.
What's the deal? I feel like I've missed something fundamental.
Edit - I created an entirely new but otherwise identical deployment group, re-tried the deployment with my new files, and it worked. So the original deployment group itself is stuck.
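For anyone hitting the same wall, a rough sketch of re-pointing the deployment at a fresh group from the CLI (all names below are placeholders, not taken from the question):

# Inspect the stuck group so the new one can be created with matching settings
aws deploy get-deployment-group --application-name my-app --deployment-group-name old-group
# Create a new group with the same role and targets (fill in from the output above)
aws deploy create-deployment-group --application-name my-app --deployment-group-name new-group \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole \
  --ec2-tag-filters Key=Name,Value=my-instance,Type=KEY_AND_VALUE
# Deploy the S3 revision to the new group
aws deploy create-deployment --application-name my-app --deployment-group-name new-group \
  --s3-location bucket=my-bucket,key=bundle.zip,bundleType=zip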

Cloud Function build error - failed to get OS from config file for image

I'm seeing this Cloud Build error when I try to deploy a Cloud Function:
"Step #2 - "analyzer": [31;1mERROR: [0mfailed to initialize cache: failed to create image cache: accessing cache image "us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest": failed to get OS from config file for image 'us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest'"
I'm able to build and emulate the Cloud Function locally, but I can't deploy it due to this error. I was able to deploy just fine until now. I've looked everywhere and I can't find any discussion about this. Anyone know what's going on here?
UPDATE: I deployed a new function 3 days ago and now I can't seem to deploy an update to it. I get the same error. I'm fairly sure this is happening due to the lifecycle rule I set up (see "Firebase storage artifacts is huge and keeps increasing") so that I don't keep storing old function images. This rule is important to keep because I don't want to pay for unnecessary storage, but it seems like it might be the source of the problem here. Can someone from Google look into this?
I got the same error, even for code that deployed successfully before.
A workaround is to delete the Docker images for the failing Firebase functions inside Container Registry and re-deploy the functions. (The images will be re-created upon deploying.)
The error still occurs sporadically, so I suspect this may be a bug introduced in Firebase's deployment process. Thankfully for now, the workaround above resolves the issue every time the error comes up.
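If you'd rather do this from the command line than the console, something along these lines should work (the image path mirrors the placeholder path from the error message above; gcloud must be installed and authenticated):

# Delete the cached build image for the failing function; it is recreated on the next deploy
gcloud container images delete us.gcr.io/MY_PROJECT/gcf/us-central1/SOME_KEY/cache:latest --force-delete-tags --quiet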
I also encountered the same problem, and solved it by deleting the images in the Container Registry of the Firebase project.
I made a script at the time for this. The usage is as follows (a rough sketch of what such a script looks like appears after these steps). Please use it if you like.
Install the Google Cloud SDK.
Download the Script
Edit CONTAINER_REGISTRY to your registry name. For example: CONTAINER_REGISTRY=asia.gcr.io/project-name/gcf/asia-northeast1
Grant execute permission. - $ chmod +x script.sh
Execute it. - $ sh script.sh
Deploy your functions.
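The script itself isn't reproduced above, but a rough sketch of what such a cleanup script could look like (assuming the Google Cloud SDK is installed and authenticated, and that your registry follows the usual gcf path layout) is:

#!/bin/bash
# Edit this to your own registry path, e.g. asia.gcr.io/project-name/gcf/asia-northeast1
CONTAINER_REGISTRY=asia.gcr.io/project-name/gcf/asia-northeast1

# Walk every image under the Cloud Functions registry path...
for image in $(gcloud container images list --repository="$CONTAINER_REGISTRY" --format='value(name)'); do
  # ...and delete each digest together with all of its tags
  for digest in $(gcloud container images list-tags "$image" --format='get(digest)'); do
    gcloud container images delete "$image@$digest" --force-delete-tags --quiet
  done
done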
I've been having the same problem for the last few days and am in contact with support. I had the same log, and in my case it wasn't connected to the artifacts, because the artifacts rebuild themselves automatically on deploy (read below about a subtle case related to the artifacts and how to fix it), but deleting the functions and redeploying solved it for me.
Artifacts auto cleanup
Note that if the artifacts bucket is empty, then the problem is somewhere else.
But if it's not empty, what you can do to resolve any possible problems related to the artifacts auto cleanup is to manually delete the whole "containers" folder in the artifacts bucket, which should solve it. Then just redeploy.
Make sure not to delete the artifacts bucket itself!
Doug from Firebase confirmed, in the question you're referring to, that removing the artifacts content is safe.
So, here is how to delete it from the console (a gsutil equivalent is shown after the explanation below):
Go to the Google Cloud console, select your project -> Storage -> Browser: https://console.cloud.google.com/storage/browser
Select the "artifacts" bucket
Choose the "containers" folder and delete it
If the problem was here, it should work fine after that.
This happens because the deletion rule you refer to in your question checks the "last updated" timestamp of each file, while on redeploy only some of the files are updated. So the next day the rule deletes some of the files and leaves the others, which leads to an inconsistent state of the bucket. That's why you remove everything manually.
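If you prefer the command line over the console for this, roughly the same thing can be done with gsutil (the bucket name below follows the usual Firebase artifacts bucket pattern; substitute your own project ID and regional prefix):

# Delete the whole "containers" folder inside the artifacts bucket - not the bucket itself
gsutil -m rm -r "gs://us.artifacts.YOUR_PROJECT_ID.appspot.com/containers"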

Lost ability to edit code in AWS Lambda console

I have several Lambdas deployed to AWS, all created as single-file functions in the console. All was working fine until I flushed my caches and cookies in Chrome. Since then, the function code no longer shows up in the browser - any browser, I tried three. Also, all the Lambda functions now think they are zip-file based, so I cannot re-enter the code from my git repo. The functions still operate properly, I just cannot edit them.
All new functions I create are also not in console editing mode. Something general / global has changed, not specific to any one function.
What can cause this? And across all browsers?
Most importantly how can I fix this?
You can download your code as a zip file if you go to Actions > Export function and then Download deployment package. Maybe re-uploading the packages will fix your issue.
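If the console export is also misbehaving, the same thing can be done with the AWS CLI (the function name below is a placeholder):

# Print a pre-signed URL for the current deployment package, then download it
aws lambda get-function --function-name my-function --query 'Code.Location' --output text
curl -o my-function.zip "<url printed by the previous command>"
# Re-upload the package (or a corrected one) to the function
aws lambda update-function-code --function-name my-function --zip-file fileb://my-function.zip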

Serverless Lambda deployment in node8.10 not uploading any code

I have a Lambda function deployed with Serverless. It was deployed with the node6.10 runtime, so I decided to redeploy the service with the node8.10 runtime instead.
However, after making this redeploy I faced a strange issue where I could not invoke the function and could no longer deploy updates to it, as the file size was too large (60 MB+). I was able to resolve this by uninstalling and reinstalling serverless-plugin-optimize.
This solved the file size issue (now it's about 2 MB) but I still cannot invoke the function. Attempting to invoke it yields the following log in CloudWatch:
Unable to import module 'lambda/index': Error
at Function.Module._resolveFilename (module.js:547:15)
at Function.Module._load (module.js:474:25)
at Module.require (module.js:596:17)
at require (internal/module.js:11:18)
My expectation then was that the file path in my serverless.yml for the function was wrong, or that it was not exporting correctly.
./serverless.yml
functions:
  funcOne:
    handler: lambda/index.handler
./lambda/index.js
exports.handler = function (event, context) {
  // execution code
};
However, this does not appear to be the case. I know this because setting debug: true for serverless-plugin-optimize leaves behind the _optimize folder with my minified code in it. Somehow, despite it being present locally, it seems it is not making it into the upload to Lambda.
Viewing this in the AWS console I see the following:
A 2.6 MB upload listed in the Lambda directory
An error in the Lambda console code editor
The 60 MB file still listed in the S3 deployment bucket
I can't explain why I am getting this issue or what about switching to node8.10 would cause it. Outside of the serverless.yml file, none of the code has been changed from the working node6.10 version. Has anyone encountered this issue before, or know of anything that might fix it?
I have now successfully resolved this issue. I don't know why this was the case, but the problem seemed to be with deploying to node8.10 using an older version of Serverless (1.27.2). Upgrading to the latest version of Serverless (1.32.0) fixed it immediately.
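For reference, the upgrade itself is just the following (assuming Serverless is installed globally; drop -g if it's a project-local dev dependency):

# Upgrade the framework, then redeploy the service
npm install -g serverless@1.32.0
serverless deploy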

Jenkins-triggered CodeDeploy deployment is failing at the ApplicationStop step even though the same deployment group deployed via CodeDeploy directly runs successfully

When I trigger the deployment via Jenkins (CodeDeploy plugin), I get the following error -
No such file or directory - /opt/codedeploy-agent/deployment-root/edbe4bd2-3999-4820-b782-42d8aceb18e6/d-8C01LCBMG/deployment-archive/appspec.yml
However, if I trigger a deployment into the same deployment group via CodeDeploy directly, specifying the same zip in S3 (the one produced by the Jenkins trigger), this step passes.
What does this mean, and how do I find a workaround? I am currently integrating a few things, so I will need to deploy both via CodeDeploy directly and via Jenkins. I will run the CodeDeploy-triggered deployment when I need to verify that the smaller unit is functioning well.
Update
Just mentioning another point, in case it applies. I was previously using a different CodeDeploy "application" and "deployment group" on the same EC2 instances, deploying via Jenkins and via CodeDeploy directly as well. In order to fix some issue (failed deployments allegedly not being allowed to overwrite existing files), I had deleted everything inside the /opt/codedeploy-agent/deployment-root/<directory containing deployments> directory, trying to follow what was mentioned in this answer. Note, however, that I deleted only items inside that directory. Thereafter, I started getting this "appspec.yml not found in deployment archive" error. So I then created a new application and deployment group, and since then I have been working with those.
So, another point to consider is whether I should do some further cleanup, in case the Jenkins-triggered deployment is somehow still affected by those deletions (even though it refers to the new application and deployment group).
As part of its process, CodeDeploy needs to reference previous deployments for redeployment and deployment-rollback operations. These references are maintained outside of the deployment archive folders. If you delete those archives manually, as you indicate, a CodeDeploy install can become fatally corrupted: the remaining references to previous deployments are no longer correct or consistent, and deployments will fail.
The best thing at this point is to remove the old agent installation completely and re-install it. This will allow the CodeDeploy agent to work correctly again.
I have learned the hard way not to remove or modify any of the CodeDeploy install folders or files manually. Even if you change applications or deployment groups, CodeDeploy will figure it out itself, without the need for any manual cleanup.
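As a rough sketch, a full agent re-install on an Amazon Linux / RHEL-style host looks roughly like this (the region in the install URL is an example; see the CodeDeploy agent docs for your distro and region):

# Stop and remove the existing agent along with its working directory
sudo service codedeploy-agent stop
sudo yum erase codedeploy-agent -y
sudo rm -rf /opt/codedeploy-agent
# Re-install the agent from the regional bucket and start it again
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent start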
In order to do a deployment, the bundle needs to contain an appspec.yml file, and that file needs to be at the top level of the bundle. It seems the error message occurs because the host agent can't find the appspec.yml file.
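A quick way to sanity-check a revision built from a local project folder is to zip from inside the folder and inspect the archive listing (folder and file names here are placeholders):

cd my-app                      # project folder that contains appspec.yml at its top level
zip -r ../bundle.zip .         # zip the folder's contents so appspec.yml sits at the archive root
unzip -l ../bundle.zip | head  # the listing should show appspec.yml with no leading directory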