I have a Lambda function written in .NET 6 to which I have attached a Lambda layer, which is a zip file containing many JSON files and folders.
I want to read each JSON file in my lambda function.
When I do this I get an error saying Could not find a part of the path '/var/task/geojson-files/UK.json':
var filePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "geojson-files", "UK.json");
var geojson = await File.ReadAllTextAsync(filePath);
I also tried /opt/geojson-files/UK.json, but I was still unable to read the file.
Is there any way to read the file in a lambda function from a lambda layer?
Thanks to @jarmod for his suggestion in the comments to enumerate the contents of /opt.
I iterated over the /opt folder and found the root cause of my problem, i.e. why I was unable to read files from the Lambda layer.
I had zipped the contents of geojson-files from inside that folder, so when I enumerated /opt, the file showed up directly as /opt/UK.json. I then zipped the geojson-files folder itself, and now I can see the /opt/geojson-files/UK.json file.
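For anyone hitting the same wall, a quick way to see what a layer actually contains is to walk /opt from inside the function. A minimal sketch, in Python here to match the other examples in this thread (the .NET equivalent is Directory.EnumerateFiles("/opt", "*", SearchOption.AllDirectories)):

import os

def handler(event, context):
    # Print every file the attached layers actually expose under /opt.
    for root, _dirs, files in os.walk("/opt"):
        for name in files:
            print(os.path.join(root, name))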
Related
I have a bunch of settings and configs that I need AWS Lambda to be able to access. This is not a .py file, so I cannot just import it. In order to load the YAML, I need to specify a path to the file, so how do I do that in Lambda?
My understanding is that Lambda can only read/write to the /tmp folder. But I would like to include this config file either in a Lambda layer or as a file in the Lambda code package. Neither of those options puts the file in the /tmp folder, so they would not give me direct read/write access to it there.
Where can I put this YAML file and how can I reach it to read it during runtime?
Lambda has read access to its entire filesystem, including the code package (extracted under /var/task) and any layers (extracted under /opt); it only has write access to the /tmp folder. If you include the YAML file in a Lambda layer, or in the code package, your code will be able to read the file just fine.
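A minimal Python sketch under those assumptions (the config.yaml name is hypothetical, and it assumes PyYAML ships in the package or a layer):

import os
import yaml  # PyYAML, assumed to be bundled with the package or a layer

# A file in the code package lands under the task root (usually /var/task);
# a file shipped in a layer lands under /opt instead.
config_path = os.path.join(os.environ.get("LAMBDA_TASK_ROOT", "/var/task"), "config.yaml")

def handler(event, context):
    with open(config_path) as f:
        return yaml.safe_load(f)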
I'm implementing a serverless API, using:
API Gateway
Lambda
S3 bucket (if needed)
My flow is to:
Call a POST or PUT method with a binary zip file, uploading it to Lambda.
In Lambda: unzip the file.
In Lambda: run a specific script over the extracted files.
In Lambda: generate a new zip.
Return it to my desktop.
This flow is already implemented and works well with small files: 10 MB for uploading and 6 MB for downloading (these match the API Gateway and Lambda response payload limits, respectively).
But I'm running into issues when dealing with large files, which will be the case on many occasions. To solve this, I'm thinking about the following flow:
The target file gets uploaded to an S3 bucket.
A new event is generated and Lambda gets triggered.
Lambda internal tasks:
3.1 Lambda downloads the file from the S3 bucket.
3.2 Lambda generates the corresponding WPK package.
3.3 Lambda uploads the generated WPK package to S3.
3.4 Lambda returns a signed URL for the uploaded file as the response (sketched below).
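For step 3.4, a minimal boto3 sketch of generating the signed URL (bucket and key names are hypothetical):

import boto3

s3 = boto3.client("s3")

def make_download_url(bucket, key, expires=3600):
    # Time-limited URL the client can use to fetch the generated WPK package.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

# e.g. make_download_url("my-output-bucket", "packages/result.wpk")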
But my problem with this design is that it takes more than one request to complete. I want to do the whole process in a single request, passing the target zip file in it and getting the new zip back as the response.
Any ideas, please?
My Components and Flow Diagram would be:
Component and Flow Diagram
There are a couple of things you could do if you'd like to unzip large files while keeping a serverless approach:
Use Node.js to stream the zip file, unzipping it in a pipe and putting the content into a write stream piped back to S3.
Deploy your code as an AWS Glue job.
Upload the file to S3; AWS Lambda gets triggered, passes the file name as a key to the Glue job, and the rest is done there.
This way you have a serverless approach and code that does not cause memory issues while unzipping large files.
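A minimal sketch of the triggering Lambda for the Glue option (the Glue job name and argument names are hypothetical):

import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Fired by the S3 PUT event; forward the object location to the Glue job.
    record = event["Records"][0]
    glue.start_job_run(
        JobName="unzip-and-package",  # hypothetical Glue job name
        Arguments={
            "--bucket": record["s3"]["bucket"]["name"],
            "--key": record["s3"]["object"]["key"],
        },
    )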
Firstly, I am a newbie to AWS. I was able to edit my Lambda code inline, but I recently uploaded a 30 MB zip file to an S3 bucket and added it to my Lambda from S3, and now my Lambda inline editor doesn't open anymore, showing the following error:
"The deployment package of your Lambda function "LF2" is too large to
enable inline code editing. However, you can still invoke your function."
I tried deleting the zip file from the S3 bucket, hoping that the URL of the zip would become unreachable, the Lambda would lose the zip file, and I could edit the function again. But my Lambda's size still includes the 30 MB zip. I am unable to delete this zip and can't figure out a way to get rid of it and edit my Lambda code again.
Note: My Lambda code was written inline and is different from the zip file (which only contains Elasticsearch setup files that I uploaded for use in my code, since import elasticsearch wasn't working). I know there would have been a better way to do this than uploading the zip.
Yes, you can download the Lambda function. Go to the AWS console for the Lambda function, make sure you are in the Configuration view, then click Actions | Export function. This will allow you to download a ZIP file containing the Lambda function.
Note that once you upload a Lambda function via S3, it's copied by the Lambda service. There's no connection at that point back to the S3 object that you uploaded. One reason for this is that your Lambda function would break if you, accidentally or otherwise, deleted the file from S3.
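If you'd rather script it, the deployed package can also be fetched through the API; a minimal boto3 sketch using the function name from the question:

import boto3
import urllib.request

# get_function returns a short-lived presigned URL for the deployed package.
resp = boto3.client("lambda").get_function(FunctionName="LF2")
urllib.request.urlretrieve(resp["Code"]["Location"], "LF2.zip")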
I had this problem yesterday. I managed to find my code, but not the full code that had vanished from AWS Lambda. I wrote the code again, tested it, compressed it on my own machine, and tried to upload it to the same Lambda function under the same name.
While uploading it, Lambda gave me the option to choose between the remote file I was uploading and the local file it had saved previously. I opted for the local file and, boom, I got my code back as it was last saved.
So I suggest you try uploading a blank compressed zip file containing one file with the same name as the Lambda function. It should give you the option to choose between both files; pick the "local" file. That will take you to the inline editor where your code was.
I just ran into the same soup. It seems the upload replaced the previous index.js and its exported handler.
The question: using a Lambda function, is it possible to look through an S3 bucket with user folders for specific file names (e.g. Test1.txt and Test2.txt)? Inside each file is just a random number. Then, write a text file back into the folder the matched file came from, basically saying "Test1.txt and Test2.txt has been touched." If possible, in Python.
Yes! Use Amazon's AWS SDK. Here's an example for downloading a file from S3. The API for listing files and uploading files is pretty similar.
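A minimal boto3 sketch of the listing/writing loop described above (the bucket name is hypothetical):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-user-bucket"  # hypothetical bucket name

def handler(event, context):
    # Walk every object in the bucket, looking for the target file names.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith(("Test1.txt", "Test2.txt")):
                folder = key.rsplit("/", 1)[0]
                # Drop a marker file back into the same user's folder.
                s3.put_object(
                    Bucket=BUCKET,
                    Key=folder + "/touched.txt",
                    Body=b"Test1.txt and Test2.txt has been touched.",
                )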
I am using a script that does a binary merge on files within a folder in AWS S3 whenever a new file gets uploaded. I have configured a Lambda trigger on the specific bucket, on object PUT, to start merging the new file with the existing files in the same location. I have a scenario where I upload multiple files to the same folder at the same time, and I am trying to understand how Lambda merges files in this case. Can anyone help me understand whether Lambda drops some files and triggers the script only once, or whether every file creation raises an event so Lambda invokes the script to merge all files without dropping any?
Thanks