Cannot upload development package to Lambda - amazon-web-services

I am constantly getting this error when trying to upload my development package to Lambda from my Windows 7 Pro box:
--zip-file must be a zip file with fileb:// prefix.
I have googled and found very little help. I have tried with a full path, with quotes, without quotes, and with file:// instead of fileb://, all without any luck.
My publish Batch file:
del emailer.zip
cd emailer
"C:\Program Files\WinRAR\rar.exe" a -r emailer.zip
move /y emailer.zip ../emailer.zip
cd ..
aws lambda update-function-code --function-name emailer --zip-file fileb://emailer.zip
I have uploaded the development package here in case there is an issue with how I have constructed the package.
Why am I constantly getting this error? What do I need to do or research to resolve this issue?

Your file is not a valid zip file; you have created it through WinRAR, which has created another type of archive. When downloading your file:
fhenri@machine:~/Downloads$ file emailer.zip
emailer.zip: RAR archive data, v1d, os: Win32
When I create a zip file (with the zip CLI) I get:
fhenri@machine:~/Downloads$ file emailer_zip.zip
emailer_zip.zip: Zip archive data, at least v1.0 to extract
If you need to use WinRAR, check "use WinRAR command line to create zip archives" for how to produce a correct zip archive; otherwise just use WinZip or another zip program.
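For reference, a minimal sketch of a fixed publish batch file. It assumes your WinRAR version's WinRAR.exe (not rar.exe) supports the -afzip switch to force zip output; check WinRAR's command-line help, or swap in any other zip tool.
del emailer.zip
cd emailer
rem -afzip is assumed to force zip format; rar.exe on its own only produces RAR archives
"C:\Program Files\WinRAR\WinRAR.exe" a -afzip -r emailer.zip *
move /y emailer.zip ..\emailer.zip
cd ..
aws lambda update-function-code --function-name emailer --zip-file fileb://emailer.zip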

Related

How to give the local zip path in AWS CloudFormation YAML CodeUri?

I have exported a Lambda YAML from its export function using Download AWS SAM file.
I have also downloaded the code zip file from Download deployment package.
In the YAML file we need to give the CodeUri;
in the downloaded YAML it is just a dot (.).
So when I upload it in AWS CloudFormation it says:
'CodeUri' is not a valid S3 Uri of the form 's3://bucket/key' with
optional versionId query parameter.
I need to know whether there is a way to give the zip file in the CodeUri from a local file path rather than uploading it to S3.
I have tried with the zip file name I downloaded as well and still I get the same error.
You have to first run the package command. It may not work with the zip itself, so you may try with the unpacked source code.
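A minimal sketch of that packaging step (the bucket name and file names here are hypothetical): aws cloudformation package uploads the local code to S3 and writes out a copy of the template with CodeUri rewritten to the resulting S3 location, which you then deploy.
# upload local code to S3 and rewrite CodeUri in the output template
aws cloudformation package --template-file template.yaml --s3-bucket my-deployment-bucket --output-template-file packaged.yaml
# deploy the packaged template
aws cloudformation deploy --template-file packaged.yaml --stack-name my-lambda-stack --capabilities CAPABILITY_IAM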

Downloading s3 bucket to local directory but files not copying?

There are many, many examples of how to download a directory of files from an s3 bucket to a local directory.
aws s3 cp s3://<bucket>/<directory> /<path>/<to>/<local>/ --recursive
However, when I run this command from the AWS CLI I'm connected to, I see confirmation in the terminal like:
download: s3://mybucket/myfolder/data1.json to /my/local/dir/data1.json
download: s3://mybucket/myfolder/data2.json to /my/local/dir/data2.json
download: s3://mybucket/myfolder/data3.json to /my/local/dir/data3.json
...
But then I check /my/local/dir for the files, and my directory is empty. I've tried using the sync command instead, I've tried copying just a single file - nothing seems to work right now. In the past I did successfully run this command and downloaded the files as expected.
Why are my files not being copied now, despite seeing no errors?
For testing you can go to your /my/local/dir folder and execute the following command:
aws s3 sync s3://mybucket/myfolder .
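If that works, one likely cause is that the earlier copy landed relative to a different working directory than expected. A small sketch using the paths from the question, naming the destination explicitly so the working directory does not matter:
# change into the target directory, then sync into "."
cd /my/local/dir
aws s3 sync s3://mybucket/myfolder .
# or give the destination path explicitly
aws s3 sync s3://mybucket/myfolder /my/local/dir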

Errno 22 When downloading multiple files from S3 bucket "sub-folder"

I've been trying to use the AWS CLI to download all files from a sub-folder in AWS; however, after the first few files download, it fails to download the rest. I believe this is because it adds an extension to the filename and it then sees that as an invalid filepath.
I'm using the following command:
aws s3 cp s3://my_bucket/sub_folder /tmp/ --recursive
It gives me the following error for almost all of the files in the subfolder:
[Errno 22] Invalid argument: 'C:\\tmp\\2019-08-15T16:15:02.tif.deDBF2C2
I think this is because of the .deDBF2C2 extension it seems to be adding to the files when downloading though I don't know why it does. The filenames all end with .tif in the actual bucket.
Does anyone know what causes this?
Update: The command worked once I executed it from a linux machine. Seems to be specific to windows.
This is an oversight by AWS, which uses Windows-reserved characters in log file names! When you execute the command it will create all the directories; however, any logs with :: in the name fail to download.
Issue is discussed here: https://github.com/aws/aws-cli/issues/4543
Frustrated, I came up with a workaround: execute a dry run, which prints the expected log output, and redirect that to a text file, e.g.:
>aws s3 cp s3://config-bucket-7XXXXXXXXXXX3 c:\temp --recursive --dryrun > c:\temp\aScriptToDownloadFilesAndReplaceNames.txt
The output file is filled with AWS log entries that we can turn into AWS CLI commands:
(dryrun) download: s3://config-bucket-7XXXXXXXXXXX3/AWSLogs/7XXXXXXXXXXX3/Config/ap-southeast-2/2019/10/1/ConfigHistory/7XXXXXXXXXXX3_Config_ap-southeast-2_ConfigHistory_AWS::RDS::DBInstance_20191001T103223Z_20191001T103223Z_1.json.gz to \AWSLogs\7XXXXXXXXXXX3\Config\ap-southeast-2\2019\10\1\ConfigHistory\703014955993_Config_ap-southeast-2_ConfigHistory_AWS::RDS::DBInstance_20191001T103223Z_20191001T103223Z_1.json.gz
In Notepad++ or another text editor, replace the (dryrun) download: prefix with aws s3 cp.
You will then see lines containing the command aws s3 cp, the bucket file, and the local file path. We need to remove the :: in the local file path on the right side of the to:
aws s3 cp s3://config-bucket-7XXXXXXXXXXX3/AWSLogs/7XXXXXXXXXXX3/Config/ap-southeast-2/2019/10/1/ConfigHistory/7XXXXXXXXXXX3_Config_ap-southeast-2_ConfigHistory_AWS::RDS::DBInstance_20191001T103223Z_20191001T103223Z_1.json.gz to AWSLogs\7XXXXXXXXXXX3\Config\ap-southeast-2\2019\10\1\ConfigHistory\7XXXXXXXXXXX3_Config_ap-southeast-2_ConfigHistory_AWS::RDS::DBInstance_20191001T103223Z_20191001T103223Z_1.json.gz
We can replace the :: with - in the local paths only, not the S3 bucket paths, using the regex (.*)::, which matches up to the last occurrence of :: on each line:
Replacing with $1- and clicking 'Replace All' twice swaps the ::'s in the local paths for hyphens.
Next remove the to separator between the bucket path and the local path:
FIND: json.gz to AWSLogs
REPLACE: json.gz AWSLogs
Finally, select all the lines and copy/paste them into a command prompt to download all the files with reserved file characters!
UPDATE:
If you have WSL (Windows Subsystem for Linux) you should be able to download the files and then do a simple file rename, replacing the ::'s, before copying to the mounted Windows folder system.
I tried from my raspberry pi and it worked. Seems to only be an issue with Windows OS.
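A rough sketch of that WSL/Linux approach (the local paths are hypothetical): download into the Linux filesystem, rename the files containing ::, then copy the tree to the mounted Windows folder.
# download inside the Linux filesystem, where :: is a legal filename character
aws s3 cp s3://config-bucket-7XXXXXXXXXXX3 ~/config-logs --recursive
# replace :: with - in every downloaded file name
find ~/config-logs -depth -name '*::*' | while read -r f; do
  mv "$f" "$(dirname "$f")/$(basename "$f" | sed 's/::/-/g')"
done
# now the tree can be copied to the mounted Windows drive
cp -r ~/config-logs /mnt/c/temp/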

AWS Lambda layer has no execute permission

I created a Lambda layer for the Python runtime (3.6 and 3.7 compatible) that contains a bin executable (texlive).
But when I try to execute it through subprocess.run, it says that it has no execute permission!
How can I make it so this layer has execute permissions? I zipped the layer files on Windows 10 so I'm not sure how to add Linux execute permission.
Also, as far as I know when you unzip a file it "resets" the permissions, so if AWS is not setting the execute permissions when unzipping my layers, what can I do?
By the way, I'm uploading my layer via the aws console
I installed the WSL on Windows 10 and zipped up my layer using the zip executable from within Ubuntu:
zip -r importtime_wrapper_layer.zip .
It created a zip file that retained the 755 file permissions on my script.
I was able to view that the correct attributes were present using 7zip and the Lambda runtime was able to execute it.
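If you want to verify the stored permissions yourself before uploading, zipinfo (shipped with the unzip package) lists the Unix mode bits for each entry; the executables should show up as -rwxr-xr-x (755). For example:
# list the entries and their stored permissions
zipinfo importtime_wrapper_layer.zip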

How to import Spark packages in AWS Glue?

I would like to use the GraphFrames package; if I were to run pyspark locally, I would use the command:
~/hadoop/spark-2.3.1-bin-hadoop2.7/bin/pyspark --packages graphframes:graphframes:0.6.0-spark2.3-s_2.11
But how would I run an AWS Glue script with this package? I found nothing in the documentation...
You can provide a path to extra libraries packaged into zip archives located in s3.
Please check out this doc for more details
It's possible to use graphframes as follows:
Download the graphframes Python library package file, e.g. from here. Unzip the .tar.gz and then re-archive it to a .zip. Put it somewhere in S3 that your Glue job has access to.
When setting up your glue job:
Make sure that your Python Library Path references the zip file
For job parameters, you need {"--conf": "spark.jars.packages=graphframes:graphframes:0.6.0-spark2.3-s_2.11"}
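A minimal CLI sketch of that setup (job name, role, and S3 paths are hypothetical; the console's "Python library path" field corresponds to the --extra-py-files argument):
# create a Glue job that loads the graphframes zip and sets the Spark packages config
aws glue create-job \
  --name graphframes-job \
  --role MyGlueServiceRole \
  --command Name=glueetl,ScriptLocation=s3://my-bucket/scripts/job.py \
  --default-arguments '{
    "--extra-py-files": "s3://my-bucket/libs/graphframes.zip",
    "--conf": "spark.jars.packages=graphframes:graphframes:0.6.0-spark2.3-s_2.11"
  }'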
Everyone looking for an answer, please read this comment.
In order to use an external package in AWS Glue pySpark or Python-shell:
1)
Clone the repo from the following URL:
https://github.com/bhavintandel/py-packager/tree/master
git clone git@github.com:bhavintandel/py-packager.git
cd py-packager
2)
Add your required package under requirements.txt. For example:
pygeohash
Update the version and project name under setup.py. For example:
VERSION = "0.1.0"
PACKAGE_NAME = "dependencies"
3) Run the following "command1" to create a .zip package for pyspark, OR "command2" to create egg files for python-shell.
command1:
sudo make build_zip
Command2:
sudo make bdist_egg
The above commands will generate a package in the dist folder.
4) Finally, upload this package from the dist directory to an S3 bucket. Then go to the AWS Glue Job Console, edit the job, find the script libraries option, click the folder icon of "python library path", then select your S3 path.
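A hedged sketch of that upload step (the bucket name and the built file name are hypothetical; check the dist folder for the actual artifact name):
# copy the built package to a bucket the Glue job can read
aws s3 cp dist/dependencies-0.1.0.zip s3://my-glue-libs/dependencies-0.1.0.zip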
Finally, use it in your Glue script:
import pygeohash as pgh
Done!
Also set the --user-jars-first: "true" parameter in the Glue job.