AWS Lambda layer has no execute permission - amazon-web-services

I created a Lambda layer for the Python runtime (compatible with 3.6 and 3.7) that contains a binary executable (texlive).
But when I try to execute it through subprocess.run, it says that it has no execute permission!
How can I give this layer execute permissions? I zipped the layer files on Windows 10, so I'm not sure how to add the Linux execute permission.
Also, as far as I know, unzipping a file "resets" the permissions, so if AWS is not setting the execute permissions when unzipping my layers, what can I do?
By the way, I'm uploading my layer via the AWS console.

I installed WSL on Windows 10 and zipped up my layer using the zip executable from within Ubuntu:
zip -r importtime_wrapper_layer.zip .
It created a zip file that retained the 755 file permissions on my script.
I was able to view that the correct attributes were present using 7zip and the Lambda runtime was able to execute it.
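For reference, the executable bit lives in each zip entry's external attributes, so the layer zip can also be built programmatically with the permissions set explicitly. A minimal sketch using Python's zipfile module (the file names are placeholders, not taken from the answer above):

import stat
import zipfile

def add_executable(zf, src_path, arcname, mode=0o755):
    """Add a file to the archive and store its Unix permission bits."""
    info = zipfile.ZipInfo(arcname)
    info.create_system = 3                            # 3 = Unix, so the mode bits are honored
    info.external_attr = (stat.S_IFREG | mode) << 16  # upper 16 bits hold the Unix mode
    with open(src_path, "rb") as f:
        zf.writestr(info, f.read())

with zipfile.ZipFile("importtime_wrapper_layer.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    # Hypothetical layer layout: a texlive binary that must stay executable.
    add_executable(zf, "bin/texlive", "bin/texlive", 0o755)

A tool like 7zip (mentioned above) can then be used to confirm that the 755 attribute was written into the archive.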

Related

Why can't my GCP script/notebook find my file?

I have a working script that finds the data file when it is in the same directory as the script. This works both on my local machine and Google Colab.
When I try it on GCP, though, it cannot find the file. I tried three approaches:
PySpark Notebook:
Upload the .ipynb file, which includes a wget command. This downloads the file without error, but I am unsure where it saves it to, and the script cannot find the file either (I assume because I am telling it that the file is in the same directory, and presumably wget on GCP saves it somewhere else by default).
PySpark with bucket:
I did the same as the PySpark notebook above, but first I uploaded the dataset to the bucket and then used the two links provided in the file details when you click the file name inside the bucket on the console (neither worked). I would like to avoid this, though, as wget is much faster than downloading on my slow Wi-Fi and then re-uploading to the bucket through the console.
GCP SSH:
Create cluster
Access VM through SSH.
Upload .py file using the cog icon
wget the dataset and move both into the same folder
Run script using python gcp.py
Just gives me an error saying file not found.
Thanks.
As per your first and third approaches, if you are running PySpark code on Dataproc, irrespective of whether you use an .ipynb file or a .py file, please note the following points:
If you use the wget command to download the file, it will be downloaded into the current working directory where your code is executed.
When you try to access the file through PySpark code, it will look in HDFS by default. If you want to access the downloaded file from the current working directory, use the "file:///" URI with an absolute file path (see the PySpark sketch below).
If you want to access the file from HDFS, then you have to move the downloaded file to HDFS and then access it from there using an absolute HDFS file path. Please refer to the example below:
hadoop fs -put <local file_name> </HDFS/path/to/directory>
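To illustrate the two URI forms, here is a minimal PySpark sketch (file names and paths are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-example").getOrCreate()

# A plain path or hdfs:/// URI is resolved against HDFS, the default on Dataproc.
df_hdfs = spark.read.csv("hdfs:///user/myuser/data.csv", header=True)

# To read the file that wget left in the local working directory,
# use the file:/// scheme with an absolute local path.
df_local = spark.read.csv("file:///home/myuser/data.csv", header=True)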

Zip Go file using AWS Lambda Tool

I am trying to generate an exe file with this command on Windows 10:
go.exe get -u github.com/aws/aws-lambda-go/cmd/build-lambda-zip
The file comes back as linux_amd64/build-lambda-zip instead of build-lambda-zip.exe.
Has anyone experienced this and know what the fix is?
I am using the AWS docs here https://docs.aws.amazon.com/lambda/latest/dg/golang-package.html
If you want to create the binary, use the install command with the $GOOS variable overridden (it compiles and installs packages and dependencies):
GOOS=windows go install github.com/aws/aws-lambda-go/cmd/build-lambda-zip
The exe file will be stored in $GOBIN.
There is another way to access the AWS Lambda tools. I found the executable at:
%USERPROFILE%\dotnet\tools.store\amazon.lambda.tools\4.0.0\amazon.lambda.tools\4.0.0\tools\netcoreapp2.1\any\Resources\build-lambda-zip.exe
If it's not there, you can get it from AWS directly by running this command:
dotnet tool update -g Amazon.Lambda.Tools

How to copy a file from a GCS bucket to my local machine

I need to copy files from Google Cloud Storage to my local machine.
I tried this command in the terminal of a Compute Engine instance:
$sudo gsutil cp -r gs://mirror-bf /var/www/html/mydir
That is my directory on my local machine: /var/www/html/mydir.
I get this error:
CommandException: Destination URL must name a directory, bucket, or bucket
subdirectory for the multiple source form of the cp command.
Where is the mistake?
You must first create the directory /var/www/html/mydir.
Then, you must run the gsutil command on your local machine and not in the Google Cloud Shell. The Cloud Shell runs on a remote machine and can't deal directly with your local directories.
I had a similar problem and went through the painful process of having to figure it out too, so I thought I would provide my step-by-step solution (under Windows, hopefully similar for Unix users) and hope it helps others.
First of all (as many others have pointed out in various Stack Overflow threads), you have to run a local console (in admin mode) for this to work (i.e. do not use the Cloud Shell terminal).
Here are the steps:
Assuming you already have Python installed on your machine, you will then need to install the gsutil python package using pip from your console:
pip install gsutil
You will then be able to run the gsutil config from that same console:
gsutil config
A .boto file needs to be created; it is needed to make sure you have permission to access your storage.
Also note that you are now provided with a URL, which is needed in order to get the authorization code (prompted in the console).
Open a browser and paste this URL in, then:
Log in to your Google account (i.e. the account linked to your Google Cloud).
Google asks you to confirm you want to give access to GSUTIL. Click Allow.
You will then be given an authorization code, which you can copy and paste into your console.
Finally, you are asked for a project ID.
Get the project ID of interest from your Google Cloud Console.
In order to find these IDs, click on "My First Project".
You will then be provided with a list of all your projects and their IDs.
Paste that ID into your console, hit Enter, and there you are! You have now created your .boto file. This should be all you need to be able to play with your Cloud Storage.
Console output:
Boto config file "C:\Users\xxxx\.boto" created. If you need to use a proxy to access the Internet please see the instructions in that file.
You will then be able to copy your files and folders from the cloud to your PC using the following gsutil command:
gsutil -m cp -r gs://myCloudFolderOfInterest/ "D:\MyDestinationFolder"
Files from within "myCloudFolderOfInterest" should then get copied to the destination "MyDestinationFolder" (on your local computer).
gsutil -m cp -r gs://bucketname/ "C:\Users\test"
I put a "r" before file path, i.e., r"C:\Users\test" and got the same error. So I removed the "r" and it worked for me.
Try prefixing the destination with '.', as in ./var:
$sudo gsutil cp -r gs://mirror-bf ./var/www/html/mydir
Or maybe it is the problem below:
gsutil cp does not support copying special file types such as sockets, device files, named pipes, or any other non-standard files intended to represent an operating system resource. You should not run gsutil cp with sources that include such files (for example, recursively copying the root directory on Linux that includes /dev). If you do, gsutil cp may fail or hang.
Source: https://cloud.google.com/storage/docs/gsutil/commands/cp
The syntax that worked for me when downloading to a Mac was:
gsutil cp -r gs://bucketname dir Dropbox/directoryname
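If gsutil keeps getting in the way, the same copy can also be done from Python with the google-cloud-storage client library. A minimal sketch (project ID and destination are placeholders; it assumes you have already authenticated, for example with application default credentials):

import os
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client(project="my-project-id")

# Download every object from the bucket in the question to a local directory.
for blob in client.list_blobs("mirror-bf"):
    if blob.name.endswith("/"):
        continue  # skip "directory" placeholder objects
    destination = os.path.join("/var/www/html/mydir", blob.name)
    os.makedirs(os.path.dirname(destination), exist_ok=True)
    blob.download_to_filename(destination)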

Cannot upload development package to Lambda

I am constantly getting this error when trying to upload my development package to Lambda, on my Windows 7 Pro box:
--zip-file must be a zip file with fileb:// prefix.
I have googled and found very little help. I have tried a full path, with quotes, without quotes, and file instead of fileb, all without any luck.
My publish Batch file:
del emailer.zip
cd emailer
"C:\Program Files\WinRAR\rar.exe" a -r emailer.zip
move /y emailer.zip ../emailer.zip
cd ..
aws lambda update-function-code --function-name emailer --zip-file fileb://emailer.zip
I have uploaded the development package here in case there is an issue with how I have constructed the package.
Why am I constantly getting this error? what do I need to do/research to resolve this issue?
Your file is not a valid zip file; you created it through WinRAR, which created another type of archive.
When downloading your file:
fhenri#machine:~/Downloads$ file emailer.zip
emailer.zip: RAR archive data, v1d, os: Win32
When I create a zip file (with the zip CLI), I get:
fhenri#machine:~/Downloads$ file emailer_zip.zip
email_zip.zip: Zip archive data, at least v1.0 to extract
If you need to use WinRAR, you can check "use winrar command line to create zip archives" to create a correct zip archive; otherwise just use WinZip or another zip program.
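Since the batch file already shells out to an archiver, another option is to skip WinRAR entirely and build a genuine zip with Python's standard library. A minimal sketch (it assumes the same emailer/ folder layout as the batch file above):

import shutil

# Creates emailer.zip (a real Zip archive, not RAR) from the contents of emailer/.
shutil.make_archive("emailer", "zip", root_dir="emailer")

The resulting file can then be uploaded with the same aws lambda update-function-code --zip-file fileb://emailer.zip call.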

How to set up and use EC2 CLI on Mac?

I am stuck using the Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folder into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after cd'ing into /Users/Invictus/EC2.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer AWS Unified CLI Tools are much, much easier to set up. All you need is Python, which comes built in on every Mac.
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory, log off and log back in (or restart your machine), and see if it picks up the right path.
Instead of ec2-describe-images, can you run it as "./ec2-describe-images" - does that work? If not, can you check the permissions on that script?
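As an illustration of how much lighter the newer tooling is, the equivalent of ec2-describe-images -o amazon can be made with the boto3 Python SDK in a few lines. A sketch, not part of the original setup (it assumes credentials have already been configured, e.g. with aws configure, and the region is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.describe_images(Owners=["amazon"])

# Print a few of the returned images to confirm the call works.
for image in response["Images"][:10]:
    print(image["ImageId"], image.get("Name", ""))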