I am trying to generate an .exe file with this command on Windows 10:
go.exe get -u github.com/aws/aws-lambda-go/cmd/build-lambda-zip
The file comes back as linux_amd64/build-lambda-zip instead of build-lambda-zip.exe.
Has anyone experienced this and know what the fix is?
I am following the AWS docs here: https://docs.aws.amazon.com/lambda/latest/dg/golang-package.html
If you want to build the binary, use the go install command and override the $GOOS variable ("Compile and install packages and dependencies"):
GOOS=windows go install github.com/aws/aws-lambda-go/cmd/build-lambda-zip
The .exe file will be stored in $GOBIN.
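Note that on Go 1.17 and later, go get no longer builds and installs binaries, so you need go install with a version suffix. A minimal sketch from PowerShell, assuming a default Go setup:
$env:GOOS = "windows"   # make sure the target OS is Windows, not linux
go install github.com/aws/aws-lambda-go/cmd/build-lambda-zip@latest
# the binary lands in %USERPROFILE%\go\bin\build-lambda-zip.exe unless GOBIN is set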
There is another way to access the AWS Lambda tools.
I found the executable at:
%USERPROFILE%\dotnet\tools.store\amazon.lambda.tools\4.0.0\amazon.lambda.tools\4.0.0\tools\netcoreapp2.1\any\Resources\build-lambda-zip.exe
If it's not there, you can get it from AWS directly by running this command:
dotnet tool update -g Amazon.Lambda.Tools
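Once you have build-lambda-zip.exe by either route, it is used to package a compiled handler, roughly like this (file names are illustrative; the -o flag sets the output zip, per the AWS docs):
build-lambda-zip.exe -o main.zip main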
I have an EMR cluster (5.28.1) running in AWS, but I forgot to install some Python libraries as part of the bootstrap action. Now that the cluster is running, I was simply attempting to add a step via the EMR console. Here are my settings:
JAR: s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar
Main class: None
Arguments: s3://xxxx/install_python_libraries.sh
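(For reference, the same step expressed through the AWS CLI would look roughly like this; the cluster ID is a placeholder:)
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps Type=CUSTOM_JAR,Name=InstallPythonLibs,ActionOnFailure=CONTINUE,Jar=s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar,Args=[s3://xxxx/install_python_libraries.sh]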
Unfortunately, I get the following error.
Cannot run program "s3://xxxxx/install_python_libraries.sh" (in directory "."): error=2, No such file or directory
I am not sure what I am doing wrong. The shell script looks like this.
#!/bin/bash -xe
# Non-standard and non-Amazon Machine Image Python modules:
sudo pip-3.6 install boto3
sudo pip-3.6 install xmltodict
I also tried this by simply using 'command-runner.jar', but I get the same error. Can you please help me figure out the problem so I can do this via the console? I would like to install the libraries on all nodes - master and core.
Thanks
The issue is the xxx.sh file's EOL/carriage-return type.
In other words, if it uses Windows line endings ("\r\n"), it will not work and will return the "No such file or directory" error.
Convert it to Unix line endings ("\n") using something like Notepad++ and it will run fine.
(In Notepad++: Edit > EOL Conversion > Unix (LF), then save and try again.)
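If you'd rather fix the line endings from a shell instead of Notepad++, a quick sketch (assuming dos2unix or sed is available; the S3 path mirrors the question):
dos2unix install_python_libraries.sh
# or, without dos2unix, strip the trailing \r from each line:
sed -i 's/\r$//' install_python_libraries.sh
# then re-upload the fixed script:
aws s3 cp install_python_libraries.sh s3://xxxx/install_python_libraries.sh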
I created a Lambda layer for the Python runtime (3.6 and 3.7 compatible) that contains a bin executable (texlive).
But when I try to execute it through subprocess.run, it says that it has no execute permissions!
How can I make it so this layer has execute permissions? I zipped the layer files on Windows 10 so I'm not sure how to add Linux execute permission.
Also, as far as I know when you unzip a file it "resets" the permissions, so if AWS is not setting the execute permissions when unzipping my layers, what can I do?
By the way, I'm uploading my layer via the AWS console.
I installed WSL on Windows 10 and zipped up my layer using the zip executable from within Ubuntu:
zip -r importtime_wrapper_layer.zip .
It created a zip file that retained the 755 file permissions on my script.
I was able to verify that the correct attributes were present using 7-Zip, and the Lambda runtime was able to execute it.
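To spell that out, a minimal sketch from a WSL shell (file names illustrative), including a way to verify the stored permissions without 7-Zip:
chmod 755 bin/texlive                  # set the execute bit before zipping
zip -r importtime_wrapper_layer.zip .
zipinfo importtime_wrapper_layer.zip   # entries should show -rwxr-xr-x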
We have a notebook instance within SageMaker which contains many Jupyter Python scripts. I'd like to write a program which downloads these various scripts each day (i.e. so that I could back them up). Unfortunately I don't see any reference to this in the AWS CLI API.
Is this achievable?
It's not exactly what you want, but it looks like a VCS can fit your needs. You can use GitHub (if you already use it) or CodeCommit (free private repos). Details, and additional approaches such as syncing a target directory with an S3 bucket, are here: https://aws.amazon.com/blogs/machine-learning/how-to-use-common-workflows-on-amazon-sagemaker-notebook-instances/
Semi-automatic way:
conda install -y -c conda-forge zip
!zip -r -X folder.zip folder-to-zip
Then download that zipfile.
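Alternatively, if you want to script the backup rather than download by hand, something like this from the notebook instance's terminal could work (the bucket name is a placeholder; /home/ec2-user/SageMaker is where notebook files live on these instances):
aws s3 sync /home/ec2-user/SageMaker s3://my-backup-bucket/notebook-backup/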
First I installed stand-alone gsutil on Fedora 25, it ran nice for months.
Then I installed Cloud SDK, and my Google Cloud credentials have been broken ever since.
I don't need Cloud SDK after all. I just want to use gsutil again.
Is there a way to uninstall Cloud SDK and credentials from Linux?
Or maybe uninstall all Google Cloud products and reinstall the stand-alone gsutil?
To explain the likely reason this is happening:
When you install the Cloud SDK, it takes some steps to make sure that when you type gsutil from the shell, it resolves to the Cloud SDK version (depending on the installation method, it might create executable scripts in /usr/local/bin/, or put /path/to/cloud/sdk/bin at the front of your PATH environment variable). This Cloud SDK wrapper script for gsutil does some extra auth logic, loading an extra .boto file which contains credentials produced by running gcloud auth login. You can see this extra .boto file when running gsutil version -l:
$ gsutil version -l
[...]
using cloud sdk: True
config path(s): /home/USER/.boto, /home/USER/.config/gcloud/legacy_credentials/USER@gmail.com/.boto
[...]
It's likely that the auth credentials in that extra .boto file are overriding the credentials in your $HOME/.boto file.
How to use standalone gsutil again:
You'll need to ensure that the first gsutil your shell finds is the standalone version. This essentially means that the directory containing the standalone gsutil executable should come before the Cloud SDK directory in your PATH environment variable. You can do this by prepending it to your PATH, e.g. by adding something like this to the end of your .bashrc file:
if [ -d "/path/to/standalone/gsutil/directory" ]; then
PATH="/path/to/standalone/gsutil/directory:$PATH"
fi
After doing this, you can run this command to reload your .bashrc file and check the "using cloud sdk" value of your gsutil version info:
$ source "$HOME/.bashrc"; gsutil version -l
If this still shows that you're using the Cloud SDK version of gsutil, you might have an alias defined for gsutil - you can check for this by running:
$ type gsutil
If you still encounter auth issues when using the standalone version of gsutil, you'll need to generate new credentials:
$ gsutil config
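And if you want to go further and remove the Cloud SDK's credentials entirely, as the question asks, a hedged sketch (back up anything you need first; the paths are the defaults):
$ gcloud auth revoke --all     # revoke the credentials the Cloud SDK stored
$ rm -rf ~/.config/gcloud      # remove Cloud SDK configuration and credential cache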
I am stuck at using Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folders into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after changing into /Users/Invictus/EC2 with cd.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer AWS Unified CLI Tools are much, much easier to set up. All you need is Python, which comes built in on every Mac.
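A minimal sketch of that route (the describe-images call mirrors the EC2 command from the question):
$ sudo pip install awscli
$ aws configure                            # prompts for access key, secret key, region
$ aws ec2 describe-images --owners amazon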
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory, log off and log back in (or restart your machine), and see if it picks up the right path.
Instead of ec2-describe-images, can you run it as "./ec2-describe-images" - does that work? If not, can you check the permissions on that script?
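For that permissions check, something along these lines (paths taken from the question):
$ cd /Users/Invictus/EC2/bin
$ ls -l ec2-describe-images      # look for the x (execute) bit
$ chmod +x ec2-describe-images   # add it if missing
$ ./ec2-describe-images -o amazon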