Unable to run the AWS CLI in a Lambda custom runtime, getting the error:
aws command not found
python3 -m venv lambdaVirtualEnv
source lambdaVirtualEnv/bin/activate
pip3 install awscli
Copied the aws binary and the contents of site-packages into lambdaLayerDir.
Created a Lambda layer from the lambdaLayerDir.zip file.
function handler ()
{
    PATH=${PATH}:${LAMBDA_TASK_ROOT}
    echo $PATH
    EVENT_DATA=$1
    RESPONSE="{\"statusCode\": 200, \"body\": \"Hello from Lambda!\"}"
    echo $RESPONSE
    aws
}
Output:
* Connection #0 to host 127.0.0.1 left intact
/var/task/hello.sh: line 9: aws: command not found
END RequestId: b2225b95-c53c-4271-a664-873dc19528b4
REPORT RequestId: b2225b95-c53c-4271-a664-873dc19528b4 Init Duration: 33.70 ms Duration: 431.44 ms Billed Duration: 500 ms Memory Size: 128 MB Max Memory Used: 45 MB
RequestId: b2225b95-c53c-4271-a664-873dc19528b4 Error: Runtime exited with error: exit status 127
Runtime.ExitError
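For context, Lambda unpacks layer zips under /opt, not under ${LAMBDA_TASK_ROOT} (which is /var/task), so the PATH built above never reaches the layer's contents. A minimal sketch of a handler that would find a binary shipped at the top level of the layer zip, assuming the aws entry point really sits at the root of lambdaLayerDir:
function handler ()
{
    # layers are extracted to /opt, so add that to PATH instead of the task root
    PATH=${PATH}:/opt
    aws --version
}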
I am trying to launch the AWS Replication Agent on CentOS 8.3, and it always returns an error during the replication agent installation (python3 aws-replication-installer-init.py ......).
The output of the process shows me:
The installation of the AWS Replication Agent has started.
Identifying volumes for replication.
Identified volume for replication: /dev/sdb of size 7 GiB
Identified volume for replication: /dev/sda of size 11 GiB
All volumes for replication were successfully identified.
Downloading the AWS Replication Agent onto the source server... Finished.
Installing the AWS Replication Agent onto the source server...
Error: Failed Installing the AWS Replication Agent
Installation failed.
If I check aws_replication_agent_installer.log, I can see messages like:
make -C /lib/modules/4.18.0-348.2.1.el8_5.x86_64/build M=/tmp/tmp8mdbz3st/AgentDriver modules
.....................
retcode: 0
Build essentials returned with code None
--- Building software
running: 'which zypper'
retcode: 256
running: 'make'
retcode: 0
running: 'chmod 0770 ./aws-replication-driver-commander'
retcode: 0
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 0.
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 1.
............
Cannot insert module. Try 9.
Installation returned with code 2
Installation failed due to unspecified error:
stderr: sh: /var/lib/aws-replication-agent/stopAgent.sh: No such file or directory
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no apt-get in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
rmmod: ERROR: Module aws_replication_driver is not currently loaded
insmod: ERROR: could not insert module ./aws-replication-driver.ko: Required key not available
rmmod: ERROR: Module aws_replication_driver is not currently loaded
Any idea what is causing the error?
Running the command:
mokutil --disable-validation
will allow kernel modules to be changed (the next boot will ask you to confirm the change by entering characters of the password you set when running mokutil).
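The insmod error "Required key not available" is the classic symptom of Secure Boot rejecting an unsigned module. A minimal sketch of the full flow, assuming root access on the source server:
sudo mokutil --disable-validation   # prompts you to set a one-time password
sudo reboot                         # the MOK manager screen asks for characters of that password
mokutil --sb-state                  # should now report that validation is disabled
After that, re-run python3 aws-replication-installer-init.py.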
I am trying to test the newly added feature of running/invoking Lambda with a custom container image, so I am building a very simple image from the AWS python:3.8 base image as follows:
FROM public.ecr.aws/lambda/python:3.8
COPY myfunction.py ./
CMD ["myfunction.py"]
And here is myfunction.py
import json
import sys

def lambda_handler(event, context):
    print("Hello AWS!")
    print("event = {}".format(event))
    return {
        'statusCode': 200,
    }
My question is the following: after my build is done:
docker build --tag custom .
how can I now invoke my Lambda, given that I do not expose any web endpoints? Assume I am spinning up my custom container successfully, although the empty handler= in the log below is a little unsettling in terms of whether I have configured the handler appropriately:
▶ docker run -p 9000:8080 -it custom
INFO[0000] exec '/var/runtime/bootstrap' (cwd=/var/task, handler=)
A simple curl, of course, fails:
▶ curl -XGET http://localhost:9000
404 page not found
It turns out I have to invoke this extremely non-intuitive URL:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
However I am still getting this error
WARN[0149] Cannot list external agents error="open /opt/extensions: no such file or directory"
START RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0 Version: $LATEST
Traceback (most recent call last):
Handler 'py' missing on module 'myfunction'
END RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0
REPORT RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0 Init Duration: 1.08 ms Duration: 248.05 ms Billed Duration: 300 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
edit: Solved by changing the CMD from
CMD ["myfunction.py"]
to
CMD ["myfunction.lambda_handler"]
I'm using AWS CodeDeploy in order to deploy my ASP.NET application into an Auto Scaling group.
When deploying, I get this error: Script at specified location: application-start.bat failed with exit code 66.
From what I've seen, error code 66 is "The network resource type is not correct", which is very bizarre in this case...
My bundle contains an appspec.yml file like this:
version: 0.0
os: windows
files:
  - source: ./
    destination: c:\inetpub\wwwroot
hooks:
  ApplicationStop:
    - location: application-stop.bat
      timeout: 900
  ApplicationStart:
    - location: application-start.bat
      timeout: 900
And the two .bat files (application-stop / application-start) contain one line each:
iisreset /stop
iisreset /start
When I go to the EC2 instance to look at the AWS CodeDeploy logs, it's no clearer to me:
2016-04-04 08:58:42 ERROR [codedeploy-agent(2848)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Error during perform: InstanceAgent::Plugins::CodeDeployPlugin::ScriptError - Script at specified location: application-start.bat failed with exit code 66
C:/Windows/TEMP/ocr512.tmp/src/lib/instance_agent/plugins/codedeploy/hook_executor.rb:150:in 'execute_script'
C:/Windows/TEMP/ocr512.tmp/src/lib/instance_agent/plugins/codedeploy/hook_executor.rb:107:in 'block (2 levels) in execute'
Has anyone run into the same issue and found a way to fix it?
I am using openshift-ansible (https://github.com/openshift/openshift-ansible), partially customized for our needs. The part launching the instances was modified to set the group_id; nothing more was changed in it.
When creating an OpenShift master, all works fine. However, when creating two OpenShift nodes, I can see the two instances being created in the "Running instances" panel of the EC2 dashboard. The instances sit in the Initializing state for a few seconds and then automatically switch to "Shutting down".
Ansible, on its side, was still in the task of launching the instances. So my question is:
Is there a way to analyze the logs of AWS instances while new instances are being created?
Log of the last ansible task:
TASK: [Launch instance(s)] ****************************************************
REMOTE_MODULE ec2 region=eu-west-1 keypair=ggkey1-eu-west state=present instance_type=m3.large user_data='#cloud-config mounts: - [ xvdb ] - [ ephemeral0 ] write_files: - content: | DEVS=/dev/xvdb VG=docker_vg path: /etc/sysconfig/docker-storage-setup owner: root:root permissions: '"'"'0644'"'"' ' vpc_subnet_id=subnet-60cf1205 image=ami-33ba2a44 count=2
EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076 && echo $HOME/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076']
PUT /tmp/tmp4r8qve TO /root/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076/ec2
EXEC ['/bin/sh', '-c', u'LANG=C LC_CTYPE=C /usr/bin/env python2 /root/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076/ec2; rm -rf /root/.ansible/tmp/ansible-tmp-1441977401.88-262307796372076/ >/dev/null 2>&1']
failed: [localhost] => {"failed": true}
msg: wait for instances running timeout on Fri Sep 11 13:21:43 2015
$ ansible --version
ansible 1.9.2
configured module search path = None
$ uname -a
Linux ip-172-31-42-45 3.10.0-123.8.1.el7.x86_64 #1 SMP Mon Sep 22 19:06:58 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@ip-172-31-42-45:~/uha-rbox-spawner$
Thanks,
Is there a way to analyze the logs of AWS instances while new instances are being created?
You are looking for "get console output". You can see it in the AWS (HTTP) console, or you can fetch it from the awscli or the API of your choice.
"Get console output" is slightly confusing, since the AWS Console is also a "console". Think of it as "system logs" (as the Console labels it), or simply "what would show on a screen in a datacenter".
I'm trying to run a stand-alone Spark application on EC2 via the YARN command line. I'm submitting the following spark-submit script:
./bin/spark-submit --class PageRankGraphX --master yarn-cluster --properties-file spark-defaults.conf.2 --executor-memory 2G --total-executor-cores 5 ./SparkPageRank-assembly-1.0.jar s3://linkfilefull/full/links_small.txt s3://conansoutputbucket/smalloutput.txt 10 0.15 2
This is the output - there is no exception or error thrown, the job simply fails after running:
15/04/15 21:27:03 INFO yarn.Client: Application report from ASM:
application identifier: application_1429126831428_0027
appId: 27
clientToAMToken: null
appDiagnostics:
appMasterHost: ip-172-31-1-67.eu-west-1.compute.internal
appQueue: default
appMasterRpcPort: 0
appStartTime: 1429133214320
yarnAppState: RUNNING
distributedFinalState: UNDEFINED
appTrackingUrl: http://172.31.10.227:9046/proxy/application_1429126831428_0027/
appUser: hadoop
15/04/15 21:27:04 INFO yarn.Client: Application report from ASM:
application identifier: application_1429126831428_0027
appId: 27
clientToAMToken: null
appDiagnostics:
appMasterHost: ip-172-31-1-67.eu-west-1.compute.internal
appQueue: default
appMasterRpcPort: 0
appStartTime: 1429133214320
yarnAppState: FINISHED
distributedFinalState: FAILED
appTrackingUrl: http://172.31.10.227:9046/proxy/application_1429126831428_0027/
appUser: hadoop
Does anyone know what could be causing this, or how I could investigate? When I try to access the YARN logs, it says logs are disabled or not ready.
Check out Amazon's documentation on enabling access to the web UI of Hadoop. Once in the UI, you can check the stderr output for the application, where the exception will most likely be. As others mentioned, this log will also be available on S3.
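If log aggregation is enabled on the cluster (yarn.log-aggregation-enable in yarn-site.xml), the aggregated container logs can also be pulled from the master node's command line; a sketch using the application id from the report above:
yarn logs -applicationId application_1429126831428_0027
This dumps stdout/stderr for every container, including the driver running inside the application master, which is usually where the failing exception lands in yarn-cluster mode.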