Test an AWS Lambda locally using a custom container image - amazon-web-services

I am trying to test the newly added feature of running / invoking a Lambda function with a custom container image, so I am building a very simple image from the AWS python:3.8 base image as follows:
FROM public.ecr.aws/lambda/python:3.8
COPY myfunction.py ./
CMD ["myfunction.py"]
And here is myfunction.py
import json
import sys
def lambda_handler(event, context):
    print("Hello AWS!")
    print("event = {}".format(event))
    return {
        'statusCode': 200,
    }
My question is the following: after my build is done:
docker build --tag custom .
how can I now invoke my Lambda function, given that I do not expose any web endpoints? I am assuming the container itself spins up successfully, although the empty handler= in the startup log is a little unsettling in terms of whether I have configured the handler appropriately:
▶ docker run -p 9000:8080 -it custom
INFO[0000] exec '/var/runtime/bootstrap' (cwd=/var/task, handler=)
A simple curl of course fails
▶ curl -XGET http://localhost:9000
404 page not found

It turns out I have to invoke this extremely non-intuitive URL:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
However, I am still getting this error:
WARN[0149] Cannot list external agents error="open /opt/extensions: no such file or directory"
START RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0 Version: $LATEST
Traceback (most recent call last):
Handler 'py' missing on module 'myfunction'
END RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0
REPORT RequestId: f681b2ca-5e35-499d-a262-dd7bc53912f0 Init Duration: 1.08 ms Duration: 248.05 ms Billed Duration: 300 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
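The error makes sense once you see how a handler string is interpreted: everything before the last dot is the module, and the rest is the attribute looked up on it. A rough illustration of that split (my own sketch, not the actual Lambda runtime code):

```python
import importlib

# Illustration only (not the real Lambda runtime): a handler string such as
# "myfunction.lambda_handler" splits at the LAST dot, so "myfunction.py"
# is read as module "myfunction" and attribute "py" -- hence the
# "Handler 'py' missing on module 'myfunction'" message.
def resolve_handler(handler_string):
    module_name, _, attr = handler_string.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, attr)
```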
edit: Solved by changing the CMD from
CMD ["myfunction.py"]
to
CMD ["myfunction.lambda_handler"]
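With the handler fixed, the local invoke can also be scripted instead of using curl. A small sketch, assuming the container is running with -p 9000:8080 as above; the long path is the Runtime Interface Emulator's fixed invocation endpoint:

```python
import json
import urllib.request

# The documented invocation path exposed by the Runtime Interface Emulator
# bundled in the AWS base images; this helper just wraps the curl command.
RIE_PATH = "/2015-03-31/functions/function/invocations"

def invoke_local(event, host="localhost", port=9000):
    req = urllib.request.Request(
        "http://{}:{}{}".format(host, port, RIE_PATH),
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, `invoke_local({})` is equivalent to the curl invocation shown earlier.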

Related

Error of permission denied while running setting new container in python using docker client

I have this very simple Python script (run from PyCharm) that starts an 'nginx' server by pulling the image from Docker Hub; this is my code:
import docker
import requests
client = docker.from_env()
img = client.images.pull('nginx:latest')
client.containers.run(img, detach=True, ports={'80/tcp': 8080})
r = requests.get('http://localhost:8080')
print(r.status_code)
I am getting the following error:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))
When I run this code using 'ipython' from the terminal, I do not get any error and everything works as expected.
I tried to find a solution related to the network, with no success.
Try adding the current user to the docker group:
sudo groupadd docker
sudo usermod -aG docker $USER
More info at https://docs.docker.com/engine/install/linux-postinstall/
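If you want to confirm the cause from Python first, the daemon socket's permissions are what matter. A quick check (the path below is the default on Linux; adjust if your daemon is configured differently):

```python
import os

# docker.from_env() talks to the daemon over /var/run/docker.sock;
# "Permission denied" usually means the current user cannot read/write
# that socket, which membership in the docker group fixes.
def docker_socket_accessible(sock="/var/run/docker.sock"):
    return os.path.exists(sock) and os.access(sock, os.R_OK | os.W_OK)
```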

How to solve the error with deploying a model in aws sagemaker?

I have to deploy a custom Keras model in AWS SageMaker. I have created a notebook instance and I have the following files:
AmazonSagemaker-Codeset16
-ann
-nginx.conf
-predictor.py
-serve
-train.py
-wsgi.py
-Dockerfile
I now open the AWS terminal, build the Docker image, and push the image to the ECR repository. Then I open a new Jupyter Python notebook and try to fit and deploy the model. The training completes correctly, but while deploying I get the following error:
"Error hosting endpoint sagemaker-example-2019-10-25-06-11-22-366: Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check. Please check CloudWatch logs for this endpoint..."
When I check the logs, I find the following:
2019/11/11 11:53:32 [crit] 19#19: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 10.32.0.4, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "model.aws.local:8080"
and
Traceback (most recent call last):
  File "/usr/local/bin/serve", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/sagemaker_containers/cli/serve.py", line 19, in main
    server.start(env.ServingEnv().framework_module)
  File "/usr/local/lib/python2.7/dist-packages/sagemaker_containers/_server.py", line 107, in start
    module_app,
  File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
I tried deploying the same model with these files from my local computer and it deployed successfully, but from inside AWS I am facing this problem.
Here is my serve file code:
from __future__ import print_function
import multiprocessing
import os
import signal
import subprocess
import sys

cpu_count = multiprocessing.cpu_count()

model_server_timeout = os.environ.get('MODEL_SERVER_TIMEOUT', 60)
model_server_workers = int(os.environ.get('MODEL_SERVER_WORKERS', cpu_count))

def sigterm_handler(nginx_pid, gunicorn_pid):
    try:
        os.kill(nginx_pid, signal.SIGQUIT)
    except OSError:
        pass
    try:
        os.kill(gunicorn_pid, signal.SIGTERM)
    except OSError:
        pass
    sys.exit(0)

def start_server():
    print('Starting the inference server with {} workers.'.format(model_server_workers))

    # link the log streams to stdout/err so they will be logged to the container logs
    subprocess.check_call(['ln', '-sf', '/dev/stdout', '/var/log/nginx/access.log'])
    subprocess.check_call(['ln', '-sf', '/dev/stderr', '/var/log/nginx/error.log'])

    nginx = subprocess.Popen(['nginx', '-c', '/opt/ml/code/nginx.conf'])
    gunicorn = subprocess.Popen(['gunicorn',
                                 '--timeout', str(model_server_timeout),
                                 '-b', 'unix:/tmp/gunicorn.sock',
                                 '-w', str(model_server_workers),
                                 'wsgi:app'])

    signal.signal(signal.SIGTERM, lambda a, b: sigterm_handler(nginx.pid, gunicorn.pid))

    # If either subprocess exits, so do we.
    pids = set([nginx.pid, gunicorn.pid])
    while True:
        pid, _ = os.wait()
        if pid in pids:
            break

    sigterm_handler(nginx.pid, gunicorn.pid)
    print('Inference server exiting')

# The main routine just invokes the start function.
if __name__ == '__main__':
    start_server()
I deploy the model using the following:
predictor = classifier.deploy(1, 'ml.t2.medium', serializer=csv_serializer)
Kindly let me know what mistake I am making while deploying.
Have you considered using SageMaker Script Mode? It can be much simpler than dealing with low-level container and nginx details like you're doing here.
You only need to provide the keras script:
With Script Mode, you can use training scripts similar to those you would use outside SageMaker with SageMaker's prebuilt containers for various deep learning frameworks such as TensorFlow, PyTorch, and Apache MXNet.
https://github.com/aws-samples/amazon-sagemaker-script-mode/blob/master/tf-sentiment-script-mode/sentiment-analysis.ipynb
You should ensure that your container can respond to GET /ping requests: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html#your-algorithms-inference-algo-ping-requests
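A minimal WSGI app satisfying that health check could look like the sketch below (a hypothetical wsgi.py; the /ping and /invocations routes are the ones SageMaker requires, while the response bodies are placeholders to replace with real model code):

```python
# Bare-bones WSGI app: GET /ping must return 200 for the health check,
# and POST /invocations receives inference requests. No framework needed;
# gunicorn can serve this directly as "wsgi:app".
def app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    if path == "/ping":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"OK"]
    if path == "/invocations" and environ["REQUEST_METHOD"] == "POST":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"prediction placeholder"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not Found"]
```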
From the traceback, it looks like the server fails to start when the container is launched within SageMaker. I would look further down the stack trace to see why the server is failing to start.
You can also try running your container locally to debug any issues. SageMaker starts your container using the command 'docker run <image> serve', so you could run the same command and debug your container. https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html#your-algorithms-inference-code-run-image
It looks like gunicorn is not installed in the image; that is why you see unix:/tmp/gunicorn.sock failed (2: No such file or directory). You need to add pip install gunicorn and apt-get install nginx to your Dockerfile.
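For example, something along these lines in the Dockerfile (a sketch assuming a Debian/Ubuntu-based image; adjust package management to your base image):

```dockerfile
# Install nginx and gunicorn so the serve script can start both processes.
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
RUN pip install gunicorn
```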

How to use the AWS CLI binary in an AWS Lambda custom runtime (Shell)

Unable to run the AWS CLI in a Lambda custom runtime; getting the error:
aws command not found
python3 -m venv lambdaVirtualEnv
source lambdaVirtualEnv/bin/activate
pip3 install awscli
Copied the aws binary and the contents of site-packages to lambdaLayerDir.
Created a Lambda layer from the lambdaLayerDir.zip file.
function handler ()
{
    PATH=${PATH}:${LAMBDA_TASK_ROOT}
    echo $PATH
    EVENT_DATA=$1
    RESPONSE="{\"statusCode\": 200, \"body\": \"Hello from Lambda!\"}"
    echo $RESPONSE
    aws
}
Output:
* Connection #0 to host 127.0.0.1 left intact
/var/task/hello.sh: line 9: aws: command not found
END RequestId: b2225b95-c53c-4271-a664-873dc19528b4
REPORT RequestId: b2225b95-c53c-4271-a664-873dc19528b4 Init Duration: 33.70 ms Duration: 431.44 ms Billed Duration: 500 ms Memory Size: 128 MB Max Memory Used: 45 MB
RequestId: b2225b95-c53c-4271-a664-873dc19528b4 Error: Runtime exited with error: exit status 127
Runtime.ExitError
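One thing worth checking, sketched below with assumed paths: Lambda layers are extracted under /opt, not under LAMBDA_TASK_ROOT, so if the aws binary sits in a bin/ folder of the layer zip, it is /opt/bin that needs to go on PATH:

```shell
# Hypothetical layout: the layer zip contains bin/aws, so at runtime the
# binary lives at /opt/bin/aws. Extend PATH with the layer's bin directory
# instead of (or in addition to) LAMBDA_TASK_ROOT.
layer_bin="/opt/bin"
export PATH="$PATH:$layer_bin"
```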

LocalStack DynamoDB is not working

I am trying to test my AWS resources locally. I found a very nice Docker image which has almost all the services available and can be used for local testing. However, the DynamoDB service is not working, and I need it for my application.
I googled a lot, but I am not able to find the root cause. Below are my Docker container logs:
docker run -it -p 4567-4578:4567-4578 -p 8080:8080 localstack/localstack
2018-07-30T12:49:17:ERROR:localstack.services.generic_proxy: Error forwarding request: expected string or buffer
Traceback (most recent call last):
  File "/opt/code/localstack/localstack/services/generic_proxy.py", line 181, in forward
    path=path, data=data, headers=forward_headers)
  File "/opt/code/localstack/localstack/services/dynamodb/dynamodb_listener.py", line 35, in forward_request
TypeError: expected string or buffer
I think you are using the wrong port. The answer can be found in localstack's issue list: https://github.com/localstack/localstack/issues/675
LocalStack's DynamoDB GUI runs on port 4564. Run the following command; then you can access the GUI at localhost:4564/shell:
docker run -d -p 4569:4569 -p 4564:4564 localstack/localstack:latest
Connection code:
const dynamoose = require('dynamoose');
const AWS = require('aws-sdk');

dynamoose.local('http://localhost:4569');
dynamoose.AWS.config.update({
  region: 'us-east-1',
});

const Purchase = dynamoose.model('test', {
  test: {
    type: String,
    hashKey: true,
  }
}, {
  update: true,
});

Docker container clock out of sync with host (AWS SignatureDoesNotMatch: Signature expired)

I have a simple Node app which sends messages to AWS SQS. For local development I am providing the AWS SDK with region, queueUrl, accessKeyId, and secretAccessKey.
Everything works fine until I dockerise the app and run it as a container. Then whenever SQS wants to do something I get the following error:
{ SignatureDoesNotMatch: Signature expired: 20161211T132303Z is now earlier than 20161211T142227Z (20161211T143727Z - 15 min.)
If I add correctClockSkew: true it corrects the problem.
What is Docker doing that requires correctClockSkew: true, when running Node directly on macOS does not?
Node app
process.env.TZ = 'Europe/London';

const AWS = require('aws-sdk');

AWS.config.update({
  region: 'eu-west-1',
  correctClockSkew: true // this has to be set when running inside a docker container?
});

const sqs = new AWS.SQS({
  apiVersion: '2012-11-05',
});

sqs.sendMessage({
  QueueUrl: 'https://sqs.eu-west-1.amazonaws.com/522682236448/logback-paddle-prod-errors',
  MessageBody: 'HelloSQS',
}, (err, data) => {
  if (err) throw err;
});
Dockerfile
FROM node
RUN mkdir -p /usr/lib/app
WORKDIR /usr/lib/app
COPY app/ /usr/lib/app/
RUN npm install
CMD ["node", "index.js"]
docker run -d user/image
Edit
Originally I created this question because I kept getting AWS incorrect-time errors; now I am getting them with Elasticsearch too. Why is my container reliably out of sync with the host by about 15 minutes?
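For what it's worth, the offset that correctClockSkew compensates for can be derived from any service response's Date header. A rough sketch of the arithmetic (the helper name is mine, not an SDK API):

```python
import email.utils

# Sketch of clock-skew measurement: the service's Date header gives server
# time; the difference from local time is the skew an SDK would apply when
# signing subsequent requests.
def skew_seconds(date_header, local_epoch):
    server_epoch = email.utils.parsedate_to_datetime(date_header).timestamp()
    return local_epoch - server_epoch
```

A skew larger than the 15-minute signature window produces exactly the SignatureDoesNotMatch error quoted above.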
Docker runs inside a VM on Windows and macOS, and the clock of that VM can drift out of sync with your laptop's OS clock. There are quite a few solutions I've seen, mostly one-off commands, including:
docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i date -u $(date -u +%m%d%H%M%Y)
And from this answer there's:
docker-machine ssh default "sudo date -u $(date -u +%m%d%H%M%Y)"
The best solution I've seen for this is to run an ntp container in privileged mode so it can constantly adjust the time on your docker host:
docker run -d --restart unless-stopped --name ntp --privileged tutum/ntpd
See the docker hub repo for more details: https://hub.docker.com/r/tutum/ntpd/