I am building an AWS Lambda function from a Docker container, using this tutorial as a base for my project:
However, when I build the container and attempt to run it using SAM, this is the set of errors I get every time:
time="2021-02-19T21:43:03.87" level=error msg="Init failed" InvokeID= error="fork/exec /entry.sh: exec format error"
time="2021-02-19T21:43:03.87" level=error msg="INIT DONE failed: Runtime.InvalidEntrypoint"
FROM python-alpine
# Copy in the built dependencies
COPY --from=build-image . .
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
ADD aws-lambda-rie /usr/bin/aws-lambda-rie
COPY . .
COPY entry.sh /
COPY aws-lambda-rie /usr/bin/
RUN chmod 755 /usr/bin/aws-lambda-rie /entry.sh
ENTRYPOINT [ "/entry.sh" ]
CMD ["app.handler"]
Above is the last part of the Dockerfile; it packages everything created in the build-image stage earlier in the file.
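For context, an image built this way can also be exercised locally without SAM, directly against the Runtime Interface Emulator added above. A minimal sketch, assuming the image is tagged lambda-test and the handler accepts an empty JSON payload:
docker build -t lambda-test .
docker run -p 9000:8080 lambda-test
# in a second terminal, invoke the handler through the emulator's endpoint:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'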
The entry.sh file contents are below. The purpose of the file is to tell the container to run the AWS Lambda handler function named in the last line of the Dockerfile.
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
  exec /usr/bin/aws-lambda-rie /usr/local/bin/python3 -m awslambdaric $1
else
  exec /usr/local/bin/python3 -m awslambdaric $1
fi
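For what it's worth, two common causes of this exact "exec format error" are a script with no shebang line (the kernel then has no interpreter to hand the file to) and an image built for a different CPU architecture than the host. For comparison, a version of the script with an interpreter line added, otherwise unchanged:
#!/bin/sh
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
  exec /usr/bin/aws-lambda-rie /usr/local/bin/python3 -m awslambdaric $1
else
  exec /usr/local/bin/python3 -m awslambdaric $1
fi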
If anyone knows or has any ideas as to what the cause could be, it would be very much appreciated.
I have been playing around with AWS Batch, and I am having some trouble understanding why everything works when I build a Docker image on my local Windows machine and push it to ECR, while it doesn't work when I do the same from an Ubuntu EC2 instance.
What I show below is adapted from this tutorial.
The Dockerfile is very simple:
FROM python:3.6.10-alpine
RUN apk add --no-cache --upgrade bash
COPY ./ /usr/local/aws_batch_tutorial
RUN pip3 install -r /usr/local/aws_batch_tutorial/requirements.txt
WORKDIR /usr/local/aws_batch_tutorial
Where the local folder contains the following bash script (run_job.sh):
#!/bin/bash
# BASENAME is used in the error messages below; defining it here as the
# script's own name (assumed, since the original snippet never sets it).
BASENAME="${0##*/}"
error_exit () {
  echo "${BASENAME} - ${1}" >&2
  exit 1
}
################################################################################
###### Convert environment variables to command line arguments ########
# Pattern that matches option names in script.py's --help output
pat="--([^ ]+).+"
arg_list=""
while IFS= read -r line; do
  # Check if line contains a command line argument
  if [[ $line =~ $pat ]]; then
    E=${BASH_REMATCH[1]}
    # Check that a matching environment variable is declared
    if [[ ! ${!E} == "" ]]; then
      # Make sure argument isn't already included in argument list
      if [[ ! ${arg_list} =~ "--${E}=" ]]; then
        # Add to argument list
        arg_list="${arg_list} --${E}=${!E}"
      fi
    fi
  fi
done < <(python3 script.py --help)
################################################################################
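# Note: ${save_name} below is never set in this script; presumably it is
# supplied as an environment variable by the Batch job definition.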
python3 -u script.py ${arg_list} | tee "${save_name}.txt"
aws s3 cp "./${save_name}.p" "s3://bucket/${save_name}.p" || error_exit "Failed to upload results to s3 bucket."
aws s3 cp "./${save_name}.txt" "s3://bucket/logs/${save_name}.txt" || error_exit "Failed to upload logs to s3 bucket."
It also contains a requirements.txt file with three packages (awscli, boto3, botocore),
and a dummy Python script (script.py) that simply lists the files in an S3 bucket and saves the list in a file that is then uploaded to S3.
Both in my local Windows environment and in the EC2 instance I have set up my AWS credentials with aws configure, and in both cases I can successfully build the image, tag it, and push it to ECR.
The problem arises when I submit the job on AWS Batch, which should run the ECR container using the command ["./run_job.sh"]:
if AWS Batch uses the ECR image pushed from Windows, everything works fine
if it uses the image pushed from the EC2 Linux instance, the job fails, and the only info I can get is this:
Status reason: Task failed to start
I was wondering if anyone has any idea of what might be causing the error.
I think I fixed the problem.
The run_job.sh script in the Docker image has to have execute permission to be run by AWS Batch (but I think this is true in general).
For some reason, when the image is built on Windows, the script has this permission, but it doesn't when the image is built on Linux (the AWS EC2 Ubuntu instance).
I fixed the problem by adding the following line in the Dockerfile:
RUN chmod u+x run_job.sh
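If the image is built from a git checkout, the execute bit can also be fixed at the source, so any build host produces the same permissions. A sketch using standard chmod/git commands:
# set the bit on the Linux build host before building:
chmod +x run_job.sh
# or record it in git so every clone keeps it:
git update-index --chmod=+x run_job.sh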
I updated my Dockerfile to upgrade Ubuntu, but it started failing and I'm unsure why...
Dockerfile:
# using a digest for version 20.04 as there are multiple digests that use this tag
FROM ubuntu@sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
#install tools
#removed for clarity
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
my events from the pod:
Successfully assigned se-agents/agent-se-linux-5c9f647768-25p7v to aks-linag-56790600-vmss000002
Pulling image "compregistrynp.azurecr.io/agent-se-linux:25319"
Successfully pulled image "comregistrynp.azurecr.io/agent-se-linux:25319"
Created container agent-se-linux
Started container agent-se-linux
Back-off restarting failed container
When I check the error in the pod, I see the following message:
standard_init_linux.go:228: exec user process caused: no such file or directory
I'm not even sure where to look anymore. The only differences in the Dockerfile were the Ubuntu tag and one additional tool to install. I tried to deploy what was in Prod to Dev and it fails with the same error. I'm convinced there's something in my AKS...
So the issue was that someone on my team had modified the shell script and didn't set the end-of-line characters to LF.
I will be running a script to convert the file to Linux line endings to ensure this doesn't happen again in my pipeline!
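For reference, two standard ways to do that conversion, assuming the start.sh from the Dockerfile above:
# strip carriage returns in place:
sed -i 's/\r$//' start.sh
# or, if the dos2unix utility is installed:
dos2unix start.sh
A .gitattributes rule such as *.sh text eol=lf can also keep shell scripts LF-only at the repository level, so the fix survives future edits.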
I am working on a project where I need to read a text file from an S3 bucket using Boto3, and then dockerize my application. I implemented my code using Boto3 and it runs perfectly fine (note: it takes two arguments through argparse, with the switches -p and -n). But when I try the same using docker run:
PS **my Working Directory** > docker run --rm Image_name -p Argument1 -n Argument2
<class 'botocore.exceptions.ProfileNotFound'> Code.py 92
I searched a lot on this; my understanding is that the container is unable to locate the credentials file and config file stored in the .aws folder in my home directory.
What I Tried:
1. Path mounting as below:
PS **my Working Directory** > docker run --rm -it -v %userprofile%\.aws:/root/.aws amazon/aws-cli
docker: Error response from daemon: %!u(string=is not a valid Windows path)serprofile%!\(MISSING).aws.
See 'docker run --help'.
I don't understand what's wrong with the syntax. I tried manually substituting my user profile directory, C:/Users/Deepak, for %userprofile%.
Then, strangely, a WSL2 (backend) popup came up saying that containers on Windows may work poorly.
I am not sure what that means. Does it have any effect on Docker containers built in a Windows environment?
2. I moved my credentials and config files into my working directory as well and tried the following:
PS my Working Directory > docker run --rm -it -v ${PWD}:/root/.aws amazon/aws-cli Image_name -p Argument1 -n Argument2
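A likely culprit in attempt 1: %userprofile% is cmd.exe syntax, and PowerShell does not expand it, so Docker receives the literal string and rejects it as an invalid Windows path. In PowerShell the same variable is $env:USERPROFILE, so the mount would look like this (a sketch; Image_name and the arguments are the placeholders from above):
docker run --rm -v "$env:USERPROFILE\.aws:/root/.aws" Image_name -p Argument1 -n Argument2
Alternatively, the credentials can be passed as environment variables (docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY ...), so no host path is involved at all.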
I have a container builder step
steps:
- id: dockerbuild
  name: gcr.io/cloud-builders/docker
  entrypoint: 'bash'
  args:
  - -c
  - |
    docker build . -t test
images: ['gcr.io/project/test']
The Dockerfile used to create this test image has gsutil-specific commands like:
FROM gcr.io/cloud-builders/gcloud
RUN gsutil ls
When I submit a Docker build to the Container Builder service using
gcloud container builds submit --config cloudbuild.yml
I see the following error
You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
The command '/bin/sh -c gsutil ls' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
My question is: how do we use gcloud/gsutil commands inside the Dockerfile so that they can run inside a docker build step?
To invoke "gcloud commands." using the tool builder, you need Container Builder service account, because it executes your builds on your behalf.
Here in this GitHub there is an example for cloud-builders using the gcloud command:
Note : you have to specify $PROJECT_ID it's mandatory for your builder to work.
To do this, your Dockerfile either needs to start from a base image that has the cloud SDK installed already (like FROM gcr.io/cloud-builders/gcloud) or you would need to install it. Here's a Dockerfile that installs it: https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/gcloud/Dockerfile.slim
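A minimal sketch of that approach, assuming the project id is forwarded to the Docker build as a build argument (the ARG name and image tag are illustrative):
# Dockerfile
FROM gcr.io/cloud-builders/gcloud
ARG PROJECT_ID
RUN gcloud config set project "$PROJECT_ID" && gsutil ls
# cloudbuild.yml - $PROJECT_ID is a built-in Cloud Build substitution
steps:
- id: dockerbuild
  name: gcr.io/cloud-builders/docker
  args: ['build', '--build-arg', 'PROJECT_ID=$PROJECT_ID', '-t', 'test', '.']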
I wish I could use conditionals in my .ebextensions configuration, but I don't know how. My current case is this:
Part of my .ebextensions configuration creates a folder. The folder must only be created once, because if I deploy the app a second time or more, I get an error saying "the folder already exists".
So I need a conditional: if the folder already exists, the command that creates it should not run again.
If anyone has any insight or direction on how this can be achieved, I would greatly appreciate it. Thank you!
The .ebextensions config files allow conditional command execution by using the test: directive on a command. Then the command only runs if the test is true (returns 0).
Example .ebextensions/create_dir.config file:
commands:
  01_create_dir:
    test: test ! -d "${DIR}"
    command: mkdir "${DIR}"
Another example (actually tested on EB) to conditionally run a script if a directory is not there:
commands:
  01_install_foo:
    test: test ! -d /home/ec2-user/foo
    command: "/home/ec2-user/install-foo.sh"
    cwd: "/home/ec2-user/"
The sparse documentation from AWS is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-commands
PS.
If you just need to conditionally create the directory, you can do it without the test: directive by using mkdir's -p option, which succeeds whether or not the directory already exists:
commands:
  01_create_dir:
    command: mkdir -p "${DIR}"
I think that the only way to do it is with shell conditions:
commands:
  make-directory:
    command: |
      if [ ! -d "${DIR}" ]; then
        mkdir "${DIR}"
      fi
See a bigger example in jcabi-beanstalk-maven-plugin.