Docker with Serverless - files not getting packaged to container

I have a Serverless application using LocalStack that I am trying to get fully running via Docker.
I have a docker-compose file that starts LocalStack for me:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to LocalStack using sls deploy, everything works as expected. However, I want Docker to run everything for me: I would like a single Docker command that starts LocalStack and deploys my service to it.
I have added a Dockerfile to my project with the following contents:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls","deploy", "--host", "0.0.0.0" ]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but I am receiving the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. I have logged into the Docker container and cannot see the app that I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks

Execute the pwd command inside the container while running it. Try:
docker run -it serverless/docker pwd
The error shows that sls cannot find the Serverless config file in the container's current working directory. Either copy your project (including the config file) into the image's working directory (add a COPY instruction to the Dockerfile), or copy the config file to a specific location in the container and pass --config in CMD (sls deploy --config <path>).
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory
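A rough sketch of the first option, assuming the serverless.yml sits at the project root and using a hypothetical /app working directory (the npm install step only applies if the project has its own dependencies):
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
    npm install -g serverless-localstack;
# Copy the project (serverless.yml, handler code, package.json, ...) into the image
# so that sls can find its config when the container starts.
WORKDIR /app
COPY . /app
RUN npm install
EXPOSE 3000
CMD ["sls", "deploy"]
Note that deploying from inside this container to the localstack service also assumes the Serverless endpoints point at that container (for example via the serverless-localstack plugin) rather than at localhost.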

Be sure that you have Serverless installed.
Once installed, create a service:
% sls create --template aws-nodejs --path myService
cd into the folder containing the serverless.yml file:
% cd myService
This will deploy the function to AWS Lambda:
% sls deploy
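Since the original question targets LocalStack rather than real AWS, here is a minimal serverless.yml sketch using the serverless-localstack plugin; the service name, handler, stage, and host are assumptions and would need to match your project and your docker-compose service name:
service: my-service

plugins:
  - serverless-localstack

provider:
  name: aws
  runtime: nodejs16.x
  region: us-east-1

custom:
  localstack:
    stages:
      - local
    # Point the plugin at the LocalStack edge endpoint exposed by docker-compose.
    host: http://localstack
    edgePort: 4566

functions:
  hello:
    handler: handler.hello
With a config like this, sls deploy --stage local should target LocalStack instead of real AWS.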

Related

How to use the freshest code on docker-compose avoiding downtime in AWS CodeDeploy?

I have a Next.js app deployed with docker-compose on AWS CodeDeploy. I set up a staging environment so that every time a developer pushes a new commit to the staging branch, it triggers a new deployment on CodeDeploy.
The application's appspec.yml has one script to start the application with docker-compose build and docker-compose up.
I thought that passing the --no-cache flag when invoking the build would be enough to start the container from the freshest version of the code. However, we are seeing that new deployments succeed but the changes are not reflected in the application.
How can I make sure that every deployment creates a new container from the freshest code avoiding any downtime?
version: 0.0
os: linux
files:
  - source: .
    destination: /home/ec2-user/app/
hooks:
  ApplicationStart:
    - location: scripts/run.sh
      timeout: 300
      runas: root
scripts/run.sh:
#!/bin/bash
cd /home/ec2-user/app
docker-compose build --no-cache
docker-compose up -d
It should actually run a fresh container. However, you can try to remove unused data from your Docker system and also remove all volumes before you build a new image.
#!/bin/bash
cd /home/ec2-user/app &&
docker volume prune -f &&
docker system prune -f
docker-compose build --no-cache
docker-compose up -d
Another option is to check the running containers with docker ps to verify that a new container has been spun up.
You can also open a shell inside the running container and verify that your new changes are present (it's a longer process, but it is useful):
docker exec -it <container-name> sh
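As an additional sketch (not part of the original answer), explicitly recreating the containers after the rebuild can help when Compose decides the existing containers are still up to date; the paths match the appspec above:
#!/bin/bash
cd /home/ec2-user/app &&
docker-compose build --no-cache &&
# --force-recreate replaces the running containers even if Compose
# thinks their configuration has not changed.
docker-compose up -d --force-recreate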

AWS Cloudwatch Agent in a docker container

I am trying to set up the Amazon CloudWatch Agent in Docker as a container. This is an on-premise installation, so it's running locally, not inside AWS Kubernetes or anything of the sort.
I've set up a basic Dockerfile, an agent.json, and an .aws/ folder for credentials, and I'm using docker-compose build to build and then launch it, but I am running into constant problems because the container does not contain or run systemctl, so I cannot start the service using AWS's own documented command:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
This fails with the following error when I try to run the container:
cloudwatch_1 | /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl: line 262: systemctl: command not found
cloudwatch_1 | unknown init system
I've tried running /start-amazon-cloudwatch-agent inside /bin as well, but had no luck. There is no documentation on this.
Basically the issue is: how can I run this as a service or a foreground process so the container stays up? Does anyone have any clues? Below is my code:
Dockerfile
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
CMD /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
agent.json
{
  "agent": {
    "metrics_collection_interval": 60,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
    "debug": true
  }
}
The .aws/ folder contains config and credentials, but I never got far enough for the agent to actually try to make a connection.
Just use the official image (docker pull amazon/cloudwatch-agent); it will handle all of this for you.
If you insist on using your own image, try the following:
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json
ENV RUN_IN_CONTAINER=True
ENTRYPOINT ["/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"]
Alternatively, use the official AWS Docker image. Here is an example docker-compose file:
version: "3.8"
services:
agent:
image: amazon/cloudwatch-agent:1.247350.0b251814
volumes:
- ./config/log-collect.json:/opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json # agent config
- ./aws:/root/.aws # required for authentication
- ./log:/log # sample log
- ./etc:/opt/aws/amazon-cloudwatch-agent/etc # for debugging the config of AWS of container
From the config above, only the first two volume mounts are required.
Volumes 3 and 4 are for debugging purposes.
If you are interested in what each volume does, you can read more at https://medium.com/@gusdecool/setup-aws-cloudwatch-agent-on-premise-server-part-1-31700e81ab8
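For reference, a minimal sketch of what config/log-collect.json could contain to ship a single log file; the file path, log group, and stream names are placeholders, not from the original answer:
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/log/sample.log",
            "log_group_name": "onpremise-sample",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  }
}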

Why isn't Kaniko able to push multi-stage Docker Image?

Building the following Dockerfile on GitLab CI using Kaniko results in the error: error pushing image: failed to push to destination eu.gcr.io/stritzke-enterprises/eliah-speech-server:latest: Get https://eu.gcr.io/...: exit status 1
If I remove the first FROM, RUN and COPY --from statements from the Dockerfile, the Docker image is built and pushed as expected. If I execute the Kaniko build using Docker on my local machine, everything works as expected. I execute other Kaniko builds and pushes on the same GitLab CI runner with the same GCE service account credentials.
What is going wrong with the GitLab CI based Kaniko build?
Dockerfile
FROM alpine:latest as alpine
RUN apk add -U --no-cache ca-certificates
FROM scratch
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY binaries/speech-server /speech-server
EXPOSE 8080
ENTRYPOINT ["/speech-server"]
CMD ["serve", "-t", "$GOOGLE_ACCESS_TOKEN"]
GitLab CI build stage
buildDockerImage:
  stage: buildImage
  dependencies:
    - build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    GOOGLE_APPLICATION_CREDENTIALS: /secret.json
  script:
    - echo "$GCR_SERVICE_ACCOUNT_KEY" > /secret.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $DOCKER_IMAGE:latest -v debug
  only:
    - branches
  except:
    - master
As tdensmore pointed out, this was most likely an authentication issue.
So for everyone who ends up here: the following Dockerfile and Kaniko call work just fine.
FROM ubuntu:latest as ubuntu
RUN echo "Foo" > /foo.txt
FROM ubuntu:latest
COPY --from=ubuntu /foo.txt /
CMD ["/bin/cat", "/foo.txt"]
The Dockerfile can be built by running
docker run -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest --context /workspace --no-push
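For the push case, here is a sketch of running the same local build while authenticating to GCR with a service-account key; the key path and destination are assumptions, and Kaniko reads GOOGLE_APPLICATION_CREDENTIALS for gcr.io pushes:
docker run \
  -v $(pwd):/workspace \
  -v $(pwd)/gcr-service-account.json:/kaniko/secret.json \
  -e GOOGLE_APPLICATION_CREDENTIALS=/kaniko/secret.json \
  gcr.io/kaniko-project/executor:latest \
  --context /workspace \
  --destination eu.gcr.io/<project>/<image>:latest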

Elastic Beanstalk - running npm install and webpack on every deployment of Django

I'm trying to use Elastic Beanstalk to deploy my Django server.
My problem is that part of my deployment process is to run npm install from my package.json and then execute webpack (npx webpack ..... --output main.js).
How can I do that while maintaining an easy deployment process (eb deploy) and without committing main.js to the repository?
To do this, you'll probably need ebextensions to configure your Elastic Beanstalk environment (see the .ebextensions section of the Elastic Beanstalk documentation).
I recently deployed my Symfony app on Elastic Beanstalk, which needed Yarn to run webpack.
To do this, I created one .config file with the commands to install Yarn and another .config file to run Yarn on each deployment. All .config files go in the .ebextensions directory at the root of the project.
commands:
  01_install_node:
    command: |
      sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
      sudo yum -y install nodejs
  02_install_yarn:
    command: |
      sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
      sudo yum -y install yarn
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted.
container_commands:
  02_run_yarn:
    command: |
      yarn install
      yarn run encore production
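Adapted to the original question (npm install plus webpack instead of Yarn and Encore), a sketch of an equivalent .ebextensions config could look like this; the webpack arguments are placeholders for whatever the project actually uses:
container_commands:
  01_npm_install:
    command: |
      npm install
  02_webpack_build:
    # Replace with the project's actual webpack invocation and output settings.
    command: |
      npx webpack --mode production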

AWS CodeBuild - Unable to find DockerFile during build

I started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
It appears during the BUILD stage.
Project details:
An S3 bucket is used as the source.
The ZIP file stored in the respective S3 bucket contains buildspec.yml, package.json, a sample *.js file and DockerFile.
aws/codebuild/docker:1.12.1 is used as the build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
The buildspec and DockerFile are attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was related to the Dockerfile name.
It was not accepting DockerFile (with a capital F; strangely, that was working locally), but Dockerfile (with a lower-case f) worked perfectly.
Can you validate that the Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre_build phase in your buildspec (even before the ECR login).
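A sketch of that debugging step applied to the buildspec above; the ls line is the only addition:
phases:
  pre_build:
    commands:
      - ls -altr
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)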