is docker-compose.yml not supported in AWS Elastic Beanstalk? - amazon-web-services

In my root directory, I have my docker-compose.yml.
$ ls
returns:
build cmd docker-compose.yml exp go.mod go.sum LICENSE media pkg README.md
In the same directory, I ran:
$ eb init -p docker infogrid
$ eb create infogridEnv
However, this gave me an error:
Instance deployment: Both 'Dockerfile' and 'Dockerrun.aws.json' are missing in your source bundle. Include at least one of them. The deployment failed.
The fact that the error does not even mention docker-compose.yml as a missing file makes me think it does not support docker-compose. This contradicts the main documentation, which explicitly shows an example with docker-compose.yml.

It may be that you are using the older "Amazon AMI" platform. Your environment should be the newer "Docker running on 64bit Amazon Linux 2" platform; only then do you get docker-compose.yml support.
Source: https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/docker-multicontainer-migration.html
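For reference, re-initializing against the Amazon Linux 2 Docker platform might look like the following sketch (the exact platform string varies by region and version; check it with `eb platform list` first):

```shell
# Sketch: pick the AL2 Docker platform explicitly so docker-compose.yml is used.
eb init -p "Docker running on 64bit Amazon Linux 2" infogrid
eb create infogridEnv
```

With an AL2 Docker environment, Elastic Beanstalk looks for docker-compose.yml in the source bundle instead of requiring a Dockerfile or Dockerrun.aws.json.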

Related

Docker with Serverless- files not getting packaged to container

I have a Serverless application using LocalStack that I am trying to get fully running via Docker.
I have a docker-compose file that starts LocalStack for me:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to LocalStack using sls deploy, everything works as expected. However, I want Docker to run everything for me, so that a single Docker command starts LocalStack and deploys my service to it.
I have added a Dockerfile to my project and have added this
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls","deploy", "--host", "0.0.0.0" ]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but I receive the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. I have logged into the Docker container and cannot see the app I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while it is running. Try:
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the current working directory (copy it in via the Dockerfile) or copy it to a specific location in the container and pass --config in CMD (sls deploy --config)
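A minimal sketch of the fix described above, assuming serverless.yml sits at the project root (the WORKDIR path /app is an assumption):

```dockerfile
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless serverless-localstack
WORKDIR /app
# Copy the project (including serverless.yml) into the image so sls can find it
COPY . /app
EXPOSE 3000
CMD ["sls", "deploy", "--host", "0.0.0.0"]
```

The key addition is the COPY step: without it, the container has the Serverless CLI installed but none of the project files, which is exactly what the error message describes.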
This command can only be run in a Serverless service directory. Make
sure to reference a valid config file in the current working directory
Be sure that you have serverless installed
Once installed create a service
% sls create --template aws-nodejs --path myService
cd to folder with the file, serverless.yml
% cd myService
This will deploy the function to AWS Lambda
% sls deploy

The EB CLI cannot find Dockerfile or the Dockerrun.aws.json file in the application root directory

But the Dockerfile is in fact present in the application root directory???
Nvm, the error is misleading...
I have to run
eb init -p docker application-name
in the same directory first, so that the .elasticbeanstalk directory can be generated.

Curl command doesn't work in config file on AWS

I have a Django web application that is deployed to AWS elastic beanstalk (Python 3.7 running on 64bit Amazon Linux 2/3.1.3). I am trying to run the following config file
files:
  "/usr/local/bin/cron_tab.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      exec &>> /tmp/cron_tab_log.txt
      date > /tmp/date
      source /var/app/venv/staging-LQM1lest/bin/activate
      cd /var/app/current
      python manage.py crontab add
      exit 0
container_commands:
  cron_tab:
    command: "curl /usr/local/bin/cron_tab.sh | bash"
This file is placed in the .ebextensions folder. All other config files work properly, but this one does not. I have also tried to run the container_commands code manually over SSH, and it gives the output below.
curl: (3) <url> malformed
I also checked the /tmp folder, but there is no cron_tab_log.txt. I checked /usr/local/bin, and cron_tab.sh is located there.
I just want django-crontab to run after the deploy, and it doesn't work. How can I handle this issue?
curl is used for web URL calls, not for executing a script. I think you need to change the last line in your config file to:
command: "sudo /usr/local/bin/cron_tab.sh"
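In context, the corrected container_commands block would look like this (the files: section stays exactly as it was):

```yaml
container_commands:
  cron_tab:
    command: "sudo /usr/local/bin/cron_tab.sh"
```

Since the script already has mode 000755, it can be invoked directly; piping it through curl fails because curl expects a URL, hence the "(3) <url> malformed" error.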

Installing Docker during AWS CodeBuild

When running a bash script during CodeBuild, I get this error:
./scripts/test.sh: line 95: docker: command not found
However, I've made sure to install docker at the start of the script using:
curl -sSL https://get.docker.com/ | sh
apt-get install -y docker-ce docker-compose
But this results in the following error:
Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'docker-ce' has no installation candidate
Any ideas on how to get docker working during CodeBuild?
There are a few different options for this in CodeBuild:
You can use CodeBuild-provided images, which already have Docker installed on them. To use any of these images, select privileged mode when creating the CodeBuild project.
You can enable Docker in a custom image (an image not managed by CodeBuild, e.g. hosted in your ECR repo or on public Docker Hub) when configuring the CodeBuild project. Select privileged mode in your project settings. Instructions here: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html
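For the first option, a minimal buildspec might look like this sketch (the image tag and commands are illustrative; privileged mode itself is enabled in the project settings, not in the buildspec):

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      - docker --version        # confirms the provided image ships with Docker
  build:
    commands:
      - docker build -t myapp . # works once privileged mode is enabled
```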

Docker & Amazon Beanstalk - Deploy an Angular application

I am trying to deploy a dist folder that is generated with versioning by Gulp using a Dockerfile and with Amazon EB.
This fails when I run eb deploy with:
COPY dist /var/www/html dist: no such file or directory. Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Is this because the dist directory is not under source control? If so, what is the best way to transfer the dist directory to EB while still using my Dockerfile to deploy and configure the application?
Below is my Dockerfile:
FROM nimmis/apache-php5
COPY dist /var/www/html
WORKDIR /var/www/html
EXPOSE 80
If you really want the dist files in your Docker image, then install Gulp and run the command to generate the dist folder within the Dockerfile.
See the RUN instruction for Dockerfiles.
I think my misunderstanding of eb deploy was the issue. The answer is to zip the dist directory, along with the other files, using a bash script and create my own artifact referenced from the config.yml file, e.g.:
dist (application files)
config (php.ini and 000-default.conf)
Dockerfile
Then reference the artifact in config.yml:
deploy:
  artifact: dist.zip
I was then able to write a bash script to create a version number label and then deploy to Beanstalk:
eb deploy --staged --label {version_number}