I am trying to run a simple Dockerized Python script with AWS Batch.
Is there a problem with my Docker image?
I built the Docker image locally and it runs fine. I pushed the image to an AWS ECR repository, and pulling this remote image back to my local machine also runs correctly.
Problem
I have set up my compute environment, job queue, and job definition, but I get this error
CannotStartContainerError: Error response from daemon:
OCI runtime create failed: container_linux.go:370:
starting container process caused:
exec: "docker": executable file not found in $PATH: unknown
when I run
["docker","run","-t","111111111111.dkr.ecr.us-region-X.amazonaws.com/myimage:latest","python3","hello_world.py","--MSG","ok"]
Is Docker installed?
I am using the ECS_AL2 image type. When I start an EC2 instance with this AMI and SSH into it, I can see that Docker is already installed; docker run works fine, for instance.
Is there a (generic) problem with my compute environment, job queue, or job definition?
When I instead set the command to echo hello, it works fine.
Appreciate any advice/help you can provide.
UPDATE - ANSWER
@samtoddler helped me realize that the Command in the job definition is executed inside the container, so I only needed
["python3","hello_world.py","--MSG","ok"]
in the Command statement.
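For reference, a minimal sketch of how that looks when registering the job definition through the AWS CLI (the job definition name and resource sizes below are placeholders, not my actual values):
# the command runs inside the container, so it must not start with "docker run"
aws batch register-job-definition \
  --job-definition-name hello-world-job \
  --type container \
  --container-properties '{
    "image": "111111111111.dkr.ecr.us-region-X.amazonaws.com/myimage:latest",
    "vcpus": 1,
    "memory": 256,
    "command": ["python3", "hello_world.py", "--MSG", "ok"]
  }'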
This error
CannotStartContainerError: Error response from daemon:
means it is coming from the Docker daemon, so Docker is doing its job.
It seems you have some trouble with your Docker image: how it is packaged and how you are trying to pass all those vars.
Please check the Docker Image CMD section on how to use ENTRYPOINT and CMD.
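As a rough illustration only (a hypothetical Dockerfile, not necessarily how the asker's image is built), keeping the full default invocation in CMD means a Batch job definition's Command can replace it wholesale; if an ENTRYPOINT were set instead, the Command would be appended to it rather than replace it:
# hypothetical image layout: no ENTRYPOINT, so CMD holds the complete
# default command and can be overridden by the job definition's Command
FROM python:3.9-slim
WORKDIR /app
COPY hello_world.py .
CMD ["python3", "hello_world.py", "--MSG", "ok"]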
There is some explanation in this question docker-oci-runtime-create-failed-container-linux-go349-starting-container-pro
Related
I'm trying to run the AWS SAM local example with sam local invoke, but I get this error:
Could not find amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.6.2 image locally and failed to pull it from docker
Any suggestions?
This started working after I created a Docker account and logged in to Docker Desktop.
Switch your Docker from Windows containers to Linux containers.
Otherwise, try to pull the image manually; you will see the actual error.
https://hub.docker.com/r/amazon/aws-sam-cli-emulation-image-java8
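For example, pulling the exact image named in the error message from a terminal will surface the underlying failure directly (the tag here is simply the one from the error above):
docker pull amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.6.2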
I'm attempting to run the following command in CodeBuild:
- docker run --rm -v $(pwd)/SQL:/flyway/SQL -v $(pwd)/conf:/flyway/conf flyway/flyway -enterprise -url=jdbc:postgresql://xxx.xxx.us-east-1.rds.amazonaws.com:5432/hamshackradio -dryRunOutput="/flyway/SQL/output.sql" migrate
I get the following error:
ERROR: Unable to use /flyway/SQL/output.sql as a dry run output:
/flyway/SQL/output.sql (Permission denied) Caused by:
java.io.FileNotFoundException: /flyway/SQL/output.sql (Permission
denied)
The goal is to capture the output.sql file. Running the exact same command locally on Windows (adjusting the paths, of course) works without error. The issue isn't Flyway or the overall command structure; it's something to do with the internals of running the Docker container on Ubuntu in AWS CodeBuild and the permissions there (maybe permissions, maybe something else, I'm open).
Does anyone have a good idea on how to address this?
The container doesn't have write access to the host. You could try the following, which saves the artifact inside the container and uses docker cp to copy it to the host.
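# create (but don't start) the container, mounting the host migrations directory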
container=$(docker create -v <flywaymigrationspathonhost>:/flyway/sql flyway/flyway migrate -dryRunOutput=/flyway/reports/changes.sql -schemas=dbo -user=youruser -password=yourpass -url=<yourjdbcurl> -licenseKey=<licensekey>)
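# start the container attached, so the migration and dry-run output happen now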
docker start -a ${container}
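# copy the dry-run artifact out of the stopped container onto the host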
docker cp ${container}:/flyway/reports/changes.sql <hostpath>
You need to enable Privileged Mode for your CodeBuild project.
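If the project already exists without it, privileged mode can be enabled from the console or via the CLI; a rough sketch (the project name, image, and compute type are placeholders, and update-project expects the full environment object, not just the one flag):
aws codebuild update-project \
  --name my-flyway-project \
  --environment '{
    "type": "LINUX_CONTAINER",
    "image": "aws/codebuild/standard:5.0",
    "computeType": "BUILD_GENERAL1_SMALL",
    "privilegedMode": true
  }'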
I want to deploy Spring Boot applications using Kinesis streams on Kubernetes cluster on AWS.
I used kops in an AWS EC2 (Amazon Linux) instance to create my cluster and deploy it using terraform.
I installed Spring Cloud Data Flow for Kubernetes using the Helm chart. All my pods are up and running and I can access the Spring Cloud Data Flow interface to register my dockerized apps. I am using ECR repositories to store my Docker images.
When I want to deploy the stream (composed of a time-source and a log-sink), a big red error message pops up. I checked the log of the Skipper pod and I see the following error message, starting with:
org.springframework.cloud.skipper.SkipperException: Could not install AppDeployRequest
and ending with:
Caused by: java.io.IOException: Cannot run program "docker" (in directory "/tmp/spring-cloud-deployer-5769885450333766520/time-log-kinesis-stream-1539963209716/time-log-kinesis-stream.log-sink-kinesis-app-v1"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048) ~[na:1.8.0_111-internal]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.start(LocalAppDeployer.java:386) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.start(LocalAppDeployer.java:414) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.access$200(LocalAppDeployer.java:296) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
at org.springframework.cloud.deployer.spi.local.LocalAppDeployer.deploy(LocalAppDeployer.java:199) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE]
... 54 common frames omitted
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method) ~[na:1.8.0_111-internal]
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247) ~[na:1.8.0_111-internal]
at java.lang.ProcessImpl.start(ProcessImpl.java:134) ~[na:1.8.0_111-internal]
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029) ~[na:1.8.0_111-internal]
... 58 common frames omitted
I already had this error when I tried to deploy on a local k8s cluster on Windows 10, and I thought it was tied to the Win10 platform.
I am using spring-cloud-dataflow-server-kubernetes at version 1.6.2.RELEASE.
I really have no clue why this error is appearing. Thanks!
It looks like the docker command is not found by the SCDF local deployer's ProcessBuilder when it tries to exec docker from this path:
/tmp/spring-cloud-deployer-5769885450333766520/time-log-kinesis-stream-1539963209716/time-log-kinesis-stream.log-sink-kinesis-app-v1
SCDF sets the above path as the working directory before running the docker command, and hence docker is expected to be runnable from this location.
I have found where the issue was. My bad, the problem is always between the keyboard and the chair!
I wanted to remove all the metrics processes in the skipper-config.yaml file and I introduced a typo into the configuration file. The JSON env variable data.spring.application.json for the Skipper launch was not valid, hence the DeployerInitializationService never saw the properties it needed to add Kubernetes into the repository!
Now in the logs and in the Data Flow shell I have the default and the minikube accounts. Thanks for your help anyway :)
I'm trying to use the docker awslogs driver and getting the following error:
"docker: Error response from daemon: Failed to initialize logging
driver: NoCredentialProviders: no valid providers in chain.
Deprecated."
According to this GitHub comment, I need to set the AWS_SHARED_CREDENTIALS_FILE environment variable for the docker daemon, but I'm not sure how to do that when using Docker for Mac.
The command I'm using to start the container is:
docker run -d \
--log-driver=awslogs \
--log-opt awslogs-region=us-east-1 \
--log-opt awslogs-group=my-log-group \
my-image
Version information:
Docker for Mac 1.12.1-rc1-beta23 build 11375
OS X El Capitan 10.11.6
but I'm not sure how to do that when using Docker for Mac.
With boot2docker, you would need to modify /var/lib/boot2docker/profile in order to add this variable.
See "Docker daemon config file on boot2docker".
It does persist across reboots of the TinyCore-based VM, and the docker daemon then takes it into account.
With the new xhyve-based Docker for Mac, the idea should be the same.
/var/lib/boot2docker/profile does exist as well, as shown in this answer.
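As a rough sketch of that approach (the credentials path inside the VM is a placeholder; adjust it to wherever the file actually lives):
# append the variable to the daemon's profile so the daemon process inherits it
echo 'export AWS_SHARED_CREDENTIALS_FILE=/home/docker/.aws/credentials' | sudo tee -a /var/lib/boot2docker/profile
# restart the daemon so it picks up the new environment
sudo /etc/init.d/docker restart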
The official docker daemon doc points to:
--config-file=/etc/docker/daemon.json Daemon configuration file
So try and modify this file.
By default, the comments mention:
~/Library/Containers/com.docker.docker/Data/database/com.docker.driver.amd64-linux/etc/docker/daemon.json
Using information taken from this answer: Docker daemon config path under Mac OS
You can connect to the VM layer that runs the docker daemon using:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
And you can modify /etc/docker/daemon.json to add the needed variables there.
Once you make your changes, you can just run:
service docker restart
from within the moby terminal to restart the docker daemon.
Note that if you restart Docker from your Mac, the changes will not persist.
On a side note, if you encounter a login screen when connecting with the screen command, try username: root to access the system.
My current objective is to have Travis deploy our Django + Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy, since Travis has built-in support for it. When I got to the AppSpec and actual deployment part, I first tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers whose images are pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore, which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is not what is needed here. My current status is that I can get Docker started with sudo service docker start, but I cannot get docker-compose up to succeed. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background by the ValidateService section script. In fact, when I try to run docker-compose up manually while SSH'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container; all it did was cause problems, and it was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers: the reason containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space; 8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an AfterInstall script run service docker start as part of the necessary Docker installation and setup (including installing Docker-Compose). This explicitly starts the Docker daemon; without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't exactly remember what problems, but they are related to the particular Linux distribution of the Amazon Linux AMI). The AfterInstall script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
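A rough sketch of the shape of that AfterInstall script (the docker-compose release version is a placeholder; substitute whatever version you target):
#!/bin/bash
# explicitly start the Docker daemon, or later docker-compose calls will fail
service docker start
# install docker-compose under /opt/bin, as discussed above
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose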
Additionally, the final phase of deployment, which starts up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin, stdout, and stderr all redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml, in my case) never exits. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.