automated docker build run error: Unable to find image - build

I have an automated Docker build set up and the build appears to be working fine, but when I try to run it I get this error:
Unable to find image 'dtwill/ddcintegrationdevenvs:blkmesa_esrbtmq' locally
Pulling repository dtwill/ddcintegrationdevenvs
2014/09/11 14:33:20 Error: image dtwill/ddcintegrationdevenvs not found
Run command:
docker run -i -p 9200:9200 -p 9300:9300 -p 9001:9001 -p 15672:15672 --rm -t dtwill/ddcintegrationdevenvs:blkmesa_esrbtmq
I'm trying to test:
a. that docker looks for the image locally
b. that if the image is not found locally, docker will successfully pull and run it
The image is valid: https://registry.hub.docker.com/u/dtwill/ddcintegrationdevenvs/

The image you linked to is private. Did you do a docker login or create a .dockercfg file before docker run?
(BTW, I linked to an outdated commit in the docker source repo for the authentication file since it seems to be broken in the current documentation.)
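For example, a minimal sketch (assuming the repository is private and your Docker Hub account has access to it) would be to authenticate first and then retry the same run command:

# Log in to Docker Hub so the daemon is allowed to pull private repositories
docker login

# Retry the run; docker pulls the tag automatically if it is not cached locally
docker run -i -p 9200:9200 -p 9300:9300 -p 9001:9001 -p 15672:15672 --rm -t dtwill/ddcintegrationdevenvs:blkmesa_esrbtmq

docker login stores the credentials that the daemon then uses for subsequent pulls of private repositories.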

Related

Flask application on Docker image appears to run fine when run from Docker Desktop; unable to deploy on EC2, "Essential container in task exited"

I am currently trying to deploy a Docker image. The Docker image is of a Flask application; when I run the image via Docker Desktop, the service works fine. However, after creating an EC2 instance on Amazon and running the image as a task, I get the error "Stopped reason Essential container in task exited".
I am unsure how to troubleshoot or what steps to take. Please advise!
Edit:
I noticed that my Docker image on my computer is 155 MB while the one on AWS is 67 MB. Does AWS do any compression? I will try pushing my image again.
Edit2:
Reading through some other questions, it appears that it is normal for the sizes to differ, as Docker Desktop shows the uncompressed version.
I decided to run the AWS task image on my Docker Desktop. While it does run and the console shows everything is fine, I am unable to access the links provided.
* Serving Flask app 'main' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:5000
* Running on http://172.17.0.2:5000 (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: XXXXXX
In my Dockerfile, I have already made sure to EXPOSE 5000. I am unsure why, after running the same image from Amazon on my local machine, I am unable to connect to it.
FROM alpine:latest
ENV PATH /usr/local/bin:$PATH
RUN apk add --no-cache python3
RUN apk add py3-pip && pip3 install --upgrade pip
WORKDIR /backend
RUN pip3 install wheel
RUN pip3 install --extra-index-url https://alpine-wheels.github.io/index numpy
COPY . /backend
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["main.py"]
Edit3:
I believe I "found" the problem but I am unsure how to run it the same way. When I was building the Docker image, inside VSCode I would run it with docker run -it -d -p 5000:5000 flaskapp, where the flags -d and -p 5000:5000 mean running it in detached mode and publishing port 5000. When I run the image that way inside VSCode, I am able to access the application on my local machine.
However, after creating the image and running it by pressing Start inside Docker Desktop, I am unable to access it on my local machine.
How will I go about running the Docker image this way either via Docker Desktop or Amazon EC2?
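For reference, a minimal sketch of the two configurations involved. The local command uses the image name flaskapp from the question; the ECS fragment is an assumed example of how a port mapping is usually declared in a task definition, not the poster's actual configuration:

# Locally: EXPOSE in the Dockerfile only documents the port; -p actually publishes it
docker run -d -p 5000:5000 flaskapp

# On ECS, the equivalent is a portMappings entry in the container definition
"portMappings": [
  { "containerPort": 5000, "hostPort": 5000, "protocol": "tcp" }
]

With the EC2 launch type, the instance's security group also has to allow inbound traffic on the host port before the app is reachable from outside.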

docker image runs ok locally but in ECS I get a message: executable file not found in $PATH

I've a weird error, I'm trying to run a python script in ECS, the dockerfile is pretty basic:
FROM python:3.8
COPY . /
RUN pip install -r requirements.txt
CMD ["python", "./get_historical_data.py"]
Building this on my local machine works perfectly:
docker run --network=host historical-price
I uploaded this image to ECR and ran it on ECS with a basic config: I just set the container name, pointed the image to my ECR repo, and set some environment variables... when I run this I get
Status reason CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "python": executable file not found in $PATH: unknown
but (really weird) if I log into the EC2 server and run the container manually
docker run -it -e TICKER='SOL/USDT' -e EXCHANGE='BINANCE' -e DB_HOST='xxx' -e DB_NAME='xxx' -e DB_PASSWORD='xxx' -e DB_PORT='xxx' -e DB_USER='xxx' xxx.dkr.ecr.ap-southeast-2.amazonaws.com/xxx:latest /bin/bash
I can see this running ok...
I've tried several Dockerfiles, using
CMD python ./get_historical_data.py
or using the python3 command instead of python.
I also tried skipping the CMD instruction in the Dockerfile and adding the command in the ECS task definition instead.
Nothing works...
I really don't know what could be happening here, because last week I ran a similar task and it worked perfectly. I hope you can help me.
Thank you, and please let me know if you need more details.
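Not from the original thread, but for comparison, a sketch of a container definition that leaves the command to the image's own CMD (all names are assumptions based on the question; a mismatching entryPoint or command override in the task definition is one common way to end up with this kind of "executable file not found" error):

"containerDefinitions": [
  {
    "name": "historical-price",
    "image": "xxx.dkr.ecr.ap-southeast-2.amazonaws.com/xxx:latest",
    "environment": [
      { "name": "TICKER", "value": "SOL/USDT" },
      { "name": "EXCHANGE", "value": "BINANCE" }
    ]
  }
]

With entryPoint and command left out, ECS runs the image's CMD ["python", "./get_historical_data.py"] exactly as docker run does on the EC2 host.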

Docker with Serverless- files not getting packaged to container

I have a Serverless application using Localstack, I am trying to get fully running via Docker.
I have a docker-compose file that starts localstack for me.
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to localstack using sls deploy, everything works as expected. However, I want Docker to run everything for me, so that a single Docker command starts localstack and deploys my service to it.
I have added a Dockerfile to my project and have added this
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls","deploy", "--host", "0.0.0.0" ]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but I am receiving the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. I have logged into the Docker container and cannot see the app that I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while running it. Try
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the current working directory (include this copy step in your Dockerfile), or copy it to a specific location in the container and pass --config in CMD (sls deploy --config <path-to-config>).
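As an illustration, a minimal sketch of the Dockerfile with the project copied into the image (the WORKDIR /app path is an assumption, not from the original post):

FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
    npm install -g serverless-localstack;
WORKDIR /app
# Copy the project, including serverless.yml, so sls can find its config at run time
COPY . .
EXPOSE 3000
CMD ["sls", "deploy"]

With a COPY like this, the deploy runs from a directory that actually contains serverless.yml, which is what the error message is complaining about.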
This command can only be run in a Serverless service directory. Make
sure to reference a valid config file in the current working directory
Be sure that you have serverless installed
Once installed, create a service:
% sls create --template aws-nodejs --path myService
cd to the folder with the serverless.yml file:
% cd myService
This will deploy the function to AWS Lambda
% sls deploy

Running gcloud run deploy from inside Cloud Build results in error

I have a custom build step in Google Cloud Build, which first builds a docker image and then deploys it as a cloud run service.
This last step fails, with the following log output;
Step #2: Deploying...
Step #2: Setting IAM Policy.........done
Step #2: Creating Revision............................................................................................................................failed
Step #2: Deployment failed
Step #2: ERROR: (gcloud.run.deploy) Cloud Run error: Invalid argument error. Invalid ENTRYPOINT. [name: "gcr.io/opencobalt/silo#sha256:fb860e758eb1957b90ff3761fcdf68dedb9d10f832f2bb21375915d3de2aaed5"
Step #2: error: "Invalid command \"/bin/sh\": file not found"
Step #2: ].
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
The build step looks like this:
{"name":"gcr.io/cloud-builders/gcloud","args":["run","deploy","silo","--image","gcr.io/opencobalt/silo","--region","us-central1","--platform","managed","--allow-unauthenticated"]}
The image is built and exists in the registry, and if I change the last build step to deploy a Compute Engine VM instead, it works. That build step looks like this:
{"name":"gcr.io/cloud-builders/gcloud","args":["compute","instances","create-with-container","silo","--container-image","gcr.io/opencobalt/silo","--zone","us-central1-a","--tags","silo,pharo"]}
I can also build the image locally but run into the same error when running gcloud run deploy locally.
I am trying to figure out how to solve this problem. The image works, since it runs fine locally and runs fine when deployed as a Compute Engine VM, the error only show up when I'm trying to deploy the image as a Cloud Run service.
(added) The Dockerfile looks like this:
######################################
# Based on Ubuntu image
######################################
FROM ubuntu
######################################
# Basic project infos
######################################
LABEL maintainer="PeterSvensson"
######################################
# Update Ubuntu apt and install some tools
######################################
RUN apt-get update \
&& apt-get install -y wget \
&& apt-get install -y git \
&& apt-get install -y unzip \
&& rm -rf /var/lib/apt/lists/*
######################################
# Have an own directory for the tool
######################################
RUN mkdir webapp
WORKDIR webapp
######################################
# Download Pharo using Zeroconf & start script
######################################
RUN wget -O- https://get.pharo.org/64/80+vm | bash
COPY service_account.json service_account.json
RUN export certificate="$(cat service_account.json)"
COPY load.st load.st
COPY setup.sh setup.sh
RUN chmod +x setup.sh
RUN ./setup.sh; echo 0
RUN ./pharo Pharo.image load.st; echo 0
######################################
# Expose port 8080 of Zinc outside the container
######################################
EXPOSE 8080
######################################
# Finally run headless as server
######################################
CMD ./pharo --headless Pharo.image --no-quit
Any advice warmly welcome.
Thank you.
After a lot of testing, I managed to get further. It seems that the missing /bin/sh file is a red herring.
I tried changing the startup command from CMD to ENTRYPOINT, since that was mentioned in the error, but it did not work. However, when I copied the startup instruction into a new file, startup.sh, and changed the last line of the Dockerfile to:
ENTRYPOINT ./startup.sh
It did work. I needed to chmod +x the new file of course, but the strange thing is that ENTRYPOINT ./pharo --headless Pharo.image --no-quit gave the same error, and even ENTRYPOINT ["./pharo", "--headless", "Pharo.image", "--no-quit"] also gave the same error.
But having just one argument to ENTRYPOINT made cloud run work. Go figure.
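For clarity, a sketch of the wrapper approach described above; the contents of startup.sh are inferred from the original CMD and are an assumption, not the poster's exact file:

#!/bin/sh
# startup.sh -- wraps the original startup command in a single script
exec ./pharo --headless Pharo.image --no-quit

# Corresponding tail of the Dockerfile
COPY startup.sh startup.sh
RUN chmod +x startup.sh
EXPOSE 8080
ENTRYPOINT ./startup.sh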
It appears that Google Cloud Run has a dislike for the ubuntu:20.04 image. I have the exact same problem with a Play framework application.
The command
ENTRYPOINT /opt/play-codecheck/bin/play-codecheck -Dconfig.file=/opt/codecheck/production.conf
failed with
error: "Invalid command \"/bin/sh\": file not found"
I also tried
ENTRYPOINT ["/bin/bash", "/opt/play-codecheck/bin/play-codecheck", "-Dconfig.file=/opt/codecheck/production.conf"]
and was rewarded with
error: "Invalid command \"/bin/bash\": file not found"
The trick of putting the command in a shell script didn't work for me either. However, when I changed
FROM ubuntu:20.04
to
FROM ubuntu:18.04
the image deployed. At this point, that's an acceptable fix for me, but it seems like something that Google needs to address.
See also:
Unable to deploy Ubuntu 20.04 Docker container on Google Cloud Run
My workaround was to use a CMD directive that calls Python directly rather than a shell (either /bin/sh or /bin/bash). It's working well so far.
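A sketch of that workaround, contrasting the shell form (which goes through /bin/sh) with the exec form that calls the interpreter directly; the file name main.py is an assumption, not from the answer above:

# Shell form: Docker wraps this in /bin/sh -c, which is what Cloud Run rejected here
CMD python3 main.py

# Exec form: the interpreter is started directly, no shell involved
CMD ["python3", "main.py"]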

Amazon Elastic Beanstalk Docker: Failed to build Docker image aws_beanstalk/staging-app, not a directory

I want to run my Java application in Amazon Elastic Beanstalk within Docker. I zip the Dockerfile, my app, and a bash script into an archive and upload it to Beanstalk, but during the build I get this error:
Step 2 : COPY run /opt
time="2017-02-07T16:42:40Z" level="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory"
Failed to build Docker image aws_beanstalk/staging-app: ="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory" .
On my local computer, docker build and docker run work fine.
My Dockerfile:
FROM ubuntu:14.04
MAINTAINER Dev
COPY run /opt
COPY app.war /opt
EXPOSE 8081
CMD ["/opt/run"]
Thanks for the help.
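Not part of the original question, but for reference, the Dockerfile above expects run and app.war to sit next to the Dockerfile at the root of the uploaded archive, so a zip built from inside the project folder would look roughly like this (the folder name myapp is hypothetical):

cd myapp              # hypothetical folder containing Dockerfile, run, app.war
chmod +x run          # make sure the script is executable before zipping
zip ../myapp.zip Dockerfile run app.war

A trailing slash on the destination (COPY run /opt/) also makes it explicit that /opt is meant as a directory rather than a target file name.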