I am using Terraform to build infrastructure with the AWS provider, and I push my local Docker images to ECR using the AWS CLI.
Now I have an Application Load Balancer that routes traffic to an ECS service, and I want ECS to manage my Docker containers using Fargate. But the containers keep exiting with the message "Essential Docker container exited".
That's the only log printed out.
If I change the Docker image to nginx:latest (fetched from Docker Hub), it works.
PS: My container is a simple Node application with node:alpine as the base image. Is the problem related to that, or am I doing something else wrong?
Can anyone give me some insight into what is wrong with my approach?
I get the following error in AWS Logs:
standard_init_linux.go:211: exec user process caused "exec format error"
My Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
RUN npm install
# Expose a port.
EXPOSE 8080
# Run the node server.
ENTRYPOINT ["npm", "start"]
They say it's an issue with the start script, but I am just running npm start to start the server.
It's not your approach; your image is just not working.
Try running it locally and look at the output; otherwise you will need to ship the logs to CloudWatch and see what they say.
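For what it's worth, "exec format error" from standard_init_linux.go usually means the image was built for a different CPU architecture than the one the task runs on (for example, built on an ARM machine while Fargate runs it on x86_64). A quick local check and rebuild, assuming the image is tagged my-node-app (placeholder name) and Docker Buildx is available:
# Check which OS/architecture the local image was built for
docker image inspect my-node-app --format '{{.Os}}/{{.Architecture}}'
# Run it locally first to confirm the entrypoint actually starts
docker run --rm -p 8080:8080 my-node-app
# If the architecture is not linux/amd64, rebuild for it explicitly before pushing to ECR
docker buildx build --platform linux/amd64 -t my-node-app .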
I have created a Docker image locally and pushed it to GCP Container Registry using the gcloud SDK. The image was pushed successfully.
Now I am trying to manually deploy this image from Container Registry onto an existing private GKE cluster from the GCP UI by selecting the Deploy to GKE option.
After deploying, I am getting an error saying "container has runasNonRoot and the image will run as root (pod:"appname-XXXXX_default(XXXXXX-XXXX....)", container:appName-1): CreateContainerConfigError "
Any help will be greatly appreciated.
Sounds like you've got a pod security policy set up to avoid running containers as root. This is a good security policy because of the risk of breakout into other applications or nodes within the cluster.
You might want to read up on the Kubernetes security context and potentially rebuild your container.
With my clusters I often had to consume public images that run as root; in those cases I would use the public image as a base, create a new non-root user and group, and have them take ownership of any tools needed in the image.
Changing the default user in a Dockerfile:
FROM ubuntu
RUN groupadd --gid 15555 notroot \
    && useradd --uid 15555 --gid 15555 -ms /bin/false notroot \
    && chown -R notroot:notroot /home/notroot
USER notroot
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["whoami && id"]
Here's a better explanation of why you should avoid root in Docker images.
I am wondering something about PySpark applications. If I containerize a PySpark program called my_spark_script.py, can I just execute it inside the Docker container? I mean to ask, is a Dockerfile like this valid:
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
CMD spark-submit --master yarn --deploy-mode cluster --num-executors 2 my_spark_script.py   # <-- ???
And I can build it as:
docker build -t my_docker_image .
and then run it as
docker run -d my_docker_image
I am wondering if this can be run on AWS EC2 or AWS EMR or something similar. Would it work?
I just don't know how the container's CMD works in relation to an environment like EC2 or EMR. Please help!
Amazon Elastic Container Service (ECS) is a managed AWS service for running Docker containers. ECS provides the Fargate launch type, a serverless option that runs your containers without you having to manage the underlying EC2 instances.
To build the source code into a Docker image you can use the AWS CodeBuild service, with AWS CodePipeline for continuous integration; please check the example linked here.
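If it helps, here is a rough sketch of wiring such an image into Fargate with the AWS CLI; the cluster name, task definition file, subnet and security group IDs below are placeholders you would replace with your own:
# Register a task definition that points at your image (taskdef.json is a file you write)
aws ecs register-task-definition --cli-input-json file://taskdef.json
# Run the containerized job once as a Fargate task
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-spark-task \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-12345],assignPublicIp=ENABLED}"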
Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have set up a ready-made server from 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial.
I have created a registry user with appropriate permissions
An EC2 instance is created
ECR repository is created
AWS CLI is configured
As per the AWS instructions, I'm retrieving an authentication token and authenticating my Docker client to the registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following message:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?
You can use the ecs-cli compose command from the ECS CLI.
This command translates the docker-compose file you create into an ECS Task Definition.
If you're interested in finding out more about the CLI take a read of the AWS documentation here.
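A rough example of what that looks like for this project, assuming the ECS CLI has already been configured with ecs-cli configure and the cluster name is a placeholder:
# Translate docker-compose.yml into a task definition and run it as a long-running service
ecs-cli compose --project-name exchange --file docker-compose.yml \
  service up --cluster my-cluster --launch-type FARGATE --create-log-groups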
Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli.
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI) using the Docker commands you already know.
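The flow with that tool looks roughly like this (the context name is arbitrary):
# Create a Docker context backed by Amazon ECS and switch to it
docker context create ecs myecscontext
docker context use myecscontext
# Deploy the existing docker-compose.yml straight to ECS
docker compose up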
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI " from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer #docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture:
I have an EC2 instance on AWS.
I tried:
SSH into that box
install Docker
pull the Docker image from my ECR repository URI
docker pull bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
tag it
docker tag bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
I'm trying to build it, and I don't know what command I should use.
I've tried
docker build bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest .
I kept getting an error.
How would one go about debugging this further?
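For reference, docker build does not take an existing image name plus a registry URI; it needs a build context directory containing a Dockerfile, and the ECR URI only comes into play when tagging and pushing. A minimal sketch, assuming the Dockerfile sits in the current directory:
# Build from the local source directory (the trailing dot is the build context)
docker build -t bheng-api-revision-test:latest .
# Tag the freshly built image with the ECR repository URI
docker tag bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
# Push it to ECR
docker push 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest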
I am setting up a CI/CD pipeline for my microservices. Currently I use Travis CI to pull the code from GitHub upon check-in, build the Docker image, and push it to Docker Hub. I tried using Docker Cloud (previously known as Tutum), which provides an automatic deployment feature to an AWS EC2 instance, but the deployment sometimes recreates the container and the service endpoint URL changes, which is not desirable.
I am exploring Amazon's ECS and its tasks, but I cannot find any reference for how to set up continuous deployment to ECS when a new image is pushed to Docker Hub.
Does anybody have experience doing this setup?
With ECS you would basically have CI detect a change on Docker Hub and update your task definition/service.
For this I use the wonderful ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and pushed to Docker Hub, it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
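If you would rather not pull in an extra script, roughly the same effect is possible with the plain AWS CLI, assuming the service's task definition already points at the image tag you just pushed (for example :latest):
# Force the service to start new tasks, which pull the image again
aws ecs update-service --cluster "$CLUSTER_NAME" --service "$SERVICE_NAME" --force-new-deployment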