Docker trying to push 900MB to AWS EC2 Container Service

I've created a Docker image locally, and it's about 900MB. I've set up the AWS container service, and I'm now trying to push my Docker image to it. The image is a Node.js (v6) application.
It's taking too long, and I feel like I'm doing something wrong here. It shouldn't be trying to push 900MB (the size of my image), should it?
Is there a way to simply push my Dockerfile to AWS and have it pull my code from GitHub, build the image on the server, and then run it?
Here is my Dockerfile:
FROM node:6.2.0
# Create app directory inside the container
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 7000
CMD [ "npm", "start" ]
The code which I'm running to push my docker image is the following:
docker push <id>.dkr.ecr.ap-southeast-2.amazonaws.com/name:latest
Before this I ran the following:
aws ecr get-login --region ap-southeast-2
docker login -u AWS -p <token> -e none <aws_url>
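For reference, docker push uploads every layer the first time, so a ~900MB image really can transfer close to that much; later pushes only upload layers that changed. Below is a sketch of the same login/tag/push flow using the newer aws ecr get-login-password command; the account ID and local image name are placeholders:
# Log Docker in to the registry in one step (AWS CLI v2)
aws ecr get-login-password --region ap-southeast-2 | docker login --username AWS --password-stdin <id>.dkr.ecr.ap-southeast-2.amazonaws.com
# Tag the local image with the repository URI and push it
docker tag <local-image>:latest <id>.dkr.ecr.ap-southeast-2.amazonaws.com/name:latest
docker push <id>.dkr.ecr.ap-southeast-2.amazonaws.com/name:latest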

Related

Using Spark-Submit in Docker Image?

I am wondering something about PySpark applications. If I containerize a PySpark program called my_spark_script.py, can I just execute it inside the Docker container? That is, is a Dockerfile like this valid:
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
CMD spark-submit --master yarn --deploy-mode cluster --num-executors 2 my_spark_script.py # <-- ???
And I can build it as:
docker build -t my_docker_image .
and then run it as
docker run -d my_docker_image
I am wondering if this can be run on AWS EC2 or AWS EMR or something similar. Would it work?
I just don't know how the container CMD works in relation to an environment like EC2 or EMR. Please help!
Amazon Elastic Container Service (ECS) is a managed AWS service for running Docker containers. ECS provides the Fargate launch type, a serverless option that runs your containers without you having to provision or manage EC2 instances.
To build the source code into a Docker image you can use AWS CodeBuild, with AWS CodePipeline for continuous integration; see the AWS documentation for a worked example.
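To make that concrete, here is a rough sketch of getting the image onto Fargate with the AWS CLI; the repository, cluster, task-definition, and subnet values are placeholders, and a task definition pointing at the pushed image has to be registered separately (for example in the ECS console). ECS then runs the container's CMD just like docker run would, unless the task definition overrides it:
# Create an ECR repository and push the image built above (names are examples)
aws ecr create-repository --repository-name my-spark-app
docker tag my_docker_image:latest <account-id>.dkr.ecr.<region>.amazonaws.com/my-spark-app:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/my-spark-app:latest
# Run it as a one-off Fargate task (the task definition must reference the pushed image)
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-spark-task \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}"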

How can I solve an Unauthorized Access 401 error when pulling an AWS deep learning container with Docker?

I tried building a detectron2 image with Docker, in order to use it with AWS SageMaker. The Dockerfile looks like this:
ARG REGION="eu-central-1"
FROM 763104351884.dkr.ecr.$REGION.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04
RUN pip install --upgrade torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
############# Detectron2 section ##############
RUN pip install \
--no-cache-dir pycocotools~=2.0.0 \
--no-cache-dir https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/detectron2-0.4%2Bcu101-cp36-cp36m-linux_x86_64.whl
ENV FORCE_CUDA="1"
# Build D2 only for Volta architecture - V100 chips (ml.p3 AWS instances)
# ENV TORCH_CUDA_ARCH_LIST="Volta"
# Set a fixed model cache directory. Detectron2 requirement
ENV FVCORE_CACHE="/tmp"
############# SageMaker section ##############
COPY container_training/sku-110k /opt/ml/code
WORKDIR /opt/ml/code
ENV SAGEMAKER_SUBMIT_DIRECTORY /opt/ml/code
ENV SAGEMAKER_PROGRAM training.py
WORKDIR /
ENTRYPOINT ["bash", "-m", "start_with_right_hostname.sh"]
The problem is that when I run the docker build command, it fails to pull the base image from the AWS ECR repository. It throws this error:
ERROR [internal] load metadata for 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04
------
 > [internal] load metadata for 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests 1.6.0-gpu-py36-cu101-ubuntu16.04]: 401 Unauthorized
I should mention that I log in successfully before trying to build, and my user has full ECR permissions.
You probably logged in to your private ECR registry, but Docker is not logged in to the shared ECR registry that hosts the PyTorch base image. Log in to both, like this:
Enter your region and account ID below, then run the following:
%%bash
REGION=YOUR_REGION
ACCOUNT=YOUR_ACCOUNT_ID
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin 763104351884.dkr.ecr.$REGION.amazonaws.com
# log in to your private ECR
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
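Once both logins succeed, re-running the build should be able to pull the base image. A minimal sketch, where the tag name is just an example and REGION only needs overriding if it differs from the Dockerfile default:
# Rebuild, passing the same region you logged in to
docker build --build-arg REGION=eu-central-1 -t detectron2-sagemaker .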

Dockerfile - install jenkins on AWS

New to AWS so any help would be appreciated.
I'm attempting to run Jenkins through Docker on AWS. I found this article https://docs.aws.amazon.com/aws-technical-content/latest/jenkins-on-aws/containerized-deployment.html
Can anyone share a better step-by-step tutorial to achieve this? The page above seems incomplete.
It talks about "The Dockerfile should also contain the steps to install the Jenkins Amazon ECS plugin" but does not show how to install that plugin from the Dockerfile.
Thanks
Please follow below steps:
Launch an ECS cluster (EC2 launch type) according to your needs.
Install Docker on your local machine. For example, on Ubuntu: sudo apt-get install docker.io
systemctl start docker
Create a new folder for the Jenkins Docker image, and create a new Dockerfile inside it with the following contents.
FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Create plugins.txt in the same folder and add the line below:
amazon-ecs:1.3
Log in to ECR using the AWS CLI (run aws configure first to set your credentials).
aws ecr get-login --region <REGION>
Run the output returned by the above command to log Docker in to ECR.
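Note that on AWS CLI v2 the get-login subcommand has been removed; the equivalent single step, as a sketch, is:
# Log Docker in to your registry directly, no copy/pasting of a generated command
aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com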
sudo docker build -t jenkins_master .
sudo docker tag jenkins_master:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Create a repository in ECR for this image:
aws ecr create-repository --repository-name jenkins_master
Push the image to AWS ECR.
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Our Jenkins Docker image is ready, but the data stored by this Jenkins server will not be persistent. To store data permanently, we will create another Docker image that creates a volume with a mount point. Create a new directory for this image and, inside it, another Dockerfile with the content below.
FROM jenkins
VOLUME ["/var/jenkins_home"]
Again, follow the same commands to push this new image to ECR.
sudo docker build -t jenkins_dv .
sudo docker tag jenkins_dv:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
aws ecr create-repository --repository-name jenkins_dv
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
Now our images are ready. We will use these images to run a service on our ECS cluster. For that we need to install ecs-cli; on Linux, use the command below.
sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
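The downloaded binary is not executable yet, so a likely follow-up (cluster name and region are placeholders) is:
# Make ecs-cli executable and point it at your cluster and region
sudo chmod +x /usr/local/bin/ecs-cli
ecs-cli configure --cluster <cluster_name> --region <REGION>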
Create a new file, docker_compose.txt, with the contents below; this holds the Jenkins service configuration.
jenkins_master:
  image: jenkins_master
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv
jenkins_dv:
  image: jenkins_dv
  cpu_shares: 100
  mem_limit: 500M
Finally, bring the service up on your newly created cluster using the file above:
ecs-cli compose --file docker_compose.txt service up --cluster <cluster_name>
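Once the service is up, something like the following should confirm the containers are running and show the ports Jenkins is exposed on:
# List running containers in the cluster (cluster name is a placeholder)
ecs-cli ps --cluster <cluster_name>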
Hope this helps!

gitlab CI create docker image and push to AWS

I set up a Docker registry (ECR) on AWS. From my GitLab repository I'd like to set up CI to automatically build images and push them to the registry.
I was following a tutorial to set everything up, but when running the example I receive the error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My .yml file looks like this:
image: docker:latest
variables:
  REPOSITORY_URL: <aws-url>/<registry>/outsite-slackbot
services:
  - docker:dind
before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli
stages:
  - build
build:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region eu-west-1)
The problem is not with your Dockerfile; the build simply can't reach the Docker daemon. Check these steps:
Are you logged in as root? (sudo su or sudo -i)
Start the Docker service (service docker start)
Then follow the tutorial :)
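If the job runs with the docker:dind service from the .gitlab-ci.yml above, the Docker client inside the job also needs to know where the daemon listens. A minimal check, assuming the dind service runs without TLS on its default hostname and port:
# Point the client at the dind service and verify the daemon is reachable
export DOCKER_HOST=tcp://docker:2375
docker info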

Configuring bitbucket pipelines with Docker to connect to AWS

I am trying to set up Bitbucket Pipelines to deploy to ECS as described here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html
These instructions say how to push to Docker Hub, but I want to push the image to Amazon's image repository (ECR). I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket parameters list, and I can run these commands locally with no problems (the keys are defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I am wondering if it is not recognising the variables somehow. The docs here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html say that:
"The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files." So I am not sure why it isn't working. My Bitbucket Pipelines configuration is as follows (I have not included anything unnecessary):
- export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
# build the Docker image (this will use the Dockerfile in the root of the repo)
- docker build -t $IMAGE_NAME .
# authenticate with the AWS repo (this gets and runs the docker login command)
- eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
# push the new Docker image to the repo
- docker push $IMAGE_NAME
Is there a way of specifying the credentials for aws ecr get-login to use? I even tried this, but it doesn't work:
- mkdir -p ~/.aws
- echo -e "[default]\n" > ~/.aws/credentials
- echo -e "aws_access_key_id = $AWS_ACCESS_KEY_ID\n" >> ~/.aws/credentials
- echo -e "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY\n" >> ~/.aws/credentials
Thanks
I use an alternative method to build and push Docker images to AWS ECR that requires no environment variables:
image: amazon/aws-cli
options:
  docker: true
oidc: true
aws:
  oidc-role: arn:aws:iam::123456789012:role/BitBucket-ECR-Access
pipelines:
  default:
    - step:
        name: Build and push to ECR
        script:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1 .
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1
You will need to update the role ARN to match a Role you have created in your AWS IAM console with sufficient permissions.
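A quick sanity check is to print the caller identity at the start of the script; if the OIDC role is being assumed correctly, it should report the role rather than an IAM user:
# Should show an assumed-role ARN containing BitBucket-ECR-Access
aws sts get-caller-identity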
Try this:
bitbucket-pipelines.yml
pipelines:
  custom:
    example-image-builder:
      - step:
          image: python:3
          script:
            - export CLONE_ROOT=${BITBUCKET_CLONE_DIR}/../example
            - export IMAGE_LOCATION=<ENTER IMAGE LOCATION HERE>
            - export BUILD_CONTEXT=${BITBUCKET_CLONE_DIR}/build/example-image-builder/dockerfile
            - pip install awscli
            - aws s3 cp s3://example-deployment-bucket/deploy-keys/bitbucket-read-key .
            - chmod 0400 bitbucket-read-key
            - ssh-agent bash -c 'ssh-add bitbucket-read-key; git clone --depth 1 git@bitbucket.org:example.git -b master ${CLONE_ROOT}'
            - cp ${CLONE_ROOT}/requirements.txt ${BUILD_CONTEXT}/requirements.txt
            - eval $(aws ecr get-login --region us-east-1 --no-include-email)
            - docker build --no-cache --file=${BUILD_CONTEXT}/dockerfile --build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} --tag=${IMAGE_LOCATION} ${BUILD_CONTEXT}
            - docker push ${IMAGE_LOCATION}
options:
  docker: true
dockerfile
FROM python:3
MAINTAINER Me <me@me.me>
COPY requirements.txt requirements.txt
ENV DEBIAN_FRONTEND noninteractive
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN apt-get update && apt-get -y install stuff
ENTRYPOINT ["/bin/bash"]
I am running out of time, so for now I included more than just the answer to your question. But this would be a good enough template to work from. Ask questions in the comments if there is any line you don't understand and I will edit the answer.
I had the same issue. The error is mainly due to an old version of awscli.
You need to use a Docker image with a more recent awscli.
For my project I use linkmobility/maven-awscli.
You need to set the environment variables in Bitbucket.
Small changes to your pipeline:
image: Docker-Image-With-awscli
eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})
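As a sanity check, printing the CLI version inside the build image makes it obvious whether an outdated awscli is still being used:
# Confirm the image actually ships a recent awscli before the ECR login
aws --version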