Accessing Cloud SQL from Cloud Run on Google Cloud - google-cloud-platform

I have a Cloud Run service that accesses a Cloud SQL instance through SQLAlchemy. However, in the Cloud Run logs I see: CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: ensure that the account has access to "<connection_string>". Going to that link, it says that:
"By default, your app will authorize your connections using the Cloud Run (fully managed) service account. The service account is in the format PROJECT_NUMBER-compute#developer.gserviceaccount.com."
However, the following (https://cloud.google.com/run/docs/securing/service-identity) says:
"By default, Cloud Run revisions are using the Compute Engine default service account (PROJECT_NUMBER-compute#developer.gserviceaccount.com), which has the Project > Editor IAM role. This means that by default, your Cloud Run revisions have read and write access to all resources in your Google Cloud project."
So shouldn't that mean that Cloud Run can already access SQL? I've already set up the Cloud SQL Connection in the Cloud Run deployment page. What do you suggest I do to allow access to Cloud SQL from Cloud Run?
EDIT: I had to enable the Cloud SQL Admin API.

No, Cloud Run cannot access Cloud SQL by default. You need to follow one of two paths.
Connect to Cloud SQL using a local Unix socket file: you need to configure permissions as you said and deploy with flags indicating the intent to connect to the database. Follow https://cloud.google.com/sql/docs/mysql/connect-run (a short sketch of this path follows below).
Connect to Cloud SQL with a private IP: this involves deploying the Cloud SQL instance into a VPC network so that it gets a private IP address. You then use the Cloud Run VPC Access Connector (currently beta) to let the Cloud Run container reach that VPC network, which includes the SQL database's IP address directly (no IAM permissions needed). Follow https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
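If you take the first path, the rough shape of the setup is sketched below (hedged; PROJECT_ID, PROJECT_NUMBER, REGION, INSTANCE, the service name and the image are placeholders, and the service account shown is the default compute one). Once the instance is attached, the proxy exposes a Unix socket at /cloudsql/PROJECT_ID:REGION:INSTANCE, which is what a SQLAlchemy URL should point at (for example mysql+pymysql://USER:PASS@/DBNAME?unix_socket=/cloudsql/PROJECT_ID:REGION:INSTANCE).
# enable the API the Cloud SQL connection relies on
gcloud services enable sqladmin.googleapis.com

# give the Cloud Run service account the Cloud SQL Client role
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com \
    --role roles/cloudsql.client

# attach the instance to the service at deploy time
gcloud run deploy my-service \
    --image gcr.io/PROJECT_ID/my-image \
    --add-cloudsql-instances PROJECT_ID:REGION:INSTANCE \
    --platform managed \
    --region us-central1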

Cloud SQL Proxy solution
I use the cloud-sql-proxy to create a local unix socket file in the workspace directory provided by Cloud Build.
Here are the main steps:
Pull the Berglas container, populating its call with the _VAR1 substitution, which points at an environment variable I've encrypted with Berglas called CMCREDENTIALS. Add as many of these _VAR{n} as you require.
Install the Cloud SQL Proxy via wget.
Run an intermediate step (tests for this build). This step uses the variables stored in the provided temporary /workspace directory.
Build your image.
Push your image.
Using Cloud Run, deploy and include the flag --set-env-vars.
The full cloudbuild.yaml
# basic cloudbuild.yaml
steps:
# pull the berglas container and write the secrets to temporary files
# under /workspace
- name: gcr.io/berglas/berglas
  id: 'Install Berglas'
  env:
  - '${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}?destination=/workspace/${_VAR1}'
  args: ["exec", "--", "/bin/sh"]
# install the cloud sql proxy
- id: 'Install Cloud SQL Proxy'
  name: alpine:latest
  entrypoint: sh
  args:
  - "-c"
  - "\
    wget -O /workspace/cloud_sql_proxy \
    https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 && \
    sleep 2 && \
    chmod +x /workspace/cloud_sql_proxy"
  waitFor: ['-']
# using the secrets from above, build and run the test suite
- name: 'python:3.8.3-slim'
  id: 'Run Unit Tests'
  entrypoint: '/bin/bash'
  args:
  - "-c"
  - "\
    (/workspace/cloud_sql_proxy -dir=/workspace/${_SQL_PROXY_PATH} -instances=${_INSTANCE_NAME1} & sleep 2) && \
    apt-get update && apt-get install -y --no-install-recommends \
    build-essential libssl-dev libffi-dev libpq-dev python3-dev wget && \
    rm -rf /var/lib/apt/lists/* && \
    export ${_VAR1}=$(cat /workspace/${_VAR1}) && \
    export INSTANCE_NAME1=${_INSTANCE_NAME1} && \
    export SQL_PROXY_PATH=/workspace/${_SQL_PROXY_PATH} && \
    pip install -r dev-requirements.txt && \
    pip install -r requirements.txt && \
    python -m pytest -v && \
    rm -rf /workspace/${_SQL_PROXY_PATH} && \
    echo 'Removed Cloud SQL Proxy'"
  waitFor: ['Install Cloud SQL Proxy', 'Install Berglas']
  dir: '${_APP_DIR}'
# Using the application/Dockerfile build instructions, build the app image
- name: 'gcr.io/cloud-builders/docker'
  id: 'Build Application Image'
  args: ['build',
         '-t',
         'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
         '.',
        ]
  dir: '${_APP_DIR}'
# Push the application image
- name: 'gcr.io/cloud-builders/docker'
  id: 'Push Application Image'
  args: ['push',
         'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
        ]
# Deploy the application image to Cloud Run
# populating secrets via Berglas exec ENTRYPOINT for gunicorn
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'Deploy Application Image'
  args: ['beta',
         'run',
         'deploy',
         '${_IMAGE_NAME}',
         '--image',
         'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
         '--region',
         'us-central1',
         '--platform',
         'managed',
         '--quiet',
         '--add-cloudsql-instances',
         '${_INSTANCE_NAME1}',
         '--set-env-vars',
         'SQL_PROXY_PATH=/${_SQL_PROXY_PATH},INSTANCE_NAME1=${_INSTANCE_NAME1},${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}',
         '--allow-unauthenticated',
         '--memory',
         '512Mi'
        ]
# Use the defaults below which can be changed at the command line
substitutions:
  _IMAGE_NAME: your-image-name
  _BUCKET_ID_SECRETS: your-bucket-for-berglas-secrets
  _INSTANCE_NAME1: project-name:location:dbname
  _SQL_PROXY_PATH: cloudsql
  _VAR1: CMCREDENTIALS
# The images we'll push here
images: [
  'gcr.io/$PROJECT_ID/${_IMAGE_NAME}'
]
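Since the substitutions above are meant to be overridden at the command line, a manual run of this pipeline could look something like the sketch below (the values, and the _APP_DIR path, are placeholders):
gcloud builds submit . \
    --config cloudbuild.yaml \
    --substitutions _APP_DIR=application,_IMAGE_NAME=my-app,_BUCKET_ID_SECRETS=my-berglas-bucket,_INSTANCE_NAME1=my-project:us-central1:my-db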
Dockerfile utilized
The Dockerfile below builds a Python app from the source contained inside the directory <myrepo>/application. It sits at application/Dockerfile.
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8.3-slim
# Add build arguments
# Copy local code to the container image.
ENV APP_HOME /application
WORKDIR $APP_HOME
# Install production dependencies.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    python3-dev \
    libssl-dev \
    libffi-dev \
    && rm -rf /var/lib/apt/lists/*
# Copy the application source
COPY . ./
# Install Python dependencies
RUN pip install -r requirements.txt --no-cache-dir
# Grab Berglas from Google Cloud Registry
COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
ENTRYPOINT exec /bin/berglas exec -- gunicorn --bind :$PORT --workers 1 --threads 8 app:app
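For completeness, the berglas:// references in the build and deploy steps assume the CMCREDENTIALS secret already exists in the bucket and that the runtime service account may read it. Creating it looks roughly like the following (a hedged sketch; the bucket, value and KMS key names are placeholders):
# encrypt and store the secret that _VAR1 / CMCREDENTIALS points at
berglas create my-berglas-bucket/CMCREDENTIALS "the-secret-value" \
    --key projects/my-project/locations/global/keyRings/berglas/cryptoKeys/berglas-key

# allow the Cloud Run runtime service account to read it
berglas grant my-berglas-bucket/CMCREDENTIALS \
    --member serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com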
Hope this helps someone, though possibly too specific (Python + Berglas) for the original OP.

Related

pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed, host=registry-1.docker.io

My Docker container works perfectly locally, using the default context and the command "docker compose up". I'm trying to run my Docker image on ECS in AWS following this guide - https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I've followed all of the steps in the guide. After setting the context to my new context (I've tried all 3 options) and running "docker compose up", I get the above error, here again in detail:
INFO trying next host error="pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" host=registry-1.docker.io
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I've also set the user and added all of the permissions I can think of - image below
I've looked everywhere and I can't find traction, please help :)
The image is located on AWS ECS and Docker hub - I've tried both
Here is my Docker file:
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
# RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user

Google Cloud Run inaccessible even on successful build

My Google Cloud Run image was built successfully using Cloud Build via a GitHub repo. I don't see anything concerning in the build logs.
This is my Dockerfile:
# Use the official lightweight Node.js image.
# https://hub.docker.com/_/node
FROM node:17-slim
RUN set -ex; \
apt-get -y update; \
apt-get -y install ghostscript; \
apt-get -y install pngquant; \
rm -rf /var/lib/apt/lists/*
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install dependencies.
# If you add a package-lock.json speed your build by switching to 'npm ci'.
RUN npm ci --only=production
# RUN npm install --production
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD [ "npm", "start" ]
But when I try to access the service through its public URL I see:
Oops, something went wrong…
Continuous deployment has been set up, but your repository has failed to build and deploy.
This revision is a placeholder until your code successfully builds and deploys to the Cloud Run service myapi in asia-east1 of the GCP project myproject.
What's next?
From the Cloud Run service page, click "Build History".
Examine your build logs to understand why it failed.
Fix the issue in your code or Dockerfile (if any).
Commit and push the change to your repository.
It appears that the node app did not run. What am I doing wrong?
Turns out that cloudbuild.yaml is not really optional. Adding the file with the following resolved the issue:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA", "."]
  # Push the container image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"]
  # Deploy container image to Cloud Run
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "myapi"
      - "--image"
      - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"
      - "--region"
      - "asia-east1"
images:
  - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"

How to deploy to AWS Beanstalk with GitLab CI

How to deploy a Node app on AWS Elastic Beanstalk with Docker and GitLab CI.
I've created a simple Node application and Dockerized it.
What I'm trying to do is deploy my application using gitlab ci.
This is what I have so far:
image: docker:git
services:
  - docker:dind
stages:
  - build
  - release
  - release-prod
variables:
  CI_REGISTRY: registry.gitlab.com
  CONTAINER_TEST_IMAGE: registry.gitlab.com/testapp/routing:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/testapp/routing:latest
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE -f Dockerfile.prod .
    - docker push $CONTAINER_TEST_IMAGE
release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master
release-prod:
  stage: release-prod
  script:
  when: manual
I'm stuck on release-prod stage. I'm just not sure how I can deploy the app to AWS Beanstalk.
The Docker images have already been created and stored in the GitLab registry. All I want to do is instruct AWS Beanstalk to download the Docker images from the GitLab registry and start the application.
I also have a Dockerrun.aws.json which defines the services.
Your Dockerrun.aws.json file is what Beanstalk uses as the final say in what is deployed.
The option I found to work for us was to make a custom docker image with the eb cli installed so we can run eb deploy... from the gitlab-ci.yml file.
This requires AWS permissions so the runner can access the AWS service, so a user or set of permissions comes into play; but that would be true however it's set up.
In the GitLab project's CI/CD settings, store the AWS user keys (ideally it's set up to use an IAM role instead, but user keys will work; temporary credentials might be the best option here, but I'm not sure how that works).
We use a custom EC2 instance as our runner to run the pipeline, so I'm not sure about shared runners; we had a concern about passing AWS user credentials to a shared runner pipeline.
build stage:
build and push the docker image to our ECR repository (or wherever fits your use case)
deploy stage:
use a custom image stored in GitLab that has the eb cli preinstalled, then run eb deploy env-name
This is the dockerfile we use for our PHP project. Some of the installs aren't necessary for your case... This could also be improved by adding a USER and package versions. This will create a docker image that has the eb cli installed though.
FROM node:12
RUN apt-get update && apt-get -y --allow-unauthenticated install apt-transport-https ca-certificates curl gnupg2 software-properties-common ruby-full \
&& add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update && apt-get -y --allow-unauthenticated install docker-ce \
&& apt-get -y install build-essential zlib1g-dev libssl-dev libncurses-dev libffi-dev libsqlite3-dev libreadline-dev libbz2-dev python-pip python3-pip
RUN git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git \
&& ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
RUN python3 --version && apt-get update && apt-get -y install python3-pip \
&& pip3 install awscli boto3 botocore && pip3 install boto3 botocore --upgrade
Example gitlab-ci.yml setup
release-prod:
  image: registry.gitlab.com/your-acct/project/custom-image
  stage: release-prod
  script:
    - service docker start
    - echo 'export PATH="/root/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
    - echo 'export PATH=/root/.pyenv/versions/3.7.2/bin:$PATH' >> /root/.bash_profile && source /root/.bash_profile
    - eb deploy your-environment
  when: manual
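One assumption baked into eb deploy your-environment is that the repository has already been initialised for the EB CLI (i.e. an .elasticbeanstalk/config.yml pointing at the right application is committed). If it isn't, something along these lines has to happen first (a sketch; the application name, platform and region are placeholders):
eb init my-application --platform docker --region us-east-1
eb deploy your-environment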
You could also add the echo commands to the custom GitLab image so that all you need to run is eb deploy...
Hope this helps a little.
Although there are a couple of different ways to achieve this, I finally found a proper solution for my use case, which I have documented here: https://medium.com/voices-of-plusdental/gitlab-ci-deployment-for-php-applications-to-aws-elastic-beanstalk-automated-qa-test-environments-253ca4932d5b. Using eb deploy was the easiest and shortest version, and it also lets me customize the instances in any way I want.

AWS Cloudwatch Agent in a docker container

I am trying to set up Amazon Cloudwatch Agent to my docker as a container. This is an OnPremise installation so it's running locally, not inside AWS Kubernetes or anything of the sorts.
I've set up a basic Dockerfile, an agent.json and an .aws/ folder for credentials, and I use docker-compose build to set it up and then launch it, but I keep running into problems because the container does not contain or run systemctl, so I cannot start the service using the command from AWS's own documentation:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
This will fail on an error when I try to run the container:
cloudwatch_1 | /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl: line 262: systemctl: command not found
cloudwatch_1 | unknown init system
I've tried to run the /start-amazon-cloudwatch-agent inside /bin as well, but no luck. No documentation on this.
Basically the issue is how can I run this as a service or a process in the foreground? Anyone have any clues? Otherwise the container won't stay up. Below is my code:
dockerfile
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
CMD /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
agent.json
{
  "agent": {
    "metrics_collection_interval": 60,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
    "debug": true
  }
}
.aws/ folder contains config and credentials, but I never got as far for the agent to actually try and make a connection.
Just use the official image (docker pull amazon/cloudwatch-agent); it will handle all of this for you.
If you insist on using your own, try the following:
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json
ENV RUN_IN_CONTAINER=True
ENTRYPOINT ["/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"]
Use the official AWS Docker image; here is an example docker-compose file:
version: "3.8"
services:
agent:
image: amazon/cloudwatch-agent:1.247350.0b251814
volumes:
- ./config/log-collect.json:/opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json # agent config
- ./aws:/root/.aws # required for authentication
- ./log:/log # sample log
- ./etc:/opt/aws/amazon-cloudwatch-agent/etc # for debugging the config of AWS of container
From the config above, only the first two volume mounts are required; the third and fourth are for debugging purposes.
If you are interested in learning what each volume does, you can read more at https://medium.com/@gusdecool/setup-aws-cloudwatch-agent-on-premise-server-part-1-31700e81ab8

Deploy Docker Containers from Docker Cloud

I'm new to Docker and am trying to learn more about best practices for deploying Dockerized images. I've built some images on my development host using the Dockerfile and docker-compose.yml below.
After building the images, I ssh'd to my production server, an Amazon Linux flavored T2.micro instance on AWS's EC2 service. There I installed docker and docker-compose, then tried to build my images, but ran out of RAM. I therefore published the images I had built on my local host to Docker Cloud, and I now wish to deploy those images from Docker Cloud on the AWS instance.
How can I achieve this? I'd be very grateful for any help others can offer!
Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
wget \
tar \
git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
cd "/tmp" && \
wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
rm -rf "/tmp/*"
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
g++ \
gcc \
make \
openssl-dev \
python3-dev \
python \
py-pip \
nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
npm install --no-optional && \
npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
docker-compose.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
To solve this problem, I followed advice from StackOverflow user @MazelTov and built the containers on my local OSX development machine, then published the images to Docker Cloud, then pulled those images down onto my production server (AWS EC2) and ran them there.
Install Dependencies
I'll try and outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on your development and production machines. I used the gui installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable, so your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username though).
Next login to your account from your developer machine:
docker login
Once you're logged in, list your running containers and the images they're using:
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# change the permissions on the source
sudo chmod +x /usr/local/bin/docker-compose
Log out, then log back in to update your user's groups. Then start a screen and run the server: screen. Once the screen starts, you should be able to add a new docker-compose config file that specifies the path to your deployed images. For example, I needed to fetch the let-them-speak-web container housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    image: 'yaledhlab/let-them-speak-web'
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
Then the production compose file can be run with: docker-compose -f production.yml up. Finally, ssh in with another terminal, and detach the screen with screen -D.
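If the repositories on Docker Cloud / Docker Hub are private, the EC2 host also needs to be logged in before compose can pull the images (a small sketch):
docker login
docker-compose -f production.yml pull
docker-compose -f production.yml up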
Yeah, that's true. Docker Cloud uses Docker Hub as its native registry for storing both public and private repositories. Once you push your images to Docker Hub, they are available in Docker Cloud.
Pulling images from Docker hub is the opposite of pushing them. This works for both private and public repositories.
To download your images locally, I always export docker username to shell session:
# export DOCKER_ID_USER="username"
In fact, I have this in my .bashrc.
Replacing the value of DOCKER_ID_USER with your Docker Cloud username.
Then Log in to Docker Cloud using the docker login command.
$ docker login
This logs you in using your Docker ID, which is shared between Docker Hub and Docker Cloud.
You can now run docker pull command to get your images downloaded locally.
$ docker pull image:tag
This is applicable to any Cloud Platform, not specific to AWS.
As you're new to Docker, here are my recommended Docker guides, covering Docker vs VMs and advanced topics like working with Docker Swarm and Kubernetes.