I am developing a microservice-based application using Spring Boot, with Angular 7 for the front end. What I need to do is deploy it to Docker, which I have installed on my AWS EC2 instance, using Travis.
What I have done so far is create a .travis.yml file:
language: java
jdk: oraclejdk8
services:
  - mysql
  - rabbitmq
  - redis-server
before_install:
  - mysql -e 'CREATE DATABASE IF NOT EXISTS mydb;'
image: my-service/aws-cli-docker
variables:
  AWS_ACCESS_KEY_ID: "##########"
  AWS_SECRET_ACCESS_KEY: "##########"
deploy_stage:
  stage: deploy
  environment: Production
  only:
    - master
  script:
    - aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids "i-########" --parameters '{"commands":["sudo docker-compose -f /home/ubuntu/docker-compose.yml up -d --no-deps --build"],"executionTimeout":["3600"]}' --timeout-seconds 600 --region us-east-2
My Dockerfile is in the root of the service's source and is as follows:
FROM java:8
VOLUME /tmp
ADD ./target/myService-0.0.1-SNAPSHOT.jar my-service.jar
EXPOSE 8081
ENTRYPOINT [ "sh", "-c", "java -Xms64m -Xmx512m -XX:+UseTLAB -XX:+ResizeTLAB -XX:ReservedCodeCacheSize=128m -XX:+UseCodeCacheFlushing -jar /my-service.jar" ]
And my docker-compose file is at /home/ubuntu on my EC2 instance, as follows:
services:
  vehicle-service:
    image: my-service/aws-cli-docker
    ports:
      - "8081:8081"
My Travis build works and the tests run, but there is no deployment and no errors about the Docker image. Can someone figure out what I'm missing?
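For what it's worth, `image:`, `variables:`, and `deploy_stage:` are GitLab CI keywords; Travis CI ignores top-level keys it does not recognize, which would explain a green build with no deploy attempt. A minimal sketch of the same deploy step in Travis's own build-stages syntax (assuming the AWS keys are set as encrypted environment variables in the Travis repository settings; the instance ID stays masked as in the original):

```yaml
# Sketch: Travis-native equivalent of the deploy_stage above
jobs:
  include:
    - stage: deploy
      if: branch = master
      script:
        - aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids "i-########" --parameters '{"commands":["sudo docker-compose -f /home/ubuntu/docker-compose.yml up -d --no-deps --build"],"executionTimeout":["3600"]}' --timeout-seconds 600 --region us-east-2
```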
I'm getting the following error when I try to run docker compose up to deploy my infrastructure to AWS using Docker's ECS integration. Note that I'm running this on Pop!_OS 21.10, which is based on Ubuntu.
NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Things I've tried, based on an exhaustive search of SO and other sites:
Verified that my ~/.aws/config and ~/.aws/credentials files are formatted correctly, are in the proper place, and have the correct permissions
Verified that the aws cli works fine
Verified that AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are all set correctly
Tried copying the config and credentials to /root/.aws
Tried setting AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION in the root user's environment
Created /etc/systemd/system/docker.service.d/aws-credentials.conf and populated it with:
[Service]
Environment="AWS_ACCESS_KEY_ID=********************"
Environment="AWS_SECRET_ACCESS_KEY=****************************************"
Ran docker -l debug compose up (the only extra information it provides is DEBUG deploying on AWS with region="us-east-1")
I'm running out of options. If anyone has any other ideas to try, I'd love to hear it. Thanks!
Update: I've also now tried the following, with no luck:
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials" in /etc/systemd/system/docker.service.d/override.conf
After remembering my IAM account has MFA enabled, generated a token and added Environment="AWS_SESSION_TOKEN=..." to override.conf
Also to note - each time after I've added/modified files under /etc/systemd/system/docker.service.d/ I've run:
sudo systemctl daemon-reload
sudo systemctl restart docker
Edit:
Here's one of the Dockerfiles (both the scraper and scheduler use an identical Dockerfile):
FROM denoland/deno:alpine
WORKDIR /app
USER deno
COPY deps.ts .
RUN deno cache --unstable --no-check deps.ts
COPY . .
RUN deno cache --unstable --no-check mod.ts
RUN mkdir -p /var/tmp/log
CMD ["run", "--unstable", "--allow-all", "--no-check", "mod.ts"]
Here's my docker-compose (some bits redacted):
version: '3'
services:
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana:/var/lib/grafana
    deploy:
      replicas: 1
  scheduler:
    image: scheduler
    x-aws-pull-credentials: "arn..."
    container_name: scheduler
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
  scraper:
    image: scraper
    x-aws-pull-credentials: "arn..."
    container_name: scraper
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
volumes:
  grafana:
Have you attempted to use the Amazon ECS Local Container Endpoints tool that AWS Labs provides? It allows you to create an override file for your docker-compose configurations, and it will simulate the ECS endpoints and IAM roles you would be using in AWS.
This is done using the local AWS credentials you have on your workstation. More information is available on the AWS Blog.
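For reference, a minimal sketch of such an override file, modeled on the amazon-ecs-local-container-endpoints README (the network address and mount paths below are the tool's documented conventions, not something from the original post):

```yaml
# docker-compose.override.yml (sketch): run the local endpoints container
# and hand it the AWS credentials from your workstation
version: "3"
networks:
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
services:
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run:/var/run
      - $HOME/.aws/:/home/.aws/
    environment:
      AWS_PROFILE: default
    networks:
      credentials_network:
        # The SDK credential chain looks for this link-local address
        ipv4_address: "169.254.170.2"
```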
I've already created a Bitbucket pipeline to build an application image and upload it to AWS ECR.
# Bitbucket pipeline to build an image and upload it to AWS ECR
image: atlassian/default-image:2

pipelines:
  branches:
    master:
      - step:
          caches:
            - docker
          services:
            - docker
          name: Build and Push
          deployment: Production
          script:
            - echo "Build Docker and Push to Registry"
            - docker build -t $AWS_ECR_REPOSITORY .
            - docker inspect $AWS_ECR_REPOSITORY
            - pipe: atlassian/aws-ecr-push-image:1.4.1
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                IMAGE_NAME: $AWS_ECR_REPOSITORY
Now I want to create another Bitbucket pipeline that takes that image from AWS ECR and deploys the application to AWS Elastic Beanstalk.
I'm not sure how to do that.
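One hedged approach (a sketch, not a verified pipeline): Elastic Beanstalk's Docker platform can pull an ECR image via a Dockerrun.aws.json that names the image, and Atlassian provides an aws-elasticbeanstalk-deploy pipe to upload the bundle. The application name, environment name, and pipe version below are all placeholders:

```yaml
# Sketch of a second step appended to the master pipeline
- step:
    name: Deploy to Elastic Beanstalk
    script:
      # Dockerrun.aws.json should reference the image pushed to ECR above
      - zip application.zip Dockerrun.aws.json
      - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2  # placeholder version
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          APPLICATION_NAME: "my-application"   # placeholder
          ENVIRONMENT_NAME: "my-environment"   # placeholder
          ZIP_FILE: "application.zip"
```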
I am trying to create a basic GitLab CI/CD pipeline that will deploy my Node.js-based backend to an AWS kops-based Kubernetes cluster. For that I have created a .gitlab-ci.yml file; however, I am confused about how to get the Kubernetes cluster IP address so I can use it in .gitlab-ci.yml as kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS",
where I want CLUSTER_ADDRESS to be configured with GitLab in .gitlab-ci.yml.
Any help would be appreciated.
variables:
  DOCKER_DRIVER: overlay2
  REGISTRY: $CI_REGISTRY
  IMAGE_TAG: $CI_REGISTRY_IMAGE
  K8S_DEPLOYMENT_NAME: deployment/$CI_PROJECT_NAME
  CONTAINER_NAME: $CI_PROJECT_NAME

stages:
  - build
  - build-docker
  - deploy

build-docker:
  image: docker:latest
  stage: build-docker
  services:
    - docker:dind
  tags:
    - privileged
  only:
    - Test
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY
    - docker build --network host -t $IMAGE_NAME:$IMAGE_TAG -t $IMAGE_NAME:latest .
    - docker push $IMAGE_NAME:$IMAGE_TAG
    - docker push $IMAGE_NAME:latest

deploy-k8s-(stage):
  image:
    name: kubectl:latest
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build-docker
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=default
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
If your current kubeconfig context is set to the cluster in question, you can run the following to get the cluster address you want:
kubectl config view --minify --raw \
--output 'jsonpath={.clusters[0].cluster.server}'
You can add --context <cluster name> if not.
In most cases this will be https://api.<cluster name>.
My project is a Flask project using docker-compose.
The source code is in GitLab.
I want to auto-deploy to ECS with GitLab CI.
Also, the Docker images are in ECR.
But I faced the following error.
Subnet created: subnet-0ffc4936b92c
Subnet created: subnet-0177c849eeca
Cluster creation succeeded.
WARN[0000] Skipping unsupported YAML option for service... option name=build service name=proxy
WARN[0000] Skipping unsupported YAML option for service... option name=container_name service name=proxy
WARN[0000] Skipping unsupported YAML option for service... option name=restart service name=proxy
WARN[0000] Skipping unsupported YAML option for service... option name=build service name=api
WARN[0000] Skipping unsupported YAML option for service... option name=container_name service name=api
WARN[0000] Skipping unsupported YAML option for service... option name=restart service name=api
WARN[0000] Skipping unsupported YAML option for service... option name=build service name=worker
WARN[0000] Skipping unsupported YAML option for service... option name=container_name service name=worker
WARN[0000] Skipping unsupported YAML option for service... option name=restart service name=worker
INFO[0001] Using ECS task definition TaskDefinition="backend:12"
WARN[0001] No log groups to create; no containers use 'awslogs'
ERRO[0001] Error running tasks error="InvalidParameterException: No Container Instances were found in your cluster." task definition=0xc0005a5ae0
FATA[0001] InvalidParameterException: No Container Instances were found in your cluster.
docker-compose.yml
version: "3.0"
services:
  proxy:
    container_name: rs-proxy
    image: ${REPOSITORY_URL}/proxy
    build:
      context: proxy/.
      dockerfile: Dockerfile
    ports:
      - 80:80
    restart: on-failure
  api:
    container_name: rs-api
    image: ${REPOSITORY_URL}/api
    build:
      context: api/.
      dockerfile: Dockerfile.prod
    restart: on-failure
    volumes:
      - ./api/migrations:/app/migrations
  worker:
    container_name: rs-worker
    image: ${REPOSITORY_URL}/worker
    build:
      context: .
      dockerfile: ./worker/Dockerfile
    restart: on-failure
.gitlab-ci.yml
image: tiangolo/docker-with-compose

variables:
  PROJECT_NAME: test-project
  CONFIG_NAME: $PROJECT_NAME
  PROFILE_NAME: $PROJECT_NAME-profile
  AWS_ECR_URL: $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  REPOSITORY_URL: $AWS_ECR_URL/$PROJECT_NAME

before_script:
  - export REPOSITORY_URL=$REPOSITORY_URL
  - apk add --no-cache curl jq python3 py-pip
  - apk add --update curl
  - pip install awscli
  - curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
  - chmod +x /usr/local/bin/ecs-cli
  - echo "Logging in AWS..."
  - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ECR_URL

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - echo "Building image..."
    - docker-compose -f docker-compose.yml build
    - echo "Pushing image..."
    - docker push ${REPOSITORY_URL}/proxy:latest
    - docker push ${REPOSITORY_URL}/api:latest
    - docker push ${REPOSITORY_URL}/worker:latest
  only:
    - master

deploy:
  stage: deploy
  script:
    - echo "Configuring AWS ECS..."
    - ecs-cli configure --cluster $CONFIG_NAME --default-launch-type EC2 --config-name $CONFIG_NAME --region $AWS_DEFAULT_REGION
    - ecs-cli configure profile --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY --profile-name $PROFILE_NAME
    - echo "Updating the service..."
    - ecs-cli up --capability-iam --size 1 --instance-type t2.medium --cluster-config $CONFIG_NAME --ecs-profile $PROFILE_NAME --force
    - ecs-cli compose --file ./docker-compose.prod.yml up --create-log-groups --cluster-config $CONFIG_NAME --ecs-profile $PROFILE_NAME --force-update
  only:
    - master
ecs-params.yml
version: 1
task_definition:
  task_execution_role:
  services:
    proxy:
      essential: true
    api:
      essential: true
    worker:
      essential: true
project structure
I've attached configuration files.
I think I've missed some AWS configuration, but I can't find the mistake.
How can I fix it?
I set up a Docker registry (ECR) on AWS. From my GitLab repository I'd like to set up CI to automatically create images and push them to the registry.
I was following a tutorial to set everything up, but when running the example I receive the error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My yml file looks like this
image: docker:latest

variables:
  REPOSITORY_URL: <aws-url>/<registry>/outsite-slackbot

services:
  - docker:dind

before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli

stages:
  - build

build:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region eu-west-1)
There is no problem with the Dockerfile; the issue is that you can't connect to the Docker daemon. Check these steps:
Are you logged in as a root? (sudo su or sudo -i)
Start Docker service (service docker start)
Then follow the tutorial :)
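Note that on a GitLab shared runner there is no host daemon to start with `service docker start`; in that setup the job usually has to talk to the `docker:dind` service instead. A sketch (assuming the shared-runner case and GitLab's documented docker-in-docker variables):

```yaml
build:
  stage: build
  variables:
    # Point the docker CLI at the dind service container
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - $(aws ecr get-login --no-include-email --region eu-west-1)
```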