This is a bit complicated, but I will try to explain it clearly; any help is much appreciated.
I use Azure DevOps to do deployments to EKS using Helm, and everything is working fine. I now have a requirement to add a certificate to the pod.
For this I have a .der file, which I need to copy to the pods (3 replicas) and import with keytool, putting the resulting keystore in an appropriate location before my application starts.
My setup: I have a Dockerfile that calls a shell script, and I do the helm install using a deployment.yml file.
I tried using a ConfigMap to mount the .der file that is used for the import, and then I execute some Unix commands to import the certificate, but the Unix commands are not working. Can someone help here?
Dockerfile
FROM amazonlinux:2.0.20181114
RUN yum install -y java-1.8.0-openjdk-headless
ARG JAR_FILE='**/*.jar'
ADD ${JAR_FILE} car_service.jar
ADD entrypoint.sh .
RUN chmod +x /entrypoint.sh
# split ENTRYPOINT wrapper from main CMD (trailing comments are not allowed on Dockerfile instructions)
ENTRYPOINT ["/entrypoint.sh"]
CMD ["java", "-jar", "/car_service.jar"]
entrypoint.sh
#!/bin/sh
# entrypoint.sh
# Check: $env_name must be set
if [ -z "$env_name" ]; then
echo '$env_name is not set; stopping' >&2
exit 1
fi
# Install aws client
yum -y install curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
yum -y install unzip
unzip awscliv2.zip
./aws/install
# Retrieve secrets from Secrets Manager
export KEYCLOAKURL=`aws secretsmanager get-secret-value --secret-id myathlon/$env_name/KEYCLOAKURL --query SecretString --output text`
cd /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin
keytool -noprompt -importcert -alias msfot-$(date +%Y%m%d-%H%M) -file /tmp/msfot.der -keystore msfot.jks -storepass msfotooling
mkdir /data/keycloak/
cp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin/msfot.jks /data/keycloak/
cd /
# Run the main container CMD
exec "$@"
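If the application is expected to pick up the imported certificate as its JVM trust store (an assumption; the question does not say how the app consumes /data/keycloak/msfot.jks), the main CMD would also have to point the JVM at that keystore, for example:
# hypothetical CMD wiring the app to the generated keystore
CMD ["java", "-Djavax.net.ssl.trustStore=/data/keycloak/msfot.jks", "-Djavax.net.ssl.trustStorePassword=msfotooling", "-jar", "/car_service.jar"]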
myconfigmap
kubectl create configmap msfot1 --from-file=msfot.der
my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "helm-chart.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
helm.sh/chart: {{ include "helm-chart.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
date: "{{ now | unixEpoch }}"
spec:
volumes:
- name: msfot1
configMap:
name: msfot1
items:
- key: msfot.der
path: msfot.der
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- name: msfot1
mountPath: /tmp/msfot.der
subPath: msfot.der
ports:
- name: http
containerPort: 8080
protocol: TCP
env:
- name: env_name
value: {{ .Values.environmentName }}
- name: SPRING_PROFILES_ACTIVE
value: "{{ .Values.profile }}"
my values.yml file is
replicaCount: 3
#pass repository and targetPort values during runtime
image:
repository:
tag: "latest"
pullPolicy: Always
service:
type: ClusterIP
port: 80
targetPort:
profile: "aws"
environmentName: dev
I have 2 questions here:
1. In my entrypoint.sh the keytool, mkdir, cp and cd commands are not getting executed (so the certificate is not getting added to the keystore).
2. As you can see, this setup is shared by all environments, since I use the same deployment.yml file but a different values.yml file per environment. I want this certificate import to happen only in acc and prod, not in dev and test.
Is there any other, easier method of doing this rather than the configmap/deployment.yml approach?
Please advise.
Thanks
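For the second question, one way to gate the import per environment (a sketch; importCertificate is a hypothetical values key, not part of the chart above) is to set a flag only in the acc and prod values.yml files and wrap the ConfigMap volume and mount in the template:
# values.yml for acc and prod only
importCertificate: true
# deployment.yml
{{- if .Values.importCertificate }}
volumes:
  - name: msfot1
    configMap:
      name: msfot1
      items:
        - key: msfot.der
          path: msfot.der
{{- end }}
and likewise around the volumeMounts entry, while entrypoint.sh skips the import when the file is not mounted:
# import only when the ConfigMap was mounted (acc/prod)
if [ -f /tmp/msfot.der ]; then
  keytool -noprompt -importcert -alias msfot -file /tmp/msfot.der -keystore msfot.jks -storepass msfotooling
fi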
I am using the GitHub Action "Deploy to Amazon ECS" to create a Docker container from a Node.js backend and deploy it on ECS.
During deployment, I receive the following error:
Fill in the new image ID in the Amazon ECS task definition
Run aws-actions/amazon-ecs-render-task-definition@v1
Error: /home/runner/work/project-app-strapi/project-app-strapi/task-definition.json: Unexpected token � in JSON at position 0
The task-definition.json was generated by the following command (as I am not very experienced with the aws ecs CLI and prefer to create the infrastructure using the AWS Console):
aws ecs describe-task-definition --task-definition "arn:aws:ecs:eu-west-1:076457945931:task-definition/project-strapi:2" --profile project > task-definition.json
I also checked the file, and it is valid JSON that doesn't contain any harmful hidden characters. It looks like this:
{
"taskDefinition": {
"taskDefinitionArn": "arn:aws:ecs:eu-west-1:076457945931:task-definition/project-strapi:2",
"containerDefinitions": [{
"name": "project-app",
"image": "076457945931.dkr.ecr.eu-west-1.amazonaws.com/company/project-strapi",
"cpu": 0,
"portMappings": [{
"containerPort": 1337,
"hostPort": 1337,
"protocol": "tcp"
}],
"essential": true,
... other fields, I don't believe they are needed
}
The workflow file is the same as the default aws.yml for this GitHub Action; no changes were made besides filling in the variables:
name: Deploy to Amazon ECS
on:
push:
branches: [ "main" ]
env:
AWS_REGION: eu-west-1 # set this to your preferred AWS region, e.g. us-west-1
ECR_REPOSITORY: company/project-strapi # set this to your Amazon ECR repository name
ECS_SERVICE: project-strapi # set this to your Amazon ECS service name
ECS_CLUSTER: project-strapi-app # set this to your Amazon ECS cluster name
ECS_TASK_DEFINITION: task-definition.json # set this to the path to your Amazon ECS task definition
# file, e.g. .aws/task-definition.json
CONTAINER_NAME: project-app # set this to the name of the container in the
# containerDefinitions section of your task definition
permissions:
contents: read
jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
environment: production
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v1
- name: Build, tag, and push image to Amazon ECR
id: build-image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
IMAGE_TAG: ${{ github.sha }}
run: |
# Build a docker container and
# push it to ECR so that it can
# be deployed to ECS.
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
- name: Fill in the new image ID in the Amazon ECS task definition
id: task-def
uses: aws-actions/amazon-ecs-render-task-definition@v1
with:
task-definition: ${{ env.ECS_TASK_DEFINITION }}
container-name: ${{ env.CONTAINER_NAME }}
image: ${{ steps.build-image.outputs.image }}
- name: Deploy Amazon ECS task definition
uses: aws-actions/amazon-ecs-deploy-task-definition@v1
with:
task-definition: ${{ steps.task-def.outputs.task-definition }}
service: ${{ env.ECS_SERVICE }}
cluster: ${{ env.ECS_CLUSTER }}
wait-for-service-stability: true
I tried several things, specifically various changes to the formatting of the JSON and changing the directory of the file, but the error remains.
First download the task definition file in the workflow, then update the image in the task definition and deploy it to the ECS service; then you won't get this issue.
- name: Download task definition
run: |
aws ecs describe-task-definition --task-definition **your task definition name** --query taskDefinition > taskdefinition.json
- name: new image in ECS taskdefinition
id: demo
uses: aws-actions/amazon-ecs-render-task-definition@v1
with:
task-definition: taskdefinition.json
container-name: **your container name**
image: ${{ steps.check_files.outputs.**image** }}
- name: updating task-definition file
run: cat ${{ steps.demo.outputs.task-definition }} > taskdefinition.json
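The rendered task definition can then be deployed with the same deploy action the question already uses, for example (service and cluster names are placeholders):
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.demo.outputs.task-definition }}
    service: **your service name**
    cluster: **your cluster name**
    wait-for-service-stability: true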
I have some services running on the cluster, and the ALB is working fine. I want to configure SSL communication from the ALB/ingress to Keycloak 17.0.1 by creating a self-signed certificate and routing the traffic through port 8443 instead of HTTP (80). Keycloak is built from the Docker image, and the Docker Compose file exposes port 8443. I also need the keystore defined as a Kubernetes PVC within the deployment instead of a Docker volume.
Below is the deployment file:
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "keycloak"
namespace: "test"
spec:
volumes:
- name: keycloak-pv-volume
persistentVolumeClaim:
claimName: keycloak-pv-claim
spec:
selector:
matchLabels:
app: "keycloak"
replicas: 3
strategy:
type: "RollingUpdate"
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
app: "keycloak"
spec:
containers:
-
name: "keycloak"
image: "quay.io/keycloak/keycloak:17.0.1"
imagePullPolicy: "Always"
livenessProbe:
httpGet:
path: /realms/master
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
readinessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 300
periodSeconds: 30
env:
-
name: "KEYCLOAK_USER"
value: "admin"
-
name: "KEYCLOAK_PASSWORD"
value: "admin"
-
name: "PROXY_ADDRESS_FORWARDING"
value: "true"
-
name: HTTPS_PROXY
value: "https://engineering-exodos*********:3128"
-
name: KC_HIDE_REALM_IN_ISSUER
value:************
ports:
- name: "http"
containerPort: 8080
- name: "https"
containerPort: 8443
The self-signed certificate is created like below (.groovy):
def secretPatch = 'kc alb-secret-patch.yaml'
sh loadMixins() + """
openssl req -newkey rsa:4096 \
-x509 \
-sha256 \
-days 395 \
-nodes \
-out keycloak_alb.crt \
-keyout keycloak_alb.key \
-subj "/C=US/ST=MN/L=MN/O=Security/OU=IT Department/CN=www.gateway.com"
EXISTS=$(kubectl -n istio-system get secret --ignore-not-found keycloak_alb-secret)
if [ -z "$EXISTS" ]; then
kubectl -n istio-system create secret tls keycloak_alb-secret --key="keycloak_alb.key" --cert="keycloak_alb.crt"
else
# base64 needs the '-w0' flag to avoid wrapping long lines
echo -e "data:\n tls.key: $(base64 -w0 keycloak_alb.key)\n tls.crt: $(base64 -w0 keycloak_alb.crt)" > ${secretPatch}
kubectl -n istio-system patch secret keycloak_alb-secret -p "$(cat ${secretPatch})"
fi
"""
}
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_CACHE=ispn
ENV KC_DB=postgres
USER keycloak
RUN chmod -R 755 /opt/keycloak \
&& chown -R keycloak:keycloak /opt/keycloak
COPY ./keycloak-benchmark-dataset.jar /opt/keycloak/providers/keycloak-benchmark-dataset.jar
COPY ./ness-event-listener.jar /opt/keycloak/providers/ness-event-listener.jar
# RUN curl -o /opt/keycloak/providers/ness-event-listener-17.0.0.jar https://repo1.uhc.com/artifactory/repo/com/optum/dis/keycloak/ness_event_listener/17.0.0/ness-event-listener-17.0.0.jar
# Changes for hiding realm in issuer claim in access token
COPY ./keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
COPY --from=builder /opt/keycloak/providers /opt/keycloak/providers
COPY --from=builder /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
COPY --chown=keycloak:keycloak cache-ispn-remote.xml /opt/keycloak/conf/cache-ispn-remote.xml
COPY conf /opt/keycloak/conf/
# Elastic APM integration changes
USER root
RUN mkdir -p /opt/elastic/apm
RUN chmod -R 755 /opt/elastic/apm
RUN curl -L https://repo1.uhc.com/artifactory/Thirdparty-Snapshots/com/elastic/apm/agents/java/current/elastic-apm-agent.jar -o /opt/elastic/apm/elastic-apm-agent.jar
ENV ES_AGENT=" -javaagent:/opt/elastic/apm/elastic-apm-agent.jar"
ENV ELASTIC_APM_SERVICE_NAME="AIDE_007"
ENV ELASTIC_APM_SERVER_URL="https://nonprod.uhc.com:443"
ENV ELASTIC_APM_VERIFY_SERVER_CERT="false"
ENV ELASTIC_APM_ENABLED="true"
ENV ELASTIC_APM_LOG_LEVEL="WARN"
ENV ELASTIC_APM_ENVIRONMENT="Test-DIS"
CMD export JAVA_OPTS="$JAVA_OPTS $ES_AGENT"
USER keycloak
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start"]
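Note that the CMD export JAVA_OPTS=... line has no effect: with an exec-form ENTRYPOINT, a shell-form CMD is only passed to kc.sh as an extra argument, so the export never runs. A minimal alternative sketch, assuming this image's kc.sh honors the JAVA_OPTS_APPEND environment variable (worth verifying on 17.0.1):
# append the APM agent to the JVM options picked up by kc.sh
ENV JAVA_OPTS_APPEND="-javaagent:/opt/elastic/apm/elastic-apm-agent.jar"
USER keycloak
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start"]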
Docker Compose:
services:
keycloak:
image: quay.io/keycloak/keycloak:17.0.1
command: start-dev
environment:
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: admin
KC_HTTPS_CERTIFICATE_FILE: /opt/keycloak/conf/tls.crt
KC_HTTPS_CERTIFICATE_KEY_FILE: /opt/keycloak/conf/tls.key
ports:
- 8080:8080
- 8443:8443
volumes:
- ./localhost.crt:/opt/keycloak/conf/tls.crt
- ./localhost.key:/opt/keycloak/conf/tls.key
What is the standard, best-practice way to route the traffic via SSL from the ALB to Keycloak?
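For the ALB-to-pod leg specifically, a common approach with the AWS Load Balancer Controller is to have the ALB re-encrypt to the backend and point the ingress at the 8443 service port. A minimal sketch (the ingress/service names and the listener certificate handling are assumptions, not taken from the setup above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # re-encrypt from the ALB to the pod, which serves HTTPS on 8443
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8443
Keycloak itself still has to serve HTTPS on 8443 (KC_HTTPS_CERTIFICATE_FILE / KC_HTTPS_CERTIFICATE_KEY_FILE, as in the Compose file), and the Service must expose port 8443.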
I'm trying to deploy my Django API onto Google App Engine using GitHub CI/CD, but I'm getting a strange error in my deploy job that doesn't provide any stack trace. My build job with unit tests and code coverage passes.
main.yaml:
name: Python application
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
defaults:
run:
working-directory: src
jobs:
build:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:10.8
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: github_actions
ports:
- 5433:5432
options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Test with Unittest
env:
SECRET_KEY: ${{ secrets.SECRET_KEY }}
DB_NAME: ${{ secrets.DB_NAME }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_HOST: ${{ secrets.DB_HOST }}
DB_PORT: ${{ secrets.DB_PORT }}
DB_ENGINE: ${{ secrets.DB_ENGINE }}
run: |
coverage run manage.py test && coverage report --fail-under=75 && coverage xml
mv coverage.xml ../
- name: Report coverage to Codecov
env:
SECRET_KEY: ${{ secrets.SECRET_KEY }}
DB_NAME: ${{ secrets.DB_NAME }}
DB_USER: ${{ secrets.DB_USER }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
DB_HOST: ${{ secrets.DB_HOST }}
DB_PORT: ${{ secrets.DB_PORT }}
DB_ENGINE: ${{ secrets.DB_ENGINE }}
uses: codecov/codecov-action@v1
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./coverage.xml
directory: ./coverage/reports/
fail_ci_if_error: true
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Deploy to App Engine
id: deploy
uses: google-github-actions/deploy-appengine@v0.2.0
with:
project_id: ${{ secrets.GCP_PROJECT_ID }}
deliverables: app.yaml
credentials: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
version: v1
- name: Test
run: curl "${{ steps.deploy.outputs.url }}"
app.yaml:
runtime: python39
instance_class: B1
service: deploy
basic_scaling:
max_instances: 1
idle_timeout: 10m
Here are the two errors I'm getting:
I also get another strange error in app.yaml, which causes the workflow not to run. From the Google App Engine documentation for this file, I thought we didn't need to include an on trigger. I'm not sure if it's caused by the error in main.yaml.
Is there an easy way to fix this error?
UPDATE: After trying v0.4.0 of the GitHub Action, I get the same error, but I found out that my GOOGLE_APPLICATION_CREDENTIALS are causing the error.
{
  "type": "service_account",
  "project_id": "***",
  "private_key_id": "***",
  "private_key": "-----BEGIN PRIVATE KEY-----***=\n-----END PRIVATE KEY-----\n",
  "client_email": "***@appspot.gserviceaccount.com",
  "client_id": "***",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/***%40appspot.gserviceaccount.com"
}
I replaced all private information with ***, but the JSON is definitely still valid.
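One pattern that tends to sidestep credential-parsing problems is to authenticate with the separate google-github-actions/auth step and pass the key JSON in via a secret, letting the deploy step pick up those credentials (a sketch; the secret names are placeholders):
- name: Authenticate to Google Cloud
  uses: google-github-actions/auth@v1
  with:
    credentials_json: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
- name: Deploy to App Engine
  uses: google-github-actions/deploy-appengine@v1
  with:
    project_id: ${{ secrets.GCP_PROJECT_ID }}
    deliverables: app.yaml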
Apologies if this is a really simple question.
I have 2 applications that can potentially share the same template:
applications:
#
app1:
containerName: app1
replicaCount: 10
logLevel: warn
queue: queue1
#
app2:
containerName: app2
replicaCount: 20
logLevel: info
queue: queue2
...
If I create a single template for both apps, is there a wildcard or variable I can use
that will iterate over both of the apps (i.e. app1 or app2)? E.g. the bit where I've put <SOMETHING_HERE> below:
spec:
env:
- name: LOG_LEVEL
value: "{{ .Values.applications.<SOMETHING_HERE>.logLevel }}"
Currently (which I'm sure is not very efficient) I have two separate templates that each reference their own app, e.g.
app1_template.yaml
{{ .Values.applications.app1.logLevel }}
app2_template.yaml
{{ .Values.applications.app2.logLevel }}
I'm pretty sure this is not the way I'm supposed to do it.
Any help on this would be greatly appreciated.
One of the solutions would be to have one template and multiple values files, one per deployment/environment:
spec:
env:
- name: LOG_LEVEL
value: "{{ .Values.logLevel }}"
values-app1.yaml:
containerName: app1
replicaCount: 10
logLevel: warn
queue: queue1
values-app2.yaml:
containerName: app2
replicaCount: 20
logLevel: info
queue: queue2
Then specify which values file should be used by adding this to the helm command:
APP=app1 # or app2
helm upgrade --install "$APP" . --values ./values-${APP}.yaml
You can also have shared values, let's say in the regular values.yaml, and provide multiple files:
APP=app1
helm upgrade --install "$APP" . --values ./values.yaml --values ./values-${APP}.yaml
You can just use a single values file as you've done, and then set the app name when you run helm:
helm upgrade --install app1 ./charts --set app=app1
and
helm upgrade --install app2 ./charts --set app=app2
Then in your templates use:
spec:
env:
- name: LOG_LEVEL
value: {{ index .Values.applications .Values.app "logLevel" | quote }}
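If the goal is instead to render one set of resources per application from a single template file, a range over .Values.applications is another option (a sketch independent of the two answers above; the image is a placeholder):
{{- range $appName, $app := .Values.applications }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $appName }}
spec:
  replicas: {{ $app.replicaCount }}
  selector:
    matchLabels:
      app: {{ $appName }}
  template:
    metadata:
      labels:
        app: {{ $appName }}
    spec:
      containers:
        - name: {{ $app.containerName }}
          image: my-image:latest   # placeholder image
          env:
            - name: LOG_LEVEL
              value: {{ $app.logLevel | quote }}
            - name: QUEUE
              value: {{ $app.queue | quote }}
{{- end }}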
I'm currently trying to do an automated deployment through GitHub Actions. Below is my current workflow YAML file:
name: Deploy AWS
on: [workflow_dispatch]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: 'Git: Checkout source code'
uses: actions/checkout@v1
- name: '.NET Core: Setup'
uses: actions/setup-dotnet@v1
with:
dotnet-version: '3.0.*'
- name: '.NET Core: Get dependencies'
run: dotnet restore
- name: '.NET Core: Build'
run: dotnet build --configuration Debug --no-restore
- name: 'AWS: Timestamp action'
uses: gerred/actions/current-time@master
id: current-time
- name: 'AWS: String replace action'
uses: frabert/replace-string-action@master
id: format-time
with:
pattern: '[:\.]+'
string: "${{ steps.current-time.outputs.time }}"
replace-with: '-'
flags: 'g'
- name: 'AWS: Generate build archive'
run: (cd ./project.Api/bin/Debug/netcoreapp3.0 && zip -r "../../../../${{ steps.format-time.outputs.replaced }}.zip" . -x '*.git*')
- name: 'AWS: Deploying build'
uses: einaregilsson/beanstalk-deploy@v14
with:
aws_access_key: { my_access_key }
aws_secret_key: { my_secret_key }
application_name: api_test
environment_name: my-api-test
version_label: "v${{ steps.format-time.outputs.replaced }}"
region: ap-southeast-2
deployment_package: "${{ steps.format-time.outputs.replaced }}.zip"
- name: 'AWS: Deployment complete'
run: echo Should be on EB now
The current Elastic Beanstalk environment is set up with a load balancer, which I think is the main reason the deployment is failing. I haven't been able to find a solution on how to deploy to AWS Elastic Beanstalk when the environment contains a load balancer.
I know you have already solved this, but it may help someone in need :-)
I'm new here, so I wasn't able to format this correctly in the answer box; the YAML code starts from "name: dotnet ..." and runs to the end, so indent the YAML accordingly.
name: dotnet -> s3 -> Elastic Beanstalk
on:
workflow_dispatch
#Setting up some environment variables
env:
EB_PACKAGE_S3_BUCKET_NAME : "php-bucket"
EB_APPLICATION_NAME : "dotnet-app"
EB_ENVIRONMENT_NAME : "Dotnetapp-env"
DEPLOY_PACKAGE_NAME : "dotnet-app-${{ github.sha }}.zip"
AWS_REGION_NAME : "af-south-1"
jobs:
build_and_create_Artifact:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v3
- name: Setup .NET Core
uses: actions/setup-dotnet@v1
with:
dotnet-version: 6.0.*
- name: Install dependencies
run: dotnet restore
- name: Build
run: dotnet build --configuration Release --no-restore
- name: Test
run: dotnet test --no-restore --verbosity normal
- name: Publish
run: dotnet publish -c Release -o '${{ github.workspace }}/out'
- name: Zip Package
run: |
cd ${{ github.workspace }}/out
zip -r ${{ env.DEPLOY_PACKAGE_NAME }} *
- name: Upload a Build Artifact
uses: actions/upload-artifact@v3.1.0
with:
name: .Net-artifact
path: ${{ github.workspace }}/out/${{ env.DEPLOY_PACKAGE_NAME }}
- name: "Configure AWS Credentials"
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID}}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION_NAME }}
- name: "Copy artifact to S3"
run: aws s3 cp ${{ github.workspace }}/out/${{ env.DEPLOY_PACKAGE_NAME }} s3://${{ env.EB_PACKAGE_S3_BUCKET_NAME }}/
- name: "Build Successful"
run: echo "CD part completed successfully"
Deploy_Artifact:
needs: build_and_create_Artifact
runs-on: ubuntu-latest
steps:
- name: "Configure AWS Credentials"
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID}}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION_NAME }}
- name: 'AWS: Timestamp action'
uses: gerred/actions/current-time@master
id: current-time
- name: 'AWS: String replace action'
uses: frabert/replace-string-action@master
id: format-time
with:
pattern: '[:\.]+'
string: "${{ steps.current-time.outputs.time }}"
replace-with: '-'
flags: 'g'
- name: "Create Elastic Beanstalk Application Version"
run : aws elasticbeanstalk create-application-version --application-name ${{ env.EB_APPLICATION_NAME }} --version-label version#${{ github.sha }} --source-bundle S3Bucket=${{ env.EB_PACKAGE_S3_BUCKET_NAME }},S3Key=${{ env.DEPLOY_PACKAGE_NAME }} --description SHA_of_app_is_${{ github.sha }}__Created_at__${{ steps.format-time.outputs.replaced }}
- name: "Deploy Application Version"
run: aws elasticbeanstalk update-environment --environment-name ${{ env.EB_ENVIRONMENT_NAME }} --version-label "version#${{ github.sha }}"
- name: "Successfully run CD pipeline"
run: echo "CD part completed successfully"
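Optionally, a final step can block until the environment finishes updating, using the AWS CLI's built-in waiter (same credentials and region as configured above):
- name: "Wait for environment update"
  run: aws elasticbeanstalk wait environment-updated --environment-names ${{ env.EB_ENVIRONMENT_NAME }}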