How to deploy RDS with an AWS CI/CD pipeline? - amazon-web-services

I have everything set up (Dockerfile, buildspec, and so on). When I create a public RDS instance I can connect to it from my local IntelliJ, for example, but when I run the CI/CD pipeline, the deploy stage fails with this error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.internal.exception.FlywaySqlException: Unable to obtain connection from database: The connection attempt failed.
and these are my properties:
spring:
  flyway:
    enabled: true
    fail-on-missing-locations: true
    locations: db.migration
  datasource:
    driver-class-name: org.postgresql.Driver
I also tried adding:
  baselineOnMigrate: true
  validateOnMigrate: false
but no luck so far.
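For context, the usual culprit here is network access rather than Flyway itself: the RDS security group allows your local IP (hence IntelliJ connects) but not the pipeline's deploy environment, so the JDBC connection attempt fails. Below is a sketch of a more complete datasource section, with the connection details injected from pipeline environment variables; the SPRING_DATASOURCE_* names are placeholders for illustration, not from the original post:

spring:
  flyway:
    enabled: true
    fail-on-missing-locations: true
    # Spring Boot's conventional default location is classpath:db/migration
    locations: classpath:db/migration
  datasource:
    driver-class-name: org.postgresql.Driver
    # e.g. jdbc:postgresql://<rds-endpoint>:5432/<db>, supplied by the pipeline
    url: ${SPRING_DATASOURCE_URL}
    username: ${SPRING_DATASOURCE_USERNAME}
    password: ${SPRING_DATASOURCE_PASSWORD}

If the deploy stage runs inside AWS (CodeBuild, ECS, etc.), the RDS security group also needs an inbound rule for that environment's security group or subnets, not just your local IP.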

Related

GCP ci/cd: skaffold cannot access private git repository using google cloud build

I'm trying to set up an automated CI/CD process on Google Cloud Platform.
I followed this guide https://davelms.medium.com/automate-gke-deployments-using-cloud-build-and-cloud-deploy-2c15909ddf22 and everything works: I have a trigger in Cloud Build that runs a Cloud Build file, which uses skaffold for manifest rendering. It builds an image and deploys the app. All good.
But since we have a lot of apps, we want to keep the deploy configs in a separate repo. For that case, the skaffold docs https://skaffold.dev/docs/references/yaml/?version=v2beta29#build-artifacts-docker-ssh show that you can use remote configs:
requires:
  - configs: []
    git:
      repo: https://github.com/GoogleContainerTools/skaffold.git
      path: skaffold.yaml
      ref: main
      sync: true
This config works for a public repo, but for a private repo I get this error:
error parsing skaffold configuration file: caching remote dependency https://github.com/your_repo.git: failed to clone repo: running [/usr/bin/git clone https://github.com/your_repo.git ./P7akUPb6jdsgjfgTnOedB92BH8UE7 --branch main --depth 1]
" - stderr: "Cloning into './P7akUPb6jdsgjfgTnOedB92BH8UE7'...\nfatal: could not read Username for 'https://github.com': No such device or address\n""
Where or how can I add credentials for accessing the private repo?
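One option, assuming the Cloud Build environment has an SSH key set up for the private repo (for example, loaded from Secret Manager, as the Cloud Build docs describe for private GitHub access), is to reference the dependency over SSH instead of HTTPS. Since skaffold shells out to git clone for remote dependencies, the clone then authenticates with that key. A sketch with a placeholder repo URL:

requires:
  - configs: []
    git:
      # SSH URL instead of HTTPS, so git authenticates with the build
      # environment's SSH key rather than prompting for a username
      repo: git@github.com:your-org/your-deploy-configs.git
      path: skaffold.yaml
      ref: main
      sync: true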

Unable to write to AWS EFS from AWS ECS Fargate task

I followed this tutorial to add persistent storage to my Grafana task running on Fargate: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-mount-efs-containers-tasks/
Before I followed the tutorial, the task and deployment worked fine (just without persistent data). Now my task fails:
Essential container in task exited
When I check the log in my task I get the following:
Failed to start grafana. error: failed to connect to database: failed to create SQLite database file "/var/lib/grafana/grafana.db": open /var/lib/grafana/grafana.db: permission denied
...
GF_PATHS_DATA='/var/lib/grafana' is not writable.
My Dockerfile looks like this:
FROM grafana/grafana-oss:8.2.7
ENV GF_DEFAULT_APP_MODE "development"
ENV GF_LOG_LEVEL "debug"
ENV GF_PATHS_PLUGINS "/app/grafana/plugins"
COPY plugins /app/grafana/plugins
EXPOSE 3000
What can I do? Where could the issue be? I've googled a lot, and nothing I found helped.
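Not from the original post, but a common cause: the grafana/grafana-oss image runs as uid/gid 472, while a fresh EFS mount is owned by root, so Grafana cannot create grafana.db. One fix is to mount the file system through an EFS access point that enforces that owner. A minimal CloudFormation-style sketch with placeholder names:

GrafanaAccessPoint:
  Type: AWS::EFS::AccessPoint
  Properties:
    FileSystemId: !Ref GrafanaFileSystem   # placeholder reference
    PosixUser:                             # act as Grafana's uid/gid
      Uid: "472"
      Gid: "472"
    RootDirectory:
      Path: /grafana
      CreationInfo:                        # directory is created with this owner
        OwnerUid: "472"
        OwnerGid: "472"
        Permissions: "755"

The task definition's EFS volume configuration would then point at this access point (with transit encryption enabled, which access points require) instead of the bare file system ID.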

How to use a custom docker image node:16.13.0-alpine in CodeBuild

Summary
I tried to use the docker image node:16.13.0-alpine in CodeBuild.
However, the build failed with the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image.
Asm fetching username: AuthorizationData is malformed, empty field
I want to know how to resolve this error so that the build passes.
What I've tried
I set the environment as follows:
In the Registry credentials section, I added the Secrets Manager ARN for my Docker credentials.
Code
Here is the buildspec.yml used for testing:
version: 0.2
phases:
  build:
    commands:
      - echo this is test.
It turned out my registry URL was wrong; a build with node:14.16.0-stretch succeeded.
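For reference, CodeBuild reads private-registry credentials from a Secrets Manager secret whose value is a JSON object with username and password keys, and the image must be resolvable from the registry the project points at; a mismatch there is consistent with the accepted fix above (a wrong registry URL). A sketch of the relevant project settings in CloudFormation form, with a placeholder secret ARN:

Environment:
  Type: LINUX_CONTAINER
  ComputeType: BUILD_GENERAL1_SMALL
  Image: node:16.13.0-alpine            # pulled from Docker Hub
  ImagePullCredentialsType: SERVICE_ROLE
  RegistryCredential:
    CredentialProvider: SECRETS_MANAGER
    # the secret's value must look like {"username": "...", "password": "..."}
    Credential: arn:aws:secretsmanager:us-east-1:123456789012:secret:dockerhub-creds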

Terraform fails in GitLab due to cache

I am trying to deploy my AWS infrastructure using Terraform from a GitLab CI/CD pipeline.
I am using the GitLab-managed image and its default Terraform template.
I have configured the S3 backend, and it points to the S3 bucket used to store the tf state file.
I had stored CI/CD variables in GitLab for 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', and 'S3_BUCKET'.
Everything was working fine until I changed 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', and 'S3_BUCKET' to point to a different AWS account.
Now I am getting the following error:
$ terraform init
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Error: Error loading state:
AccessDenied: Access Denied
status code: 403, request id: XXXXXXXXXXXX,
host id: XXXXXXXXXXXXXXXXXXXXX
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
Cleaning up file based variables 00:00
ERROR: Job failed: exit code 1
Since this issue happened because I changed the access_key and secret_key (it was still working fine from my local VS Code), I commented out the 'cache:' block in the .gitlab-ci.yml file, and it worked!
The following is my .gitlab-ci.yml file:
stages:
  - validate
  - plan
  - apply
  - destroy

image:
  name: registry.gitlab.com/gitlab-org/gitlab-build-images:terraform
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

# Default output file for Terraform plan
variables:
  PLAN: plan.tfplan
  JSON_PLAN_FILE: tfplan.json
  STATE: dbrest.tfstate

cache:
  paths:
    - .terraform

before_script:
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create\":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - terraform --version
  - terraform init

validate:
  stage: validate
  script:
    - terraform validate
  only:
    - tags

plan:
  stage: plan
  script:
    - terraform plan -out=plan_file
    - terraform show --json plan_file > plan.json
  artifacts:
    paths:
      - plan.json
    expire_in: 2 weeks
    when: on_success
    reports:
      terraform: plan.json
  only:
    - tags
  allow_failure: true

apply:
  stage: apply
  extends: plan
  environment:
    name: production
  script:
    - terraform apply --auto-approve
  dependencies:
    - plan
  only:
    - tags
  when: manual

terraform destroy:
  extends: apply
  stage: destroy
  script:
    - terraform destroy --auto-approve
  needs: ["plan", "apply"]
  when: manual
  only:
    - tags
The issue clearly happens if I don't comment out the block below. However, it used to work before I changed the AWS access_key and secret_key.
# cache:
#   paths:
#     - .terraform
When the cache block was not commented out, the pipeline failed with the error shown above.
Is the cache being stored somewhere? And how do I clear it?
I think it's related to GitLab's runner cache: the old backend configuration was being restored from the cached .terraform directory, which is why init tried to migrate state from the previous account. The runner cache can be cleared from the UI itself.
Go to GitLab -> CI/CD -> Pipelines and hit the 'Clear Runner Cache' button.
It actually works!
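If clearing the runner cache isn't convenient, another way around this (an alternative suggestion, not from the original answer) is to re-initialize the backend from scratch so Terraform skips the state-migration attempt against the old account:

before_script:
  - terraform --version
  # -reconfigure configures the new backend without trying to migrate
  # state from the backend recorded in the cached .terraform directory
  - terraform init -reconfigure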

How to solve insufficient authentication scopes when using Pub/Sub on GCP

I'm trying to build two microservices (in Java Spring Boot) that communicate with each other using GCP Pub/Sub.
First, I tested the programs (in Eclipse) and they worked as expected on my local laptop (http://localhost): one microservice published a message and the other received it successfully, using the topic/subscriber created in GCP (as well as the credential private key: mypubsub.json).
Then I deployed the same programs to GCP and got the following errors:
- 2020-03-21 15:53:16.831 WARN 1 --- [bsub-publisher2] o.s.c.g.p.c.p.PubSubPublisherTemplate : Publishing to json-payload-sample-topic topic failed
- com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes. at com.google.api.gax.rpc.ApiExceptionFactory
What I did to deploy the programs (in containers) to run on GCP/Kubernetes Engine:
Log in to Cloud Shell after switching to my project for the Pub/Sub testing
Git clone the programs that I had tested in Eclipse
Move the mypubsub.json file to under /home/my_user_id
export GOOGLE_APPLICATION_CREDENTIALS="/home/my_user_id/mp6key.json"
Run 'mvn clean package' to build the microservice programs
Run 'docker build' to create the image files
Run 'docker push' to push the image files to the gcr.io repo
Run 'kubectl create' to create the deployments and expose the services
Once the two microservices were deployed and exposed, I tried accessing them in a browser. The one that publishes the message retrieved data from the database and processed it fine, then failed with the above errors when calling the GCP Pub/Sub API to publish the message.
Could anyone provide a hint on what to check to solve this issue?
The issue was resolved by following this guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
The root cause is that exporting GOOGLE_APPLICATION_CREDENTIALS in Cloud Shell only affects that shell session; neither the variable nor the key file makes it into the containers, so the pods fall back to the node's default service account and its limited OAuth scopes. Briefly, the solution is to add the following lines to the deployment.yaml to mount the credential key and point the variable at it:
volumes:
- name: google-cloud-key
  secret:
    secretName: pubsub-key
containers:
- name: my_container
  image: gcr.io/my_image_file
  volumeMounts:
  - name: google-cloud-key
    mountPath: /var/secrets/google
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
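For completeness, the pubsub-key secret referenced above holds the downloaded service-account key; the linked guide creates it with something like kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json.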
Alternatively, try explicitly providing a CredentialsProvider to your Publisher. I faced the same authentication issue, and this approach worked for me:
import com.google.api.gax.core.CredentialsProvider;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.pubsub.v1.Publisher;

// Load the service-account key from the classpath and attach it to the Publisher
CredentialsProvider credentialsProvider = FixedCredentialsProvider.create(
        ServiceAccountCredentials.fromStream(
                PubSubUtil.class.getClassLoader().getResourceAsStream("key.json")));
Publisher publisher = Publisher.newBuilder(topicName)
        .setCredentialsProvider(credentialsProvider)
        .build();
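For this to work, key.json has to be on the application classpath (e.g. under src/main/resources), since it is loaded via getResourceAsStream.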