How to automate the git pull operation from a private repository using SaltStack?

I am using a DigitalOcean Ubuntu server for hosting. I want to automate the git pull operation on my salt-master and minions.

I use this inside a state file to clone a Git repository. You can then execute the state automatically when needed:
# Place a Git deploy key.
/root/.ssh/id_rsa:
  file.managed:
    - source: salt://files/id_rsa
    - user: user
    - group: group
    - mode: 600
    - template: jinja

# Clone the repository.
git@github.com:user/repository.git:
  git.latest:
    - user: user
    - identity: /root/.ssh/id_rsa
    - target: /folder/to/clone/to/
    - branch: master
    - require:
      - file: /root/.ssh/id_rsa
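To run the state automatically, one option is the Salt scheduler. A minimal sketch, assuming the state above is saved as deploy_repo.sls (the job name and interval are arbitrary), placed in the minion config or pillar:

schedule:
  pull_repository:
    function: state.apply
    args:
      - deploy_repo   # applies deploy_repo.sls, i.e. the states above
    minutes: 30       # re-run every 30 minutes

An ad-hoc run from the master would then be: salt '*' state.apply deploy_repo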

Related

Google Cloud Build keeps on giving me 'Can't reach database server at'

I have been at this for days now and it is driving me crazy. Based on other posts, I have set up the following cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - gcr.io/${INSTANCE_NAME}
      - .
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - gcr.io/${INSTANCE_NAME}
  - name: 'gcr.io/${INSTANCE_NAME}'
    entrypoint: sh
    env:
      - DATABASE_URL=postgresql://USER:PASSWORD@localhost/DATABASE?host=/cloudsql/CONNECTION_NAME
    args:
      - -c
      - |
        wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
        chmod +x cloud_sql_proxy
        ./cloud_sql_proxy -instances=CONNECTION_NAME=tcp:5432 & sleep 3
        npx prisma migrate deploy
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - backend
      - --image
      - gcr.io/${INSTANCE_NAME}
      - --region
      - europe-west1
images:
  - gcr.io/${INSTANCE_NAME}
When running this, I am greeted by:
Step #2: 2023/02/05 13:00:49 Listening on 127.0.0.1:5432 for CONNECTION_NAME
Step #2: 2023/02/05 13:00:49 Ready for new connections
Step #2: 2023/02/05 13:00:49 Generated RSA key in 118.117245ms
Step #2: npm WARN exec The following package was not found and will be installed: prisma@4.9.0
Step #2: Prisma schema loaded from prisma/schema.prisma
Step #2: Datasource "db": PostgreSQL database "develop", schema "public" at "localhost"
Step #2:
Step #2: Error: P1001: Can't reach database server at `/cloudsql/CONNECTION_NAME`:`5432`
Step #2:
Step #2: Please make sure your database server is running at `/cloudsql/CONNECTION_NAME`:`5432`.
So even with the database URL hardcoded and the Cloud SQL proxy working, I am STILL getting this error. What am I missing?
Check the container name in the .env file and change it to postgres, as it replaces the name in the connection string, as discussed here.
Or try the following format if you don't want to hardcode the IP address:
DB_USER=dbuser
DB_PASS=dbpass
DB_HOST=localhost
DB_PORT=5432
CLOUD_SQL_CONNECTION_NAME=/cloudsql/gcp-project-id:europe-west3:db-instance-name
DATABASE_URL=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_BASE}?host=${CLOUD_SQL_CONNECTION_NAME}
If you have a public IP, try connecting via the unix socket.
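Also worth noting, given the logs above: the proxy was started with -instances=CONNECTION_NAME=tcp:5432 and is listening on 127.0.0.1:5432, while the ?host=/cloudsql/... query string tells Prisma to connect over the unix socket instead. A hedged sketch of a TCP-style URL matching the proxy as started (credentials and names are placeholders):

DATABASE_URL=postgresql://USER:PASSWORD@127.0.0.1:5432/DATABASE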

GCP Helm Cloud Builder

Just curious: why isn't there an officially supported Helm cloud builder? It seems like a very common requirement, yet I'm not seeing one in the list here:
https://github.com/GoogleCloudPlatform/cloud-builders
I was previously using alpine/helm in my cloudbuild.yaml for my helm deployment as follows:
steps:
  # Build app image
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
      - ./cloudbuild/$_CONTAINER_NAME/
  # Push my-app image to Google Cloud Registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
  # Configure a kubectl workspace for this project
  - name: gcr.io/cloud-builders/kubectl
    args:
      - cluster-info
    env:
      - CLOUDSDK_COMPUTE_REGION=$_CUSTOM_REGION
      - CLOUDSDK_CONTAINER_CLUSTER=$_CUSTOM_CLUSTER
      - KUBECONFIG=/workspace/.kube/config
  # Deploy with Helm
  - name: alpine/helm
    args:
      - upgrade
      - -i
      - $_CONTAINER_NAME
      - ./cloudbuild/$_CONTAINER_NAME/k8s
      - --set
      - image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA
      - -f
      - ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml
    env:
      - KUBECONFIG=/workspace/.kube/config
      - TILLERLESS=false
      - TILLER_NAMESPACE=kube-system
      - USE_GKE_GCLOUD_AUTH_PLUGIN=True
timeout: 1200s
substitutions:
  # substitutionOption: ALLOW_LOOSE
  # dynamicSubstitutions: true
  _CUSTOM_REGION: us-east1
  _CUSTOM_CLUSTER: demo-gke
  _IMAGE_REPO: us-east1-docker.pkg.dev/fakeproject/my-docker-repo
  _CONTAINER_NAME: app2
options:
  logging: CLOUD_LOGGING_ONLY
  # Worker pool name that we created in a previous step
  workerPool: 'projects/fakeproject/locations/us-east1/workerPools/cloud-build-pool'
And this was working with no issues. Then it recently started failing with the following error, so I'm guessing something changed:
Error: Kubernetes cluster unreachable: Get "https://10.10.2.2/version": getting credentials: exec: executable gke-gcloud-auth-plugin not found
I get this error regularly on VMs and can work around it by setting USE_GKE_GCLOUD_AUTH_PLUGIN=True, but that does not seem to fix the issue here if I add it to the env section. So I'm looking for recommendations on how to use Helm with Cloud Build. alpine/helm was just something I tried at random and it worked for me up until now, but there are probably better solutions out there.
Thanks!
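One possible direction (a sketch, not a confirmed fix): the alpine/helm image does not ship the gke-gcloud-auth-plugin binary, so setting USE_GKE_GCLOUD_AUTH_PLUGIN=True in that step has nothing to invoke. Running Helm from the cloud-sdk image, where the plugin can be installed next to gcloud, might look like the step below; the apt package name and the Helm install script are assumptions to verify:

# Deploy with Helm from an image that can also provide gke-gcloud-auth-plugin
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: bash
  env:
    - KUBECONFIG=/workspace/.kube/config
    - USE_GKE_GCLOUD_AUTH_PLUGIN=True
  args:
    - -c
    - |
      # Install the GKE auth plugin (package name may vary by SDK release) and Helm
      apt-get update -y && apt-get install -y google-cloud-sdk-gke-gcloud-auth-plugin
      curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
      helm upgrade -i $_CONTAINER_NAME ./cloudbuild/$_CONTAINER_NAME/k8s \
        -f ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml \
        --set image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA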

AWS CodeDeploy to EC2 not updating modified date for files

My deployment is putting the files on the server, but all of the files have a modified time of 0, so Apache isn't hosting the updated files.
I added an AfterInstall script that is supposed to touch every file in the directory, but it's not working for some reason. There is no error, and if I run the script manually it works fine, just not during the deploy process.
Has anyone else run into this issue? Is there something simple I'm overlooking to make this work?
Touch script
#!/bin/bash
find /var/www/html/docs -type f -exec touch {} +
appspec.yml file
version: 0.0
os: linux
files:
  - source: /source/
    destination: /var/www/html/site/
file_exists_behavior: OVERWRITE
permissions:
  - object: /var/www/html/site
    pattern: "**"
    owner: [redacted]
    group: [redacted]
hooks:
  AfterInstall:
    - location: scripts/after_install
      timeout: 10
      runas: [redacted]
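One thing that stands out: the touch script targets /var/www/html/docs while the appspec deploys to /var/www/html/site. To verify the hook actually runs during a deployment, a debugging sketch of the AfterInstall script (the log path is arbitrary):

#!/bin/bash
# Append all hook output to a log file so the run can be inspected after the deploy
exec >> /tmp/after_install.log 2>&1
echo "after_install started at $(date)"
# Touch the tree the appspec actually deploys to
find /var/www/html/site -type f -exec touch {} +
echo "after_install finished at $(date)"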

Cloud Build Trigger for FTP or SSH deployment

How can I deploy a directory to an FTP or SSH server, with a trigger and a cloudbuild.yaml?
So far I can already generate a listing of the files which I'd like to upload:
steps:
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |-
        find $_UPLOAD_DIRNAME -exec echo {} >> batch.txt \;
        cat ./batch.txt
    env:
      ...
I've come to the conclusion that I don't want the FTP anti-pattern, and have therefore written an alternate SSH cloudbuild.yaml:
1. Generate a new pair of RSA keys.
2. Use the private key for SSH login.
3. Recursively upload the directory with scp.
4. Run remote commands with ssh.
It logs in as user root, so the remote /etc/ssh/sshd_config needs PermitRootLogin yes.
My variable substitutions meanwhile look like this:
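(The actual listing is not preserved here; the following is a hypothetical reconstruction based on the variables referenced in the build step, with placeholder values.)

substitutions:
  _COMPUTE_ZONE: europe-west1-b        # placeholder
  _COMPUTE_INSTANCE: my-instance       # placeholder
  _UPLOAD_DIRNAME: my-app              # directory in the repository
  _REMOTE_PATH: /var/www/html          # placeholder remote target
  _SSH_FLAG: '-o LogLevel=ERROR'       # placeholder
  _SSH_COMMAND: 'ls -la /var/www/html' # placeholder remote command
  _SSH_KEY_EXPIRE_AFTER: 1d            # placeholder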
And this would be the cloudbuild.yaml, which generally demonstrates how to set up SSH keys:
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:latest'
    entrypoint: 'bash'
    args:
      - '-c'
      - |-
        echo Deploying $_UPLOAD_DIRNAME @ $SHORT_SHA
        gcloud config set compute/zone $_COMPUTE_ZONE
        gcloud config set project $PROJECT_ID
        mkdir -p /builder/home/.ssh
        gcloud compute config-ssh
        gcloud compute scp --ssh-key-expire-after=$_SSH_KEY_EXPIRE_AFTER --scp-flag="${_SSH_FLAG}" --recurse ./$_UPLOAD_DIRNAME $_COMPUTE_INSTANCE:$_REMOTE_PATH
        gcloud compute ssh $_COMPUTE_INSTANCE --ssh-key-expire-after=$_SSH_KEY_EXPIRE_AFTER --ssh-flag="${_SSH_FLAG}" --command="${_SSH_COMMAND}"
    env:
      - '_COMPUTE_ZONE=$_COMPUTE_ZONE'
      - '_COMPUTE_INSTANCE=$_COMPUTE_INSTANCE'
      - '_UPLOAD_DIRNAME=$_UPLOAD_DIRNAME'
      - '_REMOTE_PATH=$_REMOTE_PATH'
      - '_SSH_FLAG=$_SSH_FLAG'
      - '_SSH_COMMAND=$_SSH_COMMAND'
      - '_SSH_KEY_EXPIRE_AFTER=$_SSH_KEY_EXPIRE_AFTER'
      - 'PROJECT_ID=$PROJECT_ID'
      - 'SHORT_SHA=$SHORT_SHA'
I've managed to deploy to FTP with ncftp:
1. First patch /etc/apt/sources.list.
2. Then install ncftp with apt-get.
3. Create the file ~/.ncftp with variable substitutions.
4. Optional step: replace text in files with sed.
5. Recursively upload the directory with ncftpput.
Here's my cloudbuild.yaml (it is working, but the next answer might offer a better solution):
steps:
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |-
        echo Deploying ${_UPLOAD_DIRNAME} @ ${SHORT_SHA}
        echo to ftp://${_REMOTE_ADDRESS}${_REMOTE_PATH}
        echo "deb http://archive.ubuntu.com/ubuntu/ focal universe" > /etc/apt/sources.list
        apt-get update -y && apt-get install -y ncftp
        cat << EOF > ~/.ncftp
        host $_REMOTE_ADDRESS
        user $_FTP_USERNAME
        pass $_FTP_PASSWORD
        EOF
        # sed -i "s/##_GIT_COMMIT_##/${SHORT_SHA}/g" ./${_UPLOAD_DIRNAME}/plugin.php
        ncftpput -f ~/.ncftp -R $_REMOTE_PATH $_UPLOAD_DIRNAME
    env:
      - '_UPLOAD_DIRNAME=$_UPLOAD_DIRNAME'
      - '_REMOTE_ADDRESS=$_REMOTE_ADDRESS'
      - '_REMOTE_PATH=$_REMOTE_PATH'
      - '_FTP_USERNAME=$_FTP_USERNAME'
      - '_FTP_PASSWORD=$_FTP_PASSWORD'
      - 'SHORT_SHA=$SHORT_SHA'
Where _REMOTE_PATH is e.g. /wp-content/plugins (the variable requires at least one slash) and _UPLOAD_DIRNAME is the name of the directory within the local Git repository, with no slashes.
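A hypothetical trigger substitutions block matching those constraints (all values are placeholders):

substitutions:
  _UPLOAD_DIRNAME: my-plugin          # directory inside the repository, no slashes
  _REMOTE_PATH: /wp-content/plugins   # requires at least one slash
  _REMOTE_ADDRESS: ftp.example.com
  _FTP_USERNAME: ftpuser
  _FTP_PASSWORD: ftppass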

Hyperledger fabric node sdk deploy.js example failing

I'm following these instructions for setting up Hyperledger Fabric:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
but when I run deploy.js I get:
info: Returning a new winston logger with default configurations
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric CA service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a CryptoKeyStore to save keys, using the store: {"opts":{"path":"/home/ubuntu/.hfc-key-store"}}
I'm able to use the Docker CLI, but not the Node SDK:
Failed to load user "admin" from local key value store
How do I store the admin user?
Fixed after installing CouchDB:
docker pull couchdb
docker run -d -p 5984:5984 --name my-couchdb couchdb
The certificate authority services in the docker-compose YAML file have a volumes section, e.g.:
ccenv_latest:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ccenv_snapshot:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ca:
  volumes:
    - ./tmp/ca:/.fabric-ca
You need to make sure the local paths are valid, so in the above configuration you need to have ./ccenv and ./tmp/ca directories at the same level as the docker-compose YAML file.
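A quick way to create them, relative to the compose file (paths taken from the volumes above):

# Create the host directories referenced by the volume mounts
mkdir -p ./ccenv ./tmp/ca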