What does - mean in kubectl -f -

What does the last - (following -f) mean in the following command:
kustomize build config/samples | kubectl apply -f -

Snippet from kubectl documentation:
Apply the JSON passed into stdin to a pod
cat pod.json | kubectl apply -f -
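The trailing - is the conventional Unix placeholder for standard input: kubectl apply -f - reads the manifest from whatever is piped into it instead of from a file on disk. A minimal illustration (the ConfigMap below is just a made-up example):
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: hello
EOF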

Related

CodeBuild, get the list of folders that have changed files inside them buildspec.yml

I'm trying to make my first build on AWS, and this is my buildspec.yml. I was just testing whether my command works on CodeBuild:
version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      nodejs: 16
  pre_build:
    commands:
      - aws codeartifact login ...
  build:
    commands:
      - changed_folders=$(git diff --dirstat=files,0 HEAD~1 | awk '{print $2}' | xargs -I {} dirname {} | awk -F '/' '{print $1}' | sort | uniq)
      - echo $changed_folders
This command works locally, but when the following runs during the build:
git diff --dirstat=files,0 HEAD~1 | awk '{print $2}' | xargs -I {} dirname {} | awk -F '/' '{print $1}' | sort | uniq
there is an error saying:
fatal: ambiguous argument 'HEAD~1': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
I tried replacing HEAD~1 with $CODEBUILD_WEBHOOK_HEAD_REF; that runs without an error, but I get an empty result when I echo $changed_folders.
I'm using GitHub as my repository.
Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
Build systems (Jenkins, GitHub Actions, CodeBuild) normally fetch only the last commit of the triggered ref, so any git command that needs history comes back empty. By default they clone the source roughly like git clone --depth 1 -b <branch> <repo_url>.
AWS has announced that CodeBuild can now fetch a full-history clone.
When you enable the "Full clone" artifact option on your pipeline, the history is available and changed_folders will be populated.
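If enabling the full clone is not an option, one possible workaround (a sketch, not part of the original answer, assuming the source is an actual git checkout and the build role is allowed to fetch from the repository) is to deepen the shallow clone in pre_build before diffing:
  pre_build:
    commands:
      - aws codeartifact login ...
      - git fetch --unshallow || git fetch --depth=2   # make HEAD~1 resolvable on a shallow clone
After that, the git diff --dirstat pipeline in the build phase works unchanged.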

How to specify absolute path to Dockerfile on docker build

How can I specify the -t and -f parameters in a Cloud Build docker job?
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |-
      docker build --network=cloudbuild -f pipeline/components/testdocker/Dockerfile -t europe-west1-docker.pkg.dev/xxx/test .
  dir: 'mc2-AIBooster_POC/'
I get the following error: '"docker build" requires exactly 1 argument'.
Solved by using -f first and then -t:
docker build --network=cloudbuild -f pipeline/components/testdocker/Dockerfile -t europe-west1-docker.pkg.dev/vf-grp-aib-dev-buildnl/docker-repository/$_PROJECT_ID/$_USER/$_BRANCH/test .
You forgot to add a dot at the end of your docker build command; the order of the flags shouldn't matter:
docker build --network=cloudbuild -t europe-west1-docker.pkg.dev/vf-grp-aib-dev-buildnl/docker-repository/$_PROJECT_ID/$_USER/$_BRANCH/test -f pipeline/components/testdocker/Dockerfile .
More about using docker build in the official Docker documentation.
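As an aside, the bash entrypoint is not strictly needed for a plain build: the gcr.io/cloud-builders/docker builder passes its args straight to the docker CLI, so the same step can be written as below (a sketch reusing the placeholder image path and dir from the question):
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--network=cloudbuild',
         '-f', 'pipeline/components/testdocker/Dockerfile',
         '-t', 'europe-west1-docker.pkg.dev/xxx/test', '.']
  dir: 'mc2-AIBooster_POC/'
Writing the arguments as a list also makes a missing build-context argument (the final '.') much easier to spot.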

Copying Files From GitLab To An EC2 (/www/html) Folder Using SSH

I am trying to copy files from my GitLab repository to a folder on my EC2 instance over SSH, using the server IP and the EC2 private key.
I am not able to copy my files into the target folder.
My .gitlab-ci.yml:
stages:
  - deploy
deploy:
  stage: deploy
  image: alpine
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'rm -rf /var/www/html/*'
    - scp -r . ubuntu@$DEPLOY_SERVER:/var/www/html
    # How can I copy all my repository files to the target folder?
First check that the ssh call just before the scp actually works.
Then try:
scp -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html
That will give you an idea of why the scp fails while the ssh call, I presume, works.
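In the job above, the script section would then look something like this (a sketch; StrictHostKeyChecking=no is carried over to scp so it does not stall on the host-key prompt, and LogLevel=DEBUG is only there while troubleshooting):
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'rm -rf /var/www/html/*'
    - scp -o StrictHostKeyChecking=no -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html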

CircleCI script to test against DynamoDB Local Fails

We have a CircleCI script that manages our deployment. I wanted to allow DynamoDB Local to run so that we could test our DynamoDB requests. I've tried following the answers here, here and here. I've also tried using the DynamoDB Local image from Docker Hub, here. This is the closest I've gotten:
version: 2
jobs:
  setup-dynamodb:
    docker:
      - image: openjdk:15-jdk
    steps:
      - setup_remote_docker:
          version: 18.06.0-ce
      - run:
          name: run-dynamodb-local
          background: true
          shell: /bin/bash
          command: |
            curl -k -L -o dynamodb-local.tgz http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.tar.gz
            tar -xzf dynamodb-local.tgz
            java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -port 8000 -sharedDb
  check-failed:
    docker:
      - image: golang:1.14.3
    steps:
      - checkout
      - setup_remote_docker:
          version: 18.06.0-ce
      - attach_workspace:
          at: /tmp/app/workspace
      - run:
          name: Install dockerize
          shell: /bin/bash
          command: |
            yum -y update && \
            yum -y install wget && \
            yum install -y tar.x86_64 && \
            yum clean all
            wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
            tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
            rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for Local DynamoDB
          command: dockerize -wait tcp://localhost:8000 -timeout 1m
      - run:
          name: checkerr
          shell: /bin/bash
          command: |
            ls -laF /tmp/app/workspace/
            for i in $(seq 1 2); do
              f=$(printf "failed%d.txt" $i)
              value=$(</tmp/app/workspace/$f)
              if [[ "$value" != "nil" ]]; then
                echo "$f = $value"
                exit 1
              fi
            done
The problem I'm having is that all my tests are failing with the error message dial tcp 127.0.0.1:8000: connect: connection refused. I'm not sure why this is happening. Do I need to expose the port from the container?
The reason is that the first job is completely separate from the second job.
In fact, you don't need the first one at all; adjust the second one as below:
check-failed:
  docker:
    - image: golang:1.14.3
    - image: amazon/dynamodb-local
  steps:
    - setup_remote_docker:
        ...
    ...
By the way, you don't need to install DynamoDB every time; you can run it as a container as well, as sketched below.
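A minimal sketch of that approach, assuming the tests only need an endpoint on localhost:8000 (secondary containers in the same CircleCI docker executor share the primary container's network, so the existing dockerize wait step keeps working):
check-failed:
  docker:
    - image: golang:1.14.3              # primary container, where the steps run
    - image: amazon/dynamodb-local      # secondary container, serves DynamoDB Local on port 8000
  steps:
    - checkout
    - attach_workspace:
        at: /tmp/app/workspace
    # ... install dockerize as before, then:
    - run:
        name: Wait for Local DynamoDB
        command: dockerize -wait tcp://localhost:8000 -timeout 1m
    # ... checkerr step unchanged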

Why does codebuild.sh fail to run my local build?

I am trying to test my build locally without having to upload my code all the time. Therefore, I downloaded codebuild.sh onto my Ubuntu machine and placed it at ~/.local/bin/codebuild_build.
Then I made it executable via:
chmod +x ~/.local/bin/codebuild_build
And with the following buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - docker login -u $USER -p $TOKEN
  build:
    commands:
      - docker build -f ./dockerfiles/7.0.8/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_708) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.0.8/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_72) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
  post_build:
    commands:
      - docker push etable/php7.2
      - docker push etable/php7.2-dev
      - docker push etable/php7.0.8
      - docker push etable/php7.0.8-dev
I tried to execute my command like this:
codebuild_build -i amazon/aws-codebuild-local -a /tmp/artifacts/docker-php -e .codebuild -c ~/.aws
But I get the following output:
Build Command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=amazon/aws-codebuild-local" -e "ARTIFACTS=/tmp/artifacts/docker-php" -e "SOURCE=/home/pcmagas/Kwdikas/docker-php" -v "/home/pcmagas/Kwdikas/docker-php:/LocalBuild/envFile/" -e "ENV_VAR_FILE=.codebuild" -e "AWS_CONFIGURATION=/home/pcmagas/.aws" -e "INITIATOR=pcmagas" amazon/aws-codebuild-local:latest
Removing agent-resources_build_1 ... done
Removing agent-resources_agent_1 ... done
Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
build_1 | 2020/01/16 14:43:58 Unable to initialize (*errors.errorString: AgentAuth was not specified)
agent-resources_build_1 exited with code 10
Stopping agent-resources_agent_1 ... done
Aborting on container exit...
My ~/.aws has the following files:
$ ls -l /home/pcmagas/.aws
total 8
-rw------- 1 pcmagas pcmagas 32 Aug 8 17:29 config
-rw------- 1 pcmagas pcmagas 116 Aug 8 17:34 credentials
Whilst the config has the following:
[default]
region = eu-central-1
And ~/.aws/credentials is in the following format:
[default]
aws_access_key_id = ^KEY_ID_CENSORED^
aws_secret_access_key = ^ACCESS_KEY_CENSORED^
Also, the .codebuild file contains the required docker-login params:
USER=^CENSORED^
TOKEN=^CENSORED^
Hence, I can get the params required for docker-login.
Do you have any idea why the build fails to run locally?
Your pre_build step has a command that logs you in to Docker:
docker login -u $USER -p $TOKEN
Make sure that you have included the docker login credentials in your local environment file.
Change the environment variable names in the '.codebuild' file, e.g.:
DOCKER_USER=^CENSORED^
DOCKER_TOKEN=^CENSORED^
It seems the CodeBuild agent is interpreting the 'TOKEN' environment variable itself.
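With that rename, the pre_build login command in buildspec.yml has to reference the new names as well (a sketch of just the changed section):
  pre_build:
    commands:
      - docker login -u $DOCKER_USER -p $DOCKER_TOKEN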