Updating letsencrypt cert-manager ClusterIssuer server url (with kubectl patch) - kubectl

A ClusterIssuer is running. It is configured with the Let's Encrypt staging URL. How do I change the server URL to production? Is it possible to patch it, or is replacing the complete issuer mandatory?
I tried:
kubectl patch ClusterIssuer letsencrypt --type json -p '[{"op": "replace", "path": "/spec/acme/sever", "value":"https://acme-v02.api.letsencrypt.org/directory"}]'
The result is:
clusterissuer.cert-manager.io/letsencrypt patched (no change)
kubectl describe ClusterIssuer letsencrypt | grep letsencrypt.org
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/xyz
*** This works, but I am trying to make the patch approach work:
cat ../include/letsencypt.yaml | sed -E "s/acme-staging-v02/acme-v02/g" | kubectl apply -n cert-manager -f -
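For comparison, the field in the ClusterIssuer spec is spec.acme.server (the patch above targets /spec/acme/sever), so a JSON patch against that path would look like this sketch:
# JSON patch against the spec.acme.server field of the ClusterIssuer named "letsencrypt"
kubectl patch clusterissuer letsencrypt --type json \
  -p '[{"op": "replace", "path": "/spec/acme/server", "value": "https://acme-v02.api.letsencrypt.org/directory"}]'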

Related

How to deploy an AWS Amplify app from GitHub Actions?

I want to control Amplify deployments from GitHub Actions because Amplify auto-build:
doesn't provide a GitHub Environment,
doesn't watch the CI for failures and will deploy anyway (or requires me to duplicate the CI setup and re-run it in Amplify), and
didn't support running a Cypress job out-of-the-box.
Turn off auto-build (in the App settings / General / Branches).
Add the following script and job:
scripts/amplify-deploy.sh
echo "Deploy app $1 branch $2"
JOB_ID=$(aws amplify start-job --app-id $1 --branch-name $2 --job-type RELEASE | jq -r '.jobSummary.jobId')
echo "Release started"
echo "Job ID is $JOB_ID"
while [[ "$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')" =~ ^(PENDING|RUNNING)$ ]]; do sleep 1; done
JOB_STATUS="$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')"
echo "Job finished"
echo "Job status is $JOB_STATUS"
deploy:
  runs-on: ubuntu-latest
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
    AWS_DEFAULT_OUTPUT: json
  steps:
    - uses: actions/checkout@v2
    - name: Deploy
      run: ./scripts/amplify-deploy.sh xxxxxxxxxxxxx master
You could improve the script to fail if the release fails, add needed steps (e.g. lint, test), add a GitHub Environment, etc.
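For example, you could append something like this to the script so the CI job fails when the release does not finish successfully (assuming SUCCEED is the status the Amplify API reports for a successful job):
# Exit non-zero so the GitHub Actions job fails when the Amplify release did not succeed
# (SUCCEED as the success status is an assumption about the Amplify get-job response)
if [[ "$JOB_STATUS" != "SUCCEED" ]]; then
  echo "Amplify job $JOB_ID ended with status $JOB_STATUS" >&2
  exit 1
fi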
There's also amplify-cli-action but it didn't work for me.
Disable automatic builds:
Go to App settings > general in the AWS Amplify console and disable automatic builds there.
Go to App settings > Build Settings and create an incoming webhook, which gives you a curl command that will trigger a build.
Example: curl -X POST -d {} URL -H "Content-Type: application/json"
Save the URL in GitHub as a secret.
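If you use the GitHub CLI, the secret can also be stored from a terminal; a sketch, assuming the secret name WEBHOOK_URL used in the job below:
# Store the Amplify webhook URL as a repository secret named WEBHOOK_URL
gh secret set WEBHOOK_URL --body "<amplify-webhook-url>"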
Add the curl command to the GitHub Actions workflow YAML like this:
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: deploy
      run: |
        URL="${{ secrets.WEBHOOK_URL }}"
        curl -X POST -d {} "$URL" -H "Content-Type: application/json"
Similar to the previous answer, but I used release tags instead.
Create an action like ci.yml, turn off auto-build on the staging & prod envs in Amplify, and create the webhook triggers.
name: CI-Staging
on:
  release:
    types: [prereleased]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.STAGING_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
name: CI-production
on:
  release:
    types: [released]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.PRODUCTION_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
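The webhook URLs stored in STAGING_DEPLOY_WEBHOOK / PRODUCTION_DEPLOY_WEBHOOK can be created in the Amplify console as described above, or from the CLI; a sketch with a placeholder app id (the jq path assumes the shape of the CreateWebhook response):
# Create an incoming webhook for a branch; the returned webhookUrl is what goes into the GitHub secret
aws amplify create-webhook --app-id <app-id> --branch-name staging --description "GitHub Actions trigger" \
  | jq -r '.webhook.webhookUrl'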

kubectl get -o yaml: is it possible to hide metadata.managedFields?

Using kubectl version 1.18, on microk8s 1.18.3
When getting a resource definition in YAML format, for example kubectl get pod/mypod-6f855c5fff-j8mrw -o yaml, the output contains a section related to metadata.managedFields.
Is there a way to hide metadata.managedFields to shorten the console output?
Below is an example of output to better illustrate the question.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"productpage","service":"productpage"},"name":"productpage","namespace":"bookinfo"},"spec":{"ports":[{"name":"http","port":9080}],"selector":{"app":"productpage"}}}
  creationTimestamp: "2020-05-28T05:22:41Z"
  labels:
    app: productpage
    service: productpage
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
          f:service: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":9080,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-05-28T05:22:41Z"
  name: productpage
  namespace: bookinfo
  resourceVersion: "121804"
  selfLink: /api/v1/namespaces/bookinfo/services/productpage
  uid: feb5a62b-8784-41d2-b104-bf6ebc4a2763
spec:
  clusterIP: 10.152.183.9
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 9080
  selector:
    app: productpage
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Kubectl 1.21 doesn't include managed fields by default anymore
kubectl get will omit managed fields by default now.
You can set --show-managed-fields to true to show managedFields when the output format is either json or yaml.
https://github.com/kubernetes/kubernetes/pull/96878
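With kubectl 1.21+ the flag from that PR lets you control it explicitly, for example:
# On kubectl 1.21+ managed fields are hidden by default; the flag toggles them
kubectl get pod mypod-6f855c5fff-j8mrw -o yaml --show-managed-fields=false
kubectl get pod mypod-6f855c5fff-j8mrw -o yaml --show-managed-fields=true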
Check out this kubectl plugin: https://github.com/itaysk/kubectl-neat.
It not only removes managedFields but many other fields users are not interested in.
For example: kubectl get pod mypod -o yaml | kubectl neat or kubectl neat pod mypod -o yaml
For those who like to download the YAML and delete unwanted keys, try this:
Install yq, then try (please make sure you have yq version 4.x):
cat k8s-config.yaml | yq eval 'del(.status)' -
--OR--
kubectl --namespace {namespace} --context {cluster} get pod {podname} -o yaml | yq ...
You may chain more yq calls to delete more keys. Here is what I did:
cat k8s-config.yaml | yq eval 'del(.status)' - | yq eval 'del (.metadata.managedFields)' - | yq eval 'del (.metadata.annotations)' - | yq eval 'del (.spec.tolerations)' - | yq eval 'del(.metadata.ownerReferences)' - | yq eval 'del(.metadata.resourceVersion)' - | yq eval 'del(.metadata.uid)' - | yq eval 'del(.metadata.selfLink)' - | yq eval 'del(.metadata.creationTimestamp)' - | yq eval 'del(.metadata.generateName)' -
--OR--
cat k8s-config.yaml | yq eval 'del(.status)' - \
| yq eval 'del (.metadata.managedFields)' - \
| yq eval 'del (.metadata.annotations)' - \
| yq eval 'del (.spec.tolerations)' - \
| yq eval 'del(.metadata.ownerReferences)' - \
| yq eval 'del(.metadata.resourceVersion)' - \
| yq eval 'del(.metadata.uid)' - \
| yq eval 'del(.metadata.selfLink)' - \
| yq eval 'del(.metadata.creationTimestamp)' - \
| yq eval 'del(.metadata.generateName)' -
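With yq v4 the chained invocations above can also be collapsed into a single del() with comma-separated paths, for example:
# Same deletions as the pipeline above, in a single yq pass
cat k8s-config.yaml | yq eval 'del(.status, .metadata.managedFields, .metadata.annotations, .spec.tolerations, .metadata.ownerReferences, .metadata.resourceVersion, .metadata.uid, .metadata.selfLink, .metadata.creationTimestamp, .metadata.generateName)' -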
Another way is to have a neat() function in your ~/.bashrc or ~/.zshrc and call it as below:
neat() function:
neat () {
  yq eval 'del(.items[].metadata.managedFields,
    .metadata,
    .apiVersion,
    .items[].apiVersion,
    .items[].metadata.namespace,
    .items[].kind,
    .items[].status,
    .items[].metadata.annotations,
    .items[].metadata.resourceVersion,
    .items[].metadata.selfLink, .items[].metadata.uid,
    .items[].metadata.creationTimestamp,
    .items[].metadata.ownerReferences)' -
}
then:
kubectl get pods -o yaml | neat
cat k8s-config.yaml | neat
You may read more about yq's del operator in the yq documentation.
I'd like to add some basic information about that feature:
managedFields is a section created by the ServerSideApply feature. It helps track changes to cluster objects made by different managers.
If you disable it in the kube-apiserver manifest, objects created after this change won't have metadata.managedFields sections, but existing objects are not affected.
Open the kube-apiserver manifest with your favorite text editor:
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following command line argument to spec.containers.command:
- --feature-gates=ServerSideApply=false
kube-apiserver will restart immediately.
It usually takes a couple of minutes for the kube-apiserver to start serving requests again.
You can also disable ServerSideApply feature gate on the cluster creation stage.
Alternatively, managedFields can be patched to an empty list for the existing object:
$ kubectl patch pod podname -p '{"metadata":{"managedFields":[{}]}}'
This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that just setting the managedFields to an empty list will not reset the field. This is on purpose, so managedFields never get stripped by clients not aware of the field.
Now that --export is deprecated, to get the output from your resources in the 'original' format (just cleaned up, without any information you don't want in this situation) you can do the following using yq v4.x:
kubectl get <resource> -n <namespace> <resource-name> -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' -
The first thing that came to my mind was to just use a stream editor like sed to skip the part beginning from managedFields: up to another specific pattern.
It's a bit hardcoded, as you need to specify 2 patterns, like managedFields: and an ending pattern like name: productpage, but it will work for this scenario. If this doesn't fit your case, please add more details about how you would like to achieve this.
The sed command would look like:
sed -n '/(Pattern1)/{p; :a; N; /(Pattern2)/!ba; s/.*\n//}; p'
For example, I've used an Nginx pod:
$ kubectl get po nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      nginx'
  creationTimestamp: "2020-05-29T10:54:18Z"
  ...
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
  ...
status:
  conditions:
  ...
    startedAt: "2020-05-29T10:54:19Z"
  hostIP: 10.154.0.29
  phase: Running
  podIP: 10.52.1.6
  podIPs:
  - ip: 10.52.1.6
  qosClass: Burstable
  startTime: "2020-05-29T10:54:18Z"
After using sed
$ kubectl get po nginx -o yaml | sed -n '/annotations:/{p; :a; N; /hostIP: 10.154.0.29/!ba; s/.*\n//}; p'
apiVersion: v1
kind: Pod
metadata:
  annotations:
  hostIP: 10.154.0.29
  phase: Running
  podIP: 10.52.1.6
  podIPs:
  - ip: 10.52.1.6
  qosClass: Burstable
  startTime: "2020-05-29T10:54:18Z"
In your case, a command like:
$ kubectl get pod/mypod-6f855c5fff-j8mrw -o yaml | sed -n '/managedFields:/{p; :a; N; /name: productpage/!ba; s/.*\n//}; p'
Should give output like:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"productpage","service":"productpage"},"name":"productpage","namespace":"bookinfo"},"spec":{"ports":[{"name":"http","port":9080}],"selector":{"app":"productpage"}}}
  creationTimestamp: "2020-05-28T05:22:41Z"
  labels:
    app: productpage
    service: productpage
  managedFields:
  name: productpage
  namespace: bookinfo
  resourceVersion: "121804"
  selfLink: /api/v1/namespaces/bookinfo/services/productpage
  uid: feb5a62b-8784-41d2-b104-bf6ebc4a2763
spec:
  clusterIP: 10.152.183.9
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 9080
  selector:
    app: productpage
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

aws-iam-authenticator install via ansible

Looking for how to (properly) translate a bash command (originally inside a Dockerfile) into an Ansible task/role that will download the latest aws-iam-authenticator binary and install it into /usr/local/bin on Ubuntu (x64).
Currently I have:
curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest | grep "browser_download.url.*linux_amd64" | cut -d : -f 2,3 | tr -d '"' | wget -O /usr/local/bin/aws-iam-authenticator -qi - && chmod 555 /usr/local/bin/aws-iam-authenticator
Basically you need to write a playbook and separate that command into various tasks.
Example example.yml file:
- hosts: localhost
  tasks:
    - shell: |
        curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
      register: json
    - set_fact:
        url: "{{ (json.stdout | from_json).assets[2].browser_download_url }}"
    - get_url:
        url: "{{ url }}"
        dest: /usr/local/bin/aws-iam-authenticator-ansible
        mode: 0555
You can execute it by running:
ansible-playbook --become example.yml
I hope this is what you're looking for ;-)
After finding other posts that gave strong hints, information, and unresolved issues (Ansible - Download latest release binary from Github repo and https://github.com/ansible/ansible/issues/27299#issuecomment-331068246), I was able to come up with the following Ansible task that works for me.
- name: Get latest url for linux-amd64 release for aws-iam-authenticator
  uri:
    url: https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
    return_content: true
    body_format: json
  register: json_response

- name: Download and install aws-iam-authenticator
  get_url:
    url: "{{ json_response.json | to_json | from_json | json_query(\"assets[?ends_with(name,'linux_amd64')].browser_download_url | [0]\") }}"
    mode: 0555
    dest: /usr/local/bin/aws-iam-authenticator
Note
If you're running the AWS CLI version 1.16.156 or later, then you don't need to install the authenticator. Instead, you can use the aws eks get-token command. For more information, see Create kubeconfig manually.
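For example (the cluster name is a placeholder):
# With AWS CLI 1.16.156+ a kubectl token can be produced without aws-iam-authenticator
aws eks get-token --cluster-name <cluster-name>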

Certificate error when deploying Hashicorp Vault with Kubernetes Auth Method on AWS EKS

I am trying to deploy Hashicorp Vault with Kubernetes Auth Method on AWS EKS.
Hashicorp Auth Method:
https://www.vaultproject.io/docs/auth/kubernetes.html
Procedure used, derived from the CoreOS Vault Operator (though I am not actually using their operator):
https://github.com/coreos/vault-operator/blob/master/doc/user/kubernetes-auth-backend.md
Below is a summary of the procedure used, with some additional content. Essentially, I am getting a certificate error when attempting to actually log in to Vault after following the needed steps. Any help is appreciated.
Create the service account and clusterrolebinding for tokenreview:
$kubectl -n default create serviceaccount vault-tokenreview
$kubectl -n default create -f example/k8s_auth/vault-tokenreview-binding.yaml
Contents of vault-tokenreview-binding.yaml file
=========================================
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: vault-tokenreview-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-tokenreview
  namespace: default
Enable vault auth and add Kubernetes cluster to vault:
$SECRET_NAME=$(kubectl -n default get serviceaccount vault-tokenreview -o jsonpath='{.secrets[0].name}')
$TR_ACCOUNT_TOKEN=$(kubectl -n default get secret ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 --decode)
$vault auth-enable kubernetes
$vault write auth/kubernetes/config kubernetes_host=XXXXXXXXXX kubernetes_ca_cert=@ca.crt token_reviewer_jwt=$TR_ACCOUNT_TOKEN
Contents of ca.crt file
NOTE: I retrieved the certificate from the AWS EKS console, where it
is shown in the "certificate authority" field in
base64 format. I base64-decoded it and placed it here
=================
-----BEGIN CERTIFICATE-----
* encoded entry *
-----END CERTIFICATE-----
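The same CA can also be fetched with the AWS CLI instead of copying it from the console; a sketch, with the cluster name as a placeholder:
# Pull the cluster CA from the EKS API and decode it into ca.crt
aws eks describe-cluster --name <cluster-name> \
  --query "cluster.certificateAuthority.data" --output text | base64 --decode > ca.crt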
Create the vault policy and role:
$vault write sys/policy/demo-policy policy=@example/k8s_auth/policy.hcl
Contents of policy.hcl file
=====================
path "secret/demo/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
$vault write auth/kubernetes/role/demo-role \
bound_service_account_names=default \
bound_service_account_namespaces=default \
policies=demo-policy \
ttl=1h
Attempt to login to vault using the service account created in last step:
$SECRET_NAME=$(kubectl -n default get serviceaccount default -o jsonpath='{.secrets[0].name}')
$DEFAULT_ACCOUNT_TOKEN=$(kubectl -n default get secret ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 --decode)
$vault write auth/kubernetes/login role=demo-role jwt=${DEFAULT_ACCOUNT_TOKEN}
Error writing data to auth/kubernetes/login: Error making API request.
URL: PUT http://localhost:8200/v1/auth/kubernetes/login
Code: 500. Errors:
* Post https://XXXXXXXXX.sk1.us-west-2.eks.amazonaws.com/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority
Your Kubernetes URL https://XXXXXXXXX.sk1.us-west-2.eks.amazonaws.com has a bad cert; try adding -tls-skip-verify:
vault write -tls-skip-verify auth/kubernetes/login .......

.ebextensions not executing, not uploading and creating files

I am trying to follow these instructions to force SSL on AWS Beanstalk.
I believe this is the important part.
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #! /bin/bash
      CONFIGURED=`grep -c "return 301 https" /opt/elasticbeanstalk/support/conf/webapp_healthd.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/listen 80;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /opt/elasticbeanstalk/support/conf/webapp_healthd.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi

container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
For some reason the file is not being uploaded or created.
I also tried adding sudo to the container commands starting with 00 and 01.
I also manually SSHed into the server and created the file, then locally ran the aws elasticbeanstalk restart-app-server --environment-name command to restart the server. That still did not work.
Any help would be greatly appreciated.
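A few commands that can help narrow down whether the .ebextensions config ran at all (a sketch; the log paths are from the older Amazon Linux Elastic Beanstalk platform and may differ on newer platforms):
# On the instance (eb ssh or SSH directly):
sudo grep -i 45_nginx_https_rw /var/log/eb-activity.log    # did the files/container_commands sections run?
sudo tail -n 100 /var/log/cfn-init.log                     # errors from .ebextensions processing
ls -l /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/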