Cert-manager cross-project DNS01 challenge fails - google-cloud-platform

Project Prod and Project Staging have been set up, each running a GKE cluster. cert-manager is installed to automate certificate issuance as explained in the official docs.
Project Prod has DNS records that map to both the prod and staging cluster Istio gateway IP addresses.
During the DNS01 challenge, the cluster in Project Prod manages to authenticate and the certificate is issued successfully.
But the cluster running in Project Staging fails to get a certificate because it does not have enough permission to authenticate and verify against the Cloud DNS zone set up in Project Prod.
In Project Prod, there is a service account with the DNS Admin (roles/dns.admin) role whose key is stored as a GKE secret and referenced in the ClusterIssuer like so:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-clusterissuer
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: name#name.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-clusterissuer
    solvers:
    # ACME DNS-01 provider configurations
    - dns01:
        # Google Cloud DNS
        cloudDNS:
          # Secret from the google service account key
          serviceAccountSecretRef:
            name: cloud-dns-key
            key: key.json
          # The project in which to update the DNS zone
          project: iprocure-server-prod
The certificate is issued successfully in the Project Prod GKE cluster.
In the Project Staging GKE cluster, the ClusterIssuer has its own service account with the DNS Admin role, just like in Project Prod, but it fails to perform the DNS01 challenge against the Cloud DNS zone in Project Prod.
The following error is seen when running kubectl describe challenge:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning PresentError 2m56s (x19 over 7h14m) cert-manager Error presenting challenge: GoogleCloud API call failed: googleapi: Error 403: Forbidden, forbidden
What should be done to the Project Staging service account to enable the DNS01 challenge to be performed against Project Prod's Cloud DNS?

I faced this problem too. I had to share the service account with the DNS Admin role from my Prod project with the Staging project so that the GKE cluster in Staging has enough permission to authenticate and verify against the Cloud DNS setup in the Prod project.
You have to create a service account in the Prod project with the DNS Admin role and note the email of that service account, then go to the Staging project and add that email as a member, also granting it the DNS Admin role (a sketch of the gcloud commands follows the manifest below).
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: {{ .Values.app.certificate.issuer.name }}
  namespace: {{ .Values.app.namespace }}
  labels:
    app.kubernetes.io/managed-by: "Helm"
spec:
  acme:
    email: {{ .Values.app.certificate.acme.email }}
    privateKeySecretRef:
      name: {{ .Values.app.certificate.issuer.name }}
    server: {{ .Values.app.certificate.acme.server }}
    solvers:
    - dns01:
        cloudDNS:
          # Make sure this is the project where the DNS zone lives, e.g. the Prod project
          project: {{ .Values.app.project_id }}
          serviceAccountSecretRef:
            name: {{ .Values.secrets.name }}
            # Secret from the google service account key
            key: key.json
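For reference, a possible sequence of gcloud commands for this setup (the project ID prod-project-id, the service account name dns01-solver, and the secret name are placeholders; adjust them to your own):
# Create (or reuse) a service account for the DNS01 solver
gcloud iam service-accounts create dns01-solver --project=prod-project-id

# Grant it the DNS Admin role on the Prod project, where the Cloud DNS zone lives
# (as described above, the same member can also be granted roles/dns.admin on the Staging project)
gcloud projects add-iam-policy-binding prod-project-id \
  --member="serviceAccount:dns01-solver@prod-project-id.iam.gserviceaccount.com" \
  --role="roles/dns.admin"

# Create a key and store it as the secret referenced by the issuer,
# this time in the Staging cluster
gcloud iam service-accounts keys create key.json \
  --iam-account=dns01-solver@prod-project-id.iam.gserviceaccount.com
kubectl create secret generic cloud-dns-key \
  --from-file=key.json=key.json -n cert-manager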

Related

How to integrate Custom CA (AWS PCA) using Kubernetes CSR in Istio

I am trying to set up Custom CA (AWS PCA) integration using Kubernetes CSR in Istio, following this doc (Istio / Custom CA Integration using Kubernetes CSR). Steps followed:
i) Enable feature gate on cert-manager controller: --feature-gates=ExperimentalCertificateSigningRequestControllers=true
ii) AWS PCA and aws-privateca-issuer plugin is already in place.
iii) awspcaclusterissuers object in place with AWS PCA arn (arn:aws:acm-pca:us-west-2:<account_id>:certificate-authority/)
iv) Modified Istio operator with defaultConfig and caCertificates of AWS PCA issuer (awspcaclusterissuers.awspca.cert-manager.io/)
v) Modified istiod deployment and added env vars (as mentioned in the doc along with cluster role).
The istiod pod is failing with this error:
Generating K8S-signed cert for [istiod.istio-system.svc istiod-remote.istio-system.svc istio-pilot.istio-system.svc] using signer awspcaclusterissuers.awspca.cert-manager.io/cert-manager-aws-root-ca
2023-01-04T07:25:26.942944Z error failed to create discovery service: failed generating key and cert by kubernetes: no certificate returned for the CSR: "csr-workload-lg6kct8nh6r9vx4ld4"
Error: failed to create discovery service: failed generating key and cert by kubernetes: no certificate returned for the CSR: "csr-workload-lg6kct8nh6r9vx4ld4"
K8s Version: 1.22
Istio Version: 1.13.5
Note: Our integration of cert-manager and AWS PCA works fine, as we generate private certificates using cert-manager and PCA with the 'Certificate' object. It's the integration of the Kubernetes CSR method with Istio that is failing!
I would really appreciate it if anybody with knowledge of this could help me out here, as there are nearly zero docs on this integration.
I haven't done this with Kubernetes CSR, but I have done it with Istio CSR. Here are the steps to accomplish it with this approach.
1) Create a certificate in AWS Private CA and download the public root certificate, either via the console or the AWS CLI (aws acm-pca get-certificate-authority-certificate --certificate-authority-arn <certificate-authority-arn> --region af-south-1 --output text > ca.pem).
2) Create a secret to store this root certificate. Cert-manager will use this public root cert to communicate with the root CA (AWS PCA).
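A possible way to create that secret (the secret name istio-root-ca and the cert-manager namespace are assumptions that match the Helm values used further down; adjust if yours differ):
# Create the namespace first if it does not exist yet
kubectl create namespace cert-manager

# Store the downloaded root certificate as a secret for istio-csr to mount
kubectl create secret generic istio-root-ca \
  --from-file=ca.pem=ca.pem -n cert-manager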
3) Install cert-manager. Cert-manager will essentially function as the intermediate CA in place of istiod.
4) Install the AWS PCA Issuer plugin.
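For steps 3 and 4, a rough sketch of the Helm installs (chart repository URLs and release names may differ between versions, so treat these as assumptions):
helm repo add jetstack https://charts.jetstack.io
helm repo add awspca https://cert-manager.github.io/aws-privateca-issuer
helm repo update

# Install cert-manager together with its CRDs
helm install cert-manager jetstack/cert-manager \
  -n cert-manager --create-namespace \
  --set installCRDs=true

# Install the AWS Private CA issuer plugin
helm install aws-privateca-issuer awspca/aws-privateca-issuer -n cert-manager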
5) Make sure you have the necessary permissions in place for the workload to communicate with AWS Private CA. The recommended approach would be to use OIDC with IRSA. The other approach is to grant permissions to the node role. The problem with this is that any pod running on your nodes essentially has access to AWS Private CA, which isn't a least-privilege approach.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "awspcaissuer",
      "Action": [
        "acm-pca:DescribeCertificateAuthority",
        "acm-pca:GetCertificate",
        "acm-pca:IssueCertificate"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:acm-pca:<region>:<account_id>:certificate-authority/<resource_id>"
    }
  ]
}
6) Once the permissions are in place, you can create a cluster issuer or an issuer that will represent the root CA in the cluster.
apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: aws-pca-root-ca
spec:
  arn: <aws-pca-arn-goes-here>
  region: <region-where-ca-was-created-in-aws>
7) Create the istio-system namespace.
8) Install Istio CSR and update the Helm values for the issuer so that cert-manager knows to communicate with the AWS PCA issuer.
helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
--set "app.certmanager.issuer.group=awspca.cert-manager.io" \
--set "app.certmanager.issuer.kind=AWSPCAClusterIssuer" \
--set "app.certmanager.issuer.name=aws-pca-root-ca" \
--set "app.certmanager.preserveCertificateRequests=true" \
--set "app.server.maxCertificateDuration=48h" \
--set "app.tls.certificateDuration=24h" \
--set "app.tls.istiodCertificateDuration=24h" \
--set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
--set "volumeMounts[0].name=root-ca" \
--set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
--set "volumes[0].name=root-ca" \
--set "volumes[0].secret.secretName=istio-root-ca"
I would also recommend setting preserveCertificateRequests to true, at least for the first time you set this up, so that you can actually see the CSRs and whether or not the certificates were successfully issued.
When you install Istio CSR, this will create a certificate called istiod as well as a corresponding secret that stores the cert. The secret is called istiod-tls. This is the cert for the intermediate CA (Cert manager with Istio CSR).
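Assuming default namespaces, you can verify this roughly as follows (the resource names are the ones mentioned above; namespaces may vary with your Helm values):
# CertificateRequests preserved by istio-csr (requires preserveCertificateRequests=true)
kubectl get certificaterequests -n istio-system

# The istiod certificate and the secret that backs it
kubectl get certificate istiod -n istio-system
kubectl get secret istiod-tls -n istio-system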
9) Install Istio with the following custom configurations:
Update the CA address to Istio CSR (the new intermediate CA)
Disable istiod from functioning as the CA
Mount istiod with the cert-manager certificate details
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
  namespace: istio-system
spec:
  profile: "demo"
  hub: gcr.io/istio-release
  values:
    global:
      # Change certificate provider to cert-manager istio agent for istio agent
      caAddress: cert-manager-istio-csr.cert-manager.svc:443
  components:
    pilot:
      k8s:
        env:
          # Disable istiod CA Server functionality
          - name: ENABLE_CA_SERVER
            value: "false"
        overlays:
          - apiVersion: apps/v1
            kind: Deployment
            name: istiod
            patches:
              # Mount istiod serving and webhook certificate from Secret mount
              - path: spec.template.spec.containers.[name:discovery].args[7]
                value: "--tlsCertFile=/etc/cert-manager/tls/tls.crt"
              - path: spec.template.spec.containers.[name:discovery].args[8]
                value: "--tlsKeyFile=/etc/cert-manager/tls/tls.key"
              - path: spec.template.spec.containers.[name:discovery].args[9]
                value: "--caCertFile=/etc/cert-manager/ca/root-cert.pem"
              - path: spec.template.spec.containers.[name:discovery].volumeMounts[6]
                value:
                  name: cert-manager
                  mountPath: "/etc/cert-manager/tls"
                  readOnly: true
              - path: spec.template.spec.containers.[name:discovery].volumeMounts[7]
                value:
                  name: ca-root-cert
                  mountPath: "/etc/cert-manager/ca"
                  readOnly: true
              - path: spec.template.spec.volumes[6]
                value:
                  name: cert-manager
                  secret:
                    secretName: istiod-tls
              - path: spec.template.spec.volumes[7]
                value:
                  name: ca-root-cert
                  configMap:
                    defaultMode: 420
                    name: istio-ca-root-cert
If you want to watch a detailed walk-through on how the different components communicate, you can watch this video:
https://youtu.be/jWOfRR4DK8k
In the video, I also show the CSRs and the certs being successfully issued, as well as test that mTLS is working as expected.
The video is long, but you can skip to 17:08 to verify that the solution works.
Here's a repo with these same steps, the relevant manifest files and architecture diagrams describing the flow: https://github.com/LukeMwila/how-to-setup-external-ca-in-istio

How to connect AWS EKS cluster from Azure Devops pipeline - No user credentials found for cluster in KubeConfig content

I have to set up CI in Microsoft Azure DevOps to deploy and manage AWS EKS cluster resources. As a first step, I found a few Kubernetes tasks to make a connection to the Kubernetes cluster (in my case, AWS EKS), but in the "kubectlapply" task in Azure DevOps I can only pass the kubeconfig file or an Azure subscription to reach the cluster.
In my case, I have the kubeconfig file, but I also need to pass the AWS user credentials that are authorized to access the AWS EKS cluster. There is no option in the task, when adding the new "k8s end point", to provide the AWS credentials that can be used to access the EKS cluster. Because of that, I am seeing the below error while verifying the connection to the EKS cluster.
During runtime, I can pass the AWS credentials via environment variables in the pipeline, but I cannot add the kubeconfig file in the task and SAVE it.
Azure and AWS are both big players in the cloud, and there should be ways to connect to AWS resources from any CI platform. Has anyone faced this kind of issue, and what is the best approach to connect to AWS first and then to the EKS cluster for deployments in Azure DevOps CI?
No user credentials found for cluster in KubeConfig content. Make sure that the credentials exist and try again.
Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM Authenticator for Kubernetes. You may update your config file referring to the following format:
apiVersion: v1
clusters:
- cluster:
    server: ${server}
    certificate-authority-data: ${cert}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      env:
      - name: "AWS_PROFILE"
        value: "dev"
      args:
      - "token"
      - "-i"
      - "mycluster"
Useful links:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
https://github.com/kubernetes-sigs/aws-iam-authenticator#specifying-credentials--using-aws-profiles
I got the solution by using a ServiceAccount, following this post: How to deploy to AWS Kubernetes from Azure DevOps.
For anyone who is still having this issue, I had to set this up for the startup I worked for and it was pretty simple.
After your cluster is created, create the service account:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
EOF
Then apply the ClusterRoleBinding:
$ kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: build-robot
  name: build-robot
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: build-robot
  namespace: default
EOF
Be careful with the above as it gives full access; check out https://kubernetes.io/docs/reference/access-authn-authz/rbac/ for more info on scoping the access.
From there, head over to ADO and follow the steps using build-robot as the SA name:
$ kubectl get serviceAccounts build-robot -n default -o='jsonpath={.secrets[*].name}'
xyz........
$ kubectl get secret xyz........ -n default -o json
...
...
...
Paste the output into the last box when adding the kubernetes resource into the environment and select Accept UnTrusted Certificates. Then click apply and validate and you should be good to go.
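One caveat: on newer clusters (Kubernetes 1.24+), a token secret is no longer created automatically for a service account, so the jsonpath query above may return nothing. A minimal sketch of creating a long-lived token secret manually (the name build-robot-token is illustrative):
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF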

Google Cloud - Private service connection for CloudSQL using Deployment Manager

I'm trying to automate the deployment of a system using Deployment Manager. In essence, it consists of:
One compute instance running a proxy server
A second compute instance running the app itself (private IP only)
A CloudSQL instance hosting the database (MySQL)
In their existing environments, the database is configured with a private IP address, and private service access is set up in the network so that the compute instance can access the DB by its private IP.
I've managed to get the two instances running, as well as the CloudSQL instance, but I'm struggling to get the private IP set up on the SQL instance. I've got the following:
- name: database
  type: sqladmin.v1beta4.instance
  properties:
    backendType: SECOND_GEN
    instanceType: CLOUD_SQL_INSTANCE
    region: {{ properties["region"] }}
    databaseVersion: {{ properties["dbType"] }}
    settings:
      tier: db-n1-standard-1
      dataDiskSizeGb: 10
      dataDiskType: PD_SSD
      storageAutoResize: true
      replicationType: SYNCHRONOUS
      locationPreference:
        zone: {{ properties['zone'] }}
      ipConfiguration:
        privateNetwork: {{ properties["network"] }}
However, when I try to build this, I receive the error:
Failed to create subnetwork. Please create Service Networking
connection with service 'servicenetworking.googleapis.com' from
consumer project '' network '' again
I've tried to dig through the documentation to find how to create this connection using Deployment Manager, but I'm at a loss! I got as far as creating a private address range for peering:
- name: google-managed-services-<network_name>
  type: compute.beta.globalAddress
  properties:
    network: $(ref.<network_name>.selfLink)
    purpose: VPC_PEERING
    addressType: INTERNAL
    prefixLength: 16
and this appears to create the reservation for private service links correctly, but I can't find the final piece of the puzzle: the actual peering connection to Google's network. The documentation suggests the CLI call I need is:
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=[RESERVED_RANGE_NAME] \
  --network=[VPC_NETWORK] \
  --project=[PROJECT_ID]
but as far as I can tell, Deployment Manager doesn't support this API.
Has anyone had success with automating this sort of setup before? Pointers to relevant documentation that I might have missed are of course welcome!
servicenetworking.googleapis.com is not currently supported by Deployment Manager, nor is it a supported GCP type, so this can't be done through DM for now. I recommend creating a feature request for it, since it's a relatively new API.
The config below works for me, after setting up private services access as described in https://cloud.google.com/sql/docs/mysql/configure-private-ip#configure-access:
ipConfiguration:
  privateNetwork: "internal"
  ipv4Enabled: false
  authorizedNetworks: null
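Since Deployment Manager can't create the peering itself, a one-time manual setup along these lines is assumed before deploying the SQL instance (network and project names are placeholders):
# Reserve an internal range for Google-managed services
gcloud compute addresses create google-managed-services-my-network \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=my-network \
  --project=my-project

# Create the private services connection to servicenetworking.googleapis.com
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-my-network \
  --network=my-network \
  --project=my-project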

How to get Certificate running Kubernetes cert-manager

I am trying to set up TLS in Kubernetes (DigitalOcean) using cert-manager.
Using Let's Encrypt and certbot on a server machine is well described, but I cannot find much information about doing this when running in Kubernetes.
I found this cert-manager ClusterIssuer, but how can I use the certificate from cert-manager on Kubernetes:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
I don't have any email registered?
You can use any temporary mail service, but you will not be notified when the certificate is about to expire.
You can use any of your email addresses (Gmail or anything else). If you want to set up a certificate in Kubernetes using an Ingress and cert-manager, you can follow this DigitalOcean tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
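Once the issuer exists, the usual way to consume it is via an Ingress annotation. A rough sketch (host, service name, and secret name are placeholders; the annotation key shown matches the older certmanager.k8s.io/v1alpha1 API from the question, while current releases use cert-manager.io/cluster-issuer):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Older cert-manager releases (certmanager.k8s.io/v1alpha1) use this key;
    # on current releases use cert-manager.io/cluster-issuer instead
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - example.yourdomain.com
    secretName: example-tls
  rules:
  - host: example.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80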

IP Address specification in deployment of Spring Cloud microservice

I am trying to develop a Spring Cloud microservice. I built a sample demo Spring Cloud project using a Zuul proxy, a Eureka server, and Hystrix. I added my service as a client of the Eureka server and applied the routing. All of it is working well. Now I need to deploy to my AWS EC2 machine. Locally, I added the default zone URL to the application.properties file like the following:
eureka.client.serviceUrl.defaultZone=http://localhost:8071/eureka/
When I move to my EC2 machine, or use AWS ECS, how can I change this IP address to the proper cloud configuration? I am also using ports like localhost:8090 and 8091 for the Zuul and Turbine dashboard projects, etc. So how do I need to change these URLs when deploying to the cloud?
We use domains. So you would point an A-record of api.yourdomain.com at the IP address or load balancer alias that is supporting your services.
Why? When we decided to change infrastructure we are able to change a DNS entry rather than modify all of our microservices' configurations. We recently moved from Eureka/Zuul to AWS's ALB. Using domains allowed us to run both environments in parallel and cutover with no down time. In the event there was a failure in the new environment, the old one was still running and we could cut back with a simple A-record change.
In your application.yml file you can configure different profiles so that you can test locally and then in ECS you can define the profile to use when creating the task definition.
First here is an example of how you can configure your application.yml file to be able to run on different profiles:
############# for running locally ################
server:
  port: 1234
logging:
  file: logs/example.log
  level:
    com.example: INFO
endpoints:
  health:
    sensitive: true
spring:
  datasource:
    url: jdbc:mysql://example.us-east-1.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
security:
  oauth2:
    client:
      clientId: example
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: http://localhost:9999/uaa/oauth/token
      userAuthorizationUri: http://localhost:9999/uaa/oauth/authorize
    resource:
      userInfoUri: http://localhost:9999/uaa/user
---
########## For deployment in Docker containers/ECS ########
spring:
  profiles: prod
  datasource:
    url: jdbc:mysql://example.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
prodnetwork:
  ipAddress: api.yourdomain.com
security:
  oauth2:
    client:
      clientId: exampleid
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/token
      userAuthorizationUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/authorize
    resource:
      userInfoUri: https://${prodnetwork.ipAddress}/v1/uaa/user
Second: Setting up ECS to use your Prod profile:
When you build your docker container, tag it with your new profile's name, in this case "prod"
Third: Create a task definition and define your Docker tag in the repo URL and your new profile in your container run command:
Now when you work on your application on your local machine, you can run it with "localhost" and when you deploy it to ECS you can define your new domain/ip to be used in the run command in your container definition.
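As an illustration of that last step, the prod profile could be activated through the container's environment or run command (the image name is a placeholder; an ECS container definition exposes equivalent environment and command fields):
# Activate the prod profile via an environment variable...
docker run -e SPRING_PROFILES_ACTIVE=prod myrepo/myapp:prod

# ...or via the run command / application arguments
docker run myrepo/myapp:prod java -jar app.jar --spring.profiles.active=prod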