ExternalName for S3 endpoint in Kubernetes with AWS SDK - amazon-web-services

I'm trying to use the AWS S3 SDK for Java to connect to a bucket from a Kubernetes pod running a Spring Boot application. In order to get external access I had to create a service as follows:
kind: Service
apiVersion: v1
metadata:
  name: s3
  namespace: production
spec:
  type: ExternalName
  externalName: nyc3.digitaloceanspaces.com
And then I modified my configuration in application.properties specifying the endpoint:
cloud.aws.endpoint=s3
cloud.aws.credentials.accessKey=ASD
cloud.aws.credentials.secretKey=123
cloud.aws.credentials.instanceProfile=true
cloud.aws.credentials.useDefaultAwsCredentialsChain=true
Because the SDK builds the host name for the bucket as bucket.s3..., I modified my client to use "path style" access with this configuration:
@Bean(name = "amazonS3")
public AmazonS3Client amazonS3Client(AWSCredentialsProvider credentialsProvider,
        RegionProvider regionProvider) {
    EndpointConfiguration endpointConfiguration = new EndpointConfiguration(
            endpoint, regionProvider.getRegion().getName());
    return (AmazonS3Client) AmazonS3ClientBuilder.standard()
            .withCredentials(credentialsProvider)
            .withEndpointConfiguration(endpointConfiguration)
            .withPathStyleAccessEnabled(true)
            .build();
}
But when I try to perform any bucket operation I get the following error regarding the name mismatch with the SSL certificate:
javax.net.ssl.SSLPeerUnverifiedException: Certificate for <s3> doesn't match any of the subject alternative names: [*.nyc3.digitaloceanspaces.com, nyc3.digitaloceanspaces.com]
How can I avoid this certificate error?

I am having a similar issue. I believe the AmazonS3Client API doesn't work well with a k8s service name here. I had to use the real host name directly instead of the K8s service name.
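The error itself is TLS hostname verification: the client connects to the host `s3`, the ExternalName CNAME resolves to DigitalOcean, but the certificate presented by the server only covers `*.nyc3.digitaloceanspaces.com`, so the handshake fails. A minimal sketch of the workaround, assuming the application reads the endpoint from the `cloud.aws.endpoint` property (shown here in application.yml form), is to point the endpoint at the real hostname and skip the ExternalName indirection entirely:

```yaml
# application.yml sketch: use the real Spaces hostname so the
# TLS certificate's subject alternative names actually match.
cloud:
  aws:
    endpoint: nyc3.digitaloceanspaces.com
```

With path-style access enabled as in the client bean above, requests then go to `https://nyc3.digitaloceanspaces.com/<bucket>/...`, which the wildcard certificate covers.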

Related

GKE Gateway with a wildcard Certificate Manager certificate

I'm trying to set up GKE Gateway with an HTTPS listener using a wildcard certificate managed via Certificate Manager.
The problem I'm facing is not in provisioning of the certificate, which was done successfully following the DNS Authorization tutorial and this answer. I've successfully provisioned a wildcard certificate, which is shown by gcloud certificate-manager certificates describe <cert-name> as ACTIVE and
AUTHORIZED on both the domain and its wildcard subdomain. I've also provisioned the associated Certificate Map and Map Entry (all via Terraform) and created a global static IP address and a wildcard A record for it in Cloud DNS.
However, when I try to use this cert and address in the GKE Gateway resource, the resource gets "stuck" (never reaches SYNC phase), and there's no HTTPS GCLB provisioned as seen via gcloud or Cloud Console.
Here's the config I was trying to use for it:
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-https
  annotations:
    networking.gke.io/certmap: gke-gateway
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  addresses:
  - type: NamedAddress
    value: gke-gateway
I've tried multiple different combinations of this config, including with an explicit IPAddress and without allowedRoutes, but no matter what I try it doesn't seem to work. I can only see the initial ADD and UPDATE events in the output of kubectl describe gateway external-https, and as far as I know there are no logs for it to be found anywhere (the GKE Gateway Controller is part of the GKE Control Plane and doesn't expose any logging to customers, from what I understand).
The only time I was able to make either internal or external Gateway to work is when using HTTP protocol, i.e. without certificates. Hence, I think this has to do with HTTPS, and probably more specifically with linking to the managed wildcard certificate.
Additionally, I should mention that my attempts at deploying the Gateway fail most of the time (i.e. the resource gets "stuck" in the same way), even when reusing a previously-working HTTP config. I'm not sure what the source of this flakiness is (apart from maybe some internal quota), but I imagine this is fully expected, as the service is still in Beta.
Has anyone been able to actually provision a Gateway with HTTPS listener and wildcard certs, and how?
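For anyone comparing configs: a Gateway listener on its own serves nothing until a route attaches to it, so it's worth ruling out a missing route before suspecting the certificate wiring. A minimal HTTPRoute sketch that would bind to the https listener above (the route name, hostname, backend Service, and port are all placeholders, not from the question):

```yaml
# Hypothetical HTTPRoute attaching to the "https" listener of the
# "external-https" Gateway; backend details are placeholders.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route        # placeholder name
spec:
  parentRefs:
  - name: external-https
    sectionName: https       # bind specifically to the HTTPS listener
  hostnames:
  - "app.example.com"        # placeholder domain covered by the wildcard cert
  rules:
  - backendRefs:
    - name: example-service  # placeholder backend Service
      port: 8080
```

If the Gateway still never reaches a synced state with a route attached, the certificate-map linkage remains the prime suspect.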

Access external jwksuri behind a company proxy

I am new to Istio and have a question about configuring a RequestAuthentication policy. The policy uses a jwksUri which is an external URI, and it is applied in the istio-system namespace. The moment I apply this policy and run
>istioctl proxy-status
the ingress gateway on which the policy is applied has its LDS marked STALE. If I remove this policy the gateway goes back into the SYNCED state. It seems this jwksUri is not accessible since we are behind a company proxy. I created a ServiceEntry to access the external JWKS URI, something like this:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: jwksexternal
spec:
  hosts:
  - authorization.company.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
EOF
I also tried to create one more ServiceEntry following the "Configuring traffic to external proxy" documentation: https://istio.io/latest/docs/tasks/traffic-management/egress/http-proxy/
But this is not working. How should I configure the company proxy in Istio?
Edit: these are the logs in istiod (please note that https://authorization.company.com/jwk is an external URL):
2021-06-02T14:35:39.423938Z error model Failed to fetch public key from "https://authorization.company.com/jwk": Get "https://authorization.company.com/jwk": dial tcp: lookup authorization.company.com on 10.X.0.X:53: no such host
2021-06-02T14:35:39.423987Z error Failed to fetch jwt public key from "https://authorization.company.com/jwk": Get "https://authorization.company.com/jwk": dial tcp: lookup authorization.company.com on 10.X.0.X:53: no such host
2021-06-02T14:35:39.424917Z info ads LDS: PUSH for node:istio-ingressgateway-5b69b5448c-8wbt4.istio-system resources:1 size:4.5kB
2021-06-02T14:35:39.433976Z warn ads ADS:LDS: ACK ERROR router~10.X.48.X~istio-ingressgateway-5b69b5448c-8wbt4.istio-system~istio-system.svc.cluster.local-105 Internal:Error adding/updating listener(s) 0.0.0.0_8443: Provider 'origins-0' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
I have not been able to find a workaround for this issue. For now I have embedded the JWKS directly into the JWT rules, but this has a problem: whenever the public keys get rotated, the JWT rules fail. This is a proxy issue, but I am not sure how to bypass it.
By default, Istio allows traffic to external systems.
See https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy
So if the problem is that the JWKS URL can't be accessed, it is most likely not because of Istio, and a ServiceEntry won't help. I'd guess the problem is somewhere else, not in Istio.
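One detail worth noting from the logs: the DNS lookup fails inside istiod itself, because the control plane (not the sidecar) resolves and fetches the jwksUri. Mesh-level constructs like ServiceEntry therefore never apply to this fetch. A possible approach, assuming the corporate proxy is reachable at a placeholder address like proxy.company.com:3128, is to set standard proxy environment variables on the istiod deployment, e.g. via an IstioOperator overlay:

```yaml
# Sketch only: proxy address and NO_PROXY entries are placeholders.
# Whether istiod's JWKS fetcher honors these variables depends on
# the Istio version; verify against your installation.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        - name: HTTPS_PROXY
          value: http://proxy.company.com:3128   # placeholder corporate proxy
        - name: NO_PROXY
          value: svc,svc.cluster.local,10.0.0.0/8  # keep in-cluster traffic direct
```

Treat this as a sketch under those assumptions rather than a confirmed fix.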

How to add Custom Resource Definition for ManagedCertificate in GCP Kubernetes cluster?

I'm trying to add an SSL certificate to my GCP kubernetes cluster. My domain is already pointing to the cluster's external endpoint. I'm creating an ingress and the console gives me two options. I can upload my certificate, or I can create a Google-managed certificate. I would prefer the managed certificate, but that option is greyed out and it gives me this help text:
To create Google-managed certificates, your cluster needs to have ManagedCertificate Custom Resource Definition present.
So how do I add ManagedCertificate Custom Resource Definition to my cluster?
Assuming your GKE cluster is version 1.17.9-gke.6300 or later, the ManagedCertificate CRD is already installed, and you just need to apply a ManagedCertificate resource of the form
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: certificate-name
spec:
  domains:
  - domain-name1
  - domain-name2
If you create the above in a file named certificate-name.yaml (replacing certificate-name with what you used in the definition above), you'd simply run
kubectl apply -f certificate-name.yaml
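To actually attach the certificate, the Ingress then references it by name via the networking.gke.io/managed-certificates annotation. A minimal sketch (the Ingress name and backend Service are placeholders, not from the question):

```yaml
# Hypothetical Ingress referencing the ManagedCertificate above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress    # placeholder name
  annotations:
    networking.gke.io/managed-certificates: certificate-name
spec:
  defaultBackend:
    service:
      name: example-service  # placeholder backend Service
      port:
        number: 80
```

The certificate only becomes ACTIVE after the load balancer is provisioned and the domain's DNS points at its IP.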

Istio JWT verification against JWKS with internally signed certificate

I'm attempting to configure Istio authentication policy to validate our JWT.
I set the policy and can see it takes effect. However, it won't allow anything to connect. When applying the policy, if I inspect the istio-pilot logs I can see it failing to retrieve the signing keys with a certificate error:
2018-10-24T03:22:41.052354Z error model Failed to fetch pubkey from "https://iam.company.com.au/oauth2/jwks": Get https://iam.company.com.au/oauth2/jwks: x509: certificate signed by unknown authority
2018-10-24T03:22:41.052371Z warn Failed to fetch jwt public key from "https://iam.company.com.au/oauth2/jwks "
This, I assume, is due to the server using a TLS certificate signed by our corporate CA.
How do I get istio-pilot to trust certs from our CA? I have tried installing ca-certificates and including our CA public key in the Ubuntu certificate store, but it still won't work.
Policy:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "our-service-jwt-example"
spec:
  targets:
  - name: our-service
  origins:
  - jwt:
      issuer: iam.company.com.au
      audiences:
      - YRhT8xWtcLrOQmqJUGPA1p6O6mUa
      jwksUri: "https://iam.company.com.au/oauth2/jwks"
  principalBinding: USE_ORIGIN
Pilot does the JWKS resolving on behalf of Envoy, so Pilot itself needs to have the CA certificate. At the moment there is no way to add a CA cert to Pilot unless you add the cert when deploying Pilot in Istio: https://github.com/istio/istio/blob/master/pilot/pkg/model/jwks_resolver.go
This has been added as of Istio 1.4:
https://github.com/istio/istio/pull/17176
You can provide an extra root certificate in PEM format in the pilot.jwksResolverExtraRootCA Helm chart value (this also works with IstioOperator in more recent versions of Istio). It creates a ConfigMap containing an extra.pem file that gets mounted into the istio pilot container as /cacerts/extra.pem, from where it is picked up automatically.
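A minimal IstioOperator sketch of setting that value (the certificate content is a placeholder; substitute your corporate root CA in PEM format):

```yaml
# Sketch: pass the extra root CA via the pilot.jwksResolverExtraRootCA
# Helm value through an IstioOperator overlay.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      jwksResolverExtraRootCA: |
        -----BEGIN CERTIFICATE-----
        ...placeholder: your corporate root CA, PEM-encoded...
        -----END CERTIFICATE-----
```

After applying, restart istiod so the mounted /cacerts/extra.pem is picked up for JWKS fetches.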

Reference ConfigMap / Secret in Kubernetes object metadata

I have a Kubernetes cluster provisioned at AWS with kops. I use the route53-mapper to configure the ELB based on Service annotations, and I use namespaces for the different environments (dev, test, prod), with configuration defined in ConfigMap and Secret objects.
The environments have different hostnames and TLS certificates:
kind: Service
apiVersion: v1
metadata:
  name: http-proxy-service
  labels:
    dns: route53
  annotations:
    domainName: <env>.myapp.example.io
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: |-
      arn:aws:acm:eu-central-1:44315247xxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  selector:
    app: http-proxy
  ports:
  - name: https
    port: 443
Is there a Kubernetes way to reference ConfigMap/Secret objects in the metadata section of the object descriptor so I can have only one file for all environments?
I am looking for a pure Kubernetes solution, not using any templating before sending the file to the API via kubectl.
There is not.
FWIW, it seems nuts that the mapper was designed to pull cert data from annotations on a Service; Service objects are not otherwise secret.
The mapper should instead be able to consume cert data from a Secret with well-defined fields indicating which domain should be wired with which cert data in front of which service.
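For comparison, the natural shape such a Secret could take is the built-in kubernetes.io/tls type; a sketch of the idea (the name and base64 data values below are placeholders, and the per-domain wiring field is hypothetical since no mapper actually defines it):

```yaml
# Hypothetical Secret shape a mapper could consume instead of
# Service annotations; all values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: myapp-tls            # placeholder name
  labels:
    dns: route53             # hypothetical selector a mapper might use
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi... # base64-encoded PEM certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi... # base64-encoded PEM private key (placeholder)
```

Keeping the cert material in a Secret would also let per-environment namespaces carry their own certificates while the Service manifest stays identical.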