Getting AWS Root Certificates for IoT Core Thing

In AWS IoT Core I created a thing, created a Policy for the thing, created a Certificate for the thing, and attached the Policy to the Certificate.
After that I downloaded the .crt and .key files of the Certificate and verified that they match with the following command:
(openssl x509 -noout -modulus -in certificate.pem.crt | openssl md5 ; openssl rsa -noout -modulus -in private.pem.key | openssl md5) | uniq
and got a single hash back, which indicates that they match
(stdin)= 97c1a8816c35acbfgt04f325aeacae6
The only thing left was to find the Root CA that my thing Certificate was signed with.
I found the AWS developer guide for the server certs here and I downloaded the VeriSign Class 3 Public Primary G5 root CA certificate, which I renamed to rootCA.pem.
But when I ran a test to verify the CA with the following command:
openssl s_client -connect <my ID>.iot.ap-southeast-2.amazonaws.com:8443 \
-CAfile /etc/mosquitto/certs/rootCA.pem \
-cert /etc/mosquitto/certs/certificate.pem.crt \
-key /etc/mosquitto/certs/private.pem.key
I get this response with the error unable to get local issuer certificate (see below)
CONNECTED(00000003)
depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network, CN = Symantec Class 3 ECC 256 bit SSL CA - G2
verify error:num=20:unable to get local issuer certificate
---
Certificate chain
0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=*.iot.ap-southeast-2.amazonaws.com
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 ECC 256 bit SSL CA - G2
1 s:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 ECC 256 bit SSL CA - G2
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDlTCCAzygAwIBAgIQGw ...
-----END CERTIFICATE-----
subject=/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=*.iot.ap-southeast-2.amazonaws.com
issuer=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 ECC 256 bit SSL CA - G2
---
No client certificate CA names sent
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: ECDSA+SHA512:RSA+SHA512:ECDSA+SHA384:RSA+SHA384:ECDSA+SHA256:RSA+SHA256:DSA+SHA256:ECDSA+SHA224:RSA+SHA224:DSA+SHA224:ECDSA+SHA1:RSA+SHA1:DSA+SHA1
Shared Requested Signature Algorithms: ECDSA+SHA512:RSA+SHA512:ECDSA+SHA384:RSA+SHA384:ECDSA+SHA256:RSA+SHA256:DSA+SHA256:ECDSA+SHA224:RSA+SHA224:DSA+SHA224:ECDSA+SHA1:RSA+SHA1:DSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2398 bytes and written 1448 bytes
Verification error: unable to get local issuer certificate
---
New, TLSv1.2, Cipher is ECDHE-ECDSA-AES256-GCM-SHA384
Server public key is 256 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-ECDSA-AES256-GCM-SHA384
Session-ID: EB3B32C8 …
Session-ID-ctx:
Master-Key: 783A17EB6 …
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1587558792
Timeout : 7200 (sec)
Verify return code: 20 (unable to get local issuer certificate)
Extended master secret: yes
---
Does someone know how to get the Root CA for my thing Certificate?
Thanks
Edit:
Thanks to Ben T's advice I created a new thing in the Mumbai Region. Surprisingly, I can now see an option to download the root certificate directly from the Certificate creation screen.
After running openssl s_client -connect again with the new certs/key I finally get verify return:1.
AWESOME

The VeriSign certificate is used for legacy endpoints.
You should use the newer certificates for the Amazon Trust Services (ATS) endpoints, e.g. the one at https://www.amazontrust.com/repository/AmazonRootCA1.pem
See https://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html#server-authentication-certs
All new AWS IoT Core regions, beginning with the May 9, 2018 launch of AWS IoT Core in the Asia Pacific (Mumbai) Region, serve only ATS certificates.
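As a hedged retest sketch (the -ats endpoint variant and the local file names are assumptions; ATS-signed server certificates are served on the -ats data endpoints):
# Fetch the Amazon Trust Services root CA and retry the handshake against it:
curl -sO https://www.amazontrust.com/repository/AmazonRootCA1.pem
openssl s_client -connect <my ID>-ats.iot.ap-southeast-2.amazonaws.com:8443 \
-CAfile AmazonRootCA1.pem \
-cert certificate.pem.crt \
-key private.pem.key
# A trusted chain ends with: Verify return code: 0 (ok)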

Related

Filebeat Kafka client failing SSL handshake with AWS MSK

We are trying to send logs from Filebeat to AWS MSK (Provisioned) using the available Kafka output configuration. We're using mTLS authentication, with a Root CA and an Intermediate CA set up with Vault. The intermediate CA lives in AWS PCA and is assigned to the AWS MSK cluster, which in turn issues the certs to the brokers on AWS MSK.
We are able to do mTLS authentication using a Kafka client with the admin setup (Kafka client with the required certificates); however, Filebeat's Kafka output is failing the SSL handshake. All the certs provided in the handshake are valid.
Filebeat docker image: docker.elastic.co/beats/filebeat:8.5.1
Our Filebeat config looks like
filebeat.yaml
---
filebeat.shutdown_timeout: 0
fields_under_root: false
logging.level: debug
.
.
.
output.kafka:
  hosts: 'XXXXMSK_BOOTSTRAP_HOSTSXXXX'
  ssl.enabled: true
  ssl.verification_mode: 'certificate'
  ssl.certificate: '/path/to/obained-cert.crt'
  ssl.key: '/path/to/obained-key.pki.key'
  ssl.authorities: ['/path/to/root/int/ca/combined-file/msk_ca_chain.pem']
  topic: 'XXXXKAFKA_TOPICXXXX'
  codec.format:
    string: '{"timestamp": "%{[@timestamp]}", "message": %{[message]}, "host": %{[host]}}'
  close_inactive: 10m
  required_acks: 1
  partition.round_robin:
    reachable_only: false
  keep-alive: 30000ms
obained-cert.crt
-----BEGIN CERTIFICATE-----
MIIXXXXX
#Obtained Cert#
-----END CERTIFICATE-----
obained-key.pki.key
-----BEGIN RSA PRIVATE KEY-----
MIIXXXXX
#Obtained private key#
-----END RSA PRIVATE KEY-----
msk_ca_chain.pem
-----BEGIN CERTIFICATE-----
MIIXXXXX
#Intermediate CA Cert#
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIXXXXX
#Root CA Cert#
-----END CERTIFICATE-----
The error in Filebeat log is:
{"log.level":"error","#timestamp":"2023-01-06T10:59:48.701Z","log.logger":"kafka","log.origin":{"file.name":"kafka/client.go","file.line":337},"message":"Kafka (topic=XXXXKAFKA_TOPICXXXX): kafka: client has run out of available brokers to talk to (Is your cluster reachable?)","service.name":"filebeat","ecs.version":"1.6.0"}
The error on AWS Cloudwatch for the brokers is:
[2023-01-06 12:48:07,716] INFO [SocketServer listenerType=ZK_BROKER, nodeId=3] Failed authentication with /INTERNAL_IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2023-01-06 12:48:08,004] INFO [SocketServer listenerType=ZK_BROKER, nodeId=2] Failed authentication with /INTERNAL_IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2023-01-06 12:48:08,016] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Failed authentication with /INTERNAL_IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I've enabled debug logs on Filebeat, but I'm not seeing any information regarding why SSL handshake has failed.
Is there any way we could see any debug logs on Filebeat Kafka or AWS MSK Broker side to identify why SSL handshake is failing? Also, any pointers around possible problems in filebeat.yaml config are also appreciated.
Thanks in advance!!!
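One way to surface the failure outside Filebeat is to probe a broker directly with openssl, presenting the same client cert (a sketch; 9094 is the usual TLS listener port on provisioned MSK, and the paths match the config above):
openssl s_client -connect <broker-host>:9094 \
-CAfile /path/to/root/int/ca/combined-file/msk_ca_chain.pem \
-cert /path/to/obained-cert.crt \
-key /path/to/obained-key.pki.key </dev/null
# The 'Acceptable client certificate CA names' section and the final
# 'Verify return code' line usually show which side rejects the handshake.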
Sorry for answering my own question. I have resolved this issue by appending the intermediate CA cert to the obtained certificate and then supplying only the root CA in the ssl.authorities section.
The changes I made:
Appended the intermediate CA cert to the file /path/to/obained-cert.crt used for the parameter ssl.certificate
Provided only the root certificate instead of the chain of certificates for the parameter ssl.authorities, i.e. ['/path/to/root/ca/msk_root_ca.pem']
This has done the trick!!
So, if you have an intermediate CA in the PKI, always append it to the obtained cert in order to carry out the SSL handshake.
I hope this helps others.
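A minimal sketch of that fix (intermediate_ca.pem is a hypothetical file name; the other paths match the config above):
# Leaf certificate first, then the intermediate appended to the same file:
cat intermediate_ca.pem >> /path/to/obained-cert.crt
# Sanity-check the chain: the leaf (first cert in the file) should now verify
# against the root alone, with the intermediate supplied as untrusted:
openssl verify -CAfile /path/to/root/ca/msk_root_ca.pem \
-untrusted intermediate_ca.pem /path/to/obained-cert.crt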

(Public certificate) https-backendauth.config EC2 public certificate

I have been following along with the https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html tutorial to activate end-to-end encryption in my classic load-balancer environment. I need to include the EC2 public certificate in https-backendauth.config, but I am not sure what I am supposed to do.
I do have a certificate registered to my domain using the CNAME, and the certificate was successfully issued by AWS. Is that the certificate they are talking about? But what is the syntax between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----? Do I just include the CNAME and CVALUE? Please help. Much appreciated.
https-backendauth.config:
option_settings:
  # Backend Encryption Policy
  aws:elb:policies:backendencryption:
    PublicKeyPolicyNames: backendkey
    InstancePorts: 443
  # Public Key Policy
  aws:elb:policies:backendkey:
    PublicKey: |
      -----BEGIN CERTIFICATE-----
      ################################################################
      ################################################################
      ################################################################
      ################################################################
      ################################################
      -----END CERTIFICATE-----
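For what it's worth, the earlier steps of that tutorial install a certificate on the EC2 instances themselves, and it is that certificate's PEM body (not the ACM DNS-validation CNAME/value pair) that goes between the BEGIN/END markers. A hedged way to print it (the instance address and path are assumptions):
# Print the certificate the instances serve on port 443:
openssl s_client -connect <instance-ip>:443 </dev/null 2>/dev/null | openssl x509
# Or read it straight off an instance if you know where it is stored:
openssl x509 -in /etc/pki/tls/certs/server.crt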

AWS API gateway High Availability setup in us-east-2 and us-west-2

I have followed step by step the instructions in the following documentation:
Multi-region serverless app with API Gateway and Lambda
API Gateway HA setup in us-east-1 and us-east-2
but when I run:
curl -v --location --request POST 'https://mydomain.example.com/basepath/path'
EDIT:
I get the following output:
* Trying 3.23.57.129:443...
* Connected to mydomain.example.com (3.23.57.129) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=*.execute-api.us-east-2.amazonaws.com
* start date: Jul 1 00:00:00 2022 GMT
* expire date: Jul 30 23:59:59 2023 GMT
* subjectAltName does not match mydomain.example.com
* SSL: no alternative certificate subject name matches target host name 'mydomain.example.com'
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
curl: (60) SSL: no alternative certificate subject name matches target host name 'mydomain.example.com'
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
EDIT2:
I have 2 records configured in my hosted zone in Route 53, as both documents suggest. These are the records for the APIs I've deployed in us-east-2 and us-west-2:
Record name: mydomain.example.com
Record Type: CNAME
Value: XXX500mdd4.execute-api.us-east-2.amazonaws.com (the domain of the api deployed over us-east-2)
Alias: No
TTL: 60 seconds
Routing policy: Failover
Health check ID: 2fcf9c25-25f0-486f-b351-c1df0562bb1d
Record name: mydomain.example.com
Record Type: CNAME
Value: sinbpssYYY.execute-api.us-west-2.amazonaws.com (the domain of the api deployed over us-west-2)
Alias: No
TTL: 60 seconds
Routing policy: Failover
Health check ID: 276e957a-77ed-4471-8a43-ee3eff00626a
The first record is configured as the primary, and the second one as the secondary. I've followed exactly the ideas stated in both cited documents, but I can't see what I'm doing wrong.
EDIT3:
To validate the generated certificate I followed the AWS documentation for DNS validation (the same one you supplied in your link). For that purpose I followed the steps in the topic "To set up DNS validation in the console". That is:
Opened the ACM console
Selected the certificate from the list
Then, in the Domain section, I used the Create records in Route 53 button. I performed these steps for the certificate requested in us-east-2 and also for the one requested in us-west-2
After performing these actions for the us-east-2 region, the following record appears in the hosted Zone at Route 53:
Record name: _85e39035fc67b4892c097f367cab18b0.mydomain.example.com
Record Type: CNAME
Value: _8f9709078cc0246638fe9e1a2571ff17.fpktwqqglf.acm-validations.aws.
Alias: No
TTL: 300 seconds
Routing policy: Simple
However, when I complete the steps for the certificate requested in us-west-2, no other record appears in the hosted zone. My hypothesis is that since both certificates cover the same domain name (mydomain.example.com), the record that would supposedly be added when validating the us-west-2 certificate has the same name and value, which is why a single record validates both certificates.
EDIT4:
When I bought the domain, the hosted zone was automatically created too, with the followings two records:
Record name: example.com
Record Type: NS
Value: ns-31.awsdns-03.com, ns-1919.awsdns-47.co.uk, ns-741.awsdns-28.net, ns-1192.awsdns-21.org
Alias: No
TTL: 172800 seconds
Routing policy: Simple
Record name: example.com
Record Type: SOA
Value: ns-31.awsdns-03.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
Alias: No
TTL: 900 seconds
Routing policy: Simple
Can anybody please help me?
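One check that may narrow this down: see which certificate the endpoint actually presents for the custom hostname, since API Gateway selects certificates by SNI, and a *.execute-api.* subject usually means the custom domain name / base path mapping is not in place in that region (a sketch, using only the hostname from the question):
openssl s_client -connect mydomain.example.com:443 -servername mydomain.example.com </dev/null 2>/dev/null \
| openssl x509 -noout -subject
# A correctly configured custom domain should present a certificate for
# mydomain.example.com, not *.execute-api.<region>.amazonaws.com.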

version 3.11 | AWS installation fails due to HTTPS / x509 issues on waiting for control plane

Trying to do an OpenShift 3.11 install with a 3-master setup, 2 infra and 2 nodes. I didn't use an LB node since I figured the AWS ELB would take care of that for me.
My current issue is that the installation fails on the wait-for-control-plane task.
failed: [ip-10-0-4-29.us-east-2.compute.internal] (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": {"cmd": "/usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system"
Different errors shown below
I've done the following.
Because this is only a demo system I wanted to go the cheap route and create self-signed certs, so I ran the following (this assumes openshift.key was generated beforehand, e.g. with openssl genrsa):
openssl req -new -key openshift.key -out openshift.csr
openssl x509 -req -days 1095 -in openshift.csr -signkey openshift.key -out openshift.crt
Then within my hosts file I added the following:
openshift_master_named_certificates=[{"certfile": "/home/ec2-user/certs/openshift.crt", "keyfile": "/home/ec2-user/certs/openshift.key"}]
Next I created an ELB accepting HTTP traffic on port 8443 and directing it as HTTP to port 8443 on any of the masters.
When I do this I get the following failure when re-running the command from the failing task:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
Unable to connect to the server: http: server gave HTTP response to HTTPS client
If I change the ELB to take HTTP traffic and direct it to HTTPS 8443, I get the following error:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
The connection to the server os.domain-name.net:8443 was refused - did you specify the right host or port?
To change the ELB to accept HTTPS traffic I had to follow the guide for creating SSL certs to use in AWS, but even then, accepting HTTPS traffic on 8443 and sending it either via HTTP or HTTPS to 8443 on the master node results in this error:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
Unable to connect to the server: x509: certificate signed by unknown authority
I've also copied in my hosts file, just in case I've got something off with it.
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_clusterid=openshift
openshift_cloudprovider_kind=aws
openshift_hosted_manage_registry=true
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_hosted_registry_storage_s3_secretkey="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_hosted_registry_storage_s3_bucket=os-test-os-bucket
openshift_hosted_registry_storage_s3_region=us-west-2
openshift_hosted_registry_storage_s3_chunksize=26214400
openshift_hosted_registry_storage_s3_rootdirectory=/registry
openshift_hosted_registry_pullthrough=true
openshift_hosted_registry_acceptschema2=true
openshift_hosted_registry_enforcequota=true
openshift_hosted_registry_replicas=3
#openshift_enable_excluders=false
openshift_disable_check=memory_availability
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name': 'centos-okd-ci', 'baseurl' :'https://rpms.svc.ci.openshift.org/openshift-origin-v3.11', 'gpgcheck' :'0', 'enabled' :'1'}]
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}]
openshift_router_selector='node-role.kubernetes.io/infra=true'
openshift_registry_selector='node-role.kubernetes.io/infra=true'
openshift_metrics_install_metrics=true
openshift_master_named_certificates=[{"certfile": "/home/ec2-user/certs/openshift.crt", "keyfile": "/home/ec2-user/certs/openshift.key"}]
# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=os.domain-name.net
openshift_master_cluster_public_hostname=os.domain-name.net
# host group for masters
[masters]
ip-10-0-4-29.us-east-2.compute.internal
ip-10-0-5-54.us-east-2.compute.internal
ip-10-0-6-8.us-east-2.compute.internal
[etcd]
ip-10-0-4-29.us-east-2.compute.internal
ip-10-0-5-54.us-east-2.compute.internal
ip-10-0-6-8.us-east-2.compute.internal
# host group for nodes, includes region info
[nodes]
#master
ip-10-0-4-29.us-east-2.compute.internal openshift_node_group_name='node-config-master'
ip-10-0-5-54.us-east-2.compute.internal openshift_node_group_name='node-config-master'
ip-10-0-6-8.us-east-2.compute.internal openshift_node_group_name='node-config-master'
#infra
ip-10-0-4-28.us-east-2.compute.internal openshift_node_group_name='node-config-infra'
ip-10-0-5-241.us-east-2.compute.internal openshift_node_group_name='node-config-infra'
#node
ip-10-0-4-162.us-east-2.compute.internal openshift_node_group_name='node-config-compute'
ip-10-0-5-146.us-east-2.compute.internal openshift_node_group_name='node-config-compute'
If anyone can help me get past this hurdle so I can finally demo a CI/CD pipeline using OpenShift, I'd be truly grateful.
I know this is an old thread but I was running into the same issue with my ELB configured as HTTPS. I changed the listener to TCP and used port 443 for both the Load Balancer Port and the Instance Port. For the Health Check, make sure you are using Ping Protocol HTTPS, Ping Port 443 and a Ping Path of "/". Those configuration changes allowed the installation to proceed.
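A hedged sketch of that listener change with the AWS CLI (classic ELB commands; the load balancer name is a placeholder):
# Replace the HTTPS listener with plain TCP passthrough on 443:
aws elb create-load-balancer-listeners \
--load-balancer-name openshift-master-elb \
--listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443"
# Health check as described above: HTTPS ping on port 443, path "/":
aws elb configure-health-check \
--load-balancer-name openshift-master-elb \
--health-check "Target=HTTPS:443/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2"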

Istio JWT verification against JWKS with internally signed certificate

I'm attempting to configure Istio authentication policy to validate our JWT.
I set the policy and can see it take effect. However, it won't allow anything to connect. When applying the policy, if I inspect the istio-pilot logs I can see it failing to retrieve the signing keys, giving a certificate error.
2018-10-24T03:22:41.052354Z error model Failed to fetch pubkey from "https://iam.company.com.au/oauth2/jwks": Get https://iam.company.com.au/oauth2/jwks: x509: certificate signed by unknown authority
2018-10-24T03:22:41.052371Z warn Failed to fetch jwt public key from "https://iam.company.com.au/oauth2/jwks "
This I assume would be due to this server using a TLS certificate signed by our corporate CA.
How do I get istio-pilot to trust certs from our CA? I have tried installing ca-certificates and including our CA public key in the Ubuntu certificates but it still won't work.
Policy:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "our-service-jwt-example"
spec:
  targets:
  - name: our-service
  origins:
  - jwt:
      issuer: iam.company.com.au
      audiences:
      - YRhT8xWtcLrOQmqJUGPA1p6O6mUa
      jwksUri: "https://iam.company.com.au/oauth2/jwks"
  principalBinding: USE_ORIGIN
Pilot does the JWKS resolving for Envoy, so Pilot needs to have the CA certificate. At the moment there is no way to add a CA cert to Pilot unless you add the cert when deploying Pilot in Istio. https://github.com/istio/istio/blob/master/pilot/pkg/model/jwks_resolver.go
This has been added as of Istio 1.4:
https://github.com/istio/istio/pull/17176
You can provide an extra root certificate in PEM format in the pilot.jwksResolverExtraRootCA helm chart value (also works with IstioOperator for more recent versions of Istio) and it will create a ConfigMap containing an extra.pem file that should get mounted into the istio pilot container as /cacerts/extra.pem. From there it should get picked up automatically.
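A hedged sketch of setting that value at install/upgrade time with Helm (the chart reference and PEM file name are placeholders; --set-file reads the file contents into the value):
helm upgrade istio <istio-chart> --namespace istio-system \
--set-file pilot.jwksResolverExtraRootCA=./corporate-root-ca.pem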