I have been following along with the https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html tutorial to enable end-to-end encryption in my Classic Load Balancer environment. I need to include the EC2 public certificate in https-backendauth.config, but I am not sure what I am supposed to do.
I have registered a certificate for my domain using a CNAME record, and the certificate was successfully issued by AWS. Is that the certificate they are referring to? But what is the syntax between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----? Do I just include the CNAME and CVALUE? Please help. Much appreciated.
https-backendauth.config:
option_settings:
  # Backend Encryption Policy
  aws:elb:policies:backendencryption:
    PublicKeyPolicyNames: backendkey
    InstancePorts: 443
  # Public Key Policy
  aws:elb:policies:backendkey:
    PublicKey: |
      -----BEGIN CERTIFICATE-----
      ################################################################
      ################################################################
      ################################################################
      ################################################################
      ################################################
      -----END CERTIFICATE-----
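For reference, the tutorial has the instance generate its own certificate, so the block would hold the base64 PEM body of that file, not a CNAME/CVALUE pair. A sketch of printing it, assuming the path used in the tutorial (an assumption here):
# Prints the full -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- block
# (path is the one the tutorial appears to use; adjust to where your cert lives):
openssl x509 -in /etc/pki/tls/certs/server.crt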
We are trying to send logs from Filebeat to AWS MSK (Provisioned) using the Kafka output. We use mTLS authentication with a root CA and an intermediate CA managed with Vault. The intermediate CA is hosted in AWS PCA and assigned to the AWS MSK cluster, which in turn issues the certificates for the MSK brokers.
We are able to perform mTLS authentication with a plain Kafka client using the admin setup (Kafka client with the required certificates); however, Filebeat's Kafka output fails the SSL handshake. All the certs provided in the handshake are valid.
Filebeat docker image: docker.elastic.co/beats/filebeat:8.5.1
Our Filebeat config looks like this:
filebeat.yaml
---
filebeat.shutdown_timeout: 0
fields_under_root: false
logging.level: debug
.
.
.
output.kafka:
  hosts: 'XXXXMSK_BOOTSTRAP_HOSTSXXXX'
  ssl.enabled: true
  ssl.verification_mode: 'certificate'
  ssl.certificate: '/path/to/obtained-cert.crt'
  ssl.key: '/path/to/obtained-key.pki.key'
  ssl.authorities: ['/path/to/root/int/ca/combined-file/msk_ca_chain.pem']
  topic: 'XXXXKAFKA_TOPICXXXX'
  codec.format:
    string: '{"timestamp": "%{[@timestamp]}", "message": %{[message]}, "host": %{[host]}}'
  close_inactive: 10m
  required_acks: 1
  partition.round_robin:
    reachable_only: false
  keep_alive: 30000ms
obtained-cert.crt
-----BEGIN CERTIFICATE-----
MIIXXXXX
#Obtained Cert#
-----END CERTIFICATE-----
obtained-key.pki.key
-----BEGIN RSA PRIVATE KEY-----
MIIXXXXX
#Obtained private key#
-----END RSA PRIVATE KEY-----
msk_ca_chain.pem
-----BEGIN CERTIFICATE-----
MIIXXXXX
#Intermediate CA Cert#
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIXXXXX
#Root CA Cert#
-----END CERTIFICATE-----
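A local check along these lines (file names as above) can confirm that the chain verifies and the key is well-formed:
# Verify the client cert against the combined CA chain:
openssl verify -CAfile msk_ca_chain.pem obtained-cert.crt
# Sanity-check the private key:
openssl rsa -in obtained-key.pki.key -check -noout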
The error in Filebeat log is:
{"log.level":"error","#timestamp":"2023-01-06T10:59:48.701Z","log.logger":"kafka","log.origin":{"file.name":"kafka/client.go","file.line":337},"message":"Kafka (topic=XXXXKAFKA_TOPICXXXX): kafka: client has run out of available brokers to talk to (Is your cluster reachable?)","service.name":"filebeat","ecs.version":"1.6.0"}
The error on AWS Cloudwatch for the brokers is:
[2023-01-06 12:48:07,716] INFO [SocketServer listenerType=ZK_BROKER, nodeId=3] Failed authentication with /INTERNAL_IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2023-01-06 12:48:08,004] INFO [SocketServer listenerType=ZK_BROKER, nodeId=2] Failed authentication with /INTERNAL_IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2023-01-06 12:48:08,016] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Failed authentication with /INTERNAL_IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I've enabled debug logs on Filebeat, but I'm not seeing any information about why the SSL handshake has failed.
Is there any way to see debug logs on the Filebeat Kafka or AWS MSK broker side to identify why the SSL handshake is failing? Also, any pointers to possible problems in the filebeat.yaml config are appreciated.
Thanks in advance!!!
Sorry for answering my own question. I have resolved this issue by appending the intermediate CA cert to the issued certificate and then supplying only the root CA in the ssl.authorities section.
The changes I made:
Appended the intermediate CA cert to the file /path/to/obtained-cert.crt used for the ssl.certificate parameter
Provided only the root certificate instead of the chain of certificates for the ssl.authorities parameter, i.e. ['/path/to/root/ca/msk_root_ca.pem']
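In shell terms, the changes amount to something like this (a sketch; intermediate_ca.pem and msk_root_ca.pem are placeholder names):
# Append the intermediate CA cert so Filebeat presents the full chain:
cat intermediate_ca.pem >> /path/to/obtained-cert.crt
# Then in filebeat.yaml, trust only the root CA:
#   ssl.authorities: ['/path/to/root/ca/msk_root_ca.pem']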
This has done the trick!!
So, if you have an intermediate CA in your PKI, always append it to the issued certificate so the client presents the full chain during the SSL handshake.
I hope this helps others.
I know that Let's Encrypt and ACM are both options for obtaining a certificate.
I'm currently deploying with Nginx, with a certificate issued by ACM.
After issuing an SSL certificate with ACM, I created a record in Route 53 and set up an EC2 load balancer.
But why is the following code required when the SSL certificate is issued with Let's Encrypt, whereas it is not needed when the SSL certificate is issued with ACM?
server {
    listen 443 ssl;
    ssl_certificate "/etc/letsencrypt/live/rtcworld.xyz/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/rtcworld.xyz/privkey.pem";
}
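For comparison, my understanding is that a public ACM certificate cannot be exported to the instance: TLS terminates at the load balancer, so the Nginx behind it only needs a plain HTTP listener, roughly:
server {
    listen 80;
    # TLS is handled upstream by the load balancer using the ACM certificate,
    # so no ssl_certificate directives are needed here.
}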
I'm trying to set up DNS filtering using Squid on an EC2 instance in a public subnet. Traffic from EC2 instances in the private subnets will be allowed through or blocked via the public EC2 instance.
If I SSH into a private EC2 instance and run curl google.com, nothing happens, even though google.com is in my whitelist.txt file. /var/log/squid/access.log shows no new entries.
If I run ssh <private-ip-of-ec2-instance-in-public-subnet>, I get a connection refused message from Squid, and I can see a new entry in /var/log/squid/access.log.
I think there's a configuration error. What do I need to know to debug such cases? Is there a way to see where the routing is failing (from the instance or from the AWS console)? I've validated the VPC setup, route tables, and security group permissions and don't see what's missing.
I did the setup manually, from the AWS console and via SSH in the EC2 instances.
My setup is:
VPC: 10.0.0.0/16
public-subnet: 10.0.0.0/20
private-subnet-1: 10.0.128.0/20
private-subnet-2: 10.0.144.0/20
Route tables:
public-rtb:
  10.0.0.0/16 -> local
  0.0.0.0/0 -> internet gateway
  subnet associations: public-subnet
private-rtb:
  10.0.0.0/16 -> local
  0.0.0.0/0 -> ENI of public EC2 instance
  subnet associations: private-subnet-1 and private-subnet-2
Security Groups:
public-sg:
  All ICMP-IPV4 allowed from 10.0.0.0/16
  SSH allowed from my local IP
  HTTPS (TCP IPV4, port 443) allowed from 10.0.0.0/16
  HTTP (TCP IPV4, port 80) allowed from 10.0.0.0/16
private-sg:
  SSH allowed from 10.0.0.0/16
  All traffic allowed from itself
EC2:
  An instance in the public subnet with Squid installed and configured
  2 instances in the private subnets
Squid setup:
Set up a self-signed OpenSSL certificate on the public EC2 instance
mkdir /etc/squid/ssl
cd /etc/squid/ssl
openssl genrsa -out squid.key 4096
openssl req -new -key squid.key -out squid.csr -subj "/C=XX/ST=XX/L=squid/O=squid/CN=squid"
openssl x509 -req -days 3650 -in squid.csr -signkey squid.key -out squid.crt
cat squid.key squid.crt >> squid.pem
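The generated certificate can be sanity-checked before use, e.g.:
# Show the subject and validity window of the self-signed cert:
openssl x509 -in squid.crt -noout -subject -dates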
Set up the Squid config
visible_hostname squid
cache deny all
# Log format and rotation
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %ssl::>sni %Sh/%<a %mt
logfile_rotate 10
debug_options rotate=10
# Handle HTTP requests
http_port 3128
http_port 3129 intercept
# Handle HTTPS requests
https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
acl SSL_port port 443
http_access allow SSL_port
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1 all
# Deny requests to proxy instance metadata
acl instance_metadata dst 169.254.169.254
http_access deny instance_metadata
# Filter HTTP requests based on the whitelist
acl allowed_http_sites dstdomain "/etc/squid/whitelist.txt"
http_access allow allowed_http_sites
# Filter HTTPS requests based on the whitelist
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist.txt"
ssl_bump peek step2 allowed_https_sites
ssl_bump splice step3 allowed_https_sites
ssl_bump terminate step2 all
http_access deny all
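After editing, the config syntax can be validated with Squid's built-in parser before restarting:
# Parse squid.conf and report any syntax errors:
sudo squid -k parse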
Redirected inbound HTTP/HTTPS to Squid's intercept ports with iptables
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130
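One way to check whether traffic from the private instances even reaches the proxy is to watch the NAT rule counters and sniff on the public instance, e.g.:
# Check whether the REDIRECT rules are matching any packets:
sudo iptables -t nat -L PREROUTING -n -v
# Watch for traffic arriving from the private subnets
# (interface name is an assumption; it may be ens5 instead of eth0):
sudo tcpdump -ni eth0 'tcp port 80 or tcp port 443'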
I think the issue here is that in your config file you've got http_access deny all, but no ACLs allowing traffic from any network.
For example (from the squid.conf template):
acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow localnet
http_access deny all
For simplicity, you can add these two lines at the beginning of the config file (they must come before http_access deny all):
acl acl_name src your_public_ip
http_access allow acl_name
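Applied to the VPC in the question, that could look like:
acl localnet src 10.0.0.0/16
http_access allow localnet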
In AWS IoT Core I created a thing, created a Policy for the thing, created a Certificate for the thing, and attached the Policy to the Certificate.
After that I downloaded the .crt and .key files of the Certificate and verified that they match with the following command:
(openssl x509 -noout -modulus -in certificate.pem.crt | openssl md5 ; openssl rsa -noout -modulus -in private.pem.key | openssl md5) | uniq
and got a single hash back, which indicates that they match
(stdin)= 97c1a8816c35acbfgt04f325aeacae6
The only thing left was to find the Root CA that my thing's Certificate was signed with.
I found the AWS developer guide for the server certs here and downloaded the VeriSign Class 3 Public Primary G5 root CA certificate, which I renamed to rootCA.pem.
But when I ran a test to verify the CA with the following command:
openssl s_client -connect <my ID>.iot.ap-southeast-2.amazonaws.com:8443 \
    -CAfile /etc/mosquitto/certs/rootCA.pem \
    -cert /etc/mosquitto/certs/certificate.pem.crt \
    -key /etc/mosquitto/certs/private.pem.key
I get this response with the error unable to get local issuer certificate (see below)
CONNECTED(00000003)
depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network, CN = Symantec Class 3 ECC 256 bit SSL CA - G2
verify error:num=20:unable to get local issuer certificate
---
Certificate chain
0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=*.iot.ap-southeast-2.amazonaws.com
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 ECC 256 bit SSL CA - G2
1 s:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 ECC 256 bit SSL CA - G2
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDlTCCAzygAwIBAgIQGw ...
-----END CERTIFICATE-----
subject=/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=*.iot.ap-southeast-2.amazonaws.com
issuer=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 ECC 256 bit SSL CA - G2
---
No client certificate CA names sent
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: ECDSA+SHA512:RSA+SHA512:ECDSA+SHA384:RSA+SHA384:ECDSA+SHA256:RSA+SHA256:DSA+SHA256:ECDSA+SHA224:RSA+SHA224:DSA+SHA224:ECDSA+SHA1:RSA+SHA1:DSA+SHA1
Shared Requested Signature Algorithms: ECDSA+SHA512:RSA+SHA512:ECDSA+SHA384:RSA+SHA384:ECDSA+SHA256:RSA+SHA256:DSA+SHA256:ECDSA+SHA224:RSA+SHA224:DSA+SHA224:ECDSA+SHA1:RSA+SHA1:DSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2398 bytes and written 1448 bytes
Verification error: unable to get local issuer certificate
---
New, TLSv1.2, Cipher is ECDHE-ECDSA-AES256-GCM-SHA384
Server public key is 256 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-ECDSA-AES256-GCM-SHA384
Session-ID: EB3B32C8 …
Session-ID-ctx:
Master-Key: 783A17EB6 …
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1587558792
Timeout : 7200 (sec)
Verify return code: 20 (unable to get local issuer certificate)
Extended master secret: yes
---
Does someone know how to get the Root CA for my thing's Certificate?
Thanks
Edit:
Thanks to Ben T's advice I created a new thing in the Mumbai Region. Surprisingly, I can now see an option to download the root certificate directly from the Certificate creation screen.
After running openssl s_client -connect again with the new certs/key, I finally get verify return:1.
AWESOME
The VeriSign certificate is used for legacy endpoints.
You should use the newer certificates for the Amazon Trust Services endpoints, e.g. the one at https://www.amazontrust.com/repository/AmazonRootCA1.pem
See https://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html#server-authentication-certs
All new AWS IoT Core regions, beginning with the May 9, 2018 launch of AWS IoT Core in the Asia Pacific (Mumbai) Region, serve only ATS certificates.
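Putting that together, a re-test against the ATS endpoint might look like this (a sketch; your account's -ats endpoint can be fetched with aws iot describe-endpoint --endpoint-type iot:Data-ATS):
# Download the Amazon Trust Services root CA:
curl -sO https://www.amazontrust.com/repository/AmazonRootCA1.pem
# Re-run the handshake test against the ATS endpoint:
openssl s_client -connect <my ID>-ats.iot.ap-southeast-2.amazonaws.com:8443 \
    -CAfile AmazonRootCA1.pem \
    -cert certificate.pem.crt \
    -key private.pem.key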
I'm attempting to configure an Istio authentication policy to validate our JWT.
I set the policy and can see it take effect; however, it won't allow anything to connect. When applying the policy, if I inspect the istio-pilot logs I can see it failing to retrieve the signing keys, with a certificate error.
2018-10-24T03:22:41.052354Z error model Failed to fetch pubkey from "https://iam.company.com.au/oauth2/jwks": Get https://iam.company.com.au/oauth2/jwks: x509: certificate signed by unknown authority
2018-10-24T03:22:41.052371Z warn Failed to fetch jwt public key from "https://iam.company.com.au/oauth2/jwks "
I assume this is because the server uses a TLS certificate signed by our corporate CA.
How do I get istio-pilot to trust certs from our CA? I have tried installing ca-certificates and including our CA public key in the Ubuntu certificate store, but it still won't work.
Policy:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "our-service-jwt-example"
spec:
  targets:
  - name: our-service
  origins:
  - jwt:
      issuer: iam.company.com.au
      audiences:
      - YRhT8xWtcLrOQmqJUGPA1p6O6mUa
      jwksUri: "https://iam.company.com.au/oauth2/jwks"
  principalBinding: USE_ORIGIN
Pilot does the JWKS resolving for Envoy, so Pilot itself needs to have the CA certificate. At the moment there is no way to add a CA cert to Pilot unless you add the cert when deploying Pilot in Istio: https://github.com/istio/istio/blob/master/pilot/pkg/model/jwks_resolver.go
This has been added as of Istio 1.4:
https://github.com/istio/istio/pull/17176
You can provide an extra root certificate in PEM format in the pilot.jwksResolverExtraRootCA Helm chart value (this also works with IstioOperator in more recent versions of Istio). It creates a ConfigMap containing an extra.pem file that should get mounted into the istio-pilot container as /cacerts/extra.pem, where it should get picked up automatically.
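With the IstioOperator install path, that could look something like this (a sketch; the value passes through to the same pilot.jwksResolverExtraRootCA setting):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      jwksResolverExtraRootCA: |
        -----BEGIN CERTIFICATE-----
        # ... corporate CA certificate in PEM format ...
        -----END CERTIFICATE-----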