MySQL 5.7, Ubuntu 16.04, on AWS EC2
I've got replication set up over SSL using self-signed certificates. I am able to connect to the master from the slave using the mysql client with ssl-mode=VERIFY_IDENTITY. Replication also works over SSL until I try to enable MASTER_SSL_VERIFY_SERVER_CERT to turn on host name verification.
With that enabled, the slave is no longer able to authenticate with the master and receives I/O error 2026, which is just a generic "SSL connection failed" error. The logs are no more helpful, nor is ssldump, which just shows the connection being aborted before the handshake even starts.
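For reference, this is roughly how I test the connection from the slave with the client; the host name and CA path are placeholders for my actual setup, and this works fine:

mysql --host=master.example.com --user=repl -p \
      --ssl-ca=/etc/mysql/ssl/ca.pem --ssl-mode=VERIFY_IDENTITY

On the replication side, the only change that triggers the 2026 error is adding MASTER_SSL_VERIFY_SERVER_CERT=1 to the CHANGE MASTER TO statement.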
According to the docs:
To activate host name identity verification, add the
MASTER_SSL_VERIFY_SERVER_CERT option.
and
For a replication connection, specifying
MASTER_SSL_VERIFY_SERVER_CERT=1 corresponds to setting
--ssl-mode=VERIFY_IDENTITY
But also
Host name identity verification does not work with self-signed
certificates.
https://dev.mysql.com/doc/refman/5.7/en/replication-solutions-encrypted-connections.html
So how can I enable host name verification during replication with self-signed certificates? The docs seem to indicate it is impossible, but then why am I able to connect via the client with ssl-mode=VERIFY_IDENTITY?
Thank you.
The solution was to add MASTER_SSL_CA, MASTER_SSL_CERT, and MASTER_SSL_KEY to my CHANGE MASTER TO statement to point explicitly to the CA, cert, and key rather than trusting MySQL to read them from the config.
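Roughly, the working statement looked like the sketch below; the paths are placeholders for wherever your certs actually live, and any connection options you don't specify keep their current values:

# run on the slave
mysql -u root -p <<'SQL'
STOP SLAVE;
CHANGE MASTER TO
  MASTER_SSL=1,
  MASTER_SSL_CA='/etc/mysql/ssl/ca.pem',
  MASTER_SSL_CERT='/etc/mysql/ssl/client-cert.pem',
  MASTER_SSL_KEY='/etc/mysql/ssl/client-key.pem',
  MASTER_SSL_VERIFY_SERVER_CERT=1;
START SLAVE;
SQL

After that, SHOW SLAVE STATUS\G should show Slave_IO_Running: Yes with no 2026 error in Last_IO_Error.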
As far as I can tell this means the MySQL docs are wrong. They state that the paths can be set in the [client] section of my.cnf, but that was clearly not the case, at least for me. For whatever reason the [client] section does appear to be used by the mysql client, but it is ignored for replication.
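For context, this is roughly what I had on the slave (paths are placeholders); the mysql command-line client honored it, but the replication I/O thread apparently did not:

[client]
ssl-ca=/etc/mysql/ssl/ca.pem
ssl-cert=/etc/mysql/ssl/client-cert.pem
ssl-key=/etc/mysql/ssl/client-key.pem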
I believe I was also misunderstanding self-signed certificates. MASTER_SSL_VERIFY_SERVER_CERT does work because I don't actually have self-signed certs; I have certs signed by my own CA. The CA cert itself is self-signed, but that is different from the master/slave certs themselves being self-signed.
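One way to check which situation you're in (file names here are placeholders):

# a truly self-signed certificate has an identical subject and issuer
openssl x509 -in server-cert.pem -noout -subject -issuer
# a CA-signed certificate verifies against the CA certificate
openssl verify -CAfile ca.pem server-cert.pem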
And finally, I was absolutely misunderstanding the purpose of MASTER_SSL_VERIFY_SERVER_CERT. It turns out I don't really need it at all, because my personal CA only signs certs for this one domain anyway, so there's nothing to be gained by checking that the common name of the server cert matches the requested domain: it always will. The verification is only helpful when using a trusted certificate authority that signs certs for many domains. Then you want to verify that the certificate belongs to the domain you requested; otherwise you would be vulnerable to man-in-the-middle attacks.
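If you want to see exactly which names that verification would check, you can inspect the server certificate (file name is a placeholder):

# show the Common Name and any Subject Alternative Names
openssl x509 -in server-cert.pem -noout -text | grep -E 'Subject:|DNS:'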
Hopefully that mess of info helps someone else.
Related
I'm new to AWS. I've created an EC2 instance of the Windows Server 2022 type.
However, I got the following error when I connected to it via RDP:
"The remote computer could not be authenticated due to problems with its security certificate. it may be unsafe to proceed"
"The certificate is not from a trusted certifying authority"
How can I solve this certificate issue? I'm going to use the instance for development and will enter sensitive data into it (passwords, etc.), so securing the connection by fixing the certificate issue is important to me.
This may be a weird question, but I created a Client VPN, and in doing so it was necessary for me to create a server certificate and key, both of which I imported. The VPN is working fine, but I am now being hit with a $400 pro-rated charge for Amazon's Private Certificate Authority service. I don't remember ever using this; I think I might have created one by accident.
Is it safe to delete this? I don't think it should affect my VPN; I created the necessary certificates for that.
You do not need this; you can use pre-shared keys instead. The Amazon PCA allows you to use IKE-type keys, but it is not necessary. From the Client VPN administrator guide:
To create a Client VPN endpoint, you must provision a server certificate in AWS Certificate Manager, regardless of the type of authentication you use. For more information about creating and provisioning a server certificate, see the steps in Mutual authentication.
More information can be found here: https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html
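In other words, if you generated the server certificate yourself (for example with easy-rsa, per the mutual authentication steps), you can import it straight into ACM without ever touching Private CA. A rough sketch with placeholder file names:

aws acm import-certificate \
    --certificate fileb://server.crt \
    --private-key fileb://server.key \
    --certificate-chain fileb://ca.crt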
Also, from direct work experience - you should contact AWS Support about that charge, especially if you haven't used the CA. They can often waive it or provide you equivalent credits on your next bill.
This may possibly be a duplicate question, but I've tried every solution I found and nothing worked. On the main domain, I've successfully installed SSL and it is working fine. I need to install the same wildcard SSL certificate on the other two instances, which are used for subdomains.
The overall structure I've set up so far is as follows:
Cloudflare is used as the CDN, where I've created an A record for each of the 3 instances: one for the main domain and two for the subdomains.
Created 3 instances (Ubuntu 18.04 + Apache) on AWS EC2
When I hit the subdomain in a browser, it shows the lock icon but returns Error 521: Web server is down,
but when I try the default public DNS name, it shows my page without any error.
Please suggest what is missing here. Thanks much!!
The 521 error from Cloudflare indicates that it is unable to reach your host on that port.
Error 521 occurs when the origin web server refuses connections from Cloudflare. Security solutions at your origin may block legitimate connections from certain Cloudflare IP addresses.
The two most common causes of 521 errors are:
Offlined origin web server application
Blocked Cloudflare requests
Please check the following:
The EC2 security group allows inbound access on both ports 80 and 443 (this cannot be locked down to your own IP address, because the requests will come from Cloudflare's IP ranges).
If a NACL other than the default one is in place, ensure that both the service ports (80/443) and the ephemeral ports are open.
Ensure that the servers are listening on both ports 80 and 443.
It is important to identify whether Cloudflare is attempting to connect over HTTP or HTTPS; it can support both of these modes depending on the configuration.
If you're still stuck after these points, you can attempt to validate the requests reaching your server using VPC Flow Logs.
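Two quick checks, with a placeholder security group ID:

# on the instance: confirm Apache is listening on both ports
sudo ss -tlnp | grep -E ':(80|443)\b'
# with the AWS CLI: confirm the security group allows inbound 80/443
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
    --query 'SecurityGroups[0].IpPermissions'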
Finally, this answer gave me a hint: How to install third-party SSL Certificate with AWS EC2 Instance (Ubuntu AMI)? Will it cost one-time or monthly basis?
And I resolved this error as follows:
1. Downloaded the certificate files from the primary server.
2. Uploaded the same certificate files to the secondary server that the subdomain points to.
3. Edited the /etc/apache2/sites-available/default-ssl.conf file on the secondary server, searched for "SSLCertificate", and updated those directives to point at the uploaded files (example directives are shown after the commands below).
4. Enabled the SSL configuration and restarted the web server:
# enable the SSL virtual host (equivalent to: sudo a2ensite default-ssl)
sudo ln -s /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-enabled/
# test the configuration, then reload Apache gracefully
sudo apachectl configtest
sudo apachectl graceful
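The directives I changed in default-ssl.conf looked roughly like this; the paths are placeholders for wherever you uploaded the certificate, key, and CA bundle:

SSLEngine on
SSLCertificateFile      /etc/ssl/certs/example.com.crt
SSLCertificateKeyFile   /etc/ssl/private/example.com.key
SSLCertificateChainFile /etc/ssl/certs/example.com.ca-bundle

Also make sure the SSL module is enabled (sudo a2enmod ssl) before reloading.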
I'm not connecting to my Amazon RDS instance via SSL/TLS. Does anyone know why the certificate authority is still listed like this in the admin console?
All RDS instances are automatically configured with a TLS certificate that is used by the server if and when your client establishes a TLS connection.
Whether your application chooses to connect using TLS or not doesn't change the fact that the certificate is there, available for use.
What's actually indicated by this dropdown is which specific RDS CA signed the certificate that is automatically installed on your instance, because if you are using TLS, your application needs to trust the same CA or validation will fail. The available CA choices change over time as a matter of best practice, with old ones eventually retiring after new ones are created.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
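If you do later decide to connect over TLS, you would download the RDS CA bundle and point your client at it. A rough sketch, assuming a MySQL engine and a placeholder endpoint:

# download the combined RDS CA bundle
curl -O https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
# connect with certificate and host name verification enabled
mysql -h mydb.abc123xyz.us-east-1.rds.amazonaws.com -u admin -p \
      --ssl-ca=global-bundle.pem --ssl-mode=VERIFY_IDENTITY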
I'm trying to proxy HTTPS connections and I get this message in the alerts section:
"No route to host" error
I have tried to solve it: reinstalled Burp, reinstalled the certificate, and set a manual network configuration.
Does somebody know how to solve this issue?
The first thing to check is that you can browse these sites directly from your web browser, without Burp.
If you are on a corporate network you may need to use a proxy. In that case you need to set this as an "Upstream proxy" in User options > Connections.
Another possibility is that a host firewall is blocking the outbound connection.
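A quick way to tell these two cases apart from the same machine (the proxy address is a placeholder): if the site only loads when you go through the corporate proxy, configure that proxy as Burp's upstream proxy.

# direct connection test (fails if outbound traffic is blocked)
curl -vI https://example.com
# the same request through the corporate proxy
curl -vI -x http://proxy.example.com:8080 https://example.com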