https security exception for amazon s3 bucket - amazon-web-services

I have two buckets: https://almaconnect.dev.s3.amazonaws.com/ and https://almaconnect.s3.amazonaws.com/
When I hit the first one, I get a non-secure result and the browser asks me to add a security exception. The second one works fine.
I am wondering what the issue could be.
Please help me out, guys.
Thanks,
Amit Chaudhary

The server sends a wildcard certificate for *.s3.amazonaws.com.
This certifies all direct subdomains of s3.amazonaws.com.
The certificate is therefore valid for your working example almaconnect.s3.amazonaws.com, but not for the failing bucket almaconnect.dev.s3.amazonaws.com.
Create a bucket without a dot in its name, e.g. almaconnectdev, to work around this problem.
Since the release of Firefox 3.5, all major browsers allow only a single level of subdomain matching for certificate names that contain wildcards, in conformance with RFC 2818.
In other words, the certificate *.mydomain.com will work for one.mydomain.com or two.mydomain.com, but NOT for one.two.mydomain.com.
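To see the rule in action, here is a minimal sketch using Python's standard ssl module (the two bucket hostnames are taken from the question and assumed to still exist; substitute your own):

import socket
import ssl

def check_tls(hostname, port=443):
    # Attempt a fully verified TLS handshake and report the outcome.
    context = ssl.create_default_context()  # verifies chain and hostname
    try:
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                print(f"{hostname}: certificate matches")
    except ssl.SSLCertVerificationError as exc:
        print(f"{hostname}: verification failed ({exc.verify_message})")

# One label in front of s3.amazonaws.com -- covered by *.s3.amazonaws.com:
check_tls("almaconnect.s3.amazonaws.com")
# Two labels -- the wildcard does not match, so verification fails:
check_tls("almaconnect.dev.s3.amazonaws.com")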
Resources:
Wikipedia Wildcard Certificates
RFC 2818 on IETF.org

Related

Naked domain and http to https redirects

Hope you're all doing well!
I have a question I'm hoping to get some help with. I have a static site served through S3 with CloudFront distributions in front.
My main site is served on www.xyz.xyz, and the CloudFront distribution connected to it has a behavior that redirects HTTP to HTTPS.
I also want people to be able to access http://xyz.xyz, so I created another bucket for the naked domain with a redirect policy to www.xyz.xyz, using http as the protocol. In the CloudFront distribution connected to this bucket, the origin is the direct S3 website endpoint, not the bucket itself.
In the end this ensures all visitors end up at https://www.xyz.xyz. However, when I run a Google Lighthouse SEO check against http://xyz.xyz, it goes through two redirects, one to https and one to www, and according to Lighthouse this has negative effects both on time to serve and on SEO.
Am I doing something wrong? I hope you can help me. I really thought it would be simpler, even with all the buckets and such :-)
I noticed that in AWS Amplify you need to set up redirects/rewrites, but I guess in S3 + CloudFront terms that's what I'm already doing.
Best,
To maintain compatibility with HSTS, you must perform your redirection in two steps. The first redirect should upgrade the request to https. The second can canonicalize the domain (add or remove www). So this behavior is desirable.
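To confirm the hops yourself, here is a quick sketch using the third-party requests library (xyz.xyz is the placeholder domain from the question):

import requests  # third-party: pip install requests

# Follow the redirect chain from the naked http URL and print each hop.
# With the two-step pattern the expected chain is:
#   http://xyz.xyz -> https://xyz.xyz -> https://www.xyz.xyz
resp = requests.get("http://xyz.xyz/", allow_redirects=True, timeout=10)
for hop in resp.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(resp.status_code, resp.url)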

Google Cloud Functions certificate doesn't match domain name

I want to use my Google Cloud Function as a webhook endpoint for a Telegram bot - so that Telegram server makes a request to my function every time there's an update that I need to reply to. (Here's a full guide they provide for this). I have set up such a webhook at a GCF provided address, which looks like https://us-central1-project-name-123456.cloudfunctions.net/processUpdate (where processUpdate is the name of my function).
However, it looks like Telegram doesn't work with my function because of a certificate problem. The @CanOfWormsBot created to troubleshoot this provides the following error message:
⛔️ This verified certificate appears to be invalid
https://us-central1-project-name-123456.cloudfunctions.net/processUpdate
Your CN (Common Name) or SAN (Subject Alternative Name) appear not to match your domain name, please verify you're setting the correct domain for the certificate.
CERTIFICATE:
Common Name(CN): misc.google.com
Issuer: Google Internet Authority G3
Alternative Names(SAN): Too many SANS to be shown here.
Issued: 18/06/2019
Expires: 10/09/2019
What's the root cause of this issue? Does it mean that Google misconfigured the certificate they use for cloudfunctions.net? Can I fix this by configuring my cloud function?
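One way to see exactly which certificate is being served is a short check with Python's standard ssl module (the hostname below is the placeholder from the question; substitute your real function URL). Note that a client which omits SNI may be handed a different default certificate than a browser would see:

import socket
import ssl

host = "us-central1-project-name-123456.cloudfunctions.net"  # placeholder

context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    # server_hostname makes Python send SNI, just like a browser would
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

subject = dict(field for rdn in cert["subject"] for field in rdn)
print("CN:  ", subject.get("commonName"))
sans = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
print("SANs:", sans[:5], "...")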

Insecure website error when connecting to AWS Console w/Account Alias

When I try to access my AWS console using my account name in the URL, I get this error (in Firefox):
Your connection is not secure
The owner of mycompanyname.tech.signin.aws.amazon.com has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website.
This site uses HTTP Strict Transport Security (HSTS) to specify that Firefox may only connect to it securely. As a result, it is not possible to add an exception for this certificate.
Why is this happening and what can I do about it?
Short answer: the problem is the period in the company name/alias (mycompanyname.tech). I removed the period and the error no longer occurred.
Longer answer: the wildcard certificate only covers names with a single subdomain level in front of signin.aws.amazon.com, and the period splits the alias into two levels ('mycompanyname' and 'tech'), as the sketch below illustrates.
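Here is a simplified illustration of that single-label rule in Python (this mirrors the RFC 2818 matching logic for exposition; it is not how browsers actually implement it):

def wildcard_matches(pattern, hostname):
    # Simplified single-label wildcard matching per RFC 2818.
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if len(p_labels) != len(h_labels):
        return False  # '*' stands for exactly one label, never several
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

pattern = "*.signin.aws.amazon.com"
print(wildcard_matches(pattern, "mycompanyname.signin.aws.amazon.com"))       # True
print(wildcard_matches(pattern, "mycompanyname.tech.signin.aws.amazon.com"))  # False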

How to validate an SSL certificate on Amazon ELB?

I'm writing a script that uploads an IAM certificate to an ELB in order to check whether it's valid.
When I tested it, I used an invalid private key on purpose to see if I could load it into the ELB... and the problem is, it gets loaded!
So my questions are:
How is this possible? I know for a fact that you can't do something like that from the AWS console.
Is there a boto way to check if a cert is valid? (Not using openssl; that's what I'm trying to avoid.)
What exactly do you mean when you say "check if it's valid"? If you try to upload a malformed PEM file (the text of the cert isn't valid) then it will definitely throw an error since it can't decode the file. Also, if you try to upload a mismatched public & private key it will also throw an error. I just tested these sorts of cases myself and got the following error:
The private key did not match the public key provided. Please verify the key material and try again.
If you're referring to testing that a certificate is signed, authentic, and not expired, then the ELB isn't going to do any of that. According to the AWS documentation for ELBs it's perfectly fine to make use of self-signed certificates, and certs will also continue to work (whether CA signed or self-signed) even if expired. Both self-signed certs and expired certs are "valid" as far as operation of a secure SSL connection goes. Whether the cert is signed and unexpired or not is really just a means of providing authentication that it's a legitimate certificate.
If you are asking about testing whether a certificate is properly signed and not expired, then you would need to test for these sorts of things yourself, typically by leveraging something like openssl, or a short script like the sketch below.
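As a starting point, here is a sketch of such a local pre-check using the third-party cryptography package (pip install cryptography; the file names are hypothetical placeholders):

import datetime

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Hypothetical file names; substitute the PEM files you plan to upload.
with open("server.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("server.key", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# Expiry check -- the ELB itself will not enforce this.
now = datetime.datetime.now(datetime.timezone.utc)
print("expired:", now > cert.not_valid_after_utc)  # needs cryptography >= 42

# Key-match check: the cert's public key must belong to the private key.
print("key matches:",
      cert.public_key().public_numbers() == key.public_key().public_numbers())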

Using Meteor browser-policy package allowOriginForAll for AWS works on http site but not https

So we are using the Meteor browser-policy package, and using Amazon S3 to store content.
On the server we have setup the browser policy as follows:
BrowserPolicy.content.allowOriginForAll('*.amazonaws.com');
BrowserPolicy.content.allowOriginForAll('*.s3.amazonaws.com');
This works fine in local dev, and in production when visiting our http:// site. However, when using the https:// address of our site, the AWS content no longer passes this policy.
The following error is put on the console
Refused to load the image 'http://our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png' because it violates the following Content Security Policy directive: "img-src data: 'self' *.google-analytics.com *.zencdn.net *.filepicker.io *.uservoice.com *.amazonaws.com *.s3.amazonaws.com".
As you can see we have some other origins allowed in the browser policy, these all seem to work fine in both http and https. AWS S3 is the only one that is failing.
I've tried Chrome, Firefox, and Safari and they all have the same issue.
What's going on?
I may not have the exact answer to this question but I have some information which the community may find helpful.
First, you should avoid serving mixed content. I'm unclear whether that alone would set off the browser-policy alerts, but you just shouldn't do it anyway. The easiest solution is to use a protocol-relative URL or to explicitly specify https in your URL.
Second, I too assumed that the wildcard worked like a glob. However, I've been told that it works the same way as an SSL certificate rule, i.e. for all subdomains or for a specific subdomain. In other words, *.example.com and www.example.com are valid, but *.foo.example.com isn't meaningful. I think you want to explicitly add your bucket like so:
BrowserPolicy.content.allowOriginForAll('our-bucket-name.s3.amazonaws.com');
unless you literally want to trust all of amazonaws.com.