How to update a thing certificate in AWS IoT?

How do I update the certificate of an existing Thing in AWS IoT, assuming I know the thing name and an attribute with the same value? I.e. the thing has name "foo" and attribute "id=foo".
From the limited documentation, I'm assuming I do something like:
Register the replacement certificate (RegisterCertificate)
Find the existing thing (ListThings, filtered by attribute)
Attach the new certificate to the Thing (AttachThingPrincipal?)
Somehow find the old certificate (is there no better way than ListCertificates and paging?)
Update the old certificate to be INACTIVE (UpdateCertificate)
Can anyone confirm the correct, most succinct way to do this?

I welcome better solutions, but this worked for me:
Call RegisterThing again (same ThingName, same policy, different cert). This seems to attach a new certificate to my thing.
Call ListThingPrincipals, filtering on ThingName. The result is a list of ARNs representing the certificates associated with the thing, of the form arn:aws:iot:<region>:<account id>:cert/<cert id>.
Iterate through the list, strip out the certificate id, and call DescribeCertificate with the certificate id as a parameter.
Compare the result (which includes the PEM form of the certificate) with the new certificate. If it's not a match, this is one of the previous certificates; in that case, call UpdateCertificate and mark that certificate as INACTIVE.
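For reference, here's a minimal AWS CLI sketch of that flow; the thing name, file path, and OLD_CERT_ID are illustrative:

# Register the replacement certificate and capture its ARN
# (assumes the certificate is signed by a CA already registered with AWS IoT)
NEW_CERT_ARN=$(aws iot register-certificate \
  --certificate-pem file://new-device-cert.pem \
  --set-as-active \
  --query certificateArn --output text)

# Attach the new certificate to the thing
aws iot attach-thing-principal --thing-name foo --principal "$NEW_CERT_ARN"

# List every certificate attached to the thing; the id is the part after "cert/"
aws iot list-thing-principals --thing-name foo

# For each ARN that is not the new certificate, deactivate it
aws iot update-certificate --certificate-id OLD_CERT_ID --new-status INACTIVE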

Related

How to renew a cloudformation created API Gateway API Key

I've created users with API Keys in a CloudFormation YAML file. We want to renew one API Key, but an API Key is immutable, so it has to be deleted and regenerated. Deleting an API Key manually and then hoping that rerunning the CloudFormation script will replace it with no other ill effects seems like risky business. What is the recommended way to do this? (I'd prefer not to drop and recreate the entire stack, for availability reasons and because I only want to renew one of our API keys, not all of them.)
The only strategy I can think of right now is:
change the stack so that the name associated with the API Key in question is changed
deploy the stack (which should delete the old API Key and create the new one)
change the stack to revert the first change, which should leave me with a changed API Key with the same name
deploy the stack
Clunky, eh?
It is indeed a bit clunky, but manually deleting the key will not cause CloudFormation to recreate it, since CloudFormation keeps an internal state of the stack in which the key still exists.
You could simply change the resource name of the API key and update the stack, but this will only work if you can have duplicate names for API keys, which I doubt, but I could not find confirmation in the docs.
That leaves doing it in two steps (if you want to keep the same name): one update to remove the old key, and a second update to create the new key. This can be achieved by simply commenting out the corresponding lines for the first step and uncommenting them for the second, or, as you suggested, by changing the name of the API key and then changing it back.
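A minimal sketch of the comment-out-then-restore approach, assuming a template fragment like the following (the logical ID and key name are hypothetical):

Resources:
  # Step 1: comment out this resource and update the stack; CloudFormation
  # deletes the old key. Step 2: uncomment it and update again; a new key is
  # created with the same name but a fresh generated value.
  MyApiKey:
    Type: AWS::ApiGateway::ApiKey
    Properties:
      Name: my-api-key
      Enabled: true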

Can't get a domain verified on AWS after transfer

After transferring a domain from another registrar to AWS, I can't get it verified in the Certificate Manager. I created a hosted zone, the CNAME records created by the Certificate Manager are there, I tried with the DNS tester - the records seem good. However it still says "pending validation". I tried a few times, waited a couple of days and it doesn't seem it will work.
I'm totally out of ideas, any help?
DNS validation requires two things to be set up correctly: the record name and the record value.
Check that you're setting both correctly in Route 53. Reference doc: https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html
Two issues are very common:
In the record name, confirm that you're not appending your domain name yourself. _X is the only part you have to copy-paste; if you copy _X.YourDomain, the 'YourDomain' part gets duplicated, because Route 53 appends the zone name automatically.
The record value ends with a period (dot). Don't remove that period.
You can verify the settings at https://mxtoolbox.com/; it supports various lookups such as A record, CNAME, and DNS validation.
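You can also check the record from the command line with dig; the record name below is a made-up example:

dig +short CNAME _3f9a8e2b1c.yourdomain.com

If the record is published correctly, this prints the ACM-supplied value, which ends in .acm-validations.aws.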

How to Query Route53 hosted zone to check for an existing record set?

I am new to amazon Route53. As of now, I am able to create a hosted zone and a resource record set in my amazon account. But now I want to search whether a record set already exists in my hosted zone. For Example
Hosted zone "abc.com" has two record sets in it:
A.abc.com
B.abc.com
Now I want to query my hosted zone and find out whether A.abc.com already exists in abc.com.
So, is there any API that I can use where I can pass my amazon credentials and my amazon hostedzone and the searched "record set" and then I can get the result back whether that record set already exists. Kindly guide me.
After research, I found out that there is "ListResourceRecordSets", which will give me the list back for a particular zone. But I don't want the list; I just want to check whether the entry already exists.
I have been able to perform this check efficiently using the ListResourceRecordSets API method and the name and maxitems parameters. You haven't specified how you are accessing the API, so I'm going to explain this using the standard AWS REST API.
Given your example:
Call the API passing A.abc.com as the name parameter and 1 as the maxitems parameter. Your request will look like this: https://route53.amazonaws.com/2013-04-01/hostedzone/{YOUR_HOSTED_ZONE_ID}/rrset?name=A.abc.com.&maxitems=1
Note that I've added a trailing dot (".") to the end of the resource name A.abc.com. The API reference indicates that it may affect result sort order so I add it just in case.
You will get back an XML result in this format:
<?xml version="1.0"?>
<ListResourceRecordSetsResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/">
  <ResourceRecordSets>
    <ResourceRecordSet>
      <Name>A.abc.com.</Name>
      <Type>A</Type>
      <TTL>3600</TTL>
      <ResourceRecords>
        <ResourceRecord>
          <Value>SOME_IP_ADDRESS</Value>
        </ResourceRecord>
      </ResourceRecords>
    </ResourceRecordSet>
  </ResourceRecordSets>
  <IsTruncated>true</IsTruncated>
  <NextRecordName>B.abc.com.</NextRecordName>
  <NextRecordType>A</NextRecordType>
  <MaxItems>1</MaxItems>
</ListResourceRecordSetsResponse>
Now you're going to have to do some parsing. Check the result to see if there is one ResourceRecordSet and if its Name property matches the name of the resource record you are looking for (you probably want to do a case-insensitive compare of the two values). Keep in mind that the Name property has that trailing period (".") at the end, so add it to the name you're searching for before doing the comparison.
If there is exactly one resource record set and the name matches the one you are looking for, it exists. If either one of those checks fails, then it does not exist.
Granted, this isn't as simple as a GetResourceRecordSet operation would be, but at least it keeps you from having to query the entire zone and parse a bunch of records. You also won't run into the long delay or throttling issues that you may using the CLI --query option.
In the AWS CLI, the equivalent parameters are --start-record-name and --max-items; in the JavaScript SDK, the StartRecordName parameter does the same thing.
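For example, a minimal CLI version of the same check (the hosted zone ID is a placeholder):

aws route53 list-resource-record-sets \
  --hosted-zone-id Z2LD58HEXAMPLE \
  --start-record-name A.abc.com. \
  --max-items 1

If the first (and only) record set returned is named A.abc.com., the record exists; anything else means it doesn't.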
There is no way to filter the API call, but there is a way to filter the data returned. Using the CLI you can do this with the --query option.
From the documentation: "To view all the resource record sets of a particular name, use the --query parameter to filter them out. For example:"
aws route53 list-resource-record-sets --hosted-zone-id Z2LD58HEXAMPLE --query "ResourceRecordSets[?Name == 'A.abc.com.']"

Renewing IAM SSL Server Certificates

I have been using IAM server certificates for some of my Elastic Beanstalk applications, but now it's time to renew. What is the correct process for replacing the current certificate with the updated cert?
When I try repeating an upload using the same command as before:
aws iam upload-server-certificate --server-certificate-name foo.bar --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crt
I receive:
A client error (EntityAlreadyExists) occurred when calling the UploadServerCertificate operation: The Server Certificate with name foo.bar already exists.
Is the best practice to simply upload using a DIFFERENT name then switch the load balancers to the new certificate? This makes perfect sense - but I wanted to verify I'm following the correct approach.
EDIT 2015-03-30
I did successfully update my certificate using the technique above. That is - I uploaded the new cert using the same technique as originally, but with a different name, then updated my applications to point to the new certificate.
The question remains however, is this the correct approach?
Yes, that is the correct approach.
Otherwise, you would be forced to roll it out to every system that used it at the same time, with no opportunity to test, first, if desired.
My local practice, which I don't intend to imply is The One True Way™ but which serves the purpose nicely, is to append -yyyy-mm, the year and month of the certificate's expiration date, to the end of the name, making it easy to differentiate between certificates at a glance... and using this pattern, when the list is sorted lexically, it's coincidentally sorted chronologically as well.
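Putting it together, a sketch of the rotation using the CLI (the dated name, load balancer name, and account id are illustrative):

# Upload the renewed certificate under a new, dated name
aws iam upload-server-certificate \
  --server-certificate-name foo.bar-2016-03 \
  --certificate-body file://foobar.crt \
  --private-key file://foobar.key \
  --certificate-chain file://chain_bundle.crt

# Point a classic ELB listener at the new certificate
aws elb set-load-balancer-listener-ssl-certificate \
  --load-balancer-name my-load-balancer \
  --load-balancer-port 443 \
  --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/foo.bar-2016-03

# Once nothing references the old certificate, remove it
aws iam delete-server-certificate --server-certificate-name foo.bar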

AWS S3 - Privacy error when accessing file from link

I am working with a team that is using S3 to host content. They moved from a single bucket for all brands to one bucket per brand, and now we are having trouble when linking to the content from within a Salesforce Site.com page. When I copy the link from S3 as HTTPS, I get: "Your connection is not private. Attackers might be trying to steal your information from spiritxpress.s3.varsity.s3.amazonaws.com (for example, passwords, messages, or credit cards)."
I have asked them to compare the settings from the one that is working, and I don't have access to dig into it myself, and we are pretty new to this as well so thought I would see if there were any known paths to walk down. The ID and Key have not changed and I can access the content via CyberDuck, it just is not loading when reached via a link.
Let me know if additional information is needed and I will provide as quickly as I can.
[EDIT] The bucket naming convention they are using is all lowercase and meets the naming guidelines, but the structure seems strange to me: they have named the bucket "brandname.s3.companyname", and when copying the link it comes across as "https://brandname.s3.company.s3.amazonaws.com/directory/filename", where the other (working) bucket was rendered as "https://s3.amazonaws.com/bucketname/...".
Whoever made this change has failed to account for the way wildcard certificates work in HTTPS.
Requests to S3 using HTTPS are greeted with a certificate identifying itself as "*.s3[-region].amazonaws.com". For the browser to consider this valid against the hostname you're hitting, there cannot be any dots in the part of the hostname that matches the * offered by the cert. Bucket names with dots are valid, but they cannot be used on the left side of "s3[-region].amazonaws.com" in the hostname unless you are willing and able to accept a certificate that is deemed invalid; they can only be used as the first element of the path.
The only way to make dotted bucket names and S3 native wildcard SSL to work together is the other format: https://s3[-region].amazonaws.com/example.dotted.bucket.name/....
If your bucket isn't in us-standard, you likely need to use the region in the hostname, so that the request goes to the correct endpoint, e.g. https://s3-us-west-2.amazonaws.com/example.dotted.bucket.name/path... for a bucket in us-west-2 (Oregon). Otherwise S3 may return an error telling you that you need to use a different endpoint (and the endpoint they provide in the error message will be valid, but probably not the one you're wanting for SSL).
This is a limitation on how SSL certificates work, not a limitation in S3.
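To make the difference concrete, a quick check with curl (the bucket name and object key are illustrative):

# Virtual-hosted style: the dotted bucket name breaks the wildcard match
curl -I https://example.dotted.bucket.s3-us-west-2.amazonaws.com/logo.png

# Path-style: the hostname matches the certificate, so TLS succeeds
curl -I https://s3-us-west-2.amazonaws.com/example.dotted.bucket/logo.png

The first command fails certificate validation; the second returns an HTTP status (200, or 403 if the object isn't public).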
Okay, it turned out to boil down to some permissions that were missed, and we were able to get the file to display as expected. Other issues remain, but this one is resolved, so I'm marking the question as answered.