Renewing IAM SSL Server Certificates - amazon-web-services

I have been using IAM server certificates for some of my Elastic Beanstalk applications, but now it's time to renew -- what is the correct process for replacing the current certificate with the updated cert?
When I try repeating an upload using the same command as before:
aws iam upload-server-certificate --server-certificate-name foo.bar --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crt
I receive:
A client error (EntityAlreadyExists) occurred when calling the UploadServerCertificate operation: The Server Certificate with name foo.bar already exists.
Is the best practice to simply upload using a DIFFERENT name then switch the load balancers to the new certificate? This makes perfect sense - but I wanted to verify I'm following the correct approach.
EDIT 2015-03-30
I did successfully update my certificate using the technique above. That is - I uploaded the new cert using the same technique as originally, but with a different name, then updated my applications to point to the new certificate.
The question remains however, is this the correct approach?

Yes, that is the correct approach.
Otherwise, you would be forced to roll it out to every system that used it at the same time, with no opportunity to test first, if desired.
My local practice, which I don't intend to imply is The One True Way™, yet serves the purpose nicely, is to append -yyyy-mm for the year and month of the certificate's expiration date to the end of the name, making it easy to differentiate between them at a glance... and using this pattern, when the list is sorted lexically, it is coincidentally sorted chronologically as well.
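As an added example (not part of the question or the answer above), here is a small Python script that applies the -yyyy-mm naming convention and turns the output of `aws iam list-server-certificates`, together with the certificate ARNs your load balancer listeners currently reference, into a rotation plan. The certificate list, ARNs and listener names are constructed sample data, and the 30-day `warn_days` threshold is an assumption.

```python
"""Plan an IAM server-certificate rotation from the ServerCertificateMetadataList
that `aws iam list-server-certificates` returns and the certificate ARNs that
load balancer listeners reference. Sample data below is constructed."""
from datetime import datetime, timedelta, timezone


def versioned_cert_name(base_name, not_after):
    """Name the upload after its expiry month, e.g. foo.bar-2016-04, so that a
    lexical sort of names is also a chronological sort."""
    return f"{base_name}-{not_after:%Y-%m}"


def rotation_plan(certs, listeners, now, warn_days=30):
    """Classify certificates.

    certs:     entries with ServerCertificateName, Arn and Expiration (aware datetime)
    listeners: mapping 'elb-name:port' -> certificate ARN it currently serves
    Returns a dict with three lists of certificate names:
      rotate: still served by a listener and expiring within warn_days
      delete: referenced by no listener and already expired
      keep:   everything else (current, or uploaded but not yet switched over)
    """
    in_use = set(listeners.values())
    horizon = now + timedelta(days=warn_days)
    plan = {"rotate": [], "delete": [], "keep": []}
    for cert in certs:
        referenced = cert["Arn"] in in_use
        if referenced and cert["Expiration"] <= horizon:
            plan["rotate"].append(cert["ServerCertificateName"])
        elif not referenced and cert["Expiration"] <= now:
            plan["delete"].append(cert["ServerCertificateName"])
        else:
            plan["keep"].append(cert["ServerCertificateName"])
    return plan


def lexical_order_is_chronological(certs):
    """True when sorting by name gives the same order as sorting by expiry,
    which is exactly what the -yyyy-mm suffix buys you."""
    by_name = [c["Arn"] for c in sorted(certs, key=lambda c: c["ServerCertificateName"])]
    by_date = [c["Arn"] for c in sorted(certs, key=lambda c: c["Expiration"])]
    return by_name == by_date


# Constructed sample: three uploads of the same site certificate and the
# listeners that reference them (a made-up account id, 123456789012).
UTC = timezone.utc
NOW = datetime(2015, 3, 30, tzinfo=UTC)
ARN_PREFIX = "arn:aws:iam::123456789012:server-certificate/"
SAMPLE_CERTS = [
    {"ServerCertificateName": "foo.bar-2014-04", "Arn": ARN_PREFIX + "foo.bar-2014-04",
     "Expiration": datetime(2014, 4, 15, tzinfo=UTC)},
    {"ServerCertificateName": "foo.bar-2015-04", "Arn": ARN_PREFIX + "foo.bar-2015-04",
     "Expiration": datetime(2015, 4, 15, tzinfo=UTC)},
    {"ServerCertificateName": versioned_cert_name("foo.bar", datetime(2016, 4, 15, tzinfo=UTC)),
     "Arn": ARN_PREFIX + "foo.bar-2016-04",
     "Expiration": datetime(2016, 4, 15, tzinfo=UTC)},
]
SAMPLE_LISTENERS = {
    "prod-elb:443": ARN_PREFIX + "foo.bar-2015-04",
    "staging-elb:443": ARN_PREFIX + "foo.bar-2016-04",
}

if __name__ == "__main__":
    print(rotation_plan(SAMPLE_CERTS, SAMPLE_LISTENERS, NOW))
    print("lexical == chronological:", lexical_order_is_chronological(SAMPLE_CERTS))
```

The plan never deletes a certificate that any listener still references, which is the property the answer is after: upload under the new name, repoint the listeners, and only then remove the old one with `aws iam delete-server-certificate`.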

Related

How best to retrieve AWS SSM parameters from the AWS CDK?

Apologies if this is a duplicate, I'm going a bit snowblind with blogs and articles trying to find a solution.
I'm trying to use the AWS CDK to deploy a Stack - specifically a CloudFront Distribution layered over an S3 Bucket. I want to retrieve a cert from Cert Manager, and I also want to update a Hosted Zone in R53.
I want to put the zone ID and cert ARN in SSM Parameter Store, and have my CDK app pull the correct ID/ARN from there, so as not to leave it in my code.
I'm currently pulling the values like this in my Go code:
certArn := awsssm.StringParameter_ValueFromLookup(stack, certArnSSM)
certificate := awscertificatemanager.Certificate_FromCertificateArn(stack, wrapName("certificate"), certArn)
Where certArnSSM is the path to the parameter.
However, when I run the synth I get this:
panic: "ARNs must start with \"arn:\" and have at least 6 components: dummy-value-for-/dev/placeholder/certificateArn"
From some reading, this is expected. However, I'm not sure on the 'best practice' approach to solving it. I'm not totally clear on how to use Lazy to solve this - do I need to create a type and implement the Produce() method?
I was unable to replicate your error. The following synths and deploys without error, correctly retrieving the certArn param from ssm as a valid certificate arn lookup input:
package main
import (
	"github.com/aws/aws-cdk-go/awscdk/v2" // import paths assume CDK v2
	"github.com/aws/aws-cdk-go/awscdk/v2/awscertificatemanager"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsssm"
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/aws/jsii-runtime-go"
)
func NewCertLookupStack(scope constructs.Construct, id string, props *awscdk.StackProps) awscdk.Stack {
	stack := awscdk.NewStack(scope, &id, props) // props is already a *StackProps
	certArn := awsssm.StringParameter_ValueFromLookup(stack, jsii.String("/dummy/certarn"))
	certificate := awscertificatemanager.Certificate_FromCertificateArn(stack, jsii.String("Certificate"), certArn)
	awscdk.NewCfnOutput(stack, jsii.String("ArnOutput"), &awscdk.CfnOutputProps{
		Value: certificate.CertificateArn(), // demonstrate it works: the correct cert arn stored as a stack output
	})
	return stack
}
I worked around the issue by making the UUID of the cert a variable in my code, and then constructing an ARN manually. It feels like the wrong way to solve the problem though.
createdArn := jsii.String(fmt.Sprintf("arn:aws:acm:us-east-1:%s:certificate/%s", *sprops.Env.Account, certUuid))
certificate := awscertificatemanager.Certificate_FromCertificateArn(stack, wrapName("certificate"), createdArn)
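The panic above is an ARN validity check failing on a placeholder. As an added example (not code from the question, the answer or the CDK), here is a Python ARN parser that applies the same rule, an `arn:` prefix and at least six colon-separated components, plus a helper that builds the ACM certificate ARN shape the workaround hard-codes. The account ID and certificate UUID are made up.

```python
"""Parse and validate AWS ARNs; reproduces the check behind the CDK panic
'ARNs must start with "arn:" and have at least 6 components'. Sample values
are constructed."""
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Arn:
    partition: str
    service: str
    region: str    # empty for global services such as IAM
    account: str   # empty for some services, e.g. S3
    resource: str  # everything after the fifth colon, colons included

    def _type_split(self) -> int:
        # Resource type, if present, ends at the first ':' or '/', whichever comes first.
        positions = [i for i in (self.resource.find(":"), self.resource.find("/")) if i >= 0]
        return min(positions) if positions else -1

    @property
    def resource_type(self) -> Optional[str]:
        """'certificate' in 'certificate/<uuid>'; None when there is no type prefix."""
        idx = self._type_split()
        return None if idx < 0 else self.resource[:idx]

    @property
    def resource_id(self) -> str:
        idx = self._type_split()
        return self.resource if idx < 0 else self.resource[idx + 1:]


def parse_arn(value: str) -> Arn:
    """Split on at most five colons so colons inside the resource part survive."""
    parts = value.split(":", 5)
    if parts[0] != "arn" or len(parts) < 6:
        raise ValueError(f'ARNs must start with "arn:" and have at least 6 components: {value}')
    _, partition, service, region, account, resource = parts
    if not partition or not service or not resource:
        raise ValueError(f"partition, service and resource must be non-empty: {value}")
    return Arn(partition, service, region, account, resource)


def is_valid_arn(value: str) -> bool:
    try:
        parse_arn(value)
        return True
    except ValueError:
        return False


def acm_certificate_arn(account: str, cert_uuid: str, region: str = "us-east-1",
                        partition: str = "aws") -> str:
    """Build an ACM certificate ARN (the shape the workaround above formats by hand).
    us-east-1 is the default because CloudFront only accepts certificates from there."""
    return f"arn:{partition}:acm:{region}:{account}:certificate/{cert_uuid}"


# Constructed samples: the placeholder from the panic, and a made-up account/UUID.
PLACEHOLDER = "dummy-value-for-/dev/placeholder/certificateArn"
SAMPLE_ACCOUNT = "123456789012"
SAMPLE_CERT_UUID = "12345678-1234-1234-1234-123456789012"

if __name__ == "__main__":
    print("placeholder valid:", is_valid_arn(PLACEHOLDER))
    print(parse_arn(acm_certificate_arn(SAMPLE_ACCOUNT, SAMPLE_CERT_UUID)))
```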

How to update a thing certificate in AWS IoT?

How do I update the certificate of an existing Thing in AWS IoT, assuming I know the thing name and an attribute with the same value? I.e. the thing has name "foo" and attribute "id=foo".
From the limited documentation, I'm assuming I do something like:
Register the replacement certificate (RegisterCertificate)
Find the existing thing (ListThings, filtered by attribute)
Attach the new certificate to the Thing (AttachThingPrincipal?)
Somehow find the old certificate (is there no better way than ListCertificates and paging)??
Update the old certificate to be INACTIVE (UpdateCertificate)
Can anyone confirm the correct, most succinct way to do this?
I welcome better solutions, but this worked for me:
Call RegisterThing again (same ThingName, same policy, different cert). This seems to attach a new certificate to my thing.
Call ListThingPrincipals, filtering on ThingName. The result will be a list of ARNs representing the certificates associated with the thing, of the form arn:aws:iot:<region>:<account id>:cert/<cert id>.
Iterate through the list, strip out the certificate id and call DescribeCertificate, with the certificate id as parameter.
Compare the result (which includes the PEM form of the certificate) with the new certificate. If it's not a match, this is one of the previous certificates. Consequently, call UpdateCertificate and mark that certificate as INACTIVE.
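Steps 2 to 4 of the answer above in Python, added here as an example (not the answer's own code). It uses the boto3 `iot` client method names `list_thing_principals`, `describe_certificate` and `update_certificate` and follows `nextToken` pagination, but takes the client as a parameter so the constructed `FakeIotClient` below lets it run offline; pass `boto3.client("iot")` in real use. The certificate IDs, ARNs and PEM bodies are placeholders, not real certificates. One safeguard the answer does not spell out is included: if the new certificate is not among the thing's principals, nothing is deactivated.

```python
"""Deactivate the certificates a thing no longer needs once a replacement has
been attached. Method names follow boto3's `iot` client; the constructed
FakeIotClient stands in for boto3.client("iot") so this runs offline."""
import re

# IoT certificate IDs are 64 hex characters; the partition may be aws, aws-cn, ...
CERT_ARN_RE = re.compile(r"^arn:aws(?:-[a-z]+)?:iot:[^:]+:\d{12}:cert/([0-9a-f]{64})$")


def normalize_pem(pem: str) -> str:
    """Drop line breaks and spaces so the same certificate compares equal even
    if one copy was re-wrapped by another tool."""
    return "".join(pem.split())


def certificate_id_from_arn(arn: str) -> str:
    match = CERT_ARN_RE.match(arn)
    if not match:
        raise ValueError(f"not an IoT certificate ARN: {arn}")
    return match.group(1)


def list_all_principals(iot, thing_name: str):
    """ListThingPrincipals, following nextToken until the last page."""
    principals, token = [], None
    while True:
        kwargs = {"thingName": thing_name}
        if token:
            kwargs["nextToken"] = token
        resp = iot.list_thing_principals(**kwargs)
        principals.extend(resp.get("principals", []))
        token = resp.get("nextToken")
        if not token:
            return principals


def deactivate_superseded_certificates(iot, thing_name: str, new_cert_pem: str):
    """Mark INACTIVE every certificate attached to thing_name whose PEM differs
    from new_cert_pem. Returns the deactivated certificate IDs. Refuses to act
    when the new certificate is not attached, so a wrong PEM cannot strip the
    device of every credential it has."""
    wanted = normalize_pem(new_cert_pem)
    matched, superseded = [], []
    for arn in list_all_principals(iot, thing_name):
        if not CERT_ARN_RE.match(arn):
            continue  # e.g. a Cognito identity principal: not ours to touch
        cert_id = certificate_id_from_arn(arn)
        desc = iot.describe_certificate(certificateId=cert_id)["certificateDescription"]
        (matched if normalize_pem(desc["certificatePem"]) == wanted else superseded).append(cert_id)
    if not matched:
        raise RuntimeError(f"new certificate is not attached to {thing_name}; nothing deactivated")
    for cert_id in superseded:
        iot.update_certificate(certificateId=cert_id, newStatus="INACTIVE")
    return superseded


class FakeIotClient:
    """Constructed stand-in for boto3.client("iot"); returns one principal per page."""

    def __init__(self, principals, pems):
        self._principals = principals  # thing name -> [certificate ARNs]
        self._pems = pems              # certificate id -> PEM
        self.status = {cid: "ACTIVE" for cid in pems}

    def list_thing_principals(self, thingName, nextToken=None, maxResults=None):
        arns = self._principals[thingName]
        start = int(nextToken) if nextToken else 0
        page = {"principals": arns[start:start + 1]}
        if start + 1 < len(arns):
            page["nextToken"] = str(start + 1)
        return page

    def describe_certificate(self, certificateId):
        return {"certificateDescription": {"certificateId": certificateId,
                                           "certificatePem": self._pems[certificateId],
                                           "status": self.status[certificateId]}}

    def update_certificate(self, certificateId, newStatus):
        self.status[certificateId] = newStatus


# Placeholder IDs and PEM bodies, not real certificates.
OLD_ID, NEW_ID = "1" * 64, "2" * 64
OLD_ARN = f"arn:aws:iot:us-east-1:123456789012:cert/{OLD_ID}"
NEW_ARN = f"arn:aws:iot:us-east-1:123456789012:cert/{NEW_ID}"
OLD_PEM = "-----BEGIN CERTIFICATE-----\nUExBQ0VIT0xERVItT0xE\n-----END CERTIFICATE-----\n"
NEW_PEM = "-----BEGIN CERTIFICATE-----\nUExBQ0VIT0xERVItTkVX\n-----END CERTIFICATE-----\n"


def build_sample_client():
    return FakeIotClient({"foo": [OLD_ARN, NEW_ARN]}, {OLD_ID: OLD_PEM, NEW_ID: NEW_PEM})


if __name__ == "__main__":
    iot = build_sample_client()
    print("deactivated:", deactivate_superseded_certificates(iot, "foo", NEW_PEM))
    print(iot.status)
```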

InvalidSignatureException when using boto3 for dynamoDB on aws

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables but rather directly in the method but the error persisted. Furthermore, to avoid any issues with trailing spaces I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that also should be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure there are only alphanumerical signs) but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
It's a sign that your time zone is different. Maybe you can check your:
1. Time zone
2. Time settings.
If there are some automatic settings, you should fix your time settings.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK) I ran into this error and the way I solved it was to create a new pair of AWS access/secret keys.
Worked immediately after I changed those keys in the code.
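To make the error message above concrete, here is a standard-library-only Signature Version 4 signer, added as an example (it is not from any of the posts and not a replacement for botocore's signer). It shows what goes into the signature DynamoDB recomputes and compares: the secret key, the request date, the region and service of the credential scope, and the Host header. A different endpoint host, a skewed clock or a mistyped secret each change the result, which is why the three answers above are all plausible leads. The access key, secret, table name and timestamp are constructed sample values (the key pair is the placeholder AWS uses in its documentation).

```python
"""Minimal AWS Signature Version 4 signer (standard library only).
Credentials, table and timestamp below are constructed sample values."""
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote


def _sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def signing_key(secret: str, date_stamp: str, region: str, service: str) -> bytes:
    """Derive kSigning: the secret, the date and the credential scope all feed
    in, so a wrong secret or a date from a skewed clock yields a different key."""
    k_date = _hmac(("AWS4" + secret).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")


def canonical_request(method, path, query, headers, payload):
    """Build the canonical request; every header passed in is signed.
    Returns (canonical_request, signed_headers)."""
    canon = {k.lower().strip(): " ".join(v.split()) for k, v in headers.items()}
    signed_headers = ";".join(sorted(canon))
    canon_query = "&".join(f"{quote(k, safe='-_.~')}={quote(v, safe='-_.~')}"
                           for k, v in sorted(query.items()))
    lines = [method.upper(), quote(path, safe="/-_.~"), canon_query]
    lines += [f"{k}:{canon[k]}" for k in sorted(canon)]
    lines += ["", signed_headers, _sha256_hex(payload)]
    return "\n".join(lines), signed_headers


def sign_request(method, host, path, query, headers, payload,
                 access_key, secret, region, service, now):
    """Return the Authorization header for the request. `now` is the client's
    UTC clock; it becomes X-Amz-Date and the date in the credential scope."""
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    date_stamp = now.strftime("%Y%m%d")
    headers = dict(headers, Host=host)
    headers["X-Amz-Date"] = amz_date
    creq, signed_headers = canonical_request(method, path, query, headers, payload)
    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                                _sha256_hex(creq.encode("utf-8"))])
    signature = hmac.new(signing_key(secret, date_stamp, region, service),
                         string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
    return (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
            f"SignedHeaders={signed_headers}, Signature={signature}")


# Constructed sample: AWS's documentation placeholder key pair, a made-up table,
# and a fixed clock so the result is reproducible.
SAMPLE_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE"
SAMPLE_SECRET = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
SAMPLE_NOW = datetime(2015, 8, 30, 12, 36, 0, tzinfo=timezone.utc)
QUERY_BODY = (b'{"TableName":"versions","KeyConditionExpression":"pk = :pk",'
              b'"ExpressionAttributeValues":{":pk":{"S":"app"}}}')


def sample_authorization(host="dynamodb.us-west-2.amazonaws.com", region="us-west-2",
                         secret=SAMPLE_SECRET, now=SAMPLE_NOW):
    """Sign a DynamoDB Query; vary one argument to watch the signature change."""
    headers = {"Content-Type": "application/x-amz-json-1.0",
               "X-Amz-Target": "DynamoDB_20120810.Query"}
    return sign_request("POST", host, "/", {}, headers, QUERY_BODY,
                        SAMPLE_ACCESS_KEY, secret, region, "dynamodb", now)


if __name__ == "__main__":
    print(sample_authorization())
    print("skewed clock differs:",
          sample_authorization(now=SAMPLE_NOW.replace(minute=50)) != sample_authorization())
```

A clock that is off moves X-Amz-Date and therefore the signature, and AWS also rejects a request whose timestamp is more than 15 minutes from its own clock, so the time-zone answer is worth checking before rotating keys.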

Cheapest way to use AWS for simple response

What I wanted to achieve is pretty simple, if you send a request to some address, the response you get is a single integer number, like 13 for example. I think it is equivalent to hosting a .html page with single number on that page and then I can parse that string in my application. (It is a Unity game, using the WWW class to send the request.)
(This is actually a version number. If it is greater than what I stored in my app I would update it and then send another request to other place and retrieve something bigger)
I am looking for the cheapest way that can handle this. I planned to use AWS but confused what component should be use? S3? EC2? Lambda? CloudFront?
If you think doing this on a web hosting or Heroku or something else is better, I also wanted to hear about it.
To serve up a simple value, S3 should do the trick.
Create a bucket in the console, using only lowercase letters, digits, and dashes in the name. The name has to be globally unique among all of S3, so make up something unique. We'll call the bucket name example-bucket.
Create your file on your computer with the desired contents. If plain text, call it version.txt.
In the AWS console, select the bucket, and upload the file. While clicking through the "next" screens, put a check next to "make everything public" and accept the defaults. Upload the file.
Now, go to https://example-bucket.s3.amazonaws.com/version.txt in your browser and verify (using your actual bucket name). That's your download link.
Done. As long as you don't expect to handle over about 800 requests per second, this will do exactly what you want.
Review the S3 pricing, of course.
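The console's "make everything public" step grants anonymous read on every object in the bucket. As an added example (not part of the answer above), here is the bucket policy that exposes only version.txt, generated and then audited from Python so you can confirm what it grants before applying it with `aws s3api put-bucket-policy --bucket example-bucket --policy file://policy.json`. The bucket and key names are the answer's placeholders; the whole-bucket policy it is compared against is constructed for the comparison.

```python
"""Bucket policy granting anonymous read on one object, plus an audit of what a
policy actually exposes. Bucket and key names are the answer's placeholders."""
import json


def public_read_policy(bucket: str, key: str) -> dict:
    """Allow anyone to GET exactly one object: nothing else in the bucket,
    no listing, no writes."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadSingleObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/{key}",
        }],
    }


def policy_json(bucket: str, key: str) -> str:
    """Serialised form for `aws s3api put-bucket-policy --policy file://policy.json`."""
    return json.dumps(public_read_policy(bucket, key), indent=2)


def _as_list(value):
    return value if isinstance(value, list) else [value]


def public_grants(policy: dict) -> set:
    """(action, resource) pairs an anonymous caller gets from Allow statements
    with an anonymous principal. Conditions are ignored, so this over-reports
    rather than under-reports exposure."""
    grants = set()
    for stmt in _as_list(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        anonymous = principal == "*" or (isinstance(principal, dict) and principal.get("AWS") == "*")
        if not anonymous:
            continue
        grants.update((a, r) for a in _as_list(stmt["Action"]) for r in _as_list(stmt["Resource"]))
    return grants


def exposes_only(policy: dict, bucket: str, key: str) -> bool:
    """True when the only anonymous grant is GetObject on that single key."""
    return public_grants(policy) == {("s3:GetObject", f"arn:aws:s3:::{bucket}/{key}")}


# Constructed comparison: what a blanket "everything public" policy looks like.
WHOLE_BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-bucket/*", "arn:aws:s3:::example-bucket"],
    }],
}

if __name__ == "__main__":
    print(policy_json("example-bucket", "version.txt"))
    single = public_read_policy("example-bucket", "version.txt")
    print("single-object policy ok:", exposes_only(single, "example-bucket", "version.txt"))
    print("whole-bucket policy ok:", exposes_only(WHOLE_BUCKET_POLICY, "example-bucket", "version.txt"))
```

On buckets created since April 2023, S3 Block Public Access is enabled by default and a public bucket policy is refused until the BlockPublicPolicy and RestrictPublicBuckets settings are turned off for that one bucket; leave the account-level defaults alone and relax only the bucket that serves the file.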
Although this question is suitable for Server Fault, an EC2 instance running an nginx or Apache web server will be sufficient.
Put a load balancer in front of the EC2 instances.

AWS S3 - Privacy error when accessing file from link

I am working with a team that is using S3 to host content and they moved from a single bucket for all brands to one bucket for each brand and now we are having trouble when linking to the content from within salesforce site.com page. When I copy the link from S3 as HTTPS, I get a "Your connection is not private. Attackers might be trying to steal your information from spiritxpress.s3.varsity.s3.amazonaws.com (for example, passwords, messages, or credit cards)." error.
I have asked them to compare the settings from the one that is working, and I don't have access to dig into it myself, and we are pretty new to this as well so thought I would see if there were any known paths to walk down. The ID and Key have not changed and I can access the content via CyberDuck, it just is not loading when reached via a link.
Let me know if additional information is needed and I will provide as quickly as I can.
[EDIT] the bucket naming convention they are using is all lowercase and meets convention guidelines as well, but it seems strange to me the way it is structured, as they have named the bucket "brandname.s3.companyname" and when copying the link it comes across as "https://brandname.s3.company.s3.amazonaws.com/directory/filename" where the other bucket was being rendered as "https://s3.amazonaws.com/bucketname/......
Whoever made this change has failed to account for the way wildcard certificates work in HTTPS.
Requests to S3 using HTTPS are greeted with a certificate identifying itself as "*.s3[-region].amazonaws.com" and in order for the browser to consider this to be valid when compared to the link you're hitting, there cannot be any dots in the part of the hostname that matches the * offered by the cert. Bucket names with dots are valid, but they cannot be used on the left side of "s3[-region].amazonaws.com" in the hostname unless you are willing and able to accept a certificate that is deemed invalid... they can only be used as the first element of the path.
The only way to make dotted bucket names and S3 native wildcard SSL to work together is the other format: https://s3[-region].amazonaws.com/example.dotted.bucket.name/....
If your bucket isn't in us-standard, you likely need to use the region in the hostname, so that the request goes to the correct endpoint, e.g. https://s3-us-west-2.amazonaws.com/example.dotted.bucket.name/path... for a bucket in us-west-2 (Oregon). Otherwise S3 may return an error telling you that you need to use a different endpoint (and the endpoint they provide in the error message will be valid, but probably not the one you're wanting for SSL).
This is a limitation on how SSL certificates work, not a limitation in S3.
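As an added example (not from the answer above), here is the hostname rule the answer describes, in Python: a wildcard label matches exactly one label, so a dotted bucket name cannot sit under `*.s3[-region].amazonaws.com` and has to go in the path. The second part picks the URL form that validates against the names the answer says S3 presents, using the `s3-<region>` host form from the answer. Bucket names are the ones from the question and the earlier answer, or made up.

```python
"""Wildcard certificate matching for S3 hostnames, and choosing a URL form that
validates. Bucket names are taken from the thread or made up."""
from typing import List, Optional


def hostname_matches(pattern: str, hostname: str) -> bool:
    """Match a hostname against a certificate name. A '*' label matches exactly
    one label and only in the leftmost position, so it never spans a dot."""
    p_labels = pattern.lower().rstrip(".").split(".")
    h_labels = hostname.lower().rstrip(".").split(".")
    if len(p_labels) != len(h_labels):
        return False
    for i, (p, h) in enumerate(zip(p_labels, h_labels)):
        if i == 0 and p == "*":
            if not h:
                return False
            continue
        if p != h:
            return False
    return True


def s3_endpoint(region: Optional[str] = None) -> str:
    """Endpoint host in the answer's form: s3.amazonaws.com for us-standard,
    s3-<region>.amazonaws.com elsewhere."""
    if region in (None, "us-east-1"):
        return "s3.amazonaws.com"
    return f"s3-{region}.amazonaws.com"


def s3_certificate_names(region: Optional[str] = None) -> List[str]:
    """Names the answer says S3's certificate covers: the endpoint itself and
    one wildcard level above it."""
    endpoint = s3_endpoint(region)
    return [endpoint, f"*.{endpoint}"]


def host_validates(host: str, region: Optional[str] = None) -> bool:
    return any(hostname_matches(name, host) for name in s3_certificate_names(region))


def s3_https_url(bucket: str, key: str, region: Optional[str] = None) -> str:
    """Virtual-hosted style when the bucket fits in the single wildcard label
    (no dots), path-style otherwise; refuses to emit a host the certificate
    would not cover."""
    endpoint = s3_endpoint(region)
    if "." in bucket:
        host, path = endpoint, f"/{bucket}/{key}"
    else:
        host, path = f"{bucket}.{endpoint}", f"/{key}"
    if not host_validates(host, region):
        raise ValueError(f"{host} would not match the S3 certificate")
    return f"https://{host}{path}"


if __name__ == "__main__":
    for bucket in ("example-bucket", "spiritxpress.s3.varsity"):
        print(bucket, "->", s3_https_url(bucket, "version.txt", region="us-west-2"))
    print("dotted virtual-hosted name validates:",
          host_validates("spiritxpress.s3.varsity.s3.amazonaws.com"))
```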
Okay, it appears it did boil down to some permissions that were missed and we were able to get the file to display as expected. Other issues are present, but the present one is resolved so marking as answered.