Change CNAME to Active AWS Region

I have an application running in AWS, in both us-east-1 and us-west-2.
When I have to perform maintenance, or there is some other issue, I want to switch DNS based on an nslookup.
I currently have two Jenkins jobs set up, east-to-west and west-to-east, which require me to manually verify the DNS record and pick the appropriate job. I now want a master job that will perform the nslookup and then kick off the appropriate job.
I'm stuck trying to use the Jenkins conditional. If I do an "nslookup myapp | grep west" then I can trigger the west-to-east job. I'm not finding a way to do an "else" if the condition is false.
Another option I'd consider is changing parameters as shown in the logic below and then doing a post-build trigger. My job names are flip-us-east-1-to-us-west-2 and flip-us-west-2-to-us-east-1:
a=us-east-1
b=us-west-2
if nslookup myapp | grep -q west; then
  # DNS currently points at us-west-2, so run flip-us-west-2-to-us-east-1
  a=us-west-2
  b=us-east-1
fi
# then trigger flip-$a-to-$b
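If the last step then kicks off the downstream job over the Jenkins remote build API, it could be something like this (a rough sketch; the Jenkins URL and the user:api-token credentials are placeholders):
curl -X POST "https://jenkins.example.com/job/flip-${a}-to-${b}/build" --user "user:api-token"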

Pipeline is an excellent idea but it's not installed :(
I ended up using a choice parameter, which requires me to do the nslookup myself. I then do a string-match conditional. Not ideal, but it works and the boss is happy ;)

Trying to come up with a way to track any EC2 instance type changes across our account

I have been trying to come up with a way to track any and all instance type changes that happen in our company's account (ex: t2.micro to t2.nano).
I settled on creating a custom Config rule that would alert us with a noncompliant warning if the instance type changed, but I think this might be overcomplicating it, and I suspect that I should be using CloudWatch alarms or EventBridge instead.
I have used the following setup (from the CLI):
rdk create ec2_check_instance_type --runtime python3.7 --resource-types AWS::EC2::Instance --input-parameters '{"modify-instance-type":"*"}'
modify-instance-type seemed to be the only parameter I could find related to what I wanted the Lambda function to track, and I used the wildcard to signify any change.
I then added the following to the lambda function:
# Ignore anything that isn't an EC2 instance
if configuration_item['resourceType'] != 'AWS::EC2::Instance':
    return 'NOT_APPLICABLE'
# Flag the instance if its current type matches the rule parameter
if configuration_item['configuration']['instanceType'] == valid_rule_parameters['ModifyInstanceAttribute']:
    return 'NON_COMPLIANT'
Is there a different input parameter that I should be using for this instead of "modify-instance-type"? So far this has returned nothing, and I don't think it is evaluating properly.
Or does anyone know of a service that might be a better way to track configuration changes like this within AWS that I'm just not thinking of?
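For reference, the EventBridge route I'm suspecting would be roughly along these lines (just a sketch; it assumes CloudTrail is recording management events in the account, and the rule name is arbitrary):
# Match CloudTrail-recorded ModifyInstanceAttribute calls, which is the API behind instance type changes
aws events put-rule --name ec2-instance-type-change \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["AWS API Call via CloudTrail"],"detail":{"eventSource":["ec2.amazonaws.com"],"eventName":["ModifyInstanceAttribute"]}}'
A target (an SNS topic, a Lambda function, etc.) would then still have to be attached to the rule with aws events put-targets.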

Forced region split in HBase not causing any splits

I have an EMR cluster running HBase on S3. I have a table with the following configuration:
hbase.regionserver.region.split.policy = org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy
I have disabled the split policy because I want to run split commands manually.
So I have a region, say 'e85b1fe7c708500a7ae44427a76b3391', whose size is 14 GB. I issue the following split command on the region:
split 'e85b1fe7c708500a7ae44427a76b3391'
The command runs successfully in the HBase shell, but no region split occurs. Can anyone help me with this?
Even though a force-split (such as the 'split' command in the HBase shell) would be expected to ignore 'DisabledRegionSplitPolicy' and try splitting the region, I have found, at least in the 1.4.x versions, that the policy does stop the force-split.
You may try changing to some other policy, say 'ConstantSizeRegionSplitPolicy'. If this doesn't fix it, the issue could be something else. In that case, enable debug-level logging in both the master and the target region server (the one hosting the region), and troubleshoot from there.
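For example, something along these lines from the HBase shell should switch the table to a size-based policy (the table name is a placeholder, and the exact shell syntax can vary a bit between versions):
alter 'my_table', CONFIGURATION => {'hbase.regionserver.region.split.policy' => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}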
The documentation at https://hbase.apache.org/book.html#manual_region_splitting_decisions states: "The DisabledRegionSplitPolicy policy blocks manual region splitting."
If you want to avoid automatic splits while keeping manual splitting available, you can instead increase the size threshold for auto-split to something very large.
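For instance, something like the following (table name again a placeholder, and the value is just an arbitrarily large limit, roughly 100 GB) keeps auto-splits from ever triggering while leaving manual splits available:
alter 'my_table', MAX_FILESIZE => '107374182400'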

Renewing IAM SSL Server Certificates

I have been using IAM server certificates for some of my Elastic Beanstalk applications, but now it's time to renew. What is the correct process for replacing the current certificate with the updated cert?
When I try repeating an upload using the same command as before:
aws iam upload-server-certificate --server-certificate-name foo.bar --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crt
I receive:
A client error (EntityAlreadyExists) occurred when calling the UploadServerCertificate operation: The Server Certificate with name foo.bar already exists.
Is the best practice to simply upload using a DIFFERENT name and then switch the load balancers to the new certificate? This makes perfect sense, but I wanted to verify I'm following the correct approach.
EDIT 2015-03-30
I did successfully update my certificate using the technique above. That is - I uploaded the new cert using the same technique as originally, but with a different name, then updated my applications to point to the new certificate.
The question remains however, is this the correct approach?
Yes, that is the correct approach.
Otherwise, you would be forced to roll it out to every system that used it at the same time, with no opportunity to test first, if desired.
My local practice, which I don't intend to imply is The One True Way™ but which serves the purpose nicely, is to append -yyyy-mm for the year and month of the certificate's expiration date to the end of the name, making it easy to differentiate between them at a glance... and using this pattern, when the list is sorted lexically, it's coincidentally sorted chronologically as well.
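Putting it together, the renewal could look something like this (certificate names, files, and the load balancer name are placeholders, and the classic ELB command shown is just one of the places the new certificate ARN would need to be pointed at):
# upload the renewed certificate under a new, expiration-stamped name
aws iam upload-server-certificate --server-certificate-name foo.bar-2016-03 --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crt
# point the HTTPS listener at the new certificate's ARN
aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name my-load-balancer --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/foo.bar-2016-03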

Can't close ElasticSearch index on AWS?

I've created a new AWS Elasticsearch domain for testing. I use ES on a different host right now, and I'm looking to move to AWS.
One thing I need to do is set the mapping (analyzers) on my instance. In order to do this, I need to "close" the index, or else ES will just raise an exception.
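The sequence I'm trying to run is roughly this (the endpoint and index name are placeholders, and the settings body is trimmed):
# close the index, update the analyzer settings, then reopen it
curl -XPOST 'https://search-mydomain.us-east-1.es.amazonaws.com/myindex/_close'
curl -XPUT 'https://search-mydomain.us-east-1.es.amazonaws.com/myindex/_settings' -H 'Content-Type: application/json' -d '{"analysis":{"analyzer":{"my_analyzer":{"type":"standard"}}}}'
curl -XPOST 'https://search-mydomain.us-east-1.es.amazonaws.com/myindex/_open'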
Whenever I try to close the index, though, I get an exception from AWS:
Your request: '/_all/_close' is not allowed by CloudSearch.
The AWS ES documentation specifically says to do this in some cases:
curl -XPOST 'http://search-weblogs-abcdefghijklmnojiu.us-east-1.a9.com/_all/_close'
I haven't found any documentation that says why I wouldn't be able to close my indices on AWS ES, nor have I found anyone else who has this problem.
It's also a bit strange that I've got an ElasticSearch domain, but it's giving me a CloudSearch error message, since I thought those were different services, though I suppose one is implemented in terms of the other.
thanks!
AWS Elasticsearch does not support the "close" operation on indexes.
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html
"Currently, Amazon ES does not support the Elasticsearch _close API"
According to the AWS document I found recently, you have to first upgrade your Elasticsearch domain to version 7.4 or greater.
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html#aes-troubleshooting-close-api
Since closing all indices at once is a dangerous action, it may be disabled by default on your cluster. You need to make sure that your elasticsearch.yml configuration file doesn't contain this:
action.destructive_requires_name: true
You could set this to false in your configuration file and restart your cluster, but I strongly advise against that, since it opens the door to all kinds of other destructive actions, like deleting all your indices at once:
action.destructive_requires_name: false
What you should do instead is to temporarily update the cluster settings using
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "persistent" : {
    "action.destructive_requires_name" : false
  }
}'
Then close all your indices
curl -XPOST localhost:9200/_all/_close
And then reset the settings to a safer value:
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "persistent" : {
    "action.destructive_requires_name" : true
  }
}'

Need to get name of CloudFormation template used to deploy EC2 from the command line using AWS CLI or API

I used a CloudFormation template to create an EC2 instance. Is there any way besides tagging that I can get the name of the CloudFormation template via the command line?
Method 1: Tagging
Tagging is going to be the cleanest and easiest way to get that data. You do need to do some advance work and this won't work for existing instances, but it's going to be fast and reliable.
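For illustration, if the template tags the instance with its stack name under a key such as stack-name (the key here is hypothetical; use whatever the template actually applies), reading it back is a one-liner:
aws ec2 describe-tags --filters "Name=resource-id,Values=i-830e2869" "Name=key,Values=stack-name" --query 'Tags[0].Value' --output text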
Method 2: Cross-referencing
If you have the instance id, you can ask CloudFormation to search for its sibling stack resources, from which you can infer the stack name, id, etc.
import boto.cloudformation

c = boto.cloudformation.connect_to_region('us-east-1')
print(c.describe_stack_resources(physical_resource_id='i-830e2869')[0].stack_name)
If the instance is not part of a stack, you'll get a "Stack for i-830e2869 does not exist" 400 error.
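Since the question asks about the AWS CLI, the same cross-reference should also work there (same instance id as above):
aws cloudformation describe-stack-resources --physical-resource-id i-830e2869 --query 'StackResources[0].StackName' --output text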
Method 3: User data
I'll admit - this was pretty creative so kudos for thinking it up.
curl http://169.254.169.254/latest/user-data | grep 'cfn-init -s' | awk '{print $3}'
The reason this works is that instances created by CloudFormation typically run /opt/aws/bin/cfn-init to install packages and /opt/aws/bin/cfn-signal to report their successful creation, and one of the parameters passed to cfn-init (-s) is the stack name.
It'll fail if someone edits the user-data but, despite feeling a bit hacky, it seems pretty reliable. I still wouldn't recommend using it in prod given its brittle reliance on a script parameter.